\section{INTRODUCTION}
A variety of models have been introduced in an attempt to explain
quark confinement. The center vortex model
\cite{Hoof79,Corn79}, introduced in the late 1970s
by 't Hooft, is one of those attempts.
A center vortex is a topological line-like (in D=3 dimensions) or
surface-like (in D=4 dimensions) field configuration.
The vortex carries magnetic flux quantized
in terms of elements of the center of the group. The fluxes form narrow
tubes with constant energy per unit length (surface).
In order for the vortex to have finite energy per unit length, the
gauge potential at large transverse
distances must be a pure gauge. However, the gauge transformation
which produces that potential is non-trivial. It is discontinuous
by an element of the gauge center.
It is the non-trivial nature of the gauge transformation which forces
the vortex core to have non-zero energy and makes
the vortex topologically stable.
Faber, Greensite and Olejn\'{i}k \cite{Fabe97} introduced
fat-center-vortices to obtain confinement of both
fundamental and higher representation static sources.
According to the fat-center-vortices model, the vacuum is a condensate
of vortices of some finite thickness. Confinement is produced by the
independent fluctuations of the vortices piercing each unit area of a
Wilson loop.
Faber {\it et al.} explicitly worked out the model for SU(2) and gave results
for a particular flux distribution within the vortices. Here, I work in
SU(3) and investigate a wide variety of vortex flux distributions. I study the
existence of Casimir scaling at intermediate distances and the pattern of
screening at large distances.
The model confirms Casimir scaling qualitatively at intermediate distances,
however, in general the model
does not agree with Casimir scaling quantitatively, in contrast to the precise
agreement recently found by numerical simulations \cite{Deld98,Bali99}.
The lack of convexity of the potentials predicted by the model is also
discussed.
For completeness, I first briefly explain their model and then apply it to
SU(3), using it to study the potentials between static quarks for the
fundamental and several other representations.
\section{The model of Faber, Greensite and Olejn\'{i}k}
In the fundamental representation of $SU(N)$, a center vortex linked to a
Wilson loop has the effect of multiplying the Wilson loop
by the gauge group center,
\begin{equation}
W(C) \rightarrow e^{\frac{2\pi i n}{N}} W(C),~~~~~~~~n=1,2,\ldots,N-1.
\end{equation}
Based on the vortex theory, the area law for a Wilson loop is due to
the quantum fluctuations in the number of center vortices linking the loop.
Adjoint Wilson loops are not affected by center vortices unless the
vortex core overlaps the perimeter of the
loop. If the vortex thickness is large enough, the fat-center-vortices model
can explain confinement and the Casimir scaling of higher representation
string tensions. The average Wilson loop
predicted by this model has the following form:
\begin{equation}
\langle W(C)\rangle = \prod_{x_{v}} \{ 1 - \sum^{N-1}_{n=1} f_{n} (1 - {\rm Re}\, {\cal G}_{r}
[\vec{\alpha}^n_{C}(x_{v})])\},
\label{sigmac}
\end{equation}
where $x_{v}$ is the location of the center of the vortex and
${\cal G}_{r}$ is defined as:
\begin{equation}
{\cal G}_{r}[\vec{\alpha}] = \frac{1}{d_{r}} {\rm Tr}\, \exp[i\vec{\alpha} \cdot \vec{H}].
\label{gr}
\end{equation}
$d_{r}$ is the dimension of representation $r$, and
$\{H_{i}\}$ is the subset of the generators needed to generate all
elements of the center of the group. (For $SU(3)$, $\lambda_{8}$
is sufficient.)
The parameter $f_{n}$ represents the probability that any given unit area is
``pierced'' by a vortex of type $n$; {\it i.e.}, a line running through the center of
the vortex tube intersects the area.
The parameter $\alpha_{c}(x)$ is determined by the
fraction of the vortex flux that is enclosed by the Wilson loop. Therefore
$\alpha_{c}(x)$ depends on the profile of the vortex as well as the shape of
the loop and the position $x_{v}$ of the center of the vortex
relative to the perimeter. For SU(3), $\alpha_{c}(x)$ is equal to
$4\pi/\sqrt{3}$ if the flux is entirely inside the minimal area of the loop,
and it is zero if the flux is entirely outside the minimal area of the loop.
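The group factor and the loop average are simple to evaluate numerically. The following Python sketch (the value of $f$ and the idealized zero-thickness flux profile are assumptions for illustration, not the distributions studied below) computes ${\cal G}_{3}$ for the fundamental representation and the resulting potential:

```python
import numpy as np

# Group factor G_3[alpha] = (1/3) Tr exp(i alpha H) for the fundamental
# representation of SU(3), with H = lambda_8 / 2 (diagonal entries below).
h8 = np.array([1.0, 1.0, -2.0]) / (2.0 * np.sqrt(3.0))

def G3(alpha):
    return np.exp(1j * alpha * h8).sum() / 3.0

def potential(R, f=0.1):
    """V(R) = -sum_{x_v} ln[1 - f (1 - Re G_3(alpha_R(x_v)))] for an
    idealized zero-thickness vortex: the full flux 4*pi/sqrt(3) is
    enclosed if and only if the piercing point satisfies 0 <= x_v < R."""
    alpha_max = 4.0 * np.pi / np.sqrt(3.0)
    V = 0.0
    for x_v in range(-50, R + 50):
        alpha = alpha_max if 0 <= x_v < R else 0.0
        V -= np.log(1.0 - f * (1.0 - G3(alpha).real))
    return V
```

For this profile, a full flux multiplies the loop by the center element $e^{2\pi i/3}$, so the potential is exactly linear, $V(R)=-R\ln[1-\frac{3}{2}f]$; the finite-thickness profiles considered below replace the step by a smooth $\beta(x_{v})$.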
\section{General features of Casimir scaling and screening}
Numerical simulations \cite{Camp86,Mich92,Mich98,Deld98,Bali99}
have shown that the potentials in $SU(3)$ at zero temperature
are linear at intermediate distances and roughly proportional to the Casimir
operator. The proportionality of the potential to the Casimir operator is known
as ``Casimir scaling.'' Assuming such scaling, the ratios of the potential of each
representation to that of the fundamental representation would be 2.5, 2.25,
4.5, 7, 4 and 6 for representations 6, 8, 10, 15-symmetric, 15-antisymmetric
and 27, respectively. The Casimir scaling regime is expected to exist for
intermediate distances, perhaps extending from the onset of confinement to
the onset of screening.
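These ratios follow from the standard $SU(3)$ quadratic Casimir formula $C_{2}(n,m)=(n^{2}+m^{2}+nm+3n+3m)/3$; a quick numerical check (the helper names are mine, for illustration):

```python
# Quadratic Casimir C2(n, m) = (n^2 + m^2 + n*m + 3*n + 3*m)/3 for the SU(3)
# representation labeled (n, m); ratios are taken to the fundamental (1, 0).
def casimir(n, m):
    return (n * n + m * m + n * m + 3 * n + 3 * m) / 3.0

reps = {"6": (2, 0), "8": (1, 1), "10": (3, 0),
        "15s": (4, 0), "15a": (2, 1), "27": (2, 2)}
ratios = {name: casimir(n, m) / casimir(1, 0) for name, (n, m) in reps.items()}
trialities = {name: (n - m) % 3 for name, (n, m) in reps.items()}
# ratios -> 2.5, 2.25, 4.5, 7, 4, 6; zero triality for 8, 10 and 27
```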
Screening can be understood as follows:
Each representation can be labeled by the ordered pair $(n,m)$, with
$n$ and $m$ the number of $3$'s and $\bar{3}$'s which participated in
constructing the representation. Triality is defined as $(n-m) \bmod 3$.
Screening occurs for representations with zero triality:
$8 \equiv (1,1)$, $10 \equiv (3,0)$, and $27 \equiv (2,2)$.
For these representations, as the distance between the two static
sources increases, the potential energy of the flux tube rises. A pair of
gluons pops out of the vacuum when this energy is equal to or greater than
twice the glue-lump mass. (A glue-lump is the ground-state hadron consisting
of a gluon field around a static adjoint source.) At large distances, the
static sources combine with the adjoint (8) charges (dynamical gluons) popped out
of the vacuum and produce singlets, which screen.
Therefore the potential between static sources is no longer
$R$ dependent. Static sources in representations 10 and 27 transform into the 8
first by combining with a dynamic gluon, and then the 8 transforms into the
singlet by combining with a second gluon.
\begin{equation}
8 \otimes 8= 27 \oplus \bar{10} \oplus 10 \oplus 8 \oplus 8 \oplus 1,
\end{equation}
\begin{equation}
10\otimes 8= 8 \oplus 10 \oplus 27 \oplus 35,
\end{equation}
\begin{equation}
27 \otimes 8 = 64 \oplus 27 \oplus 27 \oplus 35 \oplus \bar{35} \oplus 10
\oplus \bar{10} \oplus 8.
\end{equation}
Therefore, we expect the potentials in representations 10 and 27 to screen only
at a higher energy than in representation 8.
Static sources in representations with non-zero triality, $6 \equiv (2,0)$, $15_{s} \equiv (4,0) $
and $15_{a} \equiv (2,1)$, transform into the lowest-order representations ($3$
and $\bar{3}$) by binding to the gluonic $8$'s which are popped out of the
vacuum:
\begin{equation}
6 \otimes 8 = \bar{3} \oplus 6 \oplus 15 \oplus 24,
\end{equation}
\begin{equation}
15_{a} \otimes 8= 42 \oplus \bar{24} \oplus 15_{a} \oplus 15_{a}
\oplus \bar{6} \oplus 3 \oplus 15_{s},
\end{equation}
\begin{equation}
15_{s} \otimes 8 = 48 \oplus 42 \oplus 15_{s} \oplus 15_{a}.
\end{equation}
The 15-symmetric changes to the 15-antisymmetric first, so it needs to interact
with the 8's (popped from the vacuum) twice to transform to 3. Screening
does not occur for representations with non-zero triality,
since there is no way to get a zero-triality
representation by crossing a non-zero one with any number of 8's.
As a result, the slope of the linear potentials of the representations
with non-zero triality changes to the slope of the fundamental one,
and a universal string tension is observed for large R. We expect the
representation 15-symmetric to require a larger value of $R$ to approach the
fundamental slope than representations $6$ or 15-antisymmetric
because two pairs of 8's must be popped from the vacuum in the 15-symmetric
case.
\section{Applying the Fat-center-vortices model to $SU(3)$}
To find the potential $V_{r}(R)$ in SU(3), first I need to find $H_{i}$ in
Eqn.\ \ref{gr} for each representation: 3, 6, 8, 10, 15-symmetric,
15-antisymmetric, and 27. For the fundamental representation,
$H_{1}=T_{8}=\frac{\lambda_{8}}{2}$; where $\lambda_{8}$ is the diagonal
Gell-Mann matrix.
I obtain $T^r_{8}$ of the other representations by using the tensor method.
Define $\{X^i_{r}; i=1,...,d_{r}\}$, the basis vectors for
the space on which the representation acts. The corresponding
generators are obtained from \cite{Geor92}:
\begin{equation}
[T_{a}^{D_{1}\otimes D_{2}}]_{ix,jy}=[T_{a}^{D_{1}}]_{ij}\delta_{xy}+
\delta_{ij}[T_{a}^{D_{2}}]_{xy}.
\label{prod_rep}
\end{equation}
The $T_{a}$'s are the group generators for representations $D_{1}$, $D_{2}$,
and $D_{1}\otimes D_{2}$. The elements of $T^r_{8}$ can be found from:
\begin{equation}
T^r_{8} X^i_{r} = \sum^{d_{r}}_{j=1} C_{ij}X^j_{r}.
\label{Tr8}
\end{equation}
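Eq.\ \ref{prod_rep} is a Kronecker sum and is straightforward to implement numerically. The sketch below (function names are mine) builds $T_{8}$ on the reducible product space $3\otimes 3$; the projection onto irreducible blocks via Eq.\ \ref{Tr8} is not shown:

```python
import numpy as np

# T_8 in the fundamental representation: lambda_8 / 2.
t8_fund = np.diag([1.0, 1.0, -2.0]) / (2.0 * np.sqrt(3.0))

def product_generator(T1, T2):
    """Generator on D1 (x) D2: the Kronecker sum T1 (x) 1 + 1 (x) T2."""
    return (np.kron(T1, np.eye(T2.shape[0])) +
            np.kron(np.eye(T1.shape[0]), T2))

# 9 x 9 generator on the reducible space 3 (x) 3 = 6 + 3bar.
t8_33 = product_generator(t8_fund, t8_fund)
```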
To study potentials for $SU(3)$, one needs to define an appropriate form of
the function $\alpha_{c}(x)$ in Eqn.\ \ref{sigmac}. To understand more about the
effect of the vortex profile on potentials and Casimir scaling, I assume a
density of flux $\rho(r)$ in an axially symmetric vortex core, where $r$ is
the radial distance from the vortex center. Let $\rho(r)=0$ for $r>a$,
so the vortex has a sharp boundary at $r=a$.
Now let $\beta(x_{v})$ denote the amount of the flux of the vortex contained
in the region $x>0$. Thus, $\beta(x_{v})=0$ for $x_{v}< -a$, and
$\beta(x_{v})=\frac{4\pi}{\sqrt{3}}$ for $x_{v}>a$. For $-a \leq x_{v} \leq a$,
$\beta(x_{v})$ is determined by the integral of $\rho(r)$ over the
fraction of the vortex in the region $x>0$. Finally, let the Wilson loop
have sides at $x=0$ and $x=R$. Then $\alpha_{R}(x_{v})$, the fraction of flux
within the loop, is given by:
\begin{equation}
\alpha_{R}(x_{v})= \beta(x_{v})-\beta(x_{v}-R).
\label{alpha-def}
\end{equation}
A simple choice for $\rho(r)$ that I have tried is a uniform distribution
$\rho(r)=\rho_{0}$ for $r<a$. Another possibility, with a smoother edge at
$r=a$, is:
\begin{equation}
\rho(r)=\rho_{0}\exp[{\frac{-b}{(\frac{|r|}{a}-1)^2}}],
\end{equation}
where $b$ is an adjustable constant and $\rho_{0}$ is fixed by the requirement
that the total flux is $\frac{4\pi}{\sqrt{3}}$.
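In practice, $\beta(x_{v})$ can be evaluated by numerical integration of $\rho(r)$ over the part of the vortex cross section lying in $x>0$. A minimal sketch (the grid resolution and the uniform default profile are illustrative assumptions):

```python
import numpy as np

# beta(x_v): flux of a vortex of radius a centered at x = x_v contained in
# the half-plane x > 0, normalized so that the total flux is 4*pi/sqrt(3).
def beta(x_v, a=20.0, rho=lambda r: np.ones_like(r), n=400):
    xs = np.linspace(-a, a, n)          # grid over the vortex cross section
    X, Y = np.meshgrid(xs, xs)
    R = np.hypot(X, Y)
    dens = np.where(R <= a, rho(R), 0.0)
    total_flux = 4.0 * np.pi / np.sqrt(3.0)
    return total_flux * dens[X + x_v > 0.0].sum() / dens.sum()
```

With $\beta(x_{v})$ in hand, $\alpha_{R}(x_{v})$ follows directly from Eq.\ \ref{alpha-def}.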
Fig.\ \ref{expnroa} shows potentials obtained from this flux distribution
with $b=0.1$, $a=20$ and $f=0.1$.
From the plot, it can be seen that, for each representation, there exists a
region in which the potential is approximately linear and qualitatively in
agreement with Casimir scaling. Screening
occurs for representations 8, 10 and 27 while the slope of the potentials
for representations 6, 15-symmetric and 15-antisymmetric changes to the
slope of the fundamental representation.
Note the non-convexity from $R=8$ to $R=20$ for all representations,
especially representations 15-symmetric and 27.
Even though the fat-center-vortices model predicts
some of the expected behavior of the potential between static quarks,
it has some limitations. In particular, it violates the fact that the
potential should always be a convex function of distance \cite{Bach86}.
Qualitative agreement with Casimir scaling is observed for all the
axially symmetric distributions I tried. This
is true even if one defines the density to be zero everywhere except on the
outer boundary ($r=a$) of the vortex. Fig.\ \ref{Pdens} plots potentials for this
distribution with $a=20$ and $f=0.1$. Fig.\ \ref{delta} shows potentials for
the maximally non-axially symmetric core where the flux is zero everywhere
except at the two points where
the vortex first enters and exits the Wilson loop. A linear regime still
exists at intermediate distances but qualitative agreement with Casimir scaling
is lost: the potentials of representations with larger Casimir operators can
have smaller string tensions. For example, the string tensions for representations
6 and 8 are larger than those for representations 15-antisymmetric and 27.
In this case the order of potentials at long distances
changes as well. For example, the potential for representation 27 is less than
the potential for representation 8 in the screened regime.
It still remains true, however, that the zero-triality representations screen,
and non-zero-triality ones approach the fundamental string tension at long
distances. In addition, the non-convexity of the potentials is almost gone.
Note that this flux distribution is probably unphysical. We expect the flux
in the lowest energy vortex to be axially symmetric. The fact that
potentials obtained from physical flux distributions agree qualitatively with
Casimir scaling is a strength of the vortex model. However, recent numerical
simulation results \cite{Bali99} show that potentials are quantitatively in agreement
with Casimir scaling to an accuracy of 5 percent, a feature that is lost in the
fat-center-vortices results. For example, from Fig.\ \ref{Pdens}, the ratios of the
potentials for representations 6, 8, 10, 15-symmetric, 15-antisymmetric, and
27 to the fundamental representation are 2, 1.82, 3, 5.4, 2.7, and 3.4,
respectively, whereas the same ratios for Casimir scaling would be 2.5, 2.25,
4.5, 7, 4, and 6, respectively.
One can get similar results for potentials at intermediate and long distances
using one-dimensional functions for $\alpha(x)$ similar to those of
ref.\ \cite{Fabe97}. I have tried several functions. An example is:
\begin{equation}
\beta(x_{v})= \left \{ \begin{array}{lll}
\frac{4\pi}{\sqrt{3}} & \mbox{$x_{v}>a$ } \\
0 & \mbox{$x_{v}<-a$ }\\
\frac{2\pi}{\sqrt{3}}+\frac{2\pi}{\sqrt{3}}(\exp\{b[1-\frac{1}{(x_{v}/a+1)^2}]\}- \exp\{b[1-\frac{
1}{(x_{v}/a-1)^2}]\}) & \mbox{$-a<x_{v}<a.$}
\end{array}
\right.
\end{equation}
$\alpha_{R}(x_{v})$ is then given by Eqn.\ \ref{alpha-def}.
Note that in this case no density distribution is defined and therefore no
integration is needed to find $\beta(x)$.
For moderate $b$ this results in a flux profile similar to the one used in
ref.\ \cite{Fabe97} and leads to qualitative Casimir scaling.
However, for $b$ large enough, this gives a plot similar to Fig.\ \ref{delta}
and violates Casimir scaling.
Thus integrating a circularly symmetric distribution inside the Wilson loop
is more physical than just picking a function $\beta(x)$ at random.
An arbitrary one-dimensional distribution is not necessarily consistent
with axial symmetry, and in the above example, large $b$ certainly does
not correspond to any axially symmetric distribution.
\section{Conclusion}
By applying the fat-center-vortices model to $SU(3)$ and using presumably
physical axially symmetric density distributions for the vortex, I
showed that for several representations there exists a region at intermediate
distances in which the
static potential is linear and qualitatively in agreement with Casimir
scaling. This is also in agreement with the observation in SU(3)
simulations of a linear potential proportional to the Casimir ratio of the
representation \cite{Camp86,Mich92,Mich98,Deld98,Bali99}. However, the Casimir proportionality
is dependent on the flux distribution in the vortex
and it is possible to lose this feature by changing
the distribution function to a non-axially symmetric distribution. At large
distances, zero-triality representations will be screened and the potentials
for non-zero triality representations parallel the one for the
fundamental representation. Some of the expected features of the screening
pattern are also lost for non-axially symmetric distributions.
The conclusion is that Casimir scaling and the pattern of screening depend
on the detailed vortex structure and are not simple kinematic
consequences of the fat-center-vortex picture. However, it is also clear
that these properties are rather robust and are likely to survive with
physically realistic vortices. On the other hand, the potentials are not
quantitatively in agreement with Casimir scaling, in contrast to recent numerical
simulations. This suggests that the fat-center-vortices model needs further
refinement if it is to remain viable. In particular, one may need an appropriate
physical distribution of vortex sizes. Further numerical studies of these
issues are in progress.
\section{Acknowledgement}
I wish to thank Claude Bernard for his help in
this work.
\section{Introduction}
Burgers' equation, in which convection and diffusion play an important role,
arises in applications such as meteorology, turbulent flows, and the modelling
of shallow water. Burgers' equation is considered a useful model for many
physical problems, and it is therefore often used for testing both real-life
problems and computational techniques. Exact solutions of the nonlinear
convective problem develop discontinuities in finite time and may display
complex structure near the discontinuities. Efficient and accurate methods are
needed to tackle the complex solutions of the Burgers' equation and, as
expected, many numerical studies have striven to overcome these difficulties.
Though analytical solutions of the Burgers' equation exist for simple initial
conditions, numerical techniques are of interest to meet the requirement of a
wide range of solutions of the Burgers' equation. Some variants of spline
methods have been set up to find numerical solutions of the Burgers' equation,
such as the Galerkin finite element method \cite{da,dag3,ozis,bs1,chap}, the
least squares method \cite{sk}, the collocation method
\cite{ali,dag,ra,dag4,bs2,bs3,mittal}, the finite difference method
\cite{f3,f1}, the differential quadrature method \cite{korkmaz,arora}, methods
based on the cubic B-spline quasi-interpolant \cite{zhu,jiang}, and
Taylor--Galerkin and Taylor-collocation methods \cite{dag2}.
Finite element methods are widely used to obtain good functional solutions of
differential equations. The accuracy of a finite element solution is increased
by the selection of suitable basis functions for the approximate function over
the finite intervals. We will form a combination of exponential B-splines as
the approximate function for the finite element method to obtain the solution
of the Burgers' equation. Exponential B-splines have been suggested for
interpolating data and functions exhibiting sharp variations
\cite{mc1,mc2,mc3,mc4}, since interpolation based on polynomial B-splines
causes unwanted oscillation. Some solutions of the Burgers' equation show such
sharpness. Thus we will construct the finite element method together with the
exponential B-splines to obtain solutions of the Burgers' equation. Over the
finite elements, the Galerkin method will be employed to determine the unknowns
of the approximate solution. A few exponential B-spline numerical methods have
been suggested for some partial differential equations: an exponential B-spline
collocation method has been built up to compute numerical solutions of the
convection-diffusion equation \cite{mohammadi}, the Korteweg-de Vries equation
\cite{oz}, and the generalized Burgers-Fisher equation \cite{bf}.
In this study, we will consider the Burgers' equation
\begin{equation}
u_{t}+uu_{x}-\nu u_{xx}=0 \label{1}
\end{equation}%
where the subscripts $x$ and $t$ denote differentiation with respect to the
space and time variables, respectively, and $\nu $ is the viscosity
coefficient. The boundary conditions of Eq.(\ref{1}) are chosen from
\begin{equation}
\begin{tabular}{l}
$u\left( a,t\right) =\beta _{1},$ $u\left( b,t\right) =\beta _{2},$ \\
$u_{x}\left( a,t\right) =0,\text{ }u_{x}\left( b,t\right) =0,$ $t\in \left(
0,T\right] $%
\end{tabular}
\label{2}
\end{equation}%
and the initial condition is%
\begin{equation}
u\left( x,0\right) =f\left( x\right) ,x\in \left[ a,b\right] . \label{2a}
\end{equation}%
The function $f\left( x\right) $ and the constants $\beta _{1},\beta _{2}$ are
described in the computational section.
\section{Exponential B-splines Galerkin Finite Element Solution}
Divide the spatial interval $[a,b]$ into $N$ subintervals of length $h=\dfrac{b-a}{%
N}$ with knots $x_{i}=x_{0}+ih,$ $i=0,\ldots,N,$ and the time interval $%
[0,T]$ into $M$ intervals of length $\Delta t.$
Let $\phi _{i}\left( x\right) $ be the exponential B-splines defined at the
knots $x_{i},$ $i=0,\ldots ,N,$ together with fictitious knots $x_{i},$ $%
i=-3,-2,-1,N+1,N+2,N+3$ outside the interval $[a,b].$ The $\phi _{i}\left(
x\right) ,$ $i=-1,\ldots ,N+1$ can be defined as
\begin{equation}
\phi _{i}\left( x\right) =\left \{
\begin{array}{lll}
b_{2}\left[ \left( x_{i-2}-x\right) -\dfrac{1}{p}\left( \sinh \left( p\left(
x_{i-2}-x\right) \right) \right) \right] & \text{ \ } & \text{if }x\in \left[
x_{i-2},x_{i-1}\right] ; \\
a_{1}+b_{1}\left( x_{i}-x\right) +c_{1}e^{p\left( x_{i}-x\right)
}+d_{1}e^{-p\left( x_{i}-x\right) } & \text{ } & \text{if }x\in \left[
x_{i-1},x_{i}\right] ; \\
a_{1}+b_{1}\left( x-x_{i}\right) +c_{1}e^{p\left( x-x_{i}\right)
}+d_{1}e^{-p\left( x-x_{i}\right) } & \text{ } & \text{if }x\in \left[
x_{i},x_{i+1}\right] ; \\
b_{2}\left[ \left( x-x_{i+2}\right) -\dfrac{1}{p}\left( \sinh \left( p\left(
x-x_{i+2}\right) \right) \right) \right] & \text{ } & \text{if }x\in \left[
x_{i+1},x_{i+2}\right] ; \\
0 & \text{ } & \text{otherwise}%
\end{array}%
\right. \label{3}
\end{equation}%
where%
\begin{equation*}
\begin{array}{l}
p=\underset{0\leq i\leq N}{\max }p_{i},\text{ }s=\sinh \left( ph\right) ,%
\text{ }c=\cosh \left( ph\right) , \\
a_{1}=\dfrac{phc}{phc-s},\text{ }b_{1}=\dfrac{p}{2}\left[ \dfrac{c\left(
c-1\right) +s^{2}}{\left( phc-s\right) \left( 1-c\right) }\right] ,\text{ }%
b_{2}=\dfrac{p}{2\left( phc-s\right) }, \\
c_{1}=\dfrac{1}{4}\left[ \dfrac{e^{-ph}\left( 1-c\right) +s\left(
e^{-ph}-1\right) }{\left( phc-s\right) \left( 1-c\right) }\right] ,\text{ }%
d_{1}=\dfrac{1}{4}\left[ \dfrac{e^{ph}\left( c-1\right) +s\left(
e^{ph}-1\right) }{\left( phc-s\right) \left( 1-c\right) }\right] .%
\end{array}%
\end{equation*}
Each basis function $\phi _{i}\left( x\right) $ is twice continuously
differentiable. Table 1 shows the values of $\phi _{i}\left( x\right) ,$ $%
\phi _{i}^{\prime }\left( x\right) $ and $\phi _{i}^{\prime \prime }\left(
x\right) $ at the knots $x_{i}$:%
\begin{equation*}
\begin{tabular}{c|ccccc}
\multicolumn{6}{l}{Table 1: Exponential B-spline values} \\ \hline \hline
& $x_{i-2}$ & $x_{i-1}$ & $x_{i}$ & $x_{i+1}$ & $x_{i+2}$ \\ \hline
$\phi _{i}\left( x\right) $ & $0$ & $\frac{s-ph}{2\left( phc-s\right) }$ & $%
1 $ & $\frac{s-ph}{2\left( phc-s\right) }$ & $0$ \\
$\phi _{i}^{\prime }\left( x\right) $ & $0$ & $\frac{p\left( c-1\right) }{%
2\left( phc-s\right) }$ & $0$ & $\frac{p\left( 1-c\right) }{2\left(
phc-s\right) }$ & $0$ \\
$\phi _{i}^{\prime \prime }\left( x\right) $ & $0$ & $\frac{p^{2}s}{2\left(
phc-s\right) }$ & $\frac{-p^{2}s}{phc-s}$ & $\frac{p^{2}s}{2\left(
phc-s\right) }$ & $0$ \\ \hline \hline
\end{tabular}%
\end{equation*}
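The coefficients and the knot values of Table 1 can be cross-checked numerically; the following sketch (with an arbitrary choice of $p$ and $h$) evaluates the middle piece of Eq.\ (\ref{3}) at the knots:

```python
import numpy as np

# Coefficients of the exponential B-spline, Eq. (3), for an assumed p and h.
p, h = 1.0, 0.5
ph = p * h
s, c = np.sinh(ph), np.cosh(ph)
denom = (ph * c - s) * (1 - c)
a1 = ph * c / (ph * c - s)
b1 = (p / 2) * (c * (c - 1) + s ** 2) / denom
c1 = 0.25 * (np.exp(-ph) * (1 - c) + s * (np.exp(-ph) - 1)) / denom
d1 = 0.25 * (np.exp(ph) * (c - 1) + s * (np.exp(ph) - 1)) / denom

# Middle piece a1 + b1*(x_i - x) + c1*e^{p(x_i - x)} + d1*e^{-p(x_i - x)}
phi_at_center = a1 + c1 + d1                                    # x = x_i
phi_at_left = a1 + b1 * h + c1 * np.exp(ph) + d1 * np.exp(-ph)  # x = x_{i-1}
table_value = (s - ph) / (2 * (ph * c - s))                     # Table 1
```

Both checks reproduce Table 1: the basis function equals $1$ at its central knot and $\frac{s-ph}{2(phc-s)}$ at the neighbouring knots.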
The $\phi _{i}\left( x\right) ,i=-1,\ldots ,N+1$ form a basis for functions
defined on the interval $[a,b]$. The Galerkin method consists of seeking an
approximate solution of the following form:%
\begin{equation}
u\left( x,t\right) \approx U_{N}\left( x,t\right) =\overset{N+1}{\underset{%
i=-1}{\sum }}\phi _{i}\left( x\right) \delta _{i}\left( t\right) \label{4}
\end{equation}%
where $\delta _{i}\left( t\right) $ are time-dependent unknowns to be
determined from the boundary conditions and the Galerkin approach to
Eq. (\ref{1}). The approximate solution and its first two derivatives
at the knots can be found from Eqs. (\ref{3}-\ref{4}) as%
\begin{equation}
\begin{tabular}{l}
$U_{i}=U_{N}(x_{i},t)=\alpha _{1}\delta _{i-1}+\delta _{i}+\alpha _{1}\delta
_{i+1},$ \\
$U_{i}^{\prime }=U_{N}^{\prime }(x_{i},t)=\alpha _{2}\delta _{i-1}-\alpha
_{2}\delta _{i+1},$ \\
$U_{i}^{\prime \prime }=U_{N}^{\prime \prime }(x_{i},t)=\alpha _{3}\delta
_{i-1}-2\alpha _{3}\delta _{i}+\alpha _{3}\delta _{i+1}$%
\end{tabular}
\label{5}
\end{equation}%
where $\alpha _{1}=\dfrac{s-ph}{2(phc-s)},\alpha _{2}=\dfrac{p(1-c)}{2(phc-s)%
},\alpha _{3}=\dfrac{p^{2}s}{2(phc-s)}.$
The approximate solution $U_{N}$\ over the element $[x_{m},x_{m+1}]$ can be
written as%
\begin{eqnarray}
U_{N}^{e} &=&\phi _{m-1}\left( x\right) \delta _{m-1}\left( t\right) +\phi
_{m}\left( x\right) \delta _{m}\left( t\right) +\phi _{m+1}\left( x\right)
\delta _{m+1}\left( t\right) \notag \\
&&+\phi _{m+2}\left( x\right) \delta _{m+2}\left( t\right) \label{7}
\end{eqnarray}%
where the quantities $\delta _{j}\left( t\right) ,j=m-1,...,m+2$ are the element
parameters and $\phi _{j}\left( x\right) ,j=m-1,...,m+2$ are known as the
element shape functions.
Over the sample interval $[x_{m},x_{m+1}],$ applying the Galerkin approach to
Eq. (\ref{1}) with the test function $\phi _{j}\left( x\right) $ yields the
integral equation:%
\begin{equation}
\underset{x_{m}}{\overset{x_{m+1}}{\int }}\phi _{j}\left( x\right) \left(
u_{t}+uu_{x}-\nu u_{xx}\right) dx = 0. \label{8}
\end{equation}%
Substitution of Eq. (\ref{7}) into the integral equation leads to
\begin{eqnarray}
&&\left. \overset{m+2}{\underset{i=m-1}{\sum }}\left( \underset{x_{m}}{%
\overset{x_{m+1}}{\int }}\phi _{j}\phi _{i}dx\right) \overset{\mathbf{%
\bullet }}{\delta }_{i}+\left( \underset{x_{m}}{\overset{x_{m+1}}{\int }}%
\phi _{j}\left( \overset{m+2}{\underset{k=m-1}{\sum }}\delta _{k}\phi
_{k}\right) \phi _{i}^{\prime }dx\right) \delta _{i}\right. \notag \\
&&\qquad \left. -\nu \left( \underset{x_{m}}{\overset{x_{m+1}}{\int }}\phi
_{j}\phi _{i}^{\prime \prime }dx\right) \delta _{i} = 0,\right. \label{9}
\end{eqnarray}%
where $i,j$ and $k$ take only the values $m-1,m,m+1,m+2$ for $m=0,1,\ldots
,N-1$ and $\overset{\mathbf{\bullet }}{}$ denotes time derivative.
If we denote $A_{ji}^{e},B_{jki}^{e}(\delta ^{e})$ and $C_{ji}^{e}$ by%
\begin{equation}
\begin{tabular}{ll}
$A_{ji}^{e}=\underset{x_{m}}{\overset{x_{m+1}}{\int }}\phi _{j}\phi _{i}dx,$
& $B_{jki}^{e}\left( \delta \right) =\underset{x_{m}}{\overset{x_{m+1}}{%
\int }}\phi _{j}\left( \overset{m+2}{\underset{k=m-1}{\sum }}\delta
_{k}\phi _{k}\right) \phi _{i}^{\prime }dx,$ \\
$C_{ji}^{e}=\underset{x_{m}}{\overset{x_{m+1}}{\int }}\phi _{j}\phi
_{i}^{\prime \prime }dx$ &
\end{tabular}
\label{10}
\end{equation}%
where $\mathbf{A}^{e}$ and $\mathbf{C}^{e}$ are the element matrices whose
dimensions are $4\times 4$ and $\mathbf{B}^{e}\left( \mathbf{\delta }%
^{e}\right) $ is the element matrix with dimension $4\times 4\times 4$,
Eq.(\ref{9}) can be written in the matrix form as%
\begin{equation}
\mathbf{A}^{e}\overset{\mathbf{\bullet }}{\mathbf{\delta }^{e}}+\left(
\mathbf{B}^{e}\left( \mathbf{\delta }^{e}\right) -\nu \mathbf{C}^{e}\right)
\mathbf{\delta }^{e} = 0, \label{11}
\end{equation}%
where $\mathbf{\delta }^{e}\mathbf{=}\left( \delta _{m-1},...,\delta
_{m+2}\right) ^{T}.$
Gathering the systems (\ref{11}) over all elements, we obtain the global system
\begin{equation}
\mathbf{A}\overset{\mathbf{\bullet }}{\mathbf{\delta }}+\left( \mathbf{B}%
\left( \mathbf{\delta }\right) -\nu \mathbf{C}\right) \mathbf{\delta }=0
\label{12}
\end{equation}%
where $\mathbf{A},\mathbf{B}\left( \mathbf{\delta }\right) ,\mathbf{C}$ are
derived from the corresponding element matrices $\mathbf{A}^{e},\mathbf{B}%
^{e}\left( \mathbf{\delta }^{e}\right) ,\mathbf{C}^{e}$, respectively, and $%
\mathbf{\delta =}\left( \delta _{-1},...,\delta _{N+1}\right) ^{T}$ contains
all the element parameters.
The unknown parameters $\mathbf{\delta }$ are interpolated between two time
levels $n$ and $n+1$ with the Crank-Nicolson method%
\begin{equation*}
\begin{array}{cc}
\mathbf{\delta }=\dfrac{\mathbf{\delta }^{n+1}+\mathbf{\delta }^{n}}{2}, &
\overset{\mathbf{\bullet }}{\mathbf{\delta }}=\dfrac{\mathbf{\delta }^{n+1}-%
\mathbf{\delta }^{n}}{\Delta t},%
\end{array}%
\end{equation*}%
Substituting these into Eq. (\ref{12}), we obtain the iterative formula for the time parameters $\mathbf{\delta }^{n}$%
\begin{equation}
\left[ \mathbf{A+}\frac{\Delta t}{2}\left( \mathbf{B}\left( \mathbf{\delta }%
^{n+1}\right) -\nu \mathbf{C}\right) \right] \mathbf{\delta }^{n+1}=\left[
\mathbf{A-}\frac{\Delta t}{2}\left( \mathbf{B}\left( \mathbf{\delta }%
^{n}\right) -\nu \mathbf{C}\right) \right] \mathbf{\delta }^{n}. \label{13}
\end{equation}%
The set of equations consists of $\left( N+3\right) $ equations with $\left(
N+3\right) $ unknown parameters. The boundary conditions must be adapted into
the system. Because of this requirement, the first and last
equations are initially eliminated from (\ref{13}), and the parameters $\delta
_{-1}^{n+1}$ and $\delta _{N+1}^{n+1}$ are substituted in the remaining
system (\ref{13}) by using the following equations%
\begin{equation*}
\begin{array}{l}
u\left( a,t\right) =\alpha _{1}\delta _{-1}^{n+1}+\delta _{0}^{n+1}+\alpha
_{1}\delta _{1}^{n+1}=\beta _{1}, \\
u\left( b,t\right) =\alpha _{1}\delta _{N-1}^{n+1}+\delta _{N}^{n+1}+\alpha
_{1}\delta _{N+1}^{n+1}=\beta _{2}%
\end{array}%
\end{equation*}%
which are obtained from the boundary conditions. Thus we obtain a
septa-diagonal matrix with dimension $\left( N+1\right) \times \left(
N+1\right) $. Since the system (\ref{13}) is an implicit system in the
nonlinear term $\mathbf{B}\left( \mathbf{\delta }^{n+1}\right) $,
we have used the following inner iteration at each time step $(n+1)\Delta t$
to refine the solutions:%
\begin{equation}
(\mathbf{\delta }^{\ast }\mathbf{)}^{n+1}=\mathbf{\delta }^{n}+\dfrac{(%
\mathbf{\delta }^{n}-\mathbf{\delta }^{n-1})}{2}. \label{14}
\end{equation}%
We use the above iteration three times to find the new approximation $(%
\mathbf{\delta }^{\ast }\mathbf{)}^{n+1}$ for the parameters $\mathbf{\delta
}^{n+1}$ to recover solutions at time step $(n+1)\Delta t$.
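The structure of the time stepping, Eqs.\ (\ref{13})-(\ref{14}), can be illustrated on a scalar analogue $a\dot{u}+(b(u)-\nu c)u=0$ with $b(u)=u$. This is a toy sketch with assumed coefficients, not the full septa-diagonal finite element system:

```python
# Scalar analogue of Eqs. (13)-(14): Crank-Nicolson with the nonlinear
# factor lagged through the predictor and three inner corrections.
def cn_step(u_n, u_prev, a=1.0, c=-1.0, nu=0.1, dt=0.01):
    u_star = u_n + (u_n - u_prev) / 2.0        # predictor, Eq. (14)
    for _ in range(3):                         # three inner iterations
        lhs = a + (dt / 2.0) * (u_star - nu * c)
        rhs = (a - (dt / 2.0) * (u_n - nu * c)) * u_n
        u_star = rhs / lhs                     # analogue of Eq. (13)
    return u_star

u_prev, u = 1.0, 1.0
for _ in range(10):                            # march ten time steps
    u_prev, u = u, cn_step(u, u_prev)
```

The inner correction simply re-solves the Crank-Nicolson equation with the nonlinear factor evaluated at the latest iterate, which is the role the predictor (\ref{14}) plays in the full system.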
To start the evolution of the iterative system for the unknown $\mathbf{\delta }%
^{n}$, the vector of initial parameters $\mathbf{\delta }^{0}$ must be
determined by using the following initial and boundary conditions:%
\begin{equation}
\begin{tabular}{l}
$u_{0}^{\prime }(x_{0},0)=\dfrac{p\left( 1-c\right) }{2\left( phc-s\right) }%
\delta _{-1}+\dfrac{p\left( c-1\right) }{2\left( phc-s\right) }\delta _{1}$
\\
$u\left( x_{m},0\right) =\dfrac{s-ph}{2\left( phc-s\right) }\delta
_{m-1}+\delta _{m}+\dfrac{s-ph}{2\left( phc-s\right) }\delta _{m+1},$ $%
m=0,...,N.$ \\
$u^{\prime }\left( x_{N},0\right) =\dfrac{p\left( 1-c\right) }{2\left(
phc-s\right) }\delta _{N-1}+\dfrac{p\left( c-1\right) }{2\left( phc-s\right)
}\delta _{N+1}$%
\end{tabular}
\label{15}
\end{equation}%
The solution of the matrix equation (\ref{15}), with dimensions $\left(
N+1\right) \times \left( N+1\right) $, is obtained by way of the Thomas
algorithm. Once $\mathbf{\delta }^{0}$ is determined, we can start the
iteration of the system to find the parameters $\mathbf{\delta }^{n}$ at
time $t^{n}=n\Delta t.$ Approximate solutions at the knots are found from
Eq.(\ref{5}), and the solution over the intervals $[x_{m},x_{m+1}]$ is determined
from Eq.(\ref{7}).
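The Thomas algorithm mentioned above is the standard tridiagonal solver; a generic sketch (the array layout is my choice):

```python
import numpy as np

# Thomas algorithm for a tridiagonal system: sub-diagonal `a`, diagonal `b`,
# super-diagonal `c` (a[0] and c[-1] are unused) and right-hand side `d`.
def thomas(a, b, c, d):
    n = len(b)
    cp, dp = np.zeros(n), np.zeros(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.zeros(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The forward sweep and back substitution together cost $O(N)$ operations, which is why the tridiagonal initial-condition system is cheap to solve.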
\section{Test Problems}
The robustness of the algorithm is shown by studying two test problems.
The error is measured by the maximum error norm:%
\begin{equation}
L_{\infty }=\left \Vert u^{\text{exact}}-u^{\text{numeric}}\right \Vert
_{\infty }=\max_{0\leq j\leq N}\left \vert u_{j}^{\text{exact}}-u_{j}^{\text{%
numeric}}\right \vert . \label{16}
\end{equation}%
The free parameter $p$ of the exponential B-spline is found by scanning a
predetermined interval with a very small increment.
\textbf{(a)}\ A shock propagation solution of the Burgers' equation is%
\begin{equation}
u(x,t)=\dfrac{x/t}{1+\sqrt{t/t_{0}}\exp (x^{2}/(4\nu t))},\text{\quad }t\geq
1, \label{17}
\end{equation}%
where $t_{0}=\exp (1/(8\nu ))$. The sharpness of the solution increases as
smaller values of $\nu $ are selected.
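The exact solution (\ref{17}) and the error norm (\ref{16}) are straightforward to evaluate directly; a sketch (the grid and the value of $\nu$ are illustrative):

```python
import numpy as np

# Exact shock solution, Eq. (17), and the maximum error norm, Eq. (16).
def exact(x, t, nu):
    t0 = np.exp(1.0 / (8.0 * nu))
    return (x / t) / (1.0 + np.sqrt(t / t0) * np.exp(x ** 2 / (4.0 * nu * t)))

def linf(u_exact, u_numeric):
    return np.max(np.abs(u_exact - u_numeric))

x = np.linspace(0.0, 1.0, 201)
u0 = exact(x, 1.0, 0.005)   # initial condition for the nu = 0.005 runs
```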
Substitution of the $t=1$ in Eq. (\ref{17}) gives the initial condition. The
boundary conditions $u(0,t)=0$ and $u(1,t)=0$ are used. Computations are
performed with parameters $\nu =0.0005,$ $0.005,$ $0.01,$ $h=0.02,$ $0.005$
and $\Delta t=0.01$ over the solution domain $[0,1]$. As time increases,
shock evaluation is observed and some graphical solutions are drawn in
Figs. \ref{fig1}-\ref{fig3} for various viscosity values and space steps. For $\nu =0.01,$ algorithm
produces smoother shock during run time. With decreasing values of $\nu $,
as seen in the Figs. \ref{fig1}-\ref{fig3} the steepening occurs. For the smaller viscosity
constant $\nu =0.0005$ the sharper shock is observed and steepness of
numerical solution is kept almost unchanged during the program run. The
results obtained by present scheme can be compared with ones given in the
works \cite{bs1,ra,bs2,bs3,bs4} through the computation of\ $L_{\infty }$
error norm at various times in the Table 2.%
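The closed-form profile (\ref{17}) is easy to evaluate directly; a minimal Python sketch (illustrative only, not part of the scheme) that reproduces the initial condition at $t=1$ is:

```python
import numpy as np

def burgers_shock(x, t, nu):
    """Shock propagation solution (17) of the Burgers' equation."""
    t0 = np.exp(1.0 / (8.0 * nu))
    return (x / t) / (1.0 + np.sqrt(t / t0) * np.exp(x**2 / (4.0 * nu * t)))

# initial condition: the profile at t = 1 for nu = 0.01
x = np.linspace(0.0, 1.0, 51)
u0 = burgers_shock(x, 1.0, 0.01)
```

For these parameters both boundary values are numerically negligible, consistent with the homogeneous boundary conditions $u(0,t)=u(1,t)=0$.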
\begin{figure}[ht]
\centering\includegraphics[scale=0.5]{fig1.pdf}
\caption{Solutions for $\nu =0.01$, $h=0.02$, $p=0.005111$}
\label{fig1}
\end{figure}
\begin{figure}[ht]
\centering\includegraphics[scale=0.5]{fig2.pdf}
\caption{Solutions for $\nu =0.005$, $h=0.02$, $p=0.000739$}
\label{fig2}
\end{figure}
\begin{figure}[ht]
\centering\includegraphics[scale=0.5]{fig3.pdf}
\caption{Solutions for $\nu=0.0005$, $h=0.005$, $p=0.005941$}
\label{fig3}
\end{figure}
\begin{equation*}
\begin{tabular}{llll}
\multicolumn{4}{l}{$\text{Table 2: Comparison of numerical results at
different times}$} \\ \hline \hline
& $L_{\infty }\times 10^{3}$ & $L_{\infty }\times 10^{3}$ & $L_{\infty
}\times 10^{3}$ \\ \cline{2-4}
$h=0.005\text{, }\nu =0.005$ & $t=1.7$ & $t=2.4$ & $t=3.1$ \\ \hline
$\text{Present (}p=0.005941\text{)}$ & $3.15776$ & $2.33757$ & $4.79061$ \\
$\text{Ref.\cite{bs1} (QBGM)}$ & $1.20755$ & $0.80187$ & $4.79061$ \\
$\text{Ref.\cite{bs2} (QBCM1)}$ & $0.06192$ & $0.05882$ & $4.43469$ \\
$\text{Ref.\cite{bs3} (QBCA1)}$ & $1.21175$ & $0.80771$ & $4.79061$ \\
$\text{Ref.\cite{bs4}}$ & $0.04284$ & $0.06464$ & $4.79061$ \\
& & & \\
$h=0.02\text{, }\nu =0.005$ & $t=1.8$ & $t=2.4$ & $t=3.2$ \\ \hline
$\text{Present (}p=0.000739\text{)}$ & $8.26075$ & $7.42050$ & $7.49146$ \\
$\text{Ref.\cite{ra}}$ & $2.47189$ & $2.16784$ & $7.49146$ \\
$\text{Ref.\cite{bs2} (QBCM1)}$ & $0.54058$ & $0.39241$ & $5.54899$ \\
$\text{Ref.\cite{bs3} (QBCA1)}$ & $1.15263$ & $0.80008$ & $7.49147$ \\
$\text{Ref.\cite{bs4}}$ & $0.03546$ & $0.06464$ & $7.49147$ \\
& & & \\
$h=0.02\text{, }\nu =0.01$ & $t=1.7$ & $t=2.1$ & $t=2.6$ \\ \hline
$\text{Present (}p=0.005111\text{)}$ & $8.08651$ & $7.53518$ & $8.06798$ \\
$\text{Ref.\cite{ra}}$ & $3.13476$ & $2.66986$ & $8.06798$ \\
$\text{Ref.\cite{bs2} (QBCM1)}$ & $0.40431$ & $0.86363$ & $6.69425$ \\
$\text{Ref.\cite{bs3} (QBCA1)}$ & $0.47456$ & $1.14759$ & $8.06798$ \\
$\text{Ref.\cite{bs4}}$ & $0.09592$ & $1.14760$ & $8.06799$ \\ \hline \hline
\end{tabular}%
\end{equation*}
The absolute error distributions between the analytical and numerical
solutions are drawn in Figs. \ref{fig4}-\ref{fig6} for various viscosity values and space
steps. In these figures, the highest error appears near the right-hand
boundary. When the program is run again over the extended domain $\left[
0,1.2\right] $ with parameters $\nu =0.005,$ $h=0.005$, the highest error at
the right boundary is reduced, as seen in Fig. \ref{fig7}, and the $L_{\infty }$ error
norm decreases from $4.790609\times 10^{-3}$ to $2.259598\times 10^{-3}$ at
time $t=3.1$.
\begin{figure}[ht]
\centering\includegraphics[scale=0.5]{fig4.pdf}
\caption{Absolute error for $\nu =0.01$, $h=0.02$, $p=0.005111$}
\label{fig4}
\end{figure}
\begin{figure}[ht]
\centering\includegraphics[scale=0.5]{fig5.pdf}
\caption{Absolute error for $\nu =0.005$, $h=0.005$, $p=0.005941$}
\label{fig5}
\end{figure}
\begin{figure}[ht]
\centering\includegraphics[scale=0.5]{fig6.pdf}
\caption{Absolute error for $\nu =0.005$, $h=0.02$, $p=0.000739$}
\label{fig6}
\end{figure}
\begin{figure}[ht]
\centering\includegraphics[scale=0.5]{fig7.pdf}
\caption{Absolute error for $\nu =0.005$, $h=0.005$, $p=0.005941$ over $[0,1.2]$}
\label{fig7}
\end{figure}
\textbf{(b)} A well-known analytical solution of Burgers' equation is%
\begin{equation}
u(x,t)=\dfrac{\alpha +\mu +\left( \mu -\alpha \right) \exp \eta }{1+\exp
\eta },\text{\quad }0\leq x\leq 1,\text{ }t\geq 0, \label{18}
\end{equation}%
where $\eta =\dfrac{\alpha \left( x-\mu t-\gamma \right) }{\nu }$ and $\alpha ,$
$\mu $ and $\gamma $ are constants. The parameters $\alpha =0.4,$ $%
\mu =0.6$ and $\gamma =0.125$ are used to coincide with some previous
studies. This solution represents a travelling wave that moves to the right with
speed $\mu $. The initial condition is obtained from Eq. (\ref{18}) with $t=0$.
The boundary conditions are $u(0,t)=1,$ $u(1,t)=0.2$ for $t>0$.
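The travelling wave (\ref{18}) can be evaluated directly; the following Python sketch (illustrative only) confirms the limiting boundary values $\alpha+\mu=1$ and $\mu-\alpha=0.2$ and the propagation speed $\mu$:

```python
import numpy as np

def burgers_wave(x, t, nu=0.01, alpha=0.4, mu=0.6, gamma=0.125):
    """Travelling-wave solution (18) of the Burgers' equation."""
    eta = alpha * (x - mu * t - gamma) / nu
    return (alpha + mu + (mu - alpha) * np.exp(eta)) / (1.0 + np.exp(eta))
```

Since $\eta$ depends on $x$ and $t$ only through $x-\mu t$, the profile is simply translated to the right with speed $\mu=0.6$.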
The calculation is performed with time step $\Delta t=0.01$, space step $%
h=1/36$ and viscosity coefficient $\nu =0.01$. The program is run up to time
$t=0.5$. We have found $L_{\infty }=6.73543978\times 10^{-4}$ for the
exponential B-spline Galerkin method at time $t=0.5$, documented in Table 3
together with results of the quadratic B-spline Galerkin method \cite{bs1}, the
quartic B-spline collocation method \cite{bs2}, the quintic B-spline
collocation method \cite{bs3} and the quartic B-spline Galerkin method \cite%
{bs4}.
The numerical solution obtained by the present scheme gives better
results than the others. The profiles of the initial wave and the solutions at some
later times are depicted in Fig. \ref{fig8}. Error variations of the schemes at time
$t=0.5$ are given in Fig. \ref{fig9}.
\begin{figure}[ht]
\centering\includegraphics[scale=0.5]{fig8.pdf}
\caption{Solutions for $\nu =0.01$}
\label{fig8}
\end{figure}
\begin{figure}[ht]
\centering\includegraphics[scale=0.5]{fig9.pdf}
\caption{Absolute error for $\nu =0.01$}
\label{fig9}
\end{figure}
\begin{equation*}
\begin{tabular}{lll}
\multicolumn{3}{l}{Table 3: Comparison of results at $t=0.5$ for $h=1/36$, $%
\nu =0.01$} \\ \hline \hline
$\text{Method}$ & & $L_{{\footnotesize \infty }}\times 10^{{\footnotesize 3}%
}$ \\ \hline
$\text{Present (}p=0.002323\text{)}$ & & $0.67354$ \\
$\text{Ref.\cite{bs1} (QBGM)}$ & & $6.35489$ \\
$\text{Ref.\cite{bs2} (QBCM1)}$ & & $3.03817$ \\
$\text{Ref.\cite{bs3} (QBCA1)}$ & & $5.78454$ \\
$\text{Ref.\cite{bs4} (QBGM)}$ & & $1.44$ \\ \hline \hline
\end{tabular}%
\end{equation*}
\section{Conclusion}
In this paper, we investigate the utility of the exponential B-spline in the
Galerkin algorithm for solving the Burgers' equation. The efficiency of the
method is tested on a shock propagation solution and a travelling wave solution
of the Burgers' equation. For the first test problem, solutions found with
the present method are in good agreement with the results obtained in
previous studies. In the second test problem, the present method leads to
more accurate results than all of the others. In conclusion, the numerical
algorithm in which the exponential B-spline functions are used performs
well compared with other existing numerical methods for the solution of the
Burgers' equation.\bigskip
\noindent \textbf{Acknowledgements:} The authors are grateful to The
Scientific and Technological Research Council of Turkey for financial
support given for the project 113F394.
\bigskip
\section{Introduction}
In a seminal 1960 Nature article~\citep{bondi1960}, Hermann Bondi presented a new approach to the study of
gravitational waves in Einstein's theory of general relativity. It was based upon the outgoing null rays
along which the waves traveled. It was followed up in 1962 by a paper by Bondi, Metzner and
van der Burg~\citep{bondi1962}, in which the details were given for axisymmetric spacetimes.
In his autobiography~\cite[page 79]{Bondi1990}, Bondi remarked about this work:
``The 1962 paper I regard as the best scientific work I have ever done, which is later in life than mathematicians
supposedly peak''.
Soon after, Rainer Sachs~\citep{sachs1962} generalized
this formalism to non-axisymmetric spacetimes and sorted out the asymptotic symmetries in
the approach to infinity along the outgoing null hypersurfaces.
The beautiful simplicity of the Bondi-Sachs formalism was that
it only involved 6 metric quantities to describe a general spacetime.
At this time, an independent attack on Einstein's equations based upon null hypersurfaces was
underway by Ted Newman and Roger Penrose~\citep{np1962,npScolar2009}. Whereas the fundamental quantity
in the Bondi--Sachs formalism was the metric, the Newman-Penrose approach was based upon a
null tetrad and its curvature components.
Although the Newman-Penrose formalism involved many more variables
it led to a more geometric treatment of gravitational radiation, which culminated in
Penrose's~\citep{penrose1963} description in terms of the conformal
compactification of future null infinity, denoted by ${\mathcal I}^+$ (pronounced ``scri plus'' for script I plus).
It was clear that there were parallel
results emerging from these
two approaches but the two formalisms and notations were completely foreign. At meetings, Bondi would inquire
of colleagues, including one of us (JW), ``Are you a qualified
translator?''. This article describes the Bondi-Sachs formalism and how it has evolved into a useful
and important approach to the current understanding of gravitational waves.
Before 1960, it was known that linear perturbations $h_{ab}$ of the Minkowski metric
$\eta_{ab} = \mathrm{diag}(-1,1,1,1)$
obeyed the wave equation (in geometric units with $c=1$)
\begin{equation}
\label{pert_wave}
\Big(-\df{^2}{t^2} +\delta^{ij}\df{^2}{y^i\partial y^j}\Big)h_{ab} = 0 \, ,
\end{equation}
where the standard Cartesian coordinates $y^i =(y^1,y^2,y^3)$ satisfy the
harmonic coordinate condition to linear order. It was also known
that these linear perturbations had coordinate (gauge) freedom which raised
serious doubts about the physical properties of gravitational waves.
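As a quick illustration, an outgoing spherical profile $f(t-r)/r$ solves the flat wave equation \eqref{pert_wave} component-wise; this can be verified symbolically (a Python/sympy sketch, not part of the original analysis):

```python
import sympy as sp

t, r = sp.symbols('t r', positive=True)
f = sp.Function('f')
h = f(t - r) / r                          # outgoing spherical wave profile
# flat wave operator on a spherically symmetric field: -d_t^2 + (1/r) d_r^2 (r .)
box_h = -sp.diff(h, t, 2) + sp.diff(r * h, r, 2) / r
residual = sp.simplify(box_h)             # vanishes identically
```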
The retarded time $u$
and advanced time $v$,
\begin{equation}
u = t-r\;\;,\;\;
v = t+r\;\;,\;\;
r^2 = \delta_{ij} y^i y^j \; ,
\end{equation}
define characteristic hypersurfaces of the hyperbolic equations \eqref{pert_wave},
i.e. hypersurfaces along which wavefronts can travel.
These characteristic hypersurfaces are also null hypersurfaces,
i.e. their normals, $k_a = -\nabla_a u$ and $n_a = -\nabla_a v$ are null,
$\eta^{ab} k_a k_b = \eta^{ab} n_a n_b = 0$.
Note that it is a peculiar property of null hypersurfaces
that their normal direction is also tangent to the hypersurface, i.e. $k^a=\eta^{ab} k_b$
is tangent to the $u=const$ hypersurfaces.
The curves tangent to $k^a$ are null geodesics, called null rays, and generate the
$u=const$ outgoing null hypersurfaces. Bondi's ingenuity was to use such a
family of outgoing
null rays forming these null hypersurfaces to build spacetime coordinates for describing outgoing gravitational waves.
An analogous formalism based upon ingoing
null hypersurfaces is also possible and finds applications in cosmology \citep{Ellisetal.(1985)}
but is of less physical importance in the study of outgoing gravitational waves.
The new characteristic approach to gravitational phenomena complemented the contemporary 3+1 treatment being developed by \citet{ADM1961}.
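The nullity of the normals $k_a=-\nabla_a u$ and $n_a=-\nabla_a v$ is straightforward to verify symbolically (an illustrative Python/sympy check):

```python
import sympy as sp

t, y1, y2, y3 = sp.symbols('t y1 y2 y3', real=True)
eta_inv = sp.diag(-1, 1, 1, 1)           # inverse Minkowski metric, geometric units
coords = [t, y1, y2, y3]
r = sp.sqrt(y1**2 + y2**2 + y3**2)

def norm_of_gradient(w):
    """Return eta^{ab} (nabla_a w)(nabla_b w) for the co-vector -nabla_a w."""
    grad = sp.Matrix([-sp.diff(w, c) for c in coords])
    return sp.simplify((grad.T * eta_inv * grad)[0, 0])

norms = [norm_of_gradient(w) for w in (t - r, t + r)]   # for u and v
```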
\section{The Bondi--Sachs metric}\label{sec:BSmetric}
The Bondi-Sachs coordinates $x^a =(u,r,x^A)$ are based on a family of outgoing
null hypersurfaces $u=const$. The hypersurfaces $x^0=u=const$ are null,
i.e. the normal co-vector $k_a = -\partial_a u$ satisfies $g^{ab}(\partial_a u)(\partial_b u) = 0$, so that
$g^{uu}=0$, and the corresponding future pointing vector $k^a = -g^{ab}\partial_b u$ is tangent to the null rays.
Two angular coordinates $x^A$, $(A, B, C,...=2,3)$, are constant along the null rays,
i.e. $k^a \partial_a x^A = - g^{ab}(\partial_a u) \partial_b x^A = 0$,
so that $g^{uA} = 0$. The coordinate $x^1 =r$, which varies along the null rays,
is chosen to be an areal coordinate such that
$\det [g_{AB}] = r^4 \mathfrak{q}$, where $\mathfrak{q}(x^A)$ is the determinant of the unit sphere metric $q_{AB}$
associated with the angular coordinates $x^A$, e.g. $q_{AB}=\mathrm{diag}(1,\sin^2\theta)$
for standard spherical coordinates $x^A=(\theta,\phi)$.
The contravariant components $g^{ab}$ and covariant components $g_{ab}$ are
related by $g^{ac}g_{cb} = \delta^a_b$, which in particular implies $g_{rr}=0$
(from $\delta^u_r= 0$) and $g_{rA}=0$ (from $\delta^u_A = 0$).
In the resulting $x^a=(u,r,x^A)$ coordinates, the metric takes the Bondi-Sachs
form,
\begin{equation}
\label{BS_metric}
g_{ab}dx^adx^b = -\frac{V}{r}e^{2\beta} du^2-2 e^{2\beta}dudr +r^2h_{AB}\Big(dx^A-U^Adu\Big)\Big(dx^B-U^Bdu\Big)\, ,
\end{equation}
where
\begin{equation}
g_{AB}=r^2 h_{AB}\qquad\mbox{with}\qquad \det [h_{AB}] = \mathfrak{q}(x^A),
\end{equation}
so that the conformal 2-metric $h_{AB}$ has only two degrees of freedom.
The determinant condition implies
$h^{AB}\partial_r h_{AB}= h^{AB}\partial_u h_{AB} =0$, where $h^{AC}h_{CB}=\delta^A_B$.
Hereafter $D_A$ denotes the covariant derivative of the metric $h_{AB}$, with $D^A=h^{AB}D_B$.
The corresponding non-zero contravariant components of the metric \eqref{BS_metric} are
\begin{equation}
\label{contrav_metric}
g^{ur} = -e^{-2\beta}\;\;,\quad
g^{rr} = \frac{V}{r}e^{-2\beta}\;\;,\quad
g^{rA} = -U^Ae^{-2\beta}\;\;,\quad
g^{AB} = \frac{1}{r^2}h^{AB}\;.
\end{equation}
A suitable representation of $h_{AB}$ with two functions $\gamma(u,r,\theta,\phi)$ and $\delta(u,r,\theta,\phi)$
corresponding to the $+$ and $\times$ polarization of gravitational waves is~\citep{vdBurg1966,affin}
\begin{equation}
h_{AB}dx^Adx^B =\big(e^{2\gamma}d\theta^2 +e^{-2\gamma}\sin^2\theta d\phi^2 \Big)\cosh (2\delta)
+2\sin\theta\sinh(2\delta)d\theta d\phi \;.
\end{equation}
This differs from the original form of Sachs~\citep{sachs1962} by the transformation
${\gamma\rightarrow (\gamma + \delta)/2}$ and $\delta \rightarrow (\gamma-\delta)/2$,
which gives a less natural description of gravitational waves in the weak field approximation.
In the original axisymmetric Bondi metric~\citep{bondi1962} with rotational symmetry in the
$\phi$-direction, $\delta=U^\phi=0$ and $\gamma=\gamma(u,r,\theta)$, resulting in the metric
\begin{eqnarray}
g^{(B)}_{ab}dx^adx^b &=& \Big(-\frac{V}{r}e^{2\beta}
+ r^2 U e^{2\gamma}\Big)du^2
-2e^{2\beta}dudr -r^2 U e^{2\gamma}du d\theta\nonumber\\
&&+r^2\Big(e^{2\gamma}d\theta^2 + e^{-2\gamma}\sin^2\theta d\phi^2 \Big) \; ,
\end{eqnarray}
where $U\equiv U^\theta$. Note that the original Bondi metric also has the reflection symmetry
$\phi \rightarrow -\phi$ so that it is not suitable for describing an axisymmetric rotating star.
In Bondi's original work, the areal coordinate $r$ was called a luminosity distance but this
terminology is misleading because of its different meaning in cosmology \citep[see Sec.~3.3]{jord1}.
The areal coordinate $r$ becomes singular when the expansion $\Theta$ of the null hypersurface vanishes,
where~\citep{sachs1961,sachs1962}
\begin{equation}
\Theta = \nabla_a(e^{-2\beta} k^a ) = \frac{2}{r} e^{-2\beta} \, , \quad k^a\partial_a =-g^{ur}\partial_r.
\end{equation}
In contrast, the standard radial coordinate along the null rays
in the Newman-Penrose formalism~\citep{np1962,npScolar2009} is the affine parameter $\lambda$,
which remains regular when $\Theta=0$. The areal distance and affine parameter are related by
$\partial_r \lambda=e^{2\beta}$.
Thus the areal coordinate remains non-singular provided $\beta$ remains finite.
For a version of the Bondi-Sachs formalism based upon an
affine parameter, see \citep{affin}.
\subsection{The electromagnetic analogue}\label{sec:em_analog}
The electromagnetic field in Minkowski space with its two degrees of freedom propagating
along null hypersurfaces provides a simple model to demonstrate the essential features and advantages
of the Bondi--Sachs formalism~\citep{tw1966}.
Consider the Minkowski metric in outgoing null spherical coordinates $(u,r,x^A)$ corresponding
to the flat space version of the Bondi-Sachs metric,
\begin{equation}\label{Bondi_flat}
\eta_{ab}dx^adx^b = - du^2 - 2 dr du + r^2 q_{AB}dx^Adx^B\;\; .
\end{equation}
Assume that the charge-current sources of the electromagnetic field are enclosed by a 3-dimensional timelike
worldtube $\Gamma$, with spherical cross-sections of radius $r=R$, such that the
outgoing null cones $N_u$ from the vertices $r=0$ (Fig.~\ref{fig:WT}) intersect $\Gamma$ at proper time $u$
in spacelike spheres $S_u$, which are
coordinatized by $x^A$.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{worldtube.png}
\caption{Illustration of Bondi-Sachs coordinates defined at a timelike worldtube surrounding
a matter-charge distribution, along with an outgoing null cone. }
\label{fig:WT}
\end{figure}
The electromagnetic field $F_{ab}$ is represented by a vector potential $A_a$,
$F_{ab}=\nabla_a A_b - \nabla_bA_a$, which has the gauge freedom
\begin{equation}
A_a \rightarrow A_a+\nabla_a \chi\;\;.
\end{equation}
Choosing the gauge transformation
\begin{equation}
\chi(u,r,x^A) = -\int _{R}^r A_r dr^\prime
\end{equation}
leads to the null gauge $A_r=0$, which is the analogue of the Bondi-Sachs coordinate
condition $g_{rr}=g_{rA}=0$. The remaining gauge freedom
$\chi(u,x^A)$ may be used to set either
\begin{equation}
\label{ugauge}
A_u|_{\Gamma} = A_u(u,R,x^A)= 0 \qquad \mbox{or}\qquad
\lim_{r\rightarrow \infty}A_u(u,r,x^A) = 0.\;\;\;
\end{equation}
Hereafter, we implicitly assume that the limit $r\rightarrow\infty$
is taken holding $u=const$ and $x^A=const$.
There remains the freedom
$A_B \rightarrow A_B+ \nabla_B \, \chi(x^C)$.
The vacuum Maxwell equations
$M^b := \nabla_a F^{ab} = 0$
imply the identity
\begin{equation}
\label{div_M}
0\equiv \nabla_bM^b = \partial_u M^u +\frac{1}{r^2}\partial_r (r^2 M^r) + \frac{1}{\sqrt{\mathfrak{q}}} \partial_C (\sqrt{\mathfrak{q}} M^C).
\end{equation}
This leads to the following strategy. Designate as the main equations the components of Maxwell's equations
$M^u=0$ and $M^A=0$, and
designate $M^r=0$ as the supplementary condition. Then, if the main equations are satisfied, (\ref{div_M})
implies
\begin{equation}
\label{only_Mr}
0 =\partial_r (r^2 M^r) \; ,
\end{equation}
so that the supplementary condition is satisfied everywhere if it is satisfied at some specified value of $r$,
e.g. on $\Gamma$ or at ${\mathcal I}^+$.
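The form of the divergence in \eqref{div_M} follows from $\nabla_bM^b=(1/\sqrt{-g})\,\partial_b(\sqrt{-g}\,M^b)$ with $\sqrt{-g}=r^2\sqrt{\mathfrak{q}}$ for the metric \eqref{Bondi_flat}; this can be checked symbolically in standard spherical coordinates (a Python/sympy sketch, added here for illustration):

```python
import sympy as sp

u, r, th, ph = sp.symbols('u r theta phi', positive=True)
coords = (u, r, th, ph)
# flat Bondi metric in coordinates (u, r, theta, phi)
g = sp.Matrix([[-1, -1, 0, 0],
               [-1,  0, 0, 0],
               [ 0,  0, r**2, 0],
               [ 0,  0, 0, r**2 * sp.sin(th)**2]])
sqrtg = r**2 * sp.sin(th)                       # sqrt(-det g)
det_ok = sp.simplify(sqrtg**2 + g.det())        # confirms sqrtg^2 = -det g

M = [sp.Function(f'M{i}')(*coords) for i in range(4)]
div = sum(sp.diff(sqrtg * M[i], coords[i]) for i in range(4)) / sqrtg
rhs = (sp.diff(M[0], u)
       + sp.diff(r**2 * M[1], r) / r**2
       + sp.diff(sp.sin(th) * M[2], th) / sp.sin(th)
       + sp.diff(M[3], ph))
residual = sp.simplify(div - rhs)               # the two forms agree
```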
The main equations separate into the
\begin{eqnarray}
&& \mbox{Hypersurface equation:} \nonumber \\
&&M^u=0 \implies
\partial_r (r^2\partial_r A_u) = \partial_r( \eth_B A^B )
\label{hyp_em}
\end{eqnarray}
and the
\begin{eqnarray}
&&\mbox{Evolution equation:} \nonumber \\
&& M^A=0 \implies
\partial_r \partial_u A_B
= \frac{1}{2} \partial_r^2 A_B
-\frac{r^2}{2} \eth^C(\eth_B A_C - \eth_C A_B) +\frac{1}{2} \partial_r \eth_B A_u\label{ev_em} ,\nonumber\\
\end{eqnarray}
where hereafter $\eth_A$ denotes the covariant derivative with respect to the unit sphere metric $q_{AB}$,
with $\eth^A = q^{AB}\eth_B$.
The supplementary condition $M^r=0$ takes the explicit form
\begin{eqnarray}
\label{supp_em}
\partial_u(r^2\partial_r A_u) =\eth^B ( \partial_r A_B - \partial_u A_B + \eth_B A_u) .
\end{eqnarray}
A formal integration of the hypersurface equation yields
\begin{equation}
\partial_r A_u = \frac{Q(u,x^A)+ \eth_BA^B}{r^2} +O(1/r^3)\;\;,
\end{equation}
where $Q(u,x^A)$ enters as a function of integration. In the null gauge with $A_r=0$, the radial component
of the electric field corresponds to $E_r = F_{ru}=\partial_r A_u$. Thus, using the divergence theorem
to eliminate $\eth_BA^B$, the total charge enclosed in
a large sphere is
\begin{equation}
q(u) := \lim_{r\rightarrow \infty} \frac{1}{4\pi}\oint E_r r^2 \sin\theta d\theta d\phi
=\frac{1}{4\pi}\oint Q(u,x^A) \sin\theta d\theta d\phi ,
\end{equation}
where $\oint$ indicates integration over the 2-sphere.
This motivates calling $Q(u,x^A)$ the charge aspect.
The integral of the supplementary condition \eqref{supp_em}
over a large sphere then gives the charge conservation law
\begin{eqnarray}
\frac{d q(u) }{du} =0 .
\end{eqnarray}
The main equations (\ref{hyp_em}) and (\ref{ev_em}) give rise to a hierarchical integration
scheme given the following combination
of initial data on the initial null cone $N_{u_0}$, initial boundary data on the cross-section $S_{u_0}$
of $\Gamma$ and boundary data on $\Gamma$:
\begin{equation}\label{em_data}
A_B\big|_{N_{u_0}} \,, \quad \partial_r A_u\big|_{S_{u_0}} \,, \quad \partial_u A_B\big|_{\Gamma} .
\end{equation}
Then, in sequential order, (\ref{hyp_em}) is an ordinary differential equation
along the null rays which determines
$A_u$ and (\ref{ev_em}) is an ordinary differential equation which
determines $\partial_uA_B$. Together with the supplementary equation (\ref{supp_em}),
they give rise to the following evolution algorithm:
\begin{enumerate}
\item In accord with (\ref{ugauge}), choose a gauge such that $A_u\big|_{\Gamma} =0$.
\item Given the initial data $A_B\big|_{N_{u_0}}$ and $\partial_r A_u\big|_{S_{u_0}}$,
the hypersurface equation \eqref{hyp_em} can be integrated along the null rays of $N_{u_0}$
to determine $A_u$ on the initial null cone $N_{u_0}$.
\item Given the initial boundary data $\partial_u A_B|_{S_{u_0}}$,
the radial integration of the evolution equation \eqref{ev_em} determines $\partial_u A_B$
on the initial null cone $N_{u_0}$.
\item \begin{enumerate}
\item From $\partial_u A_B|_{N_{u_0}}$, $A_B$ can be obtained in a finite difference approximation
on the null cone $u=u_0 +\Delta u$.
\item From knowledge of $A_B|_{N_{u_0}}$ and $A_u|_{N_{u_0}}$, the
supplementary condition \eqref{supp_em} determines $\partial_u\partial_r A_u\big|_{S_{u_0}}$
so that $\partial_r A_u|_{S_{u_0+\Delta u}}$ can also be obtained in a finite difference approximation.
\end{enumerate}
\item This procedure can be iterated to determine a finite difference approximation
for $A_B$ and $A_u$ on the null cone $u= u_0 +n\Delta u$.
\end{enumerate}
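The five steps above amount to a simple marching loop. A schematic Python sketch follows; the three callables are placeholders for the radial integrations of the hypersurface, evolution and supplementary equations, and are not an implementation of Maxwell's equations:

```python
import numpy as np

def evolve(A_B, drA_u, n_steps, du,
           integrate_hypersurface, integrate_evolution, supplementary):
    """Skeleton of the hierarchical marching scheme (steps 1-5).
    A_B:    angular potential data on the current null cone
    drA_u:  boundary value of the radial derivative of A_u
    Each callable stands in for one radial integration along the cone."""
    for n in range(n_steps):
        A_u = integrate_hypersurface(A_B, drA_u)      # step 2: hypersurface eq.
        duA_B = integrate_evolution(A_B, A_u)         # step 3: evolution eq.
        A_B = A_B + du * duA_B                        # step 4(a): advance A_B
        drA_u = drA_u + du * supplementary(A_B, A_u)  # step 4(b): supplementary
    return A_B, drA_u
```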
An analogous algorithm for solving the Bondi-Sachs equations
has been implemented as a convergent evolution code (see Sec.~\ref{sec:world-tube}).
\section{Einstein equations and their Bondi-Sachs solution}\label{sec:EinsteinEquation_BSsolution}
The Einstein equations, in geometric units $G=c=1$, are
\begin{equation}
E_{ab}:=R_{ab} - \frac{1}{2}g_{ab}R\ud{c}{c} -8\pi T_{ab} = 0\;\;,
\end{equation}
where $R_{ab}$ is the Ricci tensor, $R\ud{c}{c}$ its trace and $T_{ab}$ the matter
stress-energy tensor. Before expressing the Einstein equations in terms of the Bondi-Sachs
metric variables \eqref{BS_metric}, consider the consequence of the contracted Bianchi identities.
Assuming the matter satisfies the divergence-free (C5) condition $\nabla_b T\ud{b}{a}=0$, the Bianchi identities
imply
\begin{equation}
\label{bianchi}
0 =\nabla _b E^b_a = \frac{1}{\sqrt{-g}}\partial_b\Big( \sqrt{-g} E^b_a \Big) +\frac{1}{2}(\partial_a g^{bc})E_{bc} \;.
\end{equation}
In analogy to the electromagnetic case, this leads to the designation of the components of Einstein's equations,
consisting of
\begin{equation}
E^u_a= 0 \, , \quad E_{AB} - \frac{1}{2}g_{AB} g^{CD} E_{CD} =0\, ,
\end{equation}
as the main equations.
Then if the main equations are satisfied, referring to the metric \eqref{BS_metric},
$E^b_r = -e^{2\beta}E^{ub} =-e^{2\beta}g^{ba}E^u_a =0$ and the $a=r$ component of the conservation condition (\ref{bianchi})
reduces to $(\partial_r g^{AB})E_{AB}=-(2/r) g^{AB}E_{AB} = 0$ so
that the component $g^{AB}E_{AB} = 0$
is trivially satisfied. Here we assume that the areal coordinate $r$ is non-singular.
The retarded time $u$ and angular components $x^A$ of the
conservation condition (\ref{bianchi}) now reduce to
\begin{equation}
\partial_r (r^2 e^{2\beta}E_u^r) =0\; , \quad \partial_r (r^2 e^{2\beta} E_A^r) =0
\label{supp}
\end{equation}
so that the $E_u^r$ and $E_A^r$ equations are satisfied everywhere if they are satisfied
on a finite worldtube $\Gamma$ or
in the limit $r\rightarrow\infty$. Furthermore, if the null foliation consists
of non-singular null cones, they are automatically satisfied due to regularity conditions
at the vertex $r=0$.
These equations were called supplementary conditions by
Bondi and Sachs. Evaluated
in the limit $r\rightarrow\infty$
they are related to the asymptotic
flux conservation laws for total energy and angular momentum. In particular, the equation
$\lim_{r\rightarrow \infty} (r^2 E_u^r )=0$ gives rise to the famous Bondi mass loss equation
(see \eqref{mass_loss}).
The main Einstein equations separate further into the
\begin{eqnarray}
\mbox{Hypersurface equations:} \quad E_a^u=0
\label{hyp_gr}
\end{eqnarray}
and the
\begin{eqnarray}
\mbox{Evolution equations:} \quad E_{AB} - \frac{1}{2} g_{AB} g^{CD}E_{CD} =0.
\end{eqnarray}
In terms of the metric variables \eqref{BS_metric} the hypersurface equations
consist of one first order radial differential equation determining $\beta$ along the null rays,
\begin{equation}
\label{eq:beta_eq}
E_r^u =0 \;\;\Rightarrow \;\; \partial_r \beta = \frac{r}{16}h^{AC}h^{BD} (\partial_r h_{AB})(\partial_r h_{CD}) + 2\pi r T_{rr} \;,
\end{equation}
two second order radial differential equations determining $U^A$,
\begin{eqnarray}
E_A^u=0\;\;\Rightarrow\;\; && \partial_r \bigg[r^4 e^{-2\beta}h_{AB}(\partial_r U^B)\bigg]
= 2r^4\partial_r \Big(\frac{1}{r^2}D_A\beta \Big)
\nonumber \\ &&\qquad
-r^2h^{EF} D_E (\partial_r h_{AF})
+16\pi r^2 T_{rA}\; ,
\label{eq:UA_eq}
\end{eqnarray}
and a radial equation to determine $V$,
\begin{eqnarray}\label{eq:V_eqn}
E_u^u=0\;\Rightarrow\; &&
2 e^{-2\beta}(\partial_r V)
=
\mathscr{R}
-2h^{AB} \Big[D_A D_B \beta
+ (D_A\beta) (D_B \beta)\Big]\nonumber\\
&&\qquad
+\frac{e^{-2\beta}}{r^2 }D_A \Big[ \partial_r (r^4U^A)\Big]
-\frac{1}{2}r^4 e^{-4\beta}h_{AB}(\partial_r U^A)(\partial_r U^B)
\nonumber\\&&\qquad
+ 8\pi \Big[ h^{AB}T_{AB}-r^2 T\ud{a}{a}\Big]\; ,
\end{eqnarray}
where $D_A$ is the covariant derivative and $\mathscr{R}$ is the Ricci scalar
with respect to the conformal 2-metric $h_{AB}$.
The evolution equations can be picked out by introducing a complex polarization
dyad $m^a$ satisfying $m^a \nabla_a u= 0$ which is tangent to the null
hypersurfaces and points in the
angular direction with components $m^a=(0,0,m^A)$.
Imposing the normalization $h^{AB}=\f{1}{\chi\bar \chi}(m^{A} \bar m^{B}+m^{B} \bar m^{A})$, with $\chi\in\mathbb{C}$, $m_A \bar m^A =\chi\bar \chi$, $m_A =h_{AB} m^B$,
and $m_A m^A =0$ determines $m^A$ up to the
phase freedom $m^A \rightarrow e^{i \eta} m^A$, which can be fixed by convention. Note that the Newman-Penrose convention for the normalisation of $m^A$ uses $\chi\bar\chi=1$ \citep{npScolar2009}, while numerical applications of the Bondi-Sachs formalism use $\chi\bar \chi=2$ \citep{wLRR}. The latter has the advantage of avoiding factors containing $\sqrt{2}$ in the components of the tetrad, which are impractical in numerical work. Further note that the dyad $m^a$ defined here relates to the null vector $m^a$ of the Newman-Penrose formalism \citep{npScolar2009} as $m^a_{(NP)}=r^{-1}m^a$, because $m^a_{(NP)}$ is defined with respect to $g_{ab}$ rather than $h_{AB}$.
The symmetric 2-tensor $E_{AB}$ can then be expanded as
\begin{equation}
E_{AB} = \frac{1}{(\chi\bar \chi)^2}(E_{CD} m^Cm^D)\bar m_A\bar m_B
+ \frac{1}{(\chi\bar \chi)^2}(E_{CD} \bar m^C\bar m^D)m_Am_B +\frac{1}{2} h_{AB}h^{CD}E_{CD},
\end{equation}
where we have shown that $h^{CD}E_{CD}=0$ is trivially satisfied.
Consequently, the evolution equations reduce to the complex equation $m^A m^B E_{AB}=0$,
which takes the form~\citep{Wnewton1983,wLRR}
\begin{eqnarray}\label{eq:ev_eqn}
m^A m^B \bigg \{&{}& r\partial_r [r (\partial_u h_{AB})]
- \frac{1}{2} \partial_r[ rV (\partial_r h_{AB})]
-2e^{\beta} D_A D_B e^\beta \nonumber \\
&+& h_{CA} D_B[ \partial_r (r^2U^C) ]
- \frac{1}{2} r^4 e^{-2\beta}h_{AC}h_{BD} (\partial_r U^C) (\partial_r U^D)
\nonumber \\
&+&
\frac{r^2}{2} (\partial_r h_{AB}) (D_C U^C )
+r^2 U^C D_C (\partial_r h_{AB})
\nonumber \\
&-&
r^2 (\partial_r h_{AC}) h_{BE} (D^C U^E -D^E U^C)
-8\pi e^{2\beta}T_{AB}
\bigg \} =0.
\end{eqnarray}
It comprises a radial equation which determines the retarded time derivative of the two degrees of freedom
in the conformal 2-metric $h_{AB}$.
As in the electromagnetic case, the main equations can be radially integrated in sequential order.
In order to illustrate the hierarchical integration scheme we follow Bondi and Sachs by
considering an asymptotic $1/r$ expansion of the solutions in
an asymptotic inertial frame, with the matter sources confined to a compact region.
This ansatz of a $1/r$-expansion of the metric leads to the
peeling property of the Weyl tensor in the spin-coefficient approach (see~\citep{npScolar2009}).
For a more general approach in which logarithmic terms enter the far field expansion
and only a partial peeling property results, see~\citep{WLog1985}.
In the asymptotic inertial frame, often referred to as a Bondi frame,
the metric approaches the Minkowski metric \eqref{Bondi_flat}
at null infinity, so that
\begin{equation}
\label{bondi_bound}
\lim_{r\rightarrow\infty} \beta = \lim_{r\rightarrow\infty} U^A = 0\;\;,\quad
\lim_{r\rightarrow\infty} \frac{V}{r} = 1\;,\quad
\lim_{r\rightarrow\infty} h_{AB} = q_{AB}\;.
\end{equation}
Later, in Sec. \ref{sec:sym}, we will justify these asymptotic conditions in terms
of a Penrose compactification of ${\mathcal I}^+$.
For the purpose of integrating the main equations, we prescribe the following data:
\begin{enumerate}
\item The conformal 2-metric $h_{AB}$ on an initial null hypersurface $N_0$, $u=u_0$, which
has the asymptotic $1/r$ expansion
\begin{equation}
\label{h_AB_asympt}
h_{AB}(u_0,r,x^C) = q_{AB}+\frac{c_{AB}(u_0, x^E)}{r}+\frac{d_{AB}(u_0, x^E)}{r^2}+... ,
\end{equation}
where the condition $h^{AC}h_{CB} = \kron{A}{B}$ implies
\begin{equation}
\label{data_hAB}
h^{AB} = q^{AB}-\frac{c^{AB}}{r}-\frac{d^{AB}-q^{AC}c^{BD}c_{CD}}{r^2}+...
\end{equation}
with $c^{AB}:=q^{AD}q^{BE}c_{DE}$ and $d^{AB}:=q^{AD}q^{BE}d_{DE}$.
Furthermore, the derivative of the determinant condition $\det(h_{AB}) =\mathfrak{q}(x^C)$ requires
\begin{eqnarray}
\label{ddet}
&& q^{AB}c_{AB} =0\;,\quad q^{AB}d_{AB}=\frac{1}{2}c^{AB}c_{AB} \;,
\quad q^{AB}\partial_u c_{AB}=0\; , \nonumber\\
&&\quad q^{AB}\partial_u d_{AB}-c^{AB}\partial_u c_{AB}=0 .
\end{eqnarray}
\item The $1/r$ coefficient of the conformal 2-metric $h_{AB}$ for retarded times $u\in[u_0, u_1],\,u_1>u_0$,
\begin{equation}\label{cAB_data}
c_{AB}(u,x^C):=\lim_{r\rightarrow\infty}r (h_{AB}-q_{AB} )\; ,
\end{equation}
which describes the time dependence of the gravitational radiation.
\item A function $M(u,x^A)$ at the initial time $u_0$,
\begin{equation}\label{M_data}
M(u_0, x^A):=-\frac{1}{2}\lim_{r\rightarrow\infty} [V(u_0,r,x^C)-r]\;\;,\qquad
\end{equation}
which is called the mass aspect.
\item
A co-vector field $L_A(u_0,x^C)$ on the sphere at the initial time $u_0$,
\begin{equation}\label{LA_data}
L_A(u_0, x^C):=-\frac{1}{6}\lim_{r\rightarrow\infty }\Big(r^4 e^{-2\beta} h_{AB}\partial_rU^B - r \eth ^B c_{AB}\Big) ,
\end{equation}
which is the angular momentum aspect.
\end{enumerate}
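The determinant conditions \eqref{ddet} in item 1 can be checked symbolically: with a trace-free $c_{AB}$ and $q^{AB}d_{AB}=\frac{1}{2}c^{AB}c_{AB}$ imposed, $\det (h_{AB})=\mathfrak{q}$ holds through order $1/r^2$ (a Python/sympy sketch with illustrative component names):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
c1, c2, d1, d2 = sp.symbols('c1 c2 d1 d2', real=True)
s = sp.sin(th)
q = sp.Matrix([[1, 0], [0, s**2]])             # unit sphere metric q_AB
c = sp.Matrix([[c1, c2], [c2, -c1 * s**2]])    # trace-free: q^{AB} c_AB = 0
trcc = sp.trace(q.inv() * c * q.inv() * c)     # c^{AB} c_AB
# impose q^{AB} d_AB = (1/2) c^{AB} c_AB by fixing d_{phi phi}:
d = sp.Matrix([[d1, d2], [d2, s**2 * (trcc / 2 - d1)]])
h = q + c / r + d / r**2
# the deviation of det(h_AB) from q = sin^2(theta) vanishes through O(1/r^2):
residual = sp.limit(r**2 * (h.det() - s**2), r, sp.oo)
```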
In terms of a complex dyad $q^A=\lim_{r\rightarrow \infty} m^A$ on the unit sphere
so that $q^{AB} =\f{1}{\chi^2}( q^{A}\bar q^{B}+ q^{B}\bar q^{A})$,
e.g. for the choice $q^A=\frac{\chi}{\sqrt{2}}(1,i/\sin\theta)$, the real and imaginary part of
\begin{equation}
\label{eq:strain}
\sigma_0 =\frac{1}{2\chi^2}q^Aq^Bc_{AB} =\frac{1}{2}\bigg( c_{\theta\theta} - \frac{c_{\phi\phi}}{\sin^2\theta}\bigg)
+ i \bigg(\frac{ c_{\theta\phi}}{\sin\theta}\bigg)
\end{equation}
correspond, respectively, to the $+$ and $\times$ polarization modes
of the strain measured by a gravitational wave detector at large distance from the source \citep{thorne1983}.
Traditionally, the radiative strain $\sigma_0$ has also been called the shear because it measures the asymptotic
shear of the outgoing null hypersurfaces in the sense of geometric optics,
\begin{equation}
\sigma_0 =\lim_{r\rightarrow \infty}\Bigg(\frac{1}{\chi^2}r^2 q^A q^B \nabla_A \nabla_B u\Bigg) \; .
\label{eq:ashear}
\end{equation}
Note that $\sigma_0$ corresponds to the leading order of the spin coefficient $\sigma$ of the Newman-Penrose formalism \citep{npScolar2009}.
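For the quoted choice of dyad, the normalization can be confirmed directly: $\frac{1}{\chi^2}(q^A\bar q^B + q^B \bar q^A)$ reproduces $q^{AB}=\mathrm{diag}(1,1/\sin^2\theta)$ (an illustrative Python/sympy check with $\chi$ taken real):

```python
import sympy as sp

th, chi = sp.symbols('theta chi', positive=True)
s = sp.sin(th)
qA = sp.Matrix([chi / sp.sqrt(2), sp.I * chi / (sp.sqrt(2) * s)])  # dyad q^A
qAc = qA.conjugate()
qAB = sp.expand((qA * qAc.T + qAc * qA.T) / chi**2)  # (q^A qbar^B + q^B qbar^A)/chi^2
target = sp.Matrix([[1, 0], [0, 1 / s**2]])          # inverse unit-sphere metric
residual = sp.simplify(qAB - target)
```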
The retarded time derivative
\begin{equation}
\label{eq:newstensor}
N_{AB}=\frac{1}{2}\partial_u c_{AB}(u,x^C),
\end{equation} called the {\it news tensor},
determines the energy flux of gravitational radiation. The factor of $1/2$ in \eqref{eq:newstensor} is introduced to recover Bondi's original definition of the news in the axisymmetric case. The news tensor is a geometrically
determined tensor field independent of the choice of $u$-foliation (see the discussion concerning (\ref{eq:news})).
Relative to a choice of polarization dyad, the {\it Bondi news} function is
\begin{equation}\label{news}
N=\frac{1}{\chi^2}q^A q^B N_{AB} \, ,\;\;
\end{equation}
in particular the news function is the retarded time derivative of the radiation strain $N = \partial_u \sigma_0$.
Note, in carrying out the $1/r$ expansion of the field equations the covariant derivative
$D_A$ corresponding to the metric $h_{AB}$ is related to the covariant derivative
$\eth_A$ corresponding to the unit sphere metric $q_{AB}$ by
\begin{equation}
D_A V^B = \eth_A V^B + {\cal C}^B_{AE} V^E,
\end{equation}
where
\begin{equation}
\mathcal{C}^B_{AE} = \frac{1}{2r}q^{BF} (\eth_A \, c_{FE}+\eth_E\, c_{FA}-\eth_F \, c_{AE})
+O(1/r^2).
\end{equation}
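The correction ${\cal C}^B_{AE}$ is the standard first-order perturbation of the Christoffel symbols under $h_{AB}=q_{AB}+c_{AB}/r$. As an independent spot-check (not part of the original derivation), the following sympy sketch expands the Christoffel symbols of $h_{AB}$ to first order in $u=1/r$ for an illustrative, hypothetical trace-free choice of $c_{AB}$ and compares with the formula above:

```python
import sympy as sp

th, ph, u = sp.symbols('theta phi u')
x = [th, ph]
s = sp.sin(th)

# Unit-sphere metric and an illustrative smooth trace-free perturbation c_AB
q = sp.Matrix([[1, 0], [0, s**2]])
c = sp.Matrix([[sp.sin(th)*sp.cos(ph), sp.cos(th)*sp.sin(ph)],
               [sp.cos(th)*sp.sin(ph), -sp.sin(th)*sp.cos(ph)*s**2]])
h = q + u*c                      # u plays the role of 1/r

def christoffel(g):
    ginv = g.inv()
    return [[[sp.Rational(1, 2)*sum(ginv[B, D]*(sp.diff(g[D, A], x[E])
             + sp.diff(g[D, E], x[A]) - sp.diff(g[A, E], x[D]))
             for D in range(2)) for E in range(2)]
             for A in range(2)] for B in range(2)]

Gq, Gh, qinv = christoffel(q), christoffel(h), q.inv()

def eth_c(A, F, E):              # covariant derivative eth_A c_{FE} on the sphere
    return sp.diff(c[F, E], x[A]) - sum(Gq[G][A][F]*c[G, E]
                                        + Gq[G][A][E]*c[F, G] for G in range(2))

for B in range(2):
    for A in range(2):
        for E in range(2):
            coeff = sp.series(Gh[B][A][E], u, 0, 2).removeO().coeff(u, 1)
            pred = sum(qinv[B, F]*(eth_c(A, F, E) + eth_c(E, F, A)
                                   - eth_c(F, A, E)) for F in range(2))/2
            assert sp.simplify(coeff - pred) == 0
```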
Given the asymptotic gauge conditions \eqref{bondi_bound} and the initial data
\eqref{data_hAB}, \eqref{M_data}, \eqref{LA_data}, \eqref{cAB_data} on $N_0$,
the formal integration of the main equations at large $r$ proceeds in the following sequential order:
\begin{enumerate}
\item Integration of the $\beta$-hypersurface equation gives
\begin{equation}\label{eq:beta_sol}
\beta(u_0, r, x^A) = -\frac{1}{32}\frac{c^{AB}c_{AB}}{r^2} +O(r^{-3}) \; .
\end{equation}
\item Insertion of the data \eqref{h_AB_asympt} and the solution for $\beta$ into the $U^A$
hypersurface equation \eqref{eq:UA_eq} yields
\begin{eqnarray}\label{UA_hyp_asypt}
\partial_r \bigg[r^4 e^{-2\beta}h_{AB}(\partial_r U^B)\bigg] =
\eth^E c_{AE}
+\frac{S_A(u_0, x^C)}{r} +O(1/r^2)
\end{eqnarray}
where
\begin{equation}
S_A(u_0, x^C) =\eth^B ( 2 d_{AB} - q^{FG}c_{BG} c_{AF} ).
\end{equation}
As a result, unless $S_A =0$, integration of \eqref{UA_hyp_asypt} leads to
a logarithmic $r^{-4}\ln r$ term in $\partial_r U^A$, which is ruled out by the assumption of an asymptotic $1/r$ expansion.
This leads to the following result.
Because of the determinant condition (\ref{ddet}),
$$q^A q^B q^{FG}c_{BG} c_{AF}
=\frac{1}{2} q^A q^B (q^F \bar q^G+\bar q^Fq^G)c_{BG} c_{AF}=0
$$
so that
$$ q^{FG} c_{BG} c_{AF}=\frac{1}{2} q_{AB} c^{FG} c_{FG} .
$$
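This consequence of the determinant condition can be spot-checked symbolically; the sketch below parametrizes a generic symmetric trace-free $c_{AB}$ on the unit sphere (an illustrative check, not part of the derivation):

```python
import sympy as sp

theta, a, b = sp.symbols('theta a b')
s = sp.sin(theta)

# Unit-sphere metric and a generic symmetric trace-free tensor c_AB
q = sp.Matrix([[1, 0], [0, s**2]])
qinv = q.inv()
c = sp.Matrix([[a, b], [b, -a*s**2]])                   # q^{AB} c_AB = 0

lhs = c*qinv*c                                          # q^{FG} c_{BG} c_{AF}
rhs = sp.Rational(1, 2)*q*(qinv*c*qinv*c).trace()       # (1/2) q_AB c^{FG} c_{FG}
assert sp.simplify(lhs - rhs) == sp.zeros(2, 2)
```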
As a result
$$S_A = \eth^B (2d_{AB} - \frac{1}{2} q_{AB} \, c^{FG} c_{FG} ),
$$
or, again using (\ref{ddet}), the logarithmic condition becomes
\begin{equation}
S_A = 2\eth^B b_{AB} = 0
\label{eq:logcond}
\end{equation}
where $b_{AB}=d_{AB} - \frac{1}{2} q_{AB} q^{CD} d_{CD}$
is symmetric and trace-free. It now follows readily from
the powerful Newman-Penrose $\eth$-calculus \citep{np1962,npScolar2009}
that the condition $S_A=0$ implies
$b_{AB} =0$. In order to obtain this result without
$\eth$-calculus, first use $q^{AB}b_{AB}=0$ to obtain
$$S_A = 2q^{BE}\eth_E b_{AB} =2q^{BE}(\eth_E b_{AB} -\eth_A b_{EB})
$$
so that (\ref{eq:logcond}) also implies
\begin{equation}
\epsilon^{EA}\eth_E b_{AB} =0,
\label{eq:epsb}
\end{equation}
where $\epsilon_{AB}=\frac{i}{\chi\bar\chi}(q_A \bar q_B -\bar q_A q_B)$ is the antisymmetric surface area tensor
on the unit sphere.
Consider the component $\Phi^B \epsilon^{EA}\eth_E b_{AB}=0$, where
$\Phi^B$
is a Killing vector on the unit sphere.
Then
\begin{equation}
0=\Phi^B \epsilon^{EA}\eth_E b_{AB}= \epsilon^{EA}\eth_E (b_{AB}\Phi^B)
- \epsilon^{EA}b_{AB}q_{EC}\eth^C \Phi^B .
\label{eq:curlb}
\end{equation}
But, as a result of Killing's equation $\eth^A \Phi^B +\eth^B \Phi^A =0$
and the trace-free property of $b_{AB}$,
\begin{eqnarray}
\epsilon^{EA}b_{AB} \, q_{EC}\eth^C \Phi^B
&=&
\epsilon^{EA}b_{AB} \, q_{EC}\Big[\f{1}{2} (\eth^{C} \Phi^{B} +\eth^{B} \Phi^{C} )
+\f{1}{2} (\eth^{C} \Phi^{B} -\eth^{B} \Phi^{C} )\Big]\nonumber
\\ &=&
\frac{1}{2}\epsilon^{EA}b_{AB} \, q_{EC}\epsilon^{CB}\epsilon_{FG} \eth^F \Phi^G
\\&=&
\frac{1}{2} q^{AB}b_{AB} \,\epsilon_{FG} \eth^F \Phi^G
\\ &=&0,
\end{eqnarray}
where we have used the identity $T_{AB} =\frac{1}{2} \epsilon_{AB} \epsilon^{CD}T_{CD}$
satisfied in 2-dimensions by an arbitrary antisymmetric tensor $T_{AB}$.
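The two-dimensional identity just invoked can be verified symbolically, e.g. with sympy:

```python
import sympy as sp

theta, t = sp.symbols('theta t')
s = sp.sin(theta)

q = sp.Matrix([[1, 0], [0, s**2]])
eps_lo = s*sp.Matrix([[0, 1], [-1, 0]])      # epsilon_AB = sqrt(det q) [[0,1],[-1,0]]
eps_up = q.inv()*eps_lo*q.inv().T            # epsilon^{AB}
T = t*sp.Matrix([[0, 1], [-1, 0]])           # generic antisymmetric T_AB in 2D

scalar = sum(eps_up[i, j]*T[i, j] for i in range(2) for j in range(2))
assert sp.simplify(T - sp.Rational(1, 2)*eps_lo*scalar) == sp.zeros(2, 2)
```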
Consequently, (\ref{eq:curlb}) gives $\epsilon^{EA}\eth_E (b_{AB}\Phi^B)=0$ so that
$b_{AB}\Phi^B=\eth_A b$ for some scalar $b$. Inserting this result into
(\ref{eq:logcond}) yields $S_A \Phi^A=2\eth^A \eth_A b =0$ whose only solution is $b=const$.
Consequently, $b_{AB}\Phi^B =0$ which is sufficient to show the desired
result that the two independent components of $b_{AB}$ vanish. Thus $d_{AB}$
consists purely of a trace term dictated by the determinant condition
(\ref{ddet}).
Hence, applying this constraint and integrating \eqref{UA_hyp_asypt} once yields
\begin{equation}\label{eq:UA_intermediate}
r^{4}e^{-2\beta}h_{AB}\partial_r U^B =-6 L^A(u_0,x^B) + r\Big(\eth_Bc^{AB}\Big)+O(r^{-1})\;\;.
\end{equation}
\item
Rearranging \eqref{eq:UA_intermediate} while using \eqref{data_hAB} and \eqref{eq:beta_sol} and subsequent radial integration of $\partial_rU^A$ with the asymptotic data \eqref{LA_data} gives
\begin{equation}
\label{UA_solution}
U^A(u_0,r,x^B) = -\frac{\eth_Bc^{AB}}{2r^2}
+\frac{1}{r^3} \Big(2 L^A + \frac{1}{3}c^{AE}\eth ^F c_{EF}\Big)
+O(r^{-4}) .
\end{equation}
Note \eqref{UA_solution} corrects the non-linear coefficients in the $O(r^{-3})$ terms of Bondi and Sachs' original works and
agrees with the corresponding coefficient of \cite{2010JHEP...05..062B} up to the redefinition $L^A\rightarrow -3L^A$.
\item
With the initial data \eqref{h_AB_asympt} and initial values of $\beta$ and $U^A$,
the $V$-hypersurface equation \eqref{eq:V_eqn} can be integrated to find the asymptotic solution
\begin{equation}
V({u_0, r, x^A}) = r - 2M(u_0, x^A) + O(r^{-1})\;.
\end{equation}
Here $M(u,x^A)$ is called the mass aspect since in the static, spherically
symmetric case, where $h_{AB}=q_{AB}$, ${\beta=U^A=0}$ and $M(u,x^A) =m$, the
metric \eqref{BS_metric} reduces to the Eddington-Finkelstein metric for a Schwarzschild
mass $m$.
\item Insertion of the solutions for $\beta, U^A$ and $V$ into the evolution
equation \eqref{eq:ev_eqn} yields to leading order that $q^Aq^B \partial_u d_{AB}=0$,
consistent with the determinant condition (\ref{ddet}).
\item With the asymptotic solution of the metric, the leading order coefficient of the $E^r_u$ supplementary equation gives
\begin{equation}
\label{supp_duM}
2\partial_u M = \eth_A\eth_B N^{AB} - N_{AB} N^{AB} \; .
\end{equation}
Since $N_{AB}$ is assumed known for $u_0\le u\le u_1$, integration determines
the mass aspect $M$ in terms of its initial value $M(u_0,x^A)$.
\item The leading order coefficient of the $E^r_A$ supplementary equation determines the time evolution of the
angular momentum aspect $L_A$,
\begin{eqnarray}
-3\partial_u L_{A} &=&
\eth_AM
- \f{1}{4}\eth^E(\eth_{E}\eth ^F c_{AF}-\eth_{A}\eth ^F c_{EF})
+\f{1}{8} \eth_A(c_{EF}N^{EF})
\nonumber\\ &&
-\eth_C\Big(c^{CF}N_{FA}\Big)
+\f{1}{2}c^{EF}(\eth_A N_{EF})
\label{supp_duLA}
\end{eqnarray}
The motivation for calling $L_A(u,x^C)$ the angular momentum aspect can
be seen in the non-vacuum case where its controlling $E^r_A$ supplementary equation is
coupled to the angular momentum flux $r^2T^r_A$ of the matter field to null infinity.
Together with \eqref{supp_duM}, (\ref{supp_duLA}) shows that the time evolution of $L_A$ is entirely determined
by $N_{AB}$ for $u_0\le u\le u_1$ and the initial values of $L_A$, $M$ and $c_{AB}$ at $u=u_0$.
\end{enumerate}
This hierarchical integration procedure shows how the boundary conditions \eqref{bondi_bound}
and data \eqref{data_hAB}, \eqref{M_data}, \eqref{LA_data}, \eqref{cAB_data} uniquely determine a
formal solution of the field equation in terms of the coefficients of an asymptotic $1/r$ expansion.
In particular, the supplementary equations determine the time derivatives of
$M$ and $L_A$, whereas the hypersurface equations determine the higher order expansion
coefficients. However, this formal solution cannot be cast as a well-posed evolution problem to determine
the metric for $u>u_0$ because the necessary data, e.g. $c_{AB}(u, x^C)$, lies in the future of the initial
hypersurface at $u_0$. Nevertheless, this formal solution led Bondi to the first clear understanding
of mass loss due to gravitational radiation. It gives rise to the interpretation of the supplementary
conditions as flux conservation laws for energy-momentum and angular momentum~\citep{tw1966,goldberg1974}.
The time-dependent {\it Bondi mass} $m(u)$ for an isolated system is
\begin{equation}
\label{BondiMass}
m(u):=\frac{1}{4\pi} \oint M(u,\theta,\phi)\sin\theta d\theta d\phi \; .
\end{equation}
The integration of \eqref{supp_duM} over the sphere,
using the definition of the news function \eqref{news}, gives the famous Bondi mass loss formula
\begin{equation}
\label{mass_loss}
\frac{d}{du}m(u) = -\frac{1}{4\pi}\oint |N|^2\sin\theta d\theta d\phi\;\;,
\end{equation}
where the first term of \eqref{supp_duM} integrates out because of the divergence theorem.
The positivity of the integrand in \eqref{mass_loss} shows that if a system emits gravitational waves, i.e. if there is news,
then its Bondi mass must decrease.
If there is no news, i.e. $N=0$, the Bondi mass is constant.
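As a numerical illustration of the mass loss formula \eqref{mass_loss}, the sketch below integrates $|N|^2$ over the sphere by a midpoint rule for a hypothetical news function (the profile $N=n_0\sin^2\theta\,e^{i\phi}$ is chosen for illustration only, not taken from the text):

```python
import numpy as np

# Hypothetical news profile N = n0 sin^2(theta) e^{i phi}
n0 = 0.5
nth, nph = 1000, 1000
dth, dph = np.pi/nth, 2*np.pi/nph
th = (np.arange(nth) + 0.5)*dth          # midpoint rule in theta
ph = (np.arange(nph) + 0.5)*dph          # midpoint rule in phi
TH, PH = np.meshgrid(th, ph, indexing='ij')

N = n0*np.sin(TH)**2*np.exp(1j*PH)
flux = np.sum(np.abs(N)**2*np.sin(TH))*dth*dph
dm_du = -flux/(4.0*np.pi)

# Closed form: -(1/4pi) * n0^2 * 2pi * int_0^pi sin^5(theta) dtheta = -8 n0^2/15
assert abs(dm_du + 8*n0**2/15) < 1e-4
print(dm_du)   # approximately -0.13333
```

The quadrature reproduces the closed-form value $-8n_0^2/15$, and the manifest negativity of $dm/du$ is the monotonic mass loss discussed above.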
The expressions for the Bondi mass \eqref{BondiMass} and the mass loss formula \eqref{mass_loss} were generalized to spacetimes with non-zero cosmological constant by Saw (2016), and higher-dimensional generalisations of \eqref{BondiMass} and \eqref{mass_loss} can be found in \cite{Tanabe2011} and \cite{GodazgarReall2012}.
Here \eqref{supp_duLA} corrects the original equations of Bondi and Sachs for the time evolution
of the angular momentum aspect $L_A$. For the Bondi metric in which
$\gamma(u,r,\theta) = c(u,\theta)/r+O(1/r^3)$, \eqref{supp_duLA} becomes
\begin{equation}
-3\partial_uL_\theta =
\partial_\theta M
+\frac{1}{2}c (\partial_\theta N)
-\frac{3}{2}N(\partial_\theta c) \;\;,
\end{equation}
where $N=\partial_u c$ is the axisymmetric Bondi news function.
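This axisymmetric reduction of \eqref{supp_duLA} can be verified symbolically. The sympy sketch below (a consistency check, not part of the original text) implements covariant derivatives on the unit sphere, inserts $c_{AB}=\mathrm{diag}(2c,-2c\sin^2\theta)$ and $N_{AB}=\mathrm{diag}(N,-N\sin^2\theta)$, and checks the $\theta$ component:

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
s = sp.sin(th)
q = sp.Matrix([[1, 0], [0, s**2]])
qinv = q.inv()

def Gam(b, a, e):   # Christoffel symbols of the unit-sphere metric q_AB
    return sp.Rational(1, 2)*sum(qinv[b, d]*(sp.diff(q[d, a], x[e])
        + sp.diff(q[d, e], x[a]) - sp.diff(q[a, e], x[d])) for d in range(2))

def cov1(V):        # nabla_a V_b for a covector V
    return [[sp.diff(V[b], x[a]) - sum(Gam(g, a, b)*V[g] for g in range(2))
             for b in range(2)] for a in range(2)]

def cov2(T):        # nabla_a T_{bc} for a (0,2) tensor T
    return [[[sp.diff(T[b][c], x[a]) - sum(Gam(g, a, b)*T[g][c]
              + Gam(g, a, c)*T[b][g] for g in range(2))
              for c in range(2)] for b in range(2)] for a in range(2)]

c = sp.Function('c')(th)   # axisymmetric c(u,theta) at fixed u
N = sp.Function('N')(th)   # news N = partial_u c
M = sp.Function('M')(th)   # mass aspect
cAB = [[2*c, 0], [0, -2*c*s**2]]
NAB = [[N, 0], [0, -N*s**2]]
cup = [[sum(qinv[a, g]*qinv[b, h]*cAB[g][h] for g in range(2)
            for h in range(2)) for b in range(2)] for a in range(2)]  # c^{AB}

A = 0               # the theta component
dc, dN = cov2(cAB), cov2(NAB)
V = [sum(qinv[F, G]*dc[G][a][F] for F in range(2) for G in range(2))
     for a in range(2)]                                  # V_a = eth^F c_{aF}
DV = cov1(V)
W = [[DV[e][a] - DV[a][e] for a in range(2)] for e in range(2)]
dW = cov2(W)
Tmix = [[sum(cup[C][F]*NAB[F][a] for F in range(2)) for a in range(2)]
        for C in range(2)]                               # c^{CF} N_{FA}

term1 = sp.diff(M, th)
term2 = -sp.Rational(1, 4)*sum(qinv[E, G]*dW[G][E][A]
                               for E in range(2) for G in range(2))
term3 = sp.Rational(1, 8)*sp.diff(sum(cup[e][f]*NAB[e][f]
            for e in range(2) for f in range(2)), th)
term4 = -(sum(sp.diff(Tmix[C][A], x[C]) for C in range(2))
          + sum(Gam(C, C, G)*Tmix[G][A] for C in range(2) for G in range(2))
          - sum(Gam(G, C, A)*Tmix[C][G] for C in range(2) for G in range(2)))
term5 = sp.Rational(1, 2)*sum(cup[e][f]*dN[A][e][f]
                              for e in range(2) for f in range(2))

rhs = term1 + term2 + term3 + term4 + term5
target = sp.diff(M, th) + c*sp.diff(N, th)/2 - sp.Rational(3, 2)*N*sp.diff(c, th)
assert sp.simplify(rhs - target) == 0
```

The check also confirms that the purely linear term in $c_{AB}$ (the second term above) vanishes identically in axisymmetry.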
The asymptotic approach of Bondi and Sachs illustrates the key features of the metric based null cone formulation
of general relativity. Nevertheless, assigning boundary data such as the news function $N$ at large distances
is non-physical as opposed to determining $N$ by evolving an interior system (see Sec.~\ref{sec:world-tube}).
In particular, assignment of boundary data on a finite worldtube surrounding the source
leads to gauge conditions in which the asymptotic Minkowski behavior \eqref{bondi_bound} does not hold.
\section{The Bondi-Metzner-Sachs (BMS) group}
\label{sec:sym}
The asymptotic symmetries of the metric can be most clearly and elegantly described using a Penrose compactification
of null infinity~\citep{penrose1963}. In that case the assumption of an
asymptotic series expansion in $1/r$ becomes a smoothness
condition at $\mathcal{I}^+$.
In Penrose's compactification of null infinity, $\mathcal{I}^+$ is the finite boundary of an unphysical space
time containing the limiting end points of null geodesics in the physical space time. If $g_{ab}$ is the metric
of the physical spacetime and $\hat g_{ab}$ denotes the metric of the unphysical spacetime, the two metrics are
conformally related via $\hat g_{ab} = \Omega^2 g_{ab}$, where $\hat g_{ab}$ is smooth
(at least $C^3$) and $\Omega=0$ at $\mathcal{I}^+$.
Asymptotic flatness requires that $\mathcal{I}^+$ has the topology $\mathbb{R}\times \mathbb{S}^2$
and that $\hat \nabla_a \Omega$ vanishes nowhere at $\mathcal{I}^+$. The conformal space and physical space
Ricci tensors are related by
\begin{equation}
\label{Ric_phys_conf}
\Omega^2 R_{ab} = \Omega^2 \hat R_{ab} + 2\Omega\hat \nabla_a\hat \nabla_b \Omega
+ \hat g_{ab}\Big[\Omega\hat \nabla^c\hat \nabla_c \Omega -3(\hat\nabla^c\Omega )\hat\nabla_c\Omega\Big]
\end{equation}
where $\hat \nabla_a$ is the covariant derivative with respect to $\hat g_{ab}$.
Separating out the trace of \eqref{Ric_phys_conf}, evaluation of the physical space vacuum
Einstein equations $R_{ab}=0$ at $\mathcal{I}^+$ implies
\begin{subequations}
\begin{eqnarray}
0 & = & [(\hat\nabla^c\Omega )\hat\nabla_c\Omega]_{\mathcal{I}^+} \label{scri_null} \\
0 & = & \Big[\hat \nabla_a\hat \nabla_b \Omega
-\frac{1}{4}\hat g_{ab}\hat\nabla^c \hat\nabla_c\Omega\Big]_{\mathcal{I}^+} \; .
\label{scrishear}
\end{eqnarray}
\end{subequations}
The first condition shows that $\mathcal{I}^+$ is a null hypersurface and the second assures
the existence of a conformal transformation $\Omega^{-2}\hat g_{ab} = \tilde \Omega^{-2}\tilde g_{ab}$
such that $\tilde \nabla_a \tilde\nabla _b\tilde \Omega|_{\mathcal{I}^+} = 0$.
Thus there is a set of preferred conformal factors $\tilde \Omega$ for which null infinity is a divergence-free
($\tilde \nabla^c\tilde\nabla_c\tilde\Omega |_{\mathcal{I}^+} = 0$) and
shear-free ($\tilde \nabla_a \tilde\nabla _b\tilde \Omega|_{\mathcal{I}^+} = 0$) null hypersurface.
A coordinate representation $\hat x^a =(u,\ell, x^A)$ of the compactified space
can be associated with the Bondi--Sachs physical
space coordinates in Sec.~\ref{sec:BSmetric} by the transformation $\hat x^a= (u,\ell, x^A)=(u,1/r,x^A)$.
Here the inverse areal coordinate $\ell=1/r$
also serves as a convenient choice of conformal factor $\Omega=\ell$.
This gives rise to the conformal metric
\begin{equation}
\label{cBS_metric_scri}
\hat g_{ab}d \hat x^ad\hat x^b
=\ell^3V e^{2\beta} d u^2+2 e^{2\beta}d ud\ell +h_{AB}\Big(d x^A-U^Ad u\Big)\Big(d x^B-U^Bd u\Big)\, ,
\end{equation}
where $ \det (h_{AB}) =\mathfrak{ q}$.
The leading coefficients of the conformal space metric are subject to the Einstein equations
(\ref{Ric_phys_conf}) according to
\begin{eqnarray}
h_{AB} &=&H_{AB}( u, x^C) + \ell c_{AB}( u, x^C)+ O(\ell^2)\\
\beta & = & H( u, x^C)+ O(\ell^2) \\
U^A & = & H^A( u, x^C)+ 2\ell e^{2H} H^{AB} D_B H+ O(\ell^2) \\
\ell^2 V&=& D_AH^A + \ell\Big[\frac{1}{2}\mathcal{R} + D^A D_A e^{2H}\Big]+O(\ell^2),
\end{eqnarray}
where here $\mathcal{R} $ is the Ricci scalar and $ D_A$ is the covariant derivative associated with $H_{AB}$.
In (\ref{cBS_metric_scri}), $H$, $H^A$ and $H_{AB}$ have a general form which does not correspond
to an asymptotic inertial frame.
In order to introduce inertial coordinates consider the null vector $\hat n^a =\hat g^{ab}\hat \nabla_b \ell$
which is tangent to the null geodesics generating $\mathcal{I}^{+}$.
In a general coordinate system, it has components at $\mathcal{I}^+$
\begin{equation}
\hat n^a|_{\mathcal{I}^+} = \Big(e^{-2H},0,-e^{-2H}H^A\Big)
\end{equation}
arising from the contravariant metric components
\begin{equation}
\label{cBS_contra_scri}
\hat g^{ab} \Big|_{\mathcal{I}^+}= \left(\begin{array}{ccc}0
& e^{-2H} & 0 \\e^{-2H} & 0 & -H^Ae^{-2H} \\0&-H^Ae^{-2H} & H^{AB}
\end{array}\right) \; .
\end{equation}
Introduction of the inertial version of angular coordinates by requiring
$${\hat n^a \partial_a x^A|_{\mathcal{I}^+}=0}$$ results in $H^A =0$.
Next, introduction of the inertial version of a retarded time coordinate by requiring that $u$ be an affine parameter
along the generators of $\mathcal{I}^+$,
with $$\hat n^a \partial_a u\Big|_{\mathcal{I}^+}=1,$$ results in $H=0$.
It also follows that $\ell$ is a preferred conformal factor so that the
divergence free and shear free condition $\tilde \nabla_a \tilde\nabla _b \ell\,\big|_{\mathcal{I}^+} = 0$ implies
that $\partial_u H_{AB}=0$. This allows a time independent conformal transformation
$\ell \rightarrow \omega (x^C) \ell $ such that $H_{AB}\rightarrow q_{AB}$,
so that the cross-sections of $\mathcal{I}^+$ have unit sphere geometry.
In this process, the condition $H=0$ can be retained by an affine change in $u$.
Thus it is possible to establish an inertial coordinate system $\hat x^a$ at $\mathcal{I}^+$,
which justifies the Bondi-Sachs boundary conditions \eqref{bondi_bound}.
In these inertial coordinates, the conformal metric has the asymptotic behavior
\begin{subequations}\label{eq:BS_metric_inertial}
\begin{eqnarray}
h_{AB} &=&q_{AB}( u, x^C) + \ell c_{AB}(u,x^C)+ O(\ell^2)\\
\beta & =& O(\ell^2) \\
U^A & = &-\frac{\eth_B c^{AB}}{2}\ell^2+ 2 L^A \ell^3+O(\ell^4) \\
\ell^3 V&=& \ell^2 -2M\ell^3 +O(\ell^4) ,
\end{eqnarray}
\end{subequations}
showing that the Bondi-Sachs variables $c_{AB}$, mass aspect $M$ and angular momentum aspect $L^A$
are the leading order coefficients of a Taylor series at null infinity with respect to the preferred conformal
factor $\ell$.
It follows from (\ref{scrishear}) that $\ell^{-1}\hat \nabla_a \hat\nabla_b \ell$
has a finite limit at $\mathcal{I}^+$. In inertial coordinates the tensor field
\begin{equation}
N_{ab} = \zeta^*\Big( \lim_{\ell\rightarrow 0} \,\ell^{-1} \hat \nabla_a \hat\nabla_b \ell \Big)\; ,
\label{eq:news}
\end{equation}
where $\zeta^*$ represents the pull-back to $\mathcal{I}^+$ \citep{Geroch1977}, i.e. the intrinsic $(u,x^A)$ components,
equals the news tensor (\ref{eq:newstensor}).
It also follows that $N_{ab}$ is
independent of the choice of conformal factor $\Omega= \ell \rightarrow \omega \ell$,
$\omega>0$. This establishes the important result that the news tensor
is a geometrically defined tensor field on $\mathcal{I}^+$ independent of the choice of
$u$-foliation.
The {\it BMS group} is the asymptotic isometry group of
the Bondi-Sachs metric \eqref{BS_metric}. In terms of the physical space metric,
the infinitesimal generators $\xi^a$ of the BMS group satisfy the asymptotic version of
Killing's equation
\begin{equation}
\Omega^2 {\cal L}_\xi g_{ab} |_{{\mathcal I}^+} = -2\Omega^2 \nabla^{(a} \xi^{b)}|_{{\mathcal I}^+} =0\;\;,
\end{equation}
where $\mathcal{L}_\xi$ denotes the Lie derivative along $\xi^a$.
In terms of the conformal space metric \eqref{eq:BS_metric_inertial} with
conformal factor $\Omega = \ell$, this implies
\begin{equation}
\Big[ \hat \nabla^{(a} \xi^{b)} - \ell^{-1}\hat g^{ab} \xi^c \partial_c \ell \Big]_{\ell=0} =0.
\label{eq:ckill}
\end{equation}
This immediately requires $\xi^c \partial_c \ell=0$, i.e. the generator is tangent to ${\mathcal I}^+$
and $\ell^{-1} \xi^c \partial_c \ell |_{{\mathcal I}^+} =\partial_\ell \xi^\ell |_{{\mathcal I}^+}$.
Then (\ref{eq:ckill}) takes the explicit form
\begin{equation}
\Big[ \hat g^{ac} \partial_c \xi^b + \hat g^{bc} \partial_c \xi^a
-\xi^c \partial_c \hat g^{ab} -\hat g^{ab} \partial_\ell \xi^\ell \Big]_{{\mathcal I}^+} =0 \; ,
\label{eq:exckill}
\end{equation}
where (\ref{cBS_contra_scri}) reduces in the inertial frame to
\begin{equation}
\label{ginert_scri}
\hat g^{ab} \Big|_{{\mathcal I}^+} = \left(\begin{array}{ccc}0
&1& 0 \\ 1 & 0 & 0 \\0& 0 & q^{AB} \;
\end{array}\right) .
\end{equation}
Since only $\hat g^{ab}|_{{\mathcal I}^+} $ enters (\ref{eq:exckill}), it is simple to analyze.
This leads to the general solution
\begin{equation}
\label{eq:xi_killi}
\xi^a\partial_a|_{\ell=0} = \Big[\alpha(x^C) +\frac{u}{2} \eth_B f^B(x^C)\Big]\partial_u + f^A(x^C)\partial_A
\end{equation}
where $f^{A}(x^C)$ is a conformal Killing vector of the unit sphere metric,
\begin{equation}
\label{eq:fA}
\eth^{(A}f^{B)} -\frac{1}{2}q^{AB} \eth_C f^C = 0 \;\;.
\end{equation}
These constitute the generators of the BMS group.
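The conformal Killing property of the generators $f^A$ can be checked symbolically; the sketch below verifies it for a rotation and for a boost-like generator (illustrative choices, computed with sympy):

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
q = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
qinv = q.inv()

def Gam(b, a, e):   # Christoffel symbols of the unit-sphere metric
    return sp.Rational(1, 2)*sum(qinv[b, d]*(sp.diff(q[d, a], x[e])
        + sp.diff(q[d, e], x[a]) - sp.diff(q[a, e], x[d])) for d in range(2))

def conformal_killing_residual(f):
    # residual of eth^{(A} f^{B)} - (1/2) q^{AB} eth_C f^C
    nab = sp.Matrix(2, 2, lambda A, B: sp.diff(f[B], x[A])
                    + sum(Gam(B, A, C)*f[C] for C in range(2)))
    up = qinv*nab                                # eth^A f^B
    div = sum(nab[A, A] for A in range(2))       # eth_C f^C
    return sp.simplify((up + up.T)/2 - sp.Rational(1, 2)*qinv*div)

# a rotation (exact Killing vector) and a boost-like conformal Killing vector
assert conformal_killing_residual([0, 1]) == sp.zeros(2, 2)
assert conformal_killing_residual([-sp.sin(th), 0]) == sp.zeros(2, 2)
```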
The BMS symmetries with $f^A=0$ are called {\it supertranslations}; and those
with $\alpha=0$ describe conformal transformations of the unit sphere,
which are isomorphic to the orthochronous Lorentz
transformations~\citep{Sachs1962BMS}. The supertranslations form
an infinite dimensional invariant subgroup of the BMS group. Of special importance,
the supertranslations consisting of $l=0$ and $l=1$ spherical
harmonics, e.g. $\alpha = a +a_x \sin\theta \cos\phi+a_y \sin\theta \sin\phi +a_z \cos\theta$,
form an invariant 4-dimensional translation group consisting of time translations $(a)$
and spatial translations $(a_x,a_y,a_z)$. This allows an unambiguous definition
of energy-momentum. However, because the Lorentz group is not an invariant
subgroup of the BMS group there arises a supertranslation ambiguity in the
definition of angular momentum. Only in special cases, such as stationary spacetimes,
can a preferred Poincar\'e group be singled out from the BMS group.
Consider the finite supertranslation, $\tilde u = u + \alpha(x^A) +O(\ell)$, with $\tilde x^A = x^A$,
where the $O(\ell)$ term is required to maintain $u$ as a null coordinate. Under this
supertranslation,
the radiation strain
or asymptotic shear (\ref{eq:ashear}), i.e.
$\sigma (u,x^C) = \frac{r^2}{\chi^2}q^A q^B \nabla_A \nabla_B u |_{{\mathcal I}^+} $, transforms according to
\begin{equation}
\tilde \sigma (u,x^C) =\frac{r^2}{\chi^2} q^A q^B \nabla_A \nabla_B \tilde u |_{{\mathcal I}^+} = \sigma(u,x^C)+\frac{1}{\chi^2}q^A q^B \eth_{A} \eth_{B}\alpha(x^C).
\end{equation}
This reveals the gauge freedom in the radiation strain under supertranslations.
Note, because $\alpha$ is a real function, in the terminology of the Newman-Penrose
spin-weight formalism~\citep{np1962,newpbms,goldberg}, this gauge freedom
only affects the electric (or E-mode~\citep{linmem}) component of the shear.
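Only the $l\ge 2$ part of $\alpha$ changes the strain, since the trace-free part of $\eth_A\eth_B\alpha$ vanishes for $l=0,1$ spherical harmonics; this can be checked symbolically (an illustrative verification, not part of the original text):

```python
import sympy as sp

th, ph = sp.symbols('theta phi')
x = [th, ph]
q = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
qinv = q.inv()

def Gam(b, a, e):   # Christoffel symbols of the unit-sphere metric
    return sp.Rational(1, 2)*sum(qinv[b, d]*(sp.diff(q[d, a], x[e])
        + sp.diff(q[d, e], x[a]) - sp.diff(q[a, e], x[d])) for d in range(2))

def tracefree_hessian(alpha):
    # eth_A eth_B alpha - (1/2) q_AB eth^C eth_C alpha
    H = sp.Matrix(2, 2, lambda A, B: sp.diff(alpha, x[A], x[B])
                  - sum(Gam(C, A, B)*sp.diff(alpha, x[C]) for C in range(2)))
    tr = sum(qinv[A, B]*H[A, B] for A in range(2) for B in range(2))
    return sp.simplify(H - sp.Rational(1, 2)*q*tr)

a0, ax, ay, az = sp.symbols('a_0 a_x a_y a_z')
alpha = a0 + ax*sp.sin(th)*sp.cos(ph) + ay*sp.sin(th)*sp.sin(ph) + az*sp.cos(th)
assert tracefree_hessian(alpha) == sp.zeros(2, 2)          # pure translation
assert tracefree_hessian(sp.cos(th)**2) != sp.zeros(2, 2)  # l=2 part survives
```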
\section{The worldtube-null-cone formulation}
\label{sec:world-tube}
In contrast to the Bondi-Sachs treatment in terms of a $1/r$ expansion at infinity,
in the worldtube-null-cone formulation the boundary conditions for the hypersurface and evolution equations
are provided on a timelike worldtube $\Gamma$ with finite areal radius
$R$ and topology $\mathbb{R}\times\mathbb{S}^2$.
This is similar to the electromagnetic analog discussed in Sec.~\ref{sec:em_analog}.
The worldtube data may be supplied by a solution of Einstein's equations interior to $\Gamma$, so that
it satisfies the supplementary conditions on $\Gamma$.
In the most important application, the worldtube data is obtained by matching to a numerical solution
of Einstein's equations carried out by a Cauchy evolution of the interior. It is also possible to
solve the supplementary conditions as a well-posed system on $\Gamma$ if the interior
solution is used to supply the necessary coefficients~\citep{W2011world-tube}.
Coordinates $(u,x^A)$ on $\Gamma$ have the same $2+1$ gauge freedom in the choice of lapse and shift
as in a $3+1$ Cauchy problem. This produces a foliation of $\Gamma$ into spherical cross-sections
$S_u$. In one choice, corresponding to unit lapse and zero shift,
$u$ is the proper time along the timelike geodesics normal to some initial cross-section $S_0$ of $\Gamma$, with
angular coordinates $x^A$ constant along the geodesics. In the case of an interior numerical solution,
the lapse and shift are coupled to the lapse and shift of the Cauchy evolution in the interior of the worldtube.
These coordinates are extended off the worldtube $\Gamma$ by letting $u$ label the
family of outgoing null hypersurfaces $N_u$ emanating from
$S_u$ and letting $x^A$ label the null rays in $N_u$. A Bondi-Sachs coordinate system
$(u,r,x^A)$ is then completed by letting $r$ be areal coordinate along the null rays,
with $r=R$ on $\Gamma$, as depicted in Fig.~\ref{fig:WT}.
The resulting metric has the Bondi--Sachs form \eqref{BS_metric}, which induces the $2+1$ metric
intrinsic to $\Gamma$,
\begin{equation}
\label{eq:BS_metric_WT}
g_{ab}dx^adx^b\big|_\Gamma = -\frac{V}{R}e^{2\beta}du^2 + R^2h_{AB}(dx^A-U^Adu)(dx^B-U^Bdu)\;\; ,
\end{equation}
where $Ve^{2\beta}/R$ is the square of the lapse function and $(-U^A)$ is the shift.
The Einstein equations now reduce to the main hypersurface and evolution equations presented in Sec.~\ref{sec:BSmetric},
assuming that the worldtube data satisfy the supplementary conditions.
As in the electromagnetic case, surface integrals of the supplementary equations \eqref{supp} can be interpreted as conservation
conditions on $\Gamma$, as described in~\citep{tw1966,goldberg1974}.
The main equations can be solved with the prescription of the following mixed initial-boundary data:
\begin{itemize}
\item The areal radius $R$ of $\Gamma$ and $\partial_r U^A|_{\Gamma}$, as determined by
matching to an interior solution.
\item The conformal 2-metric $h_{AB}|_{N_0}$ on an entire initial null cone $N_0$ for $r>R$.
\item The values of $\beta|_{S_0}$, $U^A|_{S_0}$, $\partial_r U^A|_{S_0}$ and $V|_{S_0}$
on the initial cross section $S_{0}$ of $\Gamma$.
\item The retarded time derivative of the conformal 2-metric $\partial_u h_{AB}|_\Gamma$ on $\Gamma$ for $u>u_0$.
\end{itemize}
Given this initial-boundary data, the hypersurface equations can be solved in the same hierarchical order
as illustrated for the electromagnetic case in Sec.~\ref{sec:EinsteinEquation_BSsolution}
and the evolution equation can be solved using a finite difference
time-integrator. It has been verified in numerical testbeds,
using either finite difference approximations \citep{extraction,HighNews}
or spectral methods \citep{spec_evolution_2015} for the spatial approximations,
that this evolution algorithm is stable and converges
to the analytic solution. However, proof of the well-posedness of the analytic initial-boundary
problem for the above system remains an open issue.
A limiting case of the worldtube-null-cone problem arises when $\Gamma$ collapses to a single world line traced
out by the vertices of outgoing null cones. Here the metric variables are restricted by regularity conditions
along the vertex worldline~\citep{IsaacWellingWinicour}. For a geodesic worldline, the null
coordinates can be based on a local Fermi normal coordinate system~\citep{MisnerManasse}, where
$u$ measures proper time along the worldline and labels the outgoing null cones.
It has been shown for axially symmetric spacetimes~\citep{MMvertex} that the regularity conditions on the metric
in Fermi coordinates place very rigid constraints on the
coefficients of the null data $h_{AB}$ in a Taylor expansion in $r$ about the vertices
of the outgoing null cones. As a result, implementation of an evolution algorithm of the worldline-null-cone
problem for the Bondi-Sachs equations is complicated and has been restricted to simple problems.
Existence theorems have been established for
a different formulation of the worldline-null-cone problem in terms of wave maps~\citep{ChoqChrusMart}
but this approach does not have a clear path toward numerical evolution.
\section{Applications}
By July 2016, the seminal works of Bondi, Sachs and their collaborators had together spawned more than
1500 citations on the Harvard ADS database~\footnote{\url{http://adsabs.harvard.edu/abstract_service.html}}
(with more than 600 in the last 10 years), showing that the Bondi-Sachs formalism
has found widespread applications. The main field of application of the Bondi-Sachs formalism
is numerical relativity and an extensive overview is given in the {\it Living Review} articles of \citep{wLRR} and \citep{brLRR}.
The BMS group has played an important role in defining the energy-momentum and angular momemtum
of asymptotically flat spacetimes. For a historical account see~\citep{goldbergbms}.
Applications of the Bondi--Sachs formalism can be roughly grouped into the following topics,
where a selective choice of references is given.
\vspace{1ex}
\noindent {\em Numerical Relativity --- Null cone evolution schemes }
\begin{itemize}
\item axisymmetric simulations \citep{IsaacWellingWinicour,Gomez1994,D'inverno1996}
\item Einstein-Scalar field evolutions \citep{Gomez1993,Barreto2014}
\item spectral methods \citep{deOliveira2011,spec_evolution_2015,extract_spectral_2015,extract_spectral_2016}
\item black hole physics \citep{Bishop1996PRL,Papadopoulos(2002),Husaetal.(2002),PoissonVlasov(2010)}
\item relativistic stars \citep{Linkeetal.(2001),Siebel2002a,Barretoetal.(2009)}
\end{itemize}
\vspace{1ex}
\noindent {\em Numerical Relativity --- Waveform extraction}
\begin{itemize}
\item Cauchy-characteristic extraction and conformal compactification \citep{extraction,HighNews,Babiuc2009}
\item gauge invariant wave extraction with spectral methods \citep{extract_spectral_2015,extract_spectral_2016}.
\item extraction in physical space \citep{Lehner2007waveform,Nerozzietal.(2006)}
\end{itemize}
\vspace{1ex}
\noindent {\em Cosmology}
\begin{itemize}
\item reconstruction of the past light cone \citep{Ellisetal.(1985)}
\item gravitational waves in cosmology \citep{Bishop2016}
\end{itemize}
\vspace{1ex}
\noindent {\em BMS group and gravitational memory}
\begin{itemize}
\item BMS representation of energy-momentum and angular momentum
\citep{tw1966,gw,ashtekstreub,draystreub,waldzoup,goldbergbms}
\item BMS algebra in 3/4 dimensions and BMS/conformal field theory (CFT) correspondence
\citep{2007CQGra..24F..15B,2010JHEP...05..062B,2010PhRvL.105k1103B}
\item soft theorems and the radiation memory effect \citep{mem_soft_theorem,globalemmem,linmem}, boosted Kerr-Schild metrics and radiation memory \citep{Madler2018}
\item black hole information paradox \citep{soft_info,Donnayetal.(2016)}
\end{itemize}
\vspace{1ex}
\noindent {\em Exact and Approximate Solutions}
\begin{itemize}
\item Newtonian approximation \citep{Wnewton1983,nullinf}
\item linearized solutions and master equation approaches \citep{
extraction,2005CQGra..22.2393B,2013PhRvD..87j4016M,2016GReGr..48...45C}
\item boost-rotation symmetric solutions \citep{Bicaketal.(1988),BicakPravdova(1998)}
\end{itemize}
\centerline {Acknowledgement}
J.W. was supported by NSF grant PHY-1505965 to the University of Pittsburgh.
\bibliographystyle{plainnat}
\section{Introduction}
Due to the rapid development of the hybridization expansion continuous-time
quantum Monte Carlo (CT-HYB)\cite{rmp_ctqmc} method, an efficient
solver for quantum impurity models, substantial progress has been achieved
in the electronic structure studies of strongly correlated materials within the framework of density functional
theory (DFT) implemented with dynamical mean-field theory (DMFT)\cite{rmp_dftdmft}.
However, CT-HYB is insufficient for studies of the low temperature ($\sim$O(10)~K)
properties of heavy fermion materials in the Kondo regime, where the
itinerant $s$, $p$, $d$ electrons co-exist and interact with the $f$ electrons,
which are localized by the large Coulomb repulsion $U$ among them.
The failure of CT-HYB lies in its algorithmic construction, in which
configurations with large charge fluctuations are frequently proposed
during the Monte Carlo updates, resulting in
small acceptance rates in the Kondo regime, where the charge fluctuations are nearly frozen.
As the temperature $T$ decreases, the CT-HYB method becomes increasingly inefficient because, with the longer
imaginary time extent $\beta=1/k_{B}T$, configurations with large charge fluctuations, which have very small
acceptance rates, are more and more likely to be proposed during the sampling process.
One way to solve the above problem is to perform a Schrieffer-Wolff transformation (SWT)~\cite{SWTrans} on the
single impurity Anderson model (SIAM) in the strong coupling limit, which yields an effective low-energy $s$-$d$
exchange model in which local charge fluctuations are projected out
and only virtual processes are retained. The well-known Coqblin-Schrieffer (CS)\cite{CSpaper}
model and the Kondo model\cite{KMpaper} are two typical SW-transformed models.
The CT-J algorithm~\cite{ctJ_paper} was developed to simulate such models by expanding the partition
function in terms of the $s$-$d$ exchange interactions. With its much higher efficiency,
CT-J can be applied to study Kondo physics within these two localized
models down to much lower temperatures. Based on the corresponding Kondo lattice model,
Matsumoto {\it et al.} performed DMFT calculations for Ce-122 compounds
and successfully reproduced the general trend of the antiferromagnetic transition temperature around the magnetic quantum
critical point\cite{KLM_prl2009}. In their approach, they first calculated the hybridization function
between the conduction bands and the 4$f$ electrons by DFT+DMFT
with the Hubbard-I approximation as the impurity solver, and then constructed the effective CS model by estimating the $s$-$d$
exchange parameter $J$ obtained from the SWT. However, such a construction
neglects the fact that $J$ has momentum and orbital dependence.
Furthermore, once the realistic interactions (not the density-density type) among the $f$-electrons have been considered,
the SWT will become enormously tedious and complicated\cite{SW_f2S1_uranium}.
As a result, CT-J is not the best practical choice for calculations on realistic heavy fermion materials.
In CT-HYB, the local trace part in the partition function can be viewed
as contributions from the various ``evolution paths''\cite{iqist} among different atomic
multiplets $\{\Gamma\}$ which can be grouped into high energy states
$\{\Gamma^{h}\}$ and low energy states $\{\Gamma^{l}\}$ according
to their atomic eigenenergies $E_{\Gamma}$. In the Kondo regime, it is assumed
that $\{\Gamma^{l}\}$ are configurations with occupancy $n$, and $\{\Gamma^{h}\}$
are those with occupancy $n\pm1$, $n$ being a non-zero integer.
Furthermore, it is also assumed that $E_{\Gamma^{h}}\gg E_{\Gamma^{l}}$, as schematically
shown in Fig.~\ref{ctx_approx}(a). Under this condition, if one takes snapshots of the dynamics of the electrons on the impurity site,
the atomic state stays in the low-energy configurations for most of the time, as shown in Fig.~\ref{ctx_approx}(c).
The lower the energy of a configuration, the longer the time spent in it.
The imaginary-time evolution operator of the high energy
states, $e^{-E_{\Gamma^{h}}\tau}$, decays much faster than that of
$\Gamma^{l}$ as illustrated in Fig.~\ref{ctx_approx}(b).
As $E_{\Gamma^{h}}$ increases,
the sharply decaying $e^{-E_{\Gamma^{h}}\tau}$ can be well approximated by the $\delta$-functions
centered at time zero, assuming that $\Gamma^{h}$ appears only in the range of $\tau\in[0,\sim0^{+})$.
Based on the above assumption, in the present paper we introduce a new impurity
solver by approximating $e^{-E_{\Gamma^{h}}\tau}$ with a probability-normalized
$\delta$-function. With this new method, we are able to take into account all the virtual processes
that involve charge fluctuations from the $\Gamma^{l}$ to the $\Gamma^{h}$ states without explicitly applying the SWT,
which is difficult for realistic materials. Furthermore, the approximation does
not depend on the details of the local interaction and thus can readily be used in DMFT
calculations of heavy fermion materials.
The rest of the paper is organized as follows. In Section II, we first
summarize the CT-HYB method and then introduce the
cutoff of the local Hilbert space. After that, we propose our
approximations to the local trace part of the partition function for quantum impurity models in the Kondo limit.
In Section III, we describe how to design Monte Carlo updates to sample the partition
function under the above approximation, for both general and density-density type interactions.
Finally, benchmarks of our new impurity solver on both the CS and Kondo models are
shown in Section IV, and a summary is given in Section V.
\section{Method}
\subsection{Hybridization Expansion}
Let us begin with the multi-band single impurity Anderson model (SIAM), which reads
\begin{equation}
H_{\text{SIAM}}=H_{\text{loc}}+H_{\text{hyb}}+H_{\text{bath}},
\end{equation}
where
\begin{equation}
H_{\text{loc}}=\sum_{\alpha\beta}\epsilon_{\alpha\beta}f_{\alpha}^{\dagger}f_{\beta}+\sum_{\alpha\beta\delta\gamma}U_{\alpha\beta\delta\gamma}f_{\alpha}^{\dagger}f_{\beta}^{\dagger}f_{\gamma}f_{\delta},
\end{equation}
\begin{equation}
H_{\text{hyb}}=\sum_{\boldsymbol{k}\nu\alpha}V_{\boldsymbol{k}\nu}^{\alpha}c_{\boldsymbol{k}\nu}^{\dagger}f_{\alpha}+h.c.,
\end{equation}
\begin{equation}
H_{\text{bath}}=\sum_{\boldsymbol{k}\nu}\varepsilon_{\boldsymbol{k}\nu}c_{\boldsymbol{k}\nu}^{\dagger}c_{\boldsymbol{k}\nu}.
\end{equation}
The Greek letters $\alpha,\beta,\delta,\gamma$ denote $N_{0}$ localized
spin-orbital index, and $p\equiv\boldsymbol{k}\nu$ denotes the conduction
band (bath) electron with momentum $\boldsymbol{k}$ and spin-orbital
index $\nu$.
The configuration space of the hybridization expansion algorithm is given
by the set of imaginary times $\{\tau\}$ and the corresponding orbital indices $\{\alpha\}$:
\begin{equation}
\mathcal{C}=\{\{\tau_{1},\cdots,\tau_{k},\tau_{1}^{\prime},\cdots,\tau_{k}^{\prime};f_{\alpha_{1}},\cdots,f_{\alpha_{k}},f_{\alpha_{1}^{\prime}}^{\dagger},\cdots,f_{\alpha_{k}^{\prime}}^{\dagger}\}|k=0,1,\cdots\}.
\label{Ck_original}
\end{equation}
Integrating out the bath operators $c_{p}(\tau)$, the partition function $Z$ reads (detailed derivations are given in Appendix A and Ref.~[\onlinecite{rmp_ctqmc}])
\begin{equation}
\begin{aligned}Z & =Z_{\text{bath}}\sum_{k=0}^{\infty}\int_{0}^{\beta}d\tau_{1}\cdots\int_{\tau_{k-1}}^{\beta}d\tau_{k}\int_{0}^{\beta}d\tau_{1}^{\prime}\cdots\int_{\tau_{k-1}^{\prime}}^{\beta}d\tau_{k}^{\prime}\\
& \times\sum_{\alpha_{1}\cdots\alpha_{k}}\sum_{\alpha_{1}^{\prime}\cdots\alpha_{k}^{\prime}}w_{\text{loc}}(\mathcal{C}_{k})w_{\det}(\mathcal{C}_{k}),
\end{aligned}
\label{Z_original}
\end{equation}
where $w_{\text{loc}}(\mathcal{C}_{k})$ is the so-called local trace part
\begin{equation}
w_{\text{loc}}(\mathcal{C}_{k})=\text{Tr}_f[\mathcal{T}_{\tau}e^{-\beta H_{\text{loc}}}f_{\alpha_{k}}(\tau_{k})f_{\alpha_{k}^{\prime}}^{\dagger}(\tau_{k}^{\prime})\cdots f_{\alpha_{1}}(\tau_{1})f_{\alpha_{1}^{\prime}}^{\dagger}(\tau_{1}^{\prime})],
\end{equation}
and $w_{\det}(\mathcal{C}_{k}$) is the so-called determinant part
\begin{equation}
w_{\det}(\mathcal{C}_{k})=\det\Delta^{(\mathcal{C}_{k})}.
\end{equation}
$\Delta^{(\mathcal{C}_k)}$ is a $k\times k$ matrix with its elements being anti-periodic hybridization
functions $\Delta^{(\mathcal{C}_{k})}_{ij}=\Delta_{\alpha_{i}\alpha_{j}^{\prime}}(\tau_{i}-\tau_{j}^{\prime})$,
\begin{equation}
\Delta_{\alpha_{i}\alpha_{j}^{\prime}}(\tau)=\sum_{p}\frac{V_{p}^{\alpha_{i}}V_{p}^{\alpha_{j}^{\prime}*}}{1+e^{-\varepsilon_{p}\beta}}\times\begin{cases}
e^{\varepsilon_{p}(\tau-\beta)}, & 0<\tau<\beta\\
-e^{\varepsilon_{p}\tau}, & -\beta<\tau<0
\end{cases},
\end{equation}
here $\tau\equiv\tau_{i}-\tau_{j}^{\prime}$. $\Delta$ reduces to a block-diagonal
matrix if the coupling to the bath is diagonal in the spin-orbital indices,
in which case $\det\Delta^{(\mathcal{C}_{k})}=\prod_{\alpha}\det\Delta^{(\mathcal{C}_{k})}_{\alpha}$.
We make this assumption of diagonal hybridization throughout the rest
of this paper. In practice, the inverse of $\Delta^{(\mathcal{C}_{k})}_{\alpha}$, denoted
by $\mathcal{M}^{(\mathcal{C}_{k})}_\alpha=[\Delta_\alpha^{(\mathcal{C}_{k})}]^{-1}$, is more convenient
to store and use in the fast-update formulas\cite{fastupdate}.
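As an illustrative check (not part of the original derivation), the hybridization function above can be evaluated directly for a small discrete bath; the bath levels and amplitudes below are arbitrary assumed values. The sketch verifies the anti-periodicity $\Delta(\tau-\beta)=-\Delta(\tau)$ that follows from the two branches of the definition:

```python
import numpy as np

def hyb(tau, eps_p, V_p, beta):
    """Delta(tau) for a discrete bath, one diagonal flavor."""
    pref = V_p * V_p / (1.0 + np.exp(-eps_p * beta))
    if 0.0 < tau < beta:
        return np.sum(pref * np.exp(eps_p * (tau - beta)))
    if -beta < tau < 0.0:
        return -np.sum(pref * np.exp(eps_p * tau))
    raise ValueError("tau must lie in (-beta, 0) or (0, beta)")

beta = 10.0
eps_p = np.array([-1.5, -0.3, 0.4, 2.0])  # assumed bath levels
V_p = np.array([0.5, 0.3, 0.3, 0.2])      # assumed hybridization amplitudes
tau = 3.7
# anti-periodicity built into the definition: Delta(tau - beta) = -Delta(tau)
assert np.isclose(hyb(tau - beta, eps_p, V_p, beta), -hyb(tau, eps_p, V_p, beta))
```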
When the interaction among the $f$ electrons is of density-density
type, $w_{\text{loc}}(\mathcal{C}_{k})$ can be easily evaluated by the segment
algorithm\cite{ctqmc_prl2006}. When the interaction is of general type,
the local Hamiltonian $H_{\text{loc}}$ is more complicated and the atomic eigenstates are no longer
Fock states. In this case, the evaluation of the local trace part $w_{\text{loc}}(\mathcal{C}_{k})$
becomes very time consuming; it can be expressed in terms of the atomic eigenstates as
\begin{equation}
\begin{aligned}w_{\text{loc}}(\mathcal{C}_{k}) & =s_{T_{\tau}}\cdot\sum_{\Gamma_{1}\Gamma_{2}\cdots\Gamma_{2k}}e^{-(\beta-\tau_{k})E_{\Gamma_{1}}}\langle\Gamma_{1}|f_{\alpha_{k}}|\Gamma_{2k}\rangle\\
& \times e^{-(\tau_{k}-\tau_{k}^{\prime})E_{\Gamma_{2k}}}\langle\Gamma_{2k}|f_{\alpha_{k}^{\prime}}^{\dagger}|\Gamma_{2k-1}\rangle\cdots\langle\Gamma_{3}|f_{\alpha_{1}^{\prime}}^{\dagger}|\Gamma_{2}\rangle\\
& \times e^{-(\tau_{1}^{\prime}-\tau_{1})E_{\Gamma_{2}}}\langle\Gamma_{2}|f_{\alpha_{1}}|\Gamma_{1}\rangle e^{-(\tau_{1}-0)E_{\Gamma_{1}}},
\end{aligned}
\label{wloc_noaprox}
\end{equation}
where $s_{T_{\tau}}$ is the sign determined by the time-ordering of the fermionic operators.
Each term in Eq.~(\ref{wloc_noaprox}) can be diagrammatically illustrated as an evolution path\cite{iqist} of $\{\Gamma\}$,
e.g.
\begin{equation}
\beta\vdash\Gamma_{1}\xleftarrow{f_{\alpha_{k}}(\tau_{k})}\Gamma_{2k}\xleftarrow{f_{\alpha_{k}^{\prime}}^{\dagger}(\tau_{k}^{\prime})}\cdots\xleftarrow{f_{\alpha_{1}^{\prime}}^{\dagger}(\tau_{1}^{\prime})}\Gamma_{2}\xleftarrow{f_{\alpha_{1}}(\tau_{1})}\Gamma_{1}\dashv0,
\end{equation}
which means that the local configuration evolves from $\Gamma_{1}$ at $\tau=0$
to other multiplets successively by annihilation or creation of electrons and finally
returns back to $\Gamma_{1}$ at $\tau=\beta$.
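To make the evolution-path picture concrete, the following minimal sketch evaluates $w_{\text{loc}}$ for a single spinless orbital, $H_{\text{loc}}=\epsilon f^{\dagger}f$, and a first-order configuration; the parameter values are assumptions chosen for illustration. Only one path survives, $|0\rangle\rightarrow|1\rangle\rightarrow|0\rangle$:

```python
import numpy as np

eps, beta = -0.8, 5.0                    # assumed level and inverse temperature
E = np.array([0.0, eps])                 # atomic eigenenergies of H_loc = eps*n
f = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilator in the basis (|0>, |1>)

def evol(tau):
    """Imaginary-time evolution e^{-tau*H_loc} in the eigenbasis."""
    return np.diag(np.exp(-tau * E))

tau, taup = 3.0, 1.0                     # k = 1: f(tau) f^dagger(tau'), tau > tau'
w = np.trace(evol(beta - tau) @ f @ evol(tau - taup) @ f.T @ evol(taup))
# the only surviving evolution path is |0> -> |1> -> |0>, giving e^{-(tau-tau')*eps}
assert np.isclose(w, np.exp(-(tau - taup) * eps))
```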
\subsection{Truncation of the Hilbert space }
For the sake of simplicity, $\{\Gamma\}$ can be divided into two classes, high energy
states $\{\Gamma^{h}\}$ and low energy states $\{\Gamma^{l}\}$.
In the Kondo limit, the average occupation of the $f$ orbitals is very close to an integer $n$,
which naturally defines the low-energy atomic states with $n_f=n$.
The remaining atomic states have much higher charging energies, differing from the low-energy states
by several times the Hubbard $U$. In the SWT, these high-energy atomic states are treated
as virtual processes, which lead to an exchange interaction between the localized $f$ electrons and the itinerant electrons
in the $s$, $p$, $d$ bands. For instance, in cerium compounds the low-energy states have $n_f=1$, and both the $n_f=0$ and $n_f=2$ states
are treated as virtual processes. Therefore, for a general SIAM with strong Coulomb
repulsion and a deep local impurity level, the states $\{\Gamma^{h}\}=\{f^{n\pm1}\}$
are included as virtual states. After this first step, the local Hilbert space considered in our approach is truncated to
\begin{equation}
\{\Gamma\}=\{\Gamma^{l}|N_{\Gamma^{l}}=n\}\bigcup\{\Gamma^{h}|N_{\Gamma^{h}}=n\pm1\}.
\end{equation}
$\{\Gamma^{h}\}$ are still rarely visited in the MC sampling, since their energy
difference to $\{\Gamma^{l}\}$ is of the order of several eV, orders of magnitude larger than the typical Kondo temperature.
In other words, the time evolution function, which determines the appearance probability of a specific
atomic configuration in the MC process, satisfies $e^{-\tau E_{\Gamma^{h}}}\ll e^{-\tau E_{\Gamma^{l}}}$,
especially at low temperature. When $H_{\text{loc}}$ is of density-density form and the segment picture is adopted,
this implies that the overlapping segments or anti-segments are very short.
\begin{figure}[htp]
\includegraphics[clip,width=3.4in,angle=0]{ctx_approx.png}
\caption{(Color online) Approximations made to CT-HYB in the Kondo regime. (a) Energies of the atomic multiplets
$\Gamma$ as a function of the occupation number; in the Kondo regime $E_{\Gamma^{h}}\gg E_{\Gamma^{l}}$.
(b) Schematic plot of $e^{-E_{\Gamma}\tau}$ as a function of $\tau$; $e^{-E_{\Gamma^{h}}\tau}$ decays much faster than
$e^{-E_{\Gamma^{l}}\tau}$. (c) Schematic plot of the impurity site hybridizing with the heat bath.
In the simplest case of the single-orbital model, the impurity site spends most of the time in the
two low-energy singly occupied states, $|\uparrow\rangle$ and $|\downarrow\rangle$, rather than in the two high-energy states, the unoccupied $|0\rangle$
and the doubly occupied $|\uparrow\downarrow\rangle$ (adapted from Ref.~[\onlinecite{Gabriel_snapshot}]).
(d) The sharply decaying imaginary-time evolution operator of the high-energy states is approximated by a normalized $\delta$-function, leading to the virtual processes
included in the $X$ matrices. Left panel: $\tau_i>\tau_i^\prime$; right panel: $\tau_i<\tau_i^\prime$. }
\label{ctx_approx}
\end{figure}
The above truncation requires that evolution paths with non-zero contributions
to $w_{\text{loc}}(\mathcal{C}_{k})$ are those alternating $\{\Gamma^{h}\}$
and $\{\Gamma^{l}\}$ since
\begin{equation}
\begin{aligned}|\{\Gamma^{h}|N_{\Gamma^{h}}=n+1\}\rangle & \leftarrow f_{\alpha}^{\dagger}|\{\Gamma^{l}|N_{\Gamma^{l}}=n\}\rangle,\\
|\{\Gamma^{h}|N_{\Gamma^{h}}=n-1\}\rangle & \leftarrow f_{\alpha}|\{\Gamma^{l}|N_{\Gamma^{l}}=n\}\rangle,\\
|\emptyset\rangle & \leftarrow f_{\alpha}^{\dagger}|\{\Gamma^{h}|N_{\Gamma^{h}}=n+1\}\rangle,\\
|\emptyset\rangle & \leftarrow f_{\alpha}|\{\Gamma^{h}|N_{\Gamma^{h}}=n-1\}\rangle.
\end{aligned}
\end{equation}
$w_{\text{loc}}(\mathcal{C}_{k})$ can be split into two parts according
to the energy hierarchy of the head/tail state $\Gamma_{1}$.
The part which starts from and ends in $\{\Gamma_{1}^{h}\}$ is generally
much smaller, since it contains more time evolution of the high-energy
states, and thus can reasonably be neglected, especially at very low
temperature. We thus obtain
\begin{equation}
\begin{aligned}w_{\text{loc}}(\mathcal{C}_{k}) & \approx s_{T_{\tau}}\sum_{\Gamma_{1}^{l}\Gamma_{1}^{h}\cdots\Gamma_{k}^{l}\Gamma_{k}^{h}}e^{-(\beta-\tau_{k})E_{\Gamma_{1}^{l}}}\langle\Gamma_{1}^{l}|f_{\alpha_{k}}|\Gamma_{k}^{h}\rangle\\
& \times e^{-(\tau_{k}-\tau_{k}^{\prime})E_{\Gamma_{k}^{h}}}\langle\Gamma_{k}^{h}|f_{\alpha_{k}^{\prime}}^{\dagger}|\Gamma_{k}^{l}\rangle\cdots\langle\Gamma_{2}^{l}|f_{\alpha_{1}^{\prime}}^{\dagger}|\Gamma_{1}^{h}\rangle\\
& \times e^{-(\tau_{1}^{\prime}-\tau_{1})E_{\Gamma_{1}^{h}}}\langle\Gamma_{1}^{h}|f_{\alpha_{1}}|\Gamma_{1}^{l}\rangle e^{-(\tau_{1}-0)E_{\Gamma_{1}^{l}}},
\end{aligned}
\label{wloc_approx}
\end{equation}
which evolves in $\{\Gamma^{l}\}\leftarrow\{\Gamma^{h}\}\cdots\{\Gamma^{l}\}\leftarrow\{\Gamma^{h}\}\leftarrow\{\Gamma^{l}\}$.
\subsection{Energy shift}
The eigenvalues of $H_{\text{loc}}$, $\{E_{\Gamma}\}$, can be negative or positive,
so $e^{-\tau E_{\Gamma}}$ ($\tau>0$) is a monotonically increasing
or decreasing function, respectively. However, it is only the relative difference between
$\{E_{\Gamma^h}\}$ and $\{E_{\Gamma^l}\}$ that matters in the Monte Carlo simulation.
It is therefore convenient to shift $\{E_{\Gamma}\}$
such that the time evolution functions appearing in the simulation are always monotonically decreasing.
To this end, we shift the zero of energy to $E_{0}=\min\{E_{\Gamma^{l}}\}$,
\begin{equation}
E_{\Gamma}\rightarrow E_{\Gamma}^{\prime}=E_{\Gamma}-E_{0}\ge0.
\end{equation}
The transformation is equivalent to multiplying $w_{\text{loc}}(\mathcal{C}_{k})$
by a positive factor $e^{\beta E_{0}}$,
\begin{equation}
w_{\text{loc}}(\mathcal{C}_{k})\rightarrow w_{\text{loc}}^{\prime}(\mathcal{C}_{k})=w_{\text{loc}}(\mathcal{C}_{k})\times e^{\beta E_{0}},
\end{equation}
and partition function is changed to
\begin{equation}
Z\rightarrow Z^{\prime}=Z\times e^{\beta E_{0}}.
\end{equation}
Note that the expectation value of an operator is not modified by this transformation,
\begin{equation}
\begin{aligned}\langle\hat{O}\rangle_{Z^{\prime}} & =\frac{\int d\mathcal{C}w^{\prime}(\mathcal{C})O(\mathcal{C})}{Z^{\prime}}=\frac{\int d\mathcal{C}w(\mathcal{C})e^{\beta E_{0}}O(\mathcal{C})}{Z\times e^{\beta E_{0}}}\\
& =\frac{\int d\mathcal{C}w(\mathcal{C})O(\mathcal{C})}{Z}=\langle\hat{O}\rangle_{Z}.
\end{aligned}
\end{equation}
The prime on $E_{\Gamma}^{\prime}$, $Z^{\prime}$,
etc. is omitted hereafter for the sake of simplicity.
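The invariance of expectation values under this rescaling can be demonstrated with a one-line numerical check; the weights below are arbitrary toy values standing in for $w(\mathcal{C})$ and $O(\mathcal{C})$:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, E0 = 2.0, -1.3                 # inverse temperature and shift (assumed)
w = rng.random(50)                   # toy configuration weights w(C)
O = rng.random(50)                   # toy observable values O(C)
w_shifted = w * np.exp(beta * E0)    # every weight (and Z) picks up e^{beta*E0}
# <O> = sum(w O)/Z is unchanged by the common factor
assert np.isclose(np.sum(w * O) / np.sum(w),
                  np.sum(w_shifted * O) / np.sum(w_shifted))
```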
\subsection{Approximations in Kondo limit}
Two typical fragments of the evolution paths appearing in $w_{\text{loc}}(\mathcal{C}_{k})$ in Eq.~(\ref{wloc_approx})
are schematically depicted in Fig.~\ref{ctx_approx}(d), where each high-energy state is sandwiched
between one creation and one annihilation operator. Here we focus on the left panel, where
$\tau_{i}>\tau_{i}^{\prime}$. In the limit $E_{\Gamma_{i}^{h}}\rightarrow+\infty$,
the probability of finding a configuration with finite $\tau=\tau_{i}-\tau_{i}^{\prime}>0$
approaches zero due to the exponentially decreasing factor $e^{-(\tau_{i}-\tau_{i}^{\prime})E_{\Gamma_{i}^{h}}}$,
which means that excitations to the high-energy states are instantaneous, i.e. $\tau_{i}-\tau_{i}^{\prime}=0^{+}$,
in the Kondo limit. Integrating over $\tau_{i}^{\prime}$, we
obtain
\begin{equation}
\int_{\tau_{i-1}}^{\tau_{i}}d\tau_{i}^{\prime}\,e^{-(\tau_{i}-\tau_{i}^{\prime})E_{\Gamma_{i}^{h}}}\cdots\xrightarrow{E_{\Gamma_{i}^{h}}\rightarrow+\infty}\frac{1}{E_{\Gamma_{i}^{h}}}\cdots,
\end{equation}
where $\frac{1}{E_{\Gamma_{i}^{h}}}$ indicates total probability for this particular type of virtual processes.
Then in the Kondo limit, where all the high energy local atomic states can be treated as the virtual processes,
the sharply decreasing time evolution can be well approximated by a probability normalized delta function
\begin{equation}
e^{-(\tau_{i}-\tau_{i}^{\prime})E_{\Gamma_{i}^{h}}}\rightarrow\frac{1}{E_{\Gamma_{i}^{h}}}\delta(\tau_{i}-\tau_{i}^{\prime}-0^{+}).
\end{equation}
This approximation becomes increasingly accurate as the charging energy $E_{\Gamma^h}$ approaches infinity,
which makes it well suited to heavy fermion systems in the Kondo limit. The approximation has the following advantages.
(1) By neglecting the time dependence of the local propagator of the high-energy atomic states, the charge fluctuations into these
states are treated as virtual processes, which induce an effective exchange coupling between the
conduction electrons and the low-energy atomic states. For a simple model system, i.e. the single-orbital Anderson impurity model,
it automatically reproduces exactly the coupling terms obtained by the SWT. (2) The approximation can easily be applied to the more realistic
models generated in the LDA+DMFT procedure, where the coupling between the $f$ electrons and the conduction electrons has
momentum and orbital dependence, which makes the SWT very difficult.
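The quality of the $\delta$-function replacement can be checked numerically: for a smooth integrand, $\int_0^T e^{-E\tau}g(\tau)\,d\tau\rightarrow g(0)/E$ as $E$ grows. The sketch below, with an arbitrary test function $g$ standing in for the rest of the integrand, illustrates this:

```python
import numpy as np

def integral(E, g, T=1.0, n=200000):
    """Midpoint-rule integral of e^{-E*tau} * g(tau) over [0, T]."""
    tau = (np.arange(n) + 0.5) * (T / n)
    return np.sum(np.exp(-E * tau) * g(tau)) * (T / n)

g = np.cos   # smooth stand-in for the remaining factors of the integrand
for E in (50.0, 500.0):
    # e^{-E*tau} acts like (1/E) * delta(tau - 0^+): the integral tends to g(0)/E
    assert np.isclose(integral(E, g), g(0.0) / E, rtol=1e-2)
```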
Replacing all $e^{-\tau\cdot E_{\Gamma^{h}}}$ with $\frac{1}{E_{\Gamma^{h}}}\delta(\tau-0^{+})$
and integrating over all $\{\tau_{i}^{\prime}\}$, we find that a creation operator and an annihilation operator
always appear in adjacent pairs. The configuration space now reads
\begin{equation}
\begin{aligned}\mathcal{C} & =\{\{\},\{\tau_{1};s_{1}f_{\alpha_{1}}f_{\alpha_{1}}^{\dagger}\},\cdots,\\
& \times\{\tau_{1},\cdots,\tau_{k};\:s_{1}f_{\alpha_{1}}f_{\alpha_{1}^{\prime}}^{\dagger},\cdots,s_{k}f_{\alpha_{k}}f_{\alpha_{k}^{\prime}}^{\dagger}\},\cdots\},
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}s_{i}f_{\alpha_{i}}f_{\alpha_{i}^{\prime}}^{\dagger}|_{s_{i}=1} & \equiv f_{\alpha_{i}}f_{\alpha_{i}^{\prime}}^{\dagger}\rightarrow\tau_{i}=\tau_{i}^{\prime}+0^{+},\\
s_{i}f_{\alpha_{i}}f_{\alpha_{i}^{\prime}}^{\dagger}|_{s_{i}=-1} & \equiv f_{\alpha_{i}^{\prime}}^{\dagger}f_{\alpha_{i}}\rightarrow\tau_{i}=\tau_{i}^{\prime}-0^{+}.
\end{aligned}
\end{equation}
Summation over $\{\Gamma^{h}\}$ can be written in a compact form
by defining two types of $X$-matrices labelled by $s$
\begin{equation}
\begin{aligned}X_{\Gamma_{i+1}^{l}\Gamma_{i}^{l}}^{s_{i}\alpha_{i}\alpha_{i}^{\prime}}|_{s_{i}=1} & \equiv\sum_{\Gamma_{i}^{h}}\langle\Gamma_{i+1}^{l}|f_{\alpha_{i}}|\Gamma_{i}^{h}\rangle\frac{1}{E_{\Gamma_{i}^{h}}}\langle\Gamma_{i}^{h}|f_{\alpha_{i}^{\prime}}^{\dagger}|\Gamma_{i}^{l}\rangle,\end{aligned}
\label{xmatp1}
\end{equation}
which describes virtual charge excitations $f^{n}\rightarrow f^{n+1}$ and
\begin{equation}
\begin{aligned}X_{\Gamma_{i+1}^{l}\Gamma_{i}^{l}}^{s_{i}\alpha_{i}\alpha_{i}^{\prime}}|_{s_{i}=-1} & \equiv\sum_{\Gamma_{i}^{h}}\langle\Gamma_{i+1}^{l}|f_{\alpha_{i}^{\prime}}^{\dagger}|\Gamma_{i}^{h}\rangle\frac{1}{E_{\Gamma_{i}^{h}}}\langle\Gamma_{i}^{h}|f_{\alpha_{i}}|\Gamma_{i}^{l}\rangle,\end{aligned}
\label{xmatm1}
\end{equation}
which describes virtual charge excitations $f^{n}\rightarrow f^{n-1}$.
Finally, one obtains the partition function
\begin{equation}
\begin{aligned}Z & \approx Z_{\text{bath}}\sum_{k=0}^{\infty}\int_{0}^{\beta}d\tau_{1}\cdots\int_{\tau_{k-1}}^{\beta}d\tau_{k}\sum_{\substack{\alpha_{1}\cdots\alpha_{k}\\
\alpha_{1}^{\prime}\cdots\alpha_{k}^{\prime}}} s_{T_{\tau}}\\
& \times w_{\text{loc}}(\mathcal{C}_{k})\prod_{\alpha}\det(\mathcal{M}_\alpha^{(k_\alpha)})^{-1},
\end{aligned}
\label{zapprox}
\end{equation}
where the local trace is reformulated in terms of $X$-matrices as
\begin{equation}
\begin{aligned}w_{\text{loc}}(\mathcal{C}_{k}) & = \sum_{\Gamma_{1}^{l}\cdots\Gamma_{k}^{l}} e^{-(\beta-\tau_{k})E_{\Gamma_{1}^{l}}}X_{\Gamma_{1}^{l}\Gamma_{k}^{l}}^{s_{k}\alpha_{k}\alpha_{k}^{\prime}}e^{-(\tau_{k}-\tau_{k-1})E_{\Gamma_{k}^{l}}}\\
& \times \cdots X_{\Gamma_{3}^{l}\Gamma_{2}^{l}}^{s_{2}\alpha_{2}\alpha_{2}^{\prime}} e^{-(\tau_{2}-\tau_{1})E_{\Gamma_{2}^{l}}}X_{\Gamma_{2}^{l}\Gamma_{1}^{l}}^{s_{1}\alpha_{1}\alpha_{1}^{\prime}}e^{-(\tau_{1}-0)E_{\Gamma_{1}^{l}}}.
\end{aligned}
\label{wloc_ctx}
\end{equation}
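As a concrete, self-contained illustration (not taken from the text), the sketch below constructs the two $X$ matrices of Eqs.~(\ref{xmatp1}) and (\ref{xmatm1}) for the single-orbital Anderson atom with assumed parameters $\epsilon_f=-2$, $U=5$. The spin-flip matrix elements carry the factors $1/(\epsilon_f+U)$ and $1/|\epsilon_f|$, whose sum is the combination entering the Schrieffer-Wolff exchange coupling (up to convention-dependent prefactors):

```python
import numpy as np

eps_f, U = -2.0, 5.0                      # assumed Kondo-regime parameters
I2 = np.eye(2)
sm = np.array([[0.0, 1.0], [0.0, 0.0]])   # single spin-orbital annihilator
Zs = np.diag([1.0, -1.0])                 # Jordan-Wigner sign string
f = {'up': np.kron(sm, I2), 'dn': np.kron(Zs, sm)}  # basis (|0>,|dn>,|up>,|updn>)
E = np.array([0.0, eps_f, eps_f, 2*eps_f + U]) - eps_f  # shifted, min(E_low) = 0
low, emp, dbl = [1, 2], 0, 3              # n=1 doublet; n=0 and n=2 virtual states

def X(s, a, ap):
    """X matrix in the low-energy doublet for pair type s = +/-1."""
    M = np.zeros((2, 2))
    for i, g2 in enumerate(low):
        for j, g1 in enumerate(low):
            if s == +1:   # virtual excitation f^1 -> f^2
                M[i, j] = f[a][g2, dbl] * f[ap].T[dbl, g1] / E[dbl]
            else:         # virtual excitation f^1 -> f^0
                M[i, j] = f[ap].T[g2, emp] * f[a][emp, g1] / E[emp]
    return M

# spin-flip amplitudes carry 1/(eps_f + U) and 1/|eps_f|, respectively
assert np.isclose(abs(X(+1, 'up', 'dn')[0, 1]), 1.0 / (eps_f + U))
assert np.isclose(abs(X(-1, 'up', 'dn')[0, 1]), 1.0 / abs(eps_f))
```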
An example of third order configuration $\mathcal{C}_{3}$ is schematically shown in Fig.~\ref{diagram_c3}(a) and its determinant part is
\begin{widetext}
\begin{equation}
w_{\det}(\mathcal{C}_{3})=\left|\begin{array}{ccc}
\Delta_{\alpha_{1}^{\prime}\alpha_{1}}(0^{-}) & \Delta_{\alpha_{1}^{\prime}\alpha_{2}}(\tau_{1}-\tau_{2}) & \Delta_{\alpha_{1}^{\prime}\alpha_{3}}(\tau_{1}-\tau_{3})\\
\Delta_{\alpha_{2}^{\prime}\alpha_{1}}(\tau_{2}-\tau_{1}) & \Delta_{\alpha_{2}^{\prime}\alpha_{2}}(0^{+}) & \Delta_{\alpha_{2}^{\prime}\alpha_{3}}(\tau_{2}-\tau_{3})\\
\Delta_{\alpha_{3}^{\prime}\alpha_{1}}(\tau_{3}-\tau_{1}) & \Delta_{\alpha_{3}^{\prime}\alpha_{2}}(\tau_{3}-\tau_{2}) & \Delta_{\alpha_{3}^{\prime}\alpha_{3}}(0^{-})
\end{array}\right|,
\label{detM}
\end{equation}
\end{widetext}
which can be expanded into $3!=6$ terms as schematically represented in Fig.~\ref{diagram_c3}(b-g).
\begin{figure}[htp]
\includegraphics[clip,width=3.4in,angle=0]{partition_c3.png}
\caption{(Color online) Schematic plot of a third-order configuration in the approximated partition
function, Eq.~(\ref{zapprox}). (a) The evolution of the low-energy states ($\Gamma^l_i$, horizontal solid colored lines) by means of virtual processes. Two adjacent vertical solid black lines denote the creation and annihilation operator pair of an $X$ matrix. The $\pm$ in $X^{\pm}_i$ denotes the type ($s_i=\pm1$) of the $X$ matrices defined in Eqs.~(\ref{xmatp1}) and (\ref{xmatm1}). (b-g) Illustration of the hybridization determinant, Eq.~(\ref{detM}), by arrowed dashed red lines starting from an annihilation operator and ending at a creation operator. For a third-order configuration, there are $3!=6$ terms in the determinant.} \label{diagram_c3}
\end{figure}
\section{Monte Carlo sampling}
Before introducing the details of the Monte Carlo sampling, we first divide the pair operators into the following types:
\begin{itemize}
\item pure-pair: $\alpha_{i}=\alpha_{i}^{\prime}$, $s_{i}f_{\alpha_{i}}f_{\alpha_{i}}^{\dagger}$,
\item mix-pair: $\alpha_{i}\ne\alpha_{i}^{\prime}$, $s_{i}f_{\alpha_{i}}f_{\alpha_{i}^{\prime}}^{\dagger}$.
\end{itemize}
A $k$-th order configuration $\mathcal{C}_{k}$ consists of time-ordered pure-pairs and mix-pairs
\begin{equation}
\beta\vdash s_{k}f_{\alpha_{k}}f_{\alpha_{k}^{\prime}}^{\dagger}(\tau_{k})-\cdots-s_{1}f_{\alpha_{1}}f_{\alpha_{1}^{\prime}}^{\dagger}(\tau_{1})\dashv0.
\end{equation}
$\mathcal{C}_{k}$ contains an equal number of creation and annihilation
operators for each flavor by construction. With fixed $\{\tau_{i}\}$
and fixed number of single-particle operators of each flavor, $\mathcal{C}_{k}$
is mathematically an element in the set of direct products of operators
permutations $\mathscr{P}$,
\begin{equation}
\begin{aligned}\{\mathcal{C}_{ki}\} & =\{\mathscr{P}\{f_{\alpha_{k}},\cdots,f_{\alpha_{1}}\}\bigotimes\mathscr{P}\{f_{\alpha_{k}^{\prime}}^{\dagger},\cdots,f_{\alpha_{1}^{\prime}}^{\dagger}\}\\
& \bigotimes\prod_{i=1}^{k}\mathscr{P}\{f_{\alpha_{i}},f_{\alpha_{i}^{\prime}}^{\dagger}\}\}.
\end{aligned}
\end{equation}
Based on the fact that any permutation can be expressed as a product
of transpositions, we design the following updates which keep the diagram order fixed:
\begin{itemize}
\item left-exchange: exchange the annihilation operators of two adjacent pairs,
\item right-exchange: exchange the creation operators of two adjacent pairs,
\item in-pair swap: $s_{i}\rightarrow-s_{i}$.
\end{itemize}
Ergodicity is satisfied by the above updates together with insertion
and removal of pure-pairs at random times, which change the expansion order
by 1, since any $\mathcal{C}_{ki}$ can be generated from a list
of pure-pairs by successive transpositions. Updates which shift pair operators
are not necessary but are useful to increase the sampling efficiency.
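The ergodicity argument rests on the fact that any permutation decomposes into adjacent transpositions; a bubble-sort style sketch makes this explicit for a small configuration (the pair count of 4 is an arbitrary choice):

```python
import itertools

def bubble_to_identity(perm):
    """Sort perm using only adjacent transpositions; return the swap count."""
    p, swaps = list(perm), 0
    for i in range(len(p)):
        for j in range(len(p) - 1 - i):
            if p[j] > p[j + 1]:
                p[j], p[j + 1] = p[j + 1], p[j]
                swaps += 1
    assert p == sorted(perm)  # identity reached by adjacent exchanges alone
    return swaps

# every operator ordering of a 4-pair configuration is reachable by
# successive adjacent (left/right-)exchanges
assert all(bubble_to_identity(p) >= 0 for p in itertools.permutations(range(4)))
assert bubble_to_identity((3, 2, 1, 0)) == 6  # fully reversed order: 6 swaps
```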
The Metropolis-Hastings algorithm is used to sample the configuration space
$\mathcal{C}$ according to the configuration weight
$w(\mathcal{C}_{k})=w_{\text{loc}}(\mathcal{C}_{k})\prod_{\alpha}\det(\mathcal{M}_{\alpha}^{(\mathcal{C}_{k})})^{-1}d\tau^{k}$.
The random walk in $\mathcal{C}$ must satisfy the detailed balance condition and ergodicity.
In the following, we first discuss the update scheme for general interactions and then for
density-density interactions. The main difference between the two is the way the local trace is calculated.
\subsection{General interactions}
As shown in Eq.~(\ref{wloc_ctx}), the calculation of the local trace requires matrix multiplications and is time consuming.
We can take advantage of the symmetries of $H_{\text{loc}}$ and divide its full Hilbert space
into much smaller subspaces labeled by good quantum numbers (GQNs)\cite{Haule_ctqmc_2007},
such as the total particle number $N_{\text{tot}}$, the total spin $z$-component
$S_{\text{tot}}^{z}$, the total angular momentum $J_{z}$, etc. The single-particle creation and annihilation
operators are then block diagonal, which speeds up the calculation. Further
speed-up can be achieved by the divide-and-conquer trick\cite{iqist}, based on
the fact that the diagrammatic configuration is modified only locally in
each update.
\subsubsection{Pure-pair insertion/removal}
To insert a pure-pair into configuration $\mathcal{C}_{k}$, we pick
a random flavor $\alpha$, a random pair type $s$, and a random time
$\tau$ in $(0,\beta)$. In the corresponding removal process, we
simply delete one of the $k_{\alpha}+1$ existing pure-pairs.
The ratio of the transition probabilities for the insertion case is
\begin{equation}
\frac{p(k_{\alpha}\rightarrow k_{\alpha}+1)}{p(k_{\alpha}+1\rightarrow k_{\alpha})}=\frac{w_{\text{loc}}(\mathcal{C}_{k+1})\det(\mathcal{M}_{\alpha}^{(\mathcal{C}_{k+1})})^{-1}}{w_{\text{loc}}(\mathcal{C}_{k})\det(\mathcal{M}_{\alpha}^{(\mathcal{C}_{k})})^{-1}}\times\frac{2\beta}{k_{\alpha}+1},
\end{equation}
where $w_{\text{loc}}(\mathcal{C}_{k+1})$ is the local trace and $(\mathcal{M}_{\alpha}^{(\mathcal{C}_{k+1})})^{-1}$
is the hybridization matrix of the new configuration at order $k+1$.
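These acceptance ratios can be checked on a toy model (assuming a single flavor, the trace and determinant ratios set to one, and the factor of 2 from the pair-type choice dropped), in which case the target distribution of the expansion order is Poissonian with mean $\beta\Gamma$:

```python
import numpy as np

rng = np.random.default_rng(42)
beta_gamma = 2.0          # beta times an effective pair weight (assumed constant)
k, orders = 0, []
for _ in range(200000):
    if rng.random() < 0.5:                                # propose insertion
        if rng.random() < min(1.0, beta_gamma / (k + 1)):
            k += 1
    elif k > 0:                                           # propose removal
        if rng.random() < min(1.0, k / beta_gamma):
            k -= 1
    orders.append(k)
# stationary distribution of k is Poisson with mean beta_gamma
assert abs(np.mean(orders[50000:]) - beta_gamma) < 0.1
```

The inverse factors $\beta\Gamma/(k+1)$ and $k/(\beta\Gamma)$ mirror the insertion/removal pair of the text and satisfy detailed balance with respect to the Poisson weights.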
\subsubsection{Left/right-exchange}
In the left-exchange update, we randomly pick a pair operator $s_{i}f_{\alpha_{i}}f_{\alpha_{i}^{\prime}}^{\dagger}$
together with its left neighbor $s_{i+1}f_{\alpha_{i+1}}f_{\alpha_{i+1}^{\prime}}^{\dagger}$
and exchange their annihilation operators if $\alpha_{i}\ne\alpha_{i+1}$
\begin{equation}
\begin{aligned}\cdots-s_{i+1}f_{\alpha_{i+1}}f_{\alpha_{i+1}^{\prime}}^{\dagger}(\tau_{i+1}) & -s_{i}f_{\alpha_{i}}f_{\alpha_{i}^{\prime}}^{\dagger}(\tau_{i})-\cdots\\
& \Downarrow\\
\cdots-s_{i+1}f_{\alpha_{i}}f_{\alpha_{i+1}^{\prime}}^{\dagger}(\tau_{i+1}) & -s_{i}f_{\alpha_{i+1}}f_{\alpha_{i}^{\prime}}^{\dagger}(\tau_{i})-\cdots.
\end{aligned}
\end{equation}
If $i=k$, the right-most pair is selected as the left neighbor of
the $k$-th pair ($k\ge2$). The update is equivalent to two successive shifts:
$f_{\alpha_{i}}$ from $\tau_{i}$ to $\tau_{i+1}$ and $f_{\alpha_{i+1}}$
from $\tau_{i+1}$ to $\tau_{i}$. Using the Metropolis-Hastings algorithm,
we obtain
\begin{equation}
\frac{p(k)^{\prime}}{p(k)}=\frac{w_{\text{loc}}(\mathcal{C}_{k}^{\prime})}{w_{\text{loc}}(\mathcal{C}_{k})}\times\frac{\det(\mathcal{M}_{\alpha_{i}}^{(\mathcal{C}^{\prime}_{k})})^{-1}}{\det(\mathcal{M}_{\alpha_{i}}^{(\mathcal{C}_{k})})^{-1}}\times\frac{\det(\mathcal{M}_{\alpha_{i+1}}^{(\mathcal{C}^{\prime}_{k})})^{-1}}{\det(\mathcal{M}_{\alpha_{i+1}}^{(\mathcal{C}_k)})^{-1}},\label{lexchgDBC}
\end{equation}
where $(\mathcal{M}_{\alpha}^{(\mathcal{C}^{\prime}_{k})})^{-1}$ ($\alpha=\alpha_{i},\alpha_{i+1}$)
is the new hybridization matrix of flavor $\alpha$ with shifted $f_{\alpha}$
compared with original $(\mathcal{M}_{\alpha}^{(\mathcal{C}_{k})})^{-1}$.
The right-exchange update works similarly to the left-exchange, except that
it operates on creation operators; the detailed balance condition
takes the form of Eq.~(\ref{lexchgDBC}), where $(\mathcal{M}_{\alpha}^{(\mathcal{C}^{\prime}_{k})})^{-1}$
is the hybridization matrix with $f_{\alpha}^{\dagger}$ shifted.
\subsubsection{Swap }
The $i$-th pair is randomly selected, and we flip its type from $s_{i}$
to $-s_{i}$. The swap update is very important for ergodicity, since it
switches between the virtual charge fluctuations $f^{n}\rightarrow f^{n-1}$
and $f^{n}\rightarrow f^{n+1}$. Pure-pairs are not selected, since the swap
of a pure-pair can be achieved by successive removal of $s_{i}f_{\alpha_{i}}f_{\alpha_{i}^{\prime}}^{\dagger}$
and insertion of $-s_{i}f_{\alpha_{i}}f_{\alpha_{i}^{\prime}}^{\dagger}$
at $\tau_{i}$. The ratio of the transition probabilities
is
\begin{equation}
\frac{p(k)^{\prime}}{p(k)}=\frac{w_{\text{loc}}(\mathcal{C}_{k}^{\prime})}{w_{\text{loc}}(\mathcal{C}_{k})}.
\label{swapmc}
\end{equation}
$\mathcal{M}^{-1}$ does not enter Eq.~(\ref{swapmc}) because it is block diagonal in the
spin-orbital indices: for a mix-pair, the two operators belong to different blocks, so flipping their infinitesimal time order leaves the determinant unchanged.
\subsection{Density-Density interactions}
If $H_{\text{loc}}$ commutes with the occupation number operator of each
orbital, the eigenstates of $H_{\text{loc}}$ are Fock states.
For each orbital, a creation operator must then be followed by an annihilation operator
in every valid configuration (we refer to this as the
NN-rule). The weight of an allowed configuration $\mathcal{C}_{k}$ can then be expressed as
\begin{equation}
\begin{aligned}w_{\text{loc}}(\mathcal{C}_{k}) & =s_{T_{\tau}}\cdot e^{-(\beta-\tau_{k})E_{\Gamma_{1}^{l}}}X_{\Gamma_{1}^{l}\Gamma_{k}^{l}}^{s_{k}\alpha_{k}\alpha_{k}^{\prime}}e^{-(\tau_{k}-\tau_{k-1})E_{\Gamma_{k}^{l}}}\cdots\\
& \times X_{\Gamma_{3}^{l}\Gamma_{2}^{l}}^{s_{2}\alpha_{2}\alpha_{2}^{\prime}}e^{-(\tau_{2}-\tau_{1})E_{\Gamma_{2}^{l}}}X_{\Gamma_{2}^{l}\Gamma_{1}^{l}}^{s_{1}\alpha_{1}\alpha_{1}^{\prime}}e^{-(\tau_{1}-0)E_{\Gamma_{1}^{l}}}.
\end{aligned}
\end{equation}
To propose valid configurations, updates should be carefully designed
to satisfy the NN-Rule.
\subsubsection{Pure-pair insertion/removal}
The main difference from the general interaction case is that the pair type $s$
cannot be selected randomly. For a given configuration, if the orbital $\alpha$ is occupied (unoccupied) in the Fock state
spanning $\tau$, only the insertion of $f_{\alpha}^{\dagger}f_{\alpha}$ ($f_{\alpha}f_{\alpha}^{\dagger}$)
at $\tau$ is allowed. For
pure-pair removal, we correspondingly delete $f_{\alpha}^{\dagger}f_{\alpha}$ ($f_{\alpha}f_{\alpha}^{\dagger}$) from $\tau$. The detailed balance condition reads
\begin{equation}
\frac{p(k_{\alpha}\rightarrow k_{\alpha}+1)}{p(k_{\alpha}+1\rightarrow k_{\alpha})}=\frac{w_{\text{loc}}(\mathcal{C}_{k+1})\det(\mathcal{M}_{\alpha}^{(\mathcal{C}_{k+1})})^{-1}}{w_{\text{loc}}(\mathcal{C}_{k})\det(\mathcal{M}_{\alpha}^{(\mathcal{C}_{k})})^{-1}}\times\frac{\beta}{k_{\alpha}+1}.
\end{equation}
\subsubsection{Left/right-exchange and swap}
Exchange proposals which violate the NN-rule are directly rejected. Swap updates
never violate the rule, since only mix-pairs are swapped. The detailed balance conditions
are the same as those for general interactions, except for the calculation
of the local trace. While a left/right-exchange is equivalent to a
shift of segments, a swap is equivalent to switching between an infinitesimally small
segment and anti-segment.
\section{Measurements}
The most important observable for QMC impurity solvers is the finite temperature imaginary-time Green's function defined by
$G_{\alpha\alpha'}^{f}(\tau)=-\langle\mathcal{T}_{\tau}f_\alpha(\tau)f_{\alpha'}^{\dagger}(0)\rangle$.
The single-particle Green's function, in general, includes high-energy processes that involve states with different occupation numbers. Such processes, however, are missing in our approximated partition function, Eq.~(\ref{zapprox}), in the Kondo limit, where charge fluctuations are projected out completely. Nevertheless, we can still measure the low-energy contributions to $G_{\alpha\alpha'}^{f}(\tau)$, which correspond to the quasi-particle part of the single-particle excitations.
Here, we give a brief description of how to measure $G_{\alpha\alpha'}^{f}(\tau)$.
A step-by-step derivation of the measurement formula is given in Appendix~B.
In the CTQMC, the diagrams contributing to $G_{\alpha\alpha'}^{f}(\tau)$ can be generated from diagrams in $Z$: One chooses an arbitrary pair of creation and annihilation operators in a given configuration $\mathcal{C}_{k}$, and removes the corresponding contributions to the determinant $\Delta_\alpha^{(\mathcal{C}_{k})}$.
Within the approximation applied to the partition function, only specific pairs of creation and annihilation operators contribute to $G_{\alpha\alpha'}^{f}(\tau)$. A pair of operators belonging to different $X$ matrices contributes, while a pair on the same $X$ matrix does not. These diagrams are illustrated in Fig.~\ref{Greenfun_c3}(b) and Fig.~\ref{Greenfun_c3}(c), respectively.
The measurement formula is thus given by
\begin{equation}
G^f_{\alpha\alpha^{\prime}}(\tau)=-\langle\frac{1}{\beta} \sideset{}{'}\sum_{m,n=1}^{k} \delta_{\alpha_{m},\alpha}\delta_{\alpha_{n}^{\prime},\alpha^{\prime}}\delta^{-}[\tau-(\tau_{m}-\tau_{n})]\mathcal{M}_{nm}^{(\mathcal{C}_{k})}\rangle_{Z},
\label{G_KR}
\end{equation}
where $\prime$ stands for the restriction of the summations to $m \ne n$ ($m$ and $n$ belong to different $X$ matrices). The function $\delta^-$ is defined by $\delta^{-}(\tau-\tau^\prime)=\delta(\tau-\tau^\prime)$ for $\tau>0$, and $\delta^{-}(\tau-\tau^\prime)=-\delta(\tau-\tau^\prime-\beta)$ for $\tau<0$.
After the Fourier transform, we obtain
\begin{equation}
G_{\alpha\alpha^\prime}^{f}(i\omega_{l})=-\langle\frac{1}{\beta} \sideset{}{'}\sum_{m,n=1}^{k} \delta_{\alpha_{m},\alpha}\delta_{\alpha_{n}^{\prime},\alpha^{\prime}}\mathcal{M}_{nm}^{(\mathcal{C}_{k})}e^{i\omega_{l}(\tau_{m}-\tau_{n})}\rangle.
\end{equation}
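In a simulation, the frequency-domain estimator above amounts to summing $\mathcal{M}_{nm}$ over the allowed operator pairs of each configuration. A schematic accumulator, with illustrative data structures (the arrays of operator times, flavors, and the same-$X$ exclusion table are assumptions, not the authors' code), might look like:

```python
import numpy as np

def accumulate_G_iw(M, tau_f, tau_fdag, flav_f, flav_fdag,
                    same_X, alpha, alpha_p, omega_l, beta):
    """Estimator of G^f_{alpha alpha'}(i omega_l) for one configuration.

    M         : k x k matrix M^(C_k) (inverse of the hybridization matrix)
    tau_f     : times tau_m of the annihilation operators f
    tau_fdag  : times tau'_n of the creation operators f^dagger
    same_X    : same_X[m][n] is True when operators m and n sit on the
                same X matrix; such pairs are excluded from the sum
    omega_l   : array of fermionic Matsubara frequencies
    """
    k = M.shape[0]
    G = np.zeros(len(omega_l), dtype=complex)
    for m in range(k):
        if flav_f[m] != alpha:
            continue
        for n in range(k):
            if flav_fdag[n] != alpha_p or same_X[m][n]:
                continue                     # forbidden pair (same X matrix)
            G += (-1.0 / beta) * M[n, m] * np.exp(
                1j * omega_l * (tau_f[m] - tau_fdag[n]))
    return G
```

The Monte Carlo average of this quantity over the sampled configurations yields $G_{\alpha\alpha'}^{f}(i\omega_l)$.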
\begin{figure}[htp]
\includegraphics[clip,width=3.4in,angle=0]{Greenfun_c3.png}
\caption{(Color online) Schematic plot of a third order configuration for measurement of Green's function in Kondo regime.
(a) Local trace part of Green's function $G_{\alpha\alpha}^{f}(\tau)$, which is the same as that of $Z$ shown in Fig.~\ref{diagram_c3}(a).
(b) An example of allowed (marked by \checkmark) measurement for $G_{\alpha\alpha}^{f}(\tau)$. Two operators from two different $X$
matrices [here $f_{\alpha_3}(\tau_3)$ of pair $X_3$ and $f_{\alpha^\prime_2}^\dagger(\tau_2)$ of pair $X_2$] are identified
as the operators in $G_{\alpha\alpha}^{f}(\tau)$. In the upper and lower panels, we plot the existing hybridization lines, which do not connect the selected operators.
(c) An example of a forbidden (marked by \xmark) measurement in $G_{\alpha\alpha}^{f}(\tau)$. Two operators belonging to the same $X$ matrix are not allowed to be
chosen in the measurement.}
\label{Greenfun_c3}
\end{figure}
We can compare the present measurement formula, Eq.~(\ref{G_KR}), with that for the $t$-matrix in the CS model, Eq.~(9) in Ref.~[\onlinecite{ctJ_paper}].
They are related by $t(\tau)=V^2G(\tau)$ if there is no $k$ dependence in $V_k$.
The asymptotic behavior of $G(i\omega_n)$ is $\lim_{n\rightarrow\infty} i\omega_n G(i\omega_n) = z$, with $z<1$ being the quasi-particle weight in the Kondo limit.
The measurement formulas for the two-particle correlation functions bear exactly the same form as Eqs.~(11)--(13) of Ref.~[\onlinecite{ctJ_paper}] and are not repeated here.
\section{Benchmarks}
While the Coqblin-Schrieffer (CS) model is a low-energy effective Hamiltonian
of the SIAM in the large-$U$ limit, in which only the virtual excitations $f^{1}\rightarrow f^{0}$
survive, the Kondo model incorporates both $f^{1}\rightarrow f^{0}$
and $f^{1}\rightarrow f^{2}$ by assuming a deep impurity level $\epsilon_{f}$
and a large $U$. Both models can be derived
by the SWT from the SIAM with density-density interaction shown below
\begin{equation}
\begin{aligned}H & =\sum_{k\alpha}\epsilon_{k}c_{k\alpha}^{\dagger}c_{k\alpha}+\sum_{\alpha=-j}^{j}\varepsilon_{\alpha}n_{\alpha}+U\sum_{\alpha<\alpha^{\prime}}n_{\alpha}n_{\alpha^{\prime}}\\
& +\sum_{\alpha k}[V_{k}^{\alpha}f_{\alpha}^{\dagger}c_{k\alpha}+V_{k}^{\alpha*}c_{k\alpha}^{\dagger}f_{\alpha}],
\end{aligned}
\end{equation}
with $N=2j+1$. A constant density of states $\rho(\epsilon)=\frac{1}{2D}\theta(D-|\epsilon|)$
with $D=1$ is chosen for the conduction electrons. Since both the CS and Kondo models are derived from the SIAM
under certain conditions, a comparison between Monte Carlo simulations of these models using the CT-J method and simulations performed directly on the SIAM using the new method proposed in this paper serves as a benchmark.
For simplicity, our CT-QMC formalism for the partition function (\ref{zapprox})
is referred to as CT-X, where ``X'' refers to the $X$-matrices.
\subsection{CS model}
The CS model reads
\begin{align}
H_{0}&=\sum_{k\alpha}\epsilon_{k}c_{k\alpha}^{\dagger}c_{k\alpha}+\sum_{\alpha}(\varepsilon_{\alpha}+J_{\alpha\alpha})|\alpha\rangle\langle\alpha|,
\\
H_{1}&=\sum_{\alpha\alpha^{\prime}}J_{\alpha\alpha^{\prime}}|\alpha\rangle\langle\alpha^{\prime}|(-c_{\alpha}c_{\alpha^{\prime}}^{\dagger}),
\end{align}
where $c_{\alpha}=N_{0}^{-1/2}\sum_{k}c_{k\alpha}$, with $N_{0}$
being the number of sites, and $\alpha$ denotes the spin/orbital index.
The partition function of the CS model
in CT-J can be obtained by applying the following restrictions to
Eq.~(\ref{xmatm1}) and Eqs.~(\ref{zapprox})--(\ref{wloc_ctx}):
\begin{itemize}
\item $V_{k}^{\alpha}=V_{k}^{\alpha*}=V^{\alpha}$, since the exchange parameters
in the CS model can be chosen momentum independent;
\item $J_{\alpha\alpha^{\prime}}\equiv\frac{V_{\alpha}V_{\alpha^{\prime}}}{-\min\{\varepsilon_{\alpha}\}}$,
where $E_{\Gamma^{h}}=-\min\{\varepsilon_{\alpha}\}$ is the shifted
energy of $f^{0}$ state;
\item $X^{-1,\alpha\alpha^{\prime}}$ has only one non-zero element $X_{|\alpha\rangle|\alpha^{\prime}\rangle}^{-1,\alpha\alpha^{\prime}}=\frac{1}{-\min\{\varepsilon_{\alpha}\}}|\alpha\rangle\langle\alpha^{\prime}|$.
\end{itemize}
To simulate the CS model, we impose an additional restriction
on $\mathcal{C}_{k}$, namely $\{s_{i}=-1|i=1,\cdots,k\}$, which means
that only pair operators describing $f^{1}\rightarrow f^{0}$ enter
$\mathcal{C}_{k}$. Furthermore, the intra-pair swap update is forbidden,
since it gives rise to the virtual excitation $f^{1}\rightarrow f^{2}$,
which is absent in the CS model.
\subsubsection{t-matrix}
To test CT-X, we calculate the $t$-matrix for $N=8$ and compare it with the results obtained by CT-J. We choose
the exchange parameter $J=\frac{V^{2}}{-\epsilon_{f}}$ to be 0.075 and the temperature $T=0.001D$.
The results obtained by CT-X and CT-J are plotted together in FIG.~\ref{fig1} and show excellent agreement.
\begin{figure}[htp]
\includegraphics[clip,width=3.4in,angle=0]{fig1.pdf}
\caption{The impurity t-matrix $t_\alpha(\tau)$ in the imaginary-time domain for $N=8$, $J=0.075$ and $T=0.001$. Data for CT-J are
obtained from Fig.~7 in Ref.~[\onlinecite{ctJ_paper}].} \label{fig1}
\end{figure}
\subsubsection{Static susceptibility}
The static susceptibility is evaluated by integrating the dynamical susceptibility
$\chi(\tau)$, as introduced in detail in Section 2.3 of Ref.~[\onlinecite{ctJ_paper}].
The results obtained by CT-X and CT-J are shown in FIG.~\ref{fig2}; again, they match each other very well.
\begin{figure}[htp]
\includegraphics[clip,width=3.4in,angle=0]{fig2.pdf}
\caption{Temperature dependence of the static susceptibility $\chi$ for N=8 and J=0.075. Data for CT-J are
obtained from Fig.~6 in Ref.~[\onlinecite{ctJ_paper}].} \label{fig2}
\end{figure}
\subsection{Kondo model}
The Kondo model is given by
\begin{equation}
H=\sum_{k\sigma}\epsilon_{k}c_{k\sigma}^{\dagger}c_{k\sigma}+J\boldsymbol{S}\cdot\boldsymbol{\sigma}_{c},
\end{equation}
where $\boldsymbol{S}=\sum_{\alpha\beta}f_{\alpha}^{\dagger}\boldsymbol{\sigma}_{\alpha\beta}f_{\beta}$
and $\boldsymbol{\sigma}_{c}=\sum_{\sigma\sigma^{\prime}}c_{\sigma}^{\dagger}\boldsymbol{\sigma}_{\sigma\sigma^{\prime}}c_{\sigma^{\prime}}$ denote the spin operators of the local moment and the itinerant electrons,
respectively.
The spin-spin exchange terms can be obtained by considering both of the two virtual processes
$f^{1}\rightarrow f^{0}$ and $f^{1}\rightarrow f^{2}$.
With particle-hole symmetry, we set $U=-2\epsilon_{f}$, so that $J=\frac{V^{2}}{-\epsilon_{f}}=\frac{V^{2}}{U+\epsilon_{f}}$.
To simulate the Kondo model with our CT-X method, all types of pair operators are allowed
to appear in the MC configurations.
As a benchmark, we calculate the $t$-matrix for $J=0.3$ and $T=0.001$ by CT-X and compare it with the results obtained by CT-J in FIG.~\ref{fig3}. Again, the results from CT-X and CT-J agree very well, indicating that CT-X can
treat both types of virtual charge fluctuations well. Owing to the particle-hole symmetry of the Kondo model, the real part of $t(i\omega_n)$ is zero and hence not plotted in FIG.~\ref{fig3}.
\begin{figure}[htp]
\includegraphics[clip,width=3.4in,angle=0]{fig3.pdf}
\caption{Imaginary part of the impurity t-matrix $t(i\omega_n)$ of the Kondo model in the imaginary-frequency domain for N=2 and J=0.300.
Note: data for CT-J in this figure were collected by means of WebPlotDigitizer~\cite{webpicdigit} from Fig.~11 in Ref.~[\onlinecite{ctJ_paper}]. } \label{fig3}
\end{figure}
\subsection{Kondo lattice model (KLM)}
The KLM reads
\begin{equation}
H=\sum_{k\sigma}\epsilon_{k}c_{k\sigma}^{\dagger}c_{k\sigma}+J\sum_{i}\boldsymbol{S}_{i}\cdot\boldsymbol{\sigma}_{i}.
\end{equation}
To further test the new impurity solver, we perform DMFT calculations
on the KLM on the infinite-dimensional hypercubic lattice,
with the density of states $\rho_{c}(\omega)=D^{-1}\sqrt{2/\pi}\exp(-2\omega^{2}/D^{2})$.
We set $D=1$ and fix the conduction-electron density per site as
$n_{c}=0.9$, as in Ref.~[\onlinecite{otsuki_prl2009}]. The DMFT is iterated
on the conduction-electron self-energy $\Sigma_{c}(i\omega_{n})$,
which is related to the cavity Green function $\mathcal{G}_{c}^{0}$
and the measured impurity $t$-matrix by the Dyson equation $\Sigma_{c}(i\omega_{n})^{-1}=t(i\omega_{n})^{-1}+\mathcal{G}_{c}^{0}(i\omega_{n})$.
As a benchmark, we calculate the momentum distribution of the conduction electrons:
\begin{equation}
n_{c}(\boldsymbol{\kappa})=\langle c_{k\sigma}^{\dagger}c_{k\sigma}\rangle=T\sum_{n}G_{c}(\boldsymbol{\kappa},i\omega_{n})e^{i\omega_{n}0^{+}},
\end{equation}
where $\boldsymbol{\kappa}\equiv\epsilon_{k}$ and $G_{c}$ is the
conduction-electron Green function in the KLM, $G_{c}(\boldsymbol{\kappa},i\omega_{n})=[i\omega_{n}-\boldsymbol{\kappa}+\mu-\Sigma_{c}(i\omega_{n})]^{-1}$.
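Numerically, the Matsubara sum in $n_{c}(\boldsymbol{\kappa})$ converges slowly because $G_{c}\sim 1/(i\omega_{n})$ at large frequencies; a standard remedy is to subtract the noninteracting propagator and add back its known sum, the Fermi function. A sketch follows, where the callable `sigma`, mapping $i\omega_n \mapsto \Sigma_c(i\omega_n)$, is an illustrative interface assumption:

```python
import numpy as np

def n_c(kappa, mu, sigma, T, n_max=2000):
    """Momentum distribution n_c(kappa) = T * sum_n G_c(kappa, i w_n) e^{i w_n 0+}.

    The slowly decaying 1/(i w_n) tail is handled by subtracting the
    noninteracting propagator and adding back its analytic sum, the
    Fermi function.
    """
    n = np.arange(-n_max, n_max)
    iw = 1j * np.pi * T * (2 * n + 1)            # fermionic Matsubara grid
    G = 1.0 / (iw - kappa + mu - sigma(iw))      # interacting propagator
    G0 = 1.0 / (iw - kappa + mu)                 # free propagator (same tail)
    fermi = 1.0 / (np.exp((kappa - mu) / T) + 1.0)
    return (T * np.sum(G - G0)).real + fermi
```

For $\Sigma_c=0$ this reduces exactly to the Fermi function, which provides a useful consistency check.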
FIG.~\ref{fig4} shows the temperature dependence of $n_{c}(\boldsymbol{\kappa})$ at T=0.0050, 0.0025 and 0.0010, which well
reproduces the evolution of the Fermi surface shown in FIG.~4 of Ref.~[\onlinecite{otsuki_prl2009}]. For comparison, we plot in FIG.~\ref{fig5} the results computed by CT-X together with those
computed by CT-J at T=0.001. Once again, this demonstrates that CT-X can treat both types of virtual charge fluctuations well.
\begin{figure}[htp]
\includegraphics[clip,width=3.4in,angle=0]{fig4.pdf}
\caption{(Color online). Temperature dependence of momentum distribution $n_{c}(\boldsymbol{\kappa})$ computed by CT-X for $J$=0.3 and $n_c$=0.9. The vicinity of the large Fermi surface is enlarged in the inset.}\label{fig4}
\end{figure}
\begin{figure}[htp]
\includegraphics[clip,width=3.4in,angle=0]{fig5.pdf}
\caption{(Color online). Momentum distribution $n_{c}(\boldsymbol{\kappa})$ at T=0.001 computed by CT-X, compared with CT-J. Data for CT-J are
obtained from Fig.~4 in Ref.~[\onlinecite{otsuki_prl2009}]. }\label{fig5}
\end{figure}
\section{Discussion and Conclusion}
We have proposed a new CTQMC method called CT-X, which can simulate the SIAM in the Kondo limit
by projecting out local charge fluctuations, not at the level of the effective Hamiltonian but in each diagram sampled by the MC procedure.
This is done by approximating the sharply decaying imaginary-time
evolution operators of the high-energy states by a probability-normalized $\delta$ function.
This approximation is equivalent to applying the SWT to each particular diagram.
Benchmarks of CT-X on the CS model, the Kondo model and the Kondo lattice model against the previously proposed
CT-J method show that the CT-X method works very well for these model systems.
Moreover, since in the CT-X method the SWT-type approximation is applied to each particular
Feynman diagram in the Monte Carlo procedure, it can be easily applied to more general quantum impurity models
that describe realistic materials. Realistic models contain generalized forms of the interaction,
occupation number and crystal field, which are difficult to handle with methods based on the effective-model approach,
such as the CT-J method. Therefore, the CT-X method developed in the present paper can become a very good
impurity solver for DMFT studies of strongly correlated systems such as heavy-fermion materials.
\newpage
\section{Introduction}
Over the past few years, the statistical analysis of self-similar time series has become established as an important tool for investigating several natural phenomena. In general, a large part of these studies has been devoted to characterizing the complex statistical fluctuations shown by these series. Such fluctuations are associated with long-range correlations among the dynamic variables present in these series, which obey the behavior usually described by fractal power-law decay \cite{Stanley99}.
One of the difficulties encountered in these investigations is related to the fact that the series may contain heterogeneous properties that impose certain statistical trends on them. In other words, these series are not stationary \cite{Chianca2005}. Therefore, it is necessary to employ a technique capable of accounting for this, since these trends may influence the correlations that exist in the series.
Two techniques have proved successful in eliminating these trends in time series: the wavelet transform modulus maxima (WTMM) \cite{Arneodo1995,Manimaran2005} and the detrended fluctuation analysis (DFA) \cite{Peng1994}. Both techniques are based on local polynomial regression in order to eliminate local trends present in different segments of the series. The DFA technique has been particularly efficient for a large range of areas such as: DNA sequences \cite{Peng1992}; heartbeat analysis \cite{Ivanov1999}; economy \cite{Rogerio2003}; seismology \cite{Telesca2001,Telesca2004}; meteorology \cite{Govindan2003}; astrophysics \cite{Zebende2003}, among others.
Basically, the option of applying the DFA technique to these studies stems from its easy implementation. Moreover, it is a tool that allows the role of trends in stationary time series to be analyzed, as well as efficiently estimating the long-range correlations through a single parameter: the scale exponent $\alpha$. It is important to emphasize that the type of correlation present in the stationary series depends on the value found for the exponent $\alpha$. In this way, for $\alpha=0.5$ the signal is uncorrelated (white noise or Gaussian), while for $\alpha < 0.5$ there is anti-correlation (anti-persistence) and for $\alpha > 0.5$ there is correlation (persistence) \cite{Vicsek}.
Several attempts to apply the DFA technique to non-stationary time series (series affected by local trends) have not provided satisfactory results. Fundamentally, this has occurred because these series are not entirely characterized by a unique scale exponent $\alpha$, since different segments possess fluctuations characterized by distinct values of $\alpha$. In this case, the correct formalism for obtaining the distribution of scale exponents is multifractal analysis \cite{Vicsek}.
Recently, the number of works focusing on the characteristics of the multifractal aspects of non-stationary time series has grown, particularly those based on experimental data \cite{Pawel2006}. Outstanding among these are the applications of the generalized DFA technique, known as {\it Multifractal Detrended Fluctuation Analysis} (MF-DFA) as proposed by Kanterlhardt and collaborators \cite{Kantelhardt2002}, for a wide range of applications such as: DNA sequences \cite{Nogueira2002}, meteorology \cite{Kavasseri}, seismology \cite{Telesca} and others. It should also be remembered that two factors influence the use of the MF-DFA technique: its effortless implementation, and its excellent performance in obtaining results, related to both artificial and real data, when compared to the performance of the Wavelet Transform process, for the same systems \cite{Pawel2006}.
One highly relevant problem in molecular biology within this context is linked to studies concerning the protein folding process through the characterization of its potential energy landscape. Such landscapes constitute a satisfactory representation of the potential energy of interaction among the various microscopic degrees of freedom of the system \cite{Rainer2007}.
In general, the adopted strategies are based on the assumption that the energy landscapes of proteins are complex since, being time dependent, they present a rugose structure, with many maxima and minima separated by barriers of varying heights. These properties imply complex evolutionary dynamics, in which the system experiences a variety of time scales \cite{PRE2008Mazzoni,Lorenzo2006}.
Previous studies, using molecular dynamics simulations (MOIL program) and a variational method of fractal analysis to study the fractal properties of time series of the potential energy of molecular systems such as myoglobin and polyalanines, among others, were conducted by Lidar and collaborators \cite{Lidar1999}. Basically, they investigated systems subjected to a temperature $T=300$K and simulation times in the range $10<t<25$ ps.
Their results suggest that the value of the fractal dimension (the rugosity exponent) depends slightly on temperature and that the presence of $\alpha$-helix structures smoothes the rugosity of the series. Furthermore, there was evidence of universal behavior, i.e. the rugosity of different systems is described by the same fractal dimension. Recently, Hegger et al. \cite{Rainer2007} analyzed time series extracted from molecular dynamics simulations (GROMACS program) at a temperature of $T=300$K, for polyalanines with the number $N$ of amino acids ranging between $3$ and $10$, reaching simulation times of $100$ ns.
Considering that these time series represent the dynamic trajectories followed by the system, these authors found that the effective fractal dimension of such trajectories decreases with the chain size. According to them, such behavior occurs due to a stabilizing effect of the hydrogen bonds on the protein secondary structure ($\alpha$-helix), smoothing rugosities on the trajectories. Confirming whether this scenario survives a careful fractal analysis, one that searches for fine details of the time series fluctuations, has become a central problem to be clarified.
The present work introduces an approach that combines molecular dynamics simulations with MF-DFA to characterize the rugosity of potential energy profiles for polyalanine molecules. By considering these profiles as energy time series, we investigate the effects produced on the trajectories traced over the hyper-surface of the potential energy when the size and temperature of the system are changed. In particular, we show that the manner in which the system visits the phase space in its dynamic evolution depends significantly on both the equilibrium temperature and the nucleation of secondary structures in the polyalanines.
This article is organized as follows: Section II presents the molecular dynamics simulations, the energy time series, and the energy dependence on the temperature $T$ and the number of amino acids $N$. Section III presents the multifractal spectra, obtained from the MF-DFA technique, associated with each polyalanine time series; the effects caused on the spectra by changes in the chain size and temperature, and by the presence of secondary structures, are discussed. Finally, Section IV presents our conclusions.
\section{Molecular Dynamics and Time Series}
Molecular dynamics simulations have been extensively used to study the problem of protein folding \cite{Moret2005}. In general, these simulations involve considerable computational effort, since the integration of the equations of motion must be made for a system with many particles.
In the case of molecular systems, it is known that such structures can take on a great number of configurations, which grows with the number of degrees of freedom of the system. Therefore, molecular dynamics calculations for protein systems necessarily require the definition of effective potentials, from which the resulting force acting on each particle is determined.
In this work, the numerical molecular dynamics calculations were performed with the aid of an efficient computer code: the THOR program \cite{Pascutti99}, developed to investigate structures of biological interest, such as proteins. The code includes the GROMOS force field \cite{Gunsteren1987} in its architecture, used to simulate the atomic interactions in the molecule.
In the THOR program, the conformational energy of the molecule is made up of a sum of bonded and nonbonded terms. In such approach, only hydrogen atoms covalently bonded to oxygen or to nitrogen are considered explicitly, whereas CH1, CH2, and CH3 groups are assumed to be an atomic unit. Thus, we analyze the changes of the following energy function:
\begin{equation}
E=E_H+E_\theta+E_\phi+E_\varphi+E_{LJ}+E_{el}
\end{equation}
or explicitly,
\begin{eqnarray}
E &=&\frac{1}{2}\sum_{k}K_{b_{k}}(r_{k}-r_{0})^{2}+ \frac{1}{2}\sum_{l}K_{\theta _{l}}(\theta _{l}-\theta
_{0})^{2}+ \nonumber \\ &&+\frac{1}{2}\sum_{m}K_{{\phi}_{m}}(\phi_{m}-\phi_{0})^{2}+ \nonumber \\
&&+\sum_{n}K_{\varphi_{n}}(1+\cos (m\varphi _{n}+\varphi _{0}))+\\ &&+\sum_{i<j}\left[
\frac{C_{12}(i,j)}{r_{i,j}^{12}}-\frac{C_{6}(i,j)}{r_{i,j}^{6}}\right] +\frac{1}{4\pi \varepsilon
_{0}\varepsilon _{r}} \sum_{i<j}\frac{q_{i}q_{j}}{r_{i,j}}. \nonumber
\end{eqnarray}
where $E_H$ is the Hooke potential, $E_\theta$ is the angular potential, $E_\phi$ is the improper-dihedral potential, $E_\varphi$ is the dihedral-angle potential, $E_{LJ}$ is the Lennard-Jones potential, and $E_{el}$ is the Coulomb potential term (see the definitions and parameters used in \cite{Gunsteren1987,Pascutti99}).
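To make the last two terms concrete, a minimal evaluation of the nonbonded part of Eq.~(2) can be written as below. The Coulomb prefactor `KE` (taken in GROMOS-like units of kJ\,mol$^{-1}$\,nm\,$e^{-2}$) and the pair-list format are assumptions chosen for illustration:

```python
KE = 138.935458  # 1/(4 pi eps0) in kJ mol^-1 nm e^-2 (assumed unit system)

def nonbonded_energy(pairs, eps_r=2.0):
    """Lennard-Jones + Coulomb contributions of Eq. (2).

    `pairs` is an iterable of tuples (C12, C6, qi, qj, rij), one entry
    per atom pair with i < j; eps_r is the relative dielectric constant.
    """
    e = 0.0
    for C12, C6, qi, qj, r in pairs:
        e += C12 / r**12 - C6 / r**6       # E_LJ term
        e += KE * qi * qj / (eps_r * r)    # E_el term
    return e
```

In a real force-field code the pair list would exclude bonded neighbors and apply cutoffs; those details are omitted here.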
Specifically, we simulate polyalanine structures with different numbers of residues, at different equilibrium temperatures and initial conformations. Polyalanines are used as prototypes to study the folding process of structures into the $\alpha$-helix conformation. In this dynamics, the electric dipole moments arising from the electric imbalance between the $NH$ and $CO$ groups of the peptide bond, the hydrogen bonds, and the van der Waals interactions are key ingredients in the cooperative effect responsible for forming such structures, which is accelerated with the increasing number of amino acids in the protein.
Thus, as pointed out by Shoemaker and collaborators \cite{Shoemaker1987}, Moret and collaborators \cite{Moret2002} and Rogers \cite{Rogers1989}, a critical minimum number of amino acids is necessary so that these configurations may be observed. Furthermore, there is an upper critical number due to destabilization brought on by entropic effects.
For the numerical calculations, a similar protocol was adopted in all cases. The initial temperature started at $T_i=1$K, heating the system at a rate of $5$K per step (ps) until reaching the desired equilibrium temperature. Three equilibrium temperatures were considered: $T=275$K, $T=300$K and $T=325$K, in a continuous medium described by a relative dielectric constant $\epsilon_r=2$. The time step of the dynamics was $5\times10^{-4}$ ps, and for all simulations $N_{\textsf{step}}=5\times10^{8}$ steps were performed, to achieve a time of the order of $25$ ns. In computing the series, the interval associated with the thermalization of the system was discarded.
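The heating protocol just described corresponds to a simple linear ramp that saturates at the equilibrium temperature; a one-line sketch (function name and signature are illustrative):

```python
def temperature_schedule(t_ps, T_target, T_i=1.0, rate=5.0):
    """Target temperature (K) at time t_ps (ps): start at T_i = 1 K and
    heat at `rate` K per ps until the equilibrium temperature is reached."""
    return min(T_i + rate * t_ps, T_target)
```
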
\begin{figure}
\begin{center}
\includegraphics*[width=\linewidth]{figure01.eps}
\end{center}
\caption{ Potential energy time-series of polyalanines with different numbers of residues (a) N=10 (black), (b) N = 15 (red), and (c) N = 18 (blue). In all cases the thermal equilibrium temperature is $T = 300$K.
\label{fig:figura1}}
\end{figure}
Figure (\ref{fig:figura1}) shows the last $5$ ns of observation of the potential energy time series associated with polyalanine structures at $T = 300$K and $N=10$, $15$, and $18$. In all cases examined, the series show the typical rugosity observed in other complex phenomena described by self-affine time series \cite{Vicsek}. It should be emphasized that, for all temperature values, calculations were performed for values of $N$ between 8 and 18 residues, the results of which display behavior similar to that presented in Figure (\ref{fig:figura1}).
\section{MF-DFA -- Multifractal Spectra}
Once the time series of the polyalanines have been obtained and the presence of rugosity has been observed, a careful characterization of the statistical fluctuations embedded in the series should be performed in order to obtain information on the dynamic behavior of the system. In this work, the MF-DFA method is applied through the following steps \cite{Kantelhardt2002}:
\begin{enumerate}
\item Consider a time series $u(i)$, $i=\{1...N_{max}\}$, over a compact support, and determine its profile (integrated series), i.e.,
\begin{eqnarray}
y(i) = {\sum\limits_{k = 1}^{i} {[u(k)-<u>]}}, \label{eq:serie-integracao}
\end{eqnarray}
where $<u>$ is the mean taken over the original series $u(i)$;
\item Divide the profile into $N_s$ disjoint segments of equal size $s$ and calculate the local trend through a polynomial fit of order $m$, represented by the variable $y_{\nu}(i)$, in each segment. Since the length $N_{max}$ of the series is often not a multiple of the considered time scale $s$, a short part at the end of the profile may remain. In order not to disregard this part of the series, the same procedure is repeated starting from the opposite end. Thereby, $2N_s$ segments are obtained altogether;
\item Determine the fluctuation variance,
\begin{equation}\label{eq:serieF2}
F^2(s,\nu) \equiv \frac{1}{s} {{\sum\limits_{i = 1}^s
\{{{{ y[(\nu-1)s +i]-y_{\nu}(i)}} }\}^2}},
\end{equation}
with $\nu =\{1,...,N_s\}$, associated with each segment (an analogous definition holds for the $N_s$ segments obtained starting from the opposite end). Notice that in this step, polynomial trends of order $m$ are eliminated from the profile.
\item Calculate the mean over all segments to obtain the fluctuation function of order $q$:
\begin{eqnarray}
F_{q}(s) \equiv \left\{\frac{1}{2N_s}\sum\limits_{\nu = 1}^{2N_s}
[F^2(s,\nu)]^{q/2}\right\}^{1/q},\label{eq:serieFq}
\end{eqnarray}
where, in general, the variable $q$ assumes real values, except zero.
\end{enumerate}
The characteristic property of the function $F_{q}(s)$ is its scale behavior, i.e. if the time series $u(i)$ possesses long-range correlations, then for increasing values of $s$ the function $F_{q}(s)$ also grows, following a power law of the type:
\begin{eqnarray}
F_q(s) \sim s^{h(q)}.
\label{eq:serieFq-tau}
\end{eqnarray}
Therefore, the main result obtained with the MF-DFA method is a family of exponents $h(q)$, called the generalized Hurst exponents. For a genuine multifractal series these exponents form a decreasing function of $q$; if, on the other hand, the signal is monofractal, $h(q)$ is constant. Moreover, for $q<0$, $h(q)$ captures the properties of small fluctuations, while for $q>0$ large fluctuations are dominant. In particular, when $q=2$, $h(2)=H$ is the classical Hurst exponent.
Finally, the multifractal spectrum of measures can be obtained through a simple relation between the exponent $h(q)$ and the multifractal scale exponent $\tau(q)$, defined by multifractal formalism \cite{Kantelhardt2002}:
\begin{eqnarray}
\tau(q) \equiv qh(q) - 1.
\label{eq:tauq}
\end{eqnarray}
The function $\tau(q)$ is one of the most used representations of multifractal spectra, related to time series.
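The four steps above, together with the power-law fit for $h(q)$ and the spectrum $\tau(q)=qh(q)-1$, can be condensed into a short sketch. This is an illustrative implementation, not the code used to produce the results reported here; the $q=0$ case is handled by the usual logarithmic average:

```python
import numpy as np

def mfdfa(u, scales, q_values, m=4):
    """Minimal MF-DFA following steps 1-4 above; returns F_q(s)."""
    y = np.cumsum(u - np.mean(u))              # step 1: profile
    N = len(y)
    Fq = np.zeros((len(q_values), len(scales)))
    for j, s in enumerate(scales):
        Ns = N // s
        F2 = []
        # step 2: 2*Ns segments (from both ends); step 3: detrended variance
        for start in list(range(0, Ns * s, s)) + list(range(N - Ns * s, N, s)):
            seg = y[start:start + s]
            i = np.arange(s)
            trend = np.polyval(np.polyfit(i, seg, m), i)
            F2.append(np.mean((seg - trend) ** 2))
        F2 = np.array(F2)
        for iq, q in enumerate(q_values):      # step 4: order-q average
            if q == 0:
                Fq[iq, j] = np.exp(0.5 * np.mean(np.log(F2)))
            else:
                Fq[iq, j] = np.mean(F2 ** (q / 2)) ** (1.0 / q)
    return Fq

def hurst_and_tau(Fq, scales, q_values):
    """h(q) from the log-log slope of F_q(s); tau(q) = q h(q) - 1."""
    h = np.array([np.polyfit(np.log(scales), np.log(Fq[i]), 1)[0]
                  for i in range(len(q_values))])
    return h, q_values * h - 1.0
```

For a Gaussian white-noise series this construction gives $h(q)\approx 0.5$, the monofractal limit, which serves as a sanity check before analyzing the polyalanine series.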
We now present typical results obtained using the MF-DFA technique to investigate the different time series of the potential energy of polyalanines described in Section II. Figure (\ref{fig:figura2}) represents the behavior of the logarithm of the fluctuation function, $\log F_q(s)$, as a function of $\log s$ and the parameter $q$, for the series with $N=18$ residues shown in Figure (\ref{fig:figura1}). The scale values were chosen in the range $20<s<100$ and the trends were approximated by a polynomial of order $m=4$. As can be observed, the estimates obtained from the linear fit of the data satisfactorily follow the scaling behavior given by Equation (\ref{eq:serieFq-tau}).
\begin{figure}[thbp]
\begin{center}
\includegraphics*[width=\linewidth]{figure02.eps}
\end{center}
\caption{Logarithm of the fluctuation function $F_q(s)$ against $\log s$ with the $q$ parameter in the range
$-5<q<5$ (step 1, from top to bottom) for the polyalanine energy time series with $N=18$ residues and thermal equilibrium temperature $T=300$K.}\label{fig:figura2}
\end{figure}
Figure (\ref{fig:figura3}a) presents the corresponding exponents $h(q)$ (the slopes of the straight lines fitted to the data) as a function of $q$, while Figure (\ref{fig:figura3}b) presents the associated multifractal spectrum $\tau(q)$. In general terms, the results indicate that the time series investigated exhibit typical multifractal behavior ($\tau(q)$ is not a linear function of $q$), which depends on the number of residues $N$ and the thermal equilibrium temperature $T$ of the system.
In Figure (\ref{fig:figura3}) different correlation regimes may be observed: for $N=17$, the series is completely correlated, while for $N=10$, $15$ and $18$ there is a mixed regime, i.e. strong anti-correlation for $q>0$ and correlation for some values of $q<0$. In particular, when $N=13$ the series is totally anti-correlated. According to reference \cite{Moret2002}, $N=13$ is the critical number of residues associated with the formation of the $\alpha$-helix at $T=300$K.
\begin{figure}[thbp]
\begin{center}
\includegraphics*[width=\linewidth]{figure03.eps}
\end{center}
\caption{(a) Generalized Hurst exponents $h(q)$ dependence on the parameter $q$ and (b) multifractal spectrum $\tau (q)$ dependence on $q$ for polyalanines with a different number of residues, thermal equilibrium temperature of $T=300$K.}\label{fig:figura3}
\end{figure}
Since for this value of $N$ the potential energy is a global minimum, we may consider that the nucleation of secondary structures shifts the dynamics of the system to an anti-correlated regime, thus overcoming the growth trend of the energy induced by thermal agitation and the increase in the number of residues. In addition, as shown in Figure (\ref{fig:figura3}b), all spectra $\tau(q)$ exhibit typical
multifractal properties.
\section{Conclusions}
In this work, we have studied the multifractal properties of time series of the potential energy of polyalanines. Protein chains were analyzed with different numbers of residues at three equilibrium temperatures. The research was conducted using an approach that combines molecular dynamics with MF-DFA, a technique of statistical analysis, which enabled us to characterize the rugosity associated with the temporal correlation among the dynamic variables of the series.
Our results corroborate some of those obtained by Hegger \emph{et al.} \cite{Rainer2007} and by Lidar and collaborators \cite{Lidar1999}, such as the influence of time and of the presence of the $\alpha$-helix on the rugosity of the time series. However, our results also indicate that their other findings are not confirmed, since the simulation times they used are much shorter than those used in this study and are therefore insufficient to observe the formation of secondary structures. Moreover, the fractal analysis technique employed by those authors, which does not deal adequately with the existence of trends in the series, did not allow the subtler details of the spectra to be captured.
Indeed, the results obtained in this study indicate that all the series examined exhibit typical multifractal behavior, which depends both on the number of residues $N$ and on the temperature $T$ of the system, and that these multifractal properties, represented by the $\tau(q)$ spectra or, equivalently, by the generalized Hurst exponents $h(q)$, reveal important aspects of the temporal evolution of the system.
It was found, for example, that when the number of residues approaches the critical number $N_c$ associated with the formation of a significant amount of secondary structures, the temporal correlation regime of the system changes. For $N_c=13$ and $T=300$K, the system is totally anti-correlated, the spectrum $\tau(q)$ is genuinely multifractal and rugosity is more pronounced in the region of small fluctuations ($q<0$), as seen in Figure (\ref{fig:figura3}a). For other values of $N$, the results confirm that the two correlation regimes are present in the series.
Recently, Moret and collaborators \cite{Moret2001a} analyzed the potential energy profiles of proteins as a function of the number of dihedral angles $\phi$ and $\psi$, and found that these profiles are described by genuinely multifractal $f(\alpha)$ spectra. They also found that the $f(\alpha)$ spectra are sensitive to the number of degrees of freedom of the system, illustrating that the dimensionality of the phase space influences the accessibility of parts of the potential energy hyper-surface, since the proteins adopt conformations in the phase space only in the regions permitted by the spectrum $f(\alpha)$.
This behavior allows an alternative explanation of the folding dynamics of a protein, because it suggests the existence of preferential folding trajectories along the energy hyper-surface; that is, in the search for its native state, a protein need not visit all the accessible states in phase space, but only those associated with the spectrum $f(\alpha)$.
The MF-DFA method, applied to the time series of the potential energy of polyalanines, has enabled this study to reveal important aspects of the richness and complexity associated with the temporal evolution of these systems in the search for their native state.
In fact, it was shown that, depending on the number of residues and the temperature, the trajectory of the protein, as it dynamically visits its phase space, is guided mainly by the influence of the secondary structures formed over the course of the simulation, which probe the hyper-surface of the conformational energy at different time scales. As a result, the energy time series exhibit multifractal long-range correlations.
Therefore, our results support an alternative explanation of the so-called Levinthal paradox
\cite{Levinthal}: in this scenario, the protein in its dynamic evolution is influenced by the emergence of intermediate structures which, gradually and through successive increases in conformational stability, guide the trajectories along preferential folding routes. Consequently, the extreme ease with which a protein folds, despite the huge number of possible configurations, may be attributed to the succession of events it undergoes on a multifractal space-time energy hyper-surface.
\begin{acknowledgments}
This work was partially supported by the Brazilian federal grant agencies CNPq and CAPES, and by FACEPE (Pernambuco state grant agency) under grants PRONEX EDT 0012-05.03/04 and APQ-0203-1.05/08.
\end{acknowledgments}
\section{Introduction}\label{sec:introduction}
This article is a continuation of \cite{ADDE1}, but the approach here is a bit more general and, as a consequence, all results except those in \cref{sec:ddes} have a wider applicability; also see \cref{sec:structure} for an overview of the structure and contents.
We are still motivated by abstract delay differential equations (DDEs) of the form
\begin{subequations}
\label{eq:adde-ic}
\begin{equation}
\label{eq:adde}
\dot{x}(t) = B x(t) + F(x_t), \qquad t \ge 0,
\end{equation}
where the unknown $x$ takes values in a real or complex Banach space $Y$. (The adjective \emph{abstract} comes from the fact that $Y$ is allowed to be infinite-dimensional.) It is assumed that the possibly unbounded operator $B : \DOM(B) \subseteq Y \to Y$ generates a $\mathcal{C}_0$-semigroup $S$ of bounded linear operators on $Y$. As the state space for \cref{eq:adde} we choose the Banach space $X \ensuremath{\coloneqq} C([-h,0],Y)$ of continuous $Y$-valued functions on $[-h,0]$, endowed with the supremum-norm. The history of $x$ at time $t$ is denoted by $x_t \in X$, so
\[
x_t(\theta) \ensuremath{\coloneqq} x(t + \theta), \qquad \theta \in [-h,0].
\]
In particular, an initial condition for \cref{eq:adde} is specified as
\begin{equation}
\label{eq:ic}
x_0 = \phi \in X.
\end{equation}
\end{subequations}
Finally, we assume that $F : X \to Y$ is continuous and possibly nonlinear.
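A concrete instance, given here purely for orientation (the domain $\Omega$ and the nonlinearity $g$ are illustrative and not used in what follows), is a delayed reaction--diffusion equation: take $Y = L^2(\Omega)$ for a bounded domain $\Omega \subseteq \mathbb{R}^d$, let $B$ be the Laplacian with Neumann boundary conditions, which generates the heat semigroup on $Y$, and let $F(\phi) \ensuremath{\coloneqq} g(\phi(-h))$ for a globally Lipschitz superposition operator $g : Y \to Y$. Then \cref{eq:adde} becomes
\[
\dot{x}(t) = \Delta x(t) + g(x(t - h)), \qquad t \ge 0,
\]
which includes diffusion via the unbounded operator $B$, as in \cite{SpekVanGilsKuznetsov2019}.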
In \cite{ADDE1} the shift semigroup $\{T_0(t)\}_{t \ge 0}$ was defined as the $\mathcal{C}_0$-semigroup on $X$ corresponding to the solution of \cref{eq:adde-ic} with $F = 0$. The sun dual $\SUN{X}$ of $X$ with respect to $\{T_0(t)\}_{t \ge 0}$ was characterized as
\begin{equation}
\label{eq:xsun-adde}
X^{\odot} \simeq Y^{\odot} \times \LP{1}([0,h], Y^{\star}),
\end{equation}
where $\simeq$ denotes an (explicit and simple) isometric isomorphism. It was also shown in \cite{ADDE1} that if
\begin{equation}
\label{eq:ell}
\ell : Y \to \SUNSTAR{X}, \qquad \ell y \ensuremath{\coloneqq} (j_Y y, 0),
\end{equation}
is the embedding induced by the canonical embedding $j_Y : Y \to \SUNSTAR{Y}$, then the \ensuremath{\text{weak}^{\star}}\xspace convolution integral in the right-hand side of the abstract integral equation
\begin{equation}
\label{eq:aie0-adde}
u(t) = T_0(t)\phi + j^{-1}\int_0^t\SUNSTAR{T_0}(t - \tau)(\ell \circ F)(u(\tau))\,d\tau, \qquad t \ge 0,
\end{equation}
takes values in the range of the canonical embedding $j : X \to \SUNSTAR{X}$. Furthermore, it was shown that there is a one-to-one correspondence between the global mild solutions of the initial value problem \cref{eq:adde-ic} and the global solutions of \cref{eq:aie0-adde}. The same correspondence exists between \emph{local} solutions of \cref{eq:adde-ic} and \cref{eq:aie0-adde}.
\subsection{Structure and outline}\label{sec:structure}
In this second part we adopt a more general point of view than in \cite{ADDE1}. Let $\{T_0(t)\}_{t \ge 0}$ be an arbitrary $\mathcal{C}_0$-semigroup of bounded linear operators on an arbitrary real or complex Banach space $X$. It is not assumed that $X$ is $\odot$-reflexive with respect to $\{T_0(t)\}_{t \ge 0}$. There exist constants $\omega \in \mathbb{R}$ and $M \ge 1$ such that
\begin{equation}
\label{eq:expboundT0}
\|T_0(t)\| \le M e^{\omega t} \qquad \text{for all } t \ge 0.
\end{equation}
Given a continuous (possibly nonlinear) operator $G : X \to X^{\odot\star}$, we are interested in solutions of the abstract integral equation
\begin{equation}
\label{eq:aie0}
u(t) = T_0(t)x + j^{-1}\int_0^t\SUNSTAR{T_0}(t - \tau)G(u(\tau))\,d\tau, \qquad t \ge 0,
\end{equation}
where $x \in X$ is an initial condition. (In the particular case of abstract DDEs we have $G \ensuremath{\coloneqq} \ell \circ F$.) If $u$ is continuous on some time interval $[0,t_e)$, then the \ensuremath{\text{weak}^{\star}}\xspace Riemann integral appearing in the right-hand side of \cref{eq:aie0} takes values in $X^{\odot\odot}$ for all $0 \le t < t_e$, but the inclusion $jX \subseteq X^{\odot\odot}$ may be strict. As a consequence, \cref{eq:aie0} generally does not give rise to a nonlinear semiflow on $X$, which is a fundamental complication from a dynamical systems perspective. Here we aim to address this complication in a systematic manner, motivated by previous work on abstract renewal equations \cite{Diekmann2008}, classical coupled systems with infinite delay \cite{DiekmannGyllenberg2012}, and abstract DDEs \cite{ADDE1}.
Let $J$ be any non-degenerate interval and denote $\Omega_J \ensuremath{\coloneqq} \{(t,s) \in J \times J\,:\, t \ge s\}$. A continuous function $f : J \to \SUNSTAR{X}$ will be called a \textbf{forcing function}. Given a forcing function $f$, introduce
\begin{equation}
\label{eq:v0}
v_0(\cdot,\cdot,f) : \Omega_J \to \SUNSTAR{X}, \qquad v_0(t,s,f) \ensuremath{\coloneqq} \int_s^t\SUNSTAR{T_0}(t - \tau)f(\tau)\,d\tau.
\end{equation}
Motivated by the above considerations, we are interested in forcing functions $f$ with the property that
\begin{equation}
\label{eq:admissible}
v_0(t,s,f) \in jX \qquad \text{for all } (t,s) \in \Omega_J.
\end{equation}
\begin{definition}\label{def:admrange}
A closed subspace $\mathscr{X}_0$ of $X^{\odot\star}$ is called an \textbf{admissible range for} $\{T_0(t)\}_{t \ge 0}$ if \cref{eq:admissible} holds \emph{for every forcing function} $f : J \to \mathscr{X}_0$. A continuous linear or nonlinear operator\footnote{If the continuous linear operator $L$ is admissible for $\{T_0(t)\}_{t \ge 0}$, then $L$ satisfies condition \textbf{(H0)} in \cite[Section 6]{ADDE1}, so the results from there apply.} $G : X \to X^{\odot\star}$ is called an \textbf{admissible perturbation for} $\{T_0(t)\}_{t \ge 0}$ if there exists an admissible range $\mathscr{X}_0$ for $\{T_0(t)\}_{t \ge 0}$ such that $G$ takes its values in $\mathscr{X}_0$.
\end{definition}
The strategy is now to proceed as closely as possible to the $\odot$-reflexive case, but allowing only \emph{admissible} perturbations for $\{T_0(t)\}_{t \ge 0}$. In \cref{sec:admissibility} we prove some elementary and less elementary properties related to admissibility for a given $\mathcal{C}_0$-semigroup. For example, in \cref{prop:admindependent} we show that admissibility of $\mathscr{X}_0$ is independent of $J$, so in \cref{def:admrange} there is no need to include $J$ in the terminology or notation. In \cref{thm:admconstants} we give a simple criterion for a closed subspace of $X^{\odot\star}$ to be an admissible range. After a small digression on norm convergence of \ensuremath{\text{w}^{\star}}\xspace-integrals over unbounded intervals, we discuss the relationship between admissible ranges and a certain subspace of $X^{\odot\star}$ that was introduced in \cite{VanNeerven1992}. This discussion leads to \cref{thm:xsuncross} and its corollary.
Next, in \cref{sec:inhomogeneous} we address two interrelated questions about perturbation of $\{T_0(t)\}_{t \ge 0}$ by an admissible bounded linear operator. The first of these questions concerns robustness of a given admissible range, while the second question will prove to be of particular relevance for the local analysis of the semiflow generated by \cref{eq:aie0}. After some preparations, the answers are presented in \cref{thm:inhom:answers} and its corollary.
In \cref{sec:linearization} we move from linear to semilinear theory. In general, a perturbative analysis near an equilibrium of the semiflow generated by \cref{eq:aie0} requires a splitting of $G$ into a linear and a nonlinear part and a subsequent comparison between the linearly perturbed semigroup $\{T(t)\}_{t \ge 0}$ and the nonlinear semiflow $\Sigma$. In turn, such a comparison depends on the equivalence in \cref{prop:aie0_aie} and on the (uniform) differentiability of $\Sigma$ with respect to the state in \cref{prop:linearization}.
In \cref{sec:cm} we discuss the construction of smooth center manifolds in the non-$\odot$-reflexive case, offering both a comparison with \cite[Chapter IX]{Delay1995}, itself based on \cite{VanGils1982,DiekmannVanGils1984,VanderbauwhedeVanGils1987,Ball1973}, and an application of the material developed up to that point. We demonstrate that, with this material at hand, the non-trivial consequences of the lack of $\odot$-reflexivity are both relatively minor and localized exclusively in the proof of \cref{prop:Keta} on a pseudo-inverse for the linear inhomogeneous equation. The results of the construction are then summarized in \cref{thm:cm-summary}.
In the accompanying \cref{sec:decomposition-appendix} we first discuss the `lifting' of hypothesis \cref{hyp:cm} in \cref{sec:cm} from $X$ to a subspace of $X^{\odot\star}$ that includes the range of $G$. The result of this discussion, in the form of \cref{prop:xsuncross-cm}, is used in the main text. Second, we provide a proof of \cref{thm:hypcm} which gives sufficient conditions, in terms of eventual norm continuity of $\{T(t)\}_{t \ge 0}$ and a decomposition of the spectrum of its generator, for both central hypotheses \cref{hyp:cm} and \cref{hyp:cm:j} in \cref{sec:cm} to hold. Hence this theorem demonstrates that the usual assumption of eventual compactness of $\{T(t)\}_{t \ge 0}$ is more restrictive than necessary.
In \cref{sec:ddes} we return to \cref{eq:adde-ic} and we discuss some implications of the general results in the foregoing sections for the class of abstract DDEs. Indeed, the original motivation for the present work can be found in \cite{VanGils2013}, which was continued in \cite{Dijkstra2015} and extended in \cite{SpekVanGilsKuznetsov2019} to include diffusion via the unbounded operator $B$. In particular, \cref{thm:cm-dde} provides a justification of the center manifold reduction that underlies the local normal form calculations performed in \cite{VanGils2013,Dijkstra2015,SpekVanGilsKuznetsov2019}. It is \emph{not} assumed that $\{T(t)\}_{t \ge 0}$ is eventually compact and therefore the theorem applies equally well to the cases $B = 0$ and $B \neq 0$. This eliminates the somewhat unsatisfactory dichotomy in the technical treatment of the two cases that results from the usual assumption of eventual compactness \cite{Travis1974,Wu1996,Faria2002,Faria2006}.
\subsection{Conventions and notation}\label{sec:conventions}
\begin{enumerate}[wide=0pt,leftmargin=\parindent,label=\roman*.]
\item
We use the notations $\mathbb{R}_+ \ensuremath{\coloneqq} [0,\infty)$ and $\mathbb{R}_- \ensuremath{\coloneqq} (-\infty,0]$. Unless explicitly stated otherwise, \emph{all intervals are assumed to be non-degenerate}. They need not be open, closed or bounded.
\item
Unless explicitly stated otherwise, the scalar field for a given vector space may be either real or complex and is denoted by $\mathbb{K}$.
\item
The duality pairing between a Banach space $E$ and its continuous dual space $\STAR{E}$ is written as
\[
\PAIR{x}{\STAR{x}} \ensuremath{\coloneqq} \STAR{x}(x), \qquad \text{for all }\,x \in E \text{ and } \STAR{x} \in \STAR{E},
\]
and we commonly use the prefix \ensuremath{\text{w}^{\star}}\xspace to indicate the \ensuremath{\text{weak}^{\star}}\xspace-topology on $\STAR{E}$.
\item
If $E_1$ and $E_2$ are Banach spaces over the same field, then $\ensuremath{\mathcal{L}}(E_1,E_2)$ is the Banach space of all bounded linear operators from $E_1$ to $E_2$, equipped with the operator norm.
\item
If $J$ is an interval and $E$ is a Banach space, then $C(J,E)$ is the vector space of continuous functions from $J$ into $E$, equipped with the topology of uniform convergence on compact subsets. (Of course, if $J$ is compact, then this topology is induced by the usual supremum-norm.)
\item
For any interval $J$ we denote by $\ensuremath{\mathbbm{1}}$ the function on $J$ with the constant value $1 \in \mathbb{R}$, and we denote by $\ensuremath{\mathbbm{1}} \otimes x \in C(J,E)$ the function with the constant value $x \in E$.
\end{enumerate}
We briefly recall the definition of the general \ensuremath{\text{w}^{\star}}\xspace-integral as it appears in \cite[Interlude 3.13 in Appendix II]{Delay1995}; also see the treatment in \cite[Appendix A2]{VanNeerven1992} for an equivalent definition that is on the one hand more general (involving an arbitrary measure space), but on the other hand too restrictive (the measure is assumed to be finite).
Given a Banach space $E$ and an interval $J$, not necessarily bounded, let $q : J \to E^{\star}$ be \ensuremath{\text{w}^{\star}}\xspace-Lebesgue integrable, i.e. $\tau \mapsto \PAIR{x}{q(\tau)}$ is in $L^1(J,\mathbb{K})$ for all $x \in E$. By virtue of the closed graph theorem, the map
\[
x \mapsto \int_J \PAIR{x}{q(\tau)}\,d\tau, \qquad x \in E,
\]
defines an element $Q^{\star}$ of $E^{\star}$. We call $Q^{\star}$ the \textbf{\ensuremath{\text{w}^{\star}}\xspace-integral of $q$ over $J$} and put
\begin{equation}
\label{eq:wsintegral_general}
\int_J q(\tau)\,d\tau \ensuremath{\coloneqq} Q^{\star}.
\end{equation}
If $J$ is compact and $q$ is \ensuremath{\text{w}^{\star}}\xspace-continuous, then the above \ensuremath{\text{w}^{\star}}\xspace-integral may be evaluated as a \ensuremath{\text{w}^{\star}}\xspace-Riemann integral. So far, examples of \ensuremath{\text{w}^{\star}}\xspace-Riemann integrals occurred in \cref{eq:aie0-adde,eq:aie0,eq:v0}. On the other hand, if $q \in L^1(J,E^{\star})$, then $q$ is \ensuremath{\text{w}^{\star}}\xspace-Lebesgue integrable and the Bochner- and \ensuremath{\text{w}^{\star}}\xspace-integrals coincide. (This is a direct consequence of the fact that Bochner integrals commute with bounded linear operators.)
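To illustrate that the \ensuremath{\text{w}^{\star}}\xspace-integral is strictly more general than the Bochner integral, consider the following standard example. Let $E = L^1([0,1],\mathbb{R})$, so that $E^{\star} = L^{\infty}([0,1],\mathbb{R})$, and define $q(\tau) \ensuremath{\coloneqq} \chi_{[0,\tau]}$ for $\tau \in [0,1]$. Then $\|q(\tau) - q(\sigma)\|_{\infty} = 1$ whenever $\tau \neq \sigma$, so $q$ is not essentially separably valued and therefore not Bochner integrable, but $\tau \mapsto \PAIR{x}{q(\tau)} = \int_0^{\tau} x(\sigma)\,d\sigma$ is continuous for every $x \in E$, and Fubini's theorem gives
\[
\PAIR{x}{\int_0^1 q(\tau)\,d\tau} = \int_0^1 \int_0^{\tau} x(\sigma)\,d\sigma\,d\tau = \int_0^1 (1 - \sigma)x(\sigma)\,d\sigma,
\]
so the \ensuremath{\text{w}^{\star}}\xspace-integral of $q$ over $[0,1]$ is the function $\sigma \mapsto 1 - \sigma$ in $E^{\star}$.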
\section{Admissible ranges and admissible forcing functions}\label{sec:admissibility}
Let $\{T_0(t)\}_{t \ge 0}$ be a $\mathcal{C}_0$-semigroup on a Banach space $X$, as in \cref{sec:structure}. The following notion is useful in conjunction with \cref{def:admrange}.
\begin{definition}\label{def:admfunc}
A forcing function $f : J \to \SUNSTAR{X}$ is called an \textbf{admissible forcing function for $\{T_0(t)\}_{t \ge 0}$ on $J$} if \cref{eq:admissible} holds \emph{for this particular choice of $f$}. The vector space $\mathscr{F}_0(J)$ of all such functions is called the \textbf{admissible forcing class for $\{T_0(t)\}_{t \ge 0}$ on $J$}.
\end{definition}
\subsection{Elementary properties}\label{sec:admprops}
First we record a trivial but useful relationship between admissible ranges and admissible forcing classes.
\begin{proposition}\label{prop:rangeclass}
Let $\mathscr{X}_0$ be a closed subspace of $\SUNSTAR{X}$. If $\mathscr{X}_0$ is an admissible range for $\{T_0(t)\}_{t \ge 0}$ and $J$ is an interval, then
\begin{equation}
\label{eq:prop:rangeclass}
C(J,\mathscr{X}_0) \subseteq \mathscr{F}_0(J).
\end{equation}
Conversely, if \cref{eq:prop:rangeclass} holds for some interval $J$, then $\mathscr{X}_0$ is an admissible range for $\{T_0(t)\}_{t \ge 0}$.
\end{proposition}
Next, we show that the admissible range is independent of the particular interval, as announced following \cref{def:admrange}. This justifies calling $\mathscr{X}_0$ an \textbf{admissible range for $\{T_0(t)\}_{t \ge 0}$} if $\mathscr{X}_0$ is an admissible range for $\{T_0(t)\}_{t \ge 0}$ on \emph{some} interval.
\begin{proposition}\label{prop:admindependent}
If $\mathscr{X}_0$ is an admissible range for $\{T_0(t)\}_{t \ge 0}$ on \emph{some} interval, then it is admissible for $\{T_0(t)\}_{t \ge 0}$ on \emph{every} interval.
\end{proposition}
\begin{proof}
Suppose that $\mathscr{X}_0$ is admissible for $\{T_0(t)\}_{t \ge 0}$ on the interval $J$. Let $J'$ be an arbitrary interval, let $f : J' \to \mathscr{X}_0$ be a continuous function and let $(t,s) \in \Omega_{J'}$ with $t > s$. The interval $J$ is non-degenerate by the convention from \cref{sec:conventions}, so there exists $n \in \mathbb{N}$ such that $J$ contains an interval $[s_0,t_0]$ with $t_0 - s_0 = \frac{t - s}{n}$. Define
\[
\ensuremath{\varepsilon} \ensuremath{\coloneqq} \frac{t - s}{n}, \qquad \tau_i \ensuremath{\coloneqq} s + i\ensuremath{\varepsilon}, \qquad i = 0,\ldots,n.
\]
Then, noting that $t - \tau_i \ge 0$,
\begin{align*}
\int_s^t\SUNSTAR{T_0}(t - \tau)f(\tau)\,d\tau &= \sum_{i=1}^n \int_{\tau_{i-1}}^{\tau_i}\SUNSTAR{T_0}(t - \tau)f(\tau)\,d\tau\\
&= \sum_{i=1}^n \SUNSTAR{T_0}(t - \tau_i)\int_{\tau_{i-1}}^{\tau_i}\SUNSTAR{T_0}(\tau_i - \tau)f(\tau)\,d\tau,
\end{align*}
and we need to show that this is in $jX$. Since $jX$ is a positively $\SUNSTAR{T_0}$-invariant subspace, it is sufficient to show this for each of the \ensuremath{\text{w}^{\star}}\xspace-Riemann integrals inside the sum. So, for fixed $1 \le i \le n$ we introduce the new integration variable $\sigma = a \tau + b$ with $a$ and $b$ determined by the conditions
\[
\sigma(\tau_{i-1}) = s_0, \qquad \sigma(\tau_i) = t_0,
\]
so $a = 1$ and $b$ is some irrelevant expression. This yields
\[
\int_{\tau_{i-1}}^{\tau_i}\SUNSTAR{T_0}(\tau_i - \tau)f(\tau)\,d\tau = \int_{s_0}^{t_0}\SUNSTAR{T_0}(t_0 - \sigma)f(\sigma - b)\,d\sigma.
\]
The function $[s_0, t_0] \ni \sigma \mapsto f(\sigma - b) \in \mathscr{X}_0$ can be trivially extended to an element of $C(J,\mathscr{X}_0)$. We conclude that the right hand side of the above equality is indeed in $jX$.
\end{proof}
The next result is not surprising, but we record it explicitly for later use.
\begin{proposition}
\label{prop:jXadmissible}
$jX$ is an admissible range for $\{T_0(t)\}_{t \ge 0}$.
\end{proposition}
\begin{proof}
For arbitrary $f \in C(\mathbb{R},jX)$ and $(t,s) \in \Omega_{\mathbb{R}}$ we have
\[
\int_s^t T_0^{\odot\star}(t - \tau)f(\tau)\,d\tau = \int_s^t j T_0(t - \tau)(j^{-1} \circ f)(\tau)\,d\tau = j \int_s^t T_0(t - \tau) (j^{-1} \circ f)(\tau)\,d\tau \in jX,
\]
where the first integral is a \ensuremath{\text{w}^{\star}}\xspace-Riemann integral and the others are ordinary Riemann integrals.
\end{proof}
\begin{proposition}\label{prop:admclosed}
$\mathscr{F}_0(J)$ is a closed subspace of $C(J,\SUNSTAR{X})$.
\end{proposition}
\begin{proof}
Let $(f_{\alpha})$ be a net in $\mathscr{F}_0(J)$ converging to some $f \in C(J,\SUNSTAR{X})$ and let $(t, s) \in \Omega_J$ be arbitrary. The interval $[s, t]$ is compact in $J$, so $f_{\alpha} \to f$ uniformly on $[s, t]$. By a standard estimate we therefore have
\begin{equation}
\label{eq:net_of_integrals}
\int_s^t \SUNSTAR{T_0}(t - \tau)f_{\alpha}(\tau)\,d\tau \to \int_s^t \SUNSTAR{T_0}(t - \tau)f(\tau)\,d\tau
\end{equation}
in the norm of $\SUNSTAR{X}$. The integrals on the left-hand side are elements of the closed subspace $jX$ of $X^{\odot\star}$, so the same is true for the integral on the right-hand side. Since $(t,s) \in \Omega_J$ was arbitrary, this proves that $f \in \mathscr{F}_0(J)$.
\end{proof}
The following result shows that, given a closed subspace $\mathscr{X}_0$ of $X^{\odot\star}$, in order to establish its admissibility, it is sufficient to verify admissibility of all constant $\mathscr{X}_0$-valued forcing functions defined on some interval.
\begin{theorem}\label{thm:admconstants}
If $\mathscr{X}_0$ is a closed subspace of $\SUNSTAR{X}$ and there exists an interval $J$ such that $\ensuremath{\mathbbm{1}} \otimes x^{\odot\star} \in \mathscr{F}_0(J)$ for all $x^{\odot\star} \in \mathscr{X}_0$, then $\mathscr{X}_0$ is an admissible range for $\{T_0(t)\}_{t \ge 0}$.
\end{theorem}
\begin{proof}
Since $J$ is non-degenerate, we may assume that $J$ is compact. We show that $\mathscr{F}_0(J)$ contains every $\mathscr{X}_0$-valued affine function. The result is then a simple consequence of the fact that continuous functions on compact intervals admit uniform approximations by linear splines, as will be detailed.
\begin{steps}
\item
We prove that for every $x^{\odot\star} \in \mathscr{X}_0$ the linear function $\tau \mapsto \tau x^{\odot\star}$ is in $\mathscr{F}_0(J)$. Let $(t,s) \in \Omega_J$ with $t > s $ and $x^{\odot} \in X^{\odot}$ be arbitrary. Define
\[
w : [s,t] \to X^{\odot\star}, \qquad w(\tau) \ensuremath{\coloneqq} \int_s^{\tau} T_0^{\odot\star}(t - \sigma) x^{\odot\star} \,d\sigma,
\]
and note that
\[
\frac{d}{d\tau}\PAIR{x^{\odot}}{w(\tau)} = \PAIR{x^{\odot}}{T_0^{\odot\star}(t - \tau)x^{\odot\star}}.
\]
Then partial integration yields
\begin{align*}
\PAIR{x^{\odot}}{\int_s^t T_0^{\odot\star}(t - \tau) \tau x^{\odot\star} \,d\tau} &= \int_s^t \tau \PAIR{x^{\odot}}{T_0^{\odot\star}(t - \tau) x^{\odot\star}} \,d\tau\\
&= t \PAIR{x^{\odot}}{w(t)} - \int_s^t \PAIR{x^{\odot}}{w(\tau)} \,d\tau.
\end{align*}
Since adjoints of bounded linear operators commute with \ensuremath{\text{w}^{\star}}\xspace-integration and $jX$ is positively $T_0^{\odot\star}$-invariant, it follows that
\[
w(\tau) = T_0^{\odot\star}(t - \tau) \int_s^{\tau} T_0^{\odot\star}(\tau - \sigma) x^{\odot\star} \,d\sigma \in jX \qquad \text{for all }\,\tau \in [s,t],
\]
and $w$ is norm-continuous. Hence
\[
\int_s^t T_0^{\odot\star}(t - \tau) \tau x^{\odot\star} \,d\tau = t w(t) - \int_s^t w(\tau) \,d\tau \in jX,
\]
where for the inclusion it was also used that the integral in the right-hand side is an ordinary Riemann integral and $jX$ is norm-closed. We conclude that every linear (hence: every affine) function with values in $\mathscr{X}_0$ is in $\mathscr{F}_0(J)$.
\item
Let $f : J \to \mathscr{X}_0$ be continuous, hence uniformly continuous, so $f$ is the uniform limit of a sequence $(f_n)$ of continuous piecewise affine functions. We check that $f_n \in \mathscr{F}_0(J)$ for all $n \in \mathbb{N}$. Let $n \in \mathbb{N}$ and $(t,s) \in \Omega_J$ with $t > s$ be arbitrary. There exist $m \in \mathbb{N}$ and a partition $s = t_0 < t_1 < \ldots < t_m = t$ of $[s,t]$ such that for every $i = 1,\ldots,m$ the restriction of $f_n$ to $[t_{i - 1},t_i]$ is affine. We have
\[
\int_s^t T_0^{\odot\star}(t - \tau) f_n(\tau) \,d\tau = \sum_{i=1}^m T_0^{\odot\star}(t - t_i) \int_{t_{i-1}}^{t_i} T_0^{\odot\star}(t_i - \tau) f_n(\tau) \,d\tau
\]
and each summand in the right-hand side is in $jX$, so the left-hand side is in $jX$ as well. We conclude that $f_n \in \mathscr{F}_0(J)$.
\item
\Cref{prop:admclosed} and the uniform convergence $f_n \to f$ as $n \to \infty$ imply that $f \in \mathscr{F}_0(J)$ as well. The second part of \cref{prop:rangeclass} then implies that $\mathscr{X}_0$ is an admissible range for $\{T_0(t)\}_{t \ge 0}$. \hfill \qedhere
\end{steps}
\end{proof}
The following trivial corollary will be used in the proof of \cref{thm:inhom:answers} in \cref{sec:inhomogeneous}.
\begin{corollary}\label{cor:admlipschitz}
If $\mathscr{X}_0$ is a closed subspace of $\SUNSTAR{X}$ and there exists an interval $J$ such that every Lipschitz function $f : J \to \mathscr{X}_0$ is in $\mathscr{F}_0(J)$, then $\mathscr{X}_0$ is an admissible range for $\{T_0(t)\}_{t \ge 0}$.
\end{corollary}
\subsection{\texorpdfstring{$\text{Weak}^{\star}$}{Weak*}-integration over unbounded intervals}
We discuss the situation where in the \ensuremath{\text{w}^{\star}}\xspace-integral in \cref{eq:v0} defining $v_0$ one (or both) of the integration limits is infinite. This situation will occur in \cref{sec:xsuncross,sec:cm}.
The importance of the norm-closedness of $jX$ for questions of admissibility has already become apparent in the proofs of \cref{prop:admclosed,thm:admconstants}, for example. Following is a simple criterion that ensures that a given \ensuremath{\text{w}^{\star}}\xspace-integral over an unbounded interval equals the limit in norm of a sequence of \ensuremath{\text{w}^{\star}}\xspace-integrals over compact intervals.
\begin{lemma}
\label{lem:wsnormconv}
Let $J$ be an interval and let $(J_n)$ be an increasing sequence of intervals such that $J = \bigcup_n J_n$. If $q : J \to E^{\star}$ is \ensuremath{\text{w}^{\star}}\xspace-Lebesgue measurable and there exists $\hat{q} \in L^1(J,\mathbb{R})$ such that $\|q(\tau)\| \le \hat{q}(\tau)$ for a.e. $\tau \in J$, then
\[
\lim_{n \to \infty} \int_{J_n} q(\tau)\,d\tau = \int_J q(\tau)\,d\tau
\]
in norm.
\end{lemma}
\begin{proof}
The assumptions imply that $q$ is \ensuremath{\text{w}^{\star}}\xspace-Lebesgue integrable, so the \ensuremath{\text{w}^{\star}}\xspace-integral of $q$ over $J$ exists. Let $\chi_n$ be the characteristic function of the interval $J_n$. For arbitrary $x \in E$ with $\|x\| \le 1$,
\[
\Bigl|\PAIR{x}{\int_J q(\tau)\,d\tau - \int_{J_n} q(\tau)\,d\tau}\Bigr| = \Bigl| \int_J \PAIR{x}{(1 - \chi_n(\tau))q(\tau)}\,d\tau \Bigr| \le \int_J (1 - \chi_n(\tau))\hat{q}(\tau)\,d\tau,
\]
so
\begin{equation}
\label{eq:lem:wsnormconv}
\Bigl\| \int_J q(\tau)\,d\tau - \int_{J_n} q(\tau)\,d\tau \Bigr\| \le \int_J (1 - \chi_n(\tau))\hat{q}(\tau)\,d\tau \qquad \text{for all }\, n \in \mathbb{N}.
\end{equation}
The measurable functions $\tau \mapsto (1 - \chi_n(\tau))\hat{q}(\tau)$ converge to zero, pointwise on $J$ as $n \to \infty$, and
\[
|(1 - \chi_n(\tau))\hat{q}(\tau)| \le \hat{q}(\tau) \qquad \text{for all }\,n \in \mathbb{N} \text{ and } \tau \in J.
\]
Hence the right-hand side of \cref{eq:lem:wsnormconv} tends to zero as $n \to \infty$ by the dominated convergence theorem.
\end{proof}
As an application, for use in \cref{sec:xsuncross}, we have the following result. We recall from \cref{sec:conventions} that $\mathbb{K}$ denotes either the real or complex scalar field.
\begin{proposition}
\label{prop:resolvnormconv}
Let $U$ be a $\mathcal{C}_0$-semigroup on $E$ with generator $C$ and let $M_U \ge 1$ and $\omega_U \in \mathbb{R}$ be such that $\|U(t)\| \le M_U e^{\omega_U t}$ for all $t \ge 0$. For any $\lambda \in \mathbb{K}$ with $\RE{\lambda} > \omega_U$, the resolvent of $C^{\star}$ at $\lambda$ is given by
\[
R(\lambda,C^{\star})x^{\star} = \lim_{t \to \infty} \int_0^t e^{-\lambda \tau} U^{\star}(\tau) x^{\star}\,d\tau, \qquad x^{\star} \in E^{\star},
\]
i.e. $R(\lambda,C^{\star})x^{\star}$ is the limit in norm of a net of \ensuremath{\text{w}^{\star}}\xspace-Riemann integrals over compact intervals.
\end{proposition}
\begin{proof}
Choose $\lambda \in \mathbb{K}$ with $\RE{\lambda} > \omega_U$. Then we have the Laplace transform representation
\[
R(\lambda,C)x = \int_0^{\infty} e^{-\lambda \tau} U(\tau)x\,d\tau \qquad \text{for all }\,x \in E,
\]
where the integral is an improper Riemann integral. Let $x^{\star} \in E^{\star}$ be arbitrary and let $(t_n)$ be an arbitrary nonnegative, strictly increasing sequence such that $\lim_{n \to \infty} t_n = \infty$. Then
\begin{equation}
\label{eq:prop:resolvnormconv}
\PAIR{\int_0^{t_n} e^{-\lambda \tau} U(\tau) x\,d\tau}{x^{\star}} = \PAIR{x}{\int_0^{t_n} e^{-\lambda \tau} U^{\star}(\tau) x^{\star}\,d\tau} \qquad \text{for all }\,x \in E \text{ and } n \in \mathbb{N}.
\end{equation}
The integrand inside the integral on the right-hand side is \ensuremath{\text{w}^{\star}}\xspace-Lebesgue measurable on $\mathbb{R}_+$ and
\[
\|e^{-\lambda \tau} U^{\star}(\tau) x^{\star}\| \le M_{U} e^{-(\RE{\lambda} - \omega_U)\tau} \|x^{\star}\| \qquad \text{for all }\,\tau \ge 0,
\]
so the integrand is \ensuremath{\text{w}^{\star}}\xspace-Lebesgue integrable and its norm is dominated by an element of $L^1(\mathbb{R}_+,\mathbb{R})$. \Cref{lem:wsnormconv} then shows that
\[
\lim_{n \to \infty} \int_0^{t_n} e^{-\lambda \tau} U^{\star}(\tau) x^{\star}\,d\tau = \int_0^{\infty} e^{-\lambda \tau} U^{\star}(\tau) x^{\star}\,d\tau
\]
in norm. Take the limit $n \to \infty$ in \cref{eq:prop:resolvnormconv} to obtain
\[
\PAIR{R(\lambda,C)x}{x^{\star}} = \PAIR{x}{\int_0^{\infty} e^{-\lambda \tau} U^{\star}(\tau) x^{\star}\,d\tau} \qquad \text{for all }\,x \in E,
\]
so $R(\lambda,C^{\star})x^{\star} = R(\lambda,C)^{\star}x^{\star} = \int_0^{\infty} e^{-\lambda \tau} U^{\star}(\tau) x^{\star}\,d\tau$, and since $x^{\star} \in E^{\star}$ was arbitrary, the claim follows.
\end{proof}
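As an elementary consistency check of \cref{prop:resolvnormconv}, included only for illustration, take $E = \mathbb{K} = \mathbb{R}$ and $U(t) \ensuremath{\coloneqq} e^{ct}$ for some $c \in \mathbb{R}$, so that $C = C^{\star} = c$, $M_U = 1$ and $\omega_U = c$. For $\lambda > c$ the formula indeed gives
\[
\lim_{t \to \infty} \int_0^t e^{-\lambda\tau} e^{c\tau} x^{\star}\,d\tau = \frac{x^{\star}}{\lambda - c} = R(\lambda,c)\,x^{\star} \qquad \text{for all }\, x^{\star} \in \mathbb{R}.
\]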
\subsection{Relationship with the subspace \texorpdfstring{$X_0^{\odot\times}$}{X0suncross}}\label{sec:xsuncross}
The notion of an admissible range for a given $\mathcal{C}_0$-semigroup $\{T_0(t)\}_{t \ge 0}$, introduced in \cref{def:admrange}, is useful from the viewpoint of particular classes of delay equations \cite{Diekmann2008,DiekmannGyllenberg2012,ADDE1}, but also a bit unsatisfactory from a more fundamental perspective: The specification of $\mathscr{X}_0$ requires a class-dependent choice and, moreover, it is not clear whether a chosen admissible range has an extension that is in some sense maximal. In this light, it is relevant to refer to \cite[p.56]{VanNeerven1992}, where the author introduces the subspace\footnote{We point out that what we denote by $X_0^{\odot\times}$ is denoted by $X^{\odot\times}$ (so, \emph{without} the subscript) in \cite[p.56]{VanNeerven1992}. The notation $X^{\odot\times}_0$ (so, \emph{with} the subscript) is also introduced in \cite[p.56]{VanNeerven1992}, but it has another meaning there, related to the more general case that $A_0$ is a (not necessarily densely defined) Hille-Yosida operator.}
\begin{equation}
\label{eq:x0suncross}
X_0^{\odot\times} \ensuremath{\coloneqq} \{x^{\odot\star} \in X^{\odot\star}\,:\,R(\lambda,A_0^{\odot\star})x^{\odot\star} \in jX\}, \qquad \lambda \in \rho(A_0),
\end{equation}
with $R(\lambda,A_0^{\odot\star})$ the resolvent of $A_0^{\odot\star}$, for $\lambda$ in the resolvent set $\rho(A_0) = \rho(A_0^{\odot\star}) \subseteq \mathbb{K}$. It is \emph{not} assumed that $\mathbb{K} = \mathbb{C}$. We now discuss the relation between $X_0^{\odot\times}$ and the notion of an admissible range for $\{T_0(t)\}_{t \ge 0}$.
The following simple observation shows that $jX$ is invariant under $R(\lambda,A_0^{\odot\star})$ which, combined with the resolvent identity, implies that $X_0^{\odot\times}$ does not depend on the choice of $\lambda$.
\begin{proposition}\label{prop:jR}
$R(\lambda,A_0^{\odot\star})j = j R(\lambda,A_0)$.
\end{proposition}
\begin{proof}
Let $x \in X$ be arbitrary. Then $y \ensuremath{\coloneqq} R(\lambda,A_0)x$ is in $\DOM(A_0)$, so
\[
x = (\lambda I - A_0)y = j^{-1}(\lambda I - A_0^{\odot\star})jy.
\]
Apply $j$ to both sides to obtain $jx = (\lambda I - A_0^{\odot\star})jy$, and then apply $R(\lambda,A_0^{\odot\star})$ to conclude that $R(\lambda,A_0^{\odot\star})jx = jy = jR(\lambda,A_0)x$.
\end{proof}
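To make the remark preceding \cref{prop:jR} explicit, suppose that $R(\mu,A_0^{\odot\star})x^{\odot\star} \in jX$ for some $\mu \in \rho(A_0)$. For any other $\lambda \in \rho(A_0)$ the resolvent identity gives
\[
R(\lambda,A_0^{\odot\star})x^{\odot\star} = R(\mu,A_0^{\odot\star})x^{\odot\star} + (\mu - \lambda)R(\lambda,A_0^{\odot\star})R(\mu,A_0^{\odot\star})x^{\odot\star},
\]
and both terms on the right-hand side are in $jX$: the first by assumption and the second because $R(\lambda,A_0^{\odot\star})jX = jR(\lambda,A_0)X \subseteq jX$ by \cref{prop:jR}. Hence membership of $X_0^{\odot\times}$ is indeed independent of $\lambda$.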
The space $X_0^{\odot\times}$ is closed and positively $T_0^{\odot\star}$-invariant. If $X$ is $\odot$-reflexive for $\{T_0(t)\}_{t \ge 0}$, then $\DOM(A_0^{\odot\star}) \subseteq jX$, so in that case $X_0^{\odot\times} = X^{\odot\star}$. In \cite[Theorem 4.2.2]{VanNeerven1992} it is proven that if $L : X \to X_0^{\odot\times}$ is bounded and linear, then there exists a unique $\mathcal{C}_0$-semigroup $\{T(t)\}_{t \ge 0}$ on $X$ that satisfies \cref{eq:aie0} with $G = L$,
\begin{equation}
\label{eq:laie0}
T(t)x = T_0(t)x + j^{-1}\int_0^t T_0^{\odot\star}(t - \tau)L T(\tau)x\,d\tau \qquad \text{for all }\,x \in X \text{ and } t \ge 0.
\end{equation}
Furthermore, in \cite[Section 4.3]{VanNeerven1992} it is proven that $X_0^{\odot\times}$ is indeed maximal in three different senses; see also \cite[Theorem III.8.4]{Delay1995} for \cite[Theorem 4.3.5]{VanNeerven1992} and \cite{DGH1989} for \cite[Theorems 4.3.5 and 4.3.8]{VanNeerven1992}. Meanwhile, \cref{prop:jR} implies the following trivial case of \cite[Theorem 4.3.6]{VanNeerven1992}.
\begin{corollary}\label{cor:jXXsuncross}
$jX \subseteq X_0^{\odot\times}$.
\end{corollary}
We now show that $X_0^{\odot\times}$ is an admissible range for $\{T_0(t)\}_{t \ge 0}$ that is also maximal, in the sense that $X_0^{\odot\times}$ includes every range that is admissible for $\{T_0(t)\}_{t \ge 0}$.
\begin{theorem}
\label{thm:xsuncross}
$X_0^{\odot\times}$ is an admissible range for $\{T_0(t)\}_{t \ge 0}$. If $\mathscr{X}_0$ is an admissible range for $\{T_0(t)\}_{t \ge 0}$, then $\mathscr{X}_0 \subseteq X_0^{\odot\times}$.
\end{theorem}
\begin{proof}
Fix $\lambda > \omega$. We first prove admissibility, then maximality.
\begin{steps}
\item
Since $X_0^{\odot\times}$ is closed, by \cref{thm:admconstants} it suffices to prove that $\ensuremath{\mathbbm{1}} \otimes x^{\odot\times} \in \mathscr{F}_0(\mathbb{R})$ for all $x^{\odot\times} \in X_0^{\odot\times}$. Let $x^{\odot\times} \in X_0^{\odot\times}$ and $(t,s) \in \Omega_{\mathbb{R}}$ be arbitrary and write $y^{\odot\star}_{\lambda} \ensuremath{\coloneqq} R(\lambda,A_0^{\odot\star})x^{\odot\times}$. Then
\begin{align*}
\int_s^t T_0^{\odot\star}(t - \tau) x^{\odot\times}\,d\tau &= \int_s^t T_0^{\odot\star}(t - \tau) (\lambda I - A_0^{\odot\star}) y^{\odot\star}_{\lambda}\,d\tau\\
&= \lambda \int_s^t T_0^{\odot\star}(t - \tau) y^{\odot\star}_{\lambda}\,d\tau - \int_0^{t - s} T_0^{\odot\star}(\tau) A_0^{\odot\star} y^{\odot\star}_{\lambda} \,d\tau\\
&= \lambda \int_s^t T_0^{\odot\star}(t - \tau) y^{\odot\star}_{\lambda} \,d\tau - (T_0^{\odot\star}(t - s) - I)y^{\odot\star}_{\lambda}.
\end{align*}
By definition $y^{\odot\star}_{\lambda} \in jX$ so \cref{prop:jXadmissible} implies that the integral in the right-hand side is in $jX$. Since $\{T_0^{\odot\star}(t)\}_{t \ge 0}$ and $R(\lambda,A_0^{\odot\star}) = R(\lambda,A_0^{\odot})^{\star}$ commute and \ensuremath{\text{w}^{\star}}\xspace-integration commutes with the adjoint of a bounded linear operator, that integral is also in $\DOM(A_0^{\odot\star})$. The other term in the right-hand side is in $jX \cap \DOM(A_0^{\odot\star})$ as well, because of the commutativity of $\{T_0^{\odot\star}(t)\}_{t \ge 0}$ and $R(\lambda,A_0^{\odot\star})$ and the positive $T_0^{\odot\star}$-invariance of $X_0^{\odot\times}$. Therefore, the left-hand side is in $jX \cap \DOM(A_0^{\odot\star})$. (In fact, only the inclusion in $jX$ is needed to apply \cref{thm:admconstants}.)
\item
Let $\mathscr{X}_0$ be an admissible range for $\{T_0(t)\}_{t \ge 0}$ and let $x^{\odot\star} \in \mathscr{X}_0$ be arbitrary. \Cref{prop:resolvnormconv} shows that
\begin{equation}
\label{eq:thm:xsuncross}
R(\lambda,A_0^{\odot\star})x^{\odot\star} = \lim_{t \to \infty} \int_0^t T_0^{\odot\star}(\tau) e^{-\lambda \tau} x^{\odot\star}\,d\tau
\end{equation}
in norm. The map $\tau \mapsto e^{\lambda \tau} x^{\odot\star}$ is in $\mathscr{F}_0(\mathbb{R})$, so
\[
\int_0^t T_0^{\odot\star}(\tau) e^{-\lambda \tau} x^{\odot\star}\,d\tau = e^{-\lambda t} \int_0^t T_0^{\odot\star}(t - \tau) e^{\lambda \tau} x^{\odot\star}\,d\tau \in jX \qquad \text{for all }\,t \ge 0.
\]
The norm convergence in \cref{eq:thm:xsuncross} then implies that $R(\lambda,A_0^{\odot\star})x^{\odot\star} \in jX$ as well. \qedhere
\end{steps}
\end{proof}
\begin{corollary}
\label{cor:xsuncross}
A continuous perturbation $G : X \to X^{\odot\star}$ is admissible for $\{T_0(t)\}_{t \ge 0}$ if and only if $G$ takes its values in $X_0^{\odot\times}$.
\end{corollary}
\section{Admissibility and perturbation}\label{sec:inhomogeneous}
Let $L : X \to X^{\odot\star}$ be an admissible linear perturbation for the $\mathcal{C}_0$-semigroup $\{T_0(t)\}_{t \ge 0}$. By \cref{def:admrange}, $L$ takes its values in an admissible range $\mathscr{X}_0$. The $\mathcal{C}_0$-semigroup $\{T(t)\}_{t \ge 0}$ with generator $A$ is obtained from $\{T_0(t)\}_{t \ge 0}$ by perturbation with $L$ as in \cite[Theorem 19]{ADDE1}. Given a (possibly infinite) terminal time $0 < t_e \le \infty$ and a forcing function $f : [0,t_e) \to \mathscr{X}_0$, in this section we study inhomogeneous perturbations of $\SUNSTAR{A_0}$ and $\SUNSTAR{A}$ involving $f$ on intervals in $[0,t_e)$.
In the application of dual perturbation theory to specific classes of delay equations (`abstract' or otherwise) the question of higher time regularity of the \ensuremath{\text{w}^{\star}}\xspace-integral in \cref{eq:admissible} is usually deemed less relevant. The reason for this is the direct correspondence between mild solutions of \cref{eq:adde-ic} and solutions of the abstract \emph{integral} equation \cref{eq:aie0-adde}, so the abstract \emph{differential} equation associated with \cref{eq:aie0-adde} (or, more generally, \cref{eq:aie0}) plays at most a motivating role. Nevertheless, differentiability with respect to time, in various senses, was considered in \cite{DSG3} under the assumption of $\odot$-reflexivity. In this section we restrict our attention to \ensuremath{\text{w}^{\star}}\xspace-differentiability.
\begin{definition}
\label{def:wstar-derivative}
Let $J$ be an interval and let $E$ be a Banach space. A function $q : J \to \STAR{E}$ is \textbf{\ensuremath{\text{w}^{\star}}\xspace-differentiable} with \textbf{\ensuremath{\text{w}^{\star}}\xspace-derivative} $\STAR{d}q : J \to \STAR{E}$ if
\[
\frac{d}{dt}\PAIR{x}{q(t)} = \PAIR{x}{\STAR{d}q(t)} \qquad \text{for all } x \in E \text{ and } t \in J.
\]
If in addition $\STAR{d}q$ is \ensuremath{\text{w}^{\star}}\xspace-continuous then $q$ is called \textbf{\ensuremath{\text{w}^{\star}}\xspace-continuously differentiable}.
\end{definition}
\begin{remark}\label{rem:wsloclip}
We note that it is a direct consequence of the uniform boundedness principle and the fundamental theorem of calculus that \ensuremath{\text{w}^{\star}}\xspace-continuously differentiable functions in the sense of the above definition are locally Lipschitz continuous.
\end{remark}
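A small example, not needed in the sequel, separates \ensuremath{\text{w}^{\star}}\xspace-differentiability from norm differentiability. Take $E \ensuremath{\coloneqq} \ell^1$, so $\STAR{E} = \ell^{\infty}$, and define $q : \mathbb{R} \to \ell^{\infty}$ by
\[
q(t) \ensuremath{\coloneqq} \Bigl(\frac{\sin(nt)}{n}\Bigr)_{n \ge 1}.
\]
For every $x \in \ell^1$ the series $\PAIR{x}{q(t)} = \sum_n x_n \sin(nt)/n$ may be differentiated term by term, since the series of derivatives $\sum_n x_n \cos(nt)$ converges uniformly in $t$. Hence $q$ is \ensuremath{\text{w}^{\star}}\xspace-continuously differentiable with $\STAR{d}q(t) = (\cos(nt))_{n \ge 1}$, and $q$ is Lipschitz with constant one, in accordance with \cref{rem:wsloclip}. On the other hand, $q$ is not norm-differentiable at $t = 0$: for $h > 0$ and $n$ with $nh$ close to $1$ we have $|\sin(nh)/(nh) - 1| \ge 1 - \sin(1) > 0$, so the difference quotients do not converge to $\STAR{d}q(0)$ in the norm of $\ell^{\infty}$.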
There are two ways to introduce inhomogeneous perturbations on the generator level. We may either perturb $\SUNSTAR{A_0}$ by $\phi \mapsto L\phi + f$ or we may perturb $\SUNSTAR{A}$ by $f$. In the first case, the initial-value problem for the associated abstract ODE is
\begin{subequations}
\label{eq:inhom0}
\begin{equation}
\label{eq:aode0_inhom}
\STAR{d}(j \circ u)(t) = \SUNSTAR{A_0} j u (t) + L u(t) + f(t), \qquad u(0) = \phi \in X,
\end{equation}
and formal variation-of-constants suggests the corresponding abstract integral equation
\begin{equation}
\label{eq:aie0_inhom}
u(t) = T_0(t)\phi + j^{-1}\int_0^t\SUNSTAR{T_0}(t - \tau)[L u(\tau) + f(\tau)]\,d\tau.
\end{equation}
\end{subequations}
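The formal calculation behind this suggestion is the usual one: if $u$ solves \cref{eq:aode0_inhom}, then for fixed $t$ one expects
\[
\STAR{d}_{\tau}\bigl[\SUNSTAR{T_0}(t - \tau)ju(\tau)\bigr] = \SUNSTAR{T_0}(t - \tau)\bigl[\STAR{d}(j \circ u)(\tau) - \SUNSTAR{A_0}ju(\tau)\bigr] = \SUNSTAR{T_0}(t - \tau)[Lu(\tau) + f(\tau)],
\]
and \ensuremath{\text{w}^{\star}}\xspace-integration over $[0,t]$ then gives $ju(t) - \SUNSTAR{T_0}(t)j\phi = \int_0^t\SUNSTAR{T_0}(t - \tau)[Lu(\tau) + f(\tau)]\,d\tau$, i.e. \cref{eq:aie0_inhom} after application of $j^{-1}$. A rigorous version of this argument, for the perturbed generator $\SUNSTAR{A}$, appears in \cref{prop:inhom:aode_to_aie}.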
If instead we perturb $\SUNSTAR{A}$ then the initial-value problem is
\begin{subequations}
\label{eq:inhom}
\begin{equation}
\label{eq:aode_inhom}
\STAR{d}(j \circ u)(t) = \SUNSTAR{A} j u (t) + f(t), \qquad u(0) = \phi \in X,
\end{equation}
along with the explicit abstract integral expression
\begin{equation}
\label{eq:aie_inhom}
u(t) = T(t)\phi + j^{-1}\int_0^t\SUNSTAR{T}(t - \tau)f(\tau)\,d\tau.
\end{equation}
\end{subequations}
We define a \textbf{subinterval} $J$ of $[0,t_e)$ to be an interval such that $0 \in J \subseteq [0,t_e)$. A \textbf{solution of} \cref{eq:aode0_inhom} on a subinterval $J$ of $[0,t_e)$ is a function $u : J \to X$ taking values in $j^{-1}\DOM(\SUNSTAR{A_0})$ and such that $j \circ u$ is \ensuremath{\text{w}^{\star}}\xspace-continuously differentiable on $J$ and satisfies \cref{eq:aode0_inhom} there. The definition for \cref{eq:aode_inhom} is analogous. Then by \cite[Proposition 22]{ADDE1} a solution of \cref{eq:aode0_inhom} is also a solution of \cref{eq:aode_inhom} and vice versa, so in this sense \cref{eq:aode0_inhom,eq:aode_inhom} are equivalent. Two natural and interrelated questions arise:
\begin{enumerate}[wide=0pt,leftmargin=\parindent,label=\roman*.]
\item
It must be checked that the \ensuremath{\text{w}^{\star}}\xspace-integral in \cref{eq:aie_inhom} takes values in the range of $j$. By assumption $\mathscr{X}_0$ is an admissible range for $\{T_0(t)\}_{t \ge 0}$, but it is not clear whether this implies that $\mathscr{X}_0$ is also an admissible range for the perturbed semigroup $\{T(t)\}_{t \ge 0}$. In other words, we ask about robustness of admissibility of $\mathscr{X}_0$ with respect to the bounded linear perturbation $L$.
\item
We expect that $u$ given by \cref{eq:aie_inhom} is the unique solution of \cref{eq:aie0_inhom} on $[0,t_e)$. Already in the $\odot$-reflexive case this seemingly obvious fact has a less than obvious proof. It relies on a combination of $\odot$-reflexivity and a variation-of-constants formula relating the integrated semigroups corresponding to $\{\SUNSTAR{T_0}(t)\}_{t \ge 0}$ and $\{\SUNSTAR{T}(t)\}_{t \ge 0}$ \cite[Proposition 2.5]{DSG3}, \cite[Lemma III.2.23]{Delay1995}. We seek an alternative proof that also works without $\odot$-reflexivity.
\end{enumerate}
\subsection{Preparatory results}
We assume the setting introduced in the first paragraph of \cref{sec:inhomogeneous}. The resolution of the two questions above can be found in \cref{thm:inhom:answers} in \cref{sec:two-answers}. Its implications are of importance for the nonlinear analysis of \cref{eq:aie0} in a neighborhood of an equilibrium solution. Here we present some preparations for the proof that may also be of independent interest.
\begin{definition}
\label{def:favlinear}
The \textbf{Favard class} of the $\mathcal{C}_0$-semigroup $\{T_0(t)\}_{t \ge 0}$ is the linear subspace
\[
\FAV(T_0) \ensuremath{\coloneqq} \Bigl\{\phi \in X\,:\, \limsup_{h \downarrow 0}\frac{1}{h}\|T_0(h)\phi - \phi\| < \infty\Bigr\}.
\]
\end{definition}
The semigroup property of $\{T_0(t)\}_{t \ge 0}$ implies that $\FAV(T_0)$ is positively $T_0$-invariant. From the definition it also follows easily that $\FAV(T_0)$ consists precisely of those $\phi \in X$ for which $T_0(\cdot)\phi$ is locally Lipschitz. Furthermore, we also have the important equalities
\begin{equation}
\label{eq:Fav_A0sunstar}
\FAV(T_0) = j^{-1}\DOM(\SUNSTAR{A_0}) = j^{-1}\DOM(\SUNSTAR{A}) = \FAV(T)
\end{equation}
that do not require $\odot$-reflexivity. The left and right equalities are due to \cite[(3.36) in Section 3.4]{Clement1987b} and the equality in the middle follows directly from \cite[Proposition 22]{ADDE1}.
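A standard illustration, included only for orientation and under the assumption of the usual state space for retarded equations: let $X = C([-h,0],\mathbb{R}^n)$ and let $\{T_0(t)\}_{t \ge 0}$ be the shift semigroup associated with the trivial delay equation $\dot{x} = 0$,
\[
(T_0(t)\phi)(\theta) =
\begin{cases}
\phi(t + \theta), & t + \theta \le 0,\\
\phi(0), & t + \theta > 0.
\end{cases}
\]
A direct check of the definition then shows that $\FAV(T_0)$ consists precisely of the Lipschitz continuous functions on $[-h,0]$, a strictly larger set than $\DOM(A_0)$, which consists of the continuously differentiable $\phi$ with $\dot{\phi}(0) = 0$.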
\begin{proposition}[\normalfont{cf. \cite[Proposition 2.2]{DSG3}}]
\label{prop:inhom:wsdiff}
If $f : [0,t_e) \to \mathscr{X}_0$ is locally Lipschitz continuous, then $v_0(\cdot,\cdot,f) : \Omega_{[0,t_e)} \to \SUNSTAR{X}$ defined by \cref{eq:v0} takes values in $\DOM(\SUNSTAR{A_0})$ and for every fixed $s \in [0,t_e)$ the function
\[
[s,t_e) \ni t \mapsto v_0(t,s,f) \in \SUNSTAR{X}
\]
is \ensuremath{\text{w}^{\star}}\xspace-differentiable with \ensuremath{\text{w}^{\star}}\xspace-derivative
\begin{equation}
\label{eq:sunstar:16}
\STAR{d}_t v_0(t,s,f) = \SUNSTAR{A_0}v_0(t,s,f) + f(t) \qquad \text{for all } t \in [s,t_e)
\end{equation}
and $\STAR{d}_tv_0(\cdot,\cdot,f) : \Omega_{[0,t_e)} \to \SUNSTAR{X}$ is \ensuremath{\text{w}^{\star}}\xspace-continuous.
\end{proposition}
\begin{proof}
The proof is very close to that in \cite{DSG3}, but here we allow an arbitrary lower limit in the integral defining $v_0(\cdot,\cdot,f)$ and we also require that $f$ takes its values in $\mathscr{X}_0$. Therefore we provide a detailed proof. It is convenient to use the shorthands
\[
J \ensuremath{\coloneqq} [0,t_e), \qquad w(t,s) \ensuremath{\coloneqq} v_0(t,s,f) \qquad \text{for all }\,(t,s) \in \Omega_J.
\]
\begin{steps}
\item
Let $K \subseteq \Omega_J$ be an arbitrary compact subset. We will show that $w(t,s) \in \DOM(\SUNSTAR{A_0})$ for all $(t,s) \in K$. By \cref{eq:Fav_A0sunstar} we have in particular the inclusion $j \FAV(T_0) \subseteq \DOM(\SUNSTAR{A_0})$, so it is sufficient to prove that
\begin{equation}
\label{eq:sunstar:13}
j^{-1}w(t,s) \in \FAV(T_0) \qquad \text{for all }\,(t,s) \in K.
\end{equation}
(Here it is used that $f$ takes its values in $\mathscr{X}_0$.) We note that the map
\[
\Omega_J \ni (t,s) \mapsto \sup_{s \le \tau \le t}\|f(\tau)\| \in \mathbb{R}
\]
is continuous, so there exists a constant $C_K \ge 1$ such that
\[
e^{\omega(t - s)} + \sup_{s \le \tau \le t}\|f(\tau)\| \le C_K \qquad \text{for all }\,(t,s) \in K.
\]
Also, let $K_1, K_2 \subseteq J$ be the compact sets obtained by projecting $K$ onto the first and second coordinates, respectively. There exists a Lipschitz constant $L_K > 0$ for $f$ on the compact set $K_1 \cup K_2$.
\item
Let $(t,s) \in K$ be arbitrary. If $t = s$ then $w(t,s) = 0$, so we may assume that $t > s$ strictly. For $h \in (0,t - s)$ we consider the difference
\begin{align*}
\SUNSTAR{T_0}(h)w(t,s) - w(t,s) &= \int_s^t\SUNSTAR{T_0}(t + h - \tau)f(\tau)\,d\tau - \int_s^t\SUNSTAR{T_0}(t - \tau)f(\tau)\,d\tau\\
&= \int_h^{t - s + h}\underbracket{\SUNSTAR{T_0}(\tau)f(t - \tau + h)}_{(\cdots)_h}\,d\tau - \int_0^{t - s}\underbracket{\SUNSTAR{T_0}(\tau)f(t - \tau)}_{(\cdots)_0}\,d\tau\\
&= \int_h^{t - s}(\cdots)_h\,d\tau + \int_{t - s}^{t - s + h}(\cdots)_h\,d\tau - \int_0^h(\cdots)_0\,d\tau - \int_h^{t - s}(\cdots)_0\,d\tau,
\end{align*}
and this splitting leads to the estimate
\begin{equation}
\label{eq:sunstar:15}
\begin{aligned}
\frac{1}{h}\|\SUNSTAR{T_0}(h)w(t,s) - w(t,s)\| &\le \frac{1}{h}\Bigl\|\int_{t - s}^{t - s + h}\SUNSTAR{T_0}(\tau)f(t - \tau + h)\,d\tau\Bigr\|\\
&+ \frac{1}{h}\Bigl\|\int_h^{t - s}\SUNSTAR{T_0}(\tau)[f(t - \tau + h) - f(t - \tau)]\,d\tau \Bigr\|\\
&+ \frac{1}{h}\Bigl\|\int_0^h\SUNSTAR{T_0}(\tau)f(t - \tau)\,d\tau \Bigr\|.
\end{aligned}
\end{equation}
A standard estimate shows that
\begin{align*}
\Bigl\|\int_{t - s}^{t - s + h}\SUNSTAR{T_0}(\tau)f(t - \tau + h)\,d\tau\Bigr\| &\le \frac{M}{\omega}(e^{\omega h} - 1)e^{\omega(t - s)}\sup_{s \le \tau \le t}\|f(\tau)\|,\\
\Bigl\|\int_0^h\SUNSTAR{T_0}(\tau)f(t - \tau)\,d\tau \Bigr\| &\le \frac{M}{\omega}(e^{\omega h} - 1)\sup_{s \le \tau \le t}\|f(\tau)\|,
\end{align*}
so the superior limits of the first and third terms in the right-hand side of \cref{eq:sunstar:15} do not exceed $M C_K^2$. Also, we have the estimate
\[
\Bigl\|\int_h^{t - s}\SUNSTAR{T_0}(\tau)[f(t - \tau + h) - f(t - \tau)]\,d\tau \Bigr\| \le \frac{M}{\omega}L_K h (e^{\omega(t - s)} - 1),
\]
and this proves that the superior limit of the second term in the right-hand side of \cref{eq:sunstar:15} does not exceed $\frac{M}{\omega} L_K C_K$. We conclude that there exists a constant, again denoted by $C_K \ge 0$ and depending only on the compact set $K$, such that
\begin{equation}
\label{eq:limsup}
\limsup_{h \downarrow 0} \frac{1}{h}\|\SUNSTAR{T_0}(h)w(t,s) - w(t,s)\| \le C_K \qquad \text{for all }\,(t,s) \in K.
\end{equation}
\item
For all $(t,s) \in K$ and all $h > 0$ it holds that
\begin{align*}
\frac{1}{h}\|T_0(h)j^{-1}w(t,s) - j^{-1}w(t,s)\| &= \frac{1}{h}\|j^{-1}\SUNSTAR{T_0}(h)w(t,s) - j^{-1}w(t,s)\|\\
&\le \frac{\|j^{-1}\|}{h}\|\SUNSTAR{T_0}(h)w(t,s) - w(t,s)\|
\end{align*}
so \cref{eq:limsup} implies that \cref{eq:sunstar:13} holds. Moreover, for every $(t,s) \in K$ and every $\SUN{\phi} \in \SUN{X}$ with $\|\SUN{\phi}\| \le 1$ we have
\[
|\PAIR{\SUN{\phi}}{\SUNSTAR{A_0}w(t,s)}| = \lim_{h \downarrow 0}{\frac{1}{h}|\PAIR{\SUN{\phi}}{\SUNSTAR{T_0}(h)w(t,s) - w(t,s)}|} \le C_K.
\]
Since the compact set $K$ was chosen arbitrarily, we conclude that the map
\begin{equation}
\label{eq:A0v}
\Omega_J \ni (t,s) \mapsto \SUNSTAR{A_0}w(t,s) \in \SUNSTAR{X}
\end{equation}
is bounded on compact subsets of $\Omega_J$.
\item
We show that \cref{eq:A0v} is \ensuremath{\text{w}^{\star}}\xspace-continuous. Let $(t,s) \in \Omega_J$ be arbitrary. For any $\SUN{\phi} \in \DOM(\SUN{A_0})$ we have
\[
\PAIR{\SUN{\phi}}{\SUNSTAR{A_0}w(t,s)} = \PAIR{\SUN{A_0}\SUN{\phi}}{w(t,s)},
\]
which is a continuous function of $(t,s)$ by the continuity of $w$ on $\Omega_J$. Next, let $\SUN{\phi} \in \SUN{X}$ be arbitrary. By norm-density of $\DOM(\SUN{A_0})$ in $\SUN{X}$ there exists a sequence $(\SUN{\phi}_n)$ in $\DOM(\SUN{A_0})$ such that $\SUN{\phi_n} \to \SUN{\phi}$ in norm as $n \to \infty$. Let $(t_m,s_m)$ be a sequence in $\Omega_J$ converging to $(t,s)$ as $m \to \infty$. Then we estimate
\begin{align*}
|\PAIR{\SUN{\phi}}{\SUNSTAR{A_0}w(t,s)} - \PAIR{\SUN{\phi}}{\SUNSTAR{A_0}w(t_m,s_m)}| &\le (\|\SUNSTAR{A_0}w(t,s)\| + \|\SUNSTAR{A_0}w(t_m,s_m)\|)\cdot\|\SUN{\phi} - \SUN{\phi_n}\|\\
&+ |\PAIR{\SUN{\phi_n}}{\SUNSTAR{A_0}w(t,s)} - \PAIR{\SUN{\phi_n}}{\SUNSTAR{A_0}w(t_m,s_m)}|.
\end{align*}
The first term in the right-hand side can be made as small as desired merely by fixing $n$ sufficiently large, thanks to the boundedness of \cref{eq:A0v} on compact subsets of $\Omega_J$. The continuity of $\PAIR{\SUN{\phi_n}}{\SUNSTAR{A_0}w(\cdot,\cdot)}$ then implies that the second term becomes arbitrarily small as $m \to \infty$.
\item
It remains to prove the statement about \ensuremath{\text{w}^{\star}}\xspace-differentiability. We do this by showing that
\begin{equation}
\label{eq:sunstar:17}
w(t,s) = \int_s^t{[\SUNSTAR{A_0}w(\tau,s) + f(\tau)]\,d\tau} \qquad \text{for all } t \in J \text{ with } t \ge s.
\end{equation}
(Observe that the right-hand side is well-defined as a \ensuremath{\text{w}^{\star}}\xspace-Riemann integral since the integrand is \ensuremath{\text{w}^{\star}}\xspace-continuous, as shown in the previous step.) Indeed, if \cref{eq:sunstar:17} holds, then for every $\SUN{\phi} \in \SUN{X}$ we have
\[
\frac{d}{dt} \PAIR{\SUN{\phi}}{w(t,s)} = \frac{d}{dt} \int_s^t{\PAIR{\SUN{\phi}}{\SUNSTAR{A_0}w(\tau,s) + f(\tau)}\,d\tau} = \PAIR{\SUN{\phi}}{\SUNSTAR{A_0}w(t,s) + f(t)},
\]
which is precisely \cref{eq:sunstar:16}. A small direct calculation shows that
\[
\PAIR{\SUN{\phi}}{\int_s^t\SUNSTAR{A_0}w(\tau,s)\,d\tau} = \PAIR{\SUN{\phi}}{w(t,s) - \int_s^tf(\tau)\,d\tau} \qquad \text{for all } t \in J \text{ with } t \ge s,
\]
at first for $\SUN{\phi} \in \DOM(\SUN{A_0})$ and then by norm-density for all $\SUN{\phi} \in \SUN{X}$. (Change the order of integration so \cite[Lemma 3.15 in Appendix II.3]{Delay1995} can be applied.) This proves \cref{eq:sunstar:17}.
\hfill \qedhere
\end{steps}
\end{proof}
\begin{corollary}\label{cor:inhom:aie0_to_aode0}
Suppose that $\phi \in j^{-1}\DOM(\SUNSTAR{A_0})$ and $f$ is locally Lipschitz. If $J$ is a subinterval of $[0,t_e)$ and $u$ is a locally Lipschitz continuous solution of \cref{eq:aie0_inhom} on $J$ then $u$ is a solution of \cref{eq:aode0_inhom} on $J$.
\end{corollary}
\begin{proof}
Apply $j$ to \cref{eq:aie0_inhom} to obtain
\begin{equation}
\label{eq:jut}
ju(t) = \SUNSTAR{T_0}(t)j\phi + \int_0^t\SUNSTAR{T_0}(t - \tau)[Lu(\tau) + f(\tau)]\,d\tau \qquad \text{for all } t \in J.
\end{equation}
The first term on the right takes values in $\DOM(\SUNSTAR{A_0})$ and it is \ensuremath{\text{w}^{\star}}\xspace-continuously differentiable with respect to $t \in J$ with \ensuremath{\text{w}^{\star}}\xspace-derivative
\[
\STAR{d}_t \SUNSTAR{T_0}(t)j\phi = \SUNSTAR{A_0}\SUNSTAR{T_0}(t)j\phi.
\]
Also, by \cref{prop:inhom:wsdiff} the \ensuremath{\text{w}^{\star}}\xspace-integral in \cref{eq:jut} takes values in $\DOM(\SUNSTAR{A_0})$ and it is \ensuremath{\text{w}^{\star}}\xspace-continuously differentiable with respect to $t \in J$ with \ensuremath{\text{w}^{\star}}\xspace-derivative
\[
\STAR{d}_t \int_0^t\SUNSTAR{T_0}(t - \tau)[Lu(\tau) + f(\tau)]\,d\tau = \SUNSTAR{A_0} \int_0^t\SUNSTAR{T_0}(t - \tau)[Lu(\tau) + f(\tau)]\,d\tau + Lu(t) + f(t).
\]
So $u$ takes values in $j^{-1}\DOM(\SUNSTAR{A_0})$ and $j \circ u$ is \ensuremath{\text{w}^{\star}}\xspace-continuously differentiable and satisfies \cref{eq:aode0_inhom} on $J$.
\end{proof}
Concerning the following proposition, at first sight it may seem a bit odd to establish well-posedness of the \emph{linear} inhomogeneous problem \cref{eq:aie0_inhom} by means of a fixed-point argument. However, direct substitution of \cref{eq:aie_inhom} into \cref{eq:aie0_inhom} is not successful in the non-$\odot$-reflexive case, if only for the reason given in the first of the two questions above. Indeed, the following independent well-posedness result will turn out to be instrumental in the proof of \cref{thm:inhom:answers}.
\begin{proposition}\label{prop:inhom:aie0sol}
Let $J \subseteq [0,t_e)$ be a compact subinterval. The following two statements hold.
\begin{enumerate}[label=\roman*.]
\item
For every $\phi \in X$ there exists a unique solution $u_{\phi,f}$ of \cref{eq:aie0_inhom} on $J$ and the map
\begin{equation}
\label{eq:umap_cont}
X \times C(J,\mathscr{X}_0) \ni (\phi,f) \mapsto u_{\phi,f} \in C(J,X)
\end{equation}
is continuous.
\item
If $\phi \in j^{-1}\DOM(\SUNSTAR{A_0})$ and $f$ is locally Lipschitz then there exist sequences of Lipschitz functions $u_m : J \to X$ and $f_m : J \to \mathscr{X}_0$ such that
\begin{equation}
\label{eq:aie0_inhom_approx}
u_m(t) = T_0(t)\phi + j^{-1}\int_0^t\SUNSTAR{T_0}(t - \tau)[L u_m(\tau) + f_m(\tau)]\,d\tau \qquad \text{for all }\,t \in J,
\end{equation}
and $f_m \to f$ and $u_m \to u_{\phi,f}$ as $m \to \infty$, both uniformly on $J$.
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{steps}
\item
The first statement is proven by a standard fixed-point argument. Let $M \ge 1$ and $\omega \in \mathbb{R}$ be as in \cref{eq:expboundT0}. Following \cite{Duistermaat1995}, on the Banach space $C(J,X)$ we introduce the one-parameter family of equivalent norms
\[
\|u\|_{\eta} \ensuremath{\coloneqq} \sup_{t \in J}e^{-\eta t}\|u(t)\|, \qquad \eta \in \mathbb{R}.
\]
The space $C(J,X)$ is complete with respect to each of these norms, and $\|\cdot\|_0$ is the usual supremum norm. For each $(\phi,f) \in X \times C(J,\mathscr{X}_0)$ we define the operator $K_{\phi,f}$ on $C(J,X)$ by
\begin{equation}
\label{eq:inhom_fp}
(K_{\phi,f}u)(t) \ensuremath{\coloneqq} T_0(t)\phi + j^{-1}\int_0^t\SUNSTAR{T_0}(t - \tau)[Lu(\tau) + f(\tau)]\,d\tau \qquad \text{for all }\, t \in J.
\end{equation}
Choose $\eta > \omega$. For all $u, \hat{u} \in C(J,X)$ and all $t \in J$ we have
\begin{align*}
e^{-\eta t}\|(K_{\phi,f}u)(t) - (K_{\phi,f}\hat{u})(t)\| &\le \|j^{-1}\| M \|L\| \int_0^t e^{-(\eta - \omega)(t - \tau)} e^{-\eta \tau}\|u(\tau) - \hat{u}(\tau)\|\,d\tau\\
&\le \frac{\|j^{-1}\| M \|L\|}{\eta - \omega} \|u - \hat{u}\|_{\eta}
\end{align*}
so if we choose $\eta$ such that $2\|j^{-1}\| M \|L\| \le \eta - \omega$ then $K_{\phi,f}$ is a uniform contraction with respect to the $\|\cdot\|_{\eta}$-norm. Moreover, the maps $X \times C(J,\mathscr{X}_0) \ni (\phi,f) \mapsto K_{\phi,f}u \in C(J,X)$ are continuous for each fixed $u \in C(J,X)$. The uniform contraction principle \cite[Theorem 0.3.2]{Hale1980} therefore gives the first statement.
\item
Assume in addition that $\phi \in j^{-1}\DOM(\SUNSTAR{A_0})$ and $f$ is locally Lipschitz. Let $\LIP(J,X)$ be the subspace of $C(J,X)$ consisting of Lipschitz functions. We will show that $K_{\phi,f}$ maps $\LIP(J,X)$ into itself. First, from \cref{eq:Fav_A0sunstar} we see that $T_0(\cdot)\phi$ is in $\LIP(J,X)$. Second, let $u \in \LIP(J,X)$ be arbitrary and let $\hat{u}$ be a Lipschitz extension of $u$ to $[0,t_e)$. Then $L \circ \hat{u} + f$ is locally Lipschitz on $[0,t_e)$ with values in $\mathscr{X}_0$. \Cref{prop:inhom:wsdiff} and \cref{rem:wsloclip} show that $v_0(\cdot,0,L \circ \hat{u} + f)$ is locally Lipschitz on $[0,t_e)$ as well. Hence $K_{\phi,f}u = T_0(\cdot)\phi + j^{-1}\circ v_0(\cdot,0,L \circ \hat{u} + f)$ is in $\LIP(J,X)$.
\item
Choose an arbitrary $u_0 \in \LIP(J,X)$. The sequence of fixed-point iterates defined by
\[
u_m \ensuremath{\coloneqq} K_{\phi,f}u_{m-1}, \qquad m \in \mathbb{N},
\]
is in $\LIP(J,X)$. From \cref{eq:inhom_fp} we have for all $m \in \mathbb{N}$ and all $t \in J$,
\[
u_m(t) = T_0(t)\phi + j^{-1}\int_0^t\SUNSTAR{T_0}(t - \tau)[L u_m(\tau) + f(\tau) + L\{u_{m-1}(\tau) - u_m(\tau)\}]\,d\tau.
\]
For every $m \in \mathbb{N}$ we define $f_m \ensuremath{\coloneqq} f + L \circ (u_{m-1} - u_m)$. Then each $f_m$ is Lipschitz on $J$ with values in $\mathscr{X}_0$ and \cref{eq:aie0_inhom_approx} holds. Since $K_{\phi,f}$ is a contraction, $u_m \to u_{\phi,f}$ uniformly on $J$, and the estimate $\|f_m(t) - f(t)\| \le \|L\|\,\|u_{m-1}(t) - u_m(t)\|$ then shows that $f_m \to f$ uniformly on $J$ as well. \hfill \qedhere
\end{steps}
\end{proof}
We recall that $\{T(t)\}_{t \ge 0}$ is the $\mathcal{C}_0$-semigroup defined in \cite[Theorem 19]{ADDE1}. For any interval $J$ we denote by $\mathscr{F}(J)$ the class of admissible forcing functions for $\{T(t)\}_{t \ge 0}$ on $J$.
\begin{proposition}\label{prop:inhom:aode_to_aie}
If $J$ is a subinterval of $[0,t_e)$ and $u : J \to X$ is a solution of \cref{eq:aode_inhom} then $f \in \mathscr{F}(J)$ and $u$ is given by \cref{eq:aie_inhom}.
\end{proposition}
\begin{proof}
\begin{steps}
\item
The proof is rather standard, see for instance \cite[Lemma 13]{ADDE1}, but in the present case we need to work with the \ensuremath{\text{w}^{\star}}\xspace-topology instead of the norm-topology. Let $(t,s) \in \Omega_J$ with $t > s$ be arbitrary. Define $w : [s,t] \to \SUNSTAR{X}$ by $w(\tau) \ensuremath{\coloneqq} \SUNSTAR{T}(t - \tau)ju(\tau)$. We claim that $w$ is \ensuremath{\text{w}^{\star}}\xspace-differentiable with derivative
\begin{equation}
\label{eq:sunstar:30}
\STAR{d}w(\tau) = \SUNSTAR{T}(t - \tau)\STAR{d}(j \circ u)(\tau) - \SUNSTAR{T}(t - \tau)\SUNSTAR{A}ju(\tau) \qquad \text{for all }\,\tau \in [s,t],
\end{equation}
which is just what one would expect on formal grounds. To prove it, let $\tau \in [s,t]$ and $\SUN{\phi} \in \SUN{X}$ be arbitrary. For any $h \in \mathbb{R} \setminus \{0\}$ such that $\tau + h \in [s,t]$ we have
\begin{align*}
\PAIR{\SUN{\phi}}{w(\tau + h) - w(\tau)} &= \PAIR{\SUN{\phi}}{\SUNSTAR{T}(t - (\tau + h))ju(\tau + h) - \SUNSTAR{T}(t -\tau)ju(\tau)}\\
&= \PAIR{\SUN{\phi}}{\SUNSTAR{T}(t - (\tau + h))[ju(\tau + h) - ju(\tau)]}\\
&+ \PAIR{\SUN{\phi}}{[\SUNSTAR{T}(t - (\tau + h)) - \SUNSTAR{T}(t - \tau)]ju(\tau)}\\
&= \PAIR{\SUN{T}(t - (\tau + h))\SUN{\phi}}{ju(\tau + h) - ju(\tau)}\\
&+ \PAIR{\SUN{\phi}}{[\SUNSTAR{T}(t - (\tau + h)) - \SUNSTAR{T}(t - \tau)]ju(\tau)}.
\end{align*}
Regarding the first pairing above, by the strong continuity of $\SUN{T}$ we have
\[
\SUN{T}(t - (\tau + h))\SUN{\phi} \to \SUN{T}(t - \tau)\SUN{\phi} \qquad \text{in norm as } h \to 0.
\]
We also have
\[
\frac{1}{h}(ju(\tau + h) - ju(\tau)) \to \STAR{d}(j \circ u)(\tau) \qquad \text{in the \ensuremath{\text{w}^{\star}}\xspace-topology as } h \to 0,
\]
while the difference quotient remains bounded in norm thanks to the Lipschitz continuity of $j \circ u$ on $[s,t]$. Together this implies that
\[
\frac{1}{h}\PAIR{\SUN{T}(t - (\tau + h))\SUN{\phi}}{ju(\tau + h) - ju(\tau)} \to \PAIR{\SUN{T}(t - \tau)\SUN{\phi}}{\STAR{d}(j \circ u)(\tau)} \qquad \text{as } h \to 0.
\]
Regarding the second pairing, since $ju(\tau) \in \DOM(\SUNSTAR{A})$ it follows that
\[
\frac{1}{h}\PAIR{\SUN{\phi}}{[\SUNSTAR{T}(t - (\tau + h)) - \SUNSTAR{T}(t - \tau)]ju(\tau)} \to -\PAIR{\SUN{\phi}}{\SUNSTAR{T}(t - \tau)\SUNSTAR{A}ju(\tau)} \qquad \text{as } h \to 0.
\]
Consequently, we have
\[
\frac{1}{h}\PAIR{\SUN{\phi}}{w(\tau + h) - w(\tau)} \to \PAIR{\SUN{\phi}}{\SUNSTAR{T}(t - \tau)\STAR{d}(j \circ u)(\tau) - \SUNSTAR{T}(t - \tau)\SUNSTAR{A}ju(\tau)}
\]
and this proves \cref{eq:sunstar:30}.
\item
Substituting \cref{eq:aode_inhom} into \cref{eq:sunstar:30} yields
\[
\STAR{d}w(\tau) = \SUNSTAR{T}(t - \tau)f(\tau) \qquad \text{for all }\,\tau \in [s,t]
\]
so $\STAR{d}w$ is \ensuremath{\text{w}^{\star}}\xspace-continuous. For every $\SUN{\phi} \in \SUN{X}$ we have
\begin{align*}
\PAIR{\SUN{\phi}}{ju(t) - \SUNSTAR{T}(t - s)ju(s)} &= \PAIR{\SUN{\phi}}{w(t)} - \PAIR{\SUN{\phi}}{w(s)}\\
&= \int_s^t{\PAIR{\SUN{\phi}}{\STAR{d}w(\tau)}\,d\tau}\\
&= \int_s^t{\PAIR{\SUN{\phi}}{\SUNSTAR{T}(t - \tau)f(\tau)}\,d\tau}.
\end{align*}
Since $\SUN{\phi}$ and also $s$ and $t$ were arbitrary, we conclude that
\[
ju(t) - \SUNSTAR{T}(t - s)ju(s) = \int_s^t{\SUNSTAR{T}(t - \tau)f(\tau)\,d\tau} \qquad \text{for all }\,(t,s) \in \Omega_J.
\]
We recall that $\SUNSTAR{T}(t - s)ju(s) = jT(t - s)u(s)$, so the above equality implies that \cref{eq:admissible} holds as well with $\{T_0(t)\}_{t \ge 0}$ replaced by $\{T(t)\}_{t \ge 0}$. We conclude that indeed $f \in \mathscr{F}(J)$ and $u$ is given by \cref{eq:aie_inhom} on $J$. \qedhere
\end{steps}
\end{proof}
\subsection{A theorem on admissibility and perturbation}\label{sec:two-answers}
We are now in a position to answer the two questions that were asked following \cref{eq:inhom0,eq:inhom}.
\begin{theorem}\label{thm:inhom:answers}
The following two statements hold.
\begin{thmenum}
\item\label{thm:inhom:answers:admrange}
$\mathscr{X}_0$ is an admissible range for $\{T(t)\}_{t \ge 0}$.
\item\label{thm:inhom:answers:sol}
The unique solution of \cref{eq:aie0_inhom} on $[0,t_e)$ is given by \cref{eq:aie_inhom}.
\end{thmenum}
\end{theorem}
\begin{proof}
In the first two steps we prove the first statement of the theorem and we show that the unique solution of \cref{eq:aie0_inhom} on any compact subinterval of $[0,t_e)$ is given by \cref{eq:aie_inhom}. In the third step it will then be easy to extend the latter result to $[0,t_e)$ itself.
\begin{steps}
\item
Let $J$ be an arbitrary compact subinterval of $[0,t_e)$. Assume that $\phi \in j^{-1}\DOM(\SUNSTAR{A_0})$ and that $f : [0,t_e) \to \mathscr{X}_0$ is locally Lipschitz. \Cref{prop:inhom:aie0sol} gives a unique solution $u_{\phi,f} : J \to X$ of \cref{eq:aie0_inhom} as well as the sequences of Lipschitz functions $u_m : J \to X$ and $f_m : J \to \mathscr{X}_0$ appearing in \cref{eq:aie0_inhom_approx}. For each $m \in \mathbb{N}$ let $\hat{f}_m : [0,t_e) \to \mathscr{X}_0$ be a Lipschitz extension of $f_m$. \Cref{cor:inhom:aie0_to_aode0} with $\hat{f}_m$ and $u_m$ instead of $f$ and $u$ shows that each $u_m$ is a solution of the initial-value problem
\[
\STAR{d}(j \circ u_m)(t) = \SUNSTAR{A_0} j u_m(t) + L u_m(t) + \hat{f}_m(t), \qquad t \in J, \qquad u_m(0) = \phi.
\]
Hence each $u_m$ is also a solution of the initial-value problem
\[
\STAR{d}(j \circ u_m)(t) = \SUNSTAR{A} j u_m(t) + \hat{f}_m(t), \qquad t \in J, \qquad u_m(0) = \phi.
\]
\Cref{prop:inhom:aode_to_aie} with $u_m$ and $\hat{f}_m$ instead of $u$ and $f$ implies that
\begin{equation}
\label{eq:aie_inhom_approx}
f_m \in \mathscr{F}(J), \qquad u_m(t) = T(t)\phi + j^{-1}\int_0^t\SUNSTAR{T}(t - \tau)f_m(\tau)\,d\tau, \qquad \text{for all }\,m \in \mathbb{N},\,t \in J.
\end{equation}
Here it was also used that $\hat{f}_m|_J = f_m$ for all $m \in \mathbb{N}$. \Cref{prop:admclosed} for $\{T(t)\}_{t \ge 0}$ instead of $\{T_0(t)\}_{t \ge 0}$ and the uniform convergence of $f_m$ to $f$ on $J$ then imply that $f \in \mathscr{F}(J)$. Since $J$ was chosen arbitrarily, this proves that $f \in \mathscr{F}([0,t_e))$. \Cref{cor:admlipschitz} lets us conclude that $\mathscr{X}_0$ is an admissible range for $\{T(t)\}_{t \ge 0}$.
\item
Taking the limit $m \to \infty$ in \cref{eq:aie_inhom_approx} we obtain the identity
\begin{equation}
\label{eq:aie_inhom_dense}
u_{\phi,f}(t) = T(t)\phi + j^{-1}\int_0^t\SUNSTAR{T}(t - \tau)f(\tau)\,d\tau, \qquad \text{for all }\,t \in J,
\end{equation}
and for all $\phi \in j^{-1}\DOM(\SUNSTAR{A_0})$ and all locally Lipschitz functions $f : [0,t_e) \to \mathscr{X}_0$. The set of such pairs $(\phi,f)$ is dense in $X \times C(J,\mathscr{X}_0)$, so the continuity of \cref{eq:umap_cont} implies that \cref{eq:aie_inhom_dense} holds for all $\phi \in X$ and all continuous functions $f : [0,t_e) \to \mathscr{X}_0$. Therefore the unique solution $u_{\phi,f}$ of \cref{eq:aie0_inhom} on $J$ is given by \cref{eq:aie_inhom}.
\item
We extend this result to $[0,t_e)$. \Cref{prop:inhom:aie0sol} implies that any solution of \cref{eq:aie0_inhom} on $[0,t_e)$ must be unique. Define $\hat{u}_{\phi,f} : [0,t_e) \to X$ by $\hat{u}_{\phi,f}(t) \ensuremath{\coloneqq} u_{\phi,f}^J(t)$ where $J$ is a compact subinterval of $[0,t_e)$ such that $t \in J$ and $u_{\phi,f}^J$ is the unique solution of \cref{eq:aie0_inhom} on $J$. Then $\hat{u}_{\phi,f}$ is well-defined, continuous and satisfies \cref{eq:aie0_inhom} on $[0,t_e)$. The definition of $\hat{u}_{\phi,f}$ and the fact that each $u_{\phi,f}^J$ is given by \cref{eq:aie_inhom} imply that $\hat{u}_{\phi,f}$ itself is given by \cref{eq:aie_inhom}. \hfill \qedhere
\end{steps}
\end{proof}
Once it has been established that $\mathscr{X}_0$ is an admissible range for $\{T(t)\}_{t \ge 0}$ as well, it becomes possible to prove \cref{thm:inhom:answers:sol} in a way that exploits the symmetry between the semigroups $\{T_0(t)\}_{t \ge 0}$ and $\{T(t)\}_{t \ge 0}$. This may be of interest particularly in the $\odot$-reflexive case, when $\SUNSTAR{X}$ itself is an admissible range for $\{T(t)\}_{t \ge 0}$ and the following is an alternative for \cite[Proposition 2.5]{DSG3} and \cite[Lemma III.2.23]{Delay1995}.
\begin{proof}[Alternative proof of \cref{thm:inhom:answers:sol}]
For every $\phi \in X$ and every $f : [0,t_e) \to \mathscr{X}_0$ continuous we define $u_{\phi,f} : [0,t_e) \to X$ by \cref{eq:aie_inhom}. \Cref{thm:inhom:answers:admrange} ensures that $u_{\phi,f}$ is well-defined. First assume that $\phi \in j^{-1}\DOM(\SUNSTAR{A})$ and that $f$ is locally Lipschitz, so $u_{\phi,f}$ is locally Lipschitz. We now apply \cref{cor:inhom:aie0_to_aode0} with $\{T(t)\}_{t \ge 0}$ in place of $\{T_0(t)\}_{t \ge 0}$ and $-L \circ u_{\phi,f} + f$ in place of $f$. This yields that $u_{\phi,f}$ is a solution on $[0,t_e)$ of
\[
\STAR{d}(j \circ u_{\phi,f})(t) = \SUNSTAR{A}ju_{\phi,f}(t) + f(t), \qquad u(0) = \phi,
\]
and therefore $u_{\phi,f}$ is a solution on $[0,t_e)$ of
\[
\STAR{d}(j \circ u_{\phi,f})(t) = \SUNSTAR{A_0}ju_{\phi,f}(t) + [Lu_{\phi,f}(t) + f(t)], \qquad u(0) = \phi.
\]
Next, we apply \cref{prop:inhom:aode_to_aie} with $\{T_0(t)\}_{t \ge 0}$ in place of $\{T(t)\}_{t \ge 0}$ and $L \circ u_{\phi,f} + f$ in place of $f$ to conclude that $u_{\phi,f}$ satisfies
\begin{equation}
\label{eq:aie_inhom_dense_alt}
u_{\phi,f}(t) = T_0(t)\phi + j^{-1}\int_0^t\SUNSTAR{T_0}(t - \tau)[L u_{\phi,f}(\tau) + f(\tau)]\,d\tau
\end{equation}
for all $t \in [0,t_e)$, all $\phi \in j^{-1}\DOM(\SUNSTAR{A})$ and all locally Lipschitz functions $f : [0,t_e) \to \mathscr{X}_0$. For every compact subinterval $J$ of $[0,t_e)$ the set of all such pairs $(\phi,f)$ is dense in $X \times C(J,\mathscr{X}_0)$, and from \cref{eq:aie_inhom} we see that the map \cref{eq:umap_cont} is continuous. Therefore \cref{eq:aie_inhom_dense_alt} holds for all $t \in J$, all $\phi \in X$ and all continuous $f : [0,t_e) \to \mathscr{X}_0$. Since $J$ can be chosen arbitrarily, this concludes the proof.
\end{proof}
In conclusion of this section we return to the space $X_0^{\odot\times}$ from \cref{eq:x0suncross}. Define $X^{\odot\times}$ as the analogue for $\{T(t)\}_{t \ge 0}$ of that space,
\begin{equation}
\label{eq:xsuncross}
X^{\odot\times} \ensuremath{\coloneqq} \{x^{\odot\star} \in X^{\odot\star}\,:\, R(\lambda,A^{\odot\star})x^{\odot\star} \in jX\}, \qquad \lambda \in \rho(A).
\end{equation}
As a consequence of \cref{thm:xsuncross,thm:inhom:answers:admrange}, the subscript can, and will, be dropped.
\begin{corollary}
\label{cor:x0suncross-is-xsuncross}
The maximal admissible ranges for $T$ and $T_0$ coincide: $X^{\odot\times} = X_0^{\odot\times}$.
\end{corollary}
\section{The nonlinear semiflow and linearization}\label{sec:linearization}
\emph{All vector spaces in this section are over $\mathbb{R}$.}
\medskip
\noindent Let $\{T_0(t)\}_{t \ge 0}$ be a $\mathcal{C}_0$-semigroup on $X$. We assume:
\begin{hypothesis}\label{hyp:G}
$G : X \to X^{\odot\star}$ in \cref{eq:aie0} is an admissible perturbation for $\{T_0(t)\}_{t \ge 0}$ and of class $C^k$ for some $k \ge 1$.
\end{hypothesis}
\noindent \Cref{cor:xsuncross,cor:x0suncross-is-xsuncross} show that for the first part we could equivalently have assumed that $G$ takes its values in the space $X^{\odot\times}$. The mean value inequality \cite[Theorem 1.8 in Chapter 1]{AmbrosettiProdi1993} implies that $G$ is locally Lipschitz. The properties of admissibility and local Lipschitz continuity together guarantee that for each $\phi \in X$ there exists a maximal solution $u_{\phi} : J_{\phi} \to X$ of \cref{eq:aie0} on a maximal interval of existence $J_{\phi} = [0,t_{\phi})$ for some $0 < t_{\phi} \le \infty$. This is proven exactly as in \cite[Theorem VII.3.4]{Delay1995} but with admissibility of $G$ for $\{T_0(t)\}_{t \ge 0}$ as a substitute for $\odot$-reflexivity of $X$ with respect to $\{T_0(t)\}_{t \ge 0}$. With the family of maximal solutions of \cref{eq:aie0} we then associate the map $\Sigma : \DOM(\Sigma) \to X$ defined by
\begin{equation}
\label{eq:semiflow}
\DOM(\Sigma) \ensuremath{\coloneqq} \{(t,\phi) \in \mathbb{R}_+ \times X\,:\, t \in J_{\phi}\}, \qquad \Sigma(t,\phi) \ensuremath{\coloneqq} u_{\phi}(t),
\end{equation}
and we can verify that $\Sigma$ is a semiflow on $X$ in the sense of \cite[Definition VII.2.1]{Delay1995}.
\subsection{Splitting of the perturbation and linearization}
Let $\hat{\phi} \in X$ be an equilibrium of $\Sigma$. We assume without loss of generality that $\hat{\phi} = 0$, i.e.
\[
J_0 = \mathbb{R}_+, \qquad \Sigma(t,0) = 0 \qquad \text{for all }\,t \ge 0.
\]
It is not difficult to verify that this is equivalent to the condition $G(0) = 0$. Let $L \ensuremath{\coloneqq} DG(0) : X \to \SUNSTAR{X}$ be the Fr\'echet derivative of $G$ at $\hat{\phi}$ and write
\begin{equation}
\label{eq:splittingGLR}
G(\phi) = L\phi + R(\phi), \qquad \phi \in X,
\end{equation}
which defines the $C^k$-smooth operator $R : X \to \SUNSTAR{X}$ as the nonlinear part of $G$ at $\hat{\phi}$. The admissibility of $G$ for $\{T_0(t)\}_{t \ge 0}$ implies the admissibility of $L$ and $R$ for $\{T_0(t)\}_{t \ge 0}$, so in particular $L$ satisfies \textbf{(H0)} in \cite[Section 6]{ADDE1}. Let $\{T(t)\}_{t \ge 0}$ be the $\mathcal{C}_0$-semigroup defined in \cite[Theorem 19]{ADDE1}. We now consider the abstract integral equation
\begin{equation}
\label{eq:aie}
u(t) = T(t)\phi + j^{-1}\int_0^t\SUNSTAR{T}(t - \tau)R(u(\tau))\,d\tau, \qquad t \ge 0.
\end{equation}
\Cref{thm:inhom:answers:admrange} implies that $R$ is an admissible perturbation for $\{T(t)\}_{t \ge 0}$, so the \ensuremath{\text{w}^{\star}}\xspace-Riemann integral in \cref{eq:aie} takes values in $jX$. The following is a generalization of \cite[Proposition VII.5.4]{Delay1995}.
\begin{proposition}\label{prop:aie0_aie}
Given $0 < t_e \le \infty$ and $\phi \in X$, a function $u : [0,t_e) \to X$ is a solution of \cref{eq:aie0} if and only if $u$ is a solution of \cref{eq:aie}.
\end{proposition}
\begin{proof}
Suppose that $u$ is a solution of \cref{eq:aie0} on $[0,t_e)$. Then
\[
u(t) = T_0(t)\phi + j^{-1}\int_0^t\SUNSTAR{T_0}(t - \tau)[Lu(\tau) + R(u(\tau))]\,d\tau \qquad \text{for all }\,t \in [0,t_e),
\]
so $u$ is a solution of \cref{eq:aie0_inhom} on $[0,t_e)$ with $f = R \circ u$. \Cref{thm:inhom:answers:sol} implies that $u$ is given by \cref{eq:aie_inhom} with $f = R \circ u$, so $u$ satisfies \cref{eq:aie} on $[0,t_e)$. The converse is proven by reversing the order of the steps.
\end{proof}
\begin{proposition}\label{prop:linearization}
Let $\hat{\phi} \in X$ be an equilibrium of the semiflow $\Sigma$. Then $\Sigma$ is partially differentiable with respect to the state at $\hat{\phi}$, uniformly on compact time intervals, with partial derivative $D_2\Sigma(t,\hat{\phi}) = T(t)$. Explicitly, for every $t_0 \ge 0$ and every $\ensuremath{\varepsilon} > 0$ there exists $\delta > 0$ such that
\[
\|\Sigma(t,\phi) - \hat{\phi} - T(t)(\phi - \hat{\phi})\| \le \ensuremath{\varepsilon} \|\phi - \hat{\phi}\|
\]
for all $\phi \in X$ with $\|\phi - \hat{\phi}\| \le \delta$ and for all $t \in [0,t_0]$.
\end{proposition}
\begin{proof}
We may assume that $\hat{\phi} = 0$. \Cref{prop:aie0_aie} implies that $\Sigma(\cdot,\phi)$ is a solution of \cref{eq:aie} on $J_{\phi}$ for every $\phi \in X$, so
\begin{equation}
\label{eq:sigma_T_diff}
\Sigma(t,\phi) - T(t)\phi = j^{-1}\int_0^t\SUNSTAR{T}(t - \tau)R(\Sigma(\tau,\phi))\,d\tau \qquad \text{for all }\,t \in J_{\phi}.
\end{equation}
In order to estimate the right-hand side, we can proceed exactly as in the proof of \cite[Proposition VII.5.5]{Delay1995}, but with the following minor changes. First, in this context we are not interested in uniformity with respect to the parameter, and this shortens the proof. Second, at the beginning of the proof of the claim in step 4, assuming the contrary yields the existence of $t \in (t_1,t_0]$ such that
\[
\|\Sigma(s,\phi)\| < \overline{\delta} \text{ for all } s \in [0,t) \text{ and } \|\Sigma(t,\phi)\| \ge \overline{\delta},
\]
which corrects some small misprints. All else in the proof remains valid without modifications.
\end{proof}
\begin{remark}
If $G$ satisfies a \emph{global} Lipschitz condition with Lipschitz constant $L_G$, then the proof becomes considerably simpler, because the a priori estimate of $\Sigma(t,\phi)$ on $[0,t_0]$ can now easily be derived using Gronwall's inequality in integral form \cite[Corollary I.6.6]{Hale1980}. Indeed, from \cref{eq:aie0} we have
\[
e^{-\omega t}\|\Sigma(t,\phi)\| \le M \|\phi\| + \|j^{-1}\| M L_G \int_0^t e^{-\omega \tau} \|\Sigma(\tau,\phi)\|\,d\tau,
\]
so Gronwall implies
\[
\|\Sigma(t,\phi)\| \le M e^{(\omega + \|j^{-1}\| M L_G)t} \|\phi\|
\]
for all $\phi \in X$ and for all $t \in J_{\phi}$. This estimate is precisely of the form \cite[(5.5) in the proof of Proposition VII.5.5]{Delay1995}.
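In more detail, writing $v(t) \ensuremath{\coloneqq} e^{-\omega t}\|\Sigma(t,\phi)\|$, the integral inequality above reads
\[
v(t) \le M\|\phi\| + \|j^{-1}\| M L_G \int_0^t v(\tau)\,d\tau,
\]
so Gronwall's inequality gives $v(t) \le M\|\phi\| e^{\|j^{-1}\| M L_G t}$, which upon multiplication by $e^{\omega t}$ is exactly the displayed estimate.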
\end{remark}
\subsection{The translation-invariant integral equation}
As part of the construction of local center manifolds in \cref{sec:cm}, we will be interested in solutions that exist for all time, such as periodic orbits. In order to discuss such solutions in a meaningful way, we introduce the translation-invariant version of \cref{eq:aie},
\begin{equation}
\label{eq:aie-st}
u(t) = T(t-s)u(s) + j^{-1}\int_s^t{\SUNSTAR{T}(t - \tau)R(u(\tau))\,d\tau}, \qquad -\infty < s \le t < \infty,
\end{equation}
and we briefly comment on the simple relationship between \cref{eq:aie,eq:aie-st}. Let $J$ be an interval. By definition, a \textbf{solution on $J$} of \cref{eq:aie-st} is a continuous function $u : J \to X$ that satisfies \cref{eq:aie-st} for all $(t,s) \in \Omega_J$.
\begin{proposition}\label{prop:aie-translation-invariant}
Let $J$ be an interval. The function $u : J \to X$ is a solution of \textup{\cref{eq:aie-st}} if and only if
\begin{equation}
\label{eq:usigma}
t - s \in J_{u(s)}, \qquad u(t) = \Sigma(t - s, u(s)),
\end{equation}
for all $(t,s) \in \Omega_J$.
\end{proposition}
\begin{proof}
Suppose first that $u$ is a solution of \cref{eq:aie-st} and let $(t,s) \in \Omega_J$. We may assume $s < t$ strictly, for otherwise \cref{eq:usigma} holds trivially. Then $w \ensuremath{\coloneqq} u(\cdot + s)$ is a solution on $[0,t-s]$ of \cref{eq:aie} with $\phi = u(s)$. Hence $t - s \in J_{u(s)}$ so $(t - s, u(s)) \in \mathcal{D}(\Sigma)$, and $\Sigma(t - s, u(s)) = w(t - s) = u(t)$.
Conversely, let us assume that \cref{eq:usigma} holds for all $(t,s) \in \Omega_J$. Continuity of $u$ is not difficult: if $t \in J$ is an interior point, then there exists $s \in J$ with $s < t$, and for $\delta > 0$ small enough,
\[
u(t \pm \delta) = \Sigma(t \pm \delta - s, u(s)) \to \Sigma(t - s,u(s)) = u(t)
\]
as $\delta \downarrow 0$. The case that $t$ is an endpoint is even simpler. It remains to show that $u$ satisfies \cref{eq:aie-st} on $J$. Let $(t,s) \in \Omega_J$ be arbitrary and let $u_{\phi} : J_{\phi} \to X$ be the maximal solution of \cref{eq:aie} for $\phi = u(s)$. By \cref{eq:usigma} we have $u(t) = u_{\phi}(t - s)$ and this equality together with \cref{eq:aie} then implies that \cref{eq:aie-st} holds.
\end{proof}
\section{Center manifolds in the \texorpdfstring{non-$\odot$-reflexive}{non-sun-reflexive} case}\label{sec:cm}
\emph{All vector spaces in this section are over $\mathbb{R}$.}
\medskip
Let $\hat{\phi} = 0$ be an equilibrium of the semiflow $\Sigma$ defined by \cref{eq:semiflow} and let the $\mathcal{C}_0$-semigroup $\{T(t)\}_{t \ge 0}$ be the linearization of $\Sigma$ at $\hat{\phi}$ as in \cref{prop:linearization}. In this section we explain how the construction of a local center manifold for $\hat{\phi}$ in \cite[Chapter IX]{Delay1995} can be adapted to the non-$\odot$-reflexive case, using the results obtained in the previous sections.
While we deliberately stay close to the presentation in \cite{Delay1995}, there are also differences. As far as these differences stem from non-$\odot$-reflexivity, they are confined to \cref{sec:Keta}. This illustrates the general principle that, with the results from \cref{sec:admissibility,sec:inhomogeneous,sec:linearization} at hand, one can overcome the lack of $\odot$-reflexivity rather easily by applying the substitution rule $X^{\odot\star} \to X^{\odot\times}$; see also the comments in \cref{sec:conclusion}.
Another difference of some significance, although by itself unrelated to non-$\odot$-reflexivity, is in the treatment of the spectral decomposition in \cref{sec:decomposition,sec:lifting}. Smaller modifications and additions are indicated in the places where they occur.
\subsection{Spectral decompositions of \texorpdfstring{$X$}{X} and \texorpdfstring{$X^{\odot\times}$}{Xsuncross}}\label{sec:decomposition}
In \cite[Sections VIII.2 and IX.2]{Delay1995} the construction of local invariant manifolds for $\hat{\phi}$ starts from assumptions about the existence of a topological direct sum decomposition of $X^{\odot\star}$ into certain positively $\SUNSTAR{T}$-invariant subspaces, and about the behavior of $\{T^{\odot\star}(t)\}_{t \ge 0}$ on those subspaces. There the assumptions are formulated directly in the large space $\SUNSTAR{X}$, motivated by the fact that the nonlinearity does not map $X$ into itself, but rather into $\SUNSTAR{X}$.
Alternatively, one could first formulate these assumptions on $X$ and only then prove that they can be lifted to $\SUNSTAR{X}$, or rather to $X^{\odot\times} \subseteq X^{\odot\star}$, since in the general (non-$\odot$-reflexive) case, the nonlinearity $R$ introduced in \cref{eq:splittingGLR} takes its values in $X^{\odot\times}$. Apart from the substitution by $X^{\odot\times}$, this is along the lines of \cite[Theorems IV.2.11 and IV.2.12]{Delay1995} but with the additional observation that the assumption of eventual compactness of $\{T(t)\}_{t \ge 0}$ included there is not really needed. Indeed, the decomposition of $X$ and the corresponding exponential estimates may \emph{themselves} be taken as the assumption from which the analogous properties in $X^{\odot\times}$ can then be deduced. We see two advantages of this approach:
\begin{enumerate}
\item
The task of lifting from $X$ to $X^{\odot\times}$ is solved once and for all. There is no need to repeat the procedure for different classes of delay equations with different spectral properties, e.g. first for classical DDEs generating eventually compact $\mathcal{C}_0$-semigroups, next for abstract DDEs or for equations with infinite delay, and so on. We recall from \cref{sec:xsuncross} that $X^{\odot\times} = X^{\odot\star}$ in the $\odot$-reflexive case.
\item
A direct formulation of the assumptions in $X^{\odot\times}$ obscures the fact that the involved subspaces and operators stem from corresponding objects originally defined in or on $X$. Making this relationship more explicit by starting out on $X$ instead of $X^{\odot\times}$ adds clarity.
\end{enumerate}
We therefore begin by reformulating the assumptions from \cite[Section IX.2]{Delay1995} in $X$.
\begin{hypothesis}
\label{hyp:cm}
The space $X$ and the $\mathcal{C}_0$-semigroup $\{T(t)\}_{t \ge 0}$ on $X$ have the following properties:
\begin{hypenum}
\item
\label{hyp:cm:sum}
$X$ admits a direct sum decomposition
\begin{equation}
\label{eq:decomposition-of-x}
X = X_- \oplus X_0 \oplus X_+,
\end{equation}
which is topological, i.e. each summand is closed.
\item
\label{hyp:cm:invariance}
The subspaces $X_-$, $X_0$ and $X_+$ are positively $T$-invariant.
\item
\label{hyp:cm:group}
$\{T(t)\}_{t \ge 0}$ can be extended to a $\mathcal{C}_0$-group on $X_0$ and on $X_+$.
\item
\label{hyp:cm:trichotomy}
The decomposition \cref{eq:decomposition-of-x} is an \textbf{exponential trichotomy} on $\mathbb{R}$, meaning that there exist $a < 0 < b$ such that for every $\ensuremath{\varepsilon} > 0$ there exists $K_{\ensuremath{\varepsilon}} > 0$ such that
\begin{subequations}
\label{eq:trichotomy}
\begin{alignat}{2}
\|T(t)\phi\| &\le K_{\ensuremath{\varepsilon}} e^{at}\|\phi\| && \qquad \text{for all }\,t \ge 0 \text{ and } \phi \in X_-,\label{eq:trichotomy:stable}\\
\|T(t)\phi\| &\le K_{\ensuremath{\varepsilon}} e^{\ensuremath{\varepsilon}|t|}\|\phi\| && \qquad \text{for all }\,t \in \mathbb{R} \text{ and } \phi \in X_0, \label{eq:trichotomy:center}\\
\|T(t)\phi\| &\le K_{\ensuremath{\varepsilon}} e^{bt}\|\phi\| && \qquad \text{for all }\,t \le 0 \text{ and } \phi \in X_+.\label{eq:trichotomy:unstable}
\end{alignat}
\end{subequations}
\end{hypenum}
We call $X_-$, $X_0$ and $X_+$ the \textbf{stable subspace}, \textbf{center subspace} and \textbf{unstable subspace}.
\end{hypothesis}
In \cref{sec:lifting} it is explained in some detail how these assumptions induce a decomposition of $X^{\odot\times}$ with identical properties. The end result can be found in \cref{prop:xsuncross-cm}.
\begin{hypothesis}
\label{hyp:cm:j}
The subspaces $[X^{\odot\times}]_0$ and $[X^{\odot\times}]_+$ are contained in $jX$.
\end{hypothesis}
In concrete cases, the hypotheses \cref{hyp:cm} and \cref{hyp:cm:j} are usually verified by decomposing the spectrum of the generator of the complexification of $\{T(t)\}_{t \ge 0}$. The following result in this spirit applies to a reasonably large class of $\mathcal{C}_0$-semigroups. Its proof can be found in \cref{sec:real-case}, while an application is given in \cref{thm:cm-dde} in \cref{sec:ddes}.
\begin{restatable}{theorem}{hypcmthm}
\label{thm:hypcm}
Suppose $\{T(t)\}_{t \ge 0}$ is eventually norm continuous and let $A_{\mathbbm{c}}$ be the complexification of its generator. If $\sigma(A_{\mathbbm{c}})$ is the pairwise disjoint union of the nonempty sets
\begin{alignat*}{2}
&\sigma_- &&\ensuremath{\coloneqq} \{\lambda \in \sigma(A_{\mathbbm{c}}) \,:\, \RE{\lambda} < 0\},\\
&\sigma_0 &&\ensuremath{\coloneqq} \{\lambda \in \sigma(A_{\mathbbm{c}}) \,:\, \RE{\lambda} = 0\},\\
&\sigma_+ &&\ensuremath{\coloneqq} \{\lambda \in \sigma(A_{\mathbbm{c}}) \,:\, \RE{\lambda} > 0\},
\end{alignat*}
where $\sigma_-$ is closed and both $\sigma_0$ and $\sigma_+$ are compact, and if
\[
\gamma_- \ensuremath{\coloneqq} \sup_{\lambda \in \sigma_-}\RE{\lambda} < 0 < \inf_{\lambda \in \sigma_+}\RE{\lambda} \ensuremath{\eqqcolon} \gamma_+,
\]
then \cref{hyp:cm} and \cref{hyp:cm:j} hold for $\{T(t)\}_{t \ge 0}$ on $X$.
\end{restatable}
Let $E$ be a Banach space and let $\eta \in \mathbb{R}$. From \cite[Definitions VIII.2.5 and IX.2.2]{Delay1995} we recall the three Banach spaces
\begin{align*}
\ensuremath{\textup{BC}}^{\eta}(\mathbb{R}_{\pm},E) &\ensuremath{\coloneqq} \{f \in C(\mathbb{R}_{\pm},E) \,:\, \sup_{t \in \mathbb{R}_{\pm}}{e^{-\eta t}\|f(t)\|} < \infty \},\\
\ensuremath{\textup{BC}}^{\eta}(\mathbb{R},E) &\ensuremath{\coloneqq} \{f \in C(\mathbb{R},E)\,:\, \sup_{t \in \mathbb{R}}{e^{-\eta |t|}\|f(t)\|} < \infty \},
\end{align*}
equipped with their respective weighted supremum norms. If $\eta = 0$ we will suppress the superscript $\eta$.
Let $J$ be an interval. Analogously to \cref{eq:aie-st}, a \textbf{solution on $J$} of the linear homogeneous equation
\begin{equation}
\label{eq:hom-st}
u(t) = T(t - s)u(s), \qquad (t,s) \in \Omega_J,
\end{equation}
is defined as a continuous function $u : J \to X$ such that \cref{eq:hom-st} holds for all $(t,s) \in \Omega_J$.
\begin{lemma}[\normalfont{\cite[Lemma IX.2.4, proof as exercise]{Delay1995}}]
\label{lem:XsigmaBC}
Let $\eta \in (0,\min\{-a,b\})$. Then
\begin{align*}
X_- &= \{\phi \in X \, :\, \text{there exists a solution of \cref{eq:hom-st} on $\mathbb{R}_+$ through $\phi$ which belongs to $\ensuremath{\textup{BC}}^a(\mathbb{R}_+,X)$}\},\\
X_0 &= \{\phi \in X \, :\, \text{there exists a solution of \cref{eq:hom-st} on $\mathbb{R}$ through $\phi$ which belongs to $\ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$}\},\\
X_+ &= \{\phi \in X \, :\, \text{there exists a solution of \cref{eq:hom-st} on $\mathbb{R}_-$ through $\phi$ which belongs to $\ensuremath{\textup{BC}}^b(\mathbb{R}_-,X)$}\}.
\end{align*}
\end{lemma}
\begin{proof}
We give the proof for $X_0$ only. If $\phi \in X_0$ then $u_{\phi} : \mathbb{R} \to X$ defined by $u_{\phi}(t) \ensuremath{\coloneqq} T(t)\phi$ is a solution of \cref{eq:hom-st} through $\phi$. Choose $0 < \ensuremath{\varepsilon} \le \eta$ and $K_{\ensuremath{\varepsilon}}$ according to \cref{hyp:cm:trichotomy}, then
\[
e^{-\eta|t|} \|u_{\phi}(t)\| \le K_{\ensuremath{\varepsilon}} e^{-(\eta - \ensuremath{\varepsilon})|t|}\|\phi\| \le K_{\ensuremath{\varepsilon}}\|\phi\| \qquad \text{for all }\,t \in \mathbb{R},
\]
so $u_{\phi} \in \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$. Conversely, suppose that $\phi \in X$ is such that there exist a solution $u_{\phi} \in \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$ of \cref{eq:hom-st} and a time $t_0 \in \mathbb{R}$ such that $u_{\phi}(t_0) = \phi$. We will show that $P_{\pm}\phi = 0$. By \cref{hyp:cm:group},
\[
P_+\phi = T(t_0 - t)T(t - t_0)P_+\phi \qquad \text{for all }\,t \ge t_0,
\]
so \cref{eq:trichotomy:unstable} implies, for all $t \ge t_0$,
\begin{align*}
\|P_+\phi\| &\le K_{\ensuremath{\varepsilon}} e^{b(t_0 - t)}\|T(t - t_0)P_+\phi\|\\
&\le K_{\ensuremath{\varepsilon}} e^{b(t_0 - t)}\|u_{\phi}(t)\|,
\end{align*}
and therefore, for $t \ge \max\{t_0,0\}$,
\[
e^{-\eta t}\|u_{\phi}(t)\| \ge \frac{e^{-b t_0}}{K_{\ensuremath{\varepsilon}}}e^{(b - \eta)t}\|P_+\phi\| \to \infty \qquad \text{as } t \to \infty,
\]
unless $P_+\phi = 0$. Similarly, since $u_{\phi}$ is a solution of \cref{eq:hom-st} on $\mathbb{R}$ and $u_{\phi}(t_0) = \phi$,
\[
\begin{aligned}
P_-\phi &= P_-T(t_0 - t)u_{\phi}(t)\\
&= T(t_0 - t)P_-u_{\phi}(t)
\end{aligned} \qquad \text{for all }\,t \le t_0,
\]
so \cref{eq:trichotomy:stable} implies
\[
\|P_-\phi\| \le K_{\ensuremath{\varepsilon}} e^{a(t_0 - t)}\|u_{\phi}(t)\| \qquad \text{for all }\,t \le t_0,
\]
and therefore, for $t \le \min\{t_0,0\}$,
\[
e^{-\eta|t|}\|u_{\phi}(t)\| \ge \frac{e^{-a t_0}}{K_{\ensuremath{\varepsilon}}} e^{(a + \eta)t} \|P_-\phi\| \to \infty \qquad \text{as } t \to -\infty,
\]
unless $P_-\phi = 0$. We have shown that $P_{\pm}\phi = 0$, so $\phi \in X_0$.
\begin{mycomment}
\medskip
\par
We also give the proof for the stable subspace. If $\phi \in X_-$, then $u_{\phi} : \mathbb{R}_+ \to X$ defined by $u_{\phi}(t) \ensuremath{\coloneqq} T(t)\phi$ is clearly a solution of \cref{eq:hom-st} on $\mathbb{R}_+$. By \cref{eq:trichotomy:stable} we have
\[
e^{-a t}\|u_{\phi}(t)\| \le K_{\ensuremath{\varepsilon}} \|\phi\| \qquad \text{for all }\,t \ge 0,
\]
so $u_{\phi} \in \ensuremath{\textup{BC}}^a(\mathbb{R}_+,X)$. Conversely, let $\phi \in X$ and suppose there exists a solution $u_{\phi}$ of \cref{eq:hom-st} on $\mathbb{R}_+$ that belongs to $\ensuremath{\textup{BC}}^a(\mathbb{R}_+,X)$ and such that $u_{\phi}(t_0) = \phi$ for some $t_0 \ge 0$. We may assume that $t_0 = 0$, because otherwise we can consider the translated solution $v_{\phi}(\cdot) \ensuremath{\coloneqq} T(t_0)u_{\phi}(\cdot)$. Then, by \cref{hyp:cm:group},
\[
(I - P_-)\phi = T(-t)T(t)(I - P_-)\phi \qquad \text{for all }\,t \ge 0,
\]
and since $I - P_- = P_0 + P_+$, it follows, for $t \ge 0$,
\begin{align*}
\|(I - P_-)\phi\| &\le \|T(-t)T(t)P_0\phi\| + \|T(-t)T(t)P_+\phi\|\\
&\le K_{\ensuremath{\varepsilon}} e^{\ensuremath{\varepsilon} t}\|T(t)P_0\phi\| + K_{\ensuremath{\varepsilon}} e^{-b t}\|T(t)P_+\phi\|\\
&\le K_{\ensuremath{\varepsilon}} (e^{\ensuremath{\varepsilon} t} + e^{-bt})\|u_{\phi}(t)\|,
\end{align*}
where the second line is due to \cref{hyp:cm:trichotomy}. Now choose $\ensuremath{\varepsilon} = \frac{|a|}{2}$ and $K_{\ensuremath{\varepsilon}}$ according to \cref{hyp:cm:trichotomy}, then
\[
e^{-a t} \|u_{\phi}(t)\| \ge \frac{e^{\ensuremath{\varepsilon} t}}{K_{\ensuremath{\varepsilon}}(1 + e^{-(b +\ensuremath{\varepsilon})t})}\|(I - P_-)\phi\| \to \infty \qquad \text{as } t \to \infty,
\]
unless $(I - P_-)\phi = 0$, which shows that $\phi \in X_-$.
\end{mycomment}
\end{proof}
\subsection{Bounded solutions of the linear inhomogeneous equation}\label{sec:Keta}
Let $J$ be an interval. Analogously to \cref{eq:aie-st,eq:hom-st}, a \textbf{solution on $J$} of the linear \emph{in}homogeneous equation
\begin{equation}
\label{eq:inhom-st}
u(t) = T(t - s)u(s) + j^{-1}\int_s^t T^{\odot\star}(t - \tau)f(\tau)\,d\tau, \qquad (t,s) \in \Omega_J,
\end{equation}
is defined to be a continuous function $u : J \to X$ such that \cref{eq:inhom-st} holds for all $(t,s) \in \Omega_J$. In the analytic (as distinguished from: geometric) construction of a local center manifold, a key step is the introduction of a linear operator that associates with each appropriate forcing function a solution of \cref{eq:inhom-st} on $\mathbb{R}$ with prescribed behavior both at $t = 0$ and $t = \pm \infty$. For the $\odot$-reflexive case this is done in \cite{Delay1995}. The results from \cref{sec:admissibility,sec:inhomogeneous} can be used to obtain an extension to the non-$\odot$-reflexive setting with relatively little effort. Define, for any $\eta \in (0,\min\{-a,b\})$,
\begin{align*}
\mathcal{K}_{\eta} : \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X^{\odot\times}) \to \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X), \qquad (\mathcal{K}_{\eta}f)(t) &\ensuremath{\coloneqq} j^{-1}\int_0^t T^{\odot\star}(t - \tau)P_0^{\odot\times}f(\tau)\,d\tau\\
&+ j^{-1}\int_{\infty}^t T^{\odot\star}(t - \tau)P_+^{\odot\times}f(\tau)\,d\tau\\
&+ j^{-1}\int_{-\infty}^t T^{\odot\star}(t - \tau)P_-^{\odot\times}f(\tau)\,d\tau.
\end{align*}
Regarding the following proposition, we would like to stress that the only fundamental difference between the proof for the $\odot$-reflexive case in \cite{Delay1995} and the proof here lies in the verification that the last \ensuremath{\text{w}^{\star}}\xspace-integral in the above definition of $\mathcal{K}_{\eta}$ indeed takes values in $jX$; also see \cref{eq:Imin-lim}. Still, we have included some more details because the proof in \cite{Delay1995} is rather condensed and part of the estimates were left as exercises.
\begin{proposition}[\normalfont{cf. \cite[Lemma IX.3.2 and Exercise 3.4]{Delay1995}}]\label{prop:Keta}
Let $\eta \in (0,\min\{-a,b\})$.
\begin{propenum}
\item
$\mathcal{K}_{\eta}$ is a well-defined bounded linear operator.
\item
\label{prop:Keta:solution}
$\mathcal{K}_{\eta}f$ is the unique solution of \cref{eq:inhom-st} in $\ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$ with zero $X_0$-component at $t = 0$.
\end{propenum}
\end{proposition}
\begin{proof}
Let $0 < \ensuremath{\varepsilon} < \eta$ and select $K_{\ensuremath{\varepsilon}} > 0$ according to \cref{prop:xsuncross-cm:trichotomy}.
\begin{steps}[label=\roman*.]
\item
For any given $f \in \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X^{\odot\times})$, the three integrals in the definition of $\mathcal{K}_{\eta}f$ naturally define functions $I_{\mu} : \mathbb{R} \to X^{\odot\star}$ for $\mu \in \{0,+,-\}$. We begin by checking that these functions are well-defined, continuous, take values in $jX$ and satisfy certain estimates.
\begin{description}[wide,labelindent=0pt,itemsep=0.5em]
\item[$I_0$:]
This is the simplest case, because the integration domain is compact. \Cref{hyp:cm:j,lem:Pj:3} imply, for all $t \in \mathbb{R}$,
\begin{align*}
I_0(t) &= \int_0^t T^{\odot\star}(t - \tau)j j^{-1} P_0^{\odot\times}f(\tau)\,d\tau\\
&= j \int_0^t T(t - \tau) j^{-1} P_0^{\odot\times}f(\tau)\,d\tau,
\end{align*}
where the second integral is a Riemann-integral. We see that $I_0$ takes values in $jX$. It is straightforward to obtain the estimate
\begin{equation}
\label{eq:I0-est}
\|I_0(t)\| \le \frac{K_{\ensuremath{\varepsilon}}}{\eta - \ensuremath{\varepsilon}}e^{\eta|t|}\|f\|_{\eta} \qquad \text{for all }\,t \in \mathbb{R}
\end{equation}
by using \cref{prop:xsuncross-cm:trichotomy}, distinguishing between the cases $t \ge 0$ and $t \le 0$.
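For instance, if $t \ge 0$ then the central estimate from \cref{prop:xsuncross-cm:trichotomy} yields
\[
\|I_0(t)\| \le K_{\ensuremath{\varepsilon}}\|f\|_{\eta}\int_0^t e^{\ensuremath{\varepsilon}(t - \tau)}e^{\eta\tau}\,d\tau = K_{\ensuremath{\varepsilon}}\|f\|_{\eta}\,e^{\ensuremath{\varepsilon} t}\,\frac{e^{(\eta - \ensuremath{\varepsilon})t} - 1}{\eta - \ensuremath{\varepsilon}} \le \frac{K_{\ensuremath{\varepsilon}}}{\eta - \ensuremath{\varepsilon}}e^{\eta t}\|f\|_{\eta},
\]
and the case $t \le 0$ is analogous.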
\item[$I_+$:]
\Cref{hyp:cm:j,lem:Pj:3} imply that, for any $t \in \mathbb{R}$,
\[
T^{\odot\star}(t - \tau)P_+^{\odot\times}f(\tau)= jT(t - \tau)j^{-1}P_+^{\odot\times}f(\tau) \qquad \text{for all }\,\tau \ge t,
\]
so the integrand in $I_+(t)$ is continuous on $[t,\infty)$. \Cref{prop:xsuncross-cm:trichotomy} implies the estimate
\[
\|T^{\odot\star}(t - \tau)P_+^{\odot\times}f(\tau)\| \le K_{\ensuremath{\varepsilon}} e^{bt} e^{-b\tau + \eta|\tau|} \|f\|_{\eta} \qquad \text{for all }\,\tau \ge t.
\]
Hence the \ensuremath{\text{w}^{\star}}\xspace-integral defining $I_+(t)$ exists, and it can be evaluated as a Bochner integral over $[t,\infty)$,
\[
I_+(t) = -j\int_t^{\infty} T(t - \tau)j^{-1}P_+^{\odot\times}f(\tau) \,d\tau,
\]
showing that $I_+$ takes values in $jX$. From the above estimate it follows that
\[
\|I_+(t)\| \le K_{\ensuremath{\varepsilon}} e^{bt} \|f\|_{\eta} \int_t^{\infty} e^{-b\tau + \eta|\tau|}\,d\tau.
\]
For the integral in the right-hand side,
\[
\int_t^{\infty}{e^{-b\tau + \eta|\tau|}\,d\tau} =
\begin{cases}
\frac{e^{-(b - \eta)t}}{b - \eta} &\text{if } t \ge 0,\\
\frac{e^{-(b + \eta)t}}{b + \eta} - \frac{1}{b + \eta} + \frac{1}{b - \eta} &\text{if } t \le 0.
\end{cases}
\]
Now, for $\alpha \ge 1$ we have the inequality
\[
\frac{\alpha}{b + \eta} + \frac{1}{b - \eta} \le \frac{\alpha}{b - \eta} + \frac{1}{b + \eta}.
\]
(The difference of the right- and left-hand sides equals $(\alpha - 1)\big(\tfrac{1}{b - \eta} - \tfrac{1}{b + \eta}\big) \ge 0$.) If $t \le 0$ then $e^{-(b + \eta)t} \ge 1$, so in this case
\[
\int_t^{\infty}{e^{-b\tau + \eta|\tau|}\,d\tau} \le \frac{e^{-(b + \eta)t}}{b - \eta} \qquad \text{if } t \le 0.
\]
In summary, we have obtained the estimate
\begin{equation}
\label{eq:exp-int-est}
\int_t^{\infty}{e^{-b\tau + \eta|\tau|}\,d\tau} \le \frac{ e^{-bt}e^{\eta|t|}}{b - \eta} \qquad \text{for all }\,t \in \mathbb{R},
\end{equation}
and therefore
\begin{equation}
\label{eq:Iplus-est}
\|I_+(t)\| \le K_{\ensuremath{\varepsilon}}\|f\|_{\eta}\frac{e^{\eta|t|}}{b - \eta} \qquad \text{for all }\,t\in \mathbb{R}.
\end{equation}
\item[$I_-$:] For any $t \in \mathbb{R}$, the integrand in $I_-(t)$ is \ensuremath{\text{w}^{\star}}\xspace-continuous, hence $\ensuremath{\text{w}^{\star}}\xspace$-Lebesgue measurable, and by \cref{prop:xsuncross-cm:trichotomy} it satisfies the estimate
\[
\|T^{\odot\star}(t - \tau)P_-^{\odot\times}f(\tau)\| \le K_{\ensuremath{\varepsilon}} e^{a(t - \tau) + \eta|\tau|}\|f\|_{\eta} \qquad \text{for all }\,\tau \le t,
\]
so the \ensuremath{\text{w}^{\star}}\xspace-integral defining $I_-(t)$ exists. \Cref{lem:wsnormconv} implies that
\begin{equation}
\label{eq:Imin-lim}
I_-(t) = \lim_{n \to \infty} \int_{t-n}^t T^{\odot\star}(t - \tau)P_-^{\odot\times} f(\tau)\,d\tau,
\end{equation}
in norm. The function $P_-^{\odot\times} \circ f$ takes values in $X^{\odot\times}$, which is an admissible range for $T$, so each integral inside the limit is an element of $jX$. The norm convergence implies that the same is true for $I_-(t)$. From the above estimate it follows that
\[
\|I_-(t)\| \le K_{\ensuremath{\varepsilon}} e^{at} \|f\|_{\eta} \int_{-\infty}^t e^{-a\tau + \eta|\tau|}\,d\tau.
\]
The substitution $-\tau = s$ inside the integral enables an application of \cref{eq:exp-int-est}, which yields
\[
\int_{-\infty}^t{e^{-a\tau + \eta|\tau|}\,d\tau} = \int_{-t}^{\infty}{e^{-(-a)s + \eta|s|}\,ds} \le \frac{e^{-(-a)(-t)}e^{\eta|t|}}{-a - \eta} = \frac{e^{-at}e^{\eta|t|}}{-a - \eta},
\]
and therefore
\begin{equation}
\label{eq:Imin-est}
\|I_-(t)\| \le K_{\ensuremath{\varepsilon}}\|f\|_{\eta} \frac{e^{\eta|t|}}{-a - \eta} \qquad \text{for all } t \in \mathbb{R}.
\end{equation}
\end{description}
Combining the estimates \cref{eq:I0-est,eq:Iplus-est,eq:Imin-est}, we obtain
\[
e^{-\eta|t|}\|(\mathcal{K}_{\eta}f)(t)\| \le \|j^{-1}\| K_{\ensuremath{\varepsilon}} \|f\|_{\eta} \left(\frac{1}{\eta - \ensuremath{\varepsilon}} + \frac{1}{b - \eta} + \frac{1}{-a - \eta}\right) \qquad \text{for all }\,t \in \mathbb{R},
\]
so $\mathcal{K}_{\eta}f \in \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$ and $\mathcal{K}_{\eta}$ is a bounded linear operator.
\item
Given any $f \in \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X^{\odot\times})$, it is straightforward to check that $\mathcal{K}_{\eta}f$ is a solution of \cref{eq:inhom-st}.
By \cref{prop:xsuncross-cm:group,prop:xsuncross-cm:invariance} the subspaces $[X^{\odot\times}]_0$ and $[X^{\odot\times}]_+$ are invariant for the group $\{T^{\odot\star}(t)\}_{t \in \mathbb{R}}$ and the subspace $[X^{\odot\times}]_-$ is positively invariant for the semigroup $\{T^{\odot\star}(t)\}_{t \ge 0}$. So, for every $\mu \in \{0,+,-\}$ the projectors inside $I_{\mu}(t)$ commute with the (semi)group operators, which gives $I_{\mu}(t) = P_{\mu}^{\odot\times}I_{\mu}(t)$ for all $t \in \mathbb{R}$. \Cref{lem:Pj:1} then implies that $j^{-1} \circ I_{\mu}$ maps into $X_{\mu}$. Since $I_0(0) = 0$ it follows that $(\mathcal{K}_{\eta}f)(0)$ has a vanishing component in $X_0$.
For uniqueness, suppose that $v \in \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$ is another solution of \cref{eq:inhom-st} such that $P_0v(0) = 0$. Then $w \ensuremath{\coloneqq} \mathcal{K}_{\eta}f - v$ is a solution of \cref{eq:hom-st} in $\ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$ through $w(t)$ for all $t \in \mathbb{R}$, so \cref{lem:XsigmaBC} shows that $w$ takes values in $X_0$ and, in particular, $w(0) = P_0w(0) = 0$. \Cref{hyp:cm:group} implies that $w = 0$ identically. \qedhere
\end{steps}
\end{proof}
\subsection{Modification of the nonlinearity}\label{sec:Rdelta}
The next step in the construction of a local center manifold is a modification of the nonlinearity $R$ introduced in the splitting \cref{eq:splittingGLR} in \cref{sec:linearization}. We recall that $R : X \to X^{\odot\times}$ is a $C^k$-smooth operator for some $k \ge 1$, and
\begin{equation}
\label{eq:R0DR0}
R(0) = 0, \qquad DR(0) = 0.
\end{equation}
For any $\delta > 0$, let $R_{\delta} : X \to X^{\odot\times}$ be its $\delta$-modification as defined in \cite[Section IX.4]{Delay1995}. The purpose of this modification is to obtain a nonlinearity that is globally Lipschitz (which will ensure that the corresponding substitution operators $\tilde{R}_{\delta}$ in \cref{eq:substitution} below are well-defined) with a Lipschitz constant controlled by $\delta$ (which will ensure the contractivity of the parameterized fixed point operator that defines the local center manifold).
However, it is not clear to me how \cite[Lemma IX.4.1]{Delay1995} applies to obtain the aforementioned properties of $R_{\delta}$ in \cite[Corollary IX.4.2]{Delay1995}. First, if $X$ is infinite-dimensional, then the $C^k$-smoothness of $R$ does not imply Lipschitz continuity on balls of arbitrary radius. \emph{Locally} this is of course not a problem:
\begin{lemma}\label{lem:lipconst}
There exist $\delta_1 > 0$ and $L : [0,\delta_1] \to \mathbb{R}_+$ such that $L(0) = 0$, $L(\delta)$ is a Lipschitz constant for $R$ on the open ball $B_{\delta} \subseteq X$ for every $0 < \delta \le \delta_1$ and $L$ is continuous at zero.
\end{lemma}
\begin{proof}
By continuity of $DR$ and \cref{eq:R0DR0} there exists $\delta_1 > 0$ such that $\sup\left\{\|DR(w)\|\,:\, w \in B_{\delta_1}\right\} \le 1$. Define $L(0) \ensuremath{\coloneqq} 0$ and, for any $0 < \delta \le \delta_1$,
\[
L(\delta) \ensuremath{\coloneqq} \sup\left\{\|DR(w)\|\,:\, w \in B_{\delta} \right\}.
\]
By the mean value inequality $L(\delta)$ is a Lipschitz constant for $R$ on $B_{\delta}$. Moreover, given $\ensuremath{\varepsilon} > 0$ there exists $0 < \delta_{\ensuremath{\varepsilon}} \le \delta_1$ such that $\sup\left\{\|DR(w)\|\,:\, w \in B_{\delta_{\ensuremath{\varepsilon}}} \right\} \le \ensuremath{\varepsilon}$, and if $\delta \le \delta_{\ensuremath{\varepsilon}}$ then $L(\delta)$ does not exceed the left-hand side of this inequality.
\end{proof}
Second, it is unclear to me that $R_{\delta}$ has the appropriate functional form to make \cite[Lemma IX.4.1]{Delay1995} applicable, so here is a minor adaptation that uses the above lemma.
\begin{proposition}[\normalfont{cf. \cite[Lemma IX.4.1 and Corollary IX.4.2]{Delay1995}}]\label{prop:Rdelta-Lip}
For $\delta > 0$ sufficiently small, the modified nonlinearity $R_{\delta}$ is globally Lipschitz continuous with Lipschitz constant $L_{R_{\delta}} \to 0$ as $\delta \downarrow 0$.
\end{proposition}
\begin{proof}
Let $\xi : \mathbb{R}_+ \to [0,1]$ be a standard cut-off function as introduced at the start of \cite[Section IX.4]{Delay1995} and define the auxiliary functions $\xi_{\delta} : X \to [0,1]$ and $\Xi_{\delta} : X \to [0,1]$ by
\[
\xi_{\delta}(x) \ensuremath{\coloneqq} \xi\Bigl(\frac{\|x\|}{\delta}\Bigr), \qquad \Xi_{\delta}(x) \ensuremath{\coloneqq} \xi_{\delta}(P_0x)\xi_{\delta}((I - P_0)x), \qquad \delta > 0.
\]
In terms of these functions, we have $R_{\delta}(x) = R(x)\Xi_{\delta}(x)$ for all $x \in X$.
\begin{steps}
\item
Let $C > 0$ be a global Lipschitz constant for $\xi$. Using that the composition of two Lipschitz functions is Lipschitz with constant equal to the product of the two Lipschitz constants, we obtain that $\xi_{\delta}$ has the global Lipschitz constant $\frac{C}{\delta}$. This implies the global Lipschitz estimate
\begin{align*}
\left| \Xi_{\delta}(x) - \Xi_{\delta}(y) \right| &\le \left|\xi_{\delta}(P_0x) - \xi_{\delta}(P_0y)\right| + \left|\xi_{\delta}((I - P_0)x) - \xi_{\delta}((I - P_0)y)\right|\\
&\le \frac{C}{\delta}\|x - y\| + \frac{C}{\delta}\|x - y\| \lesssim \frac{C}{\delta}\|x - y\|,
\end{align*}
for all $x, y \in X$. (The numerical factor was absorbed into $C$.) We furthermore note that
\[
\|x\| = \|P_0x + (I - P_0)x\| \le \|P_0x\| + \|(I - P_0)x\| \qquad \text{for all }\,x \in X,
\]
so if $\|x\| \ge 4\delta$ then $\max\{\|P_0x\|, \|(I - P_0)x\|\} \ge 2\delta$ and consequently $\Xi_{\delta}(x) = 0$.
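(Here it is assumed, as is standard for cut-off functions of this type, that $\xi(s) = 1$ for $0 \le s \le 1$ and $\xi(s) = 0$ for $s \ge 2$.) Indeed, if $\max\{\|P_0x\|, \|(I - P_0)x\|\} \ge 2\delta$ then at least one of the two arguments of $\xi$ in
\[
\Xi_{\delta}(x) = \xi\Bigl(\frac{\|P_0x\|}{\delta}\Bigr)\,\xi\Bigl(\frac{\|(I - P_0)x\|}{\delta}\Bigr)
\]
is at least $2$, so the corresponding factor, and with it the product, vanishes.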
\item
Let $\delta > 0$ be such that $4\delta \le \delta_1$ with $\delta_1$ as in \cref{lem:lipconst}. For any $x, y \in X$ we estimate
\begin{align*}
\|R_{\delta}(x) - R_{\delta}(y)\| &\le \|R(x) - R(y)\| \cdot \Xi_{\delta}(y) + |\Xi_{\delta}(x) - \Xi_{\delta}(y)| \cdot \|R(x)\|\\
&\le
\begin{cases}
L(4\delta)\|x - y\| + \frac{C}{\delta}\|x - y\|L(4\delta)4\delta& \text{if } \|x\|, \|y\| < 4\delta,\\
0 & \text{if } \|x\|, \|y\| \ge 4\delta,\\
\frac{C}{\delta}\|x - y\|L(4\delta)4\delta & \text{if } \|x\| < 4\delta, \|y\| \ge 4\delta,
\end{cases}\\
&\le L(4\delta)(4C + 1)\|x - y\|,
\end{align*}
so $L_{R_{\delta}} = L(4\delta)(4C + 1)$ is a global Lipschitz constant for $R_{\delta}$ and $L_{R_\delta} \to 0$ as $\delta \downarrow 0$. \hfill \qedhere
\end{steps}
\end{proof}
In the proof it was also obtained that for $\delta > 0$ sufficiently small, $\|x\| \ge 4\delta$ implies $\Xi_{\delta}(x) = 0$. Together with \cref{prop:Rdelta-Lip} itself, this gives
\begin{equation}
\label{eq:Rdelta-estimate}
\|R_{\delta}(x)\| \le 4\delta L_{R_{\delta}} \qquad \text{for all }\,x \in X.
\end{equation}
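Indeed, \cref{eq:R0DR0} gives $R_{\delta}(0) = R(0)\Xi_{\delta}(0) = 0$, so for $\|x\| < 4\delta$ the global Lipschitz estimate yields
\[
\|R_{\delta}(x)\| = \|R_{\delta}(x) - R_{\delta}(0)\| \le L_{R_{\delta}}\|x\| \le 4\delta L_{R_{\delta}},
\]
while $R_{\delta}(x) = R(x)\Xi_{\delta}(x) = 0$ whenever $\|x\| \ge 4\delta$.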
We associate with $R_{\delta}$ the \textbf{substitution operator}
\begin{equation}\label{eq:substitution}
\tilde{R}_{\delta} : \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X) \to \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X^{\odot\times}), \qquad \tilde{R}_{\delta}(u) \ensuremath{\coloneqq} R_{\delta} \circ u.
\end{equation}
\begin{corollary}\label{cor:substitution}
For all $\delta > 0$ sufficiently small, $\tilde{R}_{\delta}$ is well-defined and globally Lipschitz with Lipschitz constant $L_{R_{\delta}} \to 0$ as $\delta \downarrow 0$.
\end{corollary}
\begin{proof}
$\tilde{R}_{\delta}$ is well-defined due to \cref{eq:Rdelta-estimate} and, for any $u,v \in \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$,
\begin{align*}
\|\tilde{R}_{\delta}(u) - \tilde{R}_{\delta}(v)\|_{\eta} &= \sup_{t \in \mathbb{R}} e^{-\eta|t|} \|R_{\delta}(u(t)) - R_{\delta}(v(t))\|\\
&\le L_{R_{\delta}} \sup_{t \in \mathbb{R}} e^{-\eta|t|} \|u(t) - v(t)\| = L_{R_{\delta}} \|u - v\|_{\eta},
\end{align*}
so $\tilde{R}_{\delta}$ inherits the global Lipschitz constant from $R_{\delta}$.
\end{proof}
\subsection{The fixed-point operator and the center manifold}\label{sec:cm-fixed-point}
Motivated by the characterization of $X_0$ provided by \cref{lem:XsigmaBC}, we will define a parameterized fixed point operator, in such a way that its fixed points correspond to exponentially bounded solutions on $\mathbb{R}$ of the modified equation
\begin{equation}
\label{eq:aie-st-delta}
u(t) = T(t-s)u(s) + j^{-1}\int_s^t{\SUNSTAR{T}(t - \tau)R_{\delta}(u(\tau))\,d\tau}, \qquad -\infty < s \le t < \infty.
\end{equation}
(This equation is obtained from \cref{eq:aie-st} by replacing $R$ with the modified nonlinearity $R_{\delta}$ from \cref{sec:Rdelta}.) Let $\eta$ and $\mathcal{K}_{\eta}$ be as in \cref{prop:Keta} and let $\tilde{R}_{\delta}$ be as in \cref{cor:substitution}. Define the fixed point operator
\[
\mathcal{G} : \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X) \times X_0 \to \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X), \qquad \mathcal{G}(u,\phi) \ensuremath{\coloneqq} T(\cdot)\phi + \mathcal{K}_{\eta}\tilde{R}_{\delta}(u),
\]
where its second argument in $X_0$ is regarded as a parameter.
\begin{theorem}[\normalfont{cf. \cite[Theorem IX.5.1]{Delay1995}}]\label{thm:cm}
If $\eta \in (0, \min\{-a,b\})$ and if $\delta > 0$ is sufficiently small, then the following statements hold.
\begin{thmenum}
\item
For every $\phi \in X_0$ the equation $\mathcal{G}(u,\phi) = u$ has a unique solution $u = u^{\star}(\phi)$.
\item
\label{thm:cm:lipschitz}
The map $u^{\star} : X_0 \to \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$ is globally Lipschitz and $u^{\star}(0) = 0$.
\end{thmenum}
\end{theorem}
\begin{proof}
Let $u,v \in \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$ and $\phi, \psi \in X_0$ be arbitrary. Then
\begin{align*}
\|\mathcal{G}(u,\phi) - \mathcal{G}(v,\psi)\|_{\eta} &\le \sup_{t \in \mathbb{R}} e^{-\eta|t|}\|T(t)(\phi - \psi)\| + \|\mathcal{K}_{\eta}\| L_{R_{\delta}} \|u - v\|_{\eta}\\
&\le K_{\ensuremath{\varepsilon}} \sup_{t \in \mathbb{R}} e^{-(\eta - \ensuremath{\varepsilon})|t|} \|\phi - \psi\| + \|\mathcal{K}_{\eta}\| L_{R_{\delta}} \|u - v\|_{\eta}\\
&\le K_{\ensuremath{\varepsilon}} \|\phi - \psi\| + \|\mathcal{K}_{\eta}\| L_{R_{\delta}} \|u - v\|_{\eta},
\end{align*}
where in the second line we used \cref{hyp:cm:trichotomy} with $\ensuremath{\varepsilon} < \eta$. By \cref{cor:substitution} there exists $\delta_2 > 0$ such that $\|\mathcal{K}_{\eta}\| L_{R_{\delta}} \le \frac{1}{2}$ for all $0 < \delta \le \delta_2$, and we select $\delta$ accordingly.
\begin{steps}[label=\roman*.]
\item
If $\psi = \phi$ then the above estimate shows that $\mathcal{G}(\cdot,\phi)$ is globally Lipschitz with Lipschitz constant $\frac{1}{2}$, so by the contraction mapping principle $\mathcal{G}(\cdot,\phi)$ has a unique fixed point $u^{\star}(\phi)$.
\item
Let $u^{\star}(\phi)$ and $u^{\star}(\psi)$ be the unique fixed points of $\mathcal{G}(\cdot,\phi)$ and $\mathcal{G}(\cdot,\psi)$, respectively. Then
\begin{align*}
\|u^{\star}(\phi) - u^{\star}(\psi)\|_{\eta} &= \|\mathcal{G}(u^{\star}(\phi),\phi) - \mathcal{G}(u^{\star}(\psi),\psi)\|_{\eta}\\
&\le K_{\ensuremath{\varepsilon}}\|\phi - \psi\| + \frac{1}{2}\|u^{\star}(\phi) - u^{\star}(\psi)\|_{\eta},
\end{align*}
so $\|u^{\star}(\phi) - u^{\star}(\psi)\|_{\eta} \le 2K_{\ensuremath{\varepsilon}}\|\phi - \psi\|$. Since $\tilde{R}_{\delta}(0) = 0$ and therefore $\mathcal{G}(0,0) = 0$, it is clear that $u^{\star}(0) = 0$. \qedhere
\end{steps}
\end{proof}
Let $u^{\star} : X_0 \to \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$ be the parameterized fixed point from \cref{thm:cm:lipschitz}. Clearly the map $\mathcal{C}$ in the definition below inherits the global Lipschitz continuity from $u^{\star}$.
\begin{definition}\label{def:CM}
A \textbf{\emph{global} center manifold} $\mathcal{W}^{\textup{c}}$ for \cref{eq:aie-st-delta} is defined as the image of the mapping
\begin{equation}
\label{eq:cmmap}
\mathcal{C} : X_0 \to X, \qquad \mathcal{C} \ensuremath{\coloneqq} \operatorname{ev} \circ u^{\star},
\end{equation}
where $\operatorname{ev} : \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X) \to X$ is the evaluation at zero.
\end{definition}
As alluded to above, $\mathcal{W}^{\textup{c}}$ is a nonlinear generalization of the center subspace $X_0$ that was characterized in \cref{lem:XsigmaBC}. This statement can be made more precise.
\begin{proposition}\label{prop:CMcharacterization}
It holds that
\[
\mathcal{W}^{\textup{c}} = \{\psi \in X \, :\, \text{there exists a solution of \cref{eq:aie-st-delta} on $\mathbb{R}$ through $\psi$ which belongs to $\ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$}\}.
\]
\end{proposition}
\begin{proof}
Let $\psi \in \mathcal{W}^{\textup{c}}$ be arbitrary, so $\psi = \mathcal{C}(\phi) = u^{\star}(\phi)(0)$ for some $\phi \in X_0$. We prove that $u = u^{\star}(\phi)$ is a solution of \cref{eq:aie-st-delta} in $\ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$. \Cref{prop:Keta:solution} shows that $\mathcal{K}_{\eta}\tilde{R}_{\delta}(u)$ is a solution of \cref{eq:inhom-st} with $f = \tilde{R}_{\delta}(u)$. It follows that
\begin{align*}
u(t) &= T(t)\phi + (\mathcal{K}_{\eta}\tilde{R}_{\delta}(u))(t)\\
&= T(t)\phi + T(t - s)(\mathcal{K}_{\eta}\tilde{R}_{\delta}(u))(s) + j^{-1}\int_s^t{\SUNSTAR{T}(t - \tau)R_{\delta}(u(\tau))\,d\tau}\\
&= T(t)\phi + T(t - s)\left(u(s) - T(s)\phi\right) + j^{-1}\int_s^t{\SUNSTAR{T}(t - \tau)R_{\delta}(u(\tau))\,d\tau}\\
&= T(t - s)u(s) + j^{-1}\int_s^t{\SUNSTAR{T}(t - \tau)R_{\delta}(u(\tau))\,d\tau}
\end{align*}
for all $(t,s) \in \Omega_{\mathbb{R}}$.
Conversely, suppose that $\psi \in X$ is such that there exist a solution $u$ in $\ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$ of \cref{eq:aie-st-delta} and a time $t_0 \in \mathbb{R}$ such that $u(t_0) = \psi$. We may assume that $t_0 = 0$, since by translation invariance $u_0 \ensuremath{\coloneqq} u(\cdot + t_0)$ is a solution in $\ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$ that satisfies $u_0(0) = \psi$. For $(t,s) \in \Omega_{\mathbb{R}}$ we write \cref{eq:aie-st-delta} as
\begin{align*}
u(t) &= T(t - s)P_0u(s) + T(t - s)(I - P_0)u(s) + j^{-1}\int_s^t{\SUNSTAR{T}(t - \tau)R_{\delta}(u(\tau))\,d\tau}\\
&= T(t - s)P_0u(s) + (\mathcal{K}_{\eta} \tilde{R}_{\delta}(u))(t)
\end{align*}
where \cref{prop:Keta:solution} was used for the second equality. Rearranging, we obtain
\[
T(-t)(u(t) - (\mathcal{K}_{\eta} \tilde{R}_{\delta}(u))(t)) = T(-s)P_0 u(s) \qquad \text{for all }\,(t,s) \in \Omega_{\mathbb{R}},
\]
and, since the left-hand side depends only on $t$ while the right-hand side depends only on $s$, it follows that both sides equal the same constant $\phi \in X_0$. Hence
\[
u(t) - (\mathcal{K}_{\eta} \tilde{R}_{\delta}(u))(t) = T(t)\phi \qquad \text{for all }\,t \in \mathbb{R},
\]
which shows that $\mathcal{G}(u,\phi) = u$, implying that $\psi = u(0) = \mathcal{C}(\phi) \in \mathcal{W}^{\textup{c}}$.
\end{proof}
Let $B_{\delta}(X)$ be the open $\delta$-ball centered at the origin in $X$. The restrictions of $R_{\delta}$ and $R$ to this ball are equal, so if we restrict the unknown $u$ to take values in $B_{\delta}(X)$, then \cref{eq:aie-st-delta} and \cref{eq:aie-st} coincide as well. We note that \cref{thm:cm:lipschitz} implies that $U$ in the following definition is an open neighborhood of the origin in $X_0$.
\begin{definition}\label{def:LCM}
Let $\mathcal{C}$ be as in \cref{eq:cmmap}. A \textbf{\emph{local} center manifold} $\mathcal{W}^{\textup{c}}_{\textup{loc}}$ for \cref{eq:aie-st} is defined as the image of the restriction of $\mathcal{C}$ to $U \ensuremath{\coloneqq} \{\phi \in X_0\,:\,\mathcal{C}(\phi) \in B_{\delta}(X)\}$.
\end{definition}
In \cite{Delay1995} the proof of the next result is suggested as an exercise, but in the present context it also follows directly from \cref{prop:CMcharacterization}. We recall the definition of the semiflow $\Sigma$ in \cref{eq:semiflow}.
\begin{corollary}[\normalfont{cf. \cite[Theorem IX.5.3]{Delay1995}}]
The following two statements hold.
\begin{corenum}
\item
$\mathcal{W}^{\textup{c}}_{\textup{loc}}$ is locally positively invariant: If $\psi \in \mathcal{W}^{\textup{c}}_{\textup{loc}}$ and $0 < t_e \le \infty$ are such that $\Sigma(t,\psi) \in B_{\delta}(X)$ for all $t \in J_{\psi} \cap [0,t_e)$, then $\Sigma(t,\psi) \in \mathcal{W}^{\textup{c}}_{\textup{loc}}$ for all $t \in J_{\psi} \cap [0,t_e)$.
\item
$\mathcal{W}^{\textup{c}}_{\textup{loc}}$ contains every solution of \cref{eq:aie-st} that exists on $\mathbb{R}$ and remains sufficiently small for all positive and negative time: If $u : \mathbb{R} \to B_{\delta}(X)$ is a solution of \cref{eq:aie-st} then $u$ takes its values in $\mathcal{W}^{\textup{c}}_{\textup{loc}}$.
\end{corenum}
\end{corollary}
\begin{proof}
\begin{steps}[label=\roman*.]
\item
It is convenient to write $J_{\psi}^e \ensuremath{\coloneqq} J_{\psi} \cap [0,t_e)$. \Cref{prop:CMcharacterization} implies that there exists a solution $u \in \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$ of \cref{eq:aie-st-delta} passing through $\psi$, and by translation invariance we may assume that $u(0) = \psi$. So, $\Sigma(\cdot,\psi)$ and $u$ are both solutions of \cref{eq:aie-st-delta} on $J_{\psi}^e$ and $\Sigma(0,\psi) = \psi = u(0)$. Hence $\Sigma(\cdot,\psi)$ and $u$ coincide on $J_{\psi}^e$. A second application of \cref{prop:CMcharacterization} then implies that $\Sigma(t,\psi) \in \mathcal{W}^{\textup{c}}$ for all $t \in J_{\psi}^e$. Since $\mathcal{W}^{\textup{c}}_{\textup{loc}} = \mathcal{W}^{\textup{c}} \cap B_{\delta}(X)$ the result follows.
\item
If $u$ is such a solution, then $u \in \ensuremath{\textup{BC}}^{\eta}(\mathbb{R},X)$. The assumption that $u$ takes its values in $B_{\delta}(X)$ and \cref{prop:CMcharacterization} together imply the result. \qedhere
\end{steps}
\end{proof}
In order to establish that $\mathcal{C}$ has the same degree $k \ge 1$ of smoothness as the \emph{un}modified nonlinearity $R$, it can be verified that the general results on contractions on scales of Banach spaces \cite[Section IX.6]{Delay1995} apply exactly as in the $\odot$-reflexive case \cite[Section IX.7]{Delay1995}, \emph{provided} that one consistently makes the substitution $X^{\odot\star} \to X^{\odot\times}$. In particular, the important results \cite[Corollaries IX.7.8 and IX.7.10]{Delay1995} on $C^k$-smoothness and tangency, respectively, remain true.
We give a summary of the results discussed in this subsection.
\begin{theorem}\label{thm:cm-summary}
Let $\hat{\phi} = 0$ be an equilibrium of the semiflow $\Sigma$ associated with the maximal solutions of \cref{eq:aie0} and let $\{T(t)\}_{t \ge 0}$ be the linearization of $\Sigma$ at $\hat{\phi}$. Assume that \cref{hyp:cm} and \cref{hyp:cm:j} hold and that $X_0$ is finite-dimensional. Then there exist a $C^k$-smooth mapping $\mathcal{C} : X_0 \to X$ and an open neighborhood $U$ of the origin in $X_0$ such that $\mathcal{C}(0) = 0$, $D\mathcal{C}(0) = I_{X_0 \to X}$ and $\mathcal{W}^{\textup{c}}_{\textup{loc}} = \mathcal{C}(U)$ is locally positively invariant for $\Sigma$ and contains every solution of \cref{eq:aie-st} that exists on $\mathbb{R}$ and remains sufficiently small for all time.
\end{theorem}
\section{The special case of abstract DDEs}\label{sec:ddes}
At this point it is relatively straightforward to show that abstract DDEs \cref{eq:adde} form an example of a class of delay equations satisfying, under certain conditions, the hypotheses in \cref{hyp:G}, \cref{hyp:cm}, and \cref{hyp:cm:j} of the results in the previous sections.
We recall the specific setting. Let $\{T_0(t)\}_{t \ge 0}$ be the shift semigroup on $X \ensuremath{\coloneqq} C([-h,0],Y)$ where $Y$ is a \emph{real} Banach space. We assume that $F : X \to Y$ is of class $C^k$ for some $k \ge 1$ and we set $G \ensuremath{\coloneqq} \ell \circ F$. It was already shown in \cite[Proposition 8]{ADDE1} that
\begin{equation}
\label{eq:admissible:dde}
\int_0^t T_0^{\odot\star}(t - \tau)\ell w(\tau)\,d\tau \in jX \qquad \text{for all continuous } w : \mathbb{R}_+ \to Y \text{ and all } t \ge 0,
\end{equation}
with $\ell : Y \to X^{\odot\star}$ given by \cref{eq:ell}. In the terminology of the present article, we have:
\begin{proposition}\label{prop:admrange-dde}
$\ell Y$ is an admissible range for $\{T_0(t)\}_{t \ge 0}$ and $G = \ell \circ F$ is an admissible perturbation.
\end{proposition}
\begin{proof}
The closedness of $\ell Y$ in $X^{\odot\star}$ is due to \cite[Lemma 6]{ADDE1}. If $f : \mathbb{R}_+ \to \ell Y$ is any forcing function, then $f = \ell \circ w$ where $w \ensuremath{\coloneqq} \ell^{-1} \circ f : \mathbb{R}_+ \to Y$ is continuous. So, for any $(t,s) \in \Omega_{\mathbb{R}_+}$,
\begin{align*}
v_0(t,s,f) &= \int_s^t T_0^{\odot\star}(t - \tau) \ell w(\tau)\,d\tau\\
&= \int_0^{t - s} T_0^{\odot\star}(t - s - \tau) \ell w(s + \tau)\,d\tau.
\end{align*}
The function $w(s + \cdot) : \mathbb{R}_+ \to Y$ is continuous and $t - s \ge 0$, so by \cref{eq:admissible:dde} the right-hand side is in $jX$. The statements then follow from \cref{prop:admindependent}.
\end{proof}
A `local' modification of \cite[Theorem 16]{ADDE1} provides a one-to-one correspondence between the maximal mild solutions of \cref{eq:adde-ic} and the maximal solutions of \cref{eq:aie0-adde}, where the latter equation is a particular case of \cref{eq:aie0}. \Cref{prop:admrange-dde} and the smoothness of $F$ imply that $G$ satisfies \cref{hyp:G}. We define the semiflow $\Sigma$ on $X$ as in \cref{eq:semiflow}, using the maximal solutions of \cref{eq:aie0-adde}.
Let $\hat{\phi} = 0$ be an equilibrium of $\Sigma$ or, equivalently, let $F(0) = 0$. We split $G$ as in \cref{eq:splittingGLR} to obtain
\begin{equation}
\label{eq:LR-adde}
L\phi \ensuremath{\coloneqq} \ell DF(0)\phi, \qquad R(\phi) \ensuremath{\coloneqq} \ell(F(\phi) - DF(0)\phi), \qquad \phi \in X.
\end{equation}
The $C^k$-smooth operator $R$ is an admissible nonlinear perturbation for the $\mathcal{C}_0$-semigroup $\{T(t)\}_{t \ge 0}$. This semigroup itself is defined by perturbing the shift semigroup $\{T_0(t)\}_{t \ge 0}$ with the admissible linear perturbation $L$. \Cref{prop:linearization} shows that $T(t)$ is recovered as the partial derivative of $\Sigma$ with respect to the state at the point $(t,\hat{\phi})$, for any $t \ge 0$.
As a specialized counterpart to \cref{thm:cm-summary}, we have:
\begin{theorem}\label{thm:cm-dde}
Let $\hat{\phi} = 0$ be an equilibrium of the semiflow $\Sigma$ associated with the maximal solutions of \cref{eq:aie0-adde} and let $\{T(t)\}_{t \ge 0}$ be the $\mathcal{C}_0$-semigroup with generator $A$, obtained by perturbing the shift semigroup $T_0$ by $L$, with $L$ and $R$ as in \cref{eq:LR-adde}. If the $\mathcal{C}_0$-semigroup $\{S(t)\}_{t \ge 0}$ generated by $B$ in \cref{eq:adde} is immediately norm continuous, $\sigma(A_{\mathbbm{c}})$ satisfies the conditions of \cref{thm:hypcm} and $X_0$ is finite-dimensional, then there exist a $C^k$-smooth mapping $\mathcal{C} : X_0 \to X$ and an open neighborhood $U$ of the origin in $X_0$ such that $\mathcal{C}(0) = 0$, $D\mathcal{C}(0) = I_{X_0 \to X}$ and $\mathcal{W}^{\textup{c}}_{\textup{loc}} = \mathcal{C}(U)$ is locally positively invariant for $\Sigma$ and contains every solution of \cref{eq:aie-st} that exists on $\mathbb{R}$ and remains sufficiently small for all time.
\end{theorem}
\begin{proof}
This is a direct consequence of \cref{thm:cm-summary,thm:hypcm} combined with \cite[Theorem VI.6.6]{Engel2000}.
\end{proof}
In particular, if the $\mathcal{C}_0$-semigroup $\{S(t)\}_{t \ge 0}$ is immediately compact or analytic, then it is immediately norm continuous. Clearly the conditions in the above theorem are sufficient, but not necessary. For instance, it would have been sufficient to directly assume eventual norm continuity of $\{T(t)\}_{t \ge 0}$ itself. However, I have chosen a formulation that I believe to be suitable for application to specific examples. If more generality is required, then one may want to return to \cref{thm:cm-summary}.
\section{Conclusion and outlook}\label{sec:conclusion}
The notions of admissible range and admissible perturbation for a given $\mathcal{C}_0$-semigroup are natural abstractions of the approach to non-$\odot$-reflexivity proposed in \cite{Diekmann2008,DiekmannGyllenberg2012}, and the subspace $X^{\odot\times}$ of $X^{\odot\star}$ introduced in \cite{VanNeerven1992} naturally occurs in this context as the largest among all admissible ranges. Therefore, the admissibility problem is in principle resolved. The clause \emph{in principle} refers to the fact that for concrete classes of delay equations, one still has to verify that the perturbation $G$ appearing in \cref{eq:aie0} indeed takes its values in $X^{\odot\times}$. For abstract DDEs this was done in \cite{ADDE1}.
Interestingly, already in \cite[Section 4.5]{VanNeerven1992} it is remarked that ``From our point of view this assumption [i.e. $\odot$-reflexivity] is used only to achieve $X^{\odot\times} = X^{\odot\star}$. By replacing $X^{\odot\star}$ by $X^{\odot\times}$ many of the results [of Cl\'ement, Diekmann, Gyllenberg, Heijmans and Thieme] generalize to the non-$\odot$-reflexive case.'' Furthermore, in their introduction to \cite[Section VIII.2]{Delay1995} on stable and unstable manifolds, the authors comment that $\odot$-reflexivity is assumed ``mostly for ease of formulation; the same techniques yield analogous results when, for instance, we do not have $\odot$-reflexivity but still solutions of the AIE [abstract integral equation, i.e. \cref{eq:aie0}] define a nonlinear semigroup on $X$.'' In the light of these comments, and of course also with the benefit of hindsight, it would have been better if the authors of \cite{Delay1995} had relaxed the assumption of $\odot$-reflexivity in favor of a systematic use of the subspace $X^{\odot\times}$ instead of the full dual space $X^{\odot\star}$.
As an example that aims to be illustrative as well as useful in its own right, I have discussed in some detail the construction of local center manifolds in the non-$\odot$-reflexive case. However, I have not attempted a full rewrite of \cite[Chapter IX]{Delay1995}, for three reasons. First, the nontrivial consequences of non-$\odot$-reflexivity for this construction are very much localized, as explained in \cref{sec:Keta}. Second, I believe that the proper medium for such a rewrite would be a monograph or a textbook, rather than a research article. Third, it would arguably benefit the development of the general theory to proceed at a level sufficiently minimal and axiomatic to allow for a semilinear extension of the recent advances available in \cite{Twin2019}. It is my understanding that the authors of \cite{Twin2019} are progressing in this direction. (However, their work currently depends in a non-trivial way on the perturbations involved being of finite rank, and this precludes a direct application of their work to abstract DDEs.)
Motivation for theoretical developments often comes from specific problems, and this work is no exception \cite{VanGils2013,Dijkstra2015,SpekVanGilsKuznetsov2019}. More recently, I have been inspired by a class of control-theoretic examples for which the feedback-controlled system is an abstract DDE of the form \cref{eq:adde}. This will be explored further in collaboration with S. M. Verduyn Lunel. We note that \cite{HeijmansControl1987} contains an early control-theoretic application of dual perturbation theory, in a $\odot$-reflexive setting. Meanwhile, O. Diekmann has suggested to me a class of abstract renewal equations inspired by structured population dynamics. The results from \cref{sec:admissibility,sec:inhomogeneous,sec:linearization,sec:cm} would in principle apply to such equations, depending on the precise model formulation, and we intend to investigate this jointly. Finally, an efficient numerical approach to the linear stability problem for abstract DDEs can be found in \cite{BredaMasetVermiglio2009}, and it would be interesting to see its implementation for some of the examples mentioned above.
\subsection*{Acknowledgements}
I thank Prof. O. Diekmann (Utrecht University) for his lasting interest in this work and for suggesting the population dynamical class of examples mentioned in \cref{sec:conclusion}, Prof. S. M. Verduyn Lunel (Utrecht University) for introducing me to the control-theoretic class of examples mentioned in \cref{sec:conclusion}, and both of them for their stimulating comments and hospitality during various visits to Utrecht.
In addition, I thank Alina Andrei (Erasmus University Rotterdam) for her unconditional support of this work.
\clearpage
\section{Introduction}
Decisions made in international organisations are fundamental to international development efforts and initiatives. It is in these global governance arenas that the rules of the global economic system, which have a huge impact on development outcomes are agreed on; decisions are made about large-scale funding for development issues, such as health and infrastructure; and key development goals and targets are agreed on, as can be seen with the Millennium Development Goals (MDGs). More generally, international organisations have a profound influence on the ideas that shape international development efforts \cite{hudson2014global}.
Yet surprisingly little is known about the agenda-setting process for international development in global governance institutions. This is perhaps best demonstrated by the lack of information on how the different goals and targets of the MDGs were decided, which led to much criticism and concern about the global governance of development \cite{saith2006universal}. More generally, we know little about the types of development issues that different countries prioritise, or whether country-specific factors such as wealth or democracy make countries more likely to push for specific development issues to be put on the global political agenda.
The lack of knowledge about the agenda-setting process in the global governance of development is in large part due to the absence of obvious data sources on states' preferences about international development issues. To address this gap, we employ a novel approach based on the application of natural language processing (NLP) to countries' speeches in the UN. Every September, the heads of state and other high-level country representatives gather in New York at the start of a new session of the United Nations General Assembly (UNGA) and address the Assembly in the General Debate. The General Debate (GD) provides the governments of the almost two hundred UN member states with an opportunity to present their views on key issues in international politics -- including international development. As such, the statements made during the GD are an invaluable and largely untapped source of information on governments' policy preferences on international development over time.
An important feature of these annual country statements is that they are not institutionally connected to decision-making in the UN. This means that governments face few external constraints when delivering these speeches, enabling them to raise the issues that they consider the most important. Therefore, the General Debate acts ``as a barometer of international opinion on important issues, even those not on the agenda for that particular session'' \cite{smith2006}. In fact, the GD is usually the first item for each new session of the UNGA, and as such it provides a forum for governments to identify like-minded members, and to put on the record the issues they feel the UNGA should address. Therefore, the GD can be viewed as a key forum for governments to put different policy issues on the international agenda.
We use a new dataset of GD statements from 1970 to 2016, \emph{the UN General Debate Corpus} (UNGDC), to examine the international development agenda in the UN \cite{baturo2017understanding}.\footnote{UNGDC is publicly available at the Harvard Dataverse at \url{http://dx.doi.org/10.7910/DVN/0TJX8Y}} Our application of NLP to these statements focuses in particular on structural topic models (STMs) \cite{roberts2013structural}. The paper makes two contributions using this approach: (1) It sheds light on the main international development issues that governments prioritise in the UN; and (2) It identifies the key country-specific factors associated with governments discussing development issues in their GD statements.
\section*{The UN General Debate and international development}
In the analysis we consider the nature of international development issues raised in the UN General Debates, and the effect of structural covariates on the level of developmental rhetoric in the GD statements. To do this, we first implement a structural topic model \cite{roberts2013structural}. This enables us to identify the key international development topics discussed in the GD. We model topic prevalence in the context of the structural covariates. In addition, we control for region fixed effects and a time trend. The aim is to allow the observed metadata to affect the frequency with which a topic is discussed in General Debate speeches. This allows us to test the degree of association between covariates (and region/time effects) and the average proportion of a document discussing a topic.
\subsection{Estimation of topic models}
We assess the optimal number of topics that needs to be specified for the STM analysis. We follow the recommendations of the original STM paper and focus on exclusivity and semantic coherence measures. \cite{mimno2011optimizing} propose a semantic coherence measure, which is closely related to the point-wise mutual information measure posited by \cite{newman2010automatic} to evaluate topic quality. \cite{mimno2011optimizing} show that semantic coherence corresponds to expert judgments and more general human judgments in Amazon's Mechanical Turk experiments.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{optimal-topics.pdf}
\caption{\emph{Optimal model search}. Semantic coherence and exclusivity results for a model search from 3 to 50 topics. Models above the regression line provide a better trade off. Largest positive residual is a 16-topic model.
\label{fig:search}}
\end{figure}
Exclusivity scores for each topic follow \cite{bischof2012summarizing}. Highly frequent words in a given topic that do not appear very often in other topics are viewed as making that topic exclusive. Cohesive and exclusive topics are more semantically useful. Following \cite{roberts2016stm} we generate a set of candidate models ranging between 3 and 50 topics. We then plot the exclusivity and semantic coherence (numbers closer to 0 indicate higher coherence), with a linear regression overlaid (Figure \ref{fig:search}). Models above the regression line have a ``better'' exclusivity-semantic coherence trade off. We select the 16-topic model, which has the largest positive residual in the regression fit, and provides higher exclusivity at the same level of semantic coherence.
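The selection rule just described can be sketched in a few lines. This is a hypothetical illustration only: in the paper the diagnostics come from fitting candidate STM models (e.g.\ with the R \texttt{stm} package), whereas the score arrays below are placeholders; the rule itself -- regress exclusivity on semantic coherence and keep the candidate with the largest positive residual -- is the one used here.

```python
import numpy as np

# Placeholder diagnostics standing in for the per-model scores produced
# by fitting candidate topic models with 3..50 topics.
rng = np.random.default_rng(0)
n_topics = np.arange(3, 51)
semantic_coherence = -100.0 + n_topics + rng.normal(0, 2, n_topics.size)
exclusivity = 9.0 + 0.01 * n_topics + rng.normal(0, 0.1, n_topics.size)

# Regress exclusivity on semantic coherence; models above the line give
# a better trade-off, and we keep the largest positive residual.
slope, intercept = np.polyfit(semantic_coherence, exclusivity, 1)
residuals = exclusivity - (slope * semantic_coherence + intercept)
best_k = int(n_topics[np.argmax(residuals)])
```

With the real diagnostics in place of the placeholders, \texttt{best\_k} would be the 16-topic model selected in the text.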
The topic quality is usually evaluated by the highest probability words, which are presented in Figure \ref{fig:words}.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{topic16_words.pdf}
\caption{\emph{Topic quality}. 20 highest probability words for the 16-topic model.
\label{fig:words}}
\end{figure}
\subsection{Topics in the UN General Debate}
Figure \ref{fig:words} provides a list of the main topics (and the highest probability words associated with these topics) that emerge from the STM of UN General Debate statements. In addition to the highest probability words, we use several other measures of key words (not presented here) to interpret the dimensions. This includes the FREX metric (which combines exclusivity and word frequency), the \emph{lift} (which gives weight to words that appear less frequently in other topics), and the \emph{score} (which divides the log frequency of the word in the topic by the log frequency of the word in other topics). We provide a brief description of each of the 16 topics here.
\noindent \textbf{Topic 1} - \emph{Security and cooperation in Europe}.
The first topic is related to issues of security and cooperation, with a focus on Central and Eastern Europe.
\noindent \textbf{Topic 2} - \emph{Economic development and the global system}.
This topic is related to economic development, particularly around the global economic system. The focus on `trade', `growth', `econom-', `product', `financ-', etc. suggests that Topic 2 represents a more traditional view of international development in that the emphasis is specifically on economic processes and relations.
\noindent \textbf{Topic 3} - \emph{Nuclear disarmament}.
This topic picks up the issue of nuclear weapons, which has been a major issue in the UN since its founding.
\noindent \textbf{Topic 4} - \emph{Post-conflict development}.
This topic relates to post-conflict development. The countries that feature in the key words (e.g. Rwanda, Liberia, Bosnia) have experienced devastating civil wars, and the emphasis on words such as `develop', `peace', `hope', and `democrac-' suggest that this topic relates to how these countries recover and move forward.
\noindent \textbf{Topic 5} - \emph{African independence / decolonisation}.
This topic picks up the issue of African decolonisation and independence. It includes the issue of apartheid in South Africa, as well as racism and imperialism more broadly.
\noindent \textbf{Topic 6} - \emph{Africa}.
While the previous topic focused explicitly on issues of African independence and decolonisation, this topic more generally picks up issues linked to Africa, including peace, governance, security, and development.
\noindent \textbf{Topic 7} - \emph{Sustainable development}.
This topic centres on sustainable development, picking up various issues linked to development and climate change. In contrast to Topic 2, this topic includes some of the newer issues that have emerged in the international development agenda, such as sustainability, gender, education, work and the MDGs.
\noindent \textbf{Topic 8} - \emph{Functional topic}.
This topic appears to be comprised of functional or process-oriented words e.g. `problem', `solution', `effort', `general', etc.
\noindent \textbf{Topic 9} - \emph{War}.
This topic directly relates to issues of war. The key words appear to be linked to discussions around ongoing wars.
\noindent \textbf{Topic 10} - \emph{Conflict in the Middle East}.
This topic clearly picks up issues related to the Middle East -- particularly around peace and conflict in the Middle East.
\noindent \textbf{Topic 11} - \emph{Latin America}.
This is another topic with a regional focus, picking up on issues related to Latin America.
\noindent \textbf{Topic 12} - \emph{Commonwealth}.
This is another of the less obvious topics to emerge from the STM in that the key words cover a wide range of issues. However, the places listed (e.g. Australia, Sri Lanka, Papua New Guinea) suggest the topic is related to the Commonwealth (or former British colonies).
\noindent \textbf{Topic 13} - \emph{International security}.
This topic broadly captures international security issues (e.g. terrorism, conflict, peace) and in particular the international response to security threats, such as the deployment of peacekeepers.
\noindent \textbf{Topic 14} - \emph{International law}.
This topic picks up issues related to international law, particularly connected to territorial disputes.
\noindent \textbf{Topic 15} - \emph{Decolonisation}.
This topic relates more broadly to decolonisation. As well as specific mention of decolonisation, the key words include a range of issues and places linked to the decolonisation process.
\noindent \textbf{Topic 16} - \emph{Cold War}.
This is another of the less tightly defined topics. The topic appears to pick up issues that are broadly related to the Cold War. There is specific mention of the Soviet Union, and detente, as well as issues such as nuclear weapons, and the Helsinki Accords.
Based on these topics, we examine Topic 2 and Topic 7 as the principal ``international development'' topics. While a number of other topics -- for example post-conflict development, Africa, Latin America, etc. -- are related to development issues, Topic 2 and Topic 7 most directly capture aspects of international development. We consider these two topics more closely by contrasting the main words linked to these two topics. In Figure \ref{fig:wordclouds}, the word clouds show the 50 words most likely to be mentioned in relation to each of the topics.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{topic16_wordcloud2.pdf}\\
\includegraphics[width=.45\textwidth]{topic16_wordcloud7.pdf}
\caption{\emph{Topic content}. 50 highest probability words for the 2nd and 7th topics.
\label{fig:wordclouds}}
\end{figure}
The word clouds provide further support for Topic 2 representing a more traditional view of international development focusing on economic processes. In addition to a strong emphasis on `econom-', other key words, such as `trade', `debt', `market', `growth', `industri-', `financi-', `technolog-', `product', and `agricultur-', demonstrate the narrower economic focus on international development captured by Topic 2. In contrast, Topic 7 provides a much broader focus on development, with key words including `climat-', `sustain', `environ-', `educ-', `health', `women', `work', `mdgs', `peac-', `govern-', and `right'. Therefore, Topic 7 captures many of the issues that feature in the recent Sustainable Development Goals (SDGs) agenda \cite{waage2015governing}.
Figure \ref{fig:perspectives} calculates the difference in probability of a word for the two topics, normalized by the maximum difference in probability of any word between the two topics. The figure demonstrates that while there is a much higher probability of words such as `econom-', `trade', and even `develop-' being used to discuss Topic 2, words such as `climat-', `govern-', `sustain', `goal', and `support' are used in association with Topic 7. This provides further support for Topic 2 representing a more economistic view of international development, while Topic 7 relates to a broader sustainable development agenda.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{topic16_perspectives.pdf}
\caption{\emph{Comparing Topics 2 and 7 quality}. 50 highest probability words contrasted between Topics 2 and 7.
\label{fig:perspectives}}
\end{figure}
We also assess the relationship between topics in the STM framework, which allows correlations between topics to be examined. This is shown in the network of topics in Figure \ref{fig:correlation}. The figure shows that Topic 2 and Topic 7 are closely related, which we would expect as they both deal with international development (and share key words on development, such as `develop-', `povert-', etc.). It is also worth noting that while Topic 2 is more closely correlated with the Latin America topic (Topic 11), Topic 7 is more directly correlated with the Africa topic (Topic 6).
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{topic16_correlations.pdf}
\caption{\emph{Network of topics}. Correlation of topics.
\label{fig:correlation}}
\end{figure}
\section{Explaining the rhetoric}
We next look at the relationship between topic proportions and structural factors. The data for these structural covariates is taken from the World Bank's World Development Indicators (WDI) unless otherwise stated. Confidence intervals produced by the method of composition in STM allow us to pick up statistical uncertainty in the linear regression model.
Figure \ref{fig:wealth} demonstrates the effect of wealth (GDP per capita) on the extent to which states discuss the two international development topics in their GD statements. The figure shows that the relationship between wealth and the topic proportions linked to international development differs across Topic 2 and Topic 7. Discussion of Topic 2 (economic development) remains far more constant across different levels of wealth than Topic 7. The poorest states tend to discuss both topics more than other developing nations. However, this effect is larger for Topic 7. There is a decline in the proportion of both topics as countries become wealthier until around \$30,000 when there is an increase in discussion of Topic 7. There is a further pronounced increase in the extent to which countries discuss Topic 7 at around \$60,000 per capita. However, there is a decline in expected topic proportions for both Topic 2 and Topic 7 for the very wealthiest countries.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{topic16_wealth_spline.pdf}
\caption{\emph{Effect of wealth}. Main effect and 95\% confidence interval.
\label{fig:wealth}}
\end{figure}
Figure \ref{fig:population} shows the expected topic proportions for Topic 2 and Topic 7 associated with different population sizes. The figure shows a slight surge in the discussion of both development topics for countries with the very smallest populations. This reflects the significant amount of discussion of development issues, particularly sustainable development (Topic 7), by the small island developing states (SIDS). The discussion of Topic 2 remains relatively constant across different population sizes, with a slight increase in the expected topic proportion for the countries with the very largest populations. However, with Topic 7 there is an increase in expected topic proportion until countries have a population of around 300 million, after which there is a decline in discussion of Topic 7. For countries with populations larger than 500 million there is no effect of population on discussion of Topic 7. It is only with the very largest populations that we see a positive effect on discussion of Topic 7.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{topic16_population_spline.pdf}
\caption{\emph{Effect of population}. Main effect and 95\% confidence interval.
\label{fig:population}}
\end{figure}
We would also expect the extent to which states discuss international development in their GD statements to be impacted by the amount of aid or official development assistance (ODA) they receive. Figure \ref{fig:oda} plots the expected topic proportion according to the amount of ODA countries receive. Broadly-speaking the discussion of development topics remains largely constant across different levels of ODA received. There is, however, a slight increase in the expected topic proportions of Topic 7 according to the amount of ODA received. It is also worth noting the spikes in discussion of Topic 2 and Topic 7 for countries that receive negative levels of ODA. These are countries that are effectively repaying more in loans to lenders than they are receiving in ODA. These countries appear to raise development issues far more in their GD statements, which is perhaps not altogether surprising.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{topic16_oda_spline.pdf}
\caption{\emph{Effect of ODA}. Main effect and 95\% confidence interval.
\label{fig:oda}}
\end{figure}
We also consider the effects of democracy on the expected topic proportions of both development topics using the Polity IV measure of democracy \cite{polity2003}. Figure \ref{fig:polity} shows the extent to which states discuss the international development topics according to their level of democracy. Discussion of Topic 2 is fairly constant across different levels of democracy (although there are some slight fluctuations). However, the extent to which states discuss Topic 7 (sustainable development) varies considerably across different levels of democracy. Somewhat surprisingly, the most autocratic states tend to discuss Topic 7 more than the slightly less autocratic states. This may be because highly autocratic governments choose to discuss development and environmental issues to avoid a focus on democracy and human rights. There is then an increase in the expected topic proportion for Topic 7 as levels of democracy increase, reaching a peak at around 5 on the Polity scale; after this there is a gradual decline in discussion of Topic 7. This would suggest that democratizing or semi-democratic countries (which are more likely to be developing countries with democratic institutions) discuss sustainable development more than established democracies (that are more likely to be developed countries).
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{topic16_polity_spline.pdf}
\caption{\emph{Effect of democracy}. Main effect and 95\% confidence interval.
\label{fig:polity}}
\end{figure}
We also plot the results of the analysis as the difference in topic proportions for the two values of the conflict variable. Our measure of whether a country is experiencing a civil conflict comes from the UCDP/PRIO Armed Conflict Dataset \cite{gleditsch2002armed}. Point estimates and 95\% confidence intervals are plotted in Figure \ref{fig:conflict}. The figure shows that conflict affects only Topic 7 and not Topic 2. Countries experiencing conflict are less likely to discuss Topic 7 (sustainable development) than countries not experiencing conflict. The most likely explanation is that these countries are more likely to devote a greater proportion of their annual statements to discussing issues around conflict and security than development. The fact that there is no effect of conflict on Topic 2 is interesting in this regard.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{topic16_conflict.pdf}
\caption{\emph{Effect of conflict}. Point estimates and 95\% confidence intervals.
\label{fig:conflict}}
\end{figure}
Finally, we consider regional effects in Figure \ref{fig:region}. We use the World Bank's classifications of regions: Latin America and the Caribbean (LCN), South Asia (SAS), Sub-Saharan Africa (SSA), Europe and Central Asia (ECS), Middle East and North Africa (MEA), East Asia and the Pacific (EAS), North America (NAC). The figure shows that states in South Asia, and Latin America and the Caribbean are likely to discuss Topic 2 the most. States in South Asia and East Asia and the Pacific discuss Topic 7 the most. The figure shows that countries in North America are likely to speak about Topic 7 least.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{topic16_region.pdf}
\caption{\emph{Regional effects}. Point estimates and 95\% confidence intervals.
\label{fig:region}}
\end{figure}
The analysis of discussion of international development in annual UN General Debate statements therefore uncovers two principal development topics: economic development and sustainable development. We find that discussion of Topic 2 is not significantly impacted by country-specific factors, such as wealth, population, democracy, levels of ODA, and conflict (although there are regional effects). However, we find that the extent to which countries discuss sustainable development (Topic 7) in their annual GD statements varies considerably according to these different structural factors. The results suggest that broadly-speaking we do not observe linear trends in the relationship between these country-specific factors and discussion of Topic 7. Instead, we find that there are significant fluctuations in the relationship between factors such as wealth, democracy, etc., and the extent to which these states discuss sustainable development in their GD statements. These relationships require further analysis and exploration.
\section{Conclusion}
Despite decisions taken in international organisations having a huge impact on development initiatives and outcomes, we know relatively little about the agenda-setting process around the global governance of development. Using a novel approach that applies NLP methods to a new dataset of speeches in the UN General Debate, this paper has uncovered the main development topics discussed by governments in the UN, and the structural factors that influence the degree to which governments discuss international development. In doing so, the paper has shed some light on state preferences regarding the international development agenda in the UN. The paper more broadly demonstrates how text analytic approaches can help us to better understand different aspects of global governance.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{intro}
Two interesting mathematical models account for the functional architecture of the $V_1$ visual cortex,
both of them originally developed in the '80s and considerably expanded and refined in recent years:
a model of receptor profiles in terms of Gabor functions, \cite{Daug}, \cite{Daug2}, \cite{Marce},
and a model of the connectivity and the hypercolumn structure of the $V_1$ cortex in terms of
contact geometry and contact bundles, \cite{Hoff}. These two aspects of the mathematical modeling
of the visual cortex may appear at first unrelated, the first capturing functional analytic aspects of signal
encoding in terms of the neurons receptor profiles, the latter describing the geometric structure of
the visual cortex that captures the sensitivity to orientation of the simple cells in the hypercolumns.
The fiber bundle contact geometry also provides a good geometric description of the connections
between simple cells in different hypercolumns. These two mathematical models are in fact closely
entangled, as the more recent work of Petitot and Tondut \cite{PeTon}, Citti and Sarti \cite{CiSa}, and
Sarti--Citti--Petitot \cite{SaCiPe} has clearly shown. The simple cells profile shapes and their
geometric arrangement in the hypercolumn structure are simultaneously governed by the
same action of the rototranslation group, combined with a principle of selectivity of maximal response
(see \S 2 of \cite{SaCiPe}). Thus, it appears that the contact geometry of the $V_1$ cortex also
determines its signal analysis properties. This is an interesting mathematical observation in itself,
that certain classes of contact manifolds carry an associated signal analysis framework entirely
determined by the geometry. Part of the purpose of the present paper is to clarify what this
means in the specific case of contact $3$-manifolds that are Legendrian circle bundles over a
surface, and contact $5$-manifolds obtained from them by symplectization and contactization,
which are the two cases of direct relevance to the neuroscience modeling. Our main
focus here is on identifying additional aspects of the contact geometry that have a direct influence
on the signal analysis properties, beyond the relations already identified in previous work
such as \cite{SaCiPe}. In particular, while previous work focused on continuous representations
of signals through short time Fourier transform, we argue that a more refined model should
incorporate the discrete nature of the neuron population involved, and identify a mechanism
that ensures a good signal encoding and decoding in terms of a selection of a
discrete system of filters. We argue that this selection of a discrete Gabor system with
adequate signal analysis properties can also be seen as directly encoded in the
geometric model of the $V_1$-cortex. Our key observation to this purpose is the
fact that the combined presence of the contact structure on the Legendrian circle bundle
and a complex structure on the base surface determines an associated
bundle of framed lattices, which in turn provide the required discrete sampling set for
the Gabor frames.
\smallskip
Gabor filters play an essential role in both neural modeling and signal processing. In the works of Daugman
\cite{Daug}, \cite{Daug2} and Marcelja \cite{Marce}, it is argued why Gabor filters are the right choice for the modeling of receptive profiles of visual neurons in $V_1$. In particular, simple cells of the primary visual cortex try to localize at the same time the position $(x,y)$ and the frequency $w$ of a signal detected in the retina. However, the uncertainty principle in signal analysis indicates that it is impossible to detect both position and frequency with arbitrary precision. Gabor filters minimize the uncertainty and therefore process spatiotemporal information optimally. Thus, a receptive profile, centered at $(x_0,y_0)$, with preferred spatial frequency $w=\sqrt{u_0^2+v_0^2}$ and preferred orientation $\theta=\arctan(\frac{v_0}{u_0})$, is efficiently modelled
by a bivariate Gabor function $f(x,y)$ of the form $\exp(-\pi((x-x_0)^2+(y-y_0)^2))\exp(-2\pi i(u_0(x-x_0)+v_0(y-y_0)))$. Given a distribution $I(x,y)$, specifying the distribution of light intensity of a visual stimulus, the receptive profile generates the response to that distributed stimulus via integration
$$response=\iint_{-\infty}^{+\infty} I(x,y)f(x,y)\,dx\,dy.$$ The integral representing the response of a receptive field is a familiar object in time-frequency analysis: the short time Fourier transform. The short time Fourier transform of a signal $I$ with respect to a window function $g$ is a linear and continuous joint time-frequency representation defined as
$$V_gI(x,w)=\int_{\mathbb{R}^d}I(t)\overline{g(t-x)}e^{-2\pi i t\cdot w}\,dt,\textit{ for }x,w\in\mathbb{R}^d.$$
More specifically, the response of a receptive profile to a visual signal $I$ is equal to the short time Fourier transform of the signal $I$ with respect to the Gaussian $g(x,y)=\exp(-\pi(x^2+y^2))$, multiplied by a complex exponential
$$response=e^{2\pi i (x_0,y_0)\cdot(u_0,v_0)}V_gI(x_0,y_0,u_0,v_0).$$
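As a sanity check, this relation between the receptive-field response and the STFT can be verified numerically on a discretized patch of the plane. The stimulus and filter parameters below are made up for illustration (and since the Gaussian window is real, $\overline{g}=g$):

```python
import numpy as np

# Discretize a patch of the plane; the Gaussian factors decay fast
# enough for a finite grid to approximate the integrals well.
xs = np.linspace(-6.0, 6.0, 241)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")

x0, y0 = 0.5, -0.3      # receptive-field centre (illustrative values)
u0, v0 = 1.0, 2.0       # preferred spatial frequency components

I = np.exp(-((X - 1.0) ** 2 + Y ** 2))   # a toy light-intensity stimulus

# Receptive profile: Gaussian envelope times a complex carrier wave.
g = np.exp(-np.pi * ((X - x0) ** 2 + (Y - y0) ** 2))
f = g * np.exp(-2j * np.pi * (u0 * (X - x0) + v0 * (Y - y0)))

# Response as the integral of I against the receptive profile.
response = np.sum(I * f) * dx * dx

# The same number via the STFT with the (real) Gaussian window:
# response = exp(2*pi*i*(x0*u0 + y0*v0)) * V_g I(x0, y0, u0, v0).
V_gI = np.sum(I * g * np.exp(-2j * np.pi * (u0 * X + v0 * Y))) * dx * dx
phase = np.exp(2j * np.pi * (x0 * u0 + y0 * v0))
```

The two quantities agree up to floating-point rounding, since the second expression is obtained from the first by factoring out the phase $e^{2\pi i(x_0u_0+y_0v_0)}$.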
\smallskip
The short time Fourier transform is suitable for most theoretical approaches of space-frequency and time-frequency analysis. However, it is not practical to use continuous representations for experimental purposes, when dealing with a finite (albeit large) population of neurons. Continuous representations of signals, like the short time Fourier transform, allow good encoding and decoding of the signal by using an uncountable system of receptive profiles. Methods from discrete time-frequency analysis solve this problem: a discrete system of Gabor elementary functions is enough to reconstruct and deconstruct the signal. If the window function is supported on a subset of $\mathbb{R}^{2d}$ centered at $(x,y)$, the STFT $V_gI(x_0,y_0,u_0,v_0)$ carries the same information for neighbouring points in the support of $g$, and therefore it is possible to reduce the sampling set without compromising the quality of encoding and decoding of the signal. While there is a rich literature on the representation of receptive profiles by continuous time-frequency signal representations, the question that arises is whether the functional geometry of the visual mechanism directly incorporates a choice of a discrete sampling set suitable for encoding and decoding visual stimuli.
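A finite-dimensional toy model makes the discrete-sampling point concrete: on the cyclic group $\mathbb{Z}_N$ one can take a periodized Gaussian window, form its time-frequency shifts along a sublattice $a\mathbb{Z}_N\times b\mathbb{Z}_N$, and check the frame condition directly through the spectrum of the frame operator. The parameters below are illustrative and not taken from the papers under discussion:

```python
import numpy as np

N, a, b = 24, 2, 2    # signal length and lattice steps, with a*b < N
n = np.arange(N)
g = np.exp(-np.pi * ((n - N / 2) / 4.0) ** 2)   # Gaussian window

# Gabor atoms: cyclic time shifts by multiples of a, modulations by
# multiples of b.
atoms = [np.roll(g, m * a) * np.exp(2j * np.pi * k * b * n / N)
         for m in range(N // a) for k in range(N // b)]
G = np.array(atoms)

# Frame operator S = sum_j g_j g_j^*; the system is a frame iff S is
# positive definite, with frame bounds the extreme eigenvalues of S.
S = G.conj().T @ G
eigvals = np.linalg.eigvalsh(S).real
A_bound, B_bound = eigvals.min(), eigvals.max()
```

Here `A_bound > 0` certifies that every signal in $\mathbb{C}^N$ can be stably reconstructed from the finitely many inner products with the Gabor atoms; thinning the lattice (increasing $a$ and $b$) eventually destroys this property.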
\smallskip
An approach to modelling geometrically the functional architecture of the $V_1$ visual cortex
in terms of contact and sub-Riemannian geometry was developed by
Jean Petitot, and by Giovanna Citti and Alessandro Sarti, \cite{Pet}, \cite{SaCiPe}, \cite{SaCi-book}.
The purpose of our note here is to highlight some aspects of the contact geometry of the
visual cortex, with special attention to a geometric mechanism for the generation of
families of Gabor frames. These give rise to a signal analysis setting that is adapted to
the underlying contact geometry. We focus here on the specific $3$-dimensional case of
the manifold of contact elements of a $2$-dimensional surface, as this is the setting
underlying the model of \cite{SaCiPe}. We also
discuss the case of an associated $5$-dimensional contact manifold considered in
\cite{BaspSartiCitti}. We will not discuss in this paper the more general question of
Gabor frames on arbitrary contact manifolds, which we plan to develop elsewhere, since
our main goal here is only to investigate some specific geometric aspects of the visual
cortex model developed in \cite{SaCiPe} and in \cite{BaspSartiCitti}.
\smallskip
The contact $3$-manifold underlying the model of the $V_1$ cortex of \cite{Pet}, \cite{SaCiPe}
is of the form $M={\mathbb S}(T^* S)$, namely the unit sphere bundle of the cotangent bundle
of a $2$-dimensional surface $S$, also known as the manifold of contact elements
of $S$. One of our main observations here is that the Legendrian circle bundle structure
of $M$, together with the existence of an almost complex structure on the tangent
bundle $TS$, provide a natural choice of a framed lattice (a lattice together with
the choice of a basis) on the bundle ${\mathcal E}\oplus {\mathcal E}^\vee$ over the
contact $3$-manifold $M$, where ${\mathcal E}$ is the pullback of $TS$ to $M$.
This lattice determines an associated Gabor system, which has the general
form of the Gabor filters considered in \cite{SaCiPe}. Using the complex
analytic method of Bargmann transforms, we investigate when the frame
condition is satisfied, so that one obtains Gabor frames for signal analysis
consistently associated to the fibers of ${\mathcal E}$.
\par In terms of the geometric model of the $V_1$ visual cortex, this shows that
the contact geometry directly determines the signal analysis, the
Gabor frames property, and the observed shape of the receptor profiles of the $V_1$ neurons.
We show, in particular, that the window function proposed in \cite{SaCiPe} to model the
receptor profiles, together with a scaling of the framed lattice determined by the
injectivity radius function of the surface $S$ (representing the retinal surface), give rise
to a Gabor system on the bundle of signal planes on the contact $3$-manifold $M$ (which
models the $V_1$ cortex) that satisfies the frame condition, hence has optimal signal
analysis properties.
\smallskip
On the other hand, the cortical simple cells are organized in hypercolumns, over each point $(x,y)$ of the retina, with respect to their sensitivity to a specific value of a visual feature. These features include orientation, color, spatial frequency, etc. In this context, the hypercolumnar architecture of $V_1$, for more than one visual feature, is modeled by a fiber bundle of dimension higher than $3$ over the retina. Each visual feature considered adds one more dimension to the fibers of the bundle. Thus, for the processing of signals in an extended model, which includes more features than the three-dimensional orientation-selectivity framework, it is essential that higher dimensional models have optimal signal analysis properties. In \cite{BaspSartiCitti}, Baspinar, Citti and Sarti extend the orientation selective model to include spatial frequency and phase. However, we show that the lift of the window
function, proposed in \cite{SaCiPe} for the 3-dim model,
to the $5$-dimensional contact manifold given by the contactization of the
symplectization of $M$, in the form proposed in \cite{BaspSartiCitti}, only defines a
Gabor system in a distributional sense, and cannot satisfy the frame condition even
distributionally. We show that a simple modification of the proposed window function of
\cite{BaspSartiCitti} restores the desired Gabor frame property and allows for good
signal analysis in this higher dimensional model.
\smallskip
\section{Signals on manifolds of contact elements}
In this section we present the main geometric setting, namely a
contact manifold that is either a $3$-manifold $M$
given by the manifold of contact elements of a compact $2$-dimensional surface, or the
$5$-manifold given by the contactization of the symplectization of $M$. These
are, respectively, the geometries underlying the models of \cite{PeTon}, and of \cite{Pet}, \cite{SaCiPe},
and the model of \cite{BaspSartiCitti}.
\smallskip
The main aspect of the geometry that will play a crucial role in our construction of
the associated Gabor frames is the fact that these contact $3$-manifolds are endowed
with a pair of contact forms $\alpha, \alpha_J$ related through the almost-complex
structure $J$ of the tangent bundle $TS$. They have the property that the circle fibers
are Legendrian for both contact forms, while the Reeb vector field of each is
Legendrian for the other. This leads to a natural framing, namely a natural choice of
a basis for the tangent bundle $TM$, completely determined by the contact geometry.
It consists of the fiber direction $\partial_\theta$ and the two Reeb vector fields
$R_\alpha$, $R_{\alpha_J}$.
\subsection{Legendrian circle bundles}
The results we discuss in this section apply, slightly more generally, to the case
of a $3$-manifold $M$ that is a Legendrian circle bundle over a
two dimensional compact surface $S$.
\smallskip
The Legendrian condition means that the fiber directions $T S^1$ inside the tangent
bundle $T M$ are contained in the contact planes distribution $\xi \subset TM$. Such
Legendrian circle bundles over surfaces are classified, see \cite{Lutz}, p.~179.
They are all either given by the unit cosphere bundle $M={\mathbb S}(T^* S)$, with the
contact structure induced by the natural symplectic structure on the cotangent
bundle $T^* S$, or by pullbacks of the contact structure on $M$ to a
$d$-fold cyclic covering $M'\to M$, that exists for $d$ dividing $2g-2$,
where $g=g(S)$ is the genus of $S$.
The case of $M={\mathbb S}(T^* S)$ is the manifold of contact elements of $S$. In the
following, we will restrict our discussion to this specific case.
\smallskip
In the geometric models of the $V_1$ cortex developed in \cite{Pet}, \cite{SaCiPe}, \cite{SaCi-book},
the surface $S$ represents the retinal surface, while the fiber direction in the Legendrian circle
bundle $M={\mathbb S}(T^* S)$ represents an additional orientation variable, which keeps track of
how the tangent orientation in $TS$ of a curve in $S$ is lifted to a propagation curve in the visual cortex,
where a line is represented by the envelope of its tangents rather than as a set of points.
\smallskip
\subsection{Liouville tautological $1$-form and almost-complex twist}\label{Jsec}
Given a manifold $Y$, the cotangent bundle $T^*Y$ has a canonical
Liouville $1$-form, given in coordinates by $\lambda =\sum_i p_i\, dx^i$,
or intrinsically as $\lambda_{(x,p)}(v)=p(d\pi(v))$ for $v\in T_x Y$ and
$\pi: T^* Y \to Y$ the bundle projection. The canonical symplectic form
on $T^*Y$ is $\omega=d\lambda$.
\smallskip
Given an almost complex structure $J$ on $Y$, namely a $(1,1)$
tensor $J$ with $J^2=-1$, written in coordinates as
$J=\sum_{k,\ell} J^k_\ell dx^\ell\otimes \partial_{x_k}$, the twist
by $J$ of the tautological Liouville $1$-form on $T^* Y$ is given by
$$ \lambda_J := \sum_{k,\ell} p_k J^k_\ell dx^\ell, $$
and the $2$-form $\omega_J =d \lambda_J$ satisfies
$$ \omega_J (\cdot,\cdot)=\omega( \hat J\cdot, \cdot), $$
where in local coordinates
$$ \hat J=\begin{pmatrix} J^i_j & 0 \\ \sum_k p_k (\partial_{x_j} J^k_i -\partial_{x_i} J^k_j) & J^j_i
\end{pmatrix}, $$
see for instance \cite{Bertrand}.
\smallskip
In particular, in the case of a Riemann surface $S$, with coordinates $z=x+iy$ on $S$
and $p=(u,v)$ in the cotangent fiber, the tautological $1$-form is locally of the form
$\lambda =u\, dx + v\, dy$, with $\omega=du\wedge dx+ dv \wedge dy$.
The twisted tautological form with respect to $J$ given by multiplication by the
imaginary unit, $J: (u,v)\mapsto (-v, u)$, is
$\lambda_J= -v\, dx + u\, dy$, with $\omega_J = -dv\wedge dx +du \wedge dy$.
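As a quick consistency check (an illustrative numerical sketch, not part of the formal development), the relation $\omega_J(\cdot,\cdot)=\omega(\hat J\cdot,\cdot)$ can be verified in the flat coordinates $(x,y,u,v)$ above: for the constant $J$ given by multiplication by the imaginary unit, the correction term in $\hat J$ vanishes, and the placement of $J$ versus its transpose among the diagonal blocks is an index-convention choice, made below so that the identity holds in these coordinates.

```python
import numpy as np

# Coordinates ordered (x, y, u, v) on T^*S over a chart; a 2-form is encoded
# as the antisymmetric matrix O with  omega(X, Y) = X^T O Y.
def two_form(*terms):
    """Each term (c, i, j) contributes c * dx^i wedge dx^j."""
    O = np.zeros((4, 4))
    for c, i, j in terms:
        O[i, j] += c
        O[j, i] -= c
    return O

omega   = two_form((1, 2, 0), (1, 3, 1))   # du ^ dx + dv ^ dy
omega_J = two_form((-1, 3, 0), (1, 2, 1))  # -dv ^ dx + du ^ dy

# Constant J = multiplication by i: (a, b) -> (-b, a); the lower-left
# correction block of hat-J vanishes since J has constant coefficients.
J = np.array([[0.0, -1.0], [1.0, 0.0]])
Z = np.zeros((2, 2))
J_hat = np.block([[J.T, Z], [Z, J]])       # block-diagonal lift to T(T^*S)

# check  omega_J(X, Y) = omega(J_hat X, Y)  for all X, Y
assert np.array_equal(omega_J, J_hat.T @ omega)
```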
\smallskip
\begin{prop}\label{JtwistLem}
On the contact $3$-manifold $M_w={\mathbb S}_w(T^*S)$, given by the cosphere bundle of radius $w$,
consider the contact $1$-form $\alpha$ induced by the tautological Liouville $1$-form $\lambda$
and the contact $1$-form $\alpha_J$ determined by the twisted $\lambda_J$. The
contact planes of these two contact structures intersect along the circle direction $\partial_\theta$.
The Reeb field $R_\alpha$ of $\alpha$ is Legendrian for $\alpha_J$ and the Reeb field $R_{\alpha_J}$
is Legendrian for $\alpha$.
The twist $J$ fixes the $\partial_\theta$ generator and exchanges the generators $R_{\alpha_J}$
and $R_\alpha$.
\end{prop}
\proof
On the contact $3$-manifold $M_w={\mathbb S}_w(T^*S)$, given by the cosphere bundle of radius $w$,
the contact $1$-form induced by
the tautological Liouville $1$-form $\lambda$, written in a chart $(U,z)$ on $S$ with
local coordinate $z=x+iy$, is given by
\begin{equation}\label{alphaM}
\alpha =w \cos(\theta) dx + w \sin(\theta) dy,
\end{equation}
where $(w,\theta)$ are the polar coordinates in the cotangent fibers, and the corresponding
contact planes distribution on $U\times S^1_w$ is generated by the vector fields $\partial_\theta$ and
$-\sin(\theta) \partial_x + \cos(\theta) \partial_y$, and with the Reeb vector field
\begin{equation}\label{Ralpha}
R_\alpha =w^{-1} \cos(\theta) \partial_x + w^{-1} \sin(\theta) \partial_y \, .
\end{equation}
\smallskip
The contact structure on $M_w$ induced by the twisted Liouville $1$-form $\lambda_J$ is given in
the same chart $(U,z)$ by
\begin{equation}\label{alphaJM}
\alpha_J = - w \sin(\theta) dx + w \cos(\theta) dy
\end{equation}
with contact planes spanned by $\partial_\theta$ and $\cos(\theta) \partial_x + \sin(\theta) \partial_y$
and with Reeb vector field
\begin{equation}\label{RalphaJ}
R_{\alpha_J}=- w^{-1} \sin(\theta) \partial_x + w^{-1} \cos(\theta) \partial_y.
\end{equation}
From \eqref{Ralpha} and \eqref{RalphaJ} one checks directly that $\alpha_J(R_\alpha)=0$ and
$\alpha(R_{\alpha_J})=0$, so that each Reeb vector field is Legendrian for the other contact
structure, while the two contact plane distributions intersect along the circle direction
$\partial_\theta$. Moreover, since $J\partial_x =\partial_y$ and $J\partial_y=-\partial_x$,
the twist satisfies $J R_\alpha = R_{\alpha_J}$ and $J R_{\alpha_J}=-R_\alpha$, and it fixes
the generator $\partial_\theta$, since it acts on the cotangent fibers as the rotation
$\theta \mapsto \theta + \pi/2$.
\endproof
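The computations in this proof can also be double-checked symbolically. In the following sketch (an illustration only), $1$-forms on $M_w$ are encoded as coefficient vectors in the coframe $(dx, dy, d\theta)$ and exterior derivatives as antisymmetric matrices; the assertions verify the Reeb conditions for \eqref{Ralpha} and \eqref{RalphaJ} and the cross-Legendrian property.

```python
import sympy as sp

w, th = sp.symbols('w theta', positive=True)

# 1-forms on M_w as coefficient vectors in the coframe (dx, dy, dtheta)
alpha   = sp.Matrix([w*sp.cos(th),  w*sp.sin(th), 0])
alpha_J = sp.Matrix([-w*sp.sin(th), w*sp.cos(th), 0])

def d(form):
    """Exterior derivative as the antisymmetric matrix O with
    d(form)(X, Y) = X^T O Y; coefficients depend on theta only."""
    O = sp.zeros(3, 3)
    for i in range(3):
        da = sp.diff(form[i], th)   # d a_i = (d a_i / d theta) dtheta
        O[2, i] += da
        O[i, 2] -= da
    return O

R_alpha   = sp.Matrix([sp.cos(th)/w,  sp.sin(th)/w, 0])   # as in (Ralpha)
R_alpha_J = sp.Matrix([-sp.sin(th)/w, sp.cos(th)/w, 0])   # as in (RalphaJ)

for form, R in [(alpha, R_alpha), (alpha_J, R_alpha_J)]:
    assert sp.simplify(form.dot(R)) == 1                  # alpha(R) = 1
    assert sp.simplify(d(form) * R) == sp.zeros(3, 1)     # i_R d(alpha) = 0

# each Reeb field is Legendrian for the other contact form
assert sp.simplify(alpha_J.dot(R_alpha)) == 0
assert sp.simplify(alpha.dot(R_alpha_J)) == 0
```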
\smallskip
\subsection{Symplectization and contactization}\label{SymplContSec}
Given a contact manifold $(M,\alpha)$, with $\alpha$ a given contact $1$-form,
one can always form a symplectic manifold $(M\times {\mathbb R},\omega)$ with
$\omega=d(e^s \cdot \alpha)$ with $s\in {\mathbb R}$ the cylinder coordinate. Setting
$w=e^s\in {\mathbb R}^*_+$, one has $\omega= dw \wedge \alpha + w\, d\alpha$ on
$M\times {\mathbb R}^*_+$. In particular, in the case of the contact manifold $M={\mathbb S}(T^* S)$
this gives the following.
\smallskip
\begin{lem}\label{symplem}
The complement of the zero section $T^*S_0:= T^*S\smallsetminus \{0\}$
is the symplectization of the manifold of contact elements ${\mathbb S}(T^* S)$, with symplectic form
written in a chart $(U,z)$ of $S$ with $z=x+iy$ in the form
\begin{equation}\label{omegaTstarS}
\omega = dw \wedge \alpha + w\, d\alpha = du\wedge dx+ dv \wedge dy
\end{equation}
or with the twisted contact and symplectic forms, given in the same local chart by
\begin{equation}\label{omegaJTstarS}
\omega_J =dw \wedge \alpha_J + w\, d\alpha_J = -dv\wedge dx +du \wedge dy,
\end{equation}
for $(u,v)=(w\cos\theta, w\sin\theta)$.
\end{lem}
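The coordinate identities \eqref{omegaTstarS} and \eqref{omegaJTstarS} can likewise be verified symbolically. In the sketch below (an illustration only), $2$-forms on the chart with coordinates $(x,y,w,\theta)$ are encoded as antisymmetric matrices, $\alpha$ denotes the unit-radius contact form $\cos(\theta)dx+\sin(\theta)dy$, and $(u,v)=(w\cos\theta, w\sin\theta)$.

```python
import sympy as sp

w, th = sp.symbols('w theta', positive=True)
# coframe basis ordered (dx, dy, dw, dtheta), as coefficient vectors
dx, dy, dw, dth = (sp.Matrix([int(i == j) for i in range(4)]) for j in range(4))

def wedge(a, b):
    """a ^ b as the antisymmetric matrix of the 2-form a x b - b x a."""
    return a * b.T - b * a.T

alpha = sp.cos(th)*dx + sp.sin(th)*dy                    # unit contact form
d_alpha = wedge(-sp.sin(th)*dth, dx) + wedge(sp.cos(th)*dth, dy)

# symplectization form  omega = dw ^ alpha + w d(alpha)
omega = wedge(dw, alpha) + w*d_alpha

# Cartesian fiber coordinates u = w cos(theta), v = w sin(theta)
du = sp.cos(th)*dw - w*sp.sin(th)*dth
dv = sp.sin(th)*dw + w*sp.cos(th)*dth

# omega = du ^ dx + dv ^ dy, as in \eqref{omegaTstarS}
assert sp.simplify(omega - (wedge(du, dx) + wedge(dv, dy))) == sp.zeros(4, 4)
```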
\smallskip
Given a symplectic manifold $(Y,\omega)$, if the symplectic form is exact, $\omega=d\lambda$,
then one can construct a contactization $(Y\times S^1, \alpha)$ with $\alpha = \lambda-d\phi$,
where $\phi$ is the angle coordinate on $S^1$. When the symplectic form is not exact, it is
possible to construct a contactization if there is some $\hbar >0$ such that the differential form
$\omega/\hbar$ defines an integral cohomology class, $[\omega/\hbar]\in H^2(Y,{\mathbb Z})$. In this case
there is a principal $U(1)$-bundle ${\mathcal S}$ on $Y$ with Euler class $e({\mathcal S})=[\omega/\hbar]$,
endowed with a connection $\nabla$ with curvature $\nabla^2=\omega/\hbar$. This is also
known as the {\em prequantization} bundle. This
connection determines a $U(1)$-invariant $1$-form $\alpha$ on ${\mathcal S}$. The non-degeneracy
condition for the symplectic form $\omega$ implies the contact condition for the $1$-form $\alpha$.
Different choices of the potential $\alpha$ of the connection $\nabla$ lead to equivalent contact manifolds
up to contactomorphisms, see \cite{ElHoSa} for a brief summary of symplectization and contactization.
\smallskip
\begin{lem}\label{contsymplem}
The contactization of the symplectization of the contact $3$-manifold $M={\mathbb S}(T^* S)$
is the $5$-manifold $T^*S_0 \times S^1$ with the contact form
$$ \tilde\alpha =\lambda- d\phi= w\alpha -d\phi . $$
\end{lem}
\proof
The symplectization of a contact manifold is an exact symplectic manifold, hence it admits
a contactization in the simpler form described above. Thus, starting with the contact
manifold $M={\mathbb S}(T^* S)$ for a $2$-dimensional compact surface $S$, endowed with the contact
form $\alpha$ as in \eqref{alphaM} that makes $M$ a Legendrian circle bundle, one obtains
the symplectization $T^*S_0$ with Liouville form $\lambda= w\alpha$, $w=e^s\in {\mathbb R}^*_+$,
and the contactization of the resulting exact symplectic manifold $(T^*S_0, \omega=d\lambda)$
is given by $T^*S_0 \times S^1$ with the contact form $\tilde\alpha =\lambda- d\phi= w\alpha -d\phi$.
\endproof
\smallskip
\begin{rem}\label{twistrem} {\rm
The twist $\alpha \mapsto \alpha_J$ of \eqref{alphaJM} of the contact structure on $M={\mathbb S}(T^*S)$
induces corresponding twists of the symplectization $\omega\mapsto \omega_J$ as in
\eqref{omegaJTstarS} and $\tilde\alpha \mapsto \tilde\alpha_J = w\alpha_J -d\phi$. }
\end{rem}
\begin{defn}\label{contsympldef} {\rm
We write
\begin{equation}\label{symplcont}
{\mathcal S}(M):=T^*S_0 \ \ \ \text{ and } \ \ \ {\mathcal C}{\mathcal S}(M):=T^*S_0\times S^1,
\end{equation}
for the symplectization ${\mathcal S}(M)$ of $M={\mathbb S}(T^*S)$ and the contactization ${\mathcal C}{\mathcal S}(M)$
for this symplectization, endowed with the contact and symplectic forms described above. }
\end{defn}
\smallskip
In the context of geometric models of the $V_1$ cortex, the $5$-dimensional contact
manifold ${\mathcal C}{\mathcal S}(M)$ corresponds to the model for the receptive fields considered
in \cite{BaspSartiCitti}, where an additional pair of dual variables
is introduced, describing phase and velocity of spatial wave propagation.
\medskip
\subsection{The bundle of signal planes}\label{Evec}
In the model of receptor profiles in the visual cortex (see \cite{Pet}, \cite{SaCiPe}),
signals are regarded as functions on the retinal surface and the receptor profiles
are modelled by Gabor filters in these and dual variables. When taking into
account the underlying geometric model, however, one needs to distinguish between
the local variables $(x,y)$ on a chart $(U,z=x+iy)$ on the surface $S$ (or the local variables $(x,y,\theta)$ on
the $3$-manifold $M$) and the linear variables in its tangent space $T_{(x,y)} S$.
Thus, we think of the retinal signal
as a collection of compatible signals in the planes $T_{(x,y)} S$, as $(x,y)$ varies in $S$.
We consider a real $2$-plane bundle on the $3$-manifold $M$ that describes this
geometric space where retinal signals are mapped.
\smallskip
\begin{defn}\label{signalplanedef} {\rm
Let ${\mathcal E}$ be the real $2$-plane bundle on
the contact $3$-manifold $M={\mathbb S}(T^*S)$
obtained by pulling back the tangent bundle $TS$ of the surface $S$ to
$M$ along the projection $\pi: {\mathbb S}(T^*S) \to S$ of the unit sphere bundle of $T^*S$,
\begin{equation}\label{Ebundle}
{\mathcal E}=\pi^* TS .
\end{equation}
At each point $(x,y,\theta)\in M$, with $z=x+iy$ the coordinate in a local chart $(U,z)$ of $S$,
the fiber ${\mathcal E}_{(x,y,\theta)}$ is the same
as the fiber of the tangent bundle $T_{(x,y)} S$.
Also let ${\mathcal E}^\vee$ be the dual bundle of ${\mathcal E}$, namely the bundle
of linear functionals on ${\mathcal E}$,
$$ {\mathcal E}^\vee ={\rm Hom}({\mathcal E},{\mathbb R}). $$ }
\end{defn}
\medskip
Locally the exponential map from $TS$ to $S$ allows for a comparison between
the description of signals in terms of the linear variables of $TS$ and the nonlinear variables of $S$.
The linear variables of $TS$ are the ones to which the Gabor filter analysis applies. Thus,
in terms of the contact $3$-manifold $M$, we think of a signal as a consistent family of
signals on the fibers ${\mathcal E}_{(x,y,\theta)}$, or equivalently a signal on the total space of the $2$-plane
bundle ${\mathcal E}$. The filters in turn will depend on the dual linear variables of ${\mathcal E}$ and ${\mathcal E}^\vee$.
We make this idea more precise in the next subsections.
\subsection{Fourier transform relation and signals}
Over a compact Riemannian manifold $Y$, functions on the tangent and cotangent
bundles $TY$ and $T^*Y$ are related by Fourier
transform in the following way. Let ${\mathcal S}(TY,{\mathbb R})$ denote the vector space of smooth real
valued functions on $TY$ that are rapidly decaying along the fiber directions, and similarly
for ${\mathcal S}(T^*Y,{\mathbb R})$. Let $\langle \eta,v \rangle_x$ denote the pairing of tangent and cotangent
vectors $v\in T_xY$, $\eta\in T^*_xY$ at a point $x\in Y$. One defines
$$ {\mathcal F}: {\mathcal S}(TY,{\mathbb R}) \to {\mathcal S}(T^*Y,{\mathbb R}) $$
$$ ({\mathcal F}\varphi)_x(\eta) = \frac{1}{(2\pi)^{\dim Y}} \int_{T_xY} e^{2\pi i \langle \eta,v \rangle_x} \varphi_x(v)\, d{\rm vol}_x(v), $$
with respect to the volume form on $T_x Y$ induced by the Riemannian metric.
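As a one-fiber numerical illustration (using the unitary $e^{2\pi i}$ kernel without a normalizing prefactor, since normalization conventions vary), one can check the classical fact that the Gaussian $\varphi(t)=e^{-\pi t^2}$ on a fiber $T_xY\simeq {\mathbb R}$ is its own Fourier transform.

```python
import numpy as np

# Riemann-sum check that exp(-pi t^2) is its own Fourier transform
# in the e^{2 pi i} convention.
t = np.linspace(-8, 8, 4001)
dt = t[1] - t[0]
phi = np.exp(-np.pi * t**2)

def fourier(eta):
    # approximates  int e^{2 pi i eta t} phi(t) dt
    return np.sum(np.exp(2j * np.pi * eta * t) * phi) * dt

etas = np.linspace(-2.0, 2.0, 9)
F = np.array([fourier(e) for e in etas])
assert np.allclose(F, np.exp(-np.pi * etas**2), atol=1e-7)
```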
\smallskip
Because of this Fourier transform relation, cotangent vectors in $T^* Y$ are
sometimes referred to as ``spatial frequencies".
\smallskip
In the model we are considering, the manifold over which signals are defined
is the total space ${\mathcal E}$ of the bundle of signal planes introduced in \S \ref{Evec} above,
namely the real $2$-plane bundle ${\mathcal E}=\pi^* TS$. We can easily generalize the
setting described above, by replacing the pair of tangent and cotangent bundle $TY$ and $T^* Y$
of a manifold $Y$ with a more general pair of a vector bundle ${\mathcal E}$ and its dual ${\mathcal E}^\vee$.
The variables in the fibers of ${\mathcal E}^\vee$ are the spatial frequency variables of the
models of the visual cortex of \cite{Pet}, \cite{SaCiPe}.
In this geometric setting a ``signal" is described as follows.
\smallskip
\begin{defn}\label{signaldef} {\rm
A signal is a real-valued function ${\mathcal I}\in L^2({\mathcal E},{\mathbb R})$ on the total space ${\mathcal E}$ of
the bundle of signal planes, where the $L^2$-norm is taken with respect to the measure given by
the volume form of $M$ and the norm on the fibers of ${\mathcal E}$ induced by the inner
product on $TS$ through the pullback map. A smooth signal is a
smooth function that decays to zero at infinity in the fiber directions,
${\mathcal I} \in {\mathcal C}^\infty_0({\mathcal E},{\mathbb R})$. }
\end{defn}
\smallskip
The assumption that ${\mathcal I}$ is smooth is quite strong,
as one would like to include signals that have sharp contours and
discontinuous jumps, but we can assume that such signals are smoothable
by convolution with a sufficiently small mollifier function that replaces
sharp contours with a steep but smoothly varying gradient.
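This smoothing step can be illustrated by a one-dimensional sketch, convolving a jump discontinuity with a normalized Gaussian mollifier of small width (the width $\varepsilon$ below is an arbitrary illustrative choice).

```python
import numpy as np

# Smoothing a discontinuous 1-d "signal" by convolution with a mollifier
t = np.linspace(-1, 1, 2001)
dt = t[1] - t[0]
signal = (t > 0).astype(float)       # sharp contour (a jump)

eps = 0.05
kernel = np.exp(-(t / eps) ** 2)
kernel /= kernel.sum() * dt          # normalized mollifier of width eps
smooth = np.convolve(signal, kernel, mode='same') * dt

# the mollified signal is close to the original away from the jump ...
assert abs(smooth[200] - 0.0) < 1e-3 and abs(smooth[1800] - 1.0) < 1e-3
# ... and replaces the jump by a steep smooth transition through 1/2
assert abs(smooth[1000] - 0.5) < 0.02
```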
\smallskip
\subsection{Signal analysis and filters}
For signals defined over ${\mathbb R}^n$, instead of over a more general manifold,
signal analysis is performed through a family of filters (wavelets), and the
signal is encoded through the coefficients obtained by integration against the filters.
Under good conditions on the family of filters, such as the frame condition for Gabor
analysis, both the encoding and the decoding maps are bounded operators, so the
signal can be reliably recovered from its encoding through the filters.
\smallskip
For signals on manifolds there is in general no good construction of
associated filters for signal analysis, although partial results exist involving
splines discretization, diffusive wavelets, or special geometries such as
spheres and conformally flat manifolds, see for instance \cite{BerKey}, \cite{EbWi}, \cite{Pese}.
One of our goals here is to show that geometric modelling of the visual cortex in terms of
contact geometry and the description of receptive fields in terms of Gabor frames suggest a
general way of performing signal analysis on a specific class of contact manifolds.
\section{Gabor filters on the manifold of contact elements}\label{GaborContactEltsSec}
In this section we present a construction of a family of Gabor systems associated to
the contact manifolds described in the previous sections.
As above we consider a compact Riemann surface $S$, and its
manifold of contact elements $M={\mathbb S}(T^* S)$ with the two
contact $1$-forms $\alpha$ and $\alpha_J$ described in \S \ref{Jsec} above.
\smallskip
\subsection{Gabor filters and receptor profiles}
As argued in \cite{Daug2}, simple-cells in the $V_1$ cortex try to localize at the same time the position
and the frequency of a signal, and the shape of simple cells is related to their functionality.
However, the uncertainty principle in space-frequency analysis implies that it is not possible to detect,
with arbitrary precision, both position and momentum. At the same time, the need for the visual system
to process efficiently spatiotemporal information requires optimal extraction and representation of images
and their structure. Gabor filters provide such optimality, since they minimize the uncertainty, and are
therefore regarded as the most suitable functions to model the shape of the receptive profiles.
\smallskip
The hypothesis that receptor field profiles are Gabor filters is
motivated by the analytic properties of Gabor frames. In addition to the minimization of the uncertainty
principle mentioned above, the frame condition for Gabor systems provides good encoding and
decoding properties in signal analysis, with greater stability to errors than in the case of a Fourier
basis. It is therefore a reasonable assumption that such systems would provide an optimal form
of signal analysis implementable in biological systems. We will be working here under the
hypothesis that receptive field profiles in the $V_1$ cortex are indeed Gabor filters. In this section
we show how to obtain such Gabor filters directly from the contact geometry described in the
previous section, while in the next section we discuss the frame condition.
\smallskip
\subsection{Gabor systems and Gabor frames}
We recall here the notion and basic properties of $d$-dimensional Gabor systems and Gabor frames,
see \cite{Groch2}. Given a point $\lambda=(s,\xi)\in {\mathbb R}^{2d}$, with $s,\xi\in {\mathbb R}^d$, we consider the
operator $\rho(\lambda)$ on $L^2({\mathbb R}^d)$ given by
\begin{equation}\label{rholambda}
\rho(\lambda) := e^{2\pi i \langle s, \xi\rangle}\, T_s \, M_\xi
\end{equation}
with the translation and modulation operators
\begin{equation}\label{TMops}
(T_s f)(t) = f(t-s), \ \ \ \ \ \ (M_\xi f)(t) = e^{2\pi i \langle \xi, t\rangle} f(t),
\end{equation}
which satisfy the commutation relation
$$ T_s M_\xi = e^{-2\pi i \langle s,\xi\rangle} M_\xi T_s. $$
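These operators and their commutation relation are straightforward to check numerically; the following sketch uses an arbitrary test function $f$ and arbitrary parameters $s,\xi$ (illustrative choices).

```python
import numpy as np

f = lambda t: np.exp(-t**2) * (1 + t)     # arbitrary test function

def T(s, g):   # translation operator T_s
    return lambda t: g(t - s)

def M(xi, g):  # modulation operator M_xi
    return lambda t: np.exp(2j * np.pi * xi * t) * g(t)

s, xi = 0.7, 1.3
t = np.linspace(-3, 3, 101)
lhs = T(s, M(xi, f))(t)                               # (T_s M_xi f)(t)
rhs = np.exp(-2j * np.pi * s * xi) * M(xi, T(s, f))(t)
assert np.allclose(lhs, rhs)      # T_s M_xi = e^{-2 pi i <s,xi>} M_xi T_s
```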
\smallskip
A Gabor system, for a given choice of a ``window function" $g\in L^2({\mathbb R}^d)$ and a
$2d$-dimensional lattice $\Lambda=A {\mathbb Z}^{2d} \subset {\mathbb R}^{2d}$, for some $A\in {\rm GL}_{2d}({\mathbb R})$,
consists of the collection of functions
\begin{equation}\label{Gaborsys}
{\mathcal G}(g,\Lambda) =\{ \rho(\lambda) g \}_{\lambda\in \Lambda} .
\end{equation}
More generally, Gabor systems can be defined in the same way for discrete sets $\Lambda \subset {\mathbb R}^{2d}$
that are not necessarily lattices. We will consider in this paper cases where the discrete set is a translate of
a lattice by some vector. In general, one assumes (see \cite{Sei}) that the discrete set $\Lambda$ in the
construction of the Gabor system is {\em uniformly discrete}, namely such that
$$ q(\Lambda) = \inf \{ \| \lambda - \lambda' \| \,|\, \lambda, \lambda' \in \Lambda, \,\, \lambda \neq \lambda' \} >0\, . $$
This is clearly satisfied in the case where $\Lambda$ is a translate of a lattice.
\smallskip
A Gabor system ${\mathcal G}(g,\Lambda)$ as in \eqref{Gaborsys} is a {\em Gabor frame} if the
functions $\rho(\lambda) g$ satisfy the {\em frame condition}: there are constants $C,C'>0$
such that, for all $h \in L^2({\mathbb R}^d)$,
\begin{equation}\label{framecond}
C \, \| h \|^2_{L^2({\mathbb R}^d)} \leq \sum_\lambda |\langle h, \rho(\lambda) g \rangle |^2 \leq C' \, \| h \|^2_{L^2({\mathbb R}^d)}.
\end{equation}
\smallskip
The two inequalities in the frame condition ensure that both the encoding map that stores information
about signal $h$ into the coefficients $c_\lambda(h):=\langle h, \rho(\lambda) g \rangle$ for $\lambda\in \Lambda$,
and the decoding map that reconstructs the signal from these coefficients are bounded linear operators.
This ensures good encoding and decoding, even though the Gabor frames $\{ \rho(\lambda) g \}$
do not form an orthonormal basis, unlike in Fourier analysis.
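A minimal finite-dimensional sketch of the frame condition (a discrete toy model on ${\mathbb Z}_N$ with a periodized Gaussian window, not the construction of this paper; the parameters $N$, $a$, $b$ are illustrative): the optimal constants $C, C'$ in \eqref{framecond} are the extreme eigenvalues of the frame operator $S=\sum_\lambda (\rho(\lambda)g)(\rho(\lambda)g)^*$, and the middle term of \eqref{framecond} is the Rayleigh quotient $h^* S h$.

```python
import numpy as np

# Gabor system on C^N: cyclic translations by multiples of a,
# modulations by multiples of b; redundancy N/(a*b) = 4 > 1.
N, a, b = 24, 2, 3
n = np.arange(N)
g = np.exp(-np.pi * (np.minimum(n, N - n) ** 2) / N)   # periodized Gaussian
g /= np.linalg.norm(g)

def atom(k, l):
    return np.exp(2j * np.pi * b * l * n / N) * np.roll(g, a * k)

atoms = [atom(k, l) for k in range(N // a) for l in range(N // b)]
S = sum(np.outer(v, v.conj()) for v in atoms)          # frame operator
eig = np.linalg.eigvalsh(S)                            # ascending order
C, Cp = eig[0], eig[-1]
assert C > 0.5        # lower frame bound is positive: the system is a frame

# frame inequality as a Rayleigh quotient, for a random signal h
rng = np.random.default_rng(1)
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)
coeff_energy = sum(abs(np.vdot(v, h)) ** 2 for v in atoms)
nh2 = np.linalg.norm(h) ** 2
assert C * nh2 - 1e-6 <= coeff_energy <= Cp * nh2 + 1e-6
```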
\smallskip
Window functions are typically assumed to have a Gaussian shape.
It is in general an interesting and highly nontrivial problem of signal analysis to characterize the
lattices $\Lambda$ for which the frame condition \eqref{framecond} holds, for a given choice of window function,
see \cite{Groch2}.
\smallskip
In the modelling of the $V_1$ cortex, receptor profiles are accurately modelled by Gabor functions,
hence it is natural to consider the question of whether there is a lattice $\Lambda$, directly determined
by the geometric model of $V_1$, with respect to which the receptor profiles are organized into a
Gabor frame system. This is the main question we will be focusing on in the rest of this paper.
\smallskip
\subsection{Window function}\label{WinSec}
The construction of Gabor filters we consider here follows closely the model of \cite{SaCiPe},
reformulated in a way that more explicitly reflects the underlying contact geometry described
in the previous section. We first show how to obtain the mother function (window function)
of the Gabor system and then we will construct the lattice that generates the
system of Gabor filters.
\smallskip
Let $V$ and $\eta$ denote, respectively, the linear variables in the fibers $V\in T_{(x,y)}S\simeq {\mathbb R}^2$,
$\eta\in T^*_{(x,y)} S\simeq {\mathbb R}^2$, with $\langle \eta, V\rangle_{(x,y)}$ the duality pairing
of $T^*_{(x,y)}S$ and $T_{(x,y)}S$. We write $V=(V_1,V_2)$ and $\eta=(\eta_1,\eta_2)$
in the bases $\{ \partial_x, \partial_y \}$ and $\{ dx, dy \}$ of the
tangent and cotangent bundle determined by the choice of coordinates $(x,y)$ on $S$.
\smallskip
\begin{defn}\label{Psi0def}
A window function on the bundle $TS \oplus T^*S$ over $S$ is
a smooth complex-valued function $\Phi_0$ defined on the total space of
$TS \oplus T^*S$, of the form
\begin{equation}\label{Phi0S}
\Phi_{0,(x,y)}(V,\eta):= \exp\left(- V^t A_{(x,y)} V - i \langle \eta, V\rangle_{(x,y)} \right),
\end{equation}
where $A$ is a smooth section of $T^*S \otimes T^*S$ that is symmetric
and positive definite as a quadratic form on the fibers of $TS$, with the property that
at all points $(x,y)$ in each local chart $U$ in $S$ the matrix $A_{(x,y)}$ has eigenvalues uniformly bounded away from zero,
${\rm Spec}(A_{(x,y)})\subset [\lambda,\infty)$ for some $\lambda >0$.
\end{defn}
\smallskip
\begin{lem}\label{Psiwindowlem}
The restriction of a window function $\Phi_0$ as in \eqref{Phi0S} to the bundle $TS \oplus {\mathbb S}(T^*S)$
determines a complex-valued function on the total space of the bundle ${\mathcal E}$, which in a local chart is of the form
\begin{equation}\label{Psiwindow}
\Psi_{0,(x,y,\theta)}(V):= \exp\left(- V^t A_{(x,y)} V - i \langle \eta_\theta, V\rangle_{(x,y)} \right) .
\end{equation}
\end{lem}
\smallskip
\proof
Consider the restriction of $\Phi_0$ to the bundle $TS \oplus {\mathbb S}_w(T^*S)$ for some $w>0$, over a local
chart $(U,z=x+iy)$ of $S$.
This means restricting the variable $\eta\in T^*_{(x,y)} S$ to $\eta=(\eta_1,\eta_2)=(w\cos(\theta),w\sin(\theta))$,
with $\theta\in S^1$,
\begin{equation}\label{Psi0res}
\Phi_{0,(x,y)}|_{TS \oplus {\mathbb S}_w(T^*S)}(V,\theta)=\exp\left(- V^t A_{(x,y)} V - i \langle \eta_\theta, V\rangle_{(x,y)} \right),
\end{equation}
with $\eta_\theta=(w\cos(\theta),w\sin(\theta))$. In particular, we restrict to the case $w=1$.
\smallskip
We can identify the total space of the bundle $TS \oplus {\mathbb S}(T^*S)$ with the total space of
the bundle of signal planes ${\mathcal E}$ over $M={\mathbb S}(T^*S)$. Indeed,
the direct sum of two bundles $E_1, E_2$ over the same base space $S$ is given by
$$ E_1\oplus E_2=\{ (e_1,e_2)\in E_1 \times E_2\,|\, \pi_1(e_1)=\pi_2(e_2) \}. $$
Consider the projection onto the second coordinate, $P: E_1\oplus E_2 \to E_2$. This
projection has fibers $P^{-1}(e_2)=\pi_1^{-1}(\pi_2(e_2))$. Thus, the total space of
the bundle $E_1\oplus E_2$, endowed with the projection $P$, can be identified with
the pullback $\pi_2^* E_1$ over $E_2$, with fibers
$(\pi_2^* E_1)_{e_2}=\{ e_1\in E_1\,|\, \pi_1(e_1)=\pi_2(e_2) \}$.
\smallskip
Thus, we can write the function in \eqref{Psi0res} equivalently as a complex-valued function $\Psi_0$ on
the total space of the bundle ${\mathcal E}$ over the contact $3$-manifold $M$, which is of the form \eqref{Psiwindow}.
\endproof
\smallskip
This provides the reformulation of the Gabor profiles considered in \cite{SaCiPe} in terms
of the underlying geometry of the bundle ${\mathcal E}$ over $M$.
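For illustration, the profile \eqref{Psiwindow} can be sampled on a grid in a single signal plane ${\mathcal E}_{(x,y,\theta)}$; the matrix $A$ and the angle $\theta$ below are arbitrary illustrative choices.

```python
import numpy as np

A = np.array([[1.0, 0.3], [0.3, 2.0]])           # symmetric, positive definite
theta = np.pi / 6
eta = np.array([np.cos(theta), np.sin(theta)])   # eta_theta on the unit cosphere

v1, v2 = np.meshgrid(np.linspace(-2, 2, 41), np.linspace(-2, 2, 41))
V = np.stack([v1, v2], axis=-1)                  # grid of vectors V in the fiber

quad = np.einsum('...i,ij,...j->...', V, A, V)   # V^t A V
psi0 = np.exp(-quad - 1j * (V @ eta))            # Psi_0(V) as in (Psiwindow)

# Gaussian envelope modulated by a unimodular plane wave:
assert np.allclose(np.abs(psi0), np.exp(-quad))
assert np.isclose(psi0[20, 20], 1.0)             # value at V = 0
```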
\smallskip
\subsection{Lattices}
As above, consider the bundle of signal planes ${\mathcal E}$ over $M={\mathbb S}(T^*S)$. The two
contact forms $\alpha$ and $\alpha_J$ discussed in Section~\ref{Jsec} determine
a choice of basis for $TM$ given by the Legendrian circle fiber direction $\partial_\theta$,
together with the two Reeb vector fields $R_\alpha$ and $R_{\alpha_J}$, each of which is
Legendrian for the other contact form. Over a local chart $U$ of $S$, these two
vector fields are given by \eqref{Ralpha}, \eqref{RalphaJ} and
lie everywhere along the $TS$ direction,
hence they determine a basis of the fibers ${\mathcal E}_{(x,y,\theta)}$ of the bundle of signal planes
for $z=x+iy\in U$.
\smallskip
We denote by $\{ R_\alpha^\vee, R_{\alpha_J}^\vee \}$ the dual basis of
${\mathcal E}^\vee$ (over the same chart $U$ of $S$) characterized by $\langle R_\alpha^\vee , R_\alpha\rangle=1$,
$\langle R_\alpha^\vee, R_{\alpha_J}\rangle=0$,
$\langle R_{\alpha_J}^\vee, R_\alpha\rangle =0$, $\langle R_{\alpha_J}^\vee,R_{\alpha_J}\rangle=1$.
By the properties of Reeb and Legendrian vector fields, we can identify the dual basis with the contact forms,
$\{ R_\alpha^\vee, R_{\alpha_J}^\vee \}=\{ \alpha, \alpha_J \}$.
\smallskip
Thus, the contact geometry of $M$ determines a canonical choice of a basis $\{ R_\alpha, R_{\alpha_J} \}$
for the bundle ${\mathcal E}$ and its dual basis $\{ \alpha, \alpha_J \}$ for ${\mathcal E}^\vee$.
\smallskip
This determines bundles of framed lattices (lattices with an assigned basis) over a local chart in $M$ of the form
\begin{equation}\label{latLambda}
\Lambda_{\alpha,J}:= {\mathbb Z}\, R_\alpha + {\mathbb Z}\, R_{\alpha_J}\,
\end{equation}
\begin{equation}\label{latLambdavee}
\Lambda^\vee_{\alpha, J} := {\mathbb Z}\, \alpha + {\mathbb Z}\, \alpha_J \, ,
\end{equation}
where $\Lambda_{\alpha,J}$ and $\Lambda^\vee_{\alpha, J}$ here can
be regarded as a consistent choice of a lattice $\Lambda_{\alpha,J, (x,y,\theta)}$
(respectively, $\Lambda^\vee_{\alpha, J, (x,y,\theta)}$) in each fiber of ${\mathcal E}$
(respectively, of ${\mathcal E}^\vee$). The bundle of framed lattices
\begin{equation}\label{Lambdacomb}
\Lambda_{\alpha,J} \oplus \Lambda^\vee_{\alpha, J}
\end{equation}
correspondingly consists of a lattice in each fiber of the bundle ${\mathcal E}\oplus {\mathcal E}^\vee$ over $M$.
We will also equivalently write the bundle of lattices \eqref{Lambdacomb} in the form
$\Lambda + \Lambda_J$ with
\begin{equation}\label{LambdaJ}
\Lambda = {\mathbb Z}\, R_\alpha \oplus {\mathbb Z}\, \alpha, \ \ \ \Lambda_J ={\mathbb Z}\, R_{\alpha_J} \oplus {\mathbb Z}\, \alpha_J \, .
\end{equation}
In the following, we will often simply use the term ``lattice" to indicate bundles of framed
lattices over $M$ as above.
\smallskip
\begin{lem}\label{GaborEGamma}
The choice of the window function $\Psi_0$ described in Section~\ref{WinSec},
together with the lattice \eqref{Lambdacomb}, determine a Gabor system
$$ {\mathcal G}(\Psi_0, \Lambda_{\alpha,J} \oplus \Lambda^\vee_{\alpha, J} ) $$
which consists, at each point $(x,y,\theta)\in M$, of the Gabor system
$$ {\mathcal G}(\Psi_{0,(x,y,\theta)}, \Lambda_{\alpha,J,(x,y,\theta)} \oplus \Lambda^\vee_{\alpha, J,(x,y,\theta)}) $$
in the space $L^2({\mathcal E}_{(x,y,\theta)})$.
\end{lem}
\smallskip
\proof
The Gabor functions in ${\mathcal G}(\Psi_0, \Lambda+\Lambda_J)$ are of the form
$$ \rho(\lambda) \Psi_0=\rho(\xi) \rho(W)\, \Psi_0 = e^{2\pi i \langle \xi, V \rangle} \Psi_0(V-W), $$
for $\lambda=(\xi,W)$ with $\xi \in \Lambda^\vee_{\alpha,J}\subset {\mathcal E}^\vee$ and $W \in \Lambda_{\alpha, J}\subset {\mathcal E}$.
\endproof
\medskip
\subsection{Injectivity radius function and lattice truncation}\label{scaleLSec}
In order to adapt this construction to a realistic model of signal processing in the $V_1$ cortex, one
needs to take into account the fact that, in reality, only a finite (although large) number of Gabor filters in
the collection ${\mathcal G}(\Psi_0, \Lambda+\Lambda_J)$ contribute to the analysis of the retinal signals.
This number is empirically determined by the structure of the neurons in the $V_1$ cortex. This means
that there is some (large) cutoff size $R_{\max} >0$ such that the part of the lattice that contributes to the
available Gabor filters is contained in a ball of radius $R_{\max}$.
\smallskip
There is also an additional constraint that comes from the geometry. Namely, we are
using Gabor analysis in the signal planes determined by the vector bundle ${\mathcal E}$ to analyze a
signal that is originally stored on the retinal surface $S$. Lifting the signal from $S$ to the fibers
of ${\mathcal E}$, with consistency of the results across nearby fibers, is achieved through the exponential map
$$ \exp_{(x,y)}: T_{(x,y)} S \to S $$
from the tangent bundle of $S$ (of which ${\mathcal E}$ is the pullback to $M$) to the surface. At a given
point $(x,y)\in S$ let $R_{inj}(x,y) >0$ be the supremum of all the radii $R>0$ such that the
exponential map $\exp_{(x,y)}$ is a diffeomorphism on the ball $B(0,R)$ of radius $R$ in
$T_{(x,y)}S$. For a compact surface $S$, we obtain a continuous
{\em injectivity radius function} $R_{inj}: S \to {\mathbb R}_+^*$,
$(x,y) \mapsto R_{inj}(x,y)$.
\smallskip
Thus, to obtain good signal representations and signal analysis in the signal planes,
we require that the finitely many available lattice points that define the shift
operators $T_W=\rho(W)$ in the Gabor system construction lie within a ball of radius $R_{inj}$
in the fibers of ${\mathcal E}$.
\smallskip
It is reasonable to assume that the maximal size $R_{\max}$, determined by empirical data on neurons in
the visual cortex, will in general be very large, and in particular larger than the maximum
over the compact surface $S$ of the injectivity radius function.
This means that, in order to match these two bounds, we need to consider a
scaled copy of the lattice $\Lambda_{\alpha, J}$.
We obtain the following scaling function.
\smallskip
\begin{lem}\label{scaledlatt}
Let $b_M: M \to {\mathbb R}^*_+$ be the function given by
\begin{equation}\label{afunctradii}
b_M(x,y,\theta) :=\frac{R_{inj}(x,y)}{R_{\max}}\, ,
\end{equation}
where $R_{inj}(x,y)$ is the injectivity radius function and $R_{\max} >0$ is an assigned constant.
For $R_{\max} > \max_{(x,y)\in S} R_{inj}(x,y)$, consider the rescaled lattice
\begin{equation}\label{aLambda}
\Lambda_{b,\alpha,J}:= b_M \, \Lambda_{\alpha, J} = {\mathbb Z}\, b_M\, R_\alpha + {\mathbb Z}\, b_M\, R_{\alpha_J} \ \ \ \text{ and } \ \ \
\left\{ \begin{array}{l} \Lambda_b = {\mathbb Z} \, b_M\, R_\alpha \oplus {\mathbb Z}\, \alpha, \\ \Lambda_{b,J} = {\mathbb Z} \, b_M\, R_{\alpha_J} \oplus {\mathbb Z}\, \alpha_J \end{array}\right. \, .
\end{equation}
All the lattice points of the original lattice $\Lambda_{\alpha,J}$ that are within the
ball of radius $R_{\max}$ correspond to lattice points of the rescaled $\Lambda_{b,\alpha,J}$
that are within the ball of radius $R_{inj}(x,y)$ in ${\mathcal E}_{(x,y,\theta)}$.
In particular, for $B$ a ball of measure $1$ in ${\mathcal E}_{(x,y,\theta)}$, and $N(r)=\# \{ \lambda\in \Lambda_{b,\alpha,J}\cap r\cdot B \}$, we have
\begin{equation}\label{bDens}
D^-(\Lambda_{b,\alpha,J})= \liminf_{r\to \infty} \frac{N(r)}{r^2} = b_M^{-2} > 1 \, .
\end{equation}
\end{lem}
\proof The first statement is clear by construction. Moreover,
under the assumption that $R_{\max} > \max_{(x,y)\in S} R_{inj}(x,y)$, the
function $b_M$ of \eqref{afunctradii} is everywhere smaller than one,
\begin{equation}\label{b1}
b_M(x,y,\theta) <1, \ \ \ \forall (x,y,\theta)\in M \, ,
\end{equation}
so that the density $D^-(\Lambda_{b,\alpha,J}) >1$.
\endproof
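As a minimal numerical sketch (not part of the proof), the Beurling density of a rescaled planar lattice can be estimated by direct point counting. Here we model $\Lambda_b = {\mathbb Z}\, b R_\alpha \oplus {\mathbb Z}\,\alpha$ in a chart where $R_\alpha$ and $\alpha$ are identified with the standard basis vectors of ${\mathbb R}^2$, so the lattice is $b{\mathbb Z}\times{\mathbb Z}$, with covolume $b$ and density $1/b > 1$ whenever $b<1$; the value of $b$ below is a hypothetical sample.

```python
import numpy as np

def lower_beurling_density(gen1, gen2, r):
    """Count lattice points m*gen1 + n*gen2 in the square [-r/2, r/2]^2
    (a scaled copy of a set of measure 1) and divide by r^2."""
    G = np.column_stack([gen1, gen2])
    # enough integer coefficients to cover the square
    bound = int(np.ceil(r / min(np.linalg.svd(G, compute_uv=False)))) + 2
    m, n = np.meshgrid(np.arange(-bound, bound + 1), np.arange(-bound, bound + 1))
    pts = G @ np.vstack([m.ravel(), n.ravel()])
    inside = np.all(np.abs(pts) <= r / 2, axis=0)
    return inside.sum() / r**2

b = 0.7  # hypothetical scaling value b_M < 1, as in (b1)
# Lambda_b = Z (b R_alpha) + Z alpha, modeled as the lattice b*Z x Z in R^2
density = lower_beurling_density(np.array([b, 0.0]), np.array([0.0, 1.0]), r=200.0)
print(density)   # approaches 1/b = 1.428..., i.e. covolume s = b < 1
```

The empirical count at finite $r$ already approximates the limit $1/b$ closely; this is the quantity compared to the critical density $1$ in the frame criteria below.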
\smallskip
\begin{rem}\label{halfscale}{\rm
Note that we only need to rescale the $\Lambda_{\alpha, J}$ part of the lattice in ${\mathcal E}$ and not
the $\Lambda_{\alpha,J}^\vee$ part of the lattice in ${\mathcal E}^\vee$, since the $\Lambda_{\alpha,J}^\vee$ part
only contributes modulation operators $M_\xi$ that do not move the coordinates outside of the
injectivity ball of the exponential map, unlike the translation operators $T_W$ with $W\in \Lambda_{\alpha, J}$. }
\end{rem}
\smallskip
We can also make the choice here to scale both parts of the lattice by the same factor $b=b_M$, and work
with the scaled lattice $\Lambda_{b,\alpha, J}\oplus \Lambda_{b,\alpha,J}^\vee$ even if the scaling of
the modulation part is not necessary by the observation of Remark~\ref{halfscale} above. The difference
between these two choices can be understood geometrically in the following way. One usually normalizes
the choice of the Reeb vector field of a contact form by the requirement that the pairing is
$\langle \alpha, R_\alpha\rangle=1=\langle \alpha_J, R_{\alpha_J}\rangle$.
However, one can make a different choice of normalization. Scaling only the $\Lambda_{\alpha, J}$ part of the
lattice and not the $\Lambda_{\alpha,J}^\vee$ corresponds to changing this normalization, while scaling both
parts means that one maintains the normalization. As will be clear in the argument of Proposition~\ref{bFrameProp},
these two choices are in fact equivalent and give the same signal
analysis properties.
\medskip
\section{The Gabor frame condition}
We discuss separately the case where, in a local chart $U$ in $S$, the quadratic form $A$ in the window function $\Psi_0$
is diagonal in the basis $\{ R_\alpha, R_{\alpha_J} \}$ and the general case where it is not diagonal. The first case has
the advantage that it reduces to one-dimensional Gabor systems, for which we can reduce the
discussion to a famous result of Lyubarski\v{i} and Seip, \cite{Lyu}, \cite{Sei}, after the
slightly different form of the window function is accounted for. The more general case can be
dealt with along the lines of the results of \cite{Groch2} for $2$-dimensional Gabor systems.
In particular, the analysis of the frame condition relies on the complex analytic technique of
Bargmann transform and sampling.
\medskip
\subsection{Gabor frame condition}
Let ${\mathcal E}$ be the bundle of signal planes on the contact $3$-manifold $M$ as above. Let
$\Psi_0$ be a window function, which we assume of the form \eqref{Psiwindow}.
Suppose given a lattice bundle $\Lambda$, namely a bundle over $M$ with
fiber isomorphic to ${\mathbb Z}^4$, where the fiber $\Lambda_{(x,y,\theta)}$ is a lattice in
$({\mathcal E}\oplus {\mathcal E}^\vee)_{(x,y,\theta)}$. We form the Gabor system ${\mathcal G}(\Psi_0,\Lambda)$
as in Lemma~\ref{GaborEGamma}, with Gabor functions
$\rho(\lambda_{(x,y,\theta)}) \Psi_0 |_{{\mathcal E}_{(x,y,\theta)}}$, with
$\lambda_{(x,y,\theta)}\in \Lambda_{(x,y,\theta)}$.
\smallskip
\begin{defn}\label{smoothGaborDef}
The Gabor system ${\mathcal G}(\Psi_0,\Lambda)$ satisfies the smooth Gabor frame condition on $M$ if there
are smooth ${\mathbb R}^*_+$-valued functions $C,C'$ on the local charts of $M$, such that the frame condition
holds pointwise in $(x,y,\theta)$,
\begin{equation}\label{smoothGabor}
C_{(x,y,\theta)}\, \| f \|_{L^2({\mathcal E}_{(x,y,\theta)})}^2 \leq
\sum_{\lambda_{(x,y,\theta)} \in \Lambda_{(x,y,\theta)}}
|\langle f, \rho(\lambda_{(x,y,\theta)}) \Psi_0 \rangle |^2 \leq C'_{(x,y,\theta)}\, \| f \|_{L^2({\mathcal E}_{(x,y,\theta)})}^2 \, .
\end{equation}
\end{defn}
\smallskip
Note that, although the manifold $M$ is compact, so that globally defined continuous functions
$C,C': M\to {\mathbb R}_+$ would have a minimum and a maximum that are strictly positive and finite, in
the condition above we are only requiring that the functions $C,C'$ are defined on the local
charts, without necessarily extending globally to $M$. Indeed, since global vector fields on
an orientable compact surface $S$ necessarily have singularities (unless $S=T^2$), the frame
condition will not in general extend globally, while it holds locally within each chart, with not
necessarily uniformly bounded $C,C'$. If these functions extend globally to $M$, then a
stronger global frame condition
$$ C_{\min}\, \| f \|_{L^2({\mathcal E}_{(x,y,\theta)})}^2 \leq
\sum_{\lambda_{(x,y,\theta)} \in \Lambda_{(x,y,\theta)}}
|\langle f, \rho(\lambda_{(x,y,\theta)}) \Psi_0 \rangle |^2 \leq C'_{\max}\, \| f \|_{L^2({\mathcal E}_{(x,y,\theta)})}^2 $$
would also be satisfied, but one does not expect this to be the case, except in special cases
like the parallelizable $S=T^2$. In the case directly relevant to the modeling of the primary
visual cortex, one assumes that the retinal surface is represented by a chart $U\subset S$ with
$S=S^2$ a sphere.
\medskip
\subsection{The diagonal case: dimensional reduction}
Consider first the case where the quadratic form $A$ in \eqref{Psiwindow} is diagonal in the
basis $\{ R_\alpha, R_{\alpha_J} \}$ of the bundle ${\mathcal E}$.
\smallskip
First observe that, in a local chart $U$ of $S$, the unit covector $\eta_\theta \in T^*S$ is in fact the covector
$\eta_\theta=(\cos(\theta),\sin(\theta))$ in the basis $\{ dx, dy \}$, which is
the dual basis element $\alpha$, as in \eqref{alphaM}. Thus, the
window function \eqref{Psiwindow} used in \cite{SaCiPe} is of the
form
\begin{equation}\label{windowform}
\Psi_{0,(x,y,\theta)}(V)
= \rho(\frac{1}{2\pi}(0,-1)) \hat\Psi_{0,(x,y,\theta)}(V)
\end{equation}
where
\begin{equation}\label{hatPsi}
\hat\Psi_{0,(x,y,\theta)}(V):= \exp\left(- V^t A_{(x,y)} V \right)
\end{equation}
and $(0,-1)\in \Lambda$ is the covector $-\eta_\theta(x,y)= -\alpha|_{(x,y,\theta)}$.
Thus, the Gabor system can be equivalently described as
$$ {\mathcal G}( \Psi_0, \Lambda + \Lambda_J )= {\mathcal G}(\hat \Psi_0, \hat\Lambda + \Lambda_J ) $$
\begin{equation}\label{hatLambda}
\hat\Lambda =\xi_0 + \Lambda =\{ (W,\xi)\in {\mathcal E}\oplus{\mathcal E}^\vee\,|\, W\in {\mathbb Z} R_\alpha, \,\, \xi\in \xi_0 +{\mathbb Z}\alpha \} \ \ \ \text{ with } \xi_0 :=- \frac{1}{2\pi}\alpha \in {\mathcal E}^\vee \, .
\end{equation}
Note that $\hat\Lambda$ is no longer a lattice (a discrete abelian {\em subgroup} in each
fiber ${\mathcal E}_{(x,y,\theta)} \oplus{\mathcal E}^\vee_{(x,y,\theta)}$ in the local chart): it is however a uniformly discrete set given by the
translate $\xi_0 + \Lambda$.
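The passage from ${\mathcal G}(\Psi_0,\Lambda+\Lambda_J)$ to ${\mathcal G}(\hat\Psi_0,\hat\Lambda+\Lambda_J)$ only uses the fact that pre-modulating the window is the same, up to a global phase, as translating the lattice in the frequency direction, so all Gabor coefficient moduli agree. A minimal finite-dimensional sketch of this identity, in the discrete analogue ${\mathbb C}^N$ with cyclic translations and modulations (all parameters below are hypothetical choices):

```python
import numpy as np

N = 16
t = np.arange(N)
g = np.exp(-0.5 * ((t - N/2) / 3.0)**2)          # discrete Gaussian window
f = np.random.default_rng(0).standard_normal(N)  # test signal

def T(k, v):  # cyclic translation
    return np.roll(v, k)

def M(l, v):  # modulation
    return np.exp(2j * np.pi * l * t / N) * v

l0 = 5
g_hat = M(l0, g)   # pre-modulated window: discrete analogue of the modulated window

# |<f, M_l T_k g_hat>| = |<f, M_{l+l0} T_k g>|: modulation absorbed into a lattice translate
max_diff = max(
    abs(abs(np.vdot(M(l, T(k, g_hat)), f)) - abs(np.vdot(M(l + l0, T(k, g)), f)))
    for k in range(N) for l in range(N)
)
print(max_diff)   # zero up to rounding
```

The identity holds exactly because $M_l T_k M_{l_0} = e^{-2\pi i l_0 k/N} M_{l+l_0} T_k$, mirroring the commutation of $\rho(\lambda)$ and $\rho(\xi_0)$ up to phase.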
\smallskip
\begin{lem}\label{diagcaselem}
If the quadratic form $A$ in \eqref{Psiwindow} is diagonal, $A={\rm diag}(\kappa_1^2, \kappa_2^2)$, in the
basis $\{ R_\alpha, R_{\alpha_J} \}$ of ${\mathcal E}$ in a local chart, then the Gabor frame condition for
${\mathcal G}(\Psi_0, \Lambda+\Lambda_J)$ is equivalent to
the frame condition for two uncoupled problems for the one-dimensional Gabor systems
${\mathcal G}(\psi_0, \Lambda)$ and ${\mathcal G}(\phi_0,\Lambda_J)$, with
$\psi_0(V_1)=\exp( - \kappa_1^2 V^2_1 -i V_1 )$ and
$\phi_0(V_2)=\exp(-\kappa_2^2 V_2^2)$.
\end{lem}
\proof
Given the duality pairing relations between the contact forms $\alpha$, $\alpha_J$ and their Reeb
vector fields $R_\alpha$ and $R_{\alpha_J}$, if we write the vectors $V\in {\mathcal E}_{(x,y,\theta)}$ in
coordinates $V= V_1\, R_\alpha + V_2\, R_{\alpha_J}$ over the local chart, then
the window function is written in the form
$$ \Psi_{0, (x,y,\theta)} (V_1,V_2) = \exp( - \kappa_1^2 V^2_1 -i V_1 )\cdot \exp(-\kappa_2^2 V_2^2) =\psi_0(V_1) \cdot \phi_0(V_2), $$
and the Gabor system is of the form
$$ (\rho(\lambda)\Psi_0) (V) =(\rho(\lambda_1) \psi_0)(V_1) \cdot (\rho(\lambda_2) \phi_0)(V_2) $$
$$ \lambda_1=(W_1,\xi_1)\in \Lambda \ \ \ \text{ and } \ \ \ \lambda_2=(W_2,\xi_2)\in \Lambda_J\, . $$
This means that, in this case, the Gabor frame condition problem for ${\mathcal G}(\Psi_0, \Lambda+\Lambda_J)$
reduces to two uncoupled problems for the one-dimensional Gabor systems ${\mathcal G}(\psi_0,\Lambda)$
and ${\mathcal G}(\phi_0,\Lambda_J)$. The frame condition for ${\mathcal G}(\Psi_0, \Lambda+\Lambda_J)$ is
satisfied iff it is satisfied for ${\mathcal G}(\psi_0,\Lambda)$ and ${\mathcal G}(\phi_0,\Lambda_J)$, where the first
problem, by the discussion above, is equivalent to the frame condition for the system
${\mathcal G}(\hat\psi_0,\hat\Lambda)$ with
$\hat\Lambda=\xi_0+\Lambda$ and $\hat\psi_0(V_1)=\exp( - \kappa_1^2 V^2_1)$.
\endproof
\smallskip
\begin{prop}\label{noframes}
The functions in the Gabor system ${\mathcal G}(\Psi_0, \Lambda+\Lambda_J)$ are {\em not} frames.
\end{prop}
\proof
The second case above is a one-dimensional Gabor system with a Gaussian window function
$g(t)=e^{-\kappa^2 t^2}$ and the lattice ${\mathbb Z}^2$, while the first case is a one-dimensional Gabor system with a
modified window function of the form $g(t)=e^{-\kappa^2 t^2 - i a t}$ and the lattice ${\mathbb Z}^2$ or equivalently
a window function $\hat g(t)=e^{-\kappa^2 t^2}$ and the discrete set $(0,a)+{\mathbb Z}^2$.
\smallskip
For a lattice $\Lambda=A{\mathbb Z}^d$ with $A\in {\rm GL}_d({\mathbb R})$ the covolume is given by $s(\Lambda)=|\det(A)|$.
In particular it is $s(\Lambda)=1$ for the standard lattice ${\mathbb Z}^2$. The {\em density theorem} for Gabor
frames, \cite{Jan} (see also Proposition~2 of \cite{Groch2}), states that if a Gabor system ${\mathcal G}(g,\Lambda)$
is a frame in $L^2({\mathbb R}^d)$ and the window is a rapid decay function $g\in {\mathcal S}({\mathbb R}^d)$, then necessarily
$s(\Lambda)<1$. Thus, these one-dimensional Gabor systems are not frames, hence the original system
${\mathcal G}(\Psi_0, \Lambda+\Lambda_J)$ also does not satisfy the frame condition.
\endproof
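The obstruction at critical density can be made concrete through the Zak transform $Zg(x,w)=\sum_k g(x+k)\,e^{2\pi i k w}$: for any even window, in particular the Gaussian $g(t)=e^{-\kappa^2 t^2}$, the terms at $(x,w)=(1/2,1/2)$ cancel in pairs $k \leftrightarrow -1-k$, so $Zg(1/2,1/2)=0$ and the lower frame bound for ${\mathcal G}(g,{\mathbb Z}^2)$ vanishes. A quick numerical check of this standard fact (the truncation and the value of $\kappa$ are hypothetical choices):

```python
import numpy as np

def zak(g, x, w, K=30):
    """Truncated Zak transform Zg(x, w) = sum_{|k| <= K} g(x + k) e^{2 pi i k w}."""
    k = np.arange(-K, K + 1)
    return np.sum(g(x + k) * np.exp(2j * np.pi * k * w))

kappa = 1.3
g = lambda t: np.exp(-(kappa * t)**2)   # Gaussian window as in the text

z_half = zak(g, 0.5, 0.5)   # vanishes by the pairing k <-> -1-k
z_zero = zak(g, 0.0, 0.0)   # strictly positive
print(abs(z_half), z_zero.real)
```

Since the lower frame bound of ${\mathcal G}(g,{\mathbb Z}^2)$ equals the essential infimum of $|Zg|^2$, the vanishing of the Zak transform at a single point already rules out the frame property at critical density.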
\smallskip
On the other hand, the situation changes when one takes into account the scaling of the
lattice discussed in Section~\ref{scaleLSec}.
\smallskip
\begin{prop}\label{yesframes}
Consider the rescaled lattices $\Lambda_{b,\alpha,J}$, $\Lambda_b$,
$\Lambda_{b,J}$ of \eqref{aLambda}. The system
${\mathcal G}(\Psi_0, \Lambda_b+\Lambda_{b,J})$ does satisfy the frame condition.
\end{prop}
\proof
The Gabor frame question for the system ${\mathcal G}(\Psi_0, \Lambda_b+\Lambda_{b,J})$ reduces to the question
of whether the one-dimensional systems ${\mathcal G}(\phi_0,\Lambda_{b,J})$ and
${\mathcal G}(\hat\psi_0,\hat\Lambda_b)$ with $\hat\Lambda_b=\xi_0+\Lambda_b$ are frames.
\smallskip
In the case of one-dimensional systems, there is a complete characterization of when
the frame condition is satisfied, \cite{Lyu}, \cite{Sei}, \cite{SeiWall}. This characterization
is obtained by reformulating the problem in terms of a complex analysis problem of
sampling and interpolation in Bargmann--Fock spaces.
In the case of a Gaussian window function $\psi$ and a
uniformly discrete set $\Lambda \subset {\mathbb R}^2$, it is proved in \cite{Sei}
that the Gabor system ${\mathcal G}(\psi,\Lambda)$ is a frame if and only if the lower Beurling
density satisfies $D^-(\Lambda)>1$, where
$$ D^-(\Lambda) = \liminf_{r\to \infty} \frac{N^-_\Lambda(r)}{r^2}, $$
with $N^-_\Lambda(r)$ the smallest number of points of $\Lambda$ contained in a
scaled copy $r{\mathcal I}$ of a given set ${\mathcal I}\subset {\mathbb R}^2$ of measure one, with measure zero
boundary. The value $D^-(\Lambda)$ is independent of the choice of the set ${\mathcal I}$.
In the case of a rank two lattice this corresponds to the condition $s(\Lambda)<1$,
which is therefore also sufficient.
\smallskip
Thus, the one-dimensional systems
${\mathcal G}(\phi_0,\Lambda_{b,J})$ and ${\mathcal G}(\hat\psi_0, \hat\Lambda_b)$ are frames
if and only if $s(\Lambda_{b,J})<1$ and $s(\Lambda_b)<1$, since the
translate $\hat\Lambda_b$ and $\Lambda_b$ have the same lower Beurling density.
Since the scaling function satisfies $b_M <1$ everywhere on $M$, as in \eqref{b1},
we have seen in Lemma~\ref{scaledlatt} that
these conditions are satisfied. It follows that the Gabor system
${\mathcal G}(\Psi_0, \Lambda_b+\Lambda_{b,J})$ is a frame.
\endproof
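The role of the density threshold has an exact finite-dimensional analogue in ${\mathbb C}^N$ that can be checked directly: the full system of all $N^2$ time-frequency shifts of any nonzero window is a tight frame, with frame operator $N\|g\|^2\,{\rm Id}$, while a subsystem with fewer than $N$ atoms cannot span ${\mathbb C}^N$. A sketch, with window and parameters chosen only for illustration:

```python
import numpy as np

N = 12
t = np.arange(N)
g = np.exp(-0.5 * ((t - N/2) / 2.0)**2)   # any nonzero window works here

def atom(k, l):
    return np.exp(2j * np.pi * l * t / N) * np.roll(g, k)

# full system of N^2 time-frequency shifts -> tight frame
A_full = np.array([atom(k, l) for k in range(N) for l in range(N)])
S = A_full.conj().T @ A_full              # frame operator (synthesis of analysis)
tight_const = N * np.linalg.norm(g)**2
err = np.linalg.norm(S - tight_const * np.eye(N))

# sparse subsystem: 4 atoms in C^12 cannot be a frame
A_sub = np.array([atom(k, l) for k in (0, 6) for l in (0, 6)])
rank_sub = np.linalg.matrix_rank(A_sub)
print(err, rank_sub)
```

The tightness is forced by the orthogonality $\sum_l e^{2\pi i l (t-s)/N} = N\delta_{ts}$, the discrete counterpart of the oversampled regime in which the Lyubarski\v{i}--Seip criterion guarantees a frame.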
\medskip
\subsection{The non-diagonal case: Bargmann transform}
In the more general case where the quadratic form in $\Psi_0$ is not
necessarily diagonal in the basis $\{ R_\alpha, R_{\alpha_J} \}$ in a local chart, the question of
whether the Gabor system ${\mathcal G}(\Psi_0, \Lambda_b+\Lambda_{b,J})$ satisfies
the frame condition can still be reformulated in terms of sampling
and interpolation in Bargmann--Fock spaces, see \cite{Groch2}.
\smallskip
\subsubsection{Bargmann transform and Gabor frames}
The Bargmann transform of a function $f$ in $L^2({\mathbb R}^n)$ is defined as
\begin{equation}\label{Bargmann}
{\mathcal B} f(z)=\int_{{\mathbb R}^n}f(t)e^{2\pi t\cdot z-\pi t^2-\frac{\pi}{2}z^2}\, dt \, ,
\end{equation}
where, for $z\in\mathbb{C}^{n}$ we write $z=x+iw$ for some $x,w\in {\mathbb R}^n$ and
$z^2=(x+iw)\cdot(x+iw)=x\cdot x-w\cdot w+2i\, x\cdot w$ and $|z|^2=x^2+w^2$.
It is a unitary transformation from $L^2({\mathbb R}^n)$ to the Bargmann--Fock space ${\mathcal F}^2_n$,
which consists of entire functions of $z\in {\mathbb C}^n$ with finite norm
\begin{equation}\label{normBarFock2}
\| F \|^2_{{\mathcal F}^2_n}= \int |F(z)|^2\,e^{-\pi |z|^2}\,dz \, < \infty \, ,
\end{equation}
induced by the inner product
$$ \langle F, G \rangle_{{\mathcal F}^2_n} = \int_{{\mathbb C}^n} F(z)\, \overline{G(z)}\, e^{-\pi |z|^2}\, dz \, . $$
We also consider the Bargmann--Fock space ${\mathcal F}^\infty_n$, which is the space of
entire functions on ${\mathbb C}^n$ with
\begin{equation}\label{normBarFock}
\| F \|_{{\mathcal F}^\infty_n}=\sup_{z\in {\mathbb C}^n} | F(z) |\, e^{-\frac{\pi |z|^2}{2} } \, < \infty \, .
\end{equation}
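As a sanity check on the normalization in \eqref{Bargmann} for $n=1$: the Bargmann transform of the $L^2$-normalized Gaussian $f(t)=2^{1/4}e^{-\pi t^2}$ is the constant function ${\mathcal B} f\equiv 2^{-1/4}$, which can be verified by numerical quadrature at a few sample points (the evaluation points are arbitrary):

```python
import numpy as np

def bargmann(f, z, T=8.0, h=1e-3):
    """Numerical Bargmann transform (n = 1): B f(z) = int f(t) e^{2 pi t z - pi t^2 - (pi/2) z^2} dt."""
    t = np.arange(-T, T, h)
    integrand = f(t) * np.exp(2 * np.pi * t * z - np.pi * t**2 - 0.5 * np.pi * z**2)
    return integrand.sum() * h   # Riemann sum; spectrally accurate for decaying integrands

f = lambda t: 2**0.25 * np.exp(-np.pi * t**2)   # normalized Gaussian, ||f||_2 = 1

vals = [bargmann(f, z) for z in (0.0, 0.3 + 0.7j, -1.1 + 0.4j)]
print(vals)   # all approximately 2^{-1/4} = 0.8409...
```

The constancy in $z$ reflects the fact that the Gaussian is the reproducing vector of the Bargmann--Fock space.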
There is a well known relation between the Bargmann transform and Gabor systems with
Gaussian window function, see for instance \cite{Groch}, \cite{Groch2}. In our setting, because of the
form \eqref{windowform} of the window function, we need a simple variant of this relation
between Gabor systems and Bargmann transform which we now illustrate.
\smallskip
A set $\Lambda\subset {\mathbb C}^n$ is a {\em sampling set} for ${\mathcal F}_n^2$ if there are constants
$C,C'>0$, such that, for all $F\in {\mathcal F}_n^2$,
$$ C \cdot \| F \|^2_{{\mathcal F}_n^2} \leq \sum_{\lambda\in \Lambda} | F(\lambda) |^2 e^{-\pi |\lambda|^2} \leq C' \cdot \| F \|^2_{{\mathcal F}_n^2} \, . $$
A set $\Lambda\subset {\mathbb C}^n$ is a {\em set of uniqueness} for ${\mathcal F}^\infty_n$ if a function
$F\in{\mathcal F}^\infty_n$ satisfying $F(\lambda)=0$ for all $\lambda\in \Lambda$ must vanish
identically, $F\equiv 0$.
For $\Lambda\subset {\mathbb C}^n$, let $\bar\Lambda=\{ \bar\lambda\,|\, \lambda\in \Lambda \}$.
\smallskip
We consider as in
\cite{Groch3} the modulation spaces $M^p({\mathbb R}^n)$ as the space of
tempered distributions $f\in {\mathcal S}^\prime({\mathbb R}^n)$ whose Gabor transform has bounded $L^p$ norm,
$\| V_\varphi f\|_p <\infty$, for all $\varphi\in {\mathcal S}({\mathbb R}^n)$, where
$$ V_\varphi f=\langle f, M_\xi T_x \varphi\rangle =\int_{{\mathbb R}^n} f(t) \overline{\varphi(t-x)}
e^{-2\pi i \xi\cdot t} \, dt \, . $$
Similarly, the modulation space $M^\infty({\mathbb R}^n)$ is the space of
tempered distributions $f\in {\mathcal S}^\prime({\mathbb R}^n)$ with
$\| V_\varphi f\|_\infty <\infty$, for all $\varphi\in {\mathcal S}({\mathbb R}^n)$.
\smallskip
\begin{prop}\label{Bergequiv1}
Let $\Lambda\subset {\mathbb C}^n$ be a lattice and let $\phi(x)=e^{-\pi \, |x|^2}e^{-2\pi i \, a\cdot x}\in L^2({\mathbb R}^n)$,
for some fixed $a\in {\mathbb R}^n$.
Then the following conditions are equivalent.
\begin{enumerate}
\item The Gabor system ${\mathcal G}(\phi,\Lambda)$ is a frame.
\item The set $\bar\Lambda_a:=\bar\Lambda+ ia$ is a sampling set for ${\mathcal F}^2_n$.
\item The set $\bar\Lambda_a$ is a set of uniqueness for ${\mathcal F}^\infty_n$.
\end{enumerate}
\end{prop}
\proof For the proof of $1\iff 2$ it suffices to prove that $$|\langle f,M_wT_x\phi \rangle|=|{\mathcal B}\, f(x-i(w+a))|e^{-\frac{\pi |x-i(w+a)|^2}{2}}.$$ We have
$$V_\phi f(x,w)=\int_{{\mathbb R}^n}f(t)e^{-\pi (t-x)^2}e^{-2\pi i (a\cdot(t-x))}e^{-2\pi i (w\cdot t)}dt=$$
$$=e^{2\pi i (a\cdot x)}\int_{{\mathbb R}^n}f(t)e^{-\pi t^2+2\pi tx-\pi x^2}e^{-2\pi i(a+w)\cdot t}dt=$$
$$=e^{2\pi i a\cdot x}e^{-\pi i x\cdot (a+w)}e^{-\frac{\pi}{2}(x^2+(a+w)^2)}\int_{{\mathbb R}^n}f(t)e^{-\pi t^2}e^{2\pi t\cdot(x-i(w+a))}e^{-\frac{\pi}{2}(x-i(a+w))^2}dt\, .$$
Moreover, for $z^{\prime}=x+i(w+a)$,
$$V_{\phi}f(x,w)=e^{-\frac{\pi}{2}|z^{\prime}|^2}e^{-\pi i x\cdot \Im(z^{\prime})}e^{2\pi i (a\cdot x)}{\mathcal B} f(\overline{z^{\prime}})\, . $$
Thus, $|V_\phi f(x,w)|=|{\mathcal B}\, f(\overline{z^{\prime}})|e^{-\frac{\pi}{2}|z^\prime|^2}=|{\mathcal B}\, f(x-i(w+a))|e^{-\frac{\pi |(x-i(w+a))|^2}{2}}$.
Thus, we obtain $$\sum_{\lambda\in \Lambda}|V_\phi f(\lambda)|^2=\sum_{z^{\prime}\in \bar{\Lambda}_a}|{\mathcal B}\, f({z^{\prime}})|^2 e^{-\pi |z^\prime|^2}\, , $$
and $\sum_{\lambda\in \Lambda}|V_\phi f(\lambda)|^2\asymp \| f \|^2_{L^2({\mathbb R}^n)}$ if and only if
$$\sum_{z^{\prime}\in \bar{\Lambda}_a}
|{\mathcal B}\, f({z^{\prime}})|^2 e^{-\pi |z^\prime|^2 } \asymp \| {\mathcal B}\, f \|^2_{{\mathcal F}^2_n}\, . $$
To prove $2\iff 3$,
starting with the assumption that $\bar{\Lambda}_a$ is a set of sampling for ${\mathcal F}_n^2$,
let $F\in {\mathcal F}^\infty_n$ be such that $F(\lambda)=0$ for all $\lambda\in \bar{\Lambda}_a$.
The Bargmann-Fock space ${\mathcal F}^\infty_n$
is related to the modulation space $M^\infty({\mathbb R}^n)$ through the Bargmann transform \eqref{Bargmann},
$$ {\mathcal F}^\infty_n = {\mathcal B}(M^\infty({\mathbb R}^n))\, . $$
Thus, there exists an element $f\in M^\infty({\mathbb R}^n)$ such that ${\mathcal B} \, f=F$. Thus, we have
${\mathcal B}\, f(\lambda) =0$, for all $\lambda\in \bar{\Lambda}_a$, hence
$\langle f,\rho(\lambda)\phi \rangle =0$, for all $\lambda\in \Lambda$.
The equivalence $1\iff 2$ then implies that $f\equiv 0$, hence $F\equiv 0$.
Conversely, suppose that $\bar{\Lambda}_a$ is a set of uniqueness for ${\mathcal F}^\infty_n$.
Theorem~3.1 of \cite{Groch3} shows that the frame
condition for the Gabor system ${\mathcal G}(\phi,\Lambda)$, for a window $\phi\in {\mathcal S}({\mathbb R}^n)$,
is equivalent to the condition that the Gabor transform map is one-to-one as a map
\begin{equation}\label{Vgmap}
V_\phi : M^\infty({\mathbb R}^n)\to \ell^\infty(\Lambda)\, , \ \ \ \
V_\phi: f \mapsto V_\phi f |_\Lambda \, .
\end{equation}
Since we have $\phi\in {\mathcal S}({\mathbb R}^n)$, it suffices to prove that the Gabor transform
$f\mapsto V_{\phi}f|_{\bar\Lambda_a}$ is one-to-one as a map
$M^\infty({\mathbb R}^n) \to \ell^\infty(\bar\Lambda_a)$.
Let $D$ denote the map $D: M^\infty({\mathbb R}^n) \to \ell^\infty(\Lambda)$ given by
$$ D: f \mapsto \{ {\mathcal B}\, f (\lambda) \}_{\lambda \in \Lambda}\, , $$
and let $T: \ell^\infty(\Lambda) \to \ell^\infty(\bar\Lambda_a)$ be given by
$$ T : \{ c_\lambda \}_{\lambda \in \Lambda} \mapsto \{
e^{\pi i \lambda _1 (\lambda_2+a)}e^{-\pi|\lambda+(0,a)|^2/2} c_\lambda \}_{ \lambda+(0,a) \in \bar\Lambda_a}\, . $$
The operator $V_\phi$ of \eqref{Vgmap} is the composite $V_\phi = T \circ D$, which is injective since
both $T$ and $D$ are.
\endproof
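The modulus identity underlying the proof can be verified numerically. We use here the convention $\phi(t)=e^{-\pi t^2}e^{2\pi i a t}$, for which $|V_\phi f(x,w)|=|{\mathcal B} f(x-i(w+a))|\,e^{-\frac{\pi}{2}(x^2+(w+a)^2)}$; with the opposite sign of $a$ in the window, the same identity holds with $w+a$ replaced by $w-a$. All test parameters below are hypothetical:

```python
import numpy as np

h, T = 1e-3, 8.0
t = np.arange(-T, T, h)

a = 0.4
f = (1 + t) * np.exp(-t**2)   # arbitrary test signal

def stft(x, w):
    # V_phi f(x, w) = int f(t) conj(phi(t - x)) e^{-2 pi i w t} dt,
    # with phi(t) = e^{-pi t^2} e^{2 pi i a t}
    window = np.exp(-np.pi * (t - x)**2) * np.exp(-2j * np.pi * a * (t - x))
    return np.sum(f * window * np.exp(-2j * np.pi * w * t)) * h

def bargmann(z):
    return np.sum(f * np.exp(2 * np.pi * t * z - np.pi * t**2 - 0.5 * np.pi * z**2)) * h

x, w = 0.6, -0.3
z = x - 1j * (w + a)
lhs = abs(stft(x, w))
rhs = abs(bargmann(z)) * np.exp(-0.5 * np.pi * (x**2 + (w + a)**2))
print(lhs, rhs)   # the two moduli agree
```

Only the modulus is compared: the phase factors $e^{2\pi i a\cdot x}e^{-\pi i x\cdot \Im(z')}$ drop out, exactly as in the proof.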
\smallskip
\begin{rem}\label{sameframe} {\rm In particular this shows that, with the
window functions $\tilde{\phi}(x)=e^{-\pi x^2}$ and
$\phi(x)=e^{-\pi x^2}e^{-2\pi i (a\cdot x)}$, the Gabor system
${\mathcal G}(\phi,\Lambda)$ is a frame if and only if ${\mathcal G}(\tilde{\phi},\Lambda)$ is a frame. }
\end{rem}
\smallskip
Indeed, for the window $\tilde\phi$ the system ${\mathcal G}(\tilde{\phi},\Lambda)$ is a frame
iff the system ${\mathcal G}(\tilde{\phi},\Lambda_a)$ is a frame and the latter is equivalent to
$$ \sum_{z\in \bar\Lambda} |{\mathcal B}\, f({z-ia})|e^{-\frac{\pi}{2}|z|^2}\asymp ||{\mathcal B}\, f||_{{\mathcal F}_n^2}\, , $$
which we have seen is equivalent to ${\mathcal G}(\phi,\Lambda)$ being a frame.
\smallskip
\subsubsection{Geometric Bargmann transform}
We apply this Bargmann transform argument to our geometric setting.
The bundle ${\mathcal E}$ is endowed with an almost complex
structure, coming from the identification ${\mathcal E}=\pi^* TS$ with $S$ a
Riemann surface, hence the dual ${\mathcal E}^\vee$ can also be endowed
with an almost complex structure. However, for the purpose of applying
the Bargmann transform argument in our setting, we just need to consider the
bundle ${\mathcal E} \oplus {\mathcal E}^\vee$ as a complex $2$-plane bundle over $M$.
First note that the local bases $\{ R_\alpha, R_{\alpha_J} \}$ of ${\mathcal E}$ and
$\{ \alpha, \alpha_J \}$ of ${\mathcal E}^\vee$ determine a local isomorphism
between ${\mathcal E}$ and ${\mathcal E}^\vee$. For $(W,\eta)\in ({\mathcal E} \oplus {\mathcal E}^\vee)_{(x,y,\theta)}$,
with $W=W_1 R_\alpha + W_2 R_{\alpha_J}$ and $\eta=\eta_1 \alpha + \eta_2 \alpha_J$,
we define $J: {\mathcal E} \oplus {\mathcal E}^\vee \to {\mathcal E} \oplus {\mathcal E}^\vee$ with $J^2=-1$ by setting
$$ J \, (W,\eta) := (\eta, -W) =\eta_1\, R_\alpha + \eta_2\, R_{\alpha_J} - W_1\, \alpha - W_2\, \alpha_J\, . $$
We can then take $W+ i \eta :=(W,\eta)$ with scalar multiplication by $\lambda\in {\mathbb C}$, $\lambda=x+iy$
with $x,y\in {\mathbb R}$ given by $\lambda \cdot (W+ i \eta)=(x+y\, J)\, (W,\eta)$. This gives a
fiberwise identification
\begin{equation}\label{Imap}
{\mathcal I}: ({\mathcal E} \oplus {\mathcal E}^\vee)_{(x,y,\theta)} \stackrel{\simeq}{\to} {\mathbb C}^2\, , \ \ \
(W,\eta) \mapsto z=(z_1,z_2)=(W_1+i\eta_1, W_2+i\eta_2)\, .
\end{equation}
\smallskip
Given the choice of a window function $\Psi_{0,(x,y,\theta)}(V)$ as in \eqref{Psiwindow},
with a quadratic form on the fibers of ${\mathcal E}$ over the local chart, determined by a smooth section
$A$ of $T^*S \otimes T^*S$ that is symmetric and positive definite, we consider an
associated quadratic form
\begin{equation}\label{Qform}
{\mathcal Q}: {\mathcal E}\oplus {\mathcal E}^\vee \to {\mathbb C}, \ \ \ \ {\mathcal Q}_{(x,y,\theta)}(W+i\eta):= W^t \, A_{(x,y)} \, W + 2i \langle \eta, W \rangle_{(x,y,\theta)} - \eta^t\, \eta,
\end{equation}
where $\langle \eta, W \rangle$ is the duality pairing of ${\mathcal E}$ and ${\mathcal E}^\vee$, and $\eta^t\, \eta$
denotes the pairing with respect to the metric in ${\mathcal E}^\vee$
determined by the metric on $S$.
We use the notation
\begin{equation}\label{znotation}
{\mathcal Q}(z):= {\mathcal Q}\circ {\mathcal I}^{-1}(z) \ \ \ \text{ and } \ \ \ V\bullet z:= V^t \frac{A_{(x,y)}}{2\pi} W +i \langle \eta, V \rangle\, , \ \ \ \text{ for } z={\mathcal I}(W,\eta)\, .
\end{equation}
\smallskip
\begin{defn}\label{BargmGeom}
The Bargmann transform of a function $f \in L^2({\mathcal E},{\mathbb C})$ is a function ${\mathcal B}\, f:{\mathcal E}\oplus{\mathcal E}^\vee\to {\mathbb C}$
defined fiberwise by
\begin{equation}\label{eqBargGeom}
({\mathcal B}\, f)|_{({\mathcal E}\oplus{\mathcal E}^\vee)_{(x,y,\theta)}}(W,\eta):=\int_{{\mathcal E}_{(x,y,\theta)}} \, f|_{{\mathcal E}_{(x,y,\theta)}}(V)\, e^{2\pi V\bullet z-\pi V^t A_{(x,y)}V+\frac{\pi}{2}{\mathcal Q}(z)}\,
d{\rm vol}_{(x,y,\theta)}(V)\, ,
\end{equation}
with the notation as in \eqref{znotation} and with $d{\rm vol}_{(x,y,\theta)}(V)$ the volume form on the
fibers of ${\mathcal E}$ determined by the Riemannian metric on $S$.
\end{defn}
\smallskip
\begin{lem}\label{GaborBargmannLem}
Consider the window function $\Psi_0$ as in \eqref{Psiwindow}. The Gabor functions
$$\rho(W,\eta)\Psi_0(V)=e^{2\pi i\langle \eta,V-W \rangle}\Psi_0(V-W)\, , $$
with $(W,\eta)\in\mathcal{E}\bigoplus\mathcal{E}^\vee$, satisfy
\begin{equation}\label{eqGaborBargmann}
|\langle f,\rho(W,\eta)\Psi_0 \rangle|=|{\mathcal B}\, f(W-i(\eta+\frac{\eta_\theta}{2\pi}))|\, e^{-\frac{\pi}{2}|{\mathcal Q}(W-i(\eta+\frac{\eta_\theta}{2\pi}))|}\, .
\end{equation}
\end{lem}
\proof We have
\begin{align*}
\langle f,\rho(W,\eta)\Psi_0\rangle &=\int_{{\mathcal E}_{(x,y,\theta)}}f(V)\,e^{-\pi(V-W)^t\frac{A}{\pi}(V-W)-2\pi i \langle \frac{\eta_\theta}{2\pi},V-W\rangle }e^{-2\pi i \langle \eta,V\rangle}\, d{\rm vol}(V)\\
&=e^{2\pi i \langle \frac{\eta_\theta}{2\pi},W\rangle}\int_{{\mathcal E}_{(x,y,\theta)}} f(V)\, e^{-\pi V^t\frac{A}{\pi}V+2\pi V^t\frac{A}{\pi}W-\pi W^t\frac{A}{\pi}W}e^{-2\pi i\langle \frac{\eta_\theta}{2\pi}+\eta,V \rangle}\, d{\rm vol}(V)\\
&= e^{2\pi i \langle \frac{\eta_\theta}{2\pi},W\rangle} e^{-i\pi \langle \eta+\frac{\eta_\theta}{2\pi}, W\rangle}e^{-\frac{\pi}{2}W^t\frac{A}{\pi}W+\frac{\pi}{2}(\eta+\frac{\eta_\theta}{2\pi})^t\cdot (\eta+\frac{\eta_\theta}{2\pi})}\cdot \\
&\cdot \int_{{\mathcal E}_{(x,y,\theta)}}f(V)\, e^{-2\pi V\bullet (W-i(\frac{\eta_\theta}{2\pi}+\eta))}e^{-\pi V^t\frac{A}{\pi}V}e^{-\frac{\pi}{2} {\mathcal Q}(W-i(\eta+\frac{\eta_\theta}{2\pi}))}d{\rm vol}(V)\,
\end{align*}
with ${\mathcal Q}$ as in \eqref{Qform}.
\endproof
\smallskip
\begin{rem}\label{remBz}{\rm
Under the identification \eqref{Imap} we write \eqref{eqGaborBargmann} equivalently as
\begin{equation}\label{eqBcoeff}
|\langle f,\rho(W,\eta)\Psi_0\rangle |=|{\mathcal B} f(\overline{z})|e^{-\frac{\pi}{2}|{\mathcal Q}(z)|} \ \ \text{ for } \
z=W+i(\frac{\eta_\theta}{2\pi}+\eta) \, .
\end{equation} }
\end{rem}
\smallskip
\begin{defn}\label{BFspaceE}
The global Bargmann-Fock space ${\mathcal F}^2({\mathcal E} \oplus{\mathcal E}^\vee)$ is the space of functions
$F:{\mathcal E} \oplus{\mathcal E}^\vee \to {\mathbb C}$ such that
$F|_{({\mathcal E} \oplus{\mathcal E}^\vee)_{(x,y,\theta)}} \circ {\mathcal I}^{-1}:{\mathbb C}^2\to {\mathbb C}$ is entire with
$$ \| F \|^2_{{\mathcal F}^2({\mathcal E} \oplus{\mathcal E}^\vee)}=\int_M \int_{{\mathbb C}^2}
\bigg|F|_{({\mathcal E} \oplus{\mathcal E}^\vee)_{(x,y,\theta)}}\circ {\mathcal I}^{-1}(z)\bigg|^2 \, e^{-\pi |{\mathcal Q}(z)|}dz\, d{\rm vol}(x,y,\theta) < \infty \, . $$
The fiberwise Bargmann-Fock space ${\mathcal F}^2({\mathcal E} \oplus{\mathcal E}^\vee)_{(x,y,\theta)}$ is the space of functions
$F: ({\mathcal E} \oplus{\mathcal E}^\vee)_{(x,y,\theta)} \to {\mathbb C}$ such that $F\circ {\mathcal I}^{-1}:{\mathbb C}^2\to {\mathbb C}$ is entire,
with the norm
$$ \| F \|^2_{{\mathcal F}^2({\mathcal E} \oplus{\mathcal E}^\vee)_{(x,y,\theta)}} =\int_{{\mathbb C}^2}
\bigg|F|_{({\mathcal E} \oplus{\mathcal E}^\vee)_{(x,y,\theta)}}\circ {\mathcal I}^{-1}(z)\bigg|^2 \, e^{-\pi |{\mathcal Q}(z)|}dz < \infty \, . $$
\end{defn}
\smallskip
The space ${\mathcal F}^2({\mathcal E} \oplus{\mathcal E}^\vee)$ is a Hilbert space with the inner product
$$\langle F,G\rangle_{{\mathcal F}^2}:=\int_M \int_{\mathbb{C}^2}F|_{({\mathcal E} \oplus{\mathcal E}^\vee)_{(x,y,\theta)}} \circ {\mathcal I}^{-1}(z) \,\,\overline{G|_{({\mathcal E} \oplus{\mathcal E}^\vee)_{(x,y,\theta)}} \circ {\mathcal I}^{-1}(z)}\, e^{-\pi |{\mathcal Q}(z)|}\, dz \, d{\rm vol}(x,y,\theta)\, . $$
Indeed, ${\mathcal F}^2({\mathcal E} \oplus{\mathcal E}^\vee)$ is the direct integral over $(M,d{\rm vol})$ of a family of Hilbert spaces
${\mathcal F}^2({\mathcal E} \oplus{\mathcal E}^\vee)_{(x,y,\theta)}$, which are isomorphic, through the map ${\mathcal I}$ of \eqref{Imap} with
the Hilbert space $L^2({\mathbb C}^2,e^{-\pi |{\mathcal Q}(z)|}dz)$.
\smallskip
In this geometric setting we formulate the sampling condition in the following way.
\begin{defn}\label{smoothsample}
Let $\Lambda$ be a bundle of lattices over $M$ where, over a local chart we have $\Lambda_{(x,y,\theta)}$ a lattice in
$({\mathcal E}\oplus{\mathcal E}^\vee)_{(x,y,\theta)}$. The bundle $\Lambda$ satisfies the smooth
sampling condition for ${\mathcal F}^2({\mathcal E} \oplus{\mathcal E}^\vee)$ if there are ${\mathbb R}^*_+$-valued smooth functions
$C,C'$ on the local charts of $M$, such that, for all $(x,y,\theta)$ in a local chart of $M$ and for all
$F\in {\mathcal F}^2({\mathcal E} \oplus{\mathcal E}^\vee)_{(x,y,\theta)}$, the estimates
\begin{equation}\label{Msample}
C_{(x,y,\theta)}\cdot \| F \|^2_{{\mathcal F}^2({\mathcal E} \oplus{\mathcal E}^\vee)_{(x,y,\theta)}} \leq \sum_{\lambda_{(x,y,\theta)}\in \Lambda_{(x,y,\theta)}} \bigg| F(\lambda_{(x,y,\theta)}) \bigg|^2 e^{-\pi |{\mathcal Q}(\lambda_{(x,y,\theta)})|} \leq C'_{(x,y,\theta)}\cdot \| F \|^2_{{\mathcal F}^2({\mathcal E} \oplus{\mathcal E}^\vee)_{(x,y,\theta)}}
\end{equation}
are satisfied, with ${\mathcal Q}$ as in \eqref{Qform}.
\end{defn}
\smallskip
\begin{lem}\label{lemEframe}
For any $(x,y,\theta)$ in a local chart of $M$,
the Bargmann transform ${\mathcal B}$ of \eqref{eqBargGeom} is a bijection from
$L^2({\mathcal E}_{(x,y,\theta)})$ to ${\mathcal F}^2({\mathcal E} \oplus{\mathcal E}^\vee)_{(x,y,\theta)}$, with
\begin{equation}\label{metricEquality}
\| {\mathcal B} \, f \|^2_{{\mathcal F}^2({\mathcal E}\oplus{\mathcal E}^\vee)_{(x,y,\theta)}}=
K_{(x,y)}\cdot \| f \|^2_{L^2({\mathcal E}_{(x,y,\theta)})} \, ,
\end{equation}
\end{equation}
for a smooth ${\mathbb R}_+^*$-valued function $K$ over the local charts $U$ of $S$.
Moreover, ${\mathcal G}(\Psi_0,\Lambda)$ is a frame for $L^2({\mathcal E}_{(x,y,\theta)})$ if and only if $\overline{\Lambda}+i\frac{\eta_\theta}{2\pi}$ is a set of sampling for ${\mathcal F}^2({\mathcal E}\oplus{\mathcal E}^\vee)_{(x,y,\theta)}$.
\end{lem}
\smallskip
\proof
For the window function $\Psi_0$ as in \eqref{Psiwindow}, we have
$$ \|\Psi_0 \|^2_{L^2({\mathcal E}_{(x,y,\theta)})} =\int_{{\mathcal E}_{(x,y,\theta)}} | \Psi_0(V) |^2\, dV =
\int_{{\mathcal E}_{(x,y,\theta)} } e^{-2 V^t A_{(x,y)} V} \, dV = \frac{\pi}{2\sqrt{\det(A_{(x,y)})} } \, , $$
by a standard Gaussian integral in two dimensions. Because we assumed that the
matrices $A_{(x,y)}$ in the window function $\Psi_0$ of \eqref{Psiwindow} have spectrum
bounded away from zero, the quantity
$$ K_{(x,y)} := \frac{\pi} { 2 \sqrt{ \det(A_{(x,y)})} } $$
determines a smooth real valued function $K: S\to {\mathbb R}$ with a strictly positive minimum.
Moreover, by Theorem~3.2.1 and Corollary~3.2.2 of \cite{Groch}, the orthogonality
relation
$$ \langle V_{\phi_1} f_1, V_{\phi_2} f_2\rangle_{L^2({\mathbb R}^{2n})} = \langle f_1,f_2 \rangle_{L^2({\mathbb R}^n)} \cdot \overline{\langle \phi_1,\phi_2 \rangle_{L^2({\mathbb R}^n)}} $$
for the short time Fourier transform
$$ V_\phi f (x,\omega)=\int_{{\mathbb R}^n} f(t)\, \overline{\phi(t-x)} \, e^{-2\pi i t\cdot \omega} \, dt \, , \
\text{ with }\ (x,\omega)\in {\mathbb R}^{2n}\, , $$
gives the identity
$$ \|\langle f,\rho(W,\eta)\Psi_0\rangle\|_{L^2({\mathcal E}\oplus{\mathcal E}^\vee)_{(x,y,\theta)}}= \| f \|_{L^2({\mathcal E}_{(x,y,\theta)})}
\cdot \| \Psi_0 \|_{L^2({\mathcal E}_{(x,y,\theta)})} \, .$$
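The orthogonality relation above can be illustrated numerically. The following Python sketch checks the $n=1$ case with $f=\phi=e^{-t^2}$, for which both sides equal $\pi/2$ (the grid sizes are ad hoc choices, not part of the construction):

```python
import numpy as np

# Numerical check of the orthogonality relation in dimension n = 1,
# with f = phi = exp(-t^2):  ∫∫ |V_phi f(x,ω)|^2 dx dω = ||f||^2 ||phi||^2 = π/2.
t = np.linspace(-6.0, 6.0, 1201); dt = t[1] - t[0]
xs = np.linspace(-5.0, 5.0, 201);  dx = xs[1] - xs[0]
ws = np.linspace(-3.0, 3.0, 121);  dw = ws[1] - ws[0]

f = np.exp(-t**2)
norm_sq = (f**2).sum() * dt                      # ||f||_2^2 = sqrt(pi/2)

E = np.exp(-2j * np.pi * np.outer(t, ws))        # kernel e^{-2πi t ω}
total = 0.0
for x in xs:
    g = f * np.exp(-(t - x)**2)                  # f(t) * conj(phi(t-x)), phi real
    V = (g[:, None] * E).sum(axis=0) * dt        # V_phi f(x, ·)
    total += (np.abs(V)**2).sum()
total *= dx * dw

print(total, norm_sq**2)                         # both ≈ π/2 ≈ 1.5708
assert abs(total - norm_sq**2) < 1e-3
```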
Moreover, by \eqref{eqBcoeff} we have, for $z={\mathcal I}(W,\eta)$,
$$ \|\langle f,\rho(W,\eta)\Psi_0\rangle\|^2_{L^2({\mathcal E}\oplus{\mathcal E}^\vee)_{(x,y,\theta)}}= \int_{({\mathcal E}\oplus{\mathcal E}^\vee)_{(x,y,\theta)}} | \langle f,\rho(W,\eta)\Psi_0\rangle |^2 d{\rm vol}(W,\eta) $$
$$ = \int_{{\mathbb C}^2} |{\mathcal B}\, f(\bar z) |^2 e^{-\pi |{\mathcal Q}(z)|} \, dz\, . $$
Injectivity then follows, while surjectivity follows by the same argument showing
the density of ${\mathcal B}(L^2({\mathbb R}^n))\subset {\mathcal F}^2_n$ in the proof of Theorem~3.4.3 of \cite{Groch},
applied pointwise in $(x,y,\theta)\in M$.
\smallskip
The Gabor system ${\mathcal G}(\Psi_0,\Lambda)$ satisfies the smooth frame
condition of Definition~\ref{smoothGaborDef} if
there are smooth functions $C_{(x,y,\theta)}, C'_{(x,y,\theta)} >0$ on the local charts of $M$ such that
$$ C_{(x,y,\theta)} \, \| f \|^2_{L^2({\mathcal E}_{(x,y,\theta)})} \leq \sum_{\lambda=(W,\eta)\in \Lambda}
|\langle f , \rho(\lambda) \Psi_0 \rangle |^2 \leq C'_{(x,y,\theta)} \| f \|^2_{L^2({\mathcal E}_{(x,y,\theta)})}\, . $$
By \eqref{eqBcoeff} we see that this is equivalent to the smooth sampling condition of
Definition~\ref{smoothsample} for $\overline{\Lambda}+i\frac{\eta_\theta}{2\pi}$.
\endproof
\smallskip
\begin{prop}\label{bFrameProp}
With the scaling by the function $b=b_M(x,y,\theta)$ of \eqref{afunctradii}, the Gabor system
${\mathcal G}(\Psi_0,\Lambda_{b,\alpha,J}\oplus
\Lambda_{\alpha,J}^\vee)$ satisfies the frame condition.
\end{prop}
\proof We write here the window function $\Psi_0$ as $\Psi_0^A$ to emphasize the
dependence on the quadratic form $A=A_{(x,y)}$.
Let $f: {\mathcal E} \to {\mathbb R}$ be a signal, with $f|_{{\mathcal E}_{(x,y,\theta)}}\in L^2({\mathcal E}_{(x,y,\theta)})$.
We have
$$ \sum_{\lambda\in \Lambda_{b,\alpha,J}\oplus
\Lambda_{\alpha,J}^\vee} \bigg| \langle f,\rho(\lambda)\Psi^A_0\rangle \bigg|^2 = $$
$$ = \sum_{(bn,m)\in b{\mathbb Z}^2\times{\mathbb Z}^2}\bigg| \int_{{\mathcal E}_{(x,y,\theta)}}f(V)e^{2\pi i mV}e^{-(V-bn)^t A_{(x,y,\theta)}(V-bn)+i\langle \eta_\theta,V-bn\rangle }d{\rm vol}_{(x,y,\theta)}(V)\bigg|^2\, . $$
With a change of variables $V=\sqrt{b}\, U$ and correspondingly changing the quadratic form $A$ to $b A$,
we rewrite the above as
$$ \sum_{(bn,m)\in b{\mathbb Z}^2\times{\mathbb Z}^2} b^2\, \bigg| \int_{{\mathcal E}_{(x,y,\theta)}} f(\sqrt{b}U)
\overline{M_{\sqrt{b}m}T_{\sqrt{b}n}e^{-U(bA_{(x,y,\theta)})U^T-i\langle \eta_\theta\sqrt{b},U \rangle}}
d{\rm vol}_{(x,y,\theta)}(U)\bigg|^2 $$
$$ = b^2\sum_{\tilde{\lambda}\in \sqrt{b}\Lambda_{\alpha,J}\oplus
\sqrt{b}\Lambda_{\alpha,J}^\vee}\bigg| \langle f_{\sqrt{b}},\rho(\tilde{\lambda})\Psi^{bA}_0 \rangle \bigg|^2\, , $$
where $f_{\sqrt{b}}(V)=f(\sqrt{b}V)$. Therefore, the Gabor system ${\mathcal G}(\Psi_0^A,\Lambda_{b,\alpha,J}\oplus
\Lambda_{\alpha,J}^\vee)$ is a frame for $L^2({\mathcal E}_{(x,y,\theta)})$ if and only if
${\mathcal G}(\Psi_0^{bA},\Lambda_{\sqrt{b},\alpha,J}\oplus
\Lambda_{\sqrt{b},\alpha,J}^\vee)$ is a frame for $L^2({\mathcal E}_{(x,y,\theta)})$.
Moreover, by Lemma~\ref{lemEframe}, we know that ${\mathcal G}(\Psi_0^{bA}, \Lambda_{\sqrt{b},\alpha,J}\oplus
\Lambda_{\sqrt{b},\alpha,J}^\vee)$ is a frame for $L^2({\mathcal E}_{(x,y,\theta)})$ if and only if the uniformly
discrete set $\sqrt{b}{\mathbb Z}^2+i\sqrt{b}\left(1+\frac{\eta_\theta}{2\pi}\right){\mathbb Z}^2$ is a set of sampling for ${\mathcal F}({\mathcal E}\oplus{\mathcal E}^\vee)_{(x,y,\theta)}$. Finally, by \cite{Groch2},
$\sqrt{b}{\mathbb Z}^2+i\sqrt{b}\left(1+\frac{\eta_\theta}{2\pi}\right){\mathbb Z}^2$ is a set of sampling if and only if the complex lattice $T({\mathbb Z}^2+i{\mathbb Z}^2)$ is a set of sampling for the matrix $$ T=\begin{pmatrix}
\sqrt{b}&0\\
0&\sqrt{b}
\end{pmatrix}\, . $$
By Proposition~11 of \cite{Groch2}, the latter condition is satisfied if and only if $\sqrt{b}<1$, which we know
is the case by Lemma~\ref{scaledlatt}.
\endproof
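The change of variables used in this proof can be checked numerically in a $1$-dimensional analogue, where the Jacobian contributes a factor $\sqrt{b}$ instead of $b$; all parameter values in the following sketch are hypothetical:

```python
import numpy as np

# 1D analogue of the substitution V = sqrt(b) U used in the proof:
#   ∫ f(v) e^{2πi m v} e^{-a (v - b n)^2} dv
#     = sqrt(b) ∫ f(sqrt(b) u) e^{2πi m sqrt(b) u} e^{-a b (u - sqrt(b) n)^2} du
a, b, n, m = 1.3, 0.49, 2, 1                   # hypothetical parameter values
f = lambda v: np.exp(-v**2) * np.cos(v)        # hypothetical test signal

v = np.linspace(-8.0, 8.0, 3201)
lhs = (f(v) * np.exp(2j*np.pi*m*v)
       * np.exp(-a*(v - b*n)**2)).sum() * (v[1] - v[0])

u = np.linspace(-12.0, 12.0, 4801)
rhs = np.sqrt(b) * (f(np.sqrt(b)*u) * np.exp(2j*np.pi*m*np.sqrt(b)*u)
                    * np.exp(-a*b*(u - np.sqrt(b)*n)**2)).sum() * (u[1] - u[0])

print(abs(lhs - rhs))                          # ≈ 0
assert abs(lhs - rhs) < 1e-8
```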
\bigskip
\section{Gabor frames: symplectization and contactization}\label{Gabor5DSec}
As in \S \ref{SymplContSec}, we consider the contactization ${\mathcal C}{\mathcal S}(M)$ of the symplectization ${\mathcal S}(M)$
of the manifold of contact elements $M={\mathbb S}_w(T^*S)$ of a surface $S$.
This model is motivated by the goal of describing visual perception based on neurons sensitive not only to
orientation, but also to frequency and phase, with the frequency-phase and the position-orientation uncertainty
minimized by the Gabor functions profiles. From the point of view of this model, it is worth pointing out that,
although higher dimensional, the $5$-dimensional contact manifold ${\mathcal C}{\mathcal S}(M)$ is completely determined
by the contact $3$-manifold $M$ with no additional independent choices, being just the contactization of the symplectization.
\smallskip
Given local charts on $M$ with the choice of local basis
\begin{equation}\label{xibasis}
X_\theta =\partial_\theta, \ \ \ R_{\alpha_J} =- w^{-1}\sin(\theta) \partial_x +w^{-1} \cos(\theta) \partial_y
\end{equation}
for the contact planes $\xi$ of the contact structure $\alpha$ on $M$ and
the Reeb field $R_\alpha=w^{-1} \cos(\theta) \partial_x + w^{-1} \sin(\theta) \partial_y$, we obtain
a basis of the contact hyperplane distribution of the five-dimensional contact manifold
$(T^*S_0\times S^1, \tilde\alpha)$, in the corresponding local charts, given by
$$ \{ X_\theta, R_{\alpha_J}, R_{\phi,\alpha}, X_w \}, $$
with
$$ R_{\phi,\alpha}:=\partial_\phi + R_\alpha \ \ \text{ and } \ \ X_w :=\partial_w\, . $$
\smallskip
In the case of the twisted contact structure $\alpha_J$, with the choice of basis
\begin{equation}\label{xiJbasis}
X_\theta =\partial_\theta, \ \ \ R_\alpha=w^{-1} \cos(\theta) \partial_x + w^{-1} \sin(\theta) \partial_y
\end{equation}
for the contact plane distribution $\xi_J$,
and the Reeb vector field $R_{\alpha,J} =- w^{-1}\sin(\theta) \partial_x +w^{-1} \cos(\theta) \partial_y$,
we similarly obtain a basis for the contact hyperplanes $\tilde\xi_J$ given by
\begin{equation}\label{basisxiJ}
\{ X_\theta, R_\alpha, R_{\phi,\alpha,J}, X_w \},
\end{equation}
with
$$ R_{\phi,\alpha,J}:=\partial_\phi + R_{\alpha,J} \, . $$
\smallskip
The bundle ${\mathcal E}$ of signal planes on $M$ determines the following bundles on
the symplectization ${\mathcal S}(M)$ and the contactization ${\mathcal C}{\mathcal S}(M)$.
\smallskip
\begin{defn}\label{EhatEtildeE}
Let $\hat{\mathcal E}$ denote the pullback of the bundle ${\mathcal E}$ of signal planes to
$T^*S_0\simeq M\times {\mathbb R}^*_+$ via the projection to $M$, and
let $\tilde{\mathcal E}$ denote the vector bundle over ${\mathcal C}{\mathcal S}(M)$ given by
$\hat{\mathcal E}\boxplus TS^1=\pi_{T^*S_0}^* \hat{\mathcal E} \oplus \pi_{S^1}^* TS^1$,
with pullbacks taken with respect to the two projections of ${\mathcal C}{\mathcal S}(M)=T^*S_0\times S^1$ on the two factors.
\end{defn}
\smallskip
The signals in this setting will be functions $I: \tilde{\mathcal E}\to {\mathbb R}$. The vector bundle $\tilde{\mathcal E}$ on ${\mathcal C}{\mathcal S}(M)$
is a rank $3$ real vector bundle over a $5$-dimensional manifold.
\smallskip
\begin{rem}\label{basisEtilde}{\rm
A basis of sections for $\tilde{\mathcal E}$ over a local chart is obtained
by taking the vectors $\{ R_\alpha, R_{\alpha_J}, \partial_\phi \}$.
There are two other choices of basis directly determined by the contact forms $\tilde\alpha$ and $\tilde\alpha_J$, namely
$\{ R_\alpha, R_{\phi,\alpha,J}, R_{\tilde\alpha_J} \},$ where the first two vectors span the intersection
$\tilde{\mathcal E}\cap \tilde\xi$ of the contact hyperplane distribution with the bundle $\tilde{\mathcal E}$ and the last
vector is the Reeb field of $\tilde\alpha_J$, or $\{ R_{\alpha_J}, R_{\phi,\alpha}, R_{\tilde\alpha} \}$,
with the first two vector fields spanning $\tilde{\mathcal E}\cap \tilde\xi_J$ and the third the Reeb field
of $\tilde\alpha$. The first basis has the advantage of providing consistent choices of basis for both
${\mathcal E}$ and $\tilde{\mathcal E}$.
}\end{rem}
\smallskip
\begin{lem}\label{lem5Dwindow}
In a local chart of ${\mathcal S}(M)$ with coordinates $(x,y,w,\theta)$,
the window function $\Psi_0$ as in \eqref{Psiwindow} extends to a window function
on $\hat{\mathcal E}$ given by
\begin{equation}\label{hatPsi0}
\hat\Psi_{0, (x,y,w,\theta)}(V) =\exp\left(-V^t A_{(x,y)} V - i \langle \eta_{(w,\theta)}, V\rangle_{(x,y)} \right),
\end{equation}
with $\eta_{(w,\theta)}=(w\cos(\theta), w\sin(\theta))$.
\end{lem}
\proof The window function $\Psi_0$ as in \eqref{Psiwindow} is obtained as restriction
to $TS\oplus {\mathbb S}(T^*S)$ of a window function $\Phi_0$ on $TS\oplus T^*S$ defined
as in \eqref{Phi0S} in Definition~\ref{Psi0def}. By identifying ${\mathcal S}(M)=T^*S_0$ and
$\hat{\mathcal E}$ with the pullback of $TS$ to ${\mathcal S}(M)$, we see that $\Phi_0$ induces a
window function $\hat\Psi_0$ on $\hat{\mathcal E}$ of the form \eqref{hatPsi0}.
\endproof
\smallskip
We further extend the window function \eqref{hatPsi0} to $\tilde{\mathcal E}$ so as to
obtain a window function that is a modified form of the function considered in the
model of \cite{BaspSartiCitti}.
\smallskip
\begin{defn}\label{tildePsi0def}
In a local chart of ${\mathcal C}{\mathcal S}(M)$ with coordinates $(x,y,w,\theta,\phi)$,
window functions on $\tilde{\mathcal E}$ extending the window function \eqref{hatPsi0} are
functions on $\tilde{\mathcal E}$ of the form
\begin{equation}\label{tildePhi0}
\tilde\Psi_{0,(x,y,w,\theta,\phi),\zeta_0}(V,\upsilon) =\exp \left( - V^t A_{(x,y)} V -i \langle \eta_{(w,\theta)}, V \rangle_{(x,y,w,\theta)}-\kappa_\phi^2 \upsilon^2 -i \langle \zeta_0, \upsilon \rangle_\phi \right)\, ,
\end{equation}
for $\eta_{(w,\theta)}$ as in \eqref{hatPsi0}, and with $\zeta_0 \in T^*_\phi S^1$ and $\upsilon \in T_\phi S^1$.
The two-dimensional
Gabor systems of the form $\{ \rho(W,\eta) \Psi_0 |_{{\mathcal E}_{(x,y,\theta)}} \}$ are then replaced
by a three-dimensional system of the form
\begin{equation}\label{Gabor3D}
\rho(W,\eta,\nu,\zeta) \tilde\Psi_0 |_{\tilde{\mathcal E}_{(x,y,w,\theta,\phi)}} (V,\upsilon) \, ,
\end{equation}
with $(W,\eta,\nu,\zeta)\in (\tilde{\mathcal E}\oplus\tilde{\mathcal E}^\vee)_{(x,y,w,\theta,\phi)}$.
\end{defn}
\smallskip
In the setting of \cite{BaspSartiCitti}, the additional variables $\phi\in S^1$ (with
its linearization $\upsilon \in T_\phi S^1$) and the dual variable $\zeta \in T^*_\phi S^1$,
which we view here as part of the bundle $\tilde{\mathcal E}$ over the contact manifold ${\mathcal C}{\mathcal S}(M)$,
represent a model of phase and velocity of spatial wave propagation. The window function
$\tilde\Psi_0$ that we consider here differs from the function considered in \cite{BaspSartiCitti},
which does not have the Gaussian term in the $\upsilon \in T_\phi S^1$ variable. While
they consider the limit case where $\kappa_\phi=0$, we argue here that one needs this
additional term to be non-zero (though possibly small) in order to have good signal analysis
properties for the associated Gabor system, in the presence of these additional variables.
The Gaussian term in $\upsilon$ can in principle be replaced by another rapid decay function,
however, it seems more natural to use a Gaussian term, like we have for the variables in ${\mathcal E}$,
in order to maintain a similar structure for all the variables of $\tilde{\mathcal E}$. We will return to
discuss the case $\kappa_\phi=0$ of \cite{BaspSartiCitti} in \S \ref{Gelfand3Sec}.
\smallskip
Let $\tilde{\mathcal E}^\vee$ denote the dual bundle of $\tilde{\mathcal E}$, with the choice of local basis
$\{ R_\alpha, R_{\alpha,J}, \partial_\phi \}$ for $\tilde{\mathcal E}$ and the dual local
basis $\{ \alpha, \alpha_J, d\phi \}$. This determines bundles of framed lattices over the local charts of ${\mathcal C}{\mathcal S}(M)$
\begin{equation}\label{tildeLatt}
\tilde\Lambda = {\mathbb Z}\, R_\alpha + {\mathbb Z}\, R_{\alpha_J} + {\mathbb Z}\, \partial_\phi = \Lambda_{\alpha,J}\oplus L, \ \ \ \
\tilde\Lambda^\vee = {\mathbb Z} \,\alpha + {\mathbb Z}\, \alpha_J + {\mathbb Z} \, d\phi = \Lambda^\vee_{\alpha,J}\oplus L^\vee \, ,
\end{equation}
with $\Lambda_{\alpha,J}$ and $\Lambda^\vee_{\alpha,J}$ the bundles of framed lattices
of \eqref{latLambda} and \eqref{latLambdavee}.
\smallskip
We consider the bundle of framed lattices
$\tilde\Lambda\oplus \tilde\Lambda^\vee$, which has the property that, in a local chart, the fibers
$$ (\tilde\Lambda\oplus \tilde\Lambda^\vee)_{(x,y,w,\theta,\phi)} \subset
(\tilde{\mathcal E} \oplus \tilde{\mathcal E}^\vee)_{(x,y,w,\theta,\phi)} $$
are lattices in the fibers of the $3$-plane bundle $\tilde{\mathcal E}\oplus \tilde{\mathcal E}^\vee$ over the
$5$-dimensional contact manifold ${\mathcal C}{\mathcal S}(M)$.
\smallskip
The window function $\tilde\Psi_0$ and the bundle of framed lattices
$\tilde\Lambda\oplus \tilde\Lambda^\vee$ determine a Gabor system
\begin{equation}\label{Gabor3DLambda}
{\mathcal G}(\tilde\Psi_0, \tilde\Lambda\oplus \tilde\Lambda^\vee)=\left\{ \rho(\lambda) \tilde\Psi_0 |_{\tilde{\mathcal E}_{(x,y,w,\theta,\phi)}} \, \bigg|\, \lambda=(W,\eta,\nu,\zeta) \in (\tilde\Lambda\oplus \tilde\Lambda^\vee)_{(x,y,w,\theta,\phi)} \right\} \, .
\end{equation}
\smallskip
As in the case of the bundle of framed lattices $\Lambda_{\alpha,J}\oplus \Lambda^\vee_{\alpha,J}$
we consider a scaling of the lattices in the fibers of $\tilde\Lambda\oplus \tilde\Lambda^\vee$, for
the same reasons discussed in \S~\ref{scaleLSec}. We define $R_{\max}>0$ as in
\S~\ref{scaleLSec}. For the $TS^1$ direction of $\tilde{\mathcal E}$, the injectivity radius $R^{S^1}_{inj}$
is constant and equal to half the length of the $S^1$ circle. Thus, we take, as in \S~\ref{scaleLSec},
a scaling factor of the form $\gamma=R^{S^1}_{inj}/R_{\max}$. As discussed in \S~\ref{scaleLSec}
we can assume that in our model $R_{\max} > R^{S^1}_{inj}$ so that $\gamma <1$. We then
consider the bundle of framed lattices determined by this choice of scaling on $L$ and the previous
choice of scaling on $\Lambda_{\alpha,J}$.
\smallskip
\begin{defn}\label{scale3Dlattdef}
Let $\tilde\Lambda_{b,\gamma}\oplus \tilde\Lambda^\vee$ be the bundle of framed lattices of the form
\begin{equation}\label{scale3Dlat}
\tilde\Lambda_{b,\gamma}\oplus \tilde\Lambda^\vee =
\Lambda_{b,\alpha,J}\oplus L_\gamma \oplus \Lambda_{\alpha,J}^\vee \oplus L^\vee\, ,
\end{equation}
where $\Lambda_{b,\alpha,J}$ is the scaled lattice of \eqref{aLambda} with $b=b_M$ the
function of \eqref{afunctradii}, while $L_\gamma= \gamma \, L$ for the constant
$\gamma=R^{S^1}_{inj}/R_{\max}$ as above. This has associated Gabor system
\begin{equation}\label{scaleGabor3DLambda}
{\mathcal G}(\tilde\Psi_0, \tilde\Lambda_{b,\gamma}\oplus \tilde\Lambda^\vee)=\left\{ \rho(\lambda) \tilde\Psi_0 \, \bigg|\, \lambda=(W,\eta,\nu,\zeta) \in \tilde\Lambda_{b,\gamma}\oplus \tilde\Lambda^\vee \right\} \, ,
\end{equation}
where for simplicity of notation we have suppressed the explicit indication of the fibers of
$\tilde{\mathcal E}\oplus\tilde{\mathcal E}^\vee$ as in \eqref{Gabor3DLambda}.
\end{defn}
\smallskip
We then have the following result about the Gabor frame condition for the Gabor
systems \eqref{Gabor3DLambda} and \eqref{scaleGabor3DLambda}.
\smallskip
\begin{prop}\label{prop3Dframes}
The Gabor system ${\mathcal G}(\tilde\Psi_0, \tilde\Lambda\oplus \tilde\Lambda^\vee)$ of \eqref{Gabor3DLambda}
is not a frame. The Gabor system ${\mathcal G}(\tilde\Psi_0, \tilde\Lambda_{b,\gamma}\oplus \tilde\Lambda^\vee)$
of \eqref{scaleGabor3DLambda} is a frame.
\end{prop}
\proof By construction the Gabor systems with window function $\tilde\Psi_0$ and lattice
$\tilde\Lambda \oplus \tilde\Lambda^\vee$ or $\tilde\Lambda_{b,\gamma}\oplus \tilde\Lambda^\vee$
split as a product of a $2$-dimensional system ${\mathcal G}(\Psi_0,\Lambda_{\alpha,J}\oplus \Lambda_{\alpha,J}^\vee)$
or ${\mathcal G}(\Psi_0,\Lambda_{b,\alpha,J}\oplus \Lambda_{\alpha,J}^\vee)$ and a $1$-dimensional Gabor
system ${\mathcal G}(\psi_0, L\oplus L^\vee)$ or ${\mathcal G}(\psi_0, L_\gamma \oplus L^\vee)$, where
$$ \psi_{0,\phi}(\upsilon) = \exp(-\kappa_\phi^2 \upsilon^2 -i \langle \zeta_0,\upsilon\rangle_\phi )=
\exp(-\kappa_\phi^2 \upsilon^2 -i \zeta_0 \upsilon )\, . $$
Thus, the frame condition
for ${\mathcal G}(\tilde\Psi_0, \tilde\Lambda\oplus \tilde\Lambda^\vee)$ holds if and only if it holds
for both ${\mathcal G}(\Psi_0,\Lambda_{\alpha,J}\oplus \Lambda_{\alpha,J}^\vee)$ and ${\mathcal G}(\psi_0, L\oplus L^\vee)$
and similarly the frame condition for ${\mathcal G}(\tilde\Psi_0, \tilde\Lambda_{b,\gamma} \oplus \tilde\Lambda^\vee)$
holds if and only if it holds for both ${\mathcal G}(\Psi_0,\Lambda_{b,\alpha,J}\oplus \Lambda_{\alpha,J}^\vee)$ and
${\mathcal G}(\psi_0, L_\gamma \oplus L^\vee)$.
For the $1$-dimensional systems with a rapid decay function as window function, the frame condition
holds if and only if the lower Beurling density $D^-$ of the lattice is strictly greater than one.
For the lattice $L\oplus L^\vee$ this condition is not satisfied (see Proposition~\ref{noframes}),
so the Gabor system is not a frame, while for the lattice
$L_\gamma\oplus L^\vee$ it is satisfied since $\gamma<1$ (see Proposition~\ref{yesframes}).
Thus, in the case of the Gabor system ${\mathcal G}(\tilde\Psi_0, \tilde\Lambda_{b,\gamma}\oplus \tilde\Lambda^\vee)$
of \eqref{scaleGabor3DLambda} the question is reduced to the question of whether the $2$-dimensional
system ${\mathcal G}(\Psi_0,\Lambda_{b,\alpha,J}\oplus \Lambda_{\alpha,J}^\vee)$ is a frame. We know this system
is indeed a frame by Proposition~\ref{bFrameProp}.
\endproof
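The splitting of Gabor coefficients into a product of lower-dimensional coefficients, which underlies this proof, can be illustrated numerically; the test signals, windows, and lattice points in the following sketch are arbitrary choices:

```python
import numpy as np

# The coefficients of a product window against a product signal factor:
#   <f1⊗f2, M_ν T_s (g1⊗g2)> = <f1, M_{ν1} T_{s1} g1> · <f2, M_{ν2} T_{s2} g2>
t = np.linspace(-8.0, 8.0, 801); dt = t[1] - t[0]

def coeff(fvals, g, shift, freq):
    # <f, M_freq T_shift g> = ∫ f(t) e^{-2πi freq t} conj(g(t - shift)) dt
    return (fvals * np.exp(-2j*np.pi*freq*t) * np.conj(g(t - shift))).sum() * dt

f1 = np.exp(-t**2) * t                         # hypothetical test signals
f2 = np.exp(-0.5 * t**2)
g1 = lambda s: np.exp(-s**2)                   # Gaussian windows
g2 = lambda s: np.exp(-2.0 * s**2)

c1 = coeff(f1, g1, 0.7, 1.0)
c2 = coeff(f2, g2, 1.4, 2.0)

# the same coefficient computed directly on the 2-dimensional product
T, S = t[:, None], t[None, :]
F = (np.exp(-T**2) * T) * np.exp(-0.5 * S**2)
G = np.exp(2j*np.pi*(1.0*T + 2.0*S)) * g1(T - 0.7) * g2(S - 1.4)
c12 = (F * np.conj(G)).sum() * dt * dt

print(abs(c12 - c1 * c2))                      # ≈ 0 (the discrete sums factor exactly)
assert abs(c12 - c1 * c2) < 1e-8
```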
\medskip
\subsection{Gelfand triples and Gabor frames}\label{Gelfand3Sec}
We return here to discuss the case of the profiles considered in \cite{BaspSartiCitti},
with the term $\kappa_\phi=0$. As mentioned above, the function $\tilde\Psi_0 |_{\kappa_\phi=0}$
is not a window function for a Gabor system in the usual sense, as it is not of rapid decay
(and not even $L^2$) along the fibers of $\tilde{\mathcal E}$. However, we can still interpret it as
a tempered distribution on the fibers of $\tilde{\mathcal E}$. Thus, one can at least ask the question of
whether this window function defines Gabor frames in a distributional sense.
To formulate Gabor systems in such a setting, it is convenient to consider the formalism of Gelfand
triples (also known as rigged Hilbert spaces, \cite{GeVi}).
\smallskip
We consider here the same setting as in \cite{TTT}, \cite{Tsch} for distributional frames,
with Gelfand triples given by
$$ {\mathcal S}(\tilde{\mathcal E}_{(x,y,w,\theta,\phi)}) \hookrightarrow L^2(\tilde{\mathcal E}_{(x,y,w,\theta,\phi)}, d{\rm vol}_{(x,y,w,\theta,\phi)}) \hookrightarrow {\mathcal S}^\prime(\tilde{\mathcal E}_{(x,y,w,\theta,\phi)})\, , $$
where the Schwartz space ${\mathcal S}$ of rapidly decaying smooth test functions is densely and continuously embedded
in the $L^2$-Hilbert space, which is densely and continuously embedded in the dual space
${\mathcal S}^\prime$ of tempered distributions. The pairing $\langle f, g \rangle$ of distributions $f\in {\mathcal S}^\prime$
and test functions $g\in {\mathcal S}$ extends the Hilbert space inner product. We write the above
triples for simplicity of notation in the form
$$ {\mathcal S}(\tilde{\mathcal E})\hookrightarrow L^2(\tilde{\mathcal E}) \hookrightarrow {\mathcal S}^\prime(\tilde{\mathcal E})\, . $$
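The distinction encoded in the Gelfand triple can be illustrated numerically: a pure phase $e^{-i\zeta t}$ has divergent truncated $L^2$ norm, yet its pairing against a Gaussian test function converges (the value of $\zeta$ in the sketch below is arbitrary):

```python
import numpy as np

# phi_0(t) = e^{-i ζ t} is not square integrable, but it pairs with
# Schwartz test functions: the truncated L^2 norm grows linearly in T,
# while the truncated pairing with a Gaussian converges.
zeta = 1.5
exact_pair = np.sqrt(np.pi) * np.exp(-zeta**2 / 4)   # ∫ e^{-t^2} e^{iζt} dt

def truncated(T, npts=200001):
    t = np.linspace(-T, T, npts)
    dt = t[1] - t[0]
    phi0 = np.exp(-1j * zeta * t)
    l2 = (np.abs(phi0)**2).sum() * dt                # = 2T (diverges as T → ∞)
    pair = (np.exp(-t**2) * np.conj(phi0)).sum() * dt
    return l2, pair

l2_10, pair_10 = truncated(10.0)
l2_20, pair_20 = truncated(20.0)

print(l2_10, l2_20)                                  # ≈ 20, 40
assert l2_20 > 1.9 * l2_10
assert abs(pair_10 - exact_pair) < 1e-6
assert abs(pair_20 - exact_pair) < 1e-6
```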
\smallskip
\begin{defn}\label{DistrFramesDef}
A distributional Gabor system ${\mathcal G}(\tilde\Phi_0,\tilde\Lambda)$ on $\tilde{\mathcal E}$ is given by a
window generalized-function $\tilde\Phi_0\in {\mathcal S}'(\tilde{\mathcal E})$ and a bundle of lattices $\tilde\Lambda$ with
$$ {\mathcal G}(\tilde\Phi_0,\tilde\Lambda)=\{ \rho(\lambda) \tilde\Phi_0 \,|\, \lambda\in \tilde\Lambda \} \subset {\mathcal S}'(\tilde{\mathcal E})\, . $$
The distributional Gabor system ${\mathcal G}(\tilde\Phi_0,\tilde\Lambda)$ is a distributional frame for the
bundle $\tilde{\mathcal E}$ on ${\mathcal C}{\mathcal S}(M)$ if there are bounded smooth functions $C,C': {\mathcal C}{\mathcal S}(M)\to {\mathbb R}^*_+$
with strictly positive $\inf_{{\mathcal C}{\mathcal S}(M)} C$ and $\inf_{{\mathcal C}{\mathcal S}(M)} C'$, such that, for all $f\in {\mathcal S}(\tilde{\mathcal E})$
$$ C_{(x,y,w,\theta,\phi)} \, \| f \|^2_{L^2(\tilde{\mathcal E}_{(x,y,w,\theta,\phi)})} \leq
\sum_{\lambda\in \tilde\Lambda_{(x,y,w,\theta,\phi)}} |\langle \rho(\lambda)\tilde\Phi_0, f \rangle|^2
\leq C'_{(x,y,w,\theta,\phi)}\, \| f \|^2_{L^2(\tilde{\mathcal E}_{(x,y,w,\theta,\phi)})} \, . $$
\end{defn}
\smallskip
\begin{lem}\label{DistrGaborLem}
Let $\tilde\Phi_0=\tilde\Psi_0 |_{\kappa_\phi=0}$, with $\tilde\Psi_0$ as in \eqref{tildePhi0}.
The systems ${\mathcal G}(\tilde\Phi_0,\tilde\Lambda\oplus \tilde\Lambda^\vee)$ and
${\mathcal G}(\tilde\Phi_0,\tilde\Lambda_{b,\gamma}\oplus \tilde\Lambda^\vee)$ with the lattices
as in Definition~\ref{scale3Dlattdef}, are distributional Gabor systems that decompose
into a product of a $2$-dimensional ordinary Gabor system given by ${\mathcal G}(\Psi_0,\Lambda_{\alpha,J}\oplus
\Lambda_{\alpha,J}^\vee)$ or ${\mathcal G}(\Psi_0,\Lambda_{b,\alpha,J}\oplus
\Lambda_{\alpha,J}^\vee)$, respectively, and a $1$-dimensional distributional Gabor system
of the form ${\mathcal G}(\phi_0,L\oplus L^\vee)$ or ${\mathcal G}(\phi_0,L_\gamma\oplus L^\vee)$, respectively,
with window generalized-function $\phi_0(\upsilon)=\exp(-i\zeta_0 \upsilon) \in {\mathcal S}^\prime({\mathbb R})$.
The distributional Gabor system ${\mathcal G}(\tilde\Phi_0,\tilde\Lambda\oplus \tilde\Lambda^\vee)$ does
not satisfy the distributional Gabor frame condition. The distributional Gabor system
${\mathcal G}(\tilde\Phi_0,\tilde\Lambda_{b,\gamma}\oplus \tilde\Lambda^\vee)$ satisfies the distributional
Gabor frame condition if and only if the $1$-dimensional distributional Gabor system
${\mathcal G}(\phi_0,L_\gamma\oplus L^\vee)$ satisfies the distributional frame condition.
\end{lem}
\proof We view $\tilde\Phi_0$ as the distribution in ${\mathcal S}'(\tilde{\mathcal E})$ that acts on test
functions $f \in {\mathcal S}(\tilde{\mathcal E})$ as
$$ \langle f,\tilde\Phi_0 \rangle_{(x,y,w,\theta,\phi)} = \int_{\tilde{\mathcal E}_{(x,y,w,\theta,\phi)}} \overline{\tilde\Phi_0 |_{\tilde{\mathcal E}_{(x,y,w,\theta,\phi)}}(V,\upsilon)} \, f(V,\upsilon) \, d{\rm vol}_{(x,y,w,\theta,\phi)}(V,\upsilon)\, . $$
As in Proposition~\ref{prop3Dframes} we see that the distributions $\rho(\lambda)\tilde\Phi_0$ are
products of a function $\rho(\lambda')\Psi_0 \in {\mathcal S}(\tilde{\mathcal E})$ and a distribution $\rho(\lambda'')\phi_0$
in ${\mathcal S}^\prime(\tilde{\mathcal E})$, with $\lambda=(\lambda',\lambda'')$ for $\lambda\in \tilde\Lambda\oplus \tilde\Lambda^\vee$ and $\lambda'\in \Lambda_{\alpha,J}\oplus \Lambda_{\alpha,J}^\vee$ and
$\lambda''\in L\oplus L^\vee$ (and similarly for the scaled versions of the lattices).
Since these Gabor systems decouple, the distributional frame condition becomes equivalent
to the ordinary frame condition for the part that is an ordinary frame and the distributional frame
condition for the part that is a distributional frame. Thus,
the distributional Gabor systems ${\mathcal G}(\tilde\Phi_0,\tilde\Lambda\oplus \tilde\Lambda^\vee)$ and
${\mathcal G}(\tilde\Phi_0,\tilde\Lambda_{b,\gamma}\oplus \tilde\Lambda^\vee)$ are distributional
Gabor frames if and only if the respective $2$-dimensional ordinary Gabor systems are ordinary
frames and the respective $1$-dimensional distributional Gabor systems are distributional frames.
In the first case we know that the frame condition already fails at the level of the $2$-dimensional
ordinary Gabor system. In the second case the $2$-dimensional system satisfies the usual
frame condition by Proposition~\ref{bFrameProp}, hence the question reduces to whether the
$1$-dimensional distributional system ${\mathcal G}(\phi_0,L_\gamma\oplus L^\vee)$ satisfies the
distributional frame condition.
\endproof
\smallskip
The following statement shows that, even when interpreted in this distributional setting,
the Gabor system generated by the window function $\tilde\Phi_0$ as in \cite{BaspSartiCitti}
does not give rise to frames, hence it does not allow for good signal analysis.
\smallskip
\begin{prop}\label{noframe5D}
The distributional Gabor system ${\mathcal G}(\tilde\Phi_0,\tilde\Lambda_{b,\gamma}\oplus \tilde\Lambda^\vee)$
does not satisfy the distributional frame condition.
\end{prop}
\proof By Lemma~\ref{DistrGaborLem} we can equivalently focus on the question of whether the
one-dimensional distributional Gabor system ${\mathcal G}(\phi_0,\gamma{\mathbb Z}\oplus{\mathbb Z})$ satisfies the
distributional frame condition. Given a signal $f\in {\mathcal S}({\mathbb R})$, we have, for $\lambda=(\gamma n,m)$
and $\phi_0(t)=e^{-i \zeta_0 t}$,
$$ \langle f, \rho(\lambda)\phi_0 \rangle = \int_{\mathbb R} e^{-2\pi i m t} f(t) e^{i \zeta_0 (t-\gamma n)} \, dt = $$
$$ e^{-i \zeta_0 \gamma n }\int_{\mathbb R} e^{-2\pi i (m-\frac{\zeta_0}{2\pi}) t} f(t) \, dt =
e^{-i \zeta_0 \gamma n } \hat f (m-\frac{\zeta_0}{2\pi}) \, . $$
Note that when we take $| \langle f, \rho(\lambda)\phi_0 \rangle |^2$ the dependence on $n$
disappears entirely, so the sum over the lattice diverges whenever $\hat f (m-\frac{\zeta_0}{2\pi})\neq 0$,
and the upper frame bound fails.
\endproof
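The $n$-independence of the coefficient modulus, which is the source of the divergence, can be checked numerically against the closed form derived above; the parameter values in the following sketch are illustrative:

```python
import numpy as np

# |<f, ρ(λ) phi_0>| does not depend on the translation index n,
# so the sum over n of a fixed nonzero term diverges.
zeta0, gamma, m = 1.0, 0.7, 1                  # hypothetical parameter values
t = np.linspace(-8.0, 8.0, 4001); dt = t[1] - t[0]
f = np.exp(-t**2)                              # Gaussian test signal

def coeff(n):
    # <f, M_m T_{γn} phi_0> with phi_0(t) = e^{-i ζ0 t}
    window = np.exp(-1j * zeta0 * (t - gamma * n))
    return (f * np.exp(-2j*np.pi*m*t) * np.conj(window)).sum() * dt

vals = [abs(coeff(n)) for n in (0, 1, 5, 20)]

# closed form |f̂(m - ζ0/2π)| with f̂(ω) = √π e^{-π²ω²} for f(t) = e^{-t²}
target = np.sqrt(np.pi) * np.exp(-(np.pi * (m - zeta0/(2*np.pi)))**2)
print(vals, target)
assert max(abs(v - vals[0]) for v in vals) < 1e-12
assert abs(vals[0] - target) < 1e-6
```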
\smallskip
\begin{rem}\label{GroupHorVF} {\rm
The window function $\tilde\Psi_0 |_{\kappa_\phi=0}$ in \cite{BaspSartiCitti}
is chosen so that the Lie group and Lie algebra structure underlying receptive profiles
of this form (see \cite{Pet}, \cite{SaCiPe})
determines horizontal vector fields given by the basis $\{ X_\theta, R_\alpha, R_{\phi,\alpha,J}, X_w \}$
of \eqref{basisxiJ} of the contact hyperplanes $\tilde\xi_J$. However, if we replace this
choice of window with our window $\tilde\Psi_0$ where $\kappa_\phi\neq 0$, the same Lie group
of transformations acts on these types of profiles generating the same horizontal vector fields.
Note also that the original goal of \cite{BaspSartiCitti}, namely describing receptor profiles of neurons sensitive to
frequency and phase variables with the frequency-phase uncertainty minimized, is already achieved by the
Gabor system generated by our proposed window function $\tilde\Psi_0$, without the need to impose
$\kappa_\phi=0$. }\end{rem}
\bigskip
\subsection*{Acknowledgment} This work was supported by NSF grants DMS-1707882 and DMS-2104330,
NSERC grants RGPIN-2018-04937 and RGPAS-2018-522593, FQXi grants FQXi-RFP-1-804 and
FQXI-RFP-CPW-2014. The authors thank Alessandro Sarti,
Boris Khesin, and Yael Karshon for helpful discussions.
\section{Introduction}
Electrodynamics, in Maxwell's well-known form and its many possible generalisations, can be understood to a surprising extent without referring to a metric. The starting point is then to consider quantities that can be counted, without requiring the measurements
of areas, volumes or durations, for which one typically would need to resort to less elementary objects such as measuring sticks and clocks. The invariance of the countable elementary quantities, in particular electric charges, gives rise to a conserved current (3-form), and this in turn gives rise to an excitation (2-form), whereas the conservation of the magnetic flux gives rise to a field strength (2-form). The workings of the theory are then specified by the relation between the field strength and the excitation, called the constitutive relation. In Maxwell's vacuum electrodynamics the constitutive relation (Hodge dual) requires a metric, but more general possibilities can be considered, with or without invoking a metric, and this allows the unified
description of the vast variety of physical phenomenology of the electromagnetic interaction from linear and non-linear effects in media to axionic and other extensions of the Maxwell electrodynamics.
Such a {\it premetric} construction of the classical electromagnetic theory is exposed in great detail and clarity in the pedagogical textbook of Hehl and Obukhov \cite{hehl2003foundations}. The premetric program was originally put forward in 1922 by Kottler, who applied it both to
electromagnetism and to Newtonian gravity \cite{Hehl:2016glb}. More recently, relativistic theories of gravity have been considered in the context of the premetric program \cite{Itin:2016nxk,Hohmann:2017duq,Itin:2018lcb,Itin:2018dru,Puetzfeld:2019wwo,Obukhov:2019oar}, and this has quite naturally led to the metric teleparallel\footnote{A flat affine connection is called ``teleparallel''. We refer to a metric-compatible flat connection as ``metric teleparallel'',
and to a symmetric flat connection as ``symmetric teleparallel''. (In the literature the term teleparallel usually means the former special case, and it is a common statement that whereas in General Relativity the fundamental variable is the metric, in the teleparallel version of the theory the fundamental variable is the tetrad. However, such a statement is empty, if not misleading, since the Einstein-Hilbert action can just as well be rewritten solely in terms of the tetrad. A unifying framework for all the formulations is metric-affine theory, wherein the fundamental variables are the metric and the affine connection \cite{BeltranJimenez:2019tjy,Jimenez:2019ghw}. In General Relativity the appropriate connection of course is not teleparallel, but it is both metric and symmetric.) } reformulation of Einstein's theory \cite{Aldrovandi:2013wha,Maluf:2013gaa}. In the premetric approach to the theory of gravity, one begins with the conservation of energy and momenta since these are the sources of the gravitational field \cite{Hehl:2019csx}. Formally, the construction proceeds in a rather direct analogy to
the case of electromagnetism. The conservation of energy and momenta gives rise to currents and further, excitations. Corresponding forces are introduced, and now their constitutive relation to the excitations, even in the local and linear case, contains many more possibilities than
in the case of electromagnetism \cite{Itin:2018dru}, due to there being four conserved charges instead of one.
In the standard textbook descriptions of General Relativity, gravitation is often interpreted as geometry \cite{Misner:1974qy}, whereas
in the metric teleparallel formulation gravity is rather understood as a force \cite{Aldrovandi:2013wha}. These alternative formulations and interpretations provide
interesting insights into both Newton's and Einstein's theories \cite{KNOX2011264}; yet it may also be useful to recall that to the latter, the quintessence of gravitation was neither
force nor geometry, but inertia \cite{pittphilsci9825}. To express this idea precisely, in the modern terms of a gauge theory, one may contemplate the basic fact that a purely inertial interaction
is characterised by a vanishing gauge field strength. It is the gauge field strength that is both the gauge invariant measure of force, from the mathematical perspective of field theory, and the gauge invariant measure of geometry, from the perspective of principal bundles. However, the description has to be a bit more subtle, since the gravitational force (or, equivalently, geometry) can be eliminated only locally.
A resolution for the dilemma has been recently sought in the context of the so called {\it purified gravity} \cite{BeltranJimenez:2017tkd}. From the viewpoint of geometrical foundations, it was proposed that the fictitious forces could be described by a purely integrable spacetime affine geometry \cite{Koivisto:2018aip}. Such a geometry, which is devoid of both torsion and curvature, was recognised as that of the so called symmetric teleparallelism \cite{Nester:1998mp,Adak:2008gd} and it was discovered that the affine connection of the geometry is generated by a pure coordinate transformation, i.e. a (passive) translation. This was the starting point for a reformulation of Einstein's theory, called the Coincident General Relativity (CGR) \cite{BeltranJimenez:2017tkd}. This theory, which is determined uniquely by the integrability postulate and a symmetry principle, can indeed be understood as a canonical gauge theory of translations, and it can be simply described as the minimal covariantisation of Einstein's action \cite{Jimenez:2019yyx}. For a review of the three possible formulations of General Relativity, in terms of curvature, torsion and non-metricity, respectively, see Ref. \cite{BeltranJimenez:2019tjy}, for a unification of teleparallel geometries see the recent Ref. \cite{Jimenez:2019ghw}, and about more general modifications of General Relativity, see e.g. Ref. \cite{Heisenberg:2018vsk}.
In this paper, we attempt to understand purified gravity from the perspective of the premetric program. It will turn out that the electromagnetic analogy to purified gravity is Proca's massive
electromagnetism \cite{Ruegg:2003ps,Tu:2005ge,Goldhaber:2008xy} rather than Maxwell's massless theory. This clarifies why also the
gravitational field itself contributes to the conserved currents of energy and momenta (which is more difficult to explain in the metric teleparallel construction), and the dimension of the gravitational action is no longer anomalous. A main conclusion will be that in the consistent
formulation of the theory, the field strength is vanishing but there exists an excitation: this reflects the geometrical set-up wherein the affine spacetime connection is trivialisable, but the connection to which matter turns out to couple,
is the curved metric-compatible connection \cite{Koivisto:2018aip}. Actually, the more suitable analogy would be Stueckelberg's rather than Proca's massive electromagnetism. Stueckelberg's formulation is physically equivalent to Proca's, but it respects the original symmetry of the Maxwell theory
(which, though it comes at the price of introducing an extra scalar, can be indispensable e.g. for renormalisation \cite{Ruegg:2003ps}). Indeed, the constitutive relation of CGR is found to be uniquely specified as the one that
restores the translational symmetry of the theory. Thus, the graviton can be interpreted as the massless Goldstone boson of a spontaneous symmetry breaking, an idea which goes back at least to Isham, Salam and Strathdee \cite{Isham:1971dv}.
The structure of this article is seen from the Table of contents above. First we shall go through the premetric deduction of gravity theory using the standard language of tensors in Section \ref{tensors}. It can be helpful, in the spirit of \cite{BeltranJimenez:2018vdo}, to expose the basic foundation of the construction without obscuring its simplicity by excessive mathematical formalism. On the other hand, exterior algebra provides the natural expressions for conservation laws, and elegantly highlights the paramount role of the Poincar\'e's lemma in the premetric reasoning. Thus in Section \ref{forms} we also write down the premetric construction in the language
of differential forms. The reader familiar with this language might prefer to start from Section \ref{forms}, Section \ref{tensors} being redundant with it. In both discussions, we emphasise the analogy between gravity and electromagnetism by first reviewing the premetric perspective in the latter, slightly simpler, case (a comparison of these two cases and a dictionary between the two languages will then be given in Table \ref{table1}).
The local and linear constitutive relations are analysed in Section \ref{constitutions}, by first focusing on metrical relations, taking into account parity-violating ones, and then generalising to fully arbitrary constitutive relations which we decompose into their irreducible components. The interpretation of the metric as a Stueckelberg field is elaborated in Section \ref{symmetries}, filling in some technical details and briefly speculating on the ultraviolet limit of purified gravity. The properties of the theories with more general constitutive relations, that do not share the unique property of the CGR relation, are explored in Section \ref{properties}, with attention on the degrees of freedom, propagation of waves and the conservation laws. We conclude in Section \ref{conclusions} with a summary and discussions.
\section{Premetric construction in the tensor language}
\label{tensors}
Throughout this Section we use a covariant derivative that satisfies $[\nabla_\mu,\nabla_\nu]=0$. We ask the reader who is uncomfortable with this to kindly just consider $\nabla_\mu$ as a notation for $\partial_\mu$
until Section \ref{ptos}, wherein we shall justify the use of the covariant form of the equations in this Section.
\subsection{Excitation}
\subsubsection{Electromagnetism}
The conservation of electric charge entails the existence of an electric current, described by the vector density $J^\mu$. The charge conservation, in integral and in differential forms is
\begin{equation}
\int_{\partial \Omega_4}{{\rm d}}^3 x J^\mu n_\mu =0\,, \quad \text{and} \quad \nabla_\mu J^\mu = 0\,, \label{chargecons}
\end{equation}
respectively, $n_\mu$ being a unit normal to the 3-surface $\partial\Omega_4$. Locally, this is equivalent to the equation (that is a generalised version of the inhomogeneous Maxwell equation)
\begin{equation} \label{inhomomax}
\nabla_\mu H^{\mu\nu} = J^\nu\,,
\end{equation}
where the electromagnetic excitation $H^{\mu\nu}=H^{[\mu\nu]}$ is an antisymmetric tensor density. In case of a theory with self-interactions, such as in Proca's massive electromagnetic theory
or a non-Abelian gauge theory, we may write $J^\mu=T^\mu + t^\mu$, where $T^\mu$ are the external sources and $t^\mu$ are due to the electromagnetic field
itself\footnote{To give a concrete example, in the Proca theory we would have $t^\mu = m^2\sqrt{-g} g^{\mu\nu} A_\nu$, where $m$ is the mass of the electromagnetic field, $g^{\mu\nu}$ is the metric and $A_\nu$ is the electromagnetic potential we shall introduce in a moment. At this point of course we do not have a metric at hand.}. We have assumed an additive decomposition of the
sources. There is a redundancy in the excitation tensor density, in the sense that any $H^{\mu\nu} \rightarrow H^{\mu\nu} + \nabla_\lambda \varphi^{\lambda\mu\nu}$, where $\varphi^{\lambda\mu\nu}= \varphi^{[\lambda\mu\nu]}$, also satisfies the above equation (\ref{inhomomax}) while retaining the antisymmetry of $H^{\mu\nu}$. Without requiring total antisymmetry, the 4-component redundancy is increased to 24 components. This ambiguity of the excitation tensor is not usually taken into account in the premetric construction of electromagnetic theory, since it has no relevance to the dynamics.
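Conversely, any excitation of this form automatically encodes the conservation law: since the connection is flat, the derivatives commute, and the contraction of the symmetric derivative pair with the antisymmetric excitation vanishes identically,

```latex
\begin{equation}
\nabla_\nu J^\nu = \nabla_\nu \nabla_\mu H^{\mu\nu}
  = \nabla_{(\nu}\nabla_{\mu)} H^{[\mu\nu]} = 0\,.
\end{equation}
```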
\subsubsection{Gravity}
In gravity, we begin with the conservation of energy-momentum and denote the corresponding current as $J^{\mu}{}_{\nu}$.
The four conservation laws are, again in integral and in differential forms, expressed as
\begin{equation}
\int_{\partial \Omega_4}{{\rm d}}^3 x J^\mu{}_\nu n_\mu =0\,, \quad \text{and} \quad \nabla_\mu J^\mu{}_\nu = 0\,.
\end{equation}
The latter implies again the existence of an antisymmetric excitation tensor density $H^{\mu\nu}{}_\alpha = H^{[\mu\nu]}{}_\alpha$.
The redundancy in this tensor density is $H^{\mu\nu}{}_\alpha \rightarrow H^{\mu\nu}{}_\alpha + \nabla_\lambda \varphi^{\lambda\mu\nu}{}_\alpha$ where, if $\varphi^{\lambda\mu\nu}{}_\alpha = \varphi^{[\lambda\mu\nu]}{}_\alpha$ it has 16 independent components, and if not, 96 independent components.
We now write
\begin{equation} \label{gravity}
\nabla_\mu H^{\mu\nu}{}_\alpha = J^\nu{}_\alpha = T^\nu{}_\alpha + t^\nu{}_\alpha\,,
\end{equation}
taking into account that in addition to the energy-momentum of matter $T^\mu{}_\nu$, there can also occur inertial energy-momentum $t^\mu{}_\nu$.
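The component counts quoted above follow from elementary combinatorics of index symmetries in four dimensions. As a quick cross-check (our own illustration, not part of the original derivation; note that the pair $(\lambda,\mu)$ of $\varphi^{\lambda\mu\nu}$ must be antisymmetric for the shift to drop out of the double divergence $\nabla_\mu\nabla_\lambda\varphi^{\lambda\mu\nu}$):

```python
from math import comb

D = 4  # spacetime dimensions

# Electromagnetism: totally antisymmetric phi^{[lambda mu nu]}
# has C(4,3) components.
assert comb(D, 3) == 4

# Requiring antisymmetry only in the contracted pair, phi^{[lambda mu]nu}:
# an antisymmetric pair times a free index.
assert comb(D, 2) * D == 24

# Gravity: the extra lower index alpha multiplies each count by D.
assert comb(D, 3) * D == 16
assert comb(D, 2) * D * D == 96
```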
\subsection{Field strength}
\subsubsection{Electromagnetism}
The field strength $F_{\mu\nu}$ satisfies the integral and differential conservation equations
\begin{equation}
\int_{\partial \Omega_4}F_{\mu\nu}n^\nu =0\,, \quad \text{and} \quad \nabla_{[\alpha}F_{\mu\nu]} = 0\,,
\end{equation}
implying the conservation of the magnetic flux and the existence of the electromagnetic potential
$A_\mu$, such that $F_{\mu\nu} = 2\nabla_{[\mu}A_{\nu]}$. It is defined up to the total derivative $A_\mu \rightarrow A_\mu + \nabla_{\mu}\varphi$. For a detailed premetric investigation of the potential $A_\mu$, see \cite{Pfeifer:2016har}.
\subsubsection{Gravity}
In analogy to electromagnetism, we introduce the gravitational field strength $F^{\alpha\beta}{}_{\mu\nu}$ which ought to be conserved,
\begin{equation}
\int_{\partial \Omega_4}F^{\alpha\beta}{}_{\mu\nu}n^\nu =0\,, \quad \text{and} \quad \nabla_{[\rho}F^{\alpha\beta}{}_{\mu\nu]} = 0\,.
\end{equation}
This implies the existence of a gravitational potential $A^{\alpha\beta}{}_{\mu}$, such that $F^{\alpha\beta}{}_{\mu\nu} = 2\nabla_{[\mu}A^{\alpha\beta}{}_{\nu]}$ and
defined up to the derivative $A^{\alpha\beta}{}_\mu \rightarrow A^{\alpha\beta}{}_\mu + \nabla_{\mu}\varphi^{\alpha\beta}$. The defining peculiarity of gravitation is that it can
always be locally eliminated. In other words, its field strength should vanish, $F^{\alpha\beta}{}_{\mu\nu} = 0$. Therefore, we can always assume that $A^{\alpha\beta}{}_\mu = \nabla_{\mu}\varphi^{\alpha\beta}$.
One notes that we stipulated that the gravitational field strength comes with two indices, thus the transformation of $\varphi^{\mu\nu}$ has a priori 16 independent components.
Later, when imposing the constitutive relations, it will be evident that the theory actually involves only the 10 components that are symmetric with respect to the exchange of the two indices. A possible interpretation
is that these correspond to the 4+6 conserved quantities, the four-momentum and the angular momenta. One may also consider that the underlying symmetry is just the symmetry of the frame, GL(4), and by requiring Lorentz invariance (through the imposition of the symmetrised constitutive relations) we can then eliminate the 6 antisymmetric components of $\varphi^{\alpha\beta}$, leaving us with the 10 nonzero $\varphi^{\alpha\beta}=\varphi^{(\alpha\beta)}$.
We can already anticipate that it is possible to identify the gauge potential and the pure gauge transformation as $A^{\alpha\beta}{}_\mu = -Q_\mu{}^{\alpha\beta}$ and $\varphi^{\alpha\beta}=g^{\alpha\beta}$, respectively. The vanishing of the
field strength means teleparallelism, $\nabla_{[\mu}Q_{\nu]}{}^{\alpha\beta} = 0$. It is a geometric identity that
$\nabla_{[\mu}Q_{\nu]}{}^{\alpha\beta} = R^{(\alpha\beta)}{}_{\nu\mu} - \frac{1}{2}T^\lambda{}_{\mu\nu} Q_{\lambda}{}^{\alpha\beta}$, see \cite{BeltranJimenez:2018vdo} for notation and details. However, we should make clear that at this point we do not have a metric at hand. Also, we are not considering a GL(4) gauge theory, where
$R^{(\alpha\beta)}{}_{\nu\mu} \neq F^{\alpha\beta}{}_{\mu\nu}$ would have a different form comprising terms that are quadratic in $A^{\alpha\beta}{}_\mu$ and the gauge transformation (at a nonlinear order) would not be simply the shift by a derivative of $\varphi^{\alpha\beta}$. After arriving at the final form of the premetric theory, we will better clarify its relationship to the theory derived from the conventional gauge approach based on the GL(4) group. Namely, in Section \ref{clarity} we will assume the gauge connection
to be given in the GL(4) form, denoted by $\Gamma^\alpha{}_{\mu\nu}$ and consisting of the Levi-Civita, torsion and non-metricity parts when such a decomposition is possible, and will then show that the final form of the theory is the same as the one that follows from the present premetric construction.
\subsection{Force}
\subsubsection{Electromagnetism}
The Lorentz force is described by the four-vector density
\begin{equation} \label{lorentz}
f_\mu = F_{\mu\nu}J^\nu = F_{\mu\nu}\left( T^\nu + t^\nu\right)\,,
\end{equation}
which contributes to the non-conservation of the energy-momentum, as we will learn below.
For the analogous case of gravity, it will be important to realise that in massive electromagnetism there is an element of non-conservation even in the case of vanishing force. Namely, when
we go back to our very starting point (\ref{chargecons}), and recall that in the case of the Proca theory in Minkowski space we have $t^\mu=m^2 A^\mu$, the conservation law for the electric charge current $T^\mu$ becomes
\begin{equation}
\nabla_\mu T^\mu = -m^2\nabla_\mu A^\mu\,. \label{electrocons}
\end{equation}
Hence, the electric charge current is conserved only if the Lorentz condition $\nabla_\mu A^\mu =0$ is imposed\footnote{In the case of Proca theory with a curved metric,
the Lorentz condition is generalised to the metric-covariant form and the equation (\ref{electrocons}) becomes the metric-covariant conservation of the electric current tensor (not density).}.
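The derivation of (\ref{electrocons}) is a one-line consequence of the antisymmetry of the excitation: taking the divergence of (\ref{inhomomax}) with $J^\mu = T^\mu + m^2 A^\mu$, the left-hand side vanishes identically,

```latex
\begin{equation}
0 = \nabla_\nu \nabla_\mu H^{\mu\nu}
  = \nabla_\mu T^\mu + m^2 \nabla_\mu A^\mu\,,
\end{equation}
```

which is precisely (\ref{electrocons}).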
\subsubsection{Gravity}
\label{gforces}
In gravity, the quantity analogous to (\ref{lorentz}) is a rank (1,1) tensor density
\begin{equation}
f^{\mu}{}_{\nu} = F^{\mu\beta}{}_{\alpha\nu} \left( T^\alpha{}_\beta + t^\alpha{}_\beta\right) = 0\,, \quad \text{since} \quad F^{\mu\beta}{}_{\alpha\nu} = 0\,.
\end{equation}
The vanishing of the tensor density $f^\mu{}_\nu$ reflects the conservation of the total energy-momentum, to be defined next.
Before that, let us note that there nevertheless arises an effective force felt by the matter fields. Denoting this effective force by $\mathcal{F}_\nu$, we see that it is given as
\begin{equation}
\nabla_\mu T^\mu{}_\nu = -\nabla_\mu t^\mu{}_\nu \equiv \mathcal{F}_\nu\,.
\end{equation}
As will be clarified in the following, the presence of this effective force is due to the nonzero mass of the pure gauge connection. That makes it clear that the physical origin of $\mathcal{F}_\nu$
is inertia, though in the end its effects can also be discussed in terms of an effective force and illustrated in terms of geometry. As in the case of electromagnetism, compare (\ref{electrocons}),
the non-conservation is gauge-dependent. The canonical frame of purified gravity \cite{Jimenez:2019yyx} has been proposed to be the analogue of the Lorentz gauge in massive electromagnetism in that, in a well-defined
sense, it establishes the ``physical'' geometry.
\subsection{Energy-momentum current}
\subsubsection{Electromagnetism}
In electromagnetism, the energy-momentum current has the form
\begin{equation} \label{emt}
\prescript{\text{em}}{}{T}^\mu{}_\nu = H^{\mu\alpha}F_{\nu\alpha} - \frac{1}{4}\delta^\mu_\nu H^{\alpha\beta}F_{\alpha\beta} + P^\mu A_\nu - \frac{1}{2}\delta^\mu_\nu P^\alpha A_\alpha\,,
\end{equation}
where $P^\mu$ is determined by the interaction potential, and its presence generically breaks the U(1) invariance\footnote{An example is the Proca field for which $P^\mu = m^2\sqrt{-g} g^{\mu\nu}A_\nu$.}.
If $P^\mu=0$, the divergence of the above tensor density becomes
\begin{equation}
\nabla_\mu \prescript{\text{em}}{}{T}^\mu{}_\nu = f_\nu - H^{\alpha\beta}\left( \nabla_\beta F_{\nu\alpha} + \frac{1}{4}\nabla_\nu F_{\alpha\beta}\right) - \frac{1}{4}\left( \nabla_\nu H^{\alpha\beta}\right) F_{\alpha\beta}\,.
\end{equation}
It is easy to see that when $H^{\alpha\beta} = F^{\alpha\beta}$, we recover $\nabla_\mu \prescript{\text{em}}{}{T}^\mu{}_\nu = f_\nu$.
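To make the cancellation explicit, one uses the Bianchi identity $\nabla_{[\alpha}F_{\mu\nu]}=0$ to rearrange the first kinetic term,

```latex
\begin{equation}
F^{\alpha\beta}\nabla_\beta F_{\nu\alpha}
  = \frac{1}{2}F^{\alpha\beta}\left( \nabla_\beta F_{\nu\alpha}
      - \nabla_\alpha F_{\nu\beta} \right)
  = -\frac{1}{2}F^{\alpha\beta}\nabla_\nu F_{\alpha\beta}\,,
\end{equation}
```

so that the three kinetic terms combine with the weights $\frac{1}{2}-\frac{1}{4}-\frac{1}{4}=0$ and only the force term $f_\nu$ survives.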
\subsubsection{Gravity}
We are considering the purely inertial energy-momentum $t^\mu{}_\nu$ in (\ref{gravity}), since the $T^\mu{}_\nu$ shall be defined by all matter fields.
For example, $\prescript{\text{em}}{}{T}^\mu{}_\nu$ in (\ref{emt}) is a contribution to the $T^\mu{}_\nu$ in the presence of the electromagnetic field, and if there were charged fields contributing to the electromagnetic current
$T^\mu$, they would contribute to the gravitational current $T^\mu{}_\nu$ as well.
Since $F^{\alpha\beta}{}_{\mu\nu}=0$, the kinetic energy-momentum term in direct analogy to (\ref{emt}) is now trivial, but there may be a nontrivial potential energy-momentum.
We denote with $P^\alpha{}_{\mu\nu}$ the conjugate of the potential analogous to $P^\mu$ in electromagnetism. Then, the energy-momentum current analogous to (\ref{emt}) is
\begin{equation} \label{gemt}
t^\mu{}_\nu = P^\mu{}_{\alpha\beta}Q_{\nu}{}^{\alpha\beta} - \frac{1}{2}\delta^\mu_\nu P^\gamma{}_{\alpha\beta}Q_{\gamma}{}^{\alpha\beta}\,.
\end{equation}
According to Ref. \cite{Jimenez:2019yyx}, this tensor density describes the fictitious energy-momentum in a non-inertial frame, being sourced by a pure-gauge field $A^{\alpha\beta}{}_\mu = \nabla_\mu \varphi^{\alpha\beta} = -Q_\mu{}^{\alpha\beta}$.
The most direct equivalent of this in an electromagnetic theory would be the requirement of the vanishing of the current $t^\mu$ in a pure-gauge massive electromagnetism.
It is also trivial to see that the contribution to $\prescript{\text{em}}{}{T}^\mu{}_\nu$ due to a mass of the gauge field $A_\mu$ can be nonzero even in the pure-gauge case $A_\mu=\nabla_\mu\varphi$, but nevertheless it can always be eliminated by choosing the unitary gauge, $\varphi \rightarrow 0$. Whereas formally the equivalent of the Lorentz gauge in a non-trivial Proca theory would be $\nabla_\mu t^\mu{}_\nu = 0$, the definition of the
canonical frame of purified gravity is $t^\mu{}_\nu=0$.
\section{Constitutive relation}
\label{constitutions}
This far we have established the two fundamental equations of purified gravity, expressing the conservation of translational currents and the integrability postulate, respectively.
The quantities appearing in these two equations are related by the linking equations, whose possible forms we study in this Section. We begin by writing down the metric form of the
constitutive relation that is known to reproduce the equivalent of General Relativity, then consider arbitrary constitutive relations in terms of a metric, and finally analyse in detail
the generic local and linear constitutive relations.
\subsection{Electromagnetism}
\label{electroconst}
In order to have a predictive theory, one has to specify the kinetic and the potential excitations. In the case of a linear constitutive relation, the generic kinetic constitutive tensor density $\tilde{\chi}^{\mu\nu\alpha}=\tilde{\chi}^{[\mu\nu]\alpha}$ has
24 independent components and the generic potential constitutive tensor density $\tilde{\xi}^{\mu\alpha}$ has 16 components. The constitutive relations can be written as
\begin{equation} \label{const}
H^{\mu\nu} = \tilde{\chi}^{\mu\nu\alpha}A_\alpha\,, \quad P^\mu = \tilde{\xi}^{\mu\alpha}A_\alpha\,.
\end{equation}
In the case of the Proca theory, we have a metric $g_{\mu\nu}$ and its Levi-Civita covariant derivative $\mathcal{D}_\mu$ at hand, and can write the constitutive relations as $\tilde{\chi}^{\mu\nu\alpha} = 2\sqrt{-g}g^{\alpha[\nu}\mathcal{D}^{\mu]}$ and $\tilde{\xi}^{\mu\alpha}=m^2\sqrt{-g}g^{\mu\alpha}$.
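It is straightforward to verify that these operators reproduce the standard Proca excitations: since the Christoffel symbols cancel in the antisymmetrised derivative,

```latex
\begin{equation}
H^{\mu\nu} = 2\sqrt{-g}\, g^{\alpha[\nu}\mathcal{D}^{\mu]} A_\alpha
  = \sqrt{-g}\left( \mathcal{D}^\mu A^\nu - \mathcal{D}^\nu A^\mu \right)
  = \sqrt{-g}\, F^{\mu\nu}\,, \qquad
P^\mu = m^2\sqrt{-g}\, A^\mu\,.
\end{equation}
```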
If in addition to linearity, we require that the constitutive relation does not
involve derivative operators other than the field strength, the generic ansatz then reads
\begin{equation} \label{ansatz}
H^{\mu\nu} = {\chi}^{\mu\nu\alpha\beta}F_{\alpha\beta} + {\chi}^{\mu\nu\alpha}A_\alpha\,, \quad P^\mu = {\xi}^{\mu\alpha\beta}F_{\alpha\beta} + {\xi}^{\mu\alpha}A_\alpha\,,
\end{equation}
where $ {\chi}^{\mu\nu\alpha\beta}= {\chi}^{[\mu\nu]\alpha\beta}= {\chi}^{\mu\nu[\alpha\beta]}$ has 36 and ${\xi}^{\mu\alpha\beta}={\xi}^{\mu[\alpha\beta]}$ has 24 independent components.
In this formulation, the Proca theory is ${\chi}^{\mu\nu\alpha\beta}=\sqrt{-g}g^{\mu\alpha}g^{\nu\beta}$, ${\chi}^{\mu\nu\alpha}=0$, and
${\xi}^{\mu\alpha\beta}=0$, ${\xi}^{\mu\alpha}=m^2\sqrt{-g}g^{\mu\alpha}$. These constitutive relations have the exchange symmetry ${\chi}^{\mu\nu\alpha\beta} = {\chi}^{\alpha\beta\mu\nu}$,
${\xi}^{\mu\alpha}={\xi}^{\alpha\mu}$. If we consider constitutive relations built with a metric $g_{\mu\nu}$ and the Levi-Civita tensor density $\epsilon^{\alpha\beta\gamma\delta}$, it is impossible to have
nontrivial pieces ${\chi}^{\mu\nu\alpha}$ or ${\xi}^{\mu\alpha\beta}$ (as we shall soon see, the case is different with gravity, and in fact it is then crucial to take into account the piece corresponding to ${\chi}^{\mu\nu\alpha}$).
The general relation ${\chi}^{\mu\nu\alpha\beta}$ in (\ref{ansatz}) has been well studied, see \cite{hehl2003foundations,Rubilar:2007qm}, and its complete classification has been performed. In particular, the relation can be decomposed into the principal part (20 components), the skewon part (15 components) and the axion part (1 component). In a general theory, the propagating fields may feature, in addition to the familiar electromagnetic field, an axion, a dilaton and the more exotic skewon \cite{Hehl:2005xu,Ni:2014qfa}.
\subsection{Gravity}
In analogy to (\ref{const}), the gravitational constitutive relations are now specified by the 960 independent components of the kinetic constitutive tensor density
$\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^\beta= \chi^{[\mu\nu]}{}_{\alpha\rho\sigma}{}^\beta = \chi^{\mu\nu}{}_{\alpha(\rho\sigma)}{}^\beta$ and the 1600 independent components of the potential constitutive tensor density $\xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma}= \xi^\alpha{}_{(\mu\nu)}{}^\beta{}_{\rho\sigma}
= \xi^\alpha{}_{\mu\nu}{}^\beta{}_{(\rho\sigma)}$ as
\begin{equation} \label{gravconst}
H^{\mu\nu}{}_\alpha = \chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^\beta Q_\beta{}^{\rho\sigma}\,, \quad P^\alpha{}_{\mu\nu} = \xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma}Q_\beta{}^{\rho\sigma}\,.
\end{equation}
There are three symmetrisations we have imposed here in order to extract only the symmetric part of the gauge field $A^{\alpha\beta}{}_\mu$, such that we can
identify $\varphi^{(\alpha\beta)} = g^{\alpha\beta}$.
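The quoted component counts of the constitutive tensor densities in (\ref{gravconst}) follow directly from the imposed index symmetries; as a quick check (our own illustration):

```python
D = 4
sym = D * (D + 1) // 2      # symmetric pair (rho sigma): 10 components
antisym = D * (D - 1) // 2  # antisymmetric pair [mu nu]: 6 components

# chi^{[mu nu]}_{alpha (rho sigma)}^beta
assert antisym * D * sym * D == 960

# xi^alpha_{(mu nu)}^beta_{(rho sigma)}
assert D * sym * D * sym == 1600
```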
Since this $g^{\mu\nu}$ is the only tensor we have at hand in addition to the ones appearing in (\ref{gravconst}), the most economical way to proceed is to assume that the constitutive relation features only this tensor.
We emphasise that the object $\varphi^{\mu\nu}$ emerged in the construction of the theory as we noticed that the potential $A^{\mu\nu}{}_\alpha$ can always be reduced to $A_\alpha{}^{\mu\nu}=-\nabla_\alpha \varphi^{\mu\nu}$, so we have only changed the name of $\varphi^{(\mu\nu)}$ into $g^{\mu\nu}$ and of $\nabla_\alpha \varphi^{(\mu\nu)}$ into
$-Q_\alpha{}^{\mu\nu}$, and not added these tensors ad hoc.
Since our starting point was the conservation of energy and momentum, another natural assumption is that the theory should be invariant under translations, $g_{\mu\nu} \rightarrow g_{\mu\nu} + \mathcal{D}_{(\mu}X_{\nu)}$. It has been shown that these two requirements uniquely specify the theory, which in the gauge wherein the connection vanishes was called the Coincident General Relativity (CGR) \cite{BeltranJimenez:2017tkd}.
The constitutive relations for that theory are
\begin{subequations}
\label{cgrconst}
\begin{eqnarray}
\label{cgrconst1}
\prescript{}{\text{CGR}}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^\beta & = & \sqrt{-g}M^2 \left( g_{\alpha(\rho}\delta_{\sigma)}^{[\mu}g^{\nu]\beta} - g_{\rho\sigma}\delta_{\alpha}^{[\mu}g^{\nu]\beta}+\delta^{[\mu}_\alpha\delta^{\nu]}_{(\rho}\delta^\beta_{\sigma)}\right)\,, \\ \label{cgrconst2}
\prescript{}{\text{CGR}}\xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma} & = & - \frac{1}{4}\sqrt{-g} M^2\left( g^{\alpha\beta}g_{\mu(\rho}g_{\sigma)\nu} - 2\delta^\beta_{(\nu}g_{\mu)(\sigma}\delta^\alpha_{\rho)}
- g^{\alpha\beta}g_{\mu\nu}g_{\sigma\rho} + \delta^\alpha_{(\sigma}\delta^\beta_{\rho)}g_{\mu\nu} + \delta^\alpha_{(\mu}\delta^\beta_{\nu)}g_{\sigma\rho}\right)\,.
\end{eqnarray}
\end{subequations}
The mass scale $M$ gives the gravitational coupling, and is related to Newton's constant $G$ by $1/M^2=8\pi G$.
There is a remarkable relation
\begin{equation}
\text{in CGR}: \quad \nabla_\mu H^{\mu\nu}{}_\alpha = 2\nabla_\mu P^{\mu\nu}{}_\alpha\,.
\end{equation}
This relation is an identity that holds despite the different symmetries of the two tensor densities. Note that it implies that
\begin{equation}
\text{in CGR}: \quad \nabla_\mu\nabla_\nu P^{\mu\nu}{}_\alpha =0\,,
\end{equation}
which indeed can be derived as a Bianchi identity \cite{BeltranJimenez:2018vdo}.
Since in the equations of motion the tensor density $H^{\mu\nu}{}_\alpha$ only features in (\ref{gravity}), where it can be equivalently replaced
by (twice) the tensor density $P^{\mu\nu}{}_\alpha$, it appears that at the level of dynamics, we can identify the latter as both the kinetic and the potential excitation tensor density.
Defining $\tau^\mu{}_\nu = \nabla_\alpha H^{\alpha\mu}{}_\nu$, we can rewrite the identity (\ref{gravity}) as
\begin{equation}
\tau^\mu{}_\nu= T^\mu{}_\nu + t^\mu{}_\nu\,,
\end{equation}
which features explicitly the canonical decomposition \cite{Jimenez:2019yyx} of the field equation into the gravitational, matter, and inertial energy-momentum tensor densities.
There is an important subtlety, however: since $\nabla_\sigma \varphi^{[\sigma\mu]\nu}{}_\alpha = H^{\mu\nu}{}_\alpha - 2P^{\mu\nu}{}_\alpha \neq 0$, we should not employ the potential excitation to deduce the energy-momentum
of a gravitating system, but should use the kinetic excitation.
\subsection{General metric constitutive relation}
Let us then continue to consider more general possibilities than the special case of CGR. The generic metric relation (\ref{const}) is given by 9 parameters as
\begin{eqnarray}
\prescript{}{\text{EVEN}}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^\beta & = & \sqrt{-g}\left( b_1 g_{\alpha(\rho}\delta_{\sigma)}^{[\mu}g^{\nu]\beta} + b_2 g_{\rho\sigma}\delta_{\alpha}^{[\mu}g^{\nu]\beta} + b_3 \delta^{[\mu}_\alpha\delta^{\nu]}_{(\rho}\delta^\beta_{\sigma)} \right)\,, \label{eq:chinewer}\\
\prescript{}{\text{EVEN}}\xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma} & = & \sqrt{-g}\left( c_1 g^{\alpha\beta}g_{\mu(\rho}g_{\sigma)\nu} + c_2 \delta^\beta_{(\nu}g_{\mu)(\sigma}\delta^\alpha_{\rho)}
+ c_3 g^{\alpha\beta}g_{\mu\nu}g_{\sigma\rho} + c_4\delta^\alpha_{(\mu}g_{\nu)(\rho}\delta^\beta_{\sigma)} +
\frac{1}{2}\tilde{c}_5 \delta^\alpha_{(\sigma}\delta^\beta_{\rho)}g_{\mu\nu} + \frac{1}{2}\tilde{c}_6\delta^\alpha_{(\mu}\delta^\beta_{\nu)}g_{\sigma\rho}\right)\,. \quad \label{eq:xinewer}
\end{eqnarray}
For the two last pieces, it will be more convenient later to employ the different parameterisation
\begin{equation}
\frac{1}{2}\tilde{c}_5 \delta^\alpha_{(\sigma}\delta^\beta_{\rho)}g_{\mu\nu} + \frac{1}{2}\tilde{c}_6\delta^\alpha_{(\mu}\delta^\beta_{\nu)}g_{\sigma\rho} =
\frac{1}{2}c_5\left(\delta^\alpha_{(\sigma}\delta^\beta_{\rho)}g_{\mu\nu} + \delta^\alpha_{(\mu}\delta^\beta_{\nu)}g_{\sigma\rho}\right) + \frac{1}{2}c_6\left(\delta^\alpha_{(\sigma}\delta^\beta_{\rho)}g_{\mu\nu} - \delta^\alpha_{(\mu}\delta^\beta_{\nu)}g_{\sigma\rho}\right)\,.
\end{equation}
It can be seen that the $c_5$-term is reversible and $c_6$ is the coefficient of a skewon term.
As noticed by Vilson and R\"unkla \cite{Runkla:2018xrv}, the relation has the exchange symmetry $\xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma} = \xi^\beta{}_{\rho\sigma}{}^\alpha{}_{\mu\nu}$
when $\tilde{c}_6=\tilde{c}_5$, i.e. $c_6=0$. Only the components of the constitutive relation which have this symmetry enter into the Lagrangian, and having a Lagrangian formulation is a necessary condition for reversibility \cite{Itin:2018dru}.
The constitutive relation (\ref{eq:xinewer}) with $c_6=0$ describes the 5-parameter action of what was dubbed the Newer General Relativity \cite{BeltranJimenez:2017tkd} and has been studied
on many occasions \cite{Adak:2005cd,Adak:2008gd,BeltranJimenez:2018vdo,Runkla:2018xrv,Hohmann:2018wxu,Soudi:2018dhv}.
However, it is worth reiterating that the unique constitutive relation (\ref{cgrconst}) is dictated by $\nabla_\mu\nabla_\nu P^{\mu\nu}{}_\alpha =0$ and $\nabla_\mu H^{\mu\nu}{}_\alpha = 2\nabla_\mu P^{\mu\nu}{}_\alpha$, which reflects the translational invariance of the purified gravity theory \cite{BeltranJimenez:2018vdo}. Conroy analysed the case of a generic linear constitutive relation without the restriction to first derivative order or even the assumption of locality \cite{Conroy:2017yln}, which required the parameterisation by 9 independent functions (the argument of those functions being the d'Alembertian operator $g^{\mu\nu}\nabla_\mu\nabla_\nu$). Nonlinear constitutive relations have been applied in the context of $f(Q)$ cosmology\footnote{Such models may have also relevance at galactic scales \cite{Milgrom:2019rtd}.} \cite{BeltranJimenez:2017tkd,Harko:2018gxr,Lu:2019hra,Jimenez:2019ovq,Lazkoz:2019sjl,Xu:2019sbp}, and Dialektopoulos has classified the cosmological Noether symmetries of the most generic nonlinear first-derivative action \cite{Dialektopoulos:2019mtr}. The alternative possibilities that are uncovered in the premetric formalism could also be interesting to study in more detail.
In the case that one resorts, in addition to the metric, to the Levi-Civita tensor density $\epsilon^{\alpha\beta\gamma\delta}$, it is possible to consider parity-violating purified gravity theories \cite{Iosifidis:2018zwo,Conroy:2019ibo}.
Only one additional term can appear in the quadratic form, but four contractions can be formed in the kinetic constitutive relation. The parity-violating constitutive
relations are
\begin{eqnarray}
\prescript{}{\text{ODD}}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^\beta & = & b_4\epsilon^{\mu\nu}{}_{\alpha(\rho}\delta^\beta_{\sigma)} +
b_5\epsilon^{\mu\nu}{}_\alpha{}^\beta g_{\rho\sigma} + b_6\epsilon_\alpha{}^{\beta[\mu}{}_{(\rho}\delta^{\nu]}_{\sigma)} + b_7\epsilon^{\mu\nu\beta}{}_{(\rho}g_{\sigma)\alpha}\,, \label{eq:chiodd}\\
\prescript{}{\text{ODD}}\xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma} & = & c_7\left( \epsilon^{\alpha\beta}{}_{\mu(\rho}g_{\sigma)\nu} + \epsilon^{\alpha\beta}{}_{\nu(\rho}g_{\sigma)\mu}\right)\,. \label{eq:xiodd}
\end{eqnarray}
The excitation tensor densities implied by the most general metric constitutive relation are therefore, explicitly,
\begin{eqnarray}
H^{\mu\nu}{}_\alpha & = & -\sqrt{-g}\left( b_1 Q^{[\mu\nu]}{}_\alpha + b_2 Q^{[\mu}\delta^{\nu]}_\alpha + b_3 \tilde{Q}^{[\mu}\delta^{\nu]}_\alpha\right) + b_4\epsilon^{\mu\nu}{}_{\alpha\beta}\tilde{Q}^\beta
+ b_5\epsilon^{\mu\nu}{}_{\alpha\beta}{Q}^\beta + b_6\epsilon_{\alpha}{}^{\beta[\mu}{}_\rho Q_\beta{}^{\nu]\rho}\,, \\
P^\alpha{}_{\mu\nu} & = & \sqrt{-g}\left[ c_1 Q^\alpha{}_{\mu\nu} + c_2 Q_{(\mu}{}^\alpha{}_{\nu)} + c_3 Q^\alpha g_{\mu\nu} + c_4\delta^\alpha_{(\mu}\tilde{Q}_{\nu)}
+ \frac{1}{2}\left( c_5 \delta^\alpha_{(\mu}\tilde{Q}_{\nu)} + c_6\delta^\alpha_{(\mu}Q_{\nu)}\right) \right] - 2c_7\epsilon^{\alpha}{}_{\beta\rho(\mu}Q^{\beta}{}_{\nu)}{}^\rho\,.
\end{eqnarray}
However, the parity-violating constitutive relation cannot satisfy the requirement $\nabla_\alpha H^{\alpha\mu}{}_\nu = 2\nabla_\alpha P^{\alpha\mu}{}_\nu$. The CGR relations
(\ref{cgrconst}) are therefore the unique local and linear (non-derivative) constitutive law of a Lagrangian theory, and this excludes odd-parity interactions.
Finally, we should remark that though the above requirement ensures that the premetric equations can be derived from an action principle, such a principle may not be necessary for a consistent
physical theory. If this assumption is relaxed, one may consider the full
13-component set of theories described by the above constitutive relations $\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^\beta= \prescript{}{\text{ODD}}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^\beta + \prescript{}{\text{EVEN}}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^\beta$ and $\xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma}= \prescript{}{\text{ODD}}\xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma} + \prescript{}{\text{EVEN}}\xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma}$. However, except
in the case of CGR, these theories may not be compatible with the metric-covariant conservation of matter energy-momentum (though they, by construction, are compatible with the
conservation of the total energy-momentum of the matter and the metric field) unless additional constraints are imposed.
As we will demonstrate in Section \ref{properties}, devoted to the further study of the possible viability of such generalised class of purified gravity (which could be dubbed the
Premetric Newer General Relativity), this turns out to be the case. Now we instead continue with the investigation of the constitutive relations, proceeding to the most generic case wherein no metric (nor Levi-Civita tensor) need be assumed.
\subsection{Irreducible decomposition of $\xi$}
Since the constitutive tensor densities $\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$ and $\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ are rather cumbersome objects with a large number of components, it is helpful to decompose them into smaller parts. Here we make use of an irreducible decomposition based on Young diagrams, along the lines of a similar decomposition shown in~\cite{Itin:2018dru}. We begin with the potential constitutive tensor density $\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} = \xi^{\alpha}{}_{(\mu\nu)}{}^{\beta}{}_{\rho\sigma} = \xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{(\rho\sigma)}$. Its decomposition can be visualized in terms of Young diagrams as
\begin{equation}
\begin{split}
\yng(1) \otimes \yng(2) \otimes \yng(1) \otimes \yng(2) &= \left(\yng(1) \otimes \yng(1)\right) \otimes \left(\yng(2) \otimes \yng(2)\right)\\
&= \left(\yng(2) \oplus \yng(1,1)\right) \otimes \left(\yng(2,2) \oplus \yng(3,1) \oplus \yng(4)\right)\,,
\end{split}
\end{equation}
where the first bracket corresponds to the upper indices, while the second bracket corresponds to the lower indices. In 4 dimensions one finds that the total number of components decomposes as
\begin{equation}
1600 = 200 \oplus 120 \oplus 450 \oplus 270 \oplus 350 \oplus 210\,.
\end{equation}
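These dimensions follow from the hook content formula for GL(4) Young diagrams. As an illustrative cross-check (not part of the derivation in the text), the following Python sketch computes the dimension of each diagram and verifies that the six products of upper and lower factors reproduce the counts above:

```python
from fractions import Fraction

def gl_dim(partition, n=4):
    """Dimension of the GL(n) irrep labelled by a Young diagram,
    via the hook content formula: prod over cells of (n + j - i)/hook(i, j)."""
    dim = Fraction(1)
    for i, row in enumerate(partition):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in partition[i + 1:] if r > j)
            dim *= Fraction(n + j - i, arm + leg + 1)
    return int(dim)

# Upper indices: [2] and [1,1]; lower indices: [2,2], [3,1] and [4].
upper = [gl_dim([2]), gl_dim([1, 1])]                  # 10 and 6
lower = [gl_dim([2, 2]), gl_dim([3, 1]), gl_dim([4])]  # 20, 45 and 35
parts = sorted(u * l for u in upper for l in lower)
print(parts)       # [120, 200, 210, 270, 350, 450]
print(sum(parts))  # 1600
```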
By applying the Young projectors to $\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$ one finds that its irreducible parts are given by
\begin{subequations}\label{eq:xidecomp}
\begin{eqnarray}
\prescript{[1]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \xi^{(\alpha}{}_{\mu\nu}{}^{\beta)}{}_{\rho\sigma} - \prescript{[3]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} - \prescript{[5]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}\,,\\
\prescript{[2]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \xi^{[\alpha}{}_{\mu\nu}{}^{\beta]}{}_{\rho\sigma} - \prescript{[4]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} - \prescript{[6]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}\,,\\
\prescript{[3]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \frac{1}{2}\left(\xi^{(\alpha}{}_{\mu\nu}{}^{\beta)}{}_{\rho\sigma} - \xi^{(\alpha}{}_{\rho\sigma}{}^{\beta)}{}_{\mu\nu}\right)\,,\\
\prescript{[4]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \frac{1}{2}\left(\xi^{[\alpha}{}_{\mu\nu}{}^{\beta]}{}_{\rho\sigma} - \xi^{[\alpha}{}_{\rho\sigma}{}^{\beta]}{}_{\mu\nu}\right)\,,\\
\prescript{[5]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \xi^{(\alpha}{}_{(\mu\nu}{}^{\beta)}{}_{\rho\sigma)}\,,\\
\prescript{[6]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \xi^{[\alpha}{}_{(\mu\nu}{}^{\beta]}{}_{\rho\sigma)}\,.
\end{eqnarray}
\end{subequations}
The structure of this decomposition becomes clearer if we first decompose $\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$ into its reversible and irreversible parts,
\begin{subequations}
\begin{eqnarray}
\accentset{+}{\xi}^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \frac{1}{2}\left(\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} + \xi^{\beta}{}_{\rho\sigma}{}^{\alpha}{}_{\mu\nu}\right) = \accentset{+}{\xi}^{\beta}{}_{\rho\sigma}{}^{\alpha}{}_{\mu\nu}\,,\\
\accentset{-}{\xi}^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \frac{1}{2}\left(\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} - \xi^{\beta}{}_{\rho\sigma}{}^{\alpha}{}_{\mu\nu}\right) = -\accentset{-}{\xi}^{\beta}{}_{\rho\sigma}{}^{\alpha}{}_{\mu\nu}\,.
\end{eqnarray}
\end{subequations}
Note that only $\accentset{+}{\xi}^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$ contributes to the Lagrangian and preserves matter energy-momentum, while $\accentset{-}{\xi}^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$ mediates dissipative effects. We then further decompose these two parts by imposing the symmetry or antisymmetry of the upper two indices,
\begin{equation}
\accentset{\pm}{\xi}^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} = \accentset{\pm}{\xi}^{(\alpha}{}_{\mu\nu}{}^{\beta)}{}_{\rho\sigma} + \accentset{\pm}{\xi}^{[\alpha}{}_{\mu\nu}{}^{\beta]}{}_{\rho\sigma}\,.
\end{equation}
Carefully examining the decomposition~\eqref{eq:xidecomp} then shows that it can alternatively be written in the equivalent form
\begin{subequations}\label{eq:xidecompri}
\begin{eqnarray}
\prescript{[1]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \accentset{+}{\xi}^{(\alpha}{}_{\mu\nu}{}^{\beta)}{}_{\rho\sigma} - \accentset{+}{\xi}^{(\alpha}{}_{(\mu\nu}{}^{\beta)}{}_{\rho\sigma)}\,,\\
\prescript{[2]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \accentset{-}{\xi}^{[\alpha}{}_{\mu\nu}{}^{\beta]}{}_{\rho\sigma} - \accentset{-}{\xi}^{[\alpha}{}_{(\mu\nu}{}^{\beta]}{}_{\rho\sigma)}\,,\\
\prescript{[3]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \accentset{-}{\xi}^{(\alpha}{}_{\mu\nu}{}^{\beta)}{}_{\rho\sigma}\,,\\
\prescript{[4]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \accentset{+}{\xi}^{[\alpha}{}_{\mu\nu}{}^{\beta]}{}_{\rho\sigma}\,,\\
\prescript{[5]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \accentset{+}{\xi}^{(\alpha}{}_{(\mu\nu}{}^{\beta)}{}_{\rho\sigma)}\,,\\
\prescript{[6]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \accentset{-}{\xi}^{[\alpha}{}_{(\mu\nu}{}^{\beta]}{}_{\rho\sigma)}\,.
\end{eqnarray}
\end{subequations}
The reversible and irreversible parts thus decompose as
\begin{subequations}
\begin{eqnarray}
\accentset{+}{\xi}^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \prescript{[1]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} + \prescript{[4]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} + \prescript{[5]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}\,,\\
\accentset{-}{\xi}^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \prescript{[2]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} + \prescript{[3]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} + \prescript{[6]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}\,.
\end{eqnarray}
\end{subequations}
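These sums can also be checked numerically against the definitions~\eqref{eq:xidecomp}. The following Python sketch (a verification aid, not part of the derivation) builds a random $\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$ with the required pair symmetries, implements the projections of~\eqref{eq:xidecomp}, and confirms that the six parts reassemble the reversible and irreversible tensors exactly as stated:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Random xi with axes (alpha, mu, nu, beta, rho, sigma),
# symmetrised in (mu, nu) and in (rho, sigma) as required.
xi = rng.standard_normal((4,) * 6)
xi = 0.5 * (xi + xi.swapaxes(1, 2))
xi = 0.5 * (xi + xi.swapaxes(4, 5))

sym = 0.5 * (xi + xi.swapaxes(0, 3))  # symmetric in (alpha, beta)
alt = 0.5 * (xi - xi.swapaxes(0, 3))  # antisymmetric in (alpha, beta)
pair = lambda x: x.transpose(0, 4, 5, 3, 1, 2)  # exchange (mu nu) <-> (rho sigma)

def totsym(x, axes=(1, 2, 4, 5)):
    """Total symmetrisation over the four lower indices."""
    out = np.zeros_like(x)
    for p in itertools.permutations(range(len(axes))):
        order = list(range(x.ndim))
        for dst, src in zip(axes, p):
            order[dst] = axes[src]
        out += x.transpose(order)
    return out / 24

p5 = totsym(sym)              # [5]: fully symmetric part
p6 = totsym(alt)              # [6]
p3 = 0.5 * (sym - pair(sym))  # [3]: pair-antisymmetric part of sym
p4 = 0.5 * (alt - pair(alt))  # [4]
p1 = sym - p3 - p5            # [1]
p2 = alt - p4 - p6            # [2]

# Exchange of the blocks (alpha, mu, nu) <-> (beta, rho, sigma).
swap = xi.transpose(3, 4, 5, 0, 1, 2)
xi_rev, xi_irr = 0.5 * (xi + swap), 0.5 * (xi - swap)

assert np.allclose(p1 + p4 + p5, xi_rev)  # reversible sum
assert np.allclose(p2 + p3 + p6, xi_irr)  # irreversible sum
assert np.allclose(p1 + p2 + p3 + p4 + p5 + p6, xi)
```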
In Table \ref{nomenclature} we give names to these irreducible pieces, following a logic adapted from~\cite{Itin:2018dru}. All parts contribute to energy-momentum, but not all can be derived from a Lagrangian formulation.
\begin{center}
\begin{table}[h]
\begin{tabular}{|c| c| c| c| c|| c |}
Irreducible part & Components & Nomenclature & Lagrangian & Dispersion & Metric terms \\
\hline
$\prescript{[1]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$ & 200 & Principal-1 & Yes & - & $c_1-2c_3$, $c_2+c_4-2c_5$ \\
$\prescript{[2]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$ & 120 & Skewon-1 & No & - & none \\
$\prescript{[3]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$ & 450 & Skewon-2 & No & - & $c_6$ \\
$\prescript{[4]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$ & 270 & Axion-1 & Yes & - & $c_4-c_2$, $c_7$ \\
$\prescript{[5]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$ & 350 & Principal-2 & Yes & - & $c_1+c_3$, $c_2+c_4+c_5$ \\
$\prescript{[6]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$ & 210 & Axion-2 & No & - & none \\
\hline
$\prescript{[1]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ & 400 & Principal-A & (Yes) & Yes & $b_1-2b_2$, $2b_4-b_6$ \\
$\prescript{[2]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ & 400 & Principal-B & (Yes) & Yes & $b_1+b_2$ \\
$\prescript{[3]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ & 80 & Axion-A & (No) & No & $b_4+b_6$, $2b_5+b_7$ \\
$\prescript{[4]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ & 80 & Axion-B & (No) & No & $b_5-b_7$ \\
\hline
$\prescript{\{1\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ & 336 & Odd Axion & (No) & (No) & $b_5-b_7$ \\
$\prescript{\{2\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ & 240 & Odd Principal-1 & (No) & (Yes) & $2b_5+b_7-b_4$, $2b_5+b_7$ \\
$\prescript{\{3\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ & 2 $\cdot$ 144 & Even Principal-1 & (Yes) & (Yes) & $b_1$, $b_2$ \\
$\prescript{\{4\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ & 80 & Odd Principal-2 & (No) & (Yes) & $b_4-2b_5-2b_6-b_7$ \\
$\prescript{\{5\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ & 16 & Even Principal-2 & (Yes) & (Yes) & $b_1-2b_2-2b_3$ \\
\hline
\end{tabular}
\caption{Nomenclature for the irreducible parts of the constitutive relations. In the last column we indicate which combinations of the metric terms contribute to each of the irreducible parts.
Elsewhere, parentheses indicate that we have a definite answer only in a metrical theory.
As will be clarified later, the dispersion relation of gravitational waves depends at the linear order only on the constitutive relation $\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$,
and not all its irreducible parts contribute.
\label{nomenclature}}
\end{table}
\end{center}
\subsection{Irreducible decomposition of $\chi$}
We then continue with the kinetic constitutive tensor density $\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^\beta = \chi^{[\mu\nu]}{}_{\alpha\rho\sigma}{}^\beta = \chi^{\mu\nu}{}_{\alpha(\rho\sigma)}{}^\beta$. In terms of Young diagrams the decomposition is given by
\begin{equation}
\begin{split}
\yng(1,1) \otimes \yng(1) \otimes \yng(2) \otimes \yng(1) &= \left(\yng(1,1) \otimes \yng(1)\right) \otimes \left(\yng(1) \otimes \yng(2)\right)\\
&= \left(\yng(2,1) \oplus \yng(1,1,1)\right) \otimes \left(\yng(2,1) \oplus \yng(3)\right)\,,
\end{split}
\end{equation}
where again the first and second bracket correspond to upper and lower indices, respectively. One then finds that the total number of independent components splits into
\begin{equation}
960 = 400 \oplus 400 \oplus 80 \oplus 80\,.
\end{equation}
The irreducible decomposition of $\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ is given by
\begin{subequations}\label{eq:chidecomp}
\begin{eqnarray}
\prescript{[1]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} & = & \chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} - \prescript{[2]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} - \prescript{[3]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} - \prescript{[4]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}\,,\\
\prescript{[2]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} & = & \chi^{\mu\nu}{}_{(\alpha\rho\sigma)}{}^{\beta} - \prescript{[4]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}\,,\\
\prescript{[3]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} & = & \chi^{[\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta]} - \prescript{[4]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}\,,\\
\prescript{[4]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} & = & \chi^{[\mu\nu}{}_{(\alpha\rho\sigma)}{}^{\beta]}\,.
\end{eqnarray}
\end{subequations}
An alternative decomposition can be obtained by lowering the first pair of indices by using the Levi-Civita symbol, and defining
\begin{equation}
\tilde{\chi}_{\mu\nu\alpha\rho\sigma}{}^{\beta} = \frac{1}{2}\epsilon_{\mu\nu\tau\omega}\chi^{\tau\omega}{}_{\alpha\rho\sigma}{}^{\beta}\,.
\end{equation}
We can then perform a decomposition in the lower indices, which is expressed in Young diagrams as
\begin{equation}
\yng(1,1) \otimes \yng(1) \otimes \yng(2) = \yng(4,1) \oplus \yng(3,2) \oplus 2 \cdot \yng(3,1,1) \oplus \yng(2,2,1) \oplus \yng(2,1,1,1)\,.
\end{equation}
Taking into account also the upper index, which we omitted in the decomposition above, the number of independent components splits as
\begin{equation}
960 = 336 \oplus 240 \oplus 2 \cdot 144 \oplus 80 \oplus 16\,.
\end{equation}
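Both component counts for $\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ can likewise be reproduced with the hook content formula for GL(4) Young diagrams. A short illustrative Python check (the diagram labels follow the decompositions above):

```python
from fractions import Fraction

def gl_dim(partition, n=4):
    """Dimension of the GL(n) irrep of a Young diagram (hook content formula)."""
    dim = Fraction(1)
    for i, row in enumerate(partition):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in partition[i + 1:] if r > j)
            dim *= Fraction(n + j - i, arm + leg + 1)
    return int(dim)

# First decomposition: upper [2,1] and [1,1,1]; lower [2,1] and [3].
upper = [gl_dim([2, 1]), gl_dim([1, 1, 1])]  # 20 and 4
lower = [gl_dim([2, 1]), gl_dim([3])]        # 20 and 20
first = [u * l for u in upper for l in lower]
assert sorted(first) == [80, 80, 400, 400] and sum(first) == 960

# Second decomposition: five lower-index diagrams (one doubled),
# each multiplied by 4 for the free upper index.
diagrams = [[4, 1], [3, 2], [3, 1, 1], [3, 1, 1], [2, 2, 1], [2, 1, 1, 1]]
second = [4 * gl_dim(d) for d in diagrams]
assert second == [336, 240, 144, 144, 80, 16] and sum(second) == 960
```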
Particular attention should be paid to the third diagram, which appears twice in the decomposition. This indicates that the irreducible tensor decomposition, seen as a decomposition of a tensor product of representations of \(\mathrm{GL}(4)\) into irreducible subrepresentations, contains two copies of the same irreducible representation represented by this diagram. However, in contrast to the remaining representations, which appear only once in the decomposition, there is no \emph{canonical} choice of the two representation spaces (and hence projectors onto particular tensor components); only their direct sum is canonically determined. Thus, the decomposition yields 5 terms, which we label \(\prescript{\{I\}}{}{\tilde{\chi}}_{\mu\nu\alpha\rho\sigma}{}^{\beta}, I = 1, \ldots, 5\). Keeping in mind that these are still antisymmetric in the first two indices, we may raise these indices again, hence defining
\begin{equation}
\prescript{\{I\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} = -\frac{1}{2}\epsilon^{\mu\nu\tau\omega}\prescript{\{I\}}{}{\tilde{\chi}}_{\tau\omega\alpha\rho\sigma}{}^{\beta}\,.
\end{equation}
These terms are then given by
\begin{subequations}\label{eq:chidecomp2}
\begin{align}
\prescript{\{1\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} &= \frac{1}{5}\left(\chi^{\mu\nu}{}_{(\alpha\rho\sigma)}{}^{\beta} + 2\delta_{(\alpha}^{[\mu}\chi^{\nu]\gamma}{}_{|\gamma|\rho\sigma)}{}^{\beta} + 4\delta_{(\alpha}^{[\mu}\chi^{\nu]\gamma}{}_{\rho\sigma)\gamma}{}^{\beta}\right)\,,\\
\prescript{\{2\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} &= \chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} - \chi^{\mu\nu}{}_{(\alpha\rho\sigma)}{}^{\beta} - \frac{1}{2}\chi^{\gamma\delta}{}_{\gamma\delta(\rho}{}^{\beta}\delta_{\sigma)}^{[\mu}\delta_{\alpha}^{\nu]} + \frac{2}{3}\delta_{\alpha}{}^{[\mu}\chi^{\nu]\gamma}{}_{\gamma\rho\sigma}{}^{\beta}\nonumber\\
&\phantom{=}+ \frac{5}{6}\delta_{(\rho}^{[\mu}\chi^{\nu]\gamma}{}_{|\alpha\gamma|\sigma)}{}^{\beta} - \frac{1}{6}\delta_{(\rho}^{[\mu}\chi^{\nu]\gamma}{}_{\sigma)\alpha\gamma}{}^{\beta} - \frac{2}{3}\delta_{(\rho}^{[\mu}\chi^{\nu]\gamma}{}_{|\gamma\alpha|\sigma)}{}^{\beta} - \frac{2}{3}\delta_{\alpha}{}^{[\mu}\chi^{\nu]\gamma}{}_{(\rho\sigma)\gamma}{}^{\beta}\,,\\
\prescript{\{3\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} &= \frac{1}{5}\Big(3\chi^{\gamma\delta}{}_{\gamma\delta(\rho}{}^{\beta}\delta_{\sigma)}^{[\mu}\delta_{\alpha}^{\nu]} - 3\delta_{(\rho}^{[\mu}\chi^{\nu]\gamma}{}_{\sigma)\alpha\gamma}{}^{\beta} - 3\delta_{(\rho}^{[\mu}\chi^{\nu]\gamma}{}_{|\alpha\gamma|\sigma)}{}^{\beta}\nonumber\\
&\phantom{=}+ 2\delta_{(\rho}^{[\mu}\chi^{\nu]\gamma}{}_{|\gamma\alpha|\sigma)}{}^{\beta} + 2\delta_{\alpha}{}^{[\mu}\chi^{\nu]\gamma}{}_{(\rho\sigma)\gamma}{}^{\beta} - 4\delta_{\alpha}{}^{[\mu}\chi^{\nu]\gamma}{}_{\gamma\rho\sigma}{}^{\beta}\Big)\,,\\
\prescript{\{4\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} &= \frac{1}{2}\delta_{(\rho}^{[\mu}\chi^{\nu]\gamma}{}_{\sigma)\gamma\alpha}{}^{\beta} - \frac{1}{2}\delta_{(\rho}^{[\mu}\chi^{\nu]\gamma}{}_{|\alpha\gamma|\sigma)}{}^{\beta} + \frac{1}{6}\chi^{\gamma\delta}{}_{\gamma\delta(\rho}{}^{\beta}\delta_{\sigma)}^{[\mu}\delta_{\alpha}^{\nu]}\,,\\
\prescript{\{5\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} &= -\frac{4}{15}\chi^{\gamma\delta}{}_{\gamma\delta(\rho}{}^{\beta}\delta_{\sigma)}^{[\mu}\delta_{\alpha}^{\nu]}\,.
\end{align}
\end{subequations}
Some properties of these components are summarised in Table \ref{nomenclature}.
\subsection{Irreducible decomposition of metric constitutive law}
We now apply the decompositions shown above to the metric constitutive relations. The potential constitutive tensor density~(\ref{eq:xinewer},\ref{eq:xiodd}) decomposes into the parts
(in this subsection, we shall absorb the scale $M^2$ into the coefficients $c_i$ and $b_i$, which then become dimensionful)
\begin{subequations}
\begin{eqnarray}
\prescript{[1]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \sqrt{-g}\left[\frac{c_1 - 2c_3}{3}g^{\alpha\beta}\left( g_{\mu(\rho}g_{\sigma)\nu} - g_{\mu\nu}g_{\rho\sigma}\right) + \frac{c_2 + c_4 - 2c_5}{6}\left( 2\delta_{(\mu}^{(\alpha}g_{\nu)(\rho}\delta_{\sigma)}^{\beta)} - g_{\mu\nu}\delta_{(\rho}^{\alpha}\delta_{\sigma)}^{\beta} - g_{\rho\sigma}\delta_{(\mu}^{\alpha}\delta_{\nu)}^{\beta}\right)\right]\,,\\
\prescript{[2]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & 0\,,\\
\prescript{[3]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \sqrt{-g}\frac{c_6}{2}\left(\delta^\alpha_{(\sigma}\delta^\beta_{\rho)}g_{\mu\nu} - \delta^\alpha_{(\mu}\delta^\beta_{\nu)}g_{\sigma\rho}\right)\,,\\
\prescript{[4]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \sqrt{-g}(c_4 - c_2)\delta_{(\mu}^{[\alpha}g_{\nu)(\rho}\delta_{\sigma)}^{\beta]} + c_7\left( \epsilon^{\alpha\beta}{}_{\mu(\rho}g_{\sigma)\nu} + \epsilon^{\alpha\beta}{}_{\nu(\rho}g_{\sigma)\mu}\right)\,,\\
\prescript{[5]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & \sqrt{-g}\left[(c_1 + c_3)g^{\alpha\beta}g_{(\mu\nu}g_{\rho\sigma)} + (c_2 + c_4 + c_5)g_{(\mu\nu}\delta_{\rho}^{\alpha}\delta_{\sigma)}^{\beta}\right]\,,\\
\prescript{[6]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma} & = & 0\,.
\end{eqnarray}
\end{subequations}
We find that the only irreversible part is the skewon $\prescript{[3]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$, which is non-vanishing only in the case $c_6 \neq 0$. We also find that the parity-violating term~(\ref{eq:xiodd}) contributes only to the part $\prescript{[4]}{}\xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}$. For the kinetic constitutive tensor density~(\ref{eq:chinewer},\ref{eq:chiodd}) we have the irreducible parts
\begin{subequations}
\begin{eqnarray}
\prescript{[1]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} & = & \sqrt{-g}\left[\frac{b_1 - 2b_2}{3}\left( g_{\alpha(\rho}\delta_{\sigma)}^{[\mu}g^{\nu]\beta} - g_{\rho\sigma}\delta_{\alpha}^{[\mu}g^{\nu]\beta}\right) + b_3\delta_{\alpha}^{[\mu}\delta_{(\rho}^{\nu]}\delta_{\sigma)}^{\beta}\right] + \frac{2b_4 - b_6}{3}\left(\epsilon_{\alpha}{}^{\mu\nu}{}_{(\rho}\delta_{\sigma)}^{\beta} - \epsilon_{\alpha}{}^{\beta[\mu}{}_{(\rho}\delta_{\sigma)}^{\nu]}\right)\,,\\
\prescript{[2]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} & = & \sqrt{-g}(b_1 + b_2)g_{(\rho\sigma}\delta_{\alpha)}^{[\mu}g^{\nu]\beta}\,,\\
\prescript{[3]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} & = & (b_4 + b_6)\epsilon_{\alpha(\rho}{}^{[\mu\nu}\delta_{\sigma)}^{\beta]} + \frac{2b_5 + b_7}{3}\left( g_{\rho\sigma}\epsilon_{\alpha}{}^{\beta\mu\nu} - g_{\alpha(\rho}\epsilon_{\sigma)}{}^{\beta\mu\nu}\right)\,,\\
\prescript{[4]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} & = & (b_5 - b_7)g_{(\rho\sigma}\epsilon_{\alpha)}{}^{\beta\mu\nu}\,.
\end{eqnarray}
\end{subequations}
Alternatively, we may use the decomposition~(\ref{eq:chidecomp2}) and find
\begin{subequations}
\begin{align}
\prescript{\{1\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} &= (b_5 - b_7)g_{(\rho\sigma}\epsilon_{\alpha)}{}^{\beta\mu\nu}\,,\\
\prescript{\{2\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} &= \frac{2b_5 + b_7 - b_4}{2}\delta_{(\rho}^{[\mu}\epsilon^{\nu]}{}_{\sigma)\alpha}{}^{\beta} + b_4\epsilon^{\mu\nu}{}_{\alpha(\rho}\delta_{\sigma)}^{\beta} + \frac{2b_5 + b_7}{3}\left(g_{\rho\sigma}\epsilon^{\mu\nu}{}_{\alpha}{}^{\beta} - g_{\alpha(\rho}\epsilon^{\mu\nu}{}_{\sigma)}{}^{\beta}\right)\,,\\
\prescript{\{3\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} &= \sqrt{-g}\left(\frac{b_1 - 2b_2}{5}\delta_{\alpha}^{[\mu}\delta_{(\rho}^{\nu]}\delta_{\sigma)}^{\beta} + b_1g_{\alpha(\rho}\delta_{\sigma)}^{[\mu}g^{\nu]\beta} + b_2g_{\rho\sigma}\delta_{\alpha}^{[\mu}g^{\nu]\beta}\right)\,,\\
\prescript{\{4\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} &= \frac{b_4 - 2b_5 - 2b_6 - b_7}{2}\delta_{(\rho}^{[\mu}\epsilon^{\nu]}{}_{\sigma)\alpha}{}^{\beta}\,,\\
\prescript{\{5\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta} &= -\sqrt{-g}\frac{b_1 - 2b_2 - 5b_3}{5}\delta_{\alpha}^{[\mu}\delta_{(\rho}^{\nu]}\delta_{\sigma)}^{\beta}\,.
\end{align}
\end{subequations}
Following this decomposition, we find that the parity-preserving terms contribute only to $\prescript{\{3\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ and $\prescript{\{5\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$, while the parity-violating terms contribute only to $\prescript{\{1\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$, $\prescript{\{2\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$ and $\prescript{\{4\}}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}$.
We shall return in Section \ref{conclusions} to summarise in Figure \ref{fig2} the physical assumptions which lead from the 5632-component general constitutive relation to the unique theory specified by 1 free component.
\section{Premetric construction in the language of differential forms}
\label{forms}
In this Section we revisit the steps of Section \ref{tensors} in a different formalism. A dictionary between the two languages will be given in Table \ref{table1}.
\subsection{Frame}
Consider a 4-dimensional differential manifold endowed with a coframe $\mathrm{e}^a$. Its exterior products generate the bases
\begin{equation} \label{bases}
\mathrm{e}^a\,, \quad \mathrm{e}^{ab} = \mathrm{e}^a\wedge\mathrm{e}^b\,, \quad \mathrm{e}^{abc} = \mathrm{e}^a\wedge\mathrm{e}^b\wedge\mathrm{e}^c\,, \quad
\mathrm{e}^{abcd} = \mathrm{e}^a\wedge\mathrm{e}^b\wedge\mathrm{e}^c\wedge\mathrm{e}^d\,,
\end{equation}
of the spaces of untwisted 1-forms, 2-forms, 3-forms and 4-forms, respectively. In terms of the Levi-Civita permutation symbol $\varepsilon_{abcd}$ and a section $s$ of the orientation line bundle, we can introduce the twisted scalar-valued volume form
\begin{equation}
\text{vol} = \frac{1}{4!}\varepsilon_{abcd}\mathrm{e}^{abcd}\otimes s\,.
\end{equation}
We note that we can use the interior product (to be denoted here with a ``dot'' instead of the perhaps more common ``hook'') without invoking a spacetime metric, it being in basic terms just a summation rather than a metric contraction.
Thus we can introduce the frame field $\text{\textschwa}_a$ as the inverse of the coframe,
\begin{equation}
\text{\textschwa}_a\cdot\mathrm{e}^b = \mathrm{e}^b\cdot\text{\textschwa}_a = \delta^b_a\,,
\end{equation}
and this also allows us to introduce the basis forms for the spaces of twisted 0-forms, 1-forms, 2-forms, 3-forms and 4-forms as
\begin{equation} \label{tbases}
\epsilon_{abcd} = \text{\textschwa}_d\cdot\epsilon_{abc}\,, \quad \epsilon_{abc} = \text{\textschwa}_c\cdot\epsilon_{ab}\,, \quad
\epsilon_{ab} = \text{\textschwa}_b\cdot\epsilon_{a}\,, \quad \epsilon_a = \text{\textschwa}_a\cdot \text{vol}\,, \quad \text{vol}\,,
\end{equation}
respectively. One may check that $\mathrm{e}^a\wedge\epsilon_b=\delta^a_b\text{vol}$. Under a GL(4) transformation $\Lambda_a{}^b$ (with the inverse $\Lambda^a{}_b$ and the determinant $\Lambda$), we have the following transformation laws:
\begin{equation} \label{trans}
\mathrm{e}^a \rightarrow \Lambda_b{}^a \mathrm{e}^b\,, \quad \text{\textschwa}_a \rightarrow \Lambda^b{}_a\text{\textschwa}_b\,, \quad \text{vol} \rightarrow |\Lambda | \text{vol}\,,
\end{equation}
and thus the bases (\ref{bases}) are tensors whilst the twisted bases (\ref{tbases}) are tensor densities.
\subsection{Excitation}
\subsubsection{Electromagnetism}
The conservation of the electric charge entails the existence of an electric current $\boldsymbol{\mathrm{J}}$. It is described as a twisted 3-form,
\begin{equation}
\boldsymbol{\mathrm{J}} = J^a \epsilon_a\,.
\end{equation}
Under the transformation (\ref{trans}), $J^a \rightarrow \Lambda^{-1}\Lambda^a{}_b J^b$, and thus $\boldsymbol{\mathrm{J}} \rightarrow \pm \boldsymbol{\mathrm{J}}$, where the sign is the
sign of $\Lambda$. Charge conservation, in its integral and differential forms, reads
\begin{equation}
\int_{\partial \Omega_4}\boldsymbol{\mathrm{J}} =0\,, \quad \text{and} \quad {{\rm d}} \boldsymbol{\mathrm{J}} = 0\,,
\end{equation}
respectively. Locally, the latter is equivalent to the inhomogeneous Maxwell equation
\begin{equation}
{{\rm d}} \boldsymbol{\mathrm{H}} = \boldsymbol{\mathrm{J}}\,,
\end{equation}
implying the existence of the electromagnetic excitation $\boldsymbol{\mathrm{H}}$, which is a twisted 2-form
\begin{equation}
\boldsymbol{\mathrm{H}} = \frac{1}{2}H_{ab}\mathrm{e}^{ab} = \frac{1}{2}\tilde{H}^{ab}\epsilon_{ab}\,, \quad \text{where} \quad \tilde{H}^{ab}=\frac{1}{2}\epsilon^{abcd}H_{cd}\,.
\end{equation}
Since $H_{ab}$ is a twisted covariant tensor, $\tilde{H}^{ab}$ is an untwisted contravariant tensor density. To take into account that, in addition to the
matter sources $\mathbf{T}$, massive electromagnetism also features a self-interaction source $\mathbf{t}$, one performs the decomposition $\boldsymbol{\mathrm{J}} = \mathbf{T} + \mathbf{t}$.
In the case of such self-interactions, one also needs to consider a twisted 3-form $\boldsymbol{\mathrm{P}}$, given as
\begin{equation}
\boldsymbol{\mathrm{P}} = \frac{1}{6}P_{abc}\mathrm{e}^{abc} = \tilde{P}^{a}\epsilon_{a}\,, \quad \text{where} \quad \tilde{P}^{a}=\frac{1}{6}\epsilon^{abcd}P_{bcd}\,.
\end{equation}
Since $P_{abc}$ is a twisted covariant rank-3 tensor, $\tilde{P}^{a}$ is an untwisted contravariant vector density.
\subsubsection{Gravity}
In gravity, we begin with the conservation of energy and momenta, and we have thus 4 conserved charges.
As in the case of electromagnetism, they are described by a twisted 3-form,
\begin{equation}
\boldsymbol{\mathrm{J}}_{a} = J_{a}{}^c\epsilon_c\,.
\end{equation}
The conservation in integral and in differential forms is analogously expressed as
\begin{equation}
\int_{\partial \Omega_4}\boldsymbol{\mathrm{J}}_{a} =0\,, \quad \text{and} \quad {{\rm d}} \boldsymbol{\mathrm{J}}_{a} = 0\,.
\end{equation}
The latter implies again the existence of a twisted two-form
\begin{equation}
\boldsymbol{\mathrm{H}}_{a} = \frac{1}{2}H_{abc}\mathrm{e}^{bc} = \frac{1}{2}\tilde{H}_{a}{}^{bc}\epsilon_{bc}\,, \quad \text{where} \quad \tilde{H}_{a}{}^{bc}=\frac{1}{2}\epsilon^{bcde}H_{ade}\,.
\end{equation}
Since $H_{abc}$ is a twisted covariant tensor, $\tilde{H}_{a}{}^{bc}$ is an untwisted contravariant tensor density. We now write
\begin{equation}
{{\rm d}} \boldsymbol{\mathrm{H}}_{a} = \boldsymbol{\mathrm{J}}_{a} = \mathbf{T}_{a} + \mathbf{t}_{a}\,,
\end{equation}
taking into account that in addition to the energy-momentum of matter $\mathbf{T}_{a}$, there can also occur inertial energy-momentum $\mathbf{t}_{a}$.
The potential excitation is now defined as the twisted 3-form $\boldsymbol{\mathrm{P}}_{ab}$, given as
\begin{equation}
\boldsymbol{\mathrm{P}}_{ab} = \frac{1}{6}P_{abcde}\mathrm{e}^{cde} = \tilde{P}_{ab}{}^{c}\epsilon_{c}\,, \quad \text{where} \quad \tilde{P}_{ab}{}^{c}=\frac{1}{6}\epsilon^{cdef}P_{abdef}\,.
\end{equation}
Since $P_{abcde}$ is a twisted covariant rank-5 tensor, $\tilde{P}_{ab}{}^{c}$ is an untwisted tensor density, contravariant in its last index.
\subsection{Field strength}
\subsubsection{Electromagnetism}
The field strength $\mathbf{F}=\frac{1}{2}F_{ab}\mathrm{e}^{ab}$ is an untwisted 2-form, which satisfies the equations
\begin{equation}
\int_{\partial \Omega_4}\mathbf{F} =0\,, \quad \text{and} \quad {{\rm d}} \mathbf{F} = 0\,.
\end{equation}
The latter equation is an expression of the conservation of the magnetic flux, and it implies the existence of the electromagnetic potential
$\mathbf{A} = A_a\mathrm{e}^a$ such that ${{\rm d}}\mathbf{A} = \mathbf{F}$. The electromagnetic potential is defined up to a scalar $\varphi$ such that $\mathbf{A} \rightarrow \mathbf{A} + {{\rm d}}\varphi$.
\subsubsection{Gravity}
We introduce the gravitational field strength $\mathbf{F}^{ab}$ as an untwisted tensor-valued 2-form which satisfies the equation
$\mathbf{F}^{ab} = 0$, due to the integrability of the gravitational geometry. The gravitational potential $\mathbf{A}^{ab}=A^{ab}{}_{c}\mathrm{e}^c$, for which
$\mathbf{F}^{ab} = {{\rm d}} \mathbf{A}^{ab}$, thus has the further property that $\mathbf{A}^{ab} = {{\rm d}} \varphi^{ab}$ for some $\varphi^{ab}$, which follows from our basic postulate that $\mathbf{F}^{ab} = 0$.
\subsection{Force}
\subsubsection{Electromagnetism}
The force acting on matter is described by a covector-valued twisted 4-form $\boldsymbol{f}_a = f_a \text{vol}$, where $f_a$ is a covector-valued scalar.
The Lorentz force is given as
\begin{equation}
\boldsymbol{f}_a = \left(\text{\textschwa}_a\cdot \mathbf{F}\right)\wedge \boldsymbol{\mathrm{J}} = \left( J^b F_{ba}\right)\text{vol} = f_a \text{vol}\,.
\end{equation}
By construction, $f_a$ is an untwisted covector-valued scalar density.
\subsubsection{Gravity}
The gravitational force is, again, constructed in complete analogy to the electromagnetic one, as a tensor-valued twisted 4-form $\boldsymbol{f}_{a}{}^b = f_{a}{}^b\text{vol}$,
\begin{equation}
\boldsymbol{f}_{a}{}^c = \left(\text{\textschwa}_a\cdot \mathbf{F}^{bc}\right)\wedge \boldsymbol{\mathrm{J}}_{b} = \left( J_{b}{}^d F^{bc}{}_{da}\right)\text{vol} = f_a{}^c \text{vol}\,.
\end{equation}
The absence of curvature, $\mathbf{F}^{ab}=0$, implies the absence of force, $\boldsymbol{f}_{a}{}^b =0$. This reflects the conservation of total energy and momentum.
However, matter energy-momentum need not be conserved. Therefore, there arises an effective force
\begin{equation}
{{\rm d}} \mathbf{T}_a = -{{\rm d}} \mathbf{t}_a \equiv \mathfrak{f}_a\,.
\end{equation}
The interpretation of this force as an inertial effect was already discussed in Section \ref{gforces}.
In the case of CGR it turns out that $\mathfrak{f}_a$ has precisely the correct form to ensure the metric-covariant conservation of matter. This could be expected,
since CGR can be derived from an action principle, which guarantees the generalised Bianchi identity \cite{Koivisto:2005yk}.
\subsection{Energy-momentum current}
\subsubsection{Electromagnetism}
Energy-momentum currents are described by covector-valued 3-forms. In the case of electromagnetism, we write
\begin{equation} \label{emcurrent}
\prescript{\text{em}}{}\mathbf{T}_a = \frac{1}{2}\left( \mathbf{F}\wedge \text{\textschwa}_a\cdot \boldsymbol{\mathrm{H}} - \boldsymbol{\mathrm{H}} \wedge \text{\textschwa}_a\wedge \mathbf{F} + \mathbf{A} \wedge \text{\textschwa}_a \cdot \boldsymbol{\mathrm{P}} - \boldsymbol{\mathrm{P}}\wedge\text{\textschwa}_a\cdot \mathbf{A}\right)\,.
\end{equation}
If the theory can be obtained from a twisted Lagrangian 4-form
\begin{equation}
\mathbf{\Lambda} = -\frac{1}{2}\mathbf{F}\wedge\boldsymbol{\mathrm{H}} - \frac{1}{2}\mathbf{A}\wedge\boldsymbol{\mathrm{P}}\,,
\end{equation}
we can alternatively write
\begin{equation}
\prescript{\text{em}}{}\mathbf{T}_a = \text{\textschwa}_a\cdot \mathbf{\Lambda} - \mathbf{F}\wedge\text{\textschwa}_a\cdot\boldsymbol{\mathrm{H}} - \mathbf{A}\wedge\text{\textschwa}_a\cdot\boldsymbol{\mathrm{P}}\,.
\end{equation}
One may verify that, defining $\mathcal{L}_a\boldsymbol{\mathrm{X}} = [\text{\textschwa}_a,\boldsymbol{\mathrm{X}}]$ for any form $\boldsymbol{\mathrm{X}}$ as its Lie derivative along the basis vector $\text{\textschwa}_a$,
\begin{equation}
{{\rm d}} \prescript{\text{em}}{}\mathbf{T}_a = \boldsymbol{f}_a - \frac{1}{2}\left( \mathbf{F}\wedge\mathcal{L}_a\boldsymbol{\mathrm{H}} - \boldsymbol{\mathrm{H}}\wedge\mathcal{L}_a\mathbf{F}
+ \mathbf{A}\wedge\mathcal{L}_a\boldsymbol{\mathrm{P}} - \boldsymbol{\mathrm{P}}\wedge\mathcal{L}_a\mathbf{A}\right)\,.
\end{equation}
Here $\boldsymbol{f}_a$ is the Lorentz force and the remaining additional force is determined by the constitutive law.
\subsubsection{Gravity}
In the case of gravity, the current analogous to (\ref{emcurrent}) contains only the last two terms. That is,
\begin{equation}
\mathbf{t}_a = \frac{1}{2}\left( \mathbf{A}^{bc} \wedge \text{\textschwa}_a \cdot \boldsymbol{\mathrm{P}}_{bc} - \boldsymbol{\mathrm{P}}_{bc}\wedge\text{\textschwa}_a\cdot \mathbf{A}^{bc}\right)\,.
\end{equation}
Again, if the theory can be obtained from a twisted Lagrangian 4-form
\begin{equation}
\mathbf{\Lambda} = - \frac{1}{2}\mathbf{A}^{ab}\wedge\boldsymbol{\mathrm{P}}_{ab}\,,
\end{equation}
there is an alternative formula,
\begin{equation}
\mathbf{t}_a = \text{\textschwa}_a\cdot \mathbf{\Lambda} - \mathbf{A}^{bc}\wedge\text{\textschwa}_a\cdot\boldsymbol{\mathrm{P}}_{bc}\,.
\end{equation}
The conservation law of the gravitational energy-momentum current can be derived to be
\begin{equation}
{{\rm d}} \mathbf{t}_a = - \frac{1}{2}\left( \mathbf{A}^{bc}\wedge\mathcal{L}_a\boldsymbol{\mathrm{P}}_{bc} - \boldsymbol{\mathrm{P}}_{bc}\wedge\mathcal{L}_a\mathbf{A}^{bc}\right)\,,
\end{equation}
which is minus the effective force $\mathfrak{f}_a$ affecting matter.
\subsection{Constitutive relation}
\subsubsection{Electromagnetism}
By using the expansions
\begin{equation}
\boldsymbol{\mathrm{H}} = \frac{1}{2}\tilde{H}^{ab}\epsilon_{ab}\,, \quad \mathbf{F} = \frac{1}{2}F_{ab}\mathrm{e}^{ab}\,, \quad \boldsymbol{\mathrm{P}} = \tilde{P}^a\epsilon_a\,, \quad \mathbf{A} = A_a\mathrm{e}^a\,,
\end{equation}
one finds that essentially the same constitutive relations as in subsection \ref{electroconst} are to be specified. For example, a generalisation of the Proca theory is given by
\begin{equation}
\tilde{H}^{ab} = {\chi}^{abcd}F_{cd}\,, \quad \tilde{P}^a = {\xi}^{ab}A_b\,.
\end{equation}
The constitutive tensor densities in the two languages are related by the components of the coframe field
\begin{equation}
{\chi}^{abcd} = \mathrm{e}^a{}_\mu\mathrm{e}^b{}_\nu \mathrm{e}^c{}_\rho \mathrm{e}^d{}_\sigma {\chi}^{\mu\nu\rho\sigma}\,, \quad {\xi}^{ab} = \mathrm{e}^a{}_\mu\mathrm{e}^b{}_\nu {\xi}^{\mu\nu}\,.
\end{equation}
Thus, the analysis of the constitutive relations in the tensor language is directly applicable to the theory formulated in the exterior algebra.
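The index conversion is a straightforward contraction with the coframe components. As an illustration, a schematic numpy sketch (the coframe and the constitutive density below are random placeholders, not physical inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
e = rng.normal(size=(4, 4))            # coframe components e^a_mu (assumed invertible)
chi_greek = rng.normal(size=(4,)*4)    # chi^{mu nu rho sigma} in the coordinate basis

# chi^{abcd} = e^a_mu e^b_nu e^c_rho e^d_sigma chi^{mu nu rho sigma}
chi_latin = np.einsum('am,bn,cr,ds,mnrs->abcd', e, e, e, e, chi_greek)

# contracting back with the frame components (the matrix inverse of the coframe)
# recovers the coordinate-basis density
frame = np.linalg.inv(e)               # frame[mu, a]: components of the frame vectors
chi_back = np.einsum('ma,nb,rc,sd,abcd->mnrs', frame, frame, frame, frame, chi_latin)
print(np.allclose(chi_back, chi_greek))   # True
```

The round trip works because the frame and coframe components are inverse matrices of one another.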
\subsubsection{Gravity}
Now let us recall the expansions
\begin{equation}
\boldsymbol{\mathrm{H}}_{a} = \frac{1}{2}\tilde{H}_{a}{}^{bc}\epsilon_{bc}\,, \quad {\boldsymbol{\mathrm{P}}}_{ab} = \tilde{P}^c{}_{ab}\epsilon_c\,, \quad \mathbf{A}^{ab} = A^{ab}{}_c\mathrm{e}^c\,.
\end{equation}
The constitutive relations we focused upon previously are thus expressed in Latin indices as
\begin{equation}
\tilde{H}_c{}^{ab} = \chi^{ab}{}_{cdf}{}^e Q_e{}^{df}\,, \quad \tilde{P}^a{}_{bc} = \xi^a{}_{bc}{}^d{}_{ef}Q_d{}^{ef}\,,
\end{equation}
where the index conversion can be made with the help of the components of the coframe and of the tetrads, i.e. the components of the frame, in the obvious way
\begin{equation}
\chi^{ab}{}_{cdf}{}^e = \mathrm{e}^a{}_\mu\mathrm{e}^b{}_\nu\text{\textschwa}_f{}^\sigma\mathrm{e}^e{}_\beta\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^\beta \text{\textschwa}_c{}^\alpha\text{\textschwa}_d{}^\rho\,, \quad
\xi^a{}_{bc}{}^d{}_{ef} = \mathrm{e}^a{}_\alpha\mathrm{e}^d{}_\beta \xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma} \text{\textschwa}_b{}^\mu\text{\textschwa}_c{}^\nu\text{\textschwa}_e{}^\rho\text{\textschwa}_f{}^\sigma\,.
\end{equation}
By lifting the analysis into the frame bundle, we have introduced an additional object, the frame field, only to avoid introducing a coordinate chart explicitly. One may then consider trading the extra structure
for another, the symbol $\eta^{ab}$. Then one of the fields, $\mathrm{e}^a$ and $\varphi^{ab}$ (where $\mathbf{Q}^{ab}=-{{\rm d}} \varphi^{ab}$) becomes redundant, since it is possible, and indeed
conventional, to make the identification $\varphi^{ab}\text{\textschwa}_a\otimes\text{\textschwa}_b = \eta^{ab}\text{\textschwa}_a\otimes\text{\textschwa}_b$.
\begin{center}
\begin{table}[h]
\begin{tabular}{| c| c c c|c c c|}
\hline
Objects and laws & Basis & Electromagnetism & Gravity & W & Electromagnetism & Gravity \\
\hline
Source current & 3- & $\boldsymbol{\mathrm{J}} = \mathbf{T} + \mathbf{t}$ & $\boldsymbol{\mathrm{J}}_{a}=\mathbf{T}_{a}+\mathbf{t}_{a}$ & 1 &$J^\mu=T^\mu+t^\mu$ & $J^\mu{}_\nu=T^\mu{}_\nu+t^\mu{}_\nu$ \\
Conservation law & 4- & ${{\rm d}} \boldsymbol{\mathrm{J}}=0$ & ${{\rm d}} \boldsymbol{\mathrm{J}}_{a}=0$ & 1 & $\nabla_\mu J^\mu=0$& $\nabla_\mu J^\mu{}_\nu=0$ \\
Kinetic excitation & 2- & $\boldsymbol{\mathrm{H}}$ & $\boldsymbol{\mathrm{H}}_{a}$& 1 & $H^{\mu\nu}$& $H^{\mu\nu}{}_\alpha$\\
Mass excitation & 3- & $\boldsymbol{\mathrm{P}}$ & $\boldsymbol{\mathrm{P}}_{ab}$ & 1 & $P^\alpha$ & $P^\alpha{}_{\mu\nu}$ \\
Inhomog. field eqn. & 3- & ${{\rm d}}\boldsymbol{\mathrm{H}}=\boldsymbol{\mathrm{J}}$ & ${{\rm d}}\boldsymbol{\mathrm{H}}_{a}=\boldsymbol{\mathrm{J}}_{a}$& 1 & $\nabla_\mu H^{\mu\nu}=J^\nu$ & $\nabla_\mu H^{\mu\nu}{}_\alpha=J^\nu{}_\alpha$ \\
Kinetic potential & 1+ & $\mathbf{A}$ & $\mathbf{A}^{ab}$& 0 & $A_\mu$ & $A^{\alpha\beta}{}_\nu$ \\
Mass potential & 0+ & $B$ & $B^{ab}$& 0 & $B$ & $B^{\alpha\beta}$ \\
Field strength & 2+ & $\mathbf{F}={{\rm d}}\mathbf{A}$ & $\mathbf{F}^{ab}={{\rm d}}\mathbf{A}^{ab}$ & $0$ & $F_{\mu\nu}=2\nabla_{[\mu}A_{\nu]}$ & $F^{\alpha\beta}{}_{\mu\nu}=2\nabla_{[\mu}A^{\alpha\beta}{}_{\nu]}$ \\
Homog. field eqn. & 3+, 2+ & ${{\rm d}} \mathbf{F}=0$ & $\mathbf{F}^{ab}=0$ & $0$ & $\nabla_{[\alpha}F_{\mu\nu]}=0$ & $F^{\alpha\beta}{}_{\mu\nu}=0$\\
Lorentz force & 4- & $\boldsymbol{f}_a=\text{\textschwa}_a\cdot \mathbf{F}\wedge\boldsymbol{\mathrm{J}}$ & $0$ & $1$ & $f_\mu = F_{\mu\nu}J^\nu$ & $0$ \\
Effective force & 4- & $\boldsymbol{\mathfrak{f}}=-{{\rm d}}\mathbf{t}$ & $\boldsymbol{\mathfrak{f}}_{a}=-{{\rm d}}\mathbf{t}_{a}$& $1$& $\mathfrak{f}=-\nabla_\mu t^\mu$ & $\mathfrak{f}_\nu=-\nabla_\mu t^\mu{}_\nu$\\
Energy-momentum & 3- & $\prescript{\text{em}}{}{\mathbf{T}}_a$ & $\mathbf{t}_{a}$ & $1$ & $\prescript{\text{em}}{}{T}^\mu{}_\nu$ & $t^\mu{}_\nu$ \\
Kinetic Lagrangian & 4- & $\prescript{\text{kin}}{}\mathbf{\Lambda} = -\frac{1}{2}\mathbf{F}\wedge\boldsymbol{\mathrm{H}}$ & $0$ & 1 & $\prescript{\text{kin}}{}L = \frac{1}{4}H^{\mu\nu}F_{\mu\nu}$ & $0$ \\
Mass Lagrangian & 4- & $\prescript{\text{pot}}{}\mathbf{\Lambda} = - \frac{1}{2}\mathbf{A}\wedge\boldsymbol{\mathrm{P}}$ & $\prescript{\text{pot}}{}\mathbf{\Lambda} =-\frac{1}{2}\mathbf{A}^{ab}\wedge\boldsymbol{\mathrm{P}}_{ab}$ & 1 & $\prescript{\text{pot}}{}L=-\frac{1}{2}P^\mu A_\mu$& $\prescript{\text{pot}}{}L=-\frac{1}{2}P^\alpha{}_{\mu\nu} A^{\mu\nu}{}_\alpha$ \\
\hline
Kinetic constitutive rel. & 0- & $\chi^{abcd}$ & $\chi^{ab}{}_{cdf}{}^e$ & 1 & $\chi^{\alpha\beta\mu\nu}$ & $\chi^{\mu\nu}{}_{\alpha\beta\gamma}{}^\delta$ \\
Mass constitutive rel. & 0- & $\xi^{ab}$ & $ \xi^a{}_{cd}{}^b{}_{ef}$ & 1 & $\xi^{\alpha\beta}$ & $ \xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma}$ \\
\hline
\end{tabular}
\caption{Summary of the objects and laws in the two formalisms. In the column ``Basis'' the number $p$ denotes a $p$-form, and $-$ is for twisted, $+$ for untwisted. The entry in the column ``W'' gives the density weight: $1$ for tensor densities (or, tensor
density equations) and $0$ for tensors (the rank being manifest from the indices). We see that the quantities corresponding to twisted forms are tensor densities. We also see that apart from the form of the homogeneous field equation, the analogy with
massive electromagnetism is complete, naturally modulo the extra indices in gravity theory. However, as mentioned in the text, the homogeneous field equation $\mathbf{F}^{ab}=0$ could be regarded as a macroscopic approximation to the
gravity theory where only ${{\rm d}} \mathbf{F}^{ab}=0$ was required at energies $\sim M$ (or, distances $\sim 1/M$). \label{table1}}
\end{table}
\end{center}
\section{Restoring symmetries}
\label{symmetries}
In this Section we discuss further the analogy of purified gravity and massive electromagnetism. In particular, the analogy suggests a natural extrapolation of CGR which
predicts an ``impurity'' of the spacetime structure at the Planck scale.
\subsection{From Proca to Stueckelberg}
\label{ptos}
\subsubsection{Electromagnetism}
This far our discussion has been based on the Proca formulation. The Lagrangian for the massive vector field is then simply
\begin{equation} \label{proca}
L_{\text{Proca}} = \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - \frac{1}{2}m^2 A_\mu A^\mu\,.
\end{equation}
The mass term obviously breaks the gauge symmetry $A_\mu \rightarrow A_\mu - \nabla_\mu\varphi$. For many purposes, it is much better to consider the Stueckelberg version of the
theory. Let us introduce a new field $B$ and
write the Lagrangian for the two fields as
\begin{equation} \label{stuck}
L_{\text{Stueck}} = \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - \frac{1}{2}\left(\nabla_\mu B+m A_\mu\right)\left(\nabla^\mu B + m A^\mu\right)\,.
\end{equation}
The trick is that now we have restored the gauge symmetry of the massless case, when taking into account also the transformation of the field $B$, since the action is invariant under
\begin{equation}
A_\mu \rightarrow A_\mu - \nabla_\mu\varphi\,, \quad B \rightarrow B + m\varphi\,,
\end{equation}
and by setting $B=0$ we recover the Proca action. Thus, the formulations (\ref{proca}) and (\ref{stuck}) are equivalent as far as the vector boson is concerned, but the introduction of a further degree of freedom, $B$, makes it possible to extend the symmetries of the system, yielding important consequences, for instance, for the renormalizability of the theory \cite{Ruegg:2003ps}.
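The restored invariance is elementary to verify. As a minimal sketch, a one-component sympy toy check of the mass term (the kinetic term is invariant already in the massless theory):

```python
import sympy as sp

x, m = sp.symbols('x m')
A = sp.Function('A')(x)
B = sp.Function('B')(x)
phi = sp.Function('phi')(x)   # gauge parameter

# the Stueckelberg combination nabla B + m A, in a one-dimensional toy model
combo = sp.diff(B, x) + m*A

# gauge transformation: A -> A - nabla(phi), B -> B + m*phi
combo_t = sp.diff(B + m*phi, x) + m*(A - sp.diff(phi, x))

print(sp.simplify(combo_t - combo))   # 0
```

The derivative of the shift in $B$ cancels the shift of $mA$ exactly, which is the whole content of the Stueckelberg trick.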
This suggests improving the application of the premetric program, as realised in the two previous sections, in the case of nonzero potential excitations. It should then be understood
that just as the existence of the kinetic excitation $\boldsymbol{\mathrm{H}}$ implies the existence of a potential $\mathbf{A}$, the existence of a potential excitation $\boldsymbol{\mathrm{P}}$ implies the existence of a
field $B$. The principle is that the symmetry that emerges for the kinetic excitation should not be destroyed by the presence of the potential excitation. Thus, a non-vanishing
constitutive relation $\xi$ may be considered to entail the presence of an additional Stueckelberg field. As is the case in the above demonstrated example, the resulting theory should be physically completely
equivalent (despite the formal introduction of the additional degrees of freedom). However, it is, in our understanding, conceptually preferable from the standpoint of the premetric program to consider that the redundancy in the one-form
$\mathbf{A}$, deduced from the property of the two-form $\boldsymbol{\mathrm{H}}$, is not undermined by the presence of the three-form $\boldsymbol{\mathrm{P}}$. Rather, from the latter we can deduce a further property of the theory: the existence of the 0-form $B$.
In the case of purified gravity, we shall sometimes refer to the corresponding Stueckelberg 0-form $B^{ab}$ as the ``premetric field''.
\subsubsection{Gravity}
\label{clarity}
Two concerns may have arisen in the formulation of the gravitational theory as presented in the above three Sections. Firstly, was it legitimate to start with the connection $\nabla_\mu$ instead of $\partial_\mu$? Secondly, was it legitimate to promote the gauge transformation $\varphi^{\mu\nu}$ (respectively, $\varphi^{ab}$) to a dynamical variable? Now we shall clarify these points which had been
left somewhat vague. Both issues are addressed by applying the same symmetry-based reasoning that led us from Proca electromagnetism to Stueckelberg
electromagnetism, now carried over to gravity. Thus, neither the substitution $\partial_\mu \rightarrow \nabla_\mu$, nor the substitution $A^{\mu\nu}{}_\alpha \rightarrow -Q_\alpha{}^{\mu\nu} =\nabla_\alpha g^{\mu\nu}$
are put by hand in the theory, but they are the inevitable consequences of the symmetry axiom we proposed to supplement the premetric program with, in order to deal more robustly with
symmetry-breaking self-interactions when such arise in physics.
To show this in detail, let us take some steps back and undo those two perhaps dubious substitutions.
In the Proca-type formulation we have now (twice) walked through, the gravitational Lagrangian is written as
\begin{equation} \label{procag}
L_{P} = \lambda_{\mu\nu}{}^{\alpha\beta} F^{\mu\nu}{}_{\alpha\beta} +
\frac{1}{2}M^2A^{\mu\nu}{}_\alpha \xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma} A^{\rho\sigma}{}_{\beta}\,,
\end{equation}
where we implemented the constraint of vanishing force with a Lagrange multiplier $\lambda_{\mu\nu}{}^{\alpha\beta}$ (though this is inessential). The important thing is that we do not assume
anything else about the field $A^{\rho\sigma}{}_{\beta}$, except that it does not describe a physical force. The symmetry of the field strength $F^{\mu\nu}{}_{\alpha\beta}$ for the gauge potential $A^{\mu\nu}{}_\alpha$ is
$A^{\mu\nu}{}_\alpha \rightarrow A^{\mu\nu}{}_\alpha - \partial_\alpha\varphi^{\mu\nu}$ for an arbitrary $\varphi^{\alpha\beta}$. Obviously, the self-interaction term now breaks this symmetry.
Exactly as in the case of electromagnetic self-interaction, we shall restore the symmetry by
introducing the compensating field $B^{\mu\nu}$, which must transform as $B^{\mu\nu} \rightarrow B^{\mu\nu} + M\varphi^{\mu\nu}$. The gravitational Stueckelberg action then reads
\begin{equation}
L_{S} = \lambda_{\mu\nu}{}^{\alpha\beta} F^{\mu\nu}{}_{\alpha\beta} +
\frac{1}{2}\left( M A^{\mu\nu}{}_\alpha + \partial_\alpha B^{\mu\nu}\right) \xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma}\left( M A^{\rho\sigma}{}_\beta +\partial_\beta B^{\rho\sigma}\right)\,.
\end{equation}
The variation with respect to the field $A^{\mu\nu}{}_\alpha$ just gives an equation of motion for the irrelevant Lagrange multiplier:
\begin{equation}
2\partial_\beta \lambda_{\mu\nu}{}^{\alpha\beta} = M\xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma}\left( M A^{\rho\sigma}{}_\beta +\partial_\beta B^{\rho\sigma}\right)\,.
\end{equation}
The variation with respect to the Lagrange multiplier in turn, gives the equation of motion for $A^{\mu\nu}{}_\alpha$:
\begin{equation}
F^{\mu\nu}{}_{\alpha\beta} = 0 \quad \Rightarrow \quad A^{\mu\nu}{}_\alpha = 2M\hat{\Gamma}^{\mu}{}_\alpha{}^{\nu}\,, \quad \text{where}\,\, \hat{\Gamma}^{\mu}{}_\alpha{}^{\nu}\,\,\text{is flat}\,.
\end{equation}
We have introduced a flat affine connection with a convenient normalisation.
Using this information in the action, it becomes
\begin{equation}
L_{S} =
\frac{1}{2}M^2\left( \partial_\alpha B^{\mu\nu} + 2\hat{\Gamma}^{(\mu}{}_\alpha{}^{\nu)} \right) \xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma}\left( \partial_\beta B^{\rho\sigma} + 2\hat{\Gamma}^{(\rho}{}_\beta{}^{\sigma)} \right)\,.
\end{equation}
We have emphasised the symmetrisation of the connection $\hat{\Gamma}^{\mu}{}_\alpha{}^{\nu}$ that is imposed by the symmetries of the constitutive relation, $\xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma}
= \xi^\alpha{}_{(\mu\nu)}{}^\beta{}_{(\rho\sigma)}$. We should now note that though torsion is given by the antisymmetry of the last two indices of an affine connection, the contortion (i.e. the total contribution of the torsion to the affine connection) is
antisymmetric in its first and the last indices. Thus we may write
\begin{equation}
\partial_\alpha B^{\mu\nu} + 2\hat{\Gamma}^{(\mu}{}_\alpha{}^{\nu)} = \partial_\alpha B^{\mu\nu} + 2{\Gamma}^{(\mu}{}_\alpha{}^{\nu)} \equiv \nabla_\alpha B^{\mu\nu}\,, \quad \text{where}\,\, {\Gamma}^{\mu}{}_\alpha{}^{\nu}\,\,\text{is flat and torsion-free}\,.
\end{equation}
Finally we may rename the variable $B^{\mu\nu}$ as $g^{\mu\nu}$, and conclude that the theory is
\begin{equation}
L_S = \frac{1}{2}M^2\nabla_\alpha g^{\mu\nu} \xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma} \nabla_\beta g^{\rho\sigma} = \frac{1}{2}M^2 Q_\alpha{}^{\mu\nu}P^\alpha{}_{\mu\nu}\,.
\end{equation}
Thus, neither of the two ingredients of CGR (and its generalisations), the metric and the symmetric teleparallel covariant derivative\footnote{In teleparallel gravity, the relevance of the pure-gauge Lorentz connection has been emphasised often \cite{Aldrovandi:2013wha,Golovnev:2017dox,Krssak:2018ywd,Jarv:2019ctf}. It would seem more difficult to justify this structure from the premetric approach.}, are in the least way ad hoc. They emerge as dynamical variables from the first principles of the axiomatic approach to gravity theory.
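The step from $F^{\mu\nu}{}_{\alpha\beta}=0$ to a pure-gauge, flat connection used above can be checked symbolically. A two-dimensional sympy sketch, where the invertible matrix field $\Lambda$ is an arbitrary placeholder and the connection is taken in the pure-gauge form $\hat{\Gamma}_a = \Lambda^{-1}\partial_a\Lambda$:

```python
import sympy as sp

x, y = sp.symbols('x y')
X = (x, y)

# an arbitrary invertible matrix field Lambda(x, y); pure-gauge connection G_a = L^{-1} d_a L
L = sp.Matrix([[sp.exp(x), sp.sin(y)], [0, 1 + x*y]])
Li = L.inv()
G = [sp.simplify(Li * L.diff(X[a])) for a in range(2)]   # G[a][m, n] = Gamma^m_{a n}

# curvature 2-form components: R = d_x G_y - d_y G_x + [G_x, G_y]
R = G[1].diff(x) - G[0].diff(y) + G[0]*G[1] - G[1]*G[0]
print(sp.simplify(R))   # the 2x2 zero matrix
```

The derivative terms produce $-\Gamma_x\Gamma_y + \Gamma_y\Gamma_x$, which the commutator cancels identically, so the curvature vanishes for any invertible $\Lambda$.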
\subsection{From Stueckelberg to Kibble ?}
\subsubsection{Electromagnetism}
In the formulation that was originally due to Kibble, the Stueckelberg theory was embedded in a free Abelian Higgs model. A complex scalar field $\Phi$, charged under the U(1)
symmetry, was introduced such that its U(1)-covariant derivative is
\begin{equation}
\mathrm{D}_\mu \Phi = \left(\partial_\mu -ieA_\mu\right)\Phi\,.
\end{equation}
The quite elegant action then reads
\begin{equation} \label{kibble}
L_{\text{Kibble}} = \frac{1}{4}F_{\mu\nu}F^{\mu\nu} - \frac{1}{2}| \mathrm{D} \Phi |^2 + V(|\Phi |^2)\,.
\end{equation}
A suitable potential can lead to a spontaneous symmetry breaking which sets the modulus of the complex scalar field to $|\Phi_0 | = m/e$.
Thus, at the minimum,
\begin{equation}
\Phi_0 = \frac{m}{e}\exp{\left(-\frac{ieB(x)}{m}\right)}\,,
\end{equation}
and we can identify the phase of the complex scalar with the Stueckelberg field $B$. Up to a possibly nonzero $V(|\Phi_0 |^2)$, we obviously recover (\ref{stuck}) at the minimum.
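That the broken phase reproduces the Stueckelberg mass term can be verified directly. A one-component sympy sketch (note that with the convention $\mathrm{D}_\mu = \partial_\mu - ieA_\mu$, the phase of $\Phi_0$ must be $-eB/m$ for the combination $\nabla B + mA$ of (\ref{stuck}) to emerge):

```python
import sympy as sp

x = sp.symbols('x', real=True)
m, e = sp.symbols('m e', positive=True)
A = sp.Function('A', real=True)(x)
B = sp.Function('B', real=True)(x)

# broken-phase configuration and its U(1)-covariant derivative, D = d/dx - i e A
Phi0 = (m/e) * sp.exp(-sp.I*e*B/m)
DPhi = sp.diff(Phi0, x) - sp.I*e*A*Phi0
DPhi_c = sp.diff(sp.conjugate(Phi0), x) + sp.I*e*A*sp.conjugate(Phi0)

# |D Phi_0|^2 reduces to the square of the Stueckelberg combination B' + m A
mass_term = sp.expand(DPhi * DPhi_c)
print(sp.simplify(mass_term - (sp.diff(B, x) + m*A)**2))   # 0
```

The phases cancel in the modulus, leaving exactly the gauge-invariant Stueckelberg combination.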
\subsubsection{Gravity}
Since, as far as we know, all massive elementary particles and gauge fields have acquired their mass via the Higgs mechanism, it is reasonable to assume that this is the case also for the mass $M$ of the
gravitational gauge field $A^{\alpha\beta}{}_\mu$. In fact, there is plenty of evidence for scale invariance in physics. Starting from the basic argument of Weyl \cite{weyl1919raum} which many still find compelling enough by itself
\cite{Scholz:2017pfo}, in modern particle physics technical arguments have kept suggesting the non-existence of an absolute scale at the level of fundamental physics. In a scale-invariant theory, dimensional regularisation introduces only logarithmic runnings of the coupling constants,
and in such a case the large hierarchy of the interactions might be better explained \cite{Meissner:2006zh}. If the scale symmetry was restored in the absence of the cosmological constant, its
exceedingly small value would be technically natural \cite{Lucat:2018slu}. The Higgs mass parameter sits at the edge of the stability bound \cite{Alekhin:2012py} and its quartic coupling seems to
run to zero near the Planck scale, which suggests that scale symmetry is indeed restored there if not at lower scales, possibly solving the stability issue of the Standard Model \cite{Oda:2015gna}.
For the reasons stated above, we believe the Planck scale should be the result of a spontaneous symmetry breaking. The most straightforward analogue of Kibble's Abelian Higgs model does not, however, satisfactorily incorporate such
a mechanism within our present formalism. We could write the Kibble-type action for gravity as
\begin{equation}
L_K = \frac{1}{2}\left(\mathrm{D}_\alpha\Phi^{\mu\nu}\right)^\ast\xi^\alpha{}_{\mu\nu}{}^\beta{}_{\rho\sigma}\left(\mathrm{D}_\beta\Phi^{\rho\sigma}\right) + V_{\alpha\beta}\Phi^{\alpha\beta}\,.
\end{equation}
To remain in the premetric framework, we have taken the potential term to be determined by the (generally nonlinear) constitutive relation $V_{\alpha\beta}$ for our new complex field
$\Phi^{\alpha\beta}$.
When the radial component of the field has settled to an isotropic constant value such that
\begin{equation}
\Phi_0^{\alpha\beta} = M\exp{\left( iB^{\alpha\beta}\right)}\,,
\end{equation}
we recover the Stueckelberg-type action. However, either we would have to invoke an inhomogeneous covariant derivative for the field such that
\begin{equation}
\mathrm{D}_\mu\Phi^{\alpha\beta} = \partial_\mu \Phi^{\alpha\beta} + 2iM A^{\alpha\beta}{}_\mu\,,
\end{equation}
or introduce a metric such that we would be allowed to write
\begin{equation}
\hat{\nabla}_\mu\Phi^{\alpha\beta} = \partial_\mu \Phi^{\alpha\beta} + i\hat{\Gamma}^{\alpha}{}_{\mu\rho}\Phi^{\rho\beta} + i\hat{\Gamma}^{\beta}{}_{\mu\rho}\Phi^{\alpha\rho}\,.
\end{equation}
On the other hand, in any case it is difficult to see how to specify the constitutive relation in practice without invoking a metric.
As the purpose of this paper was only the axiomatic formulation of purified gravity, we leave the intriguing problem of the actual symmetry breaking mechanism and its dynamics to a further study.
Recently, a promising way has been pointed out, first by realising an observer space \cite{Gielen:2012fz} in Cartan geometry \cite{Westman:2014yca} by using merely a Lorentz connection and a Higgs-like scalar (in particular: no metric or a frame field) in a polynomial quadratic action (thus, not only a local and linear but even a polynomial constitutive relation) to give rise to a spacetime in the spontaneously broken phase \cite{Zlosnik:2018qvg}, and then by embedding this scenario to the General Linear bundle \cite{Koivisto:2019ejt}, thus bringing it a step closer to our present premetric construction of purified gravity\footnote{As a side-product, this scheme may eliminate the need for particle dark matter in cosmology and astrophysics \cite{Zlosnik:2018qvg}, and suggests the incorporation of the gauge fields of particle physics within the $A^{\alpha\beta}{}_\mu$ \cite{Koivisto:2019ejt}.}.
\subsection{Impurities near the Planck scale ?}
There is a complete analogy between purified gravity and massive electromagnetism, except that the gravitational force is imposed to vanish. This raises the question of why we should not allow a kinetic term for
gravity as well and then, to recover the predictions of GR, restrict to solutions with vanishing field strength. Actually, such solutions are naturally the relevant ones at the classical level. Since the connection
$A^{\mu\nu}{}_\alpha$ is massive, it interacts only at finite distances. The range of the force is of the order of the Planck length, about $1.6\cdot 10^{-35}$ meters. At the macroscopic scales, the gauge field does not
propagate. Therefore, for practical purposes $F^{\mu\nu}{}_{\alpha\beta} \approx 0$ and we obtain the same predictions by considering the theory with the canonical kinetic term, instead of imposing the vanishing of the
field strength with a Lagrange multiplier. It is interesting to consider that purified gravity becomes impure as one probes microscopic distances approaching the Planck scale. We would thus predict that at those scales, gravity
is no longer pure inertia, and the equivalence principle may in some sense be broken.
There are some similarities between this scenario and the very interesting approach towards quantum gravity by Percacci {\it et al} \cite{Percacci:1990wy,Percacci:2009ij,Pagani:2015ema}. In particular, they also consider a Higgs-like
mechanism that would make the gravitational connection massive. However, their set-up is also fundamentally different since they consider the usual (equivalent of) Einstein-Hilbert term as the kinetic term, while the mass terms are
additional terms that give masses only to the distorted part of the connection (torsion and non-metricity). In the version of purified gravity advocated in this paper, we on the contrary see the (equivalent of the) Einstein-Hilbert
term as the mass term, and speculate further that at extremely high energies the possible kinetic term, which as usual would be a square of the field strength of the connection, would become dynamically relevant.
In general, there are many interesting studies that implement the idea of a gravitational Higgs mechanism \cite{Percacci:1990wy,Percacci:2009ij,Pagani:2015ema,Isham:1971dv,Tresguerres:2000qn,Leclerc:2005qc,Tiemblo:2005js,Ali:2007hu,Westman:2014yca,Zlosnik:2018qvg,Koivisto:2019ejt}, but to our knowledge, the necessity for such a mechanism had not been previously deduced by an axiomatic method in the framework of the premetric program.
To be concrete, we propose the extrapolation of the purified gravity Lagrangian $L_{\text{gr}}$, motivated by the massive electromagnetic Lagrangian $L_{\text{em}}$, as follows
(where for clarity the corresponding mass scales in the $P$'s have been divided out, leaving the $p$'s):
\begin{equation} \label{two}
L_{\text{em}} = \frac{1}{4}H^{\mu\nu}F_{\mu\nu} - \frac{1}{2}m^2 p^\mu A_\mu\,, \quad
L_{\text{gr}} = \frac{1}{4}H_{\alpha\beta}{}^{\mu\nu} F^{\alpha\beta}{}_{\mu\nu} -
\frac{1}{2}M^2p_{\alpha\beta}{}^\mu A^{\alpha\beta}{}_{\mu}\,.
\end{equation}
The hypothetical photon mass has not been detected despite experiments of exquisite accuracy, and thus if a nonzero $m$ exists, we know that it must be very small \cite{Tu:2005ge,Goldhaber:2008xy}.
On the other hand, the hypothetical ``hypercurvature'' has not been experimentally probed because, as we know, $M$ is very large, in fact the largest fundamental mass scale known in Nature.
In this sense, the phenomenological status of the two theories in (\ref{two}) is the opposite: in electromagnetism, it is only the gauge-invariant piece that appears
in the standard theory and the physicality of the longitudinal polarisation $\varphi$ has not been established, whereas for gravity it is only the invariance-breaking piece $\varphi^{\alpha\beta}$
that propagates the well-established graviton, while the gauge-invariant kinetic term of the ``hypergravitational'' gauge field $A^{\alpha\beta}{}_\mu$ is only suggested by
the theoretical argument we have put forward.
\section{Properties of general theories}
\label{properties}
In this Section we investigate the implications of more general constitutive relations. First we consider the dispersion relation in a fully general (local and linear) case, and then
focus on the properties of the 13-parameter theories that can be defined in the presence of a metric.
\subsection{Wave propagation}
In gravity the inhomogeneous field equation~\eqref{gravity} has two contributions to the current \(J^{\mu}{}_{\alpha}\), which are the matter energy-momentum \(T^{\mu}{}_{\alpha}\) and gravitational energy-momentum \(t^{\mu}{}_{\alpha}\). In the geometric optics approximation we assume that the gravitational field is sufficiently weak so that we can neglect its energy-momentum contribution, and so we will set \(t^{\mu}{}_{\alpha} = 0\). Further, we study the propagation of waves in vacuum only, and hence set \(T^{\mu}{}_{\alpha} = 0\) as well. We end up with the source-free field equation \(\nabla_{\mu}H^{\mu\nu}{}_{\alpha} = 0\). We then make use of the constitutive relation~\eqref{gravconst}, together with \(Q_{\beta}{}^{\rho\sigma} = -\nabla_{\beta}\varphi^{\rho\sigma}\) and \(\varphi^{[\rho\sigma]} = 0\). Working in the Fourier domain, where \(\nabla_{\mu} \rightarrow q_{\mu}\) becomes the wave covector, we finally obtain the dispersion relation
\begin{equation}
M^{\nu}{}_{\alpha\rho\sigma}\varphi^{\rho\sigma} = 0\,, \quad
M^{\nu}{}_{\alpha\rho\sigma} = q_{\mu}q_{\beta}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}\,.
\end{equation}
We call \(M^{\nu}{}_{\alpha\rho\sigma}\) the characteristic tensor density. Note that not all of the irreducible components of \(\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}\) contribute to the characteristic tensor density and hence the dispersion relation. Following their definition~\eqref{eq:chidecomp}, the components \(\prescript{[3]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}\) and \(\prescript{[4]}{}\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}\) are antisymmetric in their first and last indices, so that their contribution vanishes.
We further remark that we have found 16 homogeneous, linear equations for the 10 components of \(\varphi^{\rho\sigma}\). This means that there must be a redundancy which eliminates six of these equations. Four equations are readily eliminated realizing that \(q_{\nu}M^{\nu}{}_{\alpha\rho\sigma} = 0\), due to the antisymmetry of \(\chi^{\mu\nu}{}_{\alpha\rho\sigma}{}^{\beta}\) in its first two indices. The remaining redundancies are more difficult to find and depend on the particular form of the constitutive density. We will reveal them in the most general metric case below.
\subsection{Wave propagation in the metric case}
For the metric constitutive density~(\ref{eq:chinewer},\ref{eq:chiodd}) we find the characteristic tensor density
\begin{equation}
M_{\nu\alpha\rho\sigma} = \frac{\sqrt{-g}}{2}\left[b_1\left(q_{\nu}g_{\alpha(\rho}q_{\sigma)} - q^2g_{\nu(\rho}g_{\sigma)\alpha}\right) + b_2\left(q_{\nu}q_{\alpha} - q^2g_{\nu\alpha}\right)g_{\rho\sigma} + b_3\left(q_{\alpha}g_{\nu(\rho}q_{\sigma)} - g_{\nu\alpha}q_{\rho}q_{\sigma}\right)\right] + \frac{2b_4 - b_6}{2}\epsilon_{\nu\alpha\beta(\rho}q_{\sigma)}q^{\beta}\,,
\end{equation}
where we have lowered the first index for convenience, and introduced the abbreviation \(q^2 = q_{\mu}q^{\mu}\). We see that the terms corresponding to the parameters \(b_5\) and \(b_7\) do not contribute. The linearized field equations thus read
\begin{multline}\label{eq:wavefull}
0 = E_{\nu\alpha} = 2M_{\nu\alpha\rho\sigma}\varphi^{\rho\sigma}\\
=\sqrt{-g}\left[b_1(\varphi_{\alpha\beta}q^{\beta}q_{\nu} - q^2\varphi_{\nu\alpha}) + b_2\varphi^{\beta}{}_{\beta}(q_{\nu}q_{\alpha} - q^2g_{\nu\alpha}) + b_3(\varphi_{\nu\beta}q^{\beta}q_{\alpha} - g_{\nu\alpha}\varphi^{\rho\sigma}q_{\rho}q_{\sigma})\right] + (2b_4 - b_6)\epsilon_{\nu\alpha\beta\rho}q^{\beta}q_{\sigma}\varphi^{\rho\sigma}\,.
\end{multline}
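The four redundancies $q^{\nu}E_{\nu\alpha}=0$, noted above for the general constitutive relation, can be confirmed numerically for this metric characteristic tensor. A numpy sketch with arbitrary parameter values (only the combination $2b_4-b_6$ enters):

```python
import numpy as np
from itertools import permutations

g = np.diag([-1.0, 1.0, 1.0, 1.0])        # flat metric, so sqrt(-g) = 1
rng = np.random.default_rng(42)
q = rng.normal(size=4)                     # generic wave covector q_mu
qu = np.linalg.inv(g) @ q                  # q^mu
q2 = q @ qu
b1, b2, b3, b46 = 1.3, -0.7, 0.4, 0.9      # arbitrary parameters; b46 := 2*b4 - b6

eps = np.zeros((4,)*4)                     # Levi-Civita symbol
for p in permutations(range(4)):
    eps[p] = np.linalg.det(np.eye(4)[list(p)])

def sym(T):                                # symmetrise the last two indices
    return 0.5*(T + np.swapaxes(T, 2, 3))

# M_{nu alpha rho sigma}, term by term as in the displayed formula
M  = b1*(sym(np.einsum('n,ar,s->nars', q, g, q)) - q2*sym(np.einsum('nr,sa->nars', g, g)))
M += b2*np.einsum('na,rs->nars', np.outer(q, q) - q2*g, g)
M += b3*(sym(np.einsum('a,nr,s->nars', q, g, q)) - np.einsum('na,r,s->nars', g, q, q))
M = 0.5*M + 0.5*b46*sym(np.einsum('nabr,s,b->nars', eps, q, qu))

# four of the 16 equations are redundant: q^nu M_{nu alpha rho sigma} = 0
print(np.abs(np.einsum('n,nars->ars', qu, M)).max())   # zero to machine precision
```

Each of the four parameter structures is transverse to $q^{\nu}$ separately, so the contraction vanishes for any choice of the $b$'s.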
As mentioned before, these equations are not independent, and the four equations \(q^{\nu}E_{\nu\alpha}\) are satisfied identically. To find further redundancies, it is helpful to further decompose these equations into four irreducible parts. For this purpose, we first contract with \(q^{\alpha}\), which yields the longitudinal part
\begin{equation}\label{eq:wavelong}
E_{\nu\alpha}q^{\alpha} = 2\sqrt{-g}(b_1 - b_3)q^{\alpha}q^{\beta}\varphi_{\beta[\alpha}q_{\nu]}\,.
\end{equation}
Remarkably, the antisymmetric, transverse part
\begin{equation}\label{eq:waveanti}
E^{\beta\gamma}\epsilon_{\beta\gamma\nu\alpha}q^{\alpha} = 4(2b_4 - b_6)q^{\alpha}q^{\beta}\varphi_{\beta[\alpha}q_{\nu]}
\end{equation}
is of the same form, but is the only part which originates from the parity-violating terms. The trace of the field equations reads
\begin{equation}\label{eq:wavetrace}
E^{\alpha}{}_{\alpha} = \sqrt{-g}[(b_1 - 3b_3)\varphi^{\alpha}{}_{\alpha}q^2 - (b_1 + 3b_2)\varphi^{\alpha\beta}q_{\alpha}q_{\beta}]\,.
\end{equation}
We are then left with the transverse and trace-free part of the field equations, which reads
\begin{multline}\label{eq:wavesym}
3q^2E_{(\nu\alpha)} - 3q^{\beta}q_{(\nu}E_{\alpha)\beta} + E^{\beta}{}_{\beta}(q_{\nu}q_{\alpha} - q^2g_{\nu\alpha}) =\\
\sqrt{-g}b_1\left\{6q^2q^{\beta}q_{(\nu}\varphi_{\alpha)\beta} - q_{\nu}q_{\alpha}(q^2\varphi^{\beta}{}_{\beta} + 2q_{\beta}q_{\gamma}\varphi^{\beta\gamma}) - 3(q^2)^2\varphi_{\nu\alpha} + [(q^2)^2\varphi^{\beta}{}_{\beta} - q^2q_{\beta}q_{\gamma}\varphi^{\beta\gamma}]g_{\nu\alpha}\right\}
\end{multline}
and, again remarkably, depends on \(b_1\) only. Note that each of these equations must be satisfied individually.
We now find the aforementioned redundancy of the field equations, which is apparent from the fact that the longitudinal part~\eqref{eq:wavelong} and the antisymmetric transverse part~\eqref{eq:waveanti} are identical up to a constant factor, which means that only one of them counts toward the number of independent equations. These are four equations, since there is one free index; however, they are not independent, since their contraction with \(q^{\nu}\) vanishes identically. Hence, we keep three equations, and have eliminated three further redundant equations, in addition to the four equations already found for the general constitutive relation. In total we have thus eliminated seven of the original 16 equations. The remaining nine equations are the trace equation~\eqref{eq:wavetrace}, the five independent components of the symmetric, transverse, trace-free equation~\eqref{eq:wavesym} and the three independent components mentioned before. Since \(\varphi^{\rho\sigma}\) has 10 independent components, it thus follows that there must be a gauge freedom eliminating one of them. This can most easily be seen from an Ansatz of the form
\begin{equation}
\varphi^{\rho\sigma} = Ug^{\rho\sigma} + Vq^{\rho}q^{\sigma} + W^{(\rho}q^{\sigma)} + \tilde{\varphi}^{\rho\sigma}\,,
\end{equation}
where
\begin{equation}
W^{\rho}q_{\rho} = 0\,, \quad
\tilde{\varphi}^{[\rho\sigma]} = 0\,, \quad
\tilde{\varphi}^{\rho}{}_{\rho} = 0\,, \quad
\tilde{\varphi}^{\rho\sigma}q_{\sigma} = 0\,.
\end{equation}
Inserting this ansatz into the field equations~\eqref{eq:wavefull}, we find that they reduce to
\begin{equation}\label{eq:wavedecom}
\sqrt{-g}\left\{2\left[(b_1 + 4b_2 + b_3)U + (b_2 + b_3)q^2V\right](q_{\alpha}q_{\nu} - q^2g_{\alpha\nu}) - \frac{b_1 - b_3}{2}q^2q_{\alpha}W_{\nu} - b_1q^2\tilde{\varphi}_{\nu\alpha}\right\} - \frac{2b_4 - b_6}{2}\epsilon_{\alpha\nu\rho\sigma}q^2q^{\rho}W^{\sigma} = 0\,,
\end{equation}
while the decomposed equations take the form
\begin{align}
\sqrt{-g}q^2[(b_1 + 4b_2 + b_3)U + (b_2 + b_3)q^2V] &= 0\,, &
\sqrt{-g}(b_1 - b_3)(q^2)^2W_{\alpha} &= 0\,, \nonumber\\
\sqrt{-g}b_1(q^2)^2\tilde{\varphi}_{\alpha\beta} &= 0\,, &
(2b_4 - b_6)(q^2)^2W_{\alpha} &= 0\,,
\end{align}
up to constant, numerical factors. We see that the scalar, vector and tensor modes decouple and that the equations~\eqref{eq:wavedecom} possess the gauge freedom
\begin{equation}
U \rightarrow U + \lambda(b_2 + b_3)q^2\,, \quad
V \rightarrow V - \lambda(b_1 + 4b_2 + b_3)\,,
\end{equation}
and hence
\begin{equation}
\varphi^{\rho\sigma} \rightarrow \varphi^{\rho\sigma} + \lambda[(b_2 + b_3)q^2g^{\rho\sigma} - (b_1 + 4b_2 + b_3)q^{\rho}q^{\sigma}]\,.
\end{equation}
This removes one of the two scalar degrees of freedom. The second scalar mode does not propagate either, since the corresponding terms in the field equations~\eqref{eq:wavedecom} take the form of a constraint equation. Further, we find that non-trivial solutions for the remaining modes are obtained only for \(q^2 = 0\), so that all wave solutions must propagate along the null directions of the metric \(g_{\alpha\beta}\), i.e., on its light cone.
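The invariance of the scalar sector under this shift can be verified symbolically. As a small illustration (ours, not part of the original derivation), the following sympy snippet checks that the scalar combination $(b_1 + 4b_2 + b_3)U + (b_2 + b_3)q^2V$ appearing in the decomposed field equations is unchanged by the shift:

```python
import sympy as sp

# Symbols: constitutive parameters, q^2, gauge parameter, scalar amplitudes
b1, b2, b3, q2, lam, U, V = sp.symbols('b1 b2 b3 q2 lam U V')

# Scalar combination appearing in the decomposed field equations
S = (b1 + 4*b2 + b3)*U + (b2 + b3)*q2*V

# Apply the gauge shift U -> U + lam (b2+b3) q^2, V -> V - lam (b1+4b2+b3)
S_shifted = S.subs([(U, U + lam*(b2 + b3)*q2),
                    (V, V - lam*(b1 + 4*b2 + b3))], simultaneous=True)

# The shift drops out identically, for arbitrary constitutive parameters
assert sp.expand(S_shifted - S) == 0
```

The cancellation holds identically in the parameters, confirming that the residual scalar amplitude is pure gauge for any choice of constitutive coefficients.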
We remark that a particular case is given by theories whose parameters satisfy \(b_1 = b_3\) and \(2b_4 = b_6\). In this case the field equations~\eqref{eq:wavefull} and hence also~\eqref{eq:wavedecom} are symmetric and the vector mode \(W^{\rho}\) does not contribute. This is in particular the case for CGR. We thus find that the only propagating mode is the transverse, traceless tensor mode, as expected.
\subsection{Perturbations}
Consider the perturbations $\delta g_{\mu\nu}$ of the flat metric $\eta_{\mu\nu}$,
\begin{equation}
g_{\mu\nu} = \eta_{\mu\nu} + \delta g_{\mu\nu}\,.
\end{equation}
Using the 1+3 decomposition familiar from cosmological perturbation theory, we decompose the perturbation $\delta g_{\mu\nu}$ into scalars $\phi$, $\psi$, $\beta$, $\sigma$, transverse vectors $B_i$, $E_i$, and transverse and traceless tensors $h_{ij}$ as follows:
\begin{equation}
\delta g_{00} = -2\phi\,, \quad \delta g_{0i} = -\beta_{,i} + B_i\,, \quad \delta g_{ij} = -2\psi\delta_{ij} + \sigma_{,ij}-\frac{1}{3}\nabla^2\sigma\delta_{ij} + 2 E_{(i,j)} + 2h_{ij}\,.
\end{equation}
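As a consistency check of this split (our own illustration, not part of the paper's derivation), one can verify numerically that an arbitrary symmetric $3\times3$ perturbation on a single Fourier mode is reproduced exactly by the sum of its scalar, vector and tensor parts; factors of $i$ and $k$ from spatial derivatives are absorbed into the amplitudes, so e.g. the $\sigma$-type amplitude below stands for $-k^2\sigma$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unit wavevector of a single Fourier mode, and the transverse projector
k = rng.normal(size=3); k /= np.linalg.norm(k)
P = np.eye(3) - np.outer(k, k)

# Random symmetric 3x3 "spatial metric perturbation"
M = rng.normal(size=(3, 3)); M = M + M.T

# Scalar parts: trace and longitudinal-traceless amplitude
tr = np.trace(M)
A = k @ M @ k
psi = -tr / 6                      # coefficient of -2 delta_ij
sigma_amp = (A - tr / 3) * 3 / 2   # coefficient of (k_i k_j - delta_ij / 3)

# Transverse vector amplitude (coefficient of k_(i V_j))
V = P @ (M @ k)

# Transverse-traceless part via the standard TT projector
h = (np.einsum('ik,jl,kl->ij', P, P, M)
     - 0.5 * P * np.einsum('kl,kl->', P, M)) / 2

# Completeness: the four pieces reconstruct M exactly (6 = 1+1+2+2 components)
M_rec = (-2*psi*np.eye(3)
         + sigma_amp*(np.outer(k, k) - np.eye(3)/3)
         + np.outer(k, V) + np.outer(V, k)
         + 2*h)
assert np.allclose(M, M_rec)
```

The four projections are mutually orthogonal on symmetric matrices, which is why the reconstruction is exact for any input.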
We shall compute the linearised field equations in vacuum. Since $t^\mu{}_\nu$ is of quadratic order, it is sufficient to consider the equation $\nabla_\alpha H^{\alpha\mu}{}_\nu=0$.
The energy-momentum can be computed from
\begin{eqnarray}
H^{i0}{}_0 & = & -\left( b_1+b_2\right) \phi^{,i} + \left( 3b_2+b_3\right)\psi^{,i} + \frac{1}{2}\left( b_1-b_3\right)\dot{\beta}^{,i} +
\frac{1}{3}b_3\nabla^2\sigma^{,i} \nonumber \\ & + & \frac{1}{2}\left( b_1-b_3\right) \dot{B}^i + \frac{1}{2}\left( b_6-2b_7\right)\epsilon^{ijk}B_{j,k} + \frac{1}{2}b_3\nabla^2 E^i\,, \\
H^{i0}{}_j & = & -\delta^i_j\left( b_2+b_3\right)\dot{\phi} + \delta^i_j\left( b_1+3b_2\right)\dot{\psi}
+\left( 2b_5+b_6\right)\epsilon^i{}_j{}^k\phi_{,k}-\left( 2b_4+6b_5+b_6+2b_7\right)\epsilon^i{}_j{}^k\psi_{,k} \nonumber \\ & + & \frac{1}{2}b_1\partial^i\partial_j\beta - \frac{1}{2}b_3\delta^i_j\nabla^2\beta -\frac{1}{2} \left( 2b_4 - b_6\right)\epsilon^i{}_j{}^k\dot{\beta}_{,k} \nonumber \\
& - & \frac{1}{2}b_1\left( \partial^i\partial_j-\frac{1}{3}\delta^i_j\nabla^2\right)\dot{\sigma} + \frac{1}{6}\left( 4b_4-b_6-2b_7\right)\epsilon^i{}_j{}^k\nabla^2\sigma_{,k} \nonumber \\
& + & \frac{1}{2}b_1 B_j{}^{,i} - b_4\epsilon^i{}_j{}^k\dot{B}_k - b_1\dot{E}^{(i}{}_{,j)} + b_4\epsilon^i{}_j{}^k\nabla^2 E_k - \frac{1}{2}b_6\epsilon_j{}^{kl}E_{k,l}{}^{,i}
+ b_7\epsilon^{ikl}E_{k,jl} \nonumber \\
& - & b_1 \dot{h}^i{}_j - b_6 \epsilon_j{}^{kl}h^i{}_{k,l} + 2b_7\epsilon^{ikl}h_{jk,l}\,.
\end{eqnarray}
The field equations are $\nabla_\alpha H^{\alpha\mu}{}_\nu=0$, where
\begin{eqnarray}
\nabla_\mu H^{\mu 0}{}_0 & = & -\left( b_1+b_2\right)\nabla^2\phi + \left( 3b_2+b_3\right)\nabla^2\psi + \frac{1}{2}\left( b_1-b_3\right)\nabla^2\dot{\beta}
-\frac{1}{3}b_3\nabla^4\sigma\,, \label{efe1} \\
\nabla_\mu H^{\mu 0}{}_i & = & -\left( b_2+b_3\right)\dot{\phi}_{,i} + \left( b_1+3b_2\right)\dot{\psi}_{,i} + \frac{1}{2}\left( b_1-b_3\right)\nabla^2\beta^{,i} -\frac{1}{3}b_1\nabla^2\dot{\sigma}_{,i} \nonumber \\
& + & \frac{1}{2}b_1 \nabla^2\left( B_i -\dot{E}_i\right) - \left( b_4 - \frac{1}{2}b_6\right)\epsilon_i{}^{kl}\left( \dot{B}_{k,l} - \nabla^2 E_{k,l}\right) \label{efe2}
\,, \\
\nabla_\mu H^{\mu i}{}_j & = & \left( b_2+b_3\right)\delta^i_j\ddot{\phi}+ b_2\left( \partial^i\partial_j -\delta^i_j\nabla^2\right)\phi - \left( b_1 + 3b_2 \right)\delta^i_j\ddot{\psi} - \left( b_1+3b_2+b_3\right)\left(\partial^i\partial_j-\delta^i_j\nabla^2\right)\psi \nonumber \\ & + & \left( 2b_4-b_6\right)\epsilon^i{}_j{}^k\left(\dot{\phi}_{,k}+\dot{\psi}_{,k}\right) \nonumber \\
& - & \frac{1}{2}\left( b_1+b_3\right)\partial^i\partial_j \dot{\beta}+ b_3\delta^i_j\nabla^2\dot{\beta} - \left( b_4-\frac{1}{2}b_6\right) \epsilon^i{}_j{}^k\left(\ddot{\beta}_{,k} + \nabla^2\beta_{,k}\right) \nonumber \\
& - & \frac{1}{6}\left( b_1-2b_3\right)\left( \partial^i\partial_j -\delta^i_j\nabla^2\right)\nabla^2\sigma
+ \frac{1}{6}b_1\left( 3\partial^i\partial_j - \delta^i_j\nabla^2\right)\ddot{\sigma} - \frac{1}{3}\left( 2b_4-b_6\right)\epsilon^i{}_j{}^k\nabla^2\dot{\sigma}_{,k} \nonumber \\
& - & \frac{1}{2}\left( b_1\dot{B}^{i}{}_{,j} + b_3\dot{B}_j{}^{,i}\right) -\frac{1}{2}\left( b_1-b_3\right) \nabla^2 E_j{}^{,i} + b_1\ddot{E}^i{}_{,j} + \left( b_4-\frac{1}{2}b_6\right) \epsilon^i{}_j{}^k\left(\ddot{B}_k - \nabla^2 \dot{E}_k\right) \nonumber \\ & - & b_1 \Box h^i{}_j\,. \label{efe3}
\end{eqnarray}
Note that these in general have also antisymmetric components, given as
\begin{eqnarray}
\left(\nabla_\mu H^{\mu [0}{}_j\right) g^{i]j} & = & \frac{1}{2}\left( b_1-b_3\right)\left( \dot{\phi}^{,i}+\dot{\psi}^{,i}+ \ddot{\beta}^{,i} + \nabla^2\beta^{,i}-\frac{1}{3}\nabla^2\dot{\sigma}^{,i} + \ddot{B}^i -\nabla^2 \dot{E}^i \right) \nonumber \\
& - & \left( 2b_4-b_6\right)\epsilon^{ikl}\left(\dot{B}_{k,l} - \nabla^2 E_{k,l}\right) \,, \\
\left(\nabla_\mu H^{\mu [i}{}_j \right) g^{j]k} & = & \left( 2b_4-b_6\right) \epsilon^{ijk}\left( \dot{\phi}_{,k}+\dot{\psi}_{,k} + \frac{1}{2}\ddot{\beta}_{,k} + \frac{1}{2}\nabla^2\beta_{,k} -\frac{1}{3}\nabla^2\dot{\sigma}_{,k} + \ddot{B}_k - \nabla^2 \dot{E}_k\right) \nonumber \\
& + & \left( b_1-b_3\right)\left( \dot{B}^{[i,j]}-\nabla^2 E^{[i,j]} \right)\,.
\end{eqnarray}
Since the scalars, vectors and tensors are decoupled at the linear order, we can focus on each sector separately.
The transverse-traceless perturbations $h_{ij}$ are the simplest. We see that as long as $b_1 \neq 0$, there are tensor perturbations that propagate on the light cone, obeying the usual wave equation $\Box h_{ij}=0$.
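As a quick numerical illustration of null propagation (ours, purely pedagogical), a right-moving packet evolved with the flat-space wave equation $\Box h = 0$ in $1+1$ dimensions travels at unit speed, i.e. on the light cone:

```python
import numpy as np

# Leapfrog integration of h_tt = h_xx (units with c = 1)
N, Lx = 1600, 40.0
dx = Lx / N
dt = 0.5 * dx                          # CFL ratio 1/2: stable
x = np.linspace(0.0, Lx, N, endpoint=False)

f = lambda z: np.exp(-(z - 10.0)**2)   # right-mover: h(t, x) = f(x - t)

h_prev = f(x)                          # h at t = 0
h = f(x - dt)                          # h at t = dt (exact d'Alembert data)
r2 = (dt / dx)**2

T = 15.0
for _ in range(int(round(T / dt)) - 1):
    h_next = np.empty_like(h)
    h_next[1:-1] = 2*h[1:-1] - h_prev[1:-1] + r2*(h[2:] - 2*h[1:-1] + h[:-2])
    h_next[0] = h_next[-1] = 0.0       # packet never reaches the boundary
    h_prev, h = h, h_next

# After time T the peak has moved from x = 10 to x = 10 + T
assert abs(x[np.argmax(h)] - 25.0) < 0.2
```

The measured peak position agrees with propagation at the speed of light to within the grid resolution, as the dispersion relation $q^2 = 0$ requires.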
As expected from the analysis of the characteristic equation, the parameters $b_5$ and $b_7$ do not enter the field equations. We see that the antisymmetric components of the field equations vanish when the longitudinal and the antisymmetric transverse parts of the characteristic equation are set to zero. In the following we consider only the subset of constitutive relations which yield symmetric field equations; thus we set $b_3=b_1$ and $b_6=2b_4$.
Then the equations of motion for the vector perturbations reduce to $\nabla^2 V^i=0$, $\dot{V}^i=0$, where $V^i=B^i-\dot{E}^i$ is the gauge-invariant combination of the two transverse 3-vectors. These equations are the same as in General Relativity. Thus, we find that vector perturbations do not propagate in vacuum.
To study the system of four coupled scalar perturbations, let us consider the Fourier modes with frequency $q^0=\omega$ and wavevector $q^i=k^i$.
One readily sees that the two equations (\ref{efe1}) and (\ref{efe2}) then become redundant.
Using one of them, together with the trace and the off-diagonal part of (\ref{efe3}), we obtain the three equations
\begin{eqnarray}
0 & = & \left( b_1+b_2\right) \phi - \left( b_1+3b_2\right) \psi + \frac{1}{3}b_1 \hat{\sigma}\,, \\
0 & = & -3\left( b_1+b_2\right)\omega^2\phi + 2b_2 k^2\phi + 3\left( b_1+3b_2\right)\omega^2\psi - 2\left( 2b_1+3b_2\right) k^2\psi -2b_1 k^2\hat{\beta} + \frac{1}{3}b_1 k^2\hat{\sigma}\,, \\
0 & = & b_2\phi-\left( 2b_1+3b_2\right)\psi - b_1\hat{\beta} + \frac{1}{6}b_1\hat{\sigma} + \frac{1}{2}b_1\frac{\omega^2}{k^2}\hat{\sigma}\,,
\end{eqnarray}
where we defined $\hat{\beta}=i\omega\beta$, $\hat{\sigma}=-k^2\sigma$. However, only two of the three equations above are independent. Thus we have only two equations for
four variables. This, nevertheless, is sufficient because there are now two gauge invariances, say $X$ and $Y$,
\begin{eqnarray}
\phi & \rightarrow & \phi - \frac{b_1+3b_2}{2\left( b_1+2b_2\right)}X\,, \quad \psi \rightarrow \psi - \frac{b_1+b_2}{2\left( b_1+2b_2\right)}X\,, \quad \beta \rightarrow \beta + X\,, \\
\phi & \rightarrow & \phi +\frac{\left( b_1+3b_2\right)\omega^2-\left( b_1+b_2\right) k^2}{4\left( b_1+2b_2\right) k^2} Y\,, \quad
\psi \rightarrow \psi +\frac{3\left( b_1+b_2\right)\omega^2+\left( b_1-b_2\right) k^2}{12\left( b_1+2b_2\right) k^2} Y\,, \quad
\hat{\sigma} \rightarrow \hat{\sigma} + Y\,.
\end{eqnarray}
We can therefore eliminate any two of the variables, and solve for the rest from the above system. If $b_1=0$, the system is underdetermined, but otherwise we find a trivial dispersion relation, i.e. no propagating scalar modes in vacuum.
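Both claims, that only two of the three scalar equations are independent, and that the residual transformations are genuine gauge invariances, can be checked symbolically. The following sympy sketch (our own illustration; $\hat\beta$, $\hat\sigma$ are the hatted variables defined above) verifies that the second equation is the combination $-3\omega^2$ times the first plus $2k^2$ times the third, and that the first equation is invariant under both $X$ and $Y$ shifts:

```python
import sympy as sp

b1, b2, w, k, X, Y = sp.symbols('b1 b2 omega k X Y')
phi, psi, bh, sh = sp.symbols('phi psi betahat sigmahat')

# The three reduced scalar equations (each set to zero)
E1 = (b1 + b2)*phi - (b1 + 3*b2)*psi + b1*sh/3
E2 = (-3*(b1 + b2)*w**2 + 2*b2*k**2)*phi + (3*(b1 + 3*b2)*w**2
     - 2*(2*b1 + 3*b2)*k**2)*psi - 2*b1*k**2*bh + b1*k**2*sh/3
E3 = b2*phi - (2*b1 + 3*b2)*psi - b1*bh + b1*sh/6 + b1*w**2/(2*k**2)*sh

# Linear dependence: E2 is a combination of E1 and E3
assert sp.simplify(E2 - (-3*w**2*E1 + 2*k**2*E3)) == 0

# E1 is invariant under both residual gauge transformations
shiftX = [(phi, phi - (b1 + 3*b2)/(2*(b1 + 2*b2))*X),
          (psi, psi - (b1 + b2)/(2*(b1 + 2*b2))*X)]
shiftY = [(phi, phi + ((b1 + 3*b2)*w**2 - (b1 + b2)*k**2)/(4*(b1 + 2*b2)*k**2)*Y),
          (psi, psi + (3*(b1 + b2)*w**2 + (b1 - b2)*k**2)/(12*(b1 + 2*b2)*k**2)*Y),
          (sh, sh + Y)]
assert sp.simplify(E1.subs(shiftX, simultaneous=True) - E1) == 0
assert sp.simplify(E1.subs(shiftY, simultaneous=True) - E1) == 0
```

The same substitutions applied to the third equation give the second independent invariant combination, so the counting of two equations for two gauge-inequivalent variables goes through.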
In summary, the five-parameter class of theories with $b_3=b_1$ and $b_6=2b_4$ has the same field content in vacuum as CGR, and thus, to the leading order, this class of theories
is perfectly viable. In contrast, most of the parameter space of the Newer General Relativity theory can be ruled out already at the leading order due to the appearance of dangerous extra degrees of freedom\footnote{The situation is similar for New General Relativity \cite{Karananas:2014pxa,Blagojevic:2018dpz} and its generalisations \cite{Koivisto:2018loq}. Yet, we should remark that the absence of pathological degrees of freedom in the linear fluctuations does not guarantee the viability of the theory. In particular, strongly coupled degrees of freedom seem to be a generic flaw in modified (metric or symmetric) teleparallel gravity theories \cite{Cheng:1988zg,Jimenez:2019ovq,Ferraro:2018tpu,Ferraro:2018axk,Jimenez:2019tkx}.} \cite{BeltranJimenez:2017tkd,Conroy:2017yln}. It could be interesting to study further this novel class of theories, which cannot (at least in any straightforward way) be derived from a Lagrangian. At a nonlinear order one should also take into account the potential constitutive relation, which in the general metric case includes 7 additional parameters.
From the above system we can confirm that when $b_1=-b_2=b_3=1$, we have $\nabla_\alpha H^{\alpha\mu}{}_\nu=2\nabla_\alpha {P}^{\alpha\mu}{}_\nu=\tau^\mu{}_\nu$ (setting
$2c_1=-2c_3=-c_2=c_5=-1/2$), where
\begin{subequations}
\begin{eqnarray}
\tau^0{}_0 & = & -2\nabla^2\varphi\,, \\
\tau^0{}_i & = & -2\dot{\varphi}_{,i} + \frac{1}{2}\nabla^2 V_i\,,\\
\tau^i{}_j & = & \left( -\nabla^2\phi + 2\ddot{\varphi} +\nabla^2\varphi + \nabla^2\dot{\beta}\right)\delta^i_j +\left(-\phi+\varphi - \dot{\beta} + \frac{1}{2}\ddot{\sigma}\right){}^{,i}{}_{,j} - \dot{V}^{(i}{}_{,j)} + \ddot{h}^i{}_j - \nabla^2h^i{}_j\,.
\end{eqnarray}
\end{subequations}
Here, $\varphi$ is shorthand for $\varphi=\psi-\frac{1}{6}\nabla^2\sigma$.
\subsection{Covariant conservation}
In theories that have a Lagrangian formulation, the concept of conservation is well understood. If the matter couples only to the metric and to no other gravitational fields (in particular, not to a connection with torsion), and we assume a diffeomorphism invariant matter action, then the matter energy-momentum will satisfy the usual metric-covariant conservation law. Even if matter does couple to other gravitational fields, a generalised conservation law will hold, which can be derived in just the same way from diffeomorphism invariance, by looking at a variation $\delta\Phi = \mathcal{L}_{\boldsymbol{{\xi}}} \Phi$ of all gravitational fields $\Phi$ given by the Lie derivative with respect to an arbitrary vector field $\boldsymbol{{\xi}}$. Then, if the dynamics of the gravity theory is also described by a diffeomorphism invariant action, it will satisfy an equivalent of the Bianchi identities, and the gravitational field equations relate these generalised Bianchi identities to the conservation of matter energy-momentum \cite{Koivisto:2005yk}.
Since only one special case of the 14-parameter theory studied above admits a Lagrangian formulation, the issue of conservation is a crucial one and needs to be properly addressed. The key point is that the consistency condition for the covariant conservation of matter, in particular $\mathcal{D}_\mu T^\mu{}_\nu =0$, determines the equation of motion for the connection. Thus, we still assume that the matter sector of the theory has a Lagrangian formulation, or at least that the matter fields obey the usual metric-covariant conservation law, so that diffeomorphism invariance holds in this effective sense. It would be possible to consider some more general situation, but that would be non-canonical (as arbitrary prescriptions would be required, e.g. for the connection equation of motion) and not in line with the principles of our axiomatic approach (where the starting point is conservation).
We begin with the field equation for the metric in the 14-parameter theory,
\begin{equation} \label{pre-efe}
\nabla_\alpha H^{\alpha\mu}{}_\nu = T^\mu{}_\nu + P^\mu{}_{\alpha\beta}Q_\nu{}^{\alpha\beta} - \frac{1}{2}\delta^\mu_\nu P^\gamma{}_{\alpha\beta}Q_\gamma{}^{\alpha\beta}\,.
\end{equation}
From this we derive the connection equation of motion by studying the matter conservation
\begin{equation} \label{ccoem}
\mathcal{D}_\mu T^\mu{}_\nu = \mathcal{D}_\mu\nabla_\alpha H^{\alpha\mu}{}_\nu - \mathcal{D}_\mu \left( P^\mu{}_{\alpha\beta}Q_\nu{}^{\alpha\beta}\right) + \frac{1}{2}\partial_\nu\left( P^\gamma{}_{\alpha\beta}Q_\gamma{}^{\alpha\beta}\right)\,.
\end{equation}
The metric-covariant derivatives are easily rewritten in terms of our commuting derivatives by noting that for any $(1,1)$-covariant tensor density $X^\mu{}_\nu$ we have (for any torsion-free connection, in fact)
\begin{equation} \label{mcova}
\mathcal{D}_\mu X^\mu{}_\nu = \nabla_\mu X^\mu{}_\nu + L^\alpha{}_{\mu\nu}X^\mu{}_\alpha - L^\mu{}_{\mu\alpha}X^\alpha{}_\nu - \frac{1}{2}Q_\mu X^\mu{}_\nu
= \nabla_\mu X^\mu{}_\nu + L^\alpha{}_{\mu\nu}X^\mu{}_\alpha \,,
\end{equation}
where in the second equality we have taken into account that $L^\mu{}_{\mu\alpha}=-\frac{1}{2}Q_\alpha$, since
\begin{equation} \label{disformation}
L^\alpha{}_{\mu\nu} = \frac{1}{2}Q^\alpha{}_{\mu\nu} - Q_{(\mu}{}^\alpha{}_{\nu)}\,.
\end{equation}
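Both trace identities of the disformation, the one above and the symmetrised identity $L_{(\alpha\beta)\nu}=-\frac{1}{2}Q_{\nu\alpha\beta}$ used further below, follow from (\ref{disformation}) by index gymnastics. A quick numerical check with random tensors (our own illustration, with the conventions $Q_{\alpha\mu\nu}$ symmetric in its last two indices and $Q_\alpha = Q_{\alpha\mu\nu}g^{\mu\nu}$):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

# Random metric (symmetric, invertible) and nonmetricity Q_{a mu nu},
# symmetric in its last two indices
g = rng.normal(size=(n, n)); g = g + g.T + 10*np.eye(n)
ginv = np.linalg.inv(g)
Q = rng.normal(size=(n, n, n)); Q = (Q + Q.transpose(0, 2, 1)) / 2

# Disformation L^a_{mn} = 1/2 Q^a_{mn} - Q_(m ^a _n)
Qup = np.einsum('ab,bmn->amn', ginv, Q)           # Q^a_{mn}
Qmid = np.einsum('ab,mbn->amn', ginv, Q)          # Q_m{}^a{}_n, stored as [a,m,n]
L = Qup/2 - (Qmid + Qmid.transpose(0, 2, 1))/2    # symmetrise over (m, n)

# Trace identity: L^m_{m a} = -1/2 Q_a
Qvec = np.einsum('amn,mn->a', Q, ginv)
assert np.allclose(np.einsum('mma->a', L), -Qvec/2)

# Symmetrised identity: L_{(ab)n} = -1/2 Q_{n a b}
Ldown = np.einsum('am,mbn->abn', g, L)            # lower the first index
assert np.allclose((Ldown + Ldown.transpose(1, 0, 2))/2,
                   -np.einsum('nab->abn', Q)/2)
```

Since the identities are purely algebraic in $Q$ and $g$, agreement for random data confirms them for arbitrary configurations.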
Thus the first term in (\ref{ccoem}) is very simple,
\begin{equation}
\mathcal{D}_\mu\nabla_\alpha H^{\alpha\mu}{}_\nu = L^\beta{}_{\mu\nu}\nabla_\alpha H^{\alpha\mu}{}_\beta\,.
\end{equation}
However, in the following it is useful to rewrite this as
\begin{equation} \label{part1}
\mathcal{D}_\mu\nabla_\alpha H^{\alpha\mu}{}_\nu = -2 L^\beta{}_{\mu\nu} \nabla_\alpha P^{\alpha\mu}{}_\beta + \Delta_\nu =
-2L_{\beta\mu\nu} \nabla_\alpha P^{\alpha\mu\beta} - 2L^\alpha{}_{\beta\nu} Q_{\mu\alpha\gamma}P^{\mu\beta\gamma} + \Delta_\nu\,.
\end{equation}
In the first equality we replaced the derivative of the kinetic excitation with the derivative of the potential excitation, denoting the difference of the corresponding terms as
\begin{equation} \label{deltanu}
\Delta_\nu = L^\beta{}_{\mu\nu} \nabla_\alpha\left( H^{\alpha\mu}{}_\beta + 2P^{\alpha\mu}{}_\beta\right)\,,
\end{equation}
and in the second equality we just raised the last index of the tensor inside the derivative. The second term in (\ref{ccoem}) can be expanded into three pieces, using again (\ref{mcova}),
\begin{equation} \label{part2}
\mathcal{D}_\mu \left( P^\mu{}_{\alpha\beta}Q_\nu{}^{\alpha\beta}\right) = \left(\nabla_\mu P^{\mu\alpha\beta}\right) Q_{\nu\alpha\beta} + P^{\mu\alpha\beta}\left( \nabla_\mu Q_{\nu\alpha\beta}\right)
+ L^\lambda{}_{\mu\nu} P^\mu{}_{\alpha\beta}Q_\lambda{}^{\alpha\beta}\,.
\end{equation}
Now we note that the first terms in (\ref{part1}) and (\ref{part2}) enter the conservation equation (\ref{ccoem}) in the combination
\begin{equation}
\left( \nabla_\mu P^{\mu\alpha\beta} \right)\left( Q_{\nu\alpha\beta} + 2L_{\alpha\beta\nu}\right) = 0\,,
\end{equation}
because $L_{(\alpha\beta)\nu}=-\frac{1}{2}Q_{\nu\alpha\beta}$, as seen from (\ref{disformation}). We may thus drop those two terms. Let us then consider the remaining three terms (forgetting
$\Delta_\nu$ for the moment) that we obtain by subtracting (\ref{part2}) from (\ref{part1}). By mere index rearranging, we can sum those three terms together and obtain
\begin{equation}
-P^{\mu\alpha\beta}\left( \nabla_\mu Q_{\nu\alpha\beta} + 2L^{\gamma}{}_{\alpha\nu}Q_{\mu\gamma\beta} + L^{\gamma}{}_{\mu\nu}Q_{\gamma\alpha\beta}\right)
= -P^{\mu\alpha\beta}\mathcal{D}_\nu Q_{\mu\alpha\beta} = -\frac{1}{2}\partial_\nu \left( P^{\mu\alpha\beta} Q_{\mu\alpha\beta}\right)\,.
\end{equation}
In the second step we used the property of the commuting covariant derivative that $\nabla_{[\mu}Q_{\nu]\alpha}{}^{\beta}=0$, and then identified the metric-covariant derivative in analogy with the formula (\ref{mcova}). The third step
follows from basic properties of the metric-covariant derivative. The result neatly cancels the remaining piece in (\ref{ccoem}), and thus we have finally arrived at
\begin{equation}
\mathcal{D}_\mu T^\mu{}_\nu = \Delta_\nu\,.
\end{equation}
This establishes that the equation of motion for the connection in Premetric Newer General Relativity is given by $\Delta_\nu=0$, where $\Delta_\nu$ was defined in (\ref{deltanu}).
Recall that the corresponding equation in Newer General Relativity is given by $\nabla_\mu\nabla_\alpha P^{\alpha\mu}{}_\nu=0$. CGR is the singular case that belongs to the intersection of those two classes of theories, and it is also the unique theory within either class wherein the equation of motion for the connection trivialises.
Finally, we should note that imposing $\Delta_\nu=0$ may change the conclusions of the three previous subsections, since the conservative versions of the
14-parameter non-conservative theories can exhibit differences already at the linear order.
\begin{figure}[h]
\begin{tikzpicture}[every text node part/.style={align=center}]
\node[ellipse,draw,label=above:{charge}] (nE) {$E_a$};
\node[ellipse,draw,label=above:{current: matter}](nJ) [right=of nE,xshift=2cm] {$\boldsymbol{\mathrm{J}}_a=\mathbf{T}_a$};
\draw[->] (nJ)-- node [above,midway] {$\iiint \boldsymbol{\mathrm{J}}_a$} (nE);
\node[ellipse,draw,label=above:{gravity}](nT) [right=of nJ] {$\mathbf{t}^a$};
\draw[-] (nJ)-- node [above,midway,yshift=0.15cm] {$+$} (nT);
\node[ellipse,draw,label=above:{conjugate}](nX) [left=of nE,xshift=-2cm] {$x^a = X^{ab}E_b$};
\draw[->] (nE) -- node [above,midway] {$\left[ x^a, E_b\right] = i\hbar\delta^a_b$} node [below,midway] {?} (nX);
\node[ellipse,draw,label=below:{kinetic \\ exc.}] [below=of nJ] (nH) { $\boldsymbol{\mathrm{H}}_a$};
\node[ellipse,draw,label=below:{mass exc.},label=right:{$\Rightarrow$ symmetry \\ breaking}] [below=of nH] (nP) { $\boldsymbol{\mathrm{P}}_{ab}$};
\draw[<-] (nJ)-- (nH);
\draw[very thin][<-] (nT)-- (nP);
\node[ellipse,draw,label=left:{field strength}](nF) [below=of nX] {$\mathbf{F}^{ab}$};
\draw[<-] (nT)-- (nP);
\draw[->] (nX)-- (nF);
\node[rectangle,draw,label=above:{integrability \\ ${\text{\tiny{breaks at E$\sim$M?}}}$}] (n2) [right=of nF,xshift=2cm] {$\mathbf{F}^{ab}=0$};
\draw[dashed][-] (nF)-- (n2);
\node[rectangle,draw,label=above:{field eqn}](n1) [left=of nH,xshift=-1cm] {${{\rm d}}\boldsymbol{\mathrm{H}}_a=\boldsymbol{\mathrm{J}}_a$};
\draw[dashed][-] (nH)-- (n1);
\draw[dashed][-] (nJ)-- (n1);
\node[ellipse,draw,label=below:{potential}](nA) [below=of nF] {$\mathbf{A}^{ab}$};
\draw[->] (nF)-- (nA);
\draw[dashed][-] (nA)-- (n2);
\node[ellipse,draw,label=below:{premetric field}](nB) [below=of n2] {${{\rm d}} B^{ab}$};
\draw[->] (nA)-- node [above,midway] {$=$} (nB);
\draw[->] (n2)-- (nB);
\draw[very thick][->] (nB) -- node [above,midway] (c1) {$\chi$} (nH);
\draw[very thick][->] (nB) -- node [above,midway] (c2) {$\xi$} (nP);
\draw[<->] (c1) [xshift=10cm,yshift=-15cm] -- node [right,midway] {$\exists \boldsymbol{\mathrm{L}} \Rightarrow \underset{\text{\tiny{CGR}}}{(\chi,\xi)}$} (c2);
\node[ellipse,label=below:{}](nS) [right=of nP] {$$};
\node[ellipse,label=left:{scale}](nM) [below=of nT,yshift=-0.5cm] {$M$};
\draw[->] (nS)-- (nM);
\draw[->] (nM)-- (nT);
\end{tikzpicture}
\caption{A schematic figure illustrating the logical structure of the premetric construction of purified gravity. The constitutive relations are $\chi$ and $\xi$ (with indices omitted).
The kinetic excitation is related to the existence of a conserved current, and the mass excitation is related to the presence of a gravitational contribution to the current. At some level, the energy and momenta are conjugate to space and time. The way we set up the coordinates for the latter (or, the frame) is merely a convention. The choice (which may become, even in principle, impossible at the Planck scale) will affect our description of physics, but this effect has to be purely inertial and not a physical force. The gauge potential is thus given by a gauge transformation. Due to the nonzero mass excitation the gauge transformation becomes the dynamical Stueckelberg field. Thus, the constitutive relation $\xi$ turns $B^{ab}$ into a dynamical ``premetric field'', and in symmetrised conjunction with the constitutive relation $\chi$ filters from the field the properties of a metric in the unique fashion that is dictated by the requirement of an underlying action principle. \label{fig1}}
\end{figure}
\section{Conclusions and perspectives}
\label{conclusions}
The conclusion of this paper is given by Figure \ref{fig1} and the formula (\ref{two}), where the former illustrates the axiomatic deduction of the CGR purified gravity theory, and the
latter specifies its suggested extrapolation. In what follows we will discuss these conclusions at more length.
\subsection{Summary}
\subsubsection{Fundamental equations}
In the premetric program one foundational dichotomy is the separation of extensive (how many) and intensive (how strong) quantities.
Our starting point for the extensive objects, in the matter sector, was the conservation of energy and momenta, in particular, a 4-component conserved charge. For the intensive objects, in the gravitational sector, the main assumption was the integrability postulate, in particular, the vanishing of the field strength of a 16-component one-form potential. From this we arrived at a class of theories that have a very close analogy to the theory of massive electromagnetism (as was exhibited in Table \ref{table1}). The relation that was deduced between the extensive quantities,
\begin{itemize}
\item $\text{Fundamental Equation 1:} \quad \nabla_\alpha H^{\alpha\mu}{}_\nu = T^\mu{}_\nu + t^\mu{}_\nu$\,, \label{field_eq}
\end{itemize}
where now the presence of $t^\mu{}_\nu$ is the consequence of the symmetry breaking realised by the mass of the one-form potential, is the fundamental equation that in the end determines the dynamics of the fields.
The other fundamental equation is the integrability postulate that now determines the nature of the intensive quantities,
\begin{itemize}
\item $\text{Fundamental Equation 2:} \quad F^{\alpha\beta}{}_{\mu\nu} \approx 0\,.$ \label{integrability}
\end{itemize}
This means that the one-form does not propagate. However, due to the symmetry breaking the theory is not quite trivial. Analogously to pure-gauge Proca theory where the gauge field content reduces to one massless scalar,
in our generalisation of the pure-gauge massive field theory there remains a propagating massless tensor field, the premetric field. This structure of the theory is schematically illustrated in Fig. \ref{fig1}.
\subsubsection{Linking equations}
At this stage the theory was completely metric-free. It was also non-predictive, since the quantities appearing in the would-be dynamical Fundamental Equation 1 are undetermined. The theory is completed by establishing the
relations that link the extensive and the intensive quantities, called the constitutive relations. The intensive quantity in our case is the premetric field $B^{\mu\nu}$, and due to its Stueckelbergian origin it appears, at the pre-metric level,
only via its derivatives. Assuming a linear constitutive relation, the kinetic excitation appearing in the Fundamental Equation 1 is given as
\begin{itemize}
\item $\text{Linking Equation 1:} \quad H^{\alpha\mu}{}_\nu = \chi^{\alpha\mu}{}_{\nu}{}^{\beta}{}_{\rho\sigma}\nabla_\beta B^{\rho\sigma}\,.$ \label{link1}
\end{itemize}
The constitutive tensor is antisymmetric in its first two indices, but otherwise in principle arbitrary. In this study we restricted ourselves to relations which are also symmetric in the last two covariant indices. This way the desired metric properties are inherited by the premetric field. We considered two irreducible decompositions of the general relation, separating it into four and five irreducible
components, respectively. They determine the piece $t^\mu{}_\nu$ in the Fundamental Equation 1, whose form is given by (\ref{emt}), together with the potential constitutive relation,
\begin{itemize}
\item $\text{Linking Equation 2:} \quad P^{\alpha}{}_{\mu\nu} = \xi^{\alpha}{}_{\mu\nu}{}^{\beta}{}_{\rho\sigma}\nabla_\beta B^{\rho\sigma}\,.$ \label{link2}
\end{itemize}
Again, we restricted ourselves to relations in which the two pairs of covariant indices are symmetrised. We then recognised six irreducible components of the relation: symmetric and antisymmetric principal
components, which are both reversible; symmetric and antisymmetric skewon components, which are both irreversible; and an axion component which is further reduced to the reversible part and the irreversible part; recall Table \ref{nomenclature}.
The total number of independent components, assuming different conditions on the constitutive relations, is reviewed in Figure \ref{fig2}.
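Several of the counts in Figure \ref{fig2} follow from elementary index combinatorics. A short check (our own illustration; here reversibility is taken as symmetry under exchange of the two collective index groups of $\xi$, and the remaining nodes of the figure are not reproduced):

```python
dim = 4                         # spacetime dimension
sym2 = dim * (dim + 1) // 2     # symmetric index pair: 10
asym2 = dim * (dim - 1) // 2    # antisymmetric index pair: 6

assert dim**6 == 4096                    # general xi: six free indices
assert dim * sym2 * dim * sym2 == 1600   # symmetrised xi
A = dim * sym2                           # collective index (alpha, (mu nu)): 40 values
assert A * (A + 1) // 2 == 820           # reversible symmetrised xi
assert asym2 * dim**4 == 1536            # general chi: antisymmetric first pair
assert asym2 * dim**2 * sym2 == 960      # symmetrised chi
assert 1600 + 960 == 2560                # total in the generic symmetrised case
```

The metric cases (7 components each for $\chi$ and $\xi$) are fixed by the independent contractions available with $g_{\mu\nu}$ rather than by raw index counting.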
\subsubsection{Metric purified gravity}
\label{mpg}
We investigated in more detail the cases where the two Linking Equations involve a metric. In particular, since the field $B^{\mu\nu}$ had emerged from the premetric structure as a consequence of
symmetry breaking, it was the canonical candidate for the role of the metric. It had entered into the theory as a Stueckelberg field restoring the symmetry broken by the mass term in the potential, implied by the
nontrivial Linking Equation 2. The generic metric constitutive relations have 14 independent components, contributing to all but the antisymmetric skewon and the irreversible axion irreducible parts of the relations.
As a first viability check of the new class of theories, which could be dubbed the Premetric Newer General Relativity, we explored the particle content and the wave propagation in the weak-field limit. Preliminarily,
we could exclude only two components as they contribute to the antisymmetric parts of the Fundamental Equation 1 and would result in new, and probably dangerous, degrees of freedom. Without additional
constraints, the general theories would also violate the equivalence principle, in the sense that matter would not follow the metric-geodesic trajectories. In Lagrangian theories, the conservation of matter follows from
the diffeomorphism invariance of the action. However, in our premetric construction we do not presuppose a Lagrangian formulation. Therefore an additional constraint may be in order,
\begin{itemize}
\item $\text{Geodesic Postulate:} \quad \mathcal{D}_\mu T^\mu{}_\nu=0\,.$
\end{itemize}
Indeed, even in the context of General Relativity, the geodesic motion of matter is sometimes introduced as an independent postulate. However, in Lagrangian theories the laws governing the motion of particles are
inscribed in the field equations. In the context of Lagrangian symmetric teleparallel theory, the metric-covariant conservation of matter energy-momentum follows from the extremisation of the action with respect to the
variations of the connection. CGR is the unique quadratic theory whose action is extremised by an arbitrary connection, meaning that {\it the Geodesic Postulate is redundant with the Fundamental Equation 1}. Regardless
of this, in principle, CGR is also uniquely specified, amongst the 14-parameter metric constitutive relations, by requiring that {\it the Linking Equation 1 is compatible with an action principle determined by the Linking Equation 2}. In the more general Premetric Newer General Relativity, the Geodesic Postulate has to be separately imposed, and it can be regarded as the equation of motion for the symmetric teleparallel connection, which cannot now be deduced by the usual variational methods due to the absence of a Lagrangian. In the case of the 14-parameter theory, the equation
is $\Delta_\mu=0$, where $\Delta_\mu$ was defined in (\ref{deltanu}).
\begin{figure}
\begin{center}
\begin{tikzpicture}[every text node part/.style={align=center}]
\node[ellipse,draw,label=above:{general},label=left:{$\xi$}] (nG) {$4096$};
\node[ellipse,draw,label=above:{symmetrised}](nS) [right=of nG] {$1600$};
\draw[->] (nG)-- (nS);
\node[ellipse,draw,label=above:{metric}](nM) [right=of nS] {$7$};
\draw[->] (nS)-- (nM);
\node[ellipse,draw,label=above:{parity-even}](nE) [right=of nM] {$6$};
\draw[->] (nM)-- (nE);
\node[ellipse,draw,label=left:{reversible $\xi$}](nG2) [below=of nG] {$2176$};
\draw[dashed][->] (nG)-- (nG2);
\node[ellipse,draw](nS2) [right=of nG2] {$820$};
\draw[dashed][->] (nS)-- (nS2);
\draw[->] (nG2)-- (nS2);
\node[ellipse,draw](nM2) [right=of nS2] {$6$};
\draw[dashed][->] (nM)-- (nM2);
\draw[->] (nS2)-- (nM2);
\node[ellipse,draw](nE2) [right=of nM2] {$5$};
\draw[dashed][->] (nE)-- (nE2);
\draw[->] (nM2)-- (nE2);
\node[ellipse,draw,label=above:{compatible}](nC) [right=of nE2] {$1$};
\draw[->] (nE)-- (nC);
\draw[->] (nE2)-- (nC);
\node[ellipse,draw,label=left:{$\chi$}](nGc) [below=of nG2] {$1536$};
\node[ellipse,draw](nSc) [below=of nS2] {$960$};
\node[ellipse,draw](nMc) [below=of nM2] {$7$};
\node[ellipse,draw](nEc) [below=of nE2] {$3$};
\draw[->] (nGc)-- (nSc);
\draw[->] (nSc)-- (nMc);
\draw[->] (nMc)-- (nEc);
\draw[->] (nEc)-- (nC);
\end{tikzpicture}
\end{center}
\caption{The number of independent components in the constitutive relations in cases satisfying certain conditions. The solid lines indicate restrictive assumptions on the constitutive relations,
and the dashed lines in particular indicate the assumption of reversibility of the potential constitutive relation. E.g., in the generic symmetrised case, there are in total 2560 independent components that can determine
the theory. In the case that the constitutive relations are given in terms of a metric, the number reduces to 14, and the further requirement of an action principle leaves no freedom except for an overall constant. \label{fig2}}
\end{figure}
\subsection{Comparison with metric teleparallelism}
There are two main points of departure in the premetric construction of purified gravity (PG) in comparison to the premetric construction of metric teleparallel gravity (TG).
1) Firstly, in PG it is imposed the vanishing of the force, while an excitation is allowed. Since in TG there is a force like in electromagnetism, the analogy with Maxwell theory remains intact at this point. On the other hand, the
fact that gravity indeed is different from electromagnetism (and the other interactions) in that it is equivalent to inertia, is incorporated into the premetric structure of PG.
Secondly, 2) PG is analogous to the massive rather
than the vanilla Maxwell electromagnetism. On the other hand, in TG one then needs to break the complete analogy by introducing gravitational charges, which have no correspondence in the Maxwell electromagnetism.
Related to this point, one could perhaps add that 2') in PG the metric is a Stueckelberg field of the (pure-gauge) connection, while in TG the metric is obtained from the translational gauge connection by identifying it with the coframe field. However, the latter identification entails the tacit introduction of a symmetry-breaking field.
Thus, through the point 2), the necessary breaking of symmetry that has been left unaddressed in the premetric discussions of TG, is raised to a main role in the premetric construction of PG.
Further, the proposed extrapolation of PG towards the ultra-violet regime, recall (\ref{two}), would restore the one-to-one correspondence with massive electromagnetism that is
only apparently lost by the point 1) at macroscopic scales.
\subsection{Implications for CGR}
The premetric approach provides some new insights into CGR, which emerges as the unique metrical theory that is consistent with an action formulation and the axioms of the premetric framework.
In this framework, the field equation is given by the Fundamental Equation 1 above, with the constitutive relations from (\ref{cgrconst}). In particular, it features the tensor density $H^{\alpha\mu}{}_\nu$, which can be shown to reduce, in the coincident gauge, to what is known as the Einstein energy-momentum complex pseudotensor density. It has been found that by using that pseudotensor density one always obtains reasonable results for the energy-momentum of any gravitational-matter system, whereas various other pseudotensorial prescriptions sometimes fail to yield the expected answer\footnote{For comparative studies see e.g. \cite{Aguirregabiria:1995qz,Xulu:2002ix}.}. In CGR the Einstein energy-momentum complex is promoted to a true tensor
density, and the recently proposed characterisation of the canonical frame has removed the ambiguity from the results. However, it was left unjustified why the tensor density $\tau^\mu{}_\nu=\nabla_\alpha H^{\alpha\mu}{}_\nu$ should be used in the determination of the gravitational energy-momentum instead of the $\hat{\tau}^\mu{}_\nu=-2\nabla_\alpha P^{\alpha\mu}{}_\nu$, which is more naturally written into the action-derived field equations. The ambiguity is related to the freedom to add a so-called superpotential to a given energy-momentum complex. However, the canonical premetric framework leaves no such freedom. By construction it is obvious that we are compelled to use $H^{\alpha\mu}{}_\nu$ to determine the translational charges, for it is precisely the excitation conjugated to the translational currents.
\subsection{Implications for extended gravity theories}
The class of extended gravity theories we studied in more detail in this work was defined by the 14 parameters of the general metric linear (and local) constitutive relation, the Premetric Newer General Relativity.
As reviewed above, the class of models survives the first consistency and viability tests since they (given only two constraints on the parameters) reduce to General Relativity
at the linear order. This can be contrasted with the generic quadratic metric teleparallel and quadratic symmetric teleparallel theories (sometimes referred to as the New and the Newer General Relativity, respectively), whose parameters are very stringently constrained in the same limit (and, as is well known, for quadratic pure-metric theories only the topological Gauss-Bonnet terms survive). It can therefore be interesting to pursue further the newly found theories, to investigate the viability of their nonlinear solutions, for example in view of their possible cosmological applications.
In general, the premetric approach raises the perhaps exotic possibility of theories without a Lagrangian formulation, of which the quadratic 14-parameter class is just an example.
Considering Lagrangian extended theories characterised by more general constitutive relations, there are two main observations we can make. Firstly, many of the previously studied extended symmetric teleparallel theories, in particular those
with nonlinear constitutive relations, might be difficult to incorporate within the premetric formalism. As a simple example, in the $f(Q)$ models one would require an excitation tensor $H^{\alpha\mu}{}_\nu$ such that $\nabla_\alpha H^{\alpha\mu}{}_\nu = -2\nabla_\alpha f'(Q)\prescript{}{CGR}P^{\alpha\mu}{}_\nu$, which at first glance does not appear possible unless $f'(Q)$ is a constant. Looking at things from the other side, the premetric framework suggests a great variety of previously unexplored ways of extending gravity. As an example, since we imposed three symmetrisations upon the constitutive relations from the beginning, mainly motivated by the aim of obtaining a metric in the end, it is reasonable to ask what would happen if some or all of these symmetrisations were abandoned. This would expand the available theory space, and provide a remarkably simple way of realising asymmetric gravity which from the outset avoids the main problems encountered in extending the purely-metric theory by allowing antisymmetric components in the metric tensor. Namely, in the latter case one quite generally introduces ghosts since the available invariants are of a higher order, and one also encounters technical obstacles e.g. in the generalisation of the Levi-Civita connection\footnote{See e.g. \cite{Mann:1981st,Damour:1992bt,Moffat:1994hv,Janssen:2006jx,Janssen:2007yu} for studies of gravitational theories with a nonsymmetric metric.}.
Another Pandora's box is opened by allowing more extra fields to determine the constitutive relations. One non-minimal but rather natural extension would be to consider the case that the constitutive relation is not determined by the premetric field, but by an independent metric. Of course, this kind of bimetric theory and the many other possible novel extensions could turn out to be plagued by ghosts or other pathologies\footnote{A bimetric constitutive relation to an extent resembles such bimetric variational principles wherein the connection is considered as the Levi-Civita connection of an independent metric \cite{Koivisto:2011vq,Amendola:2010bk}. The latter setups may however introduce problematical degrees of freedom \cite{BeltranJimenez:2012sz}. For a review of the ghost-free bimetric gravity theory \cite{Hassan:2011zd}, see \cite{Schmidt-May:2015vnx}.}. However, one may contemplate whether it is possible to establish a robust correspondence between the field-theoretical consistency of a purified gravity theory and the existence of its
action-compatible premetric formulation. Such quite unique cases of consistent theories as CGR and (the symmetric teleparallel version of) the ghost-free Hassan-Rosen bimetric theory turn out to be also quite unique
in that they admit a premetric formulation.
One of our conclusions, at least, is that without the latter a theory does not have a well-defined canonical energy-momentum complex.
\subsection{Implications for quantum gravity}
First we should recall that purified gravity has already provided insights that could be highly relevant in the unification of gravity and quantum physics. In the canonical approach to quantum gravity, the notorious problem of time might be taken into reconsideration from the perspective wherein we, besides the conventional ADM energy of the standard Hamiltonian formalism, have also available the unique consistent definition of localisable energy excitation in a gravitational system. The other main approach to quantisation, the path integral formalism, can obviously also have a more promising starting point, since the CGR action in the canonical frame
is well-defined without invoking boundary terms or counter terms that are necessary e.g. in the standard approach to Euclidean quantum gravity in Riemannian geometry.
In the study carried out in this paper, the question naturally arose of whether the analogy of purified gravity with Proca electromagnetism is complete, the vanishing of the field strength tensor then being a valid approximation only at super-Planckian length scales.
Though massive Abelian gauge theories are renormalisable even without the Higgs mechanism, from the perspective of purified gravity it is natural to consider a spontaneous emergence of the Planck scale
since one wants to recover scale invariance at the most fundamental level of physics. In this work we only very tentatively discussed an actual realisation of such a mechanism (in particular, an analogous mechanism with Kibble's
spontaneous mass generation), but the new way of looking at gravity, seeing the covariant version of the Einstein Lagrangian as the mass term for the gauge connection instead of a kinetic term, very concretely points to a quite novel approach to realise gravity as a renormalisable gauge field theory with no formal difference from the others we already know.
That the metric is a part of the connection that is curved only at very microscopic distances raises speculations about the physical role of that curvature. One could be brought back to pre-Einsteinian considerations of geometric
theories of gravity, in particular to Riemann and Clifford, who both entertained the idea that matter is nothing but a tiny disturbance in the spatial curvature, so that matter in motion can be understood as a simple variation in space
of these disturbances. Now the fact that the metric, a macroscopic emergent field which can also be associated with a curvature at a less fundamental level, is intertwined with matter via the Einstein equation could be understood
as a consequence of both the metric and the matter being aspects of the same connection. In the premetric construction of electromagnetism, one ultimately builds upon quantities that can be counted: electric charges and magnetic flux
lines. It is clear that energy and momentum are quantised, and thus our starting point of the conservation of energy-momentum is in line with the principle of countability of extensive quantities. Perhaps the countability of intensive
quantities, in this case the ones related to the flux lines of the ``hypergravitational'' field, should be understood as a reflection of the quantised nature of matter particles.
\acknowledgements{The work was supported by the Estonian Research Council through the Personal Research Funding project PRG356 ``Gauge Gravity'' and by the European Regional Development Fund through the Center of Excellence TK133 ``The Dark Side of the Universe''.}
\section*{Acknowledgments}
This work was supported by the University of Alberta, Faculty of Science; the Natural Sciences and Engineering Research Council, Canada (Grants Nos. RGPIN-04523-16, DAS-492947-16, and CREATE-495446-17); Quantum Alberta; and the Canada Foundation for Innovation. B.D.H. acknowledges support from the Killam Trusts.
\section*{Appendix A: Low-temperature setup}
Our low temperature optomechanics setup is designed to allow for flexible optical coupling while also maximizing the thermal connection between the dilution refrigerator and the device. Fig.~\ref{fig:measSetup}a is a photograph of the base plate of the dilution refrigerator which demonstrates the optical coupling and cooling systems. A closeup of the chip holder, Fig.~\ref{fig:measSetup}b, shows the GaAs chip mounted in an annealed copper chip holder that has been gold plated to provide a malleable surface. The chip is screwed in tightly enough to deform the surface of the chip holder, which creates a high surface area mechanical connection for thermal conduction at millikelvin temperatures. The chip holder is connected to flexible copper braids that transfer heat to the base plate of the dilution refrigerator.
The flexibility of the copper braid anchoring system allows the GaAs chip to be freely maneuvered using a 3-axis piezoelectric positioning stack while remaining thermally anchored. This is critical for the dimpled tapered fiber coupling system that allows for optical coupling to any device on the GaAs chip. The optical coupling can be controlled by using the piezoelectric positioning stages to move the device such that the fiber touches different sections of the photonic crystal. Fig.~\ref{fig:measSetup}c shows a photonic crystal nanobeam that is optically coupled to a tapered fiber.
\section*{Appendix B: Optomechanical Characterization}
\label{sec:device_char}
\begin{figure}[t]
\includegraphics[width=.45\textwidth]{Fig6.png}
\caption{Wavelength scan of the photonic crystal optical mode (white), fit to a Lorentzian curve (black). The background shows the frequency spectrum at each step of the wavelength scan, with the mechanical resonance and EOM calibration tone. }
\label{fig:Device_optomechanics}
\centering
\end{figure}
Initial characterization of the GaAs photonic crystal nanobeam was performed at $4.2~\mathrm{K}$ using direct detection. In Fig.~\ref{fig:Device_optomechanics}, a tunable telecom laser was used to probe the photonic crystal optical resonance. A Lorentzian fit was used to extract the center frequency $\omega_c/2\pi = 193.7~\mathrm{THz}$ and the cavity linewidth $\kappa/2\pi = 5.0~\mathrm{GHz}$. The optical scan was performed in discrete steps of $1~\mathrm{pm}$ so that the mechanical spectrum could be measured at every laser detuning. The frequency spectrum has two identifiable peaks: the mechanical mode with $Q_m = 1.1\times10^3$ at frequency $\omega_m/2\pi = 2.3725~\mathrm{GHz}$, and a calibration tone generated by an EOM at $\omega_\mathrm{cal}/2\pi = 2.37~\mathrm{GHz}$. The EOM tone is used for phase calibration~\cite{Gorodetsky2010} to calculate the single photon-single phonon optomechanical coupling $g_0/2\pi = 1.3\pm0.3~\mathrm{MHz}$ (at 4.2 K).
When the temperature is decreased to the millikelvin regime, the mechanical $Q$-factor improves to $Q_\mathrm{m} = 28,800$ as measured using a double-pulse ringdown technique~\cite{Hauer2018}. We note that the mechanical $Q$-factor decreases with increasing device temperature. The cooperativity is calculated from the optomechanical interaction rate $\Gamma_\mathrm{om}=4g_0^2 n_\mathrm{cav}/\kappa = 2\pi\times 0.31~\mathrm{MHz}$, where $n_\mathrm{cav} = 230$ is the number of photons in the optical mode. Using these properties, the number of added phonons is calculated to be $n_\mathrm{add} =\left(C^{-1} + (\kappa / 4 \omega_m)^2\right) / \left( 1 + (\kappa / 4 \omega_m)^2\right)$, where $C=\{n_\mathrm{th}/\mathcal{C},1/\mathcal{C}_\mathrm{qu}\}$ for the ambient and combined baths respectively \cite{Hill2012,Andrews2014}.
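As a quick numerical cross-check of the quoted figures, the interaction rate and added-phonon number can be recomputed from the stated device parameters. This is a sketch only, not part of the original analysis; the parenthesisation of the $n_\mathrm{add}$ formula below is our balanced-bracket reading of the expression above.

```python
import math

# Device parameters quoted in the text (all given as f = omega/2pi).
g0 = 1.3e6          # single photon-phonon coupling g0/2pi [Hz] (at 4.2 K)
kappa = 5.0e9       # optical cavity linewidth kappa/2pi [Hz]
omega_m = 2.3725e9  # mechanical frequency omega_m/2pi [Hz]
n_cav = 230         # intracavity photon number

# Optomechanical interaction rate Gamma_om = 4 g0^2 n_cav / kappa.
# The 2pi factors cancel, so the f = omega/2pi values can be used directly;
# this reproduces the quoted Gamma_om/2pi ~ 0.31 MHz.
gamma_om = 4.0 * g0**2 * n_cav / kappa
print(f"Gamma_om/2pi = {gamma_om / 1e6:.2f} MHz")

# Added-phonon number for a given inverse cooperativity C^{-1}
# (C = C/n_th for the ambient bath or C_qu for the combined bath),
# assuming the balanced reading n_add = (C^-1 + s) / (1 + s),
# with s = (kappa / 4 omega_m)^2.
def n_add(inv_C, kappa=kappa, omega_m=omega_m):
    s = (kappa / (4.0 * omega_m))**2
    return (inv_C + s) / (1.0 + s)
```

Note that `n_add` tends to the sideband-resolution floor $s/(1+s)$ as the cooperativity grows, and to unity when $C^{-1}=1$.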
\section{Introduction}
Recently there has been much discussion of how
the intervening mass from clustered matter between
the last scattering surface (at $z\approx 1100$)
and an observer at the present time distorts the
CMB anisotropy by means of gravitational lensing \cite{blanchard,cole,tomita,bernardeau,van-waerbeke}.
On the level of the two-point correlation function,
this effect distorts the TT (temperature-temperature)
correlation power spectrum \cite{seljak} and
mixes the EE and BB
polarization power spectra as well as distorting them \cite{zald,benabed-bis,seljak-ter}.
Lensing also introduces
non-Gaussianities that manifest themselves in the higher-point correlation
functions \cite{zaldarriaga,cooray-ter}. At the level of the three-point correlation
function, to leading order there is no nonzero expectation
value if we regard the lensing potential as a random field \cite{cooray,kesden}.
However if we consider the CMB lensing potential as fixed,
we find that expectation values of the form
\begin{eqnarray}
\left<
T (\boldsymbol{\theta})
T(\boldsymbol{\theta}')
\right> _{\Phi (\boldsymbol{\theta}^{\prime \prime })}
\end{eqnarray}
do not vanish, and this property may be exploited to
recover or ``reconstruct'' the lensing field
using estimators quadratic in $T$ (or in $E$ and $B$) \cite{hu2001a}.
Much effort has been devoted to developing an optimal
reconstruction of the lensing potential in harmonic space,
which implicitly assumes full sky
coverage with no galactic cut, no bad pixels due to
point sources that must be excised, and no nonuniform
weighting to account for uneven sky coverage \cite{challinor,lesgourgues,bode}.
For a reconstruction based on the temperature anisotropy
alone, it has been shown how to construct the optimal quadratic
estimator in this idealized context \cite{hu2001b}, and the improvement
that can be gained from using an even more optimal
maximum likelihood estimator is marginal \cite{seljak-bis}, because the
distortion due to lensing is small compared to the
intrinsic cosmic variance and noise of the experiment
(although this assumption is less valid at very
large $\ell $ for very clean maps where the lensing
signal is dominant). For the exploitation
of polarized anisotropies, the situation is somewhat
more complicated. When the experimental noise is large
compared to the B polarization mode generated by lensing,
the situation is essentially the same as for a reconstruction using
the temperature data \cite{hu-okamoto,cooray-bis}.
However at a higher sensitivity where the B signal
is essentially entirely due to lensing, the quadratic estimator
underperforms because the actual multipole moments rather
than their averages should be used for the optimal weighting.
In this case, the higher order corrections to the
quadratic estimator present in the maximum likelihood
estimator are no longer negligible \cite{hirata-seljak}.
In this paper we investigate a real space approach
to lensing reconstruction. Under the ideal conditions
often assumed and described above, this approach would naturally
yield the same result as the conventional approach in harmonic space.
Our interest however lies in considering slightly non-optimal
estimators that have been modified to have a finite range so that cuts,
excisions of pixels, and non-uniform coverage may be included in a simple
and flexible way. We believe that
such non-ideal but more robust local estimators
defined in real space may prove superior for confronting the complications
inherent in analyzing real data \cite{smith,miller,perotto}.
Another advantage of the approach here is that the dilatation and pure shear
provide separate and essentially independent lensing reconstructions which
may be confronted with each other. This feature may prove useful
as a way of diagnosing spurious signals, which are unlikely to affect
the two reconstructions in the same way. Moreover the presence of
two shear components enables one to estimate the noise of the
reconstruction through the implied transverse displacement field,
which is forbidden in weak lensing.
Before proceeding to the details of this program, it is useful
to consider the relation between the various descriptions
of the lensing and the deformation of the anisotropies in real
space. It is also useful to consider which angular scales
contribute the most statistical weight to the lensing reconstruction.
The lensing distortion of the CMB anisotropy on
the surface of last scatter may be described in three ways:
by a lensing potential $\Phi ,$ by a deflection field
$\boldsymbol{\xi }=\nabla \Phi ,$ or by the three components of
the shear tensor
\begin{eqnarray}
\kappa =
\begin{pmatrix}
\kappa _0+\kappa _{+}& \kappa _{\times }\cr
\kappa _{\times }& \kappa _0-\kappa _{+}\cr
\end{pmatrix}
=\nabla _a\nabla _b\Phi .
\end{eqnarray}
Even if we have simultaneous access to the entire sky,
the descriptions $\Phi $ and $\boldsymbol{\xi }$ suffer
from an ambiguity. $\Phi $ cannot be distinguished
from $\Phi +\textrm{(constant)}$ and the vector field
$\boldsymbol{\xi }$ can be measured only up to a
constant translation (or more properly a rotation in the
presence of sky curvature).
This is because if we know only the CMB power spectrum,
a patch of sky and its translation necessarily have
the same likelihood on account of isotropy of the
underlying stochastic process. Consequently,
the absolute translation due to lensing cannot be observed.
By contrast, locally the shear and dilatation (which are gradients
of the translation vector field) are completely well defined.
This can easily be seen by considering
the effect of a constant deformation described by
a deformation matrix $S$ relating the angular coordinates
$\boldsymbol{\theta },$ the actual coordinates on the celestial
sphere of a point on the last scattering surface, to
the coordinates $\boldsymbol{\theta }'$, the
coordinates that the same point would have in the
absence of lensing.
\footnote{In the sequel we shall,
unless otherwise indicated, employ the flat sky approximation
where the vector ${\boldsymbol{\theta } }$ represents a point
on the flattened celestial sphere and ${\boldsymbol{\ell } }$ represents
a wavevector. At times summations over $(l,m)$ shall
also be used.}
We have
$\boldsymbol{\theta '}= S\boldsymbol{\theta }$
where $S=\exp [-\boldsymbol{\kappa }].$
(Note that we employ the flat sky approximation
and assume that the deformation is small so that a linear
treatment is adequate.)
To linear order, the power spectrum is modified
in the following way by this linear deformation,
which preserves the homogeneity but not the isotropy
of the underlying statistical process:
\begin{eqnarray}
C(\vert {\boldsymbol{\ell } }\vert )
\to C(\ell ) \biggl[
1 + \kappa _0
\left( \frac{d(\ln[C(\ell )])}{d(\ln [\ell ])} + 2\right)
+\left(
\frac{\kappa _+(\ell _1^2-\ell _2^2)+\kappa _\times (2\ell _1\ell _2)}{\ell ^2}
\right)
\frac{d(\ln [C(\ell )])}{d(\ln [\ell ])}
\biggr] .
\end{eqnarray}
For the case of perfect scale invariance
(i.e., a power law of the form $C(\ell )\propto \ell ^{-2}$)
there is no change in the
correlations due to the dilatation component of $S,$
and similarly for a perfect white noise spectrum
(i.e., a power law of the form $C(\ell )\propto \ell ^0$)
there is no sensitivity to the pure shear components
$\kappa _+$ and $\kappa _\times $ in the anisotropic
$(m=\pm 2)$ correlations.
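These two special cases can be checked numerically from the response factors in the deformed power spectrum. A minimal sketch (toy power laws and a numerical logarithmic derivative; illustrative only):

```python
import numpy as np

def log_slope(C, l):
    """Numerical logarithmic derivative d ln C(l) / d ln l."""
    return np.gradient(np.log(C(l)), np.log(l))

l = np.logspace(1.0, 3.0, 400)

# Scale-invariant spectrum C(l) ~ l^-2: the dilatation response
# factor (d ln C / d ln l + 2) vanishes identically.
resp_dil = log_slope(lambda l: l**-2.0, l) + 2.0

# White-noise spectrum C(l) ~ l^0: the pure-shear response factor
# d ln C / d ln l vanishes identically.
resp_shear = log_slope(lambda l: np.ones_like(l), l)
```

For any other spectral shape both factors are generically nonzero, which is what makes the dilatation and shear observable.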
To the extent that the shear-dilatation components
are slowly varying, we may construct estimators
of $\kappa _0,$ $\kappa _+,$ and $\kappa _\times $ as
follows
\begin{eqnarray}
\hat \kappa _0 &=& {N_0}^{-1}
\int _A d^2\theta ~
\int _A d^2\theta '~
\left[
T(\boldsymbol{\theta } )~T(\boldsymbol{\theta } ')-
\left< T({\boldsymbol{\theta } })~T({\boldsymbol{\theta } '})\right> _{\kappa =0}
\right]
\cr
&&\times \int \frac{d^2\ell }{(2\pi )^2}
\exp [i{\boldsymbol{\ell } }\cdot ({\boldsymbol{\theta } }-{\boldsymbol{\theta } '})]~
\frac{C(\ell )}{[C(\ell )+N(\ell )]^2}
\left( \frac{d(\ln [C(\ell )])}{d(\ln [\ell ])} + 2\right)\cr
&=& {N_0}^{-1}{\cal A}^{-1}
\int _A d^2\theta ~
\int _A d^2\theta '~
\left[
T({\boldsymbol{\theta } })~T({\boldsymbol{\theta } '})-
\left< T({\boldsymbol{\theta } })~T({\boldsymbol{\theta } '})\right> _{\kappa =0}
\right]
K_0({\boldsymbol{\theta } }-{\boldsymbol{\theta } '})\cr
&=& \frac{1}{\cal A}
\int _A d^2\theta ~
\left[
T({\boldsymbol{\theta } })~({\cal F}_{\kappa _0}T)({\boldsymbol{\theta } }) -\textrm{(constant)}
\right]
\end{eqnarray}
and similarly
\begin{eqnarray}
\begin{pmatrix}
\hat \kappa _{+}\cr
\hat \kappa _{\times }\cr
\end{pmatrix}
&=& {N_{+,\times }}^{-1}
\int _A d^2\theta ~
\int _A d^2\theta '~T(\boldsymbol{\theta } )~T({\boldsymbol{\theta } '})~\cr
&&\times \int \frac{d^2\ell }{(2\pi )^2}
\exp [i{\boldsymbol{\ell } }\cdot ({\boldsymbol{\theta } }-{\boldsymbol{\theta } '})]~
\frac{C(\ell )}{[C(\ell )+N(\ell )]^2}
\frac{d(\ln [C(\ell )])}{d(\ln [\ell ])}
\begin{pmatrix}
\cos (2\vartheta )\cr
\sin (2\vartheta )\cr
\end{pmatrix}\cr
&=& {N_{+,\times }}^{-1}{\cal A}^{-1}
\int _A d^2\theta ~
\int _A d^2\theta '~T({\boldsymbol{\theta } })~T({\boldsymbol{\theta } '})~
\begin{pmatrix}
K_+({\boldsymbol{\theta } }-{\boldsymbol{\theta } '})\cr
K_\times ({\boldsymbol{\theta } }-{\boldsymbol{\theta } '})\cr
\end{pmatrix},
\end{eqnarray}
where the normalization factors are given by
\begin{eqnarray}
N_0&=&{\cal A}
\int \frac{d^2\ell }{(2\pi )^2}
\frac{[C(\ell )]^2}{[C(\ell )+N(\ell )]^2}
\left( \frac{d(\ln [C(\ell )])}{d(\ln [\ell ])} + 2\right)^2,\cr
N_+&=&N_\times =\frac{\cal A}{2}
\int \frac{d^2\ell }{(2\pi )^2}
\frac{[C(\ell )]^2}{[C(\ell )+N(\ell )]^2}
\left( \frac{d(\ln [C(\ell )])}{d(\ln [\ell ])}\right)^2
\end{eqnarray}
and ${\cal A}$ is the area of the domain.
Here $N(\ell )$ is the noise of the experiment being considered
and serves as a cut-off at large $\ell ,$ above which there is
very little exploitable information because of the low
signal-to-noise of the CMB maps.
The above expressions assume a spatially flat, two-dimensional
domain formally of large but
finite area subject to a spatially uniform linear deformation.
We may consider a toroidal domain in the limit that the
periods of the torus become arbitrarily large (though the toroidal domain is not essential for the application of the real space estimator presented in this paper).
Before proceeding it is useful to consider the relation of this
simplified problem to the real problem
of reconstructing the lensing field starting from a
CMB map of finite resolution. Because of the smallness
of the lensing distortion of the CMB anisotropy,
the idealized situation considered above is
less different from
the actual
situation, where one is dealing with a curved sky
and a varying dilatation and shear fields,
than one might at first sight suppose.
\begin{figure}
\begin{center}
\includegraphics[width=17cm]{lens_potl_derivs_v2.pdf}
\end{center}
\caption{\baselineskip=0.5cm{
{\bf Lensing power spectrum.}
The lensing power spectrum is represented
in several ways. The three panels (from left to right) show
the lensing field expressed as a potential, a deflection field,
and a dilatation/shear field, respectively. Plotted
are $[\ell (\ell +1)C_{XX}/(2\pi )]^{1/2}$
where $XX=\Phi \Phi ,$
${\boldsymbol{\xi \xi }},$ $\kappa \kappa .$
Here
$C_{\ell }^{\boldsymbol{\xi \xi }}=\ell (\ell +1) ~C_{\ell }^{\Phi \Phi },$
and
$C_{\ell }^{\kappa \kappa }=\ell ^2(\ell +1)^2 ~C_{\ell }^{\Phi \Phi }/4.$
Of the three, the last power spectrum is most directly related
to the observed distortion.
}}
\label{Fig:LensingFields}
\end{figure}
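The relations among the three representations quoted in the caption are simple multipole-dependent rescalings, which the following sketch applies to a toy potential spectrum (the power law is for illustration only):

```python
import numpy as np

# Caption relations:
#   C_l^{xi xi}       = l (l + 1)         C_l^{Phi Phi}
#   C_l^{kappa kappa} = l^2 (l + 1)^2 / 4 C_l^{Phi Phi}
l = np.arange(2, 2001, dtype=float)
C_phiphi = l**-4.0  # toy potential spectrum, illustration only

C_xixi = l * (l + 1.0) * C_phiphi
C_kk = l**2 * (l + 1.0)**2 * C_phiphi / 4.0

# The plotted quantity is [l (l + 1) C_XX / (2 pi)]^{1/2}.
plotted_kk = np.sqrt(l * (l + 1.0) * C_kk / (2.0 * np.pi))
```

Each derivative of $\Phi$ contributes roughly one factor of $\ell$, which is why the dilatation/shear spectrum carries the most weight at high multipoles.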
The rightmost panel of Fig.~\ref{Fig:LensingFields} shows
the shear-dilatation power spectrum as a function of multipole number $\ell ,$
and we observe that for $\ell <100,$ the distortion is always
less than about $1.5\% .$ This implies that in order to attain
an $(S/N)$ of approximately unity it is necessary to consider
a region containing at least $O(10^3)$ resolution elements,
where a resolution element is a pixel of the map of sufficient size
so that the noise and angular resolution of the survey give $S/N\approx 1.$
Consequently, there is little point to trying to
reconstruct the lensing field
over a region not having at least 30 resolution elements on a side.
If the distortion from lensing were greater, the situation would be
different.
For the ideal linear estimator,
\begin{eqnarray}
\frac{1}{\sigma ^2_{\hat \kappa _{0,ideal}}}=
\left(
\frac{S}{N}
\right) ^2
&=&
\sum _{\ell , m}
\frac{C_\ell ^2}
{2(C_\ell +N_\ell )^2}
\left[
\frac{d(\ln [C])}{d(\ln [\ell ])}+2
\right] ^2\cr
&\approx &
\frac{{\cal A}}{(4\pi )}
\int _0^\infty d\ell ~\ell ~
\frac{C(\ell )^2}
{(C(\ell )+N(\ell ))^2}
\left[
\frac{d(\ln [C(\ell )])}{d(\ln [\ell ])}+2
\right] ^2
\label{SoverN}
\end{eqnarray}
where ${\cal A}$ is the area of the sky patch considered.
We now consider other unbiased estimators
of $\kappa _0$ and $\kappa_+$ where a slight increase in the variance
is compensated for by other desirable properties. We
are presently interested in estimators for which the
real space filtering kernel is short range.
To this end it is useful to define the inner product
on the space of weight vectors ${w}=\{ w_\ell \}$:
\begin{eqnarray}
\left< w^A, w^B \right>
&=&
\sum _{\ell , m}
\frac{1}{2(C_\ell +N_\ell )^2}
w^A_{\ell ,m}
w^B_{\ell ,m}\cr
&=&
{\cal A}\int \frac{d^2\ell }{(2\pi )^2}
\frac{1}{2[C(\ell )+N(\ell )]^2}
w^A(\boldsymbol{\ell } )
w^B(\boldsymbol{\ell })
\end{eqnarray}
where we give both the spherical and flat sky continuum
forms.
If we set $w_{ideal}(\ell )=\delta C(\ell )_{th, \kappa _0=1}=
C(\ell)[\frac{d \ln C_\ell}{d \ln \ell}+2],$
then in terms of the above inner product
\begin{eqnarray}
\hat \kappa _{0,ideal}=
\frac{
\left< w_{ideal}, \delta C_{obs}\right>
}{
\left< w_{ideal}, w_{ideal} \right>
}
\end{eqnarray}
and
\begin{eqnarray}
\left( \frac{S}{N}\right) ^2_{\kappa _0, ideal}=
\left<\delta C_{th, \kappa _0=1}, \delta C_{th, \kappa _0=1}\right> .
\end{eqnarray}
Given an arbitrary weight vector $w,$ using the above inner
product we may define the following unbiased estimator of
$\kappa _0$
\begin{eqnarray}
\hat \kappa _0(w)=
\frac{
\left< w, \delta C_{obs}\right>
}{
\left< w, w_{ideal} \right>
}
\end{eqnarray}
provided that the denominator does not vanish, and its
variance is given by
$\left< w, w\right> /\left< w_{ideal}, w\right> ^2,$
so that the increase in variance with respect to the ideal estimator
is given by the following geometric expression, the squared secant of the
angle $\chi $ between $w$ and $w_{ideal}$ with respect to the above inner product:
\begin{eqnarray}
\frac{
\textrm{Var}(\hat \kappa _0(w))
}{
\textrm{Var}(\hat \kappa _0(w_{ideal}))
}
=
\frac{
\left< w, w\right>
\left< w_{ideal}, w_{ideal} \right>
}{
\left< w, w_{ideal} \right> ^2
}
=\sec ^2(\chi ).
\label{VVar}
\end{eqnarray}
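Since the ratio is $\sec ^2(\chi )$, the Cauchy--Schwarz inequality bounds it below by unity, with equality only when $w\propto w_{ideal}$. A toy numerical check (the spectrum, noise, and perturbed weighting are illustrative stand-ins, not the estimator of the text):

```python
import numpy as np

rng = np.random.default_rng(0)

ell = np.arange(2.0, 1500.0)
C = ell**-2.5                  # toy spectrum (illustration only)
N = 1e-7 * np.ones_like(ell)   # toy white noise

def inner(wA, wB):
    """Discrete stand-in for <wA, wB>, with (2l+1) for the m-sum."""
    return np.sum((2.0 * ell + 1.0) * wA * wB / (2.0 * (C + N)**2))

dlnC = np.gradient(np.log(C), np.log(ell))
w_ideal = C * (dlnC + 2.0)

# A perturbed weighting (still unbiased once renormalized):
w = w_ideal + 0.3 * rng.standard_normal(ell.size) * C

# Variance ratio = sec^2(chi) >= 1 by Cauchy-Schwarz.
ratio = inner(w, w) * inner(w_ideal, w_ideal) / inner(w, w_ideal)**2
```

Rescaling $w$ by any constant leaves the ratio unchanged, so only the direction of the weight vector matters.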
For $\kappa _+$ and $\kappa _\times $ analogous formulae may be
derived straightforwardly.
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm]{lensed_CMB.pdf}
\includegraphics[width=7.5cm]{lensing_dimensionless_spectral_index.pdf}\\
\includegraphics[width=7.5cm]{lensing_cumulative_information.pdf}
\includegraphics[width=7.5cm]{kappa_plus_cumulative_information_B.pdf}
\end{center}
\caption{\baselineskip=0.5cm{
{\bf Character of the signal.}
The top row shows as a function of multipole number $\ell $
the temperature power spectrum and local spectral index,
defined as $d\ln C_\ell /d\ln \ell,$
for the standard cosmology (WMAP best-fit model).
On the bottom row,
the left panel shows the normalized cumulative
$\chi ^2$ as a function of $\ell $ integrated both
from the left and from the right using the sensitivity
and resolution parameters for the PLANCK experiment
(where the 100, 143 and 217 GHz channels have been
combined in quadrature) \cite{bluebook}. We observe that the central 80\% of
the information is concentrated in the range $\ell =800$--$1600.$
Smaller $\ell $ contribute almost no information because there
are comparatively very few
independent multipoles, and moreover the angular spectrum in
the sky is very nearly scale invariant. At much larger $\ell $
instrument noise and beam smearing wash out the usable signal.
In the intermediate range a structure of plateaus connected
by steep rises can be observed. This structure is a direct
result of the Doppler oscillations. Around the crests and troughs
the spectrum is almost scale invariant and hence does not contain any
information for determining the dilatation. The right panel
shows the corresponding plot for the shear, where the plateaus
are less pronounced.
}}
\label{Fig:Info}
\end{figure}
We consider how the information (or $S^2/N^2$)
contained in the ideal estimator
is distributed over the various multipoles. In Fig.~\ref{Fig:Info}
[panels (c) and (d)]
we plot the quantity
\begin{eqnarray}
F_<(\bar \ell )=
\frac{
\int _0^{\bar \ell }\ell ~d\ell ~
\frac{ {C(\ell) }^2}{({C(\ell)}+{N(\ell )})^2}
\left[ \frac{d(\ln [C(\ell )])}{d(\ln [\ell ])}+2 \right] ^2
}{
\int _0^{\infty }\ell ~d\ell ~
\frac{ {C(\ell )}^2}{({C(\ell )}+{N(\ell )})^2}
\left[ \frac{d(\ln [C(\ell )])}{d(\ln [\ell ])}+2 \right] ^2
}
\end{eqnarray}
and $F_>(\bar \ell )=1-F_<(\bar \ell )$ where
\begin{eqnarray}
N_\ell
=N_0 ~\exp \left[ +\ell ^2\theta ^2_{beam}\right]
=N_0 ~\exp \left[ +\ell ^2/\ell ^2_{beam}\right]
\end{eqnarray}
and $\ell _{beam}=(810)(10'/\theta _{beam}^{fwhm}).$
The corresponding quantity is also shown for the shear.
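The prefactor 810 corresponds to the standard Gaussian-beam conversion $\ell _{beam}=\sqrt{8\ln 2}/\theta ^{fwhm}_{beam}$ with the FWHM expressed in radians. This is our reading of the convention, sketched below:

```python
import math

def l_beam(theta_fwhm_arcmin):
    """Beam multipole scale for a Gaussian beam.

    Assumes theta_beam = theta_fwhm / sqrt(8 ln 2), so that
    l_beam = sqrt(8 ln 2) / theta_fwhm with theta_fwhm in radians.
    """
    theta_fwhm = theta_fwhm_arcmin * math.pi / (180.0 * 60.0)  # arcmin -> rad
    return math.sqrt(8.0 * math.log(2.0)) / theta_fwhm

# Reproduces the quoted prefactor for a 10 arcminute beam,
# and the stated inverse scaling with the beam FWHM.
print(round(l_beam(10.0)))  # -> 810
```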
In their present state, the ideal minimum variance kernels for the
estimators
$\hat \kappa _0,$
$\hat \kappa _+,$
and
$\hat \kappa _\times $
have their support sharply peaked at small separations, but
nevertheless there is still some small support
for large separation. We now investigate how much information
is lost if the support at large separation is completely
cut away. The quantitative way to characterize this loss
is to ask by what factor the variance of the estimator is
increased relative to the optimal estimator after our
pruned estimator has been renormalized to render it unbiased.
In real space the
minimum variance full-sky estimator kernel for $\kappa _0$ is given by
\begin{eqnarray}
K_{ideal}(\theta )
&=&
\int _0^\infty \ell ~d\ell ~
J_0(\ell \theta )~
K_{ideal}(\ell )\cr
&=&
\int _0^\infty \ell ~d\ell ~
{J_0(\ell \theta )}
\frac{1}{N_{ideal}}
\frac{C(\ell )}
{[C(\ell) +N(\ell )]^2}~
\left[ \frac{d(\ln [C(\ell )])}{d[\ln (\ell )]} +2 \right] .
\end{eqnarray}
This kernel may be inverted using the following inverse Bessel
transform:
\begin{eqnarray}
K_{ideal}(\ell )=\int _0^\infty \theta ~d\theta ~
J_0(\ell \theta )~
K_{ideal}(\theta ).
\end{eqnarray}
We limit the support of the kernel by requiring that $K(\theta )$
be nonzero only for $\theta \le \theta _{max}$ where
$\theta _{max}$ is varied. This is accomplished numerically
by expressing $J_0(\ell \theta )$ as a linear combination
of cubic spline basis functions spanning the interval
$\theta \in [0, \theta _{max}]$ and optimizing for the shape that minimizes the
variance calculated according to eqn. (\ref{VVar}). Analogous expressions
may be obtained for the shear by replacing $J_0$ with $J_2.$
Fig. \ref{Fig:TruncEst} shows the variance ratio as a function of $\theta _{max}$ for the dilatation and shear estimators. We observe that the increase in variance at small separations is more modest for the shear estimator.
\begin{figure}
\begin{center}
\includegraphics[width=5.3cm]{idl_lambda_vs_theta_max.pdf}
\includegraphics[width=5.3cm]{idl_filter_theta.pdf}
\includegraphics[width=5.3cm]{idl_filter_ell.pdf}\\
\vspace{0.5cm}
\includegraphics[width=5.3cm]{idl_lambda_vs_theta_max_shear.pdf}
\includegraphics[width=5.3cm]{idl_filter_theta_shear.pdf}
\includegraphics[width=5.3cm]{idl_filter_ell_shear.pdf}
\end{center}
\caption{\baselineskip=0.5cm{
{\bf Performance of estimator with truncated angular support.}
We indicate how limiting the angular support of the filter in our
estimator increases the noise.
The panels on the top row refer to the dilatation filter, while the panels on the bottom row refer to the shear filter.
Panel (a) indicates how the estimator variance (with the estimator
normalized to be unbiased) increases
as the angular support (disk radius in degrees) is reduced.
Panels (b) and (c) indicate the profiles of the optimal truncated estimators
in both angular space and harmonic space.
}}
\label{Fig:TruncEst}
\end{figure}
\section{Results: Dilatation and Shear Reconstruction}
In the previous section we showed how by means
of a linear filter
${\cal F}_{\kappa _0}$ applied to a temperature map
$T(\boldsymbol{\theta } ),$ we may obtain a reconstructed dilatation field
through the product map
\begin{eqnarray}
\kappa _{0, rec}(\boldsymbol{\theta } )
=T(\boldsymbol{\theta } )({\cal F}_{\kappa_0}T)(\boldsymbol{\theta } )-c
\end{eqnarray}
where $c$ is a constant offset. We presented
a theoretical derivation of the optimal shape for such a filter.
The best shape for the filter depends both on the cosmological model,
and more importantly on the details of the experiment, because most
of the statistical weight is situated on the smallest angular scales,
near the resolution limit of the experiment. If the primordial anisotropies
dominated to arbitrarily small scales and experiments of unlimited
sensitivity and angular resolution were possible,
the lensing field could be reconstructed as accurately as desired
simply by increasing the angular resolution and sensitivity.
For a given experiment, the desirable filter shape may be
intuitively understood as follows. We assume weak lensing (i.e.,
$\vert \kappa \vert \ll 1).$
We want to measure the change in
shape of the power spectrum (or rather of the ``local'' power spectrum
within a certain finite size patch for the
case of interest of a non-uniform dilatation). Working in
the flat sky approximation, which is appropriate because the length
scales most sensitive to the changes in overall scale are
small (i.e., $\ell \gg 100 ),$
we define ${\cal C}(\ell )=\ell ^2~C(\ell ).$ Under a small
dilatation, which transforms the temperature map as follows
\begin{eqnarray}
T(\boldsymbol{\theta } )\to T(\boldsymbol{\theta } ')=T\Bigl(\exp [-\kappa ]~\boldsymbol{\theta } \Bigr),
\end{eqnarray}
the power spectrum transforms as
\begin{eqnarray}
{\cal C}(\ell )\to
{\cal C}\Bigl(\exp [+\kappa ]~\ell \Bigr),
\end{eqnarray}
so for positive dilatations the power spectrum
in the form ${\cal C}(\ell )$ squeezes to the left in
such a way that the amplitudes of the various features remain
unchanged.
If the dilatation is small, the change in the power spectrum
is proportional to $\kappa ~{\cal C}'(\ell ).$
To make a good filter for implementing the above scheme
giving the reconstruction with the least noise, we wish to
block modes of wavenumber where ${\cal C}(\ell )$ is approximately
flat, to
allow modes of wavenumber where ${\cal C}(\ell )$ is rising
to pass with a positive phase factor,
and to
allow modes of wavenumber where ${\cal C}(\ell )$ is falling
to pass with a negative phase factor. Moreover the filter should
combine the different wavenumbers according to inverse variance
weighting in order to minimize the noise of the reconstructed dilatation field.
The quality of the inverse variance reconstruction may be characterized
quantitatively in terms of the variance of the reconstructed field.
We define a quality factor $Q$ so that
\begin{eqnarray}
(\delta \kappa )^2\approx \frac {Q}{\cal A}
\end{eqnarray}
where ${\cal A}$ is the area of the region over which the reconstructed
field has been averaged and $(\delta \kappa )$ is
the fluctuation in $\kappa $ averaged over that same area.
Here $Q$ is equal to $\sigma ^2_{\hat \kappa _0}/{\cal A}$ where
$\sigma ^2_{\hat \kappa _0}$ is given in eqn.~(\ref{SoverN}).
Implicit is the assumption
that the noise in the reconstructed dilatation field lacks long
range correlations---that is, it can be treated as white noise
beyond a certain angular coherence scale. This is indeed the case because
the filter ${\cal F}_{\kappa _0}$
blocks small wave numbers.
We assume that the area ${\cal A}$ is sufficiently large so that the white
noise regime has been reached.
\begin{figure}
\begin{center}
\includegraphics[width=15cm]{form_factor_planck_shear_all_numk280_nside1200_niter10_norm_alpha_x,x_alpha_y,x_b.pdf}
\end{center}
\caption{\baselineskip=0.5cm{
{\bf Estimator form factors at large lensing wave numbers.}
We plot
as a function of multipole number
the form factors for the dilatation, longitudinal shear, and cross shear reconstructions using the filter
described in the text.
Noise for the Planck instrument combining the 100+143+217 GHz channels is assumed using bluebook values
and combining channels in the naive way \cite{bluebook}. A deformation field
whose wavenumber constitutes the horizontal axis
was applied to unlensed maps. For the dilatation and the ``plus'' shear,
a longitudinal displacement field was used. For the ``cross'' shear a transverse deformation was applied.
As a result of symmetry considerations, the responses of the dilatation and plus shear to a transverse
deformation have vanishing expectation values (i.e., they are pure noise)
and hence are not shown. The same holds for
the cross shear response to a longitudinal deformation.
}}
\label{Fig:FormFactorsConsolidated}
\end{figure}
For a non-uniform dilatation field having a small wavenumber, the previous
analysis carries over with very small corrections. Using the same
filter as for the uniform case, one expects that
\begin{eqnarray}
\left< \kappa _{0, recon}(\boldsymbol{\theta } )\right> =\kappa _{0, exact}(\boldsymbol{\theta } ),
\end{eqnarray}
in other words, that there is negligible bias. However for larger
wavenumbers a form factor appears due to the fact that our estimator
probes the power spectrum over a window of a finite width in angular space,
and consequently smoothes
the underlying exact dilatation field thus reducing its amplitude. Quantitatively,
if the dilatation field has the form
\begin{eqnarray}
\kappa _0 \cos (\boldsymbol{\ell } \cdot \boldsymbol{\theta } +\phi ),
\end{eqnarray}
the expectation value of the reconstructed field takes the form
\begin{eqnarray}
F(\ell ) \kappa _0 \cos (\boldsymbol{\ell } \cdot \boldsymbol{\theta } +\phi )
\end{eqnarray}
and as before the noise field will have the same statistics.
$F(\ell )$ is known as the form factor and satisfies
$F(0)=1.$ $F(\ell )$
falls off for increasing $\ell,$ eventually going to zero.
The form factors are indicated in Fig.~\ref{Fig:FormFactorsConsolidated}.
The shear plus estimator experiences less smoothing than the dilatation estimator because the form of the $\ell$-space filter has less cancellations as can be seen in Fig.~\ref{Fig:TruncEst}.
In Fig.~\ref{Fig:gaussianfilters} we present an alternative, more heuristic derivation of the dilatation field
reconstruction filter which has the virtue of rendering manifest the contribution of various features in the
power spectrum to the reconstruction. This alternative filter is a combination of Gaussian filters placed at the locations in multipole space where the magnitude of the derivative of $\ell ^2~C(\ell )$ is greatest, and whose widths
are chosen to correspond roughly to the width of the rise and fall of this quantity.
In Table \ref{SNTable} the $(S/N)^2$ for each of these filters are shown and we indicate their relative statistical weights as well as the $(S/N)^2$ when they are combined using inverse variance weighting.
Somewhat surprisingly, this naive derivation yields a filter almost as efficient
as the optimal one.
\begin{figure}
\begin{center}
\includegraphics[width=12cm]{WMAP_spec_and_gauss_filters.pdf}
\end{center}
\caption{\baselineskip=0.5cm{
{\bf Alternative filter derivation.}
We generate a dilatation reconstruction filter by an alternate procedure whereby Gaussian filters are placed at the
locations in multipole space where the magnitude of the derivative of $\ell ^2~C(\ell )$ is greatest. The widths
of the filters are chosen to correspond roughly to the width of the rise and fall of this quantity.
}}
\label{Fig:gaussianfilters}
\end{figure}
\begin{table*}[htbp]
{\footnotesize
\begin{center}
\begin{tabular}{|p{3.5cm}|p{3.5cm}|p{3.5cm}|}
\hline
\centering{$\ell _{center}$} & \centering{$\sigma _\ell$} & \centering{$\left(S/N\right)^2$ $\deg^{-2}\cdot\kappa_0^{-2}$} \tabularnewline
\hline \hline
\centering{$300$} & \centering{50} & \centering{$3.13 \pm 0.26$} \tabularnewline
\hline
\centering{$470$} & \centering{40} & \centering{$1.56 \pm 0.13$} \tabularnewline
\hline
\centering{$600$} & \centering{40} & \centering{$3.02 \pm 0.25$} \tabularnewline
\hline
\centering{$770$} & \centering{40} & \centering{$0.13 \pm 0.01$}\tabularnewline
\hline
\centering{$900$} & \centering{50} & \centering{$61.51 \pm 5.02$} \tabularnewline
\hline
\centering{$1070$} & \centering{40} & \centering{$1.44 \pm 0.12$} \tabularnewline
\hline
\centering{$1200$} & \centering{50} & \centering{$30.66 \pm 2.50$} \tabularnewline
\hline
\centering{$1370$} & \centering{50} & \centering{$23.24 \pm 1.90$} \tabularnewline
\hline
\centering{$1550$} & \centering{40} & \centering{$42.53 \pm 3.47$} \tabularnewline
\hline
\multicolumn{2}{|c|}{Combined} & \centering{$167.22 \pm 13.65$} \tabularnewline
\hline
\multicolumn{2}{|c|}{Ideal} & \centering{$140.50 \pm 11.47$} \tabularnewline
\hline
\end{tabular}
\end{center}
}
\caption{\baselineskip=0.5cm{
{\bf Performance of individual feature filters.}
The first nine entries correspond to Gaussian filters whose centers and widths
are indicated and have been placed where the features of the power spectrum
change the most in response to a small dilatation. The last column expresses
the sensitivity of each filter. The next-to-last row shows $(S/N)^2$
resulting when the filters are combined using inverse covariance
weighting. This is compared to the ideal filter. The errors
result from Monte Carlo noise.
}}
\label{tab:signaltonoise}
\label{SNTable}
\end{table*}
\begin{figure}
\begin{center}
\includegraphics[width=5cm]{anisotropie_B.png}
\includegraphics[width=5.5cm]{wmap_flat_sim_bis.jpg}
\includegraphics[width=5.5cm]{wmap_sheared_sim_bis.jpg}
\end{center}
\caption{\baselineskip=0.5cm{
{\bf Distortion of power spectrum due to shear.}
In the two-dimensional plot in harmonic space the power
spectrum is shown after a pure shear transformation has
been applied turning the sequences of acoustic peaks
into concentric ellipses rather than concentric circles.
In the real space images (middle, unlensed and right, sheared)
the transformation is hardly apparent to the unaided eye.
}}
\label{FigPowerAniso}
\end{figure}
We now repeat the same analysis for
a constant pure shear deformation, for concreteness
with $\kappa _+>0$ and $\kappa _\times =0.$ There is then a stretching
along the $x$-direction with the same amount of compression
along the $y$-direction, so that the temperature map
is deformed according to
\begin{eqnarray}
T(\theta _x,\theta _y)\to
T'(\theta _x',\theta _y')=
T(e^{-\kappa _+}\theta _x,
e^{+\kappa _+}\theta _y),
\end{eqnarray}
and for the harmonic coefficients one has the following transformation law
\begin{eqnarray}
T(\ell _x,\ell _y)\to
T'(\ell _x',\ell _y')=
T(e^{+\kappa _+}\ell _x,
e^{-\kappa _+}\ell _y).
\end{eqnarray}
For any power spectrum having a shape like that of the CMB
temperature, where at all wavenumbers the spectrum is ``red''
compared to a white noise spectrum, a pure shear deformation
causes a loss of power for wavevectors oriented close to the stretched principal axis
and a corresponding increase of power for wavevectors close to the principal axis of compression.
Along the diagonal direction the power is unaffected.
The estimator developed in the first section of
this paper exploits this anisotropy to reconstruct two components
of the shear field.
The effect is illustrated in Fig. \ref{FigPowerAniso}. In panel (a)
the acoustic oscillations, which would in the absence of lensing
be visible as a series of
concentric circular rings, have as a result of a pure shear deformation
been deformed into concentric ellipses.
Panels (b) and (c) show simulated temperature maps
before and after a constant shear deformation,
here exaggerated in magnitude (with $\kappa _+=0.1$)
for clarity. One observes that the annuli
become elliptical. Unlike for the dilatation field, where
a scale-invariant spectral index implies that statistically
the pattern does not change, for pure shear an isotropic
scale invariant random field is deformed into an anisotropic
scale invariant random field, where heuristically one might
say that the series of elliptical (rather than on the average
circular) diffuse motifs have been put down
in a scale-free manner.
In Fig.~\ref{Fig:NonLin} we show the extent of the region of linear response of the dilatation and shear estimators. Ignoring higher order terms gives rise to a bias which can be corrected \cite{seljak-bis,kesden_2003}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=15cm]{response_planck_dilatation_shear_plus_numk280_numl3000_alpha_x,x_b-1.pdf}
\end{center}
\caption{\baselineskip=0.5cm{
{\bf Estimator non-linearity.}
We plot the recovered root-mean-square distortion versus the input
root-mean-square distortion in order to characterize the range
of linear response for our estimator. The input is a long-wavelength
longitudinal deformation, so that the two-dimensional shear and
dilatation are exactly half the one-dimensional dilatation.
Exactly the same non-linearity will plague the quadratic estimator
as well, since the two have been demonstrated to be equivalent
at low wavenumber (as discussed in section \ref{CompQuad}).
}}
\label{Fig:NonLin}
\end{figure}
\section{\label{Combining} Combining the $\kappa _0(\theta ),$ $\kappa _+(\theta )$ and
$\kappa _\times (\theta )$ reconstructions}
In the previous two sections we have shown how by means of
three filters
${\cal F}_{\kappa _0},$
${\cal F}_{\kappa _+ },$ and
${\cal F}_{\kappa _\times }$ we may obtain a noisy
reconstruction of the fields
$\kappa _0 (\boldsymbol{\theta } ),$
$\kappa _+ (\boldsymbol{\theta } ),$ and
$\kappa _\times(\boldsymbol{\theta } )$
in real space. We also saw how the reconstruction was
essentially finite-range in real space and
could be made of compact support with very little loss
of information relative to the optimal nonlocal,
full-sky reconstruction. The reconstructed fields are not independent
but related according to the following relations
\begin{eqnarray}
\kappa _0(\boldsymbol{\ell } ) &=& -\frac{1}{2}(\ell _x^2 +\ell _y^2 )\Phi (\boldsymbol{\ell } ),\cr
\kappa _{+}(\boldsymbol{\ell } ) &=& -\frac{1}{2}(\ell _x^2 -\ell _y^2 )\Phi (\boldsymbol{\ell } ),\cr
\kappa _{\times }(\boldsymbol{\ell } ) &=& -\ell _x\ell _y\Phi (\boldsymbol{\ell } ),
\end{eqnarray}
which are local in harmonic space.
These consistency relations, expressed in terms of the
lensing potential $\Phi ,$ can be used to harmonize the
reconstruction. Even though there are three reconstructed
fields, the cross shear, which would result from a transverse
deformation that cannot be produced by weak lensing, is pure
noise. The noises present in the longitudinal shear and the
dilatation reconstruction are uncorrelated because one
involves the $m=0$ sector and the other the $m=\pm 2$ sector.
Harmonization by inverse variance weighting reduces the noise.
If we consider the noise in the
reconstruction field to lowest order in the perturbation
expansion (as 4-point functions of the underlying Gaussian
field) ignoring higher-order terms, then there are no
correlations between reconstructed fields components of
differing wavenumber.
Consequently, in order to obtain the lowest noise
estimator of $\Phi ,$ it suffices to consider each
$\boldsymbol{\ell } $ sector separately and to use the covariance
matrix of the noise in
$\kappa _0(\boldsymbol{\ell } ),$
$\kappa _{+}(\boldsymbol{\ell } ),$ and
$\kappa _{\times }(\boldsymbol{\ell } )$
as a basis for the optimal harmonization.
\section{Comparison with the quadratic estimator}
\label{CompQuad}
In this section we show how the dilatation and shear
estimators developed in this paper
are related to the linearly optimal
quadratic estimator for the lensing field \cite{hu2001a,hu2001b}.
We demonstrate that in the low-$\ell $ limit
the quadratic estimator becomes a linear combination
of our dilatation and longitudinal shear
estimators. In fact in this limit the linearly
optimal quadratic estimator
is identical to the inverse variance weighted linear combination
of the
dilatation and longitudinal shear estimators, whose
noises are uncorrelated.
At higher $\ell ,$ the
approximation used to derive this relation progressively
breaks down. To quantify this degradation, we plot in Fig.
\ref{Fig:CompQuadEst} the
increase in variance of the combined dilatation
plus longitudinal shear estimators.
For the reasons already presented concerning the
minute statistical weight contributed by
small multipoles, we find it adequate to
work in the flat sky approximation.
The relation in real space (valid to linear order)
\begin{eqnarray}
\delta T(\boldsymbol{\theta } )=\nabla \Phi (\boldsymbol{\theta } )\cdot \nabla T(\boldsymbol{\theta } )
\end{eqnarray}
in harmonic space translates into
\begin{eqnarray}
\delta T(\boldsymbol{\ell } )=
\int \frac {d^2\ell '}{(2\pi )^2}
(-\boldsymbol{\ell } ')\cdot (\boldsymbol{\ell } -\boldsymbol{\ell } ')~
\Phi (\boldsymbol{\ell } ')~
T(\boldsymbol{\ell } -\boldsymbol{\ell } ')
\end{eqnarray}
where we define
\begin{eqnarray}
\left< T(\boldsymbol{\ell } )T(\boldsymbol{\ell } ') \right>
=(2\pi )^2 \delta ^2(\boldsymbol{\ell } +\boldsymbol{\ell } ')~C(\ell ).
\end{eqnarray}
Since $T_{sky}=T+\delta T,$ it follows that to leading (linear)
order
\begin{eqnarray}
&&
\left<
T_{sky} \left( \frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right)
T_{sky} \left( \frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)
\right>\cr
&=&\Phi (\boldsymbol{\ell } ')\times
\left[
-\left( \frac{\boldsymbol{\ell }'}{2}-\boldsymbol{\ell } \right)\cdot \boldsymbol{\ell } '~
C\left(\left| \boldsymbol{\ell } -\frac{\boldsymbol{\ell }'}{2} \right|\right)
-\left( \frac{\boldsymbol{\ell }'}{2}+\boldsymbol{\ell } \right)\cdot \boldsymbol{\ell } '~
C\left(\left| \boldsymbol{\ell } +\frac{\boldsymbol{\ell }'}{2} \right|\right)
\right] .
\label{eqn_quad}
\end{eqnarray}
It follows that
\begin{eqnarray}
\hat \Phi (\boldsymbol{\ell } ')=N^{-1}\sum _{\boldsymbol{\ell } }
W(\boldsymbol{\ell }'; \boldsymbol{\ell } )~
T_{sky} \left( \frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right)~
T_{sky} \left( \frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)
\end{eqnarray}
is the minimum variance unbiased estimator for
the Fourier coefficient $\Phi (\boldsymbol{\ell } ')$ where
\begin{eqnarray}
W(\boldsymbol{\ell } '; \boldsymbol{\ell } )=
\frac{
\left[
-\left( \frac{\boldsymbol{\ell }'}{2}-\boldsymbol{\ell } \right)\cdot \boldsymbol{\ell } '~
C\left(\left| \boldsymbol{\ell } -\frac{\boldsymbol{\ell }'}{2} \right|\right)
-\left( \frac{\boldsymbol{\ell }'}{2}+\boldsymbol{\ell } \right)\cdot \boldsymbol{\ell } '~
C\left(\left| \boldsymbol{\ell } +\frac{\boldsymbol{\ell }'}{2} \right|\right)
\right]
}{
\left[
C\left(\boldsymbol{\ell } -\frac{\boldsymbol{\ell } '}{2}\right)+
N\left(\boldsymbol{\ell } -\frac{\boldsymbol{\ell } '}{2}\right)
\right]
\left[
C\left(\boldsymbol{\ell } +\frac{\boldsymbol{\ell } '}{2}\right)+
N\left(\boldsymbol{\ell } +\frac{\boldsymbol{\ell } '}{2}\right)
\right]
}
\label{eqnW}
\end{eqnarray}
and
\begin{eqnarray}
N=\sum _{\boldsymbol{\ell } }
\frac{
\left[
-\left( \frac{\boldsymbol{\ell }'}{2}-\boldsymbol{\ell } \right)\cdot \boldsymbol{\ell } '~
C\left(\left| \boldsymbol{\ell } -\frac{\boldsymbol{\ell }'}{2} \right|\right)
-\left( \frac{\boldsymbol{\ell }'}{2}+\boldsymbol{\ell } \right)\cdot \boldsymbol{\ell } '~
C\left(\left| \boldsymbol{\ell } +\frac{\boldsymbol{\ell }'}{2} \right|\right)
\right] ^2
}{
\left[
C\left(\boldsymbol{\ell } -\frac{\boldsymbol{\ell } '}{2}\right)+
N\left(\boldsymbol{\ell } -\frac{\boldsymbol{\ell } '}{2}\right)
\right]
\left[
C\left(\boldsymbol{\ell } +\frac{\boldsymbol{\ell } '}{2}\right)+
N\left(\boldsymbol{\ell } +\frac{\boldsymbol{\ell } '}{2}\right)
\right]
}.
\end{eqnarray}
We may approximate the quantity in the square brackets in eqn.~(\ref{eqn_quad})
to linear order to obtain
\begin{eqnarray}
&&
-\left( \frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)\cdot \boldsymbol{\ell } ' ~C\left( \left| \boldsymbol{\ell } -\frac{\boldsymbol{\ell } '}{2}\right| \right)
-\left( \frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right)\cdot \boldsymbol{\ell } ' ~C\left( \left| \boldsymbol{\ell } +\frac{\boldsymbol{\ell } '}{2}\right| \right) \cr
&=&-{\ell '}^2~C(\ell )-(\boldsymbol{\ell } \cdot \boldsymbol{\ell } ')^2\frac{1}{\ell }
\frac{\partial C(\ell )}{\partial \ell }\cr
&=&-\frac{1}{2}\frac{{\ell '}^2}{\ell ^2}\frac{\partial [\ell ^2C(\ell )]}{\partial \ln [\ell ]}
-
\left( \frac{
(\boldsymbol{\ell } \cdot \boldsymbol{\ell } ')^2-\frac{1}{2}{\ell '}^2\ell ^2
}{\ell ^2}\right)
\frac{\partial [C(\ell )]}{\partial \ln [\ell ]}\cr
&=&-\frac{1}{2}{\ell '}^2\left\{
\frac{1}{\ell ^2}
\frac{\partial [\ell ^2C(\ell )]}{\partial \ln [\ell ]}
+\cos [2\Theta ]
\frac{\partial [C(\ell )]}{\partial \ln [\ell ]}
\right\}
\label{eqnN}
\end{eqnarray}
where $\Theta $ is the angle between $\boldsymbol{\ell } $ and $\boldsymbol{\ell } '.$
The expression is remarkably similar to a linear combination
of the expressions appearing in the dilatation and the pure shear
reconstruction of the previous section.
\begin{figure}[t]
\vskip-0.5cm
\begin{center}
\includegraphics[width=15cm]{comp_quad_d_plus_sl.pdf}
\end{center}
\vskip-0.5cm
\caption{\baselineskip=0.5cm{
{\bf Comparison with quadratic estimator.}
We plot the noises of the various estimators
compared to the expected signal (heavy red curve). The quadratic estimator is indicated
in thick black. The dilatation and shear estimators are shown in dashed red and green,
respectively, and when combined nominally give the dashed magenta curve, but when the
imperfect overlap with the expected signal is taken into account, yield the solid
magenta curve. The blue curve would be indicative of the actual noise in the recovered
maps, but if the imperfect overlap were corrected to remove the bias at high
$\ell $
the heavy magenta curve would result.
For comparison we show the predicted lensing signal (as computed by CAMB
for the WMAP best fit model) as the heavy red curve.
}}
\label{Fig:CompQuadEst}
\end{figure}
Approximating the numerator of (\ref{eqnW}) as (\ref{eqnN})
and the denominator of (\ref{eqnW}) as
$[ C(\ell )+ N(\ell )]^2,$ we may write the
three estimators in the following unified manner:
\begin{eqnarray}
\hat D(\boldsymbol{\ell } ')&=&
\frac{1}{2}
\frac{\cal A}{N_D}
\int \frac{d^2\ell }{(2\pi )^2}
\frac{
{[C(\ell )]}
}
{[C(\ell )+N(\ell )]^2
}
\left[
\frac{\partial (\ln [\ell ^2C(\ell )])}{\partial [\ln (\ell )]}
\right] ~
T_{sky}\left( \frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right) ~
T_{sky}\left( \frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)\cr
\hat S_L(\boldsymbol{\ell } ')&=&
\frac{1}{2}
\frac{\cal A}{N_{S_L}}
\int \frac{d^2\ell }{(2\pi )^2}
\frac{
{[C(\ell )]}
}
{[C(\ell )+N(\ell )]^2
}
\left[
\cos [2\Theta (\boldsymbol{\ell } ,\boldsymbol{\ell } ')]
\frac{\partial (\ln [C(\ell )])}{\partial [\ln (\ell )]}
\right] ~
T_{sky}\left( \frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right) ~
T_{sky}\left( \frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)\cr
\hat Q(\boldsymbol{\ell } ')&=&
\frac{1}{2}
\frac{\cal A}{N_{Q}}
\int \frac{d^2\ell }{(2\pi )^2}
\frac{
{[C(\ell )]}
}
{[C(\ell )+N(\ell )]^2
}
\left[
\frac{\partial (\ln [\ell ^2C(\ell )])}{\partial [\ln (\ell )]}
+
\cos [2\Theta (\boldsymbol{\ell } ,\boldsymbol{\ell } ')]
\frac{\partial (\ln [C(\ell )])}{\partial [\ln (\ell )]}
\right] \cr
&&\qquad \times
T_{sky}\left( \frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right) ~
T_{sky}\left( \frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)
\end{eqnarray}
where
\begin{eqnarray}
N_D(\boldsymbol{\ell } ')&=&
\frac{1}{2}
{\cal A}
\int \frac{d^2\ell }{(2\pi )^2}
\frac{
{[C(\ell )]}^2
}
{[C(\ell )+N(\ell )]^2
}
\left[
\frac{\partial (\ln [\ell ^2C(\ell )])}{\partial [\ln (\ell )]}
\right] ^2\cr
N_{S_L}(\boldsymbol{\ell } ')&=&
\frac{1}{2}
{\cal A}
\int \frac{d^2\ell }{(2\pi )^2}
\frac{
{[C(\ell )]}^2
}
{[C(\ell )+N(\ell )]^2
}
\left[
\cos [2\Theta (\boldsymbol{\ell } ,\boldsymbol{\ell } ')]
\frac{\partial (\ln [C(\ell )])}{\partial [\ln (\ell )]}
\right] ^2\cr
N_Q(\boldsymbol{\ell } ')&=&
\frac{1}{2}
{\cal A}
\int \frac{d^2\ell }{(2\pi )^2}
\frac{
{[C(\ell )]}^2
}
{[C(\ell )+N(\ell )]^2
}
\left[
\frac{\partial (\ln [\ell ^2C(\ell )])}{\partial [\ln (\ell )]}
+
\cos [2\Theta (\boldsymbol{\ell } ,\boldsymbol{\ell } ')]
\frac{\partial (\ln [C(\ell )])}{\partial [\ln (\ell )]}
\right] ^2
\end{eqnarray}
and $N_Q=N_D+N_{S_L}$ because the cross term cancels under the integral.
The quantities $N_D,$ $N_{S_L},$ $N_Q$ express the amount of information provided
by the respective estimators, and we have shown that in the limit where $\boldsymbol{\ell } '$
is small
\begin{eqnarray}
\hat Q(\boldsymbol{\ell } ')=
\frac{N_D(\ell ')} {N_D(\ell ')+N_{S_L}(\ell ')}
\hat D(\boldsymbol{\ell } ')
+
\frac{N_{S_L}(\ell ')} {N_D(\ell ')+N_{S_L}(\ell ')}
\hat S_L(\boldsymbol{\ell } ').
\end{eqnarray}
The exact expressions in Fourier space for the dilatation and shear statistics at not necessarily small
wavenumber $\boldsymbol{\ell } '$ are
\begin{eqnarray}
\hat D(\boldsymbol{\ell } ')&=&\frac{{\cal A}}{2}{1\over N_D(\ell ')}
\int \frac{d^2\ell }{(2\pi )^2}
\left\{
F\left(\frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right) +
F\left(\frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)
\right\}~
T_{sky}\left( \frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right) ~
T_{sky}\left( \frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)~\cr
\hat S_L(\boldsymbol{\ell } ')&=&\frac{{\cal A}}{2}{1 \over N_{S_L}(\ell ')}
\int \frac{d^2\ell }{(2\pi )^2}
\biggl\{
G\left(\frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right)
\cos \left[ 2\Theta \left( \frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } , \boldsymbol{\ell } '\right) \right]\cr
&&\qquad\qquad+
G\left(\frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)
\cos \left[ 2\Theta \left( \frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } , \boldsymbol{\ell } '\right) \right]
\biggr\}
T_{sky}\left( \frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right) ~
T_{sky}\left( \frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)~
\end{eqnarray}
where
\begin{eqnarray}
&&F(\bar \ell )=
\frac{C(\bar \ell )}
{[C(\bar \ell )+N(\bar \ell )]^2}
\frac{\partial ( \ln [\bar \ell ^2C(\bar \ell)]) }
{\partial (\ln [\bar \ell ]) },\cr
&&G(\bar \ell )=
\frac{C(\bar \ell )}
{[C(\bar \ell )+N(\bar \ell )]^2}
\frac{\partial ( \ln [C(\bar \ell)]) }
{\partial (\ln [\bar \ell ]) }.
\end{eqnarray}
To express compactly the increase in variance due to the non-optimality for $\boldsymbol{\ell } '$
away from zero, it is useful to define the inner product
\begin{eqnarray}
\left< \hat A, \hat B\right> =
\frac{{\cal A}}{2}\int \frac{d^2\ell }{(2\pi )^2}
(C+N)\left( \frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right)
(C+N)\left( \frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)
A(\boldsymbol{\ell } )B(\boldsymbol{\ell } )
\end{eqnarray}
where
\begin{eqnarray}
\hat A =
\frac{{\cal A}}{2}{1\over N_A}
\int \frac{d^2\ell }{(2\pi )^2}~A(\boldsymbol{\ell } )~
T_{sky}\left( \frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right) ~
T_{sky}\left( \frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)~
\end{eqnarray}
and $\hat B$ is defined analogously.
In this notation the various estimators have the following shape functions:
\begin{eqnarray}
Q(\boldsymbol{\ell } ;\boldsymbol{\ell } ')&=&
\frac{2}{(\ell ')^2}
\frac{
-\left( \frac{\boldsymbol{\ell }'}{2}-\boldsymbol{\ell } \right)\cdot \boldsymbol{\ell } '~
C\left(\left| \boldsymbol{\ell } -\frac{\boldsymbol{\ell }'}{2} \right|\right)
-\left( \frac{\boldsymbol{\ell }'}{2}+\boldsymbol{\ell } \right)\cdot \boldsymbol{\ell } '~
C\left(\left| \boldsymbol{\ell } +\frac{\boldsymbol{\ell }'}{2} \right|\right)
}{
(C+N)\left( \frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right)
(C+N)\left( \frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)
}\cr
D(\boldsymbol{\ell } , \boldsymbol{\ell } ')&=&
\frac{1}{2}
\left\{
F\left(\frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right) +
F\left(\frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)
\right\},\cr
S_L(\boldsymbol{\ell } , \boldsymbol{\ell } ')&=&
\frac{1}{2}
\Biggl\{
G\left(\frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } \right)
\cos \left[ 2\Theta \left( \frac{\boldsymbol{\ell } '}{2}+\boldsymbol{\ell } , \boldsymbol{\ell } '\right) \right]
\cr
&&\qquad +
G\left(\frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } \right)
\cos \left[ 2\Theta \left( \frac{\boldsymbol{\ell } '}{2}-\boldsymbol{\ell } , \boldsymbol{\ell } '\right) \right]
\Biggr\} .
\end{eqnarray}
It follows that the increase in the variance is given by the
following expression (the squared secant of the angle between the two kernels):
\begin{eqnarray}
\frac{
\sigma ^2(D)
}{
\sigma ^2(Q)
}=
\frac{
\left< \hat Q, \hat Q\right>
\left< \hat D, \hat D\right>
}{
\left< \hat Q, \hat D\right> ^2} .
\end{eqnarray}
In Fig.~\ref{Fig:CompQuadEst} we indicate the performance of the several estimators at finite $\ell '.$
We plot
$C_{\ell}^{(\delta \kappa_0)(\delta \kappa _0)}$ (i.e., the expectation value of the square modulus of the harmonic coefficient on the sphere
for the reconstruction noise) for the estimators $Q,$ $D,$ and $S_L,$ in black, dashed red and dashed green, respectively. The dashed magenta curve indicates the noise obtained by combining
$D$ and $S_L$ using inverse variance weighting. The solid magenta curve indicates the correction when the lack of overlap is taken into account to render the estimator unbiased at large wavenumber. We assume an instrument noise
combining the Planck 100+143+217 GHz channels according to the specifications
given in the bluebook \cite{bluebook}.
At low wavenumber, $Q$ exhibits a flat (i.e., white noise)
spectrum, which subsequently diverges at large $\ell '.$ At very low $\ell '$
we observe that the noise from $Q$ is the same as the noise from $D$ and $S_L$
combined in quadrature, as shown theoretically in the text, but at higher
$\ell '$ the approximations used break down and a noise excess is observed.
We observe that for $\ell \ltorder 70,$ the difference in performance between
the estimator developed here and the linearly optimal quadratic
estimator is minimal. At higher wavenumbers beyond $\ell \gtorder
100 ,$ however, the variance increases rapidly due to lack of overlap with the ideal
kernel. There is a priori no reason why a real space approach
could not be extended to higher wavenumbers for the lensing field.
However, in the present paper we do not explore how this would work.
\section{Concluding remarks}
We have demonstrated how to reconstruct the weak
gravitational lensing field in real space using a
filter of compact support. The lensing field is represented
here as three fields: a dilatation field $\kappa _0(\boldsymbol{\theta } )$ and
the two components of the pure shear distortion field
$\kappa _+(\boldsymbol{\theta } )$ and $\kappa _\times (\boldsymbol{\theta } ).$
The three fields are related by
a set of nonlocal consistency conditions, which may subsequently
be exploited to reduce the noise of the reconstruction.
Except for an integration constant
and two translational and one rotational zero modes, the weak lensing may alternatively
and equivalently
be described by either (1) a gravitational lensing potential
$\Phi (\boldsymbol{\theta } ),$ (2) a displacement field ${\boldsymbol {\xi}}(\boldsymbol{\theta } ),$
or (3) the dilatation field $\kappa _0(\boldsymbol{\theta } )$ and
the two components of the pure shear distortion field
$\kappa _+(\boldsymbol{\theta } )$ and $\kappa _\times (\boldsymbol{\theta } ).$
In this paper we argue that for the purpose of reconstruction
the representation (3) is advantageous because this is the
representation for which the lensing field bears a local relation
to the real space CMB maps. This locality comes at a price
because the three components are not independent and are subject
to nonlocal consistency conditions, which may be exploited
to improve the reconstruction.
Locality allows different regions of the sky to be analyzed
independently in a natural way, quite unlike the quadratic
reconstruction in harmonic space, where the entire sky must
be analyzed simultaneously. This approach and variations thereof
hold promise for dealing with partial sky coverage and
excised point sources.
For the filters developed in this paper there is very little loss
of information for the lensing field at low wavenumbers. However
at larger wavenumbers the lensing signal is attenuated according
to a wavenumber dependent form factor, which can be deconvolved
by applying a correction filter.
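The correction-filter step can be sketched as follows; the Gaussian form factor and the conditioning cutoff below are assumptions for illustration, not the actual $F(\ell)$ of the filters developed in this paper:

```python
import numpy as np

ells = np.arange(2, 301)
# Assumed illustrative form factor; the actual F(l) of the compact-support
# filter would be substituted here.
F = np.exp(-(ells / 150.0) ** 2)

rng = np.random.default_rng(1)
kappa_true = rng.normal(size=ells.size)   # stand-in for the true lensing multipoles
kappa_obs = F * kappa_true                # attenuated reconstruction

# Correction filter: divide out F(l) only where it is well conditioned,
# leaving the heavily attenuated high-l modes untouched.
good = F > 0.1
kappa_corr = kappa_obs.copy()
kappa_corr[good] = kappa_obs[good] / F[good]
```

Where the form factor is well conditioned the deconvolution is exact; beyond the cutoff the noise amplification would outweigh the recovered signal.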
\vspace{0.6cm}
\noindent
\textbf{Acknowledgements:}
MB and KM acknowledge support from a joint CNRS/NRF travel grant.
The work of MB, CSC and MR was supported in part by the Projet
Blanc VIMS-PLANCK of the Agence Nationale de la Recherche.
KM and CSC are supported by the South African National Research
Foundation.
\section{Speaker Biographies}
\noindent \textbf{Preetha Chatterjee}, \textit{Ph.D. (University of Delaware),} is an Assistant Professor at Drexel University. Her research interests are in improving software engineers’ tools and environments through empirical data analysis, natural language processing and machine learning. She serves on the OC/PC for several conferences such as ICSE, MSR, ICSME, and SANER.
\smallskip
\noindent \textbf{Tushar Sharma}, \textit{PhD (AUEB, Greece), MS (IIT-Madras, India),} is an assistant professor at Dalhousie University. His research interests include software quality, refactoring, and applied machine learning for software engineering. He worked with Siemens Research for more than nine years. He co-authored Refactoring for Software Design Smells: Managing Technical Debt and two Oracle Java certification books. He founded and developed Designite, a software design quality assessment tool used by many practitioners and researchers worldwide. He is an IEEE Senior Member.
\smallskip
\noindent \textbf{Paul Ralph}, \textit{PhD (British Columbia),} is an award-winning scientist, author, consultant, and Professor of Software Engineering at Dalhousie University. His research intersects software engineering, human-computer interaction, and project management. Paul is the editor of the Software Engineering Empirical Standards.
\section{Overview of Empirical Standards Project} \label{sec:overview}
The Empirical Standards project aims to improve review quality, consistency, and predictability in the peer-review process for empirical studies by creating brief public documents that model our community's expectations for empirical research \cite{ralph2020empirical}. Creating and evolving these standards, and then reconstituting peer review processes around them, should also improve research quality and consensus.
The project started with the ACM SIGSOFT special initiative to improve paper and peer review quality \cite{ralph2020acm}. Two years and 50+ contributors later, Ralph \textit{et al.} \cite{ralph2021acm} released the standards and a prototype reviewing tool \cite{arshad2021towards}. The standards and review tools are available online.\footnote{\url{https://acmsigsoft.github.io/EmpiricalStandards/docs/}} The quickest way to grasp what is meant by an ``empirical standard'' is to go to the standards website and look at a standard for a familiar methodology.
Critically, the standards are method-specific. The standard for experiments is very different from the standard for case studies. The standard for questionnaire surveys is very different from the standard for simulations. Standards must be method-specific to foster diversity in research and avoid cross-paradigm criticism (e.g. one should not criticize a case study for lack of generalizability because that's not what a case study is for \cite{stol2018abc}). However, the standards share a common format, including several sections:
\begin{itemize}
\item Application: how to determine whether this standard applies to a given manuscript
\item Essential attributes: properties a manuscript must have to be acceptable in any peer reviewed venue
\item Desirable attributes: properties that may enhance the rigor and quality of a manuscript but are not always necessary
\item Extraordinary attributes: properties associated with award-quality research
\item Anti-patterns: common problems seen in this kind of study
\item Invalid criticisms: critiques reviewers should not make about this kind of study
\item Suggested reading: references to helpful works about the methodology
\item Exemplars: published manuscripts that effectively demonstrate some (not necessarily all) of the essential, desirable or extraordinary attributes.
\end{itemize}
Each standard was initially developed by a small team of scientists experienced in that method. Much care was taken to craft the essential attributes, as these will determine whether a manuscript is accepted for publication.
However, research methods constantly evolve along with associated expectations for them;
hence, this project aims to continually update the standards to foster and incorporate emerging expectations. An empirical standard is supposed to \textit{model}, not \textit{set}, a community's expectations around rigor. We therefore encourage interested readers to suggest improvements to the standard by raising a pull request on the GitHub repository.\footnote{\url{https://github.com/acmsigsoft/EmpiricalStandards}}
\section{Motivation and Objectives}
To improve peer review and paper quality, the standards must be adopted in several ways: researchers using the standards to design studies and prepare manuscripts; reviewers using the standards to evaluate manuscripts; editors and program chairs using the standards to define quality for their venues. This motivates the twin aims of our tutorial:
\begin{enumerate}
\item to communicate the contents of the repository mining standard to attendees and help attendees understand how to apply the standards in their various roles;
\item to hear attendees' feedback on the repository mining standard, understand their challenges and concerns (if any), and conceive potential improvements to the standard.
\end{enumerate}
\section{Tutorial Format}
The tutorial comprises three parts. It will begin with a brief overview of the empirical standards project.
During the overview, we will explain the motivation for and goals of the empirical standards, as well as how the repository mining standard was developed, and the process for updating and improving the standards.
In the second part of the tutorial,
we will explain how to interpret and use the repository mining standard. We will explain its main elements, as follows.
\begin{description}
\item [Application:]
The standard applies to software engineering studies that use automated techniques to extract data from large-scale data repositories and quantitatively analyze the contents mined from the repositories.
\item [Essential Attributes:]
The standard specifies the minimum essential attributes that the study must explain.
These attributes include the description and justification of data sources, repository selection criteria, and the procedure of data extraction.
\item [Desirable and Extraordinary Attributes:]
The standard summarizes attributes that are not universally necessary but tend to improve the quality of this kind of study.
These attributes include aspects related to supplementary material, hypothesis testing, qualitative analysis of construct validity and dataset quality.
The standard also discusses extraordinary attributes such as establishing causality among the studied variables.
\item [Anti-patterns:]
The standard discusses several anti-patterns that a repository mining study must avoid, including limiting analysis to quantitative description, convenience sampling without good selection criteria, and
presenting insufficient details about the data processing steps.
\item [Invalid Criticisms:]
The standard provides a list of common but invalid criticisms, such as
demanding that more repositories be included,
or expecting different sources of repositories or data than those selected and justified in the study. Reviewers should abstain from lodging such criticisms.
\item [Suggested Reading:]
The standard lists a set of articles from the community that provide comprehensive treatment to one or more aspects included in the standard.
\item [Exemplars:]
The standard includes references to some good examples of software repository mining studies.
\end{description}
\balance
In the third part of the tutorial,
the presenters will open the floor to the attendees to provide feedback on the current standard.
Additionally, suggestions will be sought to improve its usefulness and adoption by the research community. We hope that up to half of the session can be dedicated to discussion and feedback.
\section{Acknowledgments}
\begin{acks}
We would like to acknowledge the amazing support of everyone who contributed to the Empirical Standards for Software Engineering Research. A complete list of contributors is available at: \newline \url{https://acmsigsoft.github.io/EmpiricalStandards/people/}
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\label{s:intro}
\noindent
Most galaxies harbour a supermassive black hole (SMBH) of mass
$10^6$--$10^{10}\Msun$ at their centres. The SMBH is typically
surrounded by a dense stellar system, which is sometimes a distinct
cluster and sometimes a smooth inward continuation from larger radii
of the galaxy's stellar distribution.
We focus in this paper on the near-Keplerian region where the
gravitational force is dominated by the SMBH. The dynamical behavior
of the stars in this region involves the following processes
\citep[e.g.,][hereafter KT11]{2011MNRAS.412..187K}. (i) To a first approximation,
the stars follow eccentric Keplerian orbits with orbital periods
$P=1$--$10^4\yr$ (for the sake of
concreteness, all numerical estimates are for the near-Keplerian
region of the Milky Way between $0.001\pc$ and $\sim 1\pc$ of
the central black hole at Sgr A*). (ii) On longer timescales,
$10^3$--$10^5\yr$, the spherical component of the
gravitational field from the stellar system and relativistic effects
lead to apsidal precession (retrograde and prograde, respectively) of
the stellar orbits. (iii) Non-spherical components of the
gravitational field from the stellar system lead to diffusion in the
orientation of the orbits on even longer timescales,
$10^5$--$10^7\yr$. (iv) Non-axisymmetric torques between individual
stellar orbits lead to diffusion of the eccentricities of the orbits
on timescales of $10^7$--$10^{10}\yr$. Processes (iii) and (iv) are
called vector and scalar resonant relaxation, respectively
\citep{1996NewA....1..149R}. (v) Finally, the semimajor axes diffuse
due to two-body encounters and dynamical friction on timescales
$\gtrsim 10^9\yr$. A review of these and other dynamical
processes in galactic nuclei is given in \cite{2013degn.book.....M}.
A rough guide to the relevant timescales is obtained by
considering a cluster of $N\gg1$ stars of mass $m$ surrounding a
central mass $M_\bullet$, with $Nm\ll M_\bullet$. If the typical
orbital radius is $a$ and the corresponding orbital period is
$P=2\pi(a^3/GM_\bullet)^{1/2}$, then
\newpage
\begin{itemize}[leftmargin=0.5cm,itemsep=1ex]
\item the apsidal precession time is $\sim P\,M_{\bullet}/(Nm)$;
\item the orbital planes are re-oriented on the vector resonant relaxation
timescale, $\sim P\, M_{\bullet}/(m\sqrt{N})$;
\item the eccentricities are re-distributed on the scalar resonant
relaxation timescale, $\sim P\, M_{\bullet}/m$;
\item the semimajor axes diffuse on the two-body or non-resonant
relaxation timescale, $\sim P\, M_{\bullet}^2/(m^2N)$.
\end{itemize}
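For concreteness, these scalings can be evaluated numerically. The parameter values below ($M_\bullet=4\times10^6\,M_\odot$, $N=10^6$ stars of $m=1\,M_\odot$ at $a=0.1\,$pc) are illustrative choices for a Milky-Way-like nucleus, not results from this paper:

```python
import math

M_bh = 4.0e6        # SMBH mass [Msun]; Milky Way value
N = 1.0e6           # number of stars (assumed for this estimate)
m = 1.0             # stellar mass [Msun] (assumed)
a_pc = 0.1          # typical semimajor axis [pc]

a_au = a_pc * 206_264.8                 # 1 pc = 206264.8 au
P = math.sqrt(a_au**3 / M_bh)           # orbital period [yr]; Kepler III in au/Msun/yr units

t_prec = P * M_bh / (N * m)             # apsidal precession
t_vrr  = P * M_bh / (m * math.sqrt(N))  # vector resonant relaxation
t_srr  = P * M_bh / m                   # scalar resonant relaxation
t_nr   = P * M_bh**2 / (m**2 * N)       # non-resonant (two-body) relaxation

for name, t in [("P", P), ("t_prec", t_prec), ("t_vrr", t_vrr),
                ("t_srr", t_srr), ("t_nr", t_nr)]:
    print(f"{name:7s} ~ {t:8.1e} yr")
```

Since $Nm\ll M_\bullet$, the hierarchy $P \ll t_{\rm prec} \ll t_{\rm vrr} \ll t_{\rm srr} \ll t_{\rm nr}$ follows automatically from these expressions.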
The large number of stars ($\sim 10^7$) and vast range of spatial and
temporal scales ($10^{-6}$--$1\pc$ and 10--$10^{10}\yr$), as well as
the long-range spatial and temporal correlations of the forces
involved in resonant relaxation, prohibit the accurate dynamical
modeling of these environments with the tools used for stellar
clusters, namely Fokker--Planck calculations and direct N-body
integrations. However, the hierarchy of timescales in near-Keplerian
stellar systems leads to adiabatic invariants, and algorithms that
enforce their conservation can increase numerical accuracy and
decrease computational demands. For example, by averaging over
timescales long compared to the orbital period but short compared to
the apsidal precession timescale, we obtain Gauss's method for secular
dynamics \citep{2009MNRAS.394.1085T}, in which each body on an
eccentric orbit is replaced by a ``wire'' on which the linear density
is proportional to the corresponding residence time, i.e., inversely
proportional to the velocity. On even longer timescales, we can
average the wires over the apsidal precession timescale and thereby
represent them with annuli. Since these structures are stationary and
axisymmetric, the energy and magnitude of the angular momentum of a
stellar orbit are conserved but the direction of the angular momentum
is not; in other words the geometry of the annulus (periapsis,
apoapsis, and surface density) is fixed, but its orientation is
not. Vector resonant relaxation (hereafter VRR) is the stochastic process arising from
the gravitational interaction of these annuli, leading to relaxation
of their orientations.
Here we describe a new symplectic integrator, \textsc{n-ring}, which
follows VRR in near-Keplerian stellar systems.
First, we derive the surface density of the annulus describing an
eccentric stellar orbit by averaging over orbital phase and apsidal
angle. Next we derive the corresponding secular Hamiltonian describing
the interaction between a pair of stars. The resulting equations of
motion for a pair of stars can be solved analytically. We construct a
symplectic integrator by combining the effects of the pairwise
interactions. We parallelize, refine, and optimize the algorithm by
evaluating independent pairs in parallel, and by evaluating the
strongest interactions with a smaller timestep than the weaker ones.
We use \textsc{n-ring} to study VRR in spherical
near-Keplerian stellar systems containing up to 16k stars. We
measure the temporal correlation function of the orbit normals and
determine the timescales for relaxation and complete mixing as a
function of the semimajor axis, eccentricity, and stellar mass
distributions. We construct a simple model of the relaxation process
as a Markovian random walk on a sphere and show that this provides a good
representation of the numerical results. We also provide empirical
formulae that can be used to estimate the VRR timescale in spherical systems.
\section{Secular evolution}
\subsection{Hamiltonian for vector resonant relaxation}\label{s:Hamiltonian}
We consider a system of $N$ stars, of masses $m_i$ with $ i \in
\{1,2,\dots,N\}$, orbiting an SMBH of mass $M_\bullet$ located at the
origin. We denote the Keplerian orbit by $\bm{r}_{i}(t)$ and the
semimajor axis, eccentricity, and period by $a_i$, $e_i$, and
$P_i\equiv 2\pi/\Omega_i$ with $\Omega_i \equiv (G
M_{\bullet}/a_i^3)^{1/2}$. We make the following assumptions:
\begin{enumerate}[itemsep=1.5ex,leftmargin=0.5cm]
\item \label{i:mass} the mass in stars is much less than the mass of
the SMBH, $\sum_im_i\ll M_\bullet$, although the number of stars $N\gg1$;
\item \label{i:binary} there are no binaries (although binaries with semimajor
axes much less than the system size can be treated as single
stars over the timescales considered here);
\item\label{i:GR} the stellar system is sufficiently far from the SMBH
that each star follows an approximately Keplerian orbit around the
SMBH;
\item\label{i:apsidal} the apsidal precession time of each orbit is
much longer than the longest orbital period in the stellar system;
\item\label{i:apse} the apsidal precession time of each orbit is
much shorter than the shortest orbital plane re-orientation time\footnote{\label{foot:prec}
This assumption fails for a small fraction of stars with eccentricity very close
to unity, since the angular momentum goes to zero as $e\to 1$ so
even a tiny torque will rapidly re-orient the orbit. More precisely, the
apsidal precession rates due to the mean mass distribution and due
to general relativity vary as $(1-e^2)^{1/2}$ and $(1-e^2)^{-1}$
respectively, while the re-orientation rates due to VRR and due to
Lense--Thirring precession vary as $(1-e^2)^{-1/2}$ and
$(1-e^2)^{-3/2}$.};
\item all orbital and apsidal precession periods are incommensurate,
so mean-motion and apsidal secular resonances do not play a role;
\item\label{i:nodal} the Newtonian potential of the stellar cluster is
the main driver of the re-orientation of orbital planes, as opposed
to either Lense--Thirring precession or a massive perturber (e.g., a second black hole, a galactic
bar, or a molecular torus).
\end{enumerate}
These assumptions may be satisfied for most stars and compact objects
between $\sim 0.001$ and $\sim0.2\pc$ in the Galactic center on timescales
$10^5$--$10^7\,$yr (see KT11). In particular, assumption~\ref{i:mass}
requires that the apoapsides $r_{a,i}=a_i(1+e_i)$ are much smaller
than the radius $1.8\pc$ where the SMBH mass equals the enclosed
stellar mass. The expected binary fraction in galactic nuclei,
assumption~\ref{i:binary}, is quite
uncertain \citep{2008MNRAS.389.1655A,2009ApJ...700.1933H}, but a recent study
suggests that $30^{+34}_{-21}\%$ of massive young stars in the
Galactic centre may be in binaries \citep{2014ApJ...782..101P}.
Assumption~\ref{i:GR} requires that the periapsides
$r_{p,i}=a_i(1-e_i)$ are much larger than the gravitational radius
$r_g = GM_{\bullet}/c^2=2\times10^{-7}\pc$.
Assumption~\ref{i:apsidal} is valid for stars with semimajor axes
$\lesssim 1\pc$ (see Fig.\ 1 of KT11). Assumption \ref{i:apse} is
generally valid for stars with semimajor axes $\lesssim 1\pc$, except
in a narrow range of radii where the prograde general-relativistic
apsidal precession cancels the retrograde Newtonian precession, in
particular $a (1-e^2)^{0.54}\simeq 7\,\rm mpc$ (see
\citealt{2010PhRvD..81f2002M,2014CQGra..31x4003B}, KT11, and Eq.\ \ref{e:appprec} with
$s=1$). Orbits outside this narrow range of radii approximately conserve their
eccentricity; during one VRR timescale $\Delta e \sim (t_\vrr/t_{\rm rr})^{1/2} \sim N^{-1/4}$.
As for assumption \ref{i:nodal}, Lense--Thirring precession
is negligible if $r_{p,i}$ is much larger than the rotational
influence radius $r_{r} = [4 \chi M_{\bullet} / (m_{\rms} \sqrt{N})
]^{2/3} r_g \sim 1\,\chi^{2/3}\,$mpc where $0<\chi<1$ is the
dimensionless spin parameter of the SMBH (see
\citealt{2010PhRvD..81f2002M}, Fig.\ 1 of KT11, and
\citealt{2012PhRvD..86j2002M}). The most prominent known massive
perturber in the Galactic Centre is the molecular torus at radii
$1.5$--$7\,$pc, whose influence is significant outside of $\sim
0.2\,$pc (see KT11 and references therein).
The Keplerian orbits evolve slowly due to the gravitational forces
from the other stars. To follow this evolution we first average the
gravitational interaction potential between stars $i$ and $j$ over the
orbital periods of both stars\footnote{Note that because of this orbit
averaging the net force on the SMBH is zero, so it remains at rest at
the origin in this approximation.}. This average is
\begin{align}\label{e:Hellipse}
H^{( i j )}_{\E}
&\equiv \left\langle- \frac{G m_i m_j}{\|\r_i(t) - \r_j(t')\|}
\right\rangle_{t,t'}\nonumber \\
&=-\frac{1}{P_i P_j}
\oint\D \bm{r}_ i \oint \D\bm{r}_j\frac{G m_i m_j}{v_i v_j \|\r_i - \r_j \|}
\end{align}
where the subscript ``RR'' stands for ``resonant relaxation'' and
\begin{equation}
v=\|\dot{\r}\|=\sqrt{GM_{\bullet} \left(\frac{2}{\|\r\|} -
\frac{1}{a}\right)}
\end{equation}
is the speed. The integrations run over the Keplerian elliptical
trajectories. The interaction energy is that of two elliptical wires
with linear density $m/(P v)$.
We assume that the stellar system is approximately spherical. Then its
dominant effect on the orbit of an individual star is apsidal
precession. The characteristic precession time is approximately
$t_{\rm prec}= 2\pi \|\bm{\Omega}_{\rm prec}\|^{-1}\approx\Omega /
[G \rho(a) ]$, where $\rho(a)$ is the average stellar mass
density in the vicinity of the orbit (see
Appendix~\ref{app:precession}). We next average the interaction
Hamiltonian $H^{( i j )}_{\E}$ over the apsidal precession period
$t_{\rm prec}$, so the eccentric wires are replaced by axisymmetric
rings or annuli. For each star the mass between radii $r$ and $r+\D r$
is $\D m = 2 m \,\D r/(P|v_{r}|)$ where $v_{r}$ is the radial
component of the Keplerian velocity. Using $|v_{r}| = (v^2
- v_{\theta}^2)^{1/2}$ and the conservation of angular momentum $L =
m r v_{\theta} = m\sqrt{G M_{\bullet} a (1-e^2)}$, the surface
density becomes
\begin{equation}\label{e:sigma}
\sigma(r)= \frac{\D m}{2\pi r \D r}=\frac{m}{ 2\pi^2 a \sqrt{(r_{a}-r)(r-r_{p})}}
\end{equation}
if $r_{p}\leq r\leq r_{a}$ and $\sigma(r)=0$ otherwise; here
$r_{a}=a(1+e)$, $r_{p}=a(1-e)$ are the apoapsis and periapsis of
the orbit. Thus,
\begin{equation}
H^{( i j )}_{\E}
=-\int \D \r \int \D\r'
\frac{G \sigma_i(r) \sigma_j(r')}{\|\r - \r'\|}\,,
\label{e:Hannuli}
\end{equation}
where the integration is over the annular surfaces swept out by the rotating ellipses
in the range $r_{p,i}\leq r\leq r_{a, i}$ and
$r_{p, j}\leq r'\leq r_{a, j}$.
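As a consistency check on Eq.~(\ref{e:sigma}), the annulus must carry the full stellar mass, $\int_{r_{p}}^{r_{a}} 2\pi r\,\sigma(r)\,\D r = m$. A short numerical verification (the substitution $r=a(1+e\cos u)$ removes the inverse-square-root endpoint singularities; grid size and test values are arbitrary):

```python
import math

def annulus_mass(m, a, e, n=100_000):
    """Midpoint-rule integral of 2*pi*r*sigma(r) from r_p to r_a.

    Substituting r = a(1 + e cos u) removes the inverse-square-root
    endpoint singularities of sigma(r).
    """
    r_a, r_p = a * (1.0 + e), a * (1.0 - e)
    du = math.pi / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * du
        r = a * (1.0 + e * math.cos(u))
        sigma = m / (2.0 * math.pi**2 * a * math.sqrt((r_a - r) * (r - r_p)))
        dr = a * e * math.sin(u) * du       # |dr/du| du
        total += 2.0 * math.pi * r * sigma * dr
    return total

print(annulus_mass(m=1.0, a=0.01, e=0.7))   # ~ 1.0: the annulus carries the full mass
```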
We evaluate the integral using a multipole expansion in
Appendix~\ref{app:interactionenergy} to find (Eqs.\
\ref{e:Hintdefinion}, \ref{e:Phi_ell}, \ref{e:Rdef}, and \ref{e:sijl-def})
\begin{equation}\label{e:Hresult}
H^{( i j )}_{\E} = -\frac{ G m_i m_j}{a_{\Out}}
\sum_{\ell=0}^{\infty}
P_{\ell}(0)^2\, s_{ i j \ell}\, \alpha_{ij}^{\ell}
\,P_{\ell}(\cos I_{ i j })\,,
\end{equation}
where $I_{ij}$ is the inclination angle between the orbital planes of
star $i$ and $j$, $P_{\ell}(x)$ is a Legendre polynomial, and in
particular for integer $n\ge0$
\begin{equation} \label{e:pnzero}
P_{2n}(0)=(-1)^n\frac{(2n)!}{2^{2n}(n!)^2}\,,\quad P_{2n+1}(0)=0\,.
\end{equation}
Furthermore (Eqs.\ \ref{e:sijl-def}, \ref{e:www})
\begin{align}\label{e:s_ijl}
s_{ij\ell} &=
\frac{1}{\pi^2}\int_0^{\pi} \D \phi \int_0^{\pi} \D \phi'
\\&\quad\times\nonumber
\frac{\min\left[\; (1 + e_{\In}\cos\phi),\; \alpha_{ij}^{-1}(1 + e_{\Out} \cos\phi')\;\right]^{\ell+1}
}{ \max\left[\; \alpha_{ij}(1 + e_{\In} \cos\phi),\; (1 + e_{\Out} \cos\phi')\;\right]^{\ell}}
\end{align}
where ``${\Out}$'' and ``${\In}$'' label the index $i$ or $j$ with the
larger and the smaller semimajor axis, respectively, and
$\alpha_{ij}=a_{\In} / a_{\Out} < 1$. In
Appendix~\ref{app:interactionenergy} we show that one of the two
integrals in Eq.~(\ref{e:s_ijl}) can be evaluated analytically and we
use this result to derive a generating function of $s_{ij \ell}$. Analytic
closed expressions are available in special cases: for example, for circular,
non-overlapping orbits $s_{i j \ell} = 1$ for all $\ell$, and for
eccentric radially non-overlapping orbits we have (Eq.~\ref{e:S-nonoverlapping})
\begin{equation}
s_{ij\ell} = \frac{\chi_{ \Out }^{\ell}}{\chi_{ \In }^{\ell+1}}
P_{\ell+1}(\chi_{ \In })P_{\ell-1}(\chi_{ \Out })\quad {\rm
if}~r_{a,\In}<r_{p,\Out}\,,
\label{e:xxxyyy}
\end{equation}
for $\ell > 0$, where $\chi_i$ is the aspect ratio of the elliptical
orbit of star $i$, i.e., $\chi_i=a_i/b_i=1/\sqrt{1-e_i^2}$, where
$b_i=a_i \sqrt{1-e_i^2}$ is the semiminor axis. The integral
$s_{ij\ell}$ in Eq.~(\ref{e:s_ijl}) depends on the four parameters
$\alpha_{ij}$, $e_{\In}$, $e_{\Out}$, and $\ell$, and can be tabulated
on a four-dimensional grid. The integral for all stellar pairs may
then be obtained by interpolation on the grid\footnote{The grid must
be sufficiently dense to resolve the resonance peaks shown in
Figure~\ref{f:energy-alpha} below.}.
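A direct way to build such a table is brute-force quadrature of Eq.~(\ref{e:s_ijl}). The sketch below is an illustrative midpoint-rule evaluation (not the production tabulation code); it reproduces the closed-form limit $s_{ij\ell}=1$ for circular, radially non-overlapping orbits, and for eccentric non-overlapping orbits the double integral factorizes and agrees with Eq.~(\ref{e:xxxyyy}):

```python
import math

def s_ijl(alpha, e_in, e_out, ell, n=400):
    """Midpoint-rule evaluation of s_{ij ell} on an n x n grid in (phi, phi')."""
    h = math.pi / n
    total = 0.0
    for i in range(n):
        f_in = 1.0 + e_in * math.cos((i + 0.5) * h)
        for j in range(n):
            f_out = 1.0 + e_out * math.cos((j + 0.5) * h)
            total += (min(f_in, f_out / alpha) ** (ell + 1)
                      / max(alpha * f_in, f_out) ** ell)
    return total * h * h / math.pi**2

# Circular, radially non-overlapping orbits: s_{ij ell} = 1 for every ell.
print(s_ijl(alpha=0.5, e_in=0.0, e_out=0.0, ell=2))   # = 1.0
```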
\begin{figure*}
\centering
\mbox{\includegraphics{fig1a.eps}
\quad\includegraphics{fig1b.eps}}\\
\mbox{\includegraphics{fig1c.eps}\quad
\includegraphics{fig1d.eps}
}
\caption{\label{f:energy-alpha} VRR coupling coefficients
$\J_{ij\ell}$ (Eqs.\ \ref{e:HRR} and \ref{e:Jijell}). The subscripts
$i$ and $j$ label the stars and $\ell$ labels the (even) multipole
order. {\it Top left panel:} Eccentricities $e_i = e_j=0$ (solid
line), 0.3 (long-dashed), 0.6 (short-dashed), and 0.9 (dotted). The
red and blue curves show $\J_{ij\ell}$ for the multipoles $\ell = 2$
and $4$, respectively, as a function of the semimajor axis ratio
$\alpha_{ij}=\min(a_i,a_j)/\max(a_i,a_j)$. Circular orbits are
coupled more strongly than eccentric orbits for comparable semimajor
axes ($\alpha\sim 1$), but the coupling falls off more slowly for
eccentric orbits in the range $1\geq \alpha_{ij} \geq (1-e)/(1+e)$
where there is radial overlap. {\it Top right panel:} $e_i=0.2$ and
$e_j=0.8$. Here additional multipoles up to $\ell=50$ are shown as a
function of the semimajor axis ratio $a_i/a_j$. Different line
styles show different radial regimes, as defined in Appendix
\ref{app:interactionenergy}: non-overlapping orbits (dash--dotted),
overlapping (dotted), and embedded (solid). The boundaries between
these regions are marked with $A$, $B$, $C$, and $D$ which satisfy
$a_{i}/a_{j}=(1\pm e_j)/(1\pm e_i)$. {\it Bottom panels:} The
limiting behavior of $\ell^2 \J_{ij\ell}$ for asymptotically large
$\ell$, as a function of eccentricity and semimajor axis. In the
bottom left panel, $e_j=0.3$ and $a_i/a_j=0.68$, 0.8, 1, 1.1, and 2
for different curves, as labeled. In the bottom right panel
$e_i=0.2$ and $e_j=0.8$ and $a_i/a_j$ is varied. The limit of
$\ell^2 \J_{ij\ell}$ is zero for non-overlapping orbits, finite and
non-zero for overlapping (dotted lines) or embedded orbits (solid
lines), and divergent if the periapsides or the apoapsides coincide
(see Appendix \ref{app:convergence}). }
\end{figure*}
The sum over $\ell$ in Eq.~(\ref{e:Hresult}) converges very quickly
for radially non-overlapping orbits with $\alpha_{ij}\ll 1$. The
convergence is slower for $\alpha_{ij}\sim 1$ or for radially
overlapping orbits, but even so the terms in the sum decrease
asymptotically as $\ell^{\,-2}$--$\ell^{\,-2.5}$ except for a set of
measure zero (see Appendix~\ref{app:convergence} for a thorough
discussion of convergence). The first 10 even multipoles are
typically sufficient for at least $\sim1\%$ accuracy. The series
converges more slowly if the periapsides or the apoapsides of the two
orbits coincide and the orbits are coplanar ($\sim\ell^{-2}\ln\ell$),
especially if one of the orbits is circular ($\sim \ell^{-1.5}$), or
if the orbits are circular with the same radii but not coplanar ($\sim
\ell^{-1.5}$). The sum diverges (terms $\sim \ell^{-1}$) only if the
two orbits are circular with the same radii and coplanar
($\alpha_{ij}=1$ and $e_i=e_j=I_{ij}=0$).
Since the averaged surface density representing each star is
stationary and axisymmetric on the orbital timescale $P$, and the
precession timescale $t_{\rm prec}\gg P$, the orbits conserve their
Keplerian energy and their scalar angular momentum $L=\|\L\|$ as they
interact. Thus, the semimajor axes and eccentricities are conserved
during the evolution. In summary,
\begin{equation}\label{e:HRR}
H_{\E} =
-\sum_{ij \ell}^{i<j}\J_{ij\ell}\,P_{\ell}\big( \Ln_{i}\cdot \Ln_{j} \big)\,,
\end{equation}
where the dynamical variables are the unit vectors normal to the
orbits, $ \Ln_{i} \equiv \L _{i}/L_{i}$,
and $\J_{ij\ell}$ are constant coupling coefficients
\begin{equation}\label{e:Jijell}
\J_{ij\ell} = \frac{ G m_i m_j}{a_{\Out}}
P_{\ell}(0)^2\, s_{ i j \ell}\, \alpha_{ij}^{\ell}\,.
\end{equation}
The top panels of Figure~\ref{f:energy-alpha} show $\J_{ij\ell}$ for
$\ell=2$--$4$ (top left panel) and 2--50 (top right panel), for a
range of semimajor axis ratios $a_i/a_j$ and selected values of the
eccentricities $e_i$ and $e_j$. At all semimajor axes and
eccentricities, the interaction energy is dominated by the $\ell=2$
quadrupolar term and decreases monotonically with $\ell$. The
coupling declines rapidly with $\ell$, as
$\alpha_{ij}^{\ell}(1+e_{\In})^{\ell}/(1-e_{\Out})^{\ell}$, for radially
non-overlapping orbits, i.e., for $\alpha_{ij}<
(1-e_{\Out})/(1+e_{\In})$. The coupling coefficients exhibit peaks
when the periapsides or apoapsides coincide, which become increasingly
prominent as $\ell$ increases. The bottom panels show the limit of
$\ell^2 \J_{ij\ell}$ for large $\ell$, as a function of $e_i$ and
$a_i/a_j$, respectively. This quantity is relevant for the torque
exerted between inclined orbits as we show below. The limit is zero
for non-overlapping orbits, but finite positive for overlapping or
embedded orbits (see Appendix \ref{app:interactionenergy} for precise
definitions of these terms). Thus, a larger number of multipoles is needed to
calculate accurately the torques between overlapping or embedded
orbits.
\subsection{Equations of motion}
We have argued that only the directions of the angular momenta of the
stellar orbits change due to the averaged star-star interactions,
while the scalar angular momenta $L=\|\L\|$ are conserved. The equations
of motion for the angular momenta can be derived using Poisson
brackets.
We shall use Greek subscripts to denote Cartesian coordinates
$(x,y,z)$. The Poisson brackets of the angular-momentum vectors
satisfy $\{L_{i \alpha},L_{j
\beta}\}=\sum_{\gamma}\delta_{ij}\epsilon_{\alpha\beta\gamma}L_{i
\gamma}$; here $i$ and $j$
label the stars, $\delta_{ij}=1$ if $i=j$ and zero otherwise, and
$\epsilon_{\alpha\beta\gamma}$ is the Levi-Civita or antisymmetric
tensor. For any complete set of phase-space variables $\{X_s\}$ and a
function $f$ of phase-space variables, we have
\begin{equation}
\frac{\D f}{\D t}=\{f,H\} = \sum_s \{f, X_s \} \frac{\partial
H}{\partial X_s}
\end{equation}
where $H$ is the Hamiltonian. Using Eqs.\ (\ref{e:HRR}) and
(\ref{e:Jijell}) the equations of motion become
\begin{align}\label{e:EOM1}
\frac{\D L_{i \alpha}}{\D t} &= \{L_{i \alpha}, H\} =
\sum_{j=1}^N\sum_{\beta=1}^3\{L_{i \alpha}, L_{j \beta}\}
\frac{\partial H}{\partial L_{j\beta}} \nonumber \\
&= -\sum_{j\ell\beta\gamma} \epsilon_{\alpha\beta\gamma} \frac{L_{i \gamma} L_{j\beta}}{L_iL_j}\J_{ij\ell}
P'_{\ell}\big(\Ln_i\cdot \Ln_j\big),
\end{align}
where $P'_\ell(x)$ is the derivative of the Legendre polynomial\footnote{Note that
$P'_n(x)=n[P_{n-1}(x)-xP_{n}(x)]/(1-x^2)$.
},
and $L=\|\L\|$. This can be expressed more simply as
\begin{align}
\dot{\L }_i &= \bm{\Omega}_i \times \L _i, \nonumber \\
\bm{\Omega}_i &= -\sum_{j\ell} \frac{\J_{ij\ell}}{L_i L_j}
P'_{\ell}\big(\Ln_i\cdot \Ln_j\big)\, \L _j .
\label{e:EOM2}
\end{align}
The vector $\bm{\Omega}_i$ is the angular velocity of the
precession of the angular-momentum vector of a star $i$ due to its
averaged interactions with the other stars.
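As an illustrative sketch (not the code used for the simulations in this paper), the following Python fragment evaluates the right-hand side of Eq.~(\ref{e:EOM2}) for a toy set of angular-momentum vectors; the coupling values used below are hypothetical placeholders, and the Legendre derivative uses the identity quoted in the footnote.

```python
import math

def legendre_p_dp(ell, x):
    """Return (P_ell(x), P'_ell(x)): Bonnet recurrence for the value, and the
    derivative identity P'_ell(x) = ell*(P_{ell-1}(x) - x*P_ell(x))/(1 - x^2),
    valid for |x| < 1."""
    if ell == 0:
        return 1.0, 0.0
    p_prev, p = 1.0, x
    for k in range(1, ell):
        p_prev, p = p, ((2 * k + 1) * x * p - k * p_prev) / (k + 1)
    return p, ell * (p_prev - x * p) / (1.0 - x * x)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def angular_momentum_rates(Lvecs, Jcoef, lmax):
    """dL_i/dt = Omega_i x L_i, with Omega_i = -sum_{j != i} sum_ell
    J_{ij ell}/(L_i L_j) * P'_ell(Lhat_i . Lhat_j) * L_j.
    Jcoef[(i, j, ell)] with i < j holds the symmetric couplings."""
    rates = []
    for i in range(len(Lvecs)):
        Li = math.sqrt(dot(Lvecs[i], Lvecs[i]))
        Omega = [0.0, 0.0, 0.0]
        for j in range(len(Lvecs)):
            if j == i:
                continue
            Lj = math.sqrt(dot(Lvecs[j], Lvecs[j]))
            cosI = dot(Lvecs[i], Lvecs[j]) / (Li * Lj)
            for ell in range(2, lmax + 1):
                _, dP = legendre_p_dp(ell, cosI)
                w = -Jcoef[(min(i, j), max(i, j), ell)] * dP / (Li * Lj)
                for a in range(3):
                    Omega[a] += w * Lvecs[j][a]
        rates.append(cross(Omega, Lvecs[i]))
    return rates
```

By construction each $\dot{\L}_i$ is orthogonal to $\L_i$, and for symmetric couplings the rates sum to zero, reproducing the conservation laws of Eq.~(\ref{e:Ltot}).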
Using the $\L _i$ as phase-space variables, the phase space has $3N$
dimensions. There are $N+2$ conserved quantities:
\begin{align}\label{e:Ltot}
&\frac{\D}{\D t} E_{\E} = -\frac{\D}{\D t}\sum_{ij\ell} \J_{ij\ell}
P_{\ell}\big(\Ln_i\cdot \Ln_j\big)=0, \nonumber \\
&\frac{\D}{\D t} \sum_i \L _i = -\sum_{ij\ell} \frac{\J_{ij\ell}}{L_i
L_j} P'_{\ell}\big(\Ln_i\cdot \Ln_j\big)\,\L _j\times \L _i=0,
\nonumber \\
&\frac{\D}{\D t}(\L _i\cdot \L _i)=0\quad{\rm for~all~}i\in \{1,\dots,N\}.
\end{align}
The first is the conservation of total energy, which follows because
the Hamiltonian $H_{\E}$ (Eq.\ \ref{e:HRR}) is independent of time.
The second is the conservation of the total angular-momentum vector, which follows from
the double sum over $i$ and $j$ of products of symmetric
($\J_{ij\ell}=\J_{ji\ell}$) and antisymmetric terms ($\Ln_j\times\Ln_i$).
The third is the conservation of the scalar angular momentum of each
star, $L_i=m_i \sqrt{G M_{\bullet} a_i(1-e_i^2)}$, due to the orthogonality of
$\L _i$ and $\dot{\L }_i$ in Eq.~(\ref{e:EOM2}). The first two
conservation laws are valid for the original N-body system, but the
third holds only after we average over the orbital period $P$ and
apsidal precession time $t_{\rm prec}$.
\section{Numerical integrator}
\subsection{Pairwise evolution}\label{s:pairwise}
Since the Hamiltonian $H_{\E}$ is a sum of pairwise inter\-action
terms it is useful to first examine the evolution under a single such
term and then superimpose the effects of all the pairs.
The interaction between a single pair of stars leads to uniform
precession of their angular momenta around their common total
angular-momentum vector. Because of this simple behavior, the
equations of motion can be integrated analytically, as we now
show. Eq.~(\ref{e:EOM2}) implies that
\begin{align}
\frac{\D \L _i}{\D t} &= -\sum_{\ell=2}^{\infty}
\frac{\J_{ij\ell}}{L_i L_j} P'_\ell\big(\Ln_i\cdot
\Ln_j\big)\, \L _j \times \L _i, \nonumber \\
\frac{\D \L _j}{\D t} &= -\frac{\D \L _i}{\D t}.
\end{align}
Introduce new variables $\bm{J}_{ij} = (\L _i + \L _j)/2$
and $\bm{K}_{ij} = (\L _i - \L _j)/2$. Then the equations become
\begin{align}
\frac{\D \bm{J}_{ij}}{\D t} = 0\quad{\rm and}\quad
\frac{\D \bm{K}_{ij}}{\D t} = \bm{\Omega}_{ij} \times \bm{K}_{ij},
\end{align}
where
\begin{equation}\label{e:omjk}
\bm{\Omega}_{ij}= -\sum_{\ell=2}^{\infty} \frac{2 \J_{ij\ell}}{L_i L_j}
P'_\ell\bigg(\frac{J_{ij}^2 - K_{ij}^2}{L_i L_j}\bigg)\, \bm{J}_{ij}={\rm const}\,.
\end{equation}
The magnitudes of $\bm{J}_{ij}$ and $\bm{K}_{ij}$ are both
conserved. Thus $\bm{\Omega}_{ij}$ is conserved, so $\bm{K}_{ij}$
rotates uniformly with angular velocity $\bm{\Omega}_{ij}$, and we
have
\begin{align}
\bm{J}_{ij}(t) &= \bm{J}_{ij0} \nonumber \\
\bm{K}_{ij}(t) &= \cos\left[\Omega_{ij} (t -t_0) \right] \bm{K}_{ij0} \nonumber\\
&\quad + \sin\left[\Omega_{ij} (t -t_0) \right]\bm{\hat{\Omega}}_{ij}\times \bm{K}_{ij0}\\\nonumber
&\quad+ \left\{1 - \cos\left[\Omega_{ij} (t-t_0)\right]\right\}\big(\bm{K}_{ij0}\cdot \bm{\hat{\Omega}}_{ij}\big)
\bm{\hat{\Omega}}_{ij}
\end{align}
where $\bm{K}_{ij0} = \bm{K}_{ij}(t_0)$ and $\bm{J}_{ij0} =
\bm{J}_{ij}(t_0)$ denote the initial conditions.
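A minimal Python sketch of this exact pairwise update (helper names here are hypothetical; $\bm{\Omega}_{ij}$ is assumed to have been precomputed from Eq.~\ref{e:omjk}, and is parallel to $\bm{J}_{ij}$):

```python
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def evolve_pair(Li, Lj, Omega, dt):
    """Exact pairwise evolution: J = (Li + Lj)/2 stays fixed while
    K = (Li - Lj)/2 rotates uniformly about Omega by the angle |Omega|*dt
    (Rodrigues' rotation formula)."""
    J = tuple((x + y) / 2 for x, y in zip(Li, Lj))
    K = tuple((x - y) / 2 for x, y in zip(Li, Lj))
    w = math.sqrt(dot(Omega, Omega))
    n = tuple(o / w for o in Omega)          # unit rotation axis
    c, s = math.cos(w * dt), math.sin(w * dt)
    nxK, ndK = cross(n, K), dot(n, K)
    K_rot = tuple(c * K[a] + s * nxK[a] + (1 - c) * ndK * n[a]
                  for a in range(3))
    return (tuple(J[a] + K_rot[a] for a in range(3)),
            tuple(J[a] - K_rot[a] for a in range(3)))
```

Since the rotation axis is parallel to $\bm{J}_{ij}$, the magnitudes $L_i$, $L_j$ and the pair's total angular momentum are all conserved by this update.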
The angular momenta are fixed if $\L_i$ and $\L_j$ are parallel,
antiparallel, or perpendicular. Nearly perpendicular angular momenta
precess with nearly zero angular velocity, but nearly parallel angular
momenta with mutual inclination $I_{ij}\ll 1$ precess with a nonzero
angular speed $\Omega_{ij} \approx \sum_{\ell\mbox{ \scriptsize even}}\ell
J_1(\ell I_{ij}) \J_{ij\ell}(L_i+L_j)/(I_{ij}L_i L_j)$ in a retrograde
direction relative to $\bm{L_{i}}+\bm{L_{j}}$; here $J_1$ is a Bessel
function (see Eq.~\ref{e:P'cosI1}). For overlapping or embedded
orbits, $\ell^2 \J_{ij\ell}$ approaches a finite limit
(Eq.~\ref{e:Jasymptotic-overlap}) shown in
Figure~\ref{f:energy-alpha}, thus the angular velocity tends
asymptotically to
\begin{align}\label{e:Omegaasymptotics}
\bm{\Omega}_{ij} &\approx -\lim_{\ell\rightarrow \infty} (\ell^2
\J_{ij\ell})\sum_{\ell\mbox{ \scriptsize even}} \frac{J_1(\ell I_{ij})}{\ell
I_{ij}}\frac{\L_i+\L_j}{L_i L_j} \nonumber \\
&\approx -\lim_{\ell\rightarrow \infty} (\ell^2
\J_{ij\ell})\frac{\L_i+\L_j}{2I_{ij} L_i L_j}\,,
\end{align}
where the sum has been approximated by an integral in the last
equation. Thus the precession speed $\|\dot{\L}_i\|=\|\bm{\Omega}_{ij}
\times\L_i\|$ approaches a finite non-zero limit for
$I_{ij}\rightarrow 0$ for overlapping or embedded orbits. The bottom
panels of Figure~\ref{f:energy-alpha} show that $\lim_{\ell\rightarrow
\infty} \ell^2 \J_{ij\ell}$ is singular when the periapsides or
apoapsides of the two orbits coincide, so the precession speed is
singular in this case. Furthermore, since the torque is non-zero when
either eccentricity tends to unity, $\bm{\Omega}_{ij}$ tends to
infinity as ${\Ln}_{j} I_{ij}^{-1} (1-e_i^2)^{-1/2}$ when
$e_i\rightarrow 1$; thus very eccentric orbits precess very
rapidly. Similar remarks apply for nearly antiparallel angular
momenta. We derive the asymptotic angular velocity for arbitrary
inclinations in Appendix~\ref{s:asymptotics}
(Eq.~\ref{e:Omegaasymptotics4}).
\subsection{Symplectic integrator}
A system of $N$ stars has $\frac{1}{2}N(N-1)$ pairwise
interactions. Clearly, we can integrate this system
numerically by advancing the angular
momentum of each pair of stars in turn using the results of the
previous subsection. However, there is some advantage to deriving this
result in a more systematic and general way.
The evolution is governed by the first-order differential
equations~(\ref{e:EOM2}). We may write these as
\begin{equation}
\dot{\L} = \gothg \L
\label{e:eqmot}
\end{equation}
where $\L\equiv (\L_{1},\ldots,\L_N)$ and $\gothg$ is the operator
defined by
\begin{equation}
\gothg\L=(\bm{\Omega}_{1}\times\L_{1},\ldots,\bm{\Omega}_N\times\L_N).
\end{equation}
The operator $\gothg$ can be written as a sum over pairs,
\begin{equation}
\gothg=\sum_{i=1}^N\sum_{j>i} \gothg_{ij}
\end{equation}
where $\gothg_{ij}$ operates only on the pair of angular momenta
$\L_i,\L_j$ as described in Section \ref{s:pairwise}. Thus the commutator
$[\gothg_{ij},\gothg_{mn}]$ is zero if and only if the pairs $ij$ and
$mn$ have no member in common. Since $\bm{\Omega}_i$
depends explicitly on $\L$, $\gothg_{ij}$ is a nonlinear operator.
The solution to the equations of motion (\ref{e:eqmot})
is formally
\begin{equation}
{\L}(t) = \exp(\Delta t\, \gothg) \L(t_0)
= \sum_{n=0}^{\infty} \frac{\Delta t^n}{n!} \gothg^n \L(t_{0}), \quad \Delta
t\equiv t-t_0\,.
\end{equation}
Since $\gothg$ is a sum of operators $\gothg_{ij}$ that do not all
commute, the exponential of $\gothg$ is not simply the product of the
exponentials $\exp(\Delta t\,\gothg_{ij})$. The Zassenhaus formula shows that to
second order in $\Delta t$ \citep[see][and references
therein]{2012CoPhC.183.2386C}
\begin{align}\label{e:exponential}
\exp(\Delta t\,\G) &=
\bigg(\prod_J\exp(\Delta t\, \G_J)\bigg)
\\&\quad\times
\bigg(\prod_{J<K}\exp\big(- \half\Delta t^2
[\G_J,\G_K]\big)\bigg). \nonumber
\end{align}
Here $J,K=1,2,\ldots,N(N-1)/2$ are indices labeling all of the
particle pairs in an arbitrary order. Assuming that the first product
of exponentials is evaluated in this order [$\exp(\Delta
t\G_{1})\exp(\Delta t\G_{2})\cdots$], the second product can be
evaluated in any order so long as $J<K$ in each commutator
$[\G_J,\G_K]$.
In the following we keep only the first product which corresponds to a
composition of the actions of independent pairwise interactions
generated by Hamiltonians $H^{(ij)}_\E$. The state vector of the
system $\L = (\L_{1}, \L_{2}, \dots, \L_N)$ then follows as
\begin{equation}\label{e:L(t)}
\L(t) = \prod_{i,j}^{i>j}\O_{ij}(\Delta t) \L(t_0),\ {\rm where }\ \O_{ij}(\Delta t)=\exp(\Delta t\, \G_{ij})\,.
\end{equation}
In Section \ref{s:pairwise} we have derived the analytic solution to
the pairwise interaction: $\O_{ij}(\Delta t)$ rotates $\L_i$ and
$\L_j$ around their common total angular-momentum vector by a finite
angle $\Omega_{ij}\,\Delta t$, keeping all other $\L_k$ fixed.
The integrator given by Eq.~(\ref{e:L(t)}) is symplectic since each
component operator $\O_{ij}(\Delta t)$ is the exact solution of the
equations of motion for the Hamiltonian $H_{\E}^{(ij)}$. However it is
only first-order accurate, i.e., the truncation error after a fixed
integration time $\Delta T=n\Delta t$ varies as $\Delta t$ or as
$n^{-1}$. Errors arise due to the non-commutativity of different
interaction pairs and the effects of higher order interactions in
Eq.~(\ref{e:exponential}). Convergence may be improved either by
using a higher order integrator or by choosing a particular ordering
of the evaluation of the $\O_{ij}$. We discuss these and other
improvements to the numerical algorithm in the following subsections.
\subsection{Higher order accuracy}\label{s:s:second order}
\begin{figure*}
\centering
\mbox{\includegraphics[height=7cm]{fig2a.eps}
\includegraphics[height=7cm]{fig2b.eps}
\includegraphics[height=7cm]{fig2c.eps}}
\mbox{\includegraphics[height=7cm]{fig2d.eps}
\includegraphics[height=7cm]{fig2e.eps}
\includegraphics[height=7cm]{fig2f.eps}}
\caption{\label{f:error-a}
Angular-momentum convergence errors
$\|\delta\L\|/L\equiv \|\L -\L^{\rm true}\|/L$ after a fixed time $T$,
as a function of semimajor axis $a$. The cluster has $N=1024$ and 4096
stars in the top and bottom rows, respectively.
The number of simulation steps varies across the panels as marked;
the reference angular momentum $\L^{\rm true}$ is determined by
integrating with 4096 timesteps. Three different integrators are
shown: (i) the open red squares show the second-order integrator
$\O^{(2)}(\Delta t)$ (Eq.\ \ref{e:L(t)2}); (ii) the filled green
squares show the same integrator, but with the timestep for the
innermost $N/4$ stars reduced by a factor of 16; (iii) the small
blue circles show the eighth-order integrator
$\O^{(8)}(15\Delta t)$ (Eq.\ \ref{e:8th-order}; the factor 15 is
chosen so that the second-order and eighth-order integrators have
the same number of function evaluations per unit time).
All simulations neglect multipoles beyond $\ell_{\max}=20$.
The stars are initially distributed spherically and in a disk with root mean square (RMS) inclination 0.1;
the two components have the same total mass in all panels.
In the left panel, the sphere and disk stars have equal mass,
in the middle and right panels the disk stars are 4 times as massive
as the stars in the spherical component.
The total simulated time interval corresponds to a VRR timescale of the inner edge of the cluster
$t_{\vrr}=M_{\bullet}/(m_{\rms} \sqrt{N}) P_{\min}$, $m_{\rms}=\langle m^2\rangle^{1/2}$,
$P_{\min}=2\pi a_{\min}^{3/2}/(GM_{\bullet})^{1/2}$ (Eq.~\ref{e:vrr}). For both the
disk and the sphere the initial conditions are
$n(a)\propto a^{-2.4}$, $a_{\max}/ a_{\min}=100$, $dN/de\propto e$ for $e\leq0.9$.
}
\end{figure*}
A simple way to improve the integrator to second-order (error of order
$(\Delta t)^2$ after a fixed integration time) is to choose a
time-reversible ordering of terms, e.g.,
\begin{equation}\label{e:L(t)2}
\L(t) = \prod_{i=2}^{N}\prod_{j=1}^{i-1}\O_{ij}(\Delta t/2)\times\prod_{i=N}^{2}\prod_{j=i-1}^{1}\O_{ij}(\Delta t/2)\,
\L(t_0)
\end{equation}
Products are ordered from the initial to final values (shown on the
bottom and top of the product symbols) here and below if not stated
otherwise. Since each term is time-reversible, i.e., $\O_{ij}(\Delta
t)\O_{ij}(-\Delta t)=\bm{I}$ is the identity operator for arbitrary
$\Delta t$, their reversible composition is time-reversible. Hence,
the truncation error after a fixed time interval must be even in the
timestep $\Delta t$ and so must be at least of order $(\Delta
t)^2$.
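Schematically, the palindromic sweep of Eq.~(\ref{e:L(t)2}) can be coded as follows (an illustrative sketch; here \texttt{apply\_pair} stands in for the exact pairwise rotation of Section~\ref{s:pairwise}):

```python
def half_sweep(L, dt, apply_pair, reverse=False):
    """Apply every pairwise operator O_ij once with step dt, in a fixed
    order (j < i), optionally reversed."""
    pairs = [(i, j) for i in range(1, len(L)) for j in range(i)]
    for i, j in (reversed(pairs) if reverse else pairs):
        apply_pair(L, i, j, dt)

def second_order_step(L, dt, apply_pair):
    """Eq. (L(t)2): forward sweep with dt/2, then the same sweep in reverse
    order with dt/2; the palindromic composition makes the step
    time-reversible and hence second-order accurate."""
    half_sweep(L, dt / 2, apply_pair)
    half_sweep(L, dt / 2, apply_pair, reverse=True)
```

Each pair is visited twice per step and the second sweep mirrors the first, so the operator ordering is a palindrome.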
Higher order algorithms can be constructed by varying $\Delta t$ in
successive iteration steps
\citep{1990PhLA..150..262Y,1990PhLA..146..319S}. For example, if we
label the second-order operator on the right side of
Eq.~(\ref{e:L(t)2}) $\O^{(2)}(\Delta t)$, an eighth-order integrator
is
\begin{equation}\label{e:8th-order}
\O^{(8)}(\Delta t) = \prod_{s=0}^{14} \O^{(2)}(r_s \Delta t)
\end{equation}
where \citep{1994PhyA..205...65S}
\begin{align}
r_0 &= r_{14} = 0.74167036435061295344822780\nonumber \\
r_1 &= r_{13} = -0.4091008258000315939973001\nonumber \\
r_2 &= r_{12} = 0.19075471029623837995387626\nonumber \\
r_3 &= r_{11} = -0.57386247111608226665638773\nonumber \\
r_4 &= r_{10} = 0.29906418130365592384446354\nonumber \\
r_5 &= r_{9} = 0.33462491824529818378495798\nonumber \\
r_6 &= r_{8} = 0.31529309239676659663205666\nonumber \\
r_7 &= -0.79688793935291635401978884\,.
\label{e:8th-order-coefficients}
\end{align}
Note that here 15 evaluations are required for each $\Delta t$, i.e.,
the execution time of $\O^{(8)}(15\Delta t)$ is equivalent to
that of $\O^{(2)}(\Delta t)$ repeated 15 times. The truncation error of $\O^{(8)}(15\Delta t)$
is much smaller than that of $\O^{(2)}(\Delta t)$ for sufficiently small
$\Delta t$ as shown in the right panels of Figure~\ref{f:error-a}.
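The composition of Eq.~(\ref{e:8th-order}) can be written down directly; as a consistency check, the palindromic coefficient list must sum to unity so that the substeps add up to one full step (an illustrative sketch, with \texttt{step2} standing in for $\O^{(2)}$):

```python
# Suzuki (1994) coefficients r_0..r_7; the full list is palindromic,
# r_s = r_{14-s}, so 15 substeps make up one eighth-order step.
R = [0.74167036435061295344822780,
     -0.4091008258000315939973001,
     0.19075471029623837995387626,
     -0.57386247111608226665638773,
     0.29906418130365592384446354,
     0.33462491824529818378495798,
     0.31529309239676659663205666,
     -0.79688793935291635401978884]
COEFFS = R + R[-2::-1]   # r_0, ..., r_7, r_6, ..., r_0

def eighth_order_step(L, dt, step2):
    """Eq. (8th-order): compose 15 second-order substeps with stretched
    (some negative) timesteps r_s * dt."""
    for r in COEFFS:
        step2(L, r * dt)
```

Note that some of the substeps are negative; this is unavoidable for splitting methods of order higher than two.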
\subsection{Timestep refinement}\label{s:s:timerefinement}
As seen in Figure \ref{f:error-a}, the integration errors of the
innermost stars in a cluster typically greatly exceed those of the
outer stars. This is not surprising, since the coupling coefficients
satisfy $\J_{ij\ell} \propto 1/a_{\Out}$ (Eq.~\ref{e:Jijell}), and from
Eq.\ (\ref{e:EOM1}) the characteristic timescale for changes in the
angular momentum of star $i$ is $\Delta t_{\rm int} \approx L_i/(a^3
n(a) \J_{ij \ell}) \propto a^{{\gamma}-1.5}$ where $n(a)\propto a^{-{\gamma}}$
is the number density of stars in the cluster\footnote{The interaction
is often strongest for a stellar disk component even if it is
subdominant in mass. The observed disk of young stars in the
Galactic Centre has ${\gamma}=2.4$--$2.9$ and the spherical component of
old stars has ${\gamma}=1.2$--$1.75$
\citep{2009ApJ...697.1741B,2010ApJ...708..834B}.}. Thus, stars at
smaller semimajor axes require a smaller timestep $\Delta t$ for the
same integration accuracy. The errors may be efficiently reduced by
implementing a block timestep procedure that preserves the symplectic
and time-reversible properties
\citep{1992JChPh..97.1990T,1994AJ....108.1962S}. We reduce the
timestep to $\Delta t/k$ for a block containing the innermost $N/K$
stars, and calculate the mutual interactions of the stars within the
block $k$ times before calculating their interactions with the rest of
the stars. Thus, the integrator can be written as
\begin{equation}\label{e:refinement}
\O_{\rm in, in}(\Delta t,k) \O_{\rm in, out}(\Delta t) \O_{\rm out, out}(\Delta t)
\end{equation}
where
\begin{align}\label{e:refinement-inin}
\O_{\rm in, in}(\Delta t,k) &= \bigg[\prod_{i,j}^{j<i\leq N/K}\O_{ij}(\Delta t/k)\bigg]^{k}\,,\\
\O_{\rm in, out}(\Delta t) & = \prod_{i,j}^{j\leq N/K<i}\O_{ij}(\Delta t)\,,\\
\O_{\rm out, out}(\Delta t) & = \prod_{i,j}^{N/K<j<i}\O_{ij}(\Delta t)\,.
\end{align}
The two-level timestep refinement procedure reduces the truncation
errors of the stars in the inner block by a factor $\sim k^n$ for a
method that converges as $\mathcal{O}(\Delta t^n)$. If the algorithm
execution time is proportional to $N^2$, the calculation of the inner
block is approximately the same cost as the calculation of the rest of
the system when $k=K^2$.
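The two-level scheme of Eq.~(\ref{e:refinement}) can be sketched as follows (illustrative Python, with \texttt{apply\_pair} standing in for the exact pairwise operator $\O_{ij}$); every pair ends up evolved for a total time $\Delta t$:

```python
def refined_step(L, dt, k, n_inner, apply_pair):
    """Eq. (refinement): advance the inner block (stars 0..n_inner-1) with
    k substeps of dt/k, then the inner-outer and outer-outer interactions
    with the full step dt."""
    N = len(L)
    for _ in range(k):                        # O_in,in(dt, k)
        for i in range(1, n_inner):
            for j in range(i):
                apply_pair(L, i, j, dt / k)
    for i in range(n_inner, N):               # O_in,out(dt)
        for j in range(n_inner):
            apply_pair(L, i, j, dt)
    for i in range(n_inner + 1, N):           # O_out,out(dt)
        for j in range(n_inner, i):
            apply_pair(L, i, j, dt)
```

With $K=4$ and $k=16=K^2$, the inner-block work is comparable to the cost of the remaining interactions, as noted above.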
Figure~\ref{f:error-a} shows the effects of the two-level timestep
refinement procedure for a cluster with $a_{\max}/a_{\min}=100$. The
red squares show the errors when a single timestep is used, and the
green squares show the errors when using the two-level timestep
procedure (with $K=4$ and $k=16$). The errors are indeed reduced by a
factor close to $K^4=256$ at the smallest semimajor axes. The
optimal value of $K$ may be set according to the radial range of the
simulated cluster and the number density exponent $\gamma$.
The errors may be further decreased using a Trotter decomposition in which
the combined action of the operators $e^A$ and $e^B$ is represented as
$e^{A/2} e^B e^{A/2}$ \citep{Trotter,1992JChPh..97.1990T}.
For $e^A\equiv \O_{\rm in,\In}(\Delta t,k)$ and
$e^B \equiv \O_{\rm in, out}(\Delta t) \O_{\rm out, out}(\Delta t)$,
Eq.~(\ref{e:refinement}) becomes
\begin{equation}\label{e:refinement2}
\O_{\rm in, in}\left(\half\Delta t,\half k\right) \O_{\rm in, out}(\Delta t) \O_{\rm out, out}(\Delta t)
\O_{\rm in, in}\left(\half\Delta t,\half k\right)\,.
\end{equation}
The algorithm may be made time-reversible and hence second-order
accurate as discussed in Section~\ref{s:s:second order} by evaluating
all operators in the reverse order in successive timesteps. An
improved variant with even smaller errors is obtained by making each
$\O_{\rm in, in}\left(\half \Delta t,\half k\right)$ term in
Eq.~(\ref{e:refinement2}) time-reversible by choosing the reverse
order of the pairwise operators $\O_{ij}$ for steps
$2,4,\ldots,\half k$.
The operators $\O_{\rm in, out} \O_{\rm out, out} $ may be further
Trotter decomposed or time-symmetrized but we find that this
does not improve convergence significantly.
The left and middle panels of Figure~\ref{f:error-a-refinement} show how the errors change for
various implementations of the two-level timestep refinement.
\begin{figure}
\centering
\mbox{\includegraphics[width=8.5cm]{fig3.eps}}
\caption{\label{f:refinement} Timestep refinement scheme of the
symplectic integrator, shown for a three-level refinement with $K_1
= K_2 = 2$ for a cluster of $N=16$ stars. We depict the operators
as elements of a lower triangular matrix as shown. The algorithm
for an arbitrary number of refinement levels runs recursively as
follows. In each refinement level $n<n_{\max}$ a block of $N_n$
stars is grouped in two sets based on their specific angular
momentum: the $(n+1)^{\rm st}$ ``inner block'' of $N_{n+1} \equiv
N_n/K_{n+1}$ stars and the $(n+1)^{\rm st}$ ``outer block'' of
$N_n(K_{n+1}-1)/K_{n+1}$ stars. For each refinement level, the
inner block is further refined and the refined operators are
executed $2k_{n+1}$ times with timestep $\Delta t_{n+1} \equiv
\Delta t_{n}/(2k_{n+1})$ each, while the interactions among the
outer stars and the interactions of the inner stars with the outer
stars are executed only twice with timestep $\half\Delta t_{n}$.
The algorithm starts with $O_{\rm in,\rm in}^{\langle 0 \rangle}$
for $N_0=N$, which includes all stars in the inner block.
}
\end{figure}
Figure~\ref{f:error-a} shows that even after the two-level
timestep refinement, the convergence errors vary systematically by
three orders of magnitude over a factor 100 in semimajor axis. To
obtain more uniform convergence, we may choose a larger inner block
(i.e., smaller $K$) and implement a multilevel refinement by
recursively refining the innermost block of stars. To start, set the
$0^{\rm th}$ refinement level to be the whole cluster of stars
$N_0\equiv N$. Then set the stars in the $n^{\rm th}$ refinement
level to be the innermost $N_{n} \equiv N_{n-1}/K_{n}$ stars, where
$K_{n}$ is an integer. In each refinement step, we execute the
operators corresponding to interactions among these $N_{n}$ stars with
a reduced timestep $ \Delta t_{n} \equiv \Delta t_{n-1}/(2k_{n})$;
each such operator is applied $2k_n$ times, as follows. In the
$n^{\rm th}$ level refinement, we define the operators within the
inner block recursively as
\begin{align}\label{e:refinement3}
\O_{\rm in,in}^{\langle n \rangle}\left( \Delta t_{n} \right) &=
\left[ \O_{\rm in,in}^{\langle n+1 \rangle}\left(\frac{\Delta t_{n}}{2 k_{n+1}}\right)\right]^{\frac12 k_{n+1}}
\nonumber\\&\quad\times
\O_{\rm out}^{\langle n+1 \rangle}\left( \frac{\Delta t_{n}}{2} \right)
\left[ \O_{\rm in,in}^{\langle n+1 \rangle}\left(\frac{\Delta t_{n}}{2 k_{n+1}}\right)\right]^{k_{n+1}}
\nonumber\\&\quad\times
{\O'}_{\rm out}^{\langle n+1 \rangle}\left( \frac{\Delta t_{n}}{2} \right)
\left[ \O_{\rm in,in}^{\langle n+1 \rangle}\left(\frac{\Delta t_{n}}{2 k_{n+1}}\right)\right]^{\frac12 k_{n+1}}\!\!,
\end{align}
where
\begin{align}
{\O}_{\rm out}^{\langle n+1 \rangle}\left( \frac{\Delta t_{n}}{2} \right)
&=\O_{\rm in, out}^{\langle n+1 \rangle}\left( \frac{\Delta t_{n}}{2} \right)
\O_{\rm out, out}^{\langle n+1 \rangle}\left( \frac{\Delta t_{n}}{2} \right)\,,\\
\O_{\rm in, out}^{\langle n+1 \rangle}\left( \frac{\Delta t_{n}}{2} \right) & =
\prod_{i,j}^{j\leq N_{n+1}<i\leq N_{n}}\O_{ij}\left( \frac{\Delta t_{n}}{2} \right) \,,\\
\O_{\rm out, out}^{\langle n+1 \rangle}\left( \frac{\Delta t_{n}}{2} \right) & =
\prod_{i,j}^{N_{n+1}<j<i\leq N_n}\O_{ij}\left( \frac{\Delta t_{n}}{2} \right) \,.
\label{e:refinement3c}
\end{align}
Here the index inside the angle brackets $\langle \cdot \rangle$
labels the refinement level, and primed operators use the
reverse-order composition of the unprimed operator (as in the
operators on either side of the $\times$ in Eq.\ \ref{e:L(t)2}).
The recursion ends at the final level of refinement $n_{\max}$ for which
\begin{align}
\O_{\rm in, in}^{\langle n_{\max} \rangle}\left( \Delta t_{n_{\max}} \right)&=
\prod_{i,j}^{j<i\leq N_{n_{\max}}}\O_{ij}\left( \frac{\Delta t_{n_{\max}}}{2}\right)
\nonumber\\
&\times
\left[\prod_{i,j}^{j<i\leq N_{n_{\max}}}{\O}_{ij}\left(\frac{\Delta t_{n_{\max}}}{2}\right)\right]'
\label{e:refinement3-finalin}
\end{align}
In practice, the simulation is advanced by $\Delta t$ by running
\begin{align}\label{e:refinement3-simulation}
\O_{\rm simulation}(\Delta t) = \O_{\rm in,in}^{\langle 0 \rangle}\left( \Delta t \right)
\end{align}
where $\O_{\rm in,in}^{\langle \cdot \rangle} (\cdot)$
is defined by Eq.~(\ref{e:refinement3}). It is
instructive to verify that $\O_{\rm simulation}(\Delta t)$ executes
each $\O_{ij}$ operator for a total interval of $\Delta t$. To see
this, note that Eqs.~(\ref{e:refinement3})--(\ref{e:refinement3c})
imply that $\O_{\rm in,in}^{\langle 0 \rangle}\left( \Delta t
\right)$ executes the interactions among the outer stars of the first
refinement level ($N_1=N/K_1<j\le N$) for a total time $\Delta t$,
via two operations of timestep $\half\Delta t$. These operators
will not be executed any more during this simulation step.
Furthermore $\O_{\rm in,in}^{\langle 0 \rangle}\left( \Delta t
\right)$ executes $\O_{\rm in,in}^{\langle 1 \rangle}\left(
\half\Delta t/k_1 \right)$ for $2k_1$ times. When doing so
Eq.~(\ref{e:refinement3}) is invoked again, each time executing the
interactions among the outer stars of the second refinement level
twice with timestep $\frac{1}{4}\Delta t/k_1$ each, thus in total
for $4k_1$ times. Thus every outer operator of the second refinement
level is run for a total time of $\Delta t$; these operators are not
executed any more during this simulation step. The recursion
continues until the maximum refinement level is reached; at this stage
each of the inner operators is applied twice with timestep $\half
\Delta t_{n_{\max}}$. The maximum refinement level has $\Delta
t_{n_{\max}} = \Delta t / (2^{n_{\max}} k_1 k_2\cdots k_{n_{\max}})$.
Figure~\ref{f:refinement} shows the subdivisions of the operators for
a three-level refinement with $K_0=1$, $K_1=2$, and $K_2=2$.
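The recursion can be sketched as follows (illustrative Python for the sA variant, Eq.~\ref{e:refinement3-sA}; \texttt{apply\_pair} stands in for $\O_{ij}$, and the bookkeeping shows that every pairwise operator is applied for a total time $\Delta t$):

```python
def sweep(L, dt, prs, apply_pair, reverse=False):
    for i, j in (reversed(prs) if reverse else prs):
        apply_pair(L, i, j, dt)

def multilevel_step(L, dt, n, Ns, ks, apply_pair):
    """Recursive timestep refinement, in^(k/2) out in^k out' in^(k/2):
    the inner block of Ns[n+1] stars is advanced by 2*ks[n+1] substeps of
    dt/(2*ks[n+1]) each (Eq. refinement3).  The deepest level does a
    time-symmetric double sweep (Eq. refinement3-finalin)."""
    if n == len(Ns) - 1:
        prs = [(i, j) for i in range(1, Ns[n]) for j in range(i)]
        sweep(L, dt / 2, prs, apply_pair)
        sweep(L, dt / 2, prs, apply_pair, reverse=True)
        return
    k, Nin = ks[n + 1], Ns[n + 1]
    out = ([(i, j) for i in range(Nin, Ns[n]) for j in range(Nin)] +        # in-out
           [(i, j) for i in range(Nin + 1, Ns[n]) for j in range(Nin, i)])  # out-out
    for _ in range(k // 2):
        multilevel_step(L, dt / (2 * k), n + 1, Ns, ks, apply_pair)
    sweep(L, dt / 2, out, apply_pair)
    for _ in range(k):
        multilevel_step(L, dt / (2 * k), n + 1, Ns, ks, apply_pair)
    sweep(L, dt / 2, out, apply_pair, reverse=True)
    for _ in range(k // 2):
        multilevel_step(L, dt / (2 * k), n + 1, Ns, ks, apply_pair)
```

At each level the outer interactions receive $\Delta t_n/2$ twice, while the inner block is entered $2k_{n+1}$ times with $\Delta t_n/(2k_{n+1})$, so by induction every pair accumulates exactly $\Delta t$ per simulation step.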
\begin{figure*}
\centering
\mbox{\includegraphics[width=0.38\textwidth]{fig4a.eps}
\includegraphics[width=0.295\textwidth]{fig4b.eps}
\includegraphics[width=0.295\textwidth]{fig4c.eps}}
\caption{\label{f:error-a-refinement} Angular-momentum convergence
errors for simulations with different refinement methods. The left
and middle panels show different algorithms with a two-level
timestep refinement, the right panel shows a three-level timestep
refinement. {\it Left panel:} The operators are labeled as follows:
$A$ represents the interactions among the members of the inner block
of $N/K$ stars (with semi-latus rectum $a_i(1-e_i^2)\leq 8$),
advanced with timestep $\Delta t/k$, where $K=4$ and $k=16$; $B$ is
the mutual interaction between the members of the inner and outer
blocks, advanced with timestep $\Delta t$; $C$ is the interactions
among the members of the outer block of $N-(N/K)$ stars (with
$a_i(1-e_i^2)> 8$), advanced with timestep $\Delta t$. Primed
operators denote the reverse-order composition of the corresponding
unprimed operators. The simulation parameters are the
same as in the top right panel of Figure~\ref{f:error-a}. The legend
shows the order in which the operators are evaluated for a single
simulation step from right to left. All refinement schemes employ
the reverse order of operators for every second simulation step.
The simplest refinement method $C B A^k$ improves the errors by a
factor $\sim 40$ relative to an integrator with no refinement
(cf. open red squares in Figure~\ref{f:error-a}). The Trotter
decomposition $A^{k/2} C B (A')^{k/2}$ helps to decrease errors
further by a factor $\sim 4$--$5$. The inner-symmetric Trotter
decomposition $(AA')^{k/4} C B (A A')^{k/4}$ method is even better,
by another factor $\sim 2$--$3$. {\it Middle panel:} Different
variants of the inner-symmetric Trotter decomposition given by
Eqs.~(\ref{e:refinement3-sA})--(\ref{e:refinement3-sABC}) labeled
sA, sAB, sAC, and sABC, respectively. All variants show comparable
errors. {\it Right panel:} Three-level $K=2$ refinement using the
same algorithms and timestep as in the middle panel. The $sABC$
method produces the most uniform errors, and smallest maximum
errors. The two-level timestep-refined simulations execute 256
steps in $\sim50\%$ more time than the three-level timestep-refined
algorithms with 64 simulation steps. }
\end{figure*}
Note that the reverse-order composition of operators, denoted by
primes, has been invoked in Eqs.~(\ref{e:refinement3}) and
(\ref{e:refinement3-finalin}) to make the algorithm time-reversible.
For an overview, suppressing the arguments, the refinement scheme may
be summarized as
\begin{align}\label{e:refinement3-sA}
\O_{\rm in,in}^{\langle n \rangle} = &
\big(\O_{\rm in,in}^{\langle n+1 \rangle}\big)^{\frac12 k_{n+1}} \O_{\rm in,out}^{\langle n+1 \rangle} \O_{\rm out,out}^{\langle n+1 \rangle}
\big(\O_{\rm in,in}^{\langle n+1 \rangle}\big)^{k_{n+1}}\nonumber\\
&\;\times {\O'}_{\rm out,out}^{\langle n+1 \rangle} {\O'}_{\rm in,out}^{\langle n+1 \rangle}
\big(\O_{\rm in,in}^{\langle n+1 \rangle}\big)^{\frac12 k_{n+1}}.
\end{align}
With this algorithm $\O_{\rm in,in}^{\langle n\rangle}=\O_{\rm
in,in}^{\prime\langle n\rangle}$ at all refinement levels $n$.
Alternatively, we may time-symmetrize according to any of the
following schemes,
\begin{align}\label{e:refinement3-sAB}
&(\O_{\rm in,in}^{\langle n+1 \rangle})^{k_{n+1}} \O_{\rm in,out}^{\langle n+1 \rangle}\O_{\rm out,out}^{\langle n+1 \rangle}
{\O'}_{\rm out,out}^{\langle n+1 \rangle}{\O'}_{\rm in,out}^{\langle n+1 \rangle} (\O_{\rm in,in}^{\langle n+1 \rangle})^{k_{n+1}},\\
\label{e:refinement3-sAC}
&(\O_{\rm in,in}^{\langle n+1 \rangle})^{k_{n+1}} \O_{\rm out,out}^{\langle n+1 \rangle}\O_{\rm in,out}^{\langle n+1 \rangle}
{\O'}_{\rm in,out}^{\langle n+1 \rangle}{\O'}_{\rm out,out}^{\langle n+1 \rangle} (\O_{\rm in,in}^{\langle n+1 \rangle})^{k_{n+1}},\\
\label{e:refinement3-sABC}
&(\O_{\rm in,in}^{\langle n+1 \rangle})^{k_{n+1}} \O_{\rm in,out}^{\langle n+1 \rangle}{\O'}_{\rm in,out}^{\langle n+1 \rangle}
\O_{\rm out,out}^{\langle n+1 \rangle}{\O'}_{\rm out,out}^{\langle n+1
\rangle} (\O_{\rm in,in}^{\langle n+1 \rangle})^{k_{n+1}}\,.
\end{align}
Figure~\ref{f:error-a-refinement} shows the convergence
errors for Eqs.~(\ref{e:refinement3-sA})--(\ref{e:refinement3-sABC})
labeled by sA, sAB, sAC, and sABC, respectively. All four methods
employ a three-level timestep refinement with $K=(1,2,2)$. The
repetition factors are $k=(1,8,4)$ for sA and $k=(1,4,4)$ for the
other three methods. The execution times are comparable for each
algorithm with 64 simulation steps and for the two-level timestep
algorithms with 256 steps in Figure~\ref{f:error-a-refinement}.
The sABC method
(Eq.~\ref{e:refinement3-sABC}) has the most homogeneous errors and
smallest maximum errors. This algorithm has the largest number of
time-reversible factors, including the inner and outer blocks of stars
and the mutual interactions between the two.
\subsection{Grouping terms in blocks}\label{s:s:blocks}
The accuracy of the integrator can be significantly improved by
choosing a particular order in which the interactions in
Eq.~(\ref{e:L(t)2}) are calculated. One way to achieve this is by
grouping the stars into blocks such that the most strongly coupled
stars are mostly in the same block. Since the interactions are much
weaker if the semimajor axes are widely separated, $\alpha_{ij}\ll 1$,
and the precession rate is slower for less eccentric orbits,
it is natural to define the blocks using criteria based on the
semimajor axes or specific angular momenta $L_i/m_i \propto
\sqrt{a_i(1-e_i^{2})}$ of the stars. A specific assignment procedure
is described in the next subsection. After defining the blocks, we
evaluate the interactions block-by-block, first evaluating all the
interactions within each block then the interactions between blocks,
\begin{equation}\label{e:blocksprod}
\prod_{a=1}^{B}\prod_{b=1}^{a} \O^{a,b} \times {\rm reverse~order}
\end{equation}
where $\O^{a,b}$ denotes the product of all pairwise interaction terms
between blocks $a$ and $b$, and ``$\rm reverse~order$'' denotes the
time-reversed composition of operators.
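One simple assignment procedure (an illustrative choice, not necessarily the exact one used here) is to sort the stars by $\sqrt{a_i(1-e_i^2)}$ and cut the ordered list into contiguous blocks:

```python
import math

def order_by_specific_L(a, e):
    """Indices sorted by the specific-angular-momentum proxy sqrt(a(1-e^2)),
    so radially close (strongly coupled) stars become adjacent."""
    key = [math.sqrt(ai * (1.0 - ei * ei)) for ai, ei in zip(a, e)]
    return sorted(range(len(a)), key=key.__getitem__)

def split_blocks(order, B):
    """Cut the ordered index list into B contiguous blocks."""
    n = len(order)
    return [order[b * n // B:(b + 1) * n // B] for b in range(B)]
```

Interactions within each block are then evaluated before the inter-block terms, per Eq.~(\ref{e:blocksprod}).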
\subsection{Parallelization}
The main bottleneck of the symplectic integrator outlined above is the
steep scaling with the number of stars, at least $\mathcal{O}(N^2)$.
Each timestep requires the calculation of $N(N-1)/2$ interactions.
Furthermore, errors arise due to the noncommutativity of different
terms which further increase with $N$. The steep scaling with $N$
makes it infeasible to simulate clusters with a realistic number of
stars on a single processor. Here we show how to parallelize the
algorithm to reduce the execution time.
Since the symplectic algorithm outlined above uses a composition of
operators in a particular order, it is not immediately obvious whether
it is possible to run the algorithm on parallel threads. Fortunately,
each operator $\O_{ij}$ affects only $\L_i$ and $\L_j$, so the strict
sequential ordering of $\O_{ij}$ and $\O_{kl}$ is not necessary if
$i$ and $j$ are both different from $k$ and $l$. In
particular if we split the stars into two disjoint blocks, the
self-interactions of the blocks may be calculated in parallel by two
threads, followed by a sequential calculation of the mutual
interaction between blocks. More generally, we may split the operators
into many segments of the form $\O_{i_1,i_2} \O_{i_3,i_4}\dots
\O_{i_{N-1},i_N}$ where $(i_1,i_2,\dots, i_N)$ is a permutation of
$(1,2,\dots,N)$. Then all of these $N/2$ operators commute within this
sequence, and can be evaluated independently on parallel
threads (we show how to do this below).
\begin{figure}
\centering
\mbox{\includegraphics[width=8.5cm]{fig5.eps}}
\caption{\label{f:parallel}
Parallelization scheme of the symplectic integrator. The interaction is
calculated as the composition of the effects of pairwise interaction terms. We
depict the interaction terms between stars $i$ and $j$ as elements of a lower
triangular matrix and group them in tiles of size $2^t$ as shown.
Tiles of the same size commute, and can be executed in parallel.
Further, interactions within a diagonal of a given tile also commute,
but different diagonals within a given tile do not, nor do different size tiles.
Thus, synchronization is necessary between executing the interactions of different
diagonals within a given tile and between different size tiles.
For an unlimited number of processors, the algorithm execution time is
$\mathcal{O}(N)$.
If the number of available processors $P$ is less than $N/2$,
the parallel algorithm run-time scales as $\mathcal{O}[N(N-1)/(2P)]$
and requires exactly $2P$ synchronizations independent of
$N$.
}
\end{figure}
With this background in mind, we construct a parallel method for
$N=2^n$ stars as shown in Figure~\ref{f:parallel}. We depict the
operators as elements of a lower triangular matrix, and group them
into tiles of size $2^t$ with $t=0,1,\dots,n-1$ as shown for $N=16$.
We construct the tiling by recursively removing square tiles of size
$2^t\times 2^t$ starting with the largest, $t=n-1$. Removing this
submatrix leaves two lower triangular matrices, half the size of the
original. Next we remove the $2^{t-1}\times 2^{t-1}$ square matrices
from the two triangular matrices, leaving two smaller triangular
matrices each. We repeat this iteration down to $t=0$, thereby
covering the matrix completely. This gives $2^{n-t-1}$ square tiles of
size $2^t$. The elements of tile $k=0,1,\ldots,2^{n-t-1}-1$ of
size $2^t$ are $\O_{ij}$ where $1\leq i - (2k+1) 2^t \leq 2^t$ and
$1\leq j - (2k) 2^{t}\leq 2^t$. All tiles of a given size represent
interactions between distinct groups of stars (i.e., the tiles of a
given color in Figure \ref{f:parallel} do not overlap horizontally or
vertically). Thus, different tiles of the same size commute.
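The recursive tiling is compactly expressed in Python (a sketch for illustration, using the index convention of the text, with 1-based star labels and $i>j$). The script enumerates the tiles for $n=4$ and verifies that they partition the $N(N-1)/2$ operator indices:

```python
def tiles(n):
    """Recursive tiling of the strictly lower-triangular part of an
    N x N interaction matrix, N = 2**n (1-based star indices, i > j).
    Yields, for each tile size 2**t and tile index k, the list of
    (i, j) operator indices the tile contains."""
    for t in range(n):
        size = 2 ** t
        for k in range(2 ** (n - t - 1)):
            rows = range((2 * k + 1) * size + 1, (2 * k + 2) * size + 1)
            cols = range(2 * k * size + 1, (2 * k + 1) * size + 1)
            yield t, k, [(i, j) for i in rows for j in cols]

n = 4
N = 2 ** n
covered = [pair for _, _, elems in tiles(n) for pair in elems]
# the tiles cover each pair i > j exactly once: N(N-1)/2 operators
assert len(covered) == len(set(covered)) == N * (N - 1) // 2
assert set(covered) == {(i, j) for i in range(1, N + 1) for j in range(1, i)}
```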
Next we discuss the commutativity of operators within a given tile.
Note that the operators in any diagonal within a tile
commute. This leads to a
parallelization scheme based on diagonals, which is best described by
an example. In the top green square in Figure~\ref{f:parallel},
$(n,t,k)=(4,2,0)$, we may choose the following ordering
\begin{align}
&(\O_{51}\O_{62}\O_{73}\O_{84})(\O_{52}\O_{63}\O_{74}\O_{81})\nonumber\\
&\quad\times(\O_{53}\O_{64}\O_{71}\O_{82})
(\O_{54}\O_{61}\O_{72}\O_{83})\,.
\end{align}
The terms in each parenthesis commute and can be evaluated in parallel,
but synchronization is required between the parentheses.
In summary, we may evaluate the action of all the $\O_{ij}$ as follows
\begin{align}\label{e:parallel1}
\prod_{t=0}^{n-1}
\prod_{d=1}^{2^{t}}\bigg(
\prod_{i=1}^{2^{t}}
\prod_{k=0}^{2^{n-t-1}-1}
\O_{(2k+1) 2^t + i,\,(2k) 2^t + 1 + [ (i+d-1)\, {\rm mod}\, 2^t]}\bigg)
\end{align}
where the terms in the large parentheses commute and can be run on independent threads.
More generally, instead of diagonals, we may choose the powers of any
cycle of length $2^t$ acting on $(1,\dots, 2^t)$, a set we label $Z_{(2^t)}$,
to cover all elements of a tile
\begin{align}\label{e:parallel2}
\prod_{t=0}^{n-1}
\prod_{\sigma\in Z_{(2^t)}}\bigg(
\prod_{i=1}^{2^{t}}
\prod_{k=0}^{2^{n-t-1}-1}
\O_{(2k+1) 2^t + i,\,(2k) 2^t + \sigma_i}\bigg)\,.
\end{align}
Choosing random instead of fixed permutations for different simulation steps
helps to decrease systematic errors that arise due to the noncommutativity
of terms.
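A minimal Python sketch of this intra-tile scheduling (illustrative; names are ours) splits the $4^t$ operators of a tile into $2^t$ groups of mutually commuting operators, one group per power of a randomly chosen full-length cycle, as in Eq.~(\ref{e:parallel2}):

```python
import random

def tile_schedule(t, k, seed=0):
    """Split the 4**t pairwise operators of tile (t, k) into 2**t
    groups of 2**t mutually commuting operators: each group is the
    graph of one power of a random full-length cycle."""
    size = 2 ** t
    rng = random.Random(seed)
    cyc = rng.sample(range(1, size + 1), size)   # random 2**t-cycle
    pos = {c: idx for idx, c in enumerate(cyc)}
    groups = []
    for d in range(size):                        # powers of the cycle
        groups.append([((2 * k + 1) * size + i,
                        2 * k * size + cyc[(pos[i] + d) % size])
                       for i in range(1, size + 1)])
    return groups

groups = tile_schedule(2, 0)
for g in groups:
    # distinct rows and columns: the operators in a group act on
    # disjoint stars, hence commute and may run on parallel threads
    assert len({i for i, _ in g}) == len({j for _, j in g}) == len(g)
# together the groups cover every element of the tile exactly once
assert len({p for g in groups for p in g}) == 4 ** 2
```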
With at least $N/2$ processors, each parenthesis in
Eqs.~(\ref{e:parallel1})--(\ref{e:parallel2}) can be evaluated in a
time $\tau_e$, where $\tau_e$ denotes the execution time corresponding
to a single $\O_{ij}$ operator. Different threads need to synchronize
data between evaluations of non-commuting operators, and we denote the
corresponding time overhead by $\tau_s$. The execution time of one
timestep of the simulation is then $\sum_{t=0}^{n-1}\sum_{d=1}^{2^t}
(\tau_e+\tau_s)= (N-1)(\tau_e + \tau_s)$, so the parallelized
simulation time scales as $N$. For a limited number of processors
$P=2^p \leq N/2$, the time for evaluating the operators in one
timestep without synchronizations is $N(N-1)\tau_e/(2P)$. In this
case the optimal processor allocation that provides the minimum number
of synchronizations is determined as follows. First split the stellar
system into $B=2P$ blocks of stars, and calculate all of the
interactions within a block on the same processor. Next, to calculate
the $B(B-1)/2$ mutual interactions between blocks, we tile the blocks
according to the same binary tree scheme as shown in
Figure~\ref{f:parallel}. The interactions of different tiles of the
same size commute. Therefore we can evaluate the mutual interactions
between blocks in the order given by
Eqs.~(\ref{e:parallel1})--(\ref{e:parallel2}). The calculation
requires synchronization after each diagonal of the tiles and after
calculating the self-interactions of blocks: $2P$ synchronizations in
total, independent of $N$. Thus, the execution time for $P< N/2$ is
$N(N-1)\tau_e/(2P) + 2 P \tau_s $.
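As a quick sanity check of this runtime model, one can tabulate $N(N-1)\tau_e/(2P)+2P\tau_s$ over the admissible powers of two and locate the optimum; the values of $\tau_e$ and $\tau_s$ below are arbitrary illustrative numbers, not measurements:

```python
def step_time(N, P, tau_e, tau_s):
    """Execution-time model for one timestep with P <= N/2 processors:
    evaluation cost N(N-1)tau_e/(2P) plus 2P synchronizations."""
    return N * (N - 1) * tau_e / (2 * P) + 2 * P * tau_s

N, tau_e, tau_s = 2 ** 14, 1.0, 100.0          # illustrative values
times = {P: step_time(N, P, tau_e, tau_s)
         for P in (2 ** p for p in range(14))}  # P = 1 ... N/2
best = min(times, key=times.get)
# the optimum balances evaluation cost against synchronization
# overhead, roughly at P ~ sqrt(N(N-1) tau_e / (4 tau_s))
assert times[best] <= times[1] and times[best] <= times[2 ** 13]
```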
The parallelization scheme outlined above applies for an arbitrary
indexing of stars. In practice we may also employ all of the
improvements discussed in Sections~\ref{s:s:second
order}--\ref{s:s:timerefinement} to further speed up the
calculation. The multilevel refinement outlined
in Section~\ref{s:s:timerefinement} is commensurate with
this parallelization scheme as long as the $K$ refinement levels are
powers of 2. When the timestep is decreased by a factor $k$,
the execution time increases by the same factor for the corresponding
$(N/K)^2$ operators. However, the number of synchronization steps
increases significantly with each refinement level, since the number
of synchronizations per sweep is independent of $N$.
\subsection{Summary}
First we summarize the algorithm for the eighth-order integrator
(\ref{e:8th-order}), but without timestep refinement; the description
for the second-order integrator is an obvious simplification of this one:
\begin{enumerate}[leftmargin=0.5cm,itemsep=1.5ex,label=\arabic*.]
\item Calculate and store the coupling coefficients $\J_{ij\ell}$ for all $i$ and $j$
and for $\ell=2,4,\dots, \ell_{\max}$.
\item Order stars according to semimajor axis or specific angular
momentum and divide into tiles as illustrated in Figure
\ref{f:parallel}.
\item Choose a random permutation for each tile. Set the timestep to
$\Delta t_s = r_s\Delta t$ for the eighth-order integrator
(Eq.~\ref{e:8th-order}). Repeat the following for $s=0,\ldots,14$
to advance all pairs of stars $i$ and $j$ by substeps $\Delta t_s$:
\begin{enumerate}[itemsep=1.5ex]
\item\label{tile} Starting with the smallest tile size ($t=0$ in Eq.\
\ref{e:parallel2}), use parallel processors to operate on the
elements of a given permutation within a tile and the different
tiles of the same size [the products over $k$ and $i$ in
Eq.~(\ref{e:parallel2})].
\item\label{permutation} Repeat this process for the different
permutations of a given tilesize [the product over $\sigma$ in
Eq.\ (\ref{e:parallel2})].
\item\label{tilesize} Repeat this for the different size tiles
($t=1,\ldots,n-1$).
\item\label{reverse} Repeat the previous three steps in reverse order.
\end{enumerate}
\end{enumerate}
In the algorithm with a two-level timestep refinement and
second-order integrator, iterations \ref{tile}--\ref{tilesize} go as
follows:
\begin{enumerate}[leftmargin=0.5cm,itemsep=1.5ex]
\item\label{i:inner1} Advance the innermost $N/K$ stars (those with
the smallest indices) with a reduced timestep $\Delta t_s/{k}$ for a
total time interval $\Delta t_s/2$, by repeating iterations
\ref{tile}--\ref{tilesize} $k/2$ times. In every second iteration
we reverse the ordering of the operators.
\item\label{i:outer} Evolve the rest of the interactions among the
outer $N(K-1)/K$ stars and the mutual interactions between the inner
and outer stars with a timestep $\Delta t_s/2$ and then in the reverse order
for $\Delta t_s/2$.
\item\label{i:inner2} Repeat step \ref{i:inner1} to evolve the inner block again
for a total time interval $\Delta t_s/2$.
\end{enumerate}
Note that each operator is evaluated for a total $\Delta t_s$ after each iteration
\ref{i:inner1}--\ref{i:inner2}. Methods with higher order refinements decompose
the inner cluster further and repeat steps \ref{i:inner1}--\ref{i:inner2} for
each level of refinement.
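The time bookkeeping of the two-level refinement can be sketched in Python (an illustration only; the tile-level parallel ordering is ignored and only the accumulated time per pair of stars is tracked). The script records the inner-block and outer-block sweeps described above and verifies that every pair advances by exactly $\Delta t$ per timestep:

```python
from collections import defaultdict

def refined_step(N, K, k, dt):
    """One timestep of the two-level refinement, recorded as a list of
    ((i, j), substep) applications. The innermost N/K stars carry the
    smallest indices."""
    inner = N // K
    ops = []
    def inner_sweep():   # inner block: k/2 iterations of substep dt/k,
        for it in range(k // 2):           # alternating operator order
            block = [((i, j), dt / k)
                     for i in range(2, inner + 1) for j in range(1, i)]
            ops.extend(block if it % 2 == 0 else block[::-1])
    def outer_sweep():   # remaining pairs: dt/2 forward, dt/2 reversed
        block = [((i, j), dt / 2)
                 for i in range(2, N + 1) for j in range(1, i) if i > inner]
        ops.extend(block + block[::-1])
    inner_sweep(); outer_sweep(); inner_sweep()
    return ops

total = defaultdict(float)
for pair, h in refined_step(N=8, K=4, k=4, dt=1.0):
    total[pair] += h
# every pair is advanced by exactly dt over the full timestep
assert len(total) == 8 * 7 // 2
assert all(abs(s - 1.0) < 1e-12 for s in total.values())
```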
The cluster composition (in particular the mass and radius
distribution of the stars) and the error tolerance determine the
optimal $K$ and repetition factors $k$ and the most efficient order
for the integrator (see Figures~\ref{f:error-a} and \ref{f:error-a-refinement}).
The value of $\ell_{\max}$ is chosen such
that $\ell_{\max}=\pi/(2I_{\min})$ where $I_{\min}$ is the minimum
inclination that must be resolved by the simulation (see
Appendix~\ref{app:convergence}).
\section{Vector Resonant Relaxation as a Stochastic Process}
\label{s:random}
As an application of these results, we examine VRR of a spherical stellar cluster around a
SMBH \citep{1996NewA....1..149R, 2006ApJ...645.1152H,
2007MNRAS.379.1083G,2009ApJ...698..641E,2011MNRAS.412..187K,
2011MNRAS.411L..56G,2011ApJ...738...99M,2011ApJ...726...61M}. As
discussed in Section \ref{s:intro}, VRR is the stochastic process
arising from the torques between the annuli that represent stellar
orbits that have been averaged over the orbital period and apsidal
precession time. The adjective ``vector'' refers to the fact that such
torques change the orientation of the angular-momentum vector but not
the scalar angular momentum (Eq.~\ref{e:Ltot}).
In the standard (Chandrasekhar) model of two-body relaxation in
stellar systems \citep[e.g.,][]{bt08}, each star undergoes a random
walk in Cartesian velocity space due to encounters with stars passing
nearby. In the incoherent phase of VRR, each star undergoes a
random walk in $\Ln$ on the unit sphere due to torques from other
stars. Two-body relaxation can be approximated as Brownian motion,
that is, most of the relaxation is due to a large number of encounters
of short duration. In contrast, in VRR the stochastic motion of the
orbit normals cannot be divided into discrete steps occurring at a
fixed and very short time interval $\Delta t$. In other words, VRR is unlike Brownian motion or
diffusion in that the angular momenta move in a coherent, spatially
correlated manner until their directions change substantially and they
exhibit incoherent, stochastic evolution only over much longer times. For this
reason, the correlation function of angular momentum vector directions
\emph{cannot} be expressed as
$\|\L_i(t_0+\tau)-\L_i(t_0)\|/\|\L_i(t_0)\| = (\tau/{t_\vrr})^{1/2}$
in the incoherent evolutionary phase, and the definition of the vector resonant
relaxation timescale $t_\vrr$ must be revised.
In Section~\ref{s:random-theory}, we introduce a simple stochastic
model to describe incoherent VRR in a spherical stellar cluster,
in which the angular momentum vector directions undergo an isotropic
random walk on a spherical surface with a step size which is not
infinitesimal and which is drawn from a probability distribution
function (PDF). For any given PDF, we show that the stochastic
evolution may be solved analytically and that the multipole moments of
the correlation function with $\ell>0$ decay exponentially
(Eq.~\ref{e:randomwalk-mean}). We use this property to define the VRR
timescale (Eq.~\ref{e:V-incoherent}) and construct moments of the
stellar distribution (Eq.~\ref{e:V}) that evolve linearly in time
(Eq.~\ref{e:Vell-theory2}). In Section~\ref{s:random-application}, we
analyse the results of our numerical simulations in this framework,
and in Section~\ref{s:comparison}, we compare results in the
literature for the coherent evolutionary phase of VRR with those in
this study.
\subsection{Random walk on the sphere -- general theory}
\label{s:random-theory}
In general, a random walk on a sphere can be described as follows
(\citealt{Roberts_Ursell60}; see also \citealt{1929pomo.book.....D,2012leas.book.....C}).
Suppose that the probability distribution
for the initial position of a point $\r_0$ on the spherical surface of
unit radius, $S_2$, is $\rho_0(\r)$. At step $n$, $\r$ moves an angle
$\alpha_n=\cos^{-1}\mu_n$ on the sphere in a random direction with
probability $p(\mu_n)\D\mu_n$. Therefore, the probability density after
the $n^{\rm th}$ step is set by the probability density of the
preceding step as\footnote{
We define the distribution function of $\r$ as a random field
$\rho_n(\r) \equiv
\rho_n[\r ; \rho_{n-1}(\r')_{\r'\in S_2}, \mu_n]\equiv
\rho_n[\r ; \rho_0(\r')_{\r'\in S_2}, \mu_1,\dots, \mu_n]$ using Eq.~(\ref{e:randomwalkdef}).
Here the $\mu_i$ are independent random variables for all $i$ and
$\rho_0(\r')$ is a given initial distribution for $\r'\in S_2$.
}
\begin{equation}
\rho_n(\r) =
\frac{1}{2\pi}\int_{S_2} \D \r'
\delta(\r \cdot \r'- \mu_n)\,\rho_{n-1}(\r')\,.
\label{e:randomwalkdef}
\end{equation}
This equation is linear in $\rho$ and can be solved using the
eigenbasis of the corresponding linear operator. In
Appendix~\ref{s:app:randomwalk}, we show that the eigenfunctions are
the spherical harmonics\footnote{See definition in Eq.~(\ref{e:Y}).}
$Y_{\ell m}(\r)$ with eigenvalues $P_{\ell}(\mu_n)$. Expanding the
initial distribution in this basis as
\begin{equation}\label{e:randomwalk-rho0}
\rho_0(\r) = \sum_{\ell, m}a_{\ell m,0}Y_{\ell m}(\r)\,,
\end{equation}
the distribution after a single step is
\begin{equation}\label{e:randomwalk1}
\rho_1(\r) = \sum_{\ell,m}
P_{\ell}(\mu_1) \, a_{\ell m,0} Y_{\ell m}(\r)\,,
\end{equation}
and after the $n^{\rm th}$ step it is
\begin{equation}\label{e:randomwalk-rho}
\rho_{n}(\r) = \sum_{\ell, m} a_{\ell m, n}Y_{\ell m}(\r)
\end{equation}
where
\begin{equation}\label{e:randomwalk-a}
a_{\ell m, n} = \prod_{k=1}^n P_{\ell}(\mu_k) \, a_{\ell m,0}\,.
\end{equation}
The expectation value of the $(\ell,m)$ spherical multipole moment in the $n^{\rm th}$ step
is
\begin{equation}\label{e:randomwalk-mean}
\langle a_{\ell m, n} \rangle = \langle P_{\ell}(\mu) \rangle^n a_{\ell m,0}
\end{equation}
where $\langle F(\mu_k) \rangle = \int_{-1}^1 F(\mu_k) p(\mu_k) \D \mu_k$ for any function $F(\mu_k)$.
The RMS fluctuations around the mean are given by
\begin{equation}\label{e:randomwalk-sigma}
\sigma^2\equiv \langle a_{\ell m,n }^2\rangle -\langle a_{\ell m, n}\rangle^2
= \left\{ \left\langle [P_{\ell}(\mu)]^2 \right\rangle^n - \left\langle P_{\ell}(\mu) \right\rangle^{2n} \right\}\, a_{\ell m,0}^2
\end{equation}
and the cross-correlation of $a_{\ell m, n}$ and $a_{\ell' m', n}$
\begin{align}\label{e:randomwalk-correlation}
C^{\ell m}_{\ell' m'} &\equiv \langle a_{\ell m,n } a_{\ell' m',n } \rangle -\langle a_{\ell m, n}\rangle\langle a_{\ell' m', n}\rangle\\
&= \left\{ \left\langle P_{\ell}(\mu) P_{\ell'}(\mu) \right\rangle^n
- \left\langle P_{\ell}(\mu)\right\rangle^n \left\langle P_{\ell'}(\mu) \right\rangle^n \right\}a_{\ell m,0}a_{\ell' m',0}.
\end{align}
Since $|\langle P_\ell(\mu) \rangle | \leq 1$ for $\ell>0$, each
multipole moment with $\ell>0$ decays exponentially in the number of
steps as $|\langle a_{\ell m, n} \rangle|/|a_{\ell m, 0} | = \exp[n
\ln |\langle P_\ell(\mu) \rangle|]$; the system ``isotropizes'' with a
decay time of $-\Delta t / \ln |\langle P_{\ell}(\mu) \rangle |$ where
$\Delta t$ is the timestep.
Since $x\equiv |a_{\ell m,n}/a_{\ell m,0}|$ is an
$n$-element product of independent and identically distributed positive random variables for any
$\ell$ and $m$, the distribution of $\ln x$ for $n\gg 1$
follows from the central limit theorem, and we find that the
probability density function of $x$ is approximately
\begin{equation}\label{e:distribution1}
\varphi(x) \approx \frac{1}{\sqrt{2\pi n} \,x \sigma_0 }
\exp\hspace{-1pt}\left[ - \frac{(\ln x -
n\nu)^2}{2 n \sigma_0^2} \right]
\end{equation}
where
\begin{equation}
\nu\equiv \langle \ln |P_\ell(\mu)|\rangle, \quad \sigma_0^2\equiv\langle
[\ln|P_\ell(\mu)|]^2\rangle -\langle \ln |P_\ell(\mu)|\rangle^2\,.
\label{e:jjjppp}
\end{equation}
Note that the mean and RMS of $a_{\ell m,n}$ are given generally by
Eqs.~(\ref{e:randomwalk-mean})--(\ref{e:randomwalk-sigma}), while
Eqs.~(\ref{e:distribution1})--(\ref{e:jjjppp}) are approximate
statements valid only when $n\gg1$.
The Green's function corresponding to an initial density
$\rho_0$ that is concentrated at the $\theta=0$ pole corresponds to
$a_{\ell m,0} =\sqrt{(2\ell + 1) /(4\pi)}\delta_{m,0}$. Thus the
probability distribution function for the angle $\theta$
between the initial and final position after $n$ steps is given by
\begin{equation}
p_n(\theta)= 2\pi\rho_n(\r)\sin\theta =\sum_{\ell=0}^\infty
\frac{2\ell +1}{2}\prod_{k=1}^n P_{\ell}(\mu_k)\,P_{\ell}(\cos\theta)\sin\theta,
\end{equation}
which implies that\footnote{This quantity is related to the
autocorrelation function of the spherical multipole moments since
$\overline{P_\ell(\cos\theta)}= {4\pi\,(2\ell+1)^{-1}
\sum_{m=-\ell}^{\ell} a_{\ell m,n} a^*_{\ell m,0}}$. }
\begin{equation}\label{e:ppp}
\overline{ P_\ell(\cos\theta)}
\equiv \int_0^\pi P_\ell(\cos\theta) p_n(\theta)\,\D\theta = \prod_{k=1}^n P_{\ell}(\mu_k)
\end{equation}
where the overbar denotes the average over $p_n(\theta)$. Thus after averaging over all $\mu_k$
and $p_n(\theta)$ we get
\begin{equation}\label{e:probability-average}
\overline{ \langle P_\ell(\cos\theta) \rangle} = \langle P_{\ell}(\mu)\rangle^{n}\,.
\end{equation}
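Eq.~(\ref{e:probability-average}) is straightforward to verify by direct Monte Carlo simulation. The Python sketch below (illustrative; it assumes a step-size distribution $p(\mu)=\delta(\mu-\cos\alpha)$, i.e., a fixed angular step) propagates many independent walkers on the unit sphere and compares the sample averages of $P_1(\cos\theta)$ and $P_2(\cos\theta)$ with $P_\ell(\mu)^n$:

```python
import numpy as np

def random_walk_sphere(M, n, alpha, seed=1):
    """M independent walkers on the unit sphere, each taking n steps of
    fixed angular size alpha in a uniformly random direction.
    Returns cos(theta) between initial and final positions."""
    rng = np.random.default_rng(seed)
    r = np.tile([0.0, 0.0, 1.0], (M, 1))
    r0 = r.copy()
    for _ in range(n):
        # orthonormal frame (e1, e2) perpendicular to each walker's r
        a = np.where(np.abs(r[:, :1]) < 0.9, [[1.0, 0, 0]], [[0, 1.0, 0]])
        e1 = np.cross(r, a)
        e1 /= np.linalg.norm(e1, axis=1, keepdims=True)
        e2 = np.cross(r, e1)
        phi = rng.uniform(0, 2 * np.pi, (M, 1))
        r = (np.cos(alpha) * r
             + np.sin(alpha) * (np.cos(phi) * e1 + np.sin(phi) * e2))
    return np.sum(r * r0, axis=1)

mu, n = np.cos(0.5), 10
c = random_walk_sphere(M=40000, n=n, alpha=0.5)
# <P_l(cos theta)> = P_l(mu)^n for a fixed angular step
assert abs(c.mean() - mu ** n) < 0.02                           # l = 1
P2 = 0.5 * (3 * c ** 2 - 1)
assert abs(P2.mean() - (0.5 * (3 * mu ** 2 - 1)) ** n) < 0.02   # l = 2
```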
In a planar random walk with step $\alpha$, the RMS distance traveled
after $n$ steps is $\sqrt{n}\alpha$. This formula does not apply to
the random walk on a sphere unless $\sqrt{n}\alpha\ll1$, since the
geometry is not planar (for example, the maximum angular distance
between any two points on a sphere is $\pi$). To generalize some of
the concepts of planar random walks to the sphere, we first consider the limiting case
of Brownian motion, in which the angular step $\alpha=\cos^{-1}\mu$ and
the timestep $\Delta t$ both approach zero with
$\alpha^2\sim \Delta t$. In this limit $P_{\ell}(\mu)
\approx \exp[-\frac14 \ell (\ell+1) \alpha^2]$, and so Eqs.\
(\ref{e:randomwalk-rho})--(\ref{e:randomwalk-a}) become\footnote{
Brownian motion on the sphere also satisfies the diffusion equation \citep{1929pomo.book.....D}
\begin{equation}
\frac{\D\rho}{\D t} = \frac14 \nabla\cdot \frac{\langle\alpha^2\rangle}{\Delta t}\nabla\rho\,,
\end{equation}
where $\nabla$ is the gradient operator on the unit sphere.
}
\begin{equation}\label{e:randomwalk-brownian}
\rho_{n}(\r) = \sum_{\ell,m} a_{\ell m,0} Y_{\ell m}(\r) e^{-\frac14 \ell (\ell+1) v_n}
\end{equation}
where $v_n = \sum_{k=1}^n\alpha_k^2$, so that
$\langle v_n\rangle = n \langle \alpha^2\rangle = \langle\alpha^2\rangle t
/\Delta t $ is the variance of the corresponding planar
Brownian motion.
The analog of Eq.~(\ref{e:ppp}) is
\begin{equation}
\overline{P_\ell(\cos\theta)} = e^{-\frac14 \ell(\ell+1) v_n}.
\end{equation}
Motivated by the results above, we define the quantity
\begin{equation}
\label{e:V}
V_{\ell}(t) \equiv - \frac{4}{\ell(\ell+1)} \ln \bigg| \frac1{N'} \sum_{i=1}^{N'}\frac1{T} \int_0^{T} \D t_0 P_{\ell}[ \cos \alpha_i(t,t_0) ] \bigg|
\end{equation}
which we call the angular variance; here $\alpha_i(t,t_0)$ is the angular distance traversed
by the orbit normal $\Ln_{i}$ between time $t_0$ and time $t_0+t$, i.e.,
\begin{equation}
\cos \alpha_i(t,t_0) \equiv {\Ln}_{i}(t+t_0) \cdot {\Ln}_{i}(t_0)\,.
\end{equation}
In Eq.~(\ref{e:V}), we have averaged $P_{\ell}(\cos \alpha_i)$ over
both the cluster index and the reference time to reduce statistical noise.
The ensemble average is either
over the full population ($N'=N$) or over a subset of the stars
($N'<N$, e.g., over stars within a restricted range of mass,
eccentricity, and semimajor axis). For Brownian motion
$V_{\ell}(t_n)$ is an estimator of the variance $v_n$ for all $\ell$
so long as $v_n\ll 1$, and for a general random walk it estimates
$-4\ell^{-1}(\ell+1)^{-1}n\ln|\langle P_{\ell}(\mu)\rangle|$. In
either case $V_{\ell}(t)$ grows linearly with time over timescales
long compared to the timestep $\Delta t$ until the $\ell^{\rm th}$
multipole becomes completely mixed. Complete mixing occurs when the
level of anisotropy becomes less than the stochastic variations which
arise due to the finite number of stars. Thus for a single component
cluster, complete mixing occurs when $V_\ell\approx V_{\ell,\sat}$
with $\ell\geq 1$ and
\begin{align}
\exp\left[ -\ffrac{1}{4}\ell (\ell+1) V_{\ell,\sat}\right]
&\equiv \frac{1}{\sqrt{N}}\langle [P_{\ell}(\cos\alpha)]^2\rangle^{1/2}\nonumber\\
&= \frac{1}{\sqrt{N(2\ell+1)}}\,;
\end{align}
in the last line we assumed that $\alpha$ is drawn from an isotropic
distribution. Solving for $V_{\ell, \sat}$ gives
\begin{align}\label{e:Vell-sat}
V_{\ell,\sat} = \frac{2\ln\left[(2\ell+1)N\right]}{\ell(\ell+1)}\,.
\end{align}
In summary, for Brownian motion, the angular variance is expected to follow
\begin{align}\label{e:Vell-theory1}
V_{\ell}(t)
=\left\{
\begin{array}{l}
\displaystyle{ \frac{t}{\Delta t} \langle \alpha^2 \rangle {~~\rm if~~}\langle \alpha^2\rangle^{1/2} \ll 1 {~~\rm and ~~} V_{\ell}< V_{\ell,\sat}}\,,\\[2ex]
\text{stochastic variations around $V_{\ell,\sat}$ otherwise\,,}
\end{array}
\right.
\end{align}
and for a general random walk
\begin{align}\label{e:Vell-theory2}
V_{\ell}(t)
=\left\{
\begin{array}{l}
\displaystyle{ -\frac{4}{\ell(\ell+1)} \frac{t}{\Delta t} \ln|\langle P_{\ell}(\mu)\rangle| {~~\rm if~~}V_{\ell}< V_{\ell,\sat}}\,,\\[2ex]
\text{stochastic variations around $V_{\ell,\sat}$ otherwise\,.}
\end{array}
\right.
\end{align}
Complete mixing occurs when all multipole moments are completely
mixed. We find below that in general the dipole moment is the
slowest to mix, so complete mixing occurs after approximately
$n_{\sat} = -\ln (3N)/(2\ln
|\langle \mu \rangle|) $ timesteps. For small angular steps $n_{\sat} =
\ln (3N)/\langle\alpha^2 \rangle$.
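The angular-variance estimator of Eq.~(\ref{e:V}) can likewise be exercised on a synthetic small-step random walk (a Python sketch for illustration, with the time average over $t_0$ omitted for brevity); in this Brownian regime Eq.~(\ref{e:Vell-theory1}) predicts $V_\ell(t_n)\approx n\langle\alpha^2\rangle$, independent of $\ell$:

```python
import numpy as np

def angular_variance(cos_alpha, ell):
    """V_ell estimator; cos_alpha has shape (steps, walkers).
    Only ell = 1, 2 are implemented here."""
    Pl = {1: lambda c: c, 2: lambda c: 0.5 * (3 * c ** 2 - 1)}[ell]
    return -4.0 / (ell * (ell + 1)) * np.log(
        np.abs(Pl(cos_alpha).mean(axis=1)))

rng = np.random.default_rng(2)
M, nsteps, alpha = 20000, 40, 0.05
r = np.tile([0.0, 0.0, 1.0], (M, 1))
r0 = r.copy()
history = []
for _ in range(nsteps):
    # fixed-size angular step in a uniformly random direction
    a = np.where(np.abs(r[:, :1]) < 0.9, [[1.0, 0, 0]], [[0, 1.0, 0]])
    e1 = np.cross(r, a)
    e1 /= np.linalg.norm(e1, axis=1, keepdims=True)
    e2 = np.cross(r, e1)
    phi = rng.uniform(0, 2 * np.pi, (M, 1))
    r = np.cos(alpha) * r + np.sin(alpha) * (np.cos(phi) * e1
                                             + np.sin(phi) * e2)
    history.append(np.sum(r * r0, axis=1))

V1 = angular_variance(np.array(history), 1)
V2 = angular_variance(np.array(history), 2)
n = np.arange(1, nsteps + 1)
# in the small-step regime V_ell grows linearly, ~ n alpha^2, for all ell
assert np.allclose(V1, n * alpha ** 2, rtol=0.15)
assert np.allclose(V2, n * alpha ** 2, rtol=0.15)
```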
\subsection{Application to resonant relaxation}
\label{s:random-application}
\begin{figure*}
\centering
\mbox{
\includegraphics[width=0.48\textwidth]{fig6a.eps}\hfill
\includegraphics[width=0.48\textwidth]{fig6b.eps}
}
\caption{\label{f:betaT}
The dimensionless coherent torque parameter $\beta_T$
(Eqs.~\ref{e:torque-coherent-powerlaw} and \ref{e:betaT3}) for a star with
eccentricity $e$, orbiting in a spherical population of stars with a
fixed eccentricity $e'$ and a distribution of semimajor axes
$n(a)\propto a^{-1.5}$ (left panel) and $\propto a^{-2.5}$ (right
panel). The colored curves have $e=0$, 0.1, \dots, 0.9 from top to
bottom, and $e'$ is varied on the horizontal axis. }
\end{figure*}
We now apply these results to VRR. In the incoherent phase of VRR,
each star undergoes a random walk in $\r\equiv \Ln$ on the unit sphere
due to torques from other stars.
We introduce a decoherence time $t_{\phi}$: over time intervals much
less than the decoherence time the stochastic torque on a star is
temporally correlated\footnote{In practice we identify the decoherence
time with the time over which the torque is approximately
constant.} (``coherent evolution''), while the torques at times
separated by much more than the decoherence time are temporally
uncorrelated (``incoherent evolution''). Of course, the decoherence
time will depend on the eccentricity and semimajor axis of the star
and the properties of the stellar cluster of which it is a member. We
first determine the RMS torque that characterizes the coherent
evolutionary phase, then we use the stochastic model of the previous
section to characterize the incoherent evolution. We analyse our
numerical simulations in this framework and determine how the model
parameters depend on the physical parameters of the stellar orbits in
the two regimes.
A second parameter that characterizes the evolution of a star $i$ during
VRR is related to the RMS torque that it experiences.
For a cluster composed of stars of similar semimajor
axes $a$, and a distribution of eccentricities and masses,
\begin{align}\label{e:torque-coherent}
T_{\rms,i} = \langle\T_{i}^2\rangle^{1/2}
&\simeq \frac{\beta_T}{2\pi}\frac{ G \sqrt{N} m_{\rms} m_i }{a}
\nonumber\\
&= \beta_T \frac{\sqrt{N} m_{\rms}}{M_{\bullet}}
\frac{m_i \sqrt{G M_{\bullet} a}}{P}\,,
\end{align}
where $P=2\pi(a^3/G M_\bullet)^{1/2}$ is the orbital period,
$m_{\rms} = (N^{-1}\sum_i m_i^2)^{1/2}$, $\beta_T $ is a dimensionless constant of
order unity, and averaging is over the distribution of the other stars
in a spherical cluster, $\Ln_{j\neq i}$.
Similarly, the RMS rate of change of the orbit normal for star $i$ is
\begin{equation}\label{e:om-coherent}
\Omega_{\rms,i} = \bigg\langle\bigg(\frac{\D\Ln_{i}}{\D t}\bigg)^2\bigg\rangle^{1/2}
=\left\langle \frac{\T_i^2}{L_i^2}\right\rangle^{1/2} \simeq
\beta_\Omega \frac{\sqrt{N}m_{\rms}}{M_\bullet P}.
\end{equation}
Using the notation of Eqs.~(\ref{e:HRR}) and (\ref{e:EOM2}),
\begin{align}\label{e:betaT0}
\beta_T &= \frac{2\pi a}{G m_i m_{\rms}} \bigg[
\frac{1}{N}\sum_{j,k=1}^N\sum_{\ell,n}\J_{ij\ell}\J_{ikn}P_\ell'(\cos I_{ij})
\bigg.\nonumber \\
&\quad\times P_n'(\cos I_{ik}) \,(\cos I_{jk}-\cos I_{ij}\cos I_{ik})\bigg]^{1/2},
\end{align}
and $\beta_\Omega=\beta_T(1-e_i^2)^{-1/2}$. We simplify this
expression in Appendix~\ref{s:beta}. We find that the series
in $\ell$ converges very quickly, and so the coherent torques
in a spherical cluster are predominantly quadrupolar. The torque is a
Gaussian random variable with zero mean and dispersion set by
$\beta_T$.
More generally, if there is a range of semimajor axes with $\D N=4\pi
a^2n(a)\,\D a$ stars in the semimajor axis interval $a\to a+\D a$, we
can replace $N$ by $\D N /\D \ln a = 4\pi a^3n(a)$ in all these
equations where $a\equiv a_i$. For example, Eqs.\
(\ref{e:torque-coherent}) and (\ref{e:om-coherent}) become
\begin{align}\label{e:torque-coherent-powerlaw}
T_{\rms,i} &\simeq \beta_T \frac{\sqrt{\D N/\D\ln a}\, m_{\rms}}{M_{\bullet}}
\frac{m_i \sqrt{G M_{\bullet} a}}{P}\,, \nonumber \\
\Omega_{\rms,i} &\simeq \beta_\Omega \frac{\sqrt{\D N/\D\ln a}\, m_{\rms}}{M_\bullet P}.
\end{align}
In Appendix~\ref{s:beta}, we show that with this definition $\beta_T$
is independent of $a$ if the distribution of $a$ is a power law, and
independent of the distribution of stellar masses. We evaluate the
average in Eq.~(\ref{e:betaT0}) as integrals over orientation,
eccentricity, and semimajor axis (Eq.~\ref{e:betaT3}) for $n(a)\propto
a^{-1.5}$ and $\propto a^{-2.5}$. Figure~\ref{f:betaT} shows
$\beta_T$ for an orbit with eccentricity $e$, assuming that all stars
in the cluster have a fixed eccentricity $e'$. The Figure shows that
$0.7\lesssim \beta_T\lesssim 1.5$ and that $\beta_T$ is a decreasing
function of both $e$ and $e'$. Thus we may generally conclude that
$\beta_T$ must be a decreasing function of $e$ for an arbitrary
eccentricity distribution, with values in the same range 0.7--1.5. In
particular Figure~\ref{f:beta} shows $\beta_T$ and $\beta_{\Omega}$
for a star cluster with a thermal eccentricity distribution $\D N =
2e\,\D e$ and number density proportional to $a^{-\gamma}$ where
$1.5<\gamma<2.5$. Simple fitting formulae are\footnote{This result
disagrees with the eccentricity dependence reported by
\citet{2007MNRAS.379.1083G}, for reasons given in
Section~\ref{s:comparison} below.}
\begin{align}\label{e:betafit}
\beta_T(e) \simeq 1.05 - 0.3\, e\,,\quad
\beta_{\Omega}(e) \simeq \frac{1.05 - 0.3\, e}{(1-e^2)^{1/2}}\,.
\end{align}
Thus we find that the angular-momentum re-orientation timescale is
approximately independent of the semimajor axis distribution
(i.e., the exponent $\gamma$), and is also independent of the eccentricity for $0\leq e
\leq 0.75$, to within $20\%$ accuracy. The
angular momenta of highly eccentric stars are re-oriented much more
rapidly. RMS-averaging over both $e$ and $e'$ for a thermal
distribution yields
$\langle\beta_T^2\rangle^{1/2}=0.85$.\footnote{The RMS average of
$\beta_{\Omega}$ over both $e$ and $e'$ in a thermal eccentricity distribution
is logarithmically divergent, $\langle \beta_{\Omega}^2\rangle^{1/2}\propto \ln(1-e_{\max})$
for $e_{\max}\rightarrow 1$.}
\begin{figure}
\centering
\mbox{
\includegraphics{fig7.eps}
}
\caption{\label{f:beta}
The dimensionless parameters $\beta_T$ and $\beta_{\Omega}$
(Eq.~\ref{e:torque-coherent-powerlaw}) describing
the RMS coherent torque and precession rate for a star with
eccentricity $e$ due to a spherical population of stars with a
thermal distribution of eccentricity $\D N = 2 e \D e$ and a
distribution of semimajor axes $n(a)\propto a^{-\gamma}$, where
$1.5\leq \gamma \leq 2.5$ as labeled. The evaluation is done using
Eq.~(\ref{e:betaT3}). }
\end{figure}
Pursuing the analogy to the random walk on the sphere, the decoherence
time $t_\phi$ takes the place of the timestep and
$\Omega_{\rms}t_\phi$, which we call the angular coherence length,
takes the place of the RMS angular displacement per timestep
$\langle\alpha^2\rangle$. On timescales short compared to the
decoherence time, the orbit normals move in the mean field of the
cluster at a rate $\D\Ln/\D t$ which is approximately constant\footnote{
As long as the mean-field potential is constant in time, ${\Ln}$
moves with angular velocity $\partial H_{\E}/\partial \L$ along a
closed path on the unit sphere that is a contour of constant
$H_{\E}$, see Eq.~(\ref{e:EOM2}).}, and the angular variance is
\begin{align}
V_{\ell}(t) &= -\frac{4}{\ell(\ell+1)}\ln
\left|\left\langle P_{\ell}\hspace{-1pt}\big[\Ln_i(t+t_0)\cdot\Ln_i(t_0)\big] \right\rangle \right|,
\quad
t\,\lesssim\, t_{\phi, \ell}
\nonumber\\ &= \Omega_{\rms}^2t^2\,,
\label{e:V-coherent}
\end{align}
where the quadratic approximation in the second
line holds so long as the angular displacement is small
($V_\ell(t)\ll1$). On timescales long compared to the decoherence
time, the orbital vectors execute a random walk, where
\begin{align}
V_{\ell}(t) = \frac{t}{t_{{\vrr},\ell}}, \quad t\,\gtrsim\, t_{\phi, \ell}\,,
\label{e:V-incoherent}
\end{align}
which defines the VRR time for the $\ell^{\rm th}$ harmonic
$t_{{\vrr},\ell}$. We identify the decoherence time
with the transition from quadratic to linear growth
of $V_{\ell}(t)$, that is,
\begin{equation}\label{e:tvrr-def}
t_{\phi,\ell} = \frac{1}{\Omega_{\rms}^2 t_{\vrr,\ell}}\,.
\end{equation}
\begin{figure}
\centering
\mbox{
\includegraphics{fig8.eps}
}
\caption{\label{f:Vell} The evolution of the angular variance
$V_{\ell}=-4\ell^{-1}(\ell+1)^{-1}\ln \big|{(NT)}^{-1}\int_0^{T} \D
t_0\sum_{i=1}^N P_{\ell}[\cos \alpha_i(t,t_0)]\big|$ in a simulation
with 16,384 stars. Here $\alpha_i(t,t_0)$ is the angular distance
between the angular-momentum vector of star $i$ at time $t_0$ and
time $t+t_0$. The stars are initially spherically distributed with
nearly the same semimajor axis and a uniform distribution in the
square of the eccentricity for $e\lesssim 0.99$ (i.e., uniform
distribution on the energy surface in phase space). The angular
variance is expected to grow quadratically at early times (coherent
torques) and linearly at later times (random walk on a sphere) until
the mode is fully mixed, as marked by short coloured lines on
the vertical axis (Eq.\ \ref{e:Vell-sat}). The shaded region
shows $\min[ (\beta_{\Omega} t f_{\vrr}/t_{\vrr})^2, t/t_{\vrr} ]$
for reference where $t_{\vrr} = f_{\vrr}
[M_{\bullet}/(\sqrt{N}m_{\rms})] P$, $0.9\leq \beta_{\Omega}\leq
1.5$, and $0.8\leq f_{\vrr} \leq 1.5$.}
\end{figure}
For a single-component spherical cluster of stars, the torques are
comparable for different stars and constant for a characteristic time
$\sim\Omega_{\rms}^{-1}$, and therefore one might expect
$t_{\phi}\sim\Omega_{\rms}^{-1}$, so the angular coherence length is
$\Omega_{\rms}t_{\phi} \sim 1$. In this case the formulae above yield
$t_{\vrr}\sim \Omega_{\rms}^{-1}$ so we write
\begin{equation}
t_{\vrr} = f_{\vrr}\frac{M_\bullet}{\sqrt{N}m_{\rms}}P\,,
\label{e:vrr}
\end{equation}
where $f_{\vrr}$ is a dimensionless constant of order unity. With
these definitions the decoherence time is\footnote{
Using the notation of \citet{2009ApJ...698..641E}, the decoherence
time is parameterized by the dimensionless constant $A_{\phi}$ as
$t_{\phi} = A_{\phi} [M_{\bullet}/(\sqrt{N}m_{\rms})] P$.}
\begin{equation}\label{e:tphi}
t_\phi=\frac{1}{f_{\vrr}\beta_\Omega^2}\frac{M_\bullet}{\sqrt{N}m_{\rms}}P\,.
\end{equation}
\begin{figure*}
\centering
\mbox{
\includegraphics{fig9a.eps}\quad
\includegraphics{fig9b.eps}\quad
\includegraphics{fig9c.eps}
}\\[2ex]
\mbox{
\includegraphics{fig9d.eps}
\includegraphics{fig9e.eps}
}
\caption{\label{f:Lmapandtorque} {\it Top panels:} The evolution of
the normalized angular-momentum vectors for two representative stars
from a simulation similar to Figure~\ref{f:Vell}. The three panels
show orthogonal projections on the $x$--$z$, $y$--$z$, and $x$--$y$
planes. The two stars were randomly selected from a subset of stars
with roughly the mean eccentricity of the cluster $\langle e \rangle
=0.58$. The motion is shown for a time interval $t=7
\,\Omega_{\rms}^{-1}$. {\it Bottom panels:} The evolution of the
torque as a function of time. The bottom left panel shows the two
stars for which the trajectories are shown in the top panels. The
long-dashed and short-dashed lines show the mean of $\|\bm{T}_i\|$
and $T_{\rms,i}$ of the cluster, respectively. In the bottom right
panel, the solid green and black curves show stars with nearly the
mean eccentricity and stars from the whole eccentricity range,
respectively. The decoherence time is approximately independent of
eccentricity. }
\end{figure*}
\begin{figure*}
\mbox{\hspace{-20pt}\includegraphics{fig10a.eps}\qquad
\includegraphics{fig10b.eps}}\\
\mbox{
\vspace{2pt}\hspace{10pt}\includegraphics{fig10c.eps}\qquad
\includegraphics{fig10d.eps}}
\centering
\caption{\label{f:correlation1} The angular variance $V_{\ell}(t)$ as in
Figure~\ref{f:Vell}, but for $\ell=1$ and different initial
conditions (top left), different numbers of stars (top right) and
different RMS masses (bottom panels). {\it Top left:} Different
curves show the range spanned by six simulations with different
initial conditions. {\it Top right:} The number of stars $N$ is
varied between 256 and 16384, the legend shows $N/1024$. {\it
Bottom left:} The stellar cluster comprises 15k low-mass and
1k high-mass stars, with masses chosen so that the total mass $Nm$ is
the same for both groups. The curves show $V_1$ for stars grouped in subsets
containing 1k members, sorted by mass and eccentricity
(curves are colored by eccentricity as shown on the right); solid
and dashed lines have different stellar masses as labeled. {\it
Bottom right:} Similar to bottom left, but with heavy stars
$\sqrt{15}\times$ more massive than light stars. The shaded regions show $1.1 \leq
\beta_{\Omega}\leq 1.8$ and $0.5\leq f_{\vrr}\leq 1.5$ (top panels),
$0.5 \leq \beta_{\Omega}\leq 2$ and $0.75\leq f_{\vrr}\leq 4.5$
(bottom left), and $0.7 \leq \beta_{\Omega}\leq 3.0$ and $0.3\leq
f_{\vrr}\leq 2.5$ (bottom right). }
\end{figure*}
For a cluster with a range of semimajor axes, the relaxation time of Eq.~(\ref{e:vrr}) becomes
\begin{equation}
t_{\vrr}(a) = f_{\vrr}\frac{M_\bullet}{\sqrt{4\pi a^3n(a)}m_{\rms}}P(a)\,.
\label{e:vrr1}
\end{equation}
We measure the dimensionless parameters $\beta_\Omega$
and $f_{\vrr}$ using numerical simulations, from the behavior of
$V_\ell(t)$ at small and large times.
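Since $P(a)\propto a^{3/2}$ and $\sqrt{4\pi a^3 n(a)}\propto a^{(3-\gamma)/2}$ for a power-law density $n(a)\propto a^{-\gamma}$, Eq.~(\ref{e:vrr1}) implies $t_{\vrr}(a)\propto a^{\gamma/2}$. A short sketch of this scaling (using the $\gamma$ values and $a_{\max}/a_{\min}=100$ of the simulations below) reproduces the factors of 56 and 250 quoted in the caption of Figure~\ref{f:correlation_a}:

```python
def t_vrr_ratio(gamma, a_ratio):
    """Ratio t_vrr(a_max)/t_vrr(a_min) for n(a) proportional to a^{-gamma}.

    From Eq. (e:vrr1): P(a) ~ a^{3/2} and sqrt(4 pi a^3 n(a)) ~ a^{(3-gamma)/2},
    so t_vrr(a) ~ a^{gamma/2}.
    """
    return a_ratio ** (gamma / 2.0)

ratio_B  = t_vrr_ratio(1.75, 100.0)  # B-star-like profile, about 56
ratio_WR = t_vrr_ratio(2.40, 100.0)  # Wolf-Rayet/O-star-like profile, about 250
```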
\begin{figure*}
\centering
\mbox{
\includegraphics[height=7.1cm]{fig11a.eps}
\includegraphics[height=7.57cm]{fig11b.eps}}\\
\mbox{
\includegraphics[height=7.1cm]{fig11c.eps}
\includegraphics[height=7.5cm]{fig11d.eps}}
\caption{\label{f:correlation_a}
VRR in a stellar cluster with a range of eccentricity $0\leq e
\leq0.99$ ($\D N = 2e\D e$) and semimajor axis
$a_{\max}/a_{\min}=100$ with number density $n(a)\propto a^{-1.75}$
(left panels) and $a^{-2.4}$ (right panels). We sort the stars with
respect to their semimajor axis (top panels) and eccentricity
(bottom panels) and group them into 32 bins containing 128 stars
each. The 32 curves in each panel show $V_1=-2\ln|\langle \cos
\alpha_i\rangle|$ as in Figure~\ref{f:correlation1} for the stars in
the corresponding bins where $\alpha_i$ is the angular distance
traversed by star $i$ in dimensionless time $\tau =
t/[M_{\bullet}m_{\rms}^{-1}(\D N/\D \ln a)^{-1/2}P(a)]$ from some
reference time $t_0$. We average over $i$ and $t_0$ for each $\tau$.
The evolution of this quantity is quadratic in the initial coherent
phase and linear during incoherent random mixing. The curves are
colored according to the semimajor axis (top) or eccentricity
(bottom panels) as shown on the right. The main systematic effect
with semimajor axis is well captured by the relaxation time formula,
the curves nearly overlap in these units despite a range of a factor
of 56 (left panels) or 250 (right panels) in $t_{\vrr}(a)$. Residual
variations are probably due to edge effects: stars near $a_{\min}$
and $a_{\max}$ relax slower. Since the curves nearly overlap for
$e<0.7$, stars with small to moderately large eccentricities relax
at nearly the same rate given by $t_{\vrr}(a)$. However highly
eccentric orbits $e>0.8$ relax by up to a factor 4--8 faster. The
eccentricity dependence in the coherent phase of the simulation is
in perfect agreement with the direct calculation shown in
Figure~\ref{f:beta}.}
\end{figure*}
\begin{figure*}
\centering
\mbox{\includegraphics[height=7.12cm]{fig12a.eps}
\includegraphics[height=7.58cm]{fig12b.eps}}\\
\mbox{
\includegraphics[height=7.12cm]{fig12c.eps}
\includegraphics[height=7.58cm]{fig12d.eps}}
\caption{\label{f:correlation_a2}
Same as Figure~\ref{f:correlation_a} but with an eccentricity
distribution that is thermal below $e=0.4$ and flat at higher
eccentricities, $\D N\propto e \,\D e$ for $e<0.4$ and $\D N\propto \D e$ otherwise.
The trends are very similar. The interaction calculation $\J_{ij\ell}$ was
truncated above multipole harmonic index $\ell_{\max}=20$ in this figure,
and above $50$ in Figure~\ref{f:correlation_a}.
}
\end{figure*}
Figure~\ref{f:Vell} shows $V_{\ell}(t)$ measured in a simulation of a
spherical cluster with 16,384 stars with nearly the same semimajor
axes and masses. The figure shows that indeed all $V_{\ell}$ grow
quadratically at first (coherent torques) and then linearly (random
walk), until eventually they saturate and thereafter execute random
variations. The dipole ($\ell=1$) mixes most slowly, higher harmonics
mix sooner. The curves with different $\ell$ approximately overlap
before they saturate; this behavior is in agreement with
Eq.~(\ref{e:Vell-theory1}) for Brownian motion even though the angular
coherence length is of order unity so the Brownian approximation is
questionable. The shaded region shows the model described by
Eqs.~(\ref{e:V-coherent})--(\ref{e:vrr}) with
$0.9\leq \beta_{\Omega}\leq 1.5$ and $0.8\leq f_{\vrr}\leq 1.5$,
the best-fit dimensionless torque and VRR factors are
$\beta_\Omega\approx 1.2$ and $f_{\vrr}\approx 1.2$. The linear
evolution corresponding to a random walk starts where
$V_{\ell}(t_{\phi})=\beta_{\Omega}^{-2}f_{\vrr}^{-2} \approx 0.5 $ for
$1\leq \ell\leq 5$. The angular coherence length is $\langle
\alpha^2\rangle^{1/2} = \Omega_{\rms}t_{\phi} \approx
\beta_{\Omega}^{-1}f_{\vrr}^{-1} \approx 0.7~{\rm rad} \approx 39\,\rm deg$.
The horizontal lines show the expected saturated level of $V_{\ell}$
based on Eq.~(\ref{e:Vell-sat}), which is consistent with the curves.
Thus, our approximate treatment of the stochastic motion as a random
walk appears to provide a consistent model of the evolution shown in
Figure \ref{f:Vell}.
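The linear growth of $V_\ell$ in the incoherent phase can be illustrated with a toy Monte Carlo of a random walk on the unit sphere: for isotropic steps of fixed angle $\alpha$, each step multiplies $\langle\cos\alpha_{\rm tot}\rangle$ by exactly $\cos\alpha$, so $V_1=-2\ln|\langle\cos\alpha_{\rm tot}\rangle|$ grows by $-2\ln\cos\alpha$ per step. A self-contained sketch (a stand-alone toy model, not the \textsc{n-ring} integrator):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: N walkers on the unit sphere, each stepping by a fixed
# angle ALPHA about a random direction perpendicular to its position.
N, ALPHA, STEPS = 50_000, 0.5, 4
v0 = np.zeros((N, 3))
v0[:, 2] = 1.0                 # all walkers start at the pole
v = v0.copy()

for _ in range(STEPS):
    u = rng.normal(size=(N, 3))                       # random 3-vectors
    u -= np.sum(u * v, axis=1, keepdims=True) * v     # project out v
    u /= np.linalg.norm(u, axis=1, keepdims=True)     # unit vector, u perp v
    v = np.cos(ALPHA) * v + np.sin(ALPHA) * u         # step by ALPHA

# Each step multiplies <cos alpha_tot> by cos(ALPHA), so V_1 grows
# linearly: V_1 = -2*STEPS*ln(cos ALPHA) in expectation.
V1 = -2.0 * np.log(abs(np.mean(np.sum(v * v0, axis=1))))
V1_expected = -2.0 * STEPS * np.log(np.cos(ALPHA))
```

With $5\times10^4$ walkers the Monte Carlo estimate matches the exact linear prediction to a few per cent; sampling noise eventually saturates $V_1$ at the fully mixed level, as in Figure~\ref{f:Vell}.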
To show an example of the actual motion of angular-momentum vectors,
the top panel of Figure~\ref{f:Lmapandtorque} shows a time interval
$\sim 7\,\Omega_{\rms}^{-1}$ of the $\Ln_i$ trajectory for two stars
in a simulation similar to Figure~\ref{f:Vell}. The two stars are
chosen to have close to the mean eccentricity of the cluster. In this
case, our model approximates their motion as $\sim 10$ steps of a
random walk with an average step size of $30^\circ$. The time interval
shown corresponds to $\sim 3$ relaxation timescales, and $\sim 0.25$
of the complete mixing timescale for $\ell=1$. The bottom left panel
of Figure~\ref{f:Lmapandtorque} shows the torque as a function of time
in units of $M_{\bullet} P
/(\sqrt{N}m_{\rms})=\beta_\Omega\Omega_{\rms}^{-1}\approx\Omega_{\rms}^{-1}$
for the same stars. The bottom right panel of
Figure~\ref{f:Lmapandtorque} shows the torque as a function of time
for a larger sample of stars: green curves show stars with nearly the
median eccentricity, black curves show stars from the full range of
eccentricities ($0\leq e<0.99$). The torques vary substantially from
their initial values after a decoherence time $t_{\phi} \sim
(0.3$--$0.7)\,\Omega_{\rms}^{-1}$, which is consistent with our
earlier estimate from the angular variance. The decoherence time is
similar for stars of all eccentricities.
Figure~\ref{f:correlation1} shows $V_\ell(t)$ in simulations with
different initial conditions, numbers of stars, and distributions of
stellar masses. In these simulations we continue to assume that all
stars have nearly the same semimajor axis, a spherical distribution in
angular-momentum space, and a thermal distribution of eccentricities
as in Figure~\ref{f:Vell}.
We find that Eq.~(\ref{e:vrr}) describes well the dependence of the
relaxation timescale $t_{\vrr}$ on the number of stars, although
the fitted value of
$f_{\vrr}$ can vary by 30--40\% for different initial
conditions. In particular, in the upper right
panel we vary the number of stars by a factor $64$ but the variation
in scaled time at a fixed value of $V_1$ is less than a factor of two,
and shows no systematic trend with $N$. Complete mixing occurs when
the angular variance saturates, which in these simulations occurs at
$t_{\sat}\sim 10$--$30\, t_{\rm vrr}$.
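This range is consistent with the random-walk picture: treating the incoherent phase as steps of size $\alpha_{\rms}=\Omega_{\rms}t_\phi$ taken every $t_\phi$, with $t_\phi/t_{\vrr}=(f_{\vrr}\beta_\Omega)^{-2}=\alpha_{\rms}^2$, the dipole saturates after roughly $(\ln 3N)/\alpha_{\rms}^2$ steps, i.e.\ $t_\sat\sim t_{\vrr}\ln(3N)$. A brief arithmetic check using the best-fit constants of Figure~\ref{f:Vell}:

```python
import numpy as np

N = 16_384                      # stars in the simulation of Fig. f:Vell
f_vrr, beta_Om = 1.2, 1.2       # best-fit values quoted above

alpha2 = 1.0 / (f_vrr * beta_Om) ** 2     # alpha_rms^2 = t_phi / t_vrr
steps_to_mix = np.log(3 * N) / alpha2     # dipole mixing, in steps of t_phi
t_sat_over_t_vrr = steps_to_mix * alpha2  # = ln(3N), approximately 10.8
```

This lands at the lower end of the measured $t_\sat\sim10$--$30\,t_{\vrr}$.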
Note however that some of the curves do not display a perfectly linear growth
during incoherent evolution; in various runs $V_{\ell}(t)$
exhibits time dependence both shallower and steeper than linear.
Similar anomalous diffusion is often observed
in chaotic systems near phase transitions, in random walks where the
probability distribution of step size is heavy-tailed, and in systems with long-term memory
(\citealt{PhysRevLett.83.2104,PhysRevE.82.021101,Gottwald21052013}; see also
\citealt{2011MNRAS.411L..56G} and \citealt{2013ApJ...764...52B} for related findings in scalar resonant
and two-body relaxation, respectively).
The simulations in the bottom panels contain two groups of stars with
the same total mass $Nm$ (bottom left panel) and the same value of
$\sqrt{N}m$ (bottom right panel); the RMS masses in the clusters are
$m_{\rms}=3.87$ and $1.40$ respectively. Here $V_{1}(t)$ is shown
for 1k element bins sorted by mass and eccentricity, with solid
curves showing the low-mass stars, colors representing eccentricity
as shown on the right, and dashed black curves showing the high-mass
stars. We find that the predicted scaling with $m_{\rms}$ captures
the mass dependence well. The relaxation is approximately eccentricity
independent for $e\lesssim 0.8$, and it is systematically faster
for more eccentric orbits, but the decoherence time is roughly
independent of eccentricity even for very eccentric orbits.
This is consistent with the observation
that the mean field of the cluster is dominated by stars with
$e\lesssim 0.8$ ($64\%$ of stars have $e<0.8$) and the torque
decreases weakly with eccentricity; thus the torque is approximately
constant until the stars with $e\lesssim 0.8$ are re-oriented.
Next let us relax the assumption of a fixed semimajor axis. We
distribute the orbits over semimajor axes spanning
$a_{\max}/a_{\min}=100$, and integrate for $\sim 10$ relaxation times
at the outer edge of the cluster or $\sim10^3$ relaxation times at the
inner edge of the cluster. To maintain numerical accuracy for such a
large dynamic range, we reduce the number of stars to 4096. Each star
has the same mass and a thermal distribution of eccentricity ($\D N =
2 e \,\D e$). We bin the
stars according to semimajor axis or eccentricity to look for
systematic effects in the relaxation time.
Figure~\ref{f:correlation_a} shows the result of two simulations with
number density profiles $n(r)\propto r^{-1.75}$ and $r^{-2.4}$,
which correspond to the observed distributions of
B-stars and Wolf--Rayet/O stars, respectively
\citep{2010ApJ...708..834B}. We find that the dependence of the
relaxation time $t_{\vrr}(a)$ on semimajor axis $a$ is given
approximately by Eq.~(\ref{e:vrr1}). Indeed, despite a range of a
factor of $(0.3$--$6)\times10^{4}$ in the number density as a
function of semimajor axis, there is less than a factor $\sim 3$
variation in the angular variance $V_1(t)$ when time is measured in
units of $t_{\vrr}(a)$ as given by Eq.~(\ref{e:vrr1}). This is only a
little larger than the factor $\sim 2$ variation seen for different
realizations of the initial conditions (cf. top left panel of
Figure~\ref{f:correlation1}). Moreover most of this variation is
seen for stars with semimajor axes near the cutoffs at $a_{\max}$ and
$a_{\min}$, and so is probably due to ``edge effects''.
The bottom panels of Figure~\ref{f:correlation_a} show the dependence
of the relaxation rate on eccentricity: the rate is nearly
independent of eccentricity for $e\lesssim 0.7$, but orbits with
$e\,\gtrsim \,0.8$ relax faster on average, by as much as a factor of
4--8. This behavior is in good agreement with the direct calculation
of $\beta_T$ and $\beta_\Omega$ shown in Figure~\ref{f:beta}.
Highly eccentric orbits have a much larger $\beta_{\Omega}$;
therefore they are re-oriented more rapidly and have a larger
angular coherence length. However, the decoherence time is roughly
independent of eccentricity since the torques are dominated by
stars with $e\lesssim 0.7$ which have similar $\beta_\Omega$ and
therefore are re-oriented at similar rates.
Figure~\ref{f:correlation_a2} shows the evolution in the case where
the number of high-eccentricity stars is smaller, but the results
are very similar.
The figures show that $\beta_{\Omega}=0.95\pm 0.1$
and $f_{\rm vrr}=1.9\pm0.2$ for $e<0.7$, while for orbits with $e>0.8$
$\beta_{\Omega}$ and $f_{\rm vrr}$ are larger and smaller by up to factors of
$\sim 3.5$ and $5$, respectively.
\subsection{Comparison with previous results}\label{s:comparison}
In this paper, we have explored an idealized model of how orbits in a
spherical stellar system undergo re-orientation due to torques from
other orbits. Our model is based on the approximation that the rate
of apsidal precession is much faster than the rate at which the
orbital planes change their orientation. This approximation is valid
because the ratio of the re-orientation time to the apsidal precession
time in a cluster of $N\gg 1$ stars scales as $\sqrt{N}$ (see Section
\ref{s:intro}) which is likely to be valid for most stars in the
Galactic center with semimajor axes between $\sim0.003\pc$ and $\sim
1\pc$ (see Section~\ref{s:Hamiltonian} and Figure 1 of KT11).
There are many previous studies of resonant relaxation
\citep{1996NewA....1..149R, 2006ApJ...645.1152H,
2007MNRAS.379.1083G,2009ApJ...698..641E,2009ApJ...702..884P,
2009ApJ...705..361G,2010PhRvD..81f2002M,
2011MNRAS.411L..56G,2011ApJ...738...99M,2011PhRvD..84d4024M,
2011ApJ...726...61M,2012A&A...545A..70S,2013ApJ...763L..10A}. Some
of these studies only computed the torques between fixed Kepler
ellipses, which are not relevant in the regime considered here where
the apsidal precession is faster than the re-orientation of the
ellipse. Some employed direct N-body simulations, which in principle
are more accurate than the approximations used here. However, due to
the computational cost of studying slow processes such as VRR with
direct N-body simulations, earlier studies were restricted to either
(i) small-$N$ systems, in which the vector and scalar resonant
relaxation timescales are not well-separated, or (ii) following the
N-body system for less than the apsidal precession period, so the
torque parameters $\beta_T$ and $\beta_\Omega$ (Eq.\
\ref{e:torque-coherent-powerlaw}) were measured for Keplerian ellipses
rather than annuli. Thus, either they did not measure the long-term average
values of $\beta_T$ and $\beta_\Omega$ that are relevant for VRR, or
they did not measure the coefficient $f_{\vrr}$ (Eq.\ \ref{e:vrr}) that
parametrizes incoherent VRR. We believe that the simulations in this paper
provide the first detailed study of VRR that represents
both the coherent and incoherent evolution for systems with a large
number of stars.
When comparing with earlier studies, we must account for definitions
of $\beta$ in these papers that are slightly different from ours:
\begin{itemize}
\item \citet{1996NewA....1..149R} defined $\beta^{\rm RT}$ using
$\langle\|\T_i\|\rangle=\beta^{\rm RT} (2\pi)^{-1} \sqrt{N}Gm^2/a_i$
where $N$ denotes the total number of stars. They carried out N-body
simulations with $64\le N\le 8192$ and a range of semimajor axes
$a_{\max}/a_{\min}=10$, with $n(a)\propto a^{-\gamma}$, $\gamma=2$,
and a thermal distribution of eccentricities for $e\le
e_{\max}=0.8$. Only 64 ``active'' stars interacted
self-consistently; the rest exerted torques on the active stars but
followed fixed orbits in either a point-mass or an isochrone
potential (the isochrone was used to experiment with the effect of
rapid apsidal precession). They measured $\beta^{\rm RT}$ as
$\beta^{\rm RT} \equiv \langle 2\pi a_i \|\T_i\|\rangle/(\sqrt{N}
Gm^2 ) =(M_{\bullet}/\sqrt{N}m) \langle \|\T_i \|P_i / L_{{\rm
c},i}\rangle_i$, where $L_{{\rm c},i}=m \sqrt{G M_{\bullet}
a_i}$ is the angular momentum of a circular orbit. To compare this
to our $\beta_T$ we must make two corrections. First, for a
two-dimensional Gaussian distribution ($\T$ perpendicular to $\L$)
$\langle \|\T\|\rangle = \half\pi^{1/2} T_{\rms} $. Second, we
measure $\beta_T$ using $dN/d\ln a$ whereas they use $N$; to make
the conversion we note from Appendix~\ref{s:beta} that in a
power-law density distribution $\beta_T$ and hence $ a
\|\T\| /\sqrt{\D N /\D \ln a}$ is independent of $a$, so we have
$\langle a_i\|\T_i\|\rangle/\sqrt{N} = (a_i \|\T_i\|/\sqrt{N})\langle
\sqrt{\D N/\D\ln a}\,\rangle /\sqrt{\D N/\D\ln a_i}$. Then for the
assumed number density profile ($\gamma=2$, $a_{\max}/a_{\min}=10$), we have $\beta^{\rm RT}
= \half f_e\pi^{1/2} \beta_T {\int_{a_{\min}}^{a_{\max}} \D N\,(\D N /\D \ln
a)^{1/2} /(N^{1/2}\int_{a_{\min}}^{a_{\max}} \D N)} =
0.670f_e\, \beta_T$, where $f_e\simeq 1.2$ is a correction arising
because Rauch \& Tremaine did not have any stars with
$e>e_{\max}=0.8$ (cf.\ Fig.~\ref{f:betaT}). They measured
$\beta^{\rm RT} = 1.8\pm 0.1$ in the Kepler case where the
background stars had no apsidal precession due to the unperturbed
potential, and $\beta^{\rm RT} = 0.7\pm 0.1$ in the isochrone case
with rapid apsidal precession, corresponding to $\beta_T=2.2\pm0.1$
and $\beta_T=0.9\pm0.1$, respectively.
\item \citet{2007MNRAS.379.1083G} defined $\beta^{\rm GH}$ using
$\langle\|\T_i\|\rangle=\beta^{\rm GH} \sqrt{N(<2a_i)}Gm^2/a_i$,
where $N(<2a_i)$ denotes the number of stars with semimajor axis
less than $2a_i$. They calculated the orbit-averaged torques for
fixed Keplerian wires using $N=10,000$ stars with density
$n(a)\propto a^{-\gamma}$ and $\gamma=1.4$. They found that the
mean absolute torque along the minor axis of the orbit increased
with eccentricity and decreased along the major axis, such that the
total torque increased with eccentricity as $\beta^{\rm GH} = 1.76
(e^2 + 0.5)/2\pi$ with an average over the eccentricity distribution
($\D N/\D e\propto 2e$) $\langle \beta^{\rm GH}\rangle = 1.76/2\pi$
and an RMS $\langle (\beta^{\rm GH})^2\rangle^{1/2}=1.83/2\pi$. For
a power-law density distribution our definition of the torque
parameter is related to theirs as
$2\pi\beta^{\rm GH}= \pi^{1/2} (3-\gamma)^{1/2} 2^{(\gamma-5)/2}
\beta_T$. For $\gamma=1.4$ this yields $2\pi\beta^{\rm GH} =
0.64\,\beta_T$, so their result implies
$\beta_T=2.7(e^2+0.5)$. Averaging over a thermal distribution of
eccentricities yields $\langle \beta_T\rangle=2.6$
and $\langle \beta_T^2 \rangle^{1/2}=2.9$.
\item \citet{2009ApJ...698..641E} defined $\beta^{\rm EKA}$ using a
definition similar to that of \citet{1996NewA....1..149R},
$\langle\|\T_i\|\rangle=\beta^{\rm EKA} (2\pi)^{-1}
\sqrt{N}Gm^2/a$. They conducted a number of N-body simulations with
$N=200$ and a variety of semimajor axis distributions, number
density $n\propto a^{-\gamma}$ with $1\leq \gamma\leq 1.75$. They
noted that the torque perpendicular to $\Ln$ was mostly along the
instantaneous minor axis of the orbit, as in
\citet{2007MNRAS.379.1083G}. Using the same arguments as for
\citet{1996NewA....1..149R}, we get that $\beta^{\rm EKA} = 0.68\,
\beta_T$ and $0.69\, \beta_T$ for $\gamma=1.75$ and $\gamma=1$,
respectively. Measuring the re-orientation correlation function for
a few precession times, they found $\beta^{\rm EKA} = 1.83\pm 0.03$,
which implies $\beta_T = 2.7$.
\end{itemize}
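The numerical conversion coefficients quoted in the list above can be verified directly; the following sketch evaluates the two closed-form factors (this only checks the quoted arithmetic for the stated power-law profiles):

```python
import numpy as np

# Rauch & Tremaine case: gamma = 2 and a_max/a_min = 10, so
# dN/dln a ~ a and dN ~ da.  The semimajor-axis average in the text,
# integral of dN (dN/dln a)^{1/2} over (N^{1/2} integral of dN), is
amin, amax = 1.0, 10.0
axis_avg = (2.0 / 3.0) * (amax**1.5 - amin**1.5) / (amax - amin) ** 1.5
rt_factor = 0.5 * np.sqrt(np.pi) * axis_avg
# beta^RT = rt_factor * f_e * beta_T, with rt_factor about 0.670

# Keplerian-wire case (gamma = 1.4):
# 2 pi beta^GH = sqrt(pi) (3 - gamma)^{1/2} 2^{(gamma-5)/2} beta_T
gamma = 1.4
gh_factor = np.sqrt(np.pi) * np.sqrt(3.0 - gamma) * 2.0 ** ((gamma - 5.0) / 2.0)
# gh_factor about 0.64, so beta_T = 1.76 (e^2 + 1/2) / gh_factor,
# about 2.7 (e^2 + 1/2)
```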
For comparison, our calculations yield $\beta_T\simeq 0.85\pm 0.1$
(Eq.\ \ref{e:betafit} and Fig.\ \ref{f:beta}), which is a factor 3
smaller than the results reported by \citet{1996NewA....1..149R},
\citet{2007MNRAS.379.1083G}, and \citet{2009ApJ...698..641E}. The
systematically higher value of $\beta_T$ found in these investigations
arises because the torque on an orbit was averaged over a timescale
short compared to the apsidal precession period\footnote{The rate of
re-orientation, as measured by $\beta_T$, can be even more rapid on
timescales shorter than or comparable to the orbital period
\citep{2010PhRvD..81f2002M,2011CQGra..28v5029S,2012A&A...545A..70S}.}.
As shown by these studies, the largest component of the torque is
parallel to the minor axis of the Keplerian orbit; as the orbit
precesses, the direction of the largest torque precesses as well so
the mean torque averaged over a precession period is smaller than the
mean torque averaged over the orbital period. The use of torques
averaged over the apsidal precession period rather than the orbital
period is necessary to estimate the rate of VRR on timescales longer
than the apsidal precession period, so long as apsidal precession is
much faster than nodal precession. This requirement is satisfied for stars of
small to moderate eccentricity at all radii in the Galactic centre
(see Fig.\ 1 of KT11), but can fail for nearly radial orbits at large
or small radii (see footnote \ref{foot:prec}).
An observation that supports this argument is that our estimate
$\beta_T\simeq 0.85\pm 0.1$ matches the estimate $\beta_T=0.9\pm0.1$
reported by \citet{1996NewA....1..149R} for the isochrone potential, in
which the stars are subject to rapid apsidal precession. Furthermore,
a similar rate of vector resonant relaxation was found using direct
N-body simulations, which
looked at the long term behavior of orbits close to the SMBH including
relativistic corrections (Kupi \& Alexander, private communication\footnote{talk presented at Stars and Singularities, Benoziyo Center for Astrophysics Workshop Series, Rehovot, Israel, \url{http://www.weizmann.ac.il/home/tal/Workshop09/talk_files/Kupi.pdf}}). Their rate
of re-orientation may be fitted by $\|\Delta \L\|/L = c_0 (\beta_{T,0} +
\beta_{T,1}) t/t_{\omega}$ if $t\,\lesssim\, t_{\omega}$, and $c_0
[\beta_{T,0} (t/t_{\omega})^{1/2} + \beta_{T,1} (t/t_{\omega})]$ if
$t_{\phi} \,\gtrsim\, t \,\gtrsim\, t_{\omega}$, where
$\beta_{T,0}+\beta_{T,1}\simeq 2.7$, $\beta_{T,1}\simeq 0.9$,
$t_\omega$ is the apsidal precession time, and $c_0$ is a
constant. Thus, part of the initial coherent torque becomes incoherent
over timescales longer than the apsidal precession time $t_{\omega}$,
leaving a much smaller coherent component thereafter.
An additional limitation of earlier studies is that they could
not accurately characterize the properties of the random walk for the direction
$\Ln_i$ during the incoherent phase of VRR (i.e., the parameter
$f_{\vrr}$ of Eq.\ \ref{e:vrr}), mainly due to the computational cost
of long N-body integrations. Furthermore, previous simulations were
restricted to a small number of self-consistently interacting stars
(between $50$ and $200$), in which complete mixing sets in much earlier
(Eqs.~\ref{e:Vell-theory1}--\ref{e:Vell-theory2}), which makes the
measurement of the parameters of the incoherent phase more difficult.\footnote{
\citet{2009ApJ...698..641E} defined $f_{\vrr}=1/(A_{\phi} \beta_{\Omega}^2)$,
where $A_{\phi}$, set by the decoherence time in Eq.~(\ref{e:tphi}),
was not determined.}
Finally, previous analyses used the simplified model
$\langle\|\L(t+t_0) - \L(t_0)\|/L \rangle \propto (t/t_{\rm
vrr})^{1/2}$ to characterize VRR, which is
not appropriate if the angular coherence length is of order unity; it
is for this reason that we developed the analysis in
Section~\ref{s:random-theory} based on the random walk on the sphere.
\section{Summary}
We have introduced a new integrator, \textsc{n-ring}, to simulate
vector resonant relaxation in stellar clusters around supermassive
black holes. \textsc{n-ring} integrates Hamilton's equations for $N$
stars, averaged over the orbital period and apsidal precession. The
code uses a multipole expansion (up to $\ell_{\max}=50$ in our
experiments) of the averaged inter-particle potential. The code
decomposes the evolution into pairwise interactions, integrates the
averaged Hamiltonian exactly for each pairwise interaction, and
iterates over all $\half N(N-1)$ such interactions, thereby conserving
the total angular momentum exactly. The coupling coefficients for
different multipole moments are generally complicated functions of the
semimajor axis and eccentricity, but can be calculated once and for
all at the start of the integration.
We have shown how to make the algorithm time-reversible and $n^{\rm
th}$ order accurate (up to $n=8$ in our experiments). We constructed
a parallelization scheme, and increased the efficiency using a
time-block refinement and operator ordering. Using a small computer
cluster of 32 cores, this integrator can accurately integrate the
evolution of a cluster of $\sim 10^4$ stars with a large range of
radii for $\sim 10$ relaxation times within 7 days.
The major challenges that limit the speed of the code include the
following.
\begin{enumerate}[leftmargin=0.5cm]
\item The coupling coefficients driving resonant relaxation
can be strongly enhanced for orbits with nearly coincident periapsides
or apoapsides (see bottom panels of Figure \ref{f:energy-alpha}).
\item
For radially overlapping orbits the coupling coefficients decline
relatively slowly, as $\ell^{-2}$, implying that all multipoles up to
$\ell\sim 1/I$ contribute equally to the motion for orbital
inclination $I$.
\item The precession frequency between two radially
overlapping orbits diverges as their mutual inclination approaches
zero.
\item Gravitational N-body integrations of star clusters,
galaxies, or large-scale structure benefit from the fact that most
stars are at large distances ($N\sim r^3$) so their collective
gravitational potential can be approximated by a few multipole
moments; in contrast, in the averaged problem investigated by
\textsc{n-ring} each star can interact strongly with all stars having
radially overlapping orbits. Thus there are no simple ways to reduce
the number of calculations per timestep below O$(N^2)$. However,
parallel execution on $N$ processors can reduce the computation time
to O$(N)$.
\end{enumerate}
We derived a stochastic model to describe a random walk with
an arbitrary distribution of step sizes on the unit sphere. Expanding
the probability distribution in spherical harmonics shows that the
amplitudes of the spherical harmonics with $\ell>0$ decay exponentially during a
spherical random walk. The angular variance $V_{\ell}\equiv
-4\ell^{-1}(\ell+1)^{-1}\ln|\langle P_{\ell}(\cos\alpha) \rangle|$
grows linearly in time, where $\alpha$ is the angular distance
traversed by an orbit normal in time $t$ and $P_{\ell}(\cdot)$ are
Legendre polynomials.
We have investigated the long-term evolution of spherical stellar
systems with up to 16k stars, spanning a factor of up to 100 in
semimajor axis. The simulations confirm that the
orbital orientation vectors initially evolve coherently
($V_{\ell}\propto t^2$) and then undergo a spherical random walk
($V_{\ell}\propto t$) until the system becomes fully mixed. The RMS
step size of the random walk in our simulations is
$\alpha_{\rms}\simeq 0.5$--1 radians and full mixing requires $\sim(\ln
3N)/\alpha_{\rms}^2$ random-walk steps, where $N$ is the number of stars.
In the initial coherent phase of vector resonant relaxation, the RMS
torques can be calculated exactly (Appendix~\ref{s:beta} and Figures
\ref{f:betaT} and \ref{f:beta}). This confirmed the analytical
scaling relations with semimajor axis, number density, and component
mass, and showed perfect agreement with the simulations. In
particular, the torque parameter is $\beta_T=0.8$--$1.5$ (see
Eq.~\ref{e:torque-coherent-powerlaw} and Figure~\ref{f:betaT}) for
different eccentricities. The rate of re-orientation of the orbital
plane follows a similar scaling with
$\beta_\Omega=\beta_T/(1-e^2)^{1/2}$ (Eq.~\ref{e:betafit}). We found
that the torques are generally weakly decreasing functions of the
eccentricity in spherical clusters during vector resonant relaxation,
and in particular for a thermal eccentricity distribution
$\beta_T\simeq 1.05-0.3\,e$. The rate of re-orientation of the orbit
axis is approximately independent of eccentricity for $e\lesssim 0.7$,
and much faster only for $e \,\gtrsim\, 0.8$. The rate of
re-orientation is smaller than has been observed in most\footnote{Except for the
isochrone simulations of \citet{1996NewA....1..149R} and Kupi \&
Alexander, as described in Section~\ref{s:comparison}.} N-body simulations
by a factor $\sim 3$, and most of this difference arises
because the torque perpendicular to the angular-momentum vector is
smaller when apsidal precession is rapid.
Our simulations confirm the formula for the vector resonant relaxation
timescale derived from a model of the relaxation as a random walk on
the sphere (Eq.~\ref{e:vrr}) and imply that the parameter
$f_{\vrr}\simeq 0.5$--2.1 depending mainly on eccentricity (Figures
\ref{f:correlation_a} and \ref{f:correlation_a2}). In a thermal
distribution of eccentricities ($\D N \propto 2\,e\,\D e$), we find
that highly eccentric orbits $e\,\gtrsim\, 0.8$ relax faster by up to
a factor $5$; however, the vector resonant relaxation time for low-
and moderate-eccentricity orbits with $e\lesssim 0.7$ is practically
independent of eccentricity with $f_{\vrr}\simeq 1.9\pm 0.2$. The
simulations also show that the decoherence time of vector resonant
relaxation is roughly independent of eccentricity in the full eccentricity range. The
angular-momentum vectors in the inner regions of our simulated cluster
undergo a stochastic random walk already when the vectors in the outer
parts of the cluster are still experiencing a coherent torque. For a
cluster with a given number of stars, the relaxation rate is
proportional to the RMS stellar mass of the stellar cluster. Thus the
primary uncertainty in estimating the vector resonant relaxation near
the Galactic centre is the mass function of stars, stellar remnants,
gas clouds, etc.: the RMS stellar mass diverges even for a Salpeter
mass function unless a maximum-mass cutoff is imposed, and the mass
function in the Galactic centre is believed to be more top-heavy than
in the solar neighbourhood (see KT11 and references therein).
We found that the Markovian random walk on a sphere gives a good approximate
description of the long-term evolution under vector
resonant relaxation. However, in some cases the temporal correlation
function displays deviations from this model even after averaging over
several mixing timescales (Figure~\ref{f:correlation1}), which
possibly indicates some level of persistent long-term memory in these
stellar systems. In the future we will use \textsc{n-ring} to
examine resonant dynamical friction and vector resonant relaxation in
anisotropic systems.
The purpose of this paper has been twofold: first, to develop an
efficient and general numerical algorithm for simulating vector
resonant relaxation, and second, to relate the simple analytic
description of vector resonant relaxation to quantitative results from
our simulations of model star clusters surrounding central
black holes.
\section*{Acknowledgments}
BK was supported in part by the W.M. Keck Foundation Fund of the
Institute for Advanced Study and NASA grants NNX11AF29G and
NNX14AM24G. Simulations were run on the Harvard Odyssey, CfA/ITC,
and IAS clusters.
\bibliography{ms}
\section*{Introduction}
Recent advances in machine learning (ML) and quantum computing have revolutionized the methodology of processing complex and large-scale data.
At the intersection of these fields, classical and quantum systems can generate massive amounts of time-series data, such as sensing data or quantum states that flow through multiple quantum channels in a network of quantum devices~\cite{kimble:2008:quinternet,simon:2017:nat:quinternet,wehner:2018:science:quinternet}.
This context calls for a novel learning paradigm that processes such data efficiently,
offering easy manipulation in training and deployment while maintaining rich representation capability.
Currently, algorithms are designed for specific homogeneous data, i.e., quantum-native or classical-native data.
However, most quantum devices rely on classical controls~\cite{viola:1999:prl:open,dong:2010:control}, such as temperature or signals from electronic controllers~\cite{vandijk:2019:control,rist:2020:microwave}.
The outputs of these devices are therefore not derived from quantum channels alone; they must be regarded as a function of both classical controls and quantum inputs.
A representative example is a quantum switch with classical control, which simulates the indefinite causal order between two operations~\cite{chiribella:2012:qswitch,procopio:2015:qswitch:exp,rubino:2017:qswitch:exp,goswami:2018:qswitch:exp,wei:2019:qswtich:exp} [Fig.~\ref{fig:hybrid:overview}(a)].
Therefore, the research on hybrid quantum and classical data processing can lead to broader and near-term applicability for quantum devices. For example, we can use the same resource to learn the tomography of devices receiving both classical and quantum data without doing it separately for each control setting.
Contrary to ML models such as artificial recurrent neural networks on a digital computer, a physical system with rich dynamics can be a good candidate for a learning system within the framework of physical reservoir computing (PRC)~\cite{nakajima:2021:RCbook,nakajima:2020:physical}. In PRC, the input is fed into a dynamical system called a reservoir to create nonlinear dynamics of input data via sufficiently complex and high-dimensional trajectories~\cite{jaeger:2001:echo,jaeger:2001:short,maass:2002:reservoir,lukoeviius:2009:reservoir,nakajima:2021:RCbook}.
A readout, which outputs a linear combination of the accessible observables in the reservoir, is the only part that needs to be trained without interfering with the reservoir's internal parameters.
Accordingly, the success and efficiency of PRC rely on good physical realizations of the reservoir, which has attracted considerable interest from diverse research fields~\cite{nakajima:2020:physical}.
The seminal work~\cite{fujii:2017:qrc} uses a disordered ensemble quantum dynamics system as a quantum reservoir (QR) to process classical data,
with the possibility of having a large number of degrees of freedom.
QRs have been developed in various platforms, such as nuclear magnetic resonance (NMR) systems~\cite{fujii:2017:qrc,nakajima:2019:qrc,tran:2020:higherorder}, superconducting quantum processors~\cite{chen:2020:temporal,suzuki:2021:natural}, fermions and bosonic models~\cite{ghosh:2019:quantum,ghosh:2020:reconstruct,ghosh:2019:neuromorphic,khan:2021:qrc:boson}, quantum harmonic oscillators~\cite{nokkala:2020:gaussian,gerasimos:2021:prx:measurement},
arrays of Rydberg atoms~\cite{bravo:2021:rydberg}, and photonic quantum memristors~\cite{spagnolo:2022:natphotonic}.
Several studies have focused on the processing of data in the form of quantum states~\cite{ghosh:2019:quantum,ghosh:2019:neuromorphic,ghosh:2020:reconstruct,ghosh:2021:quantum:adv,tran:2021:temporal,nokkala:2021:online}, which provide certain advantages over classical ML methods.
However, a QR is yet to be treated as a homogeneous data-driven model because it lacks the ability to deal with hybrid forms of quantum-classical data.
Therefore, a unified architecture for hybrid quantum-classical processing is required from both theoretical and applied perspectives.
\begin{figure*}
\includegraphics[width=14cm]{FIG1.pdf}
\protect\caption{A quantum reservoir processor for quantum-classical hybrid data processing. (a) An example of a quantum device with hybrid inputs. Here, we consider a quantum switch that includes two quantum channels $\ensuremath{\mathcal{N}}_A$ and $\ensuremath{\mathcal{N}}_B$ and an independent switch state $\rho_s$ controlled by a classical signal $s$.
This quantum switch can be considered a function of the hybrid input $(s, \beta)$.
Given a quantum state $\beta$, the quantum switch produces an output $\ensuremath{\mathcal{N}}_A\circ \ensuremath{\mathcal{N}}_B(\beta)$ if $\rho_s = \ket{0}\bra{0}$ (s=0) and $\ensuremath{\mathcal{N}}_B\circ\ensuremath{\mathcal{N}}_A(\beta)$ if $\rho_s = \ket{1}\bra{1}$ (s=1).
When $\rho_s$ is in a superposition of $\ket{0}$ and $\ket{1}$, such as $\rho_s=\ket{\psi_s}\bra{\psi_s}$ with $\psi_s = \sqrt{s}\ket{0}+\sqrt{1-s}\ket{1}$ ($0\leq s \leq 1$), the output becomes a quantum superposition of two alternative orders $\ensuremath{\mathcal{N}}_A\circ\ensuremath{\mathcal{N}}_B(\beta)$ and $\ensuremath{\mathcal{N}}_B\circ\ensuremath{\mathcal{N}}_A(\beta)$.
(b) Our quantum reservoir (QR) is a network of quantum dots that can receive both quantum and classical data as input.
Quantum inputs are incident via optical fields, and classical inputs are encoded in experimental control fields.
The appropriate readout after a time evolution on QR can provide a high-dimensional transformation for both classical and quantum inputs, which can be used in learning tasks.
\label{fig:hybrid:overview}}
\end{figure*}
In this study, we establish a framework that considers a QR as an analog processor to process hybrid quantum-classical data.
Inspired by Refs.~\cite{ghosh:2019:quantum,ghosh:2020:reconstruct}, our QR is a network of quantum dots with random inter-site couplings.
Classical inputs are encoded in classical controls, such as coherent pumps in the network,
and quantum inputs are incident to the QR in the form of optical fields. For temporal processing, each quantum input interacts with the QR for a short duration before being replaced by another input.
The time evolution of the interactions provides a high-dimensional nonlinear mapping of the input via the correlations in the QR, which can be extracted by classical or quantum readouts on accessible nodes.
This enables us to learn the function of input sequence, leading to diverse applications in classical and quantum data processing.
\section*{Results}
\textbf{Quantum\text{--}Classical Hybrid Information Processing via a Quantum Reservoir. }
When we describe a quantum device processing quantum data in a realistic scenario, we must incorporate classical control into the model.
In this case, a quantum device is in fact a function of quantum input $\beta$ and classical control $u$ as $\ensuremath{\mathcal{F}}(u, \beta)$, where we consider the scalar $u$ for ease of explanation.
For a device processing the sequence of hybrid inputs $(u_1, \beta_1)$, $(u_2, \beta_2)$, \ldots, we can describe it using the temporal map $\ensuremath{\mathcal{F}}(\{(u_l, \beta_l)\})$ of input history~\cite{tran:2021:temporal}.
Our target is to develop a trainable framework to emulate $\ensuremath{\mathcal{F}}$.
The proposed framework contains three main parts: an input part containing input modes to receive the data, a QR processor to interact with inputs in a quantum evolution, and a readout for further processing [Fig.~\ref{fig:hybrid:overview}(b)]. We consider the QR processor as a two-dimensional lattice of $N$ quantum dots, represented by the Hamiltonian
\begin{align}\label{eqn:hamiltonian}
\hat{H} = &\sum_i E_i\hat{\ensuremath{c}}^{\dagger}_i\hat{\ensuremath{c}}_i + \sum_{\langle i,j \rangle}h_{ij}\left( \hat{\ensuremath{c}}^{\dagger}_i\hat{\ensuremath{c}}_j + \hat{\ensuremath{c}}^{\dagger}_j\hat{\ensuremath{c}}_i\right)\nonumber\\
& + \sum_iQ_i\hat{\ensuremath{c}}^{\dagger}_i\hat{\ensuremath{c}}^{\dagger}_i\hat{\ensuremath{c}}_i\hat{\ensuremath{c}}_i + P(t)\sum_i\left( \hat{\ensuremath{c}}^{\dagger}_i + \hat{\ensuremath{c}}_i \right),
\end{align}
where $\hat{\ensuremath{c}}_i$, $E_i$, $h_{ij}$, $Q_i$, and $P(t)$ are the field operators, onsite energies, hopping amplitudes between the nearest neighbor sites, nonlinearity strengths, and uniform time-dependent coherent field strengths, respectively.
$P(t)$ can be used to encode the classical input $u(t)$ as
$P(t) = P + Wu(t)$, where $P$ and $W$ are the constant coefficient and input scaling, respectively.
The dynamics of the combined quantum state $\rho$ of the QR and the input modes can be described by the quantum master equation (we use units in which the Planck constant $\hbar=1$):
\begin{align}
\dot{\rho} = -i[\hat{H}, \rho] + \gamma\sum_j\ensuremath{\mathcal{L}}(\hat{\ensuremath{c}}_j)\rho + \Omega(t - t_{\text{init}})\hat{A}\rho,
\end{align}
where $\Omega(t)=1$ for $t\geq 0$ and $0$ otherwise.
Here, $\hat{A}\rho=\sum_k \dfrac{\gamma_k}{\gamma}\ensuremath{\mathcal{L}}(\hat{\ensuremath{a}}_k)\rho + \sum_{k,j}W_{jk}^{\text{in}} \left( \left[\hat{\ensuremath{a}}_k\rho, \hat{\ensuremath{c}}^{\dagger}_j \right] + \left[\hat{\ensuremath{c}}_j, \rho\hat{\ensuremath{a}}^{\dagger}_k \right] \right)$ represents the cascade coupling between the input modes $\hat{\ensuremath{a}}_k$ and the QR~\cite{gardiner:1993:prl:driving}.
The Lindblad superoperator $\ensuremath{\mathcal{L}}(\hat{x})$ is defined for any arbitrary operator $\hat{x}$ by $\ensuremath{\mathcal{L}}(\hat{\ensuremath{x}})\rho=\hat{\ensuremath{x}}\rho\hat{\ensuremath{x}}^{\dagger} - \dfrac{1}{2}\left( \hat{\ensuremath{x}}^{\dagger}\hat{\ensuremath{x}}\rho + \rho\hat{\ensuremath{x}}^{\dagger}\hat{\ensuremath{x}}\right)$.
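As a minimal numerical sketch (not the code used in our simulations), the Lindblad dissipator defined above can be verified to preserve the trace and Hermiticity of $\rho$; the two-level Hamiltonian and decay operator below are illustrative assumptions:

```python
import numpy as np

def dissipator(x, rho):
    """Lindblad superoperator L(x)rho = x rho x^dag - (1/2){x^dag x, rho}."""
    xd = x.conj().T
    return x @ rho @ xd - 0.5 * (xd @ x @ rho + rho @ xd @ x)

def master_rhs(H, rho, jumps, gamma):
    """Right-hand side of drho/dt = -i[H, rho] + gamma * sum_j L(c_j) rho."""
    rhs = -1j * (H @ rho - rho @ H)
    for c in jumps:
        rhs += gamma * dissipator(c, rho)
    return rhs

# Two-level toy example: H = sigma_x, decay via sigma_minus.
H = np.array([[0, 1], [1, 0]], dtype=complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
rho = np.array([[0.7, 0.2 - 0.1j], [0.2 + 0.1j, 0.3]], dtype=complex)
rhs = master_rhs(H, rho, [sm], gamma=0.5)
print(abs(np.trace(rhs)))  # ~0: the evolution preserves the trace
```

The commutator term is traceless and the dissipator's gain and loss terms cancel under the trace, so any valid right-hand side must satisfy this check.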
\begin{figure*}
\includegraphics[width=17cm]{FIG2.pdf}
\protect\caption{Demonstration of the quantum tomography task and the classical channel equalizer task. (a) A random sequence of one-qubit quantum inputs (upper panel) and a result for the channel equalizer task (bottom panel) in the evaluation phase. Each quantum state is represented as a real vector by stacking the real and imaginary parts of the density matrix. (b) The target and reconstructed tomography with $N=3$ reservoir sites, $P/\gamma=0.1$, $W/\gamma=1.0$, and the measurement multiplexity $V=8$.
\label{fig:switch:eqn}}
\end{figure*}
\begin{figure*}
\includegraphics[width=17cm]{FIG3.pdf}
\protect\caption{Performance in the quantum tomography and classical channel equalizer tasks. (a) The average root mean square of fidelities (RMSF) and the average symbol error rate (SER) with shaded error bars over 10 trials.
(b) (Left) The average RSMF in the tomography task when we increase the number $N$ of reservoir sites in the QR.
(Right) Comparison between the average SER in the Echo State Network (ESN) and in our QR for the same number of computational nodes.
In (b), we set the input scaling as $W/\gamma=1.0$ and the measurement multiplexity as $V=8$ for numerical experiments; therefore, the QR with number of computational nodes $16, 24, 32, 40, 48,$ corresponds to $N=2, 3, 4, 5, 6$ sites in the reservoir.
\label{fig:switch:perform}}
\end{figure*}
We explain quantum-classical hybrid processing using the proposed platform.
First, for $0\leq t< t_{\text{init}}$, the QR is excited only by the uniform field $P$, with no incident quantum inputs. We choose $t_{\textup{init}}$ such that the QR reaches a steady state by time $t_{\text{init}}$.
This setting ensures the echo state property~\cite{jaeger:2001:echo} for the reproducible computation, where the response to the same input sequence is independent of the QR's initial state.
Then, the quantum input $\beta$ (described by the input modes $\hat{\ensuremath{a}}_k$) is incident to the reservoir, and the classical input $u(t)=u$ is activated at the same time.
At time $t_1 = t_{\textup{init}} + \tau$ for time interval $\tau$, an appropriate and practical readout from the QR is performed for nontrivial transformations of input data
(see Supplementary Information for detailed settings of $h_{ij}$, $\gamma_k$, $W_{jk}^{\text{in}}$, $\tau$, and $t_{\textup{init}}$).
We consider two readout schemes: a linear combination of measurement results on the accessible observables (classical readout) and a linear combination of quantum modes (quantum readout). The former is associated with a measurement process, while the latter has been considered in a quantum neuromorphic platform for quantum state preparation~\cite{ghosh:2019:neuromorphic}.
For a non-temporal processing task, we repeat the above procedure for every hybrid data instance $(u, \beta)$.
For a temporal processing task, at $t_l = t_{\textup{init}} + (l-1)\tau$ ($l=1,2,\ldots$), the classical input is switched to $u(t)=u_l$, and the quantum state $\beta_l$ replaces the partial state in the input modes. Since the input information is transferred into the QR during the interaction, this scheme enables the memory ability, which is required in temporal processing tasks.
In the classical readout, measuring the expectation values of the occupation numbers $n_j=\langle \hat{\ensuremath{c}}^{\dagger}_j\hat{\ensuremath{c}}_j\rangle$ can extract the information from the QR to reconstruct $\ensuremath{\mathcal{F}}$.
A representative application is quantum tomography, which reconstructs the density matrix output of $\ensuremath{\mathcal{F}}$ via the linear regression model: $W^{\textup{out}}\ensuremath{\boldsymbol{n}} + \ensuremath{\boldsymbol{b}} \approx \ensuremath{\boldsymbol{Y}}_{\ensuremath{\mathcal{F}}}$~\cite{ghosh:2020:reconstruct,tran:2021:temporal}.
Here, $\ensuremath{\boldsymbol{n}}=(n_1,\ldots,n_K)^\top$ is the $K$-dimensional reservoir state for readout, $\ensuremath{\boldsymbol{Y}}_{\ensuremath{\mathcal{F}}}$ is the real vector form to stack the real and imaginary elements of $\ensuremath{\mathcal{F}}$,
and $W^{\textup{out}}$ and $\ensuremath{\boldsymbol{b}}$ are the weight and bias parameters to be optimized via the training (see Methods).
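The classical readout is ordinary linear regression over the reservoir features. A hedged sketch follows, with synthetic reservoir states and a hypothetical linear target standing in for the measured occupation numbers and the stacked density-matrix elements; the ridge regularization is one common choice, not necessarily the one used in our Methods:

```python
import numpy as np

rng = np.random.default_rng(1)
L, K, T = 800, 24, 8                      # training length, readout dim K = N*V, target size
X = rng.standard_normal((L, K))           # reservoir states n_l (one row per input)
W_true = rng.standard_normal((T, K))      # hypothetical ground-truth map, for testing only
Y = X @ W_true.T                          # stacked Re/Im parts of the target states

# Train W_out, b by regularized least squares: W_out n + b ~ Y
Xb = np.hstack([X, np.ones((L, 1))])      # append a bias column
lam = 1e-8
Wb = np.linalg.solve(Xb.T @ Xb + lam * np.eye(K + 1), Xb.T @ Y)
W_out, b = Wb[:-1].T, Wb[-1]

Y_pred = X @ W_out.T + b
print(np.max(np.abs(Y_pred - Y)))         # ~0 on this noiseless toy data
```

Only $W^{\textup{out}}$ and $\ensuremath{\boldsymbol{b}}$ are trained; the reservoir dynamics that produced $X$ are untouched, which is the defining economy of reservoir computing.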
In the classical readout, multitasking is possible since the training cost is minimal for independent training with different $W^{\textup{out}}$ for different tasks.
If the measurement is performed after an interaction time $\tau$ for the current input and right before the next input, the dimensionality $K$ is equal to the number of quantum dots $N$.
One can increase this dimensionality by performing measurements at different timings in the interval $\tau$, which is known as the temporal multiplexing technique.
Between two inputs, we perform measurements at equal interval $\tau/V$, forming the dimensionality $K=NV$.
Here, $V$ is called the measurement multiplexity.
Another technique to increase the dimensionality $K$ is spatial multiplexing~\cite{nakajima:2019:qrc}, where readout reservoir states in different QRs are combined to learn the target.
In the quantum readout, the standard toolbox of linear optical elements~\cite{braunstein:2005:toolbox} enables us to generate $M$ quantum output modes
$
\hat{C}_m = \sum_j o_{mj}\hat{\ensuremath{c}}_j
$ with complex coefficients $o_{mj}$.
The output modes must satisfy the commutation relations $[\hat{C}_m, \hat{C}_n^{\dagger}] = \delta_{mn}$, which impose the condition $\sum_j o_{mj}o^{*}_{nj}=\delta_{mn}$.
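The condition $\sum_j o_{mj}o^{*}_{nj}=\delta_{mn}$ says that the rows of the coefficient matrix are orthonormal. One simple way to parameterize admissible readout coefficients (a sketch, not the training scheme of our Methods) is to orthonormalize a random complex matrix via a QR decomposition:

```python
import numpy as np

def readout_coeffs(M, N, seed=0):
    """Return an M x N complex matrix o with orthonormal rows (M <= N), so that
    the output modes C_m = sum_j o_mj c_j satisfy [C_m, C_n^dag] = delta_mn."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
    Q, _ = np.linalg.qr(Z)      # N x M matrix with orthonormal columns
    return Q.T                  # rows of o are orthonormal

o = readout_coeffs(M=3, N=6)
print(np.allclose(o @ o.conj().T, np.eye(3)))  # True
```

Any optimizer over the readout can then work within this orthonormal family, so the commutation relations hold by construction.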
Since the target is the quantum state, the training process is not as simple as the one used for linear regression on the accessible observables in the classical readout.
Considering the separation of non-adjustable and adjustable parameters in PRC, we assume that the parameters of the Hamiltonian in Eq.~\eqref{eqn:hamiltonian} are random and not trainable. Instead, we train the interaction ($W^{\textup{in}}_{jk}$) and readout ($\{o_{mj}\}$) coefficients such that the quantum state described via $\{\hat{C}_m\}$ matches the output of $\ensuremath{\mathcal{F}}$ (see Methods).
\textbf{Quantum Tomography and Channel Equalizer.} We present an application of QR to hybrid tasks in which quantum tomography and noise-free reconstruction of classical data are performed simultaneously.
Consider a temporal map $\ensuremath{\mathcal{F}}\{(s_l, \beta_l)\}$ where $\{s_l\}$ and $\{\beta_l\}$ are the sequences of classical controls and quantum inputs, respectively.
We assume that the output state $\ensuremath{\mathcal{F}}_l = \ensuremath{\mathcal{F}}\{(s_l, \beta_l)\}$ is accessible at $l=1,\ldots,L$ for training. The tomography task learns the relation between $\ensuremath{\mathcal{F}}_l$ and $\{(s_l, \beta_l)\}$ for $l \leq L$ and reconstructs $\ensuremath{\mathcal{F}}_l$ with $l>L$.
Obviously, the QR cannot learn this hybrid task without the information contained in $\{s_l\}$.
Therefore, we further assume that the classical control data are also accessible, although only in a distorted form of a nonlinear transformation $s_l\to u_l$.
Since multitasking is feasible in the classical readout, we can also reconstruct $\{s_l\}$ from $\{u_l\}$.
In the following example, we consider $\ensuremath{\mathcal{F}}$ as a quantum switch with classical control [Fig.~\ref{fig:hybrid:overview}(a)].
Technically, a quantum switch includes two quantum channels $\ensuremath{\mathcal{N}}_A$ and $\ensuremath{\mathcal{N}}_B$ representing the operations by Alice and Bob, respectively, and an independent switch state $\rho_s$.
Conventionally, signal communication between Alice and Bob is restricted to a definite order.
However, the quantum switch can send the information under the indefinite causal order of quantum channels~\cite{chiribella:2012:qswitch,procopio:2015:qswitch:exp,rubino:2017:qswitch:exp,goswami:2018:qswitch:exp,wei:2019:qswtich:exp}.
Given a state $\beta$ on which these channels act, the quantum switch produces an output $\ensuremath{\mathcal{N}}_A\circ \ensuremath{\mathcal{N}}_B(\beta)$ if $\rho_s = \ket{0}\bra{0}$ and $\ensuremath{\mathcal{N}}_B\circ\ensuremath{\mathcal{N}}_A(\beta)$ if $\rho_s = \ket{1}\bra{1}$.
When the switch state is in a superposition of $\ket{0}$ and $\ket{1}$, such as $\rho_s=\ket{\psi_s}\bra{\psi_s}$ with $\psi_s = \sqrt{s}\ket{0}+\sqrt{1-s}\ket{1}$ ($0\leq s \leq 1$), the output becomes a quantum superposition of two alternative orders $\ensuremath{\mathcal{N}}_A\circ\ensuremath{\mathcal{N}}_B(\beta)$ and $\ensuremath{\mathcal{N}}_B\circ\ensuremath{\mathcal{N}}_A(\beta)$.
Here, the quantum switch $\ensuremath{\mathcal{S}}(\ensuremath{\mathcal{N}}_A, \ensuremath{\mathcal{N}}_B)$ can be considered a function of hybrid input $(s, \beta)$.
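The switch output can be simulated directly from the Kraus operators of the two channels via the standard controlled-Kraus construction $W_{ij} = A_i B_j \otimes |0\rangle\langle 0| + B_j A_i \otimes |1\rangle\langle 1|$; the sketch below (with depolarizing channels, as in our later tasks) checks the definite-order limit $\rho_s=\ket{0}\bra{0}$:

```python
import numpy as np

def depolarizing_kraus(p):
    """Kraus operators of a one-qubit depolarizing channel with probability p."""
    I = np.eye(2); X = np.array([[0, 1], [1, 0]])
    Y = np.array([[0, -1j], [1j, 0]]); Z = np.diag([1, -1])
    return [np.sqrt(1 - 3*p/4)*I, np.sqrt(p/4)*X, np.sqrt(p/4)*Y, np.sqrt(p/4)*Z]

def apply_channel(kraus, rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

def quantum_switch(kraus_A, kraus_B, beta, rho_s):
    """Output (system x control) of the switch for input beta and switch state rho_s."""
    P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
    rho_in = np.kron(beta, rho_s)
    out = np.zeros_like(rho_in, dtype=complex)
    for A in kraus_A:
        for B in kraus_B:
            W = np.kron(A @ B, P0) + np.kron(B @ A, P1)
            out += W @ rho_in @ W.conj().T
    return out

def partial_trace_control(rho):
    """Trace out the 2-dimensional control qubit (last tensor factor)."""
    d = rho.shape[0] // 2
    return np.trace(rho.reshape(d, 2, d, 2), axis1=1, axis2=3)

beta = np.array([[0.8, 0.3], [0.3, 0.2]], dtype=complex)   # a valid qubit state
KA, KB = depolarizing_kraus(0.3), depolarizing_kraus(0.5)
# rho_s = |0><0|: the reduced system state is exactly N_A(N_B(beta))
out = partial_trace_control(quantum_switch(KA, KB, beta, np.diag([1.0, 0.0])))
```

For a superposed switch state the same routine produces the indefinite-order output; only the argument `rho_s` changes.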
We use our QR to mimic the behavior of the quantum switch applied to the input sequence.
Given a delay $d$, we demonstrate that the QR with current inputs $\beta_l$ and $u_l$ can utilize memory effects to reconstruct $\sigma_l=\ensuremath{\mathcal{S}}(\ensuremath{\mathcal{N}}_A, \ensuremath{\mathcal{N}}_B)(s_{l-d}, \beta_{l-d})$ and $s_{l-d}$.
We consider $\ensuremath{\mathcal{N}}_A, \ensuremath{\mathcal{N}}_B$ as two depolarizing quantum channels and the reconstruction of $\{s_{l-d}\}$ from $\{u_l\}$ as the nonlinear channel equalization task (see Methods).
Here, $\{\beta_l\}$ is an i.i.d. sequence of one-qubit density matrices, and $\{s_l\}$ is an i.i.d. discrete sequence of symbols, which are selected from $\{-3, -1, 1, 3\}$ with equal probability. The switch state at each $l$ is $\rho_{(3+s_l)/6}$, and the distorted input $\{u_l\}$ is transformed from $\{s_l\}$ via both linear and nonlinear channels~\cite{jaeger:2004:harness} (see Methods).
If $d\geq 1$, it requires a QR with the nonlinear effect and a memory of both quantum and classical inputs.
The QR's output is divided into two parts: the tomography result $\ensuremath{\boldsymbol{Y}}_l$ in the real vector form and the equalized result $y_l$.
$\ensuremath{\boldsymbol{Y}}_l$ is then transformed in the density matrix form $\hat{\sigma}_l$ with the consideration of a projection to obtain a positive semidefinite matrix (see Methods).
$y_l$ is converted back into a nearest symbol $\hat{s}_l \in \{-3, -1, 1, 3\}$.
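The symbol decision is a simple nearest-neighbor rule over the alphabet $\{-3,-1,1,3\}$:

```python
import numpy as np

SYMBOLS = np.array([-3, -1, 1, 3])

def nearest_symbol(y):
    """Map a real readout value y_l to the closest symbol in {-3, -1, 1, 3}."""
    return SYMBOLS[np.argmin(np.abs(SYMBOLS - y))]

print(nearest_symbol(0.8))   # 1
print(nearest_symbol(-2.4))  # -3
```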
The training is performed at $l=1,\ldots,L$ ($L=800$), and the tomography performance is evaluated via the root mean square of fidelities (RMSF)
\begin{align}
\textup{RMSF}=\sqrt{(1/T)\sum_{l=L+1}^{l=L+T}F^2(\sigma_l, \hat{\sigma}_l)},
\end{align}
where $T=200$ and $F(\rho,\sigma )=\tr[\sqrt{\sqrt{\sigma}\rho\sqrt{\sigma}}]$.
The equalization performance is evaluated via the symbol error rate (SER)
\begin{align}
\text{SER}=\textup{card}(\{l \mid \hat{s}_l \neq s_{l-d}\})/T.
\end{align}
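Both evaluation metrics are straightforward to compute; a sketch, using an eigendecomposition for the positive-semidefinite matrix square root in the Uhlmann fidelity, is:

```python
import numpy as np

def psd_sqrt(rho):
    """Matrix square root of a positive semidefinite matrix via eigendecomposition."""
    w, V = np.linalg.eigh(rho)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def fidelity(rho, sigma):
    """Uhlmann fidelity F(rho, sigma) = tr sqrt(sqrt(sigma) rho sqrt(sigma))."""
    s = psd_sqrt(sigma)
    return np.trace(psd_sqrt(s @ rho @ s)).real

def rmsf(targets, estimates):
    """Root mean square of fidelities over a test sequence."""
    return np.sqrt(np.mean([fidelity(t, e) ** 2 for t, e in zip(targets, estimates)]))

def ser(pred_symbols, true_symbols):
    """Symbol error rate: fraction of mismatched symbols."""
    p, t = np.asarray(pred_symbols), np.asarray(true_symbols)
    return np.mean(p != t)

rho = np.diag([0.6, 0.4]).astype(complex)
print(fidelity(rho, rho))                   # 1.0 for identical states
print(ser([-3, -1, 1, 3], [-3, 1, 1, 3]))   # 0.25
```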
Figure~\ref{fig:switch:eqn}(a) illustrates a sequence of one-qubit quantum input in the evaluation phase (upper panel) and a result for the channel equalizer (bottom panel) with delay $d=1$ (see Supplementary Information for results with other $d$ values).
Here, the predicted and target sequences for the reconstruction of the classical symbols $\{s_l\}$ overlap at almost all time steps.
The density matrix at each time step is represented as a real vector by stacking the real and imaginary parts.
Figure~\ref{fig:switch:eqn}(b) depicts that the quantum target sequence can be reconstructed well.
We systematically evaluate the performance of the tomography and channel equalizer tasks via the RMSF (left axis) and SER (right axis) in Fig.~\ref{fig:switch:perform}(a) for different $N$ and $W$.
A large value of $W$ compared with $h_{ij}$ and $W_{jk}^{\text{in}}$ leads to non-ergodic behavior in the QR, i.e., a strong and qualitative dependence on the initial state at $t_{\textup{init}}$ (Fig.~S1 in Supplementary Information).
In addition, in Supplementary Information, we further investigate the effects of the classical input in the reconstruction of the quantum input.
With a large $W$, the input state is incident with weak coupling ($|W_{jk}^{\textup{in}}| \ll |P(t)=P+Wu(t)|$) under a strong effect of the classical input to the QR's dynamics, which means that not much information regarding quantum inputs can be retained in the QR.
In contrast, a small $W$ reduces the memory effect in reconstructing the previous classical input.
This explains the existence of a region of $W$ for an optimal performance ($W/\gamma\approx 1.0$).
The left panel of Fig.~\ref{fig:switch:perform}(b) displays the RMSF of the tomography task when we increase the number $N$ of reservoir sites.
In the right panel of Fig.~\ref{fig:switch:perform}(b), we further compare the performance in the equalization task with the Echo State Network (ESN) in classical reservoir computing under the condition of the same number of computational nodes (see Methods). Here, we set the input scaling as $W/\gamma=1.0$ and use the QR with the measurement multiplexity $V=8$; therefore, the QR containing $16, 24, 32, 40, 48$ computational nodes corresponds to $N=2, 3, 4, 5, 6$ sites in the reservoir.
We confirm that, with an appropriate setting of the constant coherent field $P$, we can obtain almost the same performance as the ESN.
\textbf{Continuous Variable Tomography and Closed Loop.}
We now modify the tomography task such that, after the training phase, the information from the classical control $s_l$ is no longer accessible.
Surprisingly, owing to the advantages of multitasking, our QR can autonomously generate $s_l$ in a closed-loop manner while performing the tomography task with the hybrid input.
In the training phase, $s_l$ is learned in an open loop where we predict the next step $s_{l+1}$ given the input $u_l=s_l$.
After training, the prediction is used as the classical input for the next step, forming a closed-loop control without any external interventions.
This model-free prediction is well established in classical reservoir computing, for example, to predict low-dimensional chaotic systems~\cite{jaeger:2004:harness} or large spatiotemporally chaotic systems~\cite{pathark:2018:prl:chaos}.
However, to the best of our knowledge, our demonstration is the first to combine the closed-loop setting with the quantum tomography task, which is only effective in the QR setting.
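The closed-loop scheme can be illustrated with a classical echo state network standing in for the QR (the network below is a generic sketch; its size, spectral radius, and sinusoidal drive are illustrative assumptions, not the settings of our comparison): train a one-step-ahead predictor in open loop, then feed its own predictions back as inputs.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 200                                            # reservoir size
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))    # scale to spectral radius 0.9
w_in = rng.uniform(-0.5, 0.5, N)

u = np.sin(2 * np.pi * np.arange(1200) / 50)       # the control signal to learn

# Open loop: drive the reservoir with the true signal and collect states
x, states = np.zeros(N), []
for ul in u[:-1]:
    x = np.tanh(W @ x + w_in * ul)
    states.append(x.copy())
X = np.array(states[100:])                         # drop the washout transient
Y = u[101:]                                        # one-step-ahead targets
w_out = np.linalg.solve(X.T @ X + 1e-8 * np.eye(N), X.T @ Y)

# Closed loop: the network's own prediction becomes the next input
x, ul, pred = states[-1].copy(), u[-1], []
for _ in range(200):
    x = np.tanh(W @ x + w_in * ul)
    ul = x @ w_out
    pred.append(ul)
pred = np.array(pred)
```

If training succeeds, the autonomous loop continues the sinusoid without any external drive, which is the classical analogue of the QR closed-loop control demonstrated here.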
\begin{figure*}
\includegraphics[width=17cm]{FIG4.pdf}
\protect\caption{Continuous variable tomography and closed-loop control of periodic classical signals. (a) Closed-loop control of classical signals with $N=3$, $V=10$, $P/\gamma=1.0$, and $W/\gamma=0.6,0.8,$ and $1.0$. (b) Continuous variable tomography at typical time steps in the closed loop with $W/\gamma=0.8$. The last panel displays the absolute difference between the target and reconstructed Wigner functions.
(c) Stability after adding a small perturbation to the trajectory for different input scaling $W/\gamma$.
\label{fig:close:amp}}
\end{figure*}
We consider the quantum tomography of continuous variable states.
The target is to reconstruct the output $\ensuremath{\mathcal{F}}_l= \ensuremath{\mathcal{F}}\{(s_l,\beta_l)\}$ in the Wigner function form $\ensuremath{\mathcal{W}}(\ensuremath{\mathcal{F}}_l; x_i, p_j)$ defined on a grid of continuous variables $x_i$ and $p_j$ (see Methods).
We use 300 randomly generated one-mode thermal states $\beta_l$ and the periodic signals $s_l = 0.5 + 0.5\sin(\dfrac{l\pi f}{510})$ in the training phase.
The target $\ensuremath{\mathcal{F}}_l$ is created by applying one-mode squeezing operator to $\beta_l$ as
\begin{align}
\hat{S}(\xi_l)=\exp\left( \xi_l \hat{\ensuremath{a}}^{\dagger}\hat{\ensuremath{a}}^{\dagger} - \xi_l^{*}\hat{\ensuremath{a}}\hat{\ensuremath{a}} \right),
\end{align}
where $\xi_l=s_le^{i\pi/4}$.
In Supplementary Information, we consider another encoding: $\xi_l=0.3e^{i2\pi s_l}$.
Here, we take the cutoff Fock-space dimension (the effective dimension) of these continuous-variable states to be $D_{\textup{eff}}=9$.
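In a truncated Fock space, the squeezing operator can be built directly from the matrix form of $\hat a$. Since the exponent $\xi \hat a^{\dagger}\hat a^{\dagger} - \xi^{*}\hat a\hat a$ is exactly anti-Hermitian as a finite matrix, the result is exactly unitary despite the cutoff (a sketch with hypothetical parameters):

```python
import numpy as np

def annihilation(D):
    """Truncated annihilation operator: a|n> = sqrt(n)|n-1>."""
    a = np.zeros((D, D), dtype=complex)
    for n in range(1, D):
        a[n - 1, n] = np.sqrt(n)
    return a

def squeeze(xi, D):
    """S(xi) = exp(xi a^dag a^dag - xi* a a) on a D-dimensional Fock cutoff."""
    a = annihilation(D)
    ad = a.conj().T
    A = xi * (ad @ ad) - np.conjugate(xi) * (a @ a)   # anti-Hermitian matrix
    w, V = np.linalg.eigh(1j * A)                     # iA is Hermitian
    return (V * np.exp(-1j * w)) @ V.conj().T         # exp(A) = exp(-i (iA))

D = 9
S = squeeze(0.5 * np.exp(1j * np.pi / 4), D)
print(np.allclose(S @ S.conj().T, np.eye(D)))  # True: exactly unitary
```

Truncation errors enter only through the action of $S$ on states with weight near the cutoff, not through loss of unitarity.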
Figure~\ref{fig:close:amp}(a) shows examples of the control signals in the training and closed-loop phase for $f=60$. With $W/\gamma=0.8$ and $N=3$ sites, the control signal is almost reconstructed perfectly for all time steps in the closed-loop phase.
This QR can efficiently reconstruct the Wigner function even without accessing the control signal [Fig.~\ref{fig:close:amp}(b)].
We further investigate the stability of the closed-loop trajectories plotted in the $(s_l, s_{l+10})$ plane [Fig.~\ref{fig:close:amp}(c)].
The QR presents a stable embedding of sinusoidal classical inputs if the trajectory can return to the target after a small perturbation (green line) is added to a predicted value, suggesting that our system successfully learned the target attractor.
We observe an appropriate setting of input scaling $W$ to obtain stable closed loops ($W/\gamma \approx 0.8)$.
Intriguingly, if we increase $W/\gamma$, for example to $W/\gamma = 1.8$, the closed loop fails to reconstruct the trajectory of the sinusoidal input in the evaluation stage but can produce chaotic-like behavior in the embedding space.
In this case, the generated trajectory is not elliptical as the trajectory of sinusoidal inputs but still robust with respect to a small perturbation.
We also observe that the performance of the closed-loop control and the production of chaotic-like behavior depend on the time scale $f$ of the control signals, which is investigated in detail in Supplementary Information.
\textbf{Quantum Readout and Depolarizing Channel.}
Finally, we present an application using the quantum readout scheme to output quantum states. We use the QR to prepare a depolarizing quantum channel $\ensuremath{\mathcal{F}}\{(s_l, \beta_l)\} = s_{l}I/D + (1-s_{l})\beta_{l}$, where $\{\beta_l\}$ are randomly generated in a $D$-dimensional Hilbert space and $\{s_l\}$ is a random sequence in $[0, 1]$.
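The target channel itself is easy to state in code; a minimal sketch of the map $\ensuremath{\mathcal{F}}(s,\beta) = sI/D + (1-s)\beta$ is:

```python
import numpy as np

def depolarize(s, beta):
    """F(s, beta) = s*I/D + (1-s)*beta: mix beta with the maximally mixed state."""
    D = beta.shape[0]
    return s * np.eye(D) / D + (1 - s) * beta

beta = np.array([[1.0, 0.0], [0.0, 0.0]], dtype=complex)  # pure |0><0|
out = depolarize(0.4, beta)
print(np.trace(out).real)   # trace is preserved (~1)
```

The QR's task is to realize this map physically, with $s_l$ entering through the classical control and $\beta_l$ through the optical input modes.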
First, we consider a sequence of $200$ one-qubit quantum states for training and $100$ states for evaluation. The baseline is computed by setting the output equal to the input.
We use the Nelder\text{--}Mead simplex algorithm~\cite{lagarias:1998:nd} (see Methods) to minimize
the fidelity error
\begin{align}\label{eqn:fidelity:error}
\textup{EF}=\sqrt{\dfrac{1}{L}\sum_{l=1}^{L}[1-F(\sigma_l, \hat{\sigma}_l)]^2},
\end{align}
where $\sigma_l$ and $\hat{\sigma}_l$ are the target and prepared quantum states, respectively.
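Gradient-free optimization of the readout parameters can be sketched with SciPy's Nelder-Mead routine. The toy below is a stand-in, not our actual training loop: a single hypothetical mixing parameter is fitted so a prepared diagonal state matches a diagonal target, with a cost mirroring the fidelity error above for $L=1$:

```python
import numpy as np
from scipy.optimize import minimize

target = np.diag([0.7, 0.3])            # target state sigma
beta = np.diag([1.0, 0.0])              # input state

def prepare(theta):
    """Toy one-parameter 'readout': depolarize beta by an amount theta."""
    t = np.clip(theta, 0.0, 1.0)
    return t * np.eye(2) / 2 + (1 - t) * beta

def cost(params):
    rho = prepare(params[0])
    # for commuting diagonal states, F = sum_i sqrt(p_i q_i)
    F = np.sum(np.sqrt(np.diag(rho) * np.diag(target)))
    return (1 - F) ** 2

res = minimize(cost, x0=[0.5], method="Nelder-Mead", options={"xatol": 1e-8})
print(res.x[0])   # ~0.6, the mixing that reproduces the target exactly
```

In the full problem the parameter vector collects the interaction coefficients $W^{\textup{in}}_{jk}$ and readout coefficients $\{o_{mj}\}$, but the optimizer interface is the same.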
In Fig.~\ref{fig:qtask:depolar1b}(a), the evaluated fidelity errors with different readout and training configurations are presented for the QR with $N=2$ sites, $P=1.0$, and $W=2.0$. The interquartile range is contained within the box, and the 5th and 95th percentiles are marked by whiskers. The median is the line across the box, and the outliers are located outside the whiskers of each box plot.
Here, IN, RV, and ALL correspond to the setting where only input modes $\hat{\ensuremath{a}}_k$, only reservoir modes $\hat{\ensuremath{c}}_j$, or both of them are considered as the readout nodes, respectively. Wo and Wio correspond to the situation where only readout weights or both readout weights and interaction coefficients $W^{\textup{in}}_{jk}$ are considered as the training parameters, respectively.
The result implies that the consideration of both input and reservoir modes as $N_R$ readout modes and both interaction coefficients and readout weights for training leads to the best performance.
Under this setting, we display the variation in fidelity errors EF with the input scaling $W/\gamma$ and $N_R$ in Fig.~\ref{fig:qtask:depolar1b}(b).
Even with a small QR ($N_R=3, 4$), we can prepare the target channel with a relatively low error ($<2\%$), significantly better than the baseline ($\approx 8\%$).
Furthermore, increasing $W/\gamma$ generally leads to better performance, as more information regarding the classical input is integrated.
\begin{figure*}
\includegraphics[width=17cm]{FIG5.pdf}
\protect\caption{The error in training and evaluating the quantum readout to prepare the depolarizing quantum channel. (a) Combinations of readout nodes and training parameters, where only input modes (IN), only reservoir modes (RV), or both of them (ALL) are considered as $N_R$ readout nodes.
The training parameters are readout weights (Wo) or both readout weights and interaction coefficients (Wio). Fidelity error with one-qubit input states (b) and error taken in the Wigner representation with continuous variable states (c) varying with input scaling $W/\gamma$.
\label{fig:qtask:depolar1b}}
\end{figure*}
Finally, we prepare the depolarizing channel with random squeezed thermal states in continuous-variable form as the quantum inputs. We minimize the following cost function in the Wigner representation:
\begin{align}\label{eqn:wigner:error}
\textup{EW} = \sqrt{ \dfrac{1}{L}\sum_{l=1}^L\dfrac{\sum_{i,j}[\ensuremath{\mathcal{W}}(\sigma_l; x_i, p_j) - \ensuremath{\mathcal{W}}(\hat{\sigma}_l; x_i, p_j)]^2}{\sum_{i,j}[\ensuremath{\mathcal{W}}(\sigma_l; x_i, p_j) + \ensuremath{\mathcal{W}}(\hat{\sigma}_l; x_i, p_j)]^2}}.
\end{align}
Owing to computational scale limitations, we only simulate continuous-variable states of effective dimension $D_{\textup{eff}}=3$, where $D=D^2_{\textup{eff}}=9$.
Figure~\ref{fig:qtask:depolar1b}(c) presents the errors over 50 training and 50 evaluation instances as functions of $W$.
We can observe a trend similar to that in Fig.~\ref{fig:qtask:depolar1b}(b): with sufficient classical information ($W/\gamma\geq 1.0$), the error EW ($\approx 0.16$) with $N_R=3$ readout nodes is significantly lower than the baseline's error ($\approx 0.29$).
This result still falls short of a considerably good preparation ($\textup{EW}< 0.1$), but it demonstrates that hybrid inputs can be effectively used to train the quantum readout.
\section*{Discussion}
We proposed a framework for an analog QR processor with hybrid inputs and classical and quantum readouts for learning heterogeneous quantum-classical data.
This aligns well with scenarios where one wishes to model a quantum device to process quantum input but must rely on classical control signals in physical experiments.
Our framework, therefore, has the potential to be physically implemented in quantum network systems where classical control and quantum sources can interact with nonlinear quantum systems to form a quantum channel.
It can help realize quantum adaptive systems capable of quantum information processing.
These agents can be used to interpret and memorize both classical and quantum signals from their environment and to respond accordingly to the actions of their surroundings~\cite{elliott:2022:quantum:adaptive}.
Processing hybrid quantum and classical data is a promising idea to facilitate future innovative use cases for quantum computers.
This concept aims to leverage the advantages of quantum mechanics in ML with an unconventional computing framework and intriguing applications.
It is not limited to the conventional discussion on practical quantum advantages, such as the ``beating speedup'' of quantum over classical ML methods~\cite{schuld:2022:advantage}.
For example, classical readouts lead to interesting applications of multitasking where quantum data can be processed in a closed loop of the classical control.
Furthermore, adding this closed-loop mechanism allows us to utilize the unique coherence properties of quantum systems to generate unique classical dynamics.
We consider the quantum readout to avoid the measurement process of preparing the quantum output. However, optimization can be challenging and requires improvement, since we need to simulate or drive the quantum system and evaluate the cost function for a wide range of parameters.
A further enticing discussion would be the case of the correlation between the processing of quantum data and classical data in a hybrid setting of the QR. We can consider a QR to simultaneously process quantum data and classical data as separate tasks. An intriguing research question arises: Can this multitasking mechanism induce positive or negative effects on information processing?
For example, if we repeatedly modify the coherent field strengths of the QR via a classical input with a large magnitude, it can limit the short-term memory properties of quantum data processing (see Supplementary Information).
However, one can also expect positive effects, not only negative ones. There may exist situations where simultaneously processing different modalities of data actually brings an optimal regime that solving a single task alone cannot reach.
We can start by investigating the relations between hybrid input protocols and the dynamics of the QR; for example, the classical input may induce a dynamical phase transition in the QR~\cite{pena:2021:qrc:dynamic}.
We can also study how classical and quantum data are processed via the QR's dynamics, such as by decomposing the readout reservoir states in terms of basis polynomials for input history~\cite{dambre:2012:nonlinear,kubota:2021:prr:IPC}.
Along with this research line, one can refer to a recent study demonstrating that quantum noise in real quantum processors can induce the information processing capability when using classical data~\cite{kubota:2022:noise}.
\section{Learning Quantum Tomography via Quantum Reservoir Computing}
In this section, we explain some quantum extensions of reservoir computing (RC) using a scheme called quantum reservoir computing (QRC) for the quantum tomography task.
\subsection{Quantum Reservoir Computing with Network of Quantum Dots}
In our study, we model the QRC approach via the framework of repeated quantum interactions.
Here, the input sequence is fed via sequential interactions between a quantum reservoir (QR) network $\ensuremath{\mathcal{S}}$ with an auxiliary system $\ensuremath{\mathcal{E}}$.
We consider a two-dimensional lattice of $N$ quantum dots for the QR network, represented by the Hamiltonian
\begin{align}\label{eqn:hamiltonian}
\hat{H} = &\sum_i E_i\hat{\ensuremath{c}}^{\dagger}_i\hat{\ensuremath{c}}_i + \sum_{\langle i,j \rangle}h_{ij}\left( \hat{\ensuremath{c}}^{\dagger}_i\hat{\ensuremath{c}}_j + \hat{\ensuremath{c}}^{\dagger}_j\hat{\ensuremath{c}}_i\right) + \sum_iQ_i\hat{\ensuremath{c}}^{\dagger}_i\hat{\ensuremath{c}}^{\dagger}_i\hat{\ensuremath{c}}_i\hat{\ensuremath{c}}_i + P(t)\sum_i\left( \hat{\ensuremath{c}}^{\dagger}_i + \hat{\ensuremath{c}}_i \right),
\end{align}
where $\hat{\ensuremath{c}}_i$, $E_i$, $h_{ij}$, $Q_i$, and $P(t)$ are the field operators, onsite energies, hopping amplitudes between the nearest neighbor sites, nonlinearity strengths, and uniform time-dependent coherent field strengths to excite the QR, respectively.
The dynamical evolution can be described by the master equation (we omit the Planck constant $\hbar$ for ease of notation):
\begin{align}
\dot{\rho} = -i[\hat{H}, \rho] + \gamma\sum_j\ensuremath{\mathcal{L}}(\hat{\ensuremath{c}}_j)\rho + \Omega(t - t_{\text{init}})\hat{A}\rho,
\end{align}
where $\rho$ is the combined density matrix of the QR as well as the input modes.
Here, $\Omega(t)=1$ for $t\geq 0$ and $0$ otherwise,
and $\hat{A}\rho=\sum_k \dfrac{\gamma_k}{\gamma}\ensuremath{\mathcal{L}}(\hat{\ensuremath{a}}_k)\rho + \sum_{k,j}W_{jk}^{\text{in}} \left( \left[\hat{\ensuremath{a}}_k\rho, \hat{\ensuremath{c}}^{\dagger}_j \right] + \left[\hat{\ensuremath{c}}_j, \rho\hat{\ensuremath{a}}^{\dagger}_k \right] \right)$ represents the decay in the input modes with rates $\gamma_k/\gamma$ (the first term)
due to the cascaded coupling between the input modes $\hat{\ensuremath{a}}_k$ and the QR (the remaining terms).
The Lindblad superoperator $\ensuremath{\mathcal{L}}(\hat{x})$ is defined for any arbitrary operator $\hat{x}$ by $\ensuremath{\mathcal{L}}(\hat{\ensuremath{x}})\rho=\hat{\ensuremath{x}}\rho\hat{\ensuremath{x}}^{\dagger} - \dfrac{1}{2}\left( \hat{\ensuremath{x}}^{\dagger}\hat{\ensuremath{x}}\rho + \rho\hat{\ensuremath{x}}^{\dagger}\hat{\ensuremath{x}}\right)$.
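For concreteness, the Lindblad dissipator defined above can be sketched numerically as follows; the snippet (an illustration, assuming dense matrix representations of $\hat{x}$ and $\rho$) also makes its trace-preserving and Hermiticity-preserving properties easy to verify:

```python
import numpy as np

def lindblad(x, rho):
    """Lindblad dissipator L(x)rho = x rho x^dag - (x^dag x rho + rho x^dag x)/2."""
    xd = x.conj().T
    return x @ rho @ xd - 0.5 * (xd @ x @ rho + rho @ xd @ x)
```

Since $\tr[\hat{x}\rho\hat{x}^{\dagger}] = \tr[\hat{x}^{\dagger}\hat{x}\rho]$, the dissipator is traceless, so adding it to the master equation preserves $\tr\rho = 1$.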
In our numerical simulations, we consider $E_i/\gamma=0$, $\gamma_k/\gamma = \sum_j (W_{jk}^{\text{in}})^2$, with $W_{jk}^{\text{in}}$ being the input weights randomly chosen from the interval $[0.0, \gamma]$.
For the tomography tasks, we assume that there is no interaction between reservoir sites, i.e., $Q_i=0$ (Figs.~2, 3, and 4 in the main text).
We allow for the interactions ($Q_i/\gamma = 1.0$) in the experiments preparing the quantum depolarizing channel (Fig.~5 in the main text).
In the information processing pipeline, the QR is first excited with a uniform $P(t)=P$ for $0\leq t< t_{\text{init}}$ and no incident quantum inputs. We choose $t_{\textup{init}}$ such that the QR at time $t_{\text{init}}$ reaches a steady state.
Then, the quantum input is incident to the reservoir via the input modes $\hat{\ensuremath{a}}_k$, and the classical input $u(t)=u$ is activated at the same time.
The classical input $u(t)$ is encoded via the classical optical excitation as
$P(t) = P + Wu(t)$, where $W$ is the input scaling.
At time $t_1 = t_{\textup{init}} + \tau$ for the time interval $\tau$, an appropriate and practical readout from the QR is performed for nontrivial transformations of input data, such as a linear combination of quantum modes or the linear combination of measurement on the accessible observables.
If the task is a non-temporal processing task, we repeat the above procedure for every hybrid data instance $(u, \beta)$.
For a temporal processing task, we consider a sequence of hybrid inputs $(u_1, \beta_1), (u_2, \beta_2), \ldots$, where $\{u_l\}$ is the classical and $\{\beta_l\}$ is the quantum sequence.
At time $t_l = t_{\textup{init}} + (l-1)\tau$ for $l=1,2,\ldots$, the classical input is switched to $u(t)=u_l$, and the quantum state $\beta_l$ replaces the partial state in the input modes.
In a design similar to that of classical RC, we first consider a readout scheme called \textit{classical readout}, associated with a measurement process. The partial information regarding the state of the QR network after the $l$th interaction is obtained by measuring the expectation values of the occupation numbers in the reservoir sites.
Here, the $j$th element $x_{lj}$ of the reservoir states $\ensuremath{\boldsymbol{x}}_l$ can be calculated as $x_{lj}= n_j=\langle \hat{\ensuremath{c}}^{\dagger}_j\hat{\ensuremath{c}}_j\rangle$.
One can increase the dimension of $\ensuremath{\boldsymbol{x}}_l$ by using the temporal multiplexing technique.
Between two inputs, we perform measurements at every time interval $\tau/V$, where
$V$ is the measurement multiplexity.
After obtaining the reservoir states, the training procedure in QRC is similar to that in conventional RC.
In the classical readout, multitasking is possible because the training cost is minimal for independent training with different readout weights for different tasks.
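As in conventional RC, the readout can be obtained by a linear least-squares fit of the collected reservoir states to the targets; a minimal sketch (assuming ridge regression, a standard choice in RC, with hypothetical array shapes) is:

```python
import numpy as np

def train_readout(X, Y, reg=1e-6):
    """Linear readout minimizing ||X W - Y||^2 + reg ||W||^2.

    X: (L, NR) reservoir states (one row per input instance),
    Y: (L, K) targets; multitasking just adds more columns to Y.
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])  # append a bias term
    return np.linalg.solve(Xb.T @ Xb + reg * np.eye(Xb.shape[1]), Xb.T @ Y)

def predict(W, X):
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return Xb @ W
```

Because different tasks only differ in the target columns, the same reservoir states can be reused for all of them, which is what makes multitasking cheap.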
The learning performance of the classical readout scheme depends on the dynamics of the occupation numbers $n_j(t)$. In Fig.~\ref{fig:tau}, we show the dynamics of the occupation numbers $n_1(t)$, $n_2(t)$, and $n_3(t)$ compared with the corresponding numbers at time $t_{\textup{init}}$ for $N=3$ reservoir sites with a constant classical input ($W/\gamma=0$) and randomized one-qubit quantum input states. Here, we consider $t_{\textup{init}}=5.0/\gamma$ with different input intervals $\tau$ and different coherent field strengths $P$.
The dynamics from $t=0$ to $t_{\textup{init}} = 5.0/\gamma$ is solely driven by $P$ (the dynamics starts from an empty reservoir, $n_1(t)=n_2(t)=n_3(t)=0$), and the system reaches a steady initial state before $t_{\textup{init}}$.
The first quantum input is incident at $t_1 = t_{\textup{init}}$, where we can see that the occupation numbers deviate from the steady states.
If we consider a large $\tau$ [Figs.~\ref{fig:tau}(b) and (c)], the occupation numbers return to the steady state before the next input. If we perform the readout measurements at this timing, we cannot obtain sufficient information from the input. Therefore, we choose $\tau$ such that $n_j(t)$ remains far from the steady values just before the next input is incident on the system [Fig.~\ref{fig:tau}(a)]. In our experiments, we consider $\gamma\tau = 1.0$.
\begin{figure}
\includegraphics[width=12cm]{FIGS1.pdf}
\protect\caption{The typical dynamics of the occupation numbers $n_j(t)$ in the QR, represented by the amount of the input photons $n_j(t) - n_j(t_{\textup{init}})$ entering the QR. The dynamics starts from $t=0$ to $t_{\textup{init}}=5.0/\gamma$, where one-qubit quantum inputs are incident at $t_l = t_{\textup{init}} + (l-1)\tau$.
\label{fig:tau}}
\end{figure}
\subsection{Channel Equalizer and the Tomography of the Quantum Switch}
As a demonstration in the main text, we consider the quantum switch, which is a superposition of two alternative orders of quantum channels~\cite{chiribella:2012:qswitch}.
A quantum switch includes two quantum channels $\ensuremath{\mathcal{N}}_A$ and $\ensuremath{\mathcal{N}}_B$ to create a new channel $\ensuremath{\mathcal{S}}(\ensuremath{\mathcal{N}}_A, \ensuremath{\mathcal{N}}_B)$, which uses the channels $\ensuremath{\mathcal{N}}_A$ and $\ensuremath{\mathcal{N}}_B$ in an order that is entangled with an independent switch quantum state $\rho_s$.
The channel $\ensuremath{\mathcal{S}}(\ensuremath{\mathcal{N}}_A, \ensuremath{\mathcal{N}}_B)$ returns the state $(\ensuremath{\mathcal{N}}_A\circ\ensuremath{\mathcal{N}}_B(\rho))\otimes \ket{0}\bra{0}$ if $\rho_s = \ket{0}\bra{0}$ and $(\ensuremath{\mathcal{N}}_B\circ\ensuremath{\mathcal{N}}_A(\rho))\otimes \ket{1}\bra{1}$ if $\rho_s = \ket{1}\bra{1}$.
When the switch state is in a superposition of $\ket{0}$ and $\ket{1}$, the channel returns a correlated state as the result of $\ensuremath{\mathcal{N}}_A$ and $\ensuremath{\mathcal{N}}_B$ acting on $\rho$ in a quantum superposition of two alternative orders.
We consider a random sequence of one-qubit input states $\{\beta_l\}$
and the classical control $\{s_l\}$ as an i.i.d. discrete sequence of symbols, which are selected from $\{-3, -1, 1, 3\}$ with equal probability. The switch state at each $l$ is $\rho_{(3+s_l)/6}$, and the distorted input $\{u_l\}$ is transformed from $\{s_l\}$ via two channels: $q_l=0.08s_{l+2} - 0.12s_{l+1} + s_l + 0.18s_{l-1} - 0.1s_{l-2} + 0.09s_{l-3} - 0.05s_{l-4} + 0.04s_{l-5} + 0.03s_{l-6} + 0.01s_{l-7}$ (linear channel) and $u_l = q_l + 0.036q_l^2 - 0.011q_l^3 + \nu_l$ (nonlinear channel) where $\nu_l$ is an i.i.d. Gaussian noise.
Given a delay $d$ and current inputs $\beta_l$, $u_l$, we aim to perform a tomography of the temporal quantum map $\ensuremath{\mathcal{F}}(\{s_l, \beta_l\}_{l}) = \ensuremath{\mathcal{S}}(\ensuremath{\mathcal{N}}_A, \ensuremath{\mathcal{N}}_B)(\beta_{l-d}\otimes \rho_{s_{l-d}})$ and reconstruct $s_{l-d}$, where $\ensuremath{\mathcal{N}}_A, \ensuremath{\mathcal{N}}_B$ are two depolarizing quantum channels.
This reconstruction of $\{s_l\}$ is the nonlinear channel equalization task, which arises in wireless communication and has been demonstrated using the conventional RC method~\cite{jaeger:2004:harness}.
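The symbol and distorted-input sequences defined above can be generated as in the following sketch (the standard deviation of the Gaussian noise $\nu_l$ is not specified in the text, so `noise_std` is a free parameter):

```python
import numpy as np

def make_equalizer_inputs(L, noise_std=0.0, seed=0):
    """Generate the symbol sequence {s_l} and the distorted input {u_l}.

    Symbols are i.i.d. over {-3, -1, 1, 3}; q_l mixes symbols from
    l-7 .. l+2 through the linear channel, then a memoryless nonlinearity
    plus Gaussian noise gives u_l (tap coefficients as in the text).
    """
    rng = np.random.default_rng(seed)
    pad = 9  # margin so that indices l-7 and l+2 stay in range
    s = rng.choice([-3, -1, 1, 3], size=L + 2 * pad)
    taps = {2: 0.08, 1: -0.12, 0: 1.0, -1: 0.18, -2: -0.1,
            -3: 0.09, -4: -0.05, -5: 0.04, -6: 0.03, -7: 0.01}
    u = np.empty(L)
    for i, l in enumerate(range(pad, pad + L)):
        q = sum(c * s[l + k] for k, c in taps.items())   # linear channel
        u[i] = q + 0.036 * q**2 - 0.011 * q**3 + noise_std * rng.standard_normal()
    return s[pad:pad + L], u
```

Note that the linear channel is non-causal (it uses $s_{l+1}$ and $s_{l+2}$), which is why the sequence is padded on both sides before slicing out the $L$ central symbols.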
\begin{figure}
\includegraphics[width=15.5cm]{FIGS2.pdf}
\protect\caption{The average root mean square of fidelities (RMSF) (higher values indicate better performance) over 10 trials of random input data, QR, and quantum switch configurations for each combination of $P$ and $W$. We use $N=3$ reservoir sites and a measurement multiplexity $V=8$.
\label{fig:sw_eqn_quantum}}
\end{figure}
\begin{figure}
\includegraphics[width=15.5cm]{FIGS3.pdf}
\protect\caption{The average symbol error rate (SER) (lower values indicate better performance) over 10 trials of random input data and QR configurations for each combination of $P$ and $W$. We use $N=3$ reservoir sites and the measurement multiplexity $V=8$.
\label{fig:sw_eqn_class}}
\end{figure}
Figures~\ref{fig:sw_eqn_quantum} and~\ref{fig:sw_eqn_class} present the average performance over 10 trials for each combination of $P/\gamma$ and input scaling $W/\gamma$ with $N=3$ reservoir sites, where in each trial, we consider a different random input sequence, random QR configurations, and random depolarizing coefficients.
The training is performed at $l=1,\ldots,L$, and the performance is evaluated over $l=L+1,\ldots,L+T$ via the root mean square of fidelities (RMSF) in the tomography task $\textup{RMSF}=\sqrt{(1/T)\sum_{l=L+1}^{l=L+T}F^2(\sigma_l, \hat{\sigma}_l)}$,
and the symbol error rate (SER) $\text{SER}=\textup{card}(\{l \mid s_{l-d} \neq \hat{s}_l\})/T$. Here, the fidelity is $F(\rho,\sigma)=\tr\left[\sqrt{\sqrt{\sigma}\rho\sqrt{\sigma}}\right]$, and $\sigma_l$ and $s_{l-d}$ are the targets for the tomography and channel equalization tasks, respectively.
We use 800 time steps for training and 200 time steps for the evaluation.
We also use the temporal multiplexing technique with the measurement multiplexity V = 8 for numerical experiments.
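Given the fidelities and predicted symbols over the evaluation window, the two metrics above reduce to the following short sketch (an illustration; the inputs are assumed to be precomputed):

```python
import numpy as np

def rmsf(fidelities):
    """Root mean square of fidelities over the evaluation window (higher is better)."""
    f = np.asarray(fidelities, dtype=float)
    return np.sqrt(np.mean(f ** 2))

def ser(targets, predictions):
    """Symbol error rate: fraction of mismatched symbols (lower is better)."""
    t = np.asarray(targets)
    p = np.asarray(predictions)
    return np.count_nonzero(t != p) / t.size
```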
Figure~\ref{fig:sw_eqn_delays} displays the RMSF of the tomography task and the SER of the channel equalization task as the delay $d$ increases, for different numbers of sites in the QR.
The performance drops significantly when $d\geq 3$, implying the short-term memory property of the QR.
\begin{figure}
\includegraphics[width=17cm]{FIGS4.pdf}
\protect\caption{The average root mean square of fidelities (RMSF) of the tomography task (a) and the average symbol error rate (SER) of the channel equalization task (b) over 10 trials when we increase the value of the delay $d$.
Here, we consider the QR with $N=2, 3, 4$ sites in the reservoir.
\label{fig:sw_eqn_delays}}
\end{figure}
\subsection{Non-temporal Continuous Variable Tomography}
In this demonstration, we consider the quantum tomography of continuous variable states in a non-temporal setting.
We evaluate the tomography for three settings, $\ensuremath{\mathcal{F}}_{\textup{cv-amp}}, \ensuremath{\mathcal{F}}_{\textup{cv-phase}}$, and $\ensuremath{\mathcal{F}}_{\textup{cv-sw}}$, of the quantum map $\ensuremath{\mathcal{F}}(s, \beta)$, given a one-mode quantum input $\beta$ and a classical input $s$.
For $\ensuremath{\mathcal{F}}_{\textup{cv-amp}}$ and $\ensuremath{\mathcal{F}}_{\textup{cv-phase}}$, we consider a random sequence in $[0, 1]$ of $\{s_l\}_{l=1:200}$ and a random sequence of one-mode thermal states $\{\beta_l\}_{l=1:200}$ and take the index of $l=1,\ldots,100$ for the training and $l=101,\ldots,200$ for the evaluation. Here, we consider one-mode thermal states $\beta_l$ as Gaussian continuous-variable states with the density matrices
\begin{align}\label{eqn:thermal}
\beta_l = \sigma_l \quad \text{ for } \quad \sigma_l = \dfrac{1}{1 + \overline{v}_l}\sum_{n=0}^{\infty}\left( \dfrac{\overline{v}_l}{1 + \overline{v}_l}\right)^n \ket{n}\bra{n},
\end{align}
where $\ket{n}$ represents the state corresponding to $n$ photon numbers, and $\overline{v}_l$ is the expectation value of the photon number in the state. We consider $\overline{v}_l = (r_l \cos(\phi_l))^2$, where $r_l$ and $\phi_l$ are taken randomly in $[0.0, 0.3]$ and $[0.0, \pi]$, respectively.
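A truncated-Fock-space representation of the thermal state in Eq.~\eqref{eqn:thermal} can be sketched as follows (the cutoff dimension and the renormalization after truncation are implustration choices on our part, not part of the definition):

```python
import numpy as np

def thermal_state(vbar, dim):
    """Truncated density matrix of a thermal state with mean photon number vbar."""
    n = np.arange(dim)
    p = (vbar / (1.0 + vbar)) ** n / (1.0 + vbar)  # geometric photon distribution
    return np.diag(p / p.sum())  # renormalize after truncation
```

For the small mean photon numbers used here ($\overline{v}_l \leq 0.09$), the geometric distribution decays fast, so a modest cutoff already reproduces the trace and mean photon number essentially exactly.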
The quantum maps $\ensuremath{\mathcal{F}}_{\textup{cv-amp}}$ and $\ensuremath{\mathcal{F}}_{\textup{cv-phase}}$ are defined as
\begin{align}
\ensuremath{\mathcal{F}}_{\textup{cv-amp}}(s, \beta) &= \hat{\ensuremath{\mathcal{S}}}(\xi_\textup{amp}(s))\beta \hat{\ensuremath{\mathcal{S}}}(\xi_\textup{amp}(s))^{\dagger},\\
\ensuremath{\mathcal{F}}_{\textup{cv-phase}}(s, \beta) &= \hat{\ensuremath{\mathcal{S}}}(\xi_\textup{phase}(s))\beta \hat{\ensuremath{\mathcal{S}}}(\xi_\textup{phase}(s))^{\dagger},
\end{align}
where $\hat{\ensuremath{\mathcal{S}}}(\xi)$ is the one-mode squeezing operator, defined as $\hat{\ensuremath{\mathcal{S}}}(\xi) = \exp\left(\xi \hat{\ensuremath{a}}^{\dagger}\hat{\ensuremath{a}}^{\dagger} - \xi^*\hat{\ensuremath{a}}\hat{\ensuremath{a}}\right)$.
Here, we consider $\xi$ as functions of classical data $s$ defined as
\begin{align}
\xi_\textup{amp}(s) = s\exp(i\pi/4), \qquad \xi_\textup{phase}(s) = 0.3\exp(i2\pi s).
\end{align}
For the quantum map $\ensuremath{\mathcal{F}}_{\textup{cv-sw}}$, we consider the same $\{s_l\}$ but random one-mode squeezed-thermal states for $\{\beta_l\}$ as
\begin{align}
\beta_l = \hat{\ensuremath{\mathcal{S}}}(\xi_l)\sigma_l\hat{\ensuremath{\mathcal{S}}}(\xi_l)^{\dagger},
\end{align}
where $\sigma_l$ is defined as in Eq.~\eqref{eqn:thermal} and $\xi_l = r_l\sin(\phi_l)$.
The quantum map $\ensuremath{\mathcal{F}}_{\textup{cv-sw}}$ is defined as the quantum switch with the input $\beta_l$ and the switch state \begin{align}
\psi_{s_l} = \sqrt{s_l}\ket{\alpha} + \sqrt{1-s_l}\ket{-\alpha},
\end{align}
where we consider the following coherent states with $\alpha = 2.5$
\begin{align}
\ket{\pm\alpha} = \exp\left(-\dfrac{|\alpha|^2}{2} \right)\sum_{n=0}^{\infty}\dfrac{(\pm\alpha)^n}{\sqrt{n!}}\ket{n}.
\end{align}
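The truncated Fock-basis coefficients of the coherent states $\ket{\pm\alpha}$ above can be sketched as follows; for $\alpha = 2.5$, a cutoff of a few tens of photons (our choice for illustration) already reproduces the norm and the overlap $\langle\alpha|-\alpha\rangle = e^{-2|\alpha|^2}$:

```python
import numpy as np
from math import factorial

def coherent_state(alpha, dim):
    """Truncated Fock-basis coefficients c_n = e^{-|a|^2/2} a^n / sqrt(n!)."""
    n = np.arange(dim)
    fact = np.array([float(factorial(int(k))) for k in n])  # float to avoid big-int arrays
    return np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt(fact)
```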
\begin{figure}
\includegraphics[width=15cm]{FIGS5.pdf}
\protect\caption{(a) Examples of input, target, and reconstructed Wigner functions in the non-temporal continuous variable tomography for $\ensuremath{\mathcal{F}}_{\textup{cv-phase}}$ (upper panel), $\ensuremath{\mathcal{F}}_{\textup{cv-amp}}$ (middle panel), and $\ensuremath{\mathcal{F}}_{\textup{cv-sw}}$ (lower panel) by an $N=4$-site QR with a measurement multiplexity $V=10$ and a constant coherent field strength $P/\gamma=1.0$.
The columns labeled ``(Hybrid)'' and ``(Quantum only)'' describe the results with the input scaling $W/\gamma=1.0$ (both classical and quantum inputs) and $W/\gamma=0.0$ (no classical input), respectively.
(b) Variation in the tomography error EW for $\ensuremath{\mathcal{F}}_{\textup{cv-phase}}$ (orange line) and $\ensuremath{\mathcal{F}}_{\textup{cv-amp}}$ (blue line) with the input scaling $W$ and $N=2, 3$-site QR.
(c) Variation in the tomography error EW for $\ensuremath{\mathcal{F}}_{\textup{cv-sw}}$ and $N=2, 3$-site QR.
In the results labeled as ``Full-train", we consider a random sequence in $[0, 1]$ of $\{s_l\}_{l=1:200}$ and a random sequence of one-mode thermal states $\{\beta_l\}_{l=1:200}$ with the index of $l=1,\ldots,100$ for the training and $l=101,\ldots,200$ for the evaluation.
In the results labeled ``Bin-train" and ``Tri-train", only binary or tri-values classical data in the training phase are considered, i.e., $s_l\in\{0.0, 1.0\}$ (for ``Bin-train") and $s_l \in \{0.0, 0.5, 1.0\}$ (for ``Tri-train") for $l=1,\ldots,100$.
\label{fig:qlm:demo}}
\end{figure}
Figure~\ref{fig:qlm:demo}(a) shows several examples of input, target, and reconstructed Wigner functions for $\ensuremath{\mathcal{F}}_{\textup{cv-phase}}$ (upper panel), $\ensuremath{\mathcal{F}}_{\textup{cv-amp}}$ (middle panel), and $\ensuremath{\mathcal{F}}_{\textup{cv-sw}}$ (lower panel). Here, we use an $N=4$-site QR with a measurement multiplexity $V=10$ and a constant coherent field strength $P/\gamma=1.0$.
The columns labeled ``(Hybrid)'' and ``(Quantum only)'' describe the results when we consider the input scaling $W/\gamma=1.0$ (both classical and quantum inputs) and $W/\gamma=0.0$ (no classical input), respectively.
We observe that when both quantum and classical inputs are fed into the QR, the reconstructed states closely match the target states of the quantum maps with hybrid quantum-classical input.
To further evaluate the performance systematically, we calculate the error based on the Wigner representation
\begin{align}
\textup{EW} = \sqrt{ \dfrac{1}{T}\sum_{l=L+1}^{L+T}\dfrac{\sum_{i,j}[\ensuremath{\mathcal{W}}(\ensuremath{\mathcal{F}}_l; x_i, p_j) - \hat{\ensuremath{\mathcal{W}}}(\ensuremath{\mathcal{F}}_l; x_i, p_j)]^2}{\sum_{i,j}[\ensuremath{\mathcal{W}}(\ensuremath{\mathcal{F}}_l; x_i, p_j) + \hat{\ensuremath{\mathcal{W}}}(\ensuremath{\mathcal{F}}_l; x_i, p_j)]^2}},
\end{align}
where $\ensuremath{\mathcal{W}}(\ensuremath{\mathcal{F}}_l; x_i, p_j)$ and $\hat{\ensuremath{\mathcal{W}}}(\ensuremath{\mathcal{F}}_l; x_i, p_j)$ are the target and reconstructed Wigner functions of the state $\ensuremath{\mathcal{F}}_l = \ensuremath{\mathcal{F}}(s_l, \beta_l)$, respectively. The error metric is evaluated in the evaluation phase with $L=100$ and $T=100$.
Figure~\ref{fig:qlm:demo}(b) depicts the errors to reconstruct the quantum maps $\ensuremath{\mathcal{F}}_{\textup{cv-phase}}$ (orange lines) and $\ensuremath{\mathcal{F}}_{\textup{cv-amp}}$ (blue lines) at different input scaling $W$ with $N=2$ (upper plot) and $N=3$ (lower plot).
The errors are calculated over 10 trials of data and QR configurations, with the solid lines depicting the average values associated with error bars.
We observe an optimal range of the input scaling $W$ in each task.
We note that setting too small a value of $W$ limits the effect of the classical input on the QR.
In contrast, setting too large a value of $W$ induces localization in the quantum dots and may lead to non-ergodic behavior in the QR.
In this case, when the input state $\beta_l$ is incident on the QR with weak coupling ($|W_{jk}^{\textup{in}}| \ll |P + Ws_l|$), sufficient information regarding $\beta_l$ cannot be extracted from the QR.
Figure~\ref{fig:qlm:demo}(c) depicts the errors $\textup{EW}$ (blue lines labeled ``Full-train'') to reconstruct the quantum map $\ensuremath{\mathcal{F}}_{\textup{cv-sw}}$. We can observe that the effect of the input scaling $W$ is not as significant as in the other tasks.
We further present an intriguing setting by limiting the variety of the classical input in the training phase while keeping the same data in the evaluating phase.
In the results labeled ``Bin-train'', we only consider binary classical data in the training phase, i.e., $s_l\in\{0.0, 1.0\}$ for $l=1,\ldots,100$.
The performance is worse, since no superposition of the switch state appears in the training phase to aid the learning of the quantum switch.
However, for tri-valued $s_l \in \{0.0, 0.5, 1.0\}$ (labeled ``Tri-train'') in the training phase, where only one pattern of the superposition switch state appears in the training, we can obtain a relatively low error within a suitable range of the input scaling $W$. For example, the performance at $W=0.2$ is comparable with that of ``Full-train'' at $W=0.02$.
These results demonstrate that tomography for the quantum switch can be performed with limited patterns of training data.
\section{Continuous Variable Tomography and Closed-Loop Control}
In this section, we formulate an intriguing problem in predicting the future evolution of the quantum tomography of hybrid inputs. Specifically, we consider a situation where after the training phase, we cannot access the information from the classical control $s_l$.
This problem can be addressed owing to the advantage of the multitasking in reservoir computing.
Here, our QR can autonomously generate $s_l$ in a closed-loop manner while performing the tomography task using hybrid inputs.
In the training phase, $s_l$ is learned in open loop, where we output the classical value $v_l$ given the input $s_l$ such that $v_l$ and $s_{l+1}$ are as close as possible [Fig.~\ref{fig:close:overview}(a)].
After training, the prediction is used as the classical input for the next step, i.e., replacing $s_{l+1}$ by $v_l$, thereby forming a closed-loop control without any external interventions [Fig.~\ref{fig:close:overview}(b)].
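The open-loop training followed by closed-loop feedback can be illustrated with the following sketch, in which the QR is replaced by a small classical echo-state surrogate (this substitution, the network size, and all hyperparameters are our assumptions for illustration only); the two phases mirror those of Fig.~\ref{fig:close:overview}:

```python
import numpy as np

rng = np.random.default_rng(1)
N, L, washout, f = 50, 400, 100, 60

W_in = rng.uniform(-0.5, 0.5, N)
W_res = rng.normal(0.0, 1.0, (N, N))
W_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_res)))  # echo-state scaling

def step(x, s):
    return np.tanh(W_res @ x + W_in * s)

s = 0.5 + 0.5 * np.sin(np.arange(L + 1) * np.pi * f / 510)

# Open-loop (teacher-forced) phase: drive with the true signal, collect states.
x = np.zeros(N)
for l in range(washout):
    x = step(x, s[l])
X = []
for l in range(washout, L):
    x = step(x, s[l])
    X.append(x)
X = np.array(X)
target = s[washout + 1 : L + 1]  # one-step-ahead targets
W_out = np.linalg.solve(X.T @ X + 1e-8 * np.eye(N), X.T @ target)
train_rmse = np.sqrt(np.mean((X @ W_out - target) ** 2))

# Closed-loop phase: feed each one-step prediction back as the next input.
v, xc, u = [], X[-1], s[L]
for _ in range(100):
    xc = step(xc, u)
    u = float(W_out @ xc)
    v.append(u)
```

The essential point is structural: the same `step` update is used in both phases, and only the source of the input changes from the external signal to the network's own prediction.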
\begin{figure}
\includegraphics[width=12cm]{FIGS6.pdf}
\protect\caption{Closed-loop control of the QR. The QR can autonomously generate classical data $s_l$ in a closed-loop manner while performing the tomography task.
(a) In the training phase, $s_l$ is learned in an open loop to output the classical value $v_l$ given the input $s_l$ such that $v_l$ and $s_{l+1}$ are as close as possible.
(b) After training, the prediction is used as the classical input for the next step, i.e., replacing $s_{l+1}$ by $v_l$, thereby forming a closed-loop control without any external interventions.
\label{fig:close:overview}}
\end{figure}
In this demonstration, we consider the quantum tomography of continuous variable states. The target is to reconstruct the representation of the output $\ensuremath{\mathcal{F}}_l= \ensuremath{\mathcal{F}}(s_l,\beta_l)$ in the Wigner function form $\ensuremath{\mathcal{W}}(\ensuremath{\mathcal{F}}_l; x_i, p_j)$ defined on a grid of continuous variables $x_i$ and $p_j$.
We use 300 randomly generated one-mode thermal states $\beta_l$ and the periodic signals $s_l = 0.5 + 0.5\sin\left(\frac{l\pi f}{510}\right)$ as the hybrid input in the training phase.
The target squeezed thermal state $\ensuremath{\mathcal{F}}_l$ is created by applying the one-mode squeezing operator $\hat{S}(\xi_l)=\exp\left( \xi_l \hat{\ensuremath{a}}^{\dagger}\hat{\ensuremath{a}}^{\dagger} - \xi_l^{*}\hat{\ensuremath{a}}\hat{\ensuremath{a}} \right)$ to $\beta_l$.
We consider two types of output squeezed thermal states, where the classical control
$s_l$ is encoded in the amplitude ($\xi_l=s_le^{i\pi/4}$) or phase ($\xi_l=0.3e^{i2\pi s_l}$) of $\xi_l$.
Here, we set the cutoff Fock-space dimension (the effective dimension) of these continuous-variable states to $D_{\textup{eff}}=9$.
Figure~\ref{fig:close:amp}(a) (for the amplitude encoding of the classical control in the target state) and
Fig.~\ref{fig:close:phase}(a) (for the phase encoding of the classical control in the target state)
present examples of the classical control signals in the training and closed-loop phase for $f=60$ with different input scaling $W$.
Since we consider the same input sequence for both encoding methods, the results for this classical prediction are the same.
For $W/\gamma=0.8$, the control signal is reconstructed almost perfectly for all time steps in the closed-loop phase with a QR of $N=3$ sites and a measurement multiplexity $V=10$.
Subsequently, as shown in Fig.~\ref{fig:close:amp}(b), this QR can efficiently reconstruct the Wigner function even without accessing the control signal in the closed-loop phase, since the errors at each coordinate $(x_i, p_j)$ are almost zero.
The errors for the tomography task in Fig.~\ref{fig:close:phase}(b) are larger but still less than 0.05, demonstrating that the phase encoding of the classical input in the output $\ensuremath{\mathcal{F}}_l$ is more difficult to emulate than the amplitude encoding.
We evaluate the performance of the closed-loop control using the valid prediction time (VPT) in the classical and quantum tomography tasks.
Assume that the training phase is performed for the index $l=1,\ldots,L$. Given a positive integer $t$, we define the following errors $\textup{NRMSE}(t)$ and $\textup{EW}(t)$, which are the prediction errors of the classical and quantum tomography tasks in the closed-loop phase, respectively.
\begin{align}
\textup{NRMSE}(t) &= \sqrt{\dfrac{1}{t}\sum_{l=L}^{L+t-1}\dfrac{(s_{l+1} - v_{l})^2}{\sigma^2_s} },\\
\textup{EW}(t) &= \sqrt{ \dfrac{1}{t}\sum_{l=L+1}^{L+t}\dfrac{\sum_{i,j}[\ensuremath{\mathcal{W}}(\ensuremath{\mathcal{F}}_l; x_i, p_j) - \hat{\ensuremath{\mathcal{W}}}(\ensuremath{\mathcal{F}}_l; x_i, p_j)]^2}{\sum_{i,j}[\ensuremath{\mathcal{W}}(\ensuremath{\mathcal{F}}_l; x_i, p_j) + \hat{\ensuremath{\mathcal{W}}}(\ensuremath{\mathcal{F}}_l; x_i, p_j)]^2}},
\end{align}
where $\sigma^2_s$ is the variance of the classical control signal $\{s_l\}$.
Given an error threshold $\varepsilon$, the valid prediction time is defined as the longest time for which the error is smaller than or equal to $\varepsilon$:
\begin{align}
\textup{C-VPT}(\varepsilon) &= \max \{ T \mid \textup{NRMSE}(t) \leq \varepsilon \quad \forall t \leq T\} \\
\textup{Q-VPT}(\varepsilon) &= \max \{ T \mid \textup{EW}(t) \leq \varepsilon \quad \forall t \leq T\}.
\end{align}
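The valid prediction times can be computed from the cumulative error profiles as in the following sketch (here $\sigma^2_s$ is estimated from the supplied target segment, an assumption for illustration):

```python
import numpy as np

def valid_prediction_time(errors, eps):
    """Longest horizon T such that errors[t-1] <= eps for all t <= T.

    errors[t-1] holds the cumulative error after t closed-loop steps.
    """
    T = 0
    for e in errors:
        if e > eps:
            break
        T += 1
    return T

def nrmse_profile(s_true, v_pred):
    """NRMSE(t) for t = 1..len(v_pred), following the definition above."""
    s_true = np.asarray(s_true, dtype=float)
    v_pred = np.asarray(v_pred, dtype=float)
    sq = (s_true - v_pred) ** 2 / np.var(s_true)  # normalized squared errors
    t = np.arange(1, len(sq) + 1)
    return np.sqrt(np.cumsum(sq) / t)
```

The same `valid_prediction_time` applies to Q-VPT by passing the cumulative $\textup{EW}(t)$ profile instead of the NRMSE profile.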
\begin{figure}
\includegraphics[width=16cm]{FIGS7.pdf}
\protect\caption{(a) Examples of the classical control signals in the training and closed-loop phase for $f=60$ with different values of the input scaling $W/\gamma$ in the QR of $N=3$ sites with measurement multiplexity $V=10$.
(b) Continuous variable tomography at typical time steps in the training phase ($t_1, t_2, t_3$) and the closed-loop phase ($t_3+1, t_4, t_5$) with $W/\gamma=0.8$ in the amplitude encoding of the classical control in the target states. The last panel displays the absolute difference between the target and reconstructed Wigner functions.
(c) The valid prediction time in classical (C-VPT) and quantum tomography (Q-VPT) tasks depending on the input scaling $W/\gamma$ and the frequency of the sinusoidal classical inputs.
\label{fig:close:amp}}
\end{figure}
\begin{figure}
\includegraphics[width=16cm]{FIGS8.pdf}
\protect\caption{(a) Examples of the classical control signals in the training and closed-loop phase for $f=60$ with different values of the input scaling $W/\gamma$ in the QR of $N=3$ sites with measurement multiplexity $V=10$.
(b) Continuous variable tomography at typical time steps in the training phase ($t_1, t_2, t_3$) and the closed-loop phase ($t_3+1, t_4, t_5$) with $W/\gamma=0.8$ in the phase encoding of the classical control in the target states. The last panel displays the absolute difference between the target and reconstructed Wigner functions.
(c) The valid prediction time in classical (C-VPT) and quantum tomography (Q-VPT) tasks depending on the input scaling $W/\gamma$ and the frequency of the sinusoidal classical inputs.
\label{fig:close:phase}}
\end{figure}
We plot the dependency of the VPTs on the scaling $W$ and frequency $f$ of the control signals [Fig.~\ref{fig:close:amp}(c) and Fig.~\ref{fig:close:phase}(c)].
For the amplitude encoding of classical control in the target states [Fig.~\ref{fig:close:amp}(c)], we observe an optimal range ($0.8 \leq W/\gamma \leq 1.2$) of input scaling $W$ for optimal performance of the tomography task.
Here, we note that setting $W$ too large induces localization in the quantum dots and may lead to non-ergodic behavior in the QR.
In this case, when the input state $\beta_l$ is incident on the QR with weak coupling ($|W_{jk}^{\textup{in}}| \ll |P + Ws_l|$), sufficient information regarding $\beta_l$ cannot be extracted from the QR.
For the phase encoding of the classical control in the target states [Fig.~\ref{fig:close:phase}(c)], we observe a trade-off between the values of C-VPT and Q-VPT, where setting a small $W$ ($0.1 \leq W/\gamma \leq 0.8$) can increase the C-VPT but decrease the Q-VPT.
The results in Fig.~\ref{fig:close:amp}(c) and Fig.~\ref{fig:close:phase}(c) show that different temporal quantum maps $\ensuremath{\mathcal{F}}$ require different profiles of information processing, which can be adjusted by some modifiable parameters, such as the input scaling $W$ or the constant coherent field $P$.
It would be helpful to evaluate the required information-processing ability of a temporal quantum map, such as how far back and which combinations of the previous inputs are processed in this map. This question is directly related to the information processing framework in input-driven dynamical systems~\cite{dambre:2012:nonlinear,kubota:2021:prr:IPC} but presents further challenges in our hybrid setting.
\begin{figure}
\includegraphics[width=16cm]{FIGS9.pdf}
\protect\caption{(a) The dynamics of the average occupation numbers $\bar{n}(t)$ over $N=3$ reservoir sites (left panel) and the autocorrelation for one trial of the QR's configuration and data (right panel) at different $f$ and input scaling $W/\gamma$.
(b) The average values of autocorrelation zero-crossing times over 10 trials of the QR's configurations and input data for each combination of $W/\gamma \in \{0.0, 0.2, \ldots, 1.8, 2.0\}$ and $f \in \{10, 20, \ldots, 90, 100\}$.
\label{fig:cross:delay}}
\end{figure}
We also notice the dependency of the VPTs on the time scales of the classical control signals in Fig.~\ref{fig:close:amp}(c) and Fig.~\ref{fig:close:phase}(c), where classical inputs with higher frequencies $f$ (shorter timescales) generally lead to better performance.
To characterize the activity of the QR, we observe the dynamics of the average occupation numbers $\bar{n}(t)$ over reservoir sites at different $f$ and input scaling $W$ [left panels in Fig.~\ref{fig:cross:delay}(a)].
In the presence of periodic classical inputs ($W/\gamma > 0.0$), an oscillatory response is superposed on the intrinsic dynamics driven by the quantum input alone ($W/\gamma=0.0$).
We further calculate the autocorrelation function for each frequency $f$, averaged across all the reservoir sites,
\begin{align}
C(\tau_c) = \dfrac{1}{N}\sum_{j=1}^N \langle \left( n_j(t) - \langle n_j(t) \rangle \right)\left( n_j(t+\tau_c) - \langle n_j(t) \rangle \right) \rangle,
\end{align}
where the angular brackets denote a time average.
Here, $C(0)$ depicts the total variance in the fluctuations of the occupation numbers in the reservoir sites, whereas $C(\tau_c)$ with $\tau_c > 0$ provides information about the temporal structure of the reservoir activity.
In the right panels of Fig.~\ref{fig:cross:delay}(a), we plot the autocorrelation for one trial of the QR's configuration and data at different $f$ and input scaling $W$.
With no classical input ($W/\gamma = 0.0$), the autocorrelation function decays to the values around zero as $\tau_c$ increases. This implies that temporal fluctuations are uncorrelated over large time intervals, which is due to the effect of random quantum inputs and disordered dynamics in the QR.
When the QR is driven by sinusoidal classical inputs, we observe that the periodic activity induced by these inputs is superposed on the background of the quantum inputs.
We define the timescale of the QR as the first $\tau_c$ such that $C(\tau_c)$ crosses the zero line, which can be understood as the first time interval over which the temporal fluctuations are uncorrelated.
This zero-crossing time depends on the spontaneous activity of the QR and the timescale of the external classical input.
We plot in Fig.~\ref{fig:cross:delay}(b) the average values of zero-crossing times over 10 trials of the QR's configurations and input data for each combination of $W/\gamma \in \{0.0, 0.2, \ldots, 1.8, 2.0\}$ and $f \in \{10, 20, \ldots, 90, 100\}$.
In the presence of external classical inputs ($W/\gamma > 0.0$), if the zero-crossing times are larger than those of no classical inputs ($W/\gamma = 0.0$), we observe low values of VPTs in Fig.~\ref{fig:close:amp}(c) and Fig.~\ref{fig:close:phase}(c).
These results imply that the timescales of the QRs without classical inputs should be larger than the timescales induced by classical inputs. These timescales can be modified by adjusting the constant coherent field $P$ and the input scaling $W$.
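The site-averaged autocorrelation and the zero-crossing timescale used above can be sketched as follows (function names are our own; the occupation-number traces $n_j(t)$ are assumed to be given as plain lists, and the time average is taken over the available window at each lag):

```python
def autocorrelation(occupations, tau_max):
    """Site-averaged autocorrelation C(tau) of the occupation-number traces.

    `occupations` is a list of per-site time series n_j(t).
    """
    N = len(occupations)
    C = []
    for tau in range(tau_max + 1):
        acc = 0.0
        for n_j in occupations:
            mean = sum(n_j) / len(n_j)
            T = len(n_j) - tau
            acc += sum((n_j[t] - mean) * (n_j[t + tau] - mean)
                       for t in range(T)) / T
        C.append(acc / N)
    return C

def zero_crossing_time(C):
    """First lag tau > 0 at which C(tau) touches or crosses zero."""
    for tau in range(1, len(C)):
        if C[tau] * C[tau - 1] <= 0.0:
            return tau
    return None  # no crossing within the computed lags
```

For a periodic trace the zero crossing sits near a quarter of the period, matching the intuition that the crossing tracks the timescale of the dominant oscillation.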
We further analyze the effect of perturbation to investigate the stability of the embedded classical trajectories. Figure~\ref{fig:close:pert} shows the output dynamics of both the target and perturbed prediction trajectories in the closed-loop phase plotted in the $(s_l, s_{l+10})$ plane for different values of $f$ and $W/\gamma$.
After the training phase, we add a small perturbation into the predicted value,
which results in an extra drift in the $(s_l, s_{l+10})$ plane (green line).
The reservoir presents a stable embedding of sinusoidal classical inputs if the trajectory can return to the target one after the addition of the perturbation.
We observe appropriate ranges of input scaling $W/\gamma$ and $f$ to obtain stable closed loops (Fig.~\ref{fig:close:pert} and Fig.~\ref{fig:series:pert}).
Furthermore, if we increase the input scaling $W/\gamma$, the closed loop fails to reconstruct the trajectory of sinusoidal inputs but can produce chaotic-like behavior in the embedding space.
Intriguingly, the generated trajectory is not elliptical like the trajectory of sinusoidal inputs but is still robust with respect to a small perturbation.
\begin{figure}
\includegraphics[width=18cm]{FIGS10.pdf}
\protect\caption{Stability after adding a small perturbation to the trajectory. The QR presents a stable embedding of sinusoidal classical inputs if the trajectory can return to the target after adding a small perturbation (green line) into the predicted value. We observe appropriate ranges of input scaling $W$ and $f$ to obtain stable closed loops.
There is an intriguing observation that if we increase the input scaling $W/\gamma$, the closed loop fails to reconstruct the trajectory of sinusoidal inputs but can produce chaotic-like behavior in the embedding space.
\label{fig:close:pert}}
\end{figure}
\begin{figure}
\includegraphics[width=18cm]{FIGS11.pdf}
\protect\caption{Stability after adding a small perturbation to the sinusoidal classical input with $f=60$ and different values of $W/\gamma$. The QR presents a stable reconstruction if the trajectory can return to the target after adding a small perturbation into the predicted value. In this experiment, we observe that $W/\gamma=0.8$ can provide a stable reconstruction.
\label{fig:series:pert}}
\end{figure}
\clearpage
\section{Quantum Memory Capacity Defined via Tomography Learning}
In Fig.~2 in our main text, we present the results of tomography for the quantum switch, which requires the information of previous input signals.
The performance of this task depends on the amount of memory from previous inputs that the classical readout can retrieve from the reservoir.
In the conventional RC, we evaluate the short-term memory (STM) property of the reservoir via the delay-reconstruction task for reconstructing the previous input. Given a time delay $d \geq 0$ and a uniform random input sequence $\{u_n\}$, the target of this task is to produce the output sequence $\{y_n\}$ such that $\{y_n\}$ approximates the target sequence $\{\hat{y}_n = u_{n-d}\}$.
For each delay time step $d$, the readout part is trained to remember the input sequences at delayed $d$-time steps.
The performance is evaluated by the square of the correlation coefficient $\ensuremath{\mathcal{C}}(d)$ between the output and delayed input sequences~\cite{jaeger:2001:short} as follows:
\begin{align}
\ensuremath{\mathcal{C}}^2(d) = \dfrac{{\textup{cov}}^2(\{y_n\}, \{u_{n-d}\} )}{\textup{var}(\{y_n\})\textup{var}(\{u_n\})}.
\end{align}
Here, $\textup{cov}(\cdot)$ and $\textup{var}(\cdot)$ denote the covariance and variance function, respectively.
The STM property is reflected in $\ensuremath{\mathcal{C}}^2(d)$ becoming sufficiently small at large values of the delay $d$.
This $\ensuremath{\mathcal{C}}^2(d)$ is defined as the \textit{memory function}, which characterizes the memory profile of the reservoir.
Furthermore, the \textit{memory capacity} (MC) of the reservoir is given by
\begin{align}
\textup{MC} = \sum_{d=0}^{\infty}\ensuremath{\mathcal{C}}^2(d).
\end{align}
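The memory function and MC can be estimated from the trained output and input sequences; the sketch below uses our own naming and assumes that the readout for each delay $d$ has already been trained, so that $\{y_n\}$ is the corresponding output sequence aligned against $\{u_{n-d}\}$:

```python
def memory_function(y, u, d):
    """Squared correlation C^2(d) between outputs y_n and delayed inputs u_{n-d}.

    Pairs (y_n, u_{n-d}) are formed for n >= d.
    """
    yy = y[d:]
    uu = u[:len(u) - d] if d > 0 else u[:]
    m = len(yy)
    my, mu = sum(yy) / m, sum(uu) / m
    cov = sum((a - my) * (b - mu) for a, b in zip(yy, uu)) / m
    vy = sum((a - my) ** 2 for a in yy) / m
    vu = sum((b - mu) ** 2 for b in uu) / m
    return cov * cov / (vy * vu) if vy > 0 and vu > 0 else 0.0

def memory_capacity(outputs_per_delay, u, d_max):
    """MC = sum over delays of C^2(d), one trained output sequence per delay."""
    return sum(memory_function(outputs_per_delay[d], u, d)
               for d in range(d_max + 1))
```

In practice the infinite sum is truncated at a maximum delay, as done for the QMC below.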
In our study, we consider the concept of quantum memory capacity~\cite{tran:2021:temporal} to measure the ability of the QR to reconstruct the previous quantum inputs via the classical readout.
We investigate the quantum version of STM in the QR via the quantum version of the delay-reconstruction task $\ensuremath{\mathcal{F}}(u_n=0, \beta_n) = \beta_{n-d}$ given the delay $d$, where classical inputs are zero.
Since the input and output are quantum states, the capacity to reconstruct the previous $d$ steps of the input states is evaluated via the square of the distance correlation~\cite{gabor:2007:corr} between the output $\{\sigma_n\}$ (obtained via the training of the classical readout) and the target $\{\hat{\sigma}_n\}=\{\beta_{n-d}\}$:
\begin{align}
\ensuremath{\mathcal{R}}^2(d) = \dfrac{\ensuremath{\mathcal{V}}^2(\{\sigma_n\}, \{\beta_{n-d}\})}{\sqrt{\ensuremath{\mathcal{V}}^2(\{\sigma_n\}, \{\sigma_n\})\ensuremath{\mathcal{V}}^2(\{\beta_{n}\}, \{\beta_{n}\})}}.
\end{align}
Here, $\ensuremath{\mathcal{V}}^2(\{\rho_n\}, \{\sigma_n\})$ represents the squared distance covariance of random sequences of density matrices $\{\rho_n\}, \{\sigma_n\}$.
The squared distance covariance $\ensuremath{\mathcal{V}}^2(\{\rho_n\}, \{\sigma_n\})$ is calculated from all pairwise distances $A(\rho_j, \rho_k)$ and $A(\sigma_j, \sigma_k)$ for $j, k = 1,2,\ldots,n$,
where the distance $A(\rho, \sigma) = \arccos{F(\rho, \sigma)}$ for given density matrices $\rho$ and $\sigma$ is defined as the angle induced from the fidelity $F(\rho, \sigma)=\tr[\sqrt{\sqrt{\sigma}\rho\sqrt{\sigma}}]$.
We construct the distance matrices for $\{\rho_n\}$ and $\{\sigma_n\}$ as $(R_{jk})$ and $(S_{jk})$ with the elements $R_{jk}=A(\rho_j, \rho_k)$ and $S_{jk}=A(\sigma_j, \sigma_k)$.
We take all double centered distances
\begin{align}
r_{j,k} &= R_{j,k} - \bar{R}_{j.} - \bar{R}_{.k} + \bar{R}_{..},\\
s_{j,k} &= S_{j,k} - \bar{S}_{j.} - \bar{S}_{.k} + \bar{S}_{..},
\end{align}
where $\bar{R}_{j.}$ and $\bar{R}_{.k}$ are the $j$th row mean and the $k$th column mean, respectively,
and $\bar{R}_{..}$ is the grand mean of the distance matrix $(R_{jk})$ (the same notations for $S$).
The squared distance covariance is the arithmetic average
\begin{align}
\ensuremath{\mathcal{V}}^2(\{\rho_n\}, \{\sigma_n\}) = \dfrac{1}{n^2}\sum_{j=1}^n\sum_{k=1}^nr_{j,k}s_{j,k}.
\end{align}
$\ensuremath{\mathcal{R}}^2(d)$ gives information about the serial dependence between $\{\sigma_n\}$ and $\{\hat{\sigma}_n\}=\{\beta_{n-d}\}$.
Here, $0\leq \ensuremath{\mathcal{R}}^2(d) \leq 1$ and $\ensuremath{\mathcal{R}}^2(d) = 1$ if we can find some linear transformation from the output sequence $\{\sigma_n\}$ to the target sequence $\{\hat{\sigma}_n\}$.
In contrast, $\ensuremath{\mathcal{R}}^2(d)=0$ implies that the system cannot reconstruct the previous $d$ steps of the inputs because the output and the target sequences are completely independent.
We define $\ensuremath{\mathcal{R}}^2(d)$ as the \textit{quantum memory function} of the QR via tomography learning with the classical readout.
Consequently, the \textit{quantum memory capacity} (QMC) is defined as
\begin{align}\label{eqn:qmc}
\textup{QMC} = \sum_{d=0}^{\infty}\ensuremath{\mathcal{R}}^2(d).
\end{align}
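The double-centering construction and the resulting squared distance correlation can be sketched as follows, assuming the pairwise distance matrices $(R_{jk})$ and $(S_{jk})$ (built from the fidelity angle $A$) have been precomputed; function names are our own:

```python
def double_center(M):
    """Double-centered version of a square distance matrix M."""
    n = len(M)
    row = [sum(M[j]) / n for j in range(n)]
    col = [sum(M[j][k] for j in range(n)) / n for k in range(n)]
    grand = sum(row) / n
    return [[M[j][k] - row[j] - col[k] + grand for k in range(n)]
            for j in range(n)]

def dist_cov2(R, S):
    """Squared distance covariance from two precomputed distance matrices."""
    r, s = double_center(R), double_center(S)
    n = len(R)
    return sum(r[j][k] * s[j][k] for j in range(n) for k in range(n)) / n ** 2

def dist_corr2(R, S):
    """Squared distance correlation between the two underlying sequences."""
    denom = (dist_cov2(R, R) * dist_cov2(S, S)) ** 0.5
    return dist_cov2(R, S) / denom if denom > 0 else 0.0
```

A uniform rescaling of one distance matrix leaves the squared distance correlation at unity, consistent with $\ensuremath{\mathcal{R}}^2(d)=1$ under a linear transformation between output and target.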
Figure~\ref{fig:qmem} presents a demonstration for the quantum memory function and quantum memory capacity, where we consider $\{\beta_n\}$ as a random sequence of one-qubit input states with 400 time steps for the training and 100 time steps for the evaluation.
Figure~\ref{fig:qmem}(a) displays the values of RMSF, and Fig.~\ref{fig:qmem}(b) displays the quantum memory function $\ensuremath{\mathcal{R}}^2(d)$ for $N=3$-site QR at several values of uniform excitation $P(t)=P$.
We perform the tomography task with $V=5$ measurement multiplexity, which means that the dimension of the reservoir state is $VN=15$.
The STM property depends on the value of $P$: the memory is dominated by delays $d < 5$ in all regions, and $\ensuremath{\mathcal{R}}^2(d)$ converges to almost the same value at sufficiently large $d$. This value is non-zero owing to the effect of the finite data length.
\begin{figure}
\includegraphics[width=17cm]{FIGS12.pdf}
\protect\caption{(a) The RMSF of the STM task varying by the delay $d$ and (b) the quantum memory function $\ensuremath{\mathcal{R}}^2(d)$ for $N=3$-site QR at several values of uniform excitation $P(t)=P$.
The tomography task is performed with $V=5$ measurement multiplexity and a random sequence of one-qubit input states with 400 time steps for the training and 100 time steps for the evaluation.
(c) The dependency of quantum memory capacity (QMC) on the coherent field strength $P$ at $N=2,3$-site QR. The QMC is averaged over 10 random trials with displayed error bars.
(d) The absolute difference $| \bar{n}(t) - \bar{n}(t_{\textup{init}}) |$ between the average occupation numbers $\bar{n}(t)$ at initial time $t_{\textup{init}}=5/\gamma$ and an arbitrary $t$ $(0\leq \gamma t \leq 15)$ varying by $P/\gamma$. The quantum input is incident on the QR at $\gamma t=5, 6, 7, \ldots, 14, 15$, which increases the value $| \bar{n}(t) - \bar{n}(t_{\textup{init}}) |$ before decreasing it until the next input.
\label{fig:qmem}}
\end{figure}
We further plot the dependency of quantum memory capacity (QMC) on the coherent field strength $P$ in Fig.~\ref{fig:qmem}(c) at $N=2,3$-site QR.
Here, Eq.~\eqref{eqn:qmc} is calculated until the maximum delay $d_{\textup{max}}=40$.
We observe an optimal region of $P$ ($2\leq P < 10$), where the QMC is favorable.
To explain this behavior, we further analyze the dynamics of the occupation numbers $n_j(t)$ in the reservoir sites.
Figure~\ref{fig:qmem}(d) plots the absolute difference $| \bar{n}(t) - \bar{n}(t_{\textup{init}}) |$ between the average occupation number $\bar{n}(t)$ of the reservoir at the initial time $t_{\textup{init}}=5/\gamma$ and at an arbitrary $t$ $(0\leq \gamma t \leq 15)$.
This difference approaches zero as $t$ approaches $t_{\textup{init}}$.
The quantum input is incident on the QR at $\gamma t=5, 6, 7, \ldots, 14, 15$, which increases the value $| \bar{n}(t) - \bar{n}(t_{\textup{init}}) |$ before decreasing it until the next input.
We anticipate that increasing the magnitude of the coherent field strength $P$ compared with $h_{ij}$ in Eq.~\eqref{eqn:hamiltonian} may lead to non-ergodic behavior in the QR, i.e., a strong and qualitative dependence of expectation values on the initial state at $t_{\textup{init}}$ (Fig.~\ref{fig:tau}).
Furthermore, the input state $\beta_l$ is incident on the QR with weak coupling ($|W_{jk}^{\textup{in}}| \ll |P|$) in this case.
Therefore, sufficient information regarding $\beta_l$ cannot be extracted from the QR as $| \bar{n}(t) - \bar{n}(t_{\textup{init}}) | \approx 0$.
In contrast, a small $P(t)$ strongly drives the system from the steady state at the input-injection times but reduces the memory effect of the QR in reconstructing past information, since the old information is replaced very quickly.
\clearpage
\section{Effects of Classical Input on the Reconstruction of Quantum Input}
In the main text, we considered the target function, which is a function $\ensuremath{\mathcal{F}}$ of hybrid inputs $(u, \beta)$.
In this case, information of both the classical input $u(t)$ and quantum input $\beta(t)$ is retained in the reservoir states.
Since the classical input $u(t)$ is encoded into the strength of the coherent field $P(t) = P + Wu(t)$, the classical input and input scaling $W/\gamma$ have a strong effect on the dynamics of the QR.
If the target function $\ensuremath{\mathcal{F}}$ does not depend on the classical input $u(t)$, then the injection of $u(t)$ into the QR may affect the reconstruction of $\ensuremath{\mathcal{F}}$.
In this section, we verify this observation by investigating the memory capacity.
Given a sequence of hybrid inputs $\{u_n, \beta_n\}$, we use our QR in a multitask setting with the classical and quantum delay-reconstruction tasks mentioned in the previous section.
Given a delay $d$, we consider the delay reconstruction of the classical input $\{u_{n-d}\}$ in the classical task and the delay reconstruction of the quantum input $\{\beta_{n-d}\}$ in the quantum task.
We compute the corresponding MC and QMC for the classical and quantum tasks, respectively.
Figure~\ref{fig:mem:tradeoff:cv0} displays the dependency of MC and QMC on the input scaling $W/\gamma$ for $N=3,4$-site QR with a constant coherent field strength $P/\gamma=1.0$. Here, we consider the random uniform $\{u_n\}$ for classical inputs and $\{\beta_n\}$ as a random sequence of one-qubit states for quantum inputs.
To attain the same setting that was used in the task described in Fig.~2 in the main text, we perform the tomography with $V=8$ measurement multiplexity and use 800 time steps for the training and 200 time steps for the evaluation. We compute the MC and QMC until the maximum delay $d_{\textup{max}}=20$.
The result demonstrates that the QMC is reduced when the random classical input is introduced into the QR with increasing input scaling $W/\gamma > 0$.
For a relatively large $W/\gamma > 1.0$, both MC and QMC decrease owing to the localization effect with a large strength of the coherent field $P(t)=P + Wu(t)$.
However, at $W/\gamma \leq 1.0$, we observe a trade-off relation between MC and QMC. Here, increasing $W/\gamma$ from zero can help improve the MC but reduce the QMC.
This observation implies that this QR may not perform well if the target function does not depend on the classical input, and the fluctuation of classical inputs has a strong effect on the QR dynamics.
However, if the target function is the function of the classical and quantum input, we can use the trade-off of MC and QMC to adjust $W/\gamma$ for an optimal performance.
\begin{figure}
\includegraphics[width=16cm]{FIGS13.pdf}
\protect\caption{The dependency of MC and QMC on the input scaling $W/\gamma$ for $N=3,4$-site QR with a constant coherent field strength $P/\gamma=1.0$. We consider the random uniform $\{u_n\}$ for classical inputs and $\{\beta_n\}$ as a random sequence of one-qubit states for quantum inputs with 800 time steps for the training and 200 time steps for the evaluation.
The tomography task is performed with $V=8$ measurement multiplexity, and MC and QMC are calculated until the maximum delay $d_{\textup{max}}=20$.
The solid lines and the shaded areas indicate the median values and the confidence intervals (one standard deviation) calculated in the ensemble of 10 random trials of the input sequence and the QR's configuration, respectively.
\label{fig:mem:tradeoff:cv0}}
\end{figure}
\clearpage
\section{Quantum Readout for Temporal Quantum Learning}
\begin{figure}
\includegraphics[width=8.5cm]{FIGS14.pdf}
\protect\caption{The fidelity error evaluated in preparing the temporal depolarizing quantum channel for varying delays.
We consider both input modes $\hat{\ensuremath{a}}_k$ and reservoir modes $\hat{\ensuremath{c}}_i$ as $N_R=3$ readout nodes and train both interaction coefficients $W^{\textup{in}}_{jk}$ and readout weights.
The blue lines represent $d_c=0$ and the variation of $d_q$. The orange lines represent $d_q=0$ and the variation of $d_c$.
\label{fig:qtask:depolarheat}}
\end{figure}
We present a proof of concept application for quantum tasks using the quantum readout scheme. We use the QR to prepare the target quantum state of a temporal depolarizing quantum channel $\ensuremath{\mathcal{F}}\{(s_l, \beta_l)\} = s_{l-d_c}I/D + (1-s_{l-d_c})\beta_{l-d_q}$, where $\{\beta_l\}$ is randomly generated in a $D$-dimensional Hilbert space, $\{s_l\}$ is a random sequence in $[0, 1]$, and $d_c, d_q$ are the delay times.
Here, we consider a sequence of $200$ one-qubit quantum states for the training and $100$ states for the evaluation. The baseline is computed by setting the output equal to the input.
We use the Nelder\text{--}Mead simplex algorithm~\cite{lagarias:1998:nd} to minimize
the fidelity error
$
\textup{EF}=\sqrt{(1/L)\sum_{l=1}^{L}[1-F(\sigma_l, \hat{\sigma}_l)]^2},
$
where $\sigma_l$ and $\hat{\sigma}_l$ are the target and the prepared quantum states, respectively.
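The scalar cost handed to the Nelder\text{--}Mead optimizer can be sketched as follows, assuming the fidelities $F(\sigma_l, \hat{\sigma}_l)$ over the evaluation sequence have already been computed (the function name is our own):

```python
import math

def fidelity_error(fidelities):
    """EF = sqrt( (1/L) * sum_l (1 - F_l)^2 ) over the evaluation sequence."""
    L = len(fidelities)
    return math.sqrt(sum((1.0 - f) ** 2 for f in fidelities) / L)
```

EF vanishes only for perfect state preparation ($F_l = 1$ for all $l$), and the optimizer adjusts the interaction coefficients and readout weights to drive it toward zero.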
Figure~\ref{fig:qtask:depolarheat} illustrates the average fidelity error as a function of
$d_c$ and $d_q$ over 10 trials. Here, we consider both input modes $\hat{\ensuremath{a}}_k$ and reservoir modes $\hat{\ensuremath{c}}_i$ as $N_R=3$ readout nodes, and both interaction coefficients $W^{\textup{in}}_{jk}$ and readout weights as the training parameters. For $d_q=0$ (orange lines), we can prepare the target channel with a small $d_c < 3$. However, if $d_q > 0$ (blue lines with $d_c=0$), the large increases in the fidelity errors imply that it is difficult to realize previous quantum inputs, which may incur a higher cost for training more readout nodes.
To understand the training process, we show the Nelder\text{--}Mead steps for minimizing the fidelity errors EF in Fig.~\ref{fig:train:delay}(a) ($d_c=d_q=0$) and Fig.~\ref{fig:train:delay}(b) ($d_c=d_q=1$) at different values of the input scaling $W$.
Here, we consider $N_R=4$ readout nodes and 9 trials of input data and the QR's configurations.
The process starts with an initial guess for which EF is large and progressively reaches a minimum value.
The trained preparations are not yet of considerably high quality, especially with a non-zero delay. However, they demonstrate that the hybrid quantum-classical inputs are effectively taken into account in training the quantum readout.
\clearpage
\begin{figure}
\includegraphics[width=15cm]{FIGS15.pdf}
\protect\caption{Nelder\text{--}Mead steps for minimizing the fidelity errors EF with (a) ($d_c=d_q=0$) and (b) ($d_c=d_q=1$) at different values of the input scaling $W/\gamma$. Each graph represents each trial.
Here, we consider both input modes $\hat{\ensuremath{a}}_k$ and reservoir modes $\hat{\ensuremath{c}}_i$ as $N_R=4$ readout nodes and train both in- and out-weight parameters.
\label{fig:train:delay}}
\end{figure}
\section{Introduction}
In many-body theories, generic phenomena are often associated to and characterized by the presence of symmetries~\cite{coleman_1985}. Examples include quantum critical points and topological insulators~\cite{fradkinbook}, whose universal properties are dictated by the presence of microscopic global symmetries, and the confining properties of gauge theories, which are often related to the structure of local conservation laws~\cite{Creutz1997}. While these examples concern the equilibrium properties of matter, the role of symmetries has also been widely investigated in systems out-of-equilibrium, for instance, in connection to universal behavior~\cite{Polkovnikov2011a,Sieberer_2016}.
A paradigmatic phenomenon that - in a sense we specify below - lies 'in-between' equilibrium and out-of-equilibrium is represented by eigenstates of quantum Hamiltonians that, despite belonging to the middle of the energy spectrum, feature properties that are at odds with theoretical expectations based on the eigenstate thermalization hypothesis (ETH)~\cite{Deutsch1991, Srednicki1994,Brandino2012,Delfino_2014}. These states, recently dubbed quantum many-body scars~\cite{Turner2018}, have finite energy density above the ground state and sub-extensive entanglement entropy. Quantum scars have been linked to anomalously slow dynamics observed in laser-dressed Rydberg atom ensembles~\cite{Bernien2017}, and have attracted a considerable theoretical effort~\cite{Turner2018,Turner2018a,Khemani2019,Choi2019,Ho2019,LinMPS2019,Iadecola2019,Surace,Michailidis2019,Moudgalya2020}.
While a number of models supporting quantum many-body scars have recently been found~\cite{Moudgalya2018,Moudgalya2018a,Mark2019,Schecter2019,Bull2019,Iadecola2020,Ok2019,Hudomal2019,Shibata2019,Chattopadhyay2019,Pai2019,Moudgalya2019,Mark2020,Zhao2020,Lee2020}, the general conditions (if any) for stabilizing ETH-violating states are still unknown, and the role of symmetries in this context stands as an open question.
Some of the recent works in this direction link the presence of quantum scars to signatures of integrability~\cite{Khemani2019}, to semiclassical trajectories~\cite{Ho2019,Michailidis2019}, to quasiparticle excitations \cite{LinMPS2019,Iadecola2019} and to the emergence of an algebraic structure~\cite{Choi2019}. Another candidate mechanism was put forward in Ref.~\cite{Surace}, in which scarred bands of Rydberg atom chains are interpreted as special eigenstates that survive the lattice regularization of an integrable field theory. While integrability does not have an immediate counterpart in more than one dimension, the Coleman-Mandula theorem shows how supersymmetry provides a feasible way of extending the set of conservation laws without resulting in a trivial (in the sense of S-matrix being the identity) theory~\cite{ColemanMandula}.
Here, we show how supersymmetry (SUSY) provides a route to formulate lattice models with 'scarred' states in the middle of the spectrum, whose stability is guaranteed as long as supersymmetry itself is not violated. Concretely, we consider $D$-dimensional lattice models of constrained spinless fermions that realize an exact $N=2$ supersymmetry at the lattice level~\cite{Fendley2003,Fendley2003a,Fendley2005,Huijse2008,Huijse2008a,Cheong2009,Beccaria2012,Bauer2013,Hagendorf2013}, and show how these models support scarred eigenstates (as SUSY doublets) in any $D$. After the general proof, we discuss in detail the ladder case, and address the resilience of scarred eigenstates in the presence of supersymmetry-breaking terms.
\section{Supersymmetric lattice models}
The model we study has been introduced in Ref.~\cite{Fendley2003}. The degrees of freedom are spinless fermions $c_{\bf r}$, with ${\bf r}$ being a site on a generic lattice, and the operators satisfy the canonical anticommutation relations $\{c_{\bf r}^\dagger, c_{\bf s}\}=\delta_{{\bf r},{\bf s}}$. The Hamiltonian can be written in terms of the supercharge operators $Q$ and $Q^\dagger$ defined as
\begin{equation}
Q^\dagger = \sum_{\bf r} \alpha^{\mathstrut}_{\bf r} P^{\mathstrut}_{\bf r}\cd{{\bf r}}, \hspace{0.2cm} Q = \sum_{\bf r} \alpha^{\mathstrut *}_{\bf r} P^{\mathstrut}_{\bf r} c^{\mathstrut}_{\bf r}
\end{equation}
where $\alpha_{\bf r}$ is a complex coefficient, and $P_{\bf r}$ is a projector which constrains all the neighbours of site ${\bf r}$ to be unoccupied. The Hamiltonian has the form
\begin{equation}
H = \{Q^\dagger, Q\}.
\end{equation}
The supercharge operators satisfy
\begin{equation}
Q^2=(Q^\dagger)^2=0,\qquad [H, Q]=[H, Q^\dagger]=0.
\end{equation}
In addition to these, the model has a symmetry associated to the fermion number $F=\sum_{\bf r} P_{\bf r} \cd{{\bf r}} c_{\bf r}$, with $[F, Q^\dagger]=Q^\dagger$, $[F, Q]=-Q$.
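These algebraic relations can be verified numerically on a small instance. The sketch below (a self-contained illustration, not part of the analytical argument) builds $Q$, $Q^\dagger$ and $H=\{Q^\dagger,Q\}$ for a four-site open chain with all couplings $\alpha_{\bf r}=1$, a simplifying assumption, using a Jordan-Wigner representation and plain nested-list matrices:

```python
# Numerical check of the lattice SUSY algebra on a 4-site open chain with
# alpha_r = 1. Fermions are represented via a Jordan-Wigner mapping; the
# basis per site is (|empty>, |occupied>).
def kron(A, B):
    p, q = len(B), len(B[0])
    return [[A[i][j] * B[k][l] for j in range(len(A[0])) for l in range(q)]
            for i in range(len(A)) for k in range(p)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B, sign=1):
    return [[a + sign * b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

I2, Z = [[1, 0], [0, 1]], [[1, 0], [0, -1]]
ANNIH, CREA, NUM = [[0, 1], [0, 0]], [[0, 0], [1, 0]], [[0, 0], [0, 1]]

def site_op(local, site, L, jw=True):
    """Embed `local` at `site` with a Jordan-Wigner Z-string on earlier sites."""
    M = [[1]]
    for s in range(L):
        M = kron(M, local if s == site else (Z if (s < site and jw) else I2))
    return M

L = 4
DIM = 2 ** L
ID = [[float(i == j) for j in range(DIM)] for i in range(DIM)]

def projector(r):
    """P_r: all neighbours of r (here r-1 and r+1 on the open chain) empty."""
    P = ID
    for s in (r - 1, r + 1):
        if 0 <= s < L:
            P = matmul(P, add(ID, site_op(NUM, s, L, jw=False), sign=-1))
    return P

Q = [[0.0] * DIM for _ in range(DIM)]
Qdag = [[0.0] * DIM for _ in range(DIM)]
for r in range(L):
    P = projector(r)
    Q = add(Q, matmul(P, site_op(ANNIH, r, L)))
    Qdag = add(Qdag, matmul(P, site_op(CREA, r, L)))

H = add(matmul(Qdag, Q), matmul(Q, Qdag))  # H = {Q^dagger, Q}

def maxabs(A):
    return max(abs(x) for row in A for x in row)
```

On this instance one can check that `maxabs(matmul(Q, Q))` vanishes, and hence that $H$ commutes with the supercharges, as guaranteed by $[H,Q]=Q^\dagger Q^2 - Q^2 Q^\dagger=0$.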
The Hamiltonian can be explicitly rewritten as $H=H_0+V$ with
\begin{equation}
\label{eq:Ham}
H_0 = \sum_{\langle {\bf r}, {\bf s} \rangle}\left(\alpha^{\mathstrut}_{\bf r} \alpha_{\bf s}^{\mathstrut *} P^{\mathstrut}_{\bf r} \cd{{\bf r}} c^{\mathstrut}_{\bf s} P^{\mathstrut}_{\bf s}+ \text{H.c.}\right),
\end{equation}
\begin{equation}
V = \sum_{\bf r} |\alpha_{\bf r}|^2 P_{\bf r}
\end{equation}
where $\langle {\bf r}, {\bf s}\rangle$ indicates pairs of neighbouring sites. Below, we will focus on $D$-dimensional hypercubic lattices of linear dimension $L$: some of the results are extendable to other bipartite lattices. The supersymmetric algebra imposes a specific structure of the spectrum. Eigenstates can be classified in singlets and doublets: all the singlets satisfy $Q\ket{\psi}=Q^\dagger\ket{\psi}=H\ket{\psi}=0$; doublets are pairs of states of the form $\ket{\psi}$, $Q^\dagger\ket{\psi}$ (with the condition $Q\ket{\psi}=0$) and have strictly positive energy.
As discussed in detail in Ref.~\cite{Fendley2003}, this set of constraints realizes a $N=2$ SUSY which is exact at the lattice level.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.46\textwidth]{fig1crop.pdf}
\end{center}
\caption{(a) Exact eigenstate $\ket{\psi_{A,e}}$ in two-leg ladders. Sites belonging to sublattices $A$ and $B$ are colored in blue and orange respectively. Grey rectangles indicate the plaquettes, and for each plaquette there is a fermion in a superposition of the sites on the diagonal as in Eq.~\ref{eq:exstA}. The square (b) and cubic (c) lattices are split into plaquettes. On each plaquette we put a fermion in a superposition between the $A$ sites, in such a way that hopping terms annihilate the state. States with larger fermionic numbers can be constructed by placing two fermions, one on each $A$ site.
}
\label{fig:ladder}
\end{figure}
Before moving to the core of our work, we note that, in 2D and 3D, the models presented above bear strong similarities to the dynamics of fermionic isotopes confined in optical lattices, and laser-dressed with Rydberg $s$- or $p$-states~\cite{PhysRevLett.104.223002,PhysRevLett.104.195302,PhysRevLett.105.160404,Zeiher_2016,Glaetzle:2014dq,mitra2019robust}. In particular, the tunneling dynamics subject to the constraints discussed above has been pointed out in Ref.~\cite{PhysRevB.92.045106, PhysRevLett.111.165302}.
\section{Exact eigenstates at finite energy density: two-leg ladders}\label{sec:exeig}
We now construct exact eigenstates in the middle of the spectrum. These states can be written as product states of square plaquettes, and can be found for any filling $F\ge L/2$, as we will detail in the following subsections. For the sake of readability, we first discuss the conceptually simpler 2-leg ladder case, and then move forward to the generic bipartite lattice in $D$ spatial dimensions.
To set the notation, we define the two sublattices $A$ and $B$ as in Fig.~\ref{fig:ladder}-a, such that each $A$ site has only neighbours of type $B$ and vice versa. We split the ladder into plaquettes (the grey squares in Fig.~\ref{fig:ladder}-a): we can choose to put the plaquettes either between neighbouring even/odd or odd/even rungs. From now on, we choose to place them between even and odd rungs as in Fig.~\ref{fig:ladder}-a. From the set of states that we will construct following this choice, we can then obtain a new set of states by applying a translation of one site along the ladder (the new states will be product states of odd/even plaquettes).
{\it Half-filling. }
Since the total number of fermions $F$ is conserved, we can construct eigenstates with a fixed filling. We first consider the sector $F=L/2$. We define the states $\ket{\psi_{A, e}}$ as follows:
\begin{equation}
\label{eq:exstA}
\ket{\psi_{A,e}}= \prod_{i=0}^{L/2-1}\frac{1}{\mathcal{N}_{i,A}}\left(d^\dagger_{2i,1}-d^\dagger_{2i+1, 2}\right)\ket{0},
\end{equation}
where $d_{i,j}^\dagger = \alpha_{i,j}^{-1}P^{\mathstrut}_{i,j}\cd{i,j}$ and $\mathcal{N}_{i,A}$ is a normalization constant. We choose the convention that the product is ordered from left to right.
The state is constructed as a product state of plaquettes, with a fermion in each plaquette: each fermion sits in a superposition between the two sites of a diagonal (of type $A$).
In order to prove that $\ket{\psi_{A,e}}$ is an eigenstate, it is convenient to treat separately the hopping terms within a plaquette and those between different plaquettes. Within the plaquette, the fermions can hop from sites of the sublattice $A$ to the sublattice $B$: however, the coefficients in the superposition are such that the two contributions from the $A$ sites cancel due to destructive interference for each of the $B$ sites. On the other hand, the terms between different plaquettes would bring a fermion in a site $B$ which cannot be occupied due to the hard-core constraint, and hence annihilate the state. These two arguments prove that
$H_0\ket{\psi_{A, e}}=0$.
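The destructive-interference argument can be verified numerically on a single plaquette. Below is a minimal single-particle sketch in Python, assuming uniform couplings $\alpha_{i,j}=1$ (with one fermion per plaquette the hard-core projectors act trivially, so intra-plaquette hopping reduces to the adjacency matrix of the square; the site labelling is ours):

```python
import numpy as np

# Single plaquette: sites 0=(2i,1) [A], 1=(2i,2) [B], 2=(2i+1,1) [B], 3=(2i+1,2) [A].
# With one fermion in the plaquette the hard-core projectors act trivially,
# so the intra-plaquette hopping is the adjacency matrix of the square.
H0 = np.zeros((4, 4))
for a, b in [(0, 1), (2, 3), (0, 2), (1, 3)]:  # two rungs and two legs
    H0[a, b] = H0[b, a] = 1.0

# Fermion in superposition on the A diagonal: (|site 0> - |site 3>)/sqrt(2)
psi = np.array([1.0, 0.0, 0.0, -1.0]) / np.sqrt(2)

# The two hopping paths into each B site interfere destructively:
print(np.allclose(H0 @ psi, 0))  # True
```

Both $B$ sites receive equal and opposite amplitudes from the two ends of the $A$ diagonal, so the state is annihilated by the intra-plaquette hopping, as claimed.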
The interaction term can also be easily computed by noting that $P_{i,j}=0$ for sites of lattice $B$ and $P_{i,j}=1$ for those of lattice $A$. Therefore we have
\begin{equation}
H\ket{\psi_{A, e}}=V\ket{\psi_{A, e}}=\sum_{(i,j)\in A}|\alpha_{i,j}|^2\ket{\psi_{A, e}}.
\end{equation}
We can similarly construct the state $\ket{\psi_{B,e}}$, having fermions on sublattice $B$,
\begin{equation}
\label{eq:exstB}
\ket{\psi_{B,e}}= \prod_{i=0}^{L/2-1}\frac{1}{\mathcal{N}_{i,B}}\left(d^\dagger_{2i,2}-d^\dagger_{2i+1, 1}\right)\ket{0}.
\end{equation}
As anticipated, other two states can be obtained by applying the translation operator, namely $\ket{\psi_{A/B,o}}=T\ket{\psi_{B/A,e}}$.
We note that, while eigenstates that occupy different sublattices are orthogonal ($\braket{\psi_{A,\, \cdot}}{\psi_{B,\, \cdot}}=0$), the eigenstates defined on the same sublattice have the same energy and are not orthogonal ($\braket{\psi_{A,e}}{\psi_{A,o}}\neq 0$ and $\braket{\psi_{B,e}}{\psi_{B,o}}\neq 0$), but they are linearly independent.
These states have energy $E_{A/B} =\sum_{(i,j)\in A/B}|\alpha_{i,j}|^2$: being eigenstates at a finite energy density above the zero-energy ground state, their entanglement entropy is expected to be proportional to the volume $L$. This is not the case: when the ladder is cut in two, the entanglement entropy is either 0 (if the cut is between two plaquettes) or a finite quantity (if the cut is within a plaquette). These eigenstates satisfy an area law entanglement at a finite energy density and hence they qualify as many-body quantum scars.
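The value of the entropy for a cut through a plaquette follows from the shared fermion alone: its equal-weight superposition between the two halves contributes exactly $\ln 2$. A minimal numerical sketch (the two-level occupation basis per half-plaquette is an illustrative truncation):

```python
import numpy as np

# Cut within one plaquette: the only entangled degree of freedom is the fermion
# shared between the two halves.  In the occupation basis {|0>,|1>} of each half,
# the plaquette state is (|1>_L |0>_R - |0>_L |1>_R)/sqrt(2).
psi = np.array([[0.0, -1.0], [1.0, 0.0]]) / np.sqrt(2)  # psi[nL, nR]

rho_L = psi @ psi.conj().T          # reduced density matrix of the left half
evals = np.linalg.eigvalsh(rho_L)   # -> (1/2, 1/2): maximally mixed
S = -sum(p * np.log(p) for p in evals if p > 1e-12)
print(S, np.log(2))                 # both ~0.6931
```

A cut between plaquettes crosses no shared fermion, so the entropy vanishes there; either way the result is $L$-independent.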
{\it Above half-filling.} For a number of fermions $F>L/2$, further exact eigenstates can be similarly constructed as product states of plaquettes.
We start from one of the four states $\ket{\psi_{A/B, e/o}}$, and we choose $F-L/2$ plaquettes on which to increase the fermion occupancy from $1$ to $2$ fermions: on the selected plaquettes we place fermions on both sites of the diagonal. For example, we can add a fermion to the $j$-th plaquette on top of the state $\ket{\psi_{A,e}}$ by substituting $(d^\dagger_{2j,1}-d^\dagger_{2j+1,2})/\mathcal{N}_{j,A}$ with $P^{\mathstrut}_{2j,1}\cd{2j,1}P^{\mathstrut}_{2j+1,2}\cd{2j+1,2}$ in the product in Eq.~\ref{eq:exstA}. In this way, we obtain $\binom{L/2}{F-L/2}$ states, one for each choice of the positions of the doubly occupied plaquettes.
With the same argument used for the states at filling $F=L/2$, it is possible to prove that these states are annihilated by $H_0$ and are eigenstates of $V$ with eigenvalue $\sum_{(i,j)\in A/B}|\alpha_{i,j}|^2$.
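Counting these states is straightforward: there is one state per placement of the doubly occupied plaquettes among the $L/2$ plaquettes of the covering. A quick sketch (the function name is ours):

```python
from math import comb

def n_scar_states(L, F):
    """Number of above-half-filling product eigenstates built on a fixed
    plaquette covering: choose which of the L/2 plaquettes hold 2 fermions."""
    return comb(L // 2, F - L // 2)

print(n_scar_states(12, 6))  # 1: half filling, no doubly occupied plaquette
print(n_scar_states(12, 8))  # 15: two doubly occupied plaquettes out of 6
```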
\section{Exact scars in $D$-dimensional hypercubic lattices}
We now generalize the construction of exact eigenstates for the square ladder presented above to hypercubic lattices in dimension $D$. To do so, we group all the sites of the lattice into square plaquettes, and we construct the eigenstates as product states of plaquettes. We define the two sublattices $A$ and $B$, such that neighbouring sites belong to different sublattices. We find two classes of eigenstates: in $A$-states ($B$-states) fermions occupy the sites on sublattice $A$ ($B$) only. To construct the states, on each plaquette we create either one or two fermions on the ($A/B$) diagonal, with the same operators as in the ladder. A pictorial representation of one of these states is shown in Fig.~\ref{fig:ladder} for $D=2$ and $D=3$.
The number of exact eigenstates depends on the number of ways in which the lattice sites can be grouped into square plaquettes and it grows with the system size. For example, in the specific case $D=2$ and $F=L_x L_y/4$ (where $L_x$ and $L_y$ are even and are the number of sites in the $x$ and $y$ directions), we can construct $2^{L_x/2}+2^{L_y/2}-2$ different states for each sublattice ($A$ or $B$).
\section{Spectral statistics in two-leg ladders}
In the previous section we found an extensive number of states with finite energy density and an entanglement entropy which does not depend on $L$. We now show that the rest of the spectrum for a two-leg ladder with periodic boundary conditions is compatible with ETH.
We study the model in Eq.~\ref{eq:Ham} using exact diagonalization. Since the construction above works for arbitrary, site-dependent coefficients $\alpha_{i,j}$, we choose random real coefficients $\alpha_{i,j}$ from a uniform distribution in the interval $[1,2)$, and average over a certain number of disorder realizations. We compute the spectrum in the sector with fermionic number $F=L/2$. Thanks to the supersymmetric algebra, the Hilbert space can be split into three sectors: (i) $\mathcal{H}_{Q^\dagger} = \{\ket{\psi} : Q\ket{\psi}=0$, $Q^\dagger\ket{\psi}\neq0\}$,
(ii) $\mathcal{H}_{Q} = \{\ket{\psi} : Q\ket{\psi}\neq 0$, $Q^\dagger\ket{\psi}=0\}$, (iii) $\mathcal{H}_{0} = \{\ket{\psi} : Q\ket{\psi}= 0$, $Q^\dagger\ket{\psi}=0\}$.
The Hamiltonian is block-diagonal in these sectors: the states of the last sector are singlets with energy $E=0$; we focus on the other two sectors, where the structure of the spectrum is non-trivial. We remark that each state of these sectors belongs to a SUSY doublet and hence has a SUSY partner with the same energy, but different fermionic number ($F=L/2+1$ for the first sector and $F=L/2-1$ for the second sector). Therefore, no degeneracies and no other conservation laws are expected in the spectrum we analyze. To test the validity of the ETH for the majority of the eigenstates, we study the ratio between nearby gaps
\begin{equation}
r_n = \frac{ \mathrm{Min} \{ \Delta E_n , \Delta E_{n + 1 } \} }{ \mathrm{Max} \{ \Delta E_n , \Delta E_{n + 1 } \} }.
\end{equation}
Here $\Delta E_n = E_{n}-E_{n-1}$, with $n$ labelling the eigenvalues $E_n$ of $H$ in increasing order, for a given disorder realization.
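A minimal sketch of this diagnostic (the function name is ours; the sanity checks use a picket-fence spectrum, where every ratio is 1, and uncorrelated levels, where $\langle r\rangle = 2\ln 2 - 1 \approx 0.386$):

```python
import numpy as np

def mean_gap_ratio(energies):
    """Average r_n = min(dE_n, dE_{n+1}) / max(dE_n, dE_{n+1}) over a sorted spectrum."""
    E = np.sort(np.asarray(energies, dtype=float))
    gaps = np.diff(E)
    r = np.minimum(gaps[:-1], gaps[1:]) / np.maximum(gaps[:-1], gaps[1:])
    return r.mean()

# Equally spaced levels: every ratio equals 1.
print(mean_gap_ratio(np.arange(100)))                         # 1.0
# Uncorrelated (Poissonian) levels: <r> = 2 ln 2 - 1 ~ 0.386.
rng = np.random.default_rng(0)
print(round(mean_gap_ratio(rng.random(200000)), 2))           # ~0.39
```

For Wigner-Dyson (GOE) statistics the same average approaches $r_{WD}\simeq 0.536$, the reference value used in Fig.~\ref{fig:rstat}.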
We then average $r_n$ over $n$ and over $100$ disorder realizations; we consider the full energy spectrum. The results, plotted in Fig.~\ref{fig:rstat} clearly show that in both sectors $r$ converges to the value expected for a Wigner-Dyson distribution $r_{WD}=0.536$ for increasing $L$, and thus validate the assumption that the majority of the eigenstates satisfy the ETH.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.35\textwidth]{rstat.pdf}
\caption{Average level spacing ratio as a function of the number of rungs $L$ in the two sectors of non-zero energy states. The grey line indicates the value predicted for Wigner-Dyson spectral statistics. For increasing $L$, in both sectors $r$ flows towards $r_{WD}$, signalling compatibility with the ETH.
}
\label{fig:rstat}
\end{center}
\end{figure}
We then check that the eigenstates we found are the only anomalous states in the spectrum. We choose coefficients $\alpha_{i,j}=1$ for all sites.
In Fig.~\ref{fig:SvsE}-a, we show the half-chain entanglement entropy for ladders of $L=12,14$ rungs with $L/2$ fermions in the translation- and reflection-invariant sector. In both sectors $\mathcal{H}_{Q^\dagger}$ and $\mathcal{H}_{Q}$, the majority of the eigenstates approximate a smooth profile with large entanglement in the middle of the spectrum, as expected in an ergodic system. A single outlier (circled in red) with anomalously small entanglement entropy is present in a region of high energy density and corresponds to the translation- and reflection-invariant superposition of the eigenstates defined above. Similar conclusions are corroborated by the analysis of diagonal correlations, depicted in Fig.~\ref{fig:SvsE}-b.
\section{Robustness to perturbations and connection to the Shiraishi-Mori construction}
We now discuss the stability of SUSY scarred eigenstates with respect to external perturbations. As discussed above, the states are stable under arbitrary supersymmetric perturbations. In particular, the construction above does not rely on any specific structure of the coefficients $\alpha_{i,j}$. In this section, we will investigate the robustness of these scarred eigenstates to other perturbations, which break the supersymmetry of the model.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{fig4.pdf}
\caption{
(a) Bipartite entanglement entropy as a function of the energy of the eigenstates in the translation- and reflection-invariant sector.
(b) Expectation value of the local observable $n_{j,1}n_{j+1,2}$ as a function of the energy of the eigenstates in the translation- and reflection-invariant sector. Blue (orange) dots correspond to states in the sector $\mathcal{H}_{Q^\dagger}$ ($\mathcal{H}_{Q})$.
}
\label{fig:SvsE}
\end{center}
\end{figure}
As a first case, we consider the Hamiltonian $H_{\eta}=H_0+\eta V$
in any $D$. If we move away from the supersymmetric point $\eta=1$, the Hamiltonian does not commute with the supercharges and the spectrum cannot be split in sectors. However, since the scars we construct (both for half and higher filling) are simultaneous eigenstates of $H_0$ and $V$, they are exact eigenstates of $H_\eta$ for arbitrary $\eta$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig6.pdf}
\end{center}
\caption{
Bipartite entanglement entropy as a function of the energy of the eigenstates for different values of $\lambda$ ($L=14$). The eigenstates are in the translation- and reflection-invariant sector.
}
\label{fig:SvsEpert}
\end{figure}
Next, we consider the ladder case, and a perturbation of the type $H_\lambda=\lambda\sum_{i=0}^{L-1}(n_{i,1}n_{i+1,2}+n_{i,2}n_{i+1,1})$, with $n_{i,j}=\cd{i,j}c^{\mathstrut}_{i,j}$. The scars we construct for $\lambda=0$ are not exact eigenstates for $\lambda\neq 0$. We perform a numerical analysis following a previous study of perturbations in constrained spin chains~\cite{Lin_stability2019}.
In Fig.~\ref{fig:SvsEpert} we plot the bipartite entanglement entropy as a function of the energy for different values of $\lambda$. For $\lambda=0$ there is a single scar in the half-filling case in the translation- and reflection-invariant sector. For some values of $\lambda$, this scar hybridizes strongly with the continuum of states belonging to the thermal cloud, but, excluding those points, small values of the entropy persist in a large region of $\lambda$ (large with respect to the average gap at this energy density). As is clear from Fig.~\ref{fig:SvsEpert}, the scarred state undergoes a large number of level crossings as $\lambda$ is varied but its entanglement entropy remains anomalously small. The phenomenology is extremely similar to the case of constrained spin models, and, while the system sizes here are insufficient to draw conclusions that hold in the thermodynamic limit, we can still observe the same type of resilience of scarred features at finite size.
In terms of physical implementations, the models we discussed have been partly addressed in works related to fermionic Rydberg-dressed atoms (at least, for the case of ladders). We note, however, that in terms of experimental signatures the connection to experiments requires some extra care with respect to other spin models. In order to have long-time coherent oscillations, like the ones observed in \cite{Bernien2017}, a set of equally-spaced energy eigenstates is needed. In our case, this could be achieved by adding a chemical potential, which shifts the scars according to the number of particles. However, to detect the oscillations, one should be able to prepare an initial state in a superposition with different numbers of particles. While this might be possible for spin systems (a similar mechanism is used, for example, in \cite{Schecter2019}), it is not feasible for number-conserving fermionic particles. A more direct experimental proof of the existence of scars would be obtained using the scar itself as initial state of the dynamics: every observable should remain approximately constant in time.
We comment on the connection of the eigenstates discussed above with the Shiraishi-Mori construction for embedding ETH-violating states in an otherwise ergodic spectrum~\cite{Shiraishi2017}. The construction consists of local projectors $P_j$ and a subspace $\mathcal T$ of the Hilbert space satisfying $P_j \mathcal T=0$. Then the Hamiltonian
\begin{equation}\label{eq:shmo}
H=\sum_j P_j h_j P_j + H' \hspace{1cm} [H', P_j]=0
\end{equation}
has candidate scarred eigenstates in the subspace $\mathcal T$.
It can be shown that the Hamiltonian~(\ref{eq:Ham}) can be recast in the form of Eq.~(\ref{eq:shmo}) with $\mathcal{T}$ being the subspace with a single scar state. We examine, for instance, the scar $\ket{\psi_{A,e}}$ in Eq.~(\ref{eq:exstA}). To prove the construction, we define $P_j$ as a local projector acting on the $j$-th plaquette and on its neighbours,
\begin{equation}
P_j=1-\ket{j_{A,e}}\bra{j_{A,e}},
\end{equation}
\begin{equation}
\ket{j_{A,e}}=\prod_{i=j-1}^{j+1}\frac{1}{\mathcal{N}_{i,A}}\left(d^\dagger_{2i,1}-d^\dagger_{2i+1, 2}\right)\ket{0}_i.
\end{equation}
This projector annihilates the state that has a single fermion in a superposition on the $A$ diagonal in each of the plaquettes considered (as in the state $\ket{\psi_{A,e}}$) and acts trivially on the other states. We find that the term $V$ commutes with the projectors $P_j$ and corresponds to $H'$ in the Shiraishi-Mori construction. The hopping terms, on the other hand, need some further manipulation. We define $h_j$ as made of two parts: (i) the sum of the hopping terms in the $j$-th plaquette, (ii) the sum of the hopping terms between the $j$-th plaquette and its neighbors (with a factor $1/2$). With this definition, we see that $h_j=P_j h_j P_j$ and $H_0=\sum_j h_j$, resulting in the desired form of Eq.~(\ref{eq:shmo}). This construction can be applied to the other scars, and to the case of higher dimensionality.
Each scar represents an isolated embedded subspace, and hence its entanglement entropy does not scale with $L$.
\section{Conclusions and outlook} We have shown that $N=2$ supersymmetric lattice models display weak-ergodicity breaking in the form of scarred eigenstates in any $D$-dimensional hypercubic lattice.
SUSY is not a sufficient ingredient for quantum scars in $D>1$: for instance, even within the model we consider, the spectrum at low filling does not feature ergodicity breaking. Instead, we find it important to emphasize that the results reported here underline that insights from quantum field theory - in our case, provided by the Coleman-Mandula theorem - can provide a very simple tool to diagnose conditions that favor quantum scarring, complementary to other approaches based on exact lattice solutions, which are typically applicable to single models~\cite{LinMPS2019,Moudgalya2018,Moudgalya2018a,Mark2020}. It is important to stress that it would not be sufficient for a lattice model to recover SUSY as a low-energy symmetry, since the phenomena we are concerned with require a finite energy density above the ground state. Since formulating explicit supersymmetric theories on the lattice is challenging, it remains an open question whether there exist additional features that, in combination with SUSY, can guarantee the appearance of quantum scars in given lattice models. To resolve such questions, it would thus be important to formulate lattice models with richer supersymmetric structures, and investigate their SUSY-specific dynamical effects~\cite{Cubero_2016}.
\begin{acknowledgments}
We acknowledge several useful discussions with D. Abanin, P. Fendley, W. Ho, M. Lukin, K. Papadodimas, and H. Pichler, and thank A. Gambassi, A. Lerose, and P. P. Mazza for collaboration on a related work. This work is partly supported by the ERC under grant number 758329 (AGEnTh), by the Quantera programme QTFLAG, and has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 817482 (Pasquans). This work has been carried out within the activities of TQT.
{\it Note added. --} During completion of this work, a preprint appeared~\cite{lin2020quantum} showing how the bosonic counterpart of some of the states we discuss are also present in the 2D PXP model in the middle of the spectrum, where scarring was already observed in numerical simulations~\cite{michailidis2020stabilizing}.
\end{acknowledgments}
\bibliographystyle{unsrtaipauth4-1.bst}
\section{Introduction}
In the Standard Model (SM), right handed fermions do not couple to W
and their couplings to Z are proportional to the electric charge. Compelling
tests of this feature exist for leptons, whereas for quarks available tests
are less conclusive due to the interference with non-perturbative QCD effects.
Another characteristics of the right-handed sector of the SM is a rather
complicated and a priori unexplained spectrum of weak hypercharges. (Since
the seventies, the latter has motivated left-right symmetric extensions
of the SM~\cite{LR} which shed a new light on
the EW couplings of right handed fermions.) None of the
above features of the SM follow from the EW symmetry
$S_{EW}= SU(2)_W \times U(1)_Y$, as long as the latter is spontaneously broken:
Indeed, with the help of agents of Symmetry Breaking (Higgs fields), it is possible
to construct $S_{EW}$ invariant couplings of right handed
fermions to W. This fact suggests to look for eventual modifications of the
right-handed couplings as a
conceivable signal of a non standard EW symmetry breaking.
Model independent tests of EWSB require first of all a
``bottom-up'' Effective Theory approach which starts from the known vertices
of the SM and step
by step in a low-energy expansion controlled by a {\bf power counting}
orders possible non standard effects according to their importance at low
energies. Next, it should be specified how the lepton - quark universality
could be naturally broken at subleading orders to escape strong
experimental constraints concerning leptons.
Such a class of LEETs was proposed three
years ago~\cite{HS1} and further developed and completed later~\cite{HS2}.
In this talk, I am
going to review the characteristic feature of this class: The appearance at
NLO of couplings of right
handed quarks to W and modification of their couplings to Z. Then I will
comment on first attempts to confront these predictions with
experiment~\cite{BOPS}.
\section{Not quite Decoupling EW Low Energy Effective Theory (LEET)}
In its minimal version, the LEET contains the naturally light particles of the
SM: the $SU(2)\times U(1)$ gauge fields, chiral fermions (including right-handed
neutrinos) and the triplet of Goldstone bosons (GBs). For small
momenta $p \ll 4\pi F_W = \Lambda_W \sim 3~\mathrm{TeV}$, the effective Lagrangian is
written as a low-energy expansion
\begin{equation}
\mathcal{L}_{\mathit{eff}} = \sum_{d \ge 2} \mathcal{L}_{d} , \quad
\mathcal{L}_{d} = \mathcal{O}([p/\Lambda_W]^d)~,
\end{equation}
\noindent where the infrared dimension of a local operator,
$d = n_{\delta} + n_g + n_f/2$ is
given by the number of derivatives, the number of gauge couplings and the
number of fermion fields. A Feynman diagram with $L$ loops and effective
vertices $v$ of dimensions $d_v$ counts at low energy as $\mathcal{O}(p^d)$, where
\begin{equation}
d = 2 + 2 L + \sum_{v} (d_v - 2)~.
\label{powercounting}
\end{equation}
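Both counting rules, the infrared dimension of an operator and the low-energy order of a diagram in Eq.~(2.2), are elementary to evaluate; a quick sketch (function names are ours):

```python
def vertex_dim(n_deriv, n_gauge, n_fermi):
    """Infrared dimension of a local operator: d = n_delta + n_g + n_f/2."""
    return n_deriv + n_gauge + n_fermi / 2

def diagram_dim(n_loops, vertex_dims):
    """Low-energy order of a diagram: d = 2 + 2L + sum_v (d_v - 2)."""
    return 2 + 2 * n_loops + sum(d - 2 for d in vertex_dims)

# Tree diagrams built from leading-order (d_v = 2) vertices stay O(p^2) ...
print(diagram_dim(0, [2, 2, 2]))  # 2
# ... while each loop costs two extra powers of p:
print(diagram_dim(1, [2, 2]))     # 4

# Example vertex: the fermion kinetic term (one derivative, two fermion fields):
print(vertex_dim(1, 0, 2))        # 2.0
```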
\noindent The LEET is renormalizable order by order in the LE expansion, provided at
each order, all terms allowed by symmetries are effectively included in (2.1).
In particular, the symmetry of the LEET $S_{nat} \supset S_{EW}$ must prevent
all ``unwanted `` non standard vertices to appear already at the leading order
$\mathcal{O}(p^2)$. In a bottom-up approach, the higher symmetry $S_{nat}$
is unknown apriori (it is the remnant of the not quite decoupled
high energy sector of the theory), but it can be inferred requiring that
the leading order $\mathcal{L}_{2}$ of the LEET coincides with the
Higgs-free part of the SM
Lagrangian. I refer to~\cite{HS2}, where it is shown that the {\bf minimal solution}
of this condition reads
\begin{equation}
S_{\mathit{nat}} = \left [ SU(2)_{G_L} \times SU(2)_{G_R} \times U(1)^{B-L}_{G_B} \right ]_{\mathit{elem}}
\times \left [ SU(2)_{\Gamma_L} \times SU(2)_{\Gamma_R} \right ]_{\mathit{comp}}~.
\label{snat}
\end{equation}
The Goldstone boson matrix $\Sigma(x) \in SU(2)$ (needed to give masses to W
and Z) transforms according to a different local chiral symmetry
\begin{equation}
\Sigma(x) \to \Gamma_L(x) \Sigma(x) \left[\Gamma_R(x)\right ]^{-1}
\end{equation}
than the chiral fermion doublets and the elementary gauge fields coupled to
fermions
\begin{equation}
\psi_{L/R} \to G_{L/R}\exp\left[-i \frac{B-L}{2} \alpha \right] \psi_{L/R}~.
\label{trafofermions}
\end{equation}
The most general Lagrangian of dimension $d=2$ invariant under the linear
action of the symmetry $S_{nat}$ reads
\begin{eqnarray}
\mathcal{L} \left( p^2 \right) &=& \frac{F_W^2}{4} \left\langle D_{\mu}
\Sigma^{\dag} D^{\mu} \Sigma \right\rangle + i\, \overline{\psi_L}
\gamma^{\mu} D_{\mu} \psi_L + i\, \overline{\psi_R} \gamma^{\mu} D_{\mu}
\psi_R \nonumber \\
&& - \frac{1}{2} \left\langle G_{L \mu \nu} G_L^{\mu \nu} + G_{R \mu \nu}
G^{\mu \nu}_R \right\rangle - \frac{1}{4} G_{B \mu \nu} G_B^{\mu \nu}~.
\label{lag}
\end{eqnarray}
It contains several gauge fields not observed at low energies $E< \Lambda_{W}$,
no fermion masses and a gauge boson mass term which has no obvious connection
with the SM. Nevertheless, the above Lagrangian reduces to the one of the SM
upon imposing $S_{nat}$-invariant constraints eliminating the redundant
gauge fields through pairwise identification of different gauge factors
up to a gauge transformation. (Notice that these constraints break the
accidental L-R symmetry present in (2.6).) An example of such a constraint is
\begin{equation}
\Gamma_{L,\mu} = \mathcal{X} g_L G_{L,\mu} \mathcal{X}^{-1} +
i \mathcal{X} \partial_{\mu} \mathcal{X}^{-1}
\label{xconstraint}
\end{equation}
which replaces $SU(2)_{G_{L}} \times SU(2)_{\Gamma_{L}}$ by its diagonal
subgroup (identified with the SM weak isospin) and a scalar object
$\mathcal{X}$ which is a (constant) multiple of a $SU(2)$ matrix, and is
called ``spurion''. Similarly, one identifies up to a gauge
$\Gamma_{R,\mu} \sim g_{R} G_{R,\mu} \sim g_{B} G_{B,\mu} \tau_{3}/2$.
We then remain with the gauge fields of the SM, receiving standard masses
and mixing through the first term in (2.6) and coupled in the standard way
to fermions. In addition, we now have three $SU(2)$ valued spurions
$\mathcal{X}$, $\mathcal{Y}$ and $\omega$
\begin{equation}
\mathcal{X}(x) = \xi \Omega_L(x),\quad \Omega_L(x) \in SU(2), \quad
\mathcal{Y} = \eta\, \Omega_R,\quad \Omega_R \in SU(2), \quad
\omega = \zeta \,\Omega_{B},\quad \Omega_{B} \in SU(2)~,
\end{equation}
populating the coset space $S_{nat}/S_{EW} = SU(2)^3$. To maintain
invariance under $S_{nat}$, the spurions have to transform as
\begin{equation}
\mathcal{X} \to \Gamma_{L} \mathcal{X} G^{-1}_{L}, \quad
\mathcal{Y} \to \Gamma_R \, \mathcal{Y}\, G_R^{-1}, \quad
\omega \to \Gamma_{R}\, \omega \, G^{-1}_{B} .
\end{equation}
Consequently, the constraints selecting $S_{EW} = SU(2)_W \times U(1)_{Y}$ of
the SM as the maximal subgroup of $S_{nat}$ that is linearly realized at low
energies can be equivalently written as
\begin{equation}
D_{\mu} \mathcal{X} = 0 ,\quad
D_{\mu} \mathcal{Y} = 0 ,\quad
D_{\mu} \omega = 0 ~
\end{equation}
indicating that spurions do not propagate. There exists a gauge in which
the spurions reduce to three real parameters $\xi$, $\eta$ and $\zeta$
which are exterior to the SM and whose magnitude is not fixed by the LEET.
They will be considered as {\bf small expansion parameters} describing
effects beyond the SM.
The physical origin of spurions satisfying the constraints (2.10)
can be understood as resulting from a particular non-decoupling limit
of an ordinary Higgs mechanism in which both Higgs bosons and some
combinations of gauge fields become very massive. Massive gauge fields
decouple, whereas heavy Higgs fields reduce to non-propagating spurions, defining a
non-linear realization of the symmetry $S_{nat}/S_{EW}$.
Spurions are {\bf needed} to write down $S_{nat}$ invariant
fermion masses. Consequently, the latter will be suppressed with respect
to the scale $\Lambda_{W}$ by powers of spurion parameters $\xi$ and $\eta$.
The least suppressed mass - the top mass - will be proportional to the
product
\begin{equation}
\xi \eta \sim m_{top} / \Lambda_{W} = \mathcal{O}(p) , \quad
d^* = d + \frac{1}{2} ( n_{\xi} + n_{\eta} ).
\end{equation}
This suggests extending the low-energy power counting to spurions by introducing
the chiral dimension $d^{\star}$ defined above. This guarantees that both
the fermion mass term and the Lagrangian (2.6) have $d^{\star} = 2$,
characteristic of the leading order of the LEET. Notice that the power
counting formula (2.2) also holds with $d$ replaced by $d^{\star}$.
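The extended counting can be checked against the statements above; a quick sketch (the function name is ours):

```python
def chiral_dim(n_deriv, n_gauge, n_fermi, n_xi=0, n_eta=0):
    """Spurion-extended counting: d* = n_delta + n_g + n_f/2 + (n_xi + n_eta)/2."""
    return n_deriv + n_gauge + n_fermi / 2 + (n_xi + n_eta) / 2

# Fermion mass term: two fermion fields dressed by one power each of xi and eta,
# hence d* = 2, the same order as the leading Lagrangian.
print(chiral_dim(0, 0, 2, n_xi=1, n_eta=1))  # 2.0

# NLO operator O_L below: one derivative, two fermion fields, two powers of the
# X spurion, hence d* = 3.
print(chiral_dim(1, 0, 2, n_xi=2))           # 3.0
```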
The third spurion $\omega$
breaks B-L, which is thus predicted to be a part of the LEET. Consequently,
the parameter $\zeta\ll\xi \sim \eta$ naturally accommodates the smallness
of lepton number violation (LNV) and of the Majorana masses.
\section{Next to Leading Order (NLO)}
The NLO consists of all $S_{nat}$ invariant operators of the
chiral dimension $d^{\star} = 3 $ . There are two and only two such
operators: they describe non standard couplings of fermions to W and Z
and they are suppressed by two powers of spurions $\mathcal{X}$ or
$\mathcal{Y}$ :
\begin{equation}
\mathcal{O}_L = \bar \psi_L \mathcal{X}^{\dagger}\gamma^\mu \Sigma D_{\mu}\Sigma^{\dagger}\mathcal{X}\psi_L ,
\end{equation}
for left handed fermions, whereas for right handed fermions one has
\begin{equation}
\mathcal{O}_R^{a,b} = \bar\psi_R \mathcal{Y}^{\dagger}_{a} \gamma^\mu \Sigma^{\dagger}
D_{\mu}\Sigma \mathcal{Y}_{b}\psi_R .
\end{equation}
where $a,b \in \{U, D\}$ label covariant projections on the Up and
Down components of right handed doublets. These operators already carry their
respective suppression factors, they are $\mathcal{O}(p^2 \xi^2)$ and
$\mathcal{O}(p^2 \eta^2)$ respectively. The full $d^{\star} = 3$ part of
the effective Lagrangian can be written as
\begin{equation}
\mathcal{L}_{\text{NLO}} = \rho_L \mathcal{O}_L(l) + \lambda_L \mathcal{O}_L(q)+
\sum_{a,b} \rho_R^{a,b} \mathcal{O}^{a,b}_R(l) + \sum_{a,b}
\lambda_R^{a,b} \mathcal{O}_R^{a,b}(q)
\end{equation}
where $\rho$ and $\lambda$ are dimensionless low-energy constants which
should be of order one (unless suppressed by an additional symmetry).
The NLO couplings still respect the family symmetry. On the other hand,
at this subleading order the lepton-quark universality can be broken,
i.e. $\rho \neq \lambda$, by the existence of an additional reflection
symmetry $\nu_{R} \to -\nu_{R}$ which does not exist for quarks.
couplings to gauge fields (at LO, $\nu_{R}$ decouples). It allows the right
handed neutrino to get a {\bf small Majorana mass} of the order
$\mathcal{O}(\zeta^2 \eta^2)$, i.e. of a comparable size to left handed
Majorana mass $\mathcal{O}(\zeta^2 \xi^2)$ and to the strength of LNV.
On the other hand, the reflection symmetry $\nu_{R} \to - \nu_{R}$ forbids
the Dirac neutrino-mass and could provide a natural explanation of the observed
smallness of neutrino masses. A corollary of this ``anti see-saw''
mechanism~\cite{HS2} of suppression of neutrino masses is the
{\bf suppression of charged leptonic right-handed currents}
i.e. $\rho_{R}^{UD} = 0$ in Eq (3.3).
\section{Couplings to W}
Let us concentrate on couplings of fermions to
W. Using the matrix notation in the family space
$\mathrm{U}= (u,c,t)^T,~\mathrm{D}= (d,s,b)^T,~\mathrm{N}= (\nu_e, \nu_\mu,
\nu_\tau)^T,~\mathrm{L}= (e, \mu, \tau)^T $
and using the mass - diagonal basis, the couplings
to W up to and including NLO become
\begin{eqnarray}
\mathcal{L}_{\text{W}} &=& \frac{e (1-\xi^2 \rho_L)}{\sqrt{2} s}
\left\{ \bar{\mathrm{N}}_L V_{\mathrm{MNS}} \gamma^{\mu} L_L
+(1+\delta)
\bar{\mathrm{U}}_L V_L \gamma^{\mu} D_L +
\epsilon
\bar{\mathrm{U}}_R V_R \gamma^{\mu} D_R\right\}
W_{\mu}^+ + \text{h.c}~.\nonumber \\
\end{eqnarray}
$V_{L}$ and $V_{R}$ are two independent {\bf unitary} mixing
matrices resulting (as in SM) from the diagonalization of quark masses.
The (small) spurionic parameters $\delta = (\rho_L - \lambda_L) \xi^2$ and
$\epsilon = \lambda_R ^{UD} \eta^2$ describe the chiral generalization of
the CKM mixing induced by right-handed currents (RHCs). Notice, in particular, that effective
EW couplings in the vector and axial channels (more directly accessible
than $V_L$ and $V_R$)
\begin{eqnarray}
\mathcal{V}_{\mathit{eff}}^{ij}&=&(1+\delta) V_L^{ij}+\epsilon V_R^{ij}+\mathrm{NNLO}~,\nonumber\\
\mathcal{A}_{\mathit{eff}}^{ij}&=&-(1+\delta) V_L^{ij}+\epsilon V_R^{ij}+\mathrm{NNLO}~,
\end{eqnarray}
need not be unitary. The signal of RHCs can be detected as
$\mathcal{V}_{\mathit{eff}}^{ij} \neq - \mathcal{A}_{\mathit{eff}}^{ij}$, i.e. comparing vector and axial vector
transitions.
Particular attention should be paid to {\bf light
quarks $u, d, s$}, for which the chirality breaking effects are tiny.
In this sector all EW effective couplings can be expressed in terms of
$\delta$ and three parameters
\begin{equation}
\epsilon_{ns}= \epsilon\ \mathrm{Re}
\Bigl{(}\frac{V_R^{ud}}{V_L^{ud}}\Bigr{)}, \quad
\epsilon_{s} =\epsilon\
\mathrm{Re} \Bigl{(}\frac{V_R^{us}}{V_L^{us}}\Bigr{)}~, \quad
\mathcal{V}^{ud}_\mathit{eff} = 0.97377(26)\equiv \cos\hat{\theta}
\end{equation}
where $\mathcal{V}_{\mathit{eff}}^{ud}$ is determined from $0^+ \to 0^+$ nuclear
transitions~\cite{PDG}.
Using further the unitarity of $V_{L}$ and neglecting $|V_{L}^{ub}|^2$,
all light quark effective couplings can be expressed as
\begin{eqnarray}
|\mathcal{V}^{ud}_\mathit{eff}|^2 &=& \cos^2\hat\theta \nonumber\\
|\mathcal{A}^{ud}_\mathit{eff}|^2 &=& \cos^2\hat\theta\,( 1 - 4 \, \epsilon_{ns}) \nonumber\\
|\mathcal{V}^{us}_\mathit{eff}|^2 &=& \sin^2\hat\theta\, (1 +
2\frac{\delta+\epsilon_{ns}}{\sin^2\hat\theta}) (1 + 2\, \epsilon_{s} - 2\, \epsilon_{ns}) \nonumber\\
|\mathcal{A}^{us}_\mathit{eff}|^2 &=& \sin^2\hat\theta\, (1 +
2\frac{\delta+\epsilon_{ns}}{\sin^2\hat\theta}) (1 -2\, \epsilon_{s} - 2\, \epsilon_{ns})~.
\end{eqnarray}
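These relations are straightforward to evaluate; a sketch in Python (the parameter values are purely illustrative, and the consistency check against Eq.~(5.3) below uses only these formulas):

```python
import math

def light_quark_couplings2(theta, delta, eps_ns, eps_s):
    """Squared effective couplings of the light-quark sector (NLO in spurions)."""
    c2, s2 = math.cos(theta) ** 2, math.sin(theta) ** 2
    common = 1 + 2 * (delta + eps_ns) / s2
    return {
        "V_ud2": c2,
        "A_ud2": c2 * (1 - 4 * eps_ns),
        "V_us2": s2 * common * (1 + 2 * eps_s - 2 * eps_ns),
        "A_us2": s2 * common * (1 - 2 * eps_s - 2 * eps_ns),
    }

theta = math.acos(0.97377)
# SM limit: no spurions, vector and axial couplings coincide in modulus.
sm = light_quark_couplings2(theta, 0.0, 0.0, 0.0)
print(sm["V_ud2"] == sm["A_ud2"], sm["V_us2"] == sm["A_us2"])  # True True

# With RHCs the double ratio deviates from 1 by 2(eps_s - eps_ns) at first order:
g = light_quark_couplings2(theta, 0.01, 0.002, -0.005)
r = math.sqrt(g["A_ud2"] * g["V_us2"] / (g["V_ud2"] * g["A_us2"]))
print(round(r, 3), round(1 + 2 * (-0.005 - 0.002), 3))  # 0.986 0.986
```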
The genuine spurion parameters $\delta$ and $\epsilon$
are expected to be at most of order a few percent. Since
$|V_L^{us}| \ll |V_{L}^{ud}| \sim 1 $ and the matrix $V_{R}$ is unitary,
one should have $|\epsilon_{ns}|< \epsilon$. On the other hand, the parameter
$\epsilon_{s}$ measuring RHCs in strangeness changing transitions {\bf can
be enhanced} if the mixing hierarchy for right-handed light quarks is inverted,
$V_R^{ud} < V_{R}^{us}$. In this case $|\epsilon_{s}|$ could be as large
as $4.5\,\epsilon$. Clearly, this question should be decided experimentally.
\section{The stringent test of RHCs: Scalar $K_{\mu3}$ form factor shape}
Model independent bounds on $V+A$ couplings of
light quarks to W are extremely difficult to find, since they require an
accurate control of QCD chiral symmetry breaking contributions when
comparing
hadronic matrix elements of vector and axial vector currents. One such test
(never considered before) has been identified in Ref.~\cite{bops}. It is based
on the Callan-Treiman low-energy theorem already discussed in the talk by
E. Passemar~\cite{emilie}. The (normalized) scalar $K^L_{\mu3}$ form factor
$f(t)$
\begin{equation}
f(t)=\frac{f^{K^0\pi^-}_S(t)}{f^{K^0\pi^-}_+(0)} = \frac{1}{f^{K^0\pi^-}_+(0)}
\left(f^{K^0\pi^-}_+(t) + \frac{t}{\Delta_{K\pi} } f^{K^0\pi^-}_-(t)\right)
\ \ , \qquad f(0)= 1~,
\end{equation}
where $\Delta_{K\pi} = m^2_{K} - m^2_{\pi}$, satisfies
\begin{equation}
C \equiv f(\Delta_{K\pi})= \frac{F_{K^+}}{F_{\pi^+}}\frac{1}{f_{+}^{K^0\pi^-}
(0)}+ \Delta_{CT}, \quad
C = B_{\mathit{exp}} \, r + \Delta_{CT}.
\end{equation}
Here, $\Delta_{CT} = - 3.5 \times 10^{-3}$ is a tiny correction which has
been estimated in one-loop ChPT.
In the absence of RHCs, the value $C$ of the scalar form factor at the
Callan-Treiman point can be directly expressed in terms of measured branching
fractions ($K_{l2}/\pi_{l2}$, $K_{l3}$) and $V^{ud}$, giving~\cite{PDG}
$B_{\mathit{exp}}=1.2438 \pm 0.0040$ in the second Eq.~(5.2). RHCs introduce an
additional correction factor $r$
\begin{equation}
r = \Bigl{|}\frac{\mathcal{A}^{ud}_\mathit{eff}\mathcal{V}^{us}_\mathit{eff}}{\mathcal{V}^{ud}_\mathit{eff} \mathcal{A}^{us}_\mathit{eff}}\Bigr{|} = 1 + 2 (\epsilon_{s} -
\epsilon_{ns} ).
\end{equation}
Hence, in the presence of RHCs the Callan-Treiman theorem yields
\begin{equation}
\ln C = 0.2182 \pm 0.0035 + \tilde\Delta_{CT} + 2 (\epsilon_{s} -
\epsilon_{ns}) = 0.2182 \pm 0.0035 + \Delta\epsilon~,
\end{equation}
with $\tilde\Delta_{CT} = \Delta_{CT} / B_{exp}$.
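The numbers entering this relation can be cross-checked directly (Python; note that $0.2182 = \ln B_{exp}$, and we assume the quoted $\pm 0.0035$ error budget also absorbs part of the $\Delta_{CT}$ uncertainty):

```python
import math

B_exp = 1.2438         # from (K_l2/pi_l2, K_l3) branching fractions and V^ud
dB = 0.0040
Delta_CT = -3.5e-3     # one-loop ChPT estimate

ln_B = math.log(B_exp)               # the 0.2182 entering the relation above
Delta_CT_tilde = Delta_CT / B_exp    # rescaled ChPT correction
print(f"ln B_exp = {ln_B:.4f}, d(ln B) = {dB / B_exp:.4f}, "
      f"Delta_CT_tilde = {Delta_CT_tilde:.5f}")
```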
An accurate, physically motivated parametrization of the scalar form factor $f(t)$
has been proposed~\cite{bops} which allows one to determine the parameter $\ln C$
from the measured $K^L_{\mu3}$ decay distributions. The corresponding measurement is
particularly delicate, since the experimental $t$-distribution is not easy to
reconstruct from the data. Furthermore, different experiments have access to different
decay distributions which do not have the same sensitivity to $\ln C$ and to
the shape of the vector form factor. There exists a relation between
$\ln C$ and the slope parameter $\lambda_0$~\cite{emilie}, but it is not
precise enough to reduce the determination of $\ln C$ to existing (controversial)
determinations of the slope $\lambda_0$ assuming a linear $t$-dependence of
the scalar form factor~\cite{KLOE, KTeV, Lai}, or at most injecting information about its
curvature~\cite{LeptonPhotonKLOE}. Recently, the NA48 collaboration has published
the result of a direct determination of $\ln C$ based on the dispersive
representation of $f(t)$~\cite{Lai}
\begin{equation}
\ln C_{\mathit{exp}} = 0.1438 \pm 0.0138, \quad
\Delta\epsilon = -0.074\pm0.014
\end{equation}
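Taking these numbers at face value, the size and significance of $\Delta\epsilon$ follow from elementary error propagation (Python; uncorrelated Gaussian errors assumed):

```python
import math

ln_C_SM = 0.2182        # SM value entering the relation above
d_ln_C_SM = 0.0035
ln_C_exp = 0.1438       # NA48 dispersive determination
d_ln_C_exp = 0.0138

delta_eps = ln_C_exp - ln_C_SM
d_delta_eps = math.hypot(d_ln_C_exp, d_ln_C_SM)
significance = abs(delta_eps) / d_delta_eps
print(f"Delta_eps = {delta_eps:.3f} +/- {d_delta_eps:.3f} "
      f"({significance:.1f} sigma)")
```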
Other analyses of $K_{\mu3}$ decay distributions from KLOE~\cite{Antonelli} and KTeV~\cite{Glazov}, based on
the dispersive representation of the two form factors, are under way. They should clarify the experimental situation and
provide an independent cross-check of the NA48 result~\cite{Lai}.
Awaiting an independent dispersive analysis of existing data samples,
one should stress that the result (5.5) indicates a 5$\sigma$ deviation
from the SM prediction. In particular, if the discrepancy were to be
explained within QCD, the ChPT estimate of $\Delta_{CT}$ would have to be
underestimated by a factor of 20. On the other hand, within the class of LEET
defined above the interpretation of the result (5.5) as a manifestation of
couplings of right handed quarks to W is unambiguous. It amounts to a
determination of the spurion parameter $2 (\epsilon_{s} - \epsilon_{ns})$.
Its size can be understood as a result of an
enhancement of $V_{R}^{us}$ relative to the suppressed $V_{L}^{us}$.
Beyond our LEET framework, other interpretations might
be conceivable. For example, a sub-TeV charged scalar coupled to the scalar
densities $\bar{u}s$ and $\bar{\mu}\nu$ could interfere with our analysis.
We prefer to stay within the class of minimal LEET defined above and ask
how the same non standard operator (3.2) affects the couplings of right
handed quarks to Z.
handed quarks to Z.
\section{Couplings to $Z$}
Non standard couplings to Z contained in the NLO
Lagrangian (3.3) are suppressed by the same two spurion parameters $\xi^2$
(LHCs) and $\eta^2$ (RHCs) as in the case of couplings to W discussed in
Section 4. Hence, despite the a priori unknown ``order one'' prefactors
$\rho$ and $\lambda$, it is possible to relate the orders of magnitude of
non standard CC and NC couplings. In the left-handed sector we have
altogether two NLO parameters: $\delta = \xi^2 (\rho_L - \lambda_L)$
and $\xi^2 \rho_L$, whereas in the right-handed sector there are three
new parameters, denoted $\epsilon^{e},~\epsilon^{U},~\epsilon^{D}$
and proportional to the spurion $\eta^2$.
\begin{figure}[h]
\begin{center}
\includegraphics*[scale=0.35]{histoNLOfig.eps}
\caption{\it Pull for the $Z$ pole observables in the full fit }
\end{center}
\end{figure}
We have performed the NLO fit to the usual set of $Z$-pole
pseudo-observables displayed in Fig.~1, including the lepton branching
fraction of W (particularly sensitive to the parameter $\delta$) as well as
spin asymmetries measured at SLD. The fit is described in detail in
\cite{BOPS}. It has $\chi^2/dof = 8.5/8$ and it gives
$\delta \equiv \xi^2 (\rho_{L} - \lambda_{L}) = -0.004(2)$,~
$\xi^2 \rho_{L} = 0.001(12)$ and
$\epsilon^{e} \equiv \eta^2 \rho_{R}^{DD} = -0.0024(5)$.\\
The most important NLO modification of couplings to Z turns out to
occur for right handed quarks:
$\epsilon^{U} \equiv \eta^2 \lambda_R^{UU} = -0.02(1)$ ~~
$\epsilon^{D} \equiv \eta^2 \lambda_R^{DD} = - 0.03(1)$.
The correlations can be found in \cite{BOPS}.
Two comments are in order. First, the most
important NLO non standard couplings to Z seem to occur for {\bf right
handed quarks}. Their size compares well with the couplings of right handed
quarks to W as suggested by the $K^L_{\mu3}$ dispersive Dalitz plot analysis
\cite{Lai}. Next, the fit is of a very good quality as illustrated in
Fig.~1 in terms of ``pulls''. In particular, the b-quark forward-backward
asymmetry $A^b_{FB}$ {\bf and} $R^{b}$ are both well reproduced
without modifying the flavour universality of NC EW couplings. The
long-standing ``puzzle of b-asymmetries'' has apparently disappeared thanks to
the modified right-handed couplings of D type quarks to Z.
\section{$F_K/F_{\pi}$ and $f_{+}(0)$}
The low-energy QCD quantities $F_K$, $F_{\pi}$, $f_+(0)$ \ldots
are defined independently of EW interactions in terms
of QCD correlation functions and they are accessible to
ChPT and lattice studies. On the other hand their experimental values
extracted from semileptonic branching fractions depend on the presumed EW
vertices via the effective EW couplings (4.4).
Fixing the experimental values
of $\mathcal{V}^{ud}_\mathit{eff}$ (4.3) and of the semileptonic branching ratios,
$F_K , F_{\pi}, f_+(0)$ \ldots become unique functions of the spurion parameters
$\epsilon_{ns}$, $\epsilon_s$ and $\delta$. One has
\begin{equation}
\left(\frac{F_{K^+}}{F_{\pi^+}}\right)^2
=\left (\frac{\hat{F}_{K^+}}{\hat{F}_{\pi^+}}
\right)^2 \frac{1+2\,(\epsilon_{s}-\epsilon_{ns})}{1+\frac{2}{\sin^2\hat{\theta}}
(\delta+\epsilon_{ns})},\ \
|f^{K^0\pi^-}_+(0)|^2 = \left[ \hat{f}^{K^0\pi^-}_+(0)\right]^2 \,
\frac{1-2(\epsilon_{s}-\epsilon_{ns})}{1+\frac{2}{\sin^2\hat{\theta}}
(\delta+\epsilon_{ns})},
\end{equation}
\noindent where the hat indicates the corresponding values extracted from
semileptonic branching fractions ({\bf assuming SM couplings
$\epsilon_{ns}=\epsilon_s=\delta=0$}). The latter are known very precisely:
\begin{eqnarray}
\hat{F}_{K^+}/\hat{F}_{\pi^+} = 1.182(7), ~~
\hat{f}^{K^0\pi^-}_{+} (0) = 0.951(5)~.
\end{eqnarray}
\begin{figure}[t]
\begin{center}
\epsfig{width=10cm,file=fkfpi.eps}
\caption{\it {Lines of constant values for $F_{K+}/F_{\pi+}$ and
$f_+^{K^0\pi}(0)$ in the plane $\delta+\epsilon_{ns}$ and $2 (\epsilon_{s}-\epsilon_{ns})$ as resulting from Eqs (7.1) and (7.2).
The vertical band indicates the
range suggested by the NA48 measurement~\cite{Lai}. The SM point $\epsilon=\delta =0$ is
also shown.}}
\end{center}
\end{figure}
\noindent Fig.~2 displays lines of constant values of $F_K/F_{\pi}$
and $f_+(0)$ as functions of the spurion parameters. One notes
that $F_K/F_{\pi}$ significantly decreases compared with the value
1.22 often used as input in ChPT. On the other hand, $f_+(0)$
is not very constrained despite the Callan-Treiman relation.
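The size of the downward shift in $F_K/F_\pi$ can be illustrated with Eq.~(7.1) (Python; setting $\delta+\epsilon_{ns}=0$ and taking $2(\epsilon_s-\epsilon_{ns})$ at the NA48-suggested central value is purely illustrative):

```python
import math

FK_Fpi_hat = 1.182       # extracted assuming SM couplings, Eq. (7.2)
two_eps = -0.074         # 2(eps_s - eps_ns), NA48-suggested central value
delta_plus_eps_ns = 0.0  # illustrative assumption
sin2 = 1.0 - 0.97377**2

ratio2 = (1.0 + two_eps) / (1.0 + 2.0 * delta_plus_eps_ns / sin2)
FK_Fpi = FK_Fpi_hat * math.sqrt(ratio2)
print(f"F_K/F_pi = {FK_Fpi:.3f}")   # well below the 1.22 often used in ChPT
```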
Finally, nothing prevents the effective vector mixing matrix $\mathcal{V}_{eff}$ from being
nearly unitary without any fine tuning. One has
\begin{equation}
| \mathcal{V}^{ud}_\mathit{eff}|^2 + |\mathcal{V}^{us}_\mathit{eff}|^2
= 1 + 2 \, (\delta + \epsilon_{ns}) +
2\, (\epsilon_{s}-\epsilon_{ns})\, \sin^2\hat{\theta}~.
\label{effunit}
\end{equation}
The contribution of $\epsilon_{s}$, the only parameter which
might be enhanced above 0.01, is suppressed by $\sin^2\hat{\theta}$.
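The smallness of the last term in Eq.~(\ref{effunit}) is easy to quantify (Python, again with illustrative numbers):

```python
sin2 = 1.0 - 0.97377**2   # sin^2(theta_hat) ~ 0.052
two_eps = -0.074          # 2(eps_s - eps_ns) at the NA48-suggested size
contribution = two_eps * sin2
print(f"eps_s term in the unitarity sum: {contribution:.4f}")
```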
\\
In conclusion, we have presented and motivated a
new low-energy test of non standard EW couplings of
right handed quarks not considered before. Did one observe
couplings of right handed quarks to W in $K^L_{\mu3}$ decay? The final answer
requires a more complete and dedicated experimental analysis. It also
deserves a particular effort despite its difficulty.
\\
I thank V. Bernard, M. Oertel, and E. Passemar for a valuable
collaboration. This work has been supported in part by the EU contract MRTN-CT-2006-035482 (Flavianet).
\section{Introduction}
\label{sec:level1}
In recent numerical studies of the relativistic hydrodynamics
of close neutron star binaries in three spatial dimensions
(Wilson \& Mathews 1995;
Wilson, Mathews \& Marronetti 1996, henceforth WMM96),
it was noted that as the
stars approach coalescence they appear to experience a collapse
instability. For an appropriate equation of state,
binary neutron stars might generally become black holes
many seconds prior to merger. If correct, this effect could
have a significant impact on the anticipated
gravity wave signal from neutron
star binaries near coalescence. Such premerger collapse
might also be associated with heating, neutrino production,
and electromagnetic bursts as the released gravitational energy
from collapse is converted into thermal energy of the stars.
Moreover, the numerical evidence that such an instability exists
poses a number of new questions
such as the sensitivity of the instability to the specific equation
of state employed, or the intrinsic spin and masses of the stars.
One would also like to understand the time history of the
collapse and any associated electromagnetic or neutrino
emission.
In this paper we present some new three dimensional (3D)
calculations which begin to examine these issues.
Unfortunately, however, such relativistic hydrodynamic
calculations in three spatial dimensions are computationally expensive.
A complete systematic study of this instability in three spatial
dimensions will be long in coming. In this paper, however,
we show that in large part this effect can
be replicated in terms of modified one-dimensional spherical
relativistic hydrodynamics. We show that the relativistic effects
of placing stars in a close binary can be
approximated by adding a term involving an average
Lorentz-like factor which increases the effective
gravitational forces on the stars. The collapse
observed in the three dimensional calculations can
be understood in this
one-dimensional framework and one can survey easily the
sensitivity of this effect to parameters characterizing the
binary and the neutron star equation of state.
We can also follow the possible precollapse compression and
heating of the neutron star material. This provides
a framework in which to model the possible associated
neutrino and electromagnetic signals such as gamma-ray bursts.
We show that significant heating and neutrino emission
is possible as the stars gradually compress before
they reach the collapse instability.
During the heating epoch
the associated neutrino and electromagnetic
radiative losses may dominate over the power loss
from gravitational radiation.
\section{Field Equations}
Our method of solving the field equations in three spatial
dimensions was discussed in Wilson \& Mathews (1995) and WMM96.
Here we present a brief review of
some features relevant to the present discussion.
We start with the slicing of spacetime into a one-parameter
family of hypersurfaces
separated by differential displacements in
time-like coordinates as defined in the (3+1)
formalism (Arnowitt, Deser \& Misner 1962; York 1979).
Utilizing Cartesian $x, y, z$ isotropic coordinates,
proper distance is expressed
\begin{equation}
ds^2 = -(\alpha^2 - \beta_i\beta^i) dt^2 + 2 \beta_i dx^i dt + \phi^4
\delta_{ij}dx^i dx^j
\end{equation}
where the lapse function $\alpha$
describes the differential lapse of proper time between two hypersurfaces.
The quantity $\beta_i$ is the shift vector denoting the shift in space-like
coordinates between hypersurfaces. For an orbiting binary,
$\beta_i$ is dominated by the orbital motion of the system plus
a small contribution from frame drag (WMM96).
In the frame of one star in the one dimensional
calculations described here, most of the effect of the $\beta_i$
is transformed away.
The curvature of the 3-geometry is described by a position
dependent conformal factor $\phi^4$ times a flat-space Kronecker delta.
We refer to this gauge choice
as the {\it Conformally Flat Condition} or CFC. For a static
system, the vanishing of the Weyl tensor in three dimensions
guarantees (cf. Weinberg 1972) that there exists a conformally flat solution
to the Einstein equations. One must be careful, however,
not to overimpose symmetry conditions (e.g. Cook et al. 1996).
For a dynamic system, one can always impose conformal flatness
as an initial condition. There are, however, nonzero time derivatives
of the spatial metric and extrinsic curvature which can begin to
introduce off-diagonal elements of $\gamma_{ij}$ as the system evolves.
In particular, the imposition
of conformal flatness excludes gravity wave
information contained in the transverse traceless components of the metric.
However, as discussed below, several recent studies have
indicated that this approach is still an excellent approximation
when a comparison with exact results can be made.
The implementation of the CFC means that
we solve the
constraint equations of general relativity at each time
as though there were a fixed distribution of matter.
We then evolve the hydrodynamic equations to the next time step
against this background metric.
Thus, at each time slice we can obtain
a solution to the relativistic field equations and
information on the hydrodynamic evolution. Information on the generation of
gravitational radiation can then be obtained from a multipole expansion
(Thorne 1980) of the transverse traceless components of the metric.
It is important to appreciate that at each time slice
a numerically valid solution to the field equations is obtained.
In this way the strong field properties of the system
are included. For this reason, this approach is a significant
improvement over a post Newtonian approach (which is also conformally flat at low
order: cf. Appendix A).
The hydrodynamic variables respond to these fields.
An approximation we have made herein is the
neglect of an explicit coupling of the gravity waves. These, however,
contribute negligibly to the metric, stress energy tensor,
or hydrodynamic evolution
(WMM96). When desired, this coupling can be
added via a multipole expansion.
We reduce the solution of the equations for the field variables
$\phi$ and $\alpha$ to simple Poisson-like equations in flat space.
We begin with the Hamiltonian constraint equation (York 1979)
which reduces to (Evans 1985; WMM96),
\begin{equation}
\nabla^2{\phi} = -4\pi\rho_1.
\label{phi}
\end{equation}
In the Newtonian limit the source term is dominated by the
proper matter density $\rho$. In the strong field of the
orbiting binary, however, $\rho$ must be enhanced by
a generalized curved-space Lorentz factor $W$ [cf. Eq.~(11)].
This derives directly from the occurrence of the four velocity in
the stress energy tensor.
There are also contributions from the internal energy
density $\epsilon \rho$, pressure $P$, and extrinsic curvature $K_{ij}$.
Thus we write,
\begin{eqnarray} \rho_1&=&{\phi^5 \over 2}\biggl[\rho W^2 +
\rho \epsilon \biggl( \Gamma W^2 - \Gamma+1\biggr) \nonumber \\
&&+ {1 \over 16\pi} K_{ij}K^{ij}\biggr]~,
\label{rho1}
\end{eqnarray}
where $\Gamma$ is
an adiabatic index from the equation of state as defined below.
Similarly, the lapse function is determined from,
\begin{equation}
\nabla^2(\alpha\phi) = 4\pi\rho_2~~,
\label{alpha}
\end{equation}
\begin{eqnarray}
\rho_2 &= &{\alpha \phi^5 \over 2}\biggl[\rho (3W^2-2)+
\rho \epsilon[ 3\Gamma (W^2+1)-5]\nonumber \\
&& + {7 \over 16\pi} K_{ij}K^{ij}\biggr]~.
\label{rho2}
\end{eqnarray}
In WMM96 it was shown that the collapse instability
can at least in part be traced to the effect of the Lorentz-like factor
$W$ on the source density for $\phi$ and ($\alpha \phi$).
In WMM96 and below it is shown that terms which scale as $(W^2 - 1)$
also enter into the hydrodynamic equations in a way which enhances
the gravitational force on each star.
In Appendix A we suggest
that, even in a post-Newtonian approximation, such
terms might cause the effective gravitational
potential to be deeper than it would be for static isolated stars.
Regarding the reliability of the {\it CFC} as an approach to this
problem, we note that a recent study (Cook, Shapiro \& Teukolsky 1996)
has shown that an axially symmetric CFC
approximation is quite good when computed
physical observables are compared with the
exact results for axisymmetric extremely rapidly
rotating neutron stars. This is the simplest
system for which an exact metric begins to differ
from a {\it CFC} metric.
In another recent application to the nonaxisymmetric case of
orbiting binaries Reith \& Sch\"afer (1996) have shown that
an expansion using this metric is identical to
a post-Newtonian expansion for terms of order $(v/c)^2$.
The first deviation appears in terms of
order $(v/c)^4$. However, we find that the
deviations are small. The expressions in their paper
are in terms of a dimensionless parameter $\nu \equiv m_1 m_2 /(m_1 + m_2)^2$.
It is common in post-Newtonian expansions to compare with
a Schwarzschild orbit for which $\nu = 0$. However,
for neutron star binaries $\nu = 0.25$ is most appropriate.
For equal-mass neutron-star binaries $\nu = 1/4$ exactly.
Even for $m_1/m_2 = 2$ (a relatively large asymmetry for neutron stars)
$\nu = 0.22$. For $\nu = 0.25$
the difference between the conformally flat and post Newtonian
$(v/c)^4$ correction for the perihelion advance
is about is 4.5\%. The $(v/c)^4$ term
for the momentum differs by about 2.8\%, and the angular momentum
term differs by 24.1\%. Since $(v/c)^4 \sim 10^{-4}$
for the binaries considered here, these differences in
the two-body dynamics
are probably insignificant. Note also that in the present application
we compute an exact instantaneous numerical solution to the Einstein equations
using this metric and do not rely upon an expansion which may deviate
in individual terms but still provide the correct results. Note also
that the principal effect we are investigating here (that due to the
$W^2 - 1$ terms) is of order $(v/c)^2$ (see Appendix A) for which the
post-Newtonian and conformally flat terms agree exactly.
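The quoted values of $\nu$ follow directly from the definition (a trivial Python check):

```python
def nu(m1, m2):
    """Symmetric mass ratio nu = m1 m2 / (m1 + m2)^2."""
    return m1 * m2 / (m1 + m2) ** 2

print(f"equal masses: nu = {nu(1.45, 1.45):.3f}")   # 0.250
print(f"m1/m2 = 2   : nu = {nu(2.0, 1.0):.3f}")     # 0.222
```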
Also, note that the effect described here
is a relativistic effect which completely dominates [see below] over
the possible stabilizing influence of Newtonian tidal distortion
as proposed in Lai (1996).
The effect we describe was not considered in that paper.
\section{Relativistic Hydrodynamics}
\label{hydro}
To solve for the fluid motions of the system in curved spacetime
it is convenient to use an Eulerian fluid description (Wilson 1979).
We begin with the perfect fluid stress-energy tensor, which in covariant
form can be written,
\begin{equation}
T_{\mu\nu} = (\rho + \rho \epsilon + P)U_\mu U_\nu + P g_{\mu \nu}~~,
\end{equation}
where $\epsilon$ is the internal energy per gram, $P$ is the pressure, and
$U_\nu$ is the four velocity.
By introducing a set of Lorentz-contracted state variables it is possible to
write the relativistic hydrodynamic equations in a form which is
reminiscent of their Newtonian counterparts.
The hydrodynamic state variables are:
The coordinate baryon mass density,
\begin{equation}
D = W \rho~~,
\end{equation}
the coordinate internal energy density,
\begin{equation}
E = W \rho \epsilon~~,
\end{equation}
the spatial three velocity,
\begin{equation}
V^i = \alpha {U_i \over \phi^4 W} - \beta^i~~,
\label{three-vel}
\end{equation}
and the coordinate momentum density,
\begin{equation}
S_i = (D + \Gamma E) U_i~~.
\label{momeq}
\end{equation}
The Lorentz-like factor $W$ is
\begin{equation}
W = \alpha U^t~ = \biggl[ 1 + {\sum_{i = 1}^3{U_i^2} \over \phi^4}\biggr]^{1/2},
\label{weq}
\end{equation}
and the EOS index $\Gamma$ is
\begin{equation}
\Gamma = 1 + {P \over \rho \epsilon}~.
\end{equation}
Note that in flat space ($\alpha = \phi = 1$), $W$ reduces to the
usual special-relativistic Lorentz factor.
In terms of these state variables, the hydrodynamic equations are as follows:
The equation for the conservation of baryon number $(\rho U^\mu)_{;\nu} = 0$ takes
the form,
\begin{equation}
{\partial D\over\partial t} = -6D{\partial \log\phi\over\partial t}
-{1\over\phi^6}{\partial\over\partial x^j}(\phi^6DV^j)~~.\
\end{equation}
The internal energy equation is derived from
$T_{\mu~;\mu}^{~\nu} = 0$,
\begin{eqnarray}
{\partial E\over\partial t} =&& -6\Gamma E{\partial \log\phi\over\partial t}
-{1\over\phi^6}{\partial\over\partial x^j}(\phi^6EV^j)\nonumber \\
&& - P\biggl[{\partial W\over \partial t} +
{1\over\phi^6}{\partial\over\partial x^j}(\phi^6 W V^j)\biggr]~~.
\end{eqnarray}
The spatial components of the momentum conservation
condition ($T_{i~;\mu}^{~\nu} = 0$) takes the form,
\begin{eqnarray}
{\partial S_i\over\partial t}& = & -6 S_i{\partial \log\phi\over\partial t}
-{1\over\phi^6}{\partial\over\partial x^j}(\phi^6S_iV^j)
-\alpha{\partial P\over \partial x^i} \nonumber \\
& + & 2\alpha(D+\Gamma E)(W - {1\over W}){\partial \log\phi\over\partial
x^i} + S_j {\partial \beta^j \over \partial x^i} \nonumber \\
& - & W(D + \Gamma E){\partial \alpha \over \partial x^i} ~,
\label{hydromom}
\end{eqnarray}
where for the present stability analysis we have set the
radiation reaction term (WMM96) to zero.
Regarding the stability of our treatment of numerical relativistic
hydrodynamics, we
have applied a number of standard tests (e.g.~shock tubes,
pressureless collapse, etc.) as noted in
WMM96. One important test in the present context is
that of stable stars in a stable
orbit (with no radiation reaction).
We have found that for such systems, equilibrium configurations
are obtained after a small fraction of an orbit. When the
velocity damping is removed there is no discernible change of
the stars for several orbit periods. This illustrates an advantage of
the shifted grid which we employ. There is essentially no matter
motion with respect to the grid once a stable equilibrium has
been achieved. Hence, numerical stability can be maintained
for a long time.
Another important test is that of a single
nonrotating star on the three-dimensional spatial grid.
We find that the equilibrium gravitational mass as a function of
central density agrees with that of the spherical hydrostatic
Tolman-Oppenheimer-Volkoff solution
to within a fraction of a percent.
We also find that a dynamical instability ensues once the
stellar mass exceeds the maximum hydrostatic mass as expected.
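The single-star comparison described above can be reproduced schematically with a direct Tolman-Oppenheimer-Volkoff integration. The Python sketch below uses a simple $\Gamma = 2$, $K = 100$ polytrope in $G = c = M_\odot = 1$ units (an illustrative choice, not the realistic EOS of Section 5):

```python
import math

def tov_mass(rho_c, K=100.0, gamma=2.0, dr=1.0e-3):
    """Euler integration of the TOV equations for a polytrope
    P = K rho^gamma (geometrized units G = c = M_sun = 1).
    Returns the gravitational mass and the areal radius."""
    P = K * rho_c ** gamma
    m = 0.0
    r = dr
    while P > 1.0e-14:
        rho = (P / K) ** (1.0 / gamma)   # rest-mass density
        eps = rho + P / (gamma - 1.0)    # total energy density
        dPdr = -(eps + P) * (m + 4.0 * math.pi * r ** 3 * P) \
               / (r * (r - 2.0 * m))
        m += 4.0 * math.pi * r ** 2 * eps * dr
        P += dPdr * dr
        r += dr
    return m, r

# the standard K = 100, Gamma = 2 test model: M ~ 1.4 M_sun
m14, r14 = tov_mass(1.28e-3)
print(f"M = {m14:.3f} M_sun, R = {r14:.2f} (G = c = M_sun = 1)")

# mass versus central density; the turning point of this curve marks
# the onset of the radial instability discussed in the text
masses = [tov_mass(rc)[0] for rc in (1.0e-3, 2.0e-3, 4.0e-3)]
```

The maximum of the $M(\rho_c)$ curve computed this way marks the onset of the radial instability, mirroring the behavior seen on the 3D grid.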
In isotropic coordinates, the condition of hydrostatic equilibrium for
the stars ($dS^i/dt = 0$, $V^i = 0$,
$\partial \log{\phi}/\partial t = 0$) can be inferred
from equation (\ref{hydromom}),
\begin{eqnarray}
{\partial P \over \partial x^i}&&= -(\rho + \rho \epsilon \Gamma)
\biggl( {\partial \log{\alpha} \over \partial x^i}
- {U_j \over \alpha}{\partial \beta^j \over \partial x^i} \nonumber \\
&& + \biggl[ {\partial \log{\alpha} \over \partial x^i} -
2 {\partial \log{\phi} \over \partial x^i}\biggr] (W^2 - 1) \biggr)~~.
\label{hydrostat}
\end{eqnarray}
Some discussion of the relative magnitude of the
terms in Eq.~\ref{hydrostat} is useful.
The first term with ${\partial \log{\alpha}/\partial x^i}$ is
the relativistic analog of the Newtonian gravitational force.
In the Newtonian limit $\alpha \rightarrow 1 - GM/r$. Hence
$-{\partial \log{\alpha}/\partial x^i} \rightarrow GM/r^2$.
In Eq. (\ref{hydrostat}) there are two ways in which the
effective gravitational force increases as $W$ exceeds unity.
One is that the matter contribution to the source density
for $\alpha$ or $\phi$
is increased by factors of $\sim W^2$ [cf. Eqs. (\ref{rho1},\ref{rho2})].
The more dominant effect is that from the terms in Eq.~\ref{hydrostat}
which scale as $(W^2 - 1)$. These terms result from the affine connection
terms
$\Gamma^\mu_{\mu \lambda} T^{\mu \lambda}$ in the covariant differentiation
of $ T^{\mu \nu}$. These terms have no Newtonian analog but
describe a general relativistic
increase in the curvature gravitational force as the specific kinetic energy
of the system increases.
This increase in effective gravity as the stars approach each other
can be thought of as a correction to the
Newtonian gravity which scales as
$(W^2-1)$ times the Newtonian gravity. This $(W^2 - 1)$
factor can be thought of as
a kind of specific kinetic energy [cf.~Eq. (\ref{weq})] from
the orbital motion of the binary. The extra
${\partial \log{\phi}/\partial x^i}$ term further increases the
effect by a factor of 2. This factor comes from $\phi^2 \sim (1/\alpha)$.
A further increase of binding arises from the $K^{ij}K_{ij}$ terms in the
field sources, but these terms are much smaller than the
$W^2-1$ contributions.
In our shifted spatial grid, the fluid three velocities $V^i$ are
nearly zero. Hence we can use Eq.~(\ref{three-vel}) to
find $U_i$ as a function of $\beta^i$. We can also replace $\beta^i$
in Eq.~(\ref{three-vel}) with the dominant
contribution from orbital motion,
$\bar\beta \approx \bar R \times \bar \omega$,
where $R$ is the coordinate distance of the stars from the center of mass.
Hence, Eqs.~(\ref{three-vel}) and (\ref{weq})
give,
\begin{eqnarray}
\langle W^2 \rangle &\approx &
{1 \over (1 - \omega^2 R^2 \phi^4/\alpha^2) }\nonumber \\
&\approx &{1 \over (1 - v^2 /c^2 ) }~~,
\end{eqnarray}
where $c= \alpha/\phi^2 < 1$ is the coordinate light speed.
In our simulations, an effective
velocity of $(\omega R/c) \approx 0.25$ is obtained
for the last stable orbit of 1.45 M$_\odot$ stars.
In the 3D calculations the average
$\langle W^2 - 1 \rangle$ typically rises up to $\sim$ 5\% before
the orbit becomes dynamically unstable. Thus, we estimate that
before orbit instability,
the effective hydrostatic gravitational force on the
stars is increased by $\sim 10$\% over that of stationary non-orbiting
stars for which $\langle W^2 - 1 \rangle =0$. This
increased gravitational force increases the central densities
as the stars approach and can induce collapse.
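These numerical estimates are easy to reproduce (a minimal Python check):

```python
def W2_minus_1(v_over_c):
    """Orbital contribution to W^2 - 1 for an effective velocity v/c."""
    return 1.0 / (1.0 - v_over_c ** 2) - 1.0

print(f"W^2 - 1 at v/c = 0.25: {W2_minus_1(0.25):.3f}")
# a typical pre-instability value of <W^2 - 1> ~ 0.05 implies an
# effective force increase of roughly 2 x 0.05 = 10%
```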
It is of interest to compare the magnitude of the $(W^2-1)$ correction to
the Newtonian gravity with the magnitude of the Newtonian tidal
energy which would tend to stabilize the star. In Lai (1996) it was
estimated that the Newtonian tidal energy should scale as
\begin{equation}
\Delta E_{tidal} \approx -\lambda {G M^2 R^5 \over r^6}
\end{equation}
where for neutron stars $\lambda \approx 0.1$, $M$ is the mass
of a star, $R$ is the neutron star radius, and $r$ is
the orbital separation. In contrast, the correction
to the Newtonian self gravity from the motion of the
stars in a binary is
\begin{equation}
\Delta E_{GR} \approx 2(W^2 - 1){G M^2 \over R}
\end{equation}
Taking the ratio of these two contributions,
we find,
\begin{equation}
{\Delta E_{tidal} \over \Delta E_{GR}} \approx {\lambda \over 2(W^2 - 1)}
\biggl({R \over r}\biggr)^6 \sim 10^{-4}
\end{equation}
where we have used typical values near collapse (cf. Table 1)
of $R/r \sim 0.2$ and $(W^2 -1) \sim 0.05$. Thus,
the effect of the increased relativistic gravitational force is expected
to dominate over the stabilizing tidal distortion by
about 4 orders of magnitude. In a similar analysis, we estimate
that even for white dwarfs near merger, this relativistic
increase in the gravitational
energy exceeds the stabilizing Newtonian tidal energy.
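The quoted order-of-magnitude estimate follows directly from the representative values in the text (Python):

```python
lam = 0.1        # Newtonian tidal coefficient for neutron stars (Lai 1996)
W2m1 = 0.05      # (W^2 - 1) near collapse
R_over_r = 0.2   # stellar radius / orbital separation

ratio = lam / (2.0 * W2m1) * R_over_r ** 6
print(f"Delta E_tidal / Delta E_GR ~ {ratio:.1e}")   # of order 10^-4
```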
The centrifugal term in Eq.~(16) is dominated
by the contribution from orbital motion
$U_j (\partial \beta^j/\partial x^i) \approx U_j \omega \sim 10^{-8}$.
This term varies little over an individual star and
inside a star this term is small compared to the
${\partial \log{\alpha}/\partial x^i}$ term. Hence,
this term can be neglected in the discussions
of stellar stability. It is important, however,
for determining the orbits and gravity wave frequency (WMM96).
\section{Equivalent Spherical Model}
To better understand the relativistic effects described herein,
it is useful to reduce the
hydrodynamic equations to an approximate spherical model.
Such a model can also
be used to make a schematic survey of the sensitivity of collapse
to EOS parameters and possible interior heating.
From Eq.~(\ref{hydrostat}) we see that the configuration of
each star can be described by a modified version
of the familiar equation of hydrostatic equilibrium.
This is true as long as the contribution of orbital motion to $(W^2-1)$
can be treated as a constant factor and the $K_{ij}K^{ij}$ and centrifugal
terms can be ignored.
Of course, $W^2$ is not constant over the stars. However,
in our three dimensional calculations it is observed
to vary little over the volume of a star.
Contours of constant $(W^2-1)$ from a three-dimensional calculation
are shown in Fig. \ref{fig1}. From this we deduce that it
is not a bad approximation to replace $(W^2 - 1)$ in the
source equations and hydrodynamical equations with
\begin{equation}
(W^2 - 1) \approx (W_r^2 - 1 + \langle W_0^2 - 1 \rangle)~~,
\end{equation}
where $W_r$ is the contribution from radial motion inside a star,
and $\langle W_0^2 - 1 \rangle$ is an approximately constant factor which
accounts for the influence of orbital motion in the curved space-time
of the binary.
The equilibrium and stability of a binary star can then be approximated
using a one-dimensional description.
\placefigure{fig1}
However, since the metric variables $\alpha$ and $\phi$ depend
upon the density distribution, it is not possible to directly
compute the hydrostatic equilibrium of the star. Instead,
the star must be evolved hydrodynamically (with damping)
as $\langle W_0^2-1 \rangle$ is increased to obtain the new equilibrium
configuration. Hence, we construct a modified spherical
hydrodynamic model as follows:
For a given distribution of mass and energy, the
Poisson equations (\ref{phi}) and (\ref{alpha}) for $\phi$ and $\alpha$
can be integrated directly. The only difference is that
the source terms now become
\begin{equation}
\rho_1 \approx {\phi^5 \over 2}\biggl(\rho(1 + \epsilon)
+ \rho (1 + \epsilon \Gamma) (W_r^2 - 1 + \langle W_0^2 - 1\rangle)\biggr)~~,
\end{equation}
and
\begin{eqnarray}
\rho_2 & \approx & {\alpha \phi^5 \over 2}\biggl(3\rho(1 +
\epsilon \Gamma) (W_r^2 - 1 + \langle W_0^2 - 1 \rangle)\nonumber \\
&& + \rho(1 + \epsilon + 6 \epsilon(\Gamma - 1))\biggr)~~.
\end{eqnarray}
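In spherical symmetry, these flat-space Poisson equations can indeed be integrated by direct quadrature. A minimal Python sketch for a fixed source follows (the $\phi$-dependence of the source would in practice require iterating this solve, which we omit here); it is verified against the analytic uniform-sphere solution:

```python
import numpy as np

def solve_radial_poisson(r, rho, boundary=1.0):
    """Solve  grad^2 u = -4 pi rho  in spherical symmetry with
    u -> boundary as r -> infinity, using the shell decomposition
    u(r) = boundary + (1/r) Int_0^r 4 pi r'^2 rho dr'
                    + Int_r^rmax 4 pi r' rho dr'  ."""
    dr = r[1] - r[0]                      # uniform grid assumed
    f_in = 4.0 * np.pi * r ** 2 * rho
    f_out = 4.0 * np.pi * r * rho
    inner = np.cumsum(f_in) * dr          # running integral from the center
    outer = (np.sum(f_out) - np.cumsum(f_out)) * dr  # integral to the edge
    return boundary + inner / r + outer

# check against the analytic uniform-sphere value u(0) = 1 + 2 pi rho0 R^2
R, rho0 = 1.0, 1.0
r = np.linspace(1.0e-4, 5.0, 20000)
rho = np.where(r < R, rho0, 0.0)
u = solve_radial_poisson(r, rho)
print(u[0], 1.0 + 2.0 * np.pi * rho0 * R ** 2)
```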
The hydrodynamic equations become:
\begin{equation}
{\partial D\over\partial t} = -6D{\partial \log\phi\over\partial t}
-{1\over\phi^6 r^2}{\partial\over\partial r}(\phi^6r^2DV^r)~~.\
\end{equation}
The equation for internal energy conservation becomes,
\begin{eqnarray}
{\partial E\over\partial t} & = & -6\Gamma E{\partial \log\phi\over\partial t}
-{1\over\phi^6 r^2}{\partial\over\partial r}(\phi^6 r^2 EV^r)\nonumber \\
&& - P\biggl[{\partial W_r\over \partial t} +
{1\over\phi^6 r^2}{\partial\over\partial r}(\phi^6 W_r r^2 V^r)\biggr]~~.
\end{eqnarray}
The momentum equation is
\begin{eqnarray}
{\partial S_r\over\partial t}& = & -6 S_r{\partial \log\phi\over\partial t}
-{1\over\phi^6 r^2}{\partial\over\partial r}(\phi^6 r^2 S_rV^r)
-\alpha{\partial P\over \partial r} \nonumber \\
& + & 2\alpha(D+\Gamma E)\biggl({(W_r^2 - 1 + \langle W_0^2-1 \rangle)\over W_r}\biggr)
{\partial \log\phi\over\partial
r} \nonumber \\
& - & (W_r^2 + \langle W_0^2-1 \rangle)(D + \Gamma E){\partial \alpha \over \partial r} ~~,
\label{hydromom1}
\end{eqnarray}
where we have neglected the centrifugal term as noted above.
The calculations reported here were performed
on an Eulerian grid in which
the star is resolved into about 100 radial zones.
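To make the structure of this update concrete, the following sketch performs a conservative upwind step of the continuity equation above on an Eulerian radial grid of the size just described. It is illustrative only: the metric factor $\phi^6$ is held flat (so the $\partial\log\phi/\partial t$ term drops out), and the density and velocity profiles are arbitrary test data, not output of the production code.

```python
import math

# Schematic one-step update of dD/dt = -(1/(phi^6 r^2)) d(phi^6 r^2 D V^r)/dr
# on an Eulerian radial grid of ~100 zones.  phi^6 = 1 here (flat space), and
# the initial D and V profiles are illustrative test data.

N = 100
r_max = 1.0
dr = r_max / N
r = [(i + 0.5) * dr for i in range(N)]        # zone centers
r_face = [i * dr for i in range(N + 1)]       # zone faces

phi6 = [1.0] * (N + 1)                        # phi^6 at faces (flat-space toy)
D = [math.exp(-(ri / 0.3) ** 2) for ri in r]  # illustrative coordinate density
V = [-0.1 * ri for ri in r]                   # illustrative infall velocity

def step(D, dt):
    """One conservative donor-cell (upwind) update of the continuity equation."""
    flux = [0.0] * (N + 1)                    # phi^6 r^2 D V at faces; 0 at boundaries
    for i in range(1, N):
        v_face = 0.5 * (V[i - 1] + V[i])
        D_up = D[i] if v_face < 0.0 else D[i - 1]   # donor zone for the face
        flux[i] = phi6[i] * r_face[i] ** 2 * D_up * v_face
    return [D[i] - dt * (flux[i + 1] - flux[i]) / (r[i] ** 2 * dr)
            for i in range(N)]

mass0 = sum(D[i] * r[i] ** 2 * dr for i in range(N))
for _ in range(10):
    D = step(D, 1.0e-3)
mass1 = sum(D[i] * r[i] ** 2 * dr for i in range(N))
assert abs(mass1 - mass0) / mass0 < 1e-12     # zero-flux boundaries conserve baryons
```

Because the face fluxes telescope, the baryon "mass" $\sum D r^2\,\Delta r$ is conserved to round-off with zero-flux boundaries, which is the property that motivates writing the transport term in the divergence form above.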
\section{Equation of State}
\label{EOS}
A key part of the calculations presented here is the
use of a realistic neutron star equation of
state (EOS).
The orbital calculations presented in WMM96 used
the zero temperature, zero neutrino chemical potential
EOS from the
supernova numerical model of Wilson \& Mayle (1993) and Mayle, Tavani
\& Wilson (1993).
Calculations made with this
EOS for a model of supernova 1987A give an explosion energy of
$1.5 \times 10^{51}$ ergs, consistent with observation.
Also, the neutrino spectra and time of neutrino emission are in good
agreement with the IMB (Bionta et al. 1987)
and Kamiokande neutrino detections (Hirata et al. 1987).
These models also reproduce the desired abundance
distribution of $r$-process heavy elements
in the baryon wind from the proto-neutron star (Woosley et al. 1994).
The maximum neutron star mass for this EOS, as converted
to a zero temperature version for our present studies,
is M$_C =$ 1.70 M$_\odot$.
An important point is that with an EOS which would allow
a higher mass neutron star, Wilson \& Mayle (1993) were not able to
obtain satisfactory results.
In WMM96 only cold equilibrium configurations
were computed. However, in the present
work we wish to examine the possible heating of the stars
as they collapse. Hence,
we include finite temperature effects in the EOS.
The electron fraction of neutron stars is small, $Y_e \ll 1$.
Hence, for the heating calculations of interest here,
we can relate the temperature to the internal energy by assuming
a nonrelativistic Fermi gas of neutrons.
We wish to analyze the sensitivity of the collapse instability to
the neutron star equation of state.
To do this we diminish the maximum mass
achievable for a given equation of state by
imposing a maximum value for the index $\Gamma$
at high density. Beyond this maximum, the
adiabatic index asymptotes to 2 to guarantee causality.
We find a maximum neutron star mass of M$_C =$ 1.55, 1.64, and 1.70 M$_\odot$
for $\Gamma_{max} =$ 2.297, 2.346, and 2.470, respectively.
This range of masses is consistent with (and even slightly above)
the upper range of the observed upper mass limit for
neutron stars. Finn (1994)
has assigned a lower limit of 1.15 to 1.35 M$_\odot$
and an upper limit of
1.44 to 1.50 M$_\odot$ at the $1 \sigma$ (68\%) confidence level.
At the $2 \sigma$ (95\%) confidence level the upper limit increases to
1.43 to 1.64 M$_\odot$. In an independent approach,
Bethe and Brown (1995) have recently argued from nucleosynthesis
constraints that the maximum neutron star mass is 1.56 M$_\odot$.
They also point out that if kaon condensation is taken into account
the critical mass may only be 1.50 M$_\odot$.
If the maximum observed stellar mass were as low as
the $1 \sigma$ upper limit,
i.e. 1.50 M$_\odot$, it could be
that almost all neutron star binaries would collapse before coalescence.
With the present state of knowledge of the nuclear equation
of state at high density, however, it is still possible that the
maximum neutron star mass could be significantly greater than
1.70 M$_\odot$. That is, the observed low mass limits
may only be an artifact of the way in which neutron stars
are formed in type II
supernovae rather than a limit from the EOS.
We have made some preliminary studies
of stars with M$_{G} = 1.45$ M$_\odot$ for an EOS
with M$_C =$ 1.85 M$_\odot$.
We have not observed collapse
before the orbit instability is reached. It seems likely that
for a sufficiently stiff EOS that merger will occur as two
neutron stars as considered in many Newtonian and
post-Newtonian simulations (e.g. Rasio \& Shapiro 1992;
Zhuge, Centrella \& McMillan 1994; Janka \& Ruffert 1996).
\section{Summary of 3D results}
\subsection{Summary of Previous Results}
In WMM96 orbit calculations were made for two
$1.70$ M$_\odot$ baryonic mass neutron stars for
an EOS for which the gravitational mass
in isolation was M$_{G} = 1.45$
M$_\odot$ and the critical neutron star mass was M$_C = 1.70$ M$_\odot$.
Orbit solutions were sought for three separate
values of angular momentum. The neutron stars were taken
to be corotating initially, although it was noted that
relativistic effects subsequently induce some fluid motion
in the stars relative to the corotating frame. In the present
study we have not considered the more realistic
possibilities of initial neutron-star
spins or unequal masses. Such systems contain less symmetry
and require a larger computational effort
which we leave for future work.
The first calculation was made with an orbital angular momentum
of $2.2 \times 10^{11}$ cm$^2$. The stars settled down into
what appeared at first as a stable orbit, but later
(less than one complete orbit) the stars began to slowly
spiral in.
The next calculation was made with an angular momentum of $2.3 \times 10^{11}$
cm$^2$ for which the orbit appeared stable.
However, after about 1 to 2 revolutions the central
densities were noticed to be rising. By the end of the
calculation the central baryonic densities had continuously
risen to about $2.7 \times
10^{15}$ g cm$^{-3}$ ($\approx 10$ times nuclear matter density)
which is near the maximum density for a stable
neutron star for an EOS with M$_C$ = 1.70.
It appears that neutron stars in this mass
range with the adopted equation of state may continue to collapse
as long as the released gravitational energy can be dissipated.
For this orbit the stars are at a separation distance
of $d_p / m = 9.5$, far from merging.
By the time the calculation was ended, the
minimum $\alpha$
had diminished to 0.379 and $\phi^2$ risen to 2.05
corresponding to a minimum coordinate light speed of 0.18.
A third calculation was made with the angular momentum increased
to $2.7 \times 10^{11}$ cm$^2$. As can be seen in Table \ref{paramtable},
the stars at this separation
($d_p / m = 12.4$) seemed both stable and in a stable orbit. However,
with only a slight increase in baryonic mass (M$_b$: $1.598 \rightarrow 1.620$
M$_\odot$) a collapse ensues.
\placetable{paramtable}
\placetable{paramtabl2}
\placetable{paramtabl3}
\placetable{paramtabl4}
\subsection{New 3D Results}
In the present work, these results are supplemented with additional
3D calculations for initially corotating stars.
The new results are obtained with better resolution
and an improved treatment of boundary conditions.
These calculations have been
run several times longer than in WMM96 so that for
cases where the instability occurs we have
followed the collapse to higher densities and stronger fields.
These new results along with the previous results
are summarized in Tables \ref{paramtable} through \ref{paramtabl4}.
For the star with M$_G \approx 1.45$ M$_\odot$ in isolation,
we have added calculations at intermediate orbital angular
momenta of $2.5 \times 10^{11}$
and $2.6 \times 10^{11}$ cm$^2$; here we have run for
much longer times and further into the collapse. We find that even at
$2.6 \times 10^{11}$ cm$^2$ the stars collapse while still at a distance
of over 50 km apart and with a gravity wave frequency of only 250 Hz.
The collapse instability appears to onset between
$2.6$ and $2.7 \times 10^{11}$ cm$^2$ as $\langle W^2 - 1 \rangle
\rightarrow \sim 0.05$ (depending upon the baryon mass).
The rate of energy and
angular momentum loss generally increases as the compression proceeds and
$\alpha$ becomes small. We note, however, that for unstable stars or
orbits, the systems are no longer in quasi-equilibrium orbits.
Since the radiated energy and momentum are sensitive functions of
separation and $\omega$, the computed values of energy and angular
momentum loss become unreliable as an average estimate. This
is the reason for the lack of monotonicity in the $\dot E$
values of Tables \ref{paramtable}--\ref{paramtabl4}.
Eventually the system
approaches two black holes and is no longer
well describable in our framework.
From the rates of energy and angular
momentum loss for these calculations we make a crude
estimate (WMM96) that
a delay of about 5 seconds occurs between the collapse instability and
the orbit instability for stars with this EOS and $M_G = 1.45$ M$_\odot$.
This would have interesting consequences on the gravity wave
or electromagnetic signals as discussed below.
We have also run 3-dimensional
calculations for stars which would have $M_G = 1.40$ M$_\odot$
and 1.35 M$_\odot$ in isolation (Tables \ref{paramtabl3}
and \ref{paramtabl4}).
For these systems the EOS was varied as well as the angular momentum.
Results from these calculations are summarized in Tables 2--4.
We found as expected that the 1.40 M$_\odot$ stars are stable
for lower angular momenta than the 1.45 M$_\odot$ stars.
However, stars of this mass will still collapse for a softer EOS.
For example, the 1.40 M$_\odot$ star will collapse for
$J = 2.6 \times 10^{11}$ cm$^2$
if the upper mass limit from the EOS is reduced to 1.55 M$_\odot$.
We have found that with the M$_C = 1.70$ M$_\odot$ EOS it is necessary
to reduce the mass to $M_G = 1.35$ M$_\odot$ before the stars can survive to
final orbit plunge without collapsing first (see Table 3).
\subsection{Connection to 1D Calculation}
One of the concerns in the 3D calculations of WMM96 was whether
the stars and field variables were sufficiently resolved to produce reliable
numerical results. In the 3D calculations, the spatial resolution
only provided $\sim 10-15$ zones in radius across a star.
In the 1D calculations,
however, one can easily provide many radial zones.
We have made a survey of
various physical quantities such as
the central extrema in $\alpha$, $\phi$, and $\rho$, as well as
the total gravitational mass M$_G$ as a function of the
number of radial zones.
We have found that there is no significant difference in the
field variables, hydrodynamics variables,
or gravitational mass as the radial zoning is increased
from 15 to 200 zones. Even for only 10 radial zones
the error in mass only rises to $\sim$1\%. Hence, the zoning in the
3D calculations discussed here and in WMM96 is probably not
a significant source of uncertainty.
We wish to explore the reliability of the 1D model as the Lorentz-like
factor $\langle W_0^2 - 1 \rangle $ is increased. Figure \ref{fig2} shows the proper
central baryon density as a function of $\langle W_0^2 - 1 \rangle$. Results are
given for two different EOS's; one for which M$_C = 1.64$ M$_\odot$;
and one with M$_C = 1.70$ M$_\odot$. Also shown for
comparison are central densities as a function of
average values for $(W^2 - 1)$ from 3D calculations.
We see that the basic trend of increasing central density with
increasing orbital motion is reproduced, although the
central density in the 1D calculations is
about 2\% higher than the 3D calculations for the
same average
$\langle W_0^2 - 1 \rangle$ factor. This suggests that replacing the distribution
in $W$ with a mass weighted average value slightly overestimates
the effect. Nevertheless, this approach is sufficiently accurate
to apply to the schematic parameter study of interest here.
\placefigure{fig2}
\section{Heating}
In WMM96
the released binding energy
was assumed to be deposited only in increased fermi energy.
Any thermal excitation was assumed to be radiated away so that the stars
remained cold.
However, it is not necessarily true that the input energy
above the increasing fermi energy
goes into thermal energy or that it is efficiently radiated away.
If this energy were not
dissipated, the stars could simply oscillate about
the equilibrium rather than collapse.
We argue, however, that it seems most likely that
such oscillations would be
quickly damped relative to the time scale for
inspiraling. Initially, the radial changes will
be quite small, and the coupling of radial motion to
thermal excitation
could occur, for example, via star quakes in analogy with
observed pulsar glitches. As the
rate of energy release becomes more rapid
and the crust melts,
we speculate that the coupling of radial modes
with the orbital motion, nonradial fluid motion, and tidal forces will lead to
a complex excitation of higher modes and shocks
which could further heat the star and increase the entropy.
Also, the coupling of radial modes with the magnetic fields
could damp the oscillations.
Eventually, as the stars become hot enough,
$T \sim 1$ MeV, neutrino viscosity will serve
to damp the radial motion, but this would be late in the evolution.
As these dissipative processes come into play it seems
plausible that significant thermal energy could be excited.
If the thermal energy is efficiently radiated away, then
the stars will remain near zero temperature
and the previous calculations are valid. However,
it is also possible that the energy may not be
radiated away as rapidly as it is released,
in which case the
damped kinetic energy will be converted into both increased Fermi energy and
thermal energy. An upper limit to the temperature of the
star would be that corresponding to no
radiation during the compression. In the present
work we estimate the
possible heating and radiation of the stars
as they adjust to the changing orbit $\langle W_0^2 - 1\rangle$ factor
and tidal forces.
If there is sufficient heating, the
stars may produce an associated neutrino and/or
electromagnetic signal as they compress.
Hence, it is of interest to estimate the possible heating
of the stars as released binding energy is converted to
internal energy.
One can make a simple estimate (Mathews \& Wilson 1996)
for the heating from the change in gravitational binding
energy as the stars compress in the 3D calculations.
From Table \ref{paramtable},
as $J$ changes by $4 \times 10^{10}$ cm$^2$
in the three-dimensional numerical calculations
the angular momentum loss rate is
$\dot J \sim 1$ cm. The
time to radiate this change in
$J$ is $\Delta J/\dot J \sim 1.33$ sec.
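This timescale can be checked with a one-line conversion. The sketch below uses only the numbers quoted in the text, together with the fact that in geometrized units ($G = c = 1$) times carry units of length and convert to seconds through $c$:

```python
# Order-of-magnitude check of the angular-momentum radiation timescale,
# in geometrized units (G = c = 1, lengths in cm).  The values of
# delta_J and J_dot are taken directly from the text.

C_LIGHT = 2.998e10  # cm/s, converts geometrized time (cm) to seconds

delta_J = 4e10   # cm^2, change in J between the table entries
J_dot = 1.0      # cm, approximate angular momentum loss rate

t_geom = delta_J / J_dot     # geometrized time, cm
t_sec = t_geom / C_LIGHT     # seconds

print(f"t = {t_sec:.2f} s")  # ~1.33 s, matching the estimate in the text
```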
From the change of binding energy with
central density for single stars for a particular EOS,
it is possible (Mathews \& Wilson 1996)
to estimate the energy available for heating of the stars
after increasing the Fermi energy.
This has an associated change in
baryonic mass, hence it is only an order of magnitude estimate
which we improve upon below.
Nevertheless, this change in binding energy could correspond to
a release of as much as
$6 \times 10^{52}$ ergs in thermal energy and a heating rate
of $5 \times 10^{52}$ ergs s$^{-1}$ per star. The corresponding
average energy over the stars could be $2 \times 10^{19}$ ergs g$^{-1}$.
If this energy were injected into a star on the verge
of collapse, (central density of $2.42 \times 10^{15}$ g cm$^{-3}$)
it would heat the core to
a temperature $T \approx 45$ MeV (assuming that the core
has the heat capacity of a degenerate Fermi gas of neutrons).
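This temperature estimate can be reproduced with a short independent calculation. The sketch below assumes only the standard low-temperature thermal energy of a nonrelativistic degenerate Fermi gas, $E_{th}/N \approx (\pi^2/4)\,T^2/E_F$; the density and deposited energy are the values quoted above, and the CGS constants are standard:

```python
import math

# Cross-check of the core-temperature estimate: deposit ~2e19 erg/g into a
# degenerate, nonrelativistic neutron gas at the quoted central density and
# solve E_th per neutron = (pi^2/4) T^2 / E_F for T.

HBAR = 1.055e-27    # erg s
M_N = 1.675e-24     # g, neutron mass
MEV = 1.602e-6      # erg per MeV

rho = 2.42e15       # g/cm^3, central density near instability (from text)
e_per_gram = 2e19   # erg/g, deposited thermal energy (from text)

n = rho / M_N                                  # neutron number density, cm^-3
k_f = (3.0 * math.pi**2 * n) ** (1.0 / 3.0)    # Fermi wavenumber, cm^-1
e_fermi = (HBAR * k_f) ** 2 / (2.0 * M_N)      # nonrelativistic Fermi energy, erg

e_per_neutron = e_per_gram * M_N               # erg per neutron
T = math.sqrt(4.0 * e_per_neutron * e_fermi / math.pi**2)  # erg

print(f"T ~ {T / MEV:.0f} MeV")                # ~45 MeV, as quoted in the text
```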
This much heating could lead to copious neutrino emission
and may provide a framework in which to produce
gamma-ray or X-ray bursts.
Hence, there is motivation to numerically study
this possible heating. We do this in the spherical calculations
by imposing an energy conserving damping term
in the equations of motion.
This damping relaxes
the stars to their new equilibrium.
The damped kinetic energy is added as
internal energy. By integrating the
rate at which kinetic energy is damped into
Fermi and thermal energy, and estimating the fraction which
can be subsequently radiated away,
we get a measure of the
possible heating of the star before collapse.
\section{Results}
\subsection{Analysis of the Collapse Instability}
With the above 1D approximation
to the effects of orbital motion, we can make
a systematic study of stellar stability
as a function of mass, EOS, and $\langle W_0^2 - 1\rangle$ factor.
First we set
$\langle W_0^2 - 1\rangle = 0$ and run a hydrodynamic calculation
at zero temperature with velocity damping until
an equilibrium configuration is achieved
for a given gravitational mass and EOS. Then we increase
$\langle W_0^2 - 1\rangle$ in small increments and evolve the star
hydrodynamically with conservative damping. That is, the
damped kinetic energy is added to the internal and thermal energy as
the calculation proceeds.
For stable stars,
we generally observe that the kinetic energy at first rises
to a maximum and then damps to zero in the hydrodynamic simulations.
For a star which
has reached the collapse instability, however, the kinetic
energy first rises to a maximum then falls to a minimum
and then begins to rise again as the collapse ensues.
Hence, we define the collapse instability for this systematic
study as the point at which the
radial kinetic energy begins to increase with time rather than
relaxing to zero. We wish to analyze the heating and neutrino
emission up to this point. Once the instability is reached, the
stars quickly collapse and much of the subsequent heating or neutrino emission
becomes lost below the event horizon.
Figures \ref{fig3}-\ref{fig5} show the central value of the
lapse $\alpha$ and the released
gravitational energy (in units of 10$^{53}$ ergs) $E_{53}$ as
$\langle W_0^2 - 1\rangle$ increases from zero to the point at which
dynamical collapse is evident.
Instability in these calculations is also
manifest by $\alpha$ decreasing rapidly once
it falls below some critical value. In these calculations this instability
occurs as $\alpha \rightarrow 0.4-0.5$ depending upon
the value of $(W^2 - 1)$, the initial mass, and the EOS.
A similar phenomenon has been observed
in previous studies of spherical stars in the
isotropic gauge. It was noted (Wilson 1979)
that once $\alpha < 0.5$ unstable collapse generally ensues,
but of course those calculations had no $(W^2 - 1)$ effect.
We see the same $\alpha < 0.5$
limit as the mass is increased with $(W^2 - 1) = 0$. The
central $\alpha$ in the 3D orbit calculations near collapse is
also about 0.4.
The size of the Lorentz factor at instability and the amount of input thermal
energy increases with initial gravitational mass M$_G$ and critical mass
M$_C$ of the EOS as one would expect.
The released energy rises approximately
quadratically with $\langle W_0^2 - 1\rangle$. The total
energy released up to the instability point increases with decreasing
mass and increasing M$_C$ of the EOS.
\placefigure{fig3}
\placefigure{fig4}
\placefigure{fig5}
\subsection{Temperature and Neutrino Luminosity}
The absolute surface neutrino luminosity will depend upon
details of the neutrino transport from the interior.
It should scale, however, with surface temperature according to a
Stefan--Boltzmann law, $L_s \propto T_s^4$,
and the total luminosity should scale with the
surface radius $r$ and the lapse $\alpha_s$ at the surface,
$ L_{tot} \propto L_s r^2 \alpha^2_s~~.$
If the luminosity
is not as great as the rate at which
thermal energy is added to the stars, then central temperatures
and the associated neutrino luminosity could be quite high.
To estimate the luminosity and temperature as a function of time
consider the Newtonian angular frequency,
\begin{equation}
\omega^2 \approx { m \over r^3}~~,
\label{omega2}
\end{equation}
and deceleration
due to quadrupole radiation (cf. Blanchet et al. 1995),
\begin{equation}
\dot \omega = {96 \over 20} m^{5/3} \omega^{11/3}~~.
\label{omegadot}
\end{equation}
The evolution of $\omega$ from earlier times
($t < 0$) up to a value $\omega_0$ can be estimated
by integrating Eq.~(\ref{omegadot}).
\begin{equation}
\omega = {\omega_0 \over (1 - (64/5) m^{5/3} \omega_0^{8/3} t)^{3/8}}~~.
\label{omegat}
\end{equation}
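The closed form in Eq.~(\ref{omegat}) can be verified directly against the deceleration equation it integrates. The sketch below does this by forward-Euler integration with illustrative values of $m$ and $\omega_0$ in geometrized units (these are not values from the tables):

```python
# Numerical check that the closed-form omega(t) above solves
# d(omega)/dt = (96/20) m^(5/3) omega^(11/3).
# Geometrized units; m and omega_0 are illustrative, not from the paper.

m = 1.0
omega0 = 1.0e-2

def omega_closed(t):
    """Closed-form solution, normalized so that omega(0) = omega_0."""
    return omega0 / (1.0 - (64.0 / 5.0) * m**(5.0/3.0)
                     * omega0**(8.0/3.0) * t)**(3.0/8.0)

# integrate the ODE forward from t = -T_span up to t = 0
T_span = 1.0e4
n_steps = 100_000
dt = T_span / n_steps
coef = (96.0 / 20.0) * m**(5.0/3.0)
w = omega_closed(-T_span)            # start from the closed-form value at -T
for _ in range(n_steps):
    w += dt * coef * w**(11.0/3.0)

assert abs(w - omega0) / omega0 < 1e-3   # Euler result matches the closed form
```

Note that the coefficient $64/5$ in the closed form is just $(8/3)\times(96/20)$, which is why the two expressions are mutually consistent.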
From Eq.~(\ref{omega2}) the time
dependence of $(m/r)$ is
\begin{equation}
{ m \over r} = {(m \omega_0)^{2/3} \over
(1 - (64/5) m^{5/3} \omega_0^{8/3} t)^{1/4}}~~.
\end{equation}
Since $(W^2 - 1)$ can be thought of
as a kind of specific kinetic energy, it should scale
as $m/r$ in a stable Keplerian orbit.
\begin{equation}
W^2 - 1 = {\sum U_i^2 \over \phi^4} \propto {m\over r}~~.
\end{equation}
From this we get an approximate time history
\begin{equation}
\langle W_0^2 - 1\rangle = {\langle W_{3D}^2 - 1 \rangle \over
(1 - (64/5) m^{5/3} \omega_0^{8/3} t)^{1/4}}~~,
\label{wnorm}
\end{equation}
where Eq.~(\ref{wnorm}) is normalized to give the average
$\langle W_{3D}^2 - 1\rangle$ factor from the 3D calculation
which has an angular velocity
$\omega_0$.
From this relation of $\langle W_0^2 - 1\rangle$
as a function of time it is now possible
to construct a schematic picture of the luminosity
as a function of time. Ideally, one would like
to model the detailed thermal neutrino production
and transport as released gravitational energy is
deposited in the interior. Although we have begun
such a calculation, a detailed modeling of the neutrino
transport is quite challenging and it will take
some time before a systematic study can be completed.
Nevertheless, we can gain qualitative insight
into the signal expected from a simple schematic model
as follows:
From (Figs. \ref{fig3}-\ref{fig5})
we note that the total input thermal energy
grows quadratically with $\langle W_0^2 - 1\rangle$,
\begin{equation}
E_{in} \propto \langle W_0^2 - 1\rangle^2~~.
\label{ethw}
\end{equation}
To convert this thermal energy into a luminosity
we assume that the neutrino flux through any
surface at radius $r$ should scale as,
\begin{equation}
F_\nu \propto \biggl({r^2 T_0^2 \over \rho \kappa_0 T^2}\biggr)
{ d(T^4) \over d r} \propto
{r^2 \over \rho} T \ {d T \over d r} ~~,
\label{flux1}
\end{equation}
where $\kappa_0$ is the opacity evaluated at $T_0$.
The net neutrino flux passing through $r$
is due to neutrinos diffusing throughout the volume interior to
$r$. To approximate the effective temperature of the flux
passing through $r$ we assume that the thermal energy
is deposited uniformly in mass throughout the star.
The neutrino flux at any radius $r$ is then taken to be
proportional to the mass interior to $r$,
\begin{equation}
F_\nu \propto m(r)~~,
\label{flux2}
\end{equation}
where,
\begin{equation}
m(r) = 4 \pi \int_0^r \rho r'^2 dr'~~.
\end{equation}
Equations (\ref{flux1}) and (\ref{flux2}) define
an effective neutrino temperature profile with radius,
(cf. Fig \ref{fig6}),
\begin{equation}
T(r) = A \biggl[\int_r^R {m(r') \over r'^2}\, dr' \biggr]^{1/2} ~~,
\label{Tr}
\end{equation}
where $A$ is a normalization constant.
We then solve equations (\ref{flux1}) through (\ref{Tr})
under the boundary condition that $T = 0$
outside the star, and by
equating the total thermal energy at any given time to the
integrated thermal internal energy per gram $\epsilon (T)$
for a given temperature and density
profile in our equation of state, i.e.
\begin{equation}
\int_0^R \epsilon (T) 4 \pi r^2 \rho dr = E_{th}~~.
\label{enorm}
\end{equation}
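To illustrate how the profile shape and the energy normalization work together, the following sketch carries out the construction for a toy uniform-density star, taking the thermal energy per gram as $\epsilon = c_T T^2$ (the degenerate-gas scaling used below). The radius, density, heat-capacity coefficient, and total thermal energy are illustrative assumptions, not values from the paper's EOS:

```python
import math

# Toy version of the temperature-profile construction: uniform density,
# T(r)^2 proportional to the integral of m(r')/r'^2 from r to the surface,
# normalized so the integrated thermal energy (epsilon = c_T T^2) equals E_th.
# All numerical inputs below are illustrative assumptions.

R = 1.0e6            # cm, stellar radius (illustrative)
rho = 1.0e15         # g/cm^3, uniform density (illustrative)
c_T = 1.0e16         # erg g^-1 MeV^-2, toy heat-capacity coefficient
E_th = 1.0e53        # erg, total thermal energy to deposit

def m_enc(r):
    """Enclosed mass for uniform density."""
    return 4.0 / 3.0 * math.pi * rho * r**3

def t2_shape(r):
    """Unnormalized T(r)^2: integral of m(r')/r'^2 from r to R (analytic here)."""
    return 2.0 / 3.0 * math.pi * rho * (R**2 - r**2)

# normalization A^2 from: integral of 4 pi r^2 rho c_T (A^2 t2_shape) dr = E_th
n = 1000
dr = R / n
norm_int = sum(4.0 * math.pi * (i * dr) ** 2 * rho * c_T * t2_shape(i * dr) * dr
               for i in range(n))
A2 = E_th / norm_int

T_central = math.sqrt(A2 * t2_shape(0.0))   # MeV, with the toy c_T
print(f"central T ~ {T_central:.1f} MeV, surface T = 0")
```

With these toy inputs the central temperature comes out at several tens of MeV and falls to zero at the surface, qualitatively the shape shown in Fig.~\ref{fig6}.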
The radial temperature profile of a 1.40 M$_\odot$ star, just as
the collapse instability is reached, is shown
in Figure \ref{fig6} for the equation of
state with M$_C = 1.70$ M$_\odot$. For this EOS central temperatures
as high as $T \approx 70$ MeV are possible.
The neutrino flux is then calculated using Eq.~(\ref{flux1}) with
the proper coefficients included. The flux near the surface
then gives the luminosity.
The luminosity can then be used to
define an effective neutrino temperature,
\begin{equation}
L_{\nu} = 4 \pi R^2 {a c T_{eff}^4 \over 4} \biggl({11 \over 4}\biggr)~~.
\label{ltot}
\end{equation}
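Inverting this relation gives a quick estimate of the effective temperature implied by a given luminosity. In the sketch below the luminosity and radius are illustrative round numbers (not values from the tables); the radiation constant and unit conversions are standard CGS values, and the factor $11/4$ counts the neutrino species as in Eq.~(\ref{ltot}):

```python
import math

# Solve L = 4 pi R^2 (a c T^4 / 4)(11/4) for the effective temperature.
# L and R below are illustrative, not values from the tables.

A_RAD = 7.566e-15     # radiation constant, erg cm^-3 K^-4
C_LIGHT = 2.998e10    # cm/s
K_PER_MEV = 1.160e10  # kelvin per MeV

L = 1.0e53            # erg/s, of order the peak combined luminosity
R = 1.0e6             # cm, a 10 km neutron star radius

T4 = L / (4.0 * math.pi * R**2 * (A_RAD * C_LIGHT / 4.0) * (11.0 / 4.0))
T_eff_MeV = T4**0.25 / K_PER_MEV

print(f"T_eff ~ {T_eff_MeV:.1f} MeV")
```

An effective temperature of several MeV for $L \sim 10^{53}$ ergs s$^{-1}$ is of the same order as the surface temperatures discussed later in the text.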
\placefigure{fig6}
As the stars compress, released gravitational energy can be
deposited as internal energy to be radiated away by neutrinos.
The rate of accumulated thermal energy
is then given by the balance between the rate at which gravitational
energy is deposited as thermal energy
from the contraction of the stars $\dot E_{in}$
and the rate of energy loss $L_\nu$ by neutrinos.
We find that the temperature profile as determined above
is consistent with a scaling $E_{th} \sim T^{2}$, where
$T$ is evaluated from Eq.~(\ref{Tr}) near
the surface. This scaling arises because the system
can be approximated as a degenerate nucleon gas.
Also, from the flux scaling (Eq.~\ref{flux1})
we find that $dT/dr \propto T$,
so that $L_\nu \sim T^2$.
Hence, both $E_{th}$ and $L_{\nu}$
scale with the surface temperature $T^2$, and we can write
\begin{equation}
L_{\nu} \propto T^2 \approx k E_{th}~~.
\end{equation}
The evolution of the thermal energy can then be written,
\begin{equation}
\dot E_{th} = \dot E_{in} - k E_{th} ~~.
\end{equation}
The constant $k$ is evaluated from Eq.~(\ref{flux1}) above,
and the rate of deposited thermal
energy is evaluated from Eqs.~(\ref{wnorm}) and (\ref{ethw}).
\begin{equation}
\dot E_{in} \approx {d E_{in} \over d \langle W_0^2 - 1\rangle}
{d \langle W_0^2 - 1\rangle \over dt}~~.
\end{equation}
The analytic solution for the heating of the star then becomes,
\begin{equation}
E_{th} \approx e^{-k t} \int \dot E_{in} e^{k t'} dt'~~.
\end{equation}
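The integrating-factor solution above can be checked numerically. For a constant heating rate $\dot E_{in} = b$ it reduces to $E_{th}(t) = (b/k)(1 - e^{-kt})$; the sketch below integrates the balance equation directly and compares with that closed form, using arbitrary illustrative values of $b$ and $k$:

```python
import math

# Check of the integrating-factor solution:
# dE_th/dt = Edot_in - k*E_th  =>  E_th = e^{-kt} * int_0^t Edot_in e^{kt'} dt'.
# For constant Edot_in = b this gives E_th(t) = (b/k)(1 - e^{-kt}).
# The values of b and k below are illustrative, not fits to the paper.

b = 5.0e52   # erg/s, illustrative heating rate
k = 2.0      # s^-1, illustrative radiative-loss coefficient

def analytic(t):
    return (b / k) * (1.0 - math.exp(-k * t))

# forward-Euler integration of the ODE from E_th(0) = 0 up to t = 1 s
E = 0.0
dt = 1.0e-5
n_steps = 100_000
for _ in range(n_steps):
    E += dt * (b - k * E)

assert abs(E - analytic(1.0)) / analytic(1.0) < 1e-3
print(f"E_th(1 s) ~ {E:.3e} erg")
```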
Figure \ref{fig7} shows
the estimated neutrino luminosity
$L_{\nu}$ (in units of $10^{53}$ ergs sec$^{-1}$),
the total accumulated internal energy, $E_{th}$ (in units of $10^{53}$ ergs),
and the rate of gravitational energy release, $\dot E_{in}$
(in units of $10^{53}$ ergs sec$^{-1}$),
for a 1.40 M$_{\odot}$ star with the M$_C = 1.70$ M$_\odot$ EOS. These
quantities are plotted as a function
of time over the last 1.5 sec before collapse, where the time scale is
defined by Eq.~(\ref{omegat}).
The total luminosity can become quite significant as the
collapse instability is approached.
About 4 sec before collapse, the neutrino luminosity from each star rises
above 10$^{51}$ ergs sec$^{-1}$.
The luminosity exceeds 10$^{52}$ ergs sec$^{-1}$
about 0.5 sec before collapse. The combined neutrino luminosity from
the two stars ultimately reaches nearly 10$^{53}$ erg sec$^{-1}$
before collapse. This is comparable to the neutrino emission
from type II supernovae, but in this case the emission
is from bare neutron stars.
\placefigure{fig7}
In the quadrupole approximation (cf. Thorne 1980; WMM96),
the gravity wave luminosity $\dot E_{GW}$
scales with the square of the $(l+1)$th time derivative of the mass
quadrupole moment,
$\dot E_{GW} \sim Q^2 \omega^6 \approx (m/r)^5$. Thus, we write,
\begin{equation}
\dot E_{GW} = {\dot E_{GW}^{3D} \over (1 - (64/5) m^{5/3}
\omega_0^{8/3} t)^{5/4}}~~.
\end{equation}
From this we estimate that the power (and angular momentum) lost in
neutrinos will exceed the energy loss in gravity waves for
roughly 3 hours before collapse. This means that the late evolution
up to collapse may not be determined by the rate of
gravitational radiation
but by the hydrodynamics and heat transport of the compressing
neutron stars.
If the radiative momentum loss dominates at early times one may
wonder whether it could be observed as an increased slow down rate
in the orbit of a known binary pulsar.
We have estimated how much the orbit period of the binary pulsar
PSR-1913+16 would
be affected. Damour \& Taylor (1991) have determined that
the ratio of the observed orbit period change to the
general relativistic prediction is
$\dot P^{obs}/ \dot P^{GR}
= 1.0081 \pm 0.0022$(galactic) $\pm 0.0076$(observational).
We estimate that any orbit period change from
compressional heating is at least two orders of magnitude
below the observational error.
\section{Conclusion}
We have made a survey of the compression, heating, and
collapse of neutron stars in close binaries. In particular,
we have developed a schematic model to describe
when the collapse instability may
occur as a function of initial neutron star mass and the EOS.
We have also analyzed the
possible heating of the neutron star interiors
as the stars approach the collapse instability.
We find that the stars may
obtain quite high thermal energy and neutrino luminosity
in the final seconds before collapse. This could have significant
implications both for gravity wave and neutrino astronomy
as follows:
\subsection{Implication for Gravity Wave Detectors}
The analysis here indicates that
the radiative neutrino luminosity
could exceed the gravity luminosity for hours
prior to the collapse instability.
If so, this could have a profound influence
on the inferred gravity wave signal. The loss of orbital
angular momentum due to neutrinos
and electromagnetic radiation will be considerably
greater than that of two cold stable neutron stars.
The merger will occur on a shorter time scale, and up to the point of
the instability the evolution will be dominated by the dynamics of
heating and thermal radiation rather than by gravity wave emission.
Once the collapse instability is reached, we estimate that the
formation of one or two black holes will occur rather abruptly.
After collapse, however, the system may not appear simply as
two black holes in vacuum. As has been observed
in supernova calculations for some time (cf. Mayle \& Wilson 1988;
Wilson \& Mayle 1993)
this much neutrino radiation is likely to ablate electron-positron
pairs together with baryonic
material from the surface of the stars.
Baryons ejected in this wind are
likely to be present after collapse and may interact with the
orbiting objects. To some extent they will provide
material to accrete onto the remaining members (neutron star or black holes)
of the binary.
They may also provide a damping medium which could accelerate
the decay of the orbit. Thirdly, this
hot wind material may provide a medium in which to anchor
the magnetic field lines of the precollapse stars (cf. Wilson 1975;
Ruffini \& Wilson 1975; Damour et al. 1978).
We speculate that these effects may serve to accelerate the
merger of the two black holes. The interaction of the stars
with this medium may affect the dynamics of
the black hole inspiral unless the
material is ejected with sufficiently high velocity.
Clearly, this is an area which warrants
further investigation. If our speculation is correct then
the gravity wave signal becomes a probe
of the EOS, hydrodynamics, and thermodynamics of the neutrons
stars as they approach and pass through this collapse instability.
\subsection{Implications for Gamma-Ray Bursts}
The possibility that gamma-ray bursts could be generated
by neutrino emission from coalescing
neutron stars has been speculated upon for some time.
Recently, Janka \& Ruffert (1996) have made post-Newtonian
hydrodynamics calculations of neutron star mergers and
included the neutrino emission therefrom.
They find high luminosities, but the time scales are so short ($\sim
$ msec) that they conclude that it will be difficult to model
gamma-ray bursts by neutron star mergers. This short time
scale stems from the time scale for mergers.
This difficulty
is avoided, however, in our model in which the time scale is set by
the gradual compression of the stars. We estimate
similar luminosities, but in our model the neutrino
luminosity endures for much longer times, thus rendering
the possibility of a gamma-ray burst more plausible.
We have shown that significant
heating and associated
neutrino luminosity is possible in the last seconds before the
collapse instability. This poses some interesting possibilities
for cosmological models of gamma ray bursts. The thermal emission
of neutrinos provides an environment for the
generation of an $e^+-e^-$
pair plasma by $\nu \bar \nu$ annihilation
around the stars. The neutrino
emission is occurring in the deepening gravitational
well of the two stars. Their interactions will be
enhanced by the curved space around the neutron stars.
Furthermore, the region between the stars may provide an environment
for the build up of neutrino and matter flux and the
production of a pair plasma as desired in some gamma-ray
burst scenarios (e.g. Piran \& Shemi 1993).
In addition to the collapse-induced neutrino emission itself,
the escaping neutrinos are likely to generate
a neutrino heated baryon wind from the stars (Mayle
\& Wilson 1988; Wilson \& Mayle 1993).
Unlike in supernovae, the velocity of this wind can be quite high, particularly
later in the evolution as the neutrino luminosity grows.
This later emission of the high velocity wind could interact
with matter emitted previously producing shock heating in
environments of relatively low optical depth far from the stars.
The interactions themselves may contribute to the production
of a pair plasma.
As a preliminary test of this scenario
we ran a calculation of neutron stars
instantly heated such that the surface temperature was $\sim 5 $ MeV.
We then followed the neutrino and matter transport using the numerical
supernova model of Mayle \& Wilson (1993).
We observed a blow off of the outer layer
($\sim 10^{-5}$ M$_\odot$) of the neutron star. This material
was accelerated to a speed corresponding to a
relativistic $\gamma$-factor of $\sim10$. One
possibility is that this high speed matter interacting
with magnetic fields and/or interstellar clouds might produce $\gamma$-rays.
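For orientation, a rough order-of-magnitude estimate (ours, not part of the simulation) of the kinetic energy carried by this ejecta follows from the quoted ejected mass and Lorentz factor,

```latex
E_{\rm kin} \simeq (\gamma - 1)\,\Delta M\,c^{2}
\simeq 9 \times \left(10^{-5}\,{\rm M}_\odot\right)
\left(1.8\times10^{54}\,{\rm erg}\,{\rm M}_\odot^{-1}\right)
\simeq 1.6\times10^{50}\,{\rm erg} \ ,
```

an energy budget comparable to the energies commonly invoked for cosmological gamma-ray bursts.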
Finally, we also note that after
collapse, the previously ejected material will continue to
experience heating
either by accretion onto the black holes or by
ram pressure from the orbiting stars.
Once present, this plasma might become anchored
to magnetic field lines around the precollapse stars
(Ruffini \& Wilson 1975; Damour et al. 1978).
The interactions and magnetic recombination of these
field lines could also contribute to heating and pair plasma
production.
All of these processes may be occurring in the background of the
remaining orbiting binary system from times prior to
collapse until the final merger to a single black hole.
This orbit period may lead to an underlying
millisecond substructure in associated burst signals
possibly consistent with observations.
Clearly, this is an area which also warrants more investigation.
Work along this line is underway to explore such effects as
a possible framework in which to model cosmological
gamma-ray bursts.
\acknowledgments
The authors wish to thank P. Marronetti for useful discussions
and contributions to this work.
Work at University of Notre Dame
supported in part by DOE Nuclear Theory grant DE-FG02-95ER40934
and by NASA CGRO Grant NAG5-3123 and NA
Work performed in part under the auspices
of the U.~S.~Department of Energy
by the Lawrence Livermore National Laboratory under contract
W-7405-ENG-48 and NSF grant PHY-9401636.
\newcommand{\sect}[1]{\setcounter{equation}{0}\section{#1}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}}
\newcommand{\subsect}[1]{\setcounter{equation}{0}\subsection{#1}
\renewcommand{\theequation}{\arabic{section}.\arabic{subsection}.\arabic{equation}}}
\newcommand{\app}{\setcounter{section}{0}
\setcounter{equation}{0} \renewcommand{\thesection}{APPENDIX
\Alph{section}}\renewcommand{\theequation}{\Alph{section}.
\arabic{equation}}}
\usepackage{amssymb}
\usepackage{graphicx}
\journal{Physica A}
\bibliographystyle{elsarticle-num}
\begin{document}
\begin{frontmatter}
\title{$\kappa$-Deformed Fourier Transform}
\author{A.M. Scarfone}
\address{Istituto dei Sistemi Complessi (ISC-CNR) c/o
Politecnico di Torino\\ Corso Duca
degli Abruzzi 24, 10129 Torino, Italy.}
\ead{antoniomaria.scarfone@cnr.it}
\begin{abstract}
We present a new formulation of the Fourier transform in the picture of the $\kappa$-algebra derived in the framework of $\kappa$-generalized statistical mechanics. The $\kappa$-Fourier transform is obtained from a $\kappa$-Fourier series recently introduced by us [2013 Entropy {\bf15} 624]. The kernel of this transform, which reduces to the usual exponential phase in the $\kappa\to0$ limit, is composed of a $\kappa$-deformed phase and a damping factor that gives it a wavelet-like behavior. We show that the $\kappa$-Fourier transform is isomorphic to the standard Fourier transform through a change of the time and frequency variables. Nevertheless, the new formalism is useful to study, according to Fourier analysis, those functions defined in the realm of the $\kappa$-algebra. As a relevant application, we discuss the central limit theorem for the $\kappa$-sum of $n$-iterate statistically independent random variables.
\end{abstract}
\begin{keyword}
Fourier integral transform, log-periodic oscillations, $\kappa$-deformed algebra, power-law distribution.
\end{keyword}
\end{frontmatter}
\sect{Introduction}
As is well known, the Fourier transform is a very useful tool widely employed in mathematical statistics and physics. It consists of a linear operator acting on a function $f(x)$ of a real
argument $x$ and transforming it into a function $\widehat f(\omega)\equiv{\cal F}[f(x)](\omega)$ of a complex argument $\omega$:
\begin{eqnarray}
{\cal F}[f(x)](\omega)={1\over\sqrt{2\,\pi}}\int\limits_{-\infty}\limits^{+\infty}f(x)\,e^{-i\,x\,\omega}\,dx \ .
\end{eqnarray}
More generally, it belongs to the family of integral transforms defined by
\begin{eqnarray}
{\cal H}[f(x)](\omega)=\int\limits_{-\infty}\limits^{+\infty}f(x)\,h(x,\,\omega)\,dx \ ,\label{standard}
\end{eqnarray}
like the Laplace, Mellin and Hankel transforms, with the kernel $h(x,\,\omega)$ given by a complex exponential-like form. \\
The Fourier transform finds applications in many fields of science, ranging from mathematics to physics and engineering. In statistical physics, and more generally in probability theory, it is applied in the study of random walks and of infinitely divisible distributions, in the analysis of the stability of distributions and of their domains of attraction, as well as in large deviation theory and in the proof of several central limit theorems \cite{Feller}.\\
In quantum mechanics, the Fourier transform links the representation of a wave function in configuration space with its dual representation in momentum space. In addition, it is a powerful tool widely used in quantum field theory to evaluate the Green function of a quantum propagator, thanks to its property of turning derivatives into multiplicative factors.\\
Its discrete version is largely applied in signal theory \cite{Oppenheim}, especially after the introduction of fast algorithms, the so-called fast Fourier transform, which speed up the computation and allow the processing of large amounts of data; this has permitted the development of efficient data-compression algorithms and the realization of high-definition imaging devices.\\ In quantum computing, the Fourier transform is a key ingredient of quantum protocols for the factorization of large numbers and for solving the discrete logarithm problem \cite{Nielsen}.
In the past, several versions of the Fourier transform have been proposed in the literature by using deformed algebraic structures: for instance, in the framework of quantum groups, the Hopf algebra underlying the braiding groups has been used to define a Fourier transform in noncommutative space \cite{Schirmacher}. In \cite{Coulembier}, within the {\em basic}-calculus, a {\em basic}-analogue of the standard Fourier transform has been advanced, defined by
\begin{eqnarray}
{\cal F}[f(x);\,q](\omega)=\int\limits_{-\infty}\limits^{+\infty}f(x)\exp(x|\omega;q)\,d_qx \ ,
\end{eqnarray}
that is realized by employing a complex version of the {\em basic}-exponential $\exp(x;\,q)$ \cite{Exton} and the {\em basic}-integral $\int d_qx$ \cite{Gasper}.\\
On a different ground, in the framework of statistical mechanics based on the Tsallis entropy \cite{Tsallis}, the nonlinear Fourier transform \cite{Umarov} defined by
\begin{eqnarray}
{\cal F}_q[f(x)](\omega)=\int\limits_{-\infty}\limits^{+\infty}f(x)\otimes_q
\exp_q(-i\,x\,\omega)\,dx \ ,\label{qF}
\end{eqnarray}
has been introduced by using the $q$-exponential \cite{Tsallis1} and the $q$-product \cite{Borges}. This transform was designed with the purpose of investigating a possible generalization of the central limit theorem within $q$-statistics, although the final results are still controversial \cite{Hilhorst,Umarov1}.
In this paper, we revisit the standard Fourier transform in a manner that is consistent with the $\kappa$-algebra and the $\kappa$-calculus derived in the framework of $\kappa$-statistical mechanics. This formalism, together with the $\kappa$-entropy, has been introduced in \cite{Kaniadakis0} to generalize the Boltzmann-Gibbs entropy and the related theory, with the aim of describing non-Gibbsian systems characterized by power-law distributions. In the last decade, several papers have been written on $\kappa$-statistical mechanics concerning its foundations, its theoretical consistency and its possible applications to physical and physical-like systems \cite{Wada,Wada1,Quarati,Pereira,Scarfone,Trivellato} (see \cite{Kaniadakis1} for an up-to-date reference list).\\
The new formulation of Fourier transform (hereinafter $\kappa$-Fourier transform), is obtained starting from a $\kappa$-deformed version of Fourier series recently proposed in \cite{Scarfone-1}, given by the following linear operator
\begin{eqnarray}
{\cal F}_\kappa[f(x)](\omega)={1\over\sqrt{2\,\pi}}\int\limits_{-\infty}\limits^{+\infty}f(x)\,
\exp_\kappa(-x\otimes_\kappa\omega)^i\,d_\kappa x \ ,\label{new}
\end{eqnarray}
and is based on the $\kappa$-exponential and the related $\kappa$-algebra introduced in \cite{Kaniadakis0} (see also \cite{Kaniadakis}), by replacing both the exponential kernel and the integration operator with their $\kappa$-deformed versions. Here and in the following, for the sake of notation, $\exp_\kappa(x)^a$ means $(\exp_\kappa(x))^a$. \\
Actually, Eq. (\ref{new}) can be rewritten in the form (\ref{standard}) of a standard integral transform over the real numbers with the kernel
\begin{eqnarray}
h_\kappa(x,\,\omega)={\exp(-i\,x_{\{\kappa\}}\,\omega_{\{\kappa\}})\over\sqrt{1+\kappa^2\,x^2}} \ ,
\end{eqnarray}
where $x_{\{\kappa\}}$ are the $\kappa$-numbers defined below \cite{Kaniadakis}.\\
The function $h_\kappa(x,\,\omega)$ contains a deformed phase factor $\exp(-i\,x_{\{\kappa\}}\,\omega_{\{\kappa\}})$, which has an asymptotically log-periodic behavior, and a damping factor $1/\sqrt{1+\kappa^2\,x^2}$, which gives the transform a wavelet-like behavior. Furthermore, transform (\ref{new}) is isomorphic to the standard Fourier transform, the isomorphism being established by a change of the time and frequency variables.
The plan of the paper is as follows. In the next Section 2, we summarize the main aspects of the $\kappa$-algebra and its related calculus. We introduce the Euler formula and the cyclic deformed functions in a manner consistent with the $\kappa$-formalism. In Section 3, we derive the new formulation of the Fourier transform based on the $\kappa$-algebra. A list of its main properties is also given, while a potential application concerning the central limit theorem for $\kappa$-sum of $n$-iterate statistically independent random variables is discussed in Section 4. Our conclusions are reported in Section 5.
\sect{Mathematical formalism of the $\kappa$-algebra}
In order to embody power-law distributions in statistical physics, the following generalized entropic form has been proposed in \cite{Kaniadakis0}
\begin{eqnarray}
S_\kappa[p]=-\int p(x)\,\ln_\kappa\big(p(x)\big)\,dx \ ,\label{k}
\end{eqnarray}
which mimics the Boltzmann-Gibbs entropy by replacing the standard logarithm with its generalized version
\begin{eqnarray}
\ln_\kappa(x)&=&
\frac{x^{\kappa}-x^{-\kappa}}{2\,\kappa}\equiv{1\over\kappa}\,\sinh\big(\kappa\,\ln(x)\big) \ .\label{lnk}
\end{eqnarray}
In agreement with the maximum entropy principle, the corresponding equilibrium distribution reads
\begin{eqnarray}
p(x)=\alpha\,\exp_\kappa
\left(-{1\over\lambda}\,(\gamma+\beta\,x)\right)
\ ,\label{dist}
\end{eqnarray}
where
\begin{eqnarray}
\exp_\kappa(x)&=&\left(\kappa\, x+\sqrt{1+\kappa^2\,x^2}\right)^{1/\kappa}\equiv\exp\left({1\over\kappa}\,{\rm
arcsinh}\,(\kappa\,x)\right) \ ,\label{expk}
\end{eqnarray}
is the $\kappa$-exponential, with $\ln_\kappa(\exp_\kappa(x))=\exp_\kappa(\ln_\kappa(x))=x$.\\
Both these functions reduce to the standard exponential and logarithm in the $\kappa\to0$ limit: $\ln_0(x)\equiv\ln(x)$ and $\exp_0(x)\equiv\exp(x)$ and consequently, in the same limit, Eqs. (\ref{k}) and (\ref{dist}) converge to the Boltzmann-Gibbs entropy and the Gibbs distribution, respectively.\\
We note that the shape of distribution (\ref{dist}) is characterized by an asymptotic power-law tail, since $\exp_\kappa(x)\sim x^{\pm1/\kappa}$ for $x\to\pm\infty$. Therefore, it differs from the exponential behavior of the Gibbs distribution, a fact that justifies the use of $\kappa$-statistical mechanics in the study of scale-free systems.\\
On a mathematical ground, the main features of $\kappa$-statistical physics follow from the analytical properties of the $\kappa$-exponential and the $\kappa$-logarithm \cite{Kaniadakis}.\\
For any $|\kappa|<1$, $\ln_\kappa(x)$ and $\exp_\kappa(x)$ are continuous, monotonic, increasing functions, normalized in $\ln_\kappa(1)=0$ and $\exp_\kappa(0)=1$, with $\ln_\kappa(\mathbb{R}^+)\subseteq \mathbb{R}$ and $\exp_\kappa(\mathbb{R})\subseteq \mathbb{R}^+$.
Another function useful in the formulation of the $\kappa$-deformed statistical mechanics is given by
\begin{eqnarray}
u_\kappa(x)=\frac{x^\kappa+x^{-\kappa}}{2}\equiv\cosh\big(\kappa\,\ln(x)\big) \ .\label{u}
\end{eqnarray}
For any $|\kappa|<1$, the function $u_\kappa(x)$ is continuous, with $u_\kappa(\mathbb{R}^+)\subseteq\mathbb{R}^+$, $u_\kappa(0)=u_\kappa(+\infty)=+\infty$,
and it reaches its minimum at $x=1$, where $u_\kappa(1)=1$. The function $u_\kappa(x)$, which reduces to a constant in the $\kappa\to0$ limit ($u_0(x)=1$), is strictly related to $\ln_\kappa(x)$ and $\exp_\kappa(x)$ and appears recurrently in the study of their analytical properties.\\
These $\kappa$-functions fulfil the following scaling laws
\begin{eqnarray}
&&\exp_\kappa(\mu\,x)=\exp_{\kappa'}(x)^\mu \ ,\label{se}\\
&&\ln_\kappa(x^\mu)=\mu\,\ln_{\kappa'}(x) \ ,\\
&&u_\kappa(x^\mu)=u_{\kappa'}(x) \ ,
\end{eqnarray}
where $\kappa'=\mu\,\kappa$. In particular, for $\mu=-1$ and accounting for the symmetry relations $\exp_\kappa(x) \equiv\exp_{-\kappa}(x)$, $\ln_\kappa(x)\equiv\ln_{-\kappa}(x)$ and $u_\kappa(x)\equiv u_{-\kappa}(x)$,
we obtain
\begin{eqnarray}
&&\exp_\kappa(x)\,\exp_\kappa(-x)=1 \ ,\label{eq1}\\
&&\ln_\kappa(x)+\ln_\kappa(1/x)=0 \ ,\label{eq2}\\
&&u_\kappa(x)-u_\kappa(1/x)=0 \ .\label{eq3}
\end{eqnarray}
Equations (\ref{eq1}) and (\ref{eq2}) reproduce the well-known relations of standard exponential and logarithm in the $\kappa$-formalism.
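Since the subsequent development leans heavily on these functions, their defining and symmetry relations are easy to check numerically. The following sketch (in Python; the helper names {\tt ln\_k}, {\tt exp\_k}, {\tt u\_k} are ours, not part of the formalism) implements $\ln_\kappa$, $\exp_\kappa$ and $u_\kappa$ and verifies the duality and symmetry relations above.

```python
import math

def ln_k(x, k):
    # kappa-logarithm: (x**k - x**(-k)) / (2k) = sinh(k*ln x) / k
    return math.sinh(k * math.log(x)) / k if k else math.log(x)

def exp_k(x, k):
    # kappa-exponential: exp(arcsinh(k*x) / k); inverse of ln_k
    return math.exp(math.asinh(k * x) / k) if k else math.exp(x)

def u_k(x, k):
    # u_kappa(x) = (x**k + x**(-k)) / 2 = cosh(k*ln x), with minimum u_k(1) = 1
    return math.cosh(k * math.log(x))

k = 0.3
# exp_k and ln_k are inverse to each other
assert abs(ln_k(exp_k(1.7, k), k) - 1.7) < 1e-12
# exp_k(x) * exp_k(-x) = 1   and   ln_k(x) + ln_k(1/x) = 0
assert abs(exp_k(2.0, k) * exp_k(-2.0, k) - 1.0) < 1e-12
assert abs(ln_k(3.0, k) + ln_k(1.0 / 3.0, k)) < 1e-12
# u_k(x) = u_k(1/x), and the kappa -> 0 limit recovers the ordinary exponential
assert abs(u_k(5.0, k) - u_k(0.2, k)) < 1e-12
assert abs(exp_k(1.0, 1e-8) - math.e) < 1e-6
```

All the assertions hold to machine precision, as expected from the closed-form identities.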
\subsect{$\kappa$-Algebra}
In \cite{Scarfone1}, it has been shown that, starting from a pair of continuous, monotonically increasing functions that are inverse to each other, we can introduce two different algebraic structures endowed with a generalized sum and product, which form two distinct Abelian fields. By specializing this result to the present case, we can define two deformed algebras, on the target space and on the probability space, respectively. Among them, the target-space algebra is the relevant one in this work and is revisited in the following, referring to \cite{Scarfone1} for a more detailed discussion.\\ To begin with, let us introduce the $\kappa$-numbers
\begin{eqnarray}
x_{\{\kappa\}}=\frac{1}{\kappa}\, {\rm arcsinh}
\,(\kappa\,x) \ ,\label{xk1}
\end{eqnarray}
and their dual
\begin{eqnarray}
x^{\{\kappa\}}=\frac{1}{\kappa}\,\sinh
\,(\kappa\,x) \ ,\label{xk2}
\end{eqnarray}
with
\begin{eqnarray}
\left(x_{\{\kappa\}}\right)^{\{\kappa\}}=\left(x^{\{\kappa\}}\right)_{\{\kappa\}}=x \ .\label{dual}
\end{eqnarray}
Mapping (\ref{xk1}) [resp. (\ref{xk2})] defines a non-uniform stretching of the real axis, so that the space of the $\kappa$-numbers $\mathbb{R}^\kappa$ is isomorphic to the space of the real numbers, with $(+\infty)_{\{\kappa\}}\equiv+\infty$, $(-\infty)_{\{\kappa\}}\equiv-\infty$ and $0_{\{\kappa\}}\equiv0$.\\
Generalized sum and product are defined on $\mathbb{R}^\kappa$ according to \cite{Scarfone-1}
\begin{eqnarray}
x\oplus_\kappa y=\left(x_{\{\kappa\}}+ y_{\{\kappa\}}\right)^{\{\kappa\}} \ ,\label{kpiu}\\
x\otimes_\kappa y=\left(x_{\{\kappa\}}\cdot y_{\{\kappa\}}\right)^{\{\kappa\}} \ ,\label{kper}
\end{eqnarray}
where, in the following, for the sake of notation, we simply write $\oplus\equiv\oplus_\kappa$ and $\otimes\equiv\otimes_\kappa$ unless explicitly stated.\\
These operations are associative, commutative and distributive, with the null element $\emptyset$ and the identity $I$, and for any $x\in\mathbb{R}^\kappa$ there exist the opposite $(-x)$ and the inverse $(1/x)$.
Therefore, the algebraic structure
$(\mathbb{R}^\kappa,\,\oplus,\,\otimes)$ forms an Abelian field.\\
Explicitly, we have
\begin{eqnarray}
&&x\oplus y={1\over\kappa}\,\sinh\Big(\,{\rm
arcsinh}\,(\kappa\,x)+{\rm arcsinh}\,(\kappa\,y)\,\Big) \
,\label{ksum}\\
&&x\otimes y={1\over\kappa}\,\sinh\left({1\over\kappa}\,{\rm
arcsinh}\,(\kappa\,x)\cdot{\rm arcsinh}\,(\kappa\,y)\right) \
,\label{kprod}
\end{eqnarray}
with $\emptyset\equiv0$, $I\equiv\kappa^{-1}\,\,\sinh\kappa$, $(-x)\equiv-x$ and $(1/x)\equiv\kappa^{-1}\,\sinh(\kappa^2/{\rm arcsinh}\,\kappa\,x)$.\\
In the $\kappa\to0$ limit, sum (\ref{ksum}) and product (\ref{kprod}) recover the standard operations and the $\kappa$-algebra reduces to the standard algebra of the real numbers.\\
Remark also that, for large $x$ and $y$, the $\kappa$-sum approaches asymptotically the standard product, up to a constant factor,
\begin{eqnarray}
x\oplus y\simeq 2\,\kappa\,x\cdot y \ ,\qquad \ x,\,y\gg1 \ ,\label{okp}
\end{eqnarray}
for $\kappa\not=0$, as well as
\begin{eqnarray}
x\oplus y\simeq x+y \ ,\qquad \ x,\,y\to0 \ .
\end{eqnarray}
Starting from Eqs. (\ref{ksum}) and (\ref{kprod}), we can introduce other elementary operations like the difference $x\ominus y= x\oplus(-y)$ and the quotient $x\oslash y=x\otimes(1/y)$.\\
However, the main properties of the $\kappa$-algebra follow from the representations of the $\kappa$-exponential and the $\kappa$-logarithm in terms of $\kappa$-numbers
\begin{eqnarray}
\exp_\kappa(x)=e^{x_{\{\kappa\}}} \ ,\qquad\ln_\kappa(x)=\left(\ln x\right)_{\{\kappa\}} \ .\label{ke}
\end{eqnarray}
Among the many relations satisfied by $\exp_\kappa(x)$ and $\ln_\kappa(x)$, let us recall that
\begin{eqnarray}
&&\exp_\kappa(x\oplus y)=\exp_\kappa(x)\cdot\exp_\kappa(y) \ ,\\
&&\ln_\kappa(x\cdot y)=\ln_\kappa(x)\oplus\ln_\kappa(y) \ ,
\end{eqnarray}
as well as the following, that will be used in the remainder
\begin{eqnarray}
\nonumber
&&\exp_\kappa(x\otimes y)=\exp_\kappa (x)^{y_{\{\kappa\}}}=\exp_\kappa(y)^{x_{\{\kappa\}}} \ ,\\
&&\exp_\kappa(a\,x\otimes_\kappa a\,y)=\exp_{\kappa^\prime}(x\otimes_{\kappa^\prime} y)^{a^2}
=\exp_{\kappa^{\prime\prime}}(a^2(x\otimes_{\kappa^\prime} y)) \ ,\label{eax}\\
\nonumber
&&\exp_\kappa(a\,x\oplus_{\kappa}a\,y)=\exp_{\kappa^\prime}(x\oplus_{\kappa^\prime}y)^{a}=\exp_\kappa
\left(a(x\oplus_{\kappa^\prime} y)\right) \ ,
\end{eqnarray}
with $\kappa^\prime=\kappa\,a$ and $\kappa^{\prime\prime}=\kappa/a$. Other relations can be obtained by inspection of \cite{Scarfone-1}.
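The composition laws and the exponential identities above can likewise be checked numerically. A minimal Python sketch (the function names {\tt to\_k}, {\tt from\_k}, {\tt kplus}, {\tt ktimes} are ours) follows:

```python
import math

def to_k(x, k):    # kappa-number: x_{k} = arcsinh(k*x) / k
    return math.asinh(k * x) / k

def from_k(x, k):  # dual kappa-number: x^{k} = sinh(k*x) / k
    return math.sinh(k * x) / k

def kplus(x, y, k):   # kappa-sum: (x_{k} + y_{k})^{k}
    return from_k(to_k(x, k) + to_k(y, k), k)

def ktimes(x, y, k):  # kappa-product: (x_{k} * y_{k})^{k}
    return from_k(to_k(x, k) * to_k(y, k), k)

def exp_k(x, k):
    return math.exp(to_k(x, k))

k, x, y = 0.25, 1.3, 2.1
# exp_k turns the kappa-sum into an ordinary product
assert abs(exp_k(kplus(x, y, k), k) - exp_k(x, k) * exp_k(y, k)) < 1e-10
# exp_k(x (x) y) = exp_k(x) ** y_{k}
assert abs(exp_k(ktimes(x, y, k), k) - exp_k(x, k) ** to_k(y, k)) < 1e-10
# for small arguments the kappa-sum reduces to the ordinary sum
assert abs(kplus(1e-4, 2e-4, k) - 3e-4) < 1e-10
```

The construction makes it transparent that the $\kappa$-operations are just the ordinary operations conjugated by the mapping $x\mapsto x_{\{\kappa\}}$.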
\subsect{$\kappa$-Cyclic functions}
According to the Euler formula, the cyclic functions in the $\kappa$-formalism can be introduced starting from the $\kappa$-deformed version of the complex exponential. However,
within the $\kappa$-algebra, we have two substantially different possible definitions \cite{Scarfone-1}. The most natural one is given by
\begin{eqnarray}
\exp_\kappa(x)\to\exp_\kappa(i\,x)\equiv\exp\left((i\,x)_{\{\kappa\}}\right) \ ,\label{ei1}
\end{eqnarray}
and coincides with the choice made in \cite{Kaniadakis1}. The function $\exp_\kappa(i\,x)$ has unit modulus for $|x|\leq1/|\kappa|$, while it becomes a purely imaginary quantity with increasing modulus for $|x|>1/|\kappa|$, that is
\begin{eqnarray}
&&|\exp_\kappa(i\,x)|=1\hspace{41mm}{\rm for \ }|x|\leq1/|\kappa| \ ,\\
&&|\exp_\kappa(i\,x)|=\left(\kappa\,x+\sqrt{\kappa^2\,x^2-1}\right)^{1/\kappa}\qquad{\rm for \ }|x|>1/|\kappa| \ .
\end{eqnarray}
Consistently with definition (\ref{ei1}), we can introduce the $\kappa$-trigonometric functions according to
\begin{eqnarray}
\exp_\kappa(i\,x)={\rm Cos}_\kappa(x)+i\,{\rm Sin}_\kappa(x) \ ,
\end{eqnarray}
so that
\begin{eqnarray}
&&{\rm Sin}_\kappa(x)={\exp_\kappa(i\,x)-\exp_\kappa(-i\,x)\over2\,i} \ ,\\ &&{\rm Cos}_\kappa(x)={\exp_\kappa(i\,x)+\exp_\kappa(-i\,x)\over2} \ .\label{ck}
\end{eqnarray}
These functions take real values in the range $|x|\leq1/|\kappa|$ and imaginary values otherwise. This is illustrated in Figure 1, where we plot ${\rm Cos}_\kappa(x)$ for $\kappa=0.05$ in the region $|x|\leq1/|\kappa|$ (left panel) and its modulus $|{\rm Cos}_\kappa(x)|$ (right panel) in the region $|x|>1/|\kappa|$, where the $\kappa$-cosine (\ref{ck}) assumes imaginary values.
\begin{figure}[h]
\begin{center}
\includegraphics*[width=11cm]{fig2.eps}\\
\caption{Plot of ${\rm Cos}_\kappa(x)$ defined in Eq. (\ref{ck}), for $\kappa=0.05$, in the region $|x|\leq 1/|\kappa|$ (left panel) and its modulus in the region $|x|>1/|\kappa|$ (right panel).}
\end{center}
\end{figure}
A different definition of the $\kappa$-exponential on the complex unit circle is given by
\begin{eqnarray}
\exp_\kappa(x)\to\exp_\kappa (x)^i\equiv\exp\left(i\,x_{\{\kappa\}}\right) \ .\label{ei}
\end{eqnarray}
Actually, definitions (\ref{ei1}) and (\ref{ei}) are related to each other by
\begin{eqnarray}
\exp_\kappa(x)^i=\exp_{\kappa'}(i\,x) \ ,\label{ei2}
\end{eqnarray}
according to the scaling relation (\ref{se}), with $\kappa'=-i\,\kappa$.\\
However, the complex function $\exp_\kappa(x)^i$ has unit modulus for any $x\in\mathbb{R}$ and, by taking into account that $|x_{\{\kappa\}}|<|x|$ and that $|x-x_{\{\kappa\}}|$ increases for $|x|\to\infty$, it follows that the period of the function $\exp_\kappa(x)^i$ increases as $|x|$ increases.\\
According to Eq. (\ref{ei}), we introduce a new set of $\kappa$-deformed cyclic functions as
\begin{eqnarray}
\exp_\kappa(x)^i=\cos_\kappa(x)+i\,\sin_\kappa(x) \ ,
\end{eqnarray}
so that
\begin{eqnarray}
&&\sin_\kappa(x)\equiv\sin(x_{\{\kappa\}}) \ ,\label{ksin}\\
&&\cos_\kappa(x)\equiv\cos(x_{\{\kappa\}}) \ .\label{kcos}
\end{eqnarray}
\begin{figure}[h]
\begin{center}
\includegraphics*[width=11cm]{fig1.eps}\\
\caption{Plot of $\cos_\kappa(x)$ given by Eq. (\ref{kcos}) for $\kappa=0.2$ in the linear-linear scale (a) and in the log-linear scale (b), showing its asymptotic log-periodic behavior.}
\end{center}
\end{figure}
We remark that functions (\ref{ksin}) and (\ref{kcos}) are asymptotically log-periodic. Their period increases for $|x|\to\infty$, because
\begin{eqnarray}
\sin_\kappa(x) =\sin_\kappa(x^\prime) \ , \qquad{\rm when}\qquad x^\prime=\left(x_{\{\kappa\}}+2\,n\,\pi\right)^{\{\kappa\}} \ ,
\end{eqnarray}
and, for large $x$, we have
\begin{eqnarray}
\Delta\ln(x)\simeq2\,n\,\kappa\,\pi \ ,\qquad{\rm that \ is}\qquad x^\prime\simeq x\,e^{2\,n\,\kappa\,\pi} \ ,\label{period}
\end{eqnarray}
where $\Delta\ln(x)=\ln(x^\prime)-\ln(x)$ approaches a constant for $x\gg1$.\\
This is shown in Figure 2, where we plot the shape of $\cos_\kappa(x)$, for $\kappa=0.2$, in the linear-linear scale (left panel) and in the log-linear scale (right panel) that shows its asymptotic log-periodic behavior.
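The asymptotic log-periodicity is easy to verify directly: with $x'=(x_{\{\kappa\}}+2\,n\,\pi)^{\{\kappa\}}$ the $\kappa$-cosine repeats exactly, and for large $x$ the ratio $x'/x$ approaches $e^{2\,n\,\kappa\,\pi}$. A Python sketch (function names ours):

```python
import math

def to_k(x, k):
    return math.asinh(k * x) / k

def from_k(x, k):
    return math.sinh(k * x) / k

def cos_k(x, k):
    # kappa-cosine: cos(x_{k}); reduces to cos(x) as k -> 0
    return math.cos(to_k(x, k))

k, n, x = 0.2, 1, 50.0
# exact periodicity in the kappa-number variable
xp = from_k(to_k(x, k) + 2 * n * math.pi, k)
assert abs(cos_k(xp, k) - cos_k(x, k)) < 1e-9
# asymptotically the period is constant on a log scale: x' ~ x * exp(2*n*k*pi)
assert abs(xp / (x * math.exp(2 * n * k * math.pi)) - 1) < 0.01
```

For $\kappa=0.2$ and $x=50$ one finds $x'\simeq176$, within a fraction of a percent of $x\,e^{2\kappa\pi}\simeq175.7$.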
As proven in \cite{Scarfone-1}, the functions $\sin_\kappa(x)$ and $\cos_\kappa(x)$ can be derived from the following differential equation
\begin{eqnarray}
{d\over d x}\left(\sqrt{1+\kappa^2\,x^2}\,{d\,u(x)\over dx}\right)+{\left(a_{\{\kappa\}}\right)^2\over\sqrt{1+\kappa^2\,x^2}}\,u(x)=0 \ ,\label{ksl}
\end{eqnarray}
with $-h\leq x\leq h$ and $a_{\{\kappa\}}$ a constant. This is a Sturm-Liouville equation
\begin{eqnarray}
{d\over dx}\left(p(x)\,{df(x)\over dx}\right)+\lambda\,w(x)\,f(x)=0 \ ,
\end{eqnarray}
with $p(x)=\sqrt{1+\kappa^2\,x^2}$, weight function $w(x)=1/\sqrt{1+\kappa^2\,x^2}$ and eigenvalue $\lambda=(a_{\{\kappa\}})^2$. A solution of Eq. (\ref{ksl}), with boundary conditions $f(-h)=f(h)=0$, corresponds to the $\kappa$-sine function
\begin{eqnarray}
f(x)=A\,\sin_\kappa(a\otimes x) \ ,
\end{eqnarray}
whilst a solution with boundary conditions $f^\prime(-h)=f^\prime(h)=0$, where prime means derivative with respect to its argument, is given by the $\kappa$-cosine function
\begin{eqnarray}
f(x)=A\,\cos_\kappa(a\otimes x) \ .
\end{eqnarray}
Similar considerations hold for the function
\begin{eqnarray}
f(x)=A\,\exp_\kappa(a\otimes x)^i \ ,
\end{eqnarray}
that is the solution of the same Sturm-Liouville problem, in the $h\to\infty$ limit, with boundary condition $|f(x)|<\infty$.
\subsect{$\kappa$-Calculus}
Following \cite{Kaniadakis0}, we introduce the calculus in the $\kappa$-formalism by means of the $\kappa$-differential $d_\kappa x\equiv dx_{\{\kappa\}}$, given by the differential of the $\kappa$-numbers
\begin{eqnarray}
d_\kappa x={dx\over\sqrt{1+\kappa^2\,x^2}} \ ,\label{dk}
\end{eqnarray}
that is $\kappa$-linear
\begin{eqnarray}
&&d_\kappa(a\otimes x)=a_{\{\kappa\}}\cdot d_\kappa x \ ,\\
&&d_\kappa(x\oplus y)=d_\kappa x+d_\kappa y \ ,
\end{eqnarray}
where $a$ is constant.\\
The $\kappa$-derivative is defined as
\begin{eqnarray}
\left({d\over dx}\right)_\kappa\equiv {d\over d_\kappa x} \ ,
\end{eqnarray}
and is related to the Leibniz derivative by
\begin{eqnarray}
\frac{d}{d_\kappa x}=\sqrt{1+\kappa^2\,x^2}\,{d\over dx}\equiv u_\kappa\left(\exp_\kappa(x)\right)\,{d\over dx} \ ,\label{dk1}
\end{eqnarray}
where the last equality follows from the relation $u_\kappa(\exp_\kappa(x))=\sqrt{1+\kappa^2\,x^2}$ derived in \cite{Scarfone4}.
Within the standard calculus, the derivative of the $\kappa$-functions previously introduced read
\begin{eqnarray}
&&{d\over dx}\exp_\kappa(x)={\exp_\kappa(x)\over u_\kappa\left(\exp_\kappa(x)\right)} \ ,\label{de1}\\
&&{d\over dx}\ln_\kappa(x)={u_\kappa(x)\over x} \ ,\label{de2}\\
&&{d\over dx}u_\kappa(x)=\kappa^2\,{\ln_\kappa(x)\over x} \ ,\\
&&{d\over dx}\sin_\kappa(x)={\cos_\kappa(x)\over u_\kappa\left(\exp_\kappa(x)\right)} \ ,\\
&&{d\over dx}\cos_\kappa(x)=-{\sin_\kappa(x)\over u_\kappa\left(\exp_\kappa(x)\right)} \ .\label{de3}
\end{eqnarray}
In this way, by using Eq. (\ref{dk1}), we can rewrite these relations as
\begin{eqnarray}
&&\frac{d}{d_\kappa x}\,\exp_\kappa(x)=\exp_\kappa(x) \ ,\\
&&\frac{d}{d_\kappa x}\,\sin_\kappa(x)=\cos_\kappa(x) \ ,\\
&&\frac{d}{d_\kappa x}\,\cos_\kappa(x)=-\sin_\kappa(x) \ ,
\end{eqnarray}
that are consistent within the $\kappa$-formalism.\\
We observe that quantities like $\exp_\kappa(x)\,d_\kappa x$, $\sin_\kappa(x)\,d_\kappa x$ and $\cos_\kappa(x)\,d_\kappa x$ are all exact differentials in the standard calculus, since they correspond, respectively to the following differential forms: $d\exp_\kappa(x)$, $d\sin_\kappa(x)$ and $d\cos_\kappa(x)$.\\
In addition, accounting for the $\kappa$-linearity, we can verify the relations
\begin{eqnarray}
&&\frac{d}{d_\kappa x}f(x\oplus y)=\frac{d f(z)}{d_\kappa z}\Big|_{z=x\oplus y} \ ,\\
&&\frac{d}{d_\kappa x}f(x\otimes y)=y_{\{\kappa\}}\,\frac{d f(z)}{d_\kappa z}\Big|_{z=x\otimes y} \ ,
\end{eqnarray}
and in particular
\begin{eqnarray}
&&\frac{d}{d_\kappa x}\exp_\kappa(x\oplus y)=\exp_\kappa (x\oplus y) \ ,\\
&&\frac{d}{d_\kappa x}\,\exp_\kappa(x\otimes y)=y_{\{\kappa\}}\,\exp_\kappa(x\otimes y) \ ,
\end{eqnarray}
that will be used in the remainder.
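The eigenvalue property of the $\kappa$-derivative can be checked numerically by evaluating $\sqrt{1+\kappa^2x^2}\,d/dx$ through a central finite difference and comparing with the $\kappa$-exponential itself (Python sketch; function names and step size are ours):

```python
import math

def exp_k(x, k):
    return math.exp(math.asinh(k * x) / k)

def kderiv(f, x, k, h=1e-6):
    # kappa-derivative d/d_k x = sqrt(1 + k^2 x^2) d/dx, via a central difference
    return math.sqrt(1 + (k * x) ** 2) * (f(x + h) - f(x - h)) / (2 * h)

k = 0.3
for x in (-1.0, 0.5, 2.0):
    # exp_k is an eigenfunction of the kappa-derivative with eigenvalue 1
    assert abs(kderiv(lambda t: exp_k(t, k), x, k) - exp_k(x, k)) < 1e-6
```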
Finally, let us introduce the $\kappa$-integral $\int f(x)\,d_\kappa x$, as the inverse operator to the $\kappa$-derivative, according to
\begin{eqnarray}
\left({d\over d x}\right)_\kappa\left(\int f(x)\,d_\kappa x+const.\right)=f(x) \ ,
\end{eqnarray}
extending, in this way, the fundamental theorem of calculus to the $\kappa$-formalism.\\
We observe that the $\kappa$-integral can be written like a weighted ordinary integral. In fact, recalling Eq. (\ref{dk}), we have
\begin{eqnarray}
\int f(x)\,d_\kappa x=\int f(x)\,w(x)\,dx\equiv\int {f(x)\over\sqrt{1+\kappa^2\,x^2}}\,dx \ ,\label{int1}
\end{eqnarray}
where the weight function
\begin{eqnarray}
w(x)={1\over u_\kappa\left(\exp_\kappa(x)\right)}\equiv{1\over\sqrt{1+\kappa^2\,x^2}} \ ,
\end{eqnarray}
coincides with that introduced in the Sturm-Liouville problem.\\
In addition, we can also use the following relation
\begin{eqnarray}
\int\limits_a\limits^bf(x)\,d_\kappa x=\int\limits_{a_{\{\kappa\}}}\limits^{b_{\{\kappa\}}}f^{\{\kappa\}}(x)\,dx \ ,
\end{eqnarray}
where
\begin{eqnarray}
f^{\{\kappa\}}(x)\equiv f(x^{\{\kappa\}})=f\left({1\over\kappa}\sinh(\kappa\,x)\right) \ ,
\end{eqnarray}
that links the $\kappa$-integral on the $\kappa$-numbers to a standard integral on the real numbers.
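The weighted-integral form makes the $\kappa$-integral straightforward to evaluate numerically. The following Python sketch (function names and the trapezoidal discretization are ours) checks the fundamental theorem above on the exact differential $\exp_\kappa(x)\,d_\kappa x$:

```python
import math

def exp_k(x, k):
    return math.exp(math.asinh(k * x) / k)

def kintegral(f, a, b, k, n=20000):
    # kappa-integral as a weighted ordinary (trapezoidal) integral:
    # int f(x) d_k x = int f(x) / sqrt(1 + k^2 x^2) dx
    h = (b - a) / n
    g = lambda x: f(x) / math.sqrt(1 + (k * x) ** 2)
    s = 0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n))
    return s * h

k, b = 0.4, 2.0
# exp_k(x) d_k x is an exact differential: the integral equals exp_k(b) - exp_k(0)
approx = kintegral(lambda x: exp_k(x, k), 0.0, b, k)
assert abs(approx - (exp_k(b, k) - 1.0)) < 1e-6
```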
\sect{$\kappa$-Fourier transform}
\subsect{The transform}
In \cite{Scarfone-1}, a consistent formulation of the Fourier series in the framework of the $\kappa$-algebra has been introduced. There, it has been shown that any square-integrable function $f(x):\,(-h,\,h)\to\mathbb{R}$ may be expanded in the $\kappa$-Fourier series with respect to the orthogonal and complete system of functions $\sin_\kappa(a_n\otimes x)$ and $\cos_\kappa(a_n\otimes x)$, that is
\begin{eqnarray}
f(x)=c_0+\sum_{n=1}^\infty \left(s_n\,\sin_\kappa(a_n\otimes x)+c_n\,\cos_\kappa(a_n\otimes x)\right) \ ,\label{kfs}
\end{eqnarray}
where $a_n=(n\,\pi/h_{\{\kappa\}})^{\{\kappa\}}$ are suitable constants.\\
This series expansion can be written in the complex form
\begin{eqnarray}
f(x)=\sum_{n=-\infty}^\infty\gamma_n\,\exp_\kappa(-a_n\otimes x)^i \ ,
\end{eqnarray}
where the complex Fourier coefficients are given by
\begin{eqnarray}
\gamma_n={\sqrt{2\over h}}\int\limits_{-h}\limits^hf(x)\,\exp_\kappa(a_n\otimes x)^i\,d_\kappa x \ ,\label{coef}
\end{eqnarray}
and are related to the real Fourier coefficients by $\gamma_n=(c_n-i\,s_n)/2$ and $\gamma_{-n}=\gamma_n^\ast$.\\
Starting from this result, the $\kappa$-deformed Fourier transform can be formally obtained in the $h\to\infty$ limit, as usually done in the standard theory.\\
Therefore, from Eq. (\ref{coef}) we derive the following integral transform for a real function $f(x)$ given by
\begin{eqnarray}
{\cal F}_\kappa[f(x)](\omega)={1\over\sqrt{2\,\pi}}\int\limits_{-\infty}\limits^{+\infty} f(x)\,
\exp_\kappa (-x\otimes\omega)^i\,d_\kappa x \ .\label{kfourier}
\end{eqnarray}
Accounting for Eqs. (\ref{kper}), (\ref{ke}) and (\ref{dk}), this formula can be written as a standard integral transform [cf. Eq. (\ref{standard})], with the kernel
\begin{eqnarray}
h_\kappa(x,\,\omega)={\exp(-i\,x_{\{\kappa\}}\,\omega_{\{\kappa\}})\over\sqrt{1+\kappa^2x^2}} \ .\label{kkernel}
\end{eqnarray}
In the $\kappa\to0$ limit, the kernel $h_\kappa(x,\,\omega)$ reduces to the usual exponential phase $h(x,\,\omega)=\exp(-i\,x\,\omega)$ and the $\kappa$-Fourier transform collapses into the standard Fourier transform.
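As an illustration, the $\kappa$-Fourier transform can be evaluated numerically from kernel (\ref{kkernel}): for $\kappa\to0$ it must reproduce the standard transform (for a Gaussian, again a Gaussian), while for $\kappa\neq0$ and an even real function the transform stays real, since the odd part of the phase cancels. A Python sketch (function names, truncation and discretization parameters are ours):

```python
import cmath
import math

def to_k(x, k):
    return math.asinh(k * x) / k if k else x

def kfourier(f, w, k, L=20.0, n=40000):
    # F_k[f](w) = (1/sqrt(2 pi)) int f(x) exp(-i x_{k} w_{k}) / sqrt(1+k^2 x^2) dx,
    # truncated to [-L, L] and evaluated by the trapezoidal rule
    h = 2 * L / n
    s = 0j
    for i in range(n + 1):
        x = -L + i * h
        wgt = 0.5 if i in (0, n) else 1.0
        s += wgt * f(x) * cmath.exp(-1j * to_k(x, k) * to_k(w, k)) \
             / math.sqrt(1 + (k * x) ** 2)
    return s * h / math.sqrt(2 * math.pi)

gauss = lambda x: math.exp(-x * x / 2)
# kappa -> 0: the standard transform of the Gaussian is exp(-w^2/2)
assert abs(kfourier(gauss, 1.0, 0.0) - math.exp(-0.5)) < 1e-6
# kappa != 0: for an even real f the transform is real
assert abs(kfourier(gauss, 1.0, 0.3).imag) < 1e-9
```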
Kernel (\ref{kkernel}) is formed by a deformed phase factor, which arises from the generalization of the complex exponential to its $\kappa$-deformed version, and by a damping factor, which arises from the measure of the $\kappa$-integral. As a consequence, the new transform acquires interesting analytical features. For instance, both the real and the imaginary parts of the phase factor have an asymptotically log-periodic behaviour, a fact that may be relevant in the study of log-oscillating phenomena \cite{Scarfone1}. In addition, the damping factor gives the kernel a wavelet-like behaviour, as shown in Figure 3, where we have plotted both the real and the imaginary part of the kernel $h_\kappa(x,\,\omega)$.\\
However, many properties of the standard Fourier transform are preserved in the generalized version provided that they are reformulated in the $\kappa$-formalism.\\
\begin{figure}[h]
\begin{center}
\includegraphics*[width=11cm]{fig3.eps}\\
\caption{Real (left panel) and imaginary (right panel) part of the kernel $h_\kappa(x,\,\omega)$, for $k=0.1$ and $\omega=1$.}
\end{center}
\end{figure}
It is worth noting that, in addition to Eq. (\ref{kkernel}), there are other possible generalizations of the kernel. They should satisfy the following reasonable conditions: 1) the kernel must reduce to the standard complex exponential $\exp(i\,\omega\,x)$ in a suitable limit; 2) the kernel should be a unimodular function on the whole real axis. These requirements exclude the function
\begin{eqnarray}
h^{(0)}_\kappa(x,\,\omega)=\exp_\kappa(-i\,x\,\omega) \ ,
\end{eqnarray}
since condition 2) is violated in the far region of the real axis, $|x\,\omega|\geq1/|\kappa|$. On the other hand, the following functions are suitable candidates
\begin{eqnarray}
&&h_\kappa^{(1)}(x,\,\omega)=\exp_\kappa (-x)^{i\,\omega} \ ,\label{h1}\\
&&h_\kappa^{(2)}(x,\,\omega)=\exp_\kappa(-\omega)^{i\,x} \ ,\\
&&h_\kappa^{(3)}(x,\,\omega)=\exp_\kappa(-x\,\omega)^i \ ,\label{h3}
\end{eqnarray}
since they all have unit modulus on $\mathbb{R}$ and converge to $\exp(i\,\omega\,x)$ in the $\kappa\to0$ limit. However, although some of these functions are equivalent to each other, as, for instance,
\begin{eqnarray}
h_{\kappa}\left(x,\,\omega\right)=h_\kappa^{(1)}\left(x,\,\omega_{\{\kappa\}}\right)
=h_\kappa^{(2)}\left(x_{\{\kappa\}},\,\omega\right) \ ,
\end{eqnarray}
they define different integral transforms with different analytical properties. By inspection, definition (\ref{kfourier}), with the kernel (\ref{kkernel}), is the most consistent choice. In fact, it turns out to be isomorphic to the standard transform, so that all the analytical properties of the Fourier transform are preserved when rewritten in the $\kappa$-formalism.\\
The isomorphism follows from a change of the time and frequency variables according to $X=x_{\{\kappa\}}$ and $\Omega=\omega_{\{\kappa\}}$, so that transform (\ref{kfourier}) can be related to a standard Fourier transform of a deformed function
\begin{eqnarray}
{\cal F}_\kappa[f(x)](\omega)\equiv{\cal F}[ f^{\{\kappa\}}(X)]\left(\Omega\right)={1\over\sqrt{2\,\pi}}\int\limits_{-\infty}\limits^{+\infty} f^{\{\kappa\}}(X)\,e^{-i\,X\,\Omega}\,dX \ .\label{kfourier1}
\end{eqnarray}
It is worth noting that, if the Fourier transform of a function $f(x)$ exists, then its $\kappa$-Fourier transform certainly exists, since
\begin{eqnarray}
\Big|{\cal F}_\kappa[f(x)](\omega)\Big|\leq\int\limits_{-\infty}\limits^{+\infty} |f(x)|\,d_\kappa x=\int\limits_{-\infty}\limits^{+\infty} \left|{f(x)\over\sqrt{1+\kappa^2x^2}}\right|\,d x\leq\|f(x)\| \ ,\label{conv}
\end{eqnarray}
where the norm of $f(x)$ is defined in the Banach space $L^1(\mathbb{R})$ as usual
\begin{eqnarray}
\|f(x)\|=\int\limits_{-\infty}\limits^{+\infty}|f(x)|\,dx \ .
\end{eqnarray}
Therefore, the space of functions with a convergent $\kappa$-Fourier transform is wider than the space of functions with a convergent standard Fourier transform, thanks to the factor $\sqrt{1+\kappa^2x^2}$ that enforces the convergence of the integral.
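The bound (\ref{conv}) is easy to verify numerically. The sketch below (a simple quadrature approximation, again assuming $x_{\{\kappa\}}={\rm arcsinh}(\kappa\,x)/\kappa$ and the kernel (\ref{kkernel})) computes ${\cal F}_\kappa[f](\omega)$ for a Gaussian test function, checks the inequality, and confirms that the $\kappa\to0$ limit reproduces the standard transform $e^{-\omega^2/2}$ of $e^{-x^2/2}$:

```python
import numpy as np

def kn(x, k):
    # Deformed number x_{kappa} = arcsinh(kappa * x) / kappa (assumed definition)
    return np.arcsinh(k * x) / k

def kft(f, w, k, x):
    # Quadrature approximation of F_kappa[f](w) with the measure d_kappa x
    dx = x[1] - x[0]
    integrand = f(x) * np.exp(-1j * kn(x, k) * kn(w, k)) \
                / np.sqrt(1.0 + (k * x) ** 2)
    return integrand.sum() * dx / np.sqrt(2.0 * np.pi)

f = lambda x: np.exp(-x ** 2 / 2.0)
x = np.linspace(-15.0, 15.0, 300001)
norm1 = np.abs(f(x)).sum() * (x[1] - x[0])        # the L^1 norm ||f||

for w in (0.0, 0.5, 2.0):
    assert abs(kft(f, w, 0.4, x)) <= norm1        # inequality (conv)

# kappa -> 0: standard Fourier transform of the Gaussian
print(abs(kft(f, 1.0, 1e-8, x) - np.exp(-0.5)))   # -> ~0
```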
Finally, when the function $f(x)$ has a well defined parity, Eq. (\ref{kfourier}) can be rewritten in the form of a $\kappa$-cosine transform
\begin{eqnarray}
{\cal F}_\kappa[f(x)](\omega)={1\over\sqrt{2\,\pi}}\int\limits_0\limits^\infty f(x)\,\cos_\kappa (\omega\otimes x)\,d_\kappa x \ ,
\end{eqnarray}
for even functions or a $\kappa$-sine transform
\begin{eqnarray}
{\cal F}_\kappa[f(x)](\omega)={i\over\sqrt{2\,\pi}}\int\limits_0\limits^\infty f(x)\,\sin_\kappa (\omega\otimes x)\,d_\kappa x \ ,
\end{eqnarray}
for odd functions.
For the sake of illustration, we present in Table 1 the $\kappa$-Fourier transform of several functions. Clearly, the main advantage of the new formulation proposed here lies in the analysis of $\kappa$-deformed functions. Indeed, certain $\kappa$-deformed functions that can hardly be handled with the standard transform are easily evaluated in closed form within the present formalism, and vice versa. For this reason, we have considered in Table 1 the transform of some deformed functions like $\exp_\kappa(x)$ or $\cos_\kappa(x)$, instead of the corresponding un-deformed versions, which are otherwise well processed with the standard analysis.\\
\begin{center}
Table 1. $\kappa$-Fourier transform of several simple functions.\\
\begin{tabular}{|c|c|c|}
\hline\hline
&$f(x)$ & ${\cal F}_\kappa[f(x)](\omega)$ \\
\hline\hline
Step function & $\theta(x)$ & $\sqrt{2\,\pi}\,\delta(\omega)+{1\over\sqrt{2\,\pi}\,i\,\omega_{\{\kappa\}}}$ \\
\hline
Modulation & $\cos_\kappa(a\otimes x)$ & $\sqrt{\pi\over2}\,u_\kappa(\exp_\kappa(a))\,\left(\delta(\omega+a)+\delta(\omega-a)\right)$\\
\hline
Causal $\kappa$-exponential & $\theta(x)\,\exp_\kappa(-a\otimes x)$ & ${1\over\sqrt{2\,\pi}}{1\over a_{\{\kappa\}}+i\,\omega_{\{\kappa\}}}$\\
\hline
Symmetric $\kappa$-exponential & $\exp_\kappa(-a\otimes |x|)$ & $ \sqrt{2\over\pi}\,{a_{\{\kappa\}}\over a_{\{\kappa\}}^2+\omega_{\{\kappa\}}^2}$\\
\hline
Constant & $1$ & $\sqrt{2\,\pi}\,\delta(\omega)$\\
\hline
$\kappa$-Phasor & $\exp_\kappa\,(a\otimes x)^i$ & $\sqrt{2\,\pi}\,u_\kappa(\exp_\kappa(a))\,\delta(\omega-a)$\\
\hline
Impulse & $\delta(x-a)$ & ${1\over\sqrt{2\,\pi}}{\exp_\kappa\,(\omega\otimes a)^i\over u_\kappa\left(\exp_\kappa\,(a)\right)}$\\
\hline
Signum & Sgn$(x)$ & $\sqrt{2\over\pi}\,\,{1\over i\,\omega_{\{\kappa\}}}$\\
\hline
Rectangular & $\Pi\left({x\over a}\right)$ & $\sqrt{2\over\pi}\,\,a_{\{\kappa\}}\,{\rm sinc}_\kappa(\omega\otimes a)$\\
\hline\hline
\end{tabular}
\end{center}
\hspace{10mm}
As an example, we have plotted in Figure 4 the shape of the sinc$_\kappa(x)$ function, corresponding to the $\kappa$-Fourier transform of the rectangular function, for several values of the deformation parameter $\kappa$. The stretching effect produced by the $\kappa$-deformation is clearly observable.
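Entries of Table 1 can also be checked by direct quadrature. As an example, the sketch below verifies the causal $\kappa$-exponential row, assuming the standard Kaniadakis forms $x_{\{\kappa\}}={\rm arcsinh}(\kappa x)/\kappa$ and $\exp_\kappa(u)=e^{u_{\{\kappa\}}}$, for which $(a\otimes x)_{\{\kappa\}}=a_{\{\kappa\}}\,x_{\{\kappa\}}$ and hence $\exp_\kappa(-a\otimes x)=e^{-a_{\{\kappa\}}x_{\{\kappa\}}}$:

```python
import numpy as np

def kn(x, k):
    # Deformed number x_{kappa} = arcsinh(kappa * x) / kappa (assumed definition)
    return np.arcsinh(k * x) / k

k, a, w = 0.3, 1.0, 0.8
x = np.linspace(0.0, 5000.0, 1_000_001)
dx = x[1] - x[0]

# theta(x) exp_k(-a (x) x) = exp(-a_k * x_k) for x >= 0 (assumed kappa-algebra)
integrand = np.exp(-kn(a, k) * kn(x, k)) \
            * np.exp(-1j * kn(x, k) * kn(w, k)) / np.sqrt(1.0 + (k * x) ** 2)
# Trapezoid rule (half weight at the endpoints)
numeric = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) \
          * dx / np.sqrt(2.0 * np.pi)

# Closed form quoted in Table 1
closed = 1.0 / (np.sqrt(2.0 * np.pi) * (kn(a, k) + 1j * kn(w, k)))
print(abs(numeric - closed) / abs(closed))   # -> small (quadrature error)
```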
\begin{figure}[h]
\begin{center}
\includegraphics*[width=11cm]{fig4.eps}\\
\caption{$\kappa$-sinc function for several values of the deformation parameter $\kappa$.}
\end{center}
\end{figure}
\subsect{The inverse transform}
As expected, the original function $f(x)$ can be reconstructed starting from the transformed function $\widehat f_\kappa(\omega)$ by means of the inverse integral transform, given by
\begin{eqnarray}
{\cal F}^{-1}_\kappa\left[\widehat f_\kappa(\omega)\right](x)={1\over\sqrt{2\,\pi}}\int\limits_{-\infty}\limits^{+\infty}
\widehat f_\kappa(\omega)\,\exp_\kappa (\omega\otimes x)^i\,d_\kappa\omega \ .\label{antikfourier}
\end{eqnarray}
This can be shown by using standard arguments, as follows
\begin{eqnarray}
\nonumber
{\cal F}^{-1}_\kappa\left[{\cal F}_\kappa[f(x)]\right]&=&{1\over2\,\pi}
\int\limits_{-\infty}\limits^{+\infty}
f(y)\,\exp_\kappa (\omega\otimes y)^i\,\exp_\kappa (-\omega\otimes x)^i\,d_\kappa\omega\,d_\kappa y\\
\nonumber
&=&{1\over2\,\pi}\int\limits_{-\infty}\limits^\infty
f(y)\,e^{i\,\omega_{\{\kappa\}}\,\left(y_{\{\kappa\}}-x_{\{\kappa\}}\right)}\,d\omega_{\{\kappa\}}
\,dy_{\{\kappa\}}\\
&=&\int\limits_{-\infty}\limits^\infty
f(y)\,\delta\left(y_{\{\kappa\}}-x_{\{\kappa\}}\right)\,dy_{\{\kappa\}}=f(x)
\ ,\label{proof1}
\end{eqnarray}
where we used the relations
\begin{eqnarray}
\delta\left(y_{\{\kappa\}}-x_{\{\kappa\}}\right)=\delta(y-x)\,\sqrt{1+\kappa^2\,x^2} \ ,\label{kdelta}
\end{eqnarray}
and
\begin{eqnarray}
\delta\left(y_{\{\kappa\}}-x_{\{\kappa\}}\right)\,d_\kappa x=\delta(y-x)\,dx \ .
\end{eqnarray}
Conversely, plugging Eq. (\ref{antikfourier}) into Eq. (\ref{kfourier}), we obtain
\begin{eqnarray}
\nonumber
{\cal F}_\kappa\left[{\cal F}^{-1}_\kappa[\widehat f_\kappa(\omega)]\right]&=&{1\over2\,\pi}\int\limits_{-\infty}\limits^{+\infty}\widehat f_\kappa(\omega^\prime)\exp_\kappa(-\omega^\prime\otimes x)^i\exp_\kappa(\omega\otimes x)^i\,d_\kappa\omega^\prime\,d_\kappa x\\
\nonumber &=&{1\over2\,\pi}\,\int\limits_{-\infty}\limits^{+\infty}
\widehat f_\kappa(\omega^\prime)\,e^{-i\,(\omega_{\{\kappa\}}^\prime-\omega_{\{\kappa\}})\,x_{\{\kappa\}}}\,
d\omega_{\{\kappa\}}^\prime\,dx_{\{\kappa\}}\\
&=&\int\limits_{-\infty}\limits^{+\infty}\widehat f_\kappa(\omega^\prime)\,\delta(\omega_{\{\kappa\}}^\prime-\omega_{\{\kappa\}})\,d\omega_{\{\kappa\}}^\prime=\widehat f_\kappa(\omega) \ .
\end{eqnarray}
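The inversion can also be checked numerically: the sketch below (same assumed definitions as above) computes $\widehat f_\kappa$ on a frequency grid for a Gaussian and then applies Eq. (\ref{antikfourier}) at a single point:

```python
import numpy as np

def kn(x, k):
    # Deformed number x_{kappa} = arcsinh(kappa * x) / kappa (assumed definition)
    return np.arcsinh(k * x) / k

k = 0.2
x = np.linspace(-15.0, 15.0, 3001); dx = x[1] - x[0]
w = np.linspace(-50.0, 50.0, 2001); dw = w[1] - w[0]
f = np.exp(-x ** 2 / 2.0)

# Forward transform, Eq. (kfourier), evaluated on the frequency grid
Fk = np.empty(w.size, dtype=complex)
for j, wj in enumerate(w):
    g = f * np.exp(-1j * kn(x, k) * kn(wj, k)) / np.sqrt(1.0 + (k * x) ** 2)
    Fk[j] = g.sum() * dx / np.sqrt(2.0 * np.pi)

# Inverse transform, Eq. (antikfourier), at a single point x0
x0 = 0.5
g = Fk * np.exp(1j * kn(w, k) * kn(x0, k)) / np.sqrt(1.0 + (k * w) ** 2)
f_rec = (g.sum() * dw / np.sqrt(2.0 * np.pi)).real
print(abs(f_rec - np.exp(-x0 ** 2 / 2.0)))   # -> ~0
```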
Many properties of standard Fourier transforms can be reformulated in the $\kappa$-formalism. For instance, let us consider the $\kappa$-version of the multiplication theorem
\begin{eqnarray}
\int_{-\infty}\limits^{+\infty}f(x)\,g^\ast(x)\,d_\kappa x
=\int_{-\infty}\limits^{+\infty}\widehat f_\kappa(\omega)\,\widehat g_\kappa^{\,\ast}(\omega)\,d_\kappa\omega \ ,\label{multitheo}
\end{eqnarray}
which, rewritten as
\begin{eqnarray}
\int_{-\infty}\limits^{+\infty}{f(x)\,g^\ast(x)\over\sqrt{1+\kappa^2\,x^2}}
\,dx =\int_{-\infty}\limits^{+\infty}{\widehat f_\kappa(\omega)\,\widehat g_\kappa^{\,\ast}(\omega)\over\sqrt{1+\kappa^2\,\omega^2}}\,d\omega \
,
\end{eqnarray}
states a relation between the product of the functions $f(x)$ and $g(x)$ and the product of their $\kappa$-deformed Fourier transforms $\widehat f_\kappa(\omega)$ and $\widehat g_\kappa(\omega)$. Equation (\ref{multitheo}) follows from standard arguments, according to
\begin{eqnarray}
\nonumber
\int_{-\infty}\limits^{+\infty}\widehat f_\kappa(\omega)\widehat g_\kappa^{\,\ast}(\omega)\,d_\kappa\omega&=&\int_{-\infty}\limits^{+\infty}f(x)\,
\exp_\kappa(-\omega\otimes x)^i\,\widehat g_\kappa^{\,\ast}(\omega)\,d_\kappa x\,d_\kappa\omega\\
\nonumber
&=&\int_{-\infty}\limits^{+\infty}f(x)\,\widehat g_\kappa^{\,\ast}(\omega)\,
\exp_\kappa(\omega\otimes x)^i\,d_\kappa\omega\,d_\kappa x\\
&=&\int_{-\infty}\limits^{+\infty}f(x)\,g^\ast(x)\,d_\kappa x \ ,
\end{eqnarray}
and, in particular, for $f(x)=g(x)$, it gives the Plancherel theorem in the $\kappa$-formalism
\begin{eqnarray}
\int_{-\infty}\limits^{+\infty}|f(x)|^2\,d_\kappa x=\int_{-\infty}
\limits^{+\infty}|\widehat f_\kappa(\omega)|^2\,d_\kappa\omega \ .\label{kplanch}
\end{eqnarray}
This relation can be rewritten by means of standard integrals as
\begin{eqnarray}
\int_{-\infty}\limits^{+\infty}{|f(x)|^2\over\sqrt{1+\kappa^2\,x^2}}\,dx=\int_{-\infty}
\limits^{+\infty}{|\widehat f_\kappa(\omega)|^2\over\sqrt{1+\kappa^2\,\omega^2}}\,d\omega \
\end{eqnarray}
which clearly differs from the well-known formulation of the Plancherel theorem. Actually, Eq. (\ref{kplanch}) states a relationship between $\kappa$-Fourier transformed functions and corresponds to the Plancherel relation only in the $\kappa\to0$ limit.
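Relation (\ref{kplanch}) can be verified numerically with the same quadrature used above (same assumed definition of $x_{\{\kappa\}}$):

```python
import numpy as np

def kn(x, k):
    # Deformed number x_{kappa} = arcsinh(kappa * x) / kappa (assumed definition)
    return np.arcsinh(k * x) / k

k = 0.2
x = np.linspace(-15.0, 15.0, 3001); dx = x[1] - x[0]
w = np.linspace(-50.0, 50.0, 2001); dw = w[1] - w[0]
f = np.exp(-x ** 2 / 2.0)

# kappa-Fourier transform on the frequency grid
Fk = np.empty(w.size, dtype=complex)
for j, wj in enumerate(w):
    g = f * np.exp(-1j * kn(x, k) * kn(wj, k)) / np.sqrt(1.0 + (k * x) ** 2)
    Fk[j] = g.sum() * dx / np.sqrt(2.0 * np.pi)

# Both sides of Eq. (kplanch), with the measure d_kappa written explicitly
lhs = (np.abs(f) ** 2 / np.sqrt(1.0 + (k * x) ** 2)).sum() * dx
rhs = (np.abs(Fk) ** 2 / np.sqrt(1.0 + (k * w) ** 2)).sum() * dw
print(abs(lhs - rhs) / lhs)   # -> small
```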
\subsect{Main properties}
Since transform (\ref{kfourier}) can be mapped into a standard Fourier transform, it is not a surprise, as already stated, that many properties of the Fourier transform still hold in the deformed version when suitably rephrased in the $\kappa$-formalism. This is shown in Table 2, where we report the most relevant relations, which the reader can easily verify.
\newpage
\begin{center}
Table 2. Main $\kappa$-Fourier properties.\\
\begin{tabular}{|l|l|}
\hline\hline
Linearity &
\small ${\cal F}_\kappa[\alpha\,f(x)+\beta\,g(x)](\omega)=\alpha\,{\cal F}_\kappa[f(x)](\omega)+\beta\,{\cal F}_\kappa[g(x)](\omega)$\\
\hline
Scaling &
\small ${\cal F}_\kappa\left[f(\alpha\,x)\right](\omega)={1\over\alpha}\,{\cal F}_{\kappa^\prime}\left[f(x)\right](\omega^\prime)$\\
&\small where $\kappa^\prime=\kappa/\alpha$ and $\omega^\prime=(\alpha/\kappa)\,\sinh\left({\rm arcsinh}(\kappa\,\omega)/\alpha^2\right)$\\
\hline
$\kappa$-Scaling &
\small ${\cal F}_\kappa\left[f(\alpha\otimes x)\right](\omega)={1\over\alpha_{\{\kappa\}}}\,{\cal F}_\kappa[f(x)]\left(\frac{1}{\alpha}\otimes\omega\right)$\\
\hline
Complex conjugation &
\small ${\cal F}_\kappa\big[f(x)\big]^{\ast}(\omega)={\cal F}_\kappa\big[f(x)\big](-\omega)$\\
\hline
Duality &
\small ${\cal F}_\kappa\Big[{\cal F}_\kappa\big[f(x)\big](\nu)\Big](\omega)=f(-\omega)$\\
\hline
Reverse &
\small ${\cal F}_\kappa\left[f(-x)\right](\omega)={\cal F}_\kappa[f(x)](-\omega)$\\
\hline
$\kappa$-Frequency shift &
\small ${\cal F}_\kappa\left[\exp_\kappa
(\omega_0\otimes x)^if(x)\right](\omega)={\cal F}_\kappa[f(x)](\omega\ominus\omega_0)$\\
\hline
$\kappa$-Time shift &
\small ${\cal F}_\kappa\left[f(x\oplus x_0)\right](\omega)=\exp_\kappa (\omega\otimes x_0)^i\, {\cal F}_\kappa[f(x)](\omega)$\\
\hline
Transform of $\kappa$-derivative &
\small ${\cal F}_\kappa\left[\frac{d\,f(x)}{d_\kappa x}\right](\omega)=i\,\omega_{\{\kappa\}}\,{\cal F}_\kappa[f(x)](\omega)$\\
\hline
$\kappa$-Derivative of transform &
\small $\frac{d}{d_\kappa\omega}\,{\cal F}_\kappa[f(x)](\omega)=-i\,{\cal
F}_\kappa\left[x_{\{\kappa\}}\,f(x)\right](\omega)$\\
\hline
Transform of integral &
\small ${\cal F}_\kappa\left[\int\limits_{-\infty}\limits^x
f(y)\,dy\right](\omega)={1\over i\,\omega_{\{\kappa\}}}{\cal F}_\kappa[f(x)](\omega)+2\,\pi\,{\cal F}_\kappa[f(x)](0)\,\delta(\omega)$\\
\hline
$\kappa$-Convolution &
\small ${\cal F}_\kappa\left[(f\mbox{$\bigcirc\hspace{-3mm}*\hspace{1.5mm}$}g)(x)\right](\omega)=\sqrt{2\,\pi}\,{\cal F}_\kappa[f(x)](\omega)\,{\cal F}_\kappa[g(x)](\omega)$\\
&\small where $
(f\mbox{$\bigcirc\hspace{-3mm}*\hspace{1.5mm}$}g)(x)=\int\limits_{-\infty}\limits^{+\infty}
f(y)\,g(x\ominus y)\,d_\kappa y $\\
\hline
Modulation &
\small ${\cal F}_\kappa\left[f(x)\,g(x)\right](\omega)={1\over\sqrt{2\,\pi}}\left({\cal F}_\kappa\left[f(x)\right]\mbox{$\bigcirc\hspace{-3mm}*\hspace{1.5mm}$}{\cal F}_\kappa\left[g(x)\right]\right)(\omega)$\\
\hline
\end{tabular}
\end{center}
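As an illustration of Table 2, the transform-of-$\kappa$-derivative row can be checked numerically. Since $d_\kappa x=dx/\sqrt{1+\kappa^2x^2}$, the $\kappa$-derivative is $d/d_\kappa x=\sqrt{1+\kappa^2x^2}\,d/dx$; the sketch below (same assumed definition of $x_{\{\kappa\}}$) applies it to a Gaussian:

```python
import numpy as np

def kn(x, k):
    # Deformed number x_{kappa} = arcsinh(kappa * x) / kappa (assumed definition)
    return np.arcsinh(k * x) / k

def kft(g, w, k, x):
    # Quadrature approximation of F_kappa[g](w)
    dx = x[1] - x[0]
    integrand = g * np.exp(-1j * kn(x, k) * kn(w, k)) / np.sqrt(1.0 + (k * x) ** 2)
    return integrand.sum() * dx / np.sqrt(2.0 * np.pi)

k, w = 0.3, 1.5
x = np.linspace(-15.0, 15.0, 30001)
f = np.exp(-x ** 2 / 2.0)
# kappa-derivative d f / d_kappa x = sqrt(1 + k^2 x^2) * f'(x)
df_k = np.sqrt(1.0 + (k * x) ** 2) * (-x * f)

lhs = kft(df_k, w, k, x)
rhs = 1j * kn(w, k) * kft(f, w, k, x)     # i * w_{kappa} * F_kappa[f](w)
print(abs(lhs - rhs))                     # -> ~0
```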
\sect{Limiting distribution of $\kappa$-sum of $n$-iterate statistically independent random variables}
As is well known, the Fourier transform has many interesting applications both in mathematical statistics and in physics. Among them, the problem of searching for stable distributions of a large number of iterates of independent random variables can be easily handled within the canonical Fourier transform theory. On the other hand, the problem of searching for stable distributions of a large number of iterates of random variables with a specific statistical dependence, or of statistically independent random variables with a specific iteration rule, has been a topic of investigation in recent years. It can be studied by introducing suitably defined Fourier transforms \cite{Umarov}.
In this section we are looking for stable distributions of $\kappa$-sum of $n$-iterate statistically independent random variables.\\
In order to introduce this question in the framework of $\kappa$-statistics, let us consider a possible generalization of the characteristic function
\begin{eqnarray}
\varphi\left(f(x),\,\omega\right)=\langle h(x,\,\omega)\rangle\equiv\sqrt{2\,\pi}\,{\cal F}[f(x)](\omega) \ .\label{char}
\end{eqnarray}
For a given normalized probability distribution $f(x)$ of a random variable $X$, it is natural to define, in the picture of the $\kappa$-algebra, the quantity
\begin{eqnarray}
\varphi_\kappa\left(f(x),\,\omega\right)=\sqrt{2\,\pi}\,{\cal F}_\kappa[f(x)](\omega) \ ,\label{kchar}
\end{eqnarray}
which mimics Eq. (\ref{char}) and recovers it in the $\kappa\to0$ limit.\\ As in the standard case, Eq. (\ref{kchar}) coincides with the linear average of the kernel (\ref{kkernel}) in the space of the real numbers
\begin{eqnarray}
\varphi_\kappa\left(f(x),\,\omega\right)=\langle h_\kappa(x,\,\omega)\rangle\equiv\int f(x)\,h_\kappa(x,\,\omega)\,dx \ .\label{linear}
\end{eqnarray}
Equivalently, $\varphi_\kappa\left(f(x),\,\omega\right)$ can be written as the $\kappa$-average of the deformed phase factor
\begin{eqnarray}
\varphi_\kappa\left(f(x),\,\omega\right)={\cal N}\,\langle \exp_\kappa(-x\otimes\omega)^i\rangle_\kappa \ ,\label{fk}
\end{eqnarray}
where the $\kappa$-average is defined by
\begin{eqnarray}
\langle{\cal O}(x)\rangle_\kappa={1\over\cal N}\int{\cal O}(x)\,f(x)\,d_\kappa x \ ,
\end{eqnarray}
with ${\cal N}=\int f(x)\,d_\kappa x$ the normalization constant. Both Eqs. (\ref{linear}) and (\ref{fk}) reduce to the standard characteristic function $\varphi(f(x),\,\omega)=\langle\exp(-i\,x\,\omega)\rangle$ in the $\kappa\to0$ limit.
As an example, let us consider the deformed Gibbs distribution
\begin{eqnarray}
f_\kappa^{\rm G}(x)=\theta(x)\,\exp_\kappa(-\beta\,x) \ , \label{kgibbs}
\end{eqnarray}
whose characteristic function $\varphi_{\kappa^\prime}\left(f(x),\,\omega\right)$, with deformation parameter $\kappa^\prime=\beta\,\kappa$, is easily calculated as
\begin{eqnarray}
\varphi_{\kappa'}\left(f^{\rm G}_\kappa(x),\,\omega\right)={\kappa^\prime\over\kappa^\prime\,\beta-i\,{\rm arcsinh}{(\kappa^\prime\,\omega)}} \ ,
\end{eqnarray}
and recovers, in the $\kappa\to0$ limit, the well-known result $\varphi\left(f^{\rm G}(x),\,\omega\right)=1/(\beta-i\,\omega)$.\\
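This closed form can be checked numerically. Since the overall sign of the kernel phase depends on the convention adopted, the sketch below compares only the modulus, $|\varphi_{\kappa^\prime}|=\kappa^\prime/\sqrt{(\kappa^\prime\beta)^2+{\rm arcsinh}^2(\kappa^\prime\omega)}$, which is convention independent (same assumed κ-algebra as above, for which $\exp_\kappa(-\beta x)=e^{-\beta x_{\{\kappa^\prime\}}}$ with $\kappa^\prime=\beta\kappa$):

```python
import numpy as np

def kn(x, k):
    # Deformed number x_{kappa} = arcsinh(kappa * x) / kappa (assumed definition)
    return np.arcsinh(k * x) / k

beta, k = 1.0, 0.3
kp, w = beta * k, 0.8                    # kappa' = beta * kappa
x = np.linspace(0.0, 5000.0, 1_000_001)
dx = x[1] - x[0]

# f_G(x) = theta(x) exp_k(-beta x) = exp(-beta * x_{kappa'}) for x >= 0
integrand = np.exp(-beta * kn(x, kp)) \
            * np.exp(-1j * kn(x, kp) * kn(w, kp)) / np.sqrt(1.0 + (kp * x) ** 2)
# phi_{kappa'} = sqrt(2 pi) F_{kappa'}[f], so the 1/sqrt(2 pi) cancels
phi = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dx

closed_mod = kp / np.sqrt((kp * beta) ** 2 + np.arcsinh(kp * w) ** 2)
print(abs(abs(phi) - closed_mod) / closed_mod)   # -> small
```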
On the other hand, the standard characteristic function of distribution (\ref{kgibbs}), $\varphi\left(f^{\rm G}_\kappa(x),\,\omega\right)$, assumes a rather complicated expression in terms of special functions, which are somewhat cumbersome to manipulate analytically. Note that $\varphi_\kappa\left(f(x),\,\omega\right)$ carries the same information as $\varphi\left(f(x),\,\omega\right)$, since both these functions are related to the starting density $f(x)$ by an analytic mapping. In fact, just as $f(x)$ and $\varphi\left(f(x),\,\omega\right)$ are Fourier transforms of each other, $f(x)$ and $\varphi_\kappa\left(f(x),\,\omega\right)$ are $\kappa$-Fourier transforms of each other. This observation justifies the use of the $\kappa$-Fourier transform instead of the classical Fourier transform in analyses based on the $\kappa$-formalism.\\
Following standard arguments, the phase factor in Eq. (\ref{fk}) can be expanded in powers of $\omega_{\{\kappa\}}$, such that
\begin{eqnarray}
\varphi_\kappa\left(f(x),\,\omega\right)={\cal N}\,\sum_{n=0}^\infty{(-i\,\omega_{\{\kappa\}})^n\over n!}\,\left\langle\left(x_{\{\kappa\}}\right)^n\right\rangle_\kappa \ .\label{series}
\end{eqnarray}
Taking into account the inequality
\begin{eqnarray}
{\cal N}\,\Big\langle\left(x_{\{\kappa\}}\right)^n\Big\rangle_\kappa=\int \left(x_{\{\kappa\}}\right)^n\,f(x)\,d_\kappa x<\int x^n\,f(x)\,dx=\langle x^n\rangle_{\kappa=0} \ ,
\end{eqnarray}
it follows that the $n$-th order $\kappa$-linear moment certainly exists if the standard moment of the same order exists.\\
On the other hand, it is easy to find distributions with only the first few standard moments finite, whereas the $\kappa$-linear moments exist at any order. Again, distribution (\ref{kgibbs}) is a paradigm. In fact, in this case, it is easy to verify that only the standard moments with $n<1/\kappa-1$ exist \cite{Scarfone3}, while the $\kappa$-moments are finite at any order.\\
Finally, from Eq. (\ref{series}) we can obtain the relation
\begin{eqnarray}
\Big\langle\left(x_{\{\kappa\}}\right)^n\Big\rangle_\kappa
={i^n\over\cal N}\,{d^n\varphi_\kappa\left(f(x),\,\omega\right)\over d_\kappa\omega^n}\Bigg|_{\omega=0} \ .
\end{eqnarray}
provided that the left-hand side exists; this relation can also be derived straightforwardly by using the $\kappa$-derivative of the transform given in Table 2.
Definition (\ref{kchar}) is useful to study the limit distribution of the $\kappa$-sum of $n$ statistically independent random variables, that is, the limit distribution $f(S_n)$, for large $n$, of $\kappa$-summed random variables
\begin{eqnarray}
S_n=X_1\oplus X_2\oplus \ldots\oplus X_n \ ,\label{sn}
\end{eqnarray}
given by
\begin{eqnarray}
f\left(S_n\right)=\Pi_{i=1}^nf(x_i) \ .
\end{eqnarray}
In this case, the function $\varphi_\kappa\left(f(S_n),\,\omega\right)$ coincides with the product of the characteristic functions of the $f(x_i)$, i.e.
\begin{eqnarray}
\varphi_\kappa\left(f(S_n),\,\omega\right)=\varphi_\kappa\left(f(x_1),\,\omega\right)
\cdot\varphi_\kappa\left(f(x_2),\,\omega\right)\cdot
\ldots\varphi_\kappa\left(f(x_n),\,\omega\right) \ ,
\end{eqnarray}
as it follows from the distributive property of $\kappa$-sum and $\kappa$-product.
In particular, if the quantities $x_i$ are also identically distributed, then $\varphi_\kappa\left(f(S_n),\,\omega\right)=\varphi_\kappa\left(f(x),\,\omega\right)^n$.
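The key fact underlying this factorization is that the deformed variable is additive under the $\kappa$-sum: with the standard form $x\oplus y=x\sqrt{1+\kappa^2y^2}+y\sqrt{1+\kappa^2x^2}$ (assumed here, not restated in this section), one has $(x\oplus y)_{\{\kappa\}}=x_{\{\kappa\}}+y_{\{\kappa\}}$, so the phase of the kernel factorizes exactly as the standard one does for ordinary sums. A quick check:

```python
import numpy as np

def kn(x, k):
    # Deformed number x_{kappa} = arcsinh(kappa * x) / kappa (assumed definition)
    return np.arcsinh(k * x) / k

def ksum(x, y, k):
    # kappa-sum (assumed standard form)
    return x * np.sqrt(1.0 + (k * y) ** 2) + y * np.sqrt(1.0 + (k * x) ** 2)

rng = np.random.default_rng(0)
k = 0.5
x, y = rng.normal(size=1000), rng.normal(size=1000)
# (x (+) y)_{kappa} = x_{kappa} + y_{kappa}, an exact identity
err = np.max(np.abs(kn(ksum(x, y, k), k) - (kn(x, k) + kn(y, k))))
print(err)   # exact up to round-off
```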
To study the problem of searching for families of stable distributions of the $\kappa$-sum of $n$-iterates of statistically independent and identically distributed (iid) random variables, we consider a pair of iid random variables $X$, with density distribution $f_1(x)$. The density distribution $f_2(x)$ of $S_2=X\oplus X$ can be obtained from the relation
\begin{eqnarray}
f_2(y)=\int f_1(x)\,f_1(y\ominus x)\,dx \ .\label{conv1}
\end{eqnarray}
Stable distributions fulfill the condition $f_1(x)=f_2(x)=\ldots=f_n(x)$, where $f_i(x)$, with $i=1,\ldots,\,n$, refers to the pdf of the $i$-th iterate $S_i$ given in Eq. (\ref{sn}).\\
They can be derived easily by using the property of the $\kappa$-Fourier transform under $\kappa$-convolution (cf. Table 2), since the characteristic function of stable distributions is invariant under $\kappa$-convolution.\\ Let us consider the following ansatz
\begin{eqnarray}
f_\kappa(x;\,\sigma)=C_\kappa\,\exp_\kappa\left(-{x\over\sqrt{2}\,\sigma}\otimes_\kappa
{x\over\sqrt{2}\,\sigma}\right) \ ,\label{kgauss}
\end{eqnarray}
where $C_\kappa=e^{-\kappa^2/4}/\sqrt{2\,\pi}\,\sigma$ is the normalization constant. Equation (\ref{kgauss}) represents a possible $\kappa$-generalization of the Gaussian distribution. The standard Gaussian is recovered in the $\kappa\to0$ limit.\\
Note that Eq. (\ref{kgauss}) differs from other versions of the $\kappa$-Gaussian proposed in the literature \cite{Wada1,Trivellato}. For instance, the following function
\begin{eqnarray}
\widetilde f_\kappa(x)=A_\kappa\,\exp_\kappa\left(-{x^2\over2\,\sigma^2}\right) \ ,\label{kgauss1}
\end{eqnarray}
which corresponds to the asymptotic solution of a diffusive process studied in \cite{Wada1}, has a power-law tail, different from that of distribution (\ref{kgauss}), which rather decays with a log-normal tail, being
\begin{eqnarray}
f_\kappa(x;\,\sigma)\approx\exp\left(-{1\over\kappa^2}\,\ln^2\left({\sqrt{2}\,\kappa\over\sigma}\,x\right)\right) \ ,
\end{eqnarray}
for $x\gg1$.\\
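This tail behaviour can be verified directly: with the assumed standard κ-algebra, $(y\otimes y)_{\{\kappa\}}=y_{\{\kappa\}}^2$ and $y_{\{\kappa\}}\simeq\ln(2\kappa y)/\kappa$ for $\kappa y\gg1$, so the exponent of (\ref{kgauss}) approaches $-\ln^2(\sqrt2\,\kappa\,x/\sigma)/\kappa^2$:

```python
import numpy as np

k, sigma, x = 0.3, 1.0, 1.0e6
y = x / (np.sqrt(2.0) * sigma)

# Exact log of the kappa-exponential factor of Eq. (kgauss):
# exp_k(-(y (x) y)) = exp(-(y_{kappa})^2)   (assumed kappa-algebra)
log_exact = -(np.arcsinh(k * y) / k) ** 2
# Log-normal asymptotic form quoted in the text
log_asym = -(1.0 / k ** 2) * np.log(np.sqrt(2.0) * k * x / sigma) ** 2

print(abs(log_exact / log_asym - 1.0))   # -> ~0 for large x
```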
However, unlike Eq. (\ref{kgauss1}), Eq. (\ref{kgauss}) is invariant under the $\kappa$-Fourier transform, since its $\kappa$-characteristic function $\varphi_{\kappa^\prime}\left(f_\kappa(x;\,\sigma),\,\omega\right)$, with $\kappa^\prime=\kappa/(\sqrt{2}\,\sigma)$, is again a $\kappa$-Gaussian, given by
\begin{eqnarray}
\nonumber
\varphi_{\kappa^\prime}\left(f_\kappa(x;\,\sigma),\,\omega\right)&=&\sigma\, C_\kappa\,\exp_{\kappa^{\prime\prime}}\left(-{\sigma\over\sqrt{2}}\,\omega\otimes_{\kappa^{\prime\prime}}
{\sigma\over\sqrt{2}}\,\omega\right)\\
&\equiv&\sigma\,f_{\kappa^{\prime\prime}}(\omega;\,1/\sigma) \ ,\label{tkg}
\end{eqnarray}
where $\kappa^{\prime\prime}=\kappa/\sigma^2$.\\
More importantly, function (\ref{kgauss}) is invariant under $n$ iterates of the $\kappa$-convolution, defined by
\begin{eqnarray}
(f\,\mbox{$\bigcirc\hspace{-3.8mm}*\hspace{.5mm}_\kappa$}\,g)(x)=\int\limits_{-\infty}\limits^{+\infty}
f(y)\,g(x\ominus_\kappa y)\,d_\kappa y \ ,\label{conv2}
\end{eqnarray}
that is commutative $(f\,\mbox{$\bigcirc\hspace{-3.8mm}*\hspace{.5mm}_\kappa$}\,g)(x)=(g\,\mbox{$\bigcirc\hspace{-3.8mm}*
\hspace{.5mm}_\kappa$}\,f)(x)$, associative $((f\,\mbox{$\bigcirc\hspace{-3.8mm}*\hspace{.5mm}_\kappa$}\,g)\,\mbox{$\bigcirc\hspace{-3.8mm}*
\hspace{.5mm}_\kappa$}\,h)(x)=
(f\,\mbox{$\bigcirc\hspace{-3.8mm}*\hspace{.5mm}_\kappa$}\,(g\,\mbox{$\bigcirc\hspace{-3.8mm}*
\hspace{.5mm}_\kappa$}\,h))(x)$ and bilinear $((c_1\,f+c_2\,g)\,\mbox{$\bigcirc\hspace{-3.8mm}*\hspace{.5mm}_\kappa$}\,h)(x)=
c_1\,(f\,\mbox{$\bigcirc\hspace{-3.8mm}*\hspace{.5mm}_\kappa$}\,h)
+c_2\,(g\,\mbox{$\bigcirc\hspace{-3.8mm}*\hspace{.5mm}_\kappa$}\,h)$.\\
In fact, setting
\begin{eqnarray}
F^{(n)}(x;\,\sigma)=(f_\kappa\,\mbox{$\bigcirc\hspace{-3.8mm}*\hspace{.5mm}_{\kappa^\prime}$}\, f_\kappa\,\mbox{$\bigcirc\hspace{-4mm}*\hspace{.5mm}_{\kappa^\prime}$}
\ldots\mbox{$\bigcirc\hspace{-3.8mm}*\hspace{.5mm}_{\kappa^\prime}$}\,f_\kappa)(x;\,\sigma) \ ,\label{kco}
\end{eqnarray}
its $\kappa$-Fourier transform is related to Eq. (\ref{tkg}) by
\begin{eqnarray}
\varphi_{\kappa^\prime}\left(F^{(n)}(x;\,\sigma),\,\omega\right)=
\left(\varphi_{\kappa^\prime}\left(f_\kappa(x;\,\sigma),\,\omega\right)\right)^n \ .\label{fg}
\end{eqnarray}
On the other hand, accounting for relations (\ref{eax}), the $n$-th power of Eq. (\ref{tkg}) can be written as
\begin{eqnarray}
\left(\varphi_{\kappa^\prime}\left(f_\kappa(x;\,\sigma),\,\omega\right)\right)^n=\sigma_n\, C_{\kappa_n}\,\exp_{\kappa^{\prime\prime}_n}\left(-{\sigma_n\over\sqrt{2}}\,\omega\otimes_{\kappa_n^{\prime\prime}}
{\sigma_n\over\sqrt{2}}\,\omega\right) \ ,
\end{eqnarray}
where $\sigma_n=\sigma\,\sqrt{n}$ and $\kappa_n=\kappa\,\sqrt{n}$,
which corresponds to the characteristic function of
\begin{eqnarray}
f^{(n)}_{\kappa_n}(x,\,\sigma_n)=C_{\kappa_n}\,\exp_{\kappa_n}\left(-{x\over\sqrt{2}\,\sigma_n}\otimes_{\kappa_n}
{x\over\sqrt{2}\,\sigma_n}\right) \ .\label{klc}
\end{eqnarray}
Note that the structure of Eq. (\ref{conv2}) used in this proof differs from that of Eq. (\ref{conv1}), since the former contains a $\kappa$-integral instead of a standard integral. Nevertheless, accounting for relation (\ref{int1}), it is easy to verify that the function
\begin{eqnarray}
f_\kappa(x,\,\sigma)={1\over\sqrt{2\,\pi}\,\sigma}{\exp_\kappa\left(-{x\over\sqrt{2}\,\sigma}\otimes_\kappa
{x\over\sqrt{2}\,\sigma}\right)\over\sqrt{1+\kappa^2\,\left(x\over\sqrt{2}\,\sigma\right)^2}} \ ,\label{kg1}
\end{eqnarray}
properly normalized, is stable under the composition (\ref{conv1}). Therefore, we can confirm that
\begin{eqnarray}
f^{(n)}_{\kappa_n}(x,\,\sigma_n)={1\over\sqrt{2\,\pi}\,\sigma_n}{\exp_{\kappa_n}\left(-{x\over\sqrt{2}\,\sigma_n}
\otimes_{\kappa_n}
{x\over\sqrt{2}\,\sigma_n}\right)\over\sqrt{1+\kappa_n^2\,\left(x\over\sqrt{2}\,\sigma_n\right)^2}} \ ,\label{kg2}
\end{eqnarray}
is the density distribution of a random variable corresponding to the $\kappa$-sum of $n$ iterates of random variables independent and identically distributed according to Eq. (\ref{kg1}).\\
In analogy with the log-normal distribution and the sinh-normal distribution, Eq. (\ref{kg1}) defines a family of arcsinh-normal distributions, parameterized by the deformation parameter $\kappa$, which belongs to the family of the Johnson $S_U$ distributions introduced in \cite{Johnson} and given by
\begin{eqnarray}
f(x)={\delta\over\sqrt{2\,\pi}\,\lambda}\,{e^{-{1\over2}\,\left(\gamma+\delta\,{\rm arcsinh}\left({x-\xi\over\lambda}\right)\right)^2}\over\sqrt{1+\left({x-\xi\over\lambda}\right)^2}} \ .\label{Jo}
\end{eqnarray}
Family (\ref{Jo}) coincides with distribution (\ref{kg1}) for $\gamma=\xi=0$, $\delta=\sqrt{2}/\kappa_n$ and $\lambda=\sqrt{2}\,\sigma_n/\kappa_n$.\\
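This correspondence is exact and easy to check pointwise. The sketch below evaluates both densities with the quoted parameter identification, writing $\kappa,\sigma$ for $\kappa_n,\sigma_n$ and using the assumed standard forms of $\otimes$ and $\exp_\kappa$ (for which the exponent of (\ref{kg1}) is $-{\rm arcsinh}^2(\kappa x/\sqrt2\sigma)/\kappa^2$):

```python
import numpy as np

def kgauss(x, k, sigma):
    # Eq. (kg1): kappa-Gaussian (assumed standard kappa-algebra)
    y = x / (np.sqrt(2.0) * sigma)
    expo = -(np.arcsinh(k * y) / k) ** 2    # -(y_{kappa})^2 = -(y (x) y)_{kappa}
    return np.exp(expo) / (np.sqrt(2.0 * np.pi) * sigma
                           * np.sqrt(1.0 + (k * y) ** 2))

def johnson_su(x, delta, lam):
    # Eq. (Jo) with gamma = xi = 0
    z = x / lam
    return delta / (np.sqrt(2.0 * np.pi) * lam) \
        * np.exp(-0.5 * (delta * np.arcsinh(z)) ** 2) / np.sqrt(1.0 + z ** 2)

k, sigma = 0.4, 1.2
delta, lam = np.sqrt(2.0) / k, np.sqrt(2.0) * sigma / k
for xv in (-2.0, 0.0, 0.7, 3.5):
    print(abs(kgauss(xv, k, sigma) - johnson_su(xv, delta, lam)))   # -> ~0
```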
In \cite{Kaniadakis0} it has been shown that the $\kappa$-sum is substantially equivalent to the relativistic addition of momenta, and a possible relation between the $\kappa$-statistics and the theory of special relativity was conjectured there. In this sense, the $\kappa$ parameter plays the role of a speed limit, according to the relation $\kappa\propto1/c$. Consequently, in the $\kappa\to0$ limit, corresponding to the Galilean relativistic limit ($c\to\infty$), the $\kappa$-sum reduces to the standard sum and, consistently, the arcsinh-normal distribution (\ref{kg2}) recovers the Gaussian distribution which, as stated by the standard central limit theorem, is the stable limiting distribution of the sum of iid random variables.
It is fair to note that a similar result has been derived recently in \cite{Keague} by using a different approach.
\begin{figure}[h]
\begin{center}
\includegraphics*[width=11cm]{fig5.eps}\\
\caption{Plot of $\kappa$-Gaussian (\ref{kg1}) in the linear-linear scale (left panel) and in the log-linear scale (right panel) for several values of $\kappa$. The full-line coincides with the standard Gaussian function.}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics*[width=11cm]{fig6.eps}\\
\caption{$\kappa$-convolution of rectangular-function after several iterates, with $\kappa_1=0.5$ (left panel). The comparison of the numerical curve after $n=15$ iterates (full line) with the standard Gaussian (dotted line) and the $\kappa$-Gaussian (\ref{kg2}) (dashed line), are depicted in the right panel.}
\end{center}
\end{figure}
We also observe that, like distribution (\ref{kgauss}), distribution (\ref{kg2}) has a log-normal asymptotic behaviour rather than a power-law tail. This is shown in Figure 5 (left panel), where we have depicted, for the sake of illustration, the shape of the $\kappa$-Gaussian (\ref{kg1}) for several values of the deformation parameter. In the right panel of Figure 5, the same curves are reported in a log-linear scale, showing the log-normal asymptotic behaviour.\\
Finally, just as the standard Gaussian is the limiting distribution of iid summed random variables, we expect the same to hold for distribution (\ref{kg2}) when iid random variables are $\kappa$-summed.
We show the reliability of this statement by means of a numerical computation, reported in Figure 6 (left panel), where we plot the distribution of the random variable $S_n$, with $\kappa_1=0.5$, after several iterations. The starting distribution for the single random variable is assumed to be a rectangular function. As expected, the iterated distribution quickly approaches a bell shape. In the same Figure 6 (right panel), this limiting distribution, obtained by a numerical computation after $n=15$ iterates (full line), is compared with the standard Gaussian ($\kappa_1=0$, dotted line) and the $\kappa$-Gaussian (\ref{kg2}) ($\kappa_1=0.5$, dashed line). The $\kappa$-Gaussian clearly fits the numerical curve better than the standard Gaussian.\\
The consistency of this result is further supported by recalling that, for large values, the $\kappa$-sum reduces to a standard product [cf. Eq. (\ref{okp})]. This means that, in the far region of large $x$ values, the random variable $S_n$ corresponds to the product of $n$ iid random variables which, according to the central limit theorem, has a log-normal limiting distribution, in agreement with the tail of Eq. (\ref{kg2}).
\sect{Conclusions}
In this paper we have reformulated the standard Fourier transform in a formalism consistent with the $\kappa$-algebra and the $\kappa$-calculus. The new formulation has been derived starting from the $\kappa$-deformed Fourier series recently introduced by us in \cite{Scarfone-1}.\\
The $\kappa$-Fourier transform $\widehat f_\kappa(\omega)\equiv{\cal F}_\kappa[f(x)](\omega)$ belongs to the class of integral transforms (\ref{standard}) and is characterized by a kernel $h_\kappa(x,\,\omega)$ composed of a deformed phase and a damping factor, which give $h_\kappa(x,\,\omega)$ a wavelet-like shape. In addition, both the real part and the imaginary part of the phase factor have an asymptotically log-periodic behaviour.\\
We have shown that the $\kappa$-deformed transform of a function $f(x)$ is isomorphic to the canonical transform, since $\widehat f_\kappa(\omega)$ is equivalent to the standard Fourier transform of the function $f^{\{\kappa\}}(x)\equiv f(x^{\{\kappa\}})$, that is
\begin{eqnarray}
{\cal F}_\kappa[f(x)](\omega)\equiv{\cal F}[f^{\{\kappa\}}(x)](\omega_{\{\kappa\}}) \ .
\end{eqnarray}
However, in spite of this equivalence, the $\kappa$-Fourier transform turns out to be more appropriate for handling functions defined in the realm of the $\kappa$-algebra.\\
As a relevant example, we have applied our formalism to the study of the limiting distribution of $\kappa$-summed statistically independent random variables, that is, the distribution given in Eq. (\ref{kg2}) of the random variables $S_n=X\oplus X\oplus \ldots\oplus X$, by using arguments similar to those employed in the derivation of the central limit theorem. \\
\noindent{\bf References}\\
\section{Introduction}
It is well known that the time development of reaction diffusion
systems in which two types of particles A and B (`particles' and
`antiparticles') annihilate irreversibly is dominated by
density fluctuations~\cite{TW,KR,BL,LC1,SSB}.
If an equal number of A and B particles is randomly placed on
the sites of a $d$-dimensional lattice the densities are
asymptotically given by $n(t) \sim t^{-d/4}$ (for $d<4$) whereas a
na\"{\i}ve approach based on mean-field rate equations predicts
$n(t) \sim t^{-1}$.
The slow density decay is due to the asymptotic segregation
of particles into regions of purely A or B particles. The reaction
becomes more efficient if the effects of segregation are reduced
by an appropriate mixing mechanism like turbulence~\cite{DW} or
random forces in low-viscosity liquids~\cite{AHLS}. In these
cases the diffusive particle motion is replaced by a
superdiffusive behaviour characterized by a mean-square
displacement of the form $\langle x(t)^{2}\rangle \sim t^{2/z}$
with $z < 2$. Very recently the annihilation reaction in
a driven diffusive system with repulsive interaction between
particles of the same type~\cite{Jan,IKR} has been studied. If
the particles are subject to an external driving force in one
direction the repulsion gives rise to superdiffusion with $z=3/2$
(for $d=1$), and the density decreases asymptotically as $n(t)
\sim t^{-1/3}$. The relation between the long-time behaviour of
$n(t)$ and the value of the exponent $z$ has been investigated
for particles performing independent L\'evy walks~\cite{ZK}. In
this case $z$ is defined through the scaling of the transition
probability (which is a function of $r^{-z} t$) since the
mean-square displacement for L\'evy flights with $z<2$ diverges.
It has been shown that below the upper critical dimension
$d_{c}(z) = 2z$ the long-time behaviour of the density is
given by $n(t) \sim t^{-d/(2z)}$.
The mixing caused by the inhomogeneous velocity field in a linear
shear flow reduces the critical dimension to $d_{c}=2$, i.e. $n(t)
\sim t^{-1}$ in any physically accessible dimension~\cite{HB}.
Other models for particle mixing have been considered in
references~\cite{SB1,SB2}. In this paper we investigate the
A+B-annihilation reaction in a medium with a random velocity
field. It is assumed that the velocity at every point ${\bf r}
= (x, {\bf y})$ of a $d$-dimensional system is parallel or
antiparallel to the $x$-axis and depends only on the coordinates
perpendicular to the flow. The velocity field is modelled by
quenched Gaussian random variables with zero mean and the
correlations
\begin{equation}
\left[ v({\bf y}) v({\bf y}^{\prime}) \right] = f \delta({\bf y}
- {\bf y}^{\prime}) \label{eq1}
\end{equation}
where the square brackets indicate the average over the
realizations of the flow. Originally this model was
introduced to describe the ground water transport in
heterogeneous rocks~\cite{MdeM}.
Below three dimensions a random walk in the presence of a
velocity field of the form~(\ref{eq1}) displays superdiffusive
behaviour in the $x$-direction~\cite{JH,BGKPR}. For a particle
starting at ${\bf r} = {\bf 0}$ at time $t=0$ the mean square
displacement in the $x$-direction averaged over the configurations
of $v({\bf y})$ is given by $[\langle x^{2} \rangle] \simeq
\sigma^{2} t^{(5-d)/2}$ for $d<3$ (with a generalized diffusion
constant $\sigma^{2}$). One can use this result for a na\"{\i}ve
estimate of the densities at large times $t$. If we approximate
the motion of the particles by independent anisotropic L\'evy
flights with exponents $z_{\|} = 4/(5-d)$ (in $x$-direction) and
$z_{\bot}=2$ a straightforward generalization of the arguments
given in~\cite{ZK} yields
\begin{equation}
n_{\rm A, B}(t) \sim \left[ n_{0} t^{-1/z_{\|}}
t^{-(d-1)/z_{\bot}} \right]^{1/2} = n_{0}^{1/2} t^{-(d+3)/8}
\label{levy}
\end{equation}
where $n_{0}$ is the initial density of each particle type.
However, it is not clear to what extent the effects of the velocity
field can be described by independent L\'evy walks since
$v({\bf y})$ is a {\em quenched} random variable with an
infinite correlation length in the $x$-direction.
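Setting that caveat aside, the exponent in equation~(\ref{levy}) is fixed by simple arithmetic in the walk dimensions $z_{\|}$ and $z_{\bot}$; a minimal cross-check in exact rational arithmetic (a sketch, not part of the derivation):

```python
from fractions import Fraction

def density_exponent(d):
    """Levy-flight estimate: n(t) ~ [n0 t^(-1/z_par) t^(-(d-1)/z_perp)]^(1/2)."""
    z_par = Fraction(4, 5 - d)   # superdiffusion parallel to the flow
    z_perp = Fraction(2)         # ordinary diffusion transverse to it
    return Fraction(1, 2) * (1 / z_par + Fraction(d - 1) / z_perp)

# the combination reproduces (d + 3)/8 for every physical dimension d <= 3
for d in (1, 2, 3):
    assert density_exponent(d) == Fraction(d + 3, 8)
```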
In the following sections we will use the field theoretic
methods developed in~\cite{Pel,LC1} to show that the power
law~(\ref{levy}) is correct for $t \rightarrow \infty$ and
calculate the amplitude at first order in $\epsilon
= 3-d$. The result of this calculation reads
\begin{equation}
n_{\rm A, B}(t) \simeq A(\epsilon) \left( 2 \pi^{3/2} (8 \pi
D)^{(d-1)/2} \sigma \right)^{-1/2} \sqrt{n_{0}} t^{-(d+3)/8}
\end{equation}
where
\begin{equation}
A(\epsilon) = 1 + \frac{\epsilon}{8} \left( 3 \ln 2 - 1 \right)
+ \Or(\epsilon^{2})
\end{equation}
and $n_{0} = n_{\rm A}(0)=n_{\rm B}(0)$ is the initial
density of each particle type.
In contrast to the case of annihilating L\'evy particles
which can be treated in a mean field approximation with random
initial conditions the correlated particle motion in a
quenched velocity field requires the application of the
renormalization group.
In the next section a microscopic model for a reaction diffusion
system in a random velocity field is defined and its mapping to a
continuum field theory is discussed. In Section~\ref{nonint}
we briefly review the renormalized field theory for
non-interacting particles in a random velocity field.
The results are used in Section~\ref{Seff} to derive an
effective action which describes the long-time behaviour
of the system. The asymptotic decay of the densities is
calculated in section~\ref{decay} and the results are discussed
in Section~\ref{concl}.
\section{The model} \label{model}
At every time $t$ the microscopic state of the system is defined
by the occupation numbers $m({\bf r}_{i})$ of A-particles and
$n({\bf r}_{i})$ of B-particles at every site $i$ (with position
${\bf r}_{i}$) of a $d$-dimensional lattice.
Denoting the probability for a given configuration $\{m, n \}$ at
time $t$ by $P(\{m, n\}; t)$ we can describe the stochastic
dynamics of the system by a set of linear differential equations
(master equation) for the probabilities $P(\{m, n\}; t)$.
(We assume that the time evolution is Markovian.)
In order to apply field theoretic renormalization group methods
we use a functional integral description which is equivalent to
the master equation. Since a detailed derivation of this
formalism is given in references~\cite{Pel,BLee,LC1} we only give
the main steps focussing on the modifications that are necessary
to study the influence of the random velocity field.
We first map the probabilities $P(\{m, n\}; t)$ to a vector
$|\Phi(t)\rangle$ of the infinite dimensional Fock
space spanned by the vectors $|\{m, n\}\rangle$ by
writing~\cite{Doi,PG}
\begin{equation}
|\Phi(t)\rangle = \sum_{\{m, n\}} P(\{m, n\}; t)
\,|\{m, n\}\rangle .
\end{equation}
Since the master equation is linear and of first order in $t$ it
may be written in the form
\begin{equation}
\partial_{t} |\Phi(t)\rangle = \hat{L} |\Phi(t)\rangle
\label{Liou}
\end{equation}
with an appropriate Liouville operator $\hat{L}$. This operator
can be expressed in terms of the annihilation operators
$\hat{a}_{\rm A}({\bf r}_{i})$, $\hat{a}_{\rm B}({\bf r}_{i})$
and creation operators $\hat{a}_{\rm A}^{\star}({\bf r}_{i})$,
$\hat{a}_{\rm B}^{\star}({\bf r}_{i})$ defined by
\begin{equation}
\eqalign{
\hat{a}_{\rm A}({\bf r}_{i}) |m({\bf r}_{i}) \rangle = m({\bf
r}_{i}) |m({\bf r}_{i}) - 1 \rangle \qquad & \hat{a}_{\rm
A}^{\star}({\bf r}_{i}) |m({\bf r}_{i}) \rangle = |m({\bf
r}_{i}) + 1 \rangle \\
\hat{a}_{\rm B}({\bf r}_{i}) |n({\bf r}_{i}) \rangle = n({\bf
r}_{i}) |n({\bf r}_{i}) - 1 \rangle & \hat{a}_{\rm
B}^{\star}({\bf r}_{i}) |n({\bf r}_{i}) \rangle = |n({\bf
r}_{i}) + 1 \rangle .
}
\end{equation}
These operators leave the occupation numbers $m({\bf
r}_{j})$ and $n({\bf r}_{j})$ of the sites $j \neq i$ unchanged.
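In this (Doi) convention the operators carry no $\sqrt{m}$ factors, yet they still obey $[\hat{a}, \hat{a}^{\star}] = 1$. A minimal matrix sketch on an occupation space truncated to $N$ states (the truncation is an artefact of the sketch, not of the formalism):

```python
N = 8  # truncate the occupation-number space to |0>, ..., |N-1>

# Doi convention: a|m> = m|m-1>,  a_dag|m> = |m+1>  (no sqrt(m) factors)
a = [[0.0] * N for _ in range(N)]
a_dag = [[0.0] * N for _ in range(N)]
for m in range(1, N):
    a[m - 1][m] = float(m)     # <m-1| a |m> = m
for m in range(N - 1):
    a_dag[m + 1][m] = 1.0      # <m+1| a_dag |m> = 1

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)]
            for i in range(N)]

comm = [[matmul(a, a_dag)[i][j] - matmul(a_dag, a)[i][j]
         for j in range(N)] for i in range(N)]
# [a, a_dag] = 1 holds exactly, except in the highest (truncated) state
assert all(abs(comm[i][j] - (1.0 if i == j else 0.0)) < 1e-12
           for i in range(N - 1) for j in range(N - 1))
# number operator: a_dag a |m> = m |m>
n_op = matmul(a_dag, a)
assert all(abs(n_op[m][m] - m) < 1e-12 for m in range(N))
```

The commutator fails only in the highest state, where the truncation cuts off the action of $\hat{a}^{\star}$.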
The diffusion of non-interacting particles corresponds to the
Liouvillean
\begin{eqnarray}
\fl \hat{L}_{D} = \frac{D}{h^{2}} \sum_{<i,j>} \left(
\hat{a}_{\rm A}^{\star}({\bf r}_{j}) - \hat{a}_{\rm
A}^{\star}({\bf r}_{i}) \right) \left( \hat{a}_{\rm A}({\bf
r}_{i}) - \hat{a}_{\rm A}({\bf r}_{j}) \right) \nonumber \\
+ \frac{D}{h^{2}} \sum_{<i,j>} \left( \hat{a}_{\rm
B}^{\star}({\bf r}_{j}) - \hat{a}_{\rm B}^{\star}({\bf r}_{i})
\right) \left( \hat{a}_{\rm B}({\bf r}_{i}) - \hat{a}_{\rm
B}({\bf r}_{j}) \right)
\end{eqnarray}
where the sum extends over all pairs of nearest neighbour sites,
$h$ is the lattice spacing and $D$ denotes the
diffusion constant. The annihilation of particle-antiparticle
pairs at the same site is described by
\begin{equation}
\hat{L}_{\rm reac} = \frac{k}{h^{d}} \sum_{i} \left( 1 -
\hat{a}_{\rm A}^{\star}({\bf r}_{i}) \hat{a}_{\rm
B}^{\star}({\bf r}_{i}) \right) \hat{a}_{\rm A}({\bf r}_{i})
\hat{a}_{\rm B}({\bf r}_{i})
\end{equation}
where $k/h^{d}$ is the reaction rate.
To investigate the effect of a random shear flow we introduce
independent Gaussian random variables $v({\bf y})$ labelled by
the coordinates ${\bf y}$ perpendicular to the direction of the
flow. At each point ${\bf r}_{i} = (x_{i}, {\bf y}_{i})$ the
velocity field prefers particle jumps parallel or antiparallel to
the $x$-direction depending on the sign of $v({\bf y}_{i})$. If
$v({\bf y}_{i})$ is positive particles at the point $(x_{i}, {\bf
y}_{i})$ will jump with rate $v({\bf y}_{i})/h$ in positive
$x$-direction whereas for $v({\bf y}_{i}) < 0$ they will hop with
rate $-v({\bf y}_{i})/h$ in the opposite direction. Here we
assume that $v({\bf y})$ have zero mean and the correlations
\begin{equation}
\left[ v({\bf y}) v({\bf y}^{\prime}) \right] = \bar{f}
h^{-(d-1)} \delta_{{\bf y}, {\bf y}^{\prime}} .
\end{equation}
While deviations from the Gaussian distribution turn out to be
irrelevant for the asymptotic scaling behaviour long range
correlations of the velocity field~\cite{ALR} or a non-vanishing
mean shear~\cite{HB} change the universality class. The motion
of the particles in the velocity field is described by the
Liouvillean
\begin{eqnarray}
\fl \hat{L}_{\rm flow} = \frac{1}{h} \sum_{i} |v({\bf y}_{i})|
\left[ \hat{a}_{\rm A}^{\star}({\bf r}_{i} + {\rm sgn}(v({\bf
y}_{i})) h {\bf e}_{x}) - \hat{a}_{\rm A}^{\star}({\bf r}_{i})
\right] \hat{a}_{\rm A}({\bf r}_{i}) \nonumber \\
+ \frac{1}{h} \sum_{i} |v({\bf y}_{i})| \left[
\hat{a}_{\rm B}^{\star}({\bf r}_{i} + {\rm sgn}(v({\bf y}_{i}))
h {\bf e}_{x}) - \hat{a}_{\rm B}^{\star}({\bf r}_{i}) \right]
\hat{a}_{\rm B}({\bf r}_{i})
\end{eqnarray}
where ${\rm sgn}(v) = \pm 1$ is the sign function and ${\bf
e}_{x}$ denotes the unit vector in positive $x$-direction.
In order to perform the average over the realizations of $v({\bf
y})$ it will be convenient to write $\hat{L}_{\rm flow}$ as a
sum, $\hat{L}_{\rm flow} = \hat{L}^{(-)} + \hat{L}^{(+)}$, where
\begin{eqnarray}
\fl \hat{L}^{(-)} = \frac{1}{2 h} \sum_{i} v({\bf y}_{i})
\left[ \hat{a}_{\rm A}^{\star}({\bf r}_{i} + h {\bf e}_{x}) -
\hat{a}_{\rm A}^{\star}({\bf r}_{i} - h {\bf e}_{x}) \right]
\hat{a}_{\rm A}({\bf r}_{i}) \nonumber \\
+ \frac{1}{2 h} \sum_{i} v({\bf y}_{i})
\left[ \hat{a}_{\rm B}^{\star}({\bf r}_{i} + h {\bf e}_{x}) -
\hat{a}_{\rm B}^{\star}({\bf r}_{i} - h {\bf e}_{x}) \right]
\hat{a}_{\rm B}({\bf r}_{i}) \label{L-}
\end{eqnarray}
is odd with respect to the velocity field and
\begin{eqnarray}
\fl \hat{L}^{(+)} = \frac{1}{2 h} \sum_{i} |v({\bf y}_{i})|
\left[ \hat{a}_{\rm A}^{\star}({\bf r}_{i} + h {\bf e}_{x}) +
\hat{a}_{\rm A}^{\star}({\bf r}_{i} - h {\bf e}_{x}) - 2
\hat{a}_{\rm A}^{\star}({\bf r}_{i})\right] \hat{a}_{\rm
A}({\bf r}_{i}) \nonumber \\
+ \frac{1}{2 h} \sum_{i} |v({\bf y}_{i})| \left[ \hat{a}_{\rm
B}^{\star}({\bf r}_{i} + h {\bf e}_{x}) + \hat{a}_{\rm
B}^{\star}({\bf r}_{i} - h {\bf e}_{x}) - 2 \hat{a}_{\rm
B}^{\star}({\bf r}_{i})\right] \hat{a}_{\rm B}({\bf r}_{i})
\label{L+}
\end{eqnarray}
depends only on the modulus $|v({\bf y})|$. The even part
$\hat{L}^{(+)}$ can be interpreted as a contribution to the
diffusion in $x$-direction with a ${\bf y}$-dependent diffusion
constant.
The formal solution of equation~(\ref{Liou}) is given by
\begin{equation}
|\Phi(t)\rangle = \exp(\hat{L} t) |\Phi(0)\rangle
\end{equation}
with $\hat{L} = \hat{L}_{D} + \hat{L}_{\rm reac} +
\hat{L}_{\rm flow}$.
In order to derive a functional integral representation for
the dynamics one uses the Trotter formula
\begin{equation}
\exp(\hat{L} t) = \lim_{n \rightarrow \infty} (1+\hat{L}
t/n)^{n}
\end{equation}
to rewrite the time evolution operator as a product of $n$
factors linear in $\hat{L}$. Inserting the identity operator
in a coherent state representation between the individual
factors~\cite{Pel,BLee,LC1} one obtains a path integral with
action
\begin{eqnarray}
\fl S[\tilde{a}, a; \tilde{b}, b] = h^{d} \int \d t \sum_{i} [
\tilde{a}({\bf r}_{i}, t) \partial_t a({\bf r}_{i}, t) +
\tilde{b}({\bf r}_{i}, t) \partial_t b({\bf r}_{i}, t) ]
\nonumber \\
- \int \d t L[\tilde{a}(t), a(t); \tilde{b}(t), b(t)] -
h^{d} \sum_{i} n_{0} \left( \tilde{a}_{\rm A}({\bf r}_{i}, 0) +
\tilde{a}_{\rm B}({\bf r}_{i}, 0) \right) \label{12}
\end{eqnarray}
where $\tilde{a}$, $a$, $\tilde{b}$ and $b$ are c-number
functions and $L[\tilde{a}(t), a(t); \tilde{b}(t), b(t)]$
can be obtained from $\hat{L}$ via the replacements
\begin{equation}
\eqalign{
\hat{a}_{\rm A}({\bf r}_{i}) \rightarrow h^{d} a({\bf r}_{i}, t)
\qquad & \hat{a}_{\rm A}^{\star}({\bf r}_{i}) \rightarrow 1 +
\tilde{a}({\bf r}_{i}, t) \\
\hat{a}_{\rm B}({\bf r}_{i}) \rightarrow h^{d} b({\bf r}_{i}, t)
& \hat{a}_{\rm B}^{\star}({\bf r}_{i}) \rightarrow 1 +
\tilde{b}({\bf r}_{i}, t) .
}
\end{equation}
In equation~(\ref{12}) $n_{0}$ denotes the initial density of
each species.
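The Trotter limit underlying this construction is elementary and can be illustrated numerically; the $2\times 2$ generator below is an arbitrary stand-in for $\hat{L}$, chosen for the sketch because its exponential is a rotation matrix in closed form:

```python
import math

# Trotter limit (1 + L t/n)^n -> exp(L t), for L = [[0, 1], [-1, 0]],
# whose exact exponential is the rotation matrix by angle t.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(M, n):
    # binary exponentiation of a 2x2 matrix
    R = [[1.0, 0.0], [0.0, 1.0]]
    while n:
        if n & 1:
            R = matmul(R, M)
        M = matmul(M, M)
        n >>= 1
    return R

t, n = 1.0, 1 << 20
M = [[1.0, t / n], [-t / n, 1.0]]          # 1 + L t/n
approx = matpow(M, n)
exact = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]
err = max(abs(approx[i][j] - exact[i][j]) for i in range(2) for j in range(2))
assert err < 1e-5   # error is O(t^2 / n)
```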
In order to calculate the average of correlation and response
functions with respect to disorder we use the effective action
$\bar{S}$ defined by
\begin{equation}
\exp(-\bar{S}[\tilde{a}, a; \tilde{b}, b]) = \left[
\exp(-S[\tilde{a}, a; \tilde{b}, b]) \right] .
\end{equation}
For each value of the coordinate ${\bf y}$ one has to calculate
an integral of the form
\begin{equation}
\int_{-\infty}^{\infty} \d v({\bf y}) \, (2 \pi
\bar{f}/h^{d-1})^{-1/2} \exp\left( -\frac{v({\bf y})^{2}}{2
\bar{f}/h^{d-1}} \right) \exp(- A v({\bf y}) - B |v({\bf
y})|) \label{av}
\end{equation}
where $A$ and $B$ are shorthand notations for the contributions
to $S$ coming from $\hat{L}^{(-)}$ and $\hat{L}^{(+)}$,
respectively. The integral~(\ref{av}) can be expressed in
terms of error functions. We only need the approximation
\begin{equation}
(\protect\ref{av}) = \exp\left( \frac{1}{2} (\bar{f}/h^{d-1})
A^{2} - \sqrt{\frac{2}{\pi}} (\bar{f}/h^{d-1})^{1/2} B + \ldots
\right)
\end{equation}
since higher orders in $A$ and $B$ turn out to be irrelevant
for the asymptotic scaling behaviour. The first term in the
exponential function (proportional to $A^{2}$) gives rise to a
new interaction which is non-local with respect to time while
the second contribution (linear in $B$) modifies the diffusion
constant in the direction parallel to the flow. Neglecting all
irrelevant terms one arrives at
\begin{eqnarray}
\fl \bar{S}[\tilde{a},a; \tilde{b}, b] = \int \d t \int \d^{d}r
\left[ \tilde{a} (\partial_{t} a - D \triangle_{\bot} a - D_{\|}
\partial_{\|}^{2} a) \right. \nonumber \\
+ \left. \tilde{b} (\partial_{t} b - D \triangle_{\bot} b - D_{\|}
\partial_{\|}^{2} b) + k a b (\tilde{a}+\tilde{b}+\tilde{a} \tilde{b})
- n_{0} (\tilde{a} + \tilde{b}) \delta(t) \right] \nonumber \\
- \frac{\bar{f}}{2} \int \d^{d-1}y \left( \int \d t \int \d x
(\tilde{a} \partial_{\|}a + \tilde{b} \partial_{\|}b) \right)^{2} .
\label{actab}
\end{eqnarray}
Here the sum over lattice sites has been replaced by an integral
over the continuous variable ${\bf r} = (r_{\|}, {\bf r}_{\bot})
= (x, {\bf y})$. This continuum model is appropriate for the
study of the long-time and large-distance behaviour of the
system.
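The truncated disorder average leading to this action can be verified numerically. In the sketch below $s^{2}$ plays the role of $\bar{f}/h^{d-1}$, the values of $A$, $B$ and $s$ are arbitrary small test numbers, and the residual discrepancy is of the neglected order $B^{2}$:

```python
import math

# Check  ln E[exp(-A v - B|v|)] ~ (s^2/2) A^2 - sqrt(2/pi) s B
# for v ~ N(0, s^2), to the order kept in the effective action.
def disorder_average(A, B, s, nsteps=40000, cutoff=12.0):
    # composite Simpson integration of the Gaussian average
    lo, hi = -cutoff * s, cutoff * s
    h = (hi - lo) / nsteps
    total = 0.0
    for i in range(nsteps + 1):
        v = lo + i * h
        w = 1.0 if i in (0, nsteps) else (4.0 if i % 2 else 2.0)
        total += w * math.exp(-v * v / (2 * s * s) - A * v - B * abs(v))
    return total * h / 3.0 / (s * math.sqrt(2 * math.pi))

A, B, s = 0.01, 0.01, 1.0
lhs = math.log(disorder_average(A, B, s))
rhs = 0.5 * s * s * A * A - math.sqrt(2 / math.pi) * s * B
assert abs(lhs - rhs) < 1e-4   # difference is of the dropped order B^2
```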
Since the quantity $\psi = (a-b)/\sqrt{2}$ is closely related to
the conserved density difference of A-particles and B-particles
it is convenient to introduce the fields~\cite{LC1}
\begin{equation}
\fl \psi = \frac{1}{\sqrt{2}} (a-b) \qquad \tilde{\psi} =
\frac{1}{\sqrt{2}} (\tilde{a} - \tilde{b}) \qquad \phi =
\frac{1}{\sqrt{2}} (a+b) \qquad \tilde{\phi} = \frac{1}{\sqrt{2}}
(\tilde{a} + \tilde{b}) . \label{psi}
\end{equation}
After rescaling of time by the diffusion constant $D$ the action
becomes in terms of the new variables
\begin{eqnarray}
\fl \bar{S}[\tilde{\psi},\psi; \tilde{\phi}, \phi] = \int \d t \int
\d^{d}r \left[ \tilde{\psi} (\partial_{t} \psi - \triangle_{\bot}
\psi - \kappa \partial_{\|}^{2} \psi) + \tilde{\phi} (\partial_{t}
\phi - \triangle_{\bot} \phi - \kappa \partial_{\|}^{2} \phi)
\right. \nonumber \\
\left. + \lambda_{1} \tilde{\phi} (\phi^{2} - \psi^{2}) +
\lambda_{2} (\tilde{\phi}^{2} - \tilde{\psi}^{2}) (\phi^{2} -
\psi^{2}) - \sqrt{2} n_{0} \tilde{\phi} \delta(t) \right]
\nonumber \\
- \frac{f}{2} \int \d^{d-1}y \left( \int \d t \int \d x
(\tilde{\psi} \partial_{\|}\psi + \tilde{\phi} \partial_{\|}\phi)
\right)^{2} \label{act}
\end{eqnarray}
with the coupling coefficients $\lambda_{1} = k/(\sqrt{2} D)$,
$\lambda_{2} = k/(4 D)$, $f = \bar{f}/D^{2}$ and $\kappa =
D_{\|}/D$.
The action $\bar{S}$ includes two different types of
fluctuations. The interaction proportional to
$\lambda_{2}$ leads to anticorrelations neglected in the mean
field rate equations. In the case of the single species
annihilation $A+A \rightarrow \emptyset$ these anticorrelations
are responsible for the slow density decay $n(t) \sim t^{-d/2}$
in $d<2$~\cite{BLee}. The interaction proportional to
$f$ describes the effect of the quenched random velocity field
and gives rise to the superdiffusive motion of particles found in
references \cite{BGKPR,JH}. A straightforward dimensional
analysis shows that $\lambda_{2}$ is relevant below
$d_{\lambda}=2$ while the upper critical dimension of the
disorder is $d_{f}=3$. As already pointed out in~\cite{LC1} this
does not mean that $\lambda_{2}$ may be neglected from the
outset if $d>d_{\lambda}$. For non-vanishing $\lambda_{2}$
coarse graining of the action $\bar{S}$ generates new
interactions which are relevant below four dimensions. Since
these interactions are located at the `time surface' $t=0$ they
modify the initial distribution of the fields $\psi$ and $\phi$.
An effective action appropriate for the analysis of fluctuation
effects in dimensions $d>d_{\lambda}$ includes all relevant
interactions which may be generated by the renormalization
group for $\lambda_{2}>0$.
We will use renormalization group improved perturbation theory
to derive results from the effective action for non-zero $f$.
Since this calculation is based on the results of
reference~\cite{JH} for non-interacting particles (i.e.
$\lambda_{1} = \lambda_{2} = 0$) it is useful to review the
renormalization group approach to this problem briefly in the
next section.
\section{Non-interacting particles in a random velocity field}
\label{nonint}
For $\lambda_{1} = \lambda_{2} = n_{0} = 0$ only Green functions
of the form
\begin{equation}
G_{M,N}(\{{\bf r}, t\}) =
\left.\left\langle \prod_{i=1}^{M} \psi({\bf r}_{i}, t_{i})
\tilde{\psi}(\tilde{{\bf r}}_{i}, \tilde{t}_{i}) \cdot
\prod_{j=M+1}^{M+N} \phi({\bf r}_{j}, t_{j})
\tilde{\phi}(\tilde{{\bf r}}_{j}, \tilde{t}_{j}) \right\rangle
\right|_{\lambda=0}
\label{GMN}
\end{equation}
with an equal number of insertions of $\psi$ ($\phi$) and
$\tilde{\psi}$ ($\tilde{\phi}$) are non-zero.
(Throughout this section the angular brackets indicate an
average with weight $\exp(-\bar{S})$ for $\lambda_{1} =
\lambda_{2} = n_{0} = 0$.)
Since $f$ is a relevant parameter below three dimensions the
asymptotic scaling behaviour of the Green functions cannot be
obtained from a na\"{\ii}ve perturbation expansion at finite
order in $f$. We therefore use the renormalization group to
improve the perturbation theory by a partial summation of the
perturbation series. Due to ultraviolet-divergencies at the upper
critical dimension $d_{f}=3$ the individual contributions to
the perturbation series have poles in $\epsilon = 3-d$.
In the minimal renormalization scheme these poles are absorbed
into renormalizations of coupling coefficients and fields.
The propagator of the free (Gaussian) field theory is given
by
\begin{equation}
\langle \psi({\bf r}, t) \tilde{\psi}({\bf 0}, 0) \rangle_{0}
= \langle \phi({\bf r}, t) \tilde{\phi}({\bf 0}, 0) \rangle_{0}
= \int_{{\bf q}} \e^{\i {\bf q} {\bf r}} G({\bf q}, t)
\end{equation}
with the Fourier transform
\begin{equation}
G({\bf q}, t) = \Theta(t) \exp\left(-(q_{\bot}^{2} + \kappa
q_{\|}^{2}) t\right) . \label{prop}
\end{equation}
Here $\Theta(t)$ is the Heaviside step function and the indices
`$\|$' and `$\bot$' indicate the respective directions parallel
and perpendicular to the flow.
Due to the strong anisotropy of the velocity field the theory
is superrenormalizable in the upper critical dimension~\cite{JH},
i.e. the number of superficially divergent diagrams is finite.
In fact, it is sufficient to subtract the $\epsilon$-pole of the
diagram shown in figure~\ref{fig1} to render the field theory
finite at every order of the perturbation theory.
The required renormalization factor can be obtained from the
response function $G_{1,0}=G_{0,1}$ at one-loop order. A
straightforward calculation yields
\begin{eqnarray}
\fl \int \d^{d}r \exp(-\i q_{\|} r_{\|}) G_{1,0}({\bf r}, t) =
\exp(-\kappa q_{\|}^{2} t) \left[ 1 - \frac{2
f}{\epsilon (1+\epsilon/2) (4 \pi)^{1-\epsilon/2}} q_{\|}^{2}
t^{1+\epsilon/2} + \Or(f^{2} q_{\|}^{4}) \right]. \nonumber \\
\label{1loop}
\end{eqnarray}
To subtract the $\epsilon$-pole in this function we
introduce the renormalized diffusion constant $\kappa_{R} =
Z^{-1} \kappa$, where
\begin{equation}
Z = 1 - \frac{2 u}{\epsilon} \qquad
{\rm and} \qquad A_{\epsilon} f = \kappa_{R} u \mu^{\epsilon} .
\end{equation}
Here $u$ is a renormalized coupling coefficient, $A_{\epsilon} =
1/((4 \pi)^{1-\epsilon/2} (1+\epsilon/2))$ is a geometrical
factor and $\mu$ denotes an external momentum scale.
\begin{figure}[b]
\caption{The only superficially divergent Feynman diagram of the
field theory defined by the action
$\bar{S}$ [equation~(\protect\ref{act})] with $\lambda_{1} =
\lambda_{2} = n_{0} = 0$. The full line with an arrow represents
the propagator~(\protect\ref{prop}) and the wavy line corresponds
to the correlator $f \delta({\bf r}_{\bot} - {\bf
r}_{\bot}^{\prime})$ of the velocity field.}
\label{fig1}
\vspace*{5mm}
\epsfxsize=160pt
\hspace*{50mm}\epsfbox{graph1.eps}
\end{figure}
Since the bare Green function is independent of $\mu$ its
derivative at fixed bare parameters vanishes, i.e.
\begin{equation}
\mu \left.\frac{\d}{\d \mu}\right|_{0} G_{M,N}(\{{\bf r}, t\};
\kappa, f) = 0 .
\end{equation}
Expressing $\kappa$ and $f$ in terms of $\kappa_{R}$, $u$ and
$\mu$ we obtain for the renormalized Green function the
renormalization group equation (RGE)
\begin{equation}
\left[ \mu \partial_{\mu} - \zeta(u) \kappa_{\rm R}
\partial_{\kappa_{\rm R}} + \beta(u) \partial_{u} \right]
G_{M,N}(\{{\bf r}, t\}; \kappa_{R}, u; \mu) = 0 . \label{RGE}
\end{equation}
Due to superrenormalizability the Wilson functions
\begin{equation}
\zeta(u) = \mu \left.\frac{\d}{\d \mu}\right|_{0} \ln Z = 2 u
\qquad \beta(u) = \mu \left.\frac{\d}{\d \mu}\right|_{0} u =
u (-\epsilon + 2 u)
\end{equation}
are exact at every order of the perturbation theory.
Solving the RGE by characteristics one finds
\begin{equation}
G_{M,N}(\{{\bf r}, t\}; \kappa_{R},
u; \mu) = G_{M,N}(\{{\bf r}, t\};
Y(l) \kappa_{R}, \bar{u}(l); \mu l) \label{flow}
\end{equation}
where
\begin{equation}
\fl l \frac{\d}{\d l} \ln Y(l) = -\zeta(\bar{u}(l))
\qquad l \frac{\d}{\d l} \bar{u}(l) = \beta(\bar{u}(l))
\qquad Y(1) = 1 \qquad \bar{u}(1) = u. \label{29}
\end{equation}
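Because $\beta$ and $\zeta$ are exact, the characteristics~(\ref{29}) can be integrated in closed form; one can check that $\bar{u}(l) = \epsilon u/[2u + (\epsilon - 2u)\, l^{\epsilon}]$. The sketch below (arbitrary test values $\epsilon = 1$, $u = 0.2$) compares a direct integration of the flow with this solution and checks the approach to $u_{\star} = \epsilon/2$ and the scaling $Y(l) \sim l^{-\epsilon}$:

```python
import math

eps, u0 = 1.0, 0.2

def u_exact(l):
    # closed-form solution of l du/dl = u(-eps + 2u) with u(1) = u0
    return eps * u0 / (2 * u0 + (eps - 2 * u0) * l ** eps)

# Euler integration of the characteristics in s = ln(l), from s = 0 to s = -10
s, u, lnY, ds = 0.0, u0, 0.0, -1e-4
while s > -10.0:
    u += ds * u * (-eps + 2 * u)    # l d(u)/dl   = beta(u)
    lnY += ds * (-2 * u)            # l d(lnY)/dl = -zeta(u)
    s += ds

l = math.exp(s)
assert abs(u - u_exact(l)) < 1e-3   # numerics agree with the closed form
assert abs(u - eps / 2) < 1e-3      # fixed point u* = eps/2 is reached
# Y(l) ~ Y* l^{-eps}: for this flow lnY + eps*s saturates at ln(2 u0/eps)
assert abs(lnY + eps * s - math.log(2 * u0 / eps)) < 0.05
```

The saturation value of $\ln Y + \epsilon \ln l$ is the non-universal amplitude $\ln Y_{\star}$ for these initial data.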
For $l \rightarrow 0$ the scale dependent coupling constant
$\bar{u}(l)$ tends to the fixed point $u_{\star} = \epsilon/2$
and $Y(l) \simeq Y_{\star} l^{-\epsilon}$ (with a non-universal
scaling factor $Y_{\star}$). The asymptotic scaling behaviour
of the Green functions follows from equation~(\ref{flow}) and
dimensional analysis. The result reads (for $l \rightarrow 0$)
\begin{eqnarray}
\fl G_{M,N}(\{{\bf r}, t\}; \kappa_{R}, u; \mu) \simeq
\left(\mu^{d} (Y_{\star} \kappa_{R})^{-1/2} l^{d+\epsilon/2}
\right)^{M+N} \nonumber \\
\times G_{M, N}\left(\left\{\mu (Y_{\star} \kappa_{R})^{-1/2}
l^{1+\epsilon/2} r_{\|}, \mu l {\bf r}_{\bot}, \mu^{2} l^{2}
t\right\}; 1, u_{\star}; 1\right) . \label{green}
\end{eqnarray}
This equation shows that anomalous scaling dimensions of
the variables and fields are given by
\begin{equation}
\fl r_{\bot} \sim l^{-1} \qquad r_{\|} \sim l^{-(1+\epsilon/2)}
\qquad t \sim l^{-2} \qquad \tilde{\psi} \psi \sim l^{d +
\epsilon/2} \qquad \tilde{\phi} \phi \sim l^{d + \epsilon/2} .
\label{scal}
\end{equation}
The asymptotic form of the Green functions~(\ref{green}) depends
on the arbitrary momentum scale $\mu$ and the non-universal
amplitude $Y_{\star}$. A combination of these parameters
occuring in~(\ref{green}) can be expressed by a quantity
that is accessible to experiments or simulations~\cite{BGKPR}.
Consider the random walk of a particle starting at time $t=0$
at the point ${\bf r} = {\bf 0}$. The mean square
displacement of the particle in $r_{\|}$-direction averaged over
the realizations of the velocity field reads
\begin{equation}
X(t)^{2} = \int \d^{d}r G_{1,0}({\bf r}, t) r_{\|}^{2} .
\end{equation}
Using the one-loop result~(\ref{1loop}) and
equation~(\ref{green}) with $l=(\mu^{2} t)^{-1/2}$ it is easy to
show that (for $t \rightarrow \infty$)
\begin{equation}
X(t) \simeq \sigma t^{1/2 + \epsilon/4} \qquad {\rm with} \qquad
\sigma = \sqrt{2} (\kappa_{R} Y_{\star})^{1/2} \mu^{\epsilon/2}
\left[1 + \Or(\epsilon^{2}) \right] .
\end{equation}
In the following sections we will always use the parameter
$\sigma$ instead of $Y_{\star}$. Due to the strong anisotropy of
the velocity field it is also possible to calculate $X(t)$
directly from equation~(\ref{1loop}) without application of the
renormalization group. Since the $n^{\rm th}$ order contribution
to $G_{1, 0}$ is proportional to $q_{\|}^{2n}$ and $X(t)^{2}$ is
proportional to the derivative of $G_{1, 0}$ with respect to
$q_{\|}^{2}$ at ${\bf q} = {\bf 0}$ the one-loop result is
sufficient to obtain the function $X(t)$ exactly:
\begin{equation}
X(t)^{2} = 2 \kappa t + \frac{8 f}{\epsilon (2+\epsilon)
(4 \pi)^{1-\epsilon/2}} t^{1 + \epsilon/2} . \label{34}
\end{equation}
For $\epsilon = 1$ this result was first given by Bouchaud
\etal~\cite{BGKPR}.
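In $d = 2$ the disorder term of equation~(\ref{34}) has a direct probabilistic interpretation: averaging $(\int_{0}^{t} v(y_{t'})\,\d t')^{2}$ over the flow gives, up to constants, the self-intersection local time $\sum_{y} L_{y}(t)^{2}$ of the transverse walk, which grows like $t^{3/2}$. A Monte Carlo sketch (walk lengths and sample sizes are arbitrary choices for the sketch):

```python
import math, random
from collections import Counter
from itertools import accumulate

random.seed(7)

def mean_sq_local_time(n, walks=2000):
    """E[sum_y L_y(t)^2], the self-intersection local time of a 1d
    simple random walk; it controls the disorder part of X(t)^2 in d=2."""
    total = 0.0
    for _ in range(walks):
        path = accumulate(random.choices((-1, 1), k=n))
        occ = Counter(path)   # L_y = occupation time of site y
        total += sum(L * L for L in occ.values())
    return total / walks

s1 = mean_sq_local_time(400)
s2 = mean_sq_local_time(6400)
exponent = math.log(s2 / s1) / math.log(16)  # 6400/400 = 16
assert 1.3 < exponent < 1.7   # compatible with t^{(5-d)/2} = t^{3/2}
```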
For $d = 3$ ($\epsilon = 0$) an ultraviolet cut-off $\Lambda$
is required to regularize the singularity in equation~(\ref{34}).
In this case the mean square displacement is asymptotically
given by
\begin{equation}
X(t)^{2} \simeq \frac{f}{2 \pi} t \ln(\Lambda^{2} t) .
\label{d=3}
\end{equation}
\section{Effective action below three dimensions} \label{Seff}
The effect of the $\lambda_{2}$-vertex in equation~(\ref{act})
on the long-time behaviour of a reaction diffusion system
without velocity field can be described by interactions located
at the `time surface' $t=0$. In reference~\cite{LC1} it was
shown by dimensional analysis that an interaction of the type
\begin{equation}
\int \d^{d}r \frac{1}{m! n!} \Delta_{m,n} \left.\tilde{\psi}^{m}
\tilde{\phi}^{n}\right|_{t=0}
\label{ini}
\end{equation}
is relevant below $d_{m,n} = 2(m+n)/(m+n-1)$ dimensions.
($\Delta_{1,0}$ and $\Delta_{0,1}$ are relevant in any
dimension.) The critical dimension of $\Delta_{m,n}$ is changed
if the motion of the particles is not purely diffusive, e.g. if
the particles are subject to a linear shear flow~\cite{HB}.
In order to see how the values of $d_{m,n}$ are changed by the
random velocity field consider the expansion of $\langle \phi(t)
\rangle$ in powers of $\lambda_{1}$, $\lambda_{2}$ and $f$.
For $d < d_{f} = 3$ ($d > d_{f}$) the contributions to this
expansion at a given order in $\lambda_{1}$ and $\lambda_{2}$
become more important (less important) for large $t$ if we
increase the order in $f$. Therefore the long-time behaviour of
$\langle \phi(t) \rangle$ for $d > 3$ is dominated by the
diffusive motion of the particles and we may set $f=0$. Below
three dimensions we have to retain all orders in $f$ to determine
the asymptotic decay of the density. In this case
the many-particle response functions~(\ref{GMN}) play the same
role as the Gaussian propagator~(\ref{prop}) in the case $f=0$.
It is clear that it is not possible to sum the power series
in $f$ exactly, but the renormalization group allows a partial
summation and gives the correct scaling behaviour.
We first determine the scaling dimension of $\lambda_{2}$ for a
non-vanishing velocity field below three dimensions. Using the
anomalous scaling dimensions given in~(\ref{scal}) one finds
$\lambda_{2} \sim l^{2-d-\epsilon/2} = l^{(1-d)/2}$, i.e.
$\lambda_{2}$ is irrelevant for $d > 1$. In the same way we
obtain
\begin{equation}
\lambda_{1} \psi \sim l^{2} \qquad \lambda_{1} \phi \sim l^{2}
\qquad \lambda_{1}^{-1} \tilde{\psi} \sim l^{d-2+\epsilon/2}
\qquad \lambda_{1}^{-1} \tilde{\phi} \sim l^{d-2+\epsilon/2}
\label{scall}
\end{equation}
and the dimension of the initial term
\begin{equation}
\lambda_{1}^{m+n} \Delta_{m,n} \sim l^{(1-m-n)(d+\epsilon/2)
+ 2 (m+n)} .
\end{equation}
In $d=3$ the vertices with $m+n < 3$ are relevant. For $m+n \geq
3$ the initial coupling $\Delta_{m,n}$ is relevant below
$d_{m,n} = (m+n+3)/(m+n-1)$ dimensions. However, we will show
below that the only initial couplings which are generated by the
coarse graining procedure employed in reference~\cite{LC1} are
$\Delta_{2, 0}$ and $\Delta_{0, 2}$ (in addition to $\Delta_{0,
1} = \sqrt{2} n_{0}$), and these couplings are given by
$\Delta_{2, 0} = -\Delta_{0, 2} = n_{0}$ at every order in
$\lambda_{2}$.
Since the coupling coefficient $\lambda_{2}$ is irrelevant
for $d>1$ one has to introduce a cut-off wave number $\Lambda$ to
avoid ultraviolet-divergencies in the perturbation series. The
cut-off is required to regularize the diagrams shown in
figure~\ref{fig2} which contribute to an effective coupling
constant $\lambda_{2,{\rm eff}}$. The Green functions $G_{M,N}$
considered in the previous section occur in these diagrams as
subintegrals (represented by rectangles). As in the previous
section we calculate these subintegrals without cut-off since
they are ultraviolet-convergent for $d<3$. After summation of
the subdiagrams (using the renormalization group) the cut-off
is introduced to perform the remaining integrations. If the
cut-off wave number is not too large [an estimate based on
equation~(\ref{34}) with $t \sim \Lambda^{-2}$ gives
$\Lambda^{\epsilon} \ll f/(\kappa \epsilon)$] the scaling
behaviour~(\ref{green}) still holds on the momentum scale of
$\Lambda$. In this case the effective coupling has the scaling
form
$\lambda_{2,{\rm eff}} = \lambda_{2} F_{\lambda}(\lambda_{2}
\Lambda^{(d-1)/2}/\sigma)$.
In the same way coarse graining leads to an effective coupling
coefficient $\lambda_{1, {\rm eff}} = \lambda_{1}
F_{\lambda}(\lambda_{2} \Lambda^{(d-1)/2}/\sigma)$ (with the same
scaling function $F_{\lambda}$).
Evaluating the diagrams depicted in figure~\ref{fig2} by
renormalization group improved perturbation theory it is possible
to compute the coefficients of the Taylor expansion of
$F_{\lambda}$ in an $\epsilon$-expansion. [Due to the scaling
behaviour~(\ref{green}) of the response functions the diagrams
are (infrared-) finite for $d > 1$ in the limit of zero external
momentum and frequency.] However, this is not very interesting
since the result strongly depends on the type of the cut-off.
\begin{figure}[b]
\caption{Contributions to the effective coupling coefficient
$\lambda_{2, {\rm eff}}$ to third order in $\lambda_{2}$.
The hatched rectangles represent the Green functions $G_{M,N}$
[equation~(\protect\ref{GMN})], where $M$ ($N$) is the number
of broken lines (full lines) running into the rectangle.
The direction of each line is indicated by an arrow and
reflects causality, i.e. each line points into the direction
of larger time arguments.}
\epsfxsize=360pt
\vspace*{5mm}\hspace*{20mm}\epsfbox{graph2.eps}
\label{fig2}
\end{figure}
The initial coupling $\Delta_{2, 0}$ is more interesting since
it is known to determine (at least for systems without random
velocity field) the amplitude of $\langle \phi(t)
\rangle$~\cite{LC1}. For the calculation of $\Delta_{2, 0}$ it is
convenient to shift the field $\phi$ by the mean field density
\begin{equation}
\Phi_{\rm mf}(t) = \Phi_{0} \left( 1 + \lambda_{1} \Phi_{0} t
\right)^{-1} \label{phimf}
\end{equation}
with the initial value $\Phi_{0} = \sqrt{2} n_{0}$. The shifted
action $\bar{S}[\tilde{\psi}, \psi; \tilde{\phi}, \Phi(t) + \phi]$
consists of a Gaussian part
\begin{eqnarray}
\fl \bar{S}_{G}[\tilde{\psi}, \psi; \tilde{\phi}, \phi; \Phi(t)]
= \int \d t \int \d^{d}r \left[ \tilde{\psi} (\partial_{t}
\psi - \triangle_{\bot} \psi - \kappa \partial_{\|}^{2} \psi)
\right. \\
\left. + \tilde{\phi} \left(\partial_{t} \phi - \triangle_{\bot}
\phi - \kappa \partial_{\|}^{2} \phi + 2\lambda_{1} \Phi_{\rm
mf}(t) \phi\right)\right] \nonumber
\end{eqnarray}
and higher order terms summarized in
\begin{eqnarray}
\fl \bar{S}_{\rm int}[\tilde{\psi}, \psi; \tilde{\phi}, \phi;
\Phi(t)] = \int \d t \int \d^{d}r \left[ \lambda_{1} \tilde{\phi}
(\phi^{2}-\psi^{2}) + \lambda_{2} (2 \Phi_{\rm mf}(t) \phi +
\Phi_{\rm mf}(t)^{2})
(\tilde{\phi}^{2} - \tilde{\psi}^{2}) \right. \label{sint} \\
\left. + \lambda_{2} (\tilde{\phi}^{2} - \tilde{\psi}^{2}) (\phi^{2}
- \psi^{2}) \right] - \frac{f}{2} \int \d^{d-1}y \left( \int
\d t \int \d x (\tilde{\psi} \partial_{\|}\psi + \tilde{\phi}
\partial_{\|}\phi) \right)^{2} . \nonumber
\end{eqnarray}
Since $\bar{S}_{G}$ is independent of $f$ the Gaussian (tree)
propagators are the same as in reference~\cite{LC1}, i.e.
\begin{eqnarray}
\fl G_{\psi}({\bf q}; t, t^{\prime}) = \int \d^{d}r \e^{-\i {\bf
q} {\bf r}} \langle \psi({\bf r}, t) \tilde{\psi}({\bf 0},
t^{\prime}) \rangle_{\rm G} = G({\bf q}; t-t^{\prime})\\
\fl G_{\phi}({\bf q}; t, t^{\prime}) = \int \d^{d}r \e^{-\i {\bf
q} {\bf r}} \langle \phi({\bf r}, t) \tilde{\phi}({\bf 0},
t^{\prime}) \rangle_{\rm G} = \left( \frac{1 + \lambda_{1}
\Phi_{0} t^{\prime}}{1 + \lambda_{1} \Phi_{0} t} \right)^{2}
G({\bf q}; t-t^{\prime}) \label{gphi}
\end{eqnarray}
where $G({\bf q}; t)$ is the propagator defined in
equation~(\ref{prop}). Corrections to the mean field density
can now be calculated in a diagrammatic expansion around the
Gaussian action $\bar{S}_{G}$ treating the interactions in
$\bar{S}_{\rm int}$ as perturbations.
\begin{figure}[b]
\caption{General form of the contributions to $\Delta_{2, 0}$.
The full and broken bold lines represent the
propagators~$G_{\phi}$ and~$G_{\psi}$, respectively, and the
zigzag line corresponds to the mean field density $\Phi_{\rm
mf}(t)$. At lowest order in $\lambda_{2}$ the bubbles in the
graphs~(b) and~(c) contain only the graph~(a) as a subdiagram.
The vertex proportional to $\Phi_{\rm mf}(t)$ in the
diagrams~(c) and~(e) is generated by the shift of the field
$\phi$ [see equation~(\protect\ref{sint})].}
\epsfxsize=300pt
\vspace*{5mm}\hspace*{25mm}\epsfbox{graph3.eps}
\label{fig3}
\end{figure}
Following the lines taken in reference~\cite{LC1} we replace
the irrelevant parameter $\lambda_{2}$ by effective initial
couplings of the form~(\ref{ini}). The diagrams contributing to
$\Delta_{m,n}$ have $(m+n)$ external legs each of which is
associated with one of the vertices proportional to
$\lambda_{2}$. Figure~\ref{fig3} shows the general form of the
contributions to $\Delta_{2,0}$.
Since the function $\Phi_{\rm mf}(t)^{2}$ is damped on
time scales which are large compared to $(\lambda_{1}
\Phi_{0})^{-1}$ its effect on the long-time behaviour of the
field theory can be described by a $\delta$-function. We may
therefore write
\begin{equation}
\lambda_{2} \Phi_{\rm mf}(t)^{2} \tilde{\psi}^{2} \simeq
\lambda_{2} \left[ \int_{0}^{\infty} \d t^{\prime} \Phi_{\rm
mf}(t^{\prime})^{2} \right] \delta(t) \tilde{\psi}^{2} =
\frac{\lambda_{2}}{\lambda_{1}} \Phi_{0} \delta(t) \tilde{\psi}^{2}
\end{equation}
and obtain $\Delta_{2, 0} = 2 \lambda_{2} \Phi_{0}/ \lambda_{1}
+ \Or(\lambda_{2}^{2})$. In the same way higher order
contributions to $\Delta_{2, 0}$ can be calculated by integrating
the diagrams~(b--e) in figure~\ref{fig3} with respect to the
time argument associated with the leftmost vertex. However, the
diagrams~(b) and~(c) cancel, and the same is true for the
diagrams~(d) and~(e). In order to see this we write the
contribution of the graph~\ref{fig3}(b) in the form
\begin{equation}
{\rm graph~\protect\ref{fig3} (b)} = \int_{0}^{\infty}
\d t (-\lambda_{2}) f(t)
\end{equation}
where $t$ is the time argument carried by the leftmost vertex,
and the function $f(t)$ represents the subdiagrams in the
hatched bubble and the loop integration. The same function $f(t)$
occurs in the contribution~\ref{fig3}(c) which is given by
\begin{eqnarray}
{\rm graph~\protect\ref{fig3} (c)} &= \int_{0}^{\infty} \d t
2\lambda_{2} \Phi_{\rm mf}(t) \int_{0}^{t} \d t^{\prime}
G_{\phi}({\bf q} = {\bf 0}; t, t^{\prime}) \lambda_{1}
f(t^{\prime}) \nonumber \\
&= 2 \lambda_{1} \lambda_{2} \Phi_{0} \int_{0}^{\infty} \d
t^{\prime} \int_{t^{\prime}}^{\infty} \d t \frac{(1+\lambda_{1}
\Phi_{0} t^{\prime})^{2}}{(1 + \lambda_{1} \Phi_{0} t)^{3}}
f(t^{\prime}) \\
&= \int_{0}^{\infty} \d t^{\prime} \lambda_{2} f(t^{\prime}) .
\nonumber
\end{eqnarray}
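In the last step we used the elementary time integral
\begin{equation}
\int_{t^{\prime}}^{\infty} \d t \, \frac{1}{(1 + \lambda_{1}
\Phi_{0} t)^{3}} = \frac{1}{2 \lambda_{1} \Phi_{0} (1 +
\lambda_{1} \Phi_{0} t^{\prime})^{2}}
\end{equation}
so that the factors $(1 + \lambda_{1} \Phi_{0} t^{\prime})^{2}$
cancel and the prefactor $2 \lambda_{1} \lambda_{2} \Phi_{0}$
reduces to $\lambda_{2}$.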
Therefore the contributions~(b) and~(c) cancel. Analogously
one can show that the sum of the diagrams~(d) and~(e)
vanishes. Since the same line of arguments applies if the
hatched bubbles in figure~\ref{fig3} have additional external
legs\footnote{Due to causality each diagram which contributes
to the initial couplings contains at least one pair of
$\tilde{\psi}$- or $\tilde{\phi}$- legs coming from a single
$\lambda_{2}$ vertex. This is the vertex explicitly shown in
figure~\protect\ref{fig3}.}, all effective interactions
$\Delta_{m, n}$ with $m+n > 2$ are zero, and the only
non-vanishing initial couplings are given by $\Delta_{2, 0} =
-\Delta_{0, 2} = 2 \lambda_{2} \Phi_{0}/ \lambda_{1} = n_{0}$.
In the appendix of reference~\cite{LC1} a non-zero correction to
$\Delta_{2, 0}$ of the order $\Or(\lambda_{2}^{2} n_{0}^{d/2})$
was obtained because the diagrams of the form~(c) and~(e) were
incorrectly thought to be accounted for by taking $\lambda_{1}$
to $\lambda_{\rm eff}$.
The cancellation of diagrams suggests that there should be a
simpler way to calculate the initial coupling coefficients. In
fact, the effective action can be derived without using Feynman
diagrams. This formal derivation exploits the equation of
motion
\begin{equation}
\int D[\tilde{\psi}, \psi; \tilde{\phi}, \phi]
\frac{\delta}{\delta \tilde{\psi}({\bf r}, t)}
\exp(-S[\tilde{\psi}, \psi; \tilde{\phi}, \phi]) = 0
\end{equation}
and the corresponding equation with a functional derivative
with respect to $\tilde{\phi}$. The explicit form of the
equations of motion (which are valid after insertion into
averages) is given by
\begin{equation}
\fl \partial_{t} \psi - \triangle_{\bot} \psi -\kappa
\partial_{\|}^{2} \psi - 2 \lambda_{2} \tilde{\psi} (\phi^{2} -
\psi^{2})
- f (\partial_{\|} \psi) \int \d t^{\prime} \int \d x^{\prime}
\left( \tilde{\psi} \partial_{\|} \psi + \tilde{\phi}
\partial_{\|} \phi \right)
= 0 \label{eqmo1}
\end{equation}
and
\begin{eqnarray}
\fl \partial_{t} \phi - \triangle_{\bot} \phi -\kappa
\partial_{\|}^{2} \phi + \lambda_{1} (\phi^{2} - \psi^{2}) + 2
\lambda_{2} \tilde{\phi} (\phi^{2} - \psi^{2}) -
\Phi_{0} \delta(t) \nonumber \\
- f (\partial_{\|} \phi) \int \d t^{\prime} \int \d x^{\prime}
\left( \tilde{\psi} \partial_{\|} \psi + \tilde{\phi}
\partial_{\|} \phi \right)
= 0 . \label{eqmo2}
\end{eqnarray}
Averaging~(\ref{eqmo2}) one finds
\begin{equation}
\partial_{t} \Phi(t) + \lambda_{1} \langle \phi({\bf r}, t)^{2}
- \psi({\bf r}, t)^{2} \rangle = \Phi_{0} \delta(t) \label{56}
\end{equation}
where $\Phi(t) = \langle \phi({\bf r}, t) \rangle$ is the exact
density. (The equal time averages $\langle \tilde{\phi} \psi^{2}
\rangle$ and $\langle \tilde{\phi} \phi^{2} \rangle$ are zero due
to the prepoint time discretization used in the derivation of the
action.) Integration of equation~(\ref{56}) over $t$ yields
\begin{equation}
\lambda_{1} \int_{0}^{\infty} \d t \langle \phi({\bf r}, t)^{2}
- \psi({\bf r}, t)^{2} \rangle = \Phi_{0} .
\end{equation}
The initial fluctuations generated by the irrelevant
$\lambda_{2}$ coupling can now be obtained by the replacement
\begin{equation}
2 \lambda_{2} (\phi^{2} - \psi^{2}) \longrightarrow
2 \lambda_{2} \delta(t) \int_{0}^{\infty} \d t^{\prime}
\langle \phi({\bf r}, t^{\prime})^{2} - \psi({\bf r},
t^{\prime})^{2} \rangle = n_{0} \delta(t)
\end{equation}
in the equations of motion~(\ref{eqmo1}) and~(\ref{eqmo2}).
The effective equations of motion derived in this way are
equivalent to the effective action with the initial vertices
$\Delta_{m, n}$ calculated above.
\section{Asymptotic decay of the density} \label{decay}
In order to calculate the long-time behaviour of $\langle\phi(t)
\rangle$ we start from the effective action $\bar{S}_{\rm eff} =
\bar{S}_{\rm reac} + S_{\rm ini}$, where
\begin{eqnarray}
\fl \bar{S}_{\rm reac}[\tilde{\psi}, \psi; \tilde{\phi}, \phi] =
\int \d t \int \d^{d} r \left[\tilde{\psi} \left(\partial_{t} \psi
- \triangle_{\bot} \psi -\kappa \partial_{\|}^{2} \psi \right)
+ \tilde{\phi} \left(\partial_{t} \phi - \triangle_{\bot}
\phi -\kappa \partial_{\|}^{2} \phi \right) \right. \nonumber \\
\left. + \lambda_{1} \tilde{\phi} (\phi^{2}-\psi^{2}) \right]
- \frac{f}{2} \int \d^{d-1}y \left( \int
\d t \int \d x (\tilde{\psi} \partial_{\|}\psi + \tilde{\phi}
\partial_{\|}\phi) \right)^{2}
\end{eqnarray}
and
\begin{equation}
S_{\rm ini}[\tilde{\psi}, \tilde{\phi}] = -\int \d^{d}r
\left[ \Phi_{0} \tilde{\phi} + \frac{1}{2} n_{0} \left(
\tilde{\psi}^{2} - \tilde{\phi}^{2} \right) \right]_{t=0} .
\end{equation}
At this point it is convenient to reintroduce the
Gaussian velocity field $v({\bf y})$ (with $[v({\bf y})]=0$ and
$[v({\bf y}) v({\bf y}^{\prime})] = f \delta({\bf y} - {\bf
y}^{\prime})$ as in section~\ref{model}) to replace
$\bar{S}_{\rm reac}$ by a $v$-dependent action which is
local with respect to time. This new action is equivalent to the
equations
\begin{eqnarray}
\partial_{t} \psi - \triangle_{\bot} \psi - \kappa
\partial_{\|}^{2} \psi + v({\bf y}) \partial_{\|} \psi = 0
\label{eqpsi} \\
\partial_{t} \phi - \triangle_{\bot} \phi - \kappa
\partial_{\|}^{2} \phi + v({\bf y}) \partial_{\|} \phi
+ \lambda (\phi^{2} - \psi^{2}) = 0 \label{eqphi}
\end{eqnarray}
where $\psi$ and $\phi$ are now {\em real} fields.
The solutions $\psi$, $\phi$ have to be averaged with respect to
both the initial conditions and the realizations of the velocity
field. Since the initial state defined by $S_{\rm ini}$ and the
distribution of $v({\bf y})$ are homogeneous the averages of the
spatial derivatives in~(\ref{eqpsi},\ref{eqphi})
vanish.
Averaging equation~(\ref{eqphi}) we find
\begin{equation}
\fl \dot{\Phi}(t) + \lambda \left( \Phi(t)^{2} - \langle
\psi({\bf r}, t)^{2} \rangle + C(t) \right) = 0 \qquad {\rm
where} \qquad C(t) = \left\langle (\phi({\bf r}, t)-\Phi(t))^{2}
\right\rangle . \label{Phi(t)}
\end{equation}
To obtain the asymptotic solution of this equation one
first has to calculate $\langle \psi({\bf r}, t)^{2} \rangle$ for
large $t$. Later in this section we will also need higher moments
$\langle \psi^{2n} \rangle$ with $n= 1, 2\ldots$. Since in
equation~(\ref{eqpsi}) the dynamics of $\psi$ is decoupled from
the reaction process the moments of $\psi$ may be calculated with
the Green functions~(\ref{GMN}). In this way we get
\begin{eqnarray}
\fl \langle \psi({\bf r}, t)^{2n} \rangle = \frac{1}{2^{n} n!}
n_{0}^{n} \int \d^{d}r_{1}^{\prime} \ldots \int
\d^{d}r_{n}^{\prime} \left.\left\langle \psi({\bf r}, t)^{2 n}
\tilde{\psi}({\bf r}_{1}^{\prime}, 0)^{2} \ldots \tilde{\psi}({\bf
r}_{n}^{\prime}, 0)^{2} \right\rangle \right|_{\lambda=0}
\label{58} \\
\lo \simeq {\rm const} \times \left(\frac{n_{0}}{\sigma}
t^{-(d+3)/4} \right)^{n} \nonumber
\end{eqnarray}
where the scaling form~(\ref{green}) has been used.
[A simple power counting as in reference~\cite{JH} shows that
insertions of composite fields such as $\psi^{2n}$ in
equation~(\ref{58}) require no additional renormalizations.]
To obtain the amplitude of the second moment $\langle \psi^{2}
\rangle$ at first order in $\epsilon$ one has to calculate the
diagrams shown in figure~\ref{fig4}. The result reads in terms
of the unrenormalized coupling constants
\begin{equation}
\fl \langle \psi({\bf r}, t)^{2} \rangle = n_{0} \kappa^{-1/2}
(8 \pi t)^{-d/2} \left[ 1 - \frac{f t^{\epsilon/2}}{(4 \pi)^{1
-\epsilon/2} \kappa} \left(\frac{1}{\epsilon} - \frac{3}{2} \ln 2
+ \Or(\epsilon) \right) + \Or(f^{2}) \right] .
\end{equation}
Upon application of the renormalization group this becomes (for
large $t$)
\begin{equation}
\langle \psi({\bf r}, t)^{2} \rangle \simeq n_{0}
\frac{\sqrt{2}}{(8 \pi)^{d/2} \sigma} t^{-(d+3)/4} \left[
1 + \frac{\epsilon}{4} \left(3 \ln 2 - 1 \right) +
\Or(\epsilon^{2}) \right] .
\end{equation}
\begin{figure}[b]
\caption{Contributions to $\langle \psi^{2} \rangle$ at first
order in $\epsilon$. The full and empty circles represent the
initial vertex $\Delta_{2, 0}$ and the field $\psi^{2}$,
respectively. A solid line with an arrow corresponds to the
Gaussian propagator~(\protect\ref{prop}).}
\epsfxsize=300pt
\vspace*{5mm}\hspace*{25mm}\epsfbox{graph4.eps}
\label{fig4}
\end{figure}
We are now in a position to determine the asymptotic decay of
$\Phi(t)$ in a similar way as in the case $v({\bf y}) \equiv
0$~\cite{LC1}:
We first solve equation~(\ref{Phi(t)}) for $C(t)=0$, i.e. we are
looking for a function $\Phi_{\rm u}(t)$ satisfying
\begin{equation}
\dot{\Phi}_{\rm u}(t) + \lambda \left( \Phi_{\rm u}(t)^{2} -
\langle \psi({\bf r}, t)^{2} \rangle \right) = 0.
\end{equation}
Since $\langle \psi^{2} \rangle = {\rm cst} \times t^{-(d+3)/4}$
for $t \rightarrow \infty$ (with $(d+3)/4 < 2$) the asymptotic
solution of this equation is given by $\Phi_{\rm u}(t) \simeq
\sqrt{{\rm cst}} \times t^{-(d+3)/8}$. Using the positivity of
$C(t)$ one can show~\cite{LC1} that $\Phi_{\rm u}(t)$ is an upper
bound for $\Phi(t)$.
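[The asymptotic form of $\Phi_{\rm u}(t)$ can be checked by
inserting the ansatz $\Phi_{\rm u}(t) \sim t^{-(d+3)/8}$: the
derivative term $\dot{\Phi}_{\rm u}(t) \sim t^{-(d+3)/8 - 1}$
decays faster than $\lambda \Phi_{\rm u}(t)^{2} \sim t^{-(d+3)/4}$
whenever $(d+3)/4 < 2$, so that asymptotically the quadratic term
alone balances $\lambda \langle \psi({\bf r}, t)^{2} \rangle$.]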
To derive a lower bound $\Phi_{\rm l}(t) \leq \Phi(t)$
we use the fact that $a = (\phi + \psi)/\sqrt{2}$ and $b = (\phi
- \psi)/\sqrt{2}$ are real densities which satisfy the equations
of motion
\begin{equation}
\eqalign{
\partial_{t} a - \triangle_{\bot} a - \kappa \partial_{\|}^{2} a
+ v({\bf y}) \partial_{\|} a + \sqrt{2} \lambda a b =& 0 \\
\partial_{t} b - \triangle_{\bot} b - \kappa \partial_{\|}^{2} b
+ v({\bf y}) \partial_{\|} b + \sqrt{2} \lambda a b =& 0 .
}
\end{equation}
For non-negative initial values the densities remain non-negative
for all $t$. This implies $\phi^{2}-\psi^{2} = 2 a b \geq 0$
and leads to the lower bound $\Phi_{\rm l}(t) = \langle |\psi|
\rangle$.
Although the initial distribution of $\psi$ is Gaussian, the
velocity field generates for $t>0$ higher cumulants in the
distribution ${\cal P}_{t}$ of $\psi({\bf r}, t)$. We perform a
cumulant expansion for this distribution to calculate $\langle
|\psi| \rangle$.
Since higher cumulants are generated by the disorder vertex $f$
(or its renormalized counterpart $v$ with fixed point value
$v_{\star} = \epsilon/2$) this amounts to an $\epsilon$-expansion
for ${\cal P}_{t}(\psi)$.
The Fourier transform $\tilde{{\cal P}}_{t}$ of the distribution
with respect to $\psi$ can be written as
\begin{equation}
\tilde{{\cal P}}_{t}(h) = \int_{-\infty}^{\infty} \d \psi
{\cal P}_{t}(\psi) \e^{\i h \psi} = \exp\left(
\sum_{n=1}^{\infty} \frac{c_{2n}(t)}{(2n)!} (-1)^{n} h^{2n}
\right) \label{four}
\end{equation}
where $c_{2n}(t)$ denotes the connected part (cumulant) of the
$(2n)^{\rm th}$ moment of $\psi$, e.g.
\begin{equation}
c_{2}(t) = \langle \psi({\bf r}, t)^{2} \rangle \qquad
c_{4}(t) = \langle \psi({\bf r}, t)^{4} \rangle - 3 c_{2}(t)^{2}
\qquad {\rm etc.}
\end{equation}
For a Gaussian distribution all $c_{m}(t)$ with $m > 2$ vanish.
Equation~(\ref{four}) can be used to calculate $\langle |\psi|
\rangle$ in an expansion in the higher cumulants $c_{2n}(t)$, $n = 2, 3, \ldots$:
\begin{eqnarray}
\fl \langle |\psi({\bf r}, t)| \rangle = \int_{-\infty}^{\infty}
\d \psi |\psi| \int_{-\infty}^{\infty} \frac{\d h}{2 \pi}
\tilde{{\cal P}}_{t}(h) \e^{-\i h \psi} \nonumber \\
\lo = \sqrt{\frac{2 c_{2}(t)}{\pi}} \left[ 1 - \frac{1}{4!}
\frac{c_{4}(t)}{c_{2}(t)^{2}} + \Or\left(c_{4}(t)^{2}, c_{6}(t),
\ldots \right) \right] .
\end{eqnarray}
Due to the simple scaling $c_{2n}(t) \sim t^{-n (d+3)/4}$ of the
cumulants for large $t$ the expression in the square brackets tends
to a constant. The first correction to the Gaussian
distribution comes from the four-point cumulant $c_{4}(t)$.
The contributions to
$c_{4}(t)$ at second order in $f$ are shown in
figure~\ref{fig5}~(b-d). The first order (figure~\ref{fig5}~(a))
vanishes due to the $x$-derivative associated with the vertex $f$.
This means that upon application of the renormalization group the
amplitude of $c_{4}(t)$ at the fixed point is of the order
$\epsilon^{2}$. Analogously any cumulant $c_{2 n}(t)$ with $n
\geq 2$ is of the order $\epsilon^{n}$ since a non-vanishing
contribution to this function requires at least $n$ vertices
proportional to $f$. Therefore we only need the Gaussian part of
the distribution to calculate $\langle |\psi| \rangle $ at first
order in $\epsilon$. The result is
\begin{equation}
\Phi_{\rm l}(t) = \left( \frac{2}{\pi} \frac{\sqrt{2}
n_{0}}{(8 \pi)^{d/2} \sigma} \right)^{1/2} t^{-(d+3)/8}
\left[ 1 + \frac{\epsilon}{8} \left( 3 \ln 2 - 1 \right)
+ \Or(\epsilon^{2}) \right] . \label{phil}
\end{equation}
Using the inequalities
\begin{equation}
\langle \phi - |\psi| \rangle^{2} \leq \langle (\phi -
|\psi|)^{2} \rangle \leq \langle \phi^{2} \rangle - \langle
\psi^{2} \rangle = \frac{1}{\lambda} \left( -\dot{\Phi}(t)
\right)
\end{equation}
in conjunction with the upper bound $\Phi_{\rm u}(t)$ as
in~\cite{LC1} one can show that the lower bound~(\ref{phil})
gives the exact long-time behaviour of the density, i.e.
$\Phi(t) \simeq \Phi_{\rm l}(t)$ for $t \rightarrow \infty$.
In three dimensions the random velocity field gives a
logarithmic contribution to the density $\Phi(t)$. Using
equation~(\ref{d=3}) one obtains
\begin{equation}
\Phi(t) \simeq \frac{n_{0}^{1/2}}{2 \pi} t^{-3/4} \left(2 f
\ln(\Lambda^{2} t) \right)^{-1/4} .
\end{equation}
\begin{figure}[b]
\caption{Non-zero contributions to $c_{4}(t)$ at second
order in $f$~(b-d). The first order diagram~(a) vanishes.}
\epsfxsize=350pt
\vspace*{5mm}\hspace*{17mm}\epsfbox{graph5.eps}
\label{fig5}
\end{figure}
\section{Summary and Discussion} \label{concl}
In this paper the effects of a quenched random shear flow
on the long-time behaviour of the A+B-annihilation reaction
have been studied. It is well-known that in dimensions $d<3$
a random velocity field gives rise to enhanced diffusion and
should, therefore, accelerate the reaction process. Similar
to the case of purely diffusive particle motion the system
segregates after a short time into regions of purely A or B
particles. After this initial stage the density decay is
governed by an effective Gaussian distribution for the initial
density fluctuations.
While a simple model based on anisotropic L\'evy walks
already yields the correct exponent for the density decay one
has to take many-particle correlations generated by the
quenched randomness into account in order to calculate the
amplitude. We have shown how this is possible in the framework
of a systematic expansion in $\epsilon = 3-d$ and computed
the first order term in this expansion.
In the present paper it was assumed that the diffusion
constants $D_{A}$, $D_{B}$ of A and B particles are equal
and that the hopping rate of both species depends in the same
way on $v({\bf y})$. The more general case $D_{A} \neq D_{B}$
with three different disorder couplings $f_{A}$, $f_{B}$,
$f_{AB}$ (instead of $f$) can be treated in a similar way at
least as long as both species are mobile. For unequal
diffusion constants the fields $\phi$ and $\psi$ are coupled
already in the Gaussian part of the action. Using similar
arguments as in reference~\cite{LC1} it can be shown that
this effect changes only the amplitude of $\Phi(t)$.
It would be interesting to check the results of this work
by simulations. In order to observe the asymptotic
long-time behaviour of a disordered system one has to
perform configurational averages unless the system size is
very large. In the case of a random shear flow the
sample-to-sample fluctuations of the density depend on
the linear size $L_{y}$ in the direction perpendicular to
the velocity field. Since a diffusing particle covers a
distance $l_{D}(t) \sim (D t)^{1/2}$ during the time $t$
the density measured at time $t$ represents approximately
$N \sim L_{y}/l_{D}(t)$ independent configurations of
the flow. Therefore, lack of self-averaging in a finite
sample leads to a relative error of the order
$N^{-1/2} \sim (D t)^{1/4} L_{y}^{-1/2}$. In a pure
system (without shear flow) the asymptotic power law
can be observed at times of the order $D t \sim
10^{5}$~\cite{TW}. The above estimate shows that a
random system with $L_{y} > 1.2 \cdot 10^{5}$ (or
an equivalent number of smaller systems) is required
to reduce the sample-to-sample fluctuations for
$D t \sim 10^{5}$ to less than $5\%$.
\ackn
The author thanks J. Cardy, M. Howard and B. P. Lee for useful discussions.
This work has been supported by grant Oe199/1-1 of the
Deutsche Forschungsgemeinschaft.
\section*{References}
In science, when presented with a highly complex system to understand, a good first step is to identify and model order present in a subset of the data to develop hypotheses to investigate further. Finding hidden pockets of order is especially difficult in large datasets, and as a result finding these pockets is a task ripe for machine learning. However, machine learning methods typically attempt to model all of the data or most of the data in the presence of outliers. Modeling the majority of the data is inappropriate for hypothesis generation as we are looking for order in small subsets of the data, which entails that most of the data should be considered as outliers by the models we are trying to fit. Solving the problem of automatic hypothesis generation from large, noisy datasets has the potential to accelerate the pace of discovery in the natural and social sciences by automating the formulation of research questions from order hidden deep in the data \cite{WeinsteinInPreparation}.
\textcite{Duvenaud2013StructureSearch,Lloyd2014AutomaticModels} introduce a method for the automatic statistical analysis of time series using compositional Gaussian process kernel search. A time series is modeled by a Gaussian process model and the goal is to find a descriptive and expressive kernel. This approach is capable of automatically discovering underlying structure in a time series such as change points, trends, local and global behaviors, periodicities, and variations at multiple resolutions. Compositional kernel search builds its explanation of the data starting from simple, interpretable concepts (periodicity, linearity, noise, variance, change...) and combining these concepts iteratively to better model the data. The compositional nature of the approach allows for the automatic description of the discovered data characteristics in human-friendly natural language. For example, the product of squared exponential and periodic kernels can be interpreted as “locally periodic” structure, and the addition of squared exponential and periodic kernels can be interpreted as “periodic with noise.” \textcite{Yunseong2016AutomaticSeries} introduces two extensions to the work of \textcite{Duvenaud2013StructureSearch,Lloyd2014AutomaticModels} which allow for the modeling of multiple time series using compositional kernel search. At a high level, the authors of \textcite{Yunseong2016AutomaticSeries} achieve this by assuming either that all the time series share a same kernel or that they are all modeled by kernels that share a common component that should be interpretable, while allowing the remaining unexplained structure to be modeled in a non-interpretable manner.
Computational tractability is the primary challenge to extending the techniques from \textcite{Duvenaud2013StructureSearch,Lloyd2014AutomaticModels, Yunseong2016AutomaticSeries} to find structure in subsets of the time series as searching through all the possible structure sharing combinations would result in an explosion in complexity. We propose a computationally simple extension to the techniques of \textcite{Yunseong2016AutomaticSeries} to discover interpretable structure in subsets of time series data. In addition to advancing the automatic statistician, our research introduces a new interpretable kernel embedding for time series with applications that include clustering and anomaly detection based on the structural similarities and differences among time series in a dataset.
\section{Related Work}
\subsection{Gaussian Processes}
The Gaussian process (GP) is the generalization of the Gaussian probability distribution to functions. More specifically, a Gaussian process is a collection of random variables, any finite number of which have a joint Gaussian distribution \cite{Rasmussen2006RasmussenLearning}. A Gaussian process is completely specified by its mean function and covariance function:
$f(x) \sim GP(m(x),k(x,x'))$
where
$m(x) = E[f(x)]$ and
$k(x,x') = E[(f(x)-m(x))(f(x')-m(x'))]$.
A zero mean function is often used as marginalizing over an unknown mean function can be expressed using a zero mean GP with a modified kernel. The structure of the kernel function determines how the Gaussian process model generalizes.
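To make the role of the kernel concrete, the following minimal sketch (ours, not code from the works discussed; the kernel choice and hyperparameters are illustrative) draws functions from a zero-mean GP prior:

```python
import numpy as np

def sq_exp_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared exponential covariance k(x, x') for 1-D inputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def sample_gp_prior(x, kernel, n_samples=3, jitter=1e-8, seed=0):
    """Draw n_samples functions from a zero-mean GP prior at inputs x."""
    K = kernel(x, x) + jitter * np.eye(len(x))  # jitter keeps K positive definite
    rng = np.random.default_rng(seed)
    return rng.multivariate_normal(np.zeros(len(x)), K, size=n_samples)

x = np.linspace(0.0, 5.0, 50)
samples = sample_gp_prior(x, sq_exp_kernel)  # shape (n_samples, len(x))
```

Swapping `sq_exp_kernel` for a sum or product of base kernels changes how the sampled functions generalize, which is exactly the degree of freedom the kernel search exploits.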
\subsection{Automatic Bayesian Covariance Discovery}
\textcite{Duvenaud2013StructureSearch} define a language of regression models by specifying a set of base kernels capturing different function properties and a set of composition rules that combine kernels to produce other valid kernels. To fit a time series, a greedy search is performed over the space of regression models, where each kernel specified model's parameters are optimized by conjugate gradient descent and where optimized models are compared using the Bayesian Information Criterion (BIC):
$$BIC(M) = -2 \log{p(D|M)} + |M| \log{n}$$
where $M$ is an optimized model, $|M|$ is the number of kernel parameters, $p(D|M)$ is the marginal likelihood of the data $D$, and $n$ is the number of data points. BIC is chosen as the criterion for evaluating kernels because it balances model fit and model complexity while avoiding an intractable integral over kernel parameters \cite{Rasmussen2001OccamsRazor, Schwarz1978EstimatingModel}.
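As a minimal sketch (the function name is ours), the criterion is a one-liner once the optimized model's log marginal likelihood is known:

```python
import numpy as np

def bic(log_marginal_likelihood, n_params, n_points):
    """BIC(M) = -2 log p(D|M) + |M| log n; lower values are preferred."""
    return -2.0 * log_marginal_likelihood + n_params * np.log(n_points)
```

With equal fit, the `n_params * log(n)` term penalizes the more complex kernel, which keeps the greedy search from always preferring larger composite kernels.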
\textcite{Lloyd2014AutomaticModels} introduce the Automatic Bayesian Covariance Discovery (ABCD) algorithm which uses the language of regression models from \textcite{Duvenaud2013StructureSearch} to automatically generate natural language descriptions of time series.
\subsection{(Semi-)Relational Kernel Learning}
\textcite{Yunseong2016AutomaticSeries} introduce two kernel learning methods that extend ABCD to model shared covariance structures across multiple time series. Relational Kernel Learning (RKL) aims to find a model that explains multiple time series $D = \{d_1, d_2, \ldots, d_J\}$ well. Assuming the time series are conditionally independent given the model allows for the simple computation of the marginal likelihood of the entire dataset:
$$p(D|M) = p(d_1, d_2, \ldots, d_J|M) = \prod_{j=1}^J p(d_j|M)$$
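In log space this product becomes a sum of per-series GP log marginal likelihoods, which is how a shared kernel can be scored in practice. The sketch below is a minimal illustration, not the reference implementation; it assumes a zero-mean GP with iid Gaussian observation noise, and the `noise` level is an arbitrary choice:

```python
import numpy as np

def gp_log_marginal(y, K, noise=1e-2):
    """Log marginal likelihood log p(y) of a zero-mean GP with covariance K
    plus iid Gaussian observation noise, via a Cholesky factorization."""
    n = len(y)
    L = np.linalg.cholesky(K + noise * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # alpha = C^{-1} y
    return (-0.5 * y @ alpha
            - np.log(np.diag(L)).sum()        # -0.5 log det C
            - 0.5 * n * np.log(2.0 * np.pi))

def rkl_log_marginal(series, K, noise=1e-2):
    """log p(D|M): sum of per-series log marginals under one shared kernel."""
    return sum(gp_log_marginal(y, K, noise) for y in series)
```

The shared-kernel assumption is what makes this score cheap: the same Cholesky factor can be reused across all series evaluated at the same inputs.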
The presence of exactly identical structure across all the time series in a dataset is rare. To accommodate for variation in individual time series within a dataset, Semi-Relational Kernel Learning (SRKL) relaxes the exactly identical structure constraint of RKL by learning a set of kernels, one for each time series in a dataset. The kernels share a common component that captures structure found across the dataset while retaining individual components. In particular, the set of kernels learned by SRKL can be written as
$$\{K_j = K_S + K_{d_j} \mid d_j \in D,\ j = 1, 2, \ldots, J\}$$
where $K_S$ is the shared kernel component and the $K_{d_j}$ are the individual kernel components.
The authors of \textcite{Yunseong2016AutomaticSeries} note that while it would be ideal to use an ABCD type search over a space of regression models to learn the individual kernel components, doing so is not practically feasible due to the explosion of the search space. To avoid the complexity issues, the individual kernel components are represented using the spectral mixture kernel \cite{Wilson2013GaussianExtrapolation}. While this allows SRKL to model multiple time series that may have some structural differences, the single shared kernel component makes it still necessary that the time series be somewhat homogeneous in nature. This is problematic when outliers exist in the data or when the data is heterogeneous.
\section{Heterogeneous Relational Kernel Learning}
In this section, we introduce a computationally feasible procedure for uncovering structure found in subsets of time series but not necessarily all time series in the dataset. These time series could be described as heterogeneous, and so we term our method Heterogeneous Relational Kernel Learning (HRKL). The procedure is simple to implement and can readily be incorporated into RKL and SRKL with little additional computation, enabled by the reuse of intermediary computational outputs from RKL.
Intuitively, if a subset of time series are structurally similar, kernels that explain one of the members of the subset well should explain the entire subset generally well. Similarly, kernels that explain one of the members of the subset poorly should tend to explain the entire subset poorly.
Our extension of RKL is simple. Instead of using BIC values for determining only the best model, we save the BIC value for every kernel-series combination evaluated during the RKL search process. After an iteration of searching over $K$ kernels to fit $J$ time series a $J$ by $K$ BIC history matrix $B$ can be defined. Each matrix element $B_{j,k}$ corresponds to the BIC of a Gaussian process model specified by kernel $k$, optimized for time series $d_j$. This matrix $B$ is illustrated in \autoref{fig:embedding}. Each row of the BIC history matrix is then standardized by removing the mean and scaling to unit variance. This standardized matrix is used as the time series embedding. Each row of the BIC history matrix corresponds to the representation of a time series in the embedded space, and each column is a dimension of the embedded space and is associated with a specific kernel. We note that each dimension of the embedding is interpretable because if the language of regression models from \textcite{Duvenaud2013StructureSearch} is used, then each dimension of the embedding corresponds to an interpretable kernel composed of interpretable base kernels. The above explanation fully describes our new approach. Despite its simplicity, it allows us to extend RKL to easily handle time series of a heterogeneous nature, which we demonstrate with a number of experiments. HRKL is summarized in \autoref{alg:hrkl}.
\begin{figure}[!htb]
\centering
\begin{tikzpicture}[
Brace/.style={
decorate,
thick,
decoration={
brace,
amplitude=2pt,
raise=-7pt
}
},]
\matrix [matrix of math nodes,left delimiter=(,right delimiter=)] (m)
{
32 & 14 & 15 & 23 \\
49 & 51 & 11 & 7 \\
29 & 22 & 34 & 10 \\
};
\draw[color=red] (m-1-3.north west) -- (m-1-3.north east) -- (m-3-3.south east) -- (m-3-3.south west) -- (m-1-3.north west);
\draw[color=red] (m-3-1.north west) -- (m-3-1.north east) -- (m-3-1.south east) -- (m-3-1.south west) -- (m-3-1.north west);
\draw [LaTeX-] (m-3-3.south) ++(0,-2.5pt) [out=-90,in=160] to ++(5mm,-10mm) node [right, xshift=+0.5mm, font=\itshape, text=black, align=center] {Each column is a\\dimension of\\the embedding.};
\draw [LaTeX-] (m-3-1.south) ++(0,-2.5pt) [out=-90,in=0] to ++(-5mm,-10mm) node [left, xshift=-0.5mm, font=\itshape, text=black, align=center] {BIC value.};
\node[inner xsep=20pt,inner ysep=0pt,fit=(m)](A){};
\node[inner xsep=00pt,inner ysep=10pt,fit=(m)](B){};
\draw[Brace] (A.180 |- A.270) -- (A.180 |- A.90) node[midway,left]{\shortstack[l]{Each row $j$\\corresponds\\to one of $J$\\time series.} };
\draw[Brace] (B.90 -| B.180) -- (B.90 -| B.0) node[midway,above]{\shortstack[l]{Each column $k$ corresponds to one of $K$ kernels.}};
\end{tikzpicture}
\caption{
Illustration of the HRKL time series embedding.}
\label{fig:embedding}
\end{figure}
\begin{algorithm}[]
\KwIn{Multiple time series $D=d_1, \ldots,d_M$, initial kernel $k$, initial hyperparameters $\theta$, expansion grammar $G$.}
\KwOut{$\forall i \in [1, C] , (D_i, k_i)$ where $D_i$ is the set of time series in cluster $i$, and $k_i$ is the kernel that best describes cluster $i$.}
\tcp{Use initial kernel and expansion grammar to generate list of kernels to evaluate.}
$K \leftarrow \text{expand}(k, G)$\;
\tcp{Initialize BIC history matrix.}
$B \leftarrow \text{array}(M,\text{length}(K))$\;
\tcp{For each kernel.}
\For{$k \in K$}{
\tcp{For each time series in the dataset.}
\For{$d \in D$}{
\tcp{Fit Gaussian process.}
$\theta \leftarrow \argmin_{\theta} - \log p(d \mid k, \theta)$\;
\tcp{Compute and save BIC value.}
$B_{d,k} = \text{BIC}(d,k)$\;
}
}
\tcp{Cluster BIC history matrix using columns as features.}
$\{D_1, \ldots,D_C\} \gets \text{cluster}(B)$\;
\tcp{For each cluster of time series.}
\For{$D_c \in \{D_1, \ldots,D_C\}$}{
\tcp{Find the kernel that best describes the cluster.}
$k_c \leftarrow \argmin_{k \in K} \text{BIC}(D_c,k)$\;
}
\KwRet{$[(D_1,k_1),\ldots,(D_C,k_C)]$}
\caption{Heterogeneous Relational Kernel Learning}
\label{alg:hrkl}
\end{algorithm}
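The $\text{BIC}(d,k)$ call in \autoref{alg:hrkl} is the standard Bayesian Information Criterion. A sketch under the usual convention (lower is better), taking the number of free parameters to be the count of kernel hyperparameters, is:

```python
from math import log

def bic(log_likelihood, n_params, n_obs):
    """BIC = p * ln(n) - 2 * ln(L_hat); lower values indicate a
    better trade-off between fit and model complexity."""
    return n_params * log(n_obs) - 2.0 * log_likelihood
```

A kernel with a higher likelihood but an extra hyperparameter is only preferred if the likelihood gain outweighs the $\ln(n)/2$ penalty.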
\section{Experiments}
We run three experiments to explore the properties and behavior of our interpretable kernel embedding. In particular, we aim to elucidate the strengths and weaknesses of the embedding to understand what applications might benefit from the use of HRKL.
\subsection{Clustering}
We begin the evaluation of our interpretable kernel embedding on a clustering task aimed at characterizing what is considered as similar in the embedding space.
\subsubsection{Data}
We generate a synthetic dataset consisting of 60 standardized time series as shown in \autoref{fig:data_synthetic}. The first 10 series are sine waves with varying amplitudes, frequencies, phases, and noise levels. The next 10 are lines with varying slopes, intercepts, and noise levels. The next 10 are sine waves with linear trends. The next 10 are random noise. The next 10 are variations on the Heaviside step function. The last 10 time series are variations on the sinc function. Each set of 10 time series is considered to form a class. The composition of these classes captures the idea that time series can be considered to be similar if they share structural elements, even if the elements differ in parameter values. These six classes will be used as ground truth labels in the evaluation of the clustering task.
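A sketch of generators for the six classes (the exact amplitudes, frequencies, and noise levels are not specified in the text, so the parameters here are illustrative):

```python
import math
import random

random.seed(0)
N = 200
t = [i / N * 10 for i in range(N)]          # common time grid

def sine(amp, freq, phase, noise):
    return [amp * math.sin(freq * x + phase) + random.gauss(0, noise) for x in t]

def line(slope, intercept, noise):
    return [slope * x + intercept + random.gauss(0, noise) for x in t]

def sine_plus_trend(amp, freq, slope, noise):
    return [amp * math.sin(freq * x) + slope * x + random.gauss(0, noise) for x in t]

def white_noise(sigma):
    return [random.gauss(0, sigma) for _ in t]

def step(loc, noise):                        # Heaviside variation
    return [(1.0 if x >= loc else 0.0) + random.gauss(0, noise) for x in t]

def sinc(scale, noise):                      # sinc variation, centred mid-series
    out = []
    for x in t:
        u = scale * (x - 5.0)
        out.append((1.0 if u == 0 else math.sin(u) / u) + random.gauss(0, noise))
    return out
```

Ten draws per generator, with randomized parameters, would reproduce the 60-series layout described above.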
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{data_figs/syntheticdata.png}
\caption{
Dataset consisting of 60 time series
with heterogeneous structure that cannot be modeled well by (S)RKL.
}
\label{fig:data_synthetic}
\end{figure}
\subsubsection{Clustering Methodology}
Pairwise distances between the rows of the BIC history matrix, $B$, are computed using cosine distance to obtain a $J$ by $J$ distance matrix $P$. We use this distance matrix $P$ to uncover clusters of time series. Cosine distance is used because vector orientation is much more important than vector magnitude when trying to capture the intuition that if a subset of time series are structurally similar they should be well described by a common subset of kernels and poorly described by another common subset of kernels.
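A self-contained sketch of the pairwise cosine-distance computation (not the authors' code):

```python
import math

def cosine_distance(u, v):
    """1 - cosine similarity: sensitive to orientation, not magnitude."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / (norm_u * norm_v)

def pairwise_distances(Z):
    """Symmetric J x J distance matrix over the rows of the embedding."""
    J = len(Z)
    P = [[0.0] * J for _ in range(J)]
    for i in range(J):
        for j in range(i + 1, J):
            P[i][j] = P[j][i] = cosine_distance(Z[i], Z[j])
    return P
```

Scaling a row by a positive constant leaves its distances unchanged, which is exactly the magnitude invariance motivated above.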
Multiple approaches could be used for clustering. We use HDBSCAN, a density-based, hierarchical clustering algorithm which improves upon DBSCAN \cite{Campello2013Density-BasedEstimates}. We use HDBSCAN because of its high cluster stability and because it does not require the specification of the number of clusters beforehand.
\subsubsection{Kernels}
We use as base kernels the squared exponential kernel, the linear kernel, and the periodic kernel. We then generate a list of 87 kernels to evaluate by taking all non-redundant kernel structures of the following forms where $k_a$, $k_b$, and $k_c$ are base kernels: $k_a$, $k_a * k_b$, $k_a + k_b$, $(k_a * k_b) * k_c$, $(k_a + k_b) * k_c$, $(k_a * k_b) + k_c$, and $(k_a + k_b) + k_c$.
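The text does not spell out its redundancy rules, but one natural scheme, canonicalizing commutative pairs (so $k_a * k_b$ and $k_b * k_a$ coincide) without flattening associativity, reproduces the count of 87:

```python
def canon(a, b, op):
    """Canonical string for a commutative pair of sub-kernels."""
    x, y = sorted((a, b))
    return f"{x} {op} {y}"

def expand(bases):
    kernels = set(bases)                              # k_a
    for a in bases:
        for b in bases:
            kernels.add(canon(a, b, "*"))             # k_a * k_b
            kernels.add(canon(a, b, "+"))             # k_a + k_b
    depth_two = kernels - set(bases)
    for inner in sorted(depth_two):
        for c in bases:
            kernels.add(canon(f"({inner})", c, "*"))  # (k_a op k_b) * k_c
            kernels.add(canon(f"({inner})", c, "+"))  # (k_a op k_b) + k_c
    return sorted(kernels)

BASES = ["SE", "LIN", "PER"]   # squared exponential, linear, periodic
```

The enumeration yields 3 base kernels, 12 depth-two compositions, and 72 depth-three compositions, for 87 in total.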
\subsubsection{Baselines}
We also evaluate three baseline approaches to highlight the differences between our approach and most previous approaches.
Dynamic Time Warping (DTW) measures similarity between time series by non-linearly warping the series in the time dimension \cite{Salvador2004FastDTWSpace}. We use Euclidean distance DTW with HDBSCAN for clustering.
Symbolic Aggregate approXimation Bag-of-Patterns (SAX BoP) is a histogram-based representation for time series data, which is essentially a bag-of-words model of the quantized time series. The SAX BoP representation can then be used to compute a pairwise distance matrix followed by clustering. We use the recommended SAX BoP hyperparameter settings from \textcite{Lin2009FindingRepresentation} with Euclidean distance and HDBSCAN for clustering.
The k-Shape algorithm is a stronger baseline as a time-series clustering algorithm that is invariant to scaling and shifting \cite{Paparrizos2017FastClustering}. k-Shape is centroid-based with a distance measure based on the cross-correlation measure. We note that k-Shape requires that the number of clusters be specified beforehand, a requirement that is not shared by our method nor by the other baselines. Knowledge of the number of clusters ahead of time gives k-Shape an inherent advantage over the other methods in the context of our evaluation.
\subsubsection{Evaluation Metrics}
We use homogeneity, completeness, and V-measure as cluster evaluation metrics given the known labels for our six classes \cite{Rosenberg2007V-MeasureMeasure}. The homogeneity score captures how well the clustering reflects the desired property that each cluster contains only members of a single class. The completeness score captures how well the clustering reflects the desired property that all members of a given class are assigned to the same cluster. The V-measure is the harmonic mean of the homogeneity and completeness scores.
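These metrics are entropy-based; a stdlib sketch following the definitions of \textcite{Rosenberg2007V-MeasureMeasure}, with the convention that a zero-entropy term yields a perfect score:

```python
from math import log
from collections import Counter

def _entropy(labels):
    n = len(labels)
    return -sum((c / n) * log(c / n) for c in Counter(labels).values())

def _cond_entropy(a, b):
    """H(a | b) for two parallel label sequences."""
    n = len(a)
    b_counts = Counter(b)
    return -sum((c / n) * log(c / b_counts[kb])
                for (ka, kb), c in Counter(zip(a, b)).items())

def v_measure(classes, clusters):
    hc, hk = _entropy(classes), _entropy(clusters)
    h = 1.0 if hc == 0 else 1.0 - _cond_entropy(classes, clusters) / hc
    c = 1.0 if hk == 0 else 1.0 - _cond_entropy(clusters, classes) / hk
    v = 0.0 if h + c == 0 else 2 * h * c / (h + c)
    return h, c, v
```

A perfect clustering scores 1.0 on all three metrics, while collapsing everything into a single cluster gives perfect completeness but zero homogeneity.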
\subsubsection{Results and Discussion}
\begin{table}[]
\caption{Clustering performance metrics comparing HDBSCAN using HRKL, DTW, and SAX BoP, as well as k-Shape. Homogeneity, completeness, and V-measure are all bounded below by 0 and above by 1, where 1 corresponds to a perfect clustering.}
\label{tbl:clus_table}
\begin{tabular}{l|lll}
& Homogeneity & Completeness & V-Measure \\ \hline
HRKL & \textbf{0.820} & \textbf{0.852} & \textbf{0.836} \\
DTW & 0.496 & 0.627 & 0.553 \\
SAX BoP & 0.363 & 0.684 & 0.475 \\
k-Shape & 0.490 & 0.526 & 0.507
\end{tabular}
\end{table}
\autoref{tbl:clus_table} summarizes the homogeneity, completeness, and V-measure metrics of the clustering of the data described in \autoref{fig:data_synthetic} using HRKL with HDBSCAN, DTW with HDBSCAN, SAX BoP with HDBSCAN, and k-Shape. HRKL performs the best by a wide margin on all metrics.
An examination of the cluster assignments made by HDBSCAN with our interpretable kernel embeddings also provides insights into the behavior of the embedding. Five clusters were found. The lines, random noise, Heaviside step variations, and sinc function variations ground truth classes were perfectly clustered. A single error was made in the clustering of members of the sine waves class, where the series shown in the second row and middle column of \autoref{fig:data_synthetic} was assigned to the same cluster as the members of the random noise class. We believe this is a reasonable mistake to make, as the high noise level in that particular series made it look very similar to random noise. The class our method had difficulty with was the third class, sine waves with linear trends. The majority of the members of this class were clustered with members of the lines class, with the remainder labeled as outliers or clustered with the sine waves class. Again, we believe that these mistakes are somewhat reasonable, as members of the sine waves with linear trends class were clustered with members of the sine waves class and lines class. In contrast to our method, the DTW, SAX BoP, and k-Shape baselines all fail to distinguish sine waves from random noise, consistently clustering members of the sine wave and random noise classes together.
We compared to SAX BoP, DTW, and k-Shape to illustrate how methods aimed at discriminating between time series that share a model class but have different model parameters are less appropriate than HRKL for discriminating between time series that are best described by different model classes.
These results validate our intuition that interpretable kernel embeddings are unique in how they consider time series to be similar if the time series share structural elements.
In the context of an automatic statistician, HRKL improves upon RKL and SRKL in the presence of heterogeneous time series data. In particular, RKL and SRKL are both forced to select a \emph{single kernel} to describe the \emph{entire} dataset, while HRKL is able to select a \emph{separate kernel for each sub-population} resulting in more useful descriptions of the data.
By iterating through each cluster HRKL finds and looking at the selected kernel, we can interpret the HRKL results with the same ease as RKL. We run through this as an example below to denote how HRKL's interpretation captures the heterogeneous nature of the data.
When run on the data, RKL and SRKL both select the kernel $\text{PER} * \text{SE} + \text{LIN}$ for the \emph{entire} dataset. This kernel would be described in the language of \textcite{Lloyd2014AutomaticModels} as encoding the following additive components: `a linear function' and `a periodic function whose shape changes smoothly'. While this kernel describes the sine waves with linear trends sub-population well, it is not an appropriate description for the majority of the dataset.
The HRKL sub-population discovery and kernel selection procedure leads to the selection of the following kernels and descriptions for each of the five sub-populations found.
For the sub-population containing mostly sine waves, the kernel $\text{PER} * \text{PER} + \text{SE} * \text{PER}$ is selected, encoding the additive components: `a periodic function modulated by a periodic function' and `a periodic function whose shape changes smoothly'. The periodic nature of sine waves is well captured by the selected kernel. For the sub-population containing random noise and one sine wave with high noise, the same kernel, $\text{PER} * \text{PER} + \text{SE} * \text{PER}$, is selected.
For the sub-population containing mostly lines as well as sine waves with linear trends, the kernel $\text{LIN} + \text{PER} * \text{SE}$ is selected, encoding the additive components: `a linear function' and `a periodic function whose shape changes smoothly'. The characteristics of the sub-population, linear trends sometimes with a periodic trend, are well captured by the selected kernel.
For the sub-population containing step functions, the kernel $\text{SE} + \text{PER} * \text{SE}$ is selected, encoding the additive components: `a smooth function' and `a periodic function whose shape changes smoothly'.
Finally, the sub-population containing sinc function is described by the $\text{PER} + \text{SE}$ kernel which encodes the additive components: `a periodic function' and `a smooth function'.
The use of our interpretable kernel embeddings leads to a more precise and useful automatic description of heterogeneous time series data as it allows for the uncovering and characterization of sub-populations.
\subsubsection{Running Time and Complexity}
HRKL is able to handle sub-population structure discovery at practically the same computational cost as using RKL to find a single kernel shared by all time series. The cost is practically the same because the $O(n \log n)$ cost of running a clustering algorithm like HDBSCAN once at the end of the search is far smaller than the $O(n^3)$ cost of fitting a Gaussian process to evaluate a kernel.
In terms of running time on a single CPU, the clustering component of HRKL takes on average 3935 microseconds with a standard deviation of 114 microseconds, while a single kernel evaluation takes on average 63984 microseconds with a standard deviation of 9677 microseconds. In other words, HRKL achieves sub-population discovery for roughly 1/16 the cost of a single kernel evaluation, an additional cost of about 0.07 percent relative to all of the kernel evaluations.
HRKL also improves significantly on RKL and SRKL from a computational complexity perspective. As Hwang et al. note, SRKL is suboptimal from an interpretability perspective because the spectral mixture kernel is used to model variance unexplained by the kernel shared by all time series, but SRKL must be used because learning distinctive interpretable kernels for all time series is computationally infeasible. In particular, the RKL search could be performed to discover one shared kernel and $n$ distinctive kernels, but the search space explodes in complexity as $O(k^{n+1})$, where $k$ is the number of possible kernels on each search grammar tree for every depth. On top of this, using RKL to discover subsets of time series that share common structure results in a combinatorial explosion, as each possible subset combination would need to be evaluated using RKL.
\subsection{Pattern Discovery}
Next, we evaluate our interpretable kernel embedding on a pattern discovery task. We find HRKL and DTW both perform best and discover the same overall structures.
\subsubsection{Data}
We use a set of nine search volume time series from Google Trends for the following terms: summer, winter, spring, fall, Zika, Rubio, python, coffee, and finance. The search volumes represent relative weekly search popularity in the United States from 2/16/14 to 2/3/19. The standardized time series are shown in \autoref{fig:data_gt}. The data can be divided into four structural subsets. The search terms representing seasons have a periodic structure, "Zika" and "Rubio" are overall flat with temporary surges in interest, "python" and "coffee" have linearly increasing trends, and "finance" has a flat structure with a couple of small surges in interest.
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{data_figs/data_gt.png}
\caption{Standardized search volume time series from Google Trends. The search volumes represent relative weekly search popularity in the United States from 2/16/14 to 2/3/19. The data is divided into four structural subsets.}
\label{fig:data_gt}
\end{figure}
\subsubsection{Methodology}
We employ a similar methodology to the clustering task in the pattern discovery task, using the same kernels and baselines. As multiple plausible groupings of the data exist, and to elucidate what the different approaches consider as similar, we use a hierarchical agglomerative clustering (HAC) algorithm. HAC builds a binary tree over the data by first assigning each datum to its own cluster and then merging groups together. The HAC algorithm maintains an active set of clusters and at each stage determines which two clusters to merge: their union is added to the active set, and the two original clusters are removed from the active set. The tree is constructed by keeping track of which clusters are merged together at each step. In order to determine which clusters to merge, the HAC algorithm chooses the pair of clusters in the active set that have the smallest dissimilarity or distance. For the distance metric we choose a single linkage criterion, which looks at the Euclidean distance between the nearest members of the clusters. A dendrogram can then be used to visualize the computed clustering.
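A naive stdlib sketch of single-linkage agglomeration over a precomputed distance matrix (production code would use an optimized library implementation):

```python
def single_linkage(P):
    """Merge clusters greedily by smallest nearest-member distance;
    returns the merge history as (cluster_i, cluster_j, distance)."""
    clusters = {i: {i} for i in range(len(P))}
    merges = []
    while len(clusters) > 1:
        (i, j), d = min(
            (((a, b), min(P[x][y] for x in clusters[a] for y in clusters[b]))
             for a in clusters for b in clusters if a < b),
            key=lambda pair: pair[1])
        clusters[i] |= clusters.pop(j)     # union replaces the two clusters
        merges.append((i, j, d))
    return merges
```

The merge history is exactly the information a dendrogram visualizes.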
\subsubsection{Results and Discussion}
\autoref{fig:googletrends-combined-nosrkl-run2_hac_gp} illustrates the clustering found using our HRKL method, where leaf labels correspond to the grouping labels from \autoref{fig:data_gt}. \autoref{fig:googletrends-combined-nosrkl-run2_hac_sax} shows the clustering found using SAX BoP, and \autoref{fig:googletrends-combined-nosrkl-run2_hac_dtw} shows the clustering found using DTW. As a centroid-based algorithm, k-Shape is not amenable to a dendrogram representation and requires that the number of clusters be specified beforehand. When initialized with the number of clusters set to four, k-Shape recovers the same groupings as our method HRKL, but has the unfair advantage of knowing that there are four clusters.
\autoref{tbl:clus_table_gt} summarizes the clustering performance metrics on the Google Trends data comparing HAC clustering using HRKL, DTW, and SAX BoP, as well as k-Shape, all provided with the correct number of clusters. DTW performs the best, followed by HRKL and k-Shape with a tie. SAX BoP does poorly on this task.
An examination of \autoref{fig:googletrends-combined-nosrkl-run2_hac_gp} shows that the use of HRKL leads to a clustering structure that immediately groups "Zika" and "Rubio", the time series with spikes in search volumes but overall flat structures. These two time series are then grouped with "finance", a time series with an overall flat structure and a number of relatively less significant spikes. The seasons "fall", "winter", "spring", and "summer" are grouped together, and the series with linear trends "python" and "coffee" are also grouped together. Overall on this dataset, the use of our interpretable kernel embedding results in logical groupings of the time series that would allow for heterogeneous relational kernel learning without resulting in an explosion of the search space.
\autoref{fig:googletrends-combined-nosrkl-run2_hac_sax} shows that while SAX BoP does a good job at grouping "Zika" and "Rubio", SAX BoP is not effective at finding the structure in the rest of the data, for example not being able to uncover the shared periodic structure in the seasonal data. On the other hand, \autoref{fig:googletrends-combined-nosrkl-run2_hac_dtw} shows that DTW led to a nearly identical HAC clustering as the use of our interpretable kernel embedding.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\linewidth,height=140pt]{data_figs/googletrends-combined-nosrkl-run1_hac_gp.png}
\caption{HRKL}
\label{fig:googletrends-combined-nosrkl-run2_hac_gp}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\linewidth,height=140pt]{data_figs/googletrends-combined-nosrkl-run1_hac_sax.png}
\caption{SAX BoP}
\label{fig:googletrends-combined-nosrkl-run2_hac_sax}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\linewidth,height=140pt]{data_figs/googletrends-combined-nosrkl-run1_hac_dtw.png}
\caption{DTW}
\label{fig:googletrends-combined-nosrkl-run2_hac_dtw}
\end{subfigure}
\caption{Dendrogram visualizing the HAC clustering of the Google Trends data found using (a) HRKL, (b) SAX BoP, and (c) DTW. The leaf labels correspond to the grouping labels from \autoref{fig:data_gt}. }
\end{figure}
\begin{table}[]
\caption{Clustering performance metrics on the Google Trends data comparing HAC clustering using HRKL, DTW, and SAX BoP, as well as k-Shape, all provided with the correct number of clusters.}
\label{tbl:clus_table_gt}
\begin{tabular}{l|lll}
& Homogeneity & Completeness & V-Measure \\ \hline
HRKL & 0.833 & 0.809 & 0.821 \\
DTW & \textbf{1.000} & \textbf{1.000} & \textbf{1.000} \\
SAX BoP & 0.470 & 0.597 & 0.526 \\
k-Shape & 0.833 & 0.809 & 0.821
\end{tabular}
\end{table}
As RKL and SRKL do not result in an embedding but instead a single shared kernel, they cannot be used for the pattern discovery task. This demonstrates the improvement of HRKL over RKL and SRKL, as HRKL can be used for tasks that RKL and SRKL cannot be used for, and HRKL performs better overall than the other methods that can do these tasks.
\subsection{Anomaly Detection}
We also evaluate our interpretable kernel embedding on an anomaly detection task. In this task, we find HRKL and SAX BoP both solve the task with equal performance, but DTW and k-Shape have significantly degraded results.
\subsubsection{Data}
We use the PhysioNet Gait in Aging and Disease dataset which consists of walking stride interval (the time between successive
heel strikes of the same foot) time series for 15 subjects: 5
healthy young adults, 5 healthy old adults, and 5 older adults with Parkinson's
disease. We then randomly select one time series from each class to corrupt, where corruption consists of a zeroing out of sections of the series. This simulates the effect of real world errors that often occur during the reading, processing, transmission, writing, and storage of sensor data \cite{BoozAllenHamilton2013TheScience}. \autoref{fig:data_corrupted} shows both the uncorrupted and corrupted data.
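The corruption step can be sketched as follows (the section count and length are illustrative; the text does not specify them):

```python
import random

def corrupt(series, n_sections=2, section_len=20, seed=0):
    """Zero out contiguous windows to simulate sensor dropouts."""
    rng = random.Random(seed)
    out = list(series)
    for _ in range(n_sections):
        start = rng.randrange(0, max(1, len(out) - section_len))
        for i in range(start, min(len(out), start + section_len)):
            out[i] = 0.0
    return out
```

The original series is left untouched; only the returned copy is corrupted.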
\begin{figure}[!htb]
\centering
\includegraphics[width=\linewidth]{data_figs/data_corrupted.png}
\caption{PhysioNet Gait in Aging and Disease dataset which consists of walking stride interval time series for 15 subjects. Three time series are corrupted, where corruption consists of a zeroing out of sections of the series. The uncorrupted time series are shown in the top panel labeled 0, and the corrupted time series are shown in the bottom panel labeled 1.}
\label{fig:data_corrupted}
\end{figure}
\subsubsection{Methodology}
For our anomaly detection experiment, we use an identical methodology to the one used for the pattern discovery experiment. The goal now is to uncover the corrupted data which should be modeled differently from the uncorrupted data.
\subsubsection{Results and Discussion}
\autoref{tbl:clus_table_corrupted} summarizes the clustering performance metrics on the corrupted gait data comparing HAC clustering using HRKL, DTW, and SAX BoP, as well as k-Shape, all provided with the correct number of clusters. HRKL significantly outperforms all of the other methods by a wide margin.
\autoref{fig:corrupt-nosrkl-run2_hac_gp} illustrates the clustering found using our HRKL method, where leaf labels correspond to the grouping labels from \autoref{fig:data_corrupted}. \autoref{fig:corrupt-nosrkl-run2_hac_sax} shows the clustering found using SAX BoP, and \autoref{fig:corrupt-nosrkl-run2_hac_dtw} shows the clustering found using DTW. As previously mentioned, k-Shape is not amenable to a dendrogram representation and requires that the number of clusters be specified beforehand. When initialized with the number of clusters set to two, k-Shape does not recover the same groupings as shown in \autoref{fig:data_corrupted}. Instead, k-Shape achieves a homogeneity score of 0.141, a completeness score of 0.122, and a V-measure score of 0.131.
\autoref{fig:corrupt-nosrkl-run2_hac_gp} shows that the use of our HRKL embedding leads to a clear separation of the corrupt data from the uncorrupted data. \autoref{fig:corrupt-nosrkl-run2_hac_sax} indicates that SAX BoP also clearly separates the corrupted data from the uncorrupted data. However, \autoref{fig:corrupt-nosrkl-run2_hac_dtw} shows that a larger number of clusters, four to be precise, would be required to successfully separate the corrupted data from the uncorrupted data using DTW.
As with the pattern discovery task, since RKL and SRKL do not result in an embedding but instead a single shared kernel, they cannot be used for the anomaly detection task. This again demonstrates the improvement of HRKL over RKL and SRKL, as HRKL can be used for tasks that RKL and SRKL cannot be used for.
\begin{figure}[!htb]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\linewidth,height=140pt]{data_figs/corrupt-nosrkl-run2_hac_gp.png}
\caption{HRKL}
\label{fig:corrupt-nosrkl-run2_hac_gp}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\linewidth,height=140pt]{data_figs/corrupt-nosrkl-run2_hac_sax.png}
\caption{SAX BoP}
\label{fig:corrupt-nosrkl-run2_hac_sax}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\linewidth,height=140pt]{data_figs/corrupt-nosrkl-run2_hac_dtw.png}
\caption{DTW}
\label{fig:corrupt-nosrkl-run2_hac_dtw}
\end{subfigure}
\caption{Dendrogram visualizing the HAC clustering of the corrupted gait data found using (a) HRKL, (b) SAX BoP, and (c) DTW. The leaf labels correspond to the grouping labels from \autoref{fig:data_corrupted}.}
\end{figure}
\begin{table}[]
\caption{Clustering performance metrics on the corrupted gait data comparing HAC clustering using HRKL, DTW, and SAX BoP, as well as k-Shape, all provided with the correct number of clusters.}
\label{tbl:clus_table_corrupted}
\begin{tabular}{l|lll}
& Homogeneity & Completeness & V-Measure \\ \hline
HRKL & \textbf{1.000} & \textbf{1.000} & \textbf{1.000} \\
DTW & 0.287 & 0.287 & 0.287 \\
SAX BoP & 0.235 & 0.480 & 0.316 \\
k-Shape & 0.141 & 0.122 & 0.131
\end{tabular}
\end{table}
\section{Conclusion} \label{sec:conclusion}
We have extended prior work to create an interpretable kernel embedding for time series which allows for wider flexibility to deal with noisy data that may contain outliers and for the inclusion of sub-population identification as a natural part of an automated statistician. In other words, this embedding allows for heterogeneous relational kernel learning and for automatic hypothesis generation from sets of time series where only subsets of the time series share structure. This embedding can also be used for tasks such as clustering and anomaly detection in sets of time series. We showed the validity of our interpretable kernel embedding on three separate tasks including clustering, pattern discovery, and anomaly detection, where our method was shown to perform well across all tasks and in comparison to other popular time series clustering techniques.
\bibliographystyle{named}
Even after a cursory reading of the papers published in the seminal Vatican conference
on stellar populations, the key role that RR Lyrae played in the development
of what we now call stellar populations and in the shaping of Galactic structure becomes clear.
Their relevance is also supported by the first sentence of one of the papers given by
W. Baade at that conference:
{\em Variable stars, in particular the cepheids and the cluster-type variables, have
become increasingly important in the exploration of our own and other galaxies.}
The relevance of RR Lyrae stars in stellar astrophysics is further supported by
the fact that RR Lyrae are the most popular old, low-mass distance indicators.
In early times it was assumed that their mean luminosity was constant, but soon after,
spectroscopic investigations showed that they do obey a visual
magnitude--metallicity relation. Therefore, the knowledge of their metallicity
was fundamental for accurate estimates of their individual distances.
However, high-resolution spectroscopy in single slit mode was prohibitive
for large numbers of RR Lyrae stars. The $\Delta$S method invented by
George \citep{Preston59} opened a new path in the extensive use of field RR Lyrae
as stellar tracers of the Galactic halo \citep{Suntzeff94} and of
the bulge \citep{wal91,blan97}.
RR Lyrae are also fundamental laboratories to constrain the physical mechanisms
that drive their pulsation instability. After the pioneering investigations by
\citet{Christy66} based on nonlinear radiative models, the modelling of RR Lyrae
pulsation behavior was at the cross-roads of several theoretical and observational
investigations. Once again the papers by \citet{Preston64} and \citet{Preston65} have been
a benchmark for those interested in understanding the impact that the different
physical mechanisms (convection) and input physics (opacity, equation of state)
have on light and radial velocity curves.
The structure of this paper is the following. In \S2 we briefly discuss pros and cons
of cluster and field RR Lyrae. The pulsation framework developed by our group during
the last few years is outlined in \S3, together with the added values and the drawbacks
of the Bailey Diagram (amplitude versus period). In \S5 we discuss the current
observational scenario of the K-band Period-Luminosity relation of cluster RR Lyrae.
In \S6 we discuss the diagnostics currently adopted to estimate the primordial
helium content, in particular we focus our attention on RR Lyrae stars. In the last
section we briefly mention the possible paths that the next generation of observing
facilities will open during the next few years.
\section{Pros and cons of field and cluster RR Lyrae stars}
Field and cluster RR Lyrae bring forward several distinctive advantages worth
discussing in detail:
{\em i)}--Distance-- Cluster RR Lyrae provide accurate distance estimates of
Galactic Globular Clusters (GGCs),
and in turn more accurate estimates of their absolute age. Unfortunately,
they are not ubiquitous, i.e. their occurrence in GGCs does depend on the
morphology of the horizontal branch. They are not present in GGCs that have
either a very red or a very blue HB morphology. This means that they are
affected by the so-called second parameter problem (e.g., Kunder et al. 2011).
{\em ii)}--Ensemble properties-- Cluster and field RR Lyrae stars do show different
period distributions, i.e. they are affected by the so-called Oosterhoff dichotomy
\citep{oos39,Bono95a,smith95,cate09}.
However, RR Lyrae in MC globulars are Oosterhoff intermediate \citep{Bono94a}.
The same outcome applies to RR Lyrae in different dwarf spheroidal galaxies in
the Local Group (e.g. Carina, \citealt{Dallora03}). The fraction of first overtone
(FO, RRc) and fundamental (F, RRab) variables is also affected by the HB morphology and
changes when moving from GGCs to dwarf galaxies to the Galactic field \citep{Petroni03}.
This indicates that pulsation properties of RR Lyrae
do depend on the chemical and dynamical evolution of their stellar environment.
{\em iii)}--Evolutionary status-- Empirical evidence and theoretical predictions
indicate that the range in visual magnitude between the Zero-Age-Horizontal-Branch
(ZAHB) and the end of central helium burning is correlated with the metallicity
\citep{Sandage90,Bono95b}. Together with this intrinsic evolutionary
effect, we are also facing the problem that we still lack a firm photometric
diagnostic to constrain the evolutionary status of individual RR Lyrae stars.
This drawback applies to both cluster and field RR Lyrae and causes a series
of problems not only in the distance estimate, but also in the estimate of the
observed ZAHB luminosity level, and in turn on the parameters correlated with
this evolutionary feature, namely the R parameter (\citealt{Sandquist00};
Troisi et al. 2011, in preparation), the $\Delta$$V_{bump}^{HB}$
\citep{Dicecco10} and the relative ages of GGCs \citep{Marin09}.
A possible solution to overcome this problem is to estimate the individual distances
using the NIR PL relations, since they are minimally affected by evolutionary
effects and by a possible spread in mass inside the instability strip \citep{Bono01,Bono03}.
On the basis of the true distance modulus and of their individual reddenings,
the same objects can be located in the absolute CMD ($M_V$ vs $(B-V)_0$) to constrain
their evolutionary status (Bono et al. 2011, in preparation).
{\em iv)}--Evolutionary/Pulsation connection-- Cluster variables are also relevant
to constrain the input physics adopted in pulsation and evolutionary codes. Observables
predicted by pulsation models (periods, modal stability, pulsation amplitudes) and by
evolutionary models (lifetimes, mass-luminosity relation, effective temperature) do
depend on the same intrinsic parameters (stellar mass, chemical composition). The
comparison between theory and observations using the same objects (RR Lyrae),
seen as variables and as HB stars, provides a fundamental sanity check for both
the micro (opacity, equation of state, cross sections) and the macro
(gravitational settling, mixing, mass-loss) physics. This is a fundamental
stepping stone to improve the plausibility of the physical assumptions adopted
in pulsation and evolutionary models, since the former rely on the mass-luminosity
relation predicted by evolutionary models. This is the path that has been
followed to address several open problems (Oosterhoff dichotomy, Sandage period
effect, hysteresis mechanism, topology of the instability strip) that are at the
cross-roads between theory and observations.
{\em v)}--Statistics and progenitors-- The number of GGCs that host several tens
of RR Lyrae stars is quite limited \citep{Clement01}. The new photometric surveys
are disclosing several thousands of field RR Lyrae. This means that robust comparisons
can only be based on field stars. However, for cluster variables we have accurate
knowledge of the absolute age and chemical composition of the progenitors. For field
RR Lyrae we can either estimate (photometric indices) or measure (spectroscopy) the
chemical composition, but we can barely constrain their absolute age, and in turn
the mass of the progenitor.
{\em vi)}--Helium abundance-- More than 30 years ago it was suggested that the
mass-to-luminosity ratio of RR Lyrae, the so-called A-parameter \citep{Caputo83},
can be adopted, via the pulsation relation \citep{Vanalbada73,cap98},
to estimate the helium content of individual variables \citep{Sandquist00}.
The application of this diagnostic to cluster RR Lyrae is very powerful, since the
precision is only limited by the size of the sample. The two main drawbacks of
this approach are that the He estimates are affected by the precision of the
color-temperature relation and by the evolutionary status of individual objects.
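The pulsation relation behind the A-parameter can be written schematically as follows; the numerical coefficients are indicative of the van Albada \& Baker type formulation and are quoted here for illustration only, not taken from the papers cited above:

```latex
% Fundamental-mode pulsation relation (coefficients indicative only):
\begin{equation*}
\log P_F \simeq 11.497 + 0.84\,\log\frac{L}{L_\odot}
              - 0.68\,\log\frac{M}{M_\odot} - 3.48\,\log T_{\rm eff} .
\end{equation*}
% At fixed period and effective temperature it constrains the
% mass-to-luminosity combination
\begin{equation*}
A \equiv \log\frac{L}{L_\odot} - 0.81\,\log\frac{M}{M_\odot} ,
\end{equation*}
% which increases with the helium content, since He-enhanced HB
% structures are brighter and less massive.
```

This makes explicit why the precision of the helium estimate hinges on the color--temperature relation (through $T_{\rm eff}$) and on the evolutionary status of individual objects (through $L$).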
A breath of fresh air on this delicate topic arrived with the discovery by
\citet{Preston09} of both He~I and He~II emission and absorption
lines in field RR Lyrae stars. The lines show up during the rising branch of
RR Lyrae stars and appear to be the consequence of the shock propagation
\citep{Bono94b,Chadid96}.
\section{Theoretical Framework}
The content of \S2 indicates that RR Lyrae stars are at the cross-roads of several
open astrophysical problems. Several of these problems call for new theoretical
and observational insights. In particular, a unified theoretical framework dealing
with both the evolutionary and the pulsation properties of radial variables is required.
Our group undertook this project almost twenty years ago
\citep{cap08}. In the following, we briefly mention recent findings concerning
the predicted Bailey diagram and the K-band Period-luminosity relation.
We constructed new sets of \hbox{RR Lyrae~} pulsation models by using the hydrodynamical
code developed by \citet{Stellingwerf82} and updated by \citet{Bono94b} and
\citet{Bono99} [see also \citealt{Smolec10} for a similar approach].
The physical assumptions adopted to compute these models will be described in
Marconi et al. (2011, in preparation). We adopted the OPAL radiative opacities
released in 2005 by \citet{Iglesias96}\footnote{http://opalopacity.llnl.gov/}
and the molecular opacities by \citet{Alexander94}.
To properly constrain the pulsation properties of \hbox{RR Lyrae~} stars, we typically cover
a wide range in metal abundances (scaled-solar, $\alpha$-enhanced). Moreover,
to constrain the dependence of the pulsation properties of \hbox{RR Lyrae~} stars on the
He abundance we also adopt different values of this crucial parameter.
For each fixed chemical composition the stellar mass of \hbox{RR Lyrae~} stars
was fixed by using the evolutionary prescription for $\alpha$-enhanced structures provided by
\citet{Pietrinferni06} and available on the BaSTI database\footnote{http://albione.oa-teramo.inaf.it/}.
Note that for each fixed metal content the mass of the He-enhanced models was estimated
assuming the same cluster age (13 Gyr).
Together with the luminosity predicted by evolutionary models we often adopted a brighter
luminosity level to account for the possible occurrence of evolved
\hbox{RR Lyrae~} stars. The reader interested in a more detailed discussion concerning evolutionary
and pulsation ingredients is referred to \citet[][and references therein]{Marconi11}.
\subsection{The luminosity amplitude vs period (Bailey) diagram}
The period distribution and the Bailey diagram (luminosity amplitude vs pulsation period)
are very robust observables, since they are independent of distance and reddening corrections.
After the seminal discovery of this diagram \citep{oos39},
\citet{Preston59} found that objects with different
metal abundances display different trends. The use of the pulsation amplitude as a proxy
of the metal content is still lively debated both from the observational
(\citealt{Kinemuchi06,Kunder09, Kunder11a}) and the theoretical \citep{Bono07,Fiorentino10}
point of view. The reader interested in a detailed discussion concerning the development and
the use of the Bailey diagram is referred to the detailed paper by Smith et al.
(these proceedings). In the following, we focus our attention on the use of the Bailey
diagram to constrain the helium content of RR Lyrae stars.
The bolometric light curves predicted by nonlinear, convective, pulsation models
are typically transformed into the observational plane using the bolometric
corrections and the Color-Temperature transformations provided by Castelli et al. (1997a,b).
Figure~1 shows predicted B-band amplitudes for fundamental (top) and first overtone (bottom)
RR Lyrae models computed at fixed metal content (Z=0.001) and primordial helium contents
ranging from Y=0.24 to Y=0.38 (see labeled values). To constrain the dependence of
the luminosity amplitude on evolutionary effects the models --for each fixed chemical
composition-- were constructed using two different luminosity levels.
\begin{figure}
\centering
\includegraphics[width=9.0cm]{bailey.ps}
\vspace*{1.55truecm}
\caption{Top -- Predicted B-band amplitudes versus period for fundamental RR Lyrae
computed at fixed
metallicity
(Z=0.001). Different symbols
display different luminosity levels, while different colors show models
constructed assuming different helium abundances and/or stellar
masses. From left to right the labelled numbers show helium content
(mass fraction), the stellar mass and the logarithmic luminosity
(solar units).
Bottom -- Same as the top, but for first overtone pulsators.
}
\label{bailey}
\end{figure}
Data plotted in this figure indicate that the He content marginally affects the pulsation
behavior of RR Lyrae. The top panel shows that an increase of 30\% in He content
(0.24 vs 0.30) causes a systematic shift in the pulsation period, suggesting that
the difference in the helium-enhanced models is mainly due to the increase in the
luminosity level. The different sets of models disclose that structures with a difference
in helium content of $\sim$50\% might have, at fixed period, similar pulsation amplitudes.
The amplitudes of the He enhanced models --Y=0.30, Y=0.38-- display minimal changes, since
the latter group was constructed assuming the same luminosity levels and slightly smaller
stellar masses (0.60 vs 0.65 $M_\odot$).
The first overtone pulsators (bottom panel) show similar trends concerning the He
dependence.
This circumstantial evidence indicates that the Bailey diagram is not a good diagnostic
to constrain the helium content of cluster and field RR Lyrae. Moreover, in dealing with
luminosity amplitudes we need to keep in mind two relevant limits.
{\em i)}-- The theoretical Bailey diagram is affected by uncertainties on the mixing length
parameter \citep[see e.g.][]{Marconi03}. A decrease in the convective efficiency causes larger
amplitudes, but the pulsation periods are minimally affected. This means that the mixing
length affects the slopes of the predicted amplitude--period relations, but the systematic
drift as a function of the He content is not affected (Marconi et al. 2011, in preparation).
{\em ii)}-- Recent space (COROT, \citealt{Chadid10}) and ground-based \citep{Kunder10}
observations indicate that the fraction of \hbox{RR Lyrae~} stars affected by the Blazhko phenomenon
is higher than previously estimated ($\approx$50\%, \citealt{Benko10}).
\section{The NIR Period-luminosity relation of RR Lyrae stars}
RR Lyrae variables are relatively bright and have been detected in several Local Group galaxies
(e.g.~\citealt{Dallora03,Dallora06,Pietrynzski08,Greco09,Fiorentino10,Yang10}) and
can be easily identified from their characteristic light curves.
The most popular methods to estimate their distances are to use either the visual
magnitude--metallicity relation, the near-infrared (NIR) Period-Luminosity (PL) relation
(e.g.~\citealt{Bono03b,Cacciari03}), or parallaxes and proper motions \citep{Feast08}.
The reader interested in independent approaches based on RR Lyrae to estimate stellar
distances is referred to the thorough investigations by, e.g.,~\citet{Marconi03,Dicriscienzo04,Feast08}.
The visual magnitude--metallicity relation appears to be hampered by several uncertainties
affecting both the zero-point and the slope~\citep{Bono03}. On the other hand,
the NIR PL relation seems very promising, since it shows several relevant advantages.
\citet{Longmore86} demonstrated, on an empirical basis, that RR Lyrae obey
a well-defined $K$-band PL relation. The reason why the PL relation shows up in the
NIR bands is that the bolometric correction (BC) in the NIR bands, in contrast with the
optical bands, steadily decreases when moving from the hot (blue) to the cool (red) edge
of the RR Lyrae instability strip. This means that they become brighter as they become redder.
The pulsation periods --at fixed stellar mass and luminosity-- become longer, since redder
RR Lyrae have larger radii. The consequence of this intrinsic property is that periods and
magnitudes are strongly correlated when moving toward longer wavelengths.
Theoretical and empirical evidence indicates that the NIR PL relations of RR Lyrae
are robust methods to determine stellar distances.\\
{\em i)}-- The NIR PL relations are minimally affected by evolutionary effects
inside the RR Lyrae instability strip. The same outcome applies to the typical
spread in mass inside the RR Lyrae instability strip~\citep{Bono01, Bono03}.
Therefore, RR Lyrae distances based on the NIR PL relations are minimally
affected by systematics introduced by their evolutionary status.
{\em ii)}-- Theory and observations indicate that fundamental (F) and
first overtone (FO) RR Lyrae do obey independent NIR PL relations.
{\em iii)}-- Theory and observations indicate that the NIR PL relations are
linear over the entire period range covered by F and FO pulsators.
The NIR PL relations also have three observational advantages. \\
{\em i)}-- The NIR magnitudes are minimally affected by uncertainties on
the reddening.
{\em ii)}-- The NIR amplitudes are at least a factor of 2-3 smaller than
in the optical bands. Therefore, accurate estimates of the mean
NIR magnitudes can be obtained with a modest number of observations. Moreover,
empirical K-band light curve templates~\citep{Jones96} can be adopted to
further improve the accuracy of the mean magnitudes.
{\em iii)}-- Thanks to 2MASS, accurate samples of local NIR standard stars
are available across the sky. This means that both relative and absolute NIR
photometric calibrations can be easily accomplished.
The use of the NIR PL relations is also affected by two limits. \\
{\em i)}-- Empirical estimates of the slope of NIR PL relations show a significant
scatter from cluster to cluster. They range from $\sim -1.7$ (IC~4499, Sollima et al. 2006)
to $\sim -2.9$ (M~55, Sollima et al. 2006), and it is not clear whether the
difference is intrinsic or caused by possible observational biases.
{\em ii)}-- Current predictions indicate that the intrinsic spread
of the NIR PL relations decreases by taking into account either the metallicity
or the Horizontal Branch (HB) type
\citep{Bono03,Cassisi04,Catelan04,Delprincipe06}.
Therefore accurate distance estimates of field RR Lyrae do require an estimate of
the metallicity. However, no general consensus has been reached yet concerning the value
of the coefficient of the metallicity term in the NIR Period-Luminosity-Metallicity
(PLZ) relations. The current estimates for the $K$-band range from
$0.08$~\citep{Sollima06} to $0.23$~\citep{Bono03} mag/dex by using cluster and
field RR Lyrae, respectively.
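As a back-of-the-envelope illustration of this uncertainty, the sketch below propagates the two quoted metallicity coefficients (0.08 and 0.23 mag/dex) into the distance modulus of a hypothetical metal-poor star; the star's metallicity and the anchor point of the calibration are made up for the example:

```python
# Sensitivity of an RR Lyrae distance modulus to the K-band metallicity
# coefficient. Only the two coefficients (0.08, 0.23 mag/dex) come from
# the text; the metallicities below are hypothetical.
feh_star, feh_ref = -2.4, -1.5       # star vs. calibration anchor [Fe/H]

shifts = []
for c_z in (0.08, 0.23):
    dM = c_z * (feh_star - feh_ref)  # shift of the predicted M_K (mag)
    shifts.append(dM)
    print(f"c_Z = {c_z:.2f} mag/dex -> Delta M_K = {dM:+.3f} mag")

# Difference between the two choices, propagated to the distance.
delta_mu = abs(shifts[0] - shifts[1])
dist_ratio = 10 ** (delta_mu / 5.0)
print(f"Delta mu = {delta_mu:.3f} mag, distance ratio = {dist_ratio:.3f}")
```

For this star the two calibrations differ by $\sim$0.14 mag in $\mu$, i.e. roughly 6--7\% in distance, which is why the metallicity term matters for field RR Lyrae.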
\begin{figure}
\centering
\includegraphics[width=8cm]{M92_plk.ps}
\vskip0pt
\caption{K-band Period-Luminosity relation of RR Lyrae in the Galactic globular cluster M92,
based on data collected by \citet{Delprincipe05}.
The squares display fundamental (RRab) variables, while the triangles the first overtones (RRc).
The latter were fundamentalized ($\log P_F$= $\log P_{FO}$ + 0.127). The red line shows the
linear fit of the PLK relation and the slope is labeled.
}
\label{M92}
\end{figure}
To address these problems our group undertook a long-term project aimed at providing
homogeneous and accurate NIR photometry for several GCs hosting a good sample of
RR Lyrae and covering a wide range of metal abundances. In the following, we briefly
discuss the slope of the K-band\footnote{The intensity weighted mean magnitudes
of the RR Lyrae discussed in this section were calibrated to the 2MASS NIR photometric
system using either local standards or the transformations provided by \citet{carp01}.}
PL relation when moving from metal-poor to metal-intermediate GCs.
\begin{figure}
\centering
\includegraphics[width=8cm]{Reticulum_plk.ps}
\vskip0pt
\caption{Same as Fig.~2, but for RR Lyrae stars in the metal-poor LMC globular cluster Reticulum
based on data collected by \citet{Dallora04}.
}
\label{Reti}
\end{figure}
{\bf M92)}-- This is the most metal-poor cluster in our sample \citep{Delprincipe05}.
According to the metallicity scale by \citet{Kraft03} based on FeII lines, the
iron content is [Fe/H]=$-$2.38$\pm$0.07. We observed eight fundamental and three first overtones.
The latter were fundamentalized and we found a slope of $-$2.26$\pm$0.20, where the error
only accounts for the uncertainty on the linear fit to the data. By using the K-band
PL relation provided by \citet{Cassisi04} we found a true distance modulus of $\mu$=14.62$\pm$0.04 mag,
which agrees quite well with similar estimates available in the literature
\citep[][and references therein]{Dicecco10}.
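The procedure sketched in the caption and in the text (fundamentalize the first-overtone periods, fit a linear PL($K$) relation, and derive a distance modulus from a calibrated absolute relation) can be illustrated roughly as follows; the periods, magnitudes, and calibration coefficients below are made-up placeholders, not the M92 data:

```python
import numpy as np

# Hypothetical K-band intensity-weighted mean magnitudes and periods for a
# handful of cluster RR Lyrae (values are placeholders, not real data).
logP_ab = np.array([-0.25, -0.22, -0.18, -0.15])   # fundamental (RRab)
K_ab    = np.array([13.95, 13.88, 13.79, 13.72])
logP_c  = np.array([-0.45, -0.42])                 # first overtone (RRc)
K_c     = np.array([14.05, 14.01])

# "Fundamentalize" the first overtones: log P_F = log P_FO + 0.127
logP = np.concatenate([logP_ab, logP_c + 0.127])
K    = np.concatenate([K_ab, K_c])

# Linear fit of the observed PL(K) relation: K = a + b * log P
b, a = np.polyfit(logP, K, 1)
print(f"observed PL(K): K = {a:.2f} {b:+.2f} log P")

# True distance modulus from a calibrated absolute relation
# M_K = alpha + beta * log P (alpha, beta are placeholders standing in for
# calibrations such as those of Bono et al. 2003 or Cassisi et al. 2004).
alpha, beta = -0.77, -2.25
mu = np.mean(K - (alpha + beta * logP))
print(f"true distance modulus: mu = {mu:.2f} mag")
```

In practice the uncertainty on the slope quoted in the text comes from the linear fit, while the error budget of $\mu$ also includes the calibration zero-point.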
{\bf Reticulum)}-- This is an LMC metal-poor cluster \citep{Dallora04}.
According to the metallicity scale by \citet{Suntzeff92} based on Calcium triplet
lines, the iron content is [Fe/H]=$-$1.71$\pm$0.1. We observed 21 fundamental and
five first overtones.
The latter were fundamentalized and we found a slope of $-$2.16$\pm$0.09, where the error
only accounts for the uncertainty on the linear fit to the data. By using the K-band
PL relation provided by \citet{Bono03} we found a true distance modulus to Reticulum
of $\mu$=18.523$\pm$0.005 mag, which agrees quite well with similar LMC distances
available in the literature.
\begin{figure}
\centering
\includegraphics[width=8cm]{M5_plk_new.ps}
\vskip0pt
\caption{Same as Fig.~2, but for RR Lyrae stars in the metal-intermediate globular cluster M5
based on data collected by \citet{Coppola11}.
}
\label{M5}
\end{figure}
{\bf M5)}-- This is the metal-intermediate ([Fe/H]=$-$1.26$\pm$0.06) cluster in our
sample with the richest sample of first overtone (52) and fundamental (24) pulsators \citep{Coppola11}.
By using the entire sample we found a slope of $-$2.33$\pm$0.08. We also found a true
distance modulus to M5 of $\mu$=14.44$\pm$0.02 mag, which agrees quite well with
distances based on different distance indicators. The good agreement also applies
to the kinematic distance. This seems an important finding, since this geometrical
method provides distances that are systematically larger than the distances based on
different distance indicators \citep{Bono08}.
\begin{figure}
\centering
\includegraphics[width=8cm]{omcenplk.eps}
\vskip0pt
\caption{Same as Fig.~2, but for RR Lyrae stars in the giant globular cluster $\omega$ Centauri
based on data collected by \citet{Delprincipe06}.
}
\label{omcen}
\end{figure}
{\bf $\omega$ Cen)}-- Finally, we also estimated the distance to $\omega$ Cen and observed
93 first overtones and 104 fundamentals \citep{Delprincipe06}. The nature of this massive
stellar system is not well established yet \citep{Bekki03}. However, there is general
consensus that the stellar content of this system shows a metallicity distribution with
multiple peaks \citep{Calamida09}. The same outcome applies to the heavy element
abundances (\citealt{Johnson09, Pancino02}, and references therein).
To test the dependence of the slope on the iron abundance we split the sample into
a metal-poor and a less metal-poor subsample. We found that the slopes, within the errors, are very similar.
Therefore, we decided to perform a linear fit over the entire sample and we found a slope
of $-$2.54$\pm$0.09. By using the calibration of the K-band PL relation provided by
\citet{Cassisi04} we found a true distance modulus to $\omega$ Cen
of $\mu$=13.77$\pm$0.07 mag, which agrees quite well with distance estimates available
in the literature \citep{Bono08}.
The iron content of the GGCs we have already investigated, neglecting $\omega$ Cen,
ranges from $-$2.38 to $-$1.26 dex. The slope of the K-band PL relation should change from
$\sim$$-$2.1 \citep{Bono03} to $\sim$$-$2.4 \citep{Delprincipe06}. The former slope is based
on models that include the metallicity term, while the latter includes the HB type.
Current evidence indicates that the slope of the K-band PL relation is marginally affected
by iron abundance, thus supporting both the theoretical predictions and the results by \citet{Sollima06}.
However, more accurate data both in the metal-poor and in the metal-rich regime are
required before we can reach firm conclusions concerning the metallicity dependence
of both the slopes and the zero-points of the NIR PL relations.
\section{Helium abundances of RR Lyrae: where the eagles dare}
Precise abundances of primordial helium ($Y_p$) together with the abundances
of a few other light elements can provide robust constraints on the primordial
nucleosynthesis, and in particular on the baryonic density of the Universe.
Unfortunately, stellar spectra display strong photospheric helium absorption lines in the
visual spectral range only at high effective temperatures ($T_e >$ 10,000 K). High-mass
stars are useless in constraining $Y_p$, since their material was already polluted
by previous stellar generations. Low-mass stars only attain hot effective temperatures
during central helium-burning phases. The typical evolutionary lifetime of these phases
--called Hot and Extreme Horizontal-Branch phases-- is of the order of 100 Myr. However,
these stellar structures cannot be adopted to constrain $Y_p$, since their surface
abundances are affected by gravitational settling and/or by radiative levitation
\citep{Behr03,Moehler04}.
To overcome current observational limits, accurate estimates of $Y_p$ have been
provided using He emission lines in HII regions of blue compact galaxies.
Current estimates agree, within the errors, with the $Y_p$ abundance provided
by WMAP \citep{Larson11}. However, the agreement is not
solid because $Y_p$ is used as a prior in the cosmological solutions. Moreover,
the spectroscopic abundances based on nebular lines might be affected by
systematic errors \citep{Bresolin09, Bresolin11}.
The observational scenario concerning the helium abundance has been recently
enriched by the possible occurrence of a variation in the helium abundance
of Globular Cluster (GC) stars. The presence
of He-enriched stars in GCs has been suggested to explain not only the presence
of multiple unevolved sequences, but also the presence of extended blue HB tails
\citep{Dantona08}. This working hypothesis relies on the
well established anti-correlations between the molecular band-strengths of
CN and CH \citep{Smith87,Kraft94} and between O--Na and Mg--Al measured in
evolved (RG, Horizontal Branch [HB]),
and in unevolved (Main Sequence [MS]) stars of the GCs investigated
so far with high-resolution spectra \citep{Pilachowski83, Gratton04}.
More recently, deep Hubble Space Telescope photometry disclosed the presence
of multiple stellar populations in several massive GCs. Together with $\omega$
Centauri \citep{Bedin04} multiple stellar sequences have been detected in
GCs covering a broad range of metal contents: NGC~2808 \citep{Piotto07},
M54 \citep{Siegel07} and NGC~1851 \citep{Calamida07,Milone08}. Some of these
multiple sequences ($\omega$ Cen, NGC~2808, NGC~1851) might be explained
either with a He-enhanced \citep{Norris04,Dantona08,Piotto07}, or with a
CNO-enhanced \citep{Calamida07, Cassisi08} sub-population.
However, we still lack an empirical validation of the occurrence of He-enhanced
sub-population(s) in GCs.
To overcome the spectroscopic problems, \citet{Iben68} suggested the use of
the R-parameter --the number ratio between HB and RGB stars brighter
than the luminosity level of the HB at the RR Lyrae instability strip (IS)--
to estimate the initial He abundance of cluster stars.\\
Two independent approaches to estimate the helium content in GCs were also suggested by
\citet{Caputo83}: the $\Delta$-parameter --the difference in magnitude between the MS
at (B-V)$_0$=0.7 and the luminosity level of the HB at the RR Lyrae IS-- and the
A-parameter --the mass-to-luminosity ratio of RR Lyrae stars. The pros and cons of
the quoted parameters were discussed in a thorough investigation by \citet{Sandquist00}.
Theoretical and empirical limits affecting the precision of the R-parameter
have been also discussed by \citet{Zoccali00},
\citet{Riello03} and \citet{Salaris04}.
The key feature of the quoted parameters is that they are directly or
indirectly connected with the HB luminosity level. In spite of the improvements
in the photometric precision, in the sample of known cluster HB stars,
we still lack firm empirical methods to estimate the HB luminosity level
in GCs. This problem is partially due to substantial changes in the HB morphology
when moving from metal-poor (blue HB) to metal-rich (red HB) GCs. Moreover, we still
lack a robust diagnostic to constrain the actual off-ZAHB evolution of HB stars
\citep{Ferraro99, Dicecco10, Cassisi11}.
To overcome several of the above drawbacks, \citet{troisi11} suggested a new method
based on the difference in luminosity between the RGB bump and the main sequence
benchmark at the same color of the bump. The new method shows several indisputable
advantages, but also a strong dependence on the metal content.
A new approach has been suggested by \citet{Dupree11} who
detected, in a few RG stars in $\omega$ Cen, the chromospheric
He~I line at 10830\AA. The He line was detected in more
metal-rich stars and it seems to be correlated with Al and Na, but no clear
correlation was found with Fe abundance. However, a more detailed non-LTE analysis
of the absolute abundance of He is required before firm conclusions can be drawn concerning
the occurrence of a spread in helium content.
Interestingly enough, \citet{Preston09} discovered both He~I and
He~II emission and absorption lines in field RR Lyrae stars. The lines show up
during the rising branch of RR Lyrae stars and appear to be the consequence
of the shock propagation soon after the phases of minimum radius
\citep{Bono94b,Chadid96}.
The first detection of helium lines in low-mass variables dates back to
\citet{Wallerstein59}, who detected helium lines in Type II
Cepheids. The key advantage of RR Lyrae stars, when compared with
detections of helium lines in hot and extreme HB stars \citep{Behr03}, is that
they have an extended convective envelope. Therefore, they are minimally
affected by gravitational settling and/or radiative levitation \citep{Michaud04}.
The drawback is that the measurement of the helium abundance requires hydrodynamical
atmosphere models accounting for time-dependent convective transport and radiative
transfer, together with the formation and the propagation of sonic shocks.
Empirical evidence indicates that He absorption and emission lines in RR Lyrae
stars take place along the rising branch of the light curve. Plain
physical arguments suggest that they are triggered by the formation and the development
of strong shocks across the phases of maximum compression. To further constrain the
physical mechanism(s) driving the occurrence of these interesting phenomena, new
high-resolution, high signal-to-noise spectra are required to constrain the dependence
of He lines on evolutionary and pulsation properties of RR Lyrae stars and eventually
to estimate the helium abundance.
\section{Future remarks}
Recent theoretical and observational perspectives indicate that RR Lyrae stars are
heading for a new golden age. This is due to the huge amount of field RR Lyrae stars that
have already been collected by extended photometric surveys. A sample of $\sim$1,200
field RR Lyrae were collected by the Northern Sky Variability Survey (NSVS, \citealt{woz04})
and analyzed by \citet{Kinemuchi06}. They cover distances of $\sim$7-9 kpc in the solar
neighborhood and reach a limiting magnitude of V=15 mag.
An even more ambitious project was realized by the All Sky Automated Survey (ASAS,
\citealt{szcz09}), since it covered the southern
sky up to $\delta$=+28 ($\sim$75\% of the whole sky). The survey has a limiting magnitude of
V$\sim$14 and identified more than 1,450 RR Lyrae within 4 kpc of the Sun.
Deep multiband photometric (ugriz) and spectroscopic data for almost 500 RR Lyrae have been collected
by the Sloan Digital Sky Survey II (SDSS--II, \citealt{sesar10}, Sesar et al. these proceedings).
They have been able to investigate the spatial distribution of halo RR Lyrae stars at
galactocentric distances of 5-100 kpc. A sizable sample of field RR Lyrae (more than 2,000)
was found by \citet{keller08} using V and R-band archival data collected by the
Southern Edgeworth-Kuiper Belt Object (SEKBO) survey. This survey covers 1675 square degrees
along the ecliptic to a mean depth of V=19.5, i.e. up to heliocentric distances of $\sim$50 kpc.
However, the real quantum jump concerning the photometric precision, the depth and the sampling
of current RR Lyrae surveys, is given by the Optical Gravitational Lensing Experiment III
survey (OGLE III, \citealt{pietr11}). They collected V and I-band data over a time interval
of almost ten years, toward the Galactic bulge and identified the richest sample ever
collected of field RR Lyrae, i.e. more than 16,800 stars.
The near future appears even more promising not only concerning the new optical (Pan--STARRS)
and NIR (VISTA) photometric surveys, but also for the spectroscopic surveys (M2FS@Magellan,
FLAMES@VLT). This means that together with pulsation properties, also the metallicity of a
relevant fraction of field RR Lyrae will become available.
It is noteworthy that attacking the open problems mentioned above requires a multiwavelength,
a spectroscopic, and a theoretical approach \citep{Benko10,Marconi11}: a challenge that we have
to face to further improve our knowledge of the detailed structure of the Galactic spheroid
and the accuracy of the RR Lyrae distance scale. The goal is not trivial:
not only to further constrain the structure of the Galactic bulge and its interaction
with the Galactic Bar, but also to trace the possible transition from thick to thin
disc stars \citep{Kinemuchi06}.
Note that up to now we still lack firm empirical evidence concerning the presence of old,
low-mass stars --like RR Lyrae stars-- in the thin disc. There are a few field stars that are
good candidates (TV Lib), but their intrinsic properties might be significantly different
from those of typical RR Lyrae \citep{bono97}. RR Lyrae can be even more relevant as beacons
to trace the stellar streams in the Galactic halo (Marconi et al. 2006) and to constrain the
outermost radial extent of the Galactic halo.
This new spin will bring RR Lyrae stars back to center stage, and in this ongoing effort
George Preston will always be a reference point for suggestions and new ideas.
\acknowledgments
One of us (G.B.) thanks G. Preston and A. McWilliam for the invitation and for the support to
attend this exciting meeting. This project was partially supported by the PRIN-INAF
(P.I.: R. Gratton).
\section{Introduction}
In this
paper we deal with the stability of a dynamical system from a statistical point of view and its connection to the existence of a class of dynamical systems called \textit{non-statistical dynamics}. Roughly speaking, non-statistical dynamics are those dynamical systems for which a large subset of points in the phase space (a positive measure subset) have non-statistical behavior, meaning that the orbits of these points do not have an asymptotic distribution in the phase space. Here we have fixed a reference measure on the phase space. The notion of \textit{statistical stability} of a map within a family of dynamics is related to the possibility of producing an essential change in the statistical properties of the initial map by arbitrarily small perturbations. We give a general formalization of this notion and show how it is connected to the existence of non-statistical dynamics. In the following we first review the literature on non-statistical dynamics, and then give a more detailed introduction to the notion of statistical stability.\\
\subsection*{Non-statistical dynamics:} Assume $X$ is a compact Riemannian manifold, let $f:X\to X$ be a continuous map, and let $\mu$ be a probability measure whose density with respect to the Lebesgue measure is a smooth positive function. By $\mathcal{M}_1(X)$ we denote the space of probability measures on $X$. For a point $x\in X$ the $n^{\text{th}}$ empirical measure
\begin{equation*}
e_{n}^{f}(x):=\frac{1}{n} \sum_{i=0}^{n-1} \delta_{f^i (x)},
\end{equation*}
describes the distribution of the orbit of $x$ up to the $n^\text{th}$ iteration in the phase space, which asymptotically may or may not converge. If it converges, then by observing a finite number of iterations (but possibly large) one can predict how the orbit of $x$ behaves approximately for larger iterations, from statistical point of view. However it may not converge. In this case we say that $x$ is non-statistical. The non-statistical points exist in many well-known dynamics. For example, in the symbolic dynamics (see Example \ref{exmpl symbolic dynamics}) and any other dynamics that has a shift map as a subdynamics. One of the first results in this area is the Baire genericity of non-statistical points within the phase space for subshifts of finite type, Anosov diffeomorphisms and more generally any map with periodic specification property (see \cite{Zig70},\cite{Zig74} and \cite[Theorem 1.1.11]{GK16}). In these examples the set of points with non-statistical behavior is of zero measure with respect to the natural reference measure on the phase space. One can ask how large can be the set of non-statistical points. Can it be of positive Lebesgue measure? If yes, how large is the subset of maps having this behavior? In most of well-known examples, the non-statistical points have zero measure. For example if a dynamical system $f$ preserves the measure $\mu$, by Birkhoff ergodic theorem, we know that the sequence of empirical measures converge for almost every points, and hence, the non-statistical points, if they exist, have zero measure. 
There are also many examples of non-conservative maps whose statistical behavior converges on a full measure set of points. For example, in the logistic family $f_\lambda(x)=\lambda x(1-x)$, Jakobson proved that there is a set of parameters with positive measure such that for any parameter in this set, the corresponding map has a unique ergodic absolutely continuous invariant probability measure and the empirical measures of Lebesgue almost every point converge to this unique measure (see \cite{Ja81}).\\
However, there are only a few known examples of dynamical systems on smooth manifolds having a positive Lebesgue measure set of non-statistical points. Let us call any map with this kind of behavior a non-statistical map (later we introduce different versions of this definition).
One of the first examples of non-statistical dynamical systems is the so-called Bowen eye \cite{takens1994heteroclinic}. It is a vector field on $\mathbb{R}^2$ with an eye-like open region such that Lebesgue almost every point in this region is non-statistical (see Example \ref{example Bowen eye}). This region is bounded by two saddle connections between two hyperbolic equilibrium points (see Figure \ref{bowen eye}). Such examples are highly non-persistent, since saddle connections are so. In \cite{CV01}, Colli and Vargas introduced a new kind of example of non-statistical dynamics: a diffeomorphism of a two-dimensional surface with a non-trivial wandering domain such that every point in this domain has non-statistical behavior. Such a diffeomorphism was obtained by a careful perturbation of an initial diffeomorphism with a thick horseshoe having a tangency between its stable and unstable sets. In \cite{Kriki&Soma}, Kiriki and Soma showed that on any closed surface $M$ and any open Newhouse domain $\mathcal{N}$ in $\text{Diff}^r(M)$ for $2\leqslant r<\infty$, the maps which have a non-trivial wandering domain with non-statistical behavior are dense in $\mathcal{N}$ (see also \cite{Ru01}, \cite{KYT19} and \cite{KLS16} for other related results). Let us also mention the work of Crovisier et al. \cite{crovisier2020empirical}, which contains examples of non-statistical maps in the context of partially hyperbolic diffeomorphisms. There is also an explicit example of a non-statistical diffeomorphism of the annulus, introduced by Herman, that can be found in \cite{Yoc-Her}.
There are also examples of non-statistical maps within more specific families of dynamical systems (e.g.\ polynomial maps), where one loses the possibility of local perturbations as a mechanism to control the statistical behavior of the orbits. One recent example of a specific family displaying non-statistical behavior is the work of Berger and Biebler \cite{Berger-Biebler2020}. They prove the existence of real polynomial automorphisms of $\mathbb{C}^2$ having a wandering Fatou component on which the dynamics has non-statistical behavior. Their work also contains a generalization of the result of Kiriki-Soma \cite{KS15} to the case $r=\infty$ or $\omega$, as well as the result in \cite{KNS2019}. Another example of non-statistical dynamics appears in the logistic family $f_\lambda:[0,1]\to [0,1]$, where $f_\lambda(x)=\lambda x(1-x)$ and $\lambda\in [0,4]$. Hofbauer and Keller showed in \cite{hofbauer1990} that there are uncountably many parameters $\lambda \in [0,4]$ such that $f_\lambda$ is non-statistical. Later, in \cite{HK2}, they proved that there are uncountably many parameters $\lambda\in[0,4]$ for which the map $f_\lambda$ has the maximal oscillation property:
\begin{theorem*}[{Hofbauer-Keller \cite{HK2}}]\label{theorem HK}
There exist uncountably many $\lambda\in [0,4]$ for which $f_\lambda$ has maximal oscillation:
\begin{equation*}
\text{for Lebesgue } a.e. \ \ x\in[0,1],\ \ acc(\{e^{f_\lambda}_{n}(x)\}_{n})=\mathcal{M}_1(f_\lambda),
\end{equation*}
where $\mathcal M_1(f_\lambda)$ is the set of all invariant probability measures of $f_\lambda$.
\end{theorem*}
In \cite{talebi20}, the author proves the Baire genericity of non-statistical behavior (and even of maximal oscillation) within the maximal bifurcation locus of rational maps of degree $d>1$. Let us denote the maximal bifurcation locus by $\Lambda$:
\begin{theorem*}[{Talebi \cite{talebi20}}]\label{mainTheorem}
For a Baire generic map $f$ in the maximal bifurcation locus, the set of accumulation points of the sequence of empirical measures is equal to the set of invariant measures of $f$ for Lebesgue almost every point.
\end{theorem*}
We note that for a rational map $f$ of degree larger than one, the set of invariant measures $\mathcal M_1(f)$ is large, and in particular has more than one element; hence a generic map in the maximal bifurcation locus is non-statistical.
\subsection*{Statistical instability and non-statistical dynamics.} Statistical instability of a dynamical system is, roughly speaking, the possibility of making essential changes in the statistical properties of a map by arbitrarily small perturbations. In contrast to structural stability, which is stricter and sensitive to the topological behavior of every point in the phase space, statistical stability concerns the statistical behavior of almost every point and disregards the dynamical behavior of sets of measure zero. Axiom A maps are examples of statistically stable dynamics, since they are in fact structurally stable, which is a stronger condition. Beyond structurally stable dynamics, there are other examples of statistically stable maps. Alves and Viana \cite{AV02} study a class of dynamics formed by statistically stable maps: the dynamics they study have a unique physical measure with a full basin, and stability or instability of a map amounts to the continuity or discontinuity of the assignment sending the dynamics to its unique physical measure. There are several other works in this direction, among which we quote \cite{APV16}, where statistical stability is proved for multidimensional piecewise expanding maps, the result of Baladi-Benedicks-Schnellmann \cite{Baladi2015}, the results in \cite{ACF10}, where statistical stability is proved for H\'enon maps of Benedicks-Carleson type, and the paper \cite{T01}, where statistical instability is proved for certain maps in the quadratic family (see also \cite{AS13}). We generalize the notion of statistical (in)stability and define it in full generality for any dynamical system, independently of its statistical behavior. In particular, we do not assume that the system has a physical measure. We then study the connection between this notion and the existence of non-statistical maps in a given family of dynamics.
To be more precise, consider the $n^{\text{th}}$ empirical function $e^f_n:X\to \mathcal M_1(X)$, sending each point $x\in X$ to its $n^{\text{th}}$ empirical measure $e^f_n(x)$. We study three types of (non-)convergence of the sequence $\{e^f_n\}_n$: almost sure convergence, $L^1$ convergence and convergence in law. We define non-statistical maps and statistical instability for each kind of (non-)convergence and show how these two notions are related in each topology.
Let us start by explaining the results regarding convergence in law. If we push forward the reference measure $\mu$ on $X$ to the space of probability measures on $X$ using an empirical function, we obtain a probability measure on the space of probability measures on $X$, denoted as follows:
$$\hat{e}_n^f:=(e^f_n)_{*}(\mu) \in \mathcal M_1(\mathcal M_1(X)).$$
A map $f$ is called non-statistical in law if the sequence $\{\hat{e}_n^f\}_n$ is not convergent. Let us denote the set of accumulation points of this sequence by $acc(\{\hat{e}_n^f\}_n)$, which is a compact subset of $\mathcal M_1(\mathcal M_1(X))$. Now let $\Lambda$ be a closed subset of $C^0(X,X)$ endowed with a topology finer than the $C^0$ topology. In general, the set-valued map sending the dynamics $f\in\Lambda$ to the set $acc(\{\hat{e}_n^f\}_n)$ has no regularity for the Hausdorff topology on the space of compact subsets of $\mathcal M_1(\mathcal M_1(X))$. However, a simple but important observation is that if one considers the sequence together with its accumulation points, the resulting map enjoys a semi-continuity property. As a consequence we obtain the following lemma, which is the main lemma of this section:
\begin{main lemma}
A Baire generic map $f\in\Lambda$ is a continuity point for the map $\mathcal E$ where
$$\mathcal E(f)=\overline{\{\hat{e}_n^f\mid n\in \mathbb{N}\}}.$$
\end{main lemma}
To define statistical instability of a map, we need some more definitions. The space $\mathcal M_1(\mathcal M_1(X))$ endowed with the weak-$*$ topology is a compact metric space. Let $\hat{\nu}$ be an element of $\mathcal M_1(\mathcal M_1(X))$. We say $f\in \Lambda$ \emph{statistically bifurcates toward} $\hat{\nu}$ if it can be approximated by elements of the form $\hat{e}_{n_k}^{f_k}$, where $f_k$ approaches $f$ and $n_k$ goes to infinity. Let $\mathcal B_{\Lambda,f}$ be the subset of those elements of $\mathcal M_1(\mathcal M_1(X))$ toward which $f$ statistically bifurcates. We can think of the set $\mathcal B_{\Lambda,f}$ as the set of all asymptotic statistical behaviors that the family $\Lambda$ can display locally around the map $f$. We say $f\in \Lambda$ is \emph{statistically unstable in law} if and only if $\# \mathcal B_{\Lambda,f}>1$.
The following theorem is our main theorem regarding the connection between statistical instability and non-statistical maps in the level of convergence in law:
\begin{theorem}\label{thm.generic BLambda}
Baire generically, $\mathcal B_{\Lambda,f}$ is equal to $acc(\{\hat{e}_n^f\}_n)$.
\end{theorem}
We also investigate statistical stability and non-statistical maps from the point of view of $L^1$ (non-)convergence. We say a map $f$ is \textit{$L^1$ non-statistical} if the sequence of maps $e^f_{n}:X\to \mathcal M_1(X)$ is not convergent for the $L^1$ topology (see (\ref{L^1 distance}) for the definition of $d_{L^1}$):
\begin{equation*}
\limsup_{m,n\to\infty}d_{L^1}(e^f_{n},e^f_{m})>0.
\end{equation*}
We say a map $f\in \Lambda$ is $L^1$ statistically unstable if the following quantity is positive:
\begin{equation*}
\limsup_{h,g\to f,m,n\to \infty} d_{L^1}(e^h_{n},e^g_{m}).
\end{equation*}
This quantity measures how different the statistical behaviors of maps approaching $f$ can be for iterations close to infinity. If a map is $L^1$ non-statistical then, by the definitions, it is $L^1$ statistically unstable, but the existence of $L^1$ statistically unstable maps in a family of dynamics does not necessarily imply the existence of $L^1$ non-statistical maps (see Example \ref{exp-IdS1}). However, the following theorem states that if a family $\Lambda$ contains sufficiently many $L^1$ statistically unstable maps, then we can conclude the existence of $L^1$ non-statistical maps within that family.
\begin{theorem}\label{Thm-generic non-stat}
The $L^1$ non-statistical maps form a Baire generic subset of the interior of $L^1$ statistically unstable maps.
\end{theorem}
The almost sure (non-)convergence version of the above definitions and results is very similar to the $L^1$ version, and we do not explain it in the introduction. \\
As an application of the results in this abstract setting, we can improve the result of Hofbauer and Keller in the following way. Let $\Lambda$ be the closure of the set of parameters found by Hofbauer and Keller \cite{HK2}; in Section \ref{bif-to} we prove:
\begin{theorem}\label{thm.HK.max.oscl}
The set of parameters $\lambda$ for which the map $f_\lambda $ has maximal oscillation is a Baire generic subset of $\Lambda$.
\end{theorem}
Our initial motivation for developing an abstract setting was to capture those properties of the family of rational maps that imply the existence of non-statistical behavior within this family, and to develop a setting that allows us to prove the existence of non-statistical dynamics in other families having the same properties. Surprisingly, the theorems and lemmas proved in this abstract setting turned out to have applications in the world of conservative dynamics, where we know there are no non-statistical dynamics. \\
\subsection*{Application to conservative dynamics.}
Let $X$ be a smooth compact manifold and $\Lambda \subset \text{Diff}^r_{Leb}(X)$. To each map $f\in\Lambda$ we associate the ergodic decomposition $\hat{\mu}_f\in \mathcal M_1(\mathcal M_1(X))$ of the Lebesgue measure. Observe that
$$\hat{\mu}_f=\lim_n \hat{e}_n^f.$$
Using Theorem \ref{thm.generic BLambda} we conclude the following result regarding the continuity of the ergodic decomposition with respect to the dynamics. We note that this theorem was proved previously by Avila and Bochi in \cite{AB09}, but our approach to the proof is different.
\begin{theorem*}[{Avila-Bochi \cite[Theorem B]{AB09}}]
A generic $f\in\text{Diff}^r_{Leb}(X)$ is a continuity point of the map $f\mapsto\hat{\mu}_f$.
\end{theorem*}
Another part of this paper shows the existence of non-statistical dynamics within the Anosov-Katok diffeomorphisms of the annulus:
\subsection*{Maximally oscillating Anosov-Katok maps of the annulus.} Let us call the closure of the set of those $C^r$ diffeomorphisms of the annulus which are $C^r$-conjugate to a rotation the space of $C^r$ Anosov-Katok maps, and denote it by $\mathcal{AK}^r$. Our next result shows the existence, and indeed Baire genericity, of maximally oscillating dynamics in this space:
\begin{theorem}\label{Anosov-Katok example}
A Baire generic map in the set of Anosov-Katok maps $\mathcal {AK}^r$ has exactly two ergodic invariant measures, each supported on a different boundary component of the annulus, and moreover the map is maximally oscillating.
\end{theorem}
\subsection*{Questions:}
This paper provides tools to study the statistical behavior of generic dynamical systems in an abstract class. When the class of dynamical systems is formed by dissipative $C^r$-diffeomorphisms of a compact manifold, this study is traditionally related to the notion of physical measures. We recall that an invariant probability measure $\nu$ is \emph{physical} if its basin $B_\nu:=\{x\in M: e^f_n(x)\to \nu\}$ has positive Lebesgue measure. \\
\textbf{Question:}
For $r \geqslant 2$, is it true for generic $f$ in $\mathrm{Diff}^r(M)$ that the union of the basins of the physical measures of $f$ has full Lebesgue measure in M?\\
This question was asked by Wilkinson and Shub in \cite{WS00}, but had been in the minds of several other people
(see \cite{Palis00}, \cite{BV00} and \cite{PS95}).\\
Let us now relax the conditions on physical measures and develop some questions on the abundance of non-statistical dynamics.
\textbf{Question:}
Is there any non-trivial family of dynamics having a positive measure subset of non-statistical maps?\\
\textbf{Question:}
Is there any open subset of dynamics in which the non-statistical maps are generic? In Newhouse domains?\\
These questions are related to the following:\\
\textbf{Question} (Takens' last problem, \cite{Takens08})\textbf{:}
Can non-statistical dynamics exist persistently within a non-trivial class of smooth dynamical systems? \\
\subsection*{Acknowledgment.}
I would like to give special thanks to Pierre Berger for his very useful comments and discussions.\\
I am also thankful to Meysam Nassiri for very useful discussions and comments.
I acknowledge Bassam Fayad, who gave us the idea of improving our result from the Baire genericity of non-statistical Anosov-Katok maps to the Baire genericity of maximally oscillating Anosov-Katok maps.\\
I acknowledge financial support during the writing of this paper from: the Institute for Research in Fundamental Sciences (IPM), Campus France, the Iranian Ministry of Science, Research and Technology, Universit\'e Paris 13 and the ERC project 818737 \textit{Emergence of wild differentiable dynamical systems}.
\section{Preliminaries}\label{basic defs}
Let $X$ be a compact metric space endowed with a reference (Borel) probability measure $\mu$, and let $\Lambda$ be a subset of the continuous self-mappings of $X$ endowed with a topology finer than the $C^0$ topology. For instance, $\Lambda$ can be a subset of the $C^r$ self-mappings of a smooth manifold, endowed with the $C^r$ topology, and $\mu$ a probability measure whose density w.r.t. a Lebesgue measure is a smooth positive function.
For a compact metric space $(X,d)$, let us denote the space of probability measures on $X$ by $\mathcal M_1(X)$. This space can be endowed with the weak-$*$ topology, which is metrizable, for instance by the \emph{Wasserstein metric}, where the distance $d_w$ between two probability measures $\nu_1$ and $\nu_2$ is defined as follows:
$$d_w(\nu_1,\nu_2):=\inf_{\zeta \in \pi(\nu_1,\nu_2)}\int_{X\times X} d(x,y) d\zeta \ ,$$
where $\pi(\nu_1,\nu_2)$ is the set of all probability measures on $X\times X$ whose projection on the first coordinate is equal to $\nu_1$ and on the second coordinate is equal to $\nu_2$. The Wasserstein distance induces the weak-$*$ topology on $\mathcal M_1(X)$, and hence the compactness of $(X,d)$ implies that $(\mathcal M_1(X),d_w)$ is a compact and complete metric space.
We should note that our results and arguments in the rest of this note hold for any other metric inducing the weak-$*$ topology on the space of probability measures.\\
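For empirical measures on a compact interval, this distance can be computed exactly: in dimension one the monotone coupling is optimal, so two uniform empirical measures with the same number of atoms are matched by their order statistics. The following Python sketch is our illustration of this special case (the function name is ours, not from the text):

```python
# Sketch (our illustration): for two uniform empirical measures on [0,1] with
# the same number of atoms, the monotone coupling is optimal in dimension one,
# so d_w is the average distance between matched order statistics.
def wasserstein_1d(xs, ys):
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

# Dirac mass at 0 versus Dirac mass at 1 (each written with 4 equal atoms):
print(wasserstein_1d([0.0] * 4, [1.0] * 4))  # 1.0
# Four uniform atoms versus the same atoms shifted by 1/8:
print(wasserstein_1d([0.0, 0.25, 0.5, 0.75],
                     [0.125, 0.375, 0.625, 0.875]))  # 0.125
```

This special case is all that is needed to experiment numerically with the empirical measures below.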
For a point $x\in X$ and a map $f:X \rightarrow X$, the \emph{empirical measure}
\begin{equation*}
e_{n}^{f}(x):=\frac{1}{n} \sum_{i=0}^{n-1} \delta_{f^i (x)}
\end{equation*}
describes the distribution of the orbit of the point $x$ up to the $n^\text{th}$ iteration in the phase space, which asymptotically may or may not converge.
If it converges, then by observing a finite (but possibly large) number of iterations one can predict, from a statistical point of view, how the orbit of $x$ behaves approximately for larger iterations. However, it may not converge. For this case we fix the following terminology:
\begin{definition}
For a map $f:X\to X$ we say the orbit of a point $x$ \textit{displays non-statistical behavior}, or briefly that $x$ is \textit{non-statistical}, if the sequence $\{ e_{n}^f(x)\}_{n}$ is divergent.
\end{definition}
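As a toy contrast to this definition, consider the contraction $f(x)=x/2$ on $[0,1]$: every orbit converges to the fixed point $0$, and the empirical measures $e_n^f(x)$ converge to $\delta_0$ in the Wasserstein metric, so every point is statistical. The following Python sketch (our illustration; for measures on the line, $d_w(e^f_n(x),\delta_0)$ is simply the mean of the orbit segment) checks this numerically:

```python
# Sketch (our illustration): empirical measures of the contraction f(x) = x/2.
def orbit(x, n):
    # the atoms of the n-th empirical measure e_n^f(x): x, f(x), ..., f^{n-1}(x)
    pts = []
    for _ in range(n):
        pts.append(x)
        x = x / 2.0
    return pts

def dist_to_delta0(x, n):
    # Wasserstein distance of e_n^f(x) to the Dirac mass at 0 is the orbit mean
    pts = orbit(x, n)
    return sum(pts) / n

# The distance decays like 2x/n, so e_n^f(x) -> delta_0 and x is statistical.
for n in (10, 100, 1000):
    print(n, dist_to_delta0(0.9, n))
```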
\begin{example}\label{exmpl symbolic dynamics}
This example shows the existence of non-statistical points for a well-known dynamics: the shift map $\sigma$ on $X=\{0,1\}^{\mathbb Z}$.
Consider a point $\omega\in X$
\begin{equation*}
\omega= 0.\underbrace{0...0}_{{n_1}}\underbrace{0101...0101}_{{n_2}}\underbrace{0...0}_{{n_3}}...
\end{equation*}
made by putting together consecutive blocks of zeros and blocks of alternating zeros and ones, where the length of the $i^{\text{th}}$ block is $n_i$, satisfying
$$\lim_{i\to\infty} \frac{n_i}{n_{i+1}}=0.$$
Then one can easily check that $\omega$ is a non-statistical point.
\end{example}
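A quick numerical check of this construction (our illustration, with the concrete choice $n_i=(i+3)!$, so that $n_i/n_{i+1}\to 0$): the frequency of the symbol $1$ in the prefix of length $n$, which is the mass that $e^{\sigma}_n(\omega)$ gives to the cylinder $[1]$, oscillates between values near $0$ and values near $1/2$ along block boundaries, so the sequence of empirical measures diverges:

```python
from math import factorial

# Sketch (our illustration): build a finite prefix of omega with block lengths
# n_i = (i+3)!; odd blocks are runs of 0, even blocks repeat "01".
blocks = []
for i in range(1, 7):
    n = factorial(i + 3)
    blocks.append("0" * n if i % 2 == 1 else "01" * (n // 2))

# Frequency of 1 in the prefix ending at each block boundary: this is the mass
# the empirical measure assigns to the cylinder [1] at those times n.
freqs, prefix = [], ""
for b in blocks:
    prefix += b
    freqs.append(prefix.count("1") / len(prefix))
print([round(f, 3) for f in freqs])  # alternates between low and high values
```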
One can ask how large the set of points for which the empirical measures do not converge can be. Can it have positive measure? Here is an example answering this question:
\begin{example}[The Bowen eye]\label{example Bowen eye}
One of the first examples of non-statistical maps was given by Bowen. It is a vector field in the plane with an eye-like region having two saddle fixed points in the corners and two saddle connections as the boundary of this region (see Figure \ref{Boweneye}). The vector field has a source equilibrium point inside this region, and the orbits of all points except this fixed point converge to the boundary. Let us denote the two equilibrium points in the corners by $A$ and $B$, the unstable and stable eigenvalues of the linearization of the vector field at $A$ by $\alpha_+$ and $-\alpha_-$, and those at $B$ by $\beta_+$ and $-\beta_-$. For suitable choices of these numbers, the time-one map of the vector field becomes a non-statistical diffeomorphism of $\mathbb{R}^2$ with respect to the Lebesgue measure restricted to the eye-like region.
Takens introduced in \cite{takens1994heteroclinic} the moduli associated with the upper and lower saddle connections, denoted respectively by $\lambda$ and $\sigma$. They are defined by
$$\lambda=\alpha_-/\beta_+ \text{ and } \sigma=\beta_-/\alpha_+.$$
The following theorem was first proved by Gaunersdorfer in \cite{gaunersdorfer1992time} and restated by Takens in \cite{takens1994heteroclinic}:
\begin{theorem*}
If $g$ is a continuous function on $\mathbb{R}^2$ with $g(A)>g(B)$, and $x(t)$ is an orbit converging to the cycle, then we have:
\begin{align*}
&\limsup_{T\to \infty}\frac{1}{T}\int_0^T g(x(t))dt=\frac{\sigma}{1+\sigma}g(A)+\frac{1}{1+\sigma}g(B)\\
&\liminf_{T\to \infty}\frac{1}{T}\int_0^T g(x(t))dt=\frac{\lambda}{1+\lambda}g(B)+\frac{1}{1+\lambda}g(A).
\end{align*}
\end{theorem*}
Let us denote the time-$t$ map of the vector field by $\phi_t$.
\begin{corollary}
The diffeomorphism $\phi_1$ is non-statistical with respect to the restriction of the Lebesgue measure to the eye-like region.
\end{corollary}
\begin{figure}
\centering
\includegraphics[scale=0.15]{Bowen.jpg}
\caption{Bowen eye}
\label{bowen eye}
\label{Boweneye}
\end{figure}
\end{example}
\begin{proof}
By suitable choices of the eigenvalues, we can ensure that the limsup and the liminf in the theorem are not equal. In fact, this is the case if
$$ \alpha_{-} \beta_{-} \neq \alpha_{+}\beta_{+}.$$
In this case, the time averages of the function $g$ along the orbit of almost every point in the eye-like region oscillate between the limsup and the liminf, and so do not converge. Assume, for the sake of contradiction, that for a point $x_0$ in the eye-like region which is not the source, the sequence of empirical measures $\{e^{\phi_1}_n(x_0)\}_n$ converges to a probability measure $\nu$. Then we have
$$\lim_{n\to \infty}\int_{\mathbb{R}^2} g(z)\,d(e^{\phi_1}_n(x_0))(z)=\int_{\mathbb{R}^2}g(z)\,d\nu(z).$$
On the other hand, the orbit of the point $x_0$ asymptotically spends most of its time around the two fixed points $A$ and $B$. Indeed, for any neighbourhood $U$ of these two points, the time that the orbit of $x_0$ spends in $U$ in each visit exceeds the time it spent during the previous visit, while the time between two consecutive visits is uniformly bounded. As a consequence, we conclude that the time averages of the map $\phi_1$ are asymptotically the same as the time averages of the continuous-time system $\phi_t$, and so:
\begin{align*}
\lim_{n\to \infty}\int_{\mathbb{R}^2} g(z)d(e^{\phi_1}_n(x_0))(z)&=
\lim_{n\to \infty}\frac{1}{n}\sum_{i=0}^{n-1} g(\phi_i(x_0)) \\
&=\lim_{n\to \infty}\frac{1}{n}\int_0^n g(\phi_t(x_0))dt,
\end{align*}
which is a contradiction since, by our assumption, the last limit does not exist.
So the sequence of empirical measures is not convergent for the point $x_0$.
\end{proof}
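A numerical sanity check of this criterion (our illustration; the eigenvalue choices below are hypothetical): evaluating the Gaunersdorfer-Takens formulas, the limsup and liminf coincide exactly when $\sigma\lambda=1$, i.e. when $\alpha_-\beta_-=\alpha_+\beta_+$:

```python
# Sketch (our illustration): the two bounds of the theorem above, written as
# functions of g(A), g(B) and the four eigenvalues at the saddles A and B.
def time_average_bounds(gA, gB, a_plus, a_minus, b_plus, b_minus):
    lam = a_minus / b_plus          # modulus of the upper saddle connection
    sig = b_minus / a_plus          # modulus of the lower saddle connection
    upper = sig / (1 + sig) * gA + 1 / (1 + sig) * gB   # the limsup
    lower = 1 / (1 + lam) * gA + lam / (1 + lam) * gB   # the liminf
    return upper, lower

# Resonant case a_- * b_- == a_+ * b_+ : the bounds coincide (value 0.5 here).
print(time_average_bounds(1.0, 0.0, 2.0, 1.0, 1.0, 2.0))
# Non-resonant case: upper > lower, so the time averages oscillate and phi_1
# is non-statistical on the eye-like region.
print(time_average_bounds(1.0, 0.0, 1.0, 1.0, 1.0, 3.0))
```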
\section{The $L^1$ convergence version}
We define the $n^{\text{th}}$ empirical function of a map $f$ to be the map $e^f_n:X\to \mathcal M_1(X)$ sending a point $x\in X$ to the $n^{\text{th}}$ empirical measure $e^f_n(x)$. We study the $L^1$ (non-)convergence of the sequence of empirical functions. For this purpose, we need some definitions.
Let us denote the space of Borel measurable maps from $X$ to $\mathcal M_1(X)$ by $L^1(X,\mathcal M_1(X))$. Note that since the empirical functions are continuous with respect to $x$, they are elements of $L^1(X,\mathcal M_1(X))$.
We endow this space with a metric in which the distance between two elements $e,e'\in L^1(X,\mathcal M_1(X))$ is defined as follows:
\begin{equation}\label{L^1 distance}
d_{L^1}(e,e')=\int _X d_w(e(x),e'(x))d\mu.
\end{equation}
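The following Python sketch (our illustration) approximates $d_{L^1}$ on a grid of base points for the toy contraction $f(x)=x/2$ on $[0,1]$, with $\mu$ the uniform measure; the inner $d_w$ is computed exactly for one-dimensional empirical measures by replicating atoms to a common count and matching order statistics:

```python
from math import lcm

# Sketch (our illustration): exact 1-Wasserstein distance between two uniform
# empirical measures on the line, via the monotone (sorted) coupling.
def w1(xs, ys):
    L = lcm(len(xs), len(ys))
    xs = sorted(xs * (L // len(xs)))
    ys = sorted(ys * (L // len(ys)))
    return sum(abs(a - b) for a, b in zip(xs, ys)) / L

def orbit(x, n):
    # atoms of e_n^f(x) for the contraction f(x) = x/2
    return [x / 2.0**i for i in range(n)]

def d_L1(n, m, grid=100):
    # grid approximation of the integral defining d_{L^1}(e_n^f, e_m^f)
    xs = [(k + 0.5) / grid for k in range(grid)]
    return sum(w1(orbit(x, n), orbit(x, m)) for x in xs) / grid

# The distances between far-apart indices shrink, so the sequence of empirical
# functions looks Cauchy: this f is not L^1 non-statistical.
print(d_L1(10, 40), d_L1(40, 160))
```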
Let us study the convergence of the sequence of empirical functions with respect to this metric:
\begin{definition}
We say a map $f$ is \textit{$L^1$ non-statistical} if the sequence of maps $e^f_{n}:X\to \mathcal M_1(X)$ is not convergent for the $L^1$ topology:
\begin{equation*}
\limsup_{m,n\to\infty}d_{L^1}(e^f_{n},e^f_{m})>0.
\end{equation*}
\end{definition}
In the following we introduce a condition which, if satisfied by a Baire space of dynamics $\Lambda$, allows us to conclude the existence of $L^1$ non-statistical maps within $\Lambda$.
Let us first quantify the extent to which the statistical behavior of a map $f\in \Lambda$ can be changed by small perturbations. To this aim, we propose the following definition:
\begin{definition}
The \emph{amplitude of $L^1$ statistical divergence w.r.t.\ $\Lambda$} of a map $f\in\Lambda$ is the non-negative real number
\begin{equation*}
\Delta^1(f):= \limsup_{\substack{{h,g\to f,m,n\to \infty}\\ {h,g\in \Lambda}}} d_{L^1}(e^h_{n},e^g_{m}).
\end{equation*}
\end{definition}
Note that the definition of $\Delta^1$ depends also on the set $\Lambda$, and not only on the map $f$; however, for the sake of simplicity we hide this dependence in the notation.
Observe that if $\Delta^1$ is positive at $f$, then the asymptotic behaviors of dynamics close to $f$ are very sensitive to perturbations, and in this sense the map $f$ is statistically unstable. We introduce the following definition:
\begin{definition}
A map $f\in \Lambda$ is $L^1$ \textit{statistically unstable with respect to $\Lambda$ } if its amplitude of $L^1$ statistical divergence w.r.t. $\Lambda$ is positive:
$$\Delta^1(f)>0.$$
\end{definition}
\begin{example}\label{exp-IdS1}
Suppose $\Lambda$ is the set of rigid rotations of $\mathbb{S}^1$. The identity map $Id_{\mathbb{S}^1}\in\Lambda$ is $L^1$ statistically unstable: the empirical measures of all of its points are atomic, whereas we can approach the map $Id_{\mathbb{S}^1}$ by irrational rotations, for which the empirical measures of any point are close to the Lebesgue measure for large enough iterations.
\end{example}
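A numerical illustration of this example (ours; the observable and the step size are arbitrary choices): for the identity, the Birkhoff average of $g(x)=\cos(2\pi x)$ stays at $g(x_0)$ for every $n$, while for a nearby irrational rotation the orbit equidistributes and the average tends to $\int g \, d\mathrm{Leb}=0$:

```python
from math import cos, pi

# Sketch (our illustration): Birkhoff average of g(x) = cos(2*pi*x) along the
# orbit of the circle rotation x -> x + alpha (mod 1).
def time_avg(alpha, x0, n):
    return sum(cos(2 * pi * ((x0 + k * alpha) % 1.0)) for k in range(n)) / n

x0 = 0.1
# Identity (alpha = 0): the empirical measures are the Dirac mass at x0, so
# the average equals cos(2*pi*x0) for every n.
print(time_avg(0.0, x0, 10**5))
# A small irrational rotation: the orbit equidistributes, so for n large the
# average is close to the Lebesgue integral of g, which is 0.
print(time_avg(0.001 * 2**0.5, x0, 10**5))
```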
Now we investigate the relationship between $L^1$ statistical instability and the existence of $L^1$ non-statistical maps. It is clear that if a map $f$ is $L^1$ non-statistical then $\Delta^1(f)>0$, and so $f$ is $L^1$ statistically unstable; but the existence of an $L^1$ statistically unstable map does not necessarily imply the existence of $L^1$ non-statistical maps (see Example \ref{exp-IdS1}). However, if the interior of the set of $L^1$ statistically unstable maps is non-empty, then the existence of plenty of $L^1$ non-statistical maps is guaranteed. We recall that $\Lambda$ is a Baire space and its topology is finer than the $C^0$ topology.
\begin{customthm}{1.2}
The $L^1$ non-statistical maps form a Baire generic subset of the interior of $L^1$ statistically unstable maps.
\end{customthm}
Before proving this theorem, let us give a particular application of it:
\begin{corollary}\label{cor.generic.generic}
If a Baire generic map $f \in \Lambda$ is $L^1$ statistically unstable, then a generic map in $\Lambda$ is $L^1$ non-statistical.
\end{corollary}
The proofs of Theorem \ref{Thm-generic non-stat} and Corollary \ref{cor.generic.generic} use the following lemma:
\begin{lemma}\label{lem.l.s.c.Delta'}
The map $\Delta^1$ is upper semi-continuous.
\end{lemma}
\begin{proof}
Let $\{f_k\}_k $ be a sequence of maps converging to $f$. For each $k$ we can find two natural numbers $n_k$ and $m_k$ and two maps $g_k$ and $h_k$ near $f_k$ such that
$$|\Delta^1(f_k)-d_{L^1}(e_{n_k}^{g_k},e_{m_k}^{h_k})|<\frac{1}{k}.$$
Note that we can choose the sequences $\{n_k\}_k$ and $\{m_k\}_k$ both converging to infinity, and also the sequences of maps $\{g_k\}_k$ and $\{h_k\}_k$ both converging to $f$. So we obtain
\begin{equation*}
\Delta^1(f)\geqslant \limsup_{k}\Delta^1(f_k),
\end{equation*}
and this implies the upper semi-continuity of $\Delta^1$.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor.generic.generic}]
If a generic map $f\in \Lambda$ is statistically unstable, then by definition $\Delta^1(f)>0$. On the other hand, since $\Delta^1$ is upper semi-continuous, a generic map in $\Lambda$ is a continuity point of it. Hence a generic $f\in \Lambda$ has a neighbourhood in which all maps are statistically unstable. So there is an open and dense subset of statistically unstable maps, and hence by Theorem \ref{Thm-generic non-stat}, a generic map in $\Lambda$ is non-statistical.
\end{proof}
\begin{proof}[Proof of Theorem \ref{Thm-generic non-stat}]
Since $\Delta^1$ is upper semi-continuous, there is a generic subset $\mathcal{G}\subset \Lambda$ on which the map $\Delta^1$ is continuous. For a map $f \in \mathcal{G}$ which is also in the interior of $L^1$ statistically unstable maps, there exists a neighborhood $\mathcal{U}_f\subset \Lambda$ around $f$ on which $\Delta^1$ is uniformly positive:
\begin{equation}\label{equ unif}
\exists d>0 \quad s.t. \quad \forall g \in \mathcal U_f,\quad \Delta^1(g)>d.
\end{equation}
Now we construct a sequence of open and dense subsets in ${\mathcal U}_f$ such that any map in the intersections of these sets is $L^1$ non-statistical. This will imply that $L^1$ non-statistical maps are Baire generic in ${\mathcal U}_f$ and hence the $L^1$ non-statistical maps are locally generic in the interior of $L^1$ statistically unstable maps.
To construct such open and dense sets, first note that the function $\Delta^1$ can be written as
\begin{equation*}
\Delta^1(f)=\limsup_{g,h \to f,N \to \infty} \Delta ^{1}_{N}(h,g),
\end{equation*}
where $\Delta^1_{N}(h,g)=\sup_{i,j\ge N}d_{L^1}(e^h_{i},e^g_{j}).$
\begin{claim}
The map $\Delta^1_{N}$ is lower semi-continuous.
\end{claim}
\begin{proof}
Note that
$$\Delta^1_{N}(h,g)=\sup_{M\ge N}\{\sup_{N \le i,j \le M}d_{L^1}(e^h_{i},e^g_{j})\}.$$
But $\sup_{N \le i,j \le M}d_{L^1}(e^h_{i},e^g_{j})$ is continuous with respect to $(h,g)$, and the supremum of a sequence of continuous functions is lower semi-continuous.
\end{proof}
Next we show that for any $N\in\mathbb N$, the set
$$E(N):=\{h\in {\mathcal U}_f|\Delta^1_N(h,h)>\frac{d}{3}\},$$
is an open and dense subset of ${\mathcal U}_f$ and moreover every map in the intersection $\bigcap_N E(N)$ is $L^1$ non-statistical.
The openness of $E(N)$ is guaranteed by the lower semi-continuity of $\Delta^1_N$. Now we prove the denseness of $E(N)$. For an arbitrary map $h\in {\mathcal U}_f$, take a neighborhood $V_h$ such that for any map $g\in V_h$ we have:
\begin{equation}\label{in.eq d/3}
d_{L^1}(e^h_{N},e^g_{N})<\frac{d}{3}.
\end{equation}
This is possible because $N$ is fixed and $e^g_{N}$ depends continuously on $g$. By (\ref{equ unif}) we know that $\Delta^1(h)>d$, and so we can choose $g_1,g_2 \in V_h$ such that for some integers $n,m>N$ we have
\begin{equation}\label{in.eq d}
d_{L^1}(e^{g_1}_{n},e^{g_2}_{m})>d.
\end{equation}
But note that
\begin{equation*}
d_{L^1}(e^{g_1}_{n},e^{g_2}_{m})\le d_{L^1}(e^{g_1}_{n},e^{g_1}_{N})+d_{L^1}(e^{g_1}_{N},e^{g_2}_{N})+d_{L^1}(e^{g_2}_{N},e^{g_2}_{m}).
\end{equation*}
Inequalities (\ref{in.eq d/3}) and (\ref{in.eq d}) imply that either $$d_{L^1}(e^{g_1}_{n},e^{g_1}_{N})>\frac{d}{3} \text{ or } d_{L^1}(e^{g_2}_{m},e^{g_2}_{N})>\frac{d}{3}.$$
So at least one of the maps ${g_1}$ or ${g_2}$ lies in $E(N)$; recalling that $h$ was chosen arbitrarily in ${\mathcal U}_f$ and that $V_h$ is an arbitrarily small neighborhood of $h$, we conclude that $E(N)$ is dense in ${\mathcal U}_f$.
Now observe that for any map $h\in\bigcap_{N=1}^\infty E(N)$, the sequence of empirical functions is not a Cauchy sequence, and hence $h$ is $L^1$ non-statistical. The set $\bigcap_{N=1}^\infty E(N)$ is Baire generic in the open neighbourhood $\mathcal U_f$, so the $L^1$ non-statistical maps are generic in $\mathcal U_f$. Since $f$ was an arbitrary map in the generic set $\mathcal G$, we conclude that the $L^1$ non-statistical maps are indeed a generic subset of the interior of the set of $L^1$ statistically unstable maps.
\end{proof}
\begin{definition}
We say a map $f\in \Lambda$ is $L^1$ \emph{statistically stable w.r.t. $\Lambda$} if $\Delta^1(f)=0.$
\end{definition}
\begin{corollary}\label{generic.stat.stable}
If $\Lambda$ contains no $L^1$ non-statistical map, then a Baire generic map in $\Lambda$ is $L^1$ statistically stable.
\end{corollary}
\begin{proof}
Let $\mathcal{G}\subset \Lambda$ be the set of continuity points of $\Delta^1$, which is a Baire generic set as a consequence of Lemma \ref{lem.l.s.c.Delta'}. We can decompose $\mathcal{G}$ into two subsets: the set $\mathcal{G}_0$ of maps $f\in \mathcal{G}$ such that $\Delta^1(f)=0$, and the set $\mathcal{G}_{+}$ of maps $f\in \mathcal{G}$ such that $\Delta^1(f)>0$. The set $\mathcal{G}_{+}$ is open and lies in the interior of the set of $L^1$ statistically unstable maps. Theorem \ref{Thm-generic non-stat} implies the existence of a generic subset of $L^1$ non-statistical maps in the interior of the set of $L^1$ statistically unstable maps. But by our assumption, there is no $L^1$ non-statistical map in $\Lambda$. This implies that the interior of the set of $L^1$ statistically unstable maps is empty, and hence the set $\mathcal{G}_{+}$ is empty as well. So $\mathcal{G}_{0}$ is equal to $\mathcal{G}$, which is generic in $\Lambda$.
\end{proof}
Let us give an application of Corollary \ref{generic.stat.stable} in the world of conservative dynamics, where there is no possibility of having $L^1$ non-statistical maps:
\begin{corollary}\label{cor.generic.conservative}
Suppose $\Lambda$ is a set of $\mu$-preserving dynamics. Then a generic map $f\in \Lambda$ is $L^1$ statistically stable.
\end{corollary}
\begin{proof}
By the Birkhoff ergodic theorem, for any map $f\in \Lambda$, the sequence $\{e^f_{n}\}_n$ is $L^1$ convergent, so there are no $L^1$ non-statistical maps in $\Lambda$; hence by Corollary \ref{generic.stat.stable}, a generic map in $\Lambda$ is $L^1$ statistically stable.
\end{proof}
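The corollary above rests on the Birkhoff ergodic theorem, which can be illustrated numerically. The sketch below is an illustration only, not part of the argument; the choice of map, observable and starting point is ours. The logistic map $f(x)=4x(1-x)$ preserves the arcsine measure $dx/(\pi\sqrt{x(1-x)})$, whose mean is $1/2$, so the time average of the observable $x$ along a typical orbit should approach $1/2$.

```python
# Illustration (not part of the proof): for a measure-preserving map the
# Birkhoff ergodic theorem makes empirical averages converge.  The logistic
# map f(x) = 4x(1-x) preserves the arcsine measure dx/(pi*sqrt(x(1-x))),
# whose mean is 1/2, so time averages of the observable x should approach 1/2.

def birkhoff_average(f, x0, n):
    """Average of the observable x along the first n iterates of f."""
    total, x = 0.0, x0
    for _ in range(n):
        total += x
        x = f(x)
    return total / n

if __name__ == "__main__":
    logistic = lambda x: 4.0 * x * (1.0 - x)
    avg = birkhoff_average(logistic, 0.1234, 200_000)
    print(avg)  # close to 1/2, the mean of the arcsine measure
```

In floating point the computed orbit is only a pseudo-orbit, but its statistics are known empirically to reproduce the invariant measure well.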
\section{The essential convergence version}
In the previous section, we introduced the notion of $L^1$ statistical instability and investigated its relationship with the existence of $L^1$ non-statistical maps. In this section we consider the pointwise (non-)convergence of the empirical functions $e^f_n:X\to \mathcal M_1(X)$ instead of $L^1$ (non-)convergence. We show that the pointwise counterparts of the results of the previous section hold true, although the arguments are somewhat more technical.
\begin{definition}[Non-statistical dynamics]
A map $f:X\to X$ is called \textit{non-statistical} if the set of points that have non-statistical behavior is of positive measure.
\end{definition}
First let us quantify how different the statistical behaviors of two arbitrary maps $h,g\in \Lambda$ are for iterations larger than a fixed number $N\in \mathbb{N}$. To this aim we propose the following map $\Delta^e_N$ that associates to a couple of maps $h,g\in\Lambda$ a non-negative real number:
$$ \Delta^e_{N}(h,g):=\int_{X} \sup_{N\leqslant n,m}d_w\big(e_{n}^{h}(x),e_{m}^{g}(x)\big)\, d\mu,$$
which can be interpreted as the average of the maximum difference between statistical behaviors that the orbit of a point can display under iterations of $h$ and $g$ for iterations larger than $N$. Note that $\Delta^e_N$ is not a distance. In particular if $f$ is a non-statistical map, then $\Delta_N^e(f,f)$ is uniformly positive for every $N$:
\begin{lemma}\label{lemma unif}
A map $f$ is non-statistical if and only if there is a real number $d>0$ such that for each $N\in\mathbb{N}$ we have $\Delta^e_{N}(f,f)>d$.
\end{lemma}
\begin{proof}
Let $f$ be a non-statistical map, and let $x\in X$ be a non-statistical point. Since the sequence of empirical measures of this point does not converge,
$$d_x:=\inf_{N>0}\sup_{N\leqslant n,m}d_w(e_{n}^f(x),e_{m}^f(x))>0.$$
By definition, the set of non-statistical points has positive measure and $x\mapsto d_x$ is measurable, thus
$$\Delta^e_{N}(f,f)=\int_{X} \ \sup_{N\leqslant n,m}d_w(e_{n}^f(x),e_{m}^f(x)) d\mu \ge \int_X d_x d\mu>0.$$
For the converse, let $f$ be a map for which the sequence of empirical measures of almost every point converges. Then for almost every $x\in X$ we have
$$\lim_{N\to\infty} \sup_{N\leqslant n,m}d_w(e_{n}^f(x),e_{m}^f(x))=0.$$
Since the distance between empirical measures is bounded, we can use the Lebesgue dominated convergence theorem to conclude
\begin{align*}
\lim_{N\to \infty} \Delta^e_{N}(f,f)&=\lim_{N\to \infty}\int_{X}\sup_{N\leqslant n,m}d_w(e_{n}^f(x),e_{m}^f(x)) d\mu\\
&=\int_{X}\lim_{N\to\infty} \sup_{N\leqslant n,m}d_w(e_{n}^f(x),e_{m}^f(x))d\mu=0.
\end{align*}
This finishes the proof.
\end{proof}
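The quantity $d_x$ in the proof can be made concrete with a toy symbolic orbit (a hypothetical itinerary on the two-point space $\{0,1\}$, not a map in $\Lambda$): if the itinerary alternates between blocks of $0$'s and $1$'s of doubling lengths, the frequency of $1$'s oscillates between roughly $1/3$ and $2/3$ forever, so the empirical measures do not converge and $d_x>0$.

```python
# Toy illustration of a non-statistical orbit: a 0/1 itinerary built from
# blocks of doubling length has Birkhoff frequencies that keep oscillating
# (between ~1/3 at the end of a 0-block and ~2/3 at the end of a 1-block),
# so the empirical measures of this symbolic orbit never converge.

def oscillating_itinerary(total_len):
    """Blocks of 0's and 1's with lengths 1, 2, 4, 8, ..., alternating symbol."""
    seq, symbol, block = [], 0, 1
    while len(seq) < total_len:
        seq.extend([symbol] * block)
        symbol, block = 1 - symbol, 2 * block
    return seq[:total_len]

def running_frequencies(seq):
    """Frequency of 1's among the first n symbols, for every n >= 1."""
    freqs, ones = [], 0
    for n, s in enumerate(seq, start=1):
        ones += s
        freqs.append(ones / n)
    return freqs

if __name__ == "__main__":
    freqs = running_frequencies(oscillating_itinerary(2 ** 15))
    tail = freqs[1000:]
    print(min(tail), max(tail))  # the gap stays bounded away from 0
```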
We recall that $\Lambda$ is a Baire space of maps endowed with a topology finer than the $C^0$-topology. Now, as in the previous section, we quantify the difference in the statistical behaviors of maps converging to $f\in \Lambda$. For this purpose, we introduce the following definition:
\begin{definition}
The \textit{amplitude of essential statistical divergence} of a map $f\in\Lambda$ is defined as below:
\begin{equation*}
\Delta^e(f):= \limsup_{\substack{h,g\to f,\ N\to \infty\\ h,g\in \Lambda}}\Delta^e_N(h,g).
\end{equation*}
\end{definition}
Observe that if $\Delta^e$ is positive at $f$, then the asymptotic behaviors of nearby maps are very sensitive to perturbations of $f$, and so the map $f$ is unstable from the statistical viewpoint.
\begin{definition}
A map $f\in \Lambda$ is \textit{statistically unstable with respect to $\Lambda$ } if $\Delta^e(f)>0$.
\end{definition}
\begin{example}
Let $X$ be the circle $\mathbb{S}^{1}$, $\mu$ the normalized Lebesgue measure and $\Lambda$ be the set of rotations of the circle. The map $f=Id_{\mathbb{S}^1}$ is statistically unstable, since the empirical measures of any point are the Dirac mass at that point, but for any arbitrarily close irrational rotation the sequence of empirical measures converges to the Lebesgue measure.
\end{example}
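A minimal numerical sketch of this example (an illustration only; the observable and parameters are our choice): under the identity every empirical measure is the Dirac mass at the starting point, while under a nearby irrational rotation the empirical measures equidistribute toward Lebesgue. We detect the difference through the observable $g(x)=\cos(2\pi x)$, whose Lebesgue integral over the circle is $0$.

```python
import math

# Integral of g(x) = cos(2*pi*x) against the n-th empirical measure of x0
# under the rotation R_theta.  For theta = 0 (the identity) this stays g(x0);
# for an irrational theta it tends to the Lebesgue integral of g, which is 0.
def empirical_integral(theta, x0, n):
    x, total = x0, 0.0
    for _ in range(n):
        total += math.cos(2.0 * math.pi * x)
        x = (x + theta) % 1.0
    return total / n

if __name__ == "__main__":
    n = 100_000
    print(empirical_integral(0.0, 0.1, n))               # = cos(0.2*pi), far from 0
    print(empirical_integral(math.sqrt(2) - 1, 0.1, n))  # near 0 = Lebesgue integral
```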
\begin{example}
Let $X$ be the Riemann sphere $\hat{\mathbb{C}}$ and $\mu$ its normalized Lebesgue measure. Consider the set of quadratic maps:
\begin{align*}
\Lambda=\{f_{c}:\hat{\mathbb{C}} \to \hat{\mathbb{C}}| f_{c}(x)=x^2+c \ \ \mathrm{for}\ \ x\in\mathbb{C}, \ \ f_{c}(\infty)=\infty\}.
\end{align*}
The map $f_{\frac{1}{4}}$ has a fixed point at $x=\frac{1}{2}$ which attracts the points of a non-empty open set $U$. For any $\epsilon > 0 $, the map $f_{\frac{1}{4}+\epsilon}$ exhibits a different dynamics: almost every point goes to infinity. So for any $\epsilon>0$ and every point $x\in U$, the sequence of empirical measures converges to $\delta_{\frac{1}{2}}$ under iteration of $f_{\frac{1}{4}}$ and to $\delta_{\infty}$ under iteration of $f_{\frac{1}{4}+\epsilon}$.
Hence the supremum in the definition of $\Delta^e_N$ is at least $d_w(\delta_{\frac{1}{2}},\delta_{\infty})$ for almost every point. So $\Delta^e_{N}(f_{\frac{1}{4}},f_{\frac{1}{4}+\epsilon})>\mu(U)d_w(\delta_{\frac{1}{2}},\delta_{\infty})$, which is independent of $\epsilon$ and $N$. According to the definition, this means that $f_{\frac{1}{4}}$ is statistically unstable.
\end{example}
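A real-slice sketch of this bifurcation (the value of $\epsilon$ and the orbit of $0$ are our choice; this illustrates, rather than proves, the example): for $c=\frac14$ the orbit of $0$ under $x\mapsto x^2+c$ creeps up to the parabolic fixed point $\frac12$, while for $c=\frac14+\epsilon$ the same orbit escapes to infinity.

```python
def iterate_quadratic(c, x0, n, escape_radius=1e6):
    """Iterate x -> x^2 + c for n steps; stop early once |x| > escape_radius."""
    x = x0
    for _ in range(n):
        x = x * x + c
        if abs(x) > escape_radius:
            break
    return x

if __name__ == "__main__":
    # c = 1/4: slow (parabolic) convergence to the fixed point 1/2 from below.
    print(iterate_quadratic(0.25, 0.0, 5000))
    # c = 1/4 + 0.05: after passing through the former channel near 1/2,
    # the orbit blows up, mirroring convergence toward delta_infinity.
    print(iterate_quadratic(0.25 + 0.05, 0.0, 5000))
```

The convergence in the parabolic case is of order $1/n$, which is why a few thousand iterates are needed to get close to $\frac12$.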
\begin{proposition}\label{prop Delta^e u.s.c}
The map $\Delta^e:\Lambda\to \mathbb{R}$ is upper semi-continuous.
\end{proposition}
\begin{proof}
We recall that
\begin{equation*}
\Delta^e(f):= \limsup_{h,g\to f,N\to\infty}\Delta^e_N(h,g).
\end{equation*}
Now let $\{f_k\}_k $ be a sequence of maps converging to $f$. For each $k$ we can find a natural number $N_k$ and two maps $g_k$ and $h_k$ near $f_k$ such that
$$|\Delta^e(f_k)-\Delta^e_{N_k}(g_k,h_k)|<\frac{1}{k}.$$
Note that we can choose the sequence $\{N_k\}_k$ such that it converges to infinity and also the sequence of maps $\{g_k\}_k$ and $\{h_k\}_k$ such that both converge to $f$. So we obtain
\begin{equation*}
\Delta^e(f)\geqslant \limsup_{k}\Delta^e(f_k),
\end{equation*}
and this implies the upper semi-continuity of $\Delta^e$.
\end{proof}
Now we want to investigate the relationship between statistical instability and existence of non-statistical maps. The following theorem is the counterpart of Theorem \ref{Thm-generic non-stat}:
\begin{theorem}
The non-statistical maps form a Baire generic subset of the interior of statistically unstable maps.
\end{theorem}
\begin{proof}
Since by Proposition \ref{prop Delta^e u.s.c} the map $\Delta^e$ is an upper semi-continuous map, there is a Baire generic set $\mathcal{G}$ on which the map $\Delta^e$ is continuous. For a map $f \in \mathcal{G}$, which is also in the interior of statistically unstable maps, there exists a neighborhood $\mathcal{U}_f$ around $f$ on which $\Delta^e$ is uniformly positive:
\begin{equation*}
\exists d>0 \quad s.t. \quad \forall g \in \mathcal{U}_f,\quad \Delta^e(g)>d.
\end{equation*}
Now we construct a sequence of open and dense subsets in ${\mathcal U}_f$ such that any map in the intersection of these sets is non-statistical. And then we can conclude that non-statistical maps are Baire generic in ${\mathcal U}_f$.
To construct such open and dense sets we need to show a semi-continuity property of the map $\Delta^e_N$:
\begin{lemma}
The map $\Delta^e_{N}$ is lower semi-continuous.
\end{lemma}
\begin{proof}
We recall that for $h,g\in \Lambda$
$$ \Delta^e_{N}(h,g):=\int_{X} \sup_{N\leqslant n,m}d_w\big(e_{n}^{h}(x),e_{m}^{g}(x)\big)\, d\mu.$$
Now note that
$$ \Delta^e_{N}(h,g)= \sup_{N\leqslant M}\int_{X} \sup_{N\leqslant n,m \leqslant M}d_w\big(e_{n}^{h}(x),e_{m}^{g}(x)\big)\, d\mu, $$
and each integral $\int_{X} \sup_{N\leqslant n,m \leqslant M}d_w\big(e_{n}^{h}(x),e_{m}^{g}(x)\big)\, d\mu$ depends continuously on $h$ and $g$. Since the supremum of a family of continuous functions is lower semi-continuous, we are done.
\end{proof}
Next we show that for any $N\in\mathbb N$, the set
$$E(N):=\{h\in {\mathcal U}_f|\Delta^e_N(h,h)>\frac{d}{3}\},$$
is an open and dense subset of ${\mathcal U}_f$ and moreover every map in the intersection $\bigcap_N E(N)$ is non-statistical.
The openness of $E(N)$ is guaranteed by the lower semi-continuity of $\Delta^e_N.$ Now we prove the denseness of $E(N)$. For an arbitrary map $h\in {\mathcal U}_f\subset \Lambda$, take a neighborhood $V_h$ such that for any map $g\in V_h$ and any $x\in X$:
\begin{equation}\label{p.w. in.eq d/3}
d_w(e^h_{N}(x),e^g_{N}(x))<\frac{d}{3}.
\end{equation}
This is possible because $N$ is fixed, $e^g_{N}$ depends continuously on $g$ and $X$ is compact. Now since $\Delta^e(h)>d$, we can choose $g_1,g_2\in V_h$ such that for some integer $M>N$ it holds true that
\begin{equation}
\Delta^e_M(g_1,g_2)>d.
\end{equation}
But since $\Delta^e_N(g_1,g_2)$ is decreasing in $N$ we obtain:
\begin{equation}\label{p.w. in.eq d}
\Delta^e_N(g_1,g_2)>d.
\end{equation}
Now note that for each $x\in X$ we have
\begin{multline*}
\quad\quad\quad\quad\quad\quad\quad \sup_{N\leqslant n,m} d_w(e_{n}^{g_1}(x),e_{m}^{g_2}(x))\leqslant \\
\sup_{N\leqslant n,m}d_w(e_{n}^{g_1}(x),e_{m}^{g_1}(x)) + \sup_{N\leqslant n,m}d_w(e_{n}^{g_2}(x),e_{m}^{g_2}(x))+ d_w(e_{N}^{g_1}(x),e_{N}^{g_2}(x)),
\end{multline*}
and hence after integrating with respect to $\mu$ and using inequality \ref{p.w. in.eq d/3} we obtain:
$$\Delta^e_N(g_1,g_2)\leqslant \Delta^e_N(g_1,g_1)+\Delta^e_N(g_2,g_2)+ \frac{d}{3}.$$
Now using inequality \ref{p.w. in.eq d} we conclude that at least one of the maps $g_1$ and $g_2$ is inside $E(N)$, and then recalling that $h$ was chosen arbitrarily in ${\mathcal U}_f$ and $V_h$ arbitrarily small, we conclude that $E(N)$ is dense in ${\mathcal U}_f$.
Observe that Lemma \ref{lemma unif} implies that any map $h$ in the set $\bigcap_{N=1}^\infty E(N)$, which is a Baire generic set inside ${\mathcal U}_f$, is non-statistical. So the non-statistical maps are generic in an open neighbourhood of $f$, where $f$ is an arbitrary map in the intersection of the interior of statistically unstable maps with the generic set $\mathcal G$. This implies that the non-statistical maps are indeed a generic subset of the interior of statistically unstable maps.
\end{proof}
\begin{definition}
We say a map $f\in \Lambda$ is statistically stable with respect to $\Lambda$ if $\Delta^e(f)=0.$
\end{definition}
\begin{corollary}\label{cor.generic.stat.stable}
If $\Lambda$ contains no non-statistical map, then a generic map in $\Lambda$ is statistically stable.
\end{corollary}
Note that being statistically stable is a stronger condition than being $L^1$ statistically stable. So the conclusion of the previous corollary is a stronger version of the conclusion of Corollary \ref{generic.stat.stable}.
We omit the proof of this corollary, as well as the applications in the conservative setting, since they are identical to those in the previous section.
\section{The version of convergence in law}\label{bif-to}
For a dynamical system $f:X\to X$ the map $e^f_n:X\to \mathcal M _1(X)$ associates to each point $x\in X$, its $n^{th}$ empirical measure. Different points usually have different empirical measures. We can investigate how the empirical measures $e^f_n(x)$ are distributed in $\mathcal M_1(X)$ with respect to the reference measure $\mu$ on $X$ and what is the asymptotic behavior of these distributions. To this aim, we can push forward the measure $\mu$ to the set of probability measures on $X$ using the map $e^f_n$:
$$\hat{e}_n(f):=(e_n^f)_*(\mu).$$
The measure $\hat{e}_n(f)$ is a probability measure on the space of probability measures on $X$. We denote the space of probability measures on the space of probability measures by $\mathcal M_1(\mathcal M _1(X))$, and we denote the Wasserstein metric on this space by $\hat{d}_w$. Note that the compactness of $X$ implies the compactness of $\mathcal M_1(X)$ and hence the compactness of $\mathcal M_1(\mathcal M_1(X))$. So the sequence $\{\hat{e}_n(f)\}_{n\in \mathbb{N}}$ lives in a compact space and has one or possibly more than one accumulation point.
\begin{example}
For any $\mu$-preserving map $f:X\to X$, the sequence $\{\hat{e}_n(f)\}_{n\in \mathbb{N}}$ converges to the measure $\hat{\mu}$ given by the ergodic decomposition of the measure $\mu$.
\end{example}
\begin{example}
If $\nu$ is a physical measure for the map $f:X\to X$ whose basin covers $\mu$-almost every point, the sequence $\{\hat{e}_n(f)\}_{n\in \mathbb{N}}$ converges to the Dirac mass concentrated on the point $\nu\in \mathcal M_1(X)$, which we denote by $\delta_\nu$.
\end{example}
In the next section we will see examples of maps for which the sequence $\{\hat{e}_n(f)\}_{n\in \mathbb{N}}$ does not converge. \\
The following lemma provides some information about the sequence $\{\hat{e}_n(f)\}_{n\in \mathbb{N}}$:
\begin{lemma}\label{lemma.dec.dist}
For any $f\in\Lambda$ and any $n\in\mathbb{N}$ it holds true that
$$\hat{d}_w(\hat{e}_n^f,\hat{e}_{n+1}^f)\leqslant \frac{diam(X)}{n+1},$$
where $diam(X)$ is the diameter of the space $X$.
\end{lemma}
\begin{proof}
We recall that
$$\hat{e}_n^f=(e_n^f)_{*}(\mu).$$
First let us show that for any $x\in X$ the following inequality holds true, independently of the choice of $f\in\Lambda$:
$$d_w(e^f_n(x),e^f_{n+1}(x))\leqslant\frac{diam(X)}{n+1}.$$
According to the definition, we should show that
$$\inf_{\gamma\in\pi(e^f_n(x),e^f_{n+1}(x))}\int_{X\times X}d(x,y)d\gamma(x,y)\leqslant\frac{diam(X)}{n+1}.$$
Consider the following element of $\pi(e^f_n(x),e^f_{n+1}(x))$:
$$\gamma=\frac{1}{n+1}\Sigma_{0\leqslant i \leqslant n-1}\delta_{(f^i(x),f^i(x))}+\frac{1}{n(n+1)}\Sigma_{0\leqslant i \leqslant n-1}\delta_{(f^i(x),f^n(x))}.$$
Note that
$$(\pi_1)_*(\gamma)=e_n^f(x)=\frac{1}{n}\Sigma_{0\leqslant i\leqslant n-1}\delta_{f^i(x)},$$
$$(\pi_2)_*(\gamma)=e_{n+1}^f(x)=\frac{1}{n+1}\Sigma_{0\leqslant i\leqslant n}\delta_{f^i(x)},$$
where $\pi_1$ and $\pi_2$ are the projections onto the first and second coordinates. So we have $\gamma\in\pi(e^f_n(x),e^f_{n+1}(x))$ and hence
\begin{align*}
{d}_w(e^f_n(x),e^f_{n+1}(x)) &\leqslant \int_{X\times X}d(x,y)d\gamma(x,y)\\
&=\Sigma_{0\leqslant i\leqslant n-1}\frac{1}{n(n+1)}d(f^i(x),f^n(x))\\
&\leqslant \frac{diam(X)}{n+1}.
\end{align*}
Now consider the following measure on $\mathcal M_1(X)\times \mathcal M_1(X)$:
$$\hat{\gamma}=\int_X \delta_{(e^f_n(x),e^f_{n+1}(x))}d\mu.$$
Obviously $\hat{\gamma}\in \pi(\hat{e}^f_n,\hat{e}^f_{n+1})$, and so
\begin{align*}
\hat{d}_w(\hat{e}^f_n,\hat{e}^f_{n+1})
&\leqslant \int_{\mathcal M_1(X)\times \mathcal M_1(X)}d_w(\nu,\eta)d\hat{\gamma}(\nu,\eta)\\
&=\int_X {d}_w(e^f_n(x),e^f_{n+1}(x))d\mu \leqslant \frac{diam(X)}{n+1}.
\end{align*}
\end{proof}
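In dimension one the $W_1$ distance between two measures on $[0,1]$ equals the integral of the difference of their distribution functions, so the bound of the lemma can be checked numerically. The sketch below is a sanity check only (the orbit of the logistic map and the grid resolution are our choices); here $diam(X)=1$.

```python
# Sanity check of d_w(e_n, e_{n+1}) <= diam(X)/(n+1) on X = [0,1], where
# the W_1 distance is the integral of |F - G| for the two empirical CDFs.

def orbit(f, x0, n):
    pts, x = [], x0
    for _ in range(n):
        pts.append(x)
        x = f(x)
    return pts

def w1_empirical(a, b, grid=10_000):
    """Approximate W_1 between the uniform measures on the point lists a, b."""
    total = 0.0
    for i in range(grid):
        t = (i + 0.5) / grid
        fa = sum(1 for p in a if p <= t) / len(a)  # empirical CDF of a at t
        fb = sum(1 for p in b if p <= t) / len(b)  # empirical CDF of b at t
        total += abs(fa - fb) / grid
    return total

if __name__ == "__main__":
    n = 50
    pts = orbit(lambda x: 4.0 * x * (1.0 - x), 0.1234, n + 1)
    print(w1_empirical(pts[:n], pts[:n + 1]), "<=", 1.0 / (n + 1))
```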
Now let $\Lambda$ be a Baire space of self-mappings of $X$ endowed with a topology finer than the $C^0$-topology. For each $f\in\Lambda$ the accumulation points of the sequence $\{\hat{e}_n(f)\}_{n\in \mathbb{N}}$ form a compact subset of $\mathcal M_1(\mathcal M_1(X))$, which we denote by $acc( \{\hat{e}_n(f)\}_{n\in \mathbb{N}})$. This set can vary dramatically under small perturbations of $f$ in $\Lambda$:
\begin{example}\label{identity on S1}
Let $\Lambda$ be the set of rigid rotations on $\mathbb S^1$ and Lebesgue as the reference measure. For the identity map $id$ on $\mathbb S^1$, the sequence $\{\hat{e}_n(id)\}_{n\in \mathbb{N}}$ is a constant sequence. Indeed for every $n\in\mathbb N$ we have:
\begin{align}\label{def e^}
\hat{e}_n(f)=\int_{\mathbb S^1}\delta_{\delta_x}dLeb.
\end{align}
So $acc( \{\hat{e}_n(f)\}_{n\in \mathbb{N}})$ is equal to $\{\int_{\mathbb S^1}\delta_{\delta_x}dLeb\}$. But for any irrational rotation $R_\theta$ (arbitrary close to the identity map), the sequence $\{\hat{e}_n(R_\theta)\}_{n\in \mathbb{N}}$ converges to $\delta_{Leb}$ which is a different accumulation point.
\end{example}
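A crude numerical summary of this example (a one-dimensional feature of our own choosing, not the metric $\hat d_w$ itself): sample points $x$ and record the integral of $\cos(2\pi t)$ against $e^f_n(x)$. For the identity these features stay spread over $[-1,1]$, reflecting that $\hat e_n(id)$ is the law of $\delta_x$ with $x\sim Leb$; for an irrational rotation they all collapse near $0$, reflecting convergence of $\hat e_n(R_\theta)$ to $\delta_{Leb}$.

```python
import math

# Summarize each empirical measure e_n^{R_theta}(x) by the single feature
# int cos(2*pi*t) d e_n(x)(t), and look at the spread of these features over
# many sample points x: a wide spread mimics the identity case, a collapse
# near 0 mimics convergence of e^_n toward the Dirac mass at Lebesgue.
def feature_spread(theta, n, n_samples=200):
    feats = []
    for k in range(n_samples):
        x, total = k / n_samples, 0.0
        for _ in range(n):
            total += math.cos(2.0 * math.pi * x)
            x = (x + theta) % 1.0
        feats.append(total / n)
    return min(feats), max(feats)

if __name__ == "__main__":
    print(feature_spread(0.0, 2000))               # roughly (-1.0, 1.0)
    print(feature_spread(math.sqrt(2) - 1, 2000))  # both endpoints near 0.0
```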
In the previous example, for an irrational rotation $R_\theta$ close to the identity map, the empirical measures of almost every point start to go toward the Lebesgue measure, and hence the sequence $\{\hat{e}_n(R_\theta)\}_{n\in \mathbb{N}}$ goes toward $\delta_{Leb}$. To study the same phenomenon for other dynamical systems, we propose the following definition. We recall that $\Lambda$ is a Baire space of self-mappings of $X$ endowed with a topology finer than the $C^0$-topology and $\mu$ is a reference measure on $X$.
\begin{definition}
For a map $f\in\Lambda$ and a probability measure $\hat{\nu}\in \mathcal M_1(\mathcal M_1(X))$, we say $f$ \emph{statistically bifurcates toward $\hat{\nu}$ through perturbations in $\Lambda$}, if there is a sequence of maps $\{f_k\}_k $ in $\Lambda$ converging to $f$ and a sequence of natural numbers $\{n_k\}_k$ converging to infinity such that the sequence $\{\hat{e}_{n_k}(f_k)\}_k$ converges to $\hat{\nu}\in\mathcal M_1(\mathcal M_1(X))$.
\end{definition}
For the sake of simplicity, when the space $\Lambda$ in which we are allowed to perturb our dynamics is fixed, we say $f$ statistically bifurcates toward $\hat{\nu}$.
For any map $f\in\Lambda$, by $\mathcal{B}_{\Lambda,f}$ we denote the set of those measures $\hat{\nu}\in\mathcal M_1(\mathcal M_1(X))$ that $f$ statistically bifurcates toward $\hat{\nu}$ through perturbations in $\Lambda$.
\begin{remark}\label{Rem.acc.subset}
By definition, it holds true that
$$acc(\{\hat{e}_n^f\}_n)\subset B_{\Lambda,f}.$$
\end{remark}
Here are some nice properties of the set $\mathcal{B}_{\Lambda,f}$:
\begin{lemma}\label{compact BLambda}
The set $\mathcal{B}_{\Lambda,f}$ is a compact subset of $\mathcal M_1(\mathcal M_1(X))$.
\end{lemma}
\begin{proof}
By the definition, it is clear that the set $\mathcal{B}_{\Lambda,f}$ is closed, and its compactness is then a consequence of the compactness of $\mathcal M_1(\mathcal M_1(X))$.
\end{proof}
\begin{lemma}
The set $\mathcal{B}_{\Lambda,f}$ is a connected subset of $\mathcal M_1(\mathcal M_1(X))$.
\end{lemma}
\begin{proof}
For the sake of contradiction, assume that $B_{\Lambda,f}$ is not connected, so that it can be decomposed into two non-empty disjoint closed sets $A$ and $B$. Then there is some real number $d>0$ such that $\hat{d}_w(A,B)>d.$ Take two elements $\hat{\nu}\in A$ and $\hat{\eta}\in B$, and let $N$ in Lemma \ref{lemma.dec.dist} be chosen so that
$$\forall n>N, \hat{d}_w(\hat{e}_n^f,\hat{e}_{n+1}^f)<\frac{d}{3}.$$
We can find a neighborhood $U$ of $f$ so that
$$\forall g\in U, \quad \hat{d}_w(\hat{e}_N^f,\hat{e}_{N}^g)<\frac{d}{6}.$$
This is possible since the map sending $g$ to $\hat{e}_N^g$ is continuous; in particular, $\hat{d}_w(\hat{e}_N^g,\hat{e}_{N}^h)<\frac{d}{3}$ for any $g,h\in U$. Now take two maps $h,g\in U$ such that for some integers $n_1,n_2>N$ it holds true that
$$\hat{d}_w(\hat{e}_{n_1}^g,\hat{\nu})<\frac{d}{3} \quad \text{and}\quad \hat{d}_w(\hat{e}_{n_2}^h ,\hat{\eta})<\frac{d}{3}.$$
Consider the following sequence of elements of $\mathcal M_1(\mathcal M_1(X))$:
$$\hat{\nu}, \hat{e}_{n_1}^g,\hat{e}_{n_1 -1}^g,...,\hat{e}_{N}^g,\hat{e}_{N}^h,...,\hat{e}_{n_2 -1}^h,\hat{e}_{n_2}^h,\hat{\eta}.$$
The distance between two consecutive elements of this sequence is less than $\frac{d}{3}$, and hence there is an element of this sequence which lies outside the $\frac{d}{3}$-neighborhood of $A\bigcup B=B_{\Lambda,f}$. By taking $N$ larger, we obtain another element of
$\mathcal M_1(\mathcal M_1(X))$ outside the $\frac{d}{3}$-neighborhood of $B_{\Lambda,f}$. So there is a sequence of the form $\hat{e}_{n_k}^{f_k}$ outside the $\frac{d}{3}$-neighborhood of $B_{\Lambda,f}$, and by the compactness of $\mathcal M_1(\mathcal M_1(X))$ it has a subsequence converging to an element outside $B_{\Lambda,f}$. But by definition any accumulation point of such a sequence belongs to $B_{\Lambda,f}$, which is a contradiction.
\end{proof}
\begin{lemma}
For any $\hat{\nu }\in \mathcal B_{\Lambda,f}$, any measure $\nu$ in the support of $\hat{\nu}$ is invariant under iteration of $f$.
\end{lemma}
\begin{proof}
By definition there is a sequence of maps $\{f_k\}_k$ in $\Lambda$ converging to $f$ and a sequence of natural numbers $\{n_k\}_k$ converging to infinity such that
$$\lim_{k\to\infty} \hat{d}_w(\hat{e}_{n_k}(f_k),\hat{\nu})=0.$$
If $\nu$ is in the support of $ \hat{\nu}$ then for any neighbourhood $\mathcal U$ of $\nu$ and for $k$ large enough, we have
$$\hat{e}_{n_k}(f_k)(\mathcal U)>0.$$
Recalling that
$$\hat{e}_{n_k}(f_k)(\mathcal U)=\int_{X}\delta_{e^{f_k}_{n_k}(x)}(\mathcal U)d\mu,$$
we conclude that the integrand of the integral above is non-zero on a subset of $X$ with positive measure; in particular, for each sufficiently large $k$ there is a point $x_k\in X$ such that $e^{f_k}_{n_k}(x_k)\in \mathcal U$. Since $\mathcal U $ is an arbitrary neighbourhood of $\nu$, we can choose $x_k$ such that
$$\lim_{k\to\infty} e_{n_k}^{f_k}(x_k)=\nu.$$
On the other hand, note that for large $k$ the map $f_k$ is close to the map $f$, so the measure $e^{f_k}_{n_k}(x_k)$ is close to $f_*(e^{f_k}_{n_k}(x_k))$. So we have
$$\lim_{k\to\infty}d_w(e^{f_k}_{n_k}(x_k),f_*(e^{f_k}_{n_k}(x_k)))=0,$$
which together with the continuity of $f_*$ implies that $f_*(\nu)=\nu$, and so we are done.
\end{proof}
The set $\mathcal B_{\Lambda,f}$ depends on the choice of the set of dynamics $\Lambda$ in which we are allowed to perturb the map $f$. If $\Lambda $ is replaced by a larger set of maps, then we may have more elements in $\mathcal M_1(\mathcal M_1(X))$ toward which $f$ statistically bifurcates. This is what we can see in the following example:
\begin{example}\label{expl bif to circle}
If $\Lambda$ is the set of rigid rotations of $\mathbb S^1=\mathbb R / \mathbb Z$, the elements of $\mathcal M_1(\mathcal M_1(X))$ toward which the identity map statistically bifurcates, are exactly the following ones:
$$\hat{\nu}_s:=\int_X \delta_{Leb[x,x+s]}dLeb,$$
where $s\in \mathbb R$ is arbitrary and $ Leb[x,x+s]$ denotes the normalized Lebesgue measure on the interval $[x,x+s]$. When $s$ is larger than one, we choose an interval of length $s$ in the universal cover of $\mathbb S^1$ starting from a point in the fiber above $x$, and we push forward the normalized Lebesgue measure on this interval to the circle by the projection map. When $s=0$, we set $Leb[x,x] $ to be the Dirac mass at $x$.
To prove that the identity map statistically bifurcates toward $\hat{\nu}_s$, the sequence $\{f_k=R_{\frac{1}{k}}\}_k $ of rotations converging to the identity and the sequence of times $\{n_k=\lfloor sk \rfloor\}_k$ work:
$$\lim_{k\to \infty} e^{f_k}_{n_k}(x)=Leb[x,x+s].$$
It is not hard to check that these are the only measures that the identity map statistically bifurcates toward through perturbations in $\Lambda$.
But if $\Lambda$ is the set of all smooth diffeomorphisms of $\mathbb S^1$, the set $\mathcal B_{\Lambda,f}$ contains all these measures together with other measures, in particular the measure $\delta_{\delta_x}$ (the Dirac mass at the Dirac mass at $x$), for any point $x\in \mathbb S^1$. To see this, note that we can approach the identity map by Morse-Smale maps having two fixed points, with the point $x$ as their only attracting fixed point.
\end{example}
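A quick sketch of the first part of the example (with $x=0.1$, $s=0.5$, $k=1000$ as our choice of parameters, so there is no wrap-around): under $f_k=R_{1/k}$ at time $n_k=\lfloor sk\rfloor$, the empirical measure of $x$ is uniform on $n_k$ points spaced $1/k$ apart, so its mean should be close to the mean $x+s/2$ of $Leb[x,x+s]$.

```python
# Empirical measure of x under R_{1/k} at time n_k = floor(s*k): uniform on
# the points x, x + 1/k, ..., x + (n_k - 1)/k, which fill up [x, x + s].
def empirical_mean_under_rotation(x, k, s):
    n_k = int(s * k)
    pts = [(x + i / k) % 1.0 for i in range(n_k)]
    return sum(pts) / n_k

if __name__ == "__main__":
    # Mean of Leb[0.1, 0.6] is 0.35; the empirical mean should be close.
    print(empirical_mean_under_rotation(0.1, 1000, 0.5))
```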
Let us recall some definitions that we need in the rest of this section. \\
Let $X$ and $Y$ be two topological spaces with $Y$ compact. Denote the set of all compact subsets of $Y$ by $C(Y)$.
\begin{definition}
A map $\phi:X\to C(Y)$ is called lower semi-continuous if for any $x\in X$ and any $V$ open subset of $Y$ with $\phi(x)\cap V\neq \emptyset$, there is a neighbourhood $U$ of $x$ such that for any $y\in U$ the intersection $\phi(y)\cap V$ is non-empty. The map $\phi$ is called upper semi-continuous if for any $x\in X$ and any $V$ open subset of $Y$ with $\phi(x)\subset V$, there is a neighbourhood $U$ of $x$ such that for any $y\in U$ the set $\phi(y)$ is contained in $V$. And finally $\phi$ is called continuous at $x$ if it is both upper and lower semi-continuous at $x$.
\end{definition}
\begin{remark}
To say that $x$ is a continuity point of a set valued map $\phi:X\to C(Y)$ in the sense of the above definition is equivalent to saying that $x$ is a continuity point of $\phi$ when $C(Y)$ is considered as a topological space endowed with the Hausdorff topology.
\end{remark}
We also recall the following theorem of Fort \cite{fort1951points} which generalizes the well known theorem about real valued semi-continuous maps to the set valued semi-continuous maps:
\begin{theorem*}[Fort]
For any Baire topological space $X$ and compact topological space $Y$, the set of continuity points of a semi-continuous map from $X$ to $C(Y)$ is a Baire generic subset of $X$.
\end{theorem*}
Now, fixing a set of dynamics $\Lambda$, we reprove the following fact on the semi-continuity of the map sending the dynamics to its set of invariant probability measures:
\begin{lemma}\label{semi-cont inv meas}
The map sending $f\in \Lambda$ to its set of invariant probability measures is upper semi-continuous.
\end{lemma}
\begin{proof}
We need to prove that if we have a sequence of dynamics $\{f_n\}_{n}$ converging to $f$ and a sequence of invariant measures $\{\mu_n\}_{n}$ for these maps converging to a measure $\mu$, then $\mu$ is an $f$-invariant measure. But this is a special case of Proposition 5.9 in \cite{Viana14}, where this fact is proved in the context of stationary measures for locally constant skew products.
\end{proof}
We recall that by Lemma \ref{compact BLambda}, the set $\mathcal B_{\Lambda,f}$ is compact. We can ask about dependence of the set $\mathcal B_{\Lambda,f}$ on the map $f$. The following lemma shows that this dependency is semi-continuous:
\begin{lemma}\label{lemma.BLambda.usc}
The map sending $f\in\Lambda $ to the set $\mathcal B_{\Lambda, f}$ is upper semi-continuous.
\end{lemma}
\begin{proof}
Let $\{f_n\}_{n\in\mathbb N}$ be a sequence converging to $f\in \Lambda$. We need to prove that if for each $n\in \mathbb N$, the map $f_n$ statistically bifurcates toward a measure $\hat{\nu}_n\in \mathcal M_1(\mathcal M_1(X))$ through perturbations in $\Lambda$ and the sequence $\{\hat{\nu}_n\}_n$ is convergent to a measure $\hat{\nu}$, then the map $f$ statistically bifurcates toward $\hat{\nu}$ through perturbations in $\Lambda$.
Then the proof is finished by observing that for $n$ large enough, small perturbations of the map $f_n$ are small perturbations of the map $f$, and $\hat{\nu}_n$ is close to $\hat{\nu}$.
\end{proof}
To each map $f\in\Lambda$ one can associate the set of accumulation points of the sequence $\{\hat{e}_n(f)\}_{n\in \mathbb{N}}$, which is a compact subset of $\mathcal M_1 (\mathcal M _1(X))$. By looking more carefully at Example \ref{identity on S1}, we see that this map is neither upper semi-continuous nor lower semi-continuous. However, if we add the points of this sequence to its accumulation points and consider the map sending $f\in \Lambda$ to the closure $\overline{\{\hat{e}_n(f)|n\in \mathbb{N}\}}$, we obtain a semi-continuous map:
\begin{lemma}\label{E lsc}
The map $\mathcal E: \Lambda \to C({\mathcal M_1(\mathcal M_1(X))})$ defined as
$$\mathcal E(f):=\overline{\{\hat{e}_n(f)|n\in \mathbb{N}\}},$$
is lower semi-continuous.
\end{lemma}
\begin{proof}
Let $V$ be an open subset of $\mathcal M_1(\mathcal M_1(X))$ intersecting $\mathcal E (f)$. So there is $n\in \mathbb N$ such that $\hat{e}_n(f)\in V$. But note that the map $f\mapsto\ \hat{e}_n(f)$ is continuous and so there is a neighborhood $U$ of $f$ so that for any $g\in U$, we have $\hat{e}_n(g)\in V$ and so $\mathcal E (g)$ intersects the set $V$. This shows that $\mathcal E$ is lower semi-continuous.
\end{proof}
The following lemma is an interesting consequence of Lemma \ref{E lsc} that shows how the set $\mathcal{E}(f)$ depends on the dynamics $f$.
\begin{main lemma}\label{cor of Fort}
A Baire generic map $f\in\Lambda$ is a continuity point for the map $\mathcal E$.
\end{main lemma}
This lemma gives a view of the statistical behaviors of generic maps in any Baire space of dynamics: for a generic map, the statistical behavior that can be observed for times close to infinity cannot be changed dramatically by small perturbations.
\begin{proof}
Using Lemma \ref{E lsc}, this is a direct consequence of Fort's theorem.
\end{proof}
The following theorem reveals how the two notions of statistical instability in law and being non-statistical in law are connected. Another proof of this theorem, communicated by Pierre Berger, can be found in \cite{talebi20} (Theorem 1.14).
\begin{theorem}
Baire generically, $B_{\Lambda,f}$ is equal to $acc(\{\hat{e}_n^f\}_n)$.
\end{theorem}
\begin{proof}
Take a generic map $f$ as given by the main lemma above. By Remark \ref{Rem.acc.subset}, $acc(\{\hat{e}_n^f\}_n)\subset B_{\Lambda,f}$. So if $B_{\Lambda,f}$ has only one element, since $acc(\{\hat{e}_n^f\}_n)$ is non-empty, the equality holds. Now suppose that $B_{\Lambda,f}$ has more than one element, and for the sake of contradiction suppose there is a measure $\hat{\nu}\in B_{\Lambda,f}$ which is not in $acc(\{\hat{e}_n^f\}_n)$. Recalling that for generic $f$ we have $B_{\Lambda,f}\subset \mathcal{E}(f)=\overline{\{\hat{e}_n^f|n\in \mathbb{N}\}}$, there is a number $n\in\mathbb{N}$ such that $\hat{\nu}=\hat{e}_n^f$ and $\hat{e}_n^f$ is an isolated point of the sequence $\{\hat{e}_n^f\}_n$. Hence $B_{\Lambda,f}$ can be written as a union of two disjoint non-empty closed sets:
$$B_{\Lambda,f}= \{\hat{e}^f_n\} \bigcup \big(B_{\Lambda,f}\setminus\{\hat{e}^f_n\}\big).$$
This contradicts the connectedness of the set $B_{\Lambda,f}$.
\end{proof}
If we have any information about the set $\mathcal{B}_{\Lambda,f}$, then by using Theorem \ref{thm.generic BLambda} we may translate it into information about $acc(\{e^f_n(x)\}_{n\in\mathbb N})$ for a generic map $f\in \Lambda$. In particular we obtain the following corollary:
\begin{corollary}\label{cor.gen-instability=non-stat}
The set $\Lambda$ contains a Baire generic subset of maps that are statistically unstable in law iff it contains a Baire generic subset of maps which are non-statistical in law.
\end{corollary}
Now we are going to study a special statistical bifurcation scenario for which the results above can be used to deduce information about the behavior of generic maps. Suppose the initial map $f\in \Lambda$ has an invariant measure $\nu$ such that after a small perturbation of the map, the empirical measures of an arbitrarily large subset of points are close to $\nu$ for some iteration close to infinity. For instance, think of the identity map on $\mathbb{S}^1$: it can be perturbed to an irrational rotation, for which the empirical measures of almost every point converge to the Lebesgue measure, or it can be perturbed to a Morse-Smale map having one attracting fixed point, so that the empirical measures of almost every point converge to the Dirac mass at that attracting fixed point.
Such measures can be interpreted as potential physical measures with full basin for the initial dynamics: measures $\nu$ such that, for some small perturbation of the initial map and some large iteration, the empirical measures of a large set of points are close to $\nu$. We collect these measures in the set $\mathcal M_{\Lambda,f}$, defined precisely as below:
$$\mathcal M_{\Lambda, f}:=\{\nu\in \mathcal M_1(X)|\delta_{\nu}\in\mathcal B_{\Lambda,f}\}.$$
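The two perturbation scenarios just described can be checked numerically. The following Python sketch is purely illustrative (the perturbation size, the rotation number and the orbit lengths are our own choices, not part of the construction): it measures the portion of a finite orbit lying in a fixed arc, for an irrational rotation of $\mathbb{S}^1$ and for a circle map with an attracting fixed point.

```python
import math

def empirical_fraction(step, x0, n, a, b):
    """Mass the n-th empirical measure of x0 gives to the arc [a, b)."""
    x, hits = x0, 0
    for _ in range(n):
        if a <= x < b:
            hits += 1
        x = step(x)
    return hits / n

# Perturbation 1: an irrational rotation; empirical measures tend to Lebesgue.
alpha = (math.sqrt(5) - 1) / 2                  # golden-mean rotation number
rot = lambda x: (x + alpha) % 1.0

# Perturbation 2: a Morse-Smale-type circle map with an attracting fixed
# point at 0; empirical measures tend to the Dirac mass at 0.
eps = 0.05
ms = lambda x: (x - eps * math.sin(2 * math.pi * x)) % 1.0

frac_rot = empirical_fraction(rot, 0.1, 100000, 0.0, 0.3)  # ~ Leb([0, 0.3)) = 0.3
frac_ms = empirical_fraction(ms, 0.1, 100000, 0.0, 0.01)   # ~ delta_0([0, 0.01)) = 1
print(frac_rot, frac_ms)
```

For the rotation the orbit equidistributes, so the reported fraction is close to the Lebesgue measure $0.3$ of the arc; for the second map essentially the whole orbit is eventually trapped near the attracting fixed point, so the fraction is close to $1$.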
\begin{theorem}\label{generic stat behav}
Let $\Lambda$ be a Baire space of self-mappings of $X$ endowed with a topology finer than the $C^0$-topology. For a Baire generic map $f\in\Lambda$, the empirical measures of $\mu$-almost every point $x\in X$ accumulate to each measure in $\mathcal M _{\Lambda,f}$; in other words:
\begin{align}\label{genericity of large accumulation}
\text{for } \mu\text{-a.e. } x\in X,\quad \mathcal M_{\Lambda,f}\subset acc(\{e^f_n(x)\}_{n\in\mathbb N}).
\end{align}
\end{theorem}
\begin{proof}
To prove the theorem it suffices to show that if $f\in \Lambda$ is a continuity point of the map $\mathcal E$ it satisfies condition \eqref{genericity of large accumulation}. This is because by Corollary \ref{cor of Fort} the continuity points of the map $\mathcal E$ form a Baire generic subset of $\Lambda$.
Take any measure $\nu$ inside $\mathcal M_{\Lambda,f}$. Theorem \ref{thm.generic BLambda} implies that $\delta_\nu\in \mathcal E(f)$. Now there are two possibilities: either there is a number $n\in \mathbb N$ such that $\hat{e}_n(f)=\delta_\nu$, or not. If not, there is a sequence $\{n_i\}_{i\in \mathbb N}$ converging to infinity such that
$$\lim_{i\to\infty}\hat{e}_{n_i}(f)=\delta_\nu.$$
So for a small neighbourhood $U\subset\mathcal M_1(X)$ of $\nu$, we have:
$$\lim_{i\to \infty} \hat{e}_{n_i}(f)(U)=\delta_\nu(U)=1,$$
and by equation \eqref{def e^}, in the definition of $\hat{e}_n(f)$, we obtain:
$$\lim_{i\to \infty}(\int_{X}\delta_{e_{n_i}^f(x)}d\mu)(U)=1.$$
So for $\mu$-almost every point $x\in X$ we have:
$$ \lim_{i\to\infty}\delta_{e_{n_i}^f(x)}(U)=1. $$
Since $U$ is an arbitrary neighbourhood, we conclude that for $\mu$-almost every point $x\in X$, the measure $\delta_\nu$ is contained in the accumulation points of the sequence $\{\delta_{e_{n_i}^f(x)}\}_i$. But this is equivalent to saying that $\nu$ is in the accumulation points of the sequence $\{e_{n_i}^f(x)\}_i$. So the measure $\nu$ is an accumulation point of the sequence $\{e_n^f(x)\}_{n\in\mathbb N}$, which is what we sought.
It remains to check the case that there is a number $n\in\mathbb N$ such that $\hat{e}_n(f)=\delta_\nu$. In this case, again by using \eqref{def e^} in the definition of $\hat{e}_n(f)$ we obtain:
$$\int_{X}\delta_{e_n^f(x)}d\mu=\delta_\nu,$$
so $\mu$-almost every $x\in X$ has the property that the measure $e_n^f(x)$ is equal to $\nu$. Recalling that $\nu$ is an $f$-invariant measure, every point $x$ with this property must be a periodic point, and $\nu$ must be the invariant probability measure supported on the orbit of $x$. So the measure $\nu$ obviously lies in the accumulation points of the sequence $\{e_n^f(x)\}_{n\in\mathbb N}$. This finishes the proof.
\end{proof}
If one can find any information about the set $\mathcal M_{\Lambda,f}$ for a generic map $f\in\Lambda$, then by Theorem \ref{generic stat behav} it translates into information about the statistical behavior of $\mu$-almost every point, for a generic subset of maps.
The following lemma shows how the set $\mathcal M_{\Lambda,f}$ depends on the map $f$:
\begin{lemma}\label{MLambda,f usc}
The map sending $f\in\Lambda $ to the set $\mathcal M_{\Lambda, f}$ is upper semi-continuous.
\end{lemma}
\begin{proof}
Let $\{f_n\}_{n\in\mathbb N}$ be a sequence converging to $f\in \Lambda$. We need to prove that if for each $n\in \mathbb N$ the map $f_n$ statistically bifurcates toward a measure $\nu_n\in \mathcal M_{\Lambda, f_n}$ through perturbations in $\Lambda$, and the sequence $\{\nu_n\}_n$ converges to a measure $\nu$, then the map $f$ statistically bifurcates toward $\nu$ through perturbations in $\Lambda$. Considering that for $n$ large enough, small perturbations of the map $f_n$ are small perturbations of the map $f$, the rest of the proof is straightforward.
\end{proof}
Now let us see what the consequence of this lemma and Theorem \ref{generic stat behav} is when we know that the maps in a dense subset bifurcate toward each Dirac mass at an invariant measure. Before that, we introduce the following definition, which was used for the first time by Hofbauer and Keller in \cite{HK2}:
\begin{definition}
A map $f\in\Lambda$ is said to have \emph{maximal oscillation} if the empirical measures of almost every point accumulate to every invariant measure in $\mathcal{M}_1(f)$.
\end{definition}
\begin{proposition}[Maximal oscillation]\label{maximal-oscil}
If there is a dense subset $D\subset\Lambda$ such that any map $f\in D$ bifurcates toward the Dirac mass at each of its invariant measures through perturbations in $\Lambda$, or in other words $\mathcal M_{\Lambda,f}=\mathcal M_1(f)$, then a generic $f\in\Lambda$ has maximal oscillation.
\end{proposition}
\begin{proof}
By Lemma \ref{MLambda,f usc} the map sending $f$ to $\mathcal M_{\Lambda,f}$ is upper semi-continuous. By Lemma \ref{semi-cont inv meas}, the map sending $f$ to $\mathcal M_1(f)$ is also upper semi-continuous. So by applying Fort's theorem we can find a Baire generic subset $\mathcal B\subset \Lambda$ such that any $f$ in this set is a continuity point of each of these maps. Now we can approach each map $f$ in $\mathcal B$ by maps $g$ in $D$, for which we know that $\mathcal M_1(g)$ and $\mathcal M_{\Lambda,g}$ coincide. So $\mathcal M_1(f)$ and $\mathcal M_{\Lambda,f}$ coincide as well. By Theorem \ref{generic stat behav} there is a Baire generic subset of $\Lambda$ such that for any map $f$ in this set the empirical measures of $\mu$-almost every point $x\in X$ accumulate to each of the measures in $\mathcal M _{\Lambda,f}$. The intersection of this Baire generic set with $\mathcal B$ is still Baire generic, and for a map $f$ in this intersection the empirical measures of $\mu$-almost every point $x\in X$ accumulate to each of the measures in $\mathcal M_1(f)$.
\end{proof}
Hofbauer and Keller proved in \cite{HK2} that there is an uncountable set of parameters $\lambda$ for which the logistic map $f_{\lambda}(x)=\lambda x(1-x)$, restricted to the interval $[0,1]$, has maximal oscillation. Let us denote the closure of this set of parameters by $\Lambda$. It can be shown that this is indeed the closure of the set of parameters for which the critical point of the map lies in the preimages of some repelling periodic point, and by Jakobson's theorem \cite{Ja81} we know this set has positive Lebesgue measure. As a corollary of Proposition \ref{maximal-oscil} we can give the following improvement of the Hofbauer and Keller result:
\begin{theorem}
The set of parameters $\lambda$ for which the map $f_\lambda $ has maximal oscillation is a Baire generic subset of $\Lambda$.
\end{theorem}
\begin{proof}
Take the set $D$ in Proposition \ref{maximal-oscil} to be the set of Hofbauer and Keller parameters.
\end{proof}
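As a numerical counterpoint (an illustration of the definitions only, with parameter values and iteration counts of our own choosing): at the parameter $\lambda=4$ the critical point falls, after two steps, on the repelling fixed point $0$, so $\lambda=4$ belongs to the closure $\Lambda$ considered above; yet this map admits an absolutely continuous invariant measure (the arcsine law, with mean $1/2$), and finite-time Birkhoff averages of Lebesgue-typical points settle down instead of oscillating.

```python
def birkhoff_average(lam, x0, n, phi=lambda x: x):
    """n-th Birkhoff average (1/n) * sum_{k<n} phi(f^k(x0)) for the
    logistic map f(x) = lam * x * (1 - x)."""
    x, s = x0, 0.0
    for _ in range(n):
        s += phi(x)
        x = lam * x * (1.0 - x)
    return s / n

# At lam = 4 the empirical measures of Lebesgue-a.e. point converge to the
# arcsine measure, whose mean is 1/2, so the averages stabilize near 0.5.
avg = birkhoff_average(4.0, 0.2, 100000)
print(avg)
```

At genuine Hofbauer-Keller parameters, by contrast, the same finite-time averages keep oscillating as $n$ grows, between values corresponding to different invariant measures; this non-convergence is exactly the maximal oscillation property.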
We will see in the next chapter that the scenario described in Proposition \ref{maximal-oscil} actually happens in the context of complex dynamics, where $\Lambda$ is a subset of the space of rational maps of the Riemann sphere with a fixed degree, called the ``maximal bifurcation locus''.
In the next section we present an example for application of Theorem \ref{generic stat behav} to a special class of maps.
\section{Non-statistical Anosov-Katok maps}\label{sec.Anosov-Katok}
In \cite{AK70} Anosov and Katok introduced a method for obtaining Lebesgue-conservative ergodic maps with unexpected metric properties on manifolds which admit a free $\mathbb{S}^1$ action. They considered a class of conservative maps on such a manifold that can be approximated by periodic maps (like rational rotations of the torus) and proved that the set of ergodic transformations is a Baire generic subset of this space (i.e., an intersection of countably many open and dense subsets). Herman and Fathi in \cite{FH77} pushed their method forward to construct minimal and uniquely ergodic maps. They also proved that these maps form a second category subset of the space of maps that can be approximated by periodic ones. Being a $G_\delta$ subset had been previously known for properties like ergodicity, minimality and transitivity; the main contribution of the mentioned works in proving the genericity of these properties was to use this new method of Anosov and Katok to conclude the density of such properties. Herman could also apply the Anosov-Katok method to construct exotic invariant sets for holomorphic maps of the Riemann sphere. Here we use the Anosov-Katok method to construct diffeomorphisms of the annulus with unexpected statistical properties and to prove their genericity.\\
Let us denote the annulus $[0,1]\times\mathbb{R}/\mathbb Z$ by $\mathbb{A}$ and, for $r\in [0,\infty]$, the space of all $C^r$ orientation preserving diffeomorphisms of $\mathbb{A}$, endowed with the $C^r$-topology, by $\text{Diff}_{+}^r(\mathbb{A})$. We denote the closure of the set of all $C^r$ diffeomorphisms of the annulus which are $C^r$-conjugate to a rotation by $\mathcal{AK}^r$ and call it the space of $C^r$ Anosov-Katok maps. We also use $\mathcal{AK}_{vol}^r$ to denote the closure of the set of $C^r$ volume preserving diffeomorphisms conjugate to a rotation with a conjugacy fixing every point of the boundary. The spaces $\mathcal{AK}^r$ and $\mathcal{AK}_{vol}^r$, endowed with the induced $C^r$-topology, are Baire spaces. We recall that for any measure $\nu\in\mathcal M_1(X)$, the Dirac mass on $\nu$, which is an element of $\mathcal M_1(\mathcal M_1(X))$, is denoted by $\hat{\delta}_\nu$. \\
Before stating our main result in this section let us recall a related result of Fayad and Katok that we will use later in our proof:
\begin{theorem*}[{Fayad-Katok \cite[Theorem 3.3]{FK04}}]\label{FK04}
A Baire generic map in the space $\mathcal {AK}_{vol}^\infty$ has only three ergodic measures: the two one-dimensional Lebesgue measures on the boundary components and the volume measure of the annulus.
\end{theorem*}
Now we are going to prove the Baire genericity of non-statistical behavior, and indeed of maximal oscillation, in the set of Anosov-Katok maps $\mathcal{AK}^r$.
\begin{theorem}\label{thmC}
A Baire generic map in the set of Anosov-Katok maps $\mathcal {AK}^r$ has exactly two ergodic invariant measures, each of which is supported on a different boundary component of the annulus; moreover, the map is maximally oscillating.
\end{theorem}
\begin{remark}
Note that an Anosov-Katok map has at least two invariant measures which are supported on different boundary components of the annulus.
\end{remark}
We need the following lemma to prove the theorem:
\begin{lemma}\label{lemma-AK}
Let $C$ be one of the connected components of the boundary of $\mathbb{A}$, and let $f$ be an arbitrary map in $\mathcal{AK}^r$. Then there is a measure $\nu$ supported on the set $C$ such that $f$ statistically bifurcates towards $\hat{\delta}_{\nu}$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma-AK}]
Since $f$ can be approximated by maps which are conjugate to a rotation, there is a rational number $\frac{p}{q}$ and a $C^r$ diffeomorphism $h$ such that the map $h\circ R_{\frac{p}{q}}\circ h^{-1}$ is close to $f$.
Let
$$B:=[r_1,r_2]\times \mathbb R/ \mathbb Z,$$
for some distinct real numbers $r_1,r_2\in (0,1)$. Take a real number $\theta>0$ and define
$$B_1:= [r_1,r_2]\times [0,\theta)$$
and
$$B_2:= [r_1,r_2]\times [\theta,1). $$
\begin{figure}
\centering
\includegraphics[scale=0.15]{Anosovkatok.png}
\caption{The map $\hat{g}$}
\label{Anosov-Katok-pic}
\end{figure}
\begin{sublemma}\label{sublemma}
For any $\sigma< 1$ close to $1$ and $\epsilon>0$ small, there is a map $\hat{g} \in\mathrm{Diff}_{+}^r(\mathbb{A})$ with the following properties:
\begin{itemize}
\item $\hat{g}$ is the identity on a neighborhood of the set $C$,
\item $Leb(\hat{g}(B_{1}))>\sigma$,
\item $\hat g(B_2)$ is included in the $\epsilon$-neighborhood $N_\epsilon(C)$ of $C$.
\end{itemize}
\end{sublemma}
\begin{proof}
Using bump functions, we construct a map $\hat g$ as depicted in Figure \ref{Anosov-Katok-pic}. The technical details are left to the reader.
\end{proof}
Now let $\hat{g}$ be a map given by the sublemma. This map can be lifted through the covering map $\pi:\mathbb{A}\to \mathbb{A}$, $\pi(r,\theta)=(r,q\theta)$. Let $g$ be the lift of $\hat{g}$ which is the identity around the set $C$. The diffeomorphism $g$ has similar properties:
\begin{itemize}
\item ${g}$ is the identity on a neighbourhood of the set $C$,
\item $Leb({g}(\pi^{-1}(B_{1})))>\sigma$,
\item ${g}(\pi^{-1}(B_{2}))\subset N_\epsilon(C)$.
\end{itemize}
Now note that $g$ commutes with $R_{\frac{p}{q}}$ so:
\begin{equation*}
h\circ g \circ R_{\frac{p}{q}} \circ g^{-1} \circ h^{-1}=h\circ R_{\frac{p}{q}} \circ h^{-1}
\end{equation*}
Choose $\alpha'$ irrational and sufficiently close to $\frac{p}{q}$, so that $h\circ g \circ R_{\alpha'} \circ g^{-1} \circ h^{-1}$ is arbitrarily close to $h\circ R_{\frac{p}{q}} \circ h^{-1}$. Indeed $h\circ g$ is $C^r$, and the map sending $\alpha$ to $h\circ g \circ R_{\alpha} \circ g^{-1} \circ h^{-1}$ is hence continuous. Since $\alpha'$ is irrational, the orbit closure of any point in $\mathbb{A}$ under iterating the map $h\circ g \circ R_{\alpha'} \circ g^{-1} \circ h^{-1}=:f'$ is the $h\circ g$-image of the orbit closure of a point under iterating the map $R_{\alpha'}$, which is a vertical circle in $\mathbb{A}$. So for any point $x\in h\circ g(\pi^{-1}(B_{2}))$, the orbit closure of $x$ is the $h\circ g$-image of a vertical circle $C'$ intersecting $\pi^{-1}(B_{2})\subset B$, and hence contained in $\pi^{-1}(B)$. The map $h\circ g \circ R_{\alpha'} \circ g^{-1} \circ h^{-1}|_{h\circ g(C')}$ is conjugate to $R_{\alpha'}|_{C'}$.
Now note that if $R_{\alpha'}^{n}\circ g^{-1}\circ h^{-1}(x) \in \pi^{-1}(B_{i})$ then $(f')^{n}(x)\in h\circ g(\pi^{-1}(B_{i}))$ for $i\in \{1,2\}$. The orbit of each point of $C'$, on average, spends a ${\theta}$ portion of the time in $\pi^{-1}(B_1)$ and a ${1-\theta}$ portion of the time in $\pi^{-1}(B_2)$. So the orbit of the point $x$ spends a ${\theta}$ portion of the time in $h\circ g(\pi^{-1}(B_1))$ and a ${1-\theta}$ portion of the time in $h\circ g(\pi^{-1}(B_2))$. By choosing $\theta$ and $\epsilon$ sufficiently small, we can guarantee that the asymptotic averages of any point in $h\circ g(\pi^{-1}(B_1))$ are arbitrarily close to the measure $\nu_{f'}$ which is the pushforward of the Lebesgue measure on the boundary component $C$ by the map $h\circ g$. If $\sigma$ is chosen sufficiently close to one, then the map $g$ is such that $Leb(h\circ g(\pi^{-1}(B_1)))$ is close to one; since for a large number $n$ the $n^{\text{th}}$ empirical measures of the points in $h\circ g(\pi^{-1}(B_1))$ are close to $\nu_{f'}$, the measure $\hat{e}^{f'}_n$ is close to $\hat{\delta}_{\nu_{f'}}$. Now taking $\nu$ to be any accumulation point of the measures $\nu_{f'}$ as $f'$ approaches $f$, according to the definition we see that $f$ statistically bifurcates toward $\hat{\delta}_{\nu}$, and we are done.
\end{proof}
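The time-portion estimate in the proof above rests on unique ergodicity of the irrational rotation: the orbit of any point asymptotically spends a $\theta$ portion of its time in $\pi^{-1}$ of an angular sector of length $\theta$, i.e., in a union of $q$ arcs of total length $\theta$. A quick numerical sanity check in Python (the values $q=3$, $\theta=0.2$, the rotation number $\sqrt{2}-1$ and the starting point are our own illustrative choices):

```python
import math

def time_portion_in_preimage(alpha, q, theta, x0, n):
    """Portion of the first n rotation iterates of x0 lying in
    pi^{-1}([0, theta)) for pi(t) = q*t (mod 1), i.e. in the union of the
    q arcs [j/q, j/q + theta/q), j = 0, ..., q-1."""
    x, hits = x0, 0
    for _ in range(n):
        if (q * x) % 1.0 < theta:
            hits += 1
        x = (x + alpha) % 1.0
    return hits / n

alpha = math.sqrt(2) - 1                 # an irrational rotation number
portion = time_portion_in_preimage(alpha, 3, 0.2, 0.37, 200000)
print(portion)                           # close to theta = 0.2
```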
\begin{proof}[Proof of Theorem \ref{thmC}]
Lemma \ref{lemma-AK} implies that for any map $f\in \mathcal{AK}^r$ there are two measures $\nu_{1,f}$ and $\nu_{2,f}$, supported on different connected components of the boundary of $\mathbb{A}$, such that $f$ statistically bifurcates toward both $\hat{\delta}_{\nu_{1,f}}$ and $\hat{\delta}_{\nu_{2,f}}$. Now using Theorem \ref{generic stat behav} we conclude that for a generic map $f\in\mathcal{AK}^r$ and for almost every point $x$ in the phase space, the set of accumulation points of the sequence $\{{e}_n^f(x)\}_n$ contains at least the two measures ${\nu_{1,f}}$ and ${\nu_{2,f}}$. We are going to show that generically these two measures are the only ergodic invariant measures of the map $f$, and that the empirical measures of almost every point accumulate to every convex combination of these two ergodic measures (which is indeed the whole space of invariant measures); hence $f$ is maximally oscillating. Note that since the map which sends each dynamics to its set of invariant measures is upper semi-continuous, and the continuity points of a semi-continuous map form a generic set, we can choose $f$ to be a continuity point of this map. Now approximate $f$ by a map of the form $h\circ g\circ R_{\alpha'}\circ g^{-1}\circ h^{-1}$ coming from Lemma \ref{lemma-AK}. Using Theorem \ref{FK04}, we know that the map $R_{\alpha'}$ can be approximated in the $C^\infty$-topology (and hence in the $C^r$-topology) by a map $e\in \mathcal{AK}_{vol}^\infty$ which has only three ergodic measures: the two one-dimensional Lebesgue measures on the boundaries and the volume measure of the annulus. The map $h\circ g\circ e \circ g^{-1} \circ h^{-1}$ is close to the initial map $f$ and has only three ergodic invariant measures, which are the pushforwards of the three ergodic measures of $e$ by the map $h\circ g$. Note that if in Sublemma \ref{sublemma} the numbers $r_1$ and $r_2$ are chosen close to $0$ and $1$, then the set $B_2$ has measure close to one.
In this case observe that the pushforward of the volume measure by the map $h\circ g$ is a measure which is close to the pushforward of the one-dimensional Lebesgue measure of one of the boundary components (the one denoted by $C$ in the lemma). Hence the set of invariant measures of the map $h\circ g\circ R_{\alpha'}\circ g^{-1}\circ h^{-1}$ is a triangle that has two of its vertices very close to each other. We know that the map sending the dynamics to its set of invariant measures is upper semi-continuous (see Lemma \ref{semi-cont inv meas}), and hence it is continuous for maps in a Baire generic set. So we can assume that $f$ is a continuity point of this map, and then the set of invariant measures of $f$ is contained in an arbitrarily small neighbourhood of a triangle which is arbitrarily close to a segment. So the set of invariant measures of $f$ is a segment. But this means that $f$ has exactly two ergodic invariant measures. Note that these measures are supported on different boundary components of the annulus. So on each boundary component of the annulus there is only one ergodic measure, and hence any invariant measure supported on one of the boundary components is equal to the corresponding ergodic measure on that component. So the two measures $\nu_{1,f}$ and $\nu_{2,f}$ toward which $f$ statistically bifurcates are exactly the two ergodic measures of $f$. Moreover, since these two measures are among the accumulation points of the sequence of empirical measures of almost every point, and the set of invariant measures is the line segment between these two measures, the sequence of empirical measures of almost every point has to accumulate to every point of this line segment, and this finishes the proof.
\end{proof}
\section{Comparison between different versions}\label{sec.comparison}
In this section we compare different versions of defining statistical instability and non-statistical dynamics and show how they are related.
The first proposition describes the relation between the different versions of defining non-statistical maps.
\begin{proposition}\label{prop hier non-stat in law}
Suppose $f:X\to X$ is a continuous map of a compact metric space. \\
i) If $f$ is non-statistical in law, then it is $L^1$ non-statistical. \\
ii) If $f$ is $L^1$ non-statistical, then it is non-statistical.
\end{proposition}
\begin{proof}
i) Let $f$ be non-statistical in law. So by definition the sequence $\{\hat{e}_n(f)=(e^f_n)_* \mu\}_n$ is not convergent. We recall that
\[(e_n^f)_* (\mu)=\int_X\hat{\delta}_{e^f_n(x)}d\mu(x),\]
where $\hat{\delta}_{e^f_n(x)}\in \mathcal{M}_1(\mathcal{M}_1(X))$ is the Dirac mass supported on the point $e^f_n(x)\in \mathcal{M}_1(X)$.\\
Suppose to the contrary that $f$ is not $L^1$ non-statistical. So the sequence of maps $\{e_n^f:X\to \mathcal{M}_1(X)\}$
is convergent in the $L^1$ topology. Let us denote the limit of this sequence by $e^f_\infty$. Now we show that $(e^f_n)_*\mu$ converges to $(e^f_\infty)_*\mu$, which is a contradiction. For simplicity we denote $(e_n^f)_*\mu$ by $\nu_n$ and $(e_\infty^f)_*\mu$ by $\nu_\infty$. We recall that
\[\hat{d}_w(\nu_n,\nu_\infty)=\min\limits_{\xi\in\pi(\nu_n,\nu_\infty)}\int_{\mathcal{M}_1(X)\times \mathcal{M}_1(X)} d_w(e,e') d\xi(e,e'),\]
where $\pi(\nu_n,\nu_\infty)$ is the set of all probability measures on $\mathcal{M}_1(X)\times \mathcal{M}_1(X)$ which project to $\nu_n$ and $\nu_\infty$ under the projections to the first and second coordinate respectively. Consider the following element of $\pi(\nu_n,\nu_\infty)$:
\[\xi_n:=\int_X\delta_{(e_n^f(x),\,e_\infty^f(x))}\,d\mu(x),\]
that is, the pushforward of $\mu$ under the map $x\mapsto (e_n^f(x),e_\infty^f(x))$.
We have
$$\hat{d}_w(\nu_n,\nu_\infty)\leqslant \int_{\mathcal{M}_1(X)\times \mathcal{M}_1(X)}d_w(e,e')d\xi_n(e,e').$$
On the other hand,
\[\int_{\mathcal{M}_1(X)\times \mathcal{M}_1(X)}d_w(e,e')d\xi_n(e,e')=\int_Xd_w(e^f_n(x), e^f_\infty(x))d\mu=d_{L^1}(e_n^f, e_\infty^f).\]
So we obtain $\hat{d}_w(\nu_n,\nu_\infty)\leqslant d_{L^1}(e_n^f,e_\infty^f)$, which implies that $\nu_n$ converges to $\nu_\infty$, a contradiction. \\
ii) Let $f$ be $L^1$ non-statistical. We show that $f$ is non-statistical, i.e., that the maps $e_n^f:X\to \mathcal{M}_1(X)$ do not converge almost surely. On the contrary, suppose the maps $e_n^f:X\to \mathcal{M}_1(X)$ converge almost surely to a map $e_\infty^f:X\to \mathcal{M}_1(X)$. Then the map $d_w(e_n^f(\cdot), e_\infty^f(\cdot)):X\to \mathbb{R}$ converges to zero almost surely, and by the dominated convergence theorem we obtain that
\[ d_{L^1}(e_n^f,e_\infty^f)=\int_X d_w(e_n^f(x),e^f_\infty(x))d\mu(x)\to 0 \quad (n\to \infty).\]
This is a contradiction.
\end{proof}
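The coupling used in part (i), which transports $\nu_n$ to $\nu_\infty$ along the map $x\mapsto(e^f_n(x),e^f_\infty(x))$, is the standard device for bounding a Wasserstein-type distance by an $L^1$ distance. As a down-to-earth illustration (for measures on $\mathbb{R}$ rather than on $\mathcal M_1(X)$; all choices below are ours), the $W_1$ distance between the empirical measures of two equal-size samples is realized by matching sorted values, and translating a sample by $c$ costs $|c|$:

```python
import random

def wasserstein1_empirical(xs, ys):
    """W_1 distance between the empirical measures of two equal-size samples
    on the real line; the optimal coupling matches sorted values."""
    assert len(xs) == len(ys)
    xs, ys = sorted(xs), sorted(ys)
    return sum(abs(a - b) for a, b in zip(xs, ys)) / len(xs)

random.seed(0)
sample = [random.random() for _ in range(1000)]
shifted = [x + 0.5 for x in sample]     # push the measure forward by x -> x + 0.5
d = wasserstein1_empirical(sample, shifted)
print(d)                                # the translation coupling gives W_1 = 0.5
```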
The next proposition shows that the same hierarchy holds for the different versions of defining statistically unstable maps.
\begin{proposition}\label{prop hier stat stab}
Suppose $\Lambda$ is a set of continuous self mappings of a compact metric space $X$. It holds true that:\\
i) If $f\in \Lambda$ is statistically unstable in law, then it is $L^1$ statistically unstable. \\
ii) If $f\in \Lambda$ is $L^1$ statistically unstable, then it is statistically unstable.
\end{proposition}
\begin{proof}
i) If $f$ is statistically unstable in law, then by definition there are at least two different elements $\hat{\nu}_1$ and $\hat{\nu}_2$ in $\mathcal{M}_1(\mathcal{M}_1(X))$ toward which $f$ statistically bifurcates. This means that there are two sequences of maps $\{f^1_k\}_k$ and $\{f^2_k\}_k$ converging to $f$, and two sequences of positive integers $\{n^1_k\}_k$ and $\{n^2_k\}_k$ converging to infinity, such that
\[\lim\limits_{k\to \infty} \hat{e}_{n^1_k}(f^1_k) =\hat{\nu}_1, \qquad \lim\limits_{k\to \infty} \hat{e}_{n^2_k}(f^2_k) =\hat{\nu}_2.\]
Now suppose on the contrary that $f$ is $L^1$ statistically stable. So the sequences $\{e^{f^1_k}_{n^1_k}:X\to \mathcal{M}_1(X)\}_k$ and $\{e^{f^2_k}_{n^2_k}:X\to \mathcal{M}_1(X)\}_k$
both converge to a map $e^f_\infty:X\to \mathcal{M}_1(X)$ in the $L^1$ topology. Using the same arguments as in the proof of part (i) of the previous proposition, we conclude that both sequences $\{(e^{f^1_k}_{n^1_k})_* \mu\}_k$ and $\{(e^{f^2_k}_{n^2_k})_* \mu\}_k$
converge to $(e_\infty^f)_*\mu$. Hence we have $\hat{\nu}_1=\hat{\nu}_2=(e_\infty^f)_*\mu$, and since $\hat{\nu}_1$ and $\hat{\nu}_2$ were distinct elements of $\mathcal{M}_1(\mathcal{M}_1(X))$, this is a contradiction.\\
ii) Suppose $f$ is statistically stable, so there is a map $e_\infty^f:X\to \mathcal{M}_1(X)$ such that for any sequence $\{f_k\}_k$ converging to $f$ and any sequence of natural numbers $\{n_k\}_k$ converging to infinity, the sequence of maps $\{e_{n_k}^{f_k}:X\to \mathcal{M}_1(X)\}_k$ converges almost surely to the map $e_\infty^f$. Using the dominated convergence theorem, we conclude that this sequence converges in the $L^1$ topology to the map $e_\infty^f$, and hence $f$ cannot be $L^1$ statistically unstable.
\end{proof}
Let us mention that Proposition \ref{prop hier stat stab} implies a theorem of Avila and Bochi \cite[Theorem B]{AB09}:
\begin{theorem*}[Avila-Bochi 09]
Assume $X$ is a smooth compact connected manifold and $m$ a smooth volume measure on $X$. For any conservative map $f$ of $X$, denote the ergodic decomposition of $m$ by $\kappa_f$. Fix an integer $r\geqslant 0$. The points of continuity of the map
$$f\in \text{Diff}^{r}_m(X)\mapsto {\kappa}_f\in \mathcal M _1(\mathcal M_1(X)),$$
form a residual set.
\end{theorem*}
Here we can give a short proof of this theorem using Corollary \ref{cor.generic.conservative} and Proposition \ref{prop hier stat stab}:
\begin{proof}
We prove that every map in the set of $L^1$ statistically stable maps, which by Corollary \ref{cor.generic.conservative} is residual in $\text{Diff}^r_m(X)$, is a continuity point of the ergodic decomposition. Let $f$ be $L^1$ statistically stable. By Proposition \ref{prop hier stat stab}, $f$ is statistically stable in law. So for any sequence $\{f_k\}_k$ converging to $f$ and any sequence $\{n_k\}_k$ converging to infinity, the sequence $\{\hat{e}_{n_k}(f_k)\}_k$ converges to the ergodic decomposition $\kappa_f=\lim\limits_{n\to \infty}\hat{e}_n(f)$; hence $\kappa_{f_k}$ converges to $\kappa_f$. So $f$ is a continuity point of the map sending $f$ to $\kappa_f$.
\end{proof}
\begin{remark}
In fact, if $f\in \text{Diff}^r_m$ is $L^1$ statistically stable, then for any positive number $\epsilon >0$ there is a neighbourhood $\mathcal U$ of $f$ such that not only is the limit $e^g_{\infty}$ of the sequence of empirical functions $\epsilon$-close to $e^f_{\infty}$ for any map $g\in\mathcal U$, but also, for large enough integers $n$, the empirical functions $e^g_n$ are $\epsilon$-close to $e^f_{\infty}$. This is because
$$0=\Delta^1(f)\geqslant \limsup_{n\to \infty}\limsup_{g \to f}d_{L^1}(e^f_{\infty},e^g_n).$$
\end{remark}
\def\polhk#1{\setbox0=\hbox{#1}{\ooalign{\hidewidth
\lower1.5ex\hbox{`}\hidewidth\crcr\unhbox0}}}
Low temperature adsorption of highly quantal fluids, such
as helium or {\it para}-hydrogen ({\it p}-H$_2$), on a variety of substrates,
has been the focus of much experimental and theoretical work. A major
motivation of these studies is the exploration of the fascinating properties
that such adsorbed quantum films display, often considerably different
than those of the bulk materials.
In particular, a fluid of {\it p}-H$_2$ molecules is an interesting physical system for a number of
reasons. Because a {\it p}-H$_2$ molecule has half the mass of a helium atom,
zero-point motion can be expected to be quite significant; each molecule is
a spin-zero boson, and therefore it is conceivable that, at low enough temperature,
a {\it p}-H$_2$ fluid might display physical behavior similar to that of fluid
helium, including superfluidity.\cite{ginzburg72}
Unlike helium, though, bulk {\it p}-H$_2$ solidifies at low temperature
($T_{\rm c} \approx$ 14 K); this prevents the observation of phenomena such as
Bose Einstein condensation (BEC) and, possibly, superfluidity (SF), which are speculated
to occur in the liquid phase below $T$ $\approx$ 6 K. Solidification is due the depth of the
attractive well of the potential between two hydrogen molecules, significantly greater than that between two helium atoms. Several, attempts have been made\cite{bretz81,maris86,maris87,schindler96} to supercool bulk liquid {\it p}-H$_2$, but the search for SF (in the bulk) has so far not met with success.
Potential avenues to explore, toward stabilizing a liquid phase of {\it p}-H$_2$ to low enough temperatures that a SF transition could be observed, include the reduction of dimensionality. An extensive theoretical study of the phase diagram of {\it p}-H$_2$ in two dimensions (2D) has been recently carried out, based on Path Integral Monte Carlo (PIMC) simulations.\cite{boninsegni04b} The main result is that the equilibrium phase of the system at low $T$ is a triangular crystal, with a melting temperature $T_{\rm m}\sim$ 6.8 K. This value is approximately half that of bulk {\it p}-H$_2$, but still significantly higher than the temperature at which the system, if it remained a liquid, would undergo Bose condensation and turn superfluid, estimated at $\sim$ 2 K in 2D. Another important result of the same study is that no {\it metastable} liquid phase exists: the system remains a liquid only down to the spinodal density, below which it breaks down into solid clusters. These results, in part, cast doubts on the prospects of observing a liquid (SF) phase of {\it p}-H$_2$ at low $T$ in 2D.
The closest experimental realization of a two-dimensional system, is a film of {\it p}-H$_2$ molecules adsorbed on a substrate. However, quantum zero-point motion of adsorbed particles in the direction perpendicular to the substrate can be quite significant, as calculations for adsorbed helium films on alkali metal substrates show.\cite{boninsegni99} One might speculate that such zero-point motion may result in an effective screening of the {\it p}-H$_2$ intermolecular interactions, possibly leading to a reduction of $T_{\rm m}$, with respect to the purely two-dimensional case. Indeed, some PIMC studies\cite{boninsegni04_li} of {\it p}-H$_2$ films adsorbed on a lithium substrate yielded a melting temperature of approximately 6.5 K, i.e., slightly lower than in 2D. The interesting physical question is whether $T_{\rm m}$ may be reduced even further, by an appropriate choice of substrate, to the point where collective quantum many-body phenomena could become observable, in the liquid phase.
Experimentally, adsorbed films of {\it p}-H$_2$ on surfaces are readily accessible, and indeed have been studied extensively on a variety of substrates. For example, various techniques\cite{nielsen80,lauter90,wiechert91,vilches92} have been used to study the phase diagram and structure of monolayer {\it p}-H$_2$ films adsorbed on graphite. In a recent neutron scattering investigation of deuterium (D$_2$) films adsorbed on a krypton pre-plated graphite substrate,\cite{wiechert04} evidence of a stable liquid phase of D$_2$ down to $T\sim$1.5 K was reported. Such a result is truly remarkable, particularly considering that it pertains to the heavier isotope of hydrogen, which should display an even stronger tendency to crystallize than {\it p}-H$_2$. Motivated by this finding, we have undertaken a theoretical study of the low temperature phase diagram of {\it p}-H$_2$ films adsorbed on krypton pre-plated graphite.
On general grounds, one would expect {\it p}-H$_2$ to remain liquid as well, on account of its lighter mass, even though the intermolecular interaction is slightly more attractive for {\it p}-H$_2$.\cite{wang03,shi03} We explore a fairly simple model of our system of interest in its ground state ($T$=0 limit) to serve as a baseline for future, more refined calculations. Energetic and structural properties of a layer of {\it p}-H$_2$ molecules adsorbed on Kr/Graphite are investigated
theoretically, by means of Path Integral ground state (PIGS) Monte Carlo simulations.
The main results of this study are the following:
\begin{enumerate}
\item {A stable commensurate solid {\it p}-H$_2$ monolayer exists, with coverage (i.e., 2D density) $\theta_\circ$=0.0636 \AA$^{-2}$. This solid is commensurate with the krypton plating.}
\item {An incommensurate solid {\it p}-H$_2$ monolayer exists, with coverage $\theta_{1}$=0.0716 \AA$^{-2}$, compressible up to $\theta_{2}$=0.0769 \AA$^{-2}$.}
\item{No evidence is observed of a thermodynamically stable {\it liquid} phase in the ground state of this system.}
\item{Evidence of quantum exchange between {\it p}-H$_2$ molecules is not found in the $T$=0 limit; this in turn indicates absence of superfluid behavior in this system.}
\end{enumerate}
The remainder of this manuscript is organized as follows: Sec. \ref{model} offers a description of the model used for our system
of interest, including a discussion of the potentials and justifications for the main underlying
assumptions. Sec. \ref{method} involves a brief discussion of the computational technique and specific details of its implementation, in addition to details of calibration and optimization. The results are presented in Sec. \ref{results}; finally, Sec. \ref{conclusions} is a summary of the
findings and our concluding remarks.
\section{MODEL}
\label{model}
We consider a system of $N$ {\it p}-H$_2$
molecules, sitting above a substrate consisting of a single atomic layer of krypton, below which is a graphite substrate, which we assume to be smooth.
The $L$ krypton atoms are assumed fixed in space at positions ${\bf R}_k$, owing to their relatively large mass. They sit at a height of 3.46 \AA\ over the graphite; this distance corresponds to the minimum of the most accurate Kr-graphite interaction potential currently available.\cite{steele,gooding} All of the Kr atoms and {\it p}-H$_2$ molecules are regarded as point particles. The model quantum many-body Hamiltonian is therefore as follows:
\begin{eqnarray}\label{hm}
\hat{H}=-\frac{\hbar^{2}}{2m}\sum_{i=1}^{N}\nabla_{i}^{2} +
\sum_{i<j}V(r_{ij}) +
\sum_{i=1}^N\sum_{k=1}^{L} U (|{\bf r}_i-{\bf R}_k|) +
\sum_{i=1}^N {\tilde U}({z}_i)
\end{eqnarray}
Here, $m$ is the mass of a {\it p}-H$_2$ molecule,
$\{{\bf r}_j\}$
(with $j$=1,2,...,$N$) are the positions of the {\it p}-H$_2$ molecules, $r_{ij}\equiv |{\bf r}_i-{\bf r}_j|$; $z_{i}$ is the height of the $i$th {\it p}-H$_2$ molecule above the graphite surface.
$V$ is the potential describing the interaction between any two {\it p}-H$_2$ molecules, and
$U$ represents the interaction of a {\it p}-H$_2$ molecule with a Kr atom.
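The three potential terms of the Hamiltonian (\ref{hm}) translate directly into a short routine. The sketch below (in Python, with generic stand-in potentials passed in as callables rather than the actual Silvera-Goldman and Lennard-Jones forms described next) only illustrates the structure of the energy sum:

```python
import math

def potential_energy(h2, kr, V, U, W):
    """Classical potential energy of one configuration, mirroring the three
    potential terms of the model Hamiltonian:
        sum_{i<j} V(r_ij) + sum_{i,k} U(|r_i - R_k|) + sum_i W(z_i).
    h2 and kr are lists of (x, y, z) positions; V, U, W are arbitrary
    pair/substrate potentials supplied by the caller."""
    e_pair = sum(V(math.dist(h2[i], h2[j]))
                 for i in range(len(h2)) for j in range(i + 1, len(h2)))
    e_kr = sum(U(math.dist(r, R)) for r in h2 for R in kr)
    e_sub = sum(W(r[2]) for r in h2)
    return e_pair + e_kr + e_sub
```

In the actual model, $V$ is the Silvera-Goldman potential, $U$ the mixed Lennard-Jones potential, and $W$ the ``3-9'' substrate potential discussed below.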
As mentioned above, the underlying graphite substrate is regarded as smooth. The justification for this assumption is that, due to the presence of the Kr spacer layer, the {\it p}-H$_2$ molecules remain at a distance of at least $\sim$ 7 \AA\ from the graphite substrate. Thus, the effect of the corrugation of the substrate should be negligible. Therefore, we use a simple ``3-9'' potential to describe the interaction of {\it p}-H$_2$ molecules with the smooth graphite substrate.\cite{gatica04} $\tilde{U}$ has the following form:
\begin{eqnarray}\label{sim}
\tilde U(z_{i}) = \frac{4 C^{3}}{27 D^{2} z_{i}^{9}} - \frac{C}{z_{i}^{3}}
\end{eqnarray}
where $C$=7913.24 \AA$^{3}$ K and $D$=259.39 K are parameters derived from the original {\it p}-H$_2$-C Lennard-Jones parameters\cite{levesque02} ($\sigma=3.18$ \AA, $\epsilon=32.05$ K) and the density of carbon atoms in graphite ($\rho=0.114$ \AA$^{-3}$).
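The quoted constants follow from integrating the {\it p}-H$_2$-C Lennard-Jones pair potential over a carbon half-space of density $\rho$, which yields $C=(2\pi/3)\epsilon\sigma^6\rho$ and a well depth $D=(2\sqrt{10}/9)\,\pi\epsilon\sigma^3\rho$. This is our reconstruction of the standard derivation, and it is easily checked numerically:

```python
import math

eps, sigma, rho = 32.05, 3.18, 0.114  # p-H2--C LJ parameters and C density

# z^-3 coefficient and well depth of the 3-9 potential, obtained by
# integrating the 6-12 pair potential over a half-space of density rho
C = (2 * math.pi / 3) * eps * sigma ** 6 * rho                  # ~7913 K A^3
D = (2 * math.sqrt(10) / 9) * math.pi * eps * sigma ** 3 * rho  # ~259 K

# the minimum of U(z) sits at z^3 = 2C/(3D) and has depth exactly -D
U = lambda z: 4 * C ** 3 / (27 * D ** 2 * z ** 9) - C / z ** 3
z_min = (2 * C / (3 * D)) ** (1 / 3)
print(round(C, 2), round(D, 2), round(z_min, 2))
```

The computed values reproduce $C$ and $D$ above, and place the potential minimum at $z\simeq2.7$ \AA.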
All pair potentials are assumed to depend only on relative distances.
The interaction $V$ is described by the Silvera-Goldman potential,\cite{Silvera1} which provides an accurate description of energetic and
structural properties of condensed {\it p}-H$_2$ at ordinary conditions of
temperature and pressure.\cite{johnson96,operetto}
The interaction of a {\it p}-H$_2$ molecule
and a Kr atom is modeled using a
standard 6-12 Lennard-Jones (LJ) potential; to our knowledge, there are no published values for the parameters of this potential for this specific interaction. Therefore, we make use of the Lorentz-Berthelot mixing rule,\cite{wang97,macgowan86} yielding $\epsilon=75.6$ K and
$\sigma=3.3$ \AA.
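The Lorentz-Berthelot rule takes a geometric mean of the well depths and an arithmetic mean of the core diameters. The pure-species inputs below are illustrative values from standard LJ tabulations, not parameters quoted in this work; slightly different literature choices account for the small difference from the quoted $\epsilon=75.6$ K and $\sigma=3.3$ \AA:

```python
import math

def lorentz_berthelot(eps1, sig1, eps2, sig2):
    """Berthelot: geometric mean of well depths;
    Lorentz: arithmetic mean of core diameters."""
    return math.sqrt(eps1 * eps2), 0.5 * (sig1 + sig2)

# illustrative pure-species LJ parameters (K, Angstrom) -- hypothetical inputs
eps_h2, sig_h2 = 34.2, 2.96     # H2
eps_kr, sig_kr = 171.0, 3.60    # Kr

eps_mix, sig_mix = lorentz_berthelot(eps_h2, sig_h2, eps_kr, sig_kr)
print(round(eps_mix, 1), round(sig_mix, 2))  # close to the quoted 75.6 K, 3.3 A
```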
The model (\ref{hm}) clearly contains important physical simplifications, such
as the neglect of zero-point motion of Kr atoms,
as well as
the restrictions to additive pairwise interactions (to the exclusion of,
for example, three-body terms), all taken to be central, and the use of the
highly simplified LJ and ``3-9'' potentials. Nonetheless, it seems a reasonable starting point, and even quantitatively we expect it to capture the bulk of the physical picture.
\section{COMPUTATIONAL METHOD}
\label{method}
Accurate ground state expectation values for quantum many-body systems
described by a Hamiltonian such as (\ref{hm}) can be computed
by means of Quantum Monte Carlo (QMC) simulations.
In this work, the method utilized is \textit{Path Integral ground state}
(PIGS), which is an extension to zero temperature of the standard,
Path Integral Monte Carlo method.\cite{Ceperley1}
PIGS is a projection technique, which filters the exact ground state wave
function out of an initial trial state. It is therefore closely related to
other ground state projection methods, such as Diffusion Monte Carlo (DMC),
but has a few distinct advantages (for a discussion, see, for instance, Ref.
\onlinecite{Sarsa1}).
Because this method is described in a number of publications, it will not be reviewed here. Some of the technical details of the calculation performed in this work (mainly, the short imaginary time propagator) are the same as in Ref. \onlinecite{boninsegni04a}.
The trial wave function utilized is of the Jastrow type:
\begin{equation}\label{trial}
\Psi_T({\bf r}_1,{\bf r}_2,...{\bf r}_N)= \biggl ( \prod_{i<j}^N e^{-v(r_{ij})}\biggr ) \times \biggl (\prod_{i=1}^N\prod_{k=1}^L
e^{-u(|{\bf r}_i-{\bf R}_k|)}\biggr ) \times \biggl ( \prod_{i=1}^N e^{-w(z_i)}\biggr )
\end{equation}
with
pseudo-potentials $w$ ({\it p}-H$_2$-graphite), $u$ ({\it p}-H$_2$-Kr), and $v$ ({\it p}-H$_2$-{\it p}-H$_2$) chosen as follows:
\begin{equation}
w(z)=\frac{\alpha}{z^3} \;\; , \;\; u(r)=\frac{\gamma}{r^5} \;\; {\rm and} \;\; v(r)=\frac{\mu}{r^5}
\end{equation}
The values of the parameters $\alpha=30$ \AA$^{3}$, $\gamma=250$
\AA$^{5}$ and $\mu=750$ \AA$^{5}$ were obtained empirically, by minimizing the energy
expectation value computed in separate variational calculations. Using the trial wavefunction as defined above,
we observe convergence of the ground state energy
estimates with a projection time 0.250 K$^{-1}$, using a time step $\tau = 7.8125\times 10^{-4}$ K$^{-1}$.
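In code, the logarithm of the Jastrow trial function (\ref{trial}) with the optimized pseudo-potentials takes only a few lines; the sketch below assumes the same conventions as Eq. (\ref{trial}) and is meant as an illustration, not a fragment of the actual simulation code:

```python
import math

ALPHA, GAMMA, MU = 30.0, 250.0, 750.0   # optimized variational parameters

def log_psi_T(h2, kr):
    """ln Psi_T for p-H2 positions h2 above fixed Kr positions kr, using
    the pseudo-potentials w(z) = ALPHA/z^3, u(r) = GAMMA/r^5, v(r) = MU/r^5."""
    lp = 0.0
    for i in range(len(h2)):
        for j in range(i + 1, len(h2)):
            lp -= MU / math.dist(h2[i], h2[j]) ** 5   # H2-H2 correlations
    for r in h2:
        for R in kr:
            lp -= GAMMA / math.dist(r, R) ** 5        # H2-Kr correlations
        lp -= ALPHA / r[2] ** 3                       # H2-graphite term
    return lp
```

By construction, the weight is suppressed whenever two molecules, or a molecule and a Kr atom, approach each other, which encodes the short-range repulsion of the interactions.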
PIGS calculations for a range of {\it p}-H$_2$ coverages were carried out, starting from an initial configuration of
para-hydrogen molecules sitting atop the Kr layer. The simulation cell consists of a 6$\times$8 triangular lattice of
Kr atoms with 4.26 \AA\ nearest neighbor spacing. Periodic boundary conditions are used in the three directions, but the simulation cell is chosen sufficiently big in the $z$ direction that they have no effect vertically. Because of the strongly attractive character of the composite substrate, for a small
enough number of hydrogen molecules, the system remains vertically
confined (i.e., {\it p}-H$_2$ molecules do not evaporate).
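All coverages quoted in this work follow from the cell geometry: the $6\times8$ triangular cell with $a=4.26$ \AA\ has area $48\,(\sqrt{3}/2)\,a^2\simeq754.4$ \AA$^2$, so that $\theta=N/A$. As a bookkeeping check, $N$=48, 58 and 64 molecules reproduce the commensurate, highest stable and highest explored coverages (with $N$=54 presumably corresponding to $\theta_1$):

```python
import math

a = 4.26                                  # Kr nearest-neighbor spacing (A)
area = 48 * (math.sqrt(3) / 2) * a ** 2   # 6x8 triangular cell, ~754.4 A^2

for N in (48, 54, 58, 64):
    print(N, round(N / area, 4))   # 0.0636, 0.0716, 0.0769, 0.0848 A^-2
```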
The systematic errors of our calculation are attributable to finite
projection time and the finite time step $\tau$.
Based on comparisons of results obtained from simulations with different
values of projection time and time step, we estimate our combined systematic error on the
total energy per {\it p}-H$_2$ molecule to be of the order of 0.7 K or less (below 0.5\%).
\section{RESULTS}
\label{results}
Physical quantities of interest include the ground state energy per {\it p}-H$_2$
molecule, $e(N)$, and the vertical {\it p}-H$_2$ density profile, $\rho(z)$, above the composite substrate. These quantities can be computed in an unbiased fashion, i.e., the variational bias arising from the initial state (i.e., trial wave function (\ref{trial})) can be essentially completely removed.\cite{Sarsa1}
\begin{figure}
\centerline{\includegraphics[height=2.75in]{energy1}}
\caption{Energy per {\it p}-H$_2$ molecule $e(N)$ (in K) computed by PIGS, as a function of the coverage $\theta$ (in \AA$^{-2}$).}
\label{energyplot1}
\end{figure}
Results for $e(N)$ are shown in Fig. \ref{energyplot1}. The main features are two energetic minima, one at $\theta_\circ = 0.0636$ \AA$^{-2}$, a coverage corresponding to commensuration ($N$=48 {\it p}-H$_2$ molecules). The second, at $\theta_{1} = 0.0716$ \AA$^{-2}$, corresponds to an incommensurate solid monolayer, which remains stable, based on an analysis of the associated chemical potential (see below), up to $\theta_{2} = 0.0769$ \AA$^{-2}$ ($N$=58 {\it p}-H$_2$ molecules).
A direct comparison can be made between our $e(N)$ curve, and that obtained by Nho and Manousakis (Ref. \onlinecite{nho03}), who studied {\it p}-H$_2$ monolayer films on bare graphite (at low temperature). The shape of the energy curve in both cases is very similar; there is an energetic minimum at commensuration, corresponding to precisely the same coverage, followed by a negative-curvature increase. Where we find a second energetic minimum, they find a change in curvature, in both cases preceding a positive curvature (thermodynamically stable) portion, although, on Kr pre-plated graphite, such a region seems more extended.
\begin{figure}
\centerline{\includegraphics[height=2.75in]{chempot}}
\caption{Coverage of {\it p}-H$_2$ molecules, $\theta$, ( \AA$^{-2}$) as a function of the chemical potential, $\mu$ (in K), computed as explained in the text. Dotted vertical line shows the chemical potential of bulk solid {\it p}-H$_2$, at the $T$=0 equilibrium density (from Ref. \onlinecite{operetto}).}
\label{chempot}
\end{figure}
We calculate the chemical potential, shown in Fig. \ref{chempot}, by first fitting the curve for $e(N)$ and then minimizing the grand canonical energy $\phi(N)=N(e(N)-\mu)$ with respect to $N$, for different values of $\mu$. As the data in Fig. \ref{chempot} show, the coverages $\theta_\circ$, and $\theta_{1}$ through $\theta_{2}$, are the only stable ones, at least up to $\theta \approx 0.0848$ \AA$^{-2}$ (the highest coverage explored in this work). The presumption is that the next thermodynamically stable configuration would be at second layer completion, $\theta \approx 0.127$ \AA$^{-2}$.
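The $\theta(\mu)$ construction behind Fig. \ref{chempot} amounts to a one-line minimization. In the sketch below the energy curve is a synthetic stand-in (not our fitted $e(N)$); only the procedure mirrors the text:

```python
def stable_N(e_of_N, N_values, mu):
    """N minimizing the grand-canonical energy phi(N) = N*(e(N) - mu)."""
    return min(N_values, key=lambda N: N * (e_of_N(N) - mu))

# hypothetical smooth fit for e(N) (in K) -- NOT the PIGS data of this work
e_toy = lambda N: -480.0 + 0.05 * (N - 48) ** 2

Ns = range(40, 65)
occupations = [stable_N(e_toy, Ns, mu) for mu in range(-490, -380, 5)]
print(occupations[0], occupations[-1])

# theta(mu) comes out as a non-decreasing step function, as in Fig. 2
assert all(a <= b for a, b in zip(occupations, occupations[1:]))
```

The monotonicity of the stable occupation in $\mu$ is a general property of this minimization, independent of the particular $e(N)$ used.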
Fig. \ref{profile} shows the (three-dimensional) {\it p}-H$_2$ density profile $\rho(z)$ for a coverage $\theta_\circ$ (commensurate solid layer), as well as the {\it p}-H$_2$ density profile for the highest stable coverage, $\theta_{2}$ (incommensurate solid layer).
Also shown in Fig. \ref{profile} is the density profile for {\it p}-H$_2$ on lithium\cite{boninsegni04_li} at low temperature (2 K). It is evident that on the substrate studied here, {\it p}-H$_2$ is more localized in the direction perpendicular to the substrate (the density profiles are much more strongly peaked), even though our profiles are at $T$=0. Thus, the physics of {\it p}-H$_2$ is considerably more 2D on this substrate than on the weak Li substrate. This is consistent with the stronger substrate attraction, and leads us to predict a melting temperature for the adsorbed film of the order of $\sim$ 7 K, i.e., comparable to that of purely 2D {\it p}-H$_2$. Obviously, however, only finite temperature calculations can address this point quantitatively.
\begin{figure}[t]
\centerline{\includegraphics[height=2.65in]{profile}}
\caption{Density profile $\rho(z)$ (in \AA$^{-3}$) of adsorbed {\it p}-H$_2$ at a coverage $\theta_\circ$ (solid line with lower maximum value), and at $\theta_{2}$ (solid line with higher maximum value). The square represents the position of the graphite substrate and the circle that of the Kr layer. Dashed line is the density profile for {\it p}-H$_2$ on a lithium substrate at the coverage $\theta$=0.070 \AA$^{-2}$ (from Ref. \onlinecite{boninsegni04_li}; it has been shifted along $z$ for direct comparison). Distances are expressed in \AA.}
\label{profile}
\end{figure}
We also see that the profile for $\theta_{2}$ is extended slightly further to the right than that for $\theta_\circ$; in accommodating more {\it p}-H$_2$ in the first layer, the molecules are ``squeezed'' to occupy regions higher above the substrate.
\begin{figure}
\centerline{\includegraphics[height=3in]{snapshot_48}}
\caption{Snapshot of a typical configuration of {\it p}-H$_2$ molecules adsorbed to the Kr/graphite substrate at a coverage of $\theta_\circ$=0.0636 \AA$^{-2}$. Large circles are Kr atoms. The positions of all molecules at each one of 320 imaginary time slices are shown as discrete paths. Distances are expressed in \AA.}
\label{snap1}
\end{figure}
\begin{figure}
\centerline{\includegraphics[height=3in]{snapshot_58}}
\caption{Same as Fig. \ref{snap1} but at a coverage of $\theta_{2}$=0.0769 \AA$^{-2}$}
\label{snap2}
\end{figure}
Snapshots of typical configurations of the {\it p}-H$_2$ solid film are displayed in Figs. \ref{snap1} and \ref{snap2}. The first, at $\theta_\circ$, shows the commensurate structure of {\it p}-H$_2$ corresponding to this system's first energetic minimum. The second is a snapshot at the highest stable coverage, $\theta_{2}$; here the {\it p}-H$_2$ molecules form an incommensurate solid, which we do not find to be rotated relative to the Kr lattice (although the size of our simulated system clearly limits our ability to resolve such an issue). 56 of the 58 molecules arrange into an approximately regular 7$\times$8 triangular lattice, with the two additional molecules packed in, giving rise to clear dislocations. Simulations at higher (thermodynamically unstable) coverages consistently placed the additional particles in the second layer.
In contrast, for {\it p}-H$_2$ on bare graphite,\cite{nho03} the somewhat stronger adsorption potential allows for an even denser packing; compression of the incommensurate phase is reported well beyond $\theta=0.0849$ \AA$^{-2}$, with the {\it p}-H$_2$ lattice rotated 5$^o$ relative to the graphite lattice. We should also mention that, due to the system size and geometries employed in this work, we could not observe such crystal phases as the uniaxially compressed and the stripe one, which have been experimentally observed and theoretically studied for {\it p}-H$_2$ on graphite;\cite{nho03} we presume that these phases should exist in this system as well, and can certainly be studied with the computational method used here. However, based on the results of Ref. \onlinecite{nho03} we do not think that their inclusion would significantly alter our main conclusion, concerning the existence of a liquid phase, which was our main interest in this work.
It should also be noted that the computational method adopted here does not allow one to make a direct estimate of the {\it p}-H$_2$ molecule exchange frequency, unlike its finite temperature counterpart (Path Integral Monte Carlo). Nevertheless, visual inspection of many-particle configurations generated in the Monte Carlo simulation shows little or no overlap of paths associated with different molecules, which is substantial evidence that many-particle permutations are absent in this system. This is consistent with the high degree of localization that molecules experience.
\section{CONCLUSIONS}
\label{conclusions}
Using a numerically exact ground state Quantum Monte Carlo method, we studied {\it p}-H$_2$ adsorption to krypton pre-plated graphite. We performed calculations based on a simple model, in which graphite corrugation is ignored, the Kr atoms in the spacer layer are assumed static and point-like, and {\it p}-H$_2$-substrate interactions are given by Lennard-Jones type potentials.
We find that there are two
stable phases of {\it p}-H$_2$, both solid; one is a monolayer commensurate with the Kr layer, while the other is an incommensurate
monolayer, compressible within a small range of coverages. Quantum exchanges of hydrogen molecules are suppressed in this system. This is similar to what is seen for hydrogen on bare graphite.\cite{nho03} Altogether, this study has provided no evidence of a thermodynamically stable liquid phase of {\it p}-H$_2$ at $T=$0 on the substrate considered here; it is unlikely, based on comparisons with {\it p}-H$_2$ on lithium from Ref. \onlinecite{boninsegni04_li} and 2D {\it p}-H$_2$ from Ref. \onlinecite{boninsegni04b}, that a liquid would appear, in our model, at temperatures as low as 1.5 K.
There are obviously several sources of uncertainty in this calculation which need to be discussed. The potentials used to describe the interactions between the {\it p}-H$_2$ and the substrate are very rough; this does not seem too important an issue, as far as the interaction of {\it p}-H$_2$ molecules with graphite is concerned, given the relatively large average distance at which molecules sit. On the other hand, a more realistic interaction potential between {\it p}-H$_2$ and krypton may conceivably alter the basic energetics shown here. Despite these issues, and other simplifications, it does not seem likely (to us) that the basic structural information will change dramatically. It is also unlikely that the basic physics will change much at temperatures as low as $\sim$ 1.5 K (namely those reached by the experiment of Ref. \onlinecite{wiechert04}), though finite temperature calculations are planned to address this concern, as are simulations of larger systems.
Thus, our preliminary conclusion is that our calculation appears to yield results in disagreement with the experimental findings of Ref. \onlinecite{wiechert04}, which report a low temperature liquid phase for the heavier hydrogen isotope (D$_2$), a species that should display an even stronger tendency to crystallize than {\it p}-H$_2$, on account of its greater mass.
\section*{Acknowledgments}
This work was supported in part by the Petroleum Research Fund of the American Chemical Society under research grant 36658-AC5, by the Natural Sciences and Engineering Research council of Canada (NSERC) under research grant G121210893, and by an NSERC PGSB scholarship. Useful discussions with Milton W. Cole are gratefully acknowledged.
\section{Introduction}
The recent detection of a gravitational-wave and electromagnetic signal from the merging of two neutron stars \cite{TheLIGOScientific:2017qsa} (events GW170817 and GRB 170817A, hereinafter ``the Event'') provides not only an exciting discovery, but also strongly challenges the observational viability of large classes of gravitational theories for the late Universe as anticipated in \cite{Amendola:2012ky} and discussed e.g.\ in \cite{Creminelli:2017sry,Ezquiaga:2017ekz,Sakstein:2017xjx,Baker:2017hug}. Theories aiming to describe the late-time acceleration of the Universe introduce novel, non-trivial interactions between the spacetime metric and new extra degrees of freedom such as scalar (Horndeski \cite{Horndeski:1974wa,Deffayet:2011gz}), vector (Einstein-Aether \cite{Jacobson:2000xp}, generalized Proca \cite{Heisenberg:2014rta,Tasinato:2014eka}) and tensor (bi-gravity) fields.
In \cite{Saltas:2014dha,Sawicki:2016klv} it was shown for the first time that {\it a precise link between tensor and scalar fluctuations in cosmology exists}: a modification to the gravitational-wave propagation at any scale implies the existence of gravitational slip ($\eta\neq1$) for large-scale scalar fluctuations, with both modifications driven by exactly the same theory-space parameters of the gravitational model. The gravitational slip leaves a particular and observable imprint on the formation of structures in the universe, which can be used to constrain models of late-time acceleration \cite{Amendola:2012ys,Amendola:2012ky,Saltas:2010tt,Motta:2013cwa}.
The Event has allowed LIGO to measure the speed of GWs with precision $\left|c_\text{T}/c-1\right| \leq 1\times10^{-15}$ \cite{Monitor:2017mdv}. For all intents and purposes, from the point of view of cosmology, the speed of GWs at the present time is now known to be that of light. By the above argument, the range of possible scenarios for structure formation is also narrowed. In this Letter, we ask the question: {\it What are the key implications of the Event for the phenomenology of large-scale structure?}
We will discuss below the implications for each class of models of acceleration featuring one extra degree of freedom in turn, introducing here only the essential notation and definitions. Considering the line element of scalar fluctuations in Newtonian gauge, $\mathrm{d}s^2 = -(1 + 2\Psi(\boldsymbol{x},t))\mathrm{d}t^2 + a^2(t)(1 -2\Phi(\boldsymbol{x},t))\mathrm{d}\boldsymbol{x}^2$, we define the gravitational slip as $\eta \equiv \Phi/\Psi\neq 1$ and the respective effective Newton's constants in momentum space $Y \equiv -2 k^2 \Psi/(a^2 \rho_m \delta_m)$ and $Z=\eta Y$, where $\delta_m$ is the comoving matter density contrast. Our working {\it definition of modified gravity} is the one introduced in \cite{Saltas:2014dha}, i.e.\ any gravitational model modifying the linear propagation of tensor modes compared to GR, i.e.\ which by \cite{Saltas:2014dha,Sawicki:2016klv} produces gravitational slip from perfect-fluid sources.
In this paper, we start with the remaining viable {\it scalar-tensor models} of gravity. With the new constraint, they can have at most a conformal coupling to curvature \cite{Creminelli:2017sry,Ezquiaga:2017ekz,Sakstein:2017xjx,Baker:2017hug}, which is now the only admissible cause of gravitational slip from perfect-fluid matter. We demonstrate that, if slip is generated at all, it either has no scale dependence at linear scales, or it disappears at large scales, with the theory of gravity returning to $\eta=1$ there. We show that the growth rate must be higher than GR for all models with the possible exception of the single remaining class of `beyond Horndeski' theories.
Then we show that the remaining viable {\it vector-tensor theories} cannot generate slip at all and in no remaining viable such model can the growth rate be lower than in GR.
The Event has not placed new constraints on {\it theories of massive gravity} \cite{Baker:2017hug}. We do not study these further, since there is no single model of massive (bi)-gravity which could account for the whole of cosmological history without some sort of pathology \cite{D'Amico:2011jj,Konnig:2015lfa,Lagos:2014lca,Cusin:2014psa} unless it has the same predictions as $\Lambda$CDM \cite{Akrami:2015qga}.
\section{Dark energy phenomenology}
\subsection{Scalar-tensor theories: Horndeski}
Horndeski theories are the most general scalar-field theories which have equations of motion with no higher than second derivatives \cite{Horndeski:1974wa} and where all matter is universally coupled to gravity. They include as a subset the archetypal modifications of gravity such as $f(R)$ and Brans-Dicke theories, as well as galileons \cite{Nicolis:2008in,Deffayet:2009wt}. The popular dark energy models of quintessence \cite{Ratra:1987rm,Wetterich:1987fm} and k-essence \cite{ArmendarizPicon:2000ah} are also subclasses of Horndeski. At the level of the action, they are described by four, in principle arbitrary, coupling functions $G_{2,3,4,5}(X,\phi)$ where $X \equiv -(\partial \phi)^2/2$. At the same time, it has been shown \cite{Bellini:2014fua} that the dynamics of linear fluctuations on the cosmological background are completely characterized by four (time-dependent) functions: the kineticity $\alpha_\mathrm{K}(t)$, related to the Jeans scale for the scalar, the braiding $\alpha_\mathrm{B}(t)$, measuring the degree of kinetic mixing between the scalar and the metric, the running of the Planck mass $\alpha_\text{M}(t)$, and the tensor speed excess $\alpha_\mathrm{T}(t)$. In particular, $\alpha_\text{T} \equiv c_\mathrm{T}^2 - 1$ measures the departure of the GW speed from that of light and is the parameter constrained to be effectively zero by the Event \cite{TheLIGOScientific:2017qsa}. Requiring that this not be achieved by a severe tuning of the parameters implies that the scalar can at most be coupled conformally to curvature, i.e.\ $G_5=0$ and $G_4=G_4(\phi)$ \cite{Creminelli:2017sry,Ezquiaga:2017ekz,Baker:2017hug}. This means that the most general Horndeski model still allowed has the Lagrangian
\begin{equation}
\mathcal{L} = \frac{f(\phi)}{2}R + K(X,\phi) - G(X,\phi)\Box\phi \label{eq:KGBf}\,,
\end{equation}
i.e.\ belongs to the class of kinetic gravity braiding (KGB \cite{Deffayet:2010qz}) augmented by a conformal coupling to gravity. Setting $G=G(\phi)$ reduces these models to conformally coupled k-essence \cite{Babichev:2009ee,Sawicki:2012re}, while setting $K=V(\phi),\, G=0$ is equivalent to $f(R)$ gravity. Setting $f=\text{const}$ means gravity is no longer modified, but the model is nonetheless capable of accelerating the expansion of the Universe without a cosmological constant \cite{Deffayet:2010qz}.
The class of models \eqref{eq:KGBf} is significantly more restricted than full Horndeski, thus we can make precise predictions for large-scale structure formation. In particular, using the definitions and results of \cite{Bellini:2014fua}, we have that in the small-scale limit, yet still linear regime within the quasi-static approximation (i.e. $k\to\infty$),%
\footnote{In ref.~\cite{Sawicki:2016klv}, we showed that under extremely fine-tuned choices of a Horndeski action, it would in principle be possible to preserve a configuration with $Y=Z$ everywhere under time evolution, thus dynamically shielding the modification of gravity. The measurement of $\alpha_\text{T}=0$ means \emph{none of those models are still viable}, thus if the coupling to gravity is not minimal, the configuration of the fluctuations will reflect it.}
\begin{equation}
Y_\infty = 1 + \frac {(\varkappa+\alpha_M)^2}{2N}\,,\quad
Z_\infty = 1 + \frac{\varkappa^2-\alpha_M^2}{2N}\, , \label{eq:YZ}
\end{equation}
where $\alpha_M=\dot\phi f_{,\phi}/Hf$ is the rate of evolution of the effective Planck mass. The function $\varkappa\equiv \alpha_B+\alpha_M$ is the part of the braiding produced by the term $G(X)$; it vanishes in Brans-Dicke, k-essence and $f(R)$ models. $N$ is the numerator of the sound speed of the scalar and must be positive definite (the denominator of the sound speed is positive as a result of the no-ghost condition),
\begin{align}
N\equiv& -(2+\alpha_M)\dot{H}/H^2 + 3\Omega_m/f +\alpha_M(2+\alpha_M) \\
&-\alpha_M' +\varkappa(2-\varkappa)/2-\varkappa \dot{H}/H^2 +\varkappa' \, . \notag
\end{align}
We can combine these results to obtain
\begin{equation}
\eta_\infty -1 = -\frac{2\alpha_M(\varkappa+\alpha_M)}{2N+(\varkappa + \alpha_M)^2} \,.
\end{equation}
{\it We can thus make general statements about the properties of gravity for the remaining scalar--tensor theories at small linear scales:}
\begin{itemize}
\item The effective Newton's constant for non-relativistic matter, $Y_\infty\geq1$, so in the remaining Horndeski models, matter cannot cluster more slowly than in GR given the same background and the same $\Omega_m$ \cite{Piazza:2013pua}.
\item The effective Newton's constant for the lensing potential, $\Sigma\equiv(Y+Z)/2$, is different from unity whenever the KGB term is present, $\varkappa\neq 0,-\alpha_M$.
\item The gravitational slip parameter $\eta_\infty\equiv Z_\infty/Y_\infty$ can be both larger or smaller than unity.
\end{itemize}
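The algebra relating $Y_\infty$, $Z_\infty$ and $\eta_\infty$, and the statements in the list above, can be verified numerically for arbitrary parameter values; the inputs below are illustrative, and the check is a pure consistency test of Eq. (\ref{eq:YZ}):

```python
def Y_inf(kappa, aM, N):
    return 1 + (kappa + aM) ** 2 / (2 * N)

def Z_inf(kappa, aM, N):
    return 1 + (kappa ** 2 - aM ** 2) / (2 * N)

# arbitrary illustrative values of (varkappa, alpha_M, N), with N > 0
for kappa, aM, N in [(0.3, 0.2, 1.5), (0.0, 0.4, 2.0), (-0.5, 0.1, 0.8)]:
    Y, Z = Y_inf(kappa, aM, N), Z_inf(kappa, aM, N)
    eta = 1 - 2 * aM * (kappa + aM) / (2 * N + (kappa + aM) ** 2)
    assert abs(Z / Y - eta) < 1e-12          # closed form for the slip
    assert Y >= 1.0                          # growth never below GR
    # Sigma - 1 = kappa*(kappa + aM)/(2N): vanishes iff kappa = 0 or -aM
    assert abs((Y + Z) / 2 - 1 - kappa * (kappa + aM) / (2 * N)) < 1e-12
```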
If the KGB term is not present, $\Sigma_\infty=1$ and $\eta_\infty\leq1$, thus a violation of either of these conditions can be interpreted as a detection of the presence of kinetic gravity braiding. When gravity is minimally coupled $f=\text{const}$, $\alpha_M=0$. This gives $Y_\infty=Z_\infty>1$ and $\eta=1$ at all scales; in this case, if the modification is large enough, the sign of the cross-correlation of the galaxies and the integrated Sachs-Wolfe effect can reverse, which is quite strongly disfavored by data \cite{Barreira:2014jha,Renk:2017rzu}.
For a generic lagrangian \eqref{eq:KGBf}, the fluctuations of the scalar field will have a mass $M$ (see e.g.\ Ref.\ \cite{DeFelice:2011hq,Amendola:2013qna} for the expression). If $M\lesssim H$, then the expressions \eqref{eq:YZ} are valid at all linear scales inside the sound horizon of the scalar, $c_\text{s}k\gg H$ \cite{Sawicki:2015zya}. If $H \ll M \ll k_\text{NL}$, where $k_\text{NL}$ is a scale associated with non-linearities either in the dark matter or screening of gravity, then a transition will occur and
\[
\eta\rightarrow 1\,,\qquad k<M\,,
\]
recovering the GR result for slip. Note that theories modifying $c_\text{T}$ do not recover this GR result at large scales. Clearly, for large masses $M\gg k_\text{NL}$, no modification of gravity is observed in linear structure formation at all.
\subsection{Scalar-Tensor Theories: Beyond Horndeski}
`Beyond Horndeski' models extend the Horndeski action by allowing for higher-order equations of motion. As a result of having degenerate Hamiltonians they nonetheless propagate no extra d.o.f.\ beyond one scalar and two tensors \cite{Zumalacarregui:2013pma,Gleyzes:2014dya,Deffayet:2015qwa,Crisostomi:2016tcp}. There exists exactly one choice of term additional to Lagrangian \eqref{eq:KGBf} which does not affect the speed of GWs: combining the quartic Horndeski term with the quartic beyond-Horndeski term and setting $F_{4,X}=-G_{4,X}$ \cite{Creminelli:2017sry,Sakstein:2017xjx,Baker:2017hug}. This is the case, since the overall coupling of the scalar to curvature remains conformal, but with a function of $X$ instead of just $\phi$, giving $\eta\neq1$. One may argue that this particular choice of $F_4$ is not necessarily a tuning, since in flat space the two Lagrangian terms are related through a total derivative and so their curved-space corrections should be suppressed by the Planck mass. The new physics in large-scale structure can be described by a new parameter $\alpha_\text{H}$. General beyond-Horndeski models can have $Y_\infty < 1$, for a large-enough $\alpha_\text{H}$, and thus reduce the growth rate w.r.t.\ GR on the same background \cite{DAmico:2016ntq}. In the remaining viable models, however, $\alpha_\text{H}$ is related to $\alpha_\text{M}$, the rate at which the Planck mass evolves, which is constrained by observations \cite{Hart:2017ndk}. We leave the detailed analysis of whether these constraints can be evaded sufficiently to reduce growth rates for future work.
\subsection{Vector-Tensor Theories}
There are two classes of modifications of gravity featuring vectors: (i) the Einstein-Aether (EA) model \cite{Jacobson:2000xp} and its generalization \cite{Zlosnik:2006sb}, and (ii) the generalized Proca theory \cite{Tasinato:2014eka,Heisenberg:2014rta} along with its `beyond generalized Proca' generalized version, similar to the `beyond Horndeski' \cite{Heisenberg:2016eld}. Class (i) is also closely related to the low-energy limit of Ho\v{r}ava-Lifshitz theories \cite{Blas:2010hb}.
In (generalized) EA models, one removes a would-be ghost by constraining the magnitude of the vector field using a Lagrange multiplier. As we discussed in our first paper on this topic \cite{Saltas:2014dha}, the same term in the action gives a source for gravitational slip from perfect-fluid matter in cosmology, and changes the speed of propagation of GWs: in the language of ref.~\cite{Lim:2004js}, $c_\text{shear}=\beta_1+\beta_3$; the result for generalized EA theories is the same \cite{Zuntz:2010jp}. As a consequence of the Event, the constraint $c_\text{shear}=0$ applies to all these models \cite{Baker:2017hug}. This means that (generalized) Einstein-Aether models can no longer produce gravitational slip from perfect-fluid matter, $\eta=1$ at all scales. Moreover, the effective Newton's constant in EA is now $Y_\infty=(1+3\beta_2)/(1-\beta_1)$ with both $\beta_{1,2}>0$ for stability, i.e.\ $Y_\infty\geq1$ and growth rates on the same background must be higher than in GR. In generalized EA, the expression has the same form, with the replacement $\beta_i\rightarrow \mathcal{F}_\mathcal{K}\beta_i$ \cite{Zlosnik:2007bu} and the same conclusions can be reached.
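The statement that growth cannot be slower than in GR in the remaining Einstein-Aether models follows immediately from the sign conditions; a brute-force scan makes this explicit (the parameter ranges are illustrative, with $\beta_1<1$ keeping the denominator positive):

```python
def Y_EA(beta1, beta2):
    """Effective Newton's constant in Einstein-Aether with c_shear = 0."""
    return (1 + 3 * beta2) / (1 - beta1)

# scan the stability region beta_1, beta_2 > 0 (here with beta_1 < 1):
# the numerator exceeds 1 and the denominator is below 1, so Y >= 1 always
grid = [i / 100 for i in range(1, 100)]
assert all(Y_EA(b1, b2) >= 1.0 for b1 in grid for b2 in grid)
```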
Generalized Proca theories, on the other hand, are the most general theories with second-order equations of motion which propagate a massive vector in addition to the graviton, with spurious degrees of freedom eliminated through a non-linearly realized Abelian gauge symmetry. The general Lagrangian is described by five functions of the vectors' magnitude $X\equiv-A_\mu A^\mu /2$, $G_{3,4,5,6}$ and $g_5$ and one function $G_2$ of $X$, the vector kinetic term $F^{\mu\nu}F_{\mu\nu}$ and the magnitude of $F^{\mu\nu}A_\nu$, resulting in a structure similar to Horndeski models (see ref.~\cite{Heisenberg:2014rta,Allys:2015sht,Jimenez:2016isa,Allys:2016jaq}). There are two branches of solutions in these theories, only one of which is dynamical. In the dynamical branch, the vector is explicitly coupled to curvature through the functions $G_4$ and $G_5$, which modifies the speed of GWs. Requiring that GWs propagate at the speed of light leads to the constraint that $G_5=0$ and $G_4=\text{const}$. $2G_4$ can be then identified with the effective Planck mass squared and is not a free parameter of the theory \cite{Baker:2017hug}.
We have re-analyzed the implications of this constraint on structure formation in these models. Ref.\ \cite{DeFelice:2016uil} showed that the general dynamics of scalar perturbations on a cosmological background depend on seven functions of time, $w_i$, which, similarly to the case of Horndeski theories, depend on the free functions appearing in the action. Taking $G_5=0$ and $2G_4=1$, this set reduces to just two: $w_2$ and $w_3<0$, where the sign is fixed by requiring that the propagating vectors not be ghosts. For all these models, there is no gravitational slip, $\eta=1$, and the short-scale gravitational constant reduces to
\begin{equation}
Y_\infty=1+\frac{-w_3w_2^2}{N}\,, \qquad N\equiv 2\mu_2/\phi +w_3w_2^2 \,,
\end{equation}
where $\mu_2$ is a function of the $w_i$ defined in \cite{DeFelice:2016uil}. For $N>0$, $Z=Y\geq1$ and gravity is stronger than in $\Lambda$CDM. This is \emph{always} the case for a model with effective dark energy (DE) equation of state parameter $w_{\rm DE}\leq-1$, since the sound speed of the scalar is only positive when $N-4w_{3}\left(2H+w_{2}\right)^{2}\rho_{\rm DE}(1+w_{\rm DE})>0$. In addition, in the future de Sitter attractor common to these theories, we necessarily have $N>0$. Changing the sign of $N$ at an earlier time would in principle be possible for a dark energy with $w_{\rm DE}>-1$, but that would lead to $N=0$ and thus a divergent effective Newton's constant at least at one moment in time. We thus conclude that $Y\geq 1$ at all times in the remaining viable generalized Proca models.
In the same spirit as `beyond Horndeski', there exists a more general action for vectors (`beyond generalized Proca'). It is possible to write down four additional terms in the action, $f_{4,5}$ and $\tilde{f}_{5,6}$, with arbitrary functions of $X$ as coefficients \cite{Heisenberg:2016eld}. Then $f_{4,5}$ enter the speed of GWs and must be set to zero as a result of the Event. It was shown in ref.~\cite{Heisenberg:2016eld} that, on FRW, the scalar perturbations depend on one more function of time, $w_8$, but, again, requiring $c_T=1$ reduces both the background equations and the quadratic action for perturbations to exactly those of the generalized Proca theory, i.e.\ there is no new phenomenology allowed in the scalar sector of linear cosmology in this much wider set of theories.
We can thus conclude that the Event has constrained general vector-tensor theories and the low-energy limit of Ho\v{r}ava-Lifshitz theories to be unable to produce gravitational slip from dust, and the growth rate in the remaining viable models must be at least as fast as in GR for the same background and $\Omega_m$.
\section{Constraining the scalar mass}
As shown in \cite{Amendola:2012ky}, one can obtain three quantities from cosmological large-scale structure power spectra in the linear regime such that the dependence on the power spectrum shape, which cannot be known without an assumption for the theory of gravity, cancels out:
\begin{eqnarray}
P_{1} & \equiv & R/A=f/b \, ,\\
P_{2} & \equiv & L/R=\Omega_{\text{m}0}Y(1+\eta)/f \, ,\\
P_{3} & \equiv & R'/R=f+f'/f \, ,
\end{eqnarray}
where $b$ is the dark matter-galaxy bias and a prime stands for a derivative with respect to $\log a$.
The second quantity is also often called $E_G$ \cite{Zhang:2007nk}, while the third one is related to the commonly used quantity $f\sigma_8(z)$ (see \cite{Percival:2008sh}) by the following relation
\begin{equation}
P_3=\frac{(f\sigma_8(z))'}{f\sigma_8(z)} \, .
\end{equation}
Observationally, $P_3$ can be obtained by taking finite differences across redshift bins
\begin{equation}
P_3\approx -(1+z)\frac{\Delta (f\sigma_8(z))}{f\sigma_8(z)\Delta z} \, .
\end{equation}
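As an illustration of this finite-difference estimate, the sketch below evaluates $P_3$ at the midpoints of adjacent redshift bins. It is our own construction: the redshift bins and $f\sigma_8$ values are invented for illustration and are not real survey data.

```python
import numpy as np

# Invented f*sigma8(z) values in four redshift bins (illustrative only)
z   = np.array([0.3, 0.5, 0.7, 0.9])
fs8 = np.array([0.45, 0.47, 0.46, 0.44])

def P3_estimate(z, fs8):
    """P3 ~ -(1+z) * Delta(f sigma8) / (f sigma8 * Delta z), at bin midpoints."""
    zm = 0.5 * (z[1:] + z[:-1])        # midpoints of adjacent bins
    fm = 0.5 * (fs8[1:] + fs8[:-1])    # f sigma8 interpolated to midpoints
    return zm, -(1.0 + zm) * np.diff(fs8) / (fm * np.diff(z))

zm, P3 = P3_estimate(z, fs8)
print(zm, P3)
```

The bins must be narrow enough that $f\sigma_8$ varies slowly between neighbours; otherwise the finite difference is a poor proxy for the logarithmic derivative.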
In \cite{Motta:2013cwa}, we showed that the assumption of the weak equivalence principle for galaxies is enough to write in a general theory of modified gravity
\begin{equation}
\frac{3P_{2}(1+z)^{3}}{2E^{2}\left(P_{3}+2+\frac{E'}{E}\right)}-1=\eta\,,\label{eq:kpol}
\end{equation}
where $E(z)\equiv H(z)/H_0$ is the dimensionless Hubble function. This relation is valid for any cosmology and scale, regardless of $Y$, of bias, and of initial conditions. We can now form a null relation which is violated whenever gravity is modified
\begin{equation}
P_{2}=4E^{2}\frac{\left(P_{3}+2+\frac{E'}{E}\right)}{3(1+z)^3} \,.
\end{equation}
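As a consistency check (a numerical sketch of our own, not part of the original analysis), this null relation holds identically in GR on a flat $\Lambda$CDM background, where $Y=\eta=1$ and the growth rate $f=\mathrm{d}\ln\delta/\mathrm{d}\ln a$ obeys $f'+f^{2}+(2+E'/E)f=\tfrac{3}{2}\Omega_{\rm m}(a)$. The value $\Omega_{\rm m0}=0.3$ and all variable names below are illustrative assumptions:

```python
import numpy as np
from scipy.integrate import solve_ivp

Om0 = 0.3  # assumed present-day matter density (illustrative)

def E2(a):
    """Dimensionless Hubble rate squared E^2(a) for flat LCDM."""
    return Om0 * a**-3 + (1.0 - Om0)

def Om(a):
    """Matter density parameter Omega_m(a)."""
    return Om0 * a**-3 / E2(a)

def dlnE(a):
    """E'/E, with a prime denoting d/dln a."""
    return -1.5 * Om(a)

def dfdN(N, f):
    """GR growth-rate equation: f' = (3/2) Om(a) - f^2 - (2 + E'/E) f."""
    a = np.exp(N)
    return 1.5 * Om(a) - f**2 - (2.0 + dlnE(a)) * f

# Integrate from deep matter domination (where f -> 1) to today (N = ln a = 0)
sol = solve_ivp(dfdN, [np.log(1e-3), 0.0], [1.0], dense_output=True, rtol=1e-10)

z = 0.5                                  # test redshift
a = 1.0 / (1.0 + z)
f = float(sol.sol(np.log(a))[0])
P3 = f + dfdN(np.log(a), f) / f          # P3 = f + f'/f
P2 = Om0 * 1.0 * (1.0 + 1.0) / f         # P2 = Om_m0 * Y * (1 + eta) / f, with Y = eta = 1
rhs = 4.0 * E2(a) * (P3 + 2.0 + dlnE(a)) / (3.0 * (1.0 + z)**3)
print(P2, rhs)                           # the two sides coincide in GR
```

Any statistically significant departure of the measured $P_2$ from this combination of $P_3$ and the expansion history would signal $\eta\neq1$, i.e.\ an evolving effective Planck mass in the remaining viable models.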
Since the only remaining viable modified gravity models which generate slip are scalar-tensor, any violation of this relationship will be evidence that the effective Planck mass evolves in time. We have argued here that in the remaining parameter space, this relation will either be scale independent until the sound horizon, or will return to the GR value at $k<M$, giving a method to constrain the mass of the degree of freedom modifying gravity.
\section{Summary and Conclusions}
The observation of coincident gamma radiation and GWs from the same source at cosmological distances by LIGO/VIRGO and Fermi/Integral has put so strong a bound on any deviation of the speed of GWs from that of light that, for the purposes of cosmology, any dynamics of modified gravity that cause a change in the speed of propagation at the present epoch must be completely irrelevant. In each class of gravitational theories beyond GR, this severely limits the viable theory space.
We previously proved that there is a one-to-one relationship between the modification of the propagation of GWs and the sourcing of gravitational slip in the presence of perfect-fluid matter. This new constraint in turn significantly reduces the sort of configurations/sources of slip that are still allowed, leading to strong observational consequences.
In this paper, we have shown that, in the newly restricted viable parameter space of universally coupled modified gravity theories, it is impossible to reduce the growth rate in structure formation at small, linear scales \emph{with respect to standard gravity on the same background and dark matter density}. This applies to Horndeski scalar-tensor models and any vector-tensor theories. Moreover, gravitational slip in the presence of perfect-fluid matter can only be produced by a conformal coupling in scalar-tensor models, and therefore an evolving Planck mass. As a direct observational consequence, it follows that a future detection of gravitational slip would exclude all vector-tensor and Ho\v{r}ava-Lifshitz Lorentz-violating models.
If slip is present in the remaining viable models, it is either constant on scales inside the sound horizon of the scalar or, for models with a sufficiently large mass, it is screened away to its GR value $\eta=1$ at scales above the Compton length. We have further shown how slip can be measured and thus this mass constrained in all remaining viable theories which generate it.
We note, however, that it is in principle possible that growth rates can be reduced w.r.t.\ GR in `beyond Horndeski' models, although we leave the exact limits of this to a future study. Although we have not discussed the case of massive (bi-)gravity models, for the sake of completeness, let us state that it is also possible to choose parameters in massive bimetric theories so that $\eta\neq1$ at large scales and $Y<1$ at small linear scales, at least for some period of time, without instabilities during that time. However, these theories cannot actually provide a complete cosmological background from the Big Bang to today that does not suffer from pathologies at some point in the course of the cosmological evolution \cite{D'Amico:2011jj,Konnig:2015lfa,Lagos:2014lca,Cusin:2014psa}, apart from the limit where the theory behaves exactly like $\Lambda$CDM \cite{Akrami:2015qga}.
To conclude, GWs have provided an extremely strong constraint on possible modifications of gravity at both large and small scales. This in turn has restricted the possible modifications to the evolution of large-scale structure in a very sharp manner, removing some of the freedom resulting from a large model space. This will only serve to increase the power of upcoming cosmological surveys to constrain or eliminate the remaining viable models.
\begin{acknowledgments}
\emph{Acknowledgements.} The work of L.A.~is supported
by the DFG through TRR33 ``The Dark Universe''. M.K.~acknowledges
funding by the Swiss National Science Foundation. I.S.~and I.D.S.~are supported by ESIF and MEYS (Project CoGraDS -- CZ.02.1.01/0.0/0.0/15\_003/0000437).
\end{acknowledgments}
\bibliographystyle{utcaps}
\section*{Abstract}
Exceptionally clear images of intramolecular structure can be attained in dynamic force microscopy through the combination of a passivated tip apex and operation in what has become known as the ``Pauli exclusion regime'' of the tip-sample interaction. We discuss, from an experimentalist's perspective, a number of aspects of the exclusion principle which underpin this ability to achieve submolecular resolution. Our particular focus is on the origins, history, and interpretation of Pauli's principle in the context of interatomic and intermolecular interactions. \\
\begin{small}
{\noindent\textbf{Keywords:} dynamic force microscopy;non-contact atomic force microscopy; NC-AFM; Pauli exclusion principle; submolecular resolution; intramolecular; single molecule imaging}\\\\
\noindent
From \textit{Imaging and Manipulation of Adsorbates using Dynamic Force Microscopy},\\ Vol V of \textit{Advances in Atom and Single Molecule Machines}, Springer-Verlag; \url{http://www.springer.com/series/10425}. To be published late 2014.
\end{small}
\section{Intramolecular resolution via Pauli exclusion}
In 2009 the results of a pioneering dynamic force microscopy (DFM\footnote{Although the term non-contact atomic force microscopy (NC-AFM) is widespread -- to the extent that the major conference in the field is the annual International NC-AFM meeting -- it is arguably something of a misnomer to label the technique ``non-contact'' when it is now commonplace to operate in a regime where the probe is in contact with the sample. We will therefore use the term \textit{dynamic} force microscopy throughout this chapter.}) experiment by Leo Gross and co-workers at IBM Z\"urich were published\cite{Gross:2009} and revolutionised the field of scanning probe microscopy. Gross \emph {et al.} captured arguably the clearest real space images of a molecule achieved up to that point, resolving the ``textbook'' structure of the molecular architecture. Two important experimental protocols enabled Gross \emph {et al.} -- and, subsequently, a number of other groups\cite{deOteyza:2013,Zhang:2013,Sweetman:2014,Hapala:2014,Pavlicek:2012,Pawlak:2011,Riss:2014} (see Fig. 1.1 for examples) -- to attain this exceptionally high resolution. First, the apex of the probe was functionalised (by picking up a molecule) to render it inert. This enabled the scanning probe to be placed extremely close to the adsorbed molecule of interest -- so close that the second experimental protocol, namely the exploitation of electron repulsion via the Pauli exclusion principle\footnote{We shall return, in Sections 1.4 and 1.5, to a detailed discussion of whether or not it is appropriate to describe the effects of Pauli exclusion as a repulsive force.}, played a key role in the imaging mechanism.
It is this second protocol which is the primary focus of this chapter. We'll discuss just how Pauli exclusion is exploited in state-of-the-art scanning probe microscopy, what pitfalls there might be in interpreting features in DFM images as arising directly from chemical bonds, and to what extent scanning probe measurements of tip-sample interactions provide deeper experimental insights into the exclusion principle itself. We should also stress right from the outset that although we concentrate on dynamic force microscopy throughout this chapter, prior to Gross \emph {et al.}'s 2009 paper, Temirov, Tautz and co-workers had achieved unprecedented spatial resolution using a technique for which they coined the term scanning tunnelling hydrogen microscopy (STHM)\cite{STHM1,STHM2,STHM3,STHM4}. Both STHM and the type of DFM imaging introduced by Gross \emph {et al.}\cite{Gross:2009} exploit Pauli exclusion as a means to acquire exceptionally high resolution. Before covering the exploitation of the exclusion principle in scanning probe microscopy, we'll consider a number of aspects of the fascinating history of Pauli's \textit{Ausschlie{\ss}ungsregel}\cite{Massimi} and outline some of the rich physics underpinning the principle.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{PauliProbes_Fig1.png}
\caption{\textbf{Imaging bonds via the Pauli exclusion principle}. \textbf{(A)} Combination of schematic illustration and experimental data to demonstrate experimental protocol used to acquire submolecular resolution. The apex of the probe used in a dynamic force microscope is passivated (in this case with a CO molecule) and scanned across a pentacene molecule at a height where Pauli exclusion plays a key role in determining the tip-sample interaction. \textbf{(B)} Experimental frequency shift image for a pentacene molecule. [A and B taken from Gross et al.\cite{Gross:2009}. \copyright American Association for the Advancement of Science (2009)]. \textbf{(C)} Dynamic force microscope image of four 8-hydroxyquinoline molecules. Both intra- and intermolecular features are observed. (See Section 1.7). \textbf{(D)} Schematic diagram of molecular arrangement shown in (C) with the expected positions of hydrogen-bonds drawn as lines between the molecules. [C and D taken from Zhang et al.\cite{Zhang:2013}. \copyright American Association for the Advancement of Science (2012)]. \textbf{(E)} High resolution image of a chain of oligo-(E)-1,1′-bi(indenylidene) with associated structural model. Taken from Riss \emph{et al.}\cite{Riss:2014}. \copyright American Chemical Society (2014). \textbf{(F)} DFM image of two different conformers of dibenzo[a,h]thianthrene on a NaCl/Cu(111) substrate with (lower panel) structural models of both conformers. Taken from Pavlicek et al.\cite{Pavlicek:2012}. \copyright American Physical Society (2012). \textbf{(G)} Structural model of a naphthalenetetracarboxylic diimide (NTCDI) molecule and a DFM image of a hydrogen-bonded assembly of NTCDI molecules. From Sweetman et al.\cite{Sweetman:2014}. \copyright Nature Publishing Group (2014).}
\label{fig:Fig1}
\end{figure}
\section{A potted history of Pauli's exclusion principle}
Michela Massimi has written an authoritative and engaging history of the Pauli exclusion principle (PEP)\cite{Massimi}, which impressively combines clear explanations of the quantum and statistical physics underlying the PEP with engaging discussions of both the history and the philosophical ramifications of the principle. As Massimi points out in the preface to her book, her research on the origin and validation of the exclusion principle took almost ten years. For those readers interested in a comprehensive account of the ``evolution'' of the PEP we therefore strongly recommend Massimi's book. Here we will limit ourselves to providing a brief summary of those aspects of the PEP which are of key significance for (ultra)high resolution scanning probe microscopy.
The origins of the exclusion principle lie, like so many aspects of quantum physics, in the interpretation of spectroscopic data. In particular, a series of so-called anomalies in the spectra of alkali and alkaline earth metals, and, arguably more importantly, the response of atomic spectra to the application of a magnetic field, i.e. the (``anomalous'') Zeeman effect, became a major challenge to the Bohr-Sommerfeld theory of the electronic structure of atoms in the early 1920s. It was only with the introduction of what came to be known as electron spin -- but which Pauli initially called simply the electron \textit{Zweideutigkeit} (``twofoldness'') -- that the spectroscopic data could be reconciled with the theoretical predictions. The introduction of electron \textit{Zweideutigkeit}\cite{Pauli:1925a} was followed very closely by Pauli's statement of the exclusion principle\cite{Pauli:1925b} (or, as it was known at the time, the exclusion rule). Pauli subsequently won the Nobel prize in 1945 for his discovery of the exclusion principle.
It is worth quoting directly from Pauli's Nobel lecture, given on Dec. 13 1946, as this provides key insights into the original formulation of the principle ``straight from the horse's mouth'', as it were:
\begin{quote}
On the basis of my earlier results on the classification of spectral terms in a strong magnetic field the general formulation of the exclusion principle became clear to me. The fundamental idea can be stated in the following way:
The complicated numbers of electrons in closed subgroups are reduced to the simple number \emph{one} if the division of the groups by giving the values of the four quantum numbers of an electron is carried so far that every degeneracy is removed. An entirely non-degenerate energy level is already \emph{closed}, if it is occupied by a single electron; states in contradiction with this postulate have to be excluded.
\end{quote}
\noindent
Or, if we couch this in the lexicon of modern quantum mechanics, no two electrons can have the same values of $n$, $l$, $m_l$, and $m_s$ (i.e. the principal, orbital angular momentum, magnetic, and spin quantum numbers). More succinctly, no two electrons can occupy the same quantum state. (The Pauli exclusion principle of course holds for all fermions (half-integer spin particles), not just electrons. We'll return to this point very soon).
Pauli's \emph{Zweideutigkeit} is now of course known as particle spin but the inferred connection with the classical concept of a spinning object is unfortunately misleading. Indeed, Pauli himself switched from being firmly opposed to any connection between his \emph{Zweideutigkeit} and spin, to a somewhat grudging acceptance of a link, and then, as his Nobel lecture highlights, back to a significant degree of scepticism about the value of any classical analogy:
\begin{quote}
On the other hand, my earlier doubts as well as the cautious expression ``classically non-describable two-valuedness'' experienced a certain verification during later developments, since Bohr was able to show on the basis of wave mechanics that the electron spin ... must therefore be considered as an essentially quantum-mechanical property of the electron.
\end{quote}
\subsection{Particle statistics and the quantum identity crisis}
Following hot on the heels of Pauli's publication of the exclusion principle, first Fermi\cite{Fermi:1926,FermiTranslation} and then Dirac\cite{Dirac:1926} explored the quantum statistics of an ideal gas of particles which was subject to the exclusion principle. Dirac coined the term \emph{fermion} to describe a particle subject to the Fermi-Dirac statistics he and Fermi derived; a fermion is therefore a particle which obeys the Pauli exclusion principle (and concomitantly is of half-integer spin). At the very heart of quantum statistics -- and, indeed, of classical statistical mechanics -- lies the issue of the distinguishability of particles\footnote{Long before the advent of quantum mechanics, the effect of considering indistinguishable vs distinguishable particles on the partition function for a system was known as the Gibbs paradox in classical thermodynamics/statistical mechanics}. A simple back-of-the-envelope argument based on the (in)distinguishability of particles can provide a helpful insight into the origin of the exclusion principle\cite{DaviesTextbook}.
Before we introduce that back-of-the-envelope approach, however, it is first important to define just what it is we mean by indistinguishable particles. This, despite first appearances, is a far from trivial question to address and has been the subject of quite considerable debate and interest for many decades. De Muynck\cite{DeMuynck:1975}, Berry and Robbins\cite{Berry-Robbins:2000}, Ginsberg \emph {et al.}\cite{Ginsberg:2007} (see also Fleischhauer\cite{Fleischhauer:2007} for a very readable overview of Ginsberg et al's work), Omar\cite{Omar:2005}, and Dieks and co-workers\cite{Dieks:2008, Dieks:2011}, amongst many others, have considered and explored the important issue of how indistinguishability and quantum statistics are intrinsically coupled. We shall not delve into the detailed arguments -- be they physical, philosophical, or semantic in scope -- and instead restrict ourselves to the following relatively simple, although certainly not ``universal'', definitions. (It is also important to note that the condition for antisymmetry and the exclusion principle are not equivalent statements).
First, we draw a distinction between \emph{identical} and \emph{indistinguishable} particles. Identical particles are those which have the same intrinsic (or ``internal'') properties (and the same values associated with those intrinsic properties), i.e. mass, charge, spin. So two electrons are identical to each other. And two protons, or two neutrons, are similarly identical to each other. But electrons are clearly not identical to protons, nor to neutrons. (We apologise for labouring the point to this extent but the terms ``identical'' and ``indistinguishable'' are often used interchangeably -- including in many textbooks -- and this has led to quite some confusion at times).
If we have a collection of identical particles then they are \emph{indistinguishable} if we cannot separate them on the basis of their ``external'' properties such as position or momentum. But classically it \emph{is} possible to distinguish between identical particles (at least in principle): we can effectively ``label'' individual identical particles on the basis of their positions or trajectories and distinguish them accordingly\footnote{In a thought-provoking paper, Versteegh and Dieks\cite{VersteeghAJP} discuss the importance of the distinguishability of identical particles and what this means for classical thermodynamics and statistical mechanics, including the Gibbs paradox. We note, however, that there is a very important omission in the list of papers cited by Versteegh and Dieks, namely a paper by Edwin Jaynes\cite{Jaynes} who makes the point, following a similar analysis by Pauli, that the classical thermodynamic definition of entropy as the integration of d$Q$/$T$ over a reversible path is only introduced in the context of constant particle number. This means that there is always (ultimately, see Ehrenfest and Trkal\cite{Ehrenfest}) an arbitrary integration function (not an integration constant, but a function of $N$) that can be used to yield the desired extensivity of the entropy.}. Quantum mechanically, however, the standard argument is that due to delocalisation we lose this ability to label particles on the basis of their trajectories and they then become indistinguishable.
But to what extent is this true? Are quantum particles indeed indistinguishable? One can find undergraduate-level descriptions of quantum statistics\cite{Rohlf} which claim that quantum particles can in fact be distinguished on the basis of what might be called a ``Rayleigh criterion'' for wavepackets: if two particles are separated by a distance greater than their de Broglie wavelength (i.e. such that the wavefunction overlap is minimal) then they are distinguishable on the basis of their respective positions. Versteegh and Dieks\cite{VersteeghAJP} invoke similar arguments about the spatial extent of wavepackets enabling identical quantum particles to be distinguished.
However, whether this is a valid condition for distinguishability is far from clear-cut. In his commentary on Ginsberg \emph {et al.}'s work\cite{Ginsberg:2007}, Fleischhauer\cite{Fleischhauer:2007} states the following:
\begin{quote}
In the quantum world, particles of the same kind are indistinguishable: the wavefunction that describes them is a superposition of every single particle of that kind occupying every allowed state. Strictly speaking, this means that we can't talk, for instance, about an electron on Earth without mentioning all the electrons on the Moon in the same breath.
\end{quote}
Why might Fleischhauer say this?\footnote{It is perhaps worth noting at this point that the ``interconnectedness'' to which Fleischhauer alludes in this quote, and its relevance (or not) to the Pauli exclusion principle, was the subject of a great deal of sometimes ill-tempered online debate following the BBC's broadcast of a popular science lecture on quantum mechanics by Brian Cox, which included a discussion of the PEP. Jon Butterworth's post for The Guardian\cite{Butterworth} is a short, clear and entertaining discussion of the furore and the physics surrounding Cox's lecture.} The answer is, from one perspective at least, rather straightforward. The universal superposition to which Fleischhauer refers arises because in reality we never have perfect confinement of particles: there is no such thing as the infinite potential well beloved of introductory quantum physics courses and there is therefore some finite (albeit extremely small) probability for tunnelling. Thus, in this sense an electron on the Earth is indeed indistinguishable from an electron on the Moon (or on Alpha Centauri).
But what really matters, of course, are the effects that this type of ``coupling'' might have on experimental measurements. And for electrons separated by centimetres, let alone light years, those effects are, to put it mildly, utterly negligible. If we consider a ``double well'' system for an electron on Earth and an electron on Alpha Centauri, the energy level splitting is unimaginably tiny (and beyond anything we could ever begin to hope to measure), and the time-scale for evolution of the quantum state exceeds the age of the universe.
So \emph{in any practical sense}, position can indeed be used to distinguish quantum particles. This is why we can treat electrons in well-separated atoms as being distinguishable. In principle, the electrons are indeed described by a single multi-particle (``universal'') wavefunction and are thus indistinguishable. In practice, however, the spatial extent of the particle wavepacket is such that we can treat electrons in atoms separated by distances much greater than their equilibrium bond length as distinguishable. Only when those atoms are brought together so that there is appreciable overlap of electronic wavefunctions, as in chemical bond formation or, as we shall discuss below, a dynamic force microscopy experiment, can one state that the electrons on each atom become indistinguishable.
Following this lengthy ``detour'' on the topic of distinguishability vs indistinguishability, we are now finally at the point where we can return to a consideration of that back-of-an-envelope argument for the PEP which was mentioned above.
\section{Statistics, symmetry, and spin}
Let's take a system where identical quantum particles can't be distinguished from one another. As the particles are indistinguishable, when we compute the probability density for the system, i.e. $|\Psi|^2$, we must get the same answer regardless of how we arrange the particles, i.e. their spatial positions have no influence on the probability density. We'll consider a very simple system with just two particles whose positions are $\textbf{r}_1$ and $\textbf{r}_2$ and whose single particle wavefunctions are $\psi_1$ and $\psi_2$ respectively. If we cannot distinguish Particle 1 from Particle 2 then it's clear that
\begin{equation}
|\Psi(\textbf{r}_1,\textbf{r}_2)|^2 = |\Psi(\textbf{r}_2,\textbf{r}_1)|^2
\end{equation}
\noindent
This means one of two things. Either
\begin{equation}
\Psi(\textbf{r}_1,\textbf{r}_2) = \Psi(\textbf{r}_2,\textbf{r}_1)
\end{equation}
\noindent
or
\begin{equation}
\Psi(\textbf{r}_1,\textbf{r}_2) = -\Psi(\textbf{r}_2,\textbf{r}_1)
\end{equation}
To meet the condition imposed by Eqn. 1.2, we must have the following two-particle wavefunction:
\begin{equation}
\Psi(\textbf{r}_1,\textbf{r}_2) = \frac{1}{\sqrt{2}}\big(\psi_1(\textbf{r}_1)\psi_2(\textbf{r}_2)+\psi_2(\textbf{r}_1)\psi_1(\textbf{r}_2)\big)
\end{equation}
\noindent
Or to satisfy Eqn. 1.3 we need the following:
\begin{equation}
\Psi(\textbf{r}_1,\textbf{r}_2) = \frac{1}{\sqrt{2}}\big(\psi_1(\textbf{r}_1)\psi_2(\textbf{r}_2)-\psi_2(\textbf{r}_1)\psi_1(\textbf{r}_2)\big)
\end{equation}
Eqn. 1.4 represents what is called the symmetric case, while Eqn. 1.5 is termed the antisymmetric case \footnote{The use of the terms symmetric and antisymmetric follows from Eqn. 1.2 (where $\Psi$ is a symmetric function with respect to the exchange of coordinates) and Eqn. 1.3 (where $\Psi$ is an antisymmetric function). Note also that the factor of $\frac{1}{\sqrt{2}}$ in Eqn. 1.4 and Eqn. 1.5 arises from normalisation of the wavefunction}. The antisymmetric equation leads us to a simple, but exceptionally important, result -- a result that is at the very core of how the universe behaves because it is ultimately responsible for the stability of matter\cite{Lieb:1975,Lieb:1976,Lieb:1990}. Note what happens when we make $\psi_1 = \psi_2$ in Eqn. 1.5 (or, in other words, we put both particles in the same quantum state): \emph{the two-particle wavefunction, $\Psi$, vanishes}. \emph{This} is the essence of the Pauli exclusion principle: in the antisymmetric case, no two particles can exist in the same quantum state\footnote{We are neglecting explicit consideration of the spin contribution here -- see Section 1.3.1. Moreover, we are making drastic simplifications regarding the treatment of many electron systems in order to put across the ``essence'' of the exclusion principle. For example, equations 1.4 and 1.5 are approximations because, in reality, there are many more contributing terms (as in the Configuration Interaction method of quantum chemistry. See Kantorovich\cite{KantorovichBook} for a summary.)}. (We should also stress that the exclusion principle is \textit{not equivalent} to the statement that fermions have antisymmetric wave functions. Rather, the exclusion principle follows from the antisymmetric character of fermions).
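The vanishing of the antisymmetric combination for $\psi_1=\psi_2$ can be made concrete numerically. In the sketch below (our own illustration; the particle-in-a-box eigenfunctions simply stand in for generic single-particle states) we tabulate Eqn. 1.5 on a grid and confirm both the exclusion property and the antisymmetry under exchange:

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 201)

def box_state(n):
    """Particle-in-a-box eigenfunction, a stand-in single-particle state psi_n."""
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

def antisym(psi_a, psi_b):
    """Eqn. 1.5 on a grid: Psi[i, j] = (psi_a(x_i) psi_b(x_j) - psi_b(x_i) psi_a(x_j)) / sqrt(2)."""
    return (np.outer(psi_a, psi_b) - np.outer(psi_b, psi_a)) / np.sqrt(2.0)

psi1, psi2 = box_state(1), box_state(2)

Psi_12 = antisym(psi1, psi2)   # distinct states: a perfectly good two-particle state
Psi_11 = antisym(psi1, psi1)   # identical states: Psi vanishes everywhere (exclusion)

print(np.max(np.abs(Psi_12)))  # nonzero
print(np.max(np.abs(Psi_11)))  # zero
# Antisymmetry under exchange: Psi(x1, x2) = -Psi(x2, x1)
print(np.max(np.abs(Psi_12 + Psi_12.T)))  # zero
```

The same construction generalises to $N$ particles as a Slater determinant, whose vanishing whenever any single-particle state is repeated is the many-particle statement of the principle.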
A rather remarkable observation is that \emph{only} antisymmetric and symmetric wavefunctions are found in nature for fundamental particles, i.e. we only have bosons (symmetric state) and fermions (antisymmetric state). No other particles have been found that fall outside these symmetry classes.\footnote{Note, however, that the key principle underlying the concept of \emph{supersymmetry} is that bosons can be converted into fermions and vice versa. Supersymmetry therefore introduces a bosonic partner for every fermion (and, again, vice versa). To the chagrin of (some of) the particle physics community, however, any evidence for supersymmetry remains frustratingly elusive. Moreover, we are omitting any discussion of quasiparticles here. The results of measurements of two-dimensional systems exhibiting the fractional quantum Hall effect have been interpreted in terms of anyons\cite{Wilczek:1982}, quasiparticles with mixed symmetry.} As Omar\cite{Omar:2005} points out in a comprehensive and very readable review of the ramifications of indistinguishability in quantum mechanics, this existence of only symmetric and antisymmetric states\footnote{...for the \emph{total} wavefunction. Again, see Section 1.3.1.} is best described as a postulate (the ``symmetrization postulate''). And, disconcertingly, it's a postulate that apparently can't be deduced from the framework of quantum mechanics (either the non-relativistic or relativistic ``breeds'' of the theory). In other words, we simply have to accept that only bosons and fermions exist (or, at least, we have no good experimental evidence to date for fundamental particles arising from other rather more exotic statistics/symmetries such as parastatistics (see Omar\cite{Omar:2005})). In this sense, we have progressed very little since Pauli voiced his misgivings about the origin of the exclusion principle almost seventy years ago:
\begin{quote}
I was unable to give a logical reason for the Exclusion Principle or to deduce it from more general assumptions... in the beginning I hoped that the new quantum mechanics would also rigorously deduce the Exclusion Principle.
\end{quote}
\subsection{Putting a spin on the story}
All known fundamental particles are either bosons or fermions. (Within the Standard Model, fermions are ``matter'' particles, whereas bosons are generally force ``carriers''\footnote{...although the Higgs boson is an honourable exception.}. Again, we are not including quasiparticles in the discussion.) All bosons have integer spin, while all fermions have half-integer spin. Clearly there must be a strong connection between spin and exchange symmetry. Indeed, this link is known as the spin-statistics theorem, which holds not just for individual particles but also for composites of fundamental particles.
This link between spin, statistics, and the exclusion principle, however, very much appears not to be something that can be deduced from non-relativistic quantum mechanics. This is the origin of the statement from Feynman quoted at the start of this chapter -- the link between spin and the exclusion principle is ``deep down'' in relativistic quantum mechanics. More recently, Bartalucci \emph {et al.}\cite{Bartalucci:2006} have put it like this:
\begin{quote}
Although the principle has been spectacularly confirmed by the number and accuracy of its predictions, its foundation lies deep in the structure of quantum field theory and has defied all attempts to produce a simple proof...
\end{quote}
\noindent This means that within the non-relativistic quantum framework the spin-statistics-symmetry link is generally accepted as a dictum, although alternative non-relativistic approaches have certainly been explored\cite{Berry-Robbins:2000}. (Duck and Sudarshan\cite{DuckSudarshan} detail a proof of the spin-statistics theorem which can be ``recast'' in non-relativistic quantum field theory, but only if an aspect of their proof which stems from relativistic quantum theory (via Lorentz invariance) can be invoked as a postulate).
Notwithstanding its essential relativistic origin, the spin contribution can be incorporated into the particle wavefunction in non-relativistic quantum mechanics in a straightforward fashion via the introduction of the spin orbital. A spin orbital is a product of a spatial wavefunction (such as those described in the preceding section) and a spin function, which we can represent as $\chi(\uparrow)$ or $\chi(\downarrow)$ for the spin-up and spin-down states, respectively. So, if we use $\textbf{x}$ as a variable which incorporates both the spatial and spin coordinates, and we switch to using $\phi$ to represent only the spatial part (so that we can, as per convention, use $\psi$ to represent the spin orbital), we have the following for the spin-up state of an electron:
\begin{equation}
\psi(\textbf{x}_1)=\phi(\textbf{r}_1) \chi(\uparrow)
\end{equation}
\noindent We therefore now have two options for ensuring antisymmetry in a two-electron (or multi-electron) system: either the spatial part \textit{or} the spin part can provide the antisymmetry of the total wavefunction, $\Psi(\textbf{x}_1,\textbf{x}_2)$. In other words, if two electrons have opposite spin states then there is no constraint on the spatial wavefunction. But this is nothing more than the statement of the Pauli exclusion principle given earlier: no two electrons can exist in the same quantum state.
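To make this concrete (using the notation above, with subscripts on $\chi$ labelling the electrons as our own shorthand), consider the familiar two-electron singlet: both electrons can share the \emph{same} spatial orbital, $\phi$, provided the spin part is antisymmetric:

```latex
% Symmetric spatial part x antisymmetric spin part => antisymmetric total
\begin{equation*}
\Psi(\textbf{x}_1,\textbf{x}_2)=\phi(\textbf{r}_1)\phi(\textbf{r}_2)\,
\frac{1}{\sqrt{2}}\left[\chi_1(\uparrow)\chi_2(\downarrow)
-\chi_1(\downarrow)\chi_2(\uparrow)\right]
\end{equation*}
```

Exchanging the two electrons leaves the spatial factor unchanged but flips the sign of the spin factor, so $\Psi \rightarrow -\Psi$ as required; this is, schematically, the ground-state configuration of helium.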
\section{The origin of Pauli repulsion: A Gedankenexperiment}
At short interatomic or intermolecular separations, Pauli repulsion\footnote{We focus throughout this chapter only on fermions. For bosons, and as discussed by Mullin and Blaylock\cite{MullinBlaylock}, an effective \textit{attractive} force is often invoked.} is much stronger than any electrostatic interaction, increasing \emph{very} rapidly with decreasing distance between atoms or molecules. Recall, for example, that the Pauli repulsion term in the Lennard-Jones potential is modelled not with a $\frac{1}{r}$ dependence, as one would expect for a classical electrostatic interaction (between point charges), but with a $\frac{1}{r^{12}}$ function. This $\frac{1}{r^{12}}$ dependence is, of course, purely empirical in the Lennard-Jones (L-J) potential -- it has no grounding in theory -- but, nonetheless, the exceptionally high sensitivity of the repulsive interaction to small changes in interatomic/intermolecular separation is captured well by the functional form.
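A quick numerical illustration of just how steep that empirical $\frac{1}{r^{12}}$ wall is (a sketch in arbitrary reduced units, not fitted to any particular system):

```python
# Illustrative sketch: steepness of the empirical 1/r^12 "Pauli wall" in the
# Lennard-Jones potential, compared with a 1/r point-charge dependence.
# epsilon and sigma are arbitrary reduced units, not fitted to any real system.

def lj_repulsive(r, epsilon=1.0, sigma=1.0):
    """Repulsive term of the L-J potential, 4*eps*(sigma/r)**12."""
    return 4.0 * epsilon * (sigma / r) ** 12

def coulomb_like(r, k=1.0):
    """A 1/r point-charge dependence, for comparison."""
    return k / r

# Halving the separation multiplies the 1/r^12 term by 2**12 = 4096,
# but the 1/r term only by 2.
print(lj_repulsive(0.5) / lj_repulsive(1.0))    # 4096.0
print(coulomb_like(0.5) / coulomb_like(1.0))    # 2.0
```

Halving the separation thus increases the repulsive term by more than three orders of magnitude, which is the exceptionally high distance sensitivity referred to above.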
Of course, and as Baerends\cite{Baerends:1992} discusses in a clear overview of Pauli repulsion effects in adsorption, we are dealing not with point charges and a pure Coulombic interaction but with a screened Coulomb potential and delocalised electron ``clouds''. The overlap of the electron clouds at short separations leads, in a classical model and perhaps counter-intuitively, to an \emph{attractive} electrostatic interaction. It is only when the inter-atomic separation becomes so small that nuclear repulsion dominates that the overall electrostatic force becomes repulsive.
Thus, and as we hope is abundantly clear from previous sections, we cannot expect to understand electron repulsion due to Pauli exclusion in the context of classical electrostatics. The fundamental origin of the repulsion comes from, as we've seen, the physical impossibility of ``squeezing'' two fermions into the same quantum state. But the central question is this: just how does the exclusion principle translate into a physically measurable interaction? We'll see in the following section how dynamic force microscopy allows us to directly probe the exclusion-derived repulsion between the electron density of two atoms or molecules. Before we consider the results of the real-world experiment, however, it's very helpful to think about a ``stripped-down'' system involving the overlap of two single particle wavefunctions (see Section 1.3)\cite{WilsonGoddard1,WilsonGoddard2,WilsonGoddard3}. This ``\textit{Gedankenexperiment}'', if you will, provides compelling insights into the origin of Pauli repulsion.
First, recall that the kinetic energy operator is $-\frac{\hbar^2}{2m}\nabla^2$. The curvature of a wavefunction therefore determines its kinetic energy (via the Laplacian, $\nabla^2$). Wilson and Goddard's approach\cite{WilsonGoddard1} to elucidating the origin of Pauli repulsion was to compare the kinetic energy (KE) of a Hartree product of the wavefunctions for two same-spin electrons with the KE of an antisymmetrized product (see Fig. 1.2). A Hartree product is simply the following:
\begin{equation}
\Psi_{\textrm{\small{Hart}}}(\textbf{r}_1,\textbf{r}_2) = \psi(\textbf{r}_1)\psi(\textbf{r}_2)
\end{equation}
\noindent As should be clear from Section 1.3, the multiparticle wavefunction $\Psi_{\textrm{\small{Hart}}}$ is not antisymmetric (nor does it take into account the indistinguishability of the particles) and is therefore, in general, not appropriate for describing fermions. However, we can take the Hartree product as a representation of the system when the Pauli exclusion principle is ``suppressed'' and determine the resulting kinetic energy.
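The flavour of the Wilson--Goddard comparison can be reproduced in a few lines. The sketch below is our own toy construction (not their code): two same-spin electrons in overlapping 1D Gaussian orbitals, in units where $\hbar = m = 1$, with the kinetic energy of the Hartree product compared against that of the antisymmetrized product $(T_{11}+T_{22}-2ST_{12})/(1-S^2)$:

```python
import numpy as np

# Toy version of the Wilson-Goddard comparison (our own construction):
# kinetic energy of a Hartree product versus an antisymmetrized product of
# two overlapping 1D Gaussian orbitals, in units where hbar = m = 1.

x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]
sigma, a = 1.0, 1.0                       # orbital width and half-separation

def integrate(f):
    return np.sum(f) * dx                 # simple quadrature on the uniform grid

def orbital(center):
    psi = np.exp(-(x - center) ** 2 / (2 * sigma ** 2))
    return psi / np.sqrt(integrate(psi ** 2))   # normalise

psi1, psi2 = orbital(-a), orbital(+a)

def kinetic_element(pa, pb):
    """T_ab = <psi_a| -(1/2) d^2/dx^2 |psi_b> by finite differences."""
    d2pb = np.gradient(np.gradient(pb, dx), dx)
    return -0.5 * integrate(pa * d2pb)

S = integrate(psi1 * psi2)                # overlap integral
T11 = kinetic_element(psi1, psi1)
T22 = kinetic_element(psi2, psi2)
T12 = kinetic_element(psi1, psi2)

ke_hartree = T11 + T22                               # exclusion "suppressed"
ke_antisym = (T11 + T22 - 2 * S * T12) / (1 - S**2)  # antisymmetrized product

print(ke_hartree, ke_antisym)   # the antisymmetrized KE is strictly larger
```

Whenever the orbitals overlap ($S \neq 0$), the antisymmetrized kinetic energy exceeds the Hartree value -- precisely the curvature-driven increase in KE sketched in Fig. 1.2.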
\begin{figure}[b!]
\centering
\includegraphics[width=0.75\linewidth]{PauliProbes_Fig2.png}
\caption{The effective repulsion due to Pauli exclusion stems from the change in the curvature of the wavefunction due to the requirement for antisymmetrization in fermion systems. One approach to visualising this is to consider the orthogonalization of orbitals (which is placed as a constraint on Slater determinant approaches to constructing a multi-particle wavefunction). Higher wavefunction curvature leads to a higher kinetic energy. Equivalently, higher curvature is accounted for in Fourier space by higher spatial frequency (momentum) components. Figure taken from the PhD thesis of Julius Su\cite{SuThesis}. \copyright Julius Su (2007).}
\label{fig:Fig2}
\end{figure}
In order to incorporate Pauli exclusion we have to consider a multi-particle wavefunction which is appropriately \textit{anti-symmetrized}. Slater introduced an elegant method of enforcing this antisymmetry requirement via the determinant approach which now bears his name\cite{Slater:1929}. Wilson and Goddard\cite{WilsonGoddard1} focussed on the orthogonality of orbitals which is generally \textit{imposed} in approaches which treat the multiparticle wavefunction in terms of (a sum of) Slater determinants (see Fig. 1.2, taken from the PhD thesis of Julius Su\cite{SuThesis}). We note, however, that orthogonality is a constraint on the multiparticle wavefunction \textit{that is not strictly necessary}\cite{KantorovichBook} and, as discussed by Beylkin \emph{et al.}\cite{Beylkin:2008}, leads to ever-increasing levels of computational expense as the size of a system grows.
Nonetheless, to ensure antisymmetry (i.e. the requirement of Eqn. 1.5), wavefunction slope and curvature must necessarily increase and thus the overall picture emerging from Fig. 1.2 is correct (even if one doesn't invoke orthogonality as the root cause of the increase in wavefunction curvature). This change in curvature results in a corresponding increase in kinetic energy. A complementary explanation from a Fourier analysis perspective, as noted in the following section, is that the increase in curvature of the wavefunction necessitates the introduction of higher spatial frequency contributions, i.e. higher \emph{momentum} components. It is this increase in KE (or momentum) which is responsible for the majority of Pauli repulsion.
There are two important assumptions built into this description of Pauli exclusion, however. First, we have adopted a ``pairwise'' approach to considering electron-electron interactions when, in reality, Pauli exclusion is an \emph{n}-body, rather than a two-body, problem. The second, and related, issue is that the modification of the wavefunction due to orthogonalisation means that the electron density is distributed differently, affecting electron-electron interactions and giving rise to the effect known as correlation. Interactions between same-spin electrons go by the name \textit{Fermi correlation}, whereas those between opposite-spin electrons are known as \textit{Coulomb correlation}\footnote{The combined contributions of the exclusion principle and electron correlation produce the exchange-correlation contribution to the functional in density functional theory.}. Nonetheless, the dominant contribution to Pauli repulsion is the pure quantum-mechanical component arising from wavefunction antisymmetry.
\section{Is there a Pauli exclusion \emph{force}?}
Having spent much of the chapter up to this point using the term ``Pauli repulsion'', it might seem a little perverse for us to now pose the question as to whether there is a Pauli exclusion force or not (particularly as the experimental technique we're considering is dynamic \emph{force} microscopy). Notwithstanding the use of ``\textit{Pauli repulsion}'' or ``\textit{Pauli exclusion force}'' in the DFM literature -- and, more broadly, throughout very many areas of science (spanning, for example, particle physics, single molecule imaging and spectroscopy, astrophysics\footnote{The Pauli exclusion principle prevents the collapse of white dwarf and neutron stars. See \textit{Neutron Stars 1: Equation of State and Structure}, P. H\"ansel, AY Potekhin, and DG Yakovlev, Springer (New York, 2007).}, and cosmology) -- a number of authors have made the claim that Pauli exclusion does not produce a force in the traditional sense. Mullin and Blaylock\cite{MullinBlaylock}, in particular, present a set of arguments as to why they are of the opinion that couching the effects of Pauli exclusion in terms of a repulsive \textit{force}, or exchange \textit{force}, can be rather misleading. Indeed, they go so far as to argue -- and we quote directly from their paper -- that \emph{``there is no real force due to Fermi/Bose symmetries''}, citing, amongst others, Griffiths' description of the effects of Pauli exclusion\cite{Griffiths}:
\begin{quote}
We call it an exchange force but it is not really a force at all -- no physical agency is pushing on the particles; rather it is purely a geometric consequence of the symmetrization requirement.
\end{quote}
\noindent What does Griffiths (and, by extension, Mullin and Blaylock) mean by this?
To back up their assertion that Pauli ``repulsion'' is not a force in the traditional sense, Mullin and Blaylock consider a number of ``archetypal'' physicochemical phenomena where the exclusion principle plays a key role. Arguably the most instructive of these is their discussion of the changes in momentum in a classical gas as compared to a Fermi gas. We encourage the reader to follow the detail of the analysis in Section II of their paper (under the sub-section entitled \textit{Virial Expansion}) and restrict ourselves here simply to highlighting the central point they make.
Consider first a classical ideal gas in a container. Pressure, $P$, arises from the combined impacts of each atom of that gas on the walls of the container and is given by the force per unit area. Force, in turn, is the rate of change of momentum. The mean force, $\bar{F}$, which each individual atom of the gas contributes is $\bar{F}={\Delta p}/{\Delta t}$, where $\Delta p$ is the momentum change on striking the wall. (This is twice the atomic momentum because the sign of the momentum flips on collision.) $\Delta t$ is the time required for an atom to cross the container, i.e. $\Delta t = mL/\bar{p}$ where $L$ is the width of the container and $m$ is the atomic mass. The key point in the classical case is this: if we make the volume of an ideal gas smaller or we introduce repulsive interactions (with no change in temperature), the pressure of the gas will rise because of a decreased $\Delta t$ due to a change in (the effective) $L$ arising from collisions, \emph{but $\bar{p}$ remains the same}. (Recall that for a classical gas the root mean square momentum, $p_{\mathrm{rms}}$, is $\sqrt{3mk_BT}$.)
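This classical bookkeeping is easily checked with a few lines of Monte Carlo. The sketch below (our own illustration, in reduced units with $m = k_B = 1$; the particle number, box length and temperature are arbitrary choices) samples Maxwell--Boltzmann momenta and recovers the one-dimensional ideal gas law:

```python
import numpy as np

# Monte Carlo check of the classical argument, in reduced units (m = k_B = 1;
# particle number, box length and temperature are arbitrary illustrative
# choices). For a particle bouncing between two walls a distance L apart,
# each wall is struck once per round trip 2mL/p with momentum transfer 2p,
# so the mean force it exerts on one wall is p^2/(m L).

rng = np.random.default_rng(0)
N, L, T = 200_000, 1.0, 2.5

p = rng.normal(0.0, np.sqrt(T), N)   # Maxwell-Boltzmann momenta: var = m*k_B*T
force_on_wall = np.sum(p ** 2) / L   # sum of p^2/(m L) over all particles

# One-dimensional ideal gas law: F * L = N * k_B * T (equipartition: <p^2/m> = k_B*T)
print(force_on_wall * L / (N * T))   # close to 1
```

The pressure (here, the wall force) is set entirely by the second moment of the momentum distribution -- which is exactly the quantity that Pauli exclusion modifies in the Fermi gas discussed next.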
Compare this to what happens for a Fermi gas subject to the exclusion principle. The effect of the exclusion principle is to modify the \textit{momentum distribution}. Mullin and Blaylock argue that this is subtly different to what happens for the classical gas when repulsive interactions are introduced. Classically, the repulsive forces raise the pressure of the gas because the collisions and deflections of the atoms change the atomic transit time. Quantum-mechanically, the momentum distribution is ``intrinsically'' modified because of the higher curvature of the wavefunction which results from the exclusion principle. Position and momentum are conjugate variables and are thus two sides of the same coin -- Fourier transformation allows us to switch between the two (entirely equivalent) representations. The higher wavefunction curvature demanded by Pauli exclusion is entirely equivalent to stating that higher spatial frequency components are required in reciprocal (i.e. momentum) space\footnote{This, of course, is the fundamental origin of the Heisenberg uncertainty principle.}. It is this intrinsic symmetry-driven modification of the momentum distribution which raises the pressure of the Fermi gas.
It is worth lifting another couple of quotes from Mullin and Blaylock's paper to highlight just how strongly opposed they are to equating Pauli exclusion with a repulsive force:
\begin{quote}
The idea of an effective repulsion between fermions ignores the real physics and gives a very poor analogy with classical repulsive gases...we offer the following guiding principle regarding statistical symmetries: ``May the force be \emph{not} with you''.
\end{quote}
\noindent Is this degree of anti-force scepticism justified, however?
\section{Beyond Gedanken: Exploiting exclusion in force microscopy}
At this point, the pragmatic scanning probe microscopist could quite reasonably take issue with the preceding arguments because the primary experimental observable in a dynamic force microscopy experiment is the frequency shift of the probe. And this, via the Sader-Jarvis formalism\cite{Sader:2004}, for example, can be converted directly to a tip-sample \textit{force}. The effects of Pauli exclusion are directly measurable in DFM because they shift the resonant frequency of the probe-cantilever system, and this ultimately can be interpreted as a change in the tip-sample force. Notwithstanding the arguments put forward by Mullin and Blaylock\cite{MullinBlaylock}, and Griffiths\cite{Griffiths}, amongst others, if Pauli exclusion isn't giving rise to a force then it certainly very much looks like it in a DFM experiment.
The resolution of this apparent conflict may lie, as Moll \emph {et al.} have discussed in a recent paper focussed on the interpretation of submolecular resolution DFM images\cite{Moll:2012}, in the virial theorem. Slater showed in the 1930s that the virial theorem can be applied to a molecule\cite{SlaterVirial}, assuming that the nuclei are fixed in place by external forces. The total electron energy, $E$, is related to the electronic kinetic energy, $T$, and potential energy, $V$, as follows:
\begin{eqnarray}
T=-E-r(\frac{dE}{dr})\\
V=2E+r(\frac{dE}{dr})
\end{eqnarray}
The electronic kinetic energy and potential energy are thus coupled via the virial theorem. Moll \emph{et al.}\cite{Moll:2012} claim that, although the Pauli exclusion force is non-conservative in character, if we assume a diatomic (or dimolecular) system with a single degree of freedom -- as is the case for the tip-sample system in DFM -- then the Pauli energy and the increase in electronic kinetic energy can be related as follows:
\begin{equation}
E_{\textrm{\small{Pauli}}}(z)=\frac{1}{z} \int_z^{\infty}\Delta E_{kin}(z')dz'
\end{equation}
\noindent where $z$ is the interatomic/intermolecular separation. The issue of extracting accurate measures of non-conservative forces from the frequency shift observable in DFM, however, continues to attract considerable debate and discussion. For example, the Sader-Jarvis inversion technique\cite{Sader:2004} widely applied to extract forces from frequency-shift-vs-separation curves must, as John Sader and his co-authors themselves highlight\cite{Sader:2005}, be applied with great care under conditions where there is a significant contribution from non-conservative forces.
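Slater's two virial relations quoted above imply the identity $T + V = E$, which must hold for any binding-energy curve $E(r)$. This is easy to verify numerically; the Morse potential below is purely our illustrative choice of model curve, not taken from the papers under discussion:

```python
import numpy as np

# Consistency check of Slater's molecular virial relations,
#   T = -E - r dE/dr,   V = 2E + r dE/dr,
# which must satisfy T + V = E for any binding-energy curve E(r).
# A Morse potential is used purely as an illustrative model curve.

r = np.linspace(0.5, 6.0, 2000)
De, alpha, r0 = 1.0, 1.5, 1.2           # arbitrary Morse parameters
E = De * (1.0 - np.exp(-alpha * (r - r0))) ** 2 - De

dEdr = np.gradient(E, r)                # numerical derivative dE/dr
T = -E - r * dEdr
V = 2.0 * E + r * dEdr

print(np.max(np.abs(T + V - E)))        # ~0: the identity holds pointwise
```

The identity holds to machine precision irrespective of the model potential, which is why the virial theorem provides such a convenient bridge between the kinetic energy change and the total tip-sample energy.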
Although the authors cited in the previous section propose reasons for drawing a distinction between a traditional force and the effects arising from Pauli exclusion, the increase in kinetic energy and momentum resulting from the requirement for wavefunction antisymmetry nonetheless ultimately results in an interaction which is measured as a repulsive force in a DFM experiment. That is, the connection between the change in kinetic energy and the total energy of the tip-sample system appears to result in a measurable, and positive (i.e. repulsive), contribution to the frequency shift due to the Pauli exclusion principle. What is important to realise from the previous sections, however, is that Pauli exclusion really is not comparable to other types of interparticle interaction. In this sense it is a phenomenon which is distinct from the four fundamental forces, i.e. strong, weak, electromagnetic (in particular), and, if the graviton exists, gravity.
\begin{figure}[t!]
\centering
\includegraphics[width=0.75\linewidth]{PauliProbes_Fig3.png}
\caption{Comparison of \textbf{(a)} an experimental frequency shift image and \textbf{(b)} a simulated frequency shift image for a 3,4,9,10-perylenetetracarboxylic dianhydride (PTCDA) molecule calculated on the basis of the Pauli exclusion-derived change in electron kinetic energy estimated using Eqn. 1.12. \textbf{(c)} Charge density of a PTCDA molecule at a given tip-sample separation. Compare with \textbf{(d)}, the change in kinetic energy at the same tip-sample separation. Figure adapted from Moll et al.\cite{Moll:2012}. \copyright Institute of Physics Publishing (2012).}
\label{fig:Fig3}
\end{figure}
\subsection{Intramolecular Imaging}
Although DFM's ``sibling'' technique, scanning tunnelling microscopy (STM), has long been capable of submolecular resolution imaging, in the sense that molecular orbital density can be probed (see an earlier volume of this Springer series on Atom and Single Molecule Machines \cite{SpringerVolIII}), only DFM is capable of resolving the chemical framework or atomic structure of a molecule. This is because STM probes orbital density only within a specific energy window (set by the potential difference between the tip and sample) and in conventional tunnelling microscopy therefore only the frontier molecular orbitals are accessible\footnote{In the scanning tunnelling hydrogen microscopy (STHM)\cite{STHM1,STHM2,STHM3} variant of STM mentioned earlier, this constraint can be circumvented.}. The spatial distribution of the frontier orbital density generally does not map onto the atomic positions, and indeed often bears very little relationship to the ``ball-and-stick'' models of molecules so familiar to chemists and physicists.
As Giessibl has highlighted\cite{Giessibl:1998}, however, DFM is not restricted to probing the frontier orbital density and is sensitive to the total charge density. This is because intramolecular forces depend on the total electron density, rather than the density of states within a certain energy window\cite{Feynman:1939}. The sensitivity of DFM to the total electron density is particularly pronounced when in the Pauli exclusion regime of imaging, i.e. at very small tip-sample separations. Fig. 1.1 at the start of this chapter shows very clearly that, unlike STM, DFM in this Pauli exclusion regime produces images which are remarkably similar to the ball-and-stick structural models of molecules.
On the basis of Fig. 1.3 (and related theoretical and experimental data), Moll et al.\cite{Moll:2012} argue that there is a close connection between the charge density of a molecule and the increase in electron kinetic energy due to Pauli exclusion. This assumes that \textit{(a)} the arguments regarding wavefunction curvature outlined in Sections 1.4 and 1.5 provide an accurate model of electron-electron interactions at the tip-sample junction, and \textit{(b)} the dominant effect is the change in kinetic energy, and that this can be ``deconvolved'' from the overall response of the electron density as a function of the tip-sample separation. They approximate the complicated relationship between the increase in kinetic energy and the separation of two atoms with different nuclear charges (see Eqn 6 of their paper) as follows:
\begin{equation}
\Delta E_{\textrm{\small{kin}}}(z) = A\rho_s(z)^B
\end{equation}
\noindent where $z$ is the interatomic/intermolecular separation, $\rho_s(z)$ is the sample charge density at separation $z$, and $A$ and $B$ are two tunable parameters. As can be seen in Fig. 1.3, this simple power law model, which involves no explicit consideration of the probe, provides good agreement with experimental frequency shift images of a 3,4,9,10-perylenetetracarboxylic dianhydride (PTCDA) molecule. We also include in Fig. 1.3, again from Moll \emph {et al.}'s paper, a comparison of the charge density of the PTCDA molecule with the increase in kinetic energy calculated using the simple model of Equation 1.11. There is again apparently good agreement, adding support to the idea that DFM is sensitive to the total charge density of the system.
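Numerically, the two parameters of such a power-law model can be extracted by a straight-line fit in log-log space, since $\log \Delta E_{\mathrm{kin}} = \log A + B \log \rho_s$. The sketch below generates synthetic ``data'' from an exponentially decaying model charge density (all values are invented for illustration; this is not the PTCDA data of Moll \emph{et al.}) and recovers $A$ and $B$:

```python
import numpy as np

# Sketch of fitting the power-law model  Delta E_kin = A * rho^B  by linear
# regression in log-log space. The charge density profile and the "true"
# A, B are invented illustrative values, not the PTCDA data of Moll et al.

z = np.linspace(2.5, 5.0, 50)              # tip-sample separations (arbitrary units)
rho = 1e-2 * np.exp(-z / 0.6)              # model sample charge density rho_s(z)

A_true, B_true = 40.0, 1.2
dE_kin = A_true * rho ** B_true            # synthetic noise-free "data"

# log(dE_kin) = log(A) + B*log(rho): a degree-1 fit yields both parameters
B_fit, logA_fit = np.polyfit(np.log(rho), np.log(dE_kin), 1)
A_fit = np.exp(logA_fit)

print(A_fit, B_fit)                        # recovers A = 40.0, B = 1.2
```

With real (noisy) frequency-shift data the same log-log regression applies; only the confidence intervals on $A$ and $B$ change.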
What is not included in the model used to generate the simulated images in Fig. 1.3 -- although Moll and co-workers deal with this point elsewhere\cite{Gross:2012} -- is the relaxation or bending of the CO molecule at the tip apex as it is moved across the underlying PTCDA molecule. It turns out that this is an extremely important contribution to the observation of intramolecular and intermolecular contrast in DFM images and we'll return to it in the final section.
\subsection{Density depletion}
Pauli exclusion modifies the curvature and spatial extent of the tip-sample wavefunction, and this in turn produces extensive changes in the total electron density of the system. A key aspect of this is the generation of regions of density depletion. Baerends\cite{Baerends:1992} discusses the importance of density depletion in the context of the Ag-O bond, where a substantial degree of Pauli exclusion-derived depletion around the centre of the bond is observed.
As a more recent example in the context of DFM, a number of the authors of this chapter have explored the importance of density depletion in the interpretation of images taken in the Pauli exclusion regime. The molecular system we used is that shown in Fig. 1.1(G) -- a hydrogen-bonded assembly of naphthalenetetracarboxylic diimide (NTCDI) molecules on a passivated silicon surface. Fig. 1.4 shows a comparison of the total electron density for an NTCDI assembly vs the density difference at a number of different $z$ positions of the tip above a C-C bond (Fig. 1.4(a)-(c)) and above an intermolecular region where hydrogen-bonding is expected (Fig. 1.4(d)-(f))\cite{Sweetman:2014}. Pauli exclusion results in strong tip-induced electron depletion above both the intermolecular and intramolecular bond regions.
\begin{figure}[t!]
\centering
\includegraphics[width=0.75\linewidth]{PauliProbes_Fig4.png}
\caption{Total electronic density (TED) and electron density difference (EDD) calculated for an NTCDI assembly plotted 100pm above the molecular plane for a variety of different tip heights. At each tip height in a simulated $F$($z$) curve, the EDD was obtained by first calculating the TED for (i) the isolated surface, and (ii) the isolated NTCDI tip. These two densities were then summed together and
subtracted from the relaxed total density for the full system. The remaining quantity is the EDD. This quantifies the fraction of charge which is redistributed due to the interaction of the DFM tip and the NTCDI molecule. The TED (left) and EDD (right) are shown for an oxygen-down NTCDI probe molecule at (a)-(c) the C-C location on an NTCDI molecule, and (d)-(f) at the intermolecular H-bond location for the different tip heights specified in each figure. Figure from Sweetman et al.\cite{Sweetman:2014}. \copyright Nature Publishing Group (2014).}
\label{fig:Fig4}
\end{figure}
The most important insight to be derived from this analysis of density depletion is that, as is always the case in any type of scanning probe experiment (and as is well-understood across the SPM community), the influence of the tip on the imaging process must \textit{always} be carefully considered. Tip-sample interactions and convolution have been an issue for scanning tunnelling microscopy since its invention, of course, but with the advent of DFM imaging in the so-called ``Pauli regime'' the probe can certainly no longer be treated as just a perturbation of the electronic structure. The tip-sample separation for the type of high resolution images shown in Fig. 1.1 is such that the repulsive Pauli component makes a strong contribution -- the tip interacts heavily with the underlying molecule adsorbed on the sample surface. In this sense, the sample-tip apex system should be considered as one large molecule.
In the following, and final, section of this chapter we'll see just how important a role the tip can play in generating high resolution DFM images.
\section{But do we \emph{really} see bonds?}
A key ``ingredient'' in attaining intramolecular contrast in DFM is the passivation of the tip apex. Gross \emph {et al.}\cite{Gross:2009} first showed that CO was a particularly appropriate molecule to use for imaging submolecular structure. (In the same paper, and in subsequent work\cite{GrossAPL:2013}, they compared the imaging capabilities of CO with those of other species adsorbed at the tip apex). Although deliberate functionalisation with CO is certainly not necessary to obtain intra- (and inter-)molecular contrast\cite{Sweetman:2014}, carbon monoxide remains the molecule of choice at present for high resolution DFM.
It turns out that CO is very far from a rigid probe, however, and the tilting of the molecule at the tip apex plays an essential role in the imaging process. The flexibility of CO has been studied in some detail by a number of groups \cite{Sun:2011,Welker:2012,Gross:2012,Weymouth:2014} but it is a very recent paper\cite{Hapala:2014}, available only at the condensed matter arXiv at the time of writing, on which we would like to focus here. This paper provides particularly telling insights into the extent to which the probe itself contributes to the structure seen in molecular and submolecular images.
Hapala \emph {et al.}\cite{Hapala:2014} use an exceptionally simple, but remarkably powerful, model to simulate DFM (and scanning tunnelling hydrogen microscopy\cite{STHM1,STHM2,STHM3}) images acquired either with a CO probe or any other type of tip apex. They represent the tip-sample geometry as shown in Fig. 1.5 and account for interactions between the probe and sample molecule using analytical Lennard-Jones potentials. It is very important to note that \emph{no account is taken of intra- or intermolecular charge density} in this model: the approach adopted by Hapala \emph {et al.} uses only the coordinates of the atoms within the molecule under study -- electron density due to bonding between those atoms is not incorporated in their model. In other words, the force-field does not rely on the electron density of the system. Although this might at first glance appear to be a rather crude approach (as compared to, for example, modelling the system using an \emph{ab initio} method such as density functional theory), it is nonetheless the case that their ``stripped-down'' model accurately reproduces the experimental data. This is the acid test of any theory or simulation.
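The spirit of this probe-particle picture can be captured in a few dozen lines. The sketch below is our own drastically simplified, 1D-lateral version (not Hapala \emph{et al.}'s code): a probe particle is tethered to the tip apex by a harmonic spring and feels Lennard-Jones pair potentials from two fixed sample atoms; all parameter values are invented reduced units chosen only for illustration.

```python
import numpy as np

# A drastically simplified, 1D-lateral "probe particle" toy (not Hapala et
# al.'s code). The probe is tethered to the tip apex by a harmonic spring and
# interacts with two fixed sample atoms through Lennard-Jones pair potentials.
# All parameters are invented reduced units chosen only for illustration.

eps, sig = 0.05, 3.0                  # L-J parameters (same for every pair)
k_spring = 0.25                       # lateral stiffness of the CO-like probe
z_probe = 3.0                         # probe height above the atom plane
atoms_x = np.array([-1.4, 1.4])       # two sample atoms on the z = 0 plane

def lj(r):
    return 4 * eps * ((sig / r) ** 12 - (sig / r) ** 6)

def sample_energy(u):
    """L-J energy of the probe at lateral position u and fixed height."""
    r = np.sqrt((u - atoms_x) ** 2 + z_probe ** 2)
    return np.sum(lj(r))

def scan(tip_positions, relax=True):
    """Minimum total energy at each lateral tip position."""
    offsets = np.linspace(-0.8, 0.8, 321) if relax else np.array([0.0])
    energies = []
    for X in tip_positions:
        cand = X + offsets            # candidate probe positions (includes X)
        e = (0.5 * k_spring * offsets ** 2
             + np.array([sample_energy(u) for u in cand]))
        energies.append(e.min())      # probe settles into the lowest-energy spot
    return np.array(energies)

tip_x = np.linspace(-3.0, 3.0, 121)
e_rigid = scan(tip_x, relax=False)
e_relaxed = scan(tip_x, relax=True)

# Relaxation can only lower the energy; the lowering is largest where the
# lateral L-J forces bend the probe, i.e. near the atoms. It is this tip
# bending that sharpens (and can distort) the apparent "bond" features.
print(float(np.max(e_rigid - e_relaxed)))
```

Even this toy shows the essential point: allowing the probe to relax laterally reshapes the energy (and hence force and frequency-shift) landscape without any reference to the sample's bonding charge density.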
\begin{figure}[b!]
\centering
\includegraphics[width=0.5\linewidth]{PauliProbes_Fig5.png}
\caption{Schematic model of the tip-sample geometry used by Hapala et al.\cite{Hapala:2014} in their analysis of the origin of intra- and intermolecular contrast in DFM images. The final metal atom at the tip apex and the ``probe particle'' are shaded in gold and cyan respectively with the underlying molecular layer represented by the standard space-filling model. The coloured vectors show the various forces on the tip: $F_{Tip,R}$ (green) is the radial force; $F_{Tip,xy}$ (red) is the lateral force; and $F_{Surf}$ (yellow) is the force due to the sample molecules. ($T_i$ and $T_t$ refer to tunnelling processes not of interest in this chapter.) Taken from Hapala \textit{et al.}\cite{Hapala:2014}.}
\label{fig:Fig5}
\end{figure}
Fig. 1.6 shows a comparison between experimental DFM images and the output of Hapala \emph {et al.}'s simulations for two systems comprising assemblies of 8-hydroxyquinoline tetramers and NTCDI molecules (as discussed above in the context of Fig. 1.4), respectively. For both of these systems, intermolecular interactions are mediated by hydrogen bonding. Note, however, how the sharp intra- and intermolecular features in the simulated image of Fig. 1.6(a) agree extremely well with those in the experimental data shown in Fig. 1.6(b), despite the absence of any intra- or intermolecular charge density in the model. Figs. 1.6(c) and 1.6(d)
similarly show a comparison between the ``flexible tip'' model and DFM images of a hydrogen-bonded NTCDI assembly\cite{Sweetman:2014} taken by a number of the authors of this chapter. Again, intramolecular and intermolecular features are observed in the simulated image, despite the absence of any charge density due to covalent- or hydrogen-bonding.
\begin{figure}[t!]
\centering
\includegraphics[width=0.8\linewidth]{PauliProbes_Fig6.png}
\caption{\textbf{(a)},\textbf{(b)}: Comparison of a simulated DFM image of a hydrogen-bonded assembly of 8-hydroxyquinoline molecules (from Hapala et al.\cite{Hapala:2014}) with the corresponding experimental DFM image taken from Zhang et al.\cite{Zhang:2013}. \textbf{(c)} Series of simulated frequency shift images at different tip-sample separations, again from Hapala et al.\cite{Hapala:2014}, of NTCDI molecules using a (top row) unrelaxed, and (bottom row) relaxed tip. \textbf{(d)} Experimental frequency shift image for comparison. (From Sweetman et al.\cite{Sweetman:2014}).}
\label{fig:Fig6}
\end{figure}
It therefore would appear that the flexibility of the probe molecule plays a major role in the imaging of intra- and intermolecular structure. But we've seen in previous sections that there's also a close correspondence between images simulated on the basis of an increase in electron kinetic energy due to Pauli exclusion and the experimental frequency shift data\cite{Moll:2012}. Moreover, the intensity of intramolecular bonds as observed by DFM is related to the Pauling bond order\cite{Gross:2012}, i.e. the charge density. Similarly, the DFM images of de Oteyza \emph {et al.}\cite{deOteyza:2013} clearly show a pronounced difference between single, double, and triple bonds. The key issue is therefore the extent to which the response of the tip to interatomic and/or intermolecular charge density is a ``first order'' vs ``second order'' contribution to the imaging mechanism, as compared to the flexibility of the probe. This is currently a very active area of debate.
In order to explore the influence of tip relaxation on the DFM images of NTCDI shown in Figs. 1.4 and 1.6, we (i.e. SJ, AS, LK, PJM, and co-workers\cite{Sweetman:2014}) generated simulated images using a variant of DFT where both the atomic geometry and the electronic structure of the system were ``frozen''. Despite the lack of probe relaxation, a weak feature at the expected position of the hydrogen-bond was observed. Nonetheless, another question remains: to what extent might convolution of the tip's electron density with molecular charge density at the edge of a molecule account for the observation of ``intermolecular'' features? In the supplementary information file associated with their paper, Hapala \emph{et al.}\cite{Hapala:2014} suggest that this convolution effect could be as strong as the interaction of the probe with any charge density due to an intermolecular bond. This is an exceptionally important issue which needs to be addressed in a timely fashion by the scanning probe microscopy community.
\section{Conclusions}
The history of the development of the Pauli exclusion principle provides fascinating insights into just how problematic it is to associate purely quantum mechanical concepts with classical ``real world'' analogies. In this sense, it's a shame that Pauli's \textit{Zweideutigkeit} term did not gain wider acceptance as it's a less misleading, albeit rather more prosaic, description than ``spin''. Similarly, when we describe the Pauli exclusion principle as giving rise to a repulsive force we should bear in mind that the origin of the repulsion detected in dynamic force microscopy is not at all adequately explained via classical analogies. The interaction arises from the modification of the electrons' momentum distribution due to the increased curvature of the wavefunction imposed by the requirement for antisymmetrization in fermion systems. Classical analogies will clearly fail. Understanding the fundamental origin of the increased wavefunction curvature is ultimately, as Feynman puts it in the quote at the start of this chapter, ``\textit{deep down in relativistic quantum mechanics}".
Dynamic force microscopy provides us with direct access to the effects of Pauli exclusion on an atom-by-atom and/or molecule-by-molecule basis, and with resolution comparable to the spatial extent of a single chemical bond. This is a remarkable capability. At the time of writing it has been only five years since Gross \emph {et al.}\cite{Gross:2009} pioneered the exploitation of Pauli exclusion in force microscopy. As this variant of scanning probe microscopy is therefore in its infancy, there is potentially immense scope for detailed insights into the effects of the exclusion principle in a variety of atomic and molecular systems. However, every probe microscope image -- indeed, every image (regardless of the technique used to generate that image) -- is, at some level, a convolution of the properties of the sample and those of the imaging system. In the Pauli exclusion regime of dynamic force microscopy this convolution can be exceptionally strong. We therefore need to temper our enthusiasm for the acquisition of ultrahigh resolution images with caution regarding the interpretation of the data, as the examples included in this chapter clearly show.
\section{Acknowledgments}
We are very grateful for financial support from the UK Engineering and Physical Sciences Research Council in the form of a fellowship (EP/G007837), from the Leverhulme Trust (through grant F/00114/BI), from the European Commission's ICT-FET programme via the Atomic Scale and Single Molecule Logic gate Technologies (AtMol) project (www.atmol.eu), Contract No. 270028, and the ACRITAS Marie Curie Initial Training Network (www.acritas.eu). We are also very grateful for the support of the University of Nottingham High Performance Computing Facility. PJM thanks Christian Joachim (CNRS Toulouse) and Leo Gross (IBM Zurich) for helpful discussions and e-mail exchanges on the role of the exclusion principle in probe microscopy.
\bibliographystyle{unsrtnat}
\section{Introduction}
Computer experiments with both qualitative and quantitative variables are becoming increasingly common
(see, for example, Rawlingson et al., 2006; Qian, Wu and Wu, 2008; Han et al., 2009; Zhou, Qian and Zhou, 2011; Deng et al., 2017). Extensive studies have been devoted to design and modeling of such experiments.
This article focuses on a particular class of designs, namely, {\em marginally coupled designs}, which have been argued to be a cost-effective design choice (Deng, Hung and Lin, 2015). The goal here is to propose a general method for constructing marginally coupled designs when the design for qualitative variables is a multi-level orthogonal array.
The first systematic plan to accommodate computer experiments with both qualitative and quantitative variables is sliced Latin hypercube designs
proposed by Qian and Wu (2009). In such a design, for each level combination of the qualitative factors,
the corresponding design for the quantitative factors is a small Latin hypercube (McKay, Beckman and Conover, 1979).
The run size of a sliced Latin hypercube design increases dramatically with the number of the qualitative factors.
To accommodate a large number of qualitative factors with an economical run size, Deng, Hung and Lin (2015) introduced marginally coupled designs which possess the property that with respect to each level of each qualitative variable, the corresponding design for quantitative variables is a sliced Latin hypercube design. Other enhancements of sliced Latin hypercubes include multi-layer sliced Latin hypercube designs (Xie et al., 2014), clustered-sliced Latin hypercube designs (Huang et al., 2016), and bi-directional sliced Latin hypercube designs (Zhou et al., 2016).
Since being introduced by Deng, Hung and Lin (2015), there have been two developments of marginally coupled designs, due to He, Lin and Sun (2017) and He et al. (2017), respectively. Compared with the original work, both developments provide designs for quantitative factors without clustered points, thereby improving the space-filling property, which refers to spreading out points in the design region as evenly as possible (Lin and Tang, 2015).
He, Lin and Sun (2017) constructed marginally coupled designs of $s^u$ runs that can accommodate $(s+1-k)s^{u-2}$ qualitative factors and $k$ quantitative factors for a prime power $s$ and $1 \leq k < s+1$.
The drawback of this method is that when $s=2$, the corresponding designs can accommodate only up to $3$ quantitative factors.
He et al. (2017) addressed this issue and introduced a method for constructing marginally coupled designs of $2^u$ runs
for $2^{u_1-1}$ qualitative factors of two levels and up to $2^{u-u_1}$ quantitative factors, where $1 \leq u_1 \leq u$.
This paper aims to construct marginally coupled designs of $s^u$ runs in which the designs for qualitative factors are $s$-level orthogonal arrays, for a prime power $s$ and any positive integer $u$. The primary technique in the proposed construction is the subspace theory of the Galois field $GF(s^u)$. Although such a technique was used in the constructions in He et al. (2017) for $s=2$, it is not trivial to generalize their constructions to any prime power $s$, and extra care must be taken in the generalization. The other contribution of this article is to introduce two cases
for which guaranteed low-dimensional space-filling property for quantitative factors can be obtained.
For example, for $s=2$, the designs of $2^u$ runs for quantitative factors achieve stratification on a $2 \times 2 \times 2$ grid of any three dimensions.
The remainder of the paper is organized as follows. Section 2 introduces background and preliminary results.
New constructions and the associated theoretical results are presented in Section 3.
Section 4 tabulates the designs with three-level qualitative factors.
The space-filling property of the newly constructed designs is discussed in Section 5,
and the last section concludes the paper. All the proofs are relegated to the Appendix.
\section{Background and Preliminary Results}
\subsection{Background}
A matrix of size $n\times m$, where the $j$th column has $s_j$ levels $0,\ldots,s_j-1$, is called an orthogonal array of strength $t$,
if for any $n\times t$ sub-array, all possible level combinations appear equally often.
It is denoted by ${\mbox{\small OA}}(n, s_1\cdots s_m, t)$ and the simplified notation ${\mbox{\small OA}}(n, s^{u_1}_1s^{u_2}_2\cdots s^{u_k}_k, t)$ will be used if the first $u_1$ columns have $s_1$ levels, the next $u_2$ columns have $s_2$ levels, and so on. If $s_1=\cdots=s_m=s$,
it is shortened as ${\mbox{\small OA}}(n, m, s, t)$. If all rows of an ${\mbox{\small OA}}(n, m, s, t)$ can form a vector space,
it is called a linear orthogonal array (Hedayat, Sloane and Stufken, 1999).
For a prime power $s$, let $GF(s)=\{\alpha_0, \alpha_1, \ldots, \alpha_{s-1}\}$ be a Galois field of order $s$,
where $\alpha_0=0$ and $\alpha_1=1$. Throughout this paper, unless otherwise specified, entries of any $s$-level array are from $GF(s)$.
For a set $S$, $|S|$ represents the number of elements in $S$.
A Latin hypercube is an $n\times k$ matrix each column of which is a random permutation of $n$ equally spaced levels (McKay, Beckman and Conover, 1979).
In this article, these $n$ levels are represented by $0, \ldots, n-1$, and a Latin hypercube of $n$ runs for $k$ factors is denoted by ${\mbox{\small LHD}}(n, k)$.
A special type of Latin hypercube is a {\em cascading Latin hypercube}, which, with $n=n_1n_2$ points and levels $(n_1, n_2)$, places an $n_2$-point Latin hypercube about each point of an $n_1$-point Latin hypercube (Handcock, 1991).
Latin hypercubes can be obtained from orthogonal arrays.
Given an ${\mbox{\small OA}}(n, m, s, t)$, in each column, replace the $r=n/s$ positions having level $i$ by a random permutation
of $\{ ir, \ldots, (i+1)r-1\}$, for $i=0,\ldots,s-1$.
The resulting design achieves $t$-dimensional stratification, and is called an
orthogonal array-based Latin hypercube (Tang, 1993).
This approach is referred to as the {\it level replacement-based Latin hypercube} approach.
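For prime $s$, the level replacement-based Latin hypercube approach can be sketched numerically as follows. This is an illustration only; the function name and the NumPy-based implementation are ours, not from the literature.

```python
import numpy as np

def oa_based_lhd(oa, s, seed=0):
    """Expand an OA(n, m, s, t) into an n x m Latin hypercube: in each
    column, the r = n/s positions at level i are replaced by a random
    permutation of {i*r, ..., (i+1)*r - 1} (Tang, 1993)."""
    oa = np.asarray(oa)
    n, m = oa.shape
    r = n // s
    rng = np.random.default_rng(seed)
    lhd = np.empty_like(oa)
    for j in range(m):
        for level in range(s):
            idx = np.flatnonzero(oa[:, j] == level)  # the r rows at this level
            lhd[idx, j] = rng.permutation(np.arange(level * r, (level + 1) * r))
    return lhd

# A full factorial OA(4, 2, 2, 2); the resulting Latin hypercube achieves
# two-dimensional stratification and collapses back to the OA via floor(d/r).
oa = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
d = oa_based_lhd(oa, s=2)
```

Floor-dividing each entry of `d` by $r=n/s$ recovers `oa`, which is the inverse of the level replacement used here.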
Let $D_1$ be an ${\mbox{\small OA}}(n, m, s, 2)$ and $D_2$ be an ${\mbox{\small LHD}}(n, k)$. Design $D=(D_1, D_2)$ is called
a {\em marginally coupled design}, denoted by ${\mbox{\small MCD}}(D_1, D_2)$, if for each level of every column of $D_1$, the corresponding rows
in $D_2$ have the property that when projected onto each column, the resulting entries consist of exactly one level from
each of the $n/s$ equally-spaced intervals $\{[0,s-1],[s,2s-1],\ldots,[n-s,n-1]\}.$ As a space-filling design is generally sought, a $D_2$ in which the whole design or any of its column-wise projections has clustered points shall be avoided.
We define a Latin hypercube $D_2$ to be {\em non-cascading} if, when projected onto any two distinct columns of $D_2$, the resulting design is not a cascading Latin hypercube of levels $(s,n/s)$.
To study the existence of ${\mbox{\small MCD}}(D_1,D_2)$'s, He, Lin and Sun (2017) defined the matrix
$\tilde{D}_2$ based on $D_2$. Let $d_{2,ij}$ be the ($i,j)$th entry of $D_2$. The ($i,j$)th entry $\tilde{d}_{2,ij}$ is given by
\begin{equation}\label{eq:tildeD2}
\tilde{d}_{2,ij} =\Big \lfloor d_{2,ij}/s\Big\rfloor, \ i=1,\ldots, n \ \hbox{and} \ j=1,\ldots,k,
\end{equation}
\noindent where $\lfloor x\rfloor$ denotes the greatest integer less than or equal to $x$. The operator in (\ref{eq:tildeD2})
scales the levels in the interval $[0,s-1]$ to level 0, the levels in the interval $[s,2s-1]$ to level 1, and so on. Thus, the levels in $\tilde{D}_2$ are $\{0, 1, \ldots, n/s-1\}$.
On the other hand, design $D_2$ can be obtained from $\tilde{D}_2$ via the {\it level
replacement-based Latin hypercube} approach.
Lemma \ref{lem:D1-D2-condition} given by He, Lin and Sun (2017) provides a necessary and sufficient condition
for the existence of an ${\mbox{\small MCD}}(D_1,D_2)$ when $D_1$ is an $s$-level orthogonal array.
\begin{lem}\label{lem:D1-D2-condition}
Given that $D_1$ is an ${\mbox{\small OA}}(n, m, s, 2)$, $D_2$ is an ${\mbox{\small LHD}}(n, k)$ and $\tilde{D}_2$ is defined via (\ref{eq:tildeD2}),
then $(D_1, D_2)$ is a marginally coupled design if and only if for $j=1,\ldots,k$,
$(D_1, {\bf d}_j)$ is an ${\mbox{\small OA}}(n, s^m(n/s), 2)$, where ${\bf d}_j$ is the $j$th column of $\tilde{D}_2$.
\end{lem}
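Lemma \ref{lem:D1-D2-condition} reduces checking the marginally coupled property to a pairwise strength-2 check between columns of $D_1$ and columns of $\tilde{D}_2$. A minimal numerical sketch (the helper function is ours, with a toy $n=4$, $s=2$ example):

```python
import numpy as np

def is_marginally_coupled(D1, D2, s):
    """Check the condition of Lemma 1: collapse D2 to tilde(D2) via
    floor(d/s), then verify that every (column of D1, column of tilde(D2))
    pair forms an OA(n, s x (n/s), 2).  With n rows and s * (n/s) = n level
    pairs, strength 2 holds iff all n observed pairs are distinct.
    (This does not check that D1 itself is an OA or that D2 is a Latin
    hypercube; those properties are assumed.)"""
    D1, D2 = np.asarray(D1), np.asarray(D2)
    n = D1.shape[0]
    tilde = D2 // s                       # levels 0, ..., n/s - 1
    return all(
        len(set(zip(D1[:, i].tolist(), tilde[:, j].tolist()))) == n
        for i in range(D1.shape[1]) for j in range(tilde.shape[1]))

D1 = [[0], [0], [1], [1]]                 # an OA(4, 1, 2, 2)
coupled = is_marginally_coupled(D1, [[0], [2], [1], [3]], 2)    # True
clustered = is_marginally_coupled(D1, [[0], [1], [2], [3]], 2)  # False
```

In the failing case the rows at level 0 of $D_1$ map to $\{0,1\}$, i.e. two points in the same interval $[0, s-1]$, violating the definition of a marginally coupled design.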
In addition to conveniently studying the existence of marginally coupled designs, the definition of $\tilde{D}_2$
allows us to determine whether or not $D_2$ is {\em non-cascading}. By definition, a Latin hypercube $D_2$ is {\em non-cascading} if any two distinct columns of the corresponding $\tilde{D}_2$
cannot be transformed to each other by level permutations.
\subsection{Preliminary results}
This subsection presents a result that is the cornerstone of the proposed general construction in the next section. Although the result itself is trivial, it is important to review the notation, concepts and existing results to help understand the later development. An example is also given to facilitate the understanding. Suppose that we wish to construct an ${\mbox{\small MCD}}(D_1,D_2)$ with $D_1 = {\mbox{\small OA}}(s^u, m,s,2)$ and $D_2 = {\mbox{\small LHD}}(s^u, k)$. Lemma \ref{lem:D1-D2-condition} indicates that it is equivalent to construct $D_1=( {\bf a}_1, \ldots, {\bf a}_m)$ and $\tilde D_2=({\bf d}_1, \ldots, {\bf d}_k) = {\mbox{\small OA}}(s^u,k, s^{u-1},1)$ such that $({\bf d}_j, {\bf a}_i)={\mbox{\small OA}}(s^u, s^{u-1}\times s,2)$ (here $s^{u-1}\times s$ means that ${\bf d}_j$ has $s^{u-1}$ levels and ${\bf a}_i$ has $s$ levels) and any two distinct columns ${\bf d}_i$ and ${\bf d}_j$ cannot be transformed to each other by level permutations. This subsection focuses on a construction of an ${\mbox{\small OA}}(s^u, s^{u-1} \times s, 2)$.
First, we review the connection between an $s^{u-1}$-level column and a $(u-1)$-dimensional subspace of $GF(s^w)$, where $w\geq u-1$. To see this, note that an $s^{u-1}$-level column can be generated by choosing a subarray $A_0={\mbox{\small OA}}(s^w, u-1,s, u-1)$ from a linear ${\mbox{\small OA}}(s^{w}, m, s, 2)$, say $A$, and substituting each level combination of these columns
by a unique level of $\{0,1, \ldots, s^{u-1}-1\}$ in some manner. This procedure is known as the {\em method of replacement} (Wu and Hamada, 2011). One method to achieve the substitution is $A_0\cdot(s^{u-2}, \ldots, s, 1)^T$, where the superscript $T$ represents the transpose
of a matrix or a vector; this is exactly what we adopt in this paper.
The $A_0$, consisting of $u-1$ independent columns, can also be generated using all linear combinations of rows of a $w\times (u-1)$ matrix $G$, called the {\it generator matrix} of $A_0$ (Hedayat, Sloane and Stufken, 1999). In addition, all linear combinations of columns of $G$ form a $(u-1)$-dimensional vector subspace of $GF(s^w)$.
Therefore, an $s^{u-1}$-level column corresponds
to one $(u-1)$-dimensional subspace of $GF(s^w)$, where $w\geq u-1$.
Consider the case of $w=u$. Let $S_u$ consist of $s$-level column vectors of length $u$,
then all of its column vectors form a space of dimension $u$.
For details on vector spaces, see Horn and Johnson (2015).
For two column vectors ${\bf x}, {\bf y}\in S_u$, if ${\bf x}^T{\bf y}=0$ in $GF(s)$,
they are said to be orthogonal.
For a nonzero element ${\bf x}\in S_u$, define
\begin{equation}\label{def:O(x)}
O({\bf x})=\{{\bf y}\in S_u\ | \ {\bf y}^T{\bf x}=0\}.
\end{equation}
It can be seen that $O({\bf x})$ is a $(u-1)$-dimensional subspace of $S_u$.
Let $G({\bf x})$ be a $u\times (u-1)$ matrix consisting of
$u-1$ independent columns of $O({\bf x})$. For a vector from $S_u\setminus O({\bf x})$, say ${\bf z}$,
all linear combinations of rows of the matrix $(G({\bf x}), {\bf z})$
can generate an $s^u\times u$ matrix. For ease of presentation, the first $u-1$ columns and the last column of the resulting matrix are denoted by $A({\bf x})$ and ${\bf a}$, respectively.
Applying the {\em method of replacement} to $A({\bf x})$ yields an $s^{u-1}$-level vector, say ${\bf d}$.
Lemma~\ref{lemma:basic-idea} indicates that ${\bf d}$ and ${\bf a}$ are orthogonal.
\begin{lem}\label{lemma:basic-idea}
For ${\bf d}$ and ${\bf a}$ constructed above, we have that $({\bf d}, {\bf a})$ is an $OA(s^u, s^{u-1}\times s, 2)$.
\end{lem}
\begin{example}
For $s=u=3$, we have $GF(3)=\{0, 1, 2\}$ and $S_3=\{(x_1, x_2, x_3)^T \mid x_i\in GF(3), i=1, 2,3\}$. Consider ${\bf x}=(1, 2, 0)^T$, and we have
\begin{equation*}
O({\bf x}) =
\left(\begin{array}{ccc ccc ccc}
0 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 2 \\[-6pt]
0 & 0 & 0 & 1 & 1 & 1 & 2 & 2 & 2 \\[-6pt]
0 & 1 & 2 & 0 & 1 & 2 & 0 & 1 & 2
\end{array}
\right),
\end{equation*}
\noindent and the dimension of $O({\bf x})$ is 2. Choose two independent columns $(0, 0, 1)^T$ and $(1, 1, 0)^T$ from $O({\bf x})$; combining them as columns gives
$G({\bf x})$.
For ${\bf z}=(1, 2, 0)^T\in S_3\setminus O({\bf x})$,
$(G({\bf x}), {\bf z} )$ generates a $27\times 3$ matrix $(A({\bf x}), {\bf a})$, whose transpose
is as follows
\[
\left(\begin{array}{ccccccccc ccccccccc ccccccccc}
0&1&2&0&1&2&0&1&2&0&1&2&0&1&2&0&1&2&0&1&2&0&1&2&0&1&2\\[-6pt]
0&0&0&1&1&1&2&2&2&1&1&1&2&2&2&0&0&0&2&2&2&0&0&0&1&1&1\\[-6pt]
0&0&0&2&2&2&1&1&1&1&1&1&0&0&0&2&2&2&2&2&2&1&1&1&0&0&0
\end{array}
\right).
\]
By the method of replacement, let ${\bf d}=A({\bf x})\cdot(3, 1)^T$. Then $({\bf d}, {\bf a})$ is an $OA(27, 9\times 3, 2)$ whose
transpose is
\[
\left(\begin{array}{ccccccccc ccccccccc ccccccccc}
0&3&6&1&4&7&2&5&8&1&4&7&2&5&8&0&3&6&2&5&8&0&3&6&1&4&7\\[-6pt]
0&0&0&2&2&2&1&1&1&1&1&1&0&0&0&2&2&2&2&2&2&1&1&1&0&0&0
\end{array}\right).
\]
\end{example}
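The example above can be verified numerically. The sketch below (our own throwaway script; plain integer arithmetic mod 3 suffices since $s=3$ is prime) rebuilds $O({\bf x})$, the matrix generated by $(G({\bf x}), {\bf z})$, and the collapsed column ${\bf d}$, and confirms that $({\bf d}, {\bf a})$ has strength 2:

```python
import numpy as np
from itertools import product

s, u = 3, 3
x = np.array([1, 2, 0])

# O(x): all y in S_3 with y^T x = 0 over GF(3); a 2-dimensional subspace.
O = [y for y in product(range(s), repeat=u) if np.dot(y, x) % s == 0]
assert len(O) == s ** (u - 1)                       # 9 vectors

# G(x) from the two independent columns (0,0,1)^T and (1,1,0)^T, and
# z = (1,2,0)^T chosen outside O(x).
G = np.column_stack(([0, 0, 1], [1, 1, 0]))
z = np.array([1, 2, 0])
assert np.dot(z, x) % s != 0                        # z is not in O(x)

# All 27 linear combinations of the rows of (G(x), z).
coeffs = np.array(list(product(range(s), repeat=u)))
M = coeffs @ np.column_stack((G, z)) % s
A, a = M[:, :u - 1], M[:, u - 1]

# Method of replacement: d = A(x) . (3, 1)^T, a 9-level column.
d = A @ np.array([3, 1])

# Lemma 2: (d, a) is an OA(27, 9 x 3, 2); each of the 27 level pairs
# appears exactly once.
assert len(set(zip(d.tolist(), a.tolist()))) == 27
```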
\section{Construction}\label{sec:2}
This section introduces a general construction and a subspace construction for marginally coupled designs using a set of vectors from $S_u$. For each construction, a necessary condition on the set of vectors is given. For given
design parameters $s$, $u$ and $u_1$, the two constructions provide marginally coupled designs with different numbers of qualitative and quantitative factors. The key results are summarized in Theorems 1 and 2.
In the following constructions, when choosing nonzero vectors ${\bf x}, {\bf y}$ from $S_u$ to construct
orthogonal arrays or to construct $(u-1)$-dimensional subspaces $O({\bf x})$ and $O({\bf y})$,
we require ${\bf x}\neq \alpha {\bf y}$ for any $\alpha \in GF(s)$. This is because if ${\bf x}=\alpha{\bf y}$
for some $\alpha\in GF(s)$, ${\bf x}$ and ${\bf y}$ generate the columns representing the same factor, and
$O({\bf x})$ and $O({\bf y})$ actually represent the same $(u-1)$-dimensional subspace.
\subsection{General construction}
Suppose we choose $m+k$ vectors ${\bf z}_1, \ldots, {\bf z}_m, {\bf x}_1, \ldots, {\bf x}_k$ from $S_u$,
such that ${\bf z}_i$ is not in any of $O({\bf x}_j)$. We propose the following three-step construction.
\begin{itemize}
\item[Step 1.] Obtain $D_1=({\bf a}_1, \ldots, {\bf a}_m)$ by taking all linear combinations of the rows of $({\bf z}_1, \ldots, {\bf z}_m)$, where ${\bf a}_i$ is the $i$th column of $D_1$;
\item[Step 2.] For each ${\bf x}_j$, choose $u-1$ independent columns from $O({\bf x}_j)$ in (\ref{def:O(x)}) to form a generator matrix $G({\bf x}_j)$. Obtain $A({\bf x}_j)$ by taking all linear combinations of the rows of $G({\bf x}_j)$. Apply the
{\em method of replacement} to obtain an $s^{u-1}$-level column vector ${\bf d}_j$ from $A({\bf x}_j)$. Denote the resulting design by $\tilde D_2=({\bf d}_1, \ldots, {\bf d}_k)$;
\item[Step 3.] Obtain $D_2$ from $\tilde D_2$ via the {\it level replacement-based Latin hypercube} approach.
\end{itemize}
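The three steps can be sketched in code. The helper names and the greedy choice of independent columns are ours, and the arithmetic is over $GF(s)$ with $s$ prime; a genuine prime power $s$ would require full Galois-field arithmetic rather than integers mod $s$.

```python
import numpy as np
from itertools import product

def gf_rank(rows, s):
    """Rank over GF(s), s prime, by Gaussian elimination (rows = vectors)."""
    m = np.array(rows, dtype=int) % s
    rank = 0
    for col in range(m.shape[1]):
        piv = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if piv is None:
            continue
        m[[rank, piv]] = m[[piv, rank]]
        m[rank] = m[rank] * pow(int(m[rank, col]), s - 2, s) % s  # scale pivot to 1
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] = (m[r] - m[r, col] * m[rank]) % s
        rank += 1
    return rank

def general_construction(s, zs, xs, seed=0):
    """Steps 1-3 for prime s; zs and xs are vectors in S_u chosen so that
    every z_i lies outside every O(x_j)."""
    u = len(zs[0])
    rng = np.random.default_rng(seed)
    coeffs = np.array(list(product(range(s), repeat=u)))

    # Step 1: D1 = all linear combinations of the rows of (z_1, ..., z_m).
    D1 = coeffs @ np.column_stack(zs) % s

    # Step 2: for each x_j, pick u-1 independent columns of O(x_j) as
    # G(x_j), expand to A(x_j), and collapse by the method of replacement.
    cols = []
    for x in xs:
        O = [v for v in coeffs if v.any() and np.dot(v, x) % s == 0]
        G = []
        for v in O:                        # greedy independent selection
            if len(G) < u - 1 and gf_rank(G + [v], s) == len(G) + 1:
                G.append(v)
        A = coeffs @ np.column_stack(G) % s
        cols.append(A @ (s ** np.arange(u - 2, -1, -1)))
    tilde_D2 = np.column_stack(cols)

    # Step 3: expand tilde(D2) to a Latin hypercube, level by level.
    D2 = np.empty_like(tilde_D2)
    for j in range(tilde_D2.shape[1]):
        for l in range(s ** (u - 1)):
            idx = np.flatnonzero(tilde_D2[:, j] == l)
            D2[idx, j] = rng.permutation(np.arange(l * s, (l + 1) * s))
    return D1, D2

# Theorem 1(i) with s = 3, u = 3, u1 = 2: z_i = e_i and x_j in A.
e1, e2 = np.array([1, 0, 0]), np.array([0, 1, 0])
xs = [np.array([1, a, b]) for a in (1, 2) for b in range(3)]
D1, D2 = general_construction(3, [e1, e2], xs)
```

Each column of the resulting $D_2$ is a permutation of $\{0,\ldots,26\}$, and every pair formed by a column of $D_1$ and a column of $\lfloor D_2/3 \rfloor$ has strength 2, as Lemma 1 requires.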
The methods of obtaining ${\bf d}_j$ and $ {\bf a}_i$ in Steps 1 and 2 of the general construction are essentially the construction in Section 2.2, and thus by Lemma \ref{lemma:basic-idea}, $({\bf d}_j, {\bf a}_i)$ is an $OA(s^u, s^{u-1}\times s, 2)$. In addition, $D_1$ is an ${\mbox{\small OA}}(s^u, m, s, 2)$ and $D_2$ is an ${\mbox{\small LHD}}(s^u,k)$. Therefore, $(D_1, D_2)$ is a marginally coupled design. The condition of the construction is that ${\bf z}_i$ is not in any of the $O({\bf x}_j)$'s.
To find such ${\bf z}_i$'s and ${\bf x}_j$'s, we consider the set of vectors $\{{\bf e}_1, \ldots, {\bf e}_{u_1}\}\subset S_u$,
where ${\bf e}_i$ is a vector of $S_u$ with the $i$th entry equal to 1 and the other entries equal to 0, and
$1\leq u_1\leq u$. We further define
\begin{equation}\label{def:mathcal{A}}
\mathcal{A}=\{ {\bf x}\in S_u\setminus (\cup_{i=1}^{u_1} O({\bf e}_i)) \mid \mbox{the first entry of ${\bf x}$ is 1}\},
\end{equation}
where $O(\cdot)$ is defined in (\ref{def:O(x)}). The main result of using $\mathcal{A}$ and ${\bf e}_i$'s to construct
${\mbox{\small MCD}}(D_1,D_2)$'s is provided in Theorem \ref{thm-simple-construction}. Before presenting the theorem, we describe a result which counts the number of vectors in $\mathcal{A}$.
\begin{lem}\label{lemma:general-union-O(e)}
There are $n_A=(s-1)^{u_1-1}s^{u-u_1}$ column vectors in $\mathcal{A}$ in (\ref{def:mathcal{A}}).
\end{lem}
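Lemma \ref{lemma:general-union-O(e)}'s count is easy to confirm by brute force for small prime $s$ (the function name is ours):

```python
from itertools import product

def size_of_A(s, u, u1):
    """Brute-force |A| for prime s: vectors x in S_u with first entry 1
    and x^T e_i != 0 (i.e. x_i != 0) for every i = 1, ..., u1."""
    return sum(1 for x in product(range(s), repeat=u)
               if x[0] == 1 and all(x[i] != 0 for i in range(1, u1)))

# Lemma 3: |A| = (s-1)^(u1-1) * s^(u-u1)
for s, u, u1 in [(2, 4, 2), (3, 4, 3), (5, 3, 2)]:
    assert size_of_A(s, u, u1) == (s - 1) ** (u1 - 1) * s ** (u - u1)
```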
The value of $n_A$ is the number of columns in $D_1$ or $D_2$, as revealed in Theorem \ref{thm-simple-construction}.
\begin{theorem}\label{thm-simple-construction}
For $\{{\bf e}_1,\ldots, {\bf e}_{u_1}\}$ defined above, $\mathcal{A}$ in (\ref{def:mathcal{A}}) and $n_A$ in Lemma \ref{lemma:general-union-O(e)}, if in the general construction we
\begin{itemize}
\item[(i)] choose ${\bf z}_i={\bf e}_i$ and ${\bf x}_j\in \mathcal{A}$
for $1\leq i\leq u_1$ and $1\leq j\leq n_A$, an $\text{MCD}(D_1, D_2)$ with $\ D_1={\mbox{\small OA}}(s^u, u_1, s, u_1), D_2={\mbox{\small LHD}}(s^u, n_A)$
can be obtained, or,
\item[(ii)] choose ${\bf z}_i\in \mathcal{A}$ and ${\bf x}_j={\bf e}_j$ for $1\leq i\leq n_A$ and $1\leq j\leq u_1$, an
$\text{MCD}(D_1, D_2)$ with $D_1={\mbox{\small OA}}(s^u, n_A, s, 2), D_2={\mbox{\small LHD}}(s^u, u_1)$ can be obtained,
\end{itemize}
where both $D_2$'s are non-cascading Latin hypercubes.
\end{theorem}
The design $D_1$ (or $D_2$) in Theorem \ref{thm-simple-construction} (i) (or (ii)) can only accommodate $u_1\leq u$ columns.
A natural question is whether or not more columns in $D_1$ (or $D_2$) can be constructed. The answer is positive for $s=2$, as shown in He et al. (2017), by choosing some linear
combinations of $\{{\bf e}_1, \ldots, {\bf e}_{u_1}\}$ besides themselves for
${\bf z}_i$'s (or ${\bf x}_j$'s). For $s>2$, the answer is still positive; however, there is a price to pay. That is, when more columns of $D_1$ than those in Theorem~\ref{thm-simple-construction} are constructed using some linear combinations of
$\{{\bf e}_1, \ldots, {\bf e}_{u_1}\}$ in addition to themselves, the number of columns in $D_2$ will be less than that in Theorem~\ref{thm-simple-construction}. The reason such a cost must be paid is quantified in Proposition~\ref{thm-combination is impossible}.
\begin{proposition}\label{thm-combination is impossible}
For $s>2$ and the set $\{{\bf e}_1,\ldots, {\bf e}_{u_1}\}$ defined above,
let ${\bf z}=\sum_{i=1}^{u_1}\lambda_i {\bf e}_i$ with at least two nonzero coefficients,
where $\lambda_i\in GF(s)$. For such ${\bf z}$'s and $\mathcal{A}$ in (\ref{def:mathcal{A}}), there exists a column vector ${\bf x}\in \mathcal{A}$,
such that ${\bf z}\in O({\bf x})$.
\end{proposition}
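For a concrete check of Proposition \ref{thm-combination is impossible}, one can enumerate, say for $s=3$, $u=4$, $u_1=3$, every combination ${\bf z}$ with at least two nonzero coefficients and confirm that some ${\bf x}\in\mathcal{A}$ is orthogonal to it (an illustration only, not a proof):

```python
import numpy as np
from itertools import product

s, u, u1 = 3, 4, 3
# A: first entry 1, entries 2..u1 nonzero, remaining entries free.
A = [np.array([1, x2, x3, x4]) for x2 in (1, 2) for x3 in (1, 2)
     for x4 in range(s)]

# Every z = sum(lambda_i e_i) with at least two nonzero coefficients
# lies in O(x) for some x in A, as Proposition 1 asserts for s > 2.
for lam in product(range(s), repeat=u1):
    if sum(c != 0 for c in lam) < 2:
        continue
    z = np.array(lam + (0,) * (u - u1))
    assert any(np.dot(z, x) % s == 0 for x in A)
```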
Proposition \ref{thm-combination is impossible} shows that, when $s>2$, for any combination ${\bf z}$ other than those in $\{\alpha{\bf e}_i \mid \alpha\in GF(s)\setminus\{0\}, i=1, \ldots, u_1\}$, it is impossible for ${\bf z}$ to lie outside $O({\bf x})$ for all ${\bf x}\in \mathcal{A}$. This means that if such a ${\bf z}$ is added to construct one more column of $D_1$, then not all the columns in $\mathcal{A}$ can be used to construct columns of $D_2$. As a compromise, after adding more combinations of $\{{\bf e}_1, \ldots, {\bf e}_{u_1}\}$ for $D_1$,
we use a subset $\{{\bf x}_1, \ldots, {\bf x}_k\}\subset\mathcal{A}$ to construct $(u-1)$-dimensional subspaces $\{O({\bf x}_1), \ldots, O({\bf x}_k)\}$, where $k<n_A$. The next section discusses an approach to find such a subset.
\subsection{Subspace construction}
This subsection introduces an approach to find a proper subset $\{{\bf x}_1, \ldots, {\bf x}_k\}\subset\mathcal{A}$
and judiciously select some linear combinations
${\bf z}=\lambda_1{\bf e}_1+ \cdots + \lambda_{u_1}{\bf e}_{u_1}$, with $\lambda_j\in GF(s)$,
such that ${\bf z}\in S_u\setminus (\cup_{i=1}^kO({\bf x}_i))$.
One building block of the proposed approach is some disjoint groups of
$\mathcal{A}$. To partition $\mathcal{A}$ into different groups, note that for $1\leq j\leq u_1$, the last $u-u_1$ entries of ${\bf e}_j$
are zeros and thus the first $u_1$ entries of ${\bf z}$ and ${\bf x}_i$ determine whether or not ${\bf z}$ is orthogonal to ${\bf x}_i$.
In light of this observation, the partition of $\mathcal{A}$ is based on the distinct values of the first $u_1$ entries of vectors in $\mathcal{A}$. The proof of Lemma \ref{lemma:general-union-O(e)} reveals that the first $u_1$ entries of
${\bf x}\in \mathcal{A}$ can take $n_B=(s-1)^{u_1-1}$ distinct values, say $\{(1, b_{i2}, \ldots, b_{iu_1})\mid i=1, \ldots, n_B\}$. Let ${\bf b}_i=(1, b_{i2}, \ldots, b_{iu_1}, 0,\ldots, 0)^T$, and
define $\mathcal{A}_i$ to be the subset of $\mathcal{A}$ whose column vectors
have the same first $u_1$ entries as those of ${\bf b}_i$. It shall be noted that $|\mathcal{A}_i|=s^{u-u_1}$ and $\mathcal{A}_i$'s form a disjoint partition of $\mathcal{A}$. That is,
\begin{equation}\nonumber
\mathcal{A}=\cup_{i=1}^{n_B}\mathcal{A}_i.
\end{equation}
The other building block is a set of $\overline E_i$'s defined as follows.
Let $E=\{ \sum_{j=1}^{u_1}\lambda_j {\bf e}_j \mid \lambda_j\in GF(s)\}$
consist of all linear combinations of ${\bf e}_1, \ldots, {\bf e}_{u_1}$.
For fixed $i$, ${\bf b}_i$ and $\mathcal{A}_i$, $1\leq i\leq n_B$, define
\begin{eqnarray}\nonumber
E_i=\{\ {\bf z}\in E \mid {\bf z}^T{\bf b}_i=0 \ \} \ \mbox{and} \ \overline E_i=E\setminus E_i.
\end{eqnarray}
If ${\bf z}\in \overline E_i$, then ${\bf z}\notin O({\bf b}_i)$,
which implies ${\bf z}\notin O({\bf x})$ for all ${\bf x}\in \mathcal{A}_i$ since
the last $u-u_1$ entries of ${\bf z}$ are zeros. This leads to
Lemma \ref{lemma:Ei-and-Mi}.
\begin{lem}\label{lemma:Ei-and-Mi}
For $1\leq v\leq n_B$, any ${\bf z}\in \cap_{i=1}^v \overline E_{i}$ and
any ${\bf x}\in \cup_{i=1}^v \mathcal{A}_{i}$, we have ${\bf z}\notin O({\bf x})$.
\end{lem}
Lemma \ref{lemma:Ei-and-Mi} is useful because it provides the $\{{\bf z}_i\}$'s and $\{{\bf x}_j\}$'s required by the general construction in Section 3.1. That is, one can choose ${\bf z}_i$ from $\cap_{i=1}^v \overline E_{i}$ and ${\bf x}_j$ from $\cup_{i=1}^v \mathcal{A}_{i}$, which is exactly the method Theorem \ref{theorem:subspace-construction} adopts.
It remains to determine the elements of $\cap_{i=1}^{v} \overline E_{i}$ for $1\leq v \leq n_B$.
The answer is not difficult for $v=1$, and that
for $v=n_B$ can be found in Proposition~\ref{prop-the-bound-for-intersectionset}
in the Appendix for interested readers.
For $1< v< n_B$, the explicit form for elements in $\cap_{i=1}^{v}\overline E_{i}$
depends on the specific sets $\overline E_{1}, \ldots, \overline E_{v}$. Thus, we cannot express the elements
in $\cap_{i=1}^{v}\overline E_{i}$ using a general form. However, we are able to compute the number of elements in $\cap_{i=1}^{v}\overline E_{i}$
for some cases.
Theorem \ref{theorem:subspace-construction}
shows that this number is closely related to the number of variables in the marginally
coupled design. In practice, experimenters also hope to
know the number in advance, as it can help them determine which marginally coupled design to choose given the numbers of qualitative and quantitative variables in the experiment.
Proposition \ref{prop:s-level} below provides the number $|\cap_{i=1}^{v}\overline E_{i}|$ in some circumstances.
\begin{proposition}\label{prop:s-level}
For $\{{\bf b}_1, \ldots, {\bf b}_{n_B}\}$ defined above,
suppose that there exists a subset $\{{\bf b}_{i_1}, \ldots, {\bf b}_{i_{n^*}}\}$ with $n^*\leq n_B$, such that any $u_1$ elements of the set are independent. We have that
for $1\leq v\leq n^*$ and $1\leq i_1<i_2<\cdots <i_v\leq n_B$, the set $\cap_{j=1}^v \overline E_{i_j}$
contains $f(v)$ elements with \begin{eqnarray}\label{eq:f(v)}
f(v)=
\begin{cases}
(s-1)^{v}s^{u_1-v}, & 1\leq v\leq u_1, \\
m^*, & u_1+1\leq v\leq n^*,
\end{cases}
\end{eqnarray}
where $m^*=s^{u_1}[1-{v\choose 1}s^{-1} + \cdots +(-1)^{u_1}{v\choose u_1}s^{-u_1}] + \sum_{i=u_1+1}^v(-1)^i{v\choose i}$.
\end{proposition}
The value of $n^*$ in Proposition \ref{prop:s-level} will be studied in Section 3.3. Example \ref{ex:1} provides an illustration of the ${\bf b}_i$'s, $\mathcal{A}_i$'s, $\overline E_i$'s and Proposition
\ref{prop:s-level}.
\begin{example}\label{ex:1}
Consider $s=3$, $u=4$ and $u_1=3$. By definition, we have ${\bf e}_1=(1,0,0,0)^T$, ${\bf e}_2=(0,1,0,0)^T$ and ${\bf e}_3=(0,0,1,0)^T$, $\mathcal{A}=\{(x_1, x_2, x_3, x_4)^T \ | \ x_1=1, x_2, x_3\in \{1, 2\}, x_4\in \{0,1,2\}\}$,
$n_B=(3-1)^{3-1}=4$, ${\bf b}_1=(1,1,1,0)^T$, ${\bf b}_2=(1, 1, 2,0)^T$, ${\bf b}_3=(1, 2, 1,0)^T$, and ${\bf b}_4=(1,2,2,0)^T$. The disjoint groups $\mathcal{A}_1, \ldots, \mathcal{A}_4$ are displayed in Table \ref{tb0}. Note that any three of $\{{\bf b}_1, {\bf b}_2, {\bf b}_3, {\bf b}_4\}$
are independent. According to (\ref{eq:f(v)}), we have $f(1)=18$, $f(2)=12$,
$f(3)=8$ and $f(4)=6$. That is, each of $\overline E_i$'s has $18$ vectors, as shown in Table \ref{tb1};
the intersection of any two of $\overline E_i$'s has
$12$ vectors, the intersection of any three of $\overline E_i$'s has $8$
vectors, and the intersection of all four of them has $6$ vectors.
\begin{table}[h]
{\tabcolsep=6pt
\renewcommand{\arraystretch}{1}
\begin{center}
\caption{Partition of $\mathcal{A}$ in Example~\ref{ex:1}\label{tb0}}
\scalebox{0.8}{
\begin{tabular}{ccc c ccc c ccc c ccc} \hline
\multicolumn{3}{c}{$\mathcal{A}_1$ } & \multicolumn{1}{c}{} &\multicolumn{3}{c}{$\mathcal{A}_2$} & \multicolumn{1}{c}{}&\multicolumn{3}{c}{$\mathcal{A}_3$ } & \multicolumn{1}{c}{}& \multicolumn{3}{c}{$\mathcal{A}_4$ } \\\hline
1 & 1 & 1&& 1 & 1& 1&& 1 & 1& 1&& 1 & 1 & 1 \\[-5pt]
1 & 1 & 1&& 1 & 1& 1&& 2 & 2& 2&& 2 & 2 & 2 \\[-5pt]
1 & 1 & 1&& 2 & 2& 2&& 1 & 1& 1&& 2 & 2 & 2 \\[-5pt]
0 & 1 & 2&& 0 & 1& 2&& 0 & 1& 2&& 0 & 1 & 2 \\\hline
\end{tabular}}
\end{center}}
\end{table}
\begin{table}[h]
{\tabcolsep=10pt
\renewcommand{\arraystretch}{1}
\begin{center}
\caption{Vectors of $\overline E_i$'s in Example~\ref{ex:1}\label{tb1}}
\scalebox{0.8}{
\begin{tabular}{ ccc ccc ccc ccc ccc ccc}
\multicolumn{18}{c}{$\overline E_1$}\\ \hline
0&0 &0 &1 &1 &1 &1 &1 &1 & 0 &0 &0 &2 &2 &2 &2 &2 &2\\[-5pt]
0&1 &1 &0 &0 &1 &2 &1 &2 & 0 &2 &2 &0 &0 &2 &1 &2 &1\\[-5pt]
1&0 &1 &0 &1 &0 &2 &2 &1 & 2 &0 &2 &0 &2 &0 &1 &1 &2\\[-5pt]
0&0 &0 &0 &0 &0 &0 &0 &0 & 0 &0 &0 &0 &0 &0 &0 &0 &0\\\hline
\multicolumn{18}{c}{$\overline E_2$}\\ \hline
0 &0 &0 &1 &1 &1 &1 &1 &1 &0 &0 &0 &2 &2 &2 &2 &2 &2\\[-5pt]
0 &1 &1 &0 &0 &1 &1 &2 &2 &0 &2 &2 &0 &0 &2 &2 &1 &1\\[-5pt]
1 &0 &2 &0 &2 &0 &1 &2 &1 &2 &0 &1 &0 &1 &0 &2 &1 &2\\[-5pt]
0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0 &0\\ \hline
\multicolumn{18}{c}{$\overline E_3$}\\ \hline
0 & 0 & 0& 1& 1 &1 &1 &1 &1 &0&0&0&2&2&2&2&2&2\\[-5pt]
0 & 1 & 1& 0& 0 &2 &1 &2 &1 &0&2&2&0&0&1&2&1&2\\[-5pt]
1 & 0 & 2& 0& 1 &0 &1 &2 &2 &2&0&1&0&2&0&2&1&1\\[-5pt]
0 & 0 & 0& 0& 0 &0 &0 &0 &0 &0&0&0&0&0&0&0&0&0\\ \hline
\multicolumn{18}{c}{$\overline E_4$}\\ \hline
0 & 0 & 0 & 1&1 &1 &1 &1 &1 &0&0&0&2&2&2&2&2&2\\[-5pt]
0 & 1 & 1 & 0&0 &2 &1 &1 &2 &0&2&2&0&0&1&2&2&1\\[-5pt]
1 & 0 & 1 & 0&2 &0 &1 &2 &1 &2&0&2&0&1&0&2&1&2\\[-5pt]
0 & 0 & 0 & 0&0 &0 &0 &0 &0 &0&0&0&0&0&0&0&0&0\\ \hline
\end{tabular}}
\end{center}}
\end{table}
\end{example}
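The counts in Example \ref{ex:1} are small enough to verify by brute force. The sketch below (illustrative only, not part of the construction) enumerates the sets $\overline E_i$ over $GF(3)$ using just the first $u_1=3$ coordinates of each ${\bf b}_i$, since the trailing zeros play no role:

```python
from itertools import product, combinations

s, u1 = 3, 3  # three levels; work with the first u1 coordinates only
# first u1 entries of b_1, ..., b_4 from Example 1
b = [(1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2)]

def not_orthogonal(lam, bi):
    """Test lam . bi != 0 over GF(3) for a coefficient vector lam."""
    return sum(l * c for l, c in zip(lam, bi)) % s != 0

E = list(product(range(s), repeat=u1))          # all 27 coefficient vectors
Ebar = [{lam for lam in E if not_orthogonal(lam, bi)} for bi in b]

# f(v): sizes of the intersections of any v of the Ebar sets
f = {v: {len(set.intersection(*(Ebar[i] for i in idx)))
         for idx in combinations(range(4), v)} for v in (1, 2, 3, 4)}
print(f)  # {1: {18}, 2: {12}, 3: {8}, 4: {6}}
```

The singleton sets confirm that the intersection size depends only on how many $\overline E_i$'s are intersected, in agreement with Proposition \ref{prop:s-level}.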
Next, we show how to use the ${\bf b}_i$'s, $\mathcal{A}_i$'s and $\overline E_i$'s ($i=1,\ldots,n_B$) to construct marginally coupled designs. To do so, we define $E_v^*$, $\mathcal{A}_v^*$ and $g(v)$ as follows. To define $E_v^*$,
given $s$, $u$ and $u_1$, find a set
$\{{\bf b}_{i_1}, \ldots, {\bf b}_{i_{n^*}}\}$, by calculation or computer search, such that any $u_1$ elements of the set are independent; for $1\leq v\leq n^*$, obtain
$\cap_{j=1}^v\overline E_{i_j}$, which
has $f(v)$ elements by Proposition \ref{prop:s-level}. Define $E_v^*$ to be the subset of
$\cap_{j=1}^v\overline E_{i_j}$ consisting of the elements whose first nonzero entry equals 1.
The value $g(v)=f(v)/(s-1)$ is the number of elements of $E_v^*$. Define $\mathcal{A}_v^*=\cup_{j=1}^v\mathcal{A}_{i_j}$.
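As a concrete sketch (assuming the setting of Example \ref{ex:1}: $s=3$, $u_1=3$, with ${\bf b}_1$, ${\bf b}_2$, ${\bf b}_3$ as given there, and writing only the first three coordinates since the fourth is always zero), $E_3^*$ and $g(3)$ can be computed as follows:

```python
from itertools import product

s, u1 = 3, 3
b = [(1, 1, 1), (1, 1, 2), (1, 2, 1)]  # first u1 entries of b_1, b_2, b_3

def in_Ebar(lam, bi):
    """lam belongs to Ebar_i iff lam . b_i != 0 over GF(3)."""
    return sum(l * c for l, c in zip(lam, bi)) % s != 0

# intersection of Ebar_1, Ebar_2, Ebar_3: f(3) = (s-1)^3 vectors
inter = [lam for lam in product(range(s), repeat=u1)
         if all(in_Ebar(lam, bi) for bi in b)]

# E_3^*: keep the vectors whose first nonzero entry equals 1
E_star = [lam for lam in inter if next(x for x in lam if x != 0) == 1]
g3 = len(E_star)   # g(3) = f(3)/(s-1) = 4
```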
\begin{theorem}\label{theorem:subspace-construction}
For $E_v^*$, $\mathcal{A}_v^*$ and $g(v)$ defined above, if in the general construction, we
\begin{itemize}
\item[(i)] choose ${\bf z}_i \in E_v^*$
and ${\bf x}_j \in \mathcal{A}_v^*$, $i=1,\ldots,g(v)$ and $j=1,\ldots,vs^{u-u_1}$, an $\text{MCD}(D_1, D_2)$ with $\ D_1={\mbox{\small OA}}(s^u, g(v), s, 2), D_2={\mbox{\small LHD}}(s^u, vs^{u-u_1})$
can be obtained, or
\item[(ii)] choose ${\bf z}_i \in \mathcal{A}_v^*$
and ${\bf x}_j \in E_v^*$, $i =1,\ldots, vs^{u-u_1}$ and $j =1, \ldots, g(v)$, an $\text{MCD}(D_1, D_2)$ with $\ D_1={\mbox{\small OA}}(s^u, vs^{u-u_1}, s, 2), D_2={\mbox{\small LHD}}(s^u, g(v))$ can be obtained,
\end{itemize}
where both $D_2$'s are non-cascading Latin hypercubes.
\end{theorem}
For ease of the presentation, the method in Theorem
\ref{theorem:subspace-construction} is called {\em subspace construction}.
Example~\ref{ex:2} provides a detailed illustration of obtaining marginally coupled designs via
the subspace construction using the $\mathcal{A}_i$'s and $\overline E_{i}$'s
in Example \ref{ex:1}.
\begin{example}\label{ex:2}
(Continuation of Example \ref{ex:1})
Table \ref{tb:thm3andthm4} presents the ${\mbox{\small MCD}}(D_1, D_2)$'s
obtained according to the subspace construction method by
choosing $v=1,2,3$ or $4$.
As an illustration, we provide the detailed steps of applying item $(i)$ of Theorem \ref{theorem:subspace-construction} for $v=3$.
Consider the sets $\cap_{j=1}^3 \overline E_j$ and $\cup_{j=1}^3 \mathcal{A}_{j}$.
In {Step 1}, $f(3)=8$, hence $g(3)=4$. The four elements in $\cap_{j=1}^3 \overline E_j$ with the first nonzero entry being 1
are ${\bf z}_1=(0,0,1,0)^T, {\bf z}_2=(0,1,0,0)^T, {\bf z}_3=(1,0,0,0)^T$, and ${\bf z}_4=(1,2,2,0)^T$;
take $({\bf z}_1, {\bf z}_2, {\bf z}_3, {\bf z}_4)$ as a generator matrix to obtain
$D_1=({\bf a}_1, {\bf a}_2, {\bf a}_3, {\bf a}_4)$, an ${\mbox{\small OA}}(81,4,3,2)$.
In {Step 2}, the $3\cdot 3^{4-3}=9$ elements in
$\cup_{j=1}^3 \mathcal{A}_{j}=\{{\bf x}_1, {\bf x}_2, \ldots, {\bf x}_9\}$ are shown in Table \ref{tb0}.
For each ${\bf x}_i$, let $G({\bf x}_i)$ consist of three independent columns of $O({\bf x}_i)$,
and take $G({\bf x}_i)$ as a generator matrix to obtain the matrix $A_i$, an ${\mbox{\small OA}}(81, 3, 3, 3)$;
let ${\bf d}_i=A_i\cdot(3^2, 3, 1)^T$, and further let $\tilde D_2=({\bf d}_1, \ldots, {\bf d}_9)$, an ${\mbox{\small OA}}(81, 9, 27, 1)$.
In {Step 3}, construct $D_2$, an ${\mbox{\small LHD}}(81, 9)$, from $\tilde D_2$ by the {\it level-replacement based Latin hypercube} approach. The above three-step procedure results in an ${\mbox{\small MCD}}(D_1,D_2)$,
which is listed in Table \ref{tb:thm3andthm4} marked by $\#$, and in the middle of Table \ref{tb-designs} marked by $\diamondsuit$.
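The level replacement in Step 3 can be sketched as follows. This is an illustrative version that uses random within-level permutations (the function name `level_replace` is ours); in practice the permutations may instead be chosen under an optimality criterion.

```python
import random

def level_replace(col):
    """Expand a balanced q-level column of length n (each level appearing
    r = n/q times) into a Latin hypercube column on {0, ..., n-1}: the r
    occurrences of level l become a permutation of {l*r, ..., l*r + r - 1}."""
    n, q = len(col), len(set(col))
    r = n // q
    pools = {lev: random.sample(range(lev * r, (lev + 1) * r), r)
             for lev in set(col)}
    return [pools[lev].pop() for lev in col]

# e.g. a column of tilde D_2 with 27 levels, each appearing 3 times in 81 runs
col = [lev for lev in range(27) for _ in range(3)]
random.shuffle(col)
lhd_col = level_replace(col)   # each of 0, ..., 80 appears exactly once
```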
\begin{table}[h]
{\tabcolsep=6pt
\renewcommand{\arraystretch}{1}
\begin{center}
\caption{$MCD(D_1,D_2)$'s with $s=3$, $u=4$ and $u_1=3$ in Example~\ref{ex:2}}\label{tb:thm3andthm4}
\scalebox{0.8}{
\begin{tabular}{c cc cc} \hline
& \multicolumn{2}{c}{By item $(i)$} & \multicolumn{2}{c}{By item $(ii)$} \\
\hline
$v$ & ${D}_1$ & ${D}_2$ & ${D}_1$ & ${D}_2$ \\\hline
$1$ & $ {\mbox{\small OA}}(3^4, 9, 3, 2)$& $ {\mbox{\small LHD}}(3^4, 3)$ & $ {\mbox{\small OA}}(3^4, 3, 3, 2)$&$ {\mbox{\small LHD}}(3^4, 9)$ \\[-5pt]
$2$ & $ {\mbox{\small OA}}(3^4, 6, 3, 2)$& $ {\mbox{\small LHD}}(3^4, 6)$ & $ {\mbox{\small OA}}(3^4, 6, 3, 2)$&$ {\mbox{\small LHD}}(3^4, 6)$ \\[-5pt]
\hspace{-3mm}$^{\#}3$ & $ {\mbox{\small OA}}(3^4, 4, 3, 2)$& $ {\mbox{\small LHD}}(3^4, 9)$ & $ {\mbox{\small OA}}(3^4, 9, 3, 2)$&$ {\mbox{\small LHD}}(3^4, 4)$ \\[-5pt]
$4$ & $ {\mbox{\small OA}}(3^4, 3, 3, 2)$& $ {\mbox{\small LHD}}(3^4, 12)$ & $ {\mbox{\small OA}}(3^4, 12, 3, 2)$&$ {\mbox{\small LHD}}(3^4, 3)$ \\ \hline
\end{tabular}}
\end{center}}
\end{table}
\end{example}
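One can also check directly that Step 1 of the illustration above yields an orthogonal array of strength two. The brute-force sketch below builds $D_1$ from the generator vectors ${\bf z}_1, \ldots, {\bf z}_4$ and verifies the balance condition:

```python
from itertools import product
from collections import Counter

s, u = 3, 4
# z_1, ..., z_4 from Step 1 of the illustration (fourth coordinate always 0)
Z = [(0, 0, 1, 0), (0, 1, 0, 0), (1, 0, 0, 0), (1, 2, 2, 0)]

# one row of D1 per point of GF(3)^4; entry j is the inner product with z_j
D1 = [[sum(a * b for a, b in zip(x, z)) % s for z in Z]
      for x in product(range(s), repeat=u)]

def is_strength2(design):
    """Every pair of columns must show each of the s^2 level pairs equally often."""
    n, m = len(design), len(design[0])
    for i in range(m):
        for j in range(i + 1, m):
            cnt = Counter((row[i], row[j]) for row in design)
            if any(cnt[p] != n // s ** 2
                   for p in product(range(s), repeat=2)):
                return False
    return True
```

Since any two of the $\mathbf{z}_i$'s are linearly independent over $GF(3)$, each of the 9 level pairs appears $81/9=9$ times in every pair of columns.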
\subsection{The maximum value of $n^*$}
Both Proposition \ref{prop:s-level} and Theorem \ref{theorem:subspace-construction} require a set of vectors $\{{\bf b}_{i_1}, \ldots, {\bf b}_{i_{n^*}}\}$ in which
any $u_1$ elements are independent.
The value of $n^*$ directly determines the number of columns in $D_1$ or $D_2$.
Of theoretical interest is the maximum value of $n^*$ that can be achieved,
or a bound on this value when it cannot be obtained explicitly. We provide the maximum value of $n^*$ for three cases: (1) $s=2$ with $u_1\geq 2$, (2) $s>2$ with $u_1=1$, and
(3) $s>2$ with $u_1=2$. For other values of $s$, $u$ and $u_1$,
we provide bounds on the maximum value of $n^*$.
\noindent{\bf Case 1:} $s=2$, $u_1\geq 2$
For $s=2$ and $2\leq u_1\leq u$, we have $n_B=(s-1)^{u_1-1}=1$ and thus $n^*=1$. The only choice
for the ${\bf b}_i$'s, $\mathcal{A}_i$'s and $\overline E_i$'s is ${\bf b}_1=(1,\ldots,1, 0, \ldots, 0)^T$,
$\mathcal{A}=\mathcal{A}_1=\{(1, \ldots, 1, x_{u_1+1}, \ldots, x_{u})^T\mid x_i \in \{0, 1\}\}$,
and $\overline E_1$ contains all the combinations $\lambda_1{\bf e}_1 +{\cdots} + \lambda_{u_1}{\bf e}_{u_1}$ that are
not orthogonal to the vectors of $\mathcal{A}_1$. Note that $\overline E_1$ consists of all sums of an odd number
of vectors from $\{{\bf e}_1, \ldots, {\bf e}_{u_1}\}$; therefore, $\overline E_1$ has $2^{u_1-1}$ elements. In addition, $v=1$, $f(1)=g(1)=2^{u_1-1}$ and $k=1\cdot2^{u-u_1}$.
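The odd-weight characterization of $\overline E_1$ is easy to check numerically; the sketch below does so for the illustrative choice $u_1=4$ (any $u_1\geq 2$ behaves the same way):

```python
from itertools import product

u1 = 4                       # illustrative choice; any u1 >= 2 works
b1 = (1,) * u1               # nonzero part of b_1 = (1, ..., 1, 0, ..., 0)^T

# Ebar_1: coefficient vectors not orthogonal to b_1 over GF(2)
Ebar1 = [lam for lam in product(range(2), repeat=u1)
         if sum(l * c for l, c in zip(lam, b1)) % 2 != 0]

# lam . b1 != 0 over GF(2) exactly when lam has an odd number of ones
odd_weight = all(sum(lam) % 2 == 1 for lam in Ebar1)
```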
\noindent{\bf Case 2:} $s\geq 3$, $u_1=1$
As $u_1=1$, we have $n_B = (s-1)^{u_1-1} = 1$ and $n^*=1$.
It is clear that $\mathcal{A}=\mathcal{A}_1$, $\overline E_1=\{\alpha{\bf e}_1\mid \alpha\in GF(s)\setminus\{0\}\}$,
$v=1$, $f(1)=s-1$, $g(1)=1$ and $k=1\cdot s^{u-1}$.
\vspace{3mm}
\noindent{\bf Case 3:} $s\geq 3$, $u_1=2$
We have $n_B=(s-1)^{u_1-1}=s-1$. The first two entries of the vectors of $\mathcal{A}$ take one of the
$s-1$ forms $(1,\alpha_1)^T, (1, \alpha_2)^T, \ldots, (1, \alpha_{s-1})^T$ with $\alpha_i \in GF(s)\setminus\{0\}$,
hence ${\bf b}_i=(1, \alpha_i, 0, \ldots, 0)^T$. As any two vectors of
$\{{\bf b}_1, {\bf b}_2, \ldots, {\bf b}_{s-1}\}$ are independent, the maximum value
of $n^*$ is $s-1$. The values of $f(v)$ at $v=1$, $v=2$, and $2< v\leq s-1$ are $s(s-1)$, $(s-1)^2$ and $(s-1)(s-v+1)$
according to (\ref{eq:f(v)}), respectively. The corresponding values of $g(v)$ are $s$, $s-1$ and $s-v+1$, respectively.
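These formulas can be confirmed by brute force for a small case; the sketch below takes $s=5$, $u_1=2$ as an illustration:

```python
from itertools import product, combinations

s, u1 = 5, 2
b = [(1, a) for a in range(1, s)]   # b_i = (1, alpha_i), alpha_i nonzero

def not_orth(lam, bi):
    return sum(l * c for l, c in zip(lam, bi)) % s != 0

def fval(idx):
    """Size of the intersection of Ebar_i over i in idx, by brute force."""
    return sum(all(not_orth(lam, b[i]) for i in idx)
               for lam in product(range(s), repeat=u1))

# expected: f(1) = s(s-1) = 20, f(2) = (s-1)^2 = 16,
# and f(v) = (s-1)(s-v+1) for v > 2, i.e. f(3) = 12, f(4) = 8
f = {v: {fval(idx) for idx in combinations(range(s - 1), v)}
     for v in range(1, s)}
```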
Table \ref{n-star} summarizes the maximum values of $n^*$ under Cases 1 to 3, where
the marginally coupled designs are obtained as in Theorem \ref{theorem:subspace-construction}.
For $s=2$, that $D_1$ is an orthogonal array of strength three follows from Corollary 2
of Deng, Hung and Lin (2015). For $s, u_1>2$, Proposition \ref{prop:upperbound}
presents a bound on the maximum value of $n^*$.
\begin{table}[h]
{\tabcolsep=6pt
\renewcommand{\arraystretch}{0.8}
\begin{center}
\caption{Maximum values of $n^*$ and $MCD(D_1,D_2)$'s for $s=2$ or $u_1\leq 2$\label{n-star}}
\scalebox{0.7}{
\begin{tabular}{ccccc ll} \hline
$s$ & $u_1$ & maximum value of $n^*$ &$v$ & $g(v)$ &\multicolumn{1}{c}{${D}_1$} & \multicolumn{1}{c}{${D}_2$}\\\hline
\multirow{2}{2cm}{$s=2$} & \multirow{2}{2cm}{$2\leq u_1\leq u$}& \multirow{2}{2cm}{$1$}& 1 &$2^{u_1-1}$& ${\mbox{\small OA}}(2^u, 2^{u_1-1}, 2, 3)$ & ${\mbox{\small LHD}}(2^u, 2^{u-u_1})$ \\
& & & 1 &$2^{u_1-1}$& ${\mbox{\small OA}}(2^u, 2^{u-u_1}, 2, 3)$ & ${\mbox{\small LHD}}(2^u, 2^{u_1-1})$ \\\hline
\multirow{2}{2cm}{$s\geq 3$} & \multirow{2}{2cm}{$1$} & \multirow{2}{2cm}{$1$} & 1 & 1 & ${\mbox{\small OA}}(s^u, 1, s, 2)$ & ${\mbox{\small LHD}}(s^u, s^{u-1})$ \\
& & & 1 & 1 & ${\mbox{\small OA}}(s^u, s^{u-1}, s, 2)$ & ${\mbox{\small LHD}}(s^u, 1)$ \\ \hline
\multirow{6}{2cm}{$s\geq 3$} & \multirow{6}{2cm}{$2$} & \multirow{6}{2cm}{$s-1$}& 1 & $s$ & ${\mbox{\small OA}}(s^u, s, s, 2)$ & ${\mbox{\small LHD}}(s^u, s^{u-2})$ \\
& & & 1 & $s$ & ${\mbox{\small OA}}(s^u, s^{u-2}, s, 2)$ & ${\mbox{\small LHD}}(s^u, s)$ \\
& & & 2 & $s-1$ & ${\mbox{\small OA}}(s^u, s-1, s, 2)$ & ${\mbox{\small LHD}}(s^u, 2s^{u-2})$ \\
& & & 2 & $s-1$ & ${\mbox{\small OA}}(s^u, 2s^{u-2}, s, 2)$ & ${\mbox{\small LHD}}(s^u, s-1)$ \\
& & &$2<v\leq s-1$& $s-v+1$& ${\mbox{\small OA}}(s^u, s-v+1, s, 2)$ & ${\mbox{\small LHD}}(s^u, vs^{u-2})$ \\
& & &$2<v\leq s-1$& $s-v+1$& ${\mbox{\small OA}}(s^u, vs^{u-2}, s, 2)$ & ${\mbox{\small LHD}}(s^u, s-v+1)$ \\ \hline
\end{tabular}}
\end{center}}
\end{table}
\begin{proposition}\label{prop:upperbound}
Given positive integers $s, u>2$, and $2<u_1\leq u$, suppose any $u_1$ vectors of $\{{\bf b}_1, \ldots, {\bf b}_{n^*}\}$ are independent. We have
\begin{eqnarray}\label{eq:$n^*$}
\max n^*\leq
\begin{cases}
u_1+1, & s\leq u_1, \\
s+u_1-2, & s>u_1\geq 3 \ \mbox{and} \ s \ \mbox{is odd},\\
s+u_1-1, & \mbox{in all other cases}.
\end{cases}
\end{eqnarray}
\end{proposition}
\noindent{\bf Remark 1.} According to the proof of Proposition \ref{prop:upperbound}, the maximum value of $n^*$ is not greater than the maximum value of $m$ in an ${\mbox{\small OA}}(s^{u_1}, m, s, u_1)$. It should be noted, however, that it is possible to give an upper bound tighter than that of Proposition \ref{prop:upperbound}; for example, for $u_1=2$, the maximum value of $n^*$ is $s-1$, whereas the maximum value of $m$ in an ${\mbox{\small OA}}(s^2, m, s, 2)$ is $s+1$.
\section{Tables for Three-level Qualitative Factors}
This section tabulates the marginally coupled designs with three-level qualitative factors obtained by the proposed methods for practical use.
Tables \ref{tb-simpleconstructiondesigns} and \ref{tb-designs} present the designs constructed in Theorems \ref{thm-simple-construction} and \ref{theorem:subspace-construction}, respectively, where $\overline u_1=u-u_1$,
and the symbol $*$ indicates the case of $v=n^*$.
\begin{table}[h]
{\tabcolsep=10pt
\renewcommand{\arraystretch}{0.8}
\begin{center}
\caption{$MCD(D_1, D_2)$s with $3^u$ runs by Theorem \ref{thm-simple-construction}, $u=2,3,4,5$\label{tb-simpleconstructiondesigns}}
\scalebox{0.8}{
\begin{tabular}{c c c ll ll} \hline
\multirow{2}{0.5cm}{$u$} & \multirow{2}{0.5cm}{$u_1$} &\multirow{2}{0.5cm}{$n_A$} & \multicolumn{2}{c}{By item $(i)$ } & \multicolumn{2}{c}{By item $(ii)$}\\\cline{4-7}
& & & \multicolumn{1}{c}{${D}_1$} & \multicolumn{1}{c}{${D}_2$} & \multicolumn{1}{c}{${D}_1$} & \multicolumn{1}{c}{${D}_2$} \\ \hline
2 & 1 & 3 & ${\mbox{\small OA}}(3^2, 1, 3, 1)$ & ${\mbox{\small LHD}}(3^2, 3)$ & ${\mbox{\small OA}}(3^2, 3, 3, 2)$ & ${\mbox{\small LHD}}(3^2, 1)$\\
2 & 2 & 2 & ${\mbox{\small OA}}(3^2, 2, 3, 2)$ & ${\mbox{\small LHD}}(3^2, 2)$ & ${\mbox{\small OA}}(3^2, 2, 3, 2)$ & ${\mbox{\small LHD}}(3^2, 2)$ \\
3 & 1 & 9 & ${\mbox{\small OA}}(3^3, 1, 3, 1)$ & ${\mbox{\small LHD}}(3^3, 9)$ & ${\mbox{\small OA}}(3^3, 9, 3, 2)$ & ${\mbox{\small LHD}}(3^3, 1)$\\
3 & 2 & 6 & ${\mbox{\small OA}}(3^3, 2, 3, 2)$ & ${\mbox{\small LHD}}(3^3, 6)$ & ${\mbox{\small OA}}(3^3, 6, 3, 2)$ & ${\mbox{\small LHD}}(3^3, 2)$\\
3 & 3 & 4 & ${\mbox{\small OA}}(3^3, 3, 3, 3)$ & ${\mbox{\small LHD}}(3^3, 4)$ & ${\mbox{\small OA}}(3^3, 4, 3, 2)$ & ${\mbox{\small LHD}}(3^3, 3)$\\
4 & 1 & 27 & ${\mbox{\small OA}}(3^4, 1, 3, 1)$ & ${\mbox{\small LHD}}(3^4, 27)$ & ${\mbox{\small OA}}(3^4, 27, 3, 2)$ & ${\mbox{\small LHD}}(3^4, 1)$\\
4 & 2 & 18 & ${\mbox{\small OA}}(3^4, 2, 3, 2)$ & ${\mbox{\small LHD}}(3^4, 18)$ & ${\mbox{\small OA}}(3^4, 18, 3, 2)$ & ${\mbox{\small LHD}}(3^4, 2)$\\
4 & 3 & 12 & ${\mbox{\small OA}}(3^4, 3, 3, 3)$ & ${\mbox{\small LHD}}(3^4, 12)$ & ${\mbox{\small OA}}(3^4, 12, 3, 2)$ & ${\mbox{\small LHD}}(3^4, 3)$\\
4 & 4 & 8 & ${\mbox{\small OA}}(3^4, 4, 3, 4)$ & ${\mbox{\small LHD}}(3^4, 8)$ & ${\mbox{\small OA}}(3^4, 8, 3, 2)$ & ${\mbox{\small LHD}}(3^4, 4)$\\
5 & 1 & 81 & ${\mbox{\small OA}}(3^5, 1, 3, 1)$ & ${\mbox{\small LHD}}(3^5, 81)$ & ${\mbox{\small OA}}(3^5, 81, 3, 2)$ & ${\mbox{\small LHD}}(3^5, 1)$\\
5 & 2 & 54 & ${\mbox{\small OA}}(3^5, 2, 3, 2)$ & ${\mbox{\small LHD}}(3^5, 54)$ & ${\mbox{\small OA}}(3^5, 54, 3, 2)$ & ${\mbox{\small LHD}}(3^5, 2)$\\
5 & 3 & 36 & ${\mbox{\small OA}}(3^5, 3, 3, 3)$ & ${\mbox{\small LHD}}(3^5, 36)$ & ${\mbox{\small OA}}(3^5, 36, 3, 2)$ & ${\mbox{\small LHD}}(3^5, 3)$\\
5 & 4 & 24 & ${\mbox{\small OA}}(3^5, 4, 3, 4)$ & ${\mbox{\small LHD}}(3^5, 24)$ & ${\mbox{\small OA}}(3^5, 24, 3, 2)$ & ${\mbox{\small LHD}}(3^5, 4)$ \\
5 & 5 & 16 & ${\mbox{\small OA}}(3^5, 5, 3, 5)$ & ${\mbox{\small LHD}}(3^5, 16)$ & ${\mbox{\small OA}}(3^5, 16, 3, 2)$ & ${\mbox{\small LHD}}(3^5, 5)$\\\hline
\end{tabular}}
\end{center}}
\end{table}
\begin{table}[h]
{\tabcolsep=10pt
\renewcommand{\arraystretch}{0.8}
\begin{center}
\caption{$MCD(D_1, D_2)$s with $3^u$ runs by Theorem \ref{theorem:subspace-construction}, $u=2,3,4,5$\label{tb-designs}}
\scalebox{0.7}{
\begin{tabular}{c c l c c c ll ll} \hline
\multirow{2}{0.5cm}{$u$} & \multirow{2}{0.5cm}{$u_1$} &\multirow{2}{0.5cm}{$v$} & \multirow{2}{0.5cm}{$g(v)$} & \multirow{2}{0.5cm}{$\overline u_1$}& \multirow{2}{0.5cm}{$k$} & \multicolumn{2}{c}{By item $(i)$} & \multicolumn{2}{c}{By item $(ii)$} \\\cline{7-10}
&&&&& &\multicolumn{1}{c}{${D}_1$} & \multicolumn{1}{c}{${D}_2$} & \multicolumn{1}{c}{${D}_1$} & \multicolumn{1}{c}{${D}_2$}\\\hline
2 & 1 & 1* & 1 & 1 & 3 & $\mbox{OA}(3^2, 1, 3, 2)$ & $\mbox{LHD}(3^2, 3)$ &$\mbox{OA}(3^2, 3, 3, 2)$ & $\mbox{LHD}(3^2, 1)$\\
2 & 2 & 1 & 3 & 0 & 1 & $\mbox{OA}(3^2, 3, 3, 2)$ & $\mbox{LHD}(3^2, 1)$ &$\mbox{OA}(3^2, 1, 3, 2)$ & $\mbox{LHD}(3^2, 3)$\\
2 & 2 & 2* & 2 & 0 & 2 & $\mbox{OA}(3^2, 2, 3, 2)$ & $\mbox{LHD}(3^2, 2)$ & $\mbox{OA}(3^2, 2, 3, 2)$ & $\mbox{LHD}(3^2, 2)$\\
3 & 1 & 1* & 1 & 2 & 9 & $\mbox{OA}(3^3, 1, 3, 2)$ & $\mbox{LHD}(3^3, 9)$ & $\mbox{OA}(3^3, 9, 3, 2)$ & $\mbox{LHD}(3^3, 1)$\\
3 & 2 & 1 & 3 & 1 & 3 & $\mbox{OA}(3^3, 3, 3, 2)$ & $\mbox{LHD}(3^3, 3)$ & $\mbox{OA}(3^3, 3, 3, 2)$ & $\mbox{LHD}(3^3, 3)$\\
3 & 2 & 2* & 2 & 1 & 6 & $\mbox{OA}(3^3, 2, 3, 2)$ & $\mbox{LHD}(3^3, 6)$ & $\mbox{OA}(3^3, 6, 3, 2)$ & $\mbox{LHD}(3^3, 2)$\\
3 & 3 & 1 & 9 & 0 & 1 & $\mbox{OA}(3^3, 9, 3, 2)$ & $\mbox{LHD}(3^3, 1)$ & $\mbox{OA}(3^3, 1, 3, 2)$ & $\mbox{LHD}(3^3, 9)$\\
3 & 3 & 2 & 6 & 0 & 2 & $\mbox{OA}(3^3, 6, 3, 2)$ & $\mbox{LHD}(3^3, 2)$ & $\mbox{OA}(3^3, 2, 3, 2)$ & $\mbox{LHD}(3^3, 6)$\\
3 & 3 & 3 & 4 & 0 & 3 & $\mbox{OA}(3^3, 4, 3, 2)$ & $\mbox{LHD}(3^3, 3)$ & $\mbox{OA}(3^3, 3, 3, 2)$ & $\mbox{LHD}(3^3, 4)$\\
3 & 3 & 4* & 3 & 0 & 4 & $\mbox{OA}(3^3, 3, 3, 2)$ & $\mbox{LHD}(3^3, 4)$ & $\mbox{OA}(3^3, 4, 3, 2)$ & $\mbox{LHD}(3^3, 3)$\\
4 & 1 & 1* & 1 & 3 & 27 & $\mbox{OA}(3^4, 1, 3, 2)$ & $\mbox{LHD}(3^4, 27)$& $\mbox{OA}(3^4, 27, 3, 2)$& $\mbox{LHD}(3^4, 1)$\\
4 & 2 & 1 & 3 & 2 & 9 & $\mbox{OA}(3^4, 3, 3, 2)$ & $\mbox{LHD}(3^4, 9)$ & $\mbox{OA}(3^4, 9, 3, 2)$ & $\mbox{LHD}(3^4, 3)$\\
4 & 2 & 2* & 2 & 2 & 18 & $\mbox{OA}(3^4, 2, 3, 2)$ & $\mbox{LHD}(3^4, 18)$& $\mbox{OA}(3^4, 18, 3, 2)$& $\mbox{LHD}(3^4, 2)$\\
4 & 3 & 1 & 9 & 1 & 3 & $\mbox{OA}(3^4, 9, 3, 2)$ & $\mbox{LHD}(3^4, 3)$ & $\mbox{OA}(3^4, 3, 3, 2)$ & $\mbox{LHD}(3^4, 9)$\\
4 & 3 & 2 & 6 & 1 & 6 & $\mbox{OA}(3^4, 6, 3, 2)$ & $\mbox{LHD}(3^4, 6)$ & $\mbox{OA}(3^4, 6, 3, 2)$ & $\mbox{LHD}(3^4, 6)$\\
\hspace{-3mm}$^\diamondsuit$4 & 3 & 3 & 4 & 1 & 9 & $\mbox{OA}(3^4, 4, 3, 2)$ & $\mbox{LHD}(3^4, 9)$ & $\mbox{OA}(3^4, 9, 3, 2)$ & $\mbox{LHD}(3^4, 4)$\\
4 & 3 & 4* & 3 & 1 & 12 & $\mbox{OA}(3^4, 3, 3, 2)$ & $\mbox{LHD}(3^4, 12)$& $\mbox{OA}(3^4, 12, 3, 2)$& $\mbox{LHD}(3^4, 3)$\\
4 & 4 & 1 & 27 & 0 & 1 & $\mbox{OA}(3^4, 27, 3, 2)$& $\mbox{LHD}(3^4, 1)$ & $\mbox{OA}(3^4, 1, 3, 2)$ & $\mbox{LHD}(3^4, 27)$\\
4 & 4 & 2 & 18 & 0 & 2 & $\mbox{OA}(3^4, 18, 3, 2)$& $\mbox{LHD}(3^4, 2)$ & $\mbox{OA}(3^4, 2, 3, 2)$ & $\mbox{LHD}(3^4, 18)$\\
4 & 4 & 3 & 12 & 0 & 3 & $\mbox{OA}(3^4, 12, 3, 2)$& $\mbox{LHD}(3^4, 3)$ & $\mbox{OA}(3^4, 3, 3, 2)$ & $\mbox{LHD}(3^4, 12)$\\
4 & 4 & 4 & 8 & 0 & 4 & $\mbox{OA}(3^4, 8, 3, 2)$& $\mbox{LHD}(3^4, 4)$ & $\mbox{OA}(3^4, 4, 3, 2)$& $\mbox{LHD}(3^4, 8)$\\
4 & 4 & 5* & 5 & 0 & 5 & $\mbox{OA}(3^4, 5, 3, 2)$ &$\mbox{LHD}(3^4, 5)$ & $\mbox{OA}(3^4, 5, 3, 2)$ & $\mbox{LHD}(3^4, 5)$\\
5 & 1 & 1* & 1 & 4 & 81 & $\mbox{OA}(3^5, 1, 3, 2)$& $\mbox{LHD}(3^5, 81)$& $\mbox{OA}(3^5, 81, 3, 2)$& $\mbox{LHD}(3^5, 1)$\\
5 & 2 & 1 & 3 & 3 & 27 & $\mbox{OA}(3^5, 3, 3, 2)$& $\mbox{LHD}(3^5, 27)$& $\mbox{OA}(3^5, 27, 3, 2)$& $\mbox{LHD}(3^5, 3)$\\
5 & 2 & 2* & 2 & 3 & 54 & $\mbox{OA}(3^5, 2, 3, 2)$& $\mbox{LHD}(3^5, 54)$& $\mbox{OA}(3^5, 54, 3, 2)$& $\mbox{LHD}(3^5, 2)$\\
5 & 3 & 1 & 9 & 2 & 9 & $\mbox{OA}(3^5, 9, 3, 2)$& $\mbox{LHD}(3^5, 9)$& $\mbox{OA}(3^5, 9, 3, 2)$& $\mbox{LHD}(3^5, 9)$\\
5 & 3 & 2 & 6 & 2 & 18 & $\mbox{OA}(3^5, 6, 3, 2)$& $\mbox{LHD}(3^5, 18)$& $\mbox{OA}(3^5, 18, 3, 2)$& $\mbox{LHD}(3^5, 6)$\\
5 & 3 & 3 & 4 & 2 & 27 & $\mbox{OA}(3^5, 4, 3, 2)$& $\mbox{LHD}(3^5, 27)$& $\mbox{OA}(3^5, 27, 3, 2)$& $\mbox{LHD}(3^5, 4)$\\
5 & 3 & 4* & 3 & 2 & 36 & $\mbox{OA}(3^5, 3, 3, 2)$& $\mbox{LHD}(3^5, 36)$& $\mbox{OA}(3^5, 36, 3, 2)$& $\mbox{LHD}(3^5, 3)$\\
5 & 4 & 1 & 27 & 1 & 3 & $\mbox{OA}(3^5, 27, 3, 2)$& $\mbox{LHD}(3^5, 3)$& $\mbox{OA}(3^5, 3, 3, 2)$& $\mbox{LHD}(3^5, 27)$\\
5 & 4 & 2 & 18 & 1 & 6 & $\mbox{OA}(3^5, 18, 3, 2)$& $\mbox{LHD}(3^5, 6)$& $\mbox{OA}(3^5, 6, 3, 2)$& $\mbox{LHD}(3^5, 18)$\\
5 & 4 & 3 & 12 & 1 & 9 & $\mbox{OA}(3^5, 12, 3, 2)$& $\mbox{LHD}(3^5, 9)$& $\mbox{OA}(3^5, 9, 3, 2)$& $\mbox{LHD}(3^5, 12)$\\
5 & 4 & 4 & 8 & 1 & 12 & $\mbox{OA}(3^5, 8, 3, 2)$& $\mbox{LHD}(3^5, 12)$& $\mbox{OA}(3^5, 12, 3, 2)$& $\mbox{LHD}(3^5, 8)$\\
5 & 4 & 5* & 5 & 1 & 15 & $\mbox{OA}(3^5, 5, 3, 2)$& $\mbox{LHD}(3^5, 15)$& $\mbox{OA}(3^5, 15, 3, 2)$& $\mbox{LHD}(3^5, 5)$\\
5 & 5 & 1 & 81 & 0 & 1 & $\mbox{OA}(3^5, 81, 3, 2)$& $\mbox{LHD}(3^5, 1)$& $\mbox{OA}(3^5, 1, 3, 2)$& $\mbox{LHD}(3^5, 81)$\\
5 & 5 & 2 & 54 & 0 & 2 & $\mbox{OA}(3^5, 54, 3, 2)$& $\mbox{LHD}(3^5, 2)$& $\mbox{OA}(3^5, 2, 3, 2)$& $\mbox{LHD}(3^5, 54)$\\
5 & 5 & 3 & 36 & 0 & 3 & $\mbox{OA}(3^5, 36, 3, 2)$& $\mbox{LHD}(3^5, 3)$& $\mbox{OA}(3^5, 3, 3, 2)$& $\mbox{LHD}(3^5, 36)$\\
5 & 5 & 4 & 24 & 0 & 4 & $\mbox{OA}(3^5, 24, 3, 2)$& $\mbox{LHD}(3^5, 4)$& $\mbox{OA}(3^5, 4, 3, 2)$& $\mbox{LHD}(3^5, 24)$\\
5 & 5 & 5 & 16 & 0 & 5 & $\mbox{OA}(3^5, 16, 3, 2)$& $\mbox{LHD}(3^5, 5)$& $\mbox{OA}(3^5, 5, 3, 2)$& $\mbox{LHD}(3^5, 16)$\\
5 & 5 & 6* & 11 & 0 & 6 & $\mbox{OA}(3^5, 11, 3, 2)$& $\mbox{LHD}(3^5, 6)$& $\mbox{OA}(3^5, 6, 3, 2)$& $\mbox{LHD}(3^5, 11)$\\
\hline
\end{tabular}}
\end{center}}
\end{table}
Since the last $u-u_1$ entries of each ${\bf b}_i$ are zeros, to obtain the maximum value
of $n^*$ we only need to
consider linear independence among the vectors formed by the first $u_1$ entries of the ${\bf b}_i$'s.
For $s=3$, $n_B=2^{u_1-1}$ and these vectors can form a $u_1\times 2^{u_1-1}$ matrix,
which is denoted by $B_{u_1}$ in this paper. Columns of $B_{u_1}$ are arranged in an order such that
the $j$th column is determined by the $(i,j)$th entry $B_{u_1}(i,j)$ as follows:
\begin{eqnarray}\nonumber
j-1=\sum_{i=1}^{u_1}2^{u_1-i}(B_{u_1}(i,j)-1).
\end{eqnarray}
Hence the $j$th column is labeled by the boldface number ${\bf j-1}$ in Table \ref{tb-B2},
which presents the matrices $B_2$ to $B_5$.
Correspondingly, define $B_{u_1}^*$ to be an $n^*$-column subset of $B_{u_1}$,
such that any $u_1$ columns in it are independent.
The following is a list of the sets $B_2^*$ to $B_5^*$:
$B_2^*$ containing columns $\{{\bf 0, \bf 1}\}$ of $B_2$;
$B_3^*$ containing columns $\{{\bf 0, \bf 1, \bf 2, \bf 3}\}$ of $B_3$;
$B_4^*$ containing columns $\{{\bf 0, \bf 1,\bf 2, \bf 4, \bf 7}\}$ of $B_4$; and
$B_5^*$ containing columns $\{{\bf 0, \bf 1,\bf 2, \bf 4, \bf 9, \bf 14}\}$ of $B_5$,
where $B_2^*$ and $B_3^*$ are obtained by calculation, and $B_4^*$ and $B_5^*$ are
obtained by computer search.
All of these values of $n^*$ are maximal; see Proposition \ref{prop:upperbound}.
With those $B_{u_1}^*$'s, one can obtain the set of column vectors
$\{{\bf b}_{i_1}, \ldots, {\bf b}_{i_{n^*}}\}$ required by Theorem \ref{theorem:subspace-construction}.
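The computer search mentioned above amounts to checking, over $GF(3)$, that every $u_1$-subset of the chosen columns has full rank. A minimal sketch of such a check (the function names are ours) confirms the listed $B_3^*$, $B_4^*$ and $B_5^*$:

```python
from itertools import combinations

def rank_gf3(rows):
    """Row-reduce over GF(3) and return the rank."""
    mat = [[x % 3 for x in r] for r in rows]
    rank = 0
    for col in range(len(mat[0])):
        piv = next((r for r in range(rank, len(mat)) if mat[r][col]), None)
        if piv is None:
            continue
        mat[rank], mat[piv] = mat[piv], mat[rank]
        inv = pow(mat[rank][col], -1, 3)        # inverse in GF(3)
        mat[rank] = [(x * inv) % 3 for x in mat[rank]]
        for r in range(len(mat)):
            if r != rank and mat[r][col]:
                f = mat[r][col]
                mat[r] = [(x - f * y) % 3 for x, y in zip(mat[r], mat[rank])]
        rank += 1
    return rank

def any_u1_independent(cols, u1):
    """Check that every u1-subset of the given columns has full rank."""
    return all(rank_gf3(sub) == u1 for sub in combinations(cols, u1))

# the chosen columns of B_3, B_4 and B_5, written as tuples
B3_star = [(1,1,1), (1,1,2), (1,2,1), (1,2,2)]
B4_star = [(1,1,1,1), (1,1,1,2), (1,1,2,1), (1,2,1,1), (1,2,2,2)]
B5_star = [(1,1,1,1,1), (1,1,1,1,2), (1,1,1,2,1),
           (1,1,2,1,1), (1,2,1,1,2), (1,2,2,2,1)]
```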
\begin{table}[htbp]
{\tabcolsep=12pt
\renewcommand{\arraystretch}{0.8}
\begin{center}
\caption{Matrices $B_{u_1}$'s for $u_1=2, 3, 4, 5$ and $s=3$\label{tb-B2}}
\scalebox{0.8}{
\begin{tabular}{cc c cccc c cccccccc } \hline
\multicolumn{2}{c}{$B_2$} &\multicolumn{1}{c}{} & \multicolumn{4}{c}{$B_3$} & \multicolumn{1}{c}{}& \multicolumn{8}{c}{$B_4$} \\
{\bf 0} & {\bf 1} && {\bf 0} & {\bf 1}& {\bf 2} & {\bf 3} && {\bf 0}& {\bf 1} & {\bf 2}& {\bf 3}&{\bf 4}& {\bf 5}&{\bf 6}&{\bf 7}\\\hline
1 & 1 &&1 & 1 & 1 & 1 && 1 & 1 & 1 & 1 &1 & 1 & 1 & 1\\[-5pt]
1 & 2 &&1 & 1 & 2 & 2 && 1 & 1 & 1 & 1 &2 & 2 & 2 & 2\\[-5pt]
& &&1 & 2 & 1 & 2 && 1 & 1 & 2 & 2 &1 & 1 & 2 & 2\\[-5pt]
& && & & & && 1 & 2 & 1 & 2 &1 & 2 & 1 & 2\\\hline
\multicolumn{16}{c}{$B_5$} \\
{\bf 0}& {\bf 1}& {\bf 2}& {\bf 3}& {\bf 4}& {\bf 5}&{\bf 6}& {\bf 7}& {\bf 8}& {\bf 9}& {\bf 10}&{\bf 11}&{\bf 12} &{\bf 13}&{\bf 14} &{\bf 15} \\\hline
1 & 1 & 1 & 1 &1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 &1 & 1 & 1 & 1\\[-5pt]
1 & 1 & 1 & 1 &1 & 1 & 1 & 1 & 2 & 2 & 2 & 2 &2 & 2 & 2 & 2\\[-5pt]
1 & 1 & 1 & 1 &2 & 2 & 2 & 2 & 1 & 1 & 1 & 1 &2 & 2 & 2 & 2\\[-5pt]
1 & 1 & 2 & 2 &1 & 1 & 2 & 2 & 1 & 1 & 2 & 2 &1 & 1 & 2 & 2\\[-5pt]
1 & 2 & 1 & 2 &1 & 2 & 1 & 2 & 1 & 2 & 1 & 2 &1 & 2 & 1 & 2\\
\hline
\end{tabular}}
\end{center}}
\end{table}
\section{Space-filling Property}
One important issue for marginally coupled designs is the space-filling property of the design $D_2$. To achieve or improve the space-filling property, several approaches have been proposed; see, for example, Dragulic, Santner and Dean (2012), Joseph, Gul and Ba (2015), and Sun and Tang (2017). In our case, one approach to improving the space-filling property is to use an optimal level replacement
with some optimization criterion when obtaining $D_2$ from $\tilde{D}_2$, as done in Leary, Bhaskar and Keane (2003); another is to make $D_2$ possess some guaranteed space-filling property, for example,
uniform projections onto lower dimensions. In this paper, we address this issue
through the latter approach. For $s=2$, the approach uses the concept of an anti-mirror vector, defined below.
\begin{defi}
Two column vectors $v_1$ and $v_2$ of the same length with entries from $\{0,1\}$ are said to be anti-mirror vectors if
their sum is equal to the vector of all ones. We use the notation $\overline v_1=v_2$ and $\overline v_2=v_1$.
\end{defi}
For example, $(1,1,0)^T$ is the anti-mirror vector of $(0,0,1)^T$.
It is clear that $v^T\overline v=0$, and the anti-mirrors of two different vectors are different.
For practical application, given parameters $1\leq u_1, u'_1\leq u$, item $(ii)$
of Theorem \ref{theorem:subspace-construction} can construct an $MCD(D_1, D_2)$ with
$D_1={\mbox{\small OA}}(2^u, 2^{u-u_1}, 2, 3)$ and $D_2={\mbox{\small LHD}}(2^u, 2^{u_1-1})$, and item $(i)$ can construct
an $MCD(D_1, D_2)$ with $D_1={\mbox{\small OA}}(2^u, 2^{u'_1-1}, 2, 3)$ and $D_2={\mbox{\small LHD}}(2^u, 2^{u-u'_1})$. When
setting $u_1'=u-u_1+1$, the MCD obtained by item $(i)$ has the same set of parameters
as that obtained by item $(ii)$. In this sense, for $s=2$, we only need to consider the subspace construction
by item $(i)$ of Theorem \ref{theorem:subspace-construction}.
To investigate the space-filling property of $D_2$ when $D_1$ is a two-level orthogonal array, we take a closer look at
Step 2 of the general construction.
Recall that $\mathcal{A}=\mathcal{A}_1$ has $2^{u-u_1}$ vectors, $n_B=1$
and ${\bf b}_1=(1,\ldots,1,0,\ldots,0)^T$ with
the first $u_1$ entries being 1. As in item $(i)$ of Theorem~\ref{theorem:subspace-construction}, let
$\{{\bf x}_1, \ldots, {\bf x}_{2^{u-u_1}}\}$ be the vectors in $\mathcal{A}_1$, and note that
each ${\bf x}_i$ can be written as
$${\bf x}_i=({\bf 1}_{u_1}^T, {\bf y}_i^T)^T,$$
where ${\bf y}_i\neq {\bf y}_j$ for $i\neq j$.
Let ${\bf x}_0=(1,1,0,\ldots, 0)^T$
be a vector with the first two entries being 1 and the last $u_1-2$ entries being 0;
for $1\leq i\leq 2^{u-u_1}$, define ${\eta}_i=({\bf x}_0^T, \overline {\bf y}_i^T)^T$,
where $\overline {\bf y}_i$ is the anti-mirror vector of ${\bf y}_i$. We have
$\eta_i\in O({\bf x}_i)$, as ${\eta}_i^T{\bf x}_i={\bf x}_0^T{\bf 1}_{u_1} + \overline {\bf y}_i^T{\bf y}_i=0$ over $GF(2)$.
For each ${\bf x}_i$,
let $G({\bf x}_i)$ be a generator matrix that consists of $u-1$ independent columns of $O({\bf x}_i)$. Set the first column of $G({\bf x}_i)$ to be $\eta_i$. Generate $A_i$ based on $G({\bf x}_i)$ and obtain ${\bf d}_i=A_i\cdot(2^{u-2}, \ldots, 2, 1)^T$,
and let $\tilde D_2=({\bf d}_1, \ldots, {\bf d}_{2^{u-u_1}})$. The method is called the {\em anti-mirror arrangement} in this paper.
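The key identity ${\eta}_i^T{\bf x}_i=0$ behind the anti-mirror arrangement can be checked numerically; the sketch below runs through all choices of ${\bf y}_i$ for the illustrative case $u=5$, $u_1=3$:

```python
from itertools import product

u, u1 = 5, 3                          # a small illustrative case with s = 2
ones = (1,) * u1                      # first u1 entries of each x_i
x0 = (1, 1) + (0,) * (u1 - 2)         # x_0: two leading ones, u1 - 2 zeros

checks = []
for y in product(range(2), repeat=u - u1):
    x = ones + y                      # x_i = (1_{u1}^T, y_i^T)^T
    ybar = tuple(1 - t for t in y)    # anti-mirror vector of y_i
    eta = x0 + ybar                   # eta_i = (x_0^T, ybar_i^T)^T
    checks.append(sum(a * b for a, b in zip(eta, x)) % 2 == 0)
```

Every check passes because $\mathbf{x}_0^T\mathbf{1}_{u_1}=2\equiv 0$ and $\overline{\mathbf{y}}_i^T\mathbf{y}_i=0$, regardless of $\mathbf{y}_i$.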
\begin{proposition}\label{prop:anti-mirror}
When $2\leq u_1<u-1$, the design $\tilde D_2$ obtained by the anti-mirror arrangement is an $OA(2^u, 2^{u-u_1}, 2^{u-1}, 1)$
achieving stratifications on a $2\times 2\times 2$ grid of any three dimensions.
\end{proposition}
For $s\geq 2$, Proposition \ref{prop:5} provides a result of the space-filling property of $D_2$'s in marginally coupled designs in Theorem~\ref{theorem:subspace-construction}.
\begin{proposition}\label{prop:5}
If the number, $k$, of columns in $D_2$ in Theorem~\ref{theorem:subspace-construction} satisfies $k\leq (s^{u-1}-1)/(s-1)$, a $\tilde {D}_2$ that achieves stratifications on an $s\times s$ grid of any two dimensions can be constructed.
\end{proposition}
\section{Conclusion and Discussion}
We have proposed a general method for constructing marginally coupled designs of $s^u$ runs in which
the design for quantitative factors is a non-cascading Latin hypercube, where $s$ is a prime power.
The approach uses the theory of $(u-1)$-dimensional subspaces in the Galois field $GF(s^u)$.
The newly constructed marginally coupled designs with three-level qualitative factors are tabulated. For other prime numbers of levels, marginally coupled designs can be obtained similarly.
In addition, we discuss two cases for which guaranteed space-filling property can be obtained.
The results for the subspace construction in this article extend those in He et al. (2017) for two-level
qualitative factors to any $s$-level qualitative factors. The {\em Construction 2} of He, Lin and Sun (2017)
is also a special case of the general construction in this article. The reason is as follows.
There are $s+1$ matrices of size $s^u\times (s^{u-1}-1)/(s-1)$, denoted by $C_1, \ldots, C_{s+1}$, each of which contains $s$ replications
of the linear saturated orthogonal array ${\mbox{\small OA}}(s^{u-1}, (s^{u-1}-1)/(s-1), s, 2)$.
According to their construction procedure, the matrix $C_i$ corresponds to the $(u-1)$-dimensional subspace generated by
$\{{\bf e}_1, \ldots, {\bf e}_{u-2}, {\bf e}_{u-1}+\alpha_{i-1}{\bf e}_u\}$ for $1\leq i\leq s$,
and $C_{s+1}$ corresponds to the $(u-1)$-dimensional subspace generated by
$\{{\bf e}_1, \ldots, {\bf e}_{u-2}, {\bf e}_u\}$.
They are respectively identical to the $(u-1)$-dimensional subspaces $O({\bf x}_1), \ldots, O({\bf x}_{s+1})$,
where ${\bf x}_1={\bf e}_u$, ${\bf x}_i={\bf e}_{u-1}-\alpha_{i-1}^{-1}{\bf e}_u$ for $2\leq i\leq s$, and
${\bf x}_{s+1}={\bf e}_{u-1}$. Therefore, in the general construction, by choosing such ${\bf x}_1, \ldots, {\bf x}_k$,
for $1\leq k < s+1$, and choosing ${\bf z}_1, \ldots, {\bf z}_m$ from the set of
$\cup_{j=k+1}^{s+1}O({\bf x}_j)\setminus (\cup_{i=1}^k O({\bf x}_i))$,
one can obtain the marginally coupled design provided by {\em Construction 2} of He, Lin and Sun (2017).
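The claimed identification of subspaces can be verified directly in a small case. The sketch below checks, for $s=3$ and $u=3$, that the span of $\{{\bf e}_1, {\bf e}_{u-1}+\alpha{\bf e}_u\}$ equals $O({\bf e}_{u-1}-\alpha^{-1}{\bf e}_u)$ for every nonzero $\alpha$:

```python
from itertools import product

s, u = 3, 3
e = [tuple(int(i == j) for j in range(u)) for i in range(u)]

def span(gens):
    """All GF(s)-linear combinations of the generator vectors."""
    return {tuple(sum(c * g[k] for c, g in zip(coef, gens)) % s
                  for k in range(u))
            for coef in product(range(s), repeat=len(gens))}

def O(x):
    """The hyperplane of vectors orthogonal to x over GF(s)."""
    return {v for v in product(range(s), repeat=u)
            if sum(a * b for a, b in zip(v, x)) % s == 0}

ok = all(
    span([e[0], tuple((e[u-2][k] + a * e[u-1][k]) % s for k in range(u))])
    == O(tuple((e[u-2][k] - pow(a, -1, s) * e[u-1][k]) % s for k in range(u)))
    for a in range(1, s))
```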
For practitioners, three related issues need further investigation. The first is the low-dimensional projection space-filling property of the quantitative factors at each level of a qualitative factor; the second is
to improve the space-filling property of the quantitative factors in three to four dimensions when uniform two-dimensional
projections have already been obtained; and the last is to construct designs with good coverage when a
perfect space-filling property under some criterion cannot be expected. We hope to study these problems and report the results in the future.
\section*{Appendix}
\noindent {\it Proof of Lemma \ref{lemma:general-union-O(e)}}
\begin{proof}
For $1\leq i\leq u_1$ and any vector ${\bf x}=(x_{1}, \ldots, x_u)^T \in S_u \setminus O({\bf e}_i)$, we have ${\bf x}^T{\bf e}_i\neq 0$,
which means that $x_i\neq 0$. Thus, for any ${\bf x}\in \mathcal{A}$, we have $x_1=1$, $x_i\in GF(s)\setminus \{0\}$
for $i=2,\ldots, u_1$, and $x_j\in GF(s)$ for $j=u_1+1, \ldots, u$. So,
the conclusion follows.
\end{proof}
\noindent {\it Proof of Theorem \ref{thm-simple-construction}}
\begin{proof}
Since no ${\bf z}_i$ lies in any of the $O({\bf x}_j)$'s, no ${\bf x}_j$ lies in any of the $O({\bf z}_i)$'s.
The conclusion follows by the definition of $\mathcal{A}$,
Lemma \ref{lemma:basic-idea}, and Lemma \ref{lem:D1-D2-condition}.
Because in both items $(i)$ and $(ii)$, $O({\bf x}_i)\neq O({\bf x}_j)$ when $i\neq j$,
${\bf d}_i$ cannot be transformed to ${\bf d}_j$ by level permutations. Thus
$D_2$'s are non-cascading Latin hypercubes.
\end{proof}
\noindent{\it Proof of Proposition \ref{thm-combination is impossible}}
\begin{proof}
Suppose ${\bf z}=\sum_{i=1}^{u_1}\lambda_i {\bf e}_i$ has $l$ nonzero coefficients $\lambda_{i_1}, \ldots, \lambda_{i_l}$,
where $1\leq i_j\leq u_1$ and $ 2\leq l\leq u_1$. Let $\lambda^*=\sum_{j=1}^{l-1}\lambda_{i_j}$, and let
${\bf x}=(x_1, \ldots, x_u)^T$. If $\lambda^*$ is nonzero, take $x_{i_l}=-\lambda_{i_l}^{-1}\lambda^*$ and
all the other $x_i$'s equal 1, then ${\bf x}\in\mathcal{A}$ since the first $u_1$ entries of ${\bf x}$ are nonzero.
More specifically, the first entry of ${\bf x}$ is 1, and
$${\bf z}^T{\bf x}=\sum_{i=1}^{u_1}\lambda_ix_i=\sum_{j=1}^{l}\lambda_{{i_j}}x_{i_j}=\sum_{j=1}^{l-1}\lambda_{i_j}\cdot 1
+ \lambda_{i_l}\cdot x_{i_l}=\lambda^* - \lambda_{{i_l}}\cdot\lambda_{{i_l}}^{-1}\lambda^*=0,$$
where the first equality holds because the last $u-u_1$ entries of ${\bf z}$ are zeros.
Otherwise, if $\lambda^*=0$, we must have $l-1\geq 2$, and one can take $x_{i_{l-1}}=\alpha_2$,
$x_{{i_l}}=-\lambda_{{i_l}}^{-1}\lambda_{i_{l-1}}(\alpha_2-1)$, and all other $x_i$'s equal 1.
Note for $s>2$, we have $\alpha_2\neq 1$, hence $x_{{i_l}}\neq 0$ and $\bf x\in \mathcal{A}$ again.
In addition,
$${\bf z}^T{\bf x}=\sum_{i=1}^{u_1}\lambda_ix_i=\sum_{j=1}^{l}\lambda_{{i_j}}x_{i_j}
=\sum_{j=1}^{l-1}\lambda_{{i_j}}\cdot 1 + \lambda_{i_{l-1}}\cdot(\alpha_2 - 1)- \lambda_{{i_l}}\cdot\lambda^{-1}_{{i_l}}\lambda_{i_{l-1}}(\alpha_2-1)=0.$$
So, there always exists an $\bf x\in \mathcal{A}$, such that ${\bf z}\in O({\bf x})$.
\end{proof}
\vspace{4mm}
\noindent{\it Proof of Proposition \ref{prop:s-level}}
\begin{proof}
First, consider $v=1$. As $(\sum_{j=1}^{u_1}\lambda_je_j)^T{\bf b}_1= 0$, we have
\begin{equation}\nonumber
\lambda_1b_{11} + \lambda_2b_{12} + \cdots + \lambda_{u_1}b_{1u_1}=0.
\end{equation}
There are $s^{u_1-1}$ solutions for such an equation, hence there are $s^{u_1}-s^{u_1-1}=(s-1)s^{u_1-1}$
combinations in $\overline E_1$.
For $v=2$, as $(\sum_{j=1}^{u_1}\lambda_je_j)^T{\bf b}_i= 0$ for $i=1,2$, then
\begin{eqnarray}\nonumber
\left\{
\begin{array}{cc}
\lambda_1b_{11} + \lambda_2b_{12} + \cdots + \lambda_{u_1}b_{1u_1} &=0, \label{eq:1}\\
\lambda_1b_{21} + \lambda_2b_{22} + \cdots + \lambda_{u_1}b_{2u_1} &=0, \label{eq:2}
\end{array}
\right.
\end{eqnarray}
which has $s^{u_1-2}$ solutions since ${\bf b}_1$ and ${\bf b}_2$ are independent.
However, elements in $\overline E_1\cap \overline E_2$ must not be a solution of
either of the two equations.
Then, we have
$$\mid \overline E_1\cap \overline E_2 \mid =\mid E\setminus ( E_1\cup E_2)\mid=s^{u_1}- [{2\choose 1}s^{u_1-1} - {2\choose 2}s^{u_1-2}]=(s-1)^2s^{u_1-2}.$$
For $1\leq v\leq u_1$, as any $u_1$ elements of $\{{\bf b}_1, {\bf b}_2, \ldots, {\bf b}_{n^{*}}\}$ are independent, we have
{\small
\begin{eqnarray}
\mid \cap_{i=1}^v \overline E_i \mid =\mid E\setminus \cup_{i=1}^{v} E_{i}\mid &=&s^{u_1}-[{v\choose 1}s^{u_1-1} -{v\choose 2}s^{u_1-2} + \cdots +
(-1)^{v-1}{v\choose v} s^{u_1-v}]\nonumber\\
&=&s^{u_1}[1-{v\choose 1}s^{-1} + \cdots +(-1)^{v}{v\choose v}s^{-v}]\nonumber\\
&=&(s-1)^vs^{u_1-v}.\nonumber
\end{eqnarray}}
For $u_1+1\leq v\leq n^*$, the intersection of any $t\geq u_1$ sets of $E_i$'s only contains one vector, namely the zero column vector. Since
any $u_1$ elements of $\{{\bf b}_1, {\bf b}_2, \ldots, {\bf b}_{n^{*}}\}$ are independent, we have
\begin{eqnarray}
\mid \cap_{i=1}^v \overline E_i \mid& = & \mid E\setminus \cup_{i=1}^{v} E_{i}\mid \nonumber\\
&=&s^{u_1}-[{v\choose 1}s^{u_1-1} -{v\choose 2}s^{u_1-2} + \cdots +
(-1)^{u_1-1}{v\choose u_1} s^{u_1-u_1}\nonumber\\
& &+ (-1)^{u_1}{v\choose u_1+1}\cdot 1 + \cdots + (-1)^{v-1}{v\choose v}\cdot 1]\nonumber\\
&=&s^{u_1}[1-{v\choose 1}s^{-1} + \cdots +(-1)^{u_1}{v\choose u_1}s^{-u_1}] + \sum_{i=u_1+1}^v(-1)^i{v\choose i}\nonumber\\
&=&m^*.\nonumber
\end{eqnarray}
\end{proof}
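The inclusion-exclusion counts above can be checked by brute force for small parameters. In the sketch below the specific vectors ${\bf b}_i$ over $GF(3)$ are our own illustrative choices (any $u_1=3$ of them are linearly independent), not vectors from the paper.

```python
from itertools import product

def count_free(s, bs):
    """Number of z in GF(s)^{u1} with z . b != 0 (mod s) for every b in bs,
    i.e. the size of the intersection of the complements Ebar_i."""
    u1 = len(bs[0])
    return sum(
        all(sum(zk * bk for zk, bk in zip(z, b)) % s != 0 for b in bs)
        for z in product(range(s), repeat=u1)
    )

s, u1 = 3, 3
B = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]  # any 3 are independent over GF(3)

# For v <= u1: |cap_{i=1}^v Ebar_i| = (s-1)^v s^(u1-v)
for v in range(1, u1 + 1):
    assert count_free(s, B[:v]) == (s - 1) ** v * s ** (u1 - v)

# For v = 4 > u1, inclusion-exclusion (every triple meets only in 0) gives 6 here
assert count_free(s, B) == 6
```

The $v=4$ value follows from $27 - (4\cdot 9 - 6\cdot 3 + 4\cdot 1 - 1) = 6$, matching the $m^*$ expression in the proof.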
\noindent{\it Proof of Theorem \ref{theorem:subspace-construction}}
\begin{proof}
By Lemma \ref{lemma:Ei-and-Mi}, for any ${\bf z}\in \cap_{j=1}^v\overline E_{i_j}$ and
${\bf x}\in \cup_{j=1}^v\mathcal{A}_{i_j}$, we have ${\bf z}\notin O({\bf x})$.
Thus, by Lemmas \ref{lemma:basic-idea} and \ref{lem:D1-D2-condition},
the $(D_1, D_2)$'s constructed in both items are marginally coupled designs.
In addition, in both items $(i)$ and $(ii)$,
$O({\bf x}_i)\neq O({\bf x}_j)$ when $i\neq j$, which implies that ${\bf d}_i$ cannot be obtained
from ${\bf d}_j$ by level permutations. Therefore, $D_2$'s are non-cascading Latin hypercubes.
\end{proof}
\noindent{\it Proof of Proposition \ref{prop:upperbound}}
\begin{proof}
Since any $u_1$ vectors of $\{{\bf b}_1, \ldots, {\bf b}_{n^*}\}$ are independent,
one can use them to obtain an ${\mbox{\small OA}}(s^{u_1}, n^*, s, u_1)$. The run size here is $s^{u_1}$,
not $s^u$, because the last $u-u_1$ entries of ${\bf b}_i$'s are zeros.
Note that the maximum value of $n^*$ cannot exceed the maximum value of $m$ for which an ${\mbox{\small OA}}(s^{u_1}, m, s, u_1)$ exists. The right hand side of (\ref{eq:$n^*$}) gives the upper bounds of $m$ for the different cases, as provided by
Theorem 2.19 of Hedayat, Sloane and Stufken (1999).
\end{proof}
\noindent{\it Proof of Proposition \ref{prop:anti-mirror}}
\begin{proof}
It is straightforward to see $\tilde D_2$ is an ${\mbox{\small OA}}(2^u, 2^{u-u_1}, 2^{u-1}, 1)$. For $u-u_1>1$ and therefore $2^{u-u_1}>3$, consider a subarray $({\bf d}_p, {\bf d}_q, {\bf d}_l)$ of $\tilde D_2$, for $1\leq p< q< l\leq 2^{u-u_1}$.
Let ${\bf c}_i=\lfloor{\bf d}_{i}/2^{u-2}\rfloor$. As ${\bf d}_{i}=A_i\cdot(2^{u-2}, \ldots, 2, 1)^T$, ${\bf c}_i$ is the first column of $A_i$. In addition, $({\bf c}_p, {\bf c}_q, {\bf c}_l)$ is the projection of $({\bf d}_p, {\bf d}_q, {\bf d}_l)$
on the $2\times 2\times 2$ grid. Because $A_i$ is constructed by $G({\bf x}_i)$, ${\bf c}_i$ is generated from $\eta_i$. As ${\bf y}_i\neq {\bf y}_j$ for $i\neq j$, we have $\overline {\bf y}_i\neq \overline {\bf y}_j$.
Since the last $u-u_1$ entries of $\eta_i$ are $\overline {\bf y}_i$,
$\eta_p, \eta_q$ and $\eta_l$ are three different columns. In addition, $\eta_{p}+\eta_q \neq \eta_l$
because the first $u_1$ entries
of $\eta_p, \eta_q, \eta_l$ are equal to ${\bf x}_0=(1,1,0, \ldots, 0)^T$. As a result, $\eta_{p}, \eta_q, \eta_l$ are three independent
column vectors. Thus, the array $({\bf c}_p, {\bf c}_q, {\bf c}_l)$ is an ${\mbox{\small OA}}(2^u, 3, 2, 3)$, and
the conclusion follows.
\end{proof}
\noindent{\it Proof of Proposition \ref{prop:5}}
\begin{proof}
In the subspace construction of Theorem \ref{theorem:subspace-construction}, for $i=1,\ldots,k$,
each $O({\bf x}_i)$ contains a set of $(s^{u-1}-1)/(s-1)$ different column vectors,
the first nonzero entry of each of which is equal to $1$.
If $k\leq (s^{u-1}-1)/(s-1)$, one can always choose ${\bf y}_i\in O({\bf x}_i)$,
such that ${\bf y}_i\neq \alpha {\bf y}_j$ for $1\leq i\neq j\leq k$ and any $\alpha\in GF(s)$.
Let ${\bf y}_i$ be the first column of $G({\bf x}_i)$, the matrix of $u-1$ independent columns of $O({\bf x}_i)$ that is used to obtain $A_i$. For such $\{A_1, \ldots, A_k\}$, the first $k$ columns form an ${\mbox{\small OA}}(s^u, k, s, 2)$, which guarantees
$\tilde D_2$ to achieve stratifications on an $s\times s$ grid of any two dimensions.
\end{proof}
\begin{proposition}\label{prop-the-bound-for-intersectionset}
The set $\cap_{i=1}^{n_B}\overline E_i$ is equal to $(i)$
$\{{\bf e}_{i_1} + {\bf e}_{i_2} + \cdots + {\bf e}_{i_{2t+1}} \mid 2t+1\leq u_1, 1\leq i_1<i_2<\ldots< i_{2t+1}\leq u_1\}$
when $s=2$, or equal to $(ii)$ $\{ \alpha{\bf e}_i \mid \alpha\in GF(s)\setminus\{0\}, i=1, \ldots, u_1 \}$ when $s>2$.
\end{proposition}
\begin{proof}
For $s=2$, we have $n_B=1$, $\mathcal{A}=\mathcal{A}_1$, and ${\bf b}_1=(1,\ldots, 1, 0, \ldots, 0)^T$ where the first $u_1$
entries are equal to 1. If ${\bf z}\in E$ and ${\bf z}^T{\bf b}_1\neq 0$,
${\bf z}$ must be a sum of an odd number of ${\bf e}_i$'s.
Thus, item (i) follows. If ${\bf z}\in \cap_{i=1}^{n_B}\overline E_i$, then
${\bf z}\notin O({\bf x})$ for any ${\bf x}\in \mathcal{A}$ by Lemma \ref{lemma:Ei-and-Mi}. Therefore, for $s>2$,
the possible elements in $\cap_{i=1}^{n_B}\overline E_i$ can only be ${\bf z}=\alpha{\bf e}_j$
for any $\alpha\in GF(s)\setminus\{0\}$ and $j=1, \ldots, u_1$,
according to Proposition \ref{thm-combination is impossible}, while ${\bf e}_j\in \cap_{i=1}^{n_B}\overline E_i$,
for $j=1,\ldots, u_1$. Combining these two results, item (ii) follows.
\end{proof}
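Item $(i)$ of the proposition can be verified directly for a small case: over $GF(2)$ with ${\bf b}_1=(1,\ldots,1)^T$, the vectors ${\bf z}$ with ${\bf z}^T{\bf b}_1\neq 0$ are exactly those of odd Hamming weight, which coincide with the sums of an odd number of distinct basis vectors. The sketch below uses $u_1=5$ as an arbitrary example.

```python
from itertools import combinations, product

u1 = 5
# For s = 2 and b1 = (1,...,1): z . b1 != 0 over GF(2) iff z has odd weight
E1_bar = {z for z in product((0, 1), repeat=u1) if sum(z) % 2 == 1}

# Item (i): sums e_{i1} + ... + e_{i_{2t+1}} of an odd number of distinct
# basis vectors (addition over GF(2)) are exactly the odd-weight vectors
odd_sums = set()
for t in range(u1 // 2 + 1):                      # 2t+1 = 1, 3, 5
    for idx in combinations(range(u1), 2 * t + 1):
        odd_sums.add(tuple(1 if k in idx else 0 for k in range(u1)))

assert odd_sums == E1_bar
assert len(E1_bar) == 2 ** (u1 - 1)
```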
\section*{Acknowledgements}
The authors wish to thank the Editor, an Associate Editor, and two referees for their
helpful comments which have led to the improvement of the manuscript.
Yuanzhen He is supported by the National Natural Science
Foundation of China Grant 11701033. C. Devon Lin's research was supported by the Discovery grant from Natural Sciences and Engineering
Research Council of Canada. Fasheng Sun is supported by the National Natural Science
Foundation of China Grants 11471069, 11771220 and the Fundamental Research Funds for the Central Universities.
\vspace{0.3in}
\noindent {\bf \Large References}
\def\beginref{\begingroup
\clubpenalty=10000
\widowpenalty=10000
\normalbaselines\parindent 0pt
\parskip.0\baselineskip
\everypar{\hangindent1em}}
\def\endref{\par\endgroup}
\renewcommand{\baselinestretch}{1}
\beginref
Deng, X., Hung, Y.\ and Lin, C.D.\ (2015). Design for computer experiments with qualitative and quantitative factors. {\em Statistica Sinica}, {\bf 25}, 1567--1581.
Deng, X., Lin, C.D., Liu, K.W.\ and Rowe, R.K. \ (2017). Additive Gaussian process for computer models with qualitative and quantitative factors.
{\em Technometrics}, {\bf 59}, 283--292.
Draguljic, D., Santner, T.J.\ and Dean, A.M.\ (2012). Noncollapsing space-filling designs for bounded nonrectangular regions. {\em Technometrics}, {\bf 54}, 169--178.
Han, G., Santner, T.J., Notz, W.I.\ and Bartel, D.L.\ (2009). Prediction for computer experiments having quantitative and qualitative input variables. {\em Technometrics}, { \bf 51}, 278--288.
Handcock, M.S.\ (1991). On cascading Latin hypercube designs and additive models for
experiments. {\em Comm. Statist. Theory Methods}, {\bf 20}, 417--439.
He, Y., Lin, C.D.\ and Sun, F.S.\ (2017). On the construction of marginally coupled designs. {\em Statistica Sinica}, {\bf 27}, 665--683.
He, Y., Lin, C.D., Sun, F.S.\ and Lv, B.J.\ (2017). Marginally coupled designs for two-level qualitative factors. {\em Journal of Statistical Planning and Inference}, {\bf 187}, 103--108.
Hedayat, A.S., Sloane, N.J.A.\ and Stufken, J.\ (1999). {\it Orthogonal Arrays: Theory and Applications}. Springer, New York.
Horn, R.A.\ and Johnson, C.R.\ (2015). {\it Matrix analysis, Second Edition}. Cambridge University Press $\&$ Posts $\&$ Telecom Press.
Huang, H., Lin, D.K.J., Liu, M.Q.\ and Yang, J.F.\ (2016). Computer experiments with both qualitative and quantitative variables. {\em Technometrics}, {\bf 58}, 495--507.
Joseph, V.R., Gul, E.\ and Ba, S.\ (2015). Maximum projection designs for computer experiments. {\em Biometrika}, {\bf 102}, 371--380.
Leary, S., Bhaskar, A.\ and Keane, A.\ (2003). Optimal orthogonal-array-based Latin hypercubes. {\em Journal of Applied Statistics}, {\bf 30}, 585--598.
Lin, C.D.\ and Tang, B.\ (2015). Latin hypercubes and space-filling designs. In
{\em Handbook of Design and Analysis of Experiments}. CRC Press. Bingham, D., Dean, A., Morris, M., and
Stufken, J. ed., 593--626.
McKay, M.D., Beckman, R.J.\ and Conover, W.J. (1979). A comparison of three methods for selecting values of input variables
in the analysis of output from a computer code. {\em Technometrics}, {\bf 21}, 239--245.
Qian, P.Z.G.\ and Wu, C.F.J.\ (2009). Sliced space-filling designs. {\em Biometrika},
{\bf 96}, 733--739.
Qian, P.Z.G., Wu, H.\ and Wu, C.F.J.\ (2008). Gaussian process models for computer experiments with qualitative and quantitative factors. {\em Technometrics}, {\bf 50}, 383--396.
Rawlinson, J.J., Furman, B.D., Li, S., Wright, T.M.\ and Bartel, D.L. \ (2006). Retrieval, experimental, and computational assessment of the performance of total knee replacements.
{\em Journal of Orthopaedic Research Official Publication of the Orthopaedic Research Society}, {\bf 24}, 1384--1394.
Sun, F.\ and Tang, B.\ (2017). A method of constructing space-filling orthogonal designs. {\em Journal of the American Statistical Association}, {\bf 112}, 683--689.
Tang, B.\ (1993). Orthogonal array-based Latin hypercubes. {\em Journal of the American Statistical Association}, {\bf 88}, 1392--1397.
Wu, C.F.J.\ and Hamada, M.S. (2011). {\it Experiments: Planning, Analysis, and Optimization.} John Wiley \& Sons.
Xie, H., Xiong, S., Qian, P.Z.G.\ and Wu, C.F.J.\ (2014). General sliced Latin hypercube designs. {\em Statistica Sinica}, {\bf 24}, 1239--1256.
Zhou, Q., Qian, P.Z.G. and Zhou, S.\ (2011). A simple approach to emulation for computer models with qualitative and quantitative factors. {\em Technometrics}, {\bf 53}, 266--273.
Zhou, Q., Jin, T., Qian, P.Z.G.\ and Zhou, S. (2016). Bi-directional sliced Latin hypercube designs. {\em Statistica Sinica}, {\bf 26}, 653--674.
\par\endgroup
\end{document}
\section{Introduction}
The cytosol is crowded and replete with macromolecules such as ribosomes, lipids, proteins, and RNA. Estimates show that
the volume fraction ($\phi$) of these macromolecules, collectively referred to as crowding agents, can exceed 0.2.
Spontaneous folding of nascent proteins and RNA in the crowded environment can be different from {\it in vitro} experiments
that are typically conducted under infinite dilution. Minton, who has made pioneering contributions in elucidating the
importance of crowding in biophysics, was the first to recognize that the thermodynamics of folding, association, and
biochemical reactions can be altered by crowding
agents~\citep{Minton81Biopolymers,Minton01JBC,Minton05BJ,Hall03BBA,Zhou08ARB}. More recently there has been much interest
in the study of crowding effects on folding and function of
proteins~\citep{Cheung05PNAS,Homouz08PNAS,Dhar10PNAS,Elcock10COSB}, which was inspired by the insightful studies initiated
by Minton.
Describing even the simplest process, namely, the transition between folded and unfolded states of proteins and RNA under
crowded conditions is complicated because the nature of the effective interactions between the crowding agents and the
polypeptide or polynucleotide chains is not fully understood. However, to a first approximation, the dominant effect of
crowding agents is to exclude the molecule of interest from the volume occupied by crowders. If excluded volume interactions
dominate (an assumption that has to be tested before the theory can be applied to analyze experiments), then the stability
of the folded state of the protein or RNA is enhanced, compared to the case when $\phi =0$. In this case, the loss
in entropy of the folded state due to crowding is much less than that of the unfolded state, resulting in the
stabilization of the folded state. The entropic stabilization of the folded state~\citep{Cheung05PNAS,Minton05BJ} in the
presence of crowding agents has been firmly established theoretically in a number of studies.
Most of the studies on the effects of crowding on self-assembly processes have been on the folding of proteins. Recently,
it has been recognized that crowding agents could have a significant impact on the folding of
RNA~\citep{Pincus08JACS,Kilburn10JACS,Denesyuk11JACS}. Simple theoretical arguments and coarse-grained simulations were
used to show that crowding can modestly stabilize RNA secondary structures~\citep{Pincus08JACS}. However, RNA requires
counterions (Mg$^{2+}$, for example) for tertiary folding. Thus, the effect of macromolecular crowding on tertiary
structures of RNA may be complicated. Using small angle X-ray scattering measurements it has been shown that, in the
presence of polyethylene glycol (PEG), the 195 nucleotide {\it Azoarcus} ribozyme is more compact relative
to $\phi = 0$~\citep{Kilburn10JACS}. It was concluded that excluded volume effects play a dominant role in the
compaction of RNA in low molecular weight PEG. Interestingly, the transition to the folded state occurs at a lower
Mg$^{2+}$ concentration in the presence of PEG~\citep{Kilburn10JACS}. Even if excluded volume interactions largely
determine the stability of the folded states of RNA, a number of variables besides $\phi$, such as size and shape
of crowding agents, also contribute to the stability of RNA in the presence of inert crowding agents. Thus,
a systematic study of the influence of macromolecular crowding on RNA is required.
It is known that, in contrast to proteins~\citep{Guo92JCP}, the stability gap separating the native state and low energy
excitations in RNA is small~\citep{Thirumalai05Biochem}, which implies that external factors (crowding or force) can
modulate the stabilities and functions of RNA. For example, riboswitches undergo a transition between two distinct
conformations that have a profound influence on their functions~\citep{Montage08ARB}. In this review article,
we outline essential aspects of crowder-RNA interactions, with the particular emphasis on how crowding in the cellular
environment may alter conformational equilibria related to enzymatic function. As a biologically relevant example, we
discuss crowding effects on the transition between the hairpin (HP) and pseudoknot (PK) conformations (Figure \ref{SS}) in the
pseudoknot domain of human telomerase RNA, hTR \citep{Theimer03PNAS}. The pseudoknot domain is conserved in different
organisms and its activity is closely linked to chromosome stability \citep{Blasco03COGD,Chen04PNAS}.
However, the precise role of the PK and HP conformations of the pseudoknot domain in the context of telomerase activity is
not known. Mutations that either increase or decrease the stability of the PK conformation result in a reduction in
telomerase activity \citep{Comolli02PNAS,Theimer05MolCell}. Therefore, it is important to compare the impact of physical
factors, such as macromolecular crowding, with that of naturally occurring chemical mutations. Our discussion of
crowding presented in this review is original in two ways. First, we consider the effects of crowding on the conformational
equilibrium between two folded states. Traditionally, discussions of crowding effects on biomolecules, mostly proteins,
concern with the stability of the folded (active) state with respect to the unfolded (inactive) state. In addition, we
establish a quantitative and novel connection between the magnitude of the crowding effect and activity of wild-type and
mutant enzymes.
\section{Computational models of RNA and crowders}
Entropic stabilization, the mechanism by which macromolecular crowding stabilizes folded states of proteins and RNA in the
excluded volume limit, is similar to spatial confinement, in the sense that crowders confine the biomolecule of interest to
the interstitial space. Although analytical results exist for polymers confined to cavities with simple geometries, it is
difficult to obtain accurate estimates for nontrivial confinement geometry associated with macromolecular crowding.
Furthermore, unfolded RNA is a highly nonideal polymer whose conformational ensemble will be determined, among other things,
by stacking propensities between adjacent bases and by the ionic strength of the buffer. Estimating how the conformational
space of such a polymer will be reduced even in a simple geometrical confinement is a challenging task.
On the other hand, simulations have proven to be a useful tool in assessing the magnitude of entropic stabilization in
response to varying external conditions. Coarse-grained models have been particularly effective in the studies of RNA
folding, since they do not suffer from computational complexity associated with all-atom force
fields \citep{Lin08JACS,Whitford09BJ,Cho09PNAS,Feng11JACS,Denesyuk11JACS}. As a specific example of this class of techniques,
we will be discussing results obtained with a widely appreciated coarse-grained representation of RNA, where each
nucleotide is modeled by three interaction sites (TIS) --- a phosphate, a sugar and a
base \citep{Hyeon05PNAS,Cho09PNAS,Denesyuk11JACS}. We developed a force field in conjunction with the TIS model,
which originally included stacking and hydrogen bond interactions as essential components in the stability of RNA
structures and was thermodynamically accurate in the limit of high ionic strength \citep{Denesyuk11JACS}. The quantitative
agreement of thermodynamic predictions of this force field with experiments is illustrated in Figure \ref{rC}a.
Subsequently, we added an electrostatic component to the TIS model to describe the RNA thermodynamics at different
ionic concentrations (Denesyuk and Thirumalai, to appear elsewhere). The description of electrostatic effects in the model
is based on Manning's concept of counterion condensation, which posits that counterions condense onto the RNA molecule and
reduce the charge of phosphate groups from $-e$ to $-Qe$, where $Q<1$ and $e$ is the proton charge. The uncondensed mobile
ions are treated using the Debye-H{\"u}ckel theory. Therefore, the electrostatic energy of RNA in simulation is computed
using
\begin{equation}
U_{\rm EL}=\frac{Q^2e^2}{2\varepsilon}\sum_{i,j}\frac{\exp\left(-|{\bf r}_i-{\bf r}_j|/\lambda\right)}{|{\bf r}_i-{\bf r}_j|},
\label{GDH}
\end{equation}
where $|{\bf r}_i-{\bf r}_j|$ is the distance between two phosphates $i$ and $j$, $\varepsilon$ is the dielectric
constant of water and $\lambda$ is the Debye-H{\"u}ckel screening length. We showed that, if the reduced phosphate
charge $Q$ was taken to be
\begin{equation}
Q =\frac{b}{l_{\rm B}},
\label{Manning}
\end{equation}
where $l_{\rm B}$ is the Bjerrum length and $b=4.4$ \r{A}, the measured thermodynamics of a variety of RNA sequences could
be reproduced well over a wide range of temperatures and monovalent salt concentrations (Denesyuk and Thirumalai, to be
published).
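As a numerical illustration of Eqs. (\ref{GDH}) and (\ref{Manning}), the sketch below evaluates the reduced charge $Q$ and the screened pairwise sum. The Bjerrum length of water near 298 K ($\approx 7.1$ \r{A}) and the Debye-length estimate for a 1:1 salt ($\lambda \approx 3.04/\sqrt{c}$ \r{A}, with $c$ in mol/L) are standard textbook approximations, not values taken from the model parameterization.

```python
import math

B_FIT = 4.4        # Angstrom; the fitted parameter b in Eq. (2)
L_BJERRUM = 7.1    # Angstrom; Bjerrum length of water near 298 K (approximate)

def reduced_charge():
    """Manning-reduced phosphate charge Q = b / l_B of Eq. (2)."""
    return B_FIT / L_BJERRUM

def debye_length(c_molar):
    """Debye-Huckel screening length (Angstrom) for a 1:1 salt at ~298 K."""
    return 3.04 / math.sqrt(c_molar)

def u_el(phosphates, c_molar, prefactor=1.0):
    """Screened-Coulomb double sum of Eq. (1) over ordered phosphate pairs;
    `prefactor` stands in for Q^2 e^2 / (2 eps)."""
    lam = debye_length(c_molar)
    total = 0.0
    n = len(phosphates)
    for i in range(n):
        for j in range(n):
            if i != j:
                r = math.dist(phosphates[i], phosphates[j])
                total += math.exp(-r / lam) / r
    return prefactor * total
```

With these numbers $Q \approx 0.62$, and the screened energy of a fixed pair of phosphates decreases as the salt concentration (and hence screening) increases.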
A generalized Lennard-Jones potential, introduced by \cite{Denesyuk11JACS}, has been successfully employed to model
interactions of RNA with the spherical crowders of arbitrary size,
\begin{eqnarray}
U_{\rm LJ}(r)&=&\varepsilon\frac{2R_i}{D_0}\left[\left(\frac {D_0}{r + D_0 - D}\right)^{12}
- 2\left(\frac {D_0}{r + D_0 - D}\right)^{6} + 1\right],\ r\le D, \nonumber \\
U_{\rm LJ}(r)&=&0,\ r > D,
\label{POT1}
\end{eqnarray}
where $r$ is the distance between the centers of mass of two interacting particles, $D_0$ is the effective penetration
depth of the interaction, $R_i$ is the radius of an RNA coarse-grained bead, $r_{\rm C}$ is the radius of a crowder,
and $D=R_i+r_{\rm C}$. The ratio $2R_i/D_0$ in Eq.~(\ref{POT1}) is used to scale the interaction strength
$\varepsilon$ in proportion to the surface contact area. This potential correctly accounts for nonspecific
surface interactions between spherical crowders representing large macromolecules and individual segments of the
coarse-grained RNA.
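A direct transcription of Eq. (\ref{POT1}) is given below. Note that the potential is finite-ranged and purely repulsive: it vanishes at the contact distance $D=R_i+r_{\rm C}$ and rises steeply for $r<D$, which is the excluded-volume limit of crowder--RNA interactions. The parameter values in the assertions are arbitrary test numbers.

```python
def u_crowder(r, R_i, r_C, D0=1.0, eps=1.0):
    """Generalized LJ potential of Eq. (3): zero for r >= D = R_i + r_C,
    repulsive for D - D0 < r < D. The factor 2*R_i/D0 scales the strength
    eps in proportion to the surface contact area."""
    D = R_i + r_C
    if r > D:
        return 0.0
    x = D0 / (r + D0 - D)   # equals 1 at contact (r = D), grows as r decreases
    return eps * (2.0 * R_i / D0) * (x ** 12 - 2.0 * x ** 6 + 1.0)
```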
Assessing the magnitude of crowding effect from simulations requires an accurate technique for calculating the
folding free energies. To this end, a thermodynamic method has been proposed (Denesyuk and Thirumalai, to appear
elsewhere), which does not require a structural order parameter to define folded and unfolded ensembles. To summarize, we
perform a series of simulations at different temperatures in the range from $T_1$ to $T_2$, where $T_1$ and $T_2$ are on
the order of the lowest and highest temperatures used in thermodynamic simulations or measurements. Using statistical
mechanics techniques, we compute from these simulations the free energy of the molecule, $G$, as a function of temperature,
$T$ (Figure \ref{dGillust}). If the population of the unfolded state is negligible at $T_1$, the free energy of the folded
state at $T_1$, $G_{\rm f}(T_1)$, equals the computed free energy $G(T_1)$. Similarly, $G_{\rm u}(T_2)=G(T_2)$, if the
population of the folded ensemble is statistically insignificant at the highest temperature $T_2$. Assuming similar
equalities for the enthalpies and heat capacities of the folded and unfolded state, we can use thermodynamic relationships
between the free energies, enthalpies and heat capacities to extrapolate $G_{\rm f}(T_1)$ and $G_{\rm u}(T_2)$ to
intermediate temperatures, at which both folded and unfolded states are populated (asymptotic lines in Figure
\ref{dGillust}). For any $T$ between $T_1$ and $T_2$, geometric definition of the stability of the folded state with
respect to the unfolded state, $\Delta G$, is given in Figure \ref{dGillust}.
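A minimal sketch of the geometric construction in Figure \ref{dGillust} is shown below, simplified to linear branches (temperature-independent entropies); the full method also matches enthalpies and heat capacities, and the numbers in the assertions are invented for illustration only.

```python
def stability(T, T1, G1, S_f, T2, G2, S_u):
    """Delta G(T) = G_f(T) - G_u(T) from two asymptotic branches:
    the folded branch is anchored at G_f(T1) = G(T1) with slope -S_f,
    the unfolded branch at G_u(T2) = G(T2) with slope -S_u (S_u > S_f)."""
    G_f = G1 - S_f * (T - T1)
    G_u = G2 - S_u * (T - T2)
    return G_f - G_u
```

With the larger entropy assigned to the unfolded branch, $\Delta G$ is negative (folded state stable) below the crossing temperature and positive above it.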
\section{Crowding effect is negligible for large crowders}
The high density of macromolecules in the cell (volume fractions $\phi \approx 0.2-0.4$) reduces the space available for
conformational fluctuations. Therefore, macromolecular crowding should result in a shift in the thermodynamic equilibrium
between the HP and PK states of the hTR pseudoknot domain towards the more compact PK. To assess the extent to which
PK is favored at $\phi \ne 0$, we discuss the simulation results~\citep{Denesyuk11JACS} for the HP and PK states of the
modified pseudoknot domain, $\Delta$U177 (Figure \ref{SS}). The molecular construct $\Delta$U177 has been examined experimentally
{\it in vitro} at $\phi$ = 0~\citep{Theimer05MolCell}, and the atomistic structures of its HP and PK conformations are
available from the Protein Data Bank, codes 1NA2 and 2K96, respectively. Under native conditions, the RNA sequence in
Figure \ref{SS} will predominantly populate the PK conformation. To obtain adequate statistics on both conformations, two
independent sets of simulations were carried out, each modeling a limited subset of hydrogen bonds~\citep{Denesyuk11JACS}.
In the first set of simulations, the RNA sequence could form only those hydrogen bonds that are found in the NMR structure
of the PK. Similarly, in another set of simulations, only hydrogen bonds from the NMR structure of the HP were included. In
the HP simulations strand C166--A184 remained unbound in the folded state, because no hydrogen bonds could form between
this strand and the remainder of the molecule. In this way, the interconversion between the PK and HP was eliminated and
the HP structure could be sampled exhaustively. The stabilities of the PK and HP structures,
$\Delta G_{\rm PK}=G_{\rm PK, f} - G_{\rm PK, u}$ and $\Delta G_{\rm HP}=G_{\rm HP, f} - G_{\rm HP, u}$, were obtained
from the simulations using the technique illustrated in Figure \ref{dGillust}. Since the unfolded state is effectively defined as a
high-temperature state (in which all hydrogen bonds are broken) and the RNA sequence is the same in the PK and
HP simulations, it was assumed that $G_{\rm PK, u} = G_{\rm HP, u}$. Therefore, the stability of the PK
with respect to HP, $G_{\rm PK, f}-G_{\rm HP, f}$, was computed as $\Delta G_{\rm PK}-\Delta G_{\rm HP}$, without the
need for explicit simulations of the PK-HP interconversion.
In order to illustrate the essential aspects of crowding effects on RNA, we consider spherical particles with radius
$r_{\rm C}$. For monodisperse particles, the volume fraction is $\phi = 4 \pi r_{\rm C}^3 \rho/3$, where $\rho$ is
the number density. Thus, $\phi$ can be changed by increasing or decreasing $\rho$ or by altering the size of the crowding
particles. In this review, we fix $\phi = 0.3$ and examine the consequences of changing $r_{\rm C}$. Based on general theoretical
considerations~\citep{Asakura58JPS,Shaw91PRA} it can be shown that, in the colloid limit $r_{\rm C}> R_{\rm G}^0$, the
crowding agents would have negligible effects on RNA stability. Here, $R_{\rm G}^0$ is the size of RNA in the absence of
the crowding agent. It is only in the opposite polymer limit, $r_{\rm C}< R_{\rm G}^0$, that the crowding particles
would affect RNA stability. We therefore expect that the magnitude of the crowding effect should depend on the ratio
$r_{\rm C}/R_{\rm G}^0$.
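The geometric argument can be made concrete with the relation $\phi = 4 \pi r_{\rm C}^3 \rho/3$: at fixed $\phi$, the rough center-to-center spacing $\rho^{-1/3}$ grows linearly with $r_{\rm C}$, so large crowders leave wide interstitial gaps for the RNA. The sketch below is a direct numerical statement of this.

```python
import math

def number_density(phi, r_C):
    """rho from phi = (4/3) * pi * r_C^3 * rho, monodisperse spheres."""
    return 3.0 * phi / (4.0 * math.pi * r_C ** 3)

def mean_spacing(phi, r_C):
    """Rough center-to-center spacing rho^(-1/3); at fixed phi it
    scales linearly with the crowder radius r_C."""
    return number_density(phi, r_C) ** (-1.0 / 3.0)
```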
In Figure \ref{rC}a we show the HP melting profile, taken to be the negative derivative of the number of intact base pairs
$N_{\rm BP}$ with respect to $k_{\rm B} T$, for the crowder radius $r_{\rm C}=26$ \r{A}~\citep{Denesyuk11JACS}. Such
crowders are larger than the radius of gyration of strand G93--C121 in the unfolded state, $R_{\rm G}^0=20$ \r{A}
(Figure \ref{SS}c). As discussed above, large crowders have minimal effects on the melting of the HP even at $\phi=0.3$. For a
fixed $\phi$, the average distance between two spherical crowders will increase with the crowder size. If the unfolded
hairpin can easily fit in the interstitial space, the folding/unfolding transition will not be affected significantly by
the presence of crowders. For $\phi=0.3$ and the crowder radius $r_{\rm C}=26$ \r{A}, which is only slightly larger than
$R_{\rm G}^0$, the increase $\Delta T$ in the melting temperature is 1.5 $^{\circ}$C for stem 1 of the HP and is
negligible for stem 2 (Figure \ref{rC}a). Further increase in $r_{\rm C}$ results in $\Delta T\approx0$ for both stems (data not
shown).
Figure \ref{rC}a also shows the melting profile of the HP in a ternary mixture of crowders, containing volume fractions
$\phi=0.11$, 0.11 and 0.08 of particles with $r_{\rm C}=104$ \r{A}, 52 \r{A} and 26 \r{A},
respectively~\citep{Denesyuk11JACS}. The sizes and volume fractions of individual components in the model mixture correspond
to the ribosome, large enzymatic complexes and relatively small individual proteins, found in {\it E. coli}. Because all
the values of $r_{\rm C}$ in the {\it E. coli} mixture are larger than $R_{\rm G}^0$, we expect only small changes in the
melting profile of the HP (Figure \ref{rC}a). For the total volume fraction of 0.3, the melting temperature of the HP stem 1
increases only by 2 $^{\circ}$C with respect to $\phi=0$ (Figure \ref{rC}a). Interestingly, the effect of the {\it E. coli}
mixture is similar in magnitude to that of a monodisperse suspension with $r_{\rm C}=26$ \r{A} and $\phi=0.3$. In contrast,
a monodisperse suspension with $r_{\rm C}=26$ \r{A} and $\phi=0.08$, which is equivalent to the smallest particle component
in the mixture, has negligible effect on the melting of the HP (Figure \ref{rC}a).
To summarize, the crowding effect of polydisperse mixtures is largely the effect of the smallest particle component,
but taken at the total volume fraction of the mixture. As we discuss in the next section, the excess stability of the
folded state due to crowding decreases nonlinearly with the radius of the crowding particle $r_{\rm C}$. We therefore
propose that, for crowding in the cellular environment, the main role of large macromolecules will be to increase the
effective volume fraction of the relatively small macromolecules.
\section{Role of crowder size in the PK-HP equilibrium}
The sensitivity of the crowding effect to the relative sizes of RNA and crowders is at the basis of the equilibrium
shift in the hTR pseudoknot domain. Figure \ref{rC}b shows the change in stability of the HP and PK at 37 $^{\circ}$C
induced by monodisperse crowders for different crowder radii $r_{\rm C}$ ($\phi=0.3$). As anticipated by arguments given
above, the magnitude of the excess stability $\Delta G(0.3)-\Delta G(0)$ is small if $r_{\rm C}/R_{\rm G}^0>1$ and
increases sharply for $r_{\rm C}/R_{\rm G}^0<1$. Note that the crowding effect is larger for the PK for all values of
$r_{\rm C}$ (Figure \ref{rC}b), indicating an equilibrium shift towards this conformation. The discussed change in the
PK-HP relative stability is of entropic origin. The unfolded ensembles of the PK and HP are equivalent, as discussed above,
and will therefore be depleted by macromolecular crowding to a similar degree. The compact folded PK is, to a first
approximation, unaffected by crowding. However, the folded HP contains an unbound strand C166--A184 (Figure \ref{SS}),
whose loose conformations will be restricted in a crowded environment. Therefore, the excess stability of the PK with
respect to HP is due to a partial suppression of the HP folded ensemble by macromolecular crowding.
The crowder radius $r_{\rm C}=12$ \r{A} corresponds to the size of an average protein {\it in vivo}.
For $\phi=0.3$ and $r_{\rm C}=12$ \r{A}, we have $\Delta G_{\rm PK}(0.3)-\Delta G_{\rm PK}(0)=-2.4$ kcal/mol and
$\Delta G_{\rm HP}(0.3)-\Delta G_{\rm HP}(0)=-1.0$ kcal/mol, which amounts to the relative stabilization of the PK
conformation by $-1.4$ kcal/mol (Figure \ref{rC}b). Below we analyze this value in the context of standard changes in the
PK stability caused by mutations.
\section{Implications for function}
As mentioned in the Introduction, changes in the relative stability of the HP and PK conformations of the hTR pseudoknot
domain compromise the enzyme activity. The estimate of the crowding effect in a typical cellular environment,
$\Delta\Delta G=-1.4$ kcal/mol, allows us to assess the extent to which macromolecules could regulate telomerase
activity. Experimental data on hTR mutants~\citep{Comolli02PNAS,Theimer05MolCell} clearly demonstrate that enzyme
activity decreases when plotted as a function of $|\Delta\Delta G^*|=|\Delta G^*_{\rm PK}(0)-\Delta G_{\rm PK}(0)|$,
where $\Delta G^*_{\rm PK}(0)$ and $\Delta G_{\rm PK}(0)$ are the stabilities of mutant and wild-type pseudoknots at
$\phi=0$ (Figure \ref{Activity}). The majority of mutations destabilize the PK, $\Delta\Delta G^*>0$ (black squares in Figure \ref{Activity}) and only two
mutants have $\Delta\Delta G^*<0$ (red stars in Figure \ref{Activity}). For destabilizing mutants the reduction in activity,
$\alpha$, was shown by \cite{Denesyuk11JACS} to follow the exponential dependence, $\alpha=\exp (-0.6\Delta\Delta G^*)$
(thick curve in Figure \ref{Activity}). The naturally occurring destabilizing mutations DKC and C116U have been linked to diseases
dyskeratosis congenita and aplastic anemia, respectively~\citep{Vulliamy02Lancet,Fogarty03Lancet}. The DKC and the
stabilizing $\Delta$U177 mutations have been studied {\it in vivo} (green symbols in Figure \ref{Activity}), as well as
{\it in vitro}. In both cases, mutant telomerase {\it in vivo} was found to be significantly less active than the
corresponding construct {\it in vitro}, suggesting that a number of factors determine the activity of telomerase
{\it in vivo}.
Although macromolecular crowding enhances the stability of the PK state, the crowding effect ($\Delta\Delta G=-1.4$ kcal/mol) is less than
the stability changes caused by mutations. In Figure \ref{Activity} the grey area marks the domain of potential mutants with
$\Delta\Delta G^*>0$, whose activity may be completely restored by macromolecular crowding. All experimentally studied
mutants fall outside the marked domain, including the two disease related mutants DKC and C116U. Nevertheless, due to the
strong dependence of enzyme activity on $\Delta\Delta G^*$, the effect of crowding on telomerase function may be
significant. We estimate that the activity of telomerase can be up- or down-regulated by more than two-fold in response
to density fluctuations in its immediate environment. Furthermore, due to the expected dynamical heterogeneities in cells,
there will be variations in enzyme activity in different cell regions.
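The two-fold figure follows from the activity dependence $\alpha=\exp(-0.6\Delta\Delta G^*)$ quoted above, applied to the crowding shift $\Delta\Delta G=-1.4$ kcal/mol. A minimal sketch, assuming the fitted coefficient $0.6$ (kcal/mol)$^{-1}$ of \cite{Denesyuk11JACS} carries over unchanged:

```python
import math

# Activity vs. stability change, alpha = exp(-0.6 * ddG), with ddG in
# kcal/mol (exponential fit quoted in the text).
def activity_factor(ddG):
    return math.exp(-0.6 * ddG)

# A crowding-induced shift of ddG = -1.4 kcal/mol rescales the activity by
factor = activity_factor(-1.4)
print(round(factor, 2))  # 2.32 -- "more than two-fold"
```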
\section{Crowding effects on RNA at different ionic strengths}
The entropic stabilization mechanism~\citep{Cheung05PNAS} implies that crowding increases the stability of the folded state by reducing the
population of expanded conformations in the unfolded state. Therefore, we expect that the magnitude of the crowding
effect will be sensitive to the ionic strength of the RNA buffer, since the latter determines the size of the
unfolded RNA. The quantitative discussion above assumed the limit of high ionic strength. As the buffer ionic concentration $c$ is lowered, the screening of the negative charge on the RNA
sugar-phosphate backbone becomes less efficient, which in turn increases the mean radius of gyration of conformations in
the unfolded state. The function $R_{\rm G}(c)$ for the unfolded PK is shown in Figure \ref{Salt}a in the absence (black diamonds)
and presence (green circles) of crowding. The same general trend is observed in both cases, with the $R_{\rm G}$ values
being consistently smaller when crowders are present for the entire range of $c$. In accord with our predictions, the
crowder-induced stabilization of the folded PK becomes more significant at low ionic strengths (red squares in Figure \ref{Salt}a).
Interestingly, the stabilization effect increases rapidly upon lowering $c$ from 1 M to 0.1 M, but shows little change
when $c$ is lowered further to below 0.1 M. The underlying reason for such behavior can be traced to the probability
distributions $p(R_{\rm G})$ in the unfolded state (Figure \ref{Salt}b). For a given crowder solution, we can identify a typical
size of the cavity which will be free of any
crowders. If the radius of gyration of RNA conformations is such that they fit into the cavity, these conformations will
not be perturbed by crowding. On the other hand, the population of conformations with $R_{\rm G}$ larger than the typical
cavity size will be significantly depleted by crowding. For $\phi=0.3$ and $r_{\rm C}=12$ \r{A}, we can infer the size of
a standard empty cavity from the distributions $p(R_{\rm G})$ at high ionic strength ($c=1$ M, Figure \ref{Salt}b). Note
that $p(R_{\rm G})$ decreases for $R_{\rm G}>20$ \r{A} when crowders are present (green solid line in Figure \ref{Salt}b),
but $p(R_{\rm G})$ increases with $R_{\rm G}$ around 20 \r{A} in the absence of crowders (black solid line in
Figure \ref{Salt}b). This indicates that the crowders significantly perturb the unfolded conformations with $R_{\rm G}$
larger than 20 \r{A}, which can serve as an upper estimate of the smallest RNA size affected by crowders. When $c$
decreases, the distribution $p(R_{\rm G})$ shifts to larger $R_{\rm G}$, increasing the fraction of the unfolded
conformations affected by crowders. At $c=0.1$ M, all statistically significant values of $R_{\rm G}$ in the unfolded
state fall within the range $R_{\rm G}>20$ \r{A}, both for $\phi = 0$ (black diamonds in Figure \ref{Salt}b) and
for $\phi=0.3$ (green circles in Figure \ref{Salt}b), so that the entire distribution $p(R_{\rm G})$ is depleted due to
crowding. This explains why the crowding-induced stabilization is almost constant below 0.1 M, even if the average
$R_{\rm G}$ continues to increase rapidly all the way to 0.01 M (symbols in Figure \ref{Salt}a).
\section{RNA becomes compact as $\phi$ increases}
The reduction in conformational space accessible to RNA should increase with $\phi$, for a fixed $r_{\rm C}$. Thus, we
expect that $R_{\rm G}$ should decrease as $\phi$ increases. This is precisely what is observed in
experiments~\citep{Kilburn10JACS}, which show that at all concentrations of Mg$^{2+}$ the {\it Azoarcus} ribozyme becomes
more compact as the volume fraction of the crowding agent (PEG) increases (Figure \ref{Azoarcus}). Interestingly, the midpoint of the folding
transition $c_{\rm m}$ --- the concentration of Mg$^{2+}$ at which the folded and unfolded states of the {\it Azoarcus}
ribozyme have equal populations --- also decreases as $\phi$ increases (Figure \ref{Azoarcus}). This finding can be readily
explained in terms of the entropic stabilization mechanism~\citep{Cheung05PNAS} and suggests that, to a first approximation,
PEG behaves as an inert hard sphere crowding agent. Based on our considerations from the previous section, we predict that
the shift in $c_{\rm m}$ due to crowding will also depend on the concentration of monovalent counterions in the RNA buffer,
a prediction that is amenable to experimental test. In addition, it would be of interest to perform experiments at a fixed
$\phi$ but varying $r_{\rm C}$, which can be changed by decreasing or increasing the molecular weight of PEG.
\section{Conclusions}
The phenomena discussed here illustrate just one aspect of crowding effects on RNA. There are other physical and chemical
factors that could determine how crowding modulates RNA stability, and hence its function. The interplay between electrostatic interactions,
hydration of phosphate groups and crowding particles, as well as the shape of crowding particles, could potentially present
a much more nuanced picture than the simplest case considered here. Even though these effects are important and warrant
further study, the analysis presented here is important in establishing the maximum increase in stability that can be
realized when excluded volume interactions dominate. From this perspective, our conclusion that the crowding effects
cannot fully restore telomerase activity {\it in vivo} is robust. In future work, it will be important to connect the
consequences of crowding effects on RNA folding with the RNA activity under cellular conditions.
\begin{acknowledgements}
We are grateful to Sarah Woodson for providing Figure 6. Our work
was supported by a grant from the National Science Foundation (CHE 09-10433).
\end{acknowledgements}
Conflict of Interest: None
\section{general formalism}
We begin with the conventional form of the exchange interaction
between the localized moment ${\bf J}$ and the electron spin density
${\bf s}({\bf r})$:
\begin{equation}
V({\bf r})=
-A {\bf J}({\bf R}) {\bf s}({\bf r}) \delta({\bf R}-{\bf r})
\end{equation}
Here $A$ is the exchange coupling constant. The RKKY
interaction between two localized moments via the conduction
electrons may then be written in the following form
\begin{equation}
\label{rkky-def}
H_{RKKY}= - \frac12 A^2 {\bf J}_1{\bf J}_2 \chi(
{\bf r}_{1},{\bf r}_{2})\ .
\end{equation}
where the $r$-dependent part of the interaction
coincides with the Fourier transform of the non-uniform static
susceptibility $\chi(q)$ (the Lindhard function) and is usually
written in the form:
\begin{eqnarray}
\chi({\bf r}_{1},{\bf r}_{2}) &=&
\frac{v_0^2}{(2\pi)^6}
\int d^3{\bf k}\, d^3{\bf q}\,
\frac {n_k- n_q} {\varepsilon_q -\varepsilon_k}
e^{i({\bf q} - {\bf k})({\bf r}_{1}-{\bf r}_{2})}
\label{rkk-conv}
\end{eqnarray}
with
the unit cell volume $v_0$ and the Fermi function $n_k
=(\exp(\varepsilon_k/T) + 1)^{-1}$. For our purposes, however, it is more
convenient to represent the above expression in the
equivalent form
\begin{equation}
\chi({\bf r}_{1},{\bf r}_{2}) =
- T \sum_n
G(i\omega_n, {\bf r}_{1},{\bf r}_{2} )
G(i\omega_n, {\bf r}_{2},{\bf r}_{1} )
\label{rkk-inter}
\end{equation}
Here the Matsubara frequency is $ \omega_n = \pi T(2n+1)$ and $G$ is the
electronic Green's function. In the low-temperature limit
considered below we use the limiting
relation $T \sum_n \to \int_{-\infty}^\infty d\omega/(2\pi)$.
The electronic Green's function is given by
\begin{equation}
G(i\omega, {\bf r}_{1},{\bf r}_{2}) =
\frac{v_0}{(2\pi)^3}
\int d^3{\bf k}
\frac{\exp(i{\bf k}({\bf r}_{1}-{\bf r}_{2}))}
{i\omega -\varepsilon_k}
u_k({\bf r}_1) u_k^\ast ({\bf r}_2)
\label{g-def}
\end{equation}
with the periodic Bloch function $u_k({\bf r})$.
Strictly speaking, the $u$-functions should be inserted into
(\ref{rkk-conv}); only in the case of free electrons does
the RKKY interaction depend solely on
the absolute value of the distance ${\bf r}_{1}-{\bf r}_{2}$. Below we consider
the tight-binding form of the electronic Hamiltonian, which
implies the following form of $u_k({\bf r})$ \cite{aaa}:
\begin{equation}
u_k({\bf r}) =
\sum_n e^{i{\bf k} ({\bf a}_n -{\bf r}) }
\varphi({\bf r}- {\bf a}_n) ,
\label{bloch}
\end{equation}
Here the Wannier function $\varphi({\bf r})$ decays rapidly
with distance and is close to the atomic wave function at
small ${\bf r}$. In view of this rapid decrease on the scale of
interatomic distances, it is possible to neglect the dependence
of $u_k({\bf r})$ on ${\bf k}$ within the first Brillouin
zone for most positions of ${\bf r}$ in the unit cell.
Therefore we may replace $u_k({\bf r})$ by $u_0({\bf r})$
and assume the exchange coupling is appropriately redefined,
$A \to A|u_0({\bf r})|^2$. It is clear that upon this
redefinition the Green's function (\ref{g-def}) depends only on
the difference ${\bf r} \equiv {\bf r}_{1} - {\bf r}_{2}$.
In view of the forthcoming consideration we stress that the above replacement
of the amplitude of the Bloch function
is the only uncontrolled approximation of our treatment.
The relevant calculations \cite{Bel-Ch}
show that $\varphi({\bf r})$
for the nearest neighbors is an order of magnitude smaller than $\varphi(0)$.
Therefore we expect that omitting the dependence
of $u_k({\bf r})$ on ${\bf k}$ causes only minor corrections
to our results.
\section{nearly nested Fermi surface}
We consider a two-dimensional case of almost nested
Fermi surface, with the quasiparticle dispersion given by
\begin{equation}
\varepsilon_{{\bf k}} =
-2t (\cos k_x + \cos k_y ) - \mu,\quad |\mu| \ll t
\label{disp-nest}
\end{equation}
with $t> 0 $.
This tight-binding spectrum appears, in particular, in various
models related to the high-$T_c$ phenomenon.
\cite{htsc} Henceforth we set the lattice parameter to unity.
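Note that all four saddle points and the flat-part centers $(\pm\pi/2,\pm\pi/2)$ of (\ref{disp-nest}) lie at the energy $-\mu$, i.e.\ within $|\mu|\ll t$ of the Fermi level. A minimal numerical check (the parameter values are ours):

```python
import numpy as np

def eps(kx, ky, t=0.5, mu=0.1):
    """Tight-binding dispersion eps_k = -2t(cos kx + cos ky) - mu."""
    return -2.0 * t * (np.cos(kx) + np.cos(ky)) - mu

# Saddle points (0, +-pi), (+-pi, 0) and flat-part centers (+-pi/2, +-pi/2)
points = [(0.0, np.pi), (np.pi, 0.0),
          (np.pi / 2, np.pi / 2), (np.pi / 2, -np.pi / 2)]
energies = [eps(kx, ky) for kx, ky in points]
print(energies)  # all equal to -mu = -0.1
```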
We treat the different parts of the spectrum
(\ref{disp-nest}) in different manners,
dividing the whole FS into four parts,
which are as follows.
{\em The vicinities of the saddle points of the
spectrum.} The dispersion near the points $(0,\pm\pi)$ and
$(\pm\pi,0)$ is given by
\begin{equation}
\varepsilon_{ (0,\pm\pi) +{\bf k}} \simeq
t (k_x^2 - k_y^2 ) - \mu + O(t k^4) ,
\label{exp1}
\end{equation}
\begin{equation}
\varepsilon_{(\pm\pi,0) + {\bf k} } \simeq
-t (k_x^2 - k_y^2 ) - \mu + O(t k^4) ,
\label{exp2}
\end{equation}
respectively.
{\em The flat parts of the Fermi surface.}
In the vicinity of the pair of wave-vectors $\pm(\pi/2,\pi/2)$
the dispersion takes a form similar to the one-dimensional case near
half-filling \cite{Luther}:
\begin{equation}
\varepsilon_{{\bf k} \pm(\pi/2,\pi/2)} \simeq
\pm 2t (k_x + k_y ) - \mu + O(t k^3) .
\label{exp3}
\end{equation}
The pair $\pm(\pi/2,-\pi/2)$
is characterized by the similar dispersion law
\begin{equation}
\varepsilon_{{\bf k} \pm(\pi/2,-\pi/2) } \simeq
\pm 2t (k_x - k_y ) - \mu + O(t k^3) .
\label{exp4}
\end{equation}
The expansions (\ref{exp1})--(\ref{exp4}) are valid for
$k\lesssim 1$. In turn, this means that, dropping the higher-order
terms of the expansions, one expects to obtain the correct form of the RKKY
interaction at $r\gtrsim 1$. Below we revise this statement
and compare our analytic results with the
numerical findings.
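The accuracy of the quadratic expansion is easy to probe directly; the sketch below compares (\ref{exp1}) with the full band (\ref{disp-nest}) at a generic small ${\bf k}$ (the sample values are ours):

```python
import numpy as np

t, mu = 0.5, 0.1

def eps(kx, ky):
    """Full tight-binding band, Eq. (disp-nest)."""
    return -2.0 * t * (np.cos(kx) + np.cos(ky)) - mu

# Expansion (exp1) around the saddle point (0, pi):
# eps_{(0,pi)+k} = t (kx^2 - ky^2) - mu + O(t k^4)
kx, ky = 0.1, 0.2
exact = eps(kx, np.pi + ky)
approx = t * (kx**2 - ky**2) - mu
print(exact - approx)  # O(t k^4), here ~1e-4
```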
We see that the whole vicinity of the FS naturally divides into
parts with predominantly two-dimensional hyperbolic dispersion
and parts with linear dispersion. Therefore we can
represent the Green's function as a sum of eight different
contributions, written symbolically in the form
\begin{eqnarray}
G(i\omega, {\bf r} ) &=&
\sum_{{\bf k}_0}
G_{{\bf k}_0 }(i\omega, {\bf r} )
\label{G-total}
\end{eqnarray}
where ${\bf k}_0 = \pm(0,\pi), \pm(\pi,0), (\pm\pi/2, \pm\pi/2) $ and
{\em the partial Green's function} $ G_{{\bf k}_0 }(i\omega,
{\bf r} ) $ originates from the integration over the part of the
Brillouin zone with the corresponding form of the dispersion law:
\begin{equation}
G_{{\bf k}_0 } (i\omega, {\bf r} ) =
\frac{v_0}{(2\pi)^2}
\int_{k\lesssim 1} d^2{\bf k}
\frac{\exp(i({\bf k}+{\bf k}_0){\bf r})}
{i\omega -\varepsilon_{{\bf k}+{\bf k}_0}} .
\label{gpart-def}
\end{equation}
It should be noted that this decomposition is general and may be
performed whenever the FS possesses saddle points and flat parts.
Hence our consideration of the nearly nested FS may be regarded as a
particular example of the analysis of the RKKY interaction for
a strongly non-spherical FS.
\section{ the Green's function from the saddle points in
the spectrum}
In this Section we consider the contribution to the Green's function from the
vicinities of the saddle points of the spectrum only; the flat parts
are analyzed in Section \ref{app:flat}. As we show below, the saddle
points determine the RKKY interaction in almost all directions in $r-$space,
except for the regions near the diagonals $x=\pm y$, where the contribution
from the flat parts of the FS becomes important.
As a first step we use the following auxiliary representation of the
Green's function:
\begin{equation}
\label{g-tau1}
G(i\omega, {\bf r} ) =
\frac{ e^{-i\alpha} }{(2\pi)^2} \!
\int\limits_0^{\infty} \! d\tau \!
\int \!\!\!
d^2{\bf k}
\exp \!\left[
i{\bf k}{\bf r} + \tau e^{i\alpha}
\left[i\omega - \varepsilon_k \right]
\right]
\end{equation}
where we put $\alpha = sign(\omega) \pi/2$.
It is evident that $e^{i\alpha} = i\, sign(\omega)$, but this formal trick
facilitates the subsequent analysis of the complex-valued expressions.
The evaluation of the saddle point contribution to the Green's function can be
made in a more general form, applicable to the parts of the electronic spectrum
near the so-called stationary points \cite{aaa}.
By definition these are the points ${\bf k}_0$ where
$\varepsilon_{{\bf k}}$ is well approximated by the expression
\begin{equation}
\label{stpoint}
\varepsilon_{{\bf k}_0+ {\bf k}}
\simeq \frac12 {\bf k}\, \tensor{ {\rm m}}^{-1} \, {\bf k}
- \mu ,
\end{equation}
with the generally anisotropic tensor of masses $\tensor{ {\rm m}}$ and some
effective chemical potential $\mu$. We assume the applicability
of (\ref{stpoint}) for wave-vectors ${\bf k}$ not exceeding
some scale $\kappa$, comparable to the inverse lattice parameter. In
other words, the significant parts of the FS may be approximately mapped
by (\ref{stpoint}).
In view of (\ref{stpoint}) the integration over ${\bf k}$
in (\ref{g-tau1}) becomes
Gaussian and is easily performed. The only complication here,
compared to the case of the free electron gas, \cite{rkky-anyD} is
the finite value of $\kappa$, which restricts the validity of the subsequent
equations to $r \gtrsim \kappa^{-1}$; we discuss this in more detail below.
Thus we obtain the following expression for the partial
contribution to the Green's function from the vicinity of ${\bf k}_0$
\begin{eqnarray}
G_{\bk0}(i\omega, {\bf r} ) &=&
- e^{-i\alpha}\frac{ \sqrt{|det\, \tensor{ {\rm m}}}|}{2\pi} e^{i{\bf k}_0{\bf r}}
\nonumber \\
&\times&
\int_0^{\infty} \frac{d\tau}{\tau}
\exp[ \tau z e^{i\alpha} -
\frac{\rho}{2\tau} e^{-i\alpha} ]
\label{g-tau2}
\end{eqnarray}
where $z = \mu + i\omega$
and $\rho = {\bf r}\, \tensor{ {\rm m}} \,{\bf r}$ is the square of the distance {\em
in the metric defined by the mass tensor}.
The last integral is expressed via the modified Bessel
(Macdonald) function, \cite{GR} namely
\begin{equation}
G_{\bk0}(i\omega, {\bf r} ) =
- e^{-i\alpha}\frac{\sqrt{|det\, \tensor{ {\rm m}}|}}{\pi} e^{i{\bf k}_0{\bf r}}
K_0 (\sqrt{-2z\rho})
\label{g-mcdo}
\end{equation}
In Eq.~(\ref{g-mcdo}) the branch of $\sqrt{-2z\rho}$ is
chosen from the condition that its real part be positive. In
the particular case $2\mu\rho >0$, this condition means
that the argument of the Macdonald function $K_0 (\sqrt{-2z\rho})
$ has a discontinuity at $\omega = 0$, \cite{GR}
\begin{equation}
K_0 (\sqrt{-2z\rho}) =
\left\{
\begin{array}{rl}
\frac{\pi i}2 H_0^{(1)}(\sqrt{2\mu\rho}), &
\quad \omega/\mu \to +0 \\
-\frac{\pi i}2 H_0^{(2)}(\sqrt{2\mu\rho}), &
\quad \omega/\mu \to -0
\end{array}
\right.
\label{K2H}
\end{equation}
where $H_0^{(1,2)}(x)$ are the Hankel functions. Note that in the
case of a spherical Fermi surface the condition $2\mu\rho \equiv
k_F^2 r^2 > 0 $ is fulfilled automatically. For the
electronic type of dispersion $\tensor{ {\rm m}} >0$ and $\mu >0$, while
for the hole-like dispersion law one has $\tensor{ {\rm m}}<0 $ and $\mu< 0$.
It is clear from Eq.~(\ref{K2H}) that for a non-spherical Fermi surface
the effective Fermi momentum is defined by $\sqrt{2\mu\rho }= k_F^\ast r$
and may depend strongly on the direction in real space
(cf.\ Eq.~(\ref{def-kF}) below).
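The step from (\ref{g-tau2}) to (\ref{g-mcdo}) rests on the standard identity $\int_0^\infty (d\tau/\tau)\, e^{-a\tau - b/\tau} = 2K_0(2\sqrt{ab})$, valid for $\mathrm{Re}\,a,\,\mathrm{Re}\,b > 0$ and extended to the values $2\sqrt{ab}=\sqrt{-2z\rho}$ by analytic continuation. A quadrature sketch for generic complex $a,b$ (SciPy assumed; the sample values are ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

# Identity: int_0^inf dtau/tau exp(-a tau - b/tau) = 2 K_0(2 sqrt(a b)),
# for Re a > 0, Re b > 0 (principal branch of the square root).
a, b = 1.0 + 0.5j, 0.3 + 0.2j

def integrand(tau):
    return np.exp(-a * tau - b / tau) / tau

# Integrate real and imaginary parts separately.
re = quad(lambda s: integrand(s).real, 0.0, np.inf, limit=200)[0]
im = quad(lambda s: integrand(s).imag, 0.0, np.inf, limit=200)[0]
numeric = re + 1j * im
exact = 2.0 * kv(0, 2.0 * np.sqrt(a * b))
print(abs(numeric - exact))  # small
```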
Let us now discuss the region of
applicability of the expression (\ref{g-mcdo}).
The Gaussian integration in (\ref{g-tau1})
for $\varepsilon_{\bf k}$ given by (\ref{stpoint}) is
justified under two conditions. First, the center of the
quadratic form in ${\bf k}$, which is $\tau^{-1} {\bf r} \,\tensor{ {\rm m}}$,
should lie within the circle of radius $\kappa$ where
the expansion (\ref{stpoint}) is applicable.
Second, one should demand that the criterion $\tau |{\bf
\kappa}\,\tensor{ {\rm m}}^{-1} \,{\bf \kappa} |\gg 1$ be
satisfied, to ensure
the Gaussian value (\ref{g-tau2}).
In the principal axes $j$ of $\tensor{ {\rm m}}$ these
criteria can be combined as follows :
\begin{equation}
\tau \gg \max\left[ \frac{|r_jm_j|}{\kappa},
\frac{|m_j|}{\kappa^2}\right]
\label{star}
\end{equation}
At this point our analysis differs somewhat for the cases
$ k_F^\ast r = \sqrt{2\mu\rho}\gg 1 $ and $ k_F^\ast r \lesssim 1 $.
In the first case of largest $r$ the principal contribution to the integral
(\ref{g-tau2}) comes from $\tau = \tau_0 (1 + O(1/\sqrt{2z\rho}) )$ with
$\tau_0 = \sqrt{\rho/2z} = k_F^\ast r/(2\mu)$. \cite{fnote2}
Simple arguments then show that for an ellipsoidal FS the criterion
(\ref{star}) is always fulfilled, coinciding with the obvious demand
$k_F^\ast \ll \kappa$. For the spectrum with the saddle point
the condition (\ref{star})
may be violated at large distances and near the nodes $\rho\simeq 0$.
Let the angle $\varphi$
be measured from the $\hat x$-axis in ${\bf r}$-plane.
Then $ k_F^\ast $ is given by (\ref{def-kF})
and we lose the applicability of (\ref{g-mcdo}) at
\begin{equation}
\frac{\sqrt{|t/\mu|} }{r} \ll \sqrt{|\varphi \pm \pi/4|}
\lesssim \frac{\sqrt{|\mu/t|} }{a},
\label{star2}
\end{equation}
i.e.\ within a narrow sector along the diagonals if
$r/a \gg |t/\mu| \gg 1$.
In the case $ k_F^\ast r \lesssim 1 $ it is the region
$|\rho|\lesssim \tau \lesssim |z|^{-1}$ that is essential in (\ref{g-tau2}).
Substituting the lower boundary $\tau = |\rho| $ into (\ref{star}) we
arrive at the evident condition $\kappa r > 1 $ for an ellipsoidal FS.
For the FS with the saddle point the criterion (\ref{star})
is again violated near the node of $\rho$. For our particular form of the
spectrum, Eq.~(\ref{g-mcdo}) is applicable outside a band along the
diagonals $x=\pm y$:
\begin{equation}
r |\varphi \pm \pi/4| > \kappa^{-1} \sim a.
\label{star3}
\end{equation}
We depict the above regions of applicability in Figure \ref{fig:regions}.
Let us now consider the particular form of the Green's functions arising in the
case of the tight-binding spectrum (\ref{disp-nest}).
The restriction of the ${\bf k}$ integration to the first Brillouin
zone leads to an additional phase factor in (\ref{g-mcdo}), as discussed
in Appendix \ref{app:bzb}. The desired expressions acquire the form:
\begin{eqnarray}
G_{(0,\pi)}(i\omega, {\bf r} ) &=&
- \frac{ i sign(\omega) }{2\pi t}
K_0 (\sqrt{-2z\rho}) e^{i \pi |y| sign(\omega ) }
\label{gf0pi} \\
G_{(\pi,0)}(i\omega, {\bf r} ) &=&
- \frac{ i sign(\omega)}{2\pi t}
K_0 (\sqrt{2z\rho}) e^{i \pi |x| sign(\omega)}
\label{gfpi0}
\end{eqnarray}
where $\rho = ( x^2 - y^2 )/ 2t $ and $\sqrt{|det\,\tensor{ {\rm m}}|}$ is
replaced by its actual value $1/2t$.
\section{evaluation of the RKKY interaction}
\subsection{integer values of r}
We note that if the coordinates $x,y$
in (\ref{gf0pi}), (\ref{gfpi0}) coincide with integer
numbers of lattice periods, then one has $ e^{i \pi |x| sign(\omega )} =
e^{i \pi x }$, $ e^{i \pi |y| sign(\omega )} =
e^{i \pi y }$ and $ e^{2i \pi x } = e^{2i \pi y } = 1$.
We consider this simpler case first;
the case of non-integer $x,y$ is discussed
in the next subsection.
Away from the diagonals $x=\pm y$ in ${\bf r}$-space, we obtain
from (\ref{rkk-inter}) the following expression:
\begin{eqnarray}
\chi({\bf r}) &=&
\frac {1}{ 4\pi^2 t^2}
\int_{\mu- i\infty}^{\mu+i\infty} \frac{dz}{2\pi i}
\left[ K_0^2 (\sqrt{-2z\rho}) + K_0^2 (\sqrt{2z\rho})
\right. \nonumber \\ && \left.
+ 2 e^{i\pi(x+y)} K_0(\sqrt{-2z\rho}) K_0(\sqrt{2z\rho})
\right]
\label{rkk-ne-0}
\end{eqnarray}
Here we have redefined the variable of integration, $\omega \to z = \mu+i \omega$.
As discussed above, if $\mu\rho > 0$ the function $K_0(\sqrt{-2z\rho})$
has the discontinuity (\ref{K2H}) at $z =\mu$.
On the contrary,
the second term in (\ref{rkk-ne-0}) is a continuous function
of $z$ in this case, and the corresponding integral vanishes, because one can
shift the integration contour to $z\to +\infty$, where
$K_0(z) \propto e^{-z}$.
At $\mu\rho<0$ the situation is reversed; hence one can
combine these two cases and cast the contribution of the first two terms in
(\ref{rkk-ne-0}) into the form \cite{Ab-St}
\begin{mathletters}
\label{rkk-ne-1}
\begin{eqnarray}
\chi_1({\bf r}) &=&
\frac {|\mu|}{ 8\pi t^2} \Phi_1(k_F^\ast r)
\\
\Phi_1(a) &=&
J_0(a) Y_0(a) + J_1(a) Y_1(a)
\end{eqnarray}
\end{mathletters}
with Bessel functions $J_n(x)$ and $Y_n(x)$
and
the direction-dependent value of the effective Fermi momentum
\begin{equation}
k_F^\ast =
\frac{\sqrt{|2\mu\rho|}}r
= \frac1a \sqrt{\left|\frac\mu t\cos(2\varphi) \right|} .
\label{def-kF}
\end{equation}
where we have restored the lattice parameter $a$ in the rhs.
Note that the expression (\ref{rkk-ne-1})
gives the RKKY interaction
for the cylindrical Fermi surface as well, in which case $k_F^\ast$ coincides
with the conventional Fermi momentum. \cite{rkky-anyD} The only
difference is the overall minus sign resulting from the sign-indefinite
property of the mass tensor, $det(\tensor{ {\rm m}}) = -1/(4t^2) < 0 $.
The asymptotes of this part of the RKKY interaction,
under the criteria (\ref{star2}) and (\ref{star3}),
are as follows:
\begin{mathletters}
\label{rkk-asymp1}
\begin{eqnarray}
\chi_1({\bf r}) &=&
\frac {|\mu|}{ 8\pi^2 t^2} \frac{\sin(2k_F^\ast r)}{(k_F^\ast r)^2}
,\quad k_F^\ast r \gg 1
\\ &=&
\frac {|\mu|}{ 4\pi^2 t^2} \ln k_F^\ast r
,\quad k_F^\ast r \ll 1
\end{eqnarray}
\end{mathletters}
Therefore in the limit of large distances the power-law
decrease of $\chi_1({\bf r})$ is accompanied by oscillations with
the direction-dependent wave-vector $2k_F^\ast$, in accordance with the usual
expectations, while for small $k_F^\ast r$ these oscillations
are replaced by a logarithmic singularity.
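Both limits in (\ref{rkk-asymp1}) are easily probed numerically: at large argument $\Phi_1$ oscillates with wave-vector $2$ under a $1/(\pi a^2)$ envelope, while at small argument it grows as $(2/\pi)\ln a$. A sketch (SciPy assumed; the envelope is compared up to the overall sign and phase convention):

```python
import numpy as np
from scipy.special import j0, j1, y0, y1

def phi1(a):
    """Phi_1(a) = J_0(a) Y_0(a) + J_1(a) Y_1(a), cf. Eq. (rkk-ne-1)."""
    return j0(a) * y0(a) + j1(a) * y1(a)

# Large a: oscillations ~ sin(2a) with a 1/(pi a^2) envelope.
a = 10.0
print(abs(phi1(a)), abs(np.sin(2 * a)) / (np.pi * a**2))  # same magnitude

# Small a: logarithmic growth, Phi_1(a2) - Phi_1(a1) ~ (2/pi) ln(a2/a1).
a1, a2 = 1e-3, 1e-2
print(phi1(a2) - phi1(a1), (2 / np.pi) * np.log(a2 / a1))
```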
The third term in (\ref{rkk-ne-0}) is the integral of the product of
Green's functions originating from
regions of the FS with different characters of dispersion.
As a result, one has an additional prefactor
of the form $\exp(i {\bf Q}_0 {\bf r}) = (-1)^{x+y}$ (for integer $x,y$).
The ``antiferromagnetic'' wave-vector $ {\bf Q}_0 = (\pi,\pi)$ appearing
here merely connects two regions in the Brillouin zone where the dispersion
is close to the Fermi level.
In this third term of Eq.~(\ref{rkk-ne-0}) the discontinuity
at $z = \mu$ exists for both signs of $\mu\rho$. The integration
over $z$ can again be performed, \cite{Ab-St} and after some calculation we obtain:
\begin{mathletters}
\label{rkk-ne-2}
\begin{eqnarray}
\chi_2({\bf r}) &=&
e^{i {\bf Q}_0 {\bf r}}
\frac {|\mu|}{ 4\pi^2 t^2} \Phi_2(k_F^\ast r)
\\
\Phi_2(a) &=&
\frac{J_0(a) K_1(a) - J_1(a) K_0(a)}{a}
\end{eqnarray}
\end{mathletters}
The asymptotes of this expression are
\begin{mathletters}
\label{rkk-asymp2}
\begin{eqnarray}
\chi_2({\bf r}) &=&
e^{i {\bf Q}_0 {\bf r}}
\frac { |\mu|\sqrt2}{ 4\pi^2 t^2}
\frac{\cos( k_F^\ast r)}{(k_F^\ast r)^2} e^{-k_F^\ast r}
,\quad k_F^\ast r \gg 1
\\ &=&
e^{i {\bf Q}_0 {\bf r}}
\frac {|\mu|}{ 4\pi^2 t^2} \frac 1{ (k_F^\ast r)^2 }
,\quad k_F^\ast r \ll 1
\end{eqnarray}
\end{mathletters}
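The two limits in (\ref{rkk-asymp2}) can be read off directly from the definition of $\Phi_2$ (SciPy assumed; the sample arguments are ours):

```python
import numpy as np
from scipy.special import j0, j1, k0, k1

def phi2(a):
    """Phi_2(a) = (J_0(a) K_1(a) - J_1(a) K_0(a)) / a, cf. Eq. (rkk-ne-2)."""
    return (j0(a) * k1(a) - j1(a) * k0(a)) / a

# Small a: power-law regime, Phi_2(a) -> 1/a^2.
a = 0.01
print(phi2(a) * a**2)  # close to 1

# Large a: exponentially small, Phi_2(a) ~ sqrt(2) cos(a) e^{-a} / a^2.
a = 3 * np.pi
print(phi2(a), np.sqrt(2) * np.cos(a) * np.exp(-a) / a**2)  # nearly equal
```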
Let us discuss the significance of this second part of the RKKY interaction.
First, one sees from
(\ref{rkk-asymp1}), (\ref{rkk-asymp2})
that in the far asymptotic region $k_F^\ast r \gg 1$ the
first term $ \chi_1({\bf r})$ dominates, while $\chi_2({\bf r})$ is exponentially
small.
In particular, this explains why the $\chi_2({\bf r})$ term
could not be obtained by the previous methods
\cite{rkky-ani} --- the RKKY interaction as expressed
in Ref.\ \cite{rkky-ani} was a series in powers of $1/r$, and the exponential
tail was evidently missing.
On the contrary, the term $\chi_2({\bf r})$ starts to play a decisive
role at smaller distances $k_F^\ast r \lesssim 1$, where it prevails
according to (\ref{rkk-asymp1}), (\ref{rkk-asymp2}).
We stress that the condition $k_F^\ast r > 1$ was essential for the
previous theories, while our expressions
are applicable under the weaker
conditions (\ref{star2}), (\ref{star3}). Therefore the part $\chi_2({\bf r})$
represents the {\em intermediate asymptote} of the RKKY interaction in
the nearly-nested situation.
Second, we note that $\chi_2({\bf r})$ has an antiferromagnetic, sign-reversal
character. This feature of the RKKY interaction at short distances
appears to be quite general. One can show for other types of dispersion
\cite{tobepub} that this kind of short-range oscillation
is always determined by the wave-vector connecting saddle points
in the band structure.
To clarify these two points further, we consider the case of perfect nesting
of the electronic spectrum, $\mu =0$, when the far asymptotic regime
is not realized. The first part of the RKKY interaction (\ref{rkk-ne-1})
immediately disappears, even though only
this term could be obtained by the former methods of $1/r$ expansion.
At the same time our second part of the RKKY interaction (\ref{rkk-ne-2}) survives.
As a result we have
\begin{eqnarray}
\label{perfect}
\chi({\bf r}) &=&
e^{i {\bf Q}_0 {\bf r}}
\frac {1}{ 4\pi^2 t \,| x^2 - y^2| }
\end{eqnarray}
This behavior reflects the log-squared singularity of the polarization
operator at the antiferromagnetic wave vector ${\bf Q}_0$ discussed,
e.g., by Dzyaloshinskii. \cite{dzya}
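As a cross-check, (\ref{perfect}) is recovered from the small-argument limit of (\ref{rkk-ne-2}): since $(k_F^\ast r)^2 = |\mu| |x^2-y^2|/t$, the factor $|\mu|$ cancels as $\mu\to 0$. A numerical sketch (SciPy assumed; the site and parameter values are ours):

```python
import numpy as np
from scipy.special import j0, j1, k0, k1

t = 0.5
x, y = 3.0, 1.0                       # a site away from the diagonals

def chi2(mu):
    """chi_2 of Eq. (rkk-ne-2) at site (x, y), without the (-1)^(x+y) prefactor."""
    a = np.sqrt(abs(mu) * abs(x**2 - y**2) / t)   # k_F^* r
    phi2 = (j0(a) * k1(a) - j1(a) * k0(a)) / a
    return abs(mu) / (4 * np.pi**2 * t**2) * phi2

# mu -> 0 reproduces the perfect-nesting result 1/(4 pi^2 t |x^2 - y^2|).
perfect = 1.0 / (4 * np.pi**2 * t * abs(x**2 - y**2))
print(chi2(1e-4), perfect)  # nearly equal
```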
To validate our analytical findings (\ref{rkk-ne-1}), (\ref{rkk-ne-2})
we performed a direct numerical
calculation of the RKKY interaction
on the square $|x|, |y| \leq 10$
with the spectrum given by (\ref{disp-nest}).
The results for $\mu = 0.1$ and $t= 0.5$ are shown in Figure
\ref{fig:num}.
In Figure \ref{fig:num}a we plot
the calculated value of the RKKY interaction versus the
``distance'' in the saddle point metric, $r^\ast = \sqrt{|x^2 - y^2|}$.
With this convention $k_F^\ast r = (k_F^\ast)_{max} r^\ast $,
so we can show the results for the whole plane in the simplest manner;
for the chosen parameters $(k_F^\ast)_{max} = \sqrt{\mu/t} \simeq 0.45 $.
On the same plot we have drawn the curves
$ \chi_1(k_F^\ast r) - \chi_2(k_F^\ast r)$ and
$ \chi_1(k_F^\ast r) + \chi_2(k_F^\ast r)$,
which are the predicted values of the interaction for the
odd and even sites, respectively. No additional parameters were used.
We see remarkable agreement between the calculated points
and the theoretical formulas.
As expected, at large distances oscillations in the RKKY interaction
are observed, while at smaller distances $r^\ast < \sqrt{t/\mu} \simeq 2.5$
the situation changes. The interaction for the ``odd'' sites
$x + y = 2n +1 $ (i.e.\ for ${\bf r} = (0,1), (1,2), (0,3)$\ldots) is of
antiferromagnetic (negative) sign.
The interaction for the ``even'' sites (${\bf r} = (0,2), (1,3), (0,4)$
{\em etc.}) has a tendency to be ferromagnetic. In both
cases the calculated points closely follow our curves
$ \chi_1({\bf r}) \pm \chi_2({\bf r})$ down to interatomic distances.
The RKKY interaction expected from the previously known
expressions (the term $ \chi_1(k_F^\ast r)$) is shown by a dashed
line.
To represent the region of larger $r^\ast$ more clearly, we multiplied the
calculated points $\chi({\bf r})$ by the appropriate values of $(r^\ast)^2$.
The same was done for the theoretical curves; the results
are shown in Figure \ref{fig:num}b. We see again that
at large $r^\ast$ the interaction is characterized by
the usually discussed asymptotic oscillations (\ref{rkk-asymp1}).
At the same time the difference between the
``odd'' and ``even'' sites is clear at the lower distances.
Note that $\chi({\bf r})$ on the diagonal $x=\pm y$ is not present in
Fig.\ \ref{fig:num} and cannot, in principle, be compared to
(\ref{rkk-ne-1}), (\ref{rkk-ne-2}) due to the criteria
(\ref{star2}), (\ref{star3}); we discuss it further in the next Section.
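For reference, the brute-force lattice evaluation behind such a test can be sketched as follows; this is our own minimal NumPy version of the Lindhard sum (\ref{rkk-conv}) on a finite $k$-grid at a small temperature (the grid size, temperature, and degenerate-term cutoff are our choices, not the paper's):

```python
import numpy as np

def chi_rkky(r, t=0.5, mu=0.0, T=0.1, N=32):
    """Static susceptibility chi(r): direct lattice version of the Lindhard
    sum for eps_k = -2t(cos kx + cos ky) - mu on an N x N k-grid."""
    ks = 2.0 * np.pi * np.arange(N) / N
    kx, ky = np.meshgrid(ks, ks, indexing="ij")
    eps = (-2.0 * t * (np.cos(kx) + np.cos(ky)) - mu).ravel()
    n = 1.0 / (np.exp(eps / T) + 1.0)                  # Fermi function
    phase = np.exp(1j * (kx * r[0] + ky * r[1])).ravel()
    de = eps[None, :] - eps[:, None]                   # eps_q - eps_k
    dn = n[:, None] - n[None, :]                       # n_k  - n_q
    with np.errstate(divide="ignore", invalid="ignore"):
        kernel = np.where(np.abs(de) > 1e-12,
                          dn / de,                     # (n_k - n_q)/(eps_q - eps_k)
                          n[:, None] * (1.0 - n[:, None]) / T)  # degenerate limit
    weight = np.real(np.conj(phase)[:, None] * phase[None, :] * kernel)
    return weight.sum() / N**4

# Perfect nesting, mu = 0: odd sites come out antiferromagnetic (chi < 0),
# even sites ferromagnetic (chi > 0), cf. Eq. (perfect).
print(chi_rkky((1, 0)), chi_rkky((2, 0)))
```

At $\mu=0$ the computed signs reproduce the staggered pattern of Eq.~(\ref{perfect}): negative on odd sites, positive on even sites.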
\subsection{non-integer values of r}
Let us now extend our analysis to the case of non-integer
values of $x, y$. One easily notes that now the factors of the
type $\exp[{i\pi |x| sign(\omega )}]$ in (\ref{gf0pi}), (\ref{gfpi0}),
(\ref{appeq:Bzb2}) produce another source of
discontinuity at $\omega = 0 $, in addition to the previously
discussed one in $\sqrt{2\rho (\mu + i \omega)}$.
In particular, both the first and second terms in (\ref{rkk-ne-0})
acquire the factors $e^{ \pm 2i\pi |x|} \neq 1$ and
$ e^{ \pm 2i\pi |y|} \neq 1$,
respectively.
Therefore both terms now contribute, although in a different
manner.
Consider first the case $|x| > |y|$ and $\mu <0$, which implies
$\rho\mu <0$ and a closed FS.
A straightforward calculation \cite{Ab-St} then shows
that the expressions
(\ref{rkk-ne-1}), (\ref{rkk-ne-2}) are generalized as follows:
\begin{mathletters}
\label{noninteger}
\begin{eqnarray}
\chi_1({\bf r}) &=&
\frac {|\mu|}{ 8\pi t^2} \left[
\cos( 2\pi|x| ) \Phi_1(k_F^\ast r)
\right. \nonumber \\ &&
+ \sin( 2\pi|x| ) \Phi_3^{(1)}(k_F^\ast r)
\\ && \left.
+ \sin( 2\pi|y| ) \Phi_3^{(2)}(k_F^\ast r)
\right] \nonumber \\
\chi_2({\bf r}) &=&
\frac {|\mu|}{ 4\pi^2 t^2} \left[
\cos( \pi|x| + \pi|y|) \Phi_2(k_F^\ast r)
\right. \nonumber \\ && \left.
+ \sin( \pi|x| + \pi|y|) \Phi_4(k_F^\ast r)
\right]
\end{eqnarray}
\end{mathletters}
with the functions
\begin{mathletters}
\label{nonint-fu}
\begin{eqnarray}
\Phi_3^{(1)}(a) &=&
\frac12 [ Y_0^2(a) - J_0^2(a) + Y_1^2(a) - J_1^2(a) ]
\\
\Phi_3^{(2)}(a) &=&
\frac2{\pi^2} [ K_0^2(a) - K_1^2(a) ]
\\
\Phi_4(a) &=&
\frac{Y_0(a) K_1(a) - Y_1(a) K_0(a)}{a}
\end{eqnarray}
\end{mathletters}
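As a numerical aside (a sketch, not part of the original analysis), the special-function combinations (\ref{nonint-fu}) can be evaluated directly from standard integral representations of the Bessel functions $J_n$, $Y_n$, $K_n$, using only the Python standard library. The assertions illustrate the statement below: $\Phi_3^{(2)}$, built from the exponentially decaying $K_n$, is negligible at large argument, while $\Phi_3^{(1)}$ decays only as a power law.

```python
import math

def simpson(f, a, b, n=4000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3.0

# Bessel functions from standard integral representations
def J(n, a):
    return simpson(lambda t: math.cos(n * t - a * math.sin(t)), 0.0, math.pi) / math.pi

def Y(n, a):
    osc = simpson(lambda t: math.sin(a * math.sin(t) - n * t), 0.0, math.pi)
    dec = simpson(lambda t: math.exp(n * t - a * math.sinh(t))
                  + (-1) ** n * math.exp(-n * t - a * math.sinh(t)), 0.0, 10.0)
    return (osc - dec) / math.pi

def K(n, a):
    return simpson(lambda t: math.exp(-a * math.cosh(t)) * math.cosh(n * t), 0.0, 10.0)

# spot checks against tabulated values
assert abs(J(0, 1.0) - 0.7651976866) < 1e-6
assert abs(Y(0, 1.0) - 0.0882569642) < 1e-6
assert abs(K(0, 1.0) - 0.4210244382) < 1e-6
assert abs(K(1, 1.0) - 0.6019072302) < 1e-6

def Phi3_1(a):   # power-law decay at large a
    return 0.5 * (Y(0, a)**2 - J(0, a)**2 + Y(1, a)**2 - J(1, a)**2)

def Phi3_2(a):   # exponentially small at large a (modified Bessel functions)
    return (2.0 / math.pi**2) * (K(0, a)**2 - K(1, a)**2)

def Phi4(a):
    return (Y(0, a) * K(1, a) - Y(1, a) * K(0, a)) / a

# at k_F* r = 6 the K-built combination is already negligible
assert abs(Phi3_2(6.0)) < 1e-6
assert abs(Phi3_1(6.0)) > 1e-3
```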
The different terms appearing in (\ref{noninteger}) have different
significance at large and small $k_F^\ast r$.
In the far asymptotic regime $k_F^\ast r \gtrsim 1$
the terms $\Phi_3^{(2)}, \Phi_2, \Phi_4$ are exponentially small and
we find:
\begin{eqnarray}
\chi({\bf r}) \propto
\frac{\sin (2\pi|x| - 2k_F^\ast r) }
{ (k_F^\ast r)^2} , \quad k_F^\ast r \gtrsim 1
\label{far}
\end{eqnarray}
The period of oscillation in the above expression
corresponds to the notion of the calipering points
on the FS. \cite{rkky-ani}
Recall that these are the points where the direction
of the normal to the
Fermi surface is (anti)parallel to the direction of ${\bf r}$.
In other words,
the normal to the FS coincides with the direction of the Fermi velocity
${\bf v} = (v_x, v_y)$ and it is parallel to ${\bf r} = r (
\cos \varphi, \sin \varphi)$ provided
\begin{equation}
\label{calpoint}
v_x \sin \varphi = v_y \cos \varphi.
\end{equation}
Near the saddle points $(\pm\pi,0)$ one has $v_x / v_y = - k_x / k_y $ and
$ k_x^2 - k_y^2 = -\mu/t $. Therefore the calipering points, satisfying
the condition (\ref{calpoint}), are given by $\widetilde {\bf k}^c =
\pm ( \cos \varphi, - \sin \varphi) [-(t/\mu) \cos 2 \varphi ]^{-1/2}$
near the points $(\mp\pi,0)$, respectively.
The wave-vectors were measured from the saddle points; therefore the true
caliper of the Fermi surface is given by the vector
${\bf k} ^c = (2\pi, 0 ) - 2 \widetilde {\bf k}^c$. The scalar product ${\bf k}^c {\bf r}$
is exactly what one finds in
the Eq.(\ref{far}) since $\widetilde {\bf k}^c {\bf r} =
r \sqrt{-\mu \cos(2\varphi) /t} \equiv k_F^\ast r$.
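The calipering-point algebra above can be checked numerically. The short sketch below (the parameters $t=1$, $\mu=-0.2$, $\varphi=0.3$ are illustrative assumptions, not values from the paper) verifies that $\widetilde{\bf k}^c$ lies on the FS $k_x^2-k_y^2=-\mu/t$, satisfies condition (\ref{calpoint}) with ${\bf v}\propto(k_x,-k_y)$, and reproduces the phase $k_F^\ast r$.

```python
import math

# illustrative parameters (closed FS: t > 0, mu < 0), sector |phi| < pi/4
t, mu = 1.0, -0.2
phi = 0.3
r = 5.0

C = (-(t / mu) * math.cos(2 * phi)) ** -0.5
kx, ky = C * math.cos(phi), -C * math.sin(phi)   # calipering point near (-pi, 0)

# (i) the point lies on the Fermi surface k_x^2 - k_y^2 = -mu/t
assert abs((kx**2 - ky**2) - (-mu / t)) < 1e-12

# (ii) calipering condition v_x sin(phi) = v_y cos(phi), with v ~ (k_x, -k_y)
vx, vy = kx, -ky
assert abs(vx * math.sin(phi) - vy * math.cos(phi)) < 1e-12

# (iii) k~c . r reproduces the phase k_F^* r = r sqrt(-mu cos(2 phi)/t)
dot = kx * r * math.cos(phi) + ky * r * math.sin(phi)
kF_star = math.sqrt(-mu * math.cos(2 * phi) / t)
assert abs(dot - kF_star * r) < 1e-12
```

All three identities hold algebraically for any $|\varphi|<\pi/4$; the chosen numbers only make the check concrete.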
At smaller distances $k_F^\ast r \lesssim 1$ it is the terms
$\Phi_3^{(1)}, \Phi_3^{(2)}, \Phi_2$ that determine the main contribution
to $\chi({\bf r})$. In this case one obtains:
\begin{eqnarray}
\chi({\bf r}) &\simeq&
\frac{\pi \cos (\pi|x| +\pi|y|)
+ \sin 2\pi|x| - \sin 2\pi|y| }
{ 4 \pi^3 t |x^2 - y^2 | } ,
\label{near}\\
&& \quad k_F^\ast r \lesssim 1
\nonumber
\end{eqnarray}
We see again that the interaction has a commensurate period of
oscillations, although the oscillations for non-integer ${\bf r}$
are not described by a unique factor, as was the case in
Eqs.\ (\ref{rkk-asymp2}b), (\ref{far}).
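The commensurability is easy to check numerically: the numerator of (\ref{near}) is invariant under a shift of $x$ or $y$ by two lattice spacings. A small sketch (the test point is arbitrary, not taken from the paper):

```python
import math

def numer(x, y):
    # numerator of the near-field expression (\ref{near})
    return (math.pi * math.cos(math.pi * (abs(x) + abs(y)))
            + math.sin(2 * math.pi * abs(x)) - math.sin(2 * math.pi * abs(y)))

x, y = 1.37, 0.52   # arbitrary non-integer test point
# commensurate (period-2) oscillations: shifts by two lattice spacings
assert abs(numer(x + 2, y) - numer(x, y)) < 1e-9
assert abs(numer(x, y + 2) - numer(x, y)) < 1e-9
```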
\section{the flat parts of the spectrum}
\label{app:flat}
Let us discuss here the contribution to the RKKY interaction produced
by the flat parts of the spectrum (\ref{exp4}). First we observe that
the formalism developed in the main part of the paper cannot be applied
to the vicinities of the points $(\pm\pi/2,\pm\pi/2)$ since all the
components of the mass tensor are infinite at these points. This
very special case should be treated separately; one can also distinguish
here the regions of intermediate and far asymptotes.
At the intermediate distances $r \lesssim |t/\mu|$ we obtain the
Green's function
from the vicinities of $( \pi/2, \pi/2)$ and $( -\pi/2, -\pi/2)$ in
the form
\begin{equation}
G_{ (\pi/2,\pi/2)}(i\omega, {\bf r} ) =
\frac{ e^{-i\alpha} }{ 2 t}
\delta_\kappa( x- y)
\exp{ \left[ i |x|(\pi + z/2t ) sign(\omega ) \right] }
\label{gf-cBz}
\end{equation}
and the corresponding Green's function from the
vicinities of $( \pi/2, -\pi/2)$ and $( -\pi/2, \pi/2)$ is obtained
from this expression by the replacement $y \to - y$.
The function $\delta_\kappa(x)$ in (\ref{gf-cBz})
has $\delta$-function-like properties and is defined by
\begin{equation}
\delta_\kappa(x) =
\frac{\sin \kappa x}{\pi x}, \quad \kappa \sim \frac1a
\label{appeq:delta}
\end{equation}
We wish to point out that the power-law
decrease of $\delta_\kappa(x)$ at large $x$ stems from our assumption that
the dispersionless character along $x$ is lost abruptly at $|k_x| > \kappa$.
In fact the expression (\ref{appeq:delta}) is the Fourier transform of
$\theta(\kappa - |k_x|)$. In general the dependence of the dispersion on
$k_x$ is much smoother; as a result, the decay of $\delta_\kappa(x)$ at
large $x$ should be much faster, while the $\delta$-function-like
property is preserved.
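The two defining properties of $\delta_\kappa(x)$ — that it is the Fourier transform of $\theta(\kappa-|k_x|)$ and that it integrates to unity — can be verified with a simple quadrature. A self-contained sketch:

```python
import math

kappa = 1.0

def delta_kappa(x):
    return kappa / math.pi if x == 0 else math.sin(kappa * x) / (math.pi * x)

def simpson(f, a, b, n=20000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3.0

# Fourier transform of the box: (1/2pi) * Int_{-kappa}^{kappa} e^{ikx} dk
for x in (0.7, 2.3, 5.1):
    ft = simpson(lambda k: math.cos(k * x), -kappa, kappa) / (2 * math.pi)
    assert abs(ft - delta_kappa(x)) < 1e-9

# delta-function-like normalization: Int delta_kappa(x) dx -> 1
norm = simpson(delta_kappa, -400.0, 400.0)
assert abs(norm - 1.0) < 0.01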
We see that the above Green's function has sizeable values only in a band
$|x - y| \lesssim 1$. Outside this domain the principal
contribution to the total $G(i\omega, {\bf r})$ (\ref{G-total})
is delivered by $G_{(0,\pi)}(i\omega, {\bf r} )$ and $G_{(\pi,0)}(i\omega,
{\bf r} )$, Eqs.\ (\ref{gf0pi}) and (\ref{gfpi0}).
In particular, this means (see Fig.\ \ref{fig:regions}) that the terms
of the type $ G_{ (\pi/2,\pi/2)} G_{(0,\pi)} $ in the expression
(\ref{rkk-ne-0}) should not be considered.
As a result the contributions to the RKKY interaction from the flat parts
of the FS acquire the following form:
\begin{eqnarray}
\chi_{flat} ({\bf r} ) &=&
\frac{\cos x(2\pi + \mu/t )}{ 4\pi t |x|}
\left[ \delta_\kappa^2( x- y) + \delta_\kappa^2( x+ y)
\right],
\label{chi-flat} \\
&& \quad |x|,|y| \lesssim |t/\mu|
\nonumber
\end{eqnarray}
Here the two terms in the square brackets correspond to different
regions in ${\bf r}$-space.
We see that this part of the interaction, which is present along the
diagonals, decays slowly as $1/r$. Its amplitude, according to
(\ref{appeq:delta}), contains the model cutoff parameter $\kappa^2$. Hence we
cannot directly compare this part of RKKY with the results of our
numerical calculations, although the overall $1/r$ dependence of RKKY
interaction along the diagonals and slow oscillations are verified by
the numerical data as well. At small integer values of
$x=y$ the RKKY term (\ref{chi-flat}) corresponds to the
ferromagnetic sign of the interaction between the localized moments. This
behavior, however, does not by itself define the particular type of magnetic
ordering; it is the term (\ref{perfect}) which determines it.
At extremely large distances close to the diagonal, $r\gg |t/\mu|$, the dropped
$k^3$-terms in the expansions (\ref{exp3}), (\ref{exp4})
become important. Hence the spectrum becomes essentially two-dimensional,
with the corresponding
change in the character of the RKKY interaction. Near the diagonals
$\varphi = \pm\pi/4 + \phi $ one has:
\begin{equation}
\chi_{flat} ({\bf r} ) =
-\frac{\sin |x|(2\pi + \mu/t (1- \phi^2/\phi_0^2) )}
{ 4\pi^2 |\mu| x^2 [1+ \phi^2/\phi_0^2 ] }
,\quad |x| \gg \frac t {|\mu|}
\label{chi-flat-far}
\end{equation}
This far asymptote of the RKKY interaction from the flat
parts of the dispersion holds in the narrow sectors near the diagonals
$ |\phi| \lesssim \phi_0 = \sqrt2 |\mu/4t|$
(cf. (\ref{star2}), (\ref{star3}) and Fig.\ \ref{fig:regions}). It has
a $1/r^2$ dependence, while its period of oscillations corresponds
to the notion of the calipering points discussed above.
\section{concluding remarks}
It is worthwhile to compare our expressions for the Green's
function with previous results.
It was observed (see e.g. \cite{economou}) that for
the tight-binding spectrum (\ref{disp-nest})
one can use recurrence relations to express the
value of $G(\omega, {\bf r})$ in terms of $G(\omega, 0)$.
It was also noted, however, that these relations suffer from
numerical instabilities at large $r$. Alternatively
$G(\omega, {\bf r})$ can be estimated
at large $r$ by the steepest descent method. The solution
obtained by this latter method corresponds to the asymptotes of Eqs.
(\ref{g-mcdo}), (\ref{K2H}) for the
case of large $\sqrt{2\mu\rho} = k_F^\ast r$.
In this sense our expressions extend the
previous findings for the Green's function and provide analytical
formulas in the region of intermediate
distances $1\lesssim r \lesssim 1/k_F^\ast$.
Let us briefly discuss here the role
of finite temperatures for our treatment.
In this case instead of the integral (\ref{rkk-ne-0})
one considers the sum over the Matsubara frequencies
(\ref{rkk-inter}) with the Green's functions given by (\ref{gf0pi}),
(\ref{gfpi0}).
With the use of analytical continuation, this sum can be represented
as the integral over the real axis of $\omega$.
One can note, however, directly from the
form of the Green's functions that the
effect of finite temperatures
becomes important when $T$ exceeds the effective
chemical potential $\mu$. At large distances $ r \gtrsim \xi =
(k_F^\ast)^{-1}\sqrt{\mu/T}\propto \sqrt{t/T}$
the RKKY interaction is exponentially suppressed. \cite{fnote3}
The opposite case $ r \lesssim \xi$
corresponds essentially to the case $\mu = 0 $ described by the
Eq.(\ref{perfect}). Therefore the far asymptote of RKKY leading to
possible incommensurate magnetic ordering is absent in this case and we
remain with the only tendency to commensurate AF order.
In conclusion, we have found closed analytic expressions for the
RKKY interaction in a layered metal with a nearly nested
Fermi surface. Along with the usual $2k_F$-oscillations
realized at far distances, we demonstrate the existence
of an intermediate asymptote of the interaction. This latter
asymptote has the commensurate AF type of oscillations and is the only
term surviving at exact nesting.
We show that our analytical formulas are in good agreement
with the numerically found values of the interaction in a range
down to nearly interatomic distances.
\acknowledgements
We thank A. Furrer, M. Kiselev, F. Onufrieva, P. Pfeuty
for useful discussions.
The financial support of this work from the Russian
Foundation for Basic Research (Grant No.\
96-02-18037-a) and the Russian State Programme for Statistical Physics
is acknowledged.
One of us (D.N.A.) thanks LNS at ETH\&PSI for the hospitality extended
to him during his visit there.
\section{Introduction}
Studies of the cosmic microwave background (CMB, Penzias and Wilson
(1965)), the afterglow radiation of the Big Bang, are currently in
a period of renaissance after the breakthrough discovery of
anisotropy by the \texttt{COBE} mission (Smoot et al (1992)).
Confirmed with much improved resolution and statistics by
\wmap~(Hinshaw et al (2009)), the phenomenon provides vital
information on the primordial `seeds' of structure formation. The
anisotropy is attributed to frequency shift of CMB light induced by
these `seed' density perturbations, which has the unique property
that it leads to changes in the temperature of the black body
spectrum and not the shape of it. The CMB has maximum anisotropy
power at the 1$^\circ$ scale, or harmonic number $\ell \approx$ 220,
with lower amplitude secondary and tertiary peaks at higher $\ell$.
The $\Lambda$CDM cosmological model (Spergel et al(2007)) explains
the entire power spectrum remarkably using six parameters, by
attributing the peaks to acoustic oscillations of baryon and dark
matter fluids, as long wavelength modes of density contrast enter
the horizon and undergo causal physical evolution. CMB light
emitted from within an overdense region of the oscillation are
redshifted by a constant fractional amount, resulting in a cold
spot, which is a lowering by $\delta T$ of the black body
temperature $T$, and is frequency independent, i.e. $\delta T/T =
\delta\nu/\nu =$ constant. The opposite effect of blueshift applies
to underdense regions, leading to hot spots. Therefore, if the
anisotropy is genuinely due to acoustic oscillations, the inferred
change in $T$ at a given region should be the same for all the
`clean' frequency passbands of the \wmap~ mission. Since a
corresponding variation of the CMB flux $B(\nu, T)$ at any given
frequency $\nu$ is $\delta B = (\partial B/\partial T)\delta T$ if
the cause is solely $\delta T$ with no accompanying distortion of
the functional form of $B$ itself, the expected $\delta B$ at
constant $\delta T$ is then the `dipole spectrum' $\partial
B/\partial T$ which is well measured by \texttt{COBE-FIRAS} (Mather
et al(1994)). Moreover, the \wmap~data are calibrated w.r.t. this
dipole response.
A noteworthy point about the acoustic peaks is that one needs to
employ the technique of cross correlation to reduce the noise
contamination at high $\ell$, especially the harmonics of the second
and higher acoustic peaks. Specifically one computes the all-sky
cross power spectrum \beq C_{\ell}^{ij} = \frac{1}{2\ell+1}
\sum_{m}~ a_{\ell m}^i a_{\ell m}^{j*}, \eeq where the indices $i$
and $j$ denote independent data streams with uncorrelated noise that
arise from a pair of maps at different frequency bands (or same band
but taken at different times), and $a_{\ell m}^i = \delta T_{\ell
m}^i$ is the apparent CMB temperature anisotropy for the spherical
harmonics $(\ell, m)$ as recorded by observation $i$. Since the use
of multiple passbands is crucial to the accurate profiling of the
acoustic oscillations, it is important that we do compare them with
care, down to the level of measurement uncertainties. Only {\it a
priori} statistically consistent maps should be cross correlated, in
the sense that any real discrepancies between such maps may carry
vital information about new physical processes that their cross
power spectrum does not reveal. In one previous attempt to address
this point (see Figure 9 of Bennett et al(2003a)) \texttt{WMAP1}
data downgraded to an angular resolution commensurate with
\texttt{COBE}~were used to produce a difference (subtraction) map
between the two missions. When displayed side by side with the map
of the expected noise for each resulting pixel, the two maps did
appear consistent. Nevertheless, this powerful method of probing the
CMB anisotropy does, in the context of the specific datasets used by
Bennett et al(2003a), suffer from one setback: it is limited by the
sensitivity and resolution of \texttt{COBE}.
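As an illustration of why the cross power spectrum of Eq.\ (1) is preferred at high $\ell$, the toy simulation below (an assumption-laden sketch: real-valued Gaussian amplitudes and equal signal and noise power, in arbitrary units, not \wmap~data) shows that the noise bias cancels in the cross spectrum of two channels with independent noise, but not in an auto spectrum.

```python
import random

random.seed(0)
ell = 220
S, N = 1.0, 1.0            # signal and per-channel noise power (toy units)
nreal = 400                # average over realizations to beat down scatter

cross_sum = auto_sum = 0.0
for _ in range(nreal):
    cross = auto = 0.0
    for m in range(-ell, ell + 1):
        a = random.gauss(0.0, S ** 0.5)           # common CMB amplitude a_{lm}
        a1 = a + random.gauss(0.0, N ** 0.5)      # channel i: independent noise
        a2 = a + random.gauss(0.0, N ** 0.5)      # channel j: independent noise
        cross += a1 * a2
        auto += a1 * a1
    cross_sum += cross / (2 * ell + 1)
    auto_sum += auto / (2 * ell + 1)

C_cross = cross_sum / nreal    # -> S      (noise bias cancels)
C_auto = auto_sum / nreal      # -> S + N  (noise bias remains)

assert abs(C_cross - S) < 0.05
assert abs(C_auto - (S + N)) < 0.05
```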
In another test of a similar kind, we observe that each amplitude
$a_{\ell m}^i$ can further be factorized as $a_{\ell m}^i = a_{\ell
m} b_{\ell}^i$, where the array $b_{\ell}^i$ accounts for the
smoothing effects of both the beam and the finite sky map pixel
size, and $a_{\ell m} = \delta T_{\ell m}$ is the true amplitude of
the CMB anisotropy. The results (see Figure 13 of Hinshaw et al
(2007)) indicate agreement of the variance $C_{\ell}^{ij}$, hence
$\delta T_{\ell}$, within the margin of a few percent for $\ell
\lesssim$ 400 among the many cross power spectra formed by the
various possible combinations of pairs of all-sky maps. This offers
more ground for optimism, but to be definitive the remaining
discrepancy needs to be demonstrably attributed to noise,
instrumental systematics, or foreground emission.
The purpose of our investigation is to perform further, more
revealing comparisons than the two past ones described above,
initially by focussing upon the angular scale of the first acoustic
peak, which is $\sim$ 1$^\circ$. Our analysis will be done in both
real (angular) and harmonic domains, because while most of the
effort has hitherto been pursued in the latter, the former is the
domain in which the raw data were acquired and organized.
\section{The all sky difference map between the \fiver~V and W
bands}
We adopted the \texttt{Healpix}\footnote{See
\texttt{http://healpix.jpl.nasa.gov}.} pixelization scheme to ensure
that all pixels across the sky have the same area (or solid angle).
Firstly the W band data is smoothed to the V band resolution. Then
the whole sky map is downgraded to $\approx$ 1$^\circ$ diameter
(corresponding to \texttt{nside} of 64 in the parametrization of the
\wmap~database), which is not only commensurate with the scale of
global maximum $\delta T$ power, but also large enough to prevent
data over-sampling due to the use of too high a resolution, as the
size is comfortably bigger than the beam width of the \wmap~V band
(61 GHz), which is larger than that of the W band.
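A quick sanity check (a sketch; the pixel-count formula $N_{\rm pix}=12\,\texttt{nside}^2$ is the standard \texttt{Healpix} convention) confirms that \texttt{nside}$=64$ corresponds to pixels of roughly $1^\circ$, and that the $\approx$ 35,000 clean pixels quoted below amount to a $\sim$70\% unmasked sky fraction.

```python
import math

nside = 64
npix = 12 * nside**2                       # Healpix pixel count
omega = 4 * math.pi / npix                 # equal solid angle per pixel (sr)
size_deg = math.degrees(math.sqrt(omega))  # characteristic linear pixel size

assert npix == 49152
assert 0.9 < size_deg < 1.0                # about one degree, as stated

clean_fraction = 35000 / npix              # clean pixels quoted in the text
assert 0.68 < clean_fraction < 0.75
```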
The resulting $\delta T$ values for the two cosmological passbands
of V and W, span $\approx$ 35,000 clean (i.e.
\kp-masked\footnote{\kp~is short for \texttt{external temperature
analysis}.} and foreground
subtracted\footnote{For foreground subtracted \fiver~maps see\\
\indent\texttt{http://lambda.gsfc.nasa.gov/product/map/current/m\_products.cfm}.})
pixels, from which a $V-W$ difference map at this $\approx$
1$^\circ$ resolution was made. After removing the monopole and
dipole residuals (the latter aligned with the original \texttt{COBE}
dipole), this map is displayed in Figure \ref{mapVWmK} along with
the corresponding pixel noise map for reference; the latter
represents the expected appearance of the $V - W$ map if the CMB
anisotropy is genuinely acoustic in nature, so that the map would
consist only of null pixels should the \fiver~instruments that
acquired them be completely noise free. When comparing the real
data map of Figure \ref{mapVWmK}a with the simulated map of Figure
\ref{mapVWmK}b, the former appears visibly noisier on the resolution
scale $\approx$ 1$^\circ$; moreover, the Leo and Aquarius (i.e. the
first and third) sky quadrants contain more cold pixels than the
other half of the sky, indicative of the existence of a quadrupole
residual.
The extra signals revealed by the $V - W$ subtraction map are
elucidated further in respect of their aforementioned properties by
examining the statistical distribution of the pixel values across
the four sky quadrants. As shown in Figure \ref{hist_four_VW}, the
distribution of the 1$^\circ$ anisotropy is considerably wider than
that expected from the \fiver~pixel noise for all the quadrants, by
$\approx$ 10 $\mu K$, which is $\sim$ 10 \% of the $\approx$ 75 $\mu
K$ power in the first acoustic peak, and is therefore very
significant. A detailed confirmation by Gaussian curve fitting is
given in Table \ref{VWmusigma}.
The $V - W$ quadrupole is more subtle, and is evident in the
residual plots at the bottom of each graph in Figure
\ref{hist_four_VW}, from which a slight skewness of the data to the
right is apparent in quadrants 1 and 3 (the quadrants of the CMB
dipole), with 2 and 4 exhibiting the opposite behavior. For this
reason, the effect does not manifest itself as shifts in the
Gaussian mean value $\mu$ of Table \ref{VWmusigma}. Rather, the
high statistical significance of both the quadrupole and the
degree-scale signals, with the former having a magnitude of
$\approx$ 1 $\mu K$, are established by computing the cross power
spectra of the temperature difference maps, Figure \ref{psVW}. This
was performed at the resolution of \texttt{nside}$=$ 64 using the
\texttt{PolSpice} software\footnote{Available from
\texttt{http://www.planck.fr/article141.html}.}. From Figure
\ref{psVW} also, the presence of excess non-acoustic anisotropy at
all harmonics $\ell > 2$, including the cosmologically important
$\theta\approx 1^\circ$ angular scale, appears robust. At the
$1^\circ$ scale ($200 \lesssim \ell \lesssim 300$), the r.m.s. is
about 7 $\mu K$, or 10 \% of the maximum CMB anisotropy. Lastly, the
$V-W$ quadrupole may be displayed in isolation by arranging the data
of the subtracted map as a multipole expansion \beq \delta T
(\theta,\phi) = \sum_{\ell,m} a_{\ell m} Y_{\ell m} (\theta, \phi),
\eeq and evaluating at $\ell=2$ the amplitude \beq \delta T_\ell
(\theta,\phi) = \sum_m a_{\ell m} Y_{\ell m} (\theta, \phi), \eeq
(note $\delta T_\ell (\theta,\phi)$ is always a real number if the
original data $\delta T (\theta,\phi)$ are real). The ensuing whole
sky map is in Figure \ref{VW_alm}, and the coordinates of the axes
are in Table \ref{poleaxis}.
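The remark that $\delta T_\ell(\theta,\phi)$ is always real for real input data can be checked directly using the explicit $\ell=2$ spherical harmonics and a simple quadrature. A self-contained sketch (the test map $f$ is an arbitrary real function, not \wmap~data):

```python
import cmath
import math

def Y2(m, th, ph):
    # explicit l = 2 spherical harmonics
    if m == 0:
        return math.sqrt(5 / (16 * math.pi)) * (3 * math.cos(th)**2 - 1)
    if abs(m) == 1:
        s = -1 if m == 1 else 1
        return s * math.sqrt(15 / (8 * math.pi)) * math.sin(th) * math.cos(th) \
               * cmath.exp(1j * m * ph)
    return math.sqrt(15 / (32 * math.pi)) * math.sin(th)**2 * cmath.exp(1j * m * ph)

def f(th, ph):                       # a real "temperature map"
    return math.sin(th)**2 * math.cos(2 * ph) + 0.5 * math.cos(th)

# a_{2m} = Int f Y2m^* dOmega, midpoint-rule quadrature on a (th, ph) grid
nth, nph = 200, 400
dth, dph = math.pi / nth, 2 * math.pi / nph
a = {}
for m in range(-2, 3):
    tot = 0.0 + 0.0j
    for i in range(nth):
        th = (i + 0.5) * dth
        for k in range(nph):
            ph = (k + 0.5) * dph
            tot += f(th, ph) * Y2(m, th, ph).conjugate() * math.sin(th)
    a[m] = tot * dth * dph

# reality condition for a real map: a_{2,-m} = (-1)^m a_{2m}^*
for m in (1, 2):
    assert abs(a[-m] - (-1)**m * a[m].conjugate()) < 1e-6

# delta T_2 is real and equals the l = 2 content of f (cos(th) is pure l = 1)
th0, ph0 = 1.1, 0.7
dT2 = sum(a[m] * Y2(m, th0, ph0) for m in range(-2, 3))
assert abs(dT2.imag) < 1e-8
assert abs(dT2.real - math.sin(th0)**2 * math.cos(2 * ph0)) < 1e-3
```

The pairing of $m$ and $-m$ terms into complex conjugates is what makes the sum real, exactly as noted after Eq.\ (3).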
\section{Interpretation of results}
The \fiver~$V-W$ map reveals two principal anomalies to be
explained: (a) the quadrupole at $\ell =2$, with an amplitude of
$\approx 1 \mu K$, and (b) the higher harmonic signals, especially
the $\approx 8~\mu K$ anisotropy at $\ell \gtrsim$ 200 (Figure
\ref{psVW}). Similar findings are also made by others, like the
noticeable hemispherical power asymmetry in the {\texttt{WMAP1}}~analysis of
Eriksen et al (2004) and confirmed in the \fiver~data by Hoftuft et
al (2009), or the large scale distribution investigated by Diego et
al (2009). Also because both (a) and (b) are not small effects,
claims to precision cosmology are overstatements until they are
properly accounted for and the cosmological model accordingly
adjusted.
Concerning (a), unlike the dipole, there is no previous known CMB
quadrupole of sufficient amplitude to justify its dismissal as a
cross band calibration residual. In fact, our reported amplitude of
$1~\mu$K is about 7 \% of the 211 $\mu$K$^2$ \fiver~anisotropy in
the unsubtracted maps of the individual bands at $\ell=$ 2, which is
far larger than the calibration uncertainty of $\approx$ 0.5 \%
(Hinshaw et al (2009)) for each band.
It will probably be more rewarding to search for remaining
foreground contamination not yet removed by the standard data
filtering and correction procedures of the \fiver~team (Bennett et
al(2003b), Gold et al(2009)). Thermal dust emission might have a
power law spectrum with an index too close to that of the
Rayleigh-Jeans tail in the V and W bands for an appreciable V - W
signal, although this is an interesting scenario worthy of further
study (Diego et al 2009). We consider here another possibility,
viz. free-free emission from High Velocity Clouds (HVCs, Wakker et
al (2009) and references therein). The clouds are moving at
velocities sufficiently large for any H$\alpha$ emission from them
to be outside the range\footnote{An example of a HVC missed by
\texttt{WHAM} is given in Hill et al 2009: a cloud of unit emission measure.
A notable exception (counter example) would be the HVC K-complex
(Haffner et al 2001), with an emission measure of 1.1 units, that
happens to fall inside the velocity window of \texttt{WHAM}.} of the
\texttt{WHAM} survey, the database employed to estimate the
free-free contribution to the \texttt{WMAP} foreground. HVC
parameters for the larger and brighter clouds can reach $n_e
\approx$ 0.2 cm$^{-3}$ and column density $\approx$
3~$\times$~10$^{19}$~cm$^{-2}$ (Wakker et al 2008). This
corresponds to an emission measure of two units, or
6~$\times$~10$^{18}$~cm$^{-5}$, or $\approx$ 0.6~$\mu K$ of V-W
temperature excess (Finkbeiner D.P. (2003)), on par with the 1~$\mu
K$ of our observed quadrupole. Moreover, as can be seen from the
all-sky map of $N_{{\rm HI}}$ and an estimate of the V-W excess in
Figure \ref{HVCs} when they are compared with Figures \ref{VW_alm}
and \ref{psVW}, the strength and distribution of HVCs do appear to
be responsible for a non-negligible fraction of the observed anomaly
on very large scales. Further work in this area is clearly
necessary, and will be pursued in a future, separate paper.
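The emission-measure arithmetic above can be sketched numerically. The snippet below assumes a power-law free-free antenna temperature $T_{ff}\propto \nu^{-2.14}$ (an assumed spectral index, not derived here) and calibrates its amplitude to the $\approx$ 0.3 $\mu$K of V$-$W excess per unit $EM$ quoted in the caption of Figure \ref{HVCs}; with these assumptions, two units of $EM$ reproduce the $\approx$ 0.6 $\mu$K estimate in the text.

```python
# Hedged sketch: free-free antenna temperature modeled as a power law
# T_ff(nu) = A * EM * nu^(-beta); the amplitude A is NOT derived here --
# it is calibrated to the paper's quoted ~0.3 muK of V-W excess per unit EM.
nu_V, nu_W = 61.0, 94.0          # GHz, WMAP V and W band centers
beta = 2.14                      # assumed free-free spectral index
excess_per_EM = 0.3              # muK per unit EM (Fig. 5 caption)

# frequency factor relating the V-band amplitude to the V-W difference
factor = 1.0 - (nu_W / nu_V) ** (-beta)
T_V_per_EM = excess_per_EM / factor   # implied V-band signal per unit EM

def vw_excess(EM):
    """V-W temperature excess (muK) for a cloud of emission measure EM."""
    return T_V_per_EM * factor * EM

# two units of EM reproduce the ~0.6 muK estimate quoted in the text
assert abs(vw_excess(2.0) - 0.6) < 1e-9
```

The check is linear by construction; its point is only to make the frequency scaling between the two bands explicit.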
We now turn to (b), the effect that occurs on the much smaller and
cosmologically most significant angular scale of 1$^\circ$.
Calibration issues are again immediately excluded here, since the 8
$\mu K$ anomalous amplitude is on par with the pixel noise of
\fiver~for the scale in question (Table \ref{VWmusigma}). Moreover,
because the subtracted $V - W$ dipole and the (unsubtracted) $V - W$
quadrupole, the latter being (a), are both relatively feeble
phenomena, of amplitudes $\approx$ 0.2 and 1 $\mu K$ respectively as
compared to the 7 $\mu K$ amplitude of (b), the prospect of smaller
scale fluctuations having been enhanced by a larger scale one can be
ruled out here. CMB spectral distortion during the recombination
era, or subsequently from the Sunyaev-Zel\'dovich (SZ) scattering,
or from other foreground re-processing that were not properly
compensated by the data cleaning procedure of \fiver, could all be
responsible for the observed anomaly. Although the first two
interactions (Sunyaev and Chluba (2008), Birkinshaw and Gull (1983))
exert much smaller influences than 7~$\mu K$ (bearing in mind that
the degree of SZ needs to be averaged over the scale of the whole
sky), the foreground could potentially play a relevant role in a
similar way as it did at very low $\ell$. Thus, in respect of
free-free emission by HVCs alone, until a full survey at high
angular resolution is performed one cannot be certain that the
emission measure from these clouds is too weak to account for our
(b) anomaly. However, the action of the foreground is {\it
systematic} in that it does {\it not lead to random and symmetric
temperature excursions} (about zero) between two frequencies of V
and W. More precisely, because the sources or sinks involved have a
characteristic spectrum that differs from black body in a specific
way, any widening in Figure \ref{hist_four_VW} of the data
distribution w.r.t. the expected simulated gaussian ought to be
highly asymmetric. This obviously contradicts our findings, i.e. we
note from Figure \ref{hist_four_VW} that the widening of the data
histogram is highly symmetric. As a result, the symptoms do not
point to the foreground as responsible cause.
\section{Conclusion}
We performed a new way of testing the black body nature of the CMB
degree scale anisotropy, by comparing the all-sky distribution of
temperature difference between the \fiver~cosmological bands of V
and W, with their expected pixel noise behavior taken fully into
consideration by means of simulated data. In this way a non
acoustic signal is found in the \texttt{ext}-masked $V-W$ map at the
$\approx$ 1$^\circ$ resolution of \texttt{nside} $=$ 64, with the
following two properties. It has a quadrupole amplitude $\approx$ 1
$\mu$K (Figures \ref{hist_four_VW}, \ref{VW_alm}, and \ref{psVW})
which may in part be attributed to unsubtracted foreground
emission. It also has excess anisotropy (or fluctuation) on all
scales $\ell
>$ 2, including and especially the scales of $200 \lesssim \ell
\lesssim 300$ where most of the acoustic power resides, and about
which the anomaly we reported is in the form of a symmetric random
excursion about zero temperature with a r.m.s. $\approx$ 8 $\mu K$
(Figures \ref{hist_four_VW} and \ref{psVW}, Table \ref{VWmusigma})
which is $\approx$ 10 \% of the maximum acoustic amplitude found at
$\ell \approx$ 220. This type of excursion frustrates attempts to
explain the effect as foreground residuals, i.e. it opens the
question of whether the \texttt{WMAP} anisotropy on the 1$^\circ$
scale is genuinely related to the seeds of structure formation.
In any case, it is clear that both anomalies have sufficiently large
magnitudes to warrant their diagnoses through future, further
investigations, if the status of precision cosmology is to be
reinstated.
\begin{figure}[H]
\centering
\includegraphics[scale=0.5,angle=90]{figure1a.ps}
\includegraphics[scale=0.5,angle=90]{figure1b.ps}
\caption{The \kp-masked and point sources subtracted
\texttt{WMAP5}~$V-W$ map, viz. the difference map between the CMB
anisotropy as measured in the V band and the W band, for the real
data after the removal of residual monopole and dipole components
(top), and simulated pixel noise that reflect precisely the
observational condition (bottom). Both maps are plotted in Galactic
coordinates with the Galactic center $(l,b)=(0,0)$ in the middle and
Galactic longitude $l$ increasing to the left. To avoid the
problems of beam size variation from one band to the next, the W
band data is smoothed to the V band resolution, then the pixels were
downgraded to the common resolution of \texttt{nside}$=$ 64 using
the foreground-reduced \fiver~data (see section 2); this resolution
under-samples the data in both bands. The color scale is coded
within a symmetrical range: those pixels with values beyond $\pm
40~\mu$K are displayed in the same (extreme) color; most such
pixels are around the masked regions. The existence of an additional
non-black body signal in the real data can readily be seen from
this comparison, as the simulated map is noticeably quieter.
\label{mapVWmK}}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.3,angle=90]{figure2a.ps}
\includegraphics[scale=0.3,angle=90]{figure2b.ps}
\includegraphics[scale=0.3,angle=90]{figure2c.ps}
\includegraphics[scale=0.3,angle=90]{figure2d.ps}
\caption{The data points show quadrant sky occurrence frequency
distribution of the difference in the degree-scale
(\texttt{nside}$=64$) anisotropy between the \fiver~V and W bands,
while the errors in the data are due to the \fiver~pixel noise for
the same \kp-masked quadrant sky area, i.e. they are the statistical
fluctuations in the various parts of the solid line, which gives the
mean histogram of this noise. The orientation of each quadrant
follows the same convention as the sky maps of Figure \ref{mapVWmK},
with the 1st and 3rd quadrants marking the \texttt{COBE} dipole.
\label{hist_four_VW}}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.45,angle=90]{figure3.ps}
\caption{$V-W$ quadrupole of the \texttt{nside}$=64$
\fiver~temperature difference maps, after \kp-masking and point
source subtraction. The mathematical procedure of extracting each
multipole $\ell$ is given in eqs. (3) and (4) of the text, and the
software used to do these computations was from \texttt{anafast} of
\texttt{Healpix}.\label{VW_alm}}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.35]{figure4a.eps}
\includegraphics[scale=0.35]{figure4b.eps}
\includegraphics[scale=0.35]{figure4c.eps}
\caption{Real and simulated (noise) power spectra of the
\fiver~$V-W$ map. These are V-W {\it cross} power spectra computed
by cross correlating the first three years of observations with the
last two. The errors in the real data of the first two graphs
represent the pixel noise power of the last graph, i.e. 4c is the
average of 1,000 simulated realizations of the V-W \fiver~pixel
noise. Thus, if the noise power at harmonic $\ell$ is $(\delta
T_\ell)^2$ from 4c, the upper error bar in 4a and 4b will extend
from $T_\ell^2$ to $(T_\ell+\delta T_\ell)^2$ where $T_\ell$ is the
observed V-W anisotropy of each real data point (given by the
intersection of the error bars with the zig-zag line) in 4a and 4b.
The rising trend ($\sim \ell^2$) of all three curves towards higher $\ell$
simply reflects the relatively larger pixel noise for smaller
angular areas. For $\ell>$ 200 the real data of 4a and 4b rapidly
become noise dominated. \label{psVW}}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale=0.6,angle=0]{figure5.eps}
\caption{Upper map shows 21 cm data of HVCs with HI column density
($N_{{\rm HI}}$) larger than 7 $\times$ 10$^{18}$ cm$^{-2}$ (i.e.
the greyscale shows $N_{{\rm HI}}$ with the outer contour at 7
$\times$ 10$^{18}$ cm$^{-2}$). Complex C is the cloud in the region
$l=$ 90$^\circ$ -- 130$^\circ$, $b=$ 40$^\circ$ -- 60$^\circ$.
Complex A is around $l=$ 150$^\circ$, $b=$ 30$^\circ$ -- 45$^\circ$.
The Magellanic Stream (MS) and Bridge is at $l=$ 280$^\circ$ --
310$^\circ$, $b <$ -30$^\circ$. The Leading Arm of the MS, plus some
other bright HVCs are at $l=$ 240$^\circ$ -- 300$^\circ$, $b=$
10$^\circ$ -- 30$^\circ$. Lower map gives our estimated V-W
temperature excess due to HVCs. Note that the conversion from
$N_{{\rm HI}}$ to this excess (via the free-free
emission measure $EM$ of $N_{{\rm HII}}$) is not linear (e.g. Putman
et al 2003, Hill et al 2009). Our approach is therefore to assign 0.5 and 1.0
unit of $EM$, or 0.15 and 0.3 $\mu$K of V-W excess, to every
direction with $N_{{\rm HI}} \geq$ 2 $\times$ 10$^{19}$~cm$^{-2}$
and 5 $\times$ 10$^{19}$~cm$^{-2}$ respectively. \label{HVCs}}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{V - W} & $\mu (\mu$K) & error ($\mu$K) & $\sigma$ ($\mu$K) & error ($\mu$K)\\
\hline
& WMAP5 & -0.23 & 0.15 & 16.23 & 0.13 \\
\cline{2-6}
Quadrant 1 & Simulation & 0.00 & 0.13 & 14.70 & 0.12\\
\cline{2-6}
& Difference $\Delta$ & -0.23 & 0.20 & 6.88 & 0.40\\
\hline
& WMAP5 & 0.24 & 0.12 &14.47 & 0.10\\
\cline{2-6}
Quadrant 2 & Simulation & -0.04 & 0.12 & 12.10 & 0.10\\
\cline{2-6}
& Difference $\Delta$ & 0.28 & 0.17 & 7.94 & 0.24\\
\hline
& WMAP5 & -0.11 & 0.16 &16.22 & 0.13\\
\cline{2-6}
Quadrant 3 & Simulation & 0.03 & 0.15 & 14.70 & 0.12\\
\cline{2-6}
& Difference $\Delta$ & -0.14 & 0.22 & 6.86 & 0.40\\
\hline
& WMAP5 & 0.40 & 0.13 &14.80 & 0.10\\
\cline{2-6}
Quadrant 4 & Simulation & -0.01 & 0.13 & 12.26 & 0.10\\
\cline{2-6}
& Difference $\Delta$ & 0.41 & 0.18 & 8.30 & 0.23 \\
\hline
\end{tabular}
\end{center}
\caption{Parameters of the Gaussian curves fitted to the \fiver~
data and to the pixel noise histograms (the latter are the solid lines)
of Figure \ref{hist_four_VW}. Each parameter uncertainty is set by
the $\chi^2_{{\rm min}} + 1$ criterion, which represents the usual
68 \% (or one standard deviation) confidence interval for one
interesting parameter, when the error bars shown in Figure
\ref{hist_four_VW} are employed for fitting both the real and pixel
noise data. The difference in width $\sigma$ between the data and the
simulation, which gives the distribution width of the additional random
signal, is given by $(\Delta\sigma)^2 = \sigma_r^2 - \sigma_s^2$.
The smaller simulated Gaussian widths for quadrants 2 and 4
(relative to 1 and 3) are due to the higher exposure times there
(these quadrants contain the heavily scanned ecliptic poles), leading
to lower pixel noise. \label{VWmusigma}}
\end{table}
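The $\Delta\sigma$ entries of the table follow from the fitted widths by simple quadrature subtraction, $(\Delta\sigma)^2 = \sigma_r^2 - \sigma_s^2$. A quick arithmetic check (the widths are hard-coded from the table above, so last-digit discrepancies are just rounding of the fitted values):

```python
import math

# (sigma_WMAP5, sigma_sim, Delta_sigma_from_table) per quadrant, from the table above
rows = {1: (16.23, 14.70, 6.88), 2: (14.47, 12.10, 7.94),
        3: (16.22, 14.70, 6.86), 4: (14.80, 12.26, 8.30)}

# quadrature difference of the real and simulated widths
delta = {q: math.sqrt(s_r**2 - s_s**2) for q, (s_r, s_s, _) in rows.items()}
for q in sorted(rows):
    print(f"Quadrant {q}: Delta sigma = {delta[q]:.2f} muK (table: {rows[q][2]})")
```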
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
\multicolumn{3}{|c|} {V-W quadrupole location $(l,b)$} \\
\hline
hot & \multicolumn{2}{|c|} {$(-132.1^\circ,-14.4^\circ)$,$(48.0^\circ,14.4^\circ)$} \\
\hline
cold & \multicolumn{2}{|c|} {$(-81.5^\circ,68.0^\circ)$,$(98.5^\circ,-68.0^\circ)$} \\
\hline
\end{tabular}
\end{center}
\caption{Orientation of the quadrupole in the \fiver~ V-W map of
Figure \ref{VW_alm}. \label{poleaxis}}
\end{table}
\acknowledgments We are grateful to the referee for very valuable
suggestions towards the improvement of this paper. Lyman Page,
Priscilla Frisch, Gary Zank, and Barry Welsh are also acknowledged
for helpful discussions. Some of the results were obtained by means
of the HEALPix package (G\'{o}rski et al 2005).
\newpage
\section{Introduction}
\subsection{Quick presentation of the moments' methods for kinetic equations}
We study here an unusual but very simple choice of closure for the moments' method where we can completely characterize the stability and convergence rates of the approximation. As far as we know this very simple situation was never considered before. Before presenting this choice though, let us very briefly give the main ideas behind the method of moments.
Moments' methods were introduced in \cite{Gr} in the context of the Boltzmann equation. This well-known equation is posed on the density $f(t,x,v)$ of particles in the phase space and reads
\begin{equation}
\partial_t f+v\cdot\nabla_x f=Q(f),\label{boltz}
\end{equation}
where $Q$ is a nonlinear operator (in the velocity variable $v$) which expresses how the velocity of a particle may change in a random collision.
Solving numerically an equation like \eqref{boltz} is in general very costly. The structure of the right-hand side $Q$ is non-local in $v$ and moreover the equation is posed in phase space, which means that one has to work in dimension $2d+1$ if $x,v\in \R^d$ ($7$ for instance if $x\in \R^3$).
The moments' method is one possible answer to this problem and it consists in solving equations on the polynomial moments of $f$. Let $m(v)$ be a polynomial in $v$; then
\begin{equation}
\partial_t <m,f>+\nabla_x\cdot<v\,m(v),f>=<m,Q(f)>,\label{momeq}
\end{equation}
where we denote
\begin{equation}
<m,f>=\int m(v)\,f(t,x,v)\,dv.\label{defmom}
\end{equation}
Now instead of solving one equation in dimension $2d+1$, one has to solve several equations but in dimension $d+1$. Moreover in general one can expect that $<m,\,Q(f)>$ is not too complicated to compute.
However the system given by \eqref{momeq} is not closed as $vm$ is always one degree higher than $m$. Therefore no matter how many moments $m_i$ one chooses, it is never possible to express all the $v\,m_j$ in terms of the $m_i$. This is the closure problem and it means that \eqref{boltz} is never equivalent to \eqref{momeq} for any finite number of moments.
Instead one typically chooses a closure equation, {\em i.e.} a relation between $<vm_i,\,f>$ and the $<m_j,f>$ for those $i$ where $vm_i$ cannot be expressed in terms of the $m_j$.
The first big difficulty for this type of method is how to choose the closure in order to ensure that the corresponding moments' system has good properties and gives a good approximation of Eq. \eqref{boltz}. This problem was of course recognized early on, see for instance \cite{Bo}, as well as the role of entropy, see \cite{Pe} among many others.
One of the first systematic ways of finding a closure was introduced in \cite{Lev1} and \cite{LM}. It is still not easy to actually compute the relation, which means that it is often computed numerically instead (see \cite{ST}, \cite{CL}, \cite{CLW}). Different closures can of course be used (see \cite{Tor2} for example).
Theoretically even checking that the corresponding method leads to a hyperbolic system is not easy (we refer for instance to \cite{Br}, \cite{CFL}). Proving convergence rates seems to be out of reach for the time being although in practice it seems to be a good approximation (see \cite{LP} for a numerical study).
Let us also mention that the method of moments has also been used for theoretical purposes (as in \cite{De}) and not only for numerical computations.
We conclude this very brief overview by referring to \cite{Bi}, \cite{St} or \cite{To} for more on numerical simulations for kinetic equations in this context.
\subsection{Linear closure relations}
The guiding question in this article is whether it can make sense to consider a linear closure relation. This is certainly delicate in the nonlinear case of Boltzmann eq. \eqref{boltz}. Instead we choose a simplified 1d setting where it is possible to fully analyze the method.
Instead of \eqref{boltz}, we consider the linear model
\begin{equation}\label{eq_neutronics}
\left\{\begin{array}{lll}
\del_t f+v\del_x f=L(f),\qquad (x,v)\in\r\times \r\\\\
\ds L(f)=\int_{\r}Q(v,v^*)f(t,x,v^*)dv^*-\lambda f
\end{array}\right.,
\end{equation}
with $\lambda>0$ and where the operator $Q$ corresponds to a velocity jump process. While much simplified with respect to \eqref{boltz}, it is not uninteresting in itself, with applications to physics (see \cite{Ch}, \cite{Win}) or biology (see \cite{ODA} for example). The equation has to be supplemented with some initial data, which for simplicity we assume to be compactly supported in velocity
\begin{equation}
f(t=0,x,v)=f^0(x,v)\in L^2(\R^2),\quad \mbox{supp}\,f^0\subset \r\times I.\label{initialdata}
\end{equation}
In general $Q$ could even be assumed to depend on $t$ and $x$. Here we make the additional approximation
\begin{equation}\label{forme_noyau}
Q(v,v^*)=\left(q(v)\sum_{j=0}^d \alpha_j {v^*}^j\right)\1_{\{(v,v^{*})\in I^{2}\}},
\end{equation}
with $q$ smooth and compactly supported in some interval $I$, $d\in\N^*$ and $(\alpha_j)_{0\leq j\leq d}\in
\r^{d+1}$. With this special form, one of course expects to be in a very favorable situation for the method of moments. Hence this should be seen as a simple toy model where the method can easily be tested.
Denoting the moments of the solution $f$ by
\begin{equation}\label{def_moments_f}
\mu_i^f(t,x):=\int_{I}v^if(t,x,v)dv,\qquad i\in\N,
\end{equation}
Eq. \eqref{eq_neutronics} simply becomes
\begin{equation}\label{eq_neutronics_bis}
\begin{array}{llll}
\del_t f+v\del_x f=
\ds L(f) = q(v)\sum_{j=0}^d \alpha_j \mu_j^f -\lambda f.
\end{array}
\end{equation}
As we work in dimension $1$, the structure of the hierarchy of equations on the moments is also very simple
\begin{equation}\label{eq_moments}
\del_t \mu_i^f+\del_x \mu_{i+1}^f=\gamma_i\left(\sum_{j=0}^d \alpha_j
\mu_j^f \right)-\lambda \mu_i^f,\qquad\qquad i\in\N,
\end{equation}
where we truncate at order $N$ and we define the moments of $q$
\begin{equation}\label{def_gamma}
\gamma_i=\mu_i^q=\int_{I}v^i q(v)dv,\qquad i\in\N.
\end{equation}
In order to close the system, it would be necessary to be able to express $\mu_{N+1}^f$ in terms of the $\mu_i^f$ for $i\leq N$.
The linear closure relation that we study here consists in assuming that $\mu_{N+1}^f$ is a linear combination of the lower moments.
That means that instead of \eqref{eq_neutronics_bis} or \eqref{eq_moments}, we solve
\begin{equation}\label{method_moments}
\left\{\begin{array}{lll}
\del_t \mu_i+\del_x \mu_{i+1}=\ds\gamma_i\left(\sum_{j=0}^d
\alpha_j \mu_j\right)-\lambda \mu_i,\qquad i=0,\dots,N\\
\mu_{N+1}=\ds\sum_{i=0}^N a_i \mu_i.
\end{array}\right.,
\end{equation}
with $(a_0,\dots,a_N)$ given real coefficients that one should choose in the ``best'' possible way.
One can rewrite $\eqref{method_moments}$ in matrix form as
\begin{equation}\label{systeme_hyperbolique}
\del_t M_N+\del_x (A M_N)=BM_N,
\end{equation}
where $M_N=M_N(t,x)=(\mu_0(t,x),\dots,\mu_N(t,x))^{T}\in \r^{N+1},$ and $A,B\in
M_{N+1}(\r)$ are defined by
\begin{equation}\label{matrices_systeme}
A=\left(\begin{array}{cccccc}
0 & 1 & & \\ \vspace{2mm}
& \ddots & \ddots & \\\vspace{2mm}
& & 0 & 1\\\vspace{2mm}
a_0 & \dots & a_{N-1} & a_N
\end{array}
\right),\quad
B=\left(\begin{array}{ccc|ccc}
& \vdots & & & 0 & \\
& & & & & \\
& (\gamma_i \alpha_j)_{\genfrac{}{}{0pt}{}{0\leq i\leq N}{0\leq j\leq d}} &
& & \vdots &\\
& & & & &\\
& \vdots & & & 0 &
\end{array}
\right)-\lambda I_{N+1}.
\end{equation}
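The matrices $A$ and $B$ of \eqref{matrices_systeme} are straightforward to assemble numerically. A minimal sketch (the values of $N$, $d$, $\lambda$, the $\alpha_j$, and the eigenvalues below are purely illustrative, not taken from the text; $q\equiv 1$ on $I=[-1,1]$ is assumed so that $\gamma_i=\int_{-1}^1 v^i\,dv$):

```python
import numpy as np

# Illustrative data (hypothetical): N + 1 moments, d = 1, lambda = 0.5,
# q = 1 on I = [-1, 1] so that gamma_i = int_{-1}^{1} v^i dv.
N, d, lam_coef = 4, 1, 0.5
alpha = np.array([1.0, 0.5])
gamma = np.array([(1.0 - (-1.0) ** (i + 1)) / (i + 1) for i in range(N + 1)])

# Pick distinct real eigenvalues and recover the closure coefficients a_i
# from chi_A(X) = X^{N+1} - sum_i a_i X^i = prod_k (X - lambda_k).
lambdas = np.linspace(-0.9, 0.9, N + 1)
a = -np.poly(lambdas)[1:][::-1]          # a_0, ..., a_N

A = np.diag(np.ones(N), k=1)             # shift structure mu_i -> mu_{i+1}
A[N, :] = a                              # last row encodes the linear closure
B = np.zeros((N + 1, N + 1))
B[:, :d + 1] = np.outer(gamma, alpha)    # source term gamma_i * alpha_j
B -= lam_coef * np.eye(N + 1)

# By construction the spectrum of A is {lambda_k}: the system is hyperbolic.
print(np.allclose(np.sort(np.linalg.eigvals(A).real), lambdas))
```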
\subsection{Basic properties of the method and the main result}
Given the simple form of \eqref{method_moments} or \eqref{systeme_hyperbolique}, some properties are obvious.
Given its linear nature, the system is hyperbolic if the characteristic polynomial
\begin{equation}\label{poly_carac_A}
\chi_A(X)=det(XI_{N+1}-A)=X^{N+1}-\sum_{i=0}^N a_i X^i
\end{equation}
has $N+1$ real roots, denoted by
\begin{equation}\label{spec_A}
spec(A)=\{\lambda_0,\dots,\lambda_N\} .
\end{equation}
This is enough to guarantee the well-posedness of the numerical system \eqref{systeme_hyperbolique} but not necessarily good stability properties. Norms of the numerical approximation could for instance grow fast as $N$ increases. If the initial data is compactly supported in $x$ then the solution is as well and the support propagates with speed $\max_k |\lambda_k|$.
On the other hand, the major drawbacks of such a method are also pretty clear. For instance positivity of the even moments will likely not be preserved. Even the positivity of the macroscopic density $\mu^f_0$ has no reason to be propagated.
However a careful analysis reveals that it is in fact possible to choose the coefficients $a_i$ appropriately in order to obtain not only stability but also very fast convergence.
For simplicity, assume that $I=[-1,\ 1]$ (just rescale and translate otherwise) and choose the Tchebychev polynomial for $\chi_A$ or
\begin{equation}
\forall 0\leq k\leq N,\qquad \lambda_{k}=\cos\left(\left(\frac{2k+1}{2N+2}\right)\pi\right),
\label{tcheblambda}
\end{equation}
or
\begin{equation}
\chi_{A}^{(N+1)}(X)=\prod_{k=0}^{N}\left(X-\cos\left(\left(\frac{2k+1}{2N+2}\right)\pi\right)\right).\label{tchebchi}
\end{equation}
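Since the $\lambda_k$ of \eqref{tcheblambda} are exactly the roots of the Tchebychev polynomial $T_{N+1}$, the closure coefficients $a_i$ of \eqref{tchebchi} are (minus) the lower-order coefficients of the monic polynomial $T_{N+1}/2^N$. A small numerical sketch ($N=5$ is illustrative; any $N$ works):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

N = 5
k = np.arange(N + 1)
lam = np.cos((2 * k + 1) * np.pi / (2 * N + 2))   # the points of (tcheblambda)

chi = np.poly(lam)               # monic prod_k (X - lambda_k), coeffs high -> low
a = -chi[1:][::-1]               # closure coefficients: chi_A = X^{N+1} - sum_i a_i X^i

# The lambda_k are the roots of T_{N+1}, so chi must coincide with T_{N+1}
# normalized to leading coefficient 1, i.e. divided by 2^N.
e = np.zeros(N + 2)
e[-1] = 1.0                                       # T_{N+1} in the Chebyshev basis
monic = C.cheb2poly(e) / 2.0 ** N                 # power basis, low -> high, monic
print(np.allclose(chi[::-1], monic))
```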
Then it is possible to show:
\begin{Th}\label{th stabilite tchebychev}
Assume that $f^0$ satisfies \eqref{initialdata}, that $Q$ satisfies \eqref{forme_noyau}. Then the solution to the truncated moments' hierarchy \eqref{method_moments} where the coefficients $a_i$ are chosen according to \eqref{tcheblambda} or \eqref{tchebchi} satisfies
\begin{equation}\label{estimation_de_stabilite_macro_tchebychev}
\sup_{0\leq i\leq N}\sup_{t\in[0,T]}\|\mu_{i}(t)\|_{L^{2}(\r)}\leq
e^{T C_{d,\alpha}\|q\|_{L^{2}(I)}}\|f^{0}\|_{L^{2}(\r^{2})},
\end{equation}
where $C_{d,\alpha}=\sqrt{\pi(d+1)}\left(\sum_{j=0}^{d}{\alpha_{j}}^{2}\right)^{1/2}$ is independent of $N$.
In addition if $f^0\in H^k(\r^{2})$, $d=0$, denoting by $f$ the corresponding solution to \eqref{eq_neutronics} and defining its moments by \eqref{def_moments_f}, one has
\begin{equation}\label{errorestimatebasic}
\|\mu_{0}-\mu_{0}^{f}\|_{L^{\infty}\left([0,T],L^{2}(\r)\right)}\leq \frac{C}{N^{k-3/4}}\times\|f^{0}\|_{H^{k}(\r^{2})}.
\end{equation}
where $C\geq 0$ depends on $T,\lambda,q$ and $k$.
\end{Th}
\begin{Rk} The convergence result is given for the moment $\mu_0$ and $d=0$. Higher moments would lose small powers of $N$, so the same proof would give
\[
\|\mu_{i}-\mu_{i}^{f}\|_{L^{\infty}\left([0,T],L^{2}(\r)\right)}\leq \frac{C\,N^{k\,(i+d)/N}}{N^{k-1}}\times\|f^{0}\|_{H^{k}(\r^{2})}.
\]
We do not know whether those results are optimal or not and in particular whether the numerical value $3/4$ in \eqref{errorestimatebasic} is optimal (though actual numerical simulations do suggest it is).
\end{Rk}
Hence in this setting, the conclusion of this analysis is that the linear method of moments should be seen in the same light as spectral methods (see \cite{ShT} for instance). It is stable, converges automatically at order $k-3/4$ if the initial data is in $H^k$ but does not propagate any additional property (positivity being probably the most important).
The next section is a more detailed presentation of the stability and convergence analysis in the general case (without necessarily choosing the Tchebychev points). The corresponding technical justifications and calculations are presented in two separate sections. Theorem \ref{th stabilite tchebychev} is proved in Section 5. The last section is an appendix recalling some well-known results.
\section{Stability and convergence results}
We present here in more detail the kind of stability and convergence results that can be proved for the method \eqref{method_moments} without assuming any particular choice of the eigenvalues (like \eqref{tcheblambda}). We give here the main ideas of the approach and leave the technical proofs to further sections.
\subsection{Eigenvectors for System \eqref{systeme_hyperbolique}}
As they enter some of the estimates, we start with a short digression on the eigenvectors of the matrix $A$.
Define $P\in M_{N+1}(\r)$
the matrix of the
eigenvectors; then
\begin{equation}\label{matrice_de_passage}
P_{i,j}=\lambda_{j}^i,\qquad 0\leq i,j\leq N.
\end{equation}
Its inverse $P^{-1}$ can be computed
as easily and
\begin{equation}\label{matrice_de_passage_inverse}
P_{i,j}^{-1}=\frac{
\tilde p_{i,j}}{\pi_i},\qquad 0\leq i,j\leq N,
\end{equation}
where
\begin{equation}\label{formules_P_-1}
\pi_i=\prod_{j\neq i}
(\lambda_i-\lambda_j),\qquad
\tilde p_{i,j}=(-1)^{N-j}
\sum_{\genfrac{}{}{0pt}{}{k_1<\ldots<k_{N-j}}{k_l\neq i\ \forall l}}
\prod_{l=1}^{N-j} \lambda_{k_l}=\sum_{l=0}^j \frac{a_l}{\lambda_i^{j-l+1}}.
\end{equation}
with the convention that $\tilde p_{i,N}=1$.
Moreover, an easy computation shows that:
\begin{equation}\label{autre formule inverse P}
\tilde{p}_{i,j}=\lambda_i^{N-j}-a_N\lambda_i^{N-j-1}-\dots-a_{j+2}\lambda_i-a_{j+1},\qquad 0\leq i,j\leq N.
\end{equation}
We can notice that $\tilde{p}_{i,j}$ is a homogeneous polynomial of degree $N-j$ in the eigenvalues
$(\lambda_0,\dots,\lambda_N)$.
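Formulas \eqref{matrice_de_passage_inverse}-\eqref{formules_P_-1} simply say that row $i$ of $P^{-1}$ carries the coefficients of the Lagrange interpolation polynomial $\prod_{k\neq i}(X-\lambda_k)/\pi_i$ at the nodes $\lambda_k$. This is easy to check numerically (a sketch; the nodes below are arbitrary distinct reals):

```python
import numpy as np

N = 4
lam = np.cos((2 * np.arange(N + 1) + 1) * np.pi / (2 * N + 2))  # any distinct reals

P = np.vander(lam, increasing=True).T     # P[i, j] = lam_j ** i, eq. (matrice_de_passage)
Pinv = np.empty_like(P)
for i in range(N + 1):
    others = np.delete(lam, i)
    tilde_p = np.poly(others)[::-1]       # coeffs of prod_{k != i}(X - lam_k), low -> high
    pi_i = np.prod(lam[i] - others)       # pi_i of eq. (formules_P_-1)
    Pinv[i] = tilde_p / pi_i              # row i of P^{-1}

print(np.allclose(Pinv @ P, np.eye(N + 1)))
```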
\subsection{Stability estimate and kinetic interpretation of the method}
Let us first state our main stability estimate.
\begin{Th}\label{th stabilite}
Assume \eqref{initialdata} and that $\{\lambda_{0},\dots,\lambda_{N}\}\subset I$. Moreover assume that there exists
a function $\rho_N(v)$, positive on $I$, such that
\begin{equation}\label{poids_rho_N}
\int_{I}\frac{R(v)}{\rho_{N}(v)}dv=\sum_{k=0}^N R(\lambda_k),\qquad \forall R\in\r_{2N+1}[X].
\end{equation}
Then, the hyperbolic system \eqref{systeme_hyperbolique} is stable and
\begin{equation}\label{estimation_de_stabilite_macro}
\sup_{t\in[0,T]}\|\mu_{i}(t)\|_{L^{2}(\r)}\leq
\left(\sum_{k=0}^{N}\lambda_{k}^{2i}\right)^{1/2}\times e^{T C_{N}(q)}
\times C_{N}(f^{0}),
\qquad i=0,\dots,N
\end{equation}
where
\begin{equation}\label{constante_pour_donnee_initiale}
C_{N}(f^{0})=\left(\int\!\!\!\int_{\r\times I} |f^{0}(x,v)|^{2}\rho_{N}(v)dxdv\right)^{1/2},
\end{equation}
\begin{equation}\label{constante_pour_q}
C_{N}(q)=\Lambda_{N,d}\left(\int_{I}|q(v)|^{2}\rho_{N}(v)dv\right)^{1/2}-\lambda,
\end{equation}
\begin{equation}\label{constante Lambda}
\Lambda_{N,d}=\sqrt{\sum_{j=0}^{d}\alpha_{j}^{2}}\sqrt{\sum_{j=0}^{d}\sum_{k=0}^{N}\lambda_{k}^{2j}}
=\sqrt{\sum_{j=0}^{d}\alpha_{j}^{2}}\sqrt{\sum_{k=0}^{N}\frac{1-\lambda_{k}^{2d+2}}{1-\lambda_{k}^{2}}}.
\end{equation}
\end{Th}
This result does not assume any particular distribution of the eigenvalues, but of course it is by no means guaranteed in general that one can find $\rho_N$ satisfying \eqref{poids_rho_N}. Notice that the corresponding relation is really a quadrature formula for computing integrals on $I$, which we ask to be exact for polynomials of degree up to $2N+1$.
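For instance, when the $\lambda_k$ are the Tchebychev points \eqref{tcheblambda} on $I=[-1,1]$, the weight $\rho_N(v)=\pi\sqrt{1-v^2}/(N+1)$ satisfies \eqref{poids_rho_N}: this is the classical Gauss--Tchebychev quadrature rule, exact for polynomials of degree up to $2N+1$. A quick numerical check, monomial by monomial (the closed-form moments of $(1-v^2)^{-1/2}$ are standard):

```python
import numpy as np

N = 4
lam = np.cos((2 * np.arange(N + 1) + 1) * np.pi / (2 * N + 2))

def moment(m):
    """int_{-1}^{1} v^m / sqrt(1 - v^2) dv = pi (m-1)!!/m!! for even m, 0 for odd m."""
    if m % 2 == 1:
        return 0.0
    val = np.pi
    while m > 0:
        val *= (m - 1) / m
        m -= 2
    return val

# With rho_N(v) = pi sqrt(1 - v^2) / (N + 1):
#   int_I v^m / rho_N(v) dv = (N + 1)/pi * moment(m), which must equal sum_k lambda_k^m
ok = all(np.isclose((N + 1) / np.pi * moment(m), np.sum(lam ** m))
         for m in range(2 * N + 2))
print(ok)
```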
We do not prove this result directly on the system \eqref{systeme_hyperbolique}. Instead we show a corresponding result on a linear BGK problem (see \cite{BGK} for the simplification of collision kernels into BGK models).
For any $\eps>0$ and $N\geq d$, consider the equation
\begin{equation}\label{BGK}
\begin{split}
&\del_t f_N^{\eps}+v\del_x{f_N^\eps} = \frac{M^{(N)}\,f_N^{\eps}-f_N^{\eps}}{\eps}
+L(f_N^{\eps})\\
&f_N^{\eps}(0,x,v)=f_0(x,v),
\end{split}\end{equation}
where the linear operator $L$ is defined by $\eqref{eq_neutronics_bis}$
and the Maxwellian $M^{(N)}:f\mapsto M^{(N)}f$ satisfies the moment conditions
\begin{equation}\label{conditions moments maxwellienne}
\left\{\begin{array}{llll}
\ds\int_{I}v^i M^{(N)}\,f dv & = & \ds\int_{I}v^i f dv & i=0,\dots,N,\\\\
\ds\int_{I}v^{N+1} M^{(N)}\,f dv & = &\ds \sum_{i=0}^N a_i \int_{I}v^i f dv &
\end{array}
\right..
\end{equation}
This problem is a kinetic approximation of the macroscopic problem
\eqref{method_moments}: formally, when $\eps\longrightarrow 0$, one recovers \eqref{method_moments} from \eqref{BGK}.
Indeed,
multiplying \eqref{BGK} by $v^i$ and integrating over
$v\in I$, we obtain
\[
\del_t \mu_i^{\eps}+\del_x\mu_{i+1}^{\eps}=
\gamma_i\left(\sum_{j=0}^{d} \alpha_j \mu_j^{\eps}\right)-\lambda \mu_i^{\eps}
\quad \mbox{for}\ i=0,\dots,N,\]
where $\mu_i^{\eps}:=\int_{I}v^i f_N^{\eps} dv$ for
$i\in\N$.
Moreover, when $\eps\rightarrow 0$, we have
formally
\begin{equation*}
\mu_{N+1}^{\eps}=\int_{I}v^{N+1}f_N^{\eps} dv\sim
\int_{I}{v^{N+1}}M^{(N)} f_N^{\eps} dv=\sum_{i=0}^N a_i \int_{I}v^i f_N^{\eps}
dv=\sum_{i=0}^N a_i \mu_i^{\eps},
\end{equation*}
which is our closure relation.
The interest of \eqref{BGK} is to make some computations more transparent and easier to follow, and to work at the kinetic level, which is more natural given the original equation. If we can prove stability estimates for \eqref{BGK} that are uniform in $\eps$ then, simply by passing to the limit, we will obtain estimates for \eqref{systeme_hyperbolique}-\eqref{method_moments}.
The most obvious choice for the Maxwellian is simply
\begin{equation}\label{formule_maxwellienne}
M^{(N)}\,f=\sum_{i=0}^N \left(\int_{I}v^i f dv\right)m_i,
\end{equation}
where the $v\mapsto m_i(v)$, $i=0,\dots,N$, are any functions satisfying
\begin{equation}\label{cond_moments_m_i}
\int_{I}v^j m_i(v) dv=\delta_{i,j},\quad 0\leq j\leq N,\qquad\qquad
\int_{I}v^{N+1}m_i(v) dv =a_i.
\end{equation}
The conditions \eqref{cond_moments_m_i} ensure
that \eqref{conditions moments maxwellienne} holds.
There are obviously many ways to choose the $m_i$ s.t. \eqref{cond_moments_m_i} is satisfied. What we are looking for is a choice compatible with an inner product $\Phi_N$ such that the map $M^{(N)}$ is an orthogonal projection for $\Phi_N$. Formally this implies that for any $f$
\[
\Phi_N(f,M^{(N)} f)\leq \Phi_N(f,f).
\]
In addition this inner product should have good symmetry properties s.t. for $f$ and $g$
\[
\Phi_N(f,v g)=\Phi_N(vf,g).
\]
The simplest way to ensure this is to look for a weight $\rho_N$ s.t.
\[
\Phi_N(f,g)=\int_I f(v)\,g(v)\,\rho_N(v)\,dv.
\]
In that case formally
\[
\Phi_N(f,v\partial_x f)=\frac{1}{2}\partial_x\,\Phi_N(f,vf).
\]
Therefore if $f_N^\eps$ solves \eqref{BGK} then one expects that
\[
\frac{d}{dt}\int_\R \Phi_N(f_N^\eps,f_N^\eps)\,dx\leq 2\,\int_\R \Phi_N(f_N^\eps,\,L(f_N^\eps))\,dx.
\]
This is the strategy that we implement: find appropriate conditions on $\rho_N$ and the $m_i$ to obtain the correct structure, and then simply bound
$\Phi_N(f, L(f))$ in terms of $\Phi_N(f,f)$.
For the first part of this strategy we actually prove
\begin{Th}\label{th maxwellienne} Let $\rho_{N}(v)$ be a positive function on $I$ such that
\begin{equation}\label{conditions moments poids}
\int_{I}\frac{R(v)}{\rho_{N}(v)}dv=\sum_{k=0}^N R(\lambda_k),\qquad \forall R\in\r_{2N+1}[X].
\end{equation}
We set
\begin{equation*}
E_{N}=L^2(I,\rho_{N}(v)dv)=\left\{f\ \mbox{measurable},\quad \int_{I}|f(v)|^2\rho_{N}(v)dv<\infty\right\}.
\end{equation*}
Then, the map $\phi_{N}$ defined by
\begin{equation}\label{weightedl2}
\phi_{N}(f,g)=\int_{I}f(v)g(v)\rho_{N}(v)dv,\qquad (f,g)\in E_{N}^2
\end{equation}
is an inner product on $E_{N}$, and the
Maxwellian $M^{(N)}:E_{N}\rightarrow E_{N}$, defined by
$$M^{(N)}f=\sum_{i=0}^N\left(\int_{I}v^{i}f(v)dv\right)\frac{\tilde{T_i}(v)}{\rho_{N}(v)},\qquad f\in E_{N},$$
is an orthogonal projection
and satisfies \eqref{conditions moments maxwellienne},
where $(\tilde{T_i})_{0\leq i\leq N}$ is the basis of $\r_N[X]$ defined by:
$$\tilde{T_i}(X)=\sum_{k=0}^NQ_{k,i}X^k,\qquad 0\leq i\leq N.$$
with $Q=(P^{T})^{-1}P^{-1}$.
Furthermore, with that choice of $M^{(N)}$, any solution $f_{N}^{\eps}$ of the problem \eqref{BGK} formally satisfies:
\begin{equation}\label{bornes_stab_BGK}
\frac{d}{dt}\int_{\r}\phi_{N}(f_N^{\eps},f_N^{\eps})dx\leq 2
\int_{\r}\phi_{N}(f_N^{\eps},L(f_N^{\eps}))dx.
\end{equation}
\end{Th}
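The moment conditions \eqref{conditions moments maxwellienne} for this Maxwellian can be verified numerically. With the Tchebychev points, the quadrature relation \eqref{conditions moments poids} turns every integral $\int_I v^m \tilde T_i(v)/\rho_N(v)\,dv$ into a finite sum over the nodes (exactly, since $m+k\leq 2N+1$). A sketch:

```python
import numpy as np

N = 3
lam = np.cos((2 * np.arange(N + 1) + 1) * np.pi / (2 * N + 2))
P = np.vander(lam, increasing=True).T      # P[i, j] = lam_j ** i
Q = np.linalg.inv(P @ P.T)                 # Q = (P^T)^{-1} P^{-1}
a = -np.poly(lam)[1:][::-1]                # closure coefficients from chi_A

# S[m, i] = int_I v^m tilde{T}_i(v) / rho_N(v) dv
#         = sum_k Q[k, i] * sum_l lam_l^(m+k)   (quadrature, exact for m+k <= 2N+1)
S = np.array([[sum(Q[k, i] * np.sum(lam ** (m + k)) for k in range(N + 1))
               for i in range(N + 1)] for m in range(N + 2)])

# First N+1 rows: moments are reproduced; last row: the closure coefficients a_i.
print(np.allclose(S[:N + 1], np.eye(N + 1)), np.allclose(S[N + 1], a))
```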
\subsection{Error estimate}
We now turn to the convergence of the method of moments to the solution. For simplicity we state the results here for the case $d=0$ in \eqref{forme_noyau}, namely
$$Q\left(v,v^{*}\right)=q(v)\1_{\{(v,v^{*})\in I^{2}\}}.$$
We also assume that $I\subset [-1,\ 1]$, still for simplicity.
The convergence results of course require a stability estimate. However in themselves, they do not use the specific form of the closure.
Therefore here we do not assume any specific closure relation. Instead we assume that we have a well-defined method of moments, {\em i.e.}, some way of computing the $\mu_i$ which satisfy
\begin{equation}\label{method_moments_d=0}
\begin{array}{lll}
\del_t \mu_i+\del_x \mu_{i+1}=\ds\gamma_i
\mu_0-\lambda \mu_i,\qquad i=0,\dots,N\\
\mu_{N+1}=F(\mu_0,\dots,\mu_N).
\end{array},
\end{equation}
where
$$\gamma_{i}=\int_{I}v^{i}q(v)dv.$$
Moreover we assume that the corresponding method has good stability estimates in the sense that
\begin{equation}\label{estimation_de_stabilite_macro_d=0_sous_hypotheses}
\begin{split}
& \exists C,\,\gamma\geq 0,\ \mbox{For any}\ (\mu_i)_{i=0..N}\ \mbox{and}\ (\tilde\mu_i)_{i=0..N} \mbox{ solutions to \eqref{method_moments} for}\ f^0,\;\tilde f^0,\\
& \mbox{then}\qquad \sum_{i=0}^{N}\|\mu_{i}-\tilde\mu_i\|_{L^{\infty}([0,T],\;H^k(\r))}\leq
CN^{\gamma}\|f^{0}-\tilde f^0\|_{L^2(I,\ H^k(\r))},
\end{split}\end{equation}
where $C,\;\gamma$ may depend on $T$, $q$, $\lambda$ but not on $N$ or $f^0$.
Note that Th. \ref{th stabilite} can indeed be expected to imply such a result. The exponent $\gamma$ will depend on the choice of the coefficients in our method. Of course Th. \ref{th stabilite} controls only the $L^2$ norm of the $\mu_i$. However as the method \eqref{method_moments} is linear, the $\partial_x^k \mu_i$ are also a solution to the same system and an estimate like \eqref{estimation_de_stabilite_macro_d=0_sous_hypotheses} can be derived. For a more detailed analysis of how to obtain \eqref{estimation_de_stabilite_macro_d=0_sous_hypotheses} from Th. \ref{th stabilite}, we refer to Section \ref{Tcheby} where it is performed when the $\lambda_k$ are the Tchebychev points.
For any method that satisfies \eqref{estimation_de_stabilite_macro_d=0_sous_hypotheses}, then one has the following convergence result
\begin{Th} Assume \eqref{initialdata} with $I\subset [-1,\ 1]$, that the method \eqref{method_moments_d=0} satisfies \eqref{estimation_de_stabilite_macro_d=0_sous_hypotheses} for some $\gamma\geq 0$ and that $f^{0}\in H^{k}(\r^{2})$, with $k\in\N^{*}$. Consider $f$ the solution to \eqref{eq_neutronics} with $d=0$ and the corresponding solution $\mu_i$ to \eqref{method_moments_d=0}. Then, for all $T\geq 0$ and for all $N\geq 1$,
we have the estimate
\begin{equation}\label{estimation_erreur}
\|\mu_{0}-\mu_{0}^{f}\|_{L^{\infty}\left([0,T],L^{2}(\r)\right)}\leq \frac{C}{N^{k-\gamma}}\times\|f^{0}\|_{H^{k}(\r^{2})}.
\end{equation}
where $C\geq 0$ depends on $T,\lambda,q$ and $k$ but not on $N$.
\label{errorestimate}
\end{Th}
This error estimate is a sort of interpolation between the stability bounds and the following result for $C^\infty$ solutions to \eqref{method_moments_d=0}
\begin{Prop}\label{resultat_smooth_case} Assume that the $(\mu_i)_{i=0...N+1}$ solve
\begin{equation}
\partial_t \mu_i+\partial_x \mu_{i+1}=\gamma_i\,\mu_0-\lambda\mu_i,\qquad i=0,..., N,
\end{equation}
with $\mu_i(t=0)=0$ for $i=0...N$.
Then, for all $T\geq 0$ and for all $N\geq 1$,
we have the estimate
\begin{equation}\label{formule erreur cas_regulier}
\|\mu_0-\mu_0^{f}\|_{L^{\infty}([0,T],L^2(\r))}\leq C\times
\frac{T^N}{N!}\,\sum_{i=0}^N
\|\del_x^N \mu_{i}\|_{L^\infty([0,T],\ L^2(\r))},
\end{equation}
where $C\geq 0$ depends on $T,\,\lambda,\,q$.
\end{Prop}
\section{Proof of Theorems \ref{th stabilite} and \ref{th maxwellienne}}
To simplify the presentation, we omit here the subscript $N$ in
$E_{N}$, $\phi_{N}$, $\rho_{N}$, and the superscript $N$ in $M^{(N)}$.
\subsection{Elementary space decomposition}
The difficulty is to combine the fact that $M$ has to be an orthogonal projection for $\Phi$ with the symmetry property of $\Phi$. We take here a slightly more general approach by not assuming directly that $\Phi$ satisfies \eqref{weightedl2}.
Consider a Maxwellian which has the form
\eqref{formule_maxwellienne} and satisfies \eqref{cond_moments_m_i}.
First, such a map $M$ is a projection
because $M\circ M=M$, which is a straightforward consequence of
\begin{equation*}
M(m_i)=\sum_{j=0}^N \left(\int_{I}v^j m_i dv\right)m_j=m_i,\qquad
i=0,\dots,N.
\end{equation*}
Moreover, one has that
\begin{equation*}
\begin{array}[]{llll}
Ker M=\ds \left\{f\in E,\quad \int_{I}v^if(v)dv=0, \quad
i=0,\dots,N \right\}:=K,\\\\
Ker(M-I)=Im(M)=\ds Span(m_0,m_1,\dots,m_N):=V,
\end{array}
\end{equation*}
and we have the space decomposition
\begin{equation*}
E=V\oplus K,
\end{equation*}
with
\begin{equation*}
dim(V)=codim(K)=N+1.
\end{equation*}
We start by a more detailed explanation of the sufficient conditions to obtain Th. \ref{th maxwellienne}
\begin{Lemma}\label{estimation_formelle} Assume that the
inner product $\phi:E\times E\rightarrow \r$
satisfies
\begin{equation*}
K=V^{\bot,\phi}, \mbox{(i.e. the decomposition $V\oplus K$ becomes orthogonal)}.
\end{equation*}
\begin{equation}\label{cond_symetrie}
\forall (f,g)\in E^2,\qquad \phi(vf,g)=\phi(vg,f).
\end{equation}
Then, for any solution $f_N^{\eps}=f_N^{\eps}(t,x,v)$ of the problem
$\eqref{BGK}$, inequality \eqref{bornes_stab_BGK} formally holds:
\begin{equation*}
\frac{d}{dt}\int_{\r}\|f_N^{\eps}(t,x,.)\|^2_{\phi}dx\leq 2
\int_{\r}\phi(f_N^{\eps},L(f_N^{\eps}))dx.
\end{equation*}
\end{Lemma}
\noindent\underline{Proof} :
Take a smooth $f=f(t,x,v)$ such that $\del_t f+v\del_x f =\frac{1}{\eps}(Mf-f)+L(f)$. We
have
\[
\begin{split}
&\frac{d}{dt} \int_{\r}\|f(t,x,.)\|^2_{\phi}dx =
2\int_{\r}\phi(f,\del_t
f)dx\\
&\ = \ds 2\left(\frac{1}{\eps}\int_{\r}\phi(f,Mf-f)dx-\int_{\r}\phi(f,
v\del_x f)dx+\int_{\r}\phi(f,L(f))dx\right).
\end{split}
\]
Since $Mf-f\in K$ and $Mf\in V$, we have $\phi(Mf,Mf-f)=0$, thus
\[
\phi(f,Mf-f)=\phi(f-Mf,Mf-f)=-\|f-Mf\|^{2}_{\phi}\leq0.
\]
We deduce
\begin{equation*}
\frac{d}{dt}\int_{\r}\|f(t,x)\|^2_{\phi}dx\leq
-2\int_{\r}\phi(f,v\del_x f)dx+2\int_{\r}\phi(f,L(f))dx.
\end{equation*}
Thus, having $\int_{\r}\phi(f,v\del_x f)dx=0$ is sufficient to obtain
\eqref{bornes_stab_BGK}.
Since $\phi(f,v\del_x f)=\del_x \phi(f,vf)-\phi(\del_x f, vf)$, we can write
\begin{equation*}
\int_{\r}\phi(f,v\del_x
f)dx=\frac{1}{2}\int_{\r}\left(\phi(f,v\del_x f)-\phi(vf,\del_x
f)\right)dx.
\end{equation*}
Therefore the symmetry condition $\eqref{cond_symetrie}$ on $\phi$
is enough to conclude. If $f$ is not smooth enough to follow the previous steps, one simply regularizes it by convolution in $x$. As the equation is linear and $x$ is only a parameter in $M$ and $L$ then the regularized function solves the same equation. Therefore it satisfies \eqref{bornes_stab_BGK} and letting the regularizing parameter vanish, one recovers the same inequality for $f$.
\cqfd
\bigskip
Now, take an inner product $\phi$ on $E$ such that
$K=V^{\bot,\phi}$, {\em i.e.} $K$ and $V$ are orthogonal for this inner product. The symmetry condition $\eqref{cond_symetrie}$ is obviously equivalent to
\begin{equation}
\label{C1}\forall (f,g)\in V^2,\qquad
\phi(vf,g)=\phi(vg,f),
\end{equation}
\begin{equation}
\label{C2}\forall (f,g)\in K\times V,\qquad \phi(vf,g)=\phi(vg,f),
\end{equation}
\begin{equation}
\label{C3}\forall (f,g)\in K^2,\qquad \phi(vf,g)=\phi(vg,f).
\end{equation}
We study each of those conditions in the following subsections.
\subsection{Study of the condition \eqref{C1}}
\begin{Prop} The condition \eqref{C1} is equivalent to
\begin{equation*}
A^{T}Q=QA,
\end{equation*}
where $A$ is the matrix defined by $\eqref{matrices_systeme}$
and $Q$ is the symmetric positive definite matrix defined by
\begin{equation*}
Q_{i,j}=\phi(m_i,m_j),\qquad 0\leq i,j\leq N.
\end{equation*}
\end{Prop}
\noindent\underline{Proof}: The condition \eqref{C1} is equivalent to
$$\phi(vm_i,m_j)=\phi(m_i,vm_j),\qquad \forall\;0\leq i,j\leq N.$$
Let $(i,j)\in \{0,\dots,N\}^2$. Since $m_j\in V$, we have
$$\phi(vm_i,m_j)=\phi(M(vm_i),m_j).$$
Moreover, \eqref{cond_moments_m_i} implies
\begin{equation}\label{formule M(vm_i)}
M(vm_i)=m_{i-1}+a_i m_N,\qquad 0\leq i\leq N,
\end{equation}
with the convention $m_{-1}=0$.
Thus
\[\phi(vm_i,m_j)=\phi(m_{i-1},m_j)+a_i\phi(m_N,m_j)=Q_{i-1,j}+a_i Q_{N,j}.
\]
But
$$(A^{T}Q)_{i,j}=\sum_{k=0}^N A_{k,i}Q_{k,j}=Q_{i-1,j}+a_i Q_{N,j},$$
with the convention $Q_{-1,j}=0$.
Therefore, we have $$\phi(vm_i,m_j)=(A^{T}Q)_{i,j}.$$
Thus, \eqref{C1} amounts to the matrix $A^{T}Q$ being (real and) symmetric, i.e.
$A^{T}Q=QA,$
since $Q$ is symmetric.
\cqfd
This result suggests a particular way of defining the inner product on $V$:
\begin{Cor}\label{choix_Q}
Choose $\phi$ s.t.
\begin{equation*}
Q=(P^{-1})^T\,P^{-1}=(P P^{T})^{-1},
\end{equation*}
where $P$ is defined by
$\eqref{matrice_de_passage}$.
This choice implies
\begin{equation}\label{valeurs du ps sur V}
\phi(m_i,m_j)=Q_{i,j}=\sum_{k=0}^N P_{k,i}^{-1}P_{k,j}^{-1}.
\end{equation}
\end{Cor}
\noindent \underline{Proof.}
We have, denoting by $D=diag(\lambda_0,\dots,\lambda_N)$:
\[\begin{split}
A^{T}Q=&A^{T}(P^{-1})^{T}P^{-1}=(P^{-1}A)^{T}P^{-1}
=(DP^{-1})^{T}P^{-1}=(P^{-1})^{T}(DP^{-1})\\
&=(P^{-1})^{T}P^{-1}A=QA,
\end{split}\]
thus $Q$ satisfies $A^TQ=QA$. Moreover, it is easy to check
that $Q$ is symmetric positive definite.
\cqfd
\bigskip
In the rest of the proof, we always choose $\Phi$ according to \eqref{valeurs du ps sur V}.
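This choice of $Q$ is straightforward to verify numerically: with $P$ the Vandermonde matrix of any distinct real eigenvalues and $A$ the companion matrix of $\chi_A$, one checks $A^TQ=QA$ directly (a sketch; the Tchebychev nodes below are just a convenient choice of distinct reals):

```python
import numpy as np

N = 4
lam = np.cos((2 * np.arange(N + 1) + 1) * np.pi / (2 * N + 2))  # distinct real eigenvalues
a = -np.poly(lam)[1:][::-1]               # chi_A(X) = X^{N+1} - sum_i a_i X^i

A = np.diag(np.ones(N), k=1)              # companion matrix of (matrices_systeme)
A[N, :] = a
P = np.vander(lam, increasing=True).T     # P[i, j] = lam_j ** i
Q = np.linalg.inv(P @ P.T)                # Q = (P^{-1})^T P^{-1}

print(np.allclose(A.T @ Q, Q @ A))        # the symmetry condition of the corollary
```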
\subsection{Study of the condition \eqref{C2}}
Now, we assume \eqref{valeurs du ps sur V} to be satisfied
and analyze \eqref{C2}.
\begin{Prop} Assume \eqref{valeurs du ps sur V}, and consider the
following polynomials
\begin{equation}\label{formule explicite des T_i}
T_i(X)=\frac{1}{Q_{0,N}}\sum_{k=0}^NQ_{k,i}X^k,\qquad 0\leq i\leq N,
\end{equation}
where the matrix $Q\in M_{N+1}(\r)$ is defined by \eqref{choix_Q}.
Then, the condition \eqref{C2} is equivalent to
\begin{equation}\label{C2 condition 1}
T_N(v)m_i(v)=T_i(v)m_N(v),\qquad v\in I,\qquad 0\leq i\leq N-1,
\end{equation}
\begin{equation}\label{C2 condition 2}
\forall f\in K,\qquad \phi_{|K}\left(\frac{1}{Q_{0,N}}
\times\frac{\chi_A(v)m_N(v)}{T_N(v)},f\right)=\int_{I}v^{N+1}f(v)dv.
\end{equation}\label{propc2}
\end{Prop}
The proof of this proposition is split in several lemmas. First we find two equivalent conditions to \eqref{C2}.
\begin{Lemma} The condition \eqref{C2} is equivalent to
\begin{equation}
\label{C2.1}
\forall i\in\{1,\dots,N\},\quad
vm_i-m_{i-1}-a_im_N=\frac{Q_{i,N}}{Q_{0,N}}(vm_0-a_0m_N)
\end{equation}
\begin{equation}
\label{C2.2}
\forall f\in K,\quad
\phi_{|K}\left(\frac{1}{Q_{0,N}}(vm_0-a_0m_N),f\right)=\int_{I}w^{N+1}f(w)dw,
\end{equation}\label{lem1}
\end{Lemma}
We then have to study conditions \eqref{C2.1} and \eqref{C2.2}.
\noindent\underline{Proof of Lemma \ref{lem1}}: The condition \eqref{C2} is obviously equivalent to
$$\phi(vm_i,f)=\phi(m_i,vf),\qquad 0\leq i\leq N,\quad f\in K.$$
Let $i\in \{0,\dots,N\}$ and $f\in K$. Since $f\in K$, we have
$$\phi(vm_i,f)=\phi_{|K}(vm_i-M(vm_i),f)=\phi_{|K}(vm_i-m_{i-1}-a_im_N,f),$$
using \eqref{formule M(vm_i)}.
Similarly, since $m_i\in V$,
$$\phi(m_i,vf)=\phi_{|V}(m_i,M(vf))=Q_{i,N}\int_{I}w^{N+1}f(w)dw,$$
where the last equality is deduced from
\begin{equation*}
\forall f\in K,\qquad M(vf)=\left(\int_{I}w^{N+1}f(w)dw\right)m_N,
\end{equation*}
recalling that $\int_I w^n f(w)\,dw=0$ for all $n\leq N$.
Thus, the condition \eqref{C2} amounts to: for any $0\leq i\leq N$ and $f\in K$,
\begin{equation*}
\phi_{|K}(vm_i-m_{i-1}-a_im_N,f)=Q_{i,N}\int_{I}w^{N+1}f(w)dw.
\end{equation*}
The case $i=0$ gives \eqref{C2.2}, and combining it with the general case yields
\[
\phi_{|K}\left(\frac{1}{Q_{i,N}}(vm_i-m_{i-1}-a_im_N)
-\frac{1}{Q_{0,N}}(vm_0-a_0m_N),f\right)=0.
\]
Therefore, for all $i\in \{1,\dots,N\}$, we have
$$\frac{1}{Q_{i,N}}(vm_i-m_{i-1}-a_im_N)
-\frac{1}{Q_{0,N}}(vm_0-a_0m_N)\in K\cap K^{\bot,\phi}=\{0\},$$
which shows the relation \eqref{C2.1}.\\
Conversely, the conditions \eqref{C2.1} and \eqref{C2.2} imply
\eqref{C2} simply by reversing the previous steps.
\cqfd
We start with condition \eqref{C2.1}
\begin{Lemma} Consider the polynomial
\begin{equation*}
D(X)=\sum_{i=0}^N \beta_i X^i,
\end{equation*}
where
\begin{equation*}
\beta_i=\frac{Q_{i,N}}{Q_{0,N}}=\frac{\phi(m_i,m_N)}{\phi(m_0,m_N)},\qquad
0\leq i\leq N.
\end{equation*}
The condition \eqref{C2.1} is equivalent to
\begin{equation*}
D(v)m_i(v)=T_i(v)m_N(v),\qquad v\in I,\qquad 0\leq i\leq N,
\end{equation*}
where $(T_i)_{0\leq i\leq N}$ are the following polynomials:
\begin{equation}\label{formule de recurrence des T_i}
\begin{array}[]{llll}
T_N(X)=D(X),\\\\
T_{i-1}(X)=XT_i(X)-a_iD(X)-\beta_i \chi_A(X),\qquad 1\leq i\leq N.
\end{array}
\end{equation}\label{lemmac2.1}
\end{Lemma}
\noindent\underline{Proof.}
Setting $\beta_i=\frac{Q_{i,N}}{Q_{0,N}}$ for $i\in\{0,\dots,N\}$, we
deduce from \eqref{C2.1} the recursive formula
\begin{equation*}
m_{i-1}=vm_i-(a_i-\beta_ia_0)m_N-\beta_i (vm_0),\qquad i=1\dots,N,
\end{equation*}
which leads to, for any $0\leq i\leq N$
\begin{equation*}
m_i=\left(v^{N-i}-\sum_{k=0}^{N-i-1}v^k(a_{i+k+1}-\beta_{i+k+1}
a_0)\right)m_N-\left(\sum_{k=1}^{N-i}\beta_{i+k}v^k\right)m_0.
\end{equation*}
Thus
$$D(v)m_0(v)=\left(\ds v^N-\sum_{k=0}^{N-1}(a_{k+1}-\beta_{k+1}
a_0)v^k \right) m_N(v),$$
where
$$D(v):=\beta_0+\beta_1 v+\dots +\beta_N v^N.$$
We deduce
$$vD(v)m_0(v)=\left(\ds \chi_A(v)+a_0D(v) \right) m_N(v).$$
Thus, it follows from the recursive formula on the $m_i$ that
\begin{equation*}
D(v)m_{i-1}(v)=vD(v)m_i(v)-(a_i D(v)+\beta_i \chi_A(v))
m_N(v),\qquad i=1\dots,N,
\end{equation*}
which is exactly
\begin{equation}\label{formule des m_i}
D(v)m_i(v)=T_i(v)m_N(v),\qquad i=0,\dots,N,
\end{equation}
with
\[
\begin{array}[]{llll}
T_N(X)=D(X),\\\\
T_{i-1}(X)=XT_i(X)-a_iD(X)-\beta_i \chi_A(X),\quad i=1,\dots N.
\end{array}\]
Conversely, if the functions $(m_i)_{0\leq i\leq N}$ satisfy these
relations, then \eqref{C2.1} holds for almost every $v\in I$
(that is, except at the roots of the polynomial $D$).
\cqfd
\begin{Rk} We notice that
\begin{equation}\label{lien entre psi_0 et P_A}
XT_0(X)-a_0D(X)=\chi_A(X).
\end{equation}
Furthermore, the formula \eqref{formule de recurrence des T_i} implies:
$$1+\deg(T_{i})\leq \max(\deg(T_{i-1}),N+1),\qquad 1\leq i\leq N.$$
But $T_0\in \r_N[X]$, thus $T_i\in\r_N[X]$ for all $i\in \{0,\dots,N\}$.
\end{Rk}
We can now prove
\begin{Lemma} We have the explicit formula \eqref{formule explicite des T_i} for the
$(T_i)_{0\leq i\leq N}$ which is recalled here
\[
T_i(X)=\frac{1}{Q_{0,N}}\sum_{k=0}^NQ_{k,i}X^k,\qquad 0\leq i\leq N.
\]\label{lem2}
\end{Lemma}
\noindent\underline{Proof}: First, according to the definition of $D$ in Lemma \ref{lemmac2.1}, we have, for all
$k\in \{0,\dots,N\}$,
\[\begin{split}
D(\lambda_k)=&\frac{1}{Q_{0,N}}\sum_{i=0}^N
Q_{i,N}\lambda_{k}^i=\frac{1}{Q_{0,N}}\sum_{i=0}^N\sum_{j=0}^N
P_{j,i}^{-1}P_{j,N}^{-1}P_{i,k}
=\frac{1}{Q_{0,N}}P_{k,N}^{-1}\\
=&\frac{1}{Q_{0,N}\,\,\pi_k}.
\end{split}\]
Then, the recursive formula \eqref{formule de recurrence des T_i}
easily implies:
$$T_j(\lambda_k)=(\lambda_k^{N-j}-a_N\lambda_k^{N-j-1}
-\dots-a_{j+2}\lambda_k-a_{j+1})D(\lambda_k),\qquad 0\leq j\leq N$$
which gives (according to
\eqref{matrice_de_passage_inverse}-\eqref{autre formule inverse P})
\begin{equation}\label{valeurs des Ti}
T_j(\lambda_k)=\tilde{p}_{k,j}D(\lambda_k)=
\frac{\tilde{p}_{k,j}}{Q_{0,N}\,\,\pi_k}=\frac{P_{k,j}^{-1}}{Q_{0,N}}
,\qquad 0\leq j,k\leq N.
\end{equation}
Since each polynomial $T_i$ has degree at most $N$, it is enough
to check the equality \eqref{formule explicite des T_i} on the $N+1$ points
$\{\lambda_0,\dots, \lambda_N\}$, which allows us to conclude.
\cqfd
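Lemma \ref{lem2} can also be checked numerically. The sketch below (illustrative only) picks arbitrary distinct nodes $\lambda_k$, sets $\chi_A(X)=\prod_k(X-\lambda_k)=X^{N+1}-\sum_i a_iX^i$, builds the $T_i$ by the recursion \eqref{formule de recurrence des T_i}, and compares them with the explicit formula \eqref{formule explicite des T_i} and the values \eqref{valeurs des Ti}:

```python
import numpy as np
from numpy.polynomial import Polynomial as Poly

N = 4
lam = np.linspace(-0.9, 0.9, N + 1)             # arbitrary distinct nodes lambda_k
P = np.vander(lam, N + 1, increasing=True).T    # P[j, k] = lam_k**j
Pinv = np.linalg.inv(P)
Q = Pinv.T @ Pinv                               # Q = (P^{-1})^T P^{-1}

chi = Poly.fromroots(lam)                       # chi_A, monic of degree N+1
a = -chi.coef[: N + 1]                          # chi_A = X^{N+1} - sum_i a_i X^i

beta = Q[:, N] / Q[0, N]                        # beta_i = Q_{i,N} / Q_{0,N}
Dpoly = Poly(beta)                              # D(X) = sum_i beta_i X^i
X = Poly([0.0, 1.0])

# backward recursion: T_N = D, then T_{i-1} = X T_i - a_i D - beta_i chi_A
T = [None] * (N + 1)
T[N] = Dpoly
for i in range(N, 0, -1):
    T[i - 1] = X * T[i] - a[i] * Dpoly - beta[i] * chi

vs = np.linspace(-1.0, 1.0, 9)                  # enough points for degrees <= N+1
for i in range(N + 1):
    explicit = Poly(Q[:, i] / Q[0, N])          # explicit formula for T_i
    assert np.allclose(T[i](vs), explicit(vs))  # recursion == explicit formula
    # values on the spectrum: T_i(lam_k) = (P^{-1})_{k,i} / Q_{0,N}
    assert np.allclose(T[i](lam), Pinv[:, i] / Q[0, N])

# the relation X T_0 - a_0 D = chi_A also holds
assert np.allclose((X * T[0] - a[0] * Dpoly)(vs), chi(vs))
```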
\begin{Rk} The formula \eqref{formule explicite des T_i} shows that
the polynomials $(T_i)_{0\leq i\leq N}$
form a basis of $\r_N[X]$, since the matrix $Q$ is invertible.
\end{Rk}
We may finally characterize \eqref{C2.2}
\begin{Lemma} We assume that \eqref{C2.1} holds. Then, the condition
\eqref{C2.2} is equivalent to
\begin{equation*}
\forall f\in K,\qquad
\phi_{|K}\left(\frac{1}{Q_{0,N}}\times\frac{\chi_A(v)m_N(v)}
{T_N(v)},f\right)=\int_{I}v^{N+1}f(v)dv.
\end{equation*}\label{lem3}
\end{Lemma}
\noindent\underline{Proof}: It is straightforward using \eqref{formule des m_i} and the
formula \eqref{lien entre psi_0 et P_A}.
\cqfd
We now have everything needed to prove Prop. \ref{propc2}.
\noindent\underline{Proof of Prop. \ref{propc2}}:
By Lemma \ref{lem1}, condition \eqref{C2} is equivalent to \eqref{C2.1}-\eqref{C2.2}. By Lemma \ref{lemmac2.1}, condition \eqref{C2.1} is equivalent to \eqref{C2 condition 1} provided that the $T_i$ are defined by \eqref{formule de recurrence des T_i}. Lemma \ref{lem2} shows that the recursive formula \eqref{formule de recurrence des T_i} actually gives the explicit formula \eqref{formule explicite des T_i}. Finally by Lemma \ref{lem3}, we know that condition \eqref{C2.2} is equivalent to \eqref{C2 condition 2} thus concluding the proof.
\cqfd
\subsection{Study of the condition \eqref{C3}}
We prove
\begin{Prop} Assume \eqref{valeurs du ps sur V}+\eqref{C2 condition
1}+\eqref{C2 condition 2}. Then, setting
\begin{equation}\label{def_poids}
\rho(v):=\frac{Q_{0,N}T_N(v)}{m_N(v)},
\end{equation}
and
\begin{equation*}
\phi_{|K}(f,g)=\int_{I}f(v)g(v)\rho(v)dv,\qquad \forall (f,g)\in K^2,
\end{equation*}
the condition \eqref{C3} is satisfied.\label{propphiK}
\end{Prop}
\noindent
\underline{Proof}: This choice is compatible with \eqref{C2.2}, since
\eqref{C2.2} can also be written as
$$\forall f\in K,\qquad \phi_{|K}\left(\frac{\chi_A}{\rho},f\right)
=\int_{I}v^{N+1}f(v)dv,$$
and, because $f\in K$, $\int_{I}\chi_A(v)f(v)dv=\int_{I}v^{N+1}f(v)dv$.
Moreover, \eqref{C3} is obviously satisfied with this choice.
\cqfd
\subsection{Choice of $\phi$ on the subspace $V$}
For the moment, $\Phi$ is defined as a weighted $L^2$-type inner product on $K$ by Prop. \ref{propphiK} and on $V$ by Corollary \ref{choix_Q}.
We wish to define $\Phi$ as a weighted inner product on the whole space $K\oplus V$.
The following lemma shows that, provided $\rho$ satisfies the right moment relations, the inner product $\Phi$ defined by Prop. \ref{propphiK} and Corollary \ref{choix_Q} is automatically of the right form.
\begin{Lemma} Assume \eqref{valeurs du ps sur V}, \eqref{C2
condition 1}, \eqref{C2 condition 2}, \eqref{def_poids},
and assume that the weight $\rho$ satisfies the moment conditions:
\begin{equation*}
\int_{I}\frac{R(v)}{\rho(v)}dv=\sum_{k=0}^N R(\lambda_k),\qquad R\in\r_{2N}[X].
\end{equation*}
Then, we have:
\begin{equation}\label{valeurs de phi sur V}
\phi_{|V}(f,g)=\int_{I}f(v)g(v)\rho(v)dv.
\end{equation}\label{phiV}
\end{Lemma}
\noindent\underline{Proof}: It is sufficient to show that the formula
\eqref{valeurs de phi sur V} holds for $(f,g)=(m_i,m_j)$.
We have, for $(i,j)\in \{0,\dots,N\}^2$,
$$\int_{I}m_i(v)m_j(v)\rho(v)dv=\int_{I}
\left(\tilde{T_i}\tilde{T_j}\right)(v)\times\frac{dv}{\rho(v)},$$
setting
\begin{equation*}
\tilde{T_i}=Q_{0,N}T_i,\qquad 0\leq i\leq N.
\end{equation*}
Moreover, we have
$$\phi(m_i,m_j)=Q_{i,j}=\sum_{k=0}^N P_{k,i}^{-1}P_{k,j}^{-1}=\sum_{k=0}^N
\left(\tilde{T_i}\tilde{T_j}\right)(\lambda_k),$$
according to \eqref{valeurs du ps sur V} and \eqref{valeurs des Ti}.
Since the $\frac{(N+1)(N+2)}{2}$ polynomials
$(\tilde{T_i}\tilde{T_j})_{0\leq i\leq j\leq N}$ are in $\r_{2N}[v]$,
we see
that the assumption on $\rho$
guarantees that \eqref{valeurs de phi sur V} is satisfied.
\cqfd
\subsection{Proof of Theorem \ref{th maxwellienne}: Synthesis}
We summarize here all the definitions and check rigorously that they are compatible.
So, assume there exists a function $\rho=\rho(v)$, positive on $I$ and satisfying
the moment conditions:
\begin{equation}\label{cond_moments_synthese}
\int_{I}\frac{R(v)}{\rho(v)}dv=\sum_{k=0}^{N}R(\lambda_{k}),\qquad
\forall R\in\r_{2N+1}[X].
\end{equation}
Note that this in particular implies that $1/\rho$ is integrable on $I$.
The space $E=L^2(I,\rho(v)dv)$ is a Hilbert space, for the inner product
\[
\phi(f,g)=\int_{I}f(v)g(v)\rho(v)dv,\qquad (f,g)\in E^2.
\]
We define the map $M:E\rightarrow E$ by
\[
Mf=\sum_{i=0}^N\left(\int_{I}v^{i}f(v)dv\right)
\frac{\tilde{T_i}(v)}{\rho(v)},\qquad
f\in E,
\]
where
$$\tilde{T_i}(X)=\sum_{k=0}^NQ_{k,i}X^k,\qquad 0\leq i\leq N,$$
and $Q=(P^{T})^{-1}P^{-1}$ (the matrix $P$ is defined by
\eqref{matrice_de_passage}).
\begin{itemize}
\item[$\bullet$] First, the map $M$ is well defined since, for all
$f\in E$ and all $i\in\{0,\dots,N\}$,
\[ \left|\int_{I}v^{i}f(v)dv\right|{}
\leq \left(\int_{I}\frac{|v|^{2i}}{\rho(v)}dv\right)^{1/2}
\left(\int_{I}|f(v)|^{2}\rho(v)dv\right)^{1/2},
\]
and thus $Mf$ makes sense. Moreover, for $f\in E$, we have $Mf\in E$ because
\[
\begin{split}
&\int_{I}|Mf(v)|^{2}\rho(v)dv =
\int_{I}\left|\sum_{i=0}^{N}\left(\int_{I}w^{i}f(w)dw\right)\frac{\tilde
T_{i}(v)}
{\rho(v)}\right|^{2}\rho(v)dv\\
&\quad \leq \ds\int_{I}\left(\sum_{i=0}^{N}
\left|\int_{I}w^{i}f(w)dw\right|^{2}\right)
\left(\sum_{i=0}^{N}\left|\frac{\tilde T_{i}(v)}
{\rho(v)}\right|^{2}\right)\rho(v)dv\\
&\quad \leq
\ds\left(\sum_{i=0}^{N}\left|\int_{I}w^{i}f(w)dw\right|^{2}\right)
\times
\sum_{i=0}^{N}\int_{I}\frac{|\tilde T_{i}(v)|^{2}}{\rho(v)}dv\quad <\,\,\infty.
\end{split}
\]
\item[$\bullet$] It is obvious that $M$ is linear. We check that $M$
is a projection: let $f\in E$, we have, for $0\leq j \leq N$,
$$\begin{array}{lll}
\ds\int_{I}v^{j}Mf(v)dv & = & \ds\sum_{i=0}^{N}
\left(\int_{I}w^{i}f(w)dw\right)\left(\int_{I}\frac{v^{j}
\tilde T_{i}(v)}{\rho(v)}dv\right)\\\\
& = & \ds\sum_{i=0}^{N}\left(\int_{I}w^{i}f(w)dw\right)
\left(\sum_{k=0}^{N}\lambda_{k}^{j}\tilde T_{i}(\lambda_{k})\right).
\end{array}
$$
Moreover, we have
$$\lambda_{k}^{j}=P_{j,k},\qquad \tilde
T_{i}(\lambda_{k})=P_{k,i}^{-1},\qquad 0\leq i,j,k\leq N,$$
thus $$\sum_{k=0}^{N}\lambda_{k}^{j}\tilde
T_{i}(\lambda_{k})=\delta_{i,j},\qquad 0\leq i,j\leq N.$$
We deduce
$$\begin{array}{lll}
\ds\int_{I}v^{j}Mf(v)dv & = & \ds\int_{I}w^{j}f(w)dw,\qquad 0\leq j\leq N,
\end{array}
$$
which implies $M\circ M=M$.
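Numerically, the identity $\sum_{k}\lambda_{k}^{j}\tilde T_{i}(\lambda_{k})=\delta_{i,j}$ reduces to $P\,P^{-1}=I$ and can be checked directly (an illustration with arbitrary distinct nodes, using $\tilde T_i(X)=\sum_k Q_{k,i}X^k$ as above):

```python
import numpy as np

N = 4
lam = np.linspace(-0.8, 0.8, N + 1)            # arbitrary distinct nodes
P = np.vander(lam, N + 1, increasing=True).T   # P[j, k] = lam_k**j
Q = np.linalg.inv(P @ P.T)                     # Q = (P P^T)^{-1}

# values of tilde T_i at the nodes: tilde T_i(lam_k) = sum_m Q[m, i] lam_k**m
T_at_nodes = P.T @ Q                           # entry [k, i] = tilde T_i(lam_k)

assert np.allclose(T_at_nodes, np.linalg.inv(P))   # = (P^{-1})_{k,i}
assert np.allclose(P @ T_at_nodes, np.eye(N + 1))  # sum_k lam_k^j T_i(lam_k) = delta
```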
\item[$\bullet$] $M$ is an orthogonal projection (for the inner
product $\phi$), because it is a self-adjoint projector:
\[\begin{split}
\phi(Mf,g) & = \int_{I}Mf(v)g(v)\rho(v)dv\\
&=
\sum_{i=0}^{N}\left(\int_{I}w^{i}f(w)dw\right)
\left(\int_{I}\tilde T_{i}(v)g(v)dv\right)\\
& =
\ds\sum_{i=0}^{N}\sum_{k=0}^{N}Q_{k,i}
\left(\int_{I}w^{i}f(w)dw\right)
\left(\int_{I}v^{k}g(v)dv\right)\,\,=\,\,\phi(f,Mg),
\end{split}
\]
as the matrix $Q$ is symmetric.
\item[$\bullet$] The Maxwellian $M$ satisfies the moment conditions:
\begin{equation*}
\left\{\begin{array}{llll}
\ds\int_{I}v^i Mf(v) dv & = & \ds\int_{I}v^i f(v) dv & i=0,\dots,N,\\\\
\ds\int_{I}v^{N+1} Mf(v) dv & = &\ds \sum_{i=0}^N a_i \int_{I}v^i f(v) dv &
\end{array}
\right..
\end{equation*}
In fact, the first conditions have already been established, and the
second one results from the following computation, using
\eqref{cond_moments_synthese} and the fact that $\chi_{A}(v)\tilde T_{i}(v)$
is a polynomial of degree $2N+1$:
$$ \begin{array}{llll}
\ds\int_{I}\chi_{A}(v)Mf(v)dv & = & \ds\sum_{i=0}^{N}
\left(\int_{I}w^{i}f(w)dw\right)\int_{I}
\frac{\chi_{A}(v)\tilde T_{i}(v)}{\rho(v)}dv\\\\
& =
&\ds\sum_{i=0}^{N}\left(\int_{I}w^{i}f(w)dw\right)
\sum_{k=0}^{N}\chi_{A}(\lambda_{k})\tilde
T_{i}(\lambda_{k})=0.
\end{array}$$
\item[$\bullet$] By Corollary \ref{choix_Q}, Prop. \ref{propphiK} and Lemma \ref{phiV}, the inner product $\Phi$ satisfies the assumptions of Lemma \ref{estimation_formelle}. This concludes the proof of Theorem \ref{th maxwellienne}.\cqfd
\end{itemize}
\subsection{From Th. \ref{th maxwellienne} to Th. \ref{th stabilite}: Uniform stability estimate on the BGK model}
The first point is to control the collision term, which is done by the following lemma.
\begin{Lemma} Define $\rho$ and $\Phi$ as in Th. \ref{th maxwellienne}. Then
\begin{equation}\label{inegalite_lemme}
\begin{split}
\ds\int_{\r}\int_{I}f_{N}^{\eps}L(f_{N}^{\eps})\rho(v)dvdx \leq & \left(\Lambda_{N,d}
\left(\int_{I}|q(v)|^{2}\rho(v)dv\right)^{1/2}-\lambda\right)\\
&\ \int_{\r}\int_{I}|f_{N}^{\eps}|^{2}\rho(v)dvdx,
\end{split}
\end{equation}
with
\begin{equation*}
\Lambda_{N,d}=\sqrt{\sum_{j=0}^{d}\alpha_{j}^{2}}\sqrt{\sum_{j=0}^{d}\sum_{k=0}^{N}\lambda_{k}^{2j}}
=\sqrt{\sum_{j=0}^{d}\alpha_{j}^{2}}\sqrt{\sum_{k=0}^{N}\frac{1-\lambda_{k}^{2d+2}}{1-\lambda_{k}^{2}}}.
\end{equation*}\label{controlL}
\end{Lemma}
\noindent\underline{Proof}: We have
\[\begin{split}
&\int_{\r}\int_{I}f_{N}^{\eps}(t,x,v)L(f_{N}^{\eps})(t,x,v)\rho(v)dvdx\\
&\ = \int_{\r}\int_{I}f_{N}^{\eps}(t,x,v)q(v)\sum_{j=0}^{d}\alpha_{j}\left(\int_{I}w^{j}f_{N}^{\eps}(t,x,w)dw\right)\rho(v)dvdx\\
&\qquad
-\lambda\int_{\r}\int_{I}|f_{N}^{\eps}(t,x,v)|^{2}\rho(v)dvdx.\\
\end{split}
\]
So
\[\begin{split}
&\int_{\r}\int_{I}f_{N}^{\eps}(t,x,v)L(f_{N}^{\eps})(t,x,v)\rho(v)dvdx\\
&\ = \int_{\r}dx\left(\ds\sum_{j=0}^{d}\alpha_{j}\int_{I}w^{j}f_{N}^{\eps}(t,x,w)dw\right)
\left(\int_{I}f_{N}^{\eps}(t,x,v)q(v)\rho(v)dv\right)\\
&\qquad
-\lambda\int_{\r}\int_{I}|f_{N}^{\eps}(t,x,v)|^{2}\rho(v)dvdx.
\end{split}\]
We can control the moments of $f_{N}^{\eps}$ in the following way:
\[
\left|\int_{I}w^{j}f_{N}^{\eps}(t,x,w)dw\right|^{2}\leq
\left(\int_{I}\frac{w^{2j}}{\rho(w)}dw\right)\left(\int_{I}|f_{N}^{\eps}(t,x,w)|^{2}\rho(w)dw\right),
\]
and the assumption \eqref{conditions moments poids} implies
\begin{equation*}
\left|\int_{I}w^{j}f_{N}^{\eps}(t,x,w)dw\right|^{2}
\leq \left(\sum_{k=0}^{N}\lambda_{k}^{2j}\right)\phi(f_{N}^{\eps},f_{N}^{\eps}).
\end{equation*}
We deduce that
\begin{equation*}
\left|\sum_{j=0}^{d}\alpha_{j}\int_{I}w^{j}f_{N}^{\eps}(t,x,w)dw\right|^{2}\leq
\left(\sum_{j=0}^{d}\alpha_{j}^{2}\right)
\left(\sum_{j=0}^{d}\sum_{k=0}^{N}\lambda_{k}^{2j}\right)\phi(f_{N}^{\eps},f_{N}^{\eps}).
\end{equation*}
Moreover, we have
\begin{equation*}
\left|\int_{I}f_{N}^{\eps}(t,x,v)q(v)\rho(v)dv\right|^{2}=\phi(f_{N}^{\eps},q)^{2}\leq\phi(q,q)\,\phi(f_{N}^{\eps},f_{N}^{\eps}).
\end{equation*}
Thus, combining the last two inequalities and using the Cauchy--Schwarz inequality, we obtain
\[\begin{split}
\int_{\r}\!\!\int_{I}f_{N}^{\eps}L(f_{N}^{\eps})\rho(v)dvdx \leq&
\sqrt{\sum_{j=0}^{d}\alpha_{j}^{2}}\sqrt{\sum_{j=0}^{d}\sum_{k=0}^{N}
\lambda_{k}^{2j}}\sqrt{\phi(q,q)}\,
\int_{\r}\phi(f_{N}^{\eps},f_{N}^{\eps})dx\\
&-\lambda\int_{\r}\phi(f_{N}^{\eps},f_{N}^{\eps})dx,
\end{split}\]
which is the desired estimate.
\cqfd
\medskip
Combining Th. \ref{th maxwellienne} and Lemma \ref{controlL}, we can obtain stability estimates for \eqref{BGK} uniform in $\eps$:
\begin{Prop} Let $f_{N}^{\eps}$ be a solution of \eqref{BGK} with \eqref{initialdata}. Then, the following estimate holds for any $t>0$:
\begin{equation}\label{estimation_de_stabilite_cinetique}
\sup_{\eps>0}\int_{\r}\int_{I}|f_{N}^{\eps}(t,x,v)|^{2}\rho(v)dvdx\leq
e^{tC_{N,q}}\int_{\r}\int_{I}|f^{0}(x,v)|^{2}\rho(v)dvdx,
\end{equation}
where
\begin{equation*}
C_{N,q}=2\left(\ds\Lambda_{N,d}
\left(\int_{I}|q(v)|^{2}\rho(v)dv\right)^{1/2}-\lambda\right),
\end{equation*}
and we recall
\[
\Lambda_{N,d}=\sqrt{\sum_{j=0}^{d}\alpha_{j}^{2}}\sqrt{\sum_{j=0}^{d}\sum_{k=0}^{N}\lambda_{k}^{2j}}
=\sqrt{\sum_{j=0}^{d}\alpha_{j}^{2}}\sqrt{\sum_{k=0}^{N}\frac{1-\lambda_{k}^{2d+2}}{1-\lambda_{k}^{2}}}.
\]\label{BGKuniform}
\end{Prop}
\noindent \underline{Proof}:
It is straightforward using Gronwall's lemma since, according to \eqref{bornes_stab_BGK} and \eqref{inegalite_lemme},
we have
\[\begin{split}
\frac{d}{dt}\int_{\r}\int_{I}|f_{N}^{\eps}(t,x,v)|^{2}\rho(v)dvdx\leq
&2\left(\ds\Lambda_{N,d}
\left(\int_{I}|q(v)|^{2}\rho(v)dv\right)^{1/2}-\lambda\right)\\
&\int_{\r}\int_{I}|f_{N}^{\eps}(t,x,v)|^{2}\rho(v)dvdx.
\end{split}
\]
\cqfd
\medskip
We are now ready to conclude the proof of Th. \ref{th stabilite}: we show that we can pass to the limit $\eps\rightarrow 0$ in the BGK model \eqref{BGK}
to obtain a stability estimate on the hyperbolic system \eqref{systeme_hyperbolique}.
We fix $N\geq d$. Using \eqref{estimation_de_stabilite_cinetique}, we can see that the family
$(f_N^{\eps})_{\eps>0}$ is bounded in the space
$L^{2}(]0,+\infty[_{loc}\times \r_{x}\times I_{v},\rho(v)dtdxdv)$. Thus there exists a sequence
$\eps_{k}\underset{k\rightarrow\infty}{\longrightarrow}0$ such that
\[
f_{N}^{\eps_{k}}\underset{k\rightarrow\infty}{\tw} f_{N}
\in L^{2}\left(]0,+\infty[_{loc}\times \r_{x}\times I_{v},\rho_{N}(v)dtdxdv\right).
\]
Therefore
\[
\del_{t}f_{N}^{\eps_{k}}+v\del_{x}f_{N}^{\eps_{k}}\underset{k\rightarrow\infty}{\tw}
\del_{t}f_{N}+v\del_{x}f_{N},\qquad L(f_{N}^{\eps_{k}})\underset{k\rightarrow\infty}{\tw} L(f_{N})
\quad\mbox{in}\quad \mathcal{D}'(]0,\infty[\times \r\times I),
\]
from which we deduce
\[
M(f_{N}^{\eps_{k}})-f_{N}^{\eps_{k}}=\eps_{k}(\del_{t}f_{N}^{\eps_{k}}+v\del_{x}f_{N}^{\eps_{k}}-L(f_{N}^{\eps_{k}}))
\underset{k\rightarrow\infty}{\tw} 0\quad\mbox{in}\quad \mathcal{D}'(]0,\infty[\times \r\times I),
\]
and thus
\[
M(f_{N}^{\eps_{k}})\underset{k\rightarrow\infty}{\tw} f_{N}\quad\mbox{in}\quad
\mathcal{D}'(]0,\infty[\times \r\times I).
\]
Hence, passing to the limit in \eqref{conditions moments maxwellienne}, we show that the function $f_{N}$ is a
kinetic interpretation of the system \eqref{systeme_hyperbolique}, in the sense that
$\left(\mu_{i}=\int_{I}v^{i}f_{N}dv\right)_{0\leq i\leq N}$ satisfies \eqref{systeme_hyperbolique}. As this system is linear and hyperbolic, it has a unique solution for a given initial data. Consequently, any solution of \eqref{systeme_hyperbolique} can be obtained as the moments of a limit $f_N$ of $f_N^\eps$.
Moreover, $f_{N}$ also satisfies the bound \eqref{estimation_de_stabilite_cinetique}:
\begin{equation*}
\forall t>0,\qquad \int_{\r}\int_{I}|f_{N}(t,x,v)|^{2}\rho(v)dvdx\leq
e^{tC_{N,q}}\int_{\r}\int_{I}|f^{0}(x,v)|^{2}\rho(v)dvdx.
\end{equation*}
Thus we obtain the estimate \eqref{estimation_de_stabilite_macro}, since
\[
\left|\mu_{i}(t,x)\right|^{2}=\left|\int_{I}v^{i}f_{N}(t,x,v)dv\right|^{2}
\leq \left(\sum_{k=0}^{N}\lambda_{k}^{2i}\right)\int_{I}|f_{N}(t,x,v)|^{2}\rho(v)dv,
\]
according to the assumptions on $\rho_{N}$. The proof of Th. \ref{th stabilite} is now complete.\cqfd
\section{Error estimate: Proof of Th. \ref{errorestimate} and Prop. \ref{resultat_smooth_case}}
\subsection{Estimates provided by the model}
In order to prove Th. \ref{errorestimate}, we will need good smoothness properties of the solution to the exact equation \eqref{eq_neutronics}. Fortunately, this model is very simple to manipulate and the estimates we need are easy to obtain.
Let us first start with the support in velocity
\begin{Prop} \label{support_vitesse_solution}
Assume \eqref{initialdata}, \eqref{forme_noyau} with $I\subset [-1,\ 1]$. Then the solution to \eqref{eq_neutronics} satisfies
\begin{equation*}\forall t\geq 0,\qquad a.e.\,\,x\in\r,\qquad {\rm supp_v}\, f(t,x,.)\subset I\subset [-1,\ 1].
\end{equation*}
\end{Prop}
\noindent\underline{Proof}:
If $f$ is a solution of \eqref{eq_neutronics} with $d=0$, then, using the Stokes formula, we have, formally
\[
\begin{split}
\frac{d}{dt}\int_{\r}|f(t,x,v)|^{2}dx & = 2\int_{\r} f(t,x,v)q(v)\left(\int_{I}f(t,x,v^{*})dv^{*}\right)dx\\
&\qquad -2\lambda\int_{\r}|f(t,x,v)|^{2}dx\\
& \leq \ds 2q(v)\int_{\r} f(t,x,v)\left(\int_{I}f(t,x,v^{*})dv^{*}\right)dx.
\end{split}
\]
Therefore, integrating in time, we obtain
\[
\begin{split}
\int_{\r}|f(t,x,v)|^{2}dx & \leq \ds \int_{\r}|f^{0}(x,v)|^{2}dx \\
&\qquad + 2q(v)\int_{0}^{t}\int_{\r} f(s,x,v)\left(\int_{I}f(s,x,v^{*})dv^{*}\right)dxds,
\end{split}
\]
and since supp$\,q\subset I$, we get the result.
\cqfd
The model \eqref{eq_neutronics} also propagates the $H^k$-norm of the solution
\begin{Prop}
Assume \eqref{initialdata}, \eqref{forme_noyau} with $I\subset [-1,\ 1]$. Then the solution to \eqref{eq_neutronics} satisfies
$$\forall t\geq 0,\qquad \|f(t)\|_{H^k(\r,\ L^{2}(I))}\leq e^{Ct}\|f^{0}\|_{H^k(\r,\ L^{2}(I))},$$
where $C=\|q\|_{L^{2}(I)}-\lambda.$ \label{hksolution}
\end{Prop}
\noindent \underline{Proof}: First note that for any $k$, $\partial_x^k f$ is also a solution to \eqref{eq_neutronics} with $\partial_x^k f^0$ as initial data. Then
\[\begin{split}
\frac{d}{dt}\|f(t)\|^{2}_{H^k(\r,\ L^{2}(I))}=&
2\sum_{p=0}^{k}\int_{\r}\left(\int_{I}\del_{x}^{p}f(t,x,v)q(v)dv\right)
\left(\int_{I}\del_{x}^{p}f(t,x,v^{*})dv^{*}\right)dx\\
&-2\lambda \|f(t)\|^{2}_{H^k(\r,\ L^{2}(I))}.
\end{split}\]
Using Prop. \ref{support_vitesse_solution} and the H\"{o}lder inequality, we easily obtain
\[
\frac{d}{dt}\|f(t)\|^{2}_{H^k(\r,\ L^{2}(I))}\leq 2\left(\|q\|_{L^{2}(I)}-\lambda\right)\|f(t)\|^{2}_{H^k(\r,\ L^{2}(I))},\]
and a simple Gronwall lemma gives the result.
\cqfd
We may finally conclude from Props. \ref{support_vitesse_solution}-\ref{hksolution} that
\begin{Cor}
Assume \eqref{initialdata}, \eqref{forme_noyau} with $I\subset [-1,\ 1]$. Then the moments of the solution to \eqref{eq_neutronics} satisfy
\begin{equation}\label{propagation_moments}
\forall i\in\N,\quad\forall t\geq 0,\qquad \|\mu^{f}_{i}(t)\|_{L^{2}(\r)}\leq{}
e^{Ct}\|f^{0}\|_{L^{2}(\r^{2})},
\end{equation}
and
\[
\sum_{i=0}^N \|\mu^{f}_{i}(t)\|_{L^{2}(\r)}\leq \, (2N+1)^{1/2}\,e^{Ct}\,\|f^0\|_{L^2(\r^2)}.
\]
where
$C=\|q\|_{L^{2}(I)}-\lambda.$ \label{hkmoments}
\end{Cor}
\noindent\underline{Proof.} Notice that
\[
|\mu_i^f|\leq \int_{-1}^1 |v|^i f(t,x,v)\,dv\leq \frac{1}{\sqrt{2i+1}}\,\left(\int_{-1}^1 |f(t,x,v)|^2\,dv\right)^{1/2},
\]
by Cauchy-Schwarz. Hence
\[
\|\mu^{f}_{i}(t)\|_{L^{2}(\r)}\leq \frac{1}{\sqrt{2i+1}}\,\|f(t)\|_{L^2(\r^2)},
\]
and one concludes by using Prop. \ref{hksolution}.\cqfd
\subsection{Proof of Prop. \ref{resultat_smooth_case}: Error estimate in the smooth case}
Since the functions $(\mu_{i})$ are smooth in the space variable, we have by \eqref{method_moments_d=0}, for all~$0\leq i\leq N$,
\[
\begin{split}
&\ds\frac{d}{dt}\|\mu_{i}(t)\|_{L^{2}(\r)}^{2} = 2\ds\int_{\r}\mu_{i}(t,x)\del_{t}\mu_{i}(t,x)dx\\
&\quad = -2\int_{\r}\mu_{i}(t,x)\del_{x}\mu_{i+1}(t,x)dx\\
&\quad + 2\gamma_{i}\int_{\r}\mu_{i}(t,x)\mu_{0}(t,x)dx - 2\lambda\int_{\r}|\mu_{i}(t,x)|^{2}dx,\\
\end{split}
\]
hence, by the Cauchy--Schwarz inequality,
\[
\begin{split}
&\ds\frac{d}{dt}\|\mu_{i}(t)\|_{L^{2}(\r)}^{2}
\leq 2\|\mu_{i}(t)\|_{L^{2}(\r)}\|\del_{x}\mu_{i+1}(t)\|_{L^{2}(\r)}\\
&\qquad\qquad+2|\gamma_{i}|\;\|\mu_{i}(t)\|_{L^{2}(\r)}\|\mu_{0}(t)\|_{L^{2}(\r)}
- 2\lambda\|\mu_{i}(t)\|_{L^{2}(\r)}^{2}.
\end{split}\]
Thus, we obtain
\begin{equation*}
\begin{array}{lll}
\ds \frac{d}{dt}\|\mu_{i}(t)\|_{L^{2}(\r)} & \leq & \|\del_{x}\mu_{i+1}(t)\|_{L^{2}(\r)}+|\gamma_{i}|\,\|\mu_{0}(t)\|_{L^{2}(\r)}
-\lambda\|\mu_{i}(t)\|_{L^{2}(\r)}.
\end{array}
\end{equation*}
Of course the same computation can be performed on the $\del_{x}^{k}\mu_{i}$, obtaining
\begin{equation}\label{relation_entre_les_derivees_des_delta_i}
\begin{split}
&\forall k\in\N,\quad \forall i\in\{0,\dots,N\},\\
&\frac{d}{dt}\|\del_{x}^{k}\mu_{i}(t)\|_{L^{2}(\r)} \leq
\|\del_{x}^{k+1}\mu_{i+1}(t)\|_{L^{2}(\r)}+|\gamma_{i}|\;
\|\del_{x}^{k}\mu_{0}(t)\|_{L^{2}(\r)}\\
&\qquad\qquad\qquad\qquad-\lambda\|\del_{x}^{k}\mu_{i}(t)\|_{L^{2}(\r)}.
\end{split}
\end{equation}
This sequence of inequalities lets us control $\|\mu_{0}(t)\|_{L^{2}(\r)}$ in terms of the ``last
derivatives'', namely $\left(\|\del_{x}^{N}\mu_{i}\|_{L^{2}(\r)}\right)_{0\leq i\leq N}$. To do that, we set
\[
\forall t\geq 0,\quad\forall k\in\{0,\dots,N\},\qquad H_{k}(t):=\sum_{i=0}^{k}\|\del_{x}^{k}\mu_{i}\|_{L^{2}(\r)}.
\]
The coefficients $\gamma_i$ are easily bounded by $\|q\|_{L^2}$. Hence the inequalities \eqref{relation_entre_les_derivees_des_delta_i} imply
\begin{equation}\label{relation_entre_H_k}
\forall t\geq 0,\quad\forall k\in\{0,\dots,N\},\quad
\ds \frac{d}{dt}H_{k}(t) \leq
H_{k+1}(t)+\left(2\|q\|_{L^{2}(I)}-\lambda\right)H_{k}(t).
\end{equation}
We now use
\begin{Lemma}
Let $(H_{k}(t))_{k\in\N}$ be a sequence of nonnegative $C^{1}$ functions such that:
$$\forall k\in\N,\quad \forall t\geq 0,\qquad \left\{\begin{array}{ll}
H_{k}'(t)\leq CH_{k}(t)+H_{k+1}(t)\\ H_{k}(0)=0
\end{array}\right.,$$
where $C>0$ is a numerical constant independent of $k$.\\
Then, we have:
$$\forall p\in\N^{*},\quad \forall t\geq 0,\qquad
H_{0}(t)\leq \frac{1}{(p-1)!}\int_{0}^{t}(t-s)^{p-1}e^{C(t-s)}H_{p}(s)ds.$$
\end{Lemma}
\noindent \underline{Proof}:
First, the assumption may be rewritten as
$$\forall k\in\N,\quad \forall t\geq 0,\qquad
\frac{d}{dt}\left(H_{k}(t)e^{-Ct}\right)\leq H_{k+1}(t)e^{-Ct},$$
thus, integrating in $t$, we obtain:
$$\forall k\in\N,\quad \forall t\geq 0,\qquad
H_{k}(t)\leq \int_{0}^{t}e^{C(t-s)}H_{k+1}(s)ds.$$
A simple induction allows us to conclude.
\cqfd
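In the equality case $H_k'=CH_k+H_{k+1}$, $H_k(0)=0$ for $k<p$, the bound of the lemma is an identity; this collapse of the iterated integrals can be checked numerically (an illustration only, with the assumed data $H_p\equiv 1$):

```python
import numpy as np
from math import factorial

C, p, T = 0.7, 3, 2.0
t = np.linspace(0.0, T, 20001)
dt = t[1] - t[0]

def cumtrap(y):
    """Cumulative trapezoidal integral of y over the grid t, starting from 0."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1])) * dt
    return out

# equality case of the chain: H_k(t) = int_0^t e^{C(t-s)} H_{k+1}(s) ds,
# iterated p times starting from H_p = 1
H = np.ones_like(t)
for _ in range(p):
    H = np.exp(C * t) * cumtrap(np.exp(-C * t) * H)

# closed formula of the lemma at the final time T (with H_p = 1):
# H_0(T) = 1/(p-1)! * int_0^T (T-s)^{p-1} e^{C(T-s)} H_p(s) ds
y = (T - t) ** (p - 1) * np.exp(C * (T - t)) / factorial(p - 1)
bound = cumtrap(y)[-1]

assert abs(H[-1] - bound) < 1e-5 * bound
```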
\bigskip
\noindent\underline{End of the proof of Prop. \ref{resultat_smooth_case}}: Applying the previous lemma, we obtain
$$\forall t\geq 0,\qquad H_{0}(t)\leq \frac{1}{(N-1)!}\int_{0}^{t}(t-s)^{N-1}e^{(2\|q\|_{L^{2}(I)}-\lambda)(t-s)}H_{N}(s)ds,$$
and thus, for all $t\in [0,T]$,
\[\begin{split}
&\|\mu_{0}\|_{L^{2}(\r)} \leq \ds\frac{1}{(N-1)!}\int_{0}^{t}(t-s)^{N-1}e^{(2\|q\|_{L^{2}(I)}-\lambda)(t-s)}\sum_{i=0}^{N}\|\del_{x}^{N}\mu_{i}(s)\|_{L^{2}(\r)}ds\\
&\quad \leq \ds\frac{e^{(2\|q\|_{L^{2}(I)}-\lambda)T}}{(N-1)!}\int_{0}^{t}(t-s)^{N-1}\sum_{i=0}^{N}\|\del_{x}^{N}\mu_{i}(s)\|_{L^{2}(\r)}ds\\
\end{split}\]
which gives
\begin{equation}\label{estimation_provisoire_smooth_case}
\sup_{0\leq t\leq T}\|\mu_{0}(t,.)\|_{L^{2}(\r)} \leq C \frac{T^{N}}{N!}\times
\sum_{i=0}^{N}\|\del_{x}^{N}\mu_{i}\|_{L^\infty_tL^{2}_x},
\end{equation}
where the constant $C$ depends on $T$, $\lambda$ and $q$. This concludes the proof of Prop. \ref{resultat_smooth_case}.\cqfd
\subsection{Proof of Th. \ref{errorestimate}: Error estimate in the general case}
The general idea of the proof is to regularize the initial data. Then we have to bound three terms. First, the error between the exact solution and the truncated hierarchy for this regularized initial data; this term is controlled by Prop. \ref{resultat_smooth_case}. The next term is the difference between the solution for the non-regularized initial data and the solution for the regularized one, both solutions to the exact equation \eqref{eq_neutronics}; this is bounded using the estimates for \eqref{eq_neutronics}. The final term is the difference between the solution for the non-regularized initial data and the solution for the regularized one, but now both solutions to the truncated hierarchy \eqref{method_moments_d=0}; we bound this term thanks to assumption
\eqref{estimation_de_stabilite_macro_d=0_sous_hypotheses}.
\begin{itemize}
\item\textit{Step $1$: Regularization of the initial data}
\noindent We fix $\eps>0$ and we choose $f_{\eps}^0\in
H^{k}(\r^2)$, with ${\rm
supp_v}\,f_{\eps}^0\subset I$, and such that (see the appendix for more details)
\begin{equation}\label{hypotheses sur la regularisee}
\begin{array}{lll}
a.e.\,\,v\in I, & f_{\eps}^0(.,v)\in C^{\infty}(\r),\\\\
& \|f_{\eps}^0(v)-f^0(v)\|_{L^2(\r)}\leq
C_k\eps^k\|f^0(v)\|_{H^k(\r)},\\\\
& \|f_{\eps}^0(v)\|_{H^m(\r)}\leq
C_k\eps^{k-m}\|f^0(v)\|_{H^k(\r)}\quad\forall m>k.
\end{array}
\end{equation}
Let $f_{\eps}=f_{\eps}(t,x,v)$ be the unique solution to Eq. \eqref{eq_neutronics} with $f_\eps(t=0)=f_\eps^0$.
We define the moments as usual
\begin{equation*}
\mu_i^{f_{\eps}}(t,x)=\int_{I} v^i\,f_{\eps}(t,x,v)\,dv,\qquad i\in\N.
\end{equation*}
Of course, those moments still satisfy the hierarchy
\begin{equation*}
\partial_t \mu_i^{f_{\eps}}+\partial_x \mu_{i+1}^{f_{\eps}}=\ds\gamma_i
\mu_0^{f_{\eps}}-\lambda \mu_i^{f_{\eps}}\qquad
i\in \N.
\end{equation*}
We denote by $\mu_i^\eps$ the solution to the truncated hierarchy \eqref{method_moments_d=0} for the initial data
\begin{equation}
\mu_i^{\eps}(t=0,x)=\mu_i^{f^0_{\eps}}(x)=\int_I v^i f^0_{\eps}(x,v)\,dv,\qquad
0\leq i\leq N.
\end{equation}
\smallskip
\item\textit{Step $2$: Error estimates in term of $\eps$.}
First note that $\mu_i^\eps-\mu_i^{f_\eps}$ satisfies the assumptions of Prop. \ref{resultat_smooth_case}. On the one hand
\[\begin{split}
\sum_{i\leq N}\|\partial_x^N (\mu_{i}^\eps-\mu_i^{f_\eps})\|_{L^2}&\leq \sum_{i\leq N} \left(\|\partial_x^N \mu_{i}^\eps\|_{L^2}+\|\partial_x^N \mu_{i}^{f_\eps}\|_{L^2}\right)\\
&\leq \sum_{i\leq N}\|\partial_x^N \mu_{i}^\eps\|_{L^2}+e^{Ct}\,(2N+1)^{1/2} \|\partial_x^N\,f^0_\eps\|_{L^2},
\end{split}\]
by Corollary \ref{hkmoments}. On the other hand by applying \eqref{estimation_de_stabilite_macro_d=0_sous_hypotheses} to $\mu_i^\eps$ and $0$, one gets
\[
\sum_{i\leq N}\|\partial_x^N\mu_{i}^\eps\|_{L^\infty_t L^2_x}\leq C\,N^\gamma\,\|\del_{x}^{N}f^0_\eps\|_{L^2_{x,v}}.
\]
So applying Prop. \ref{resultat_smooth_case}, we have for $\gamma'=\max(\gamma,1/2)$
\begin{equation}\label{formule erreur 1}
\begin{split}
\|\mu_0^{\eps}-\mu_0^{f_{\eps}}\|_{L^{\infty}([0,T],L^2(\r))}&\leq C\,
\frac{N^{\gamma'}T^N}{N!}\times
\|\del_x^N f^{0}_{\eps}\|_{L^2(\r^{2})}\\
&\leq C_k\,\eps^{k-N}\,
\frac{N^{\gamma'}T^N}{N!}\times
\|f^{0}\|_{H^k_x\,L^2_v},
\end{split}\end{equation}
by \eqref{hypotheses sur la regularisee}.
We can use the stability estimate \eqref{estimation_de_stabilite_macro_d=0_sous_hypotheses} to control
\begin{equation}\label{formule erreur 2}
\|\mu_0^{\eps}-\mu_0\|_{L^{\infty}([0,T],L^2(\r))}
\leq CN^{\gamma}\|f^{0}-f^{0}_{\eps}\|_{L^{2}}\leq C_k\,N^\gamma\,\eps^k\|f^{0}\|_{H^k_x\,L^2_v},
\end{equation}
again by \eqref{hypotheses sur la regularisee}.
Finally, we can control $\|\mu_0^{f}-\mu_0^{f_{\eps}}\|_{L^{\infty}([0,T],L^2(\r))}$ according to Corollary \ref{hkmoments}:
\begin{equation}\label{formule erreur 3}\begin{split}
\|\mu_0^{f}-\mu_0^{f_{\eps}}\|_{L^{\infty}([0,T],L^2(\r))}&=\|\mu_0^{f-f_{\eps}}\|_{L^{\infty}([0,T],L^2(\r))}\leq C\|f^{0}-f^{0}_{\eps}\|_{L^{2}(\r^{2})}\\
&\leq C_k\,\eps^{k}\,\|f^{0}\|_{H^k_x\,L^2_v}.
\end{split}\end{equation}
\item\textit{Step $3$: Choice of the parameter $\eps$}
We deduce from \eqref{formule erreur 1}, \eqref{formule erreur 2} and \eqref{formule erreur 3} the complete error estimate
\begin{equation}\label{erreur finale}
\|\mu_0-\mu_0^{f}\|_{L^{\infty}([0,T],L^2(\r))}\leq C\,
\left(N^{\gamma'}\,\frac{T^N}{N!}\,\eps^{k-N}+N^{\gamma}\,\eps^k\right)\,\|f^{0}\|_{H^k_x\,L^2_v},
\end{equation}
where the numerical constant $C\geq 0$ depends on $k$, $T$, $q$, and $\lambda$.
We of course choose the ``best'' value of $\eps$,
which minimizes the error.
When $N>k$, we get for $\gamma\geq 1/2$
\[
\eps=\eps^{*}=\left(\frac{T^N}{N!}\right)^{1/N}\underset{N\rightarrow \infty}{\sim}
\frac{eT}{N}.
\]
If $\gamma<1/2$ then one takes instead
\[
\eps=\eps^{*}=\left(\frac{T^N\,N^{1/2-\gamma}}{N!}\right)^{1/N}\underset{N\rightarrow \infty}{\sim}
\frac{eT}{N},
\]
with the same asymptotic behaviour.
In both cases
\begin{equation}\label{conclusion}
\begin{split}
\|\mu_0-\mu_0^{f}\|_{L^{\infty}([0,T],L^2(\r))} & \leq
\ds C_{T,q,\lambda,k}\,N^{\gamma}\,\left(\eps^{*}\right)^{k}\\
& \underset{N\rightarrow \infty}{\sim}
\frac{C_{T,q,\lambda,k}}{N^{k-\gamma}},
\end{split}
\end{equation}
which concludes the proof of Theorem \ref{errorestimate}.\cqfd
\end{itemize}
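As a quick numerical sanity check of the Stirling asymptotics used above for $\eps^{*}$ (an illustrative sketch only, outside the proof; the value $T=2$ and the use of Python's \texttt{math.lgamma} are arbitrary choices):

```python
import math

# Check the asymptotics eps* = (T^N / N!)^{1/N} ~ e*T/N used in Step 3.
T = 2.0  # arbitrary illustrative time horizon
for N in (100, 400, 1600):
    log_eps = N * math.log(T) - math.lgamma(N + 1)  # log of T^N / N!
    eps_star = math.exp(log_eps / N)
    ratio = eps_star / (math.e * T / N)
    # Stirling gives eps* = (e*T/N) * exp(-log(2*pi*N)/(2N)), so ratio -> 1
    assert abs(ratio - 1.0) < 0.05
```

The correction factor $\exp(-\log(2\pi N)/(2N))$ explains why the ratio approaches $1$ only polynomially slowly in $\log N/N$.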
\section{Proof of Th. \ref{th stabilite tchebychev}: Example of the Tchebychev points\label{Tcheby}}
We now make the assumptions in Th. \ref{th stabilite tchebychev} and in particular that $d=0$ and that the $\lambda_k$ satisfy \eqref{tcheblambda}.
The constant $\Lambda_{N,0}$ defined by \eqref{constante Lambda} satisfies
\begin{equation}
\Lambda_{N,0}=(N+1)^{1/2}.\label{Lambda0}
\end{equation}
Moreover we define
\[
\rho_N(v)=\frac{\pi}{N+1}\,\sqrt{1-v^2}.
\]
In fact, the function $\frac{1}{\rho_{N}}$ is a normalization of the Tchebychev weight.
As the $\lambda_k$ give the usual Gauss--Tchebychev integration rule, we have
\begin{Prop}
We have, for all $N\geq 1$ and all $R\in\r_{2N+1}[X]$:
\begin{equation}\label{example_weight}
\int_{I}\frac{R(v)}{\rho_{N}(v)}dv=\frac{N+1}{\pi}\int_{]-1,1[}\frac{R(v)}{\sqrt{1-v^{2}}}dv
=\sum_{k=0}^N R(\lambda_k).
\end{equation}\label{rhotcheby}
\end{Prop}
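This exactness can be illustrated numerically (a sketch outside the proof; it assumes the $\lambda_k$ are the Gauss--Tchebychev nodes $\cos\bigl((2k+1)\pi/(2N+2)\bigr)$, as returned by NumPy's \texttt{chebgauss}):

```python
import numpy as np

N = 4  # N+1 Tchebychev nodes integrate exactly up to degree 2N+1
nodes, weights = np.polynomial.chebyshev.chebgauss(N + 1)  # weights = pi/(N+1)

rng = np.random.default_rng(0)
coeffs = rng.standard_normal(2 * N + 2)      # random R of degree 2N+1
R = np.polynomial.Polynomial(coeffs)

def moment(m):
    # exact value of int_{-1}^{1} v^m / sqrt(1 - v^2) dv (Wallis formula)
    if m % 2 == 1:
        return 0.0
    val = np.pi
    for j in range(2, m + 1, 2):
        val *= (j - 1) / j
    return val

exact = sum(c * moment(m) for m, c in enumerate(coeffs))
quad = np.dot(weights, R(nodes))             # pi/(N+1) * sum_k R(lambda_k)
assert abs(exact - quad) < 1e-10
```

Here \texttt{quad} equals $\frac{\pi}{N+1}\sum_k R(\lambda_k)$, which by \eqref{example_weight} should match the weighted integral exactly for any polynomial of degree at most $2N+1$.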
We may therefore apply Th. \ref{th stabilite} to this choice of $\lambda_k$ and $\rho_N$.
Compute
\[
C_N(q)=\sqrt{N+1}\,\left(\int_{-1}^1 |q(v)|^2\rho_N(v)\,dv\right)^{1/2}\leq C\,\|q\|_{L^2}.
\]
Similarly
\[
C_N(f^0)\leq C\,N^{-1/2}\,\|f^0\|_{L^2}.
\]
So by Th. \ref{th stabilite}
\[
\sup _{i\leq N} \|\mu_i(t)\|_{L^2}\leq C\,e^{C\,T\,\|q\|_{L^2}}\,\|f^0\|_{L^2}\,N^{-1/2}\, \left(\sum_{k\leq N} \lambda_k^{2i}\right)^{1/2}\leq C\,e^{C\,T\,\|q\|_{L^2}}\,\|f^0\|_{L^2},
\]
which is exactly \eqref{estimation_de_stabilite_macro_tchebychev}. Moreover
\[
\sum_{i\leq N} \|\mu_i(t)\|_{L^2}\leq C\,\|f^0\|_{L^2}\,N^{-1/2}\,\sum_{i\leq N} \left(\sum_{k\leq N} \lambda_k^{2i}\right)^{1/2}.
\]
Of course by Cauchy-Schwarz, we have, for all $0\leq L\leq N$,
\[\begin{split}
N^{-1/2}\,\sum_{i\leq N} &\left(\sum_{k=L\ldots N-L} \lambda_k^{2i}\right)^{1/2}\leq \left(\sum_{i\leq N}\sum_{k=L\ldots N-L} \lambda_k^{2i}\right)^{1/2}\\
&\leq \left(\sum_{L\leq k\leq N-L} \frac{1}{1-|\lambda_k|^2}\right)^{1/2}.
\end{split}\]
Note that
\[
1-|\lambda_k|^2\geq \frac{(k+1)^2}{C\,N^2}\quad \mbox{if}\ k\leq N/2,\quad
1-|\lambda_k|^2\geq \frac{(N-k+1)^2}{C\,N^2}\quad \mbox{if}\ k\geq N/2.
\]
Hence, if $L\leq N/2$,
\[
N^{-1/2}\,\sum_{i\leq N} \left(\sum_{k=L\ldots N-L} \lambda_k^{2i}\right)^{1/2}\leq
C\,\left(\sum_{L\leq k\leq N/2} \frac{N^2}{(k+1)^2}\right)^{1/2}\leq C\,\frac{N}{L^{1/2}}.
\]
Therefore
\[
\sum_{i\leq N} \|\mu_i(t)\|_{L^2}\leq C\,\|f^0\|_{L^2}\,(N^{1/2}\,L^{1/2}+N\,L^{-1/2}),
\]
and choosing $L=\sqrt{N}$ we obtain that this method satisfies the estimate \eqref{estimation_de_stabilite_macro_d=0_sous_hypotheses} with $\gamma=3/4$.
It only remains to apply Th. \ref{errorestimate} to conclude.\cqfd
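The growth rate $N^{3/4}$ obtained above can be observed numerically (an illustrative sketch, again assuming the Tchebychev points $\lambda_k=\cos\bigl((2k+1)\pi/(2N+2)\bigr)$; the values of $N$ are arbitrary):

```python
import numpy as np

def weighted_sum(N):
    # N^{-1/2} * sum_{i<=N} ( sum_{k<=N} lambda_k^{2i} )^{1/2}
    lam = np.cos((2 * np.arange(N + 1) + 1) * np.pi / (2 * N + 2))
    powers = lam[None, :] ** (2 * np.arange(N + 1)[:, None])  # lam_k^{2i}
    return np.sum(np.sqrt(powers.sum(axis=1))) / np.sqrt(N)

# estimate the growth exponent from two values of N; the proof predicts 3/4
e = np.log(weighted_sum(1024) / weighted_sum(256)) / np.log(4.0)
assert 0.6 < e < 0.9
```

The measured exponent sits close to $3/4$, consistent with the choice $L=\sqrt{N}$ balancing the two terms $N^{1/2}L^{1/2}$ and $N\,L^{-1/2}$.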
\section{Appendix}
The natural way to regularize is by convolution. However, to obtain a high-order approximation, one must choose the mollifier carefully.
In the $L^2$ framework things are simpler: it suffices to truncate in Fourier space.
\begin{Prop} Let $k$ be a positive integer, $f\in H^k(\r^d)$ and $\eps>0$. There
exists $f_{\eps}\in H^{\infty}(\r^d)$ such that
\begin{equation}\label{approx 1}
\|f-f_{\eps}\|_{L^2(\r^d)}\leq \eps^k \|D^k f\|_{L^2(\r^d)},
\end{equation}
\begin{equation}\label{approx 2}
\|D^m f_{\eps}\|_{L^2(\r^d)}\leq \eps^{k-m} \|D^k
f\|_{L^2(\r^d)},\quad\forall m\in\N.
\end{equation}
\end{Prop}
\noindent \underline{Proof} : We use Fourier analysis. We consider $f_{\eps}\in
L^2(\r^d)$ defined by $$\widehat{f_\eps}(\xi)=\widehat{f}(\xi)\1_{\{|\xi|\leq
1/\eps\}}.$$
First, we have
\[\begin{split}
\|\widehat{f}-\widehat{f_\eps}\|_{L^2(\r^d)}&=\left(\int_{
|\xi|>1/\eps } |\widehat{f}(\xi)|^2d\xi\right)^{1/2}\leq \left(\int_{
\r^d } \eps^{2k}
|\xi|^{2k}|\widehat{f}(\xi)|^2d\xi\right)^{1/2}\\
&=\eps^k\|\widehat{D^k
f}\|_{L^2(\r^d)},
\end{split}
\]
which proves \eqref{approx 1}.
On the other hand, for all $m\in\N$,
\[\begin{split}
\||\xi|^m\widehat{f_{\eps}}\|_{L^2(\r^d)}&=\left(\int_{
|\xi|\leq 1/\eps }|\xi|^{2m} |\widehat{f}(\xi)|^2d\xi\right)^{1/2}\!\!\leq
\left(\int_{\r^d}\eps^{2k-2m}|\xi|^{2k}|\widehat{f}(\xi)|^2d\xi\right)^{1/2}
\!\!\!\\
&=\eps^{k-m}\|\widehat{D^k f}\|_{L^2(\r^d)},
\end{split}\]
thus, $f_{\eps}\in H^m(\r^d)$ and the estimate \eqref{approx 2} holds.\cqfd
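A discrete analogue of the Proposition can be checked numerically (a sketch on the torus using NumPy's FFT; the test function $e^{\sin x}$, the grid size and the values of $\eps$ are arbitrary choices):

```python
import numpy as np

# Discrete analogue on [0, 2*pi): truncate the Fourier series of a smooth
# periodic f at frequency 1/eps and compare the L^2 tail error with
# eps^k * ||D^k f||_{L^2}, as in the first estimate of the Proposition.
n = 2048
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
f = np.exp(np.sin(x))                       # smooth periodic test function
fhat = np.fft.fft(f) / n                    # Fourier coefficients c_m
freqs = np.fft.fftfreq(n, d=1.0 / n)        # integer frequencies m

k = 3
for eps in (0.2, 0.1, 0.05):
    tail = np.abs(freqs) > 1.0 / eps
    err = np.sqrt(2 * np.pi * np.sum(np.abs(fhat[tail]) ** 2))
    dkf = np.sqrt(2 * np.pi * np.sum(np.abs((1j * freqs) ** k * fhat) ** 2))
    assert err <= eps ** k * dkf            # holds termwise, as in the proof
```

The inequality holds coefficient by coefficient, exactly as in the displayed computation: each discarded mode has $|m|>1/\eps$, so $|c_m|^2\le \eps^{2k}|m|^{2k}|c_m|^2$.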
\section{Proofs}\label{sec:proofs}
{\bf Notation:} For two real numbers $a$ and $b$, $a\propto^+ b$ means that there exists $M>0$ such that $a=M\cdot b$.
\begin{prp}\label{prp:covvar}
Suppose $U$, $V$, $W$ are univariate components of a Gaussian random vector with mean $\mu$ and positive definite covariance $\Sigma$. Assume that $\cind{U}{V}{W}$. Then $\sigma_{UV}=\sigma_{UW}\sigma_{WV}/\sigma_{WW}$ and $\sigma_{UU}=\sigma^2_{UW}\sigma_{WW}/\sigma^2_{WW}+E\left[Var\left(U|W\right)\right]$.
\begin{proof} Trivial.\end{proof}
\end{prp}
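The two identities can be verified on a concrete covariance matrix built from the structural equations $U=aW+e_1$, $V=bW+e_2$ with independent $W$, $e_1$, $e_2$, which force $\cind{U}{V}{W}$ (an illustrative sketch; all parameter values are arbitrary):

```python
import numpy as np

# U = a*W + e1, V = b*W + e2 with independent W, e1, e2 makes U and V
# conditionally independent given W; order the variables as (U, V, W).
a, b, s_w, t1, t2 = 0.7, -1.3, 2.0, 0.5, 0.9   # arbitrary parameters
Sigma = np.array([
    [a**2 * s_w + t1, a * b * s_w,     a * s_w],
    [a * b * s_w,     b**2 * s_w + t2, b * s_w],
    [a * s_w,         b * s_w,         s_w],
])
s_uv, s_uw, s_wv, s_ww = Sigma[0, 1], Sigma[0, 2], Sigma[2, 1], Sigma[2, 2]

# conditional covariance of (U, V) given W via the Schur complement
cond = Sigma[:2, :2] - np.outer(Sigma[:2, 2], Sigma[2, :2]) / s_ww
assert np.isclose(cond[0, 1], 0.0)                 # U independent of V given W
assert np.isclose(s_uv, s_uw * s_wv / s_ww)        # first identity
assert np.isclose(Sigma[0, 0], s_uw**2 / s_ww + cond[0, 0])  # second identity
```

Here `cond[0, 0]` plays the role of $E\left[Var(U|W)\right]$, which for a Gaussian vector is the constant conditional variance.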
Suppose $K$ and $K^{\prime}$ are constants and for some $a,c,d \in V$ and $B\subseteq V\setminus\{a,c,d\}$ (where $B$ may be empty) we denote $M_1=\sigma_{cd|B}\left\{\sigma_{ad|B}\sigma_{cc|B}-\sigma_{ac|B}\sigma_{cd|B}\right\}$, \\$M_2=\sigma_{ad|B}\left\{\sigma_{cd|B}\sigma_{aa|B}-\sigma_{ac|B}\sigma_{ad|B}\right\}$, $M_3(\alpha)=[(\alpha-K^{\prime})\sigma_{ac|B}\sigma_{dd|B}-K\cdot \sigma_{ad|B}\sigma_{cd|B}]$ and
\begin{equation}
L(\alpha)=\frac{\{(\alpha-K^{\prime})\rho_{ac|B}-K\rho_{ad|B}\rho_{cd|B}\}^2}{[\{(\alpha-K^{\prime})-K\rho^2_{ad|B}\}\{(\alpha-K^{\prime})-K\rho^2_{cd|B}\}]}\label{eq:l}.
\end{equation}
\begin{lem}\label{lem:initthree}
Suppose $K>0$ and for some $K^{\prime}$ and $\alpha$, $(\alpha-K^{\prime})-K\rho^2_{ad|B}> 0$ and $(\alpha-K^{\prime})-K\rho^2_{cd|B}> 0$.
Then if $M_1\cdot M_2\ge 0$ :
\begin{enumerate}
\item $\frac{\partial L(\alpha)}{\partial\alpha}=0$ if both $M_1\cdot M_3(\alpha)$ and $M_2\cdot M_3(\alpha)$ are $0$.
\item $\frac{\partial L(\alpha)}{\partial\alpha}$ has the same sign as either $M_1\cdot M_3(\alpha)$ or $M_2\cdot M_3(\alpha)$, whichever is non-zero.
\end{enumerate}
\begin{proof}
Since the denominator of \eqref{eq:l} is positive then the sign of $\partial L(\alpha)/\partial\alpha$ is the sign of the numerator of
$\partial L(\alpha)/\partial\alpha$. From quotient rule of differentiation and some algebraic manipulation we get :
\begin{align}\label{eq:dl}
\frac{\partial L(\alpha)}{\partial\alpha}\propto^+&K[(\alpha-K^{\prime})\rho_{ac|B}-K\rho_{ad|B}\rho_{cd|B}]\times\nonumber\\
&\left\{[(\alpha-K^{\prime})-K\rho^2_{ad|B}]\rho_{cd|B}[\rho_{ad|B}-\rho_{ac|B}\rho_{cd|B}]\right.\nonumber\\
&+\left.[(\alpha-K^{\prime})-K\rho^2_{cd|B}]\rho_{ad|B}[\rho_{cd|B}-\rho_{ac|B}\rho_{ad|B}]\right\}.
\end{align}
Note that $\rho_{cd|B}[\rho_{ad|B}-\rho_{ac|B}\rho_{cd|B}]\propto^+M_1$, $\rho_{ad|B}[\rho_{cd|B}-\rho_{ac|B}\rho_{ad|B}]\propto^+M_2$ and $[(\alpha-K^{\prime})\rho_{ac|B}-K\rho_{ad|B}\rho_{cd|B}]\propto^+M_3(\alpha)$.
Substituting these expressions in \eqref{eq:dl} and using the positivity of $K$, $[(\alpha-K^{\prime})-K\rho^2_{ad|B}]$ and $[(\alpha-K^{\prime})-K\rho^2_{cd|B}]$, the result follows.\hfill$\qed$
\end{proof}
\end{lem}
\noindent\emph{\bf Proof of Theorem \ref{lem:maincomp1}}.
From the assumption $Inf\left(\cind{a}{c^{\prime}}{cZ}\right)=0$. The rest follows from the identity $Inf\left(\cind{a}{c^{\prime}}{cZ}\right)+Inf\left(\cind{a}{c}{Z}\right)=Inf\left(\cind{a}{c}{c^{\prime}Z}\right)+Inf\left(\cind{a}{c^{\prime}}{Z}\right)$.\footnote{The author would like to thank the referee for drawing his attention to this equality which improved the original proof immensely.}\hfill$\qed$
Note that, from \citet{lnenicka_matus_2007}, assumptions on conditional independence and the conditional correlations do not change if we replace $\Sigma$ by $J\Sigma J$, where $J$ is the diagonal matrix with entries $1/\sqrt{\sigma_{vv}}$, $v\in V$. Thus, unless otherwise stated, w.l.o.g.\ we can assume that the diagonal elements of $\Sigma$ all equal $1$ and all the off-diagonal elements are in $(-1,1)$.
That is $\Sigma$ is the correlation matrix of $V$, but with an abuse of notation in what follows below, we still denote the correlation of $a$ and $c$ by $\sigma_{ac}$.
\noindent\emph{\bf Proof of Theorem \ref{ap:1b}.}
Note that by assumption $\sigma_{ac}=\sigma_{ax}\sigma_{cx}$, $\sigma_{az}=\sigma_{ax}\sigma_{xz}$, $\sigma_{cz}=\sigma_{cx}\sigma_{xz}$, $\sigma_{az^{\prime}}=\sigma_{ax}\sigma_{xz}\sigma_{zz^{\prime}}$ and $\sigma_{cz^{\prime}}=\sigma_{cx}\sigma_{xz}\sigma_{zz^{\prime}}$.
\noindent Part $1$. $\rhsq{a}{c}{z}=\sigma^2_{ac}(1-\sigma^2_{xz})^2/[(1-\sigma^2_{ax}\sigma^2_{xz})(1-\sigma^2_{cx}\sigma^2_{xz})]$. Now since $\sigma^2_{ax}\sigma^2_{xz}\le \sigma^2_{xz}$ and $\sigma^2_{cx}\sigma^2_{xz}\le \sigma^2_{xz}$, $\rhsq{a}{c}{z}\le\rho^2_{ac}$.
\noindent Part $2$. Assume that $x\ne z^{\prime}$ and consider the three non-trivial cases $x=a$, $x=c$ and $x\not\in\{a,c\}$. Initially assume that $\sigma_{zz^{\prime}}\ne 0$. Since $\cind{ac}{z^{\prime}}{z}$, using Proposition \ref{prp:covvar} and the positive definiteness of the covariance matrix together with $\tau^2_{z}=(1-\sigma^2_{zz^{\prime}})>0$
and by denoting $\alpha=1+(\tau_z^2/\sigma^2_{zz^{\prime}})>1$, with $B=\emptyset$, $K^{\prime}=0$, $K=1$ it follows that $\rhsq{a}{c}{z^{\prime}}=L(\alpha)$ for $\alpha\ge 1$ and $\rhsq{a}{c}{z}=L(1)$. Thus in Lemma \ref{lem:initthree} using Cauchy Schwartz inequality and $\alpha\ge 1$ it follows that for $x=a$, $M_1\propto^+\sigma_{cx}$, $M_2=0$ and
$M_3(\alpha)\propto^+\sigma_{cx}$, for $x=c$, $M_1=0$, $M_2\propto^+\sigma_{ax}$ and $M_3(\alpha)\propto^+\sigma_{ax}$ and for $x\not\in\{a,c\}$, $M_1\propto^+\sigma_{ax}\sigma_{cx}$, $M_2\propto^+\sigma_{ax}\sigma_{cx}$ and $M_3(\alpha)\propto^+\sigma_{cx}\sigma_{ax}$.
Thus for all cases $\partial L/\partial\alpha\ge 0$ and the result follows. If $\sigma_{zz^{\prime}}=0$, $z\perp\!\!\!\perp z^{\prime}$ and $z^{\prime}\perp\!\!\!\perp acz$. Thus $\rhsq{a}{c}{z^{\prime}}=\rho^2_{ac}$. The rest follows from part $1$.
For the second inequality notice that, by our assumption, $\sigma_{az^{\prime}}=\sigma_{az}\sigma_{zz^{\prime}}=\sigma_{ax}\sigma_{xz}\sigma_{zz^{\prime}}$. Since we do not assume $\cind{x}{z^{\prime}}{z}$, $\sigma_{xz}\sigma_{zz^{\prime}}$ is not necessarily equal to $\sigma_{xz^{\prime}}$. However, $\rhsq{a}{c}{z^{\prime}}=\sigma^2_{ac}(1-\sigma^2_{xz}\sigma^2_{zz^{\prime}})^2/[(1-\sigma^2_{ax}\sigma^2_{xz}\sigma^2_{zz^{\prime}})(1-\sigma^2_{cx}\sigma^2_{xz}\sigma^2_{zz^{\prime}})]\le\rho^2_{ac}$ in the same way as in part $1$.\hfill$\qed$
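The resulting ordering $\rhsq{a}{c}{z}\le\rhsq{a}{c}{z^{\prime}}\le\rho^2_{ac}$ can be confirmed numerically on random Gaussians generated by the chain $a\leftarrow x\rightarrow c$, $x\rightarrow z\rightarrow z^{\prime}$, which satisfies the conditional independences assumed in the theorem (an illustrative sketch; the coefficient ranges are arbitrary):

```python
import numpy as np

def pcorr_sq(S, i, j, cond):
    """Squared partial correlation of components i, j given the index set
    cond, via the Schur complement of the covariance matrix S."""
    A = S[np.ix_([i, j], [i, j])]
    if cond:
        B = S[np.ix_([i, j], cond)]
        D = S[np.ix_(cond, cond)]
        A = A - B @ np.linalg.solve(D, B.T)
    return A[0, 1] ** 2 / (A[0, 0] * A[1, 1])

rng = np.random.default_rng(1)
for _ in range(200):
    al, ga, de, eta = rng.uniform(-2, 2, size=4)
    # linear map from independent N(0,1) sources (x, e_a, e_c, e_z, e_z')
    M = np.array([
        [al,       1, 0, 0,   0],   # a  = al*x + e_a
        [ga,       0, 1, 0,   0],   # c  = ga*x + e_c
        [1,        0, 0, 0,   0],   # x
        [de,       0, 0, 1,   0],   # z  = de*x + e_z
        [eta * de, 0, 0, eta, 1],   # z' = eta*z + e_z'
    ])
    S = M @ M.T                     # covariance of (a, c, x, z, z')
    r_z  = pcorr_sq(S, 0, 1, [3])
    r_zp = pcorr_sq(S, 0, 1, [4])
    r    = pcorr_sq(S, 0, 1, [])
    assert r_z <= r_zp + 1e-9 <= r + 2e-9   # ordering of the theorem
```

Conditioning on the more distant descendant $z^{\prime}$ removes less of the dependence between $a$ and $c$ than conditioning on $z$, exactly as the monotonicity of $L(\alpha)$ predicts.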
\noindent\emph{\bf Proof of Theorem \ref{ap:2b}.}
By assumption $\cind{zB}{ac}{x}$ and $a\perp\!\!\!\perp c$.
Part $1$. It is enough to show that $\sigma^2_{ac|Bz}\ge\sigma^2_{ac|B}$. Using the above relations in Proposition \ref{prp:covvar} and by denoting $Q_1=\Sigma_{xB}\Sigma^{-1}_{BB}\Sigma_{Bx}$ and $Q_2=\left(\Sigma_{xB},\sigma_{xz}\right)\Sigma^{-1}_{(Bz)(Bz)}\left(\Sigma_{xB},\sigma_{xz}\right)^T$
one gets $\sigma_{ac|Bz}=-\sigma_{ax}\sigma_{cx}Q_2$ and $\sigma_{ac|B}=-\sigma_{ax}\sigma_{cx}Q_1$. Now the proof follows by noting that, $\sigma_{aa}-\sigma^2_{ax}Q_1=\sigma_{aa|B}\ge\sigma_{aa|Bz}=\sigma_{aa}-\sigma^2_{ax}Q_2$ implies $Q_2\ge Q_1$.
Part $2$. We initially assume that $\sigma_{zz^{\prime}}\ne 0$. By defining $\tau^2_{z^{\prime}}=\left(1-\sigma^2_{zz^{\prime}}\right)>0$, $\alpha=\left(1+(\tau^2_{z^{\prime}}/\sigma^2_{zz^{\prime}})\right)$, $K^{\prime}=\Sigma_{zB}\Sigma^{-1}_{BB}\Sigma_{Bz}>0$, $K=(1-K^{\prime})>0$
and from the assumption that $\cind{z^{\prime}}{acB}{z}$ it follows that $\rho^2_{ac|Bz^{\prime}}=L(\alpha)$ with $\alpha\ge 1$ and $\rhsq{a}{c}{Bz}=L(1)$. Further using $\cind{ac}{zB}{x}$ one
can show that $M_1\propto^+\sigma_{cx}\sigma_{ax}\sigma^2_{xz|B}$, $M_2\propto^+\sigma_{cx}\sigma_{ax}\sigma^2_{xz|B}$ and $M_3(\alpha)\propto^+-\sigma_{cx}\sigma_{ax}$.
Thus from Lemma \ref{lem:initthree} it follows that $\partial L/\partial\alpha\le 0$. If $\sigma_{zz^{\prime}}=0$, as before $z\perp\!\!\!\perp z^{\prime}$ and $z^{\prime}\perp\!\!\!\perp acB$. Thus $\rhsq{a}{c}{Bz}=\rhsq{a}{c}{B}$. The result follows from part $1$.
For the first inequality, notice that $\sigma_{ac\mid Bz^{\prime}}=-\sigma_{ax}\sigma_{cx}Q^{\star}_2$ and $\sigma_{aa\mid Bz^{\prime}}=1-\sigma^2_{ax}Q^{\star}_2$, where $Q^{\star}_2=\left(\Sigma_{xB},\sigma_{xz}\sigma_{zz^{\prime}}\right)\Sigma^{-1}_{(Bz^{\prime})(Bz^{\prime})}\left(\Sigma_{xB},\sigma_{xz}\sigma_{zz^{\prime}}\right)^T$. This implies $\sigma^2_{ac\mid Bz^{\prime}}\ge\sigma^2_{ac\mid B}$ just like part $1$ above.\hfill$\qed$
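The reversed ordering $\rhsq{a}{c}{B}\le\rho^2_{ac|Bz^{\prime}}\le\rhsq{a}{c}{Bz}$ can likewise be confirmed numerically with $a$ and $c$ independent parents of a collider $x$, children $b$, $z$ of $x$, $B=\{b\}$, and $z^{\prime}$ a child of $z$ (an illustrative sketch; parameter ranges are arbitrary):

```python
import numpy as np

def pcorr_sq(S, i, j, cond):
    # squared partial correlation via the Schur complement
    A = S[np.ix_([i, j], [i, j])]
    if cond:
        B = S[np.ix_([i, j], cond)]
        D = S[np.ix_(cond, cond)]
        A = A - B @ np.linalg.solve(D, B.T)
    return A[0, 1] ** 2 / (A[0, 0] * A[1, 1])

rng = np.random.default_rng(2)
for _ in range(200):
    p, q, t, r, s = rng.uniform(-2, 2, size=5)
    # sources (e_a, e_c, e_x, e_b, e_z, e_z'); a and c are independent
    # parents of the collider x; b and z are children of x; z' is a child of z
    M = np.array([
        [1,         0,         0,     0, 0, 0],   # a
        [0,         1,         0,     0, 0, 0],   # c
        [p,         q,         1,     0, 0, 0],   # x  = p*a + q*c + e_x
        [t * p,     t * q,     t,     1, 0, 0],   # b  = t*x + e_b
        [r * p,     r * q,     r,     0, 1, 0],   # z  = r*x + e_z
        [s * r * p, s * r * q, s * r, 0, s, 1],   # z' = s*z + e_z'
    ])
    S = M @ M.T                        # covariance of (a, c, x, b, z, z')
    rB   = pcorr_sq(S, 0, 1, [3])      # conditioning set B = {b}
    rBzp = pcorr_sq(S, 0, 1, [3, 5])
    rBz  = pcorr_sq(S, 0, 1, [3, 4])
    assert rB <= rBzp + 1e-9 <= rBz + 2e-9   # ordering of the theorem
```

Here conditioning on descendants of the collider $x$ induces dependence between the marginally independent $a$ and $c$, and the closer descendant $z$ induces more of it than $z^{\prime}$.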
\noindent\emph{\bf Proof of Theorem \ref{ap:3b}.}
W.l.o.g.\ it is enough to assume that $x\not\in B$. Furthermore, note that $\sigma_{aa|B}\ge\sigma_{aa|Bz}$ and $\sigma_{cc|B}\ge\sigma_{cc|Bz}$, thus for part $1$ it is enough to show that under the assumptions $\sigma_{ac|Bz}=m\cdot\sigma_{ac|B}$ for some $m>1$.\\
Part $1$. Assume that $a\perp\!\!\!\perp z$ and let (ii) hold, i.e.\ $\cind{cB}{az}{x}$. Using Proposition \ref{prp:covvar} it follows that
\begin{align}
\sigma_{ac|Bz}&=\sigma_{ac|B}+\frac{(\Sigma_{aB}\Sigma^{-1}_{BB}\Sigma_{Bz})(\sigma_{cz}-\Sigma_{cB}\Sigma^{-1}_{BB}\Sigma_{Bz})}{\sigma_{zz|B}}=\sigma_{ac|B}+\frac{\sigma_{ax}\sigma^2_{xz}Q_1(\sigma_{cx}-\sigma_{cx}Q_1)}{\sigma_{zz|B}}\nonumber\\
&=\sigma_{ac|B}+\frac{\sigma^2_{xz}Q_1(\sigma_{cx}\sigma_{ax}-\sigma_{cx}\sigma_{ax}Q_1)}{\sigma_{zz|B}}=\sigma_{ac|B}\left(1+\sigma^2_{zx}Q_1\sigma^{-1}_{zz|B}\right).\nonumber
\end{align}
Thus $\rhsq{a}{c}{B}\le\rhsq{a}{c}{Bz}$.
Under (i), if $c\perp\!\!\!\perp{az}$, then $\sigma_{ac}=\sigma_{zc}=0$, $\sigma_{ac|B}=-\Sigma_{aB}\Sigma^{-1}_{BB}\Sigma_{Bc}$ and $\sigma_{cz|B}=-\Sigma_{cB}\Sigma^{-1}_{BB}\Sigma_{Bz}$. Now if $(i)(a)$, i.e.\ $\cind{az}{B}{x}$, holds:
\begin{align}
\sigma_{ac|Bz}&=\sigma_{ac|B}-\frac{(\Sigma_{aB}\Sigma^{-1}_{BB}\Sigma_{Bz})(\Sigma_{cB}\Sigma^{-1}_{BB}\Sigma_{Bz})}{\sigma_{zz|B}}\label{eq:cov}\\
&=\sigma_{ac|B}-\frac{(\sigma_{ax}\sigma_{xz}Q_1)(\Sigma_{cB}\Sigma^{-1}_{BB}\Sigma_{Bx}\sigma_{xz})}{\sigma_{zz|B}}=\sigma_{ac|B}\left(1+\sigma^2_{zx}Q_1\sigma^{-1}_{zz|B}\right).\nonumber
\end{align}
Under $(i)(b)$, i.e.\ $\cind{az}{B}{cx}$, notice that from Proposition \ref{prp:covvar}:
\begin{align}
\Sigma_{aB}&=\Sigma_{a(xc)}\Sigma^{-1}_{(xc)(xc)}\Sigma_{(xc)B}=[\sigma_{ax},0]\Sigma^{-1}_{(xc)(xc)}\Sigma_{(xc)B}=\sigma_{ax}[1,0]\Sigma^{-1}_{(xc)(xc)}\Sigma_{(xc)B}=\sigma_{ax}\mathcal{Q}_{cxB}.\nonumber
\end{align}
Here $\mathcal{Q}_{cxB}=[1,0]\Sigma^{-1}_{(xc)(xc)}\Sigma_{(xc)B}$. Similarly it can be shown that, $\Sigma_{zB}=\sigma_{zx}\mathcal{Q}_{cxB}$ and $\sigma_{ac|B}=-\sigma_{ax}\mathcal{Q}_{cxB}\Sigma^{-1}_{BB}\Sigma_{Bc}$.
Now by substitution in \eqref{eq:cov} above we get:
\begin{align}
\sigma_{ac|Bz}&=\sigma_{ac|B}-\frac{\sigma_{ax}(\mathcal{Q}_{cxB}\Sigma^{-1}_{BB}\mathcal{Q}^T_{cxB})(\Sigma_{cB}\Sigma^{-1}_{BB}\mathcal{Q}^T_{cxB})\sigma^2_{zx}}{\sigma_{zz|B}}\nonumber\\
&=\sigma_{ac|B}-\frac{(\sigma_{zx}\mathcal{Q}_{cxB}\Sigma^{-1}_{BB}\mathcal{Q}^T_{cxB}\sigma_{zx})(\Sigma_{cB}\Sigma^{-1}_{BB}\mathcal{Q}^T_{cxB}\sigma_{ax})}{\sigma_{zz|B}}\nonumber\\
&=\sigma_{ac|B}+\frac{(\Sigma_{zB}\Sigma^{-1}_{BB}\Sigma_{Bz})\sigma_{ac|B}}{\sigma_{zz|B}}=\sigma_{ac|B}\left\{1+(\Sigma_{zB}\Sigma^{-1}_{BB}\Sigma_{Bz})\sigma^{-1}_{zz|B}\right\}.\nonumber
\end{align}
The proofs for $(i)(c)$ and $(i)(d)$ are similar.
If $(i)(e)$, i.e.\ $\cind{ac}{B}{x}$, holds, then $\sigma_{ac|B}=-\sigma_{ax}\sigma_{cx}Q_1$ and using Proposition \ref{prp:covvar} we get,
\begin{align}
\sigma_{ac|Bz}=&\sigma_{ac|B}-\frac{(-\Sigma_{aB}\Sigma^{-1}_{BB}\Sigma_{Bz})(-\Sigma_{cB}\Sigma^{-1}_{BB}\Sigma_{Bz})}{\sigma_{zz|B}}=\sigma_{ac|B}-\sigma_{ax}\sigma_{cx}(\Sigma_{xB}\Sigma^{-1}_{BB}\Sigma_{Bz})^2\sigma^{-1}_{zz|B}\nonumber\\
&=\sigma_{ac|B}\left\{1+(\Sigma_{xB}\Sigma^{-1}_{BB}\Sigma_{Bz})^2/(Q_1\,\sigma_{zz|B})\right\}.\nonumber
\end{align}
Under condition $(i)(f)$ notice that, $\Sigma_{aB}=\Sigma_{a(xz)}\Sigma^{-1}_{(xz)(xz)}\Sigma_{(xz)B}=\sigma_{ax}[1,0]\Sigma^{-1}_{(xz)(xz)}\Sigma_{(xz)B}=\sigma_{ax}\mathcal{Q}_{xzB}$. Similarly, $\Sigma_{cB}=\sigma_{cx}\mathcal{Q}_{xzB}$. Now from \eqref{eq:cov} it follows that:
\[\sigma_{ac|Bz}=\sigma_{ac|B}-\frac{\sigma_{ax}\sigma_{cx}(\mathcal{Q}_{xzB}\Sigma^{-1}_{BB}\Sigma_{Bz})^2}{\sigma_{zz|B}}.
\]
Clearly if at least one of $\sigma_{ax}$, $\sigma_{cx}$, $\mathcal{Q}_{xzB}$ is zero, the result is trivial. Now suppose none of them equals zero. Then $\mathcal{Q}_{xzB}\Sigma^{-1}_{BB}\mathcal{Q}^T_{xzB}>0$. Further $\sigma_{ac|B}=-\sigma_{ax}\sigma_{cx}(\mathcal{Q}_{xzB}\Sigma^{-1}_{BB}\mathcal{Q}^T_{xzB})$, which yields
\[\sigma_{ac|Bz}=\sigma_{ac|B}\left\{1+\frac{(\mathcal{Q}^T_{xzB}\Sigma^{-1}_{BB}\Sigma_{Bz})^2}{(\mathcal{Q}_{xzB}\Sigma^{-1}_{BB}\mathcal{Q}^T_{xzB})\sigma_{zz|B}}\right\}.
\]
Part $2$.
Suppose $\sigma^2_{z^{\prime}z}>0$. Let $\tau^2_{z^{\prime}}=\left(1- \sigma^2_{z^{\prime}z}\right)>0$, $K^{\prime}=\Sigma_{zB}\Sigma_{BB}^{-1}\Sigma_{Bz}$, $K=(1-K^{\prime})>0$
and $\alpha=1/\sigma^2_{z^{\prime}z}=\left(1+\tau^2_{z^{\prime}}/\sigma^2_{z^{\prime}z}\right)\ge 1$. Then from $\cind{acB}{z^{\prime}}{z}$, $a\perp\!\!\!\perp zz^{\prime}$ it follows that for both cases $\rho^2_{ac|Bz^{\prime}}=L\left(\alpha\right)$ with $\alpha\ge 1$ and $\rho^2_{ac|Bz}=L(1)$. Now we consider the four cases in the statement. By denoting $Q_{cx}=\Sigma_{cB}\Sigma^{-1}_{BB}\Sigma_{Bx}$, $Q_{ax}=\Sigma_{aB}\Sigma^{-1}_{BB}\Sigma_{Bx}$ and $\mathcal{Q}_{axB}=[1,0]\Sigma^{-1}_{(xa)(xa)}\Sigma_{(xa)B}$ it follows that:
\begin{equation}
M_1\propto^+M_2\propto^+\begin{cases}
\sigma_{ax}Q_{cx}&\text{if $(i)$, $(a)$}\\
\sigma_{ax}\mathcal{Q}_{cxB}\Sigma^{-1}_{BB}\Sigma_{Bc}&\text{if $(i)$, $(b)$}\\
\sigma_{cx}Q_{ax}&\text{if $(i)$, $(c)$}\\
\sigma_{cx}\mathcal{Q}_{axB}\Sigma^{-1}_{BB}\Sigma_{Ba}&\text{if $(i)$, $(d)$}\\
\sigma_{ax}\sigma_{cx}&\text{if $(i)$, $(e)$}\\
\sigma_{ax}\sigma_{cx}&\text{if $(i)$, $(f)$}\\
-\sigma_{ax}\sigma_{cx}&\text{if $(ii)$}\\
\end{cases},
M_3(\alpha)\propto^+\begin{cases}
-\sigma_{ax}Q_{cx}&\text{if $(i)$, $(a)$}\\
-\sigma_{ax}\mathcal{Q}_{cxB}\Sigma^{-1}_{BB}\Sigma_{Bc}&\text{if $(i)$, $(b)$}\\
-\sigma_{cx}Q_{ax}&\text{if $(i)$, $(c)$}\\
-\sigma_{cx}\mathcal{Q}_{axB}\Sigma^{-1}_{BB}\Sigma_{Ba}&\text{if $(i)$, $(d)$}\\
-\sigma_{ax}\sigma_{cx}&\text{if $(i)$, $(e)$}\\
-\sigma_{ax}\sigma_{cx}&\text{if $(i)$, $(f)$}\\
\sigma_{ax}\sigma_{cx}&\text{if $(ii)$}\\
\end{cases}.\nonumber
\end{equation}
Thus from Lemma \ref{lem:initthree}, in all cases $\partial L/\partial\alpha\le 0$, which completes the proof.
If $\sigma_{zz^{\prime}}=0$, then for all cases $\rhsq{a}{c}{Bz^{\prime}}=\rhsq{a}{c}{B}$ and the result follows from Part $1$ as before.
\hfill$\qed$
\noindent\emph{\bf Proof of Corollary \ref{cor:3b}.}
If $B=\emptyset$, under $(i)$ from the assumed independence of $a$, $c$ and $z$, we get $\sigma_{ac}=\sigma_{az}=\sigma_{cz}=0$. The result follows from this.
Under $(ii)$, $\sigma_{cz}\ne 0$ and from Theorem \ref{ap:3b} the result follows.\hfill$\qed$
\noindent\emph{\bf Proof of Theorem \ref{cl:3b2}.}
In this proof we take $\Sigma$ to be the covariance matrix and not the correlation matrix as above. Using condition $\cind{B}{acz}{x}$, denoting $\sigma^2_{xx}Q_4=\Sigma_{xB}\Sigma^{-1}_{BB}\Sigma_{Bx}$, $T=\sigma_{zz}/\left(\sigma_{zz}-\sigma^2_{xz}Q_4\right)$ ($T>0$) and from Proposition \ref{prp:covvar} and some simplification we get
\begin{equation*}
\frac{\rhsq{a}{c}{Bz}}{\rhsq{a}{c}{x}}=\frac{\left(\sigma_{aa}\sigma_{xx}Q_4T-\sigma^2_{ax}Q_4T\right)\left(\sigma_{cc}\sigma_{xx}Q_4T-\sigma^2_{cx}Q_4T\right)}{\left(\sigma_{aa}-\sigma^2_{ax}Q_4T\right)\left(\sigma_{cc}-\sigma^2_{cx}Q_4T\right)}.
\end{equation*}
Thus $\rhsq{a}{c}{Bz}\ge\rhsq{a}{c}{x}$ iff $\sigma_{xx}Q_4T\ge 1$ iff $\left(\sigma_{xx}+\sigma^2_{xz}/\sigma_{zz}\right)Q_4\ge 1$. The equivalent expression follows as:
\begin{equation}
\frac{\sigma^2_{xz}}{\sigma_{zz}\sigma^2_{xx}}\Sigma_{xB}\Sigma^{-1}_{BB}\Sigma_{Bx}\ge\frac{\sigma_{xx|B}}{\sigma_{xx}}\Leftrightarrow\frac{1}{\sigma_{zz}}\Sigma_{zB}\Sigma^{-1}_{BB}\Sigma_{Bz}\ge\frac{\sigma_{xx|B}}{\sigma_{xx}}\Leftrightarrow\frac{\sigma_{xx}-\sigma_{xx|B}}{\sigma_{xx}}\ge\frac{\sigma_{zz|B}}{\sigma_{zz}}.\nonumber
\end{equation}\hfill$\qed$
\section{Introduction}
In graphical Markov models literature, several attempts have been made to characterise the degree of conditional association among the vertices by the structure of the underlying graph.
Such knowledge is considered useful in model selection.
For example, \citet{cheng1} describe a model selection algorithm for directed acyclic graphs (DAGs) which assumes that the mutual information has a monotone relationship with a certain structure-based length of the path.
Examples \citep{meek1} show that such a \emph{monotone DAG faithfulness} property or a similar \emph{compound monotone DAG faithfulness} property do not hold even for simple binary DAGs.
In fact, except in some specific cases e.g. \citet{greenland1} in epidemiology, \citet[causal pipes]{spirtes:etal:2000} in causal analysis, no result is known in this context.
A more general problem is to order the squared partial correlation coefficients among the components of a Gaussian random vector. For these random vectors, squared partial correlation coefficients completely measure the degree of association between its components conditional on a subset of the components.
This measure is a ratio of polynomials in the entries of the covariance matrix. Thus in many situations it is beneficial to be able to order squared partial correlation coefficients in such a way that the ordering does not depend on the specific values of the covariances.
Simple counter-examples show that such \emph{qualitative} comparisons cannot hold unless the covariance matrix belongs to certain subsets of positive definite matrices. In this article, we specify such subsets by conditional independence relationships.
For a graphical Markov model validity of such relationships can be simply read off from the underlying graph. Thus rules for comparing degree of association on various Gaussian graphical models can be developed.
In this article we show that, when certain conditional independence relationships hold, suitable squared partial correlations can be qualitatively compared. We make two kinds of comparisons. In the first, the set of components conditioned on (the conditionate) is kept fixed and we change the dependent vertices (the correlates).
More importantly, in the second, we fix the two correlates and compare their degree of dependence by varying the conditionates. The sufficient conditional independence relationships are satisfied by several graphical Markov models.
Using relevant \emph{separation} criteria (e.g. separation for undirected graphs (UG) (see Definition \ref{defn:sep}), d-separation for DAGs \citep{verma_pearl_1990} (see Definition \ref{defn:dconn}), m-separation for mixed ancestral graphs (MAGs) (see supplement) \citep{thomas1} etc.),
we postulate sufficient structural conditions for comparing conditional association on them.
We emphasize that the specific graphical Markov models are used as illustrations. Our results apply to a much wider class of models.
Furthermore, using the fact that for tree and polytree (singly connected DAGs, i.e.\ DAGs whose underlying undirected graph has no cycles) models, any two connected components have exactly one path joining them, these structural criteria can be simplified to path-based rules for comparison.
We discuss such rules for trees in detail, where it is also shown that our rules for comparing the squared partial correlations are complete.
The inequalities discussed here have theoretical interest as new properties of Gaussian random vectors and directly translate to corresponding conditional non-Shannon type information inequalities \citep{zhang:yeung:1997,matus:2006,matus:2007}.
\citet{matus:2005} considers implications of one set of conditional independence relations on other conditional independencies for Gaussian random vectors. Furthermore, he describes a way to determine such implications using the ring of polynomials generated by the entries of the correlation matrices with some additional indeterminates. Our results describe some polynomial inequalities these rings satisfy.
Our main motivation comes from the Gaussian graphical Markov models.
These results are canonical and sufficient to postulate structure based rules to order dependencies on several of them.
We improve upon \citet{sctsr3,scphd}, who only consider polytree models.
These results can be used in determining the distortion effects \citep{wermuth_cox_2008} and monotonic effects \citep{vanderweele_robins_2007,vanderweele_robins} of confounded variables in epidemiology and causal network analysis (see also \citet{greenland_pearl_2011}).
We postulate necessary and sufficient conditions for determining structures on a class of polytree models. These conditions can be directly applied in model selection, especially in mapping river flow and drainage networks where such polytree models occur naturally \citep{rodriguez-iturbe_rinaldo_book}.
In real data analysis, these inequalities would be useful for model selection, especially among various graphical Markov models \citep{cheng1,shimizu:hoyer:etal:2006}. For these models our results would translate to hypotheses connected to the structure of the graph. These hypotheses can be tested from the observed data.
Structure based inequalities may also be used as constraints in estimation with missing values. They are also relevant in choosing prior distributions in Bayesian procedures.
The qualitative bounds can be used in selecting stratifying variables in designing surveys, gathering most relevant information in forensic sciences and building strategies for constrained searches.
Further, these results may have applications in designing effective updating and blocking strategies in Gibbs sampling and Markov chain Monte Carlo procedures (see e.g. \citet{roberts_sahu_1997}).
\input{sectioncombm_misc_rev.tex}
\input{tree.tex}
\input{mod_sel.tex}
\input{counterexmp.tex}
\section{Discussion}
Qualitative comparison may be possible under other sets of conditional independence relations.
The requirement of a single component $x$ cannot be relaxed.
The results in Section \ref{sec:canon} are sufficient for postulating path based rules for comparison on polytree models as well. Since the edges on a polytree are directed, these rules are more involved than those for trees \citep{sctsr3}.
Comparison of mutual information with a fixed conditionate holds for any distribution. In fact, the results with fixed correlates are based on the positive-definiteness of the covariance matrix and extend to non-Gaussian distributions as well. However, inequalities for squared partial correlation would not translate to mutual information for such random variables. These results may be applicable to causal model selections among non-Gaussian variables (eg. \citet{shimizu:hoyer:etal:2006}).
It can be shown that, although the comparisons with a fixed conditionate do not hold, absolute values of partial regression coefficients can be qualitatively compared for fixed correlates under the same conditions \citep{chaudhuri_tan_2010}.
Rules for signed comparisons of partial correlation and regression coefficients can be developed from these results. Such results might be useful in identifying hidden variables in Factor models \citep{bekker:deleeuw:1987, Drton:strumfels:sullivan:2007, xu:pearl:1989, spirtes:etal:2000} and in recovering population covariance matrix for one-factor models in presence of selection bias \citep{kuroki:cai:2006}.
\section{Necessity of the conditional independence relationships}
In the above sections we postulated some sufficient conditional independence relationships under which some squared conditional correlations can be qualitatively compared. It is not known if these relationships are necessary as well.
It is possible that qualitative comparison would hold under different sets of conditions. However, the conditions in any set of relationships cannot be reduced. In this section we show this fact using various counterexamples.
In each counter-example, unless otherwise stated, all parameters, i.e.\ the regression coefficients and the node-specific conditional variances, are set to $1$.
\subsection{Comparison with a fixed conditionate}
We consider the graph in Figure \ref{figcomp1:2}. Note that, $c$ is a collider on the \path{a}{x} and $z$ is a child of $c$. Thus, from the laws of d-separation $x$ is not d-separated from $a$ given $c$ and $z$.
Under our choice of parametrisation clearly $a\not\perp\!\!\!\perp x\mid cz$. In the plots to the right of Figure \ref{figcomp1:2} we vary $\beta_{cz}$ and $\tau^2_z$ respectively, keeping the other parameters fixed. It is clear from the plots that $\rhsq{a}{c}{z}$ and $\rhsq{a}{x}{z}$ cannot be qualitatively compared. This shows that the condition of Theorem \ref{lem:maincomp1} cannot be relaxed.
\begin{figure}[t]
\parbox{.4\columnwidth}{
\input{figextc.tex}
}\ \hspace{.5in} \
\parbox{.4\columnwidth}{
\resizebox{.4\columnwidth}{.36\columnwidth}{\includegraphics{bvarc.eps}}
\resizebox{.4\columnwidth}{.36\columnwidth}{\includegraphics{svarc.eps}}
}
\caption{Plot of $\rhsq{a}{c}{z}$ and $\rhsq{a}{x}{z}$ for the graph on the
left with $\beta_{zc}$ and $\tau^2_z$.}
\label{figcomp1:2}
\end{figure}
\subsection{Comparison with fixed correlates}
We only consider the necessity of the conditions of Theorems \ref{ap:1b} and \ref{ap:2b} here. The examples for Theorem \ref{ap:3b} are similar.
The graphs and the plots used in the counterexamples are described as follows. In Figures \ref{fig:1} and \ref{fig:2} the graphs with solid edges satisfy the assumptions of Theorems \ref{ap:1b} and \ref{ap:2b} respectively. We then consider the graphs with the dashed edges added.
However, except for one such edge, all the corresponding regression coefficients are set to zero. Each edge implies the violation of one conditional independence relationship.
The plots are interpreted as follows. The title of each plot describes which regression coefficients are set to zero. The remaining regression coefficient is varied and the conditional and unconditional squared correlation coefficients are calculated.
\begin{figure}[t]
\hspace{1in}\subfigure[\label{fig:c1g}]{
$\psmatrix[colsep=.6in,rowsep=.6in]
&x&\\
a& &c\\
&z_1&\\
&z_2&
\endpsmatrix
$
\psset{nodesep=3pt}
\ncline{->}{1,2}{2,1
\ncline{->}{1,2}{2,3
\ncline{->}{1,2}{3,2}\lput*{0}{
\ncline[linestyle=dashed]{->}{2,3}{3,2}\Aput{\small{$\beta_{z_1c}$}}
\ncline{->}{3,2}{4,2
\ncline[linestyle=dashed]{->}{2,3}{2,1}\lput{:0}{\rput{N}(.6,.4){$\beta_{ac}$}}
\ncline[linestyle=dashed]{->}{2,1}{4,2}\Bput{$\beta_{z_2a}$}
}\hfill
\subfigure[\label{fig:c1e1}]{
\resizebox{3in}{3in}{\includegraphics{c1_e1.ps}}
}\hfill
\subfigure[\label{fig:c1e2}]{
\resizebox{3in}{3.1in}{\includegraphics{c1_e2.ps}}
}\hfill
\subfigure[\label{fig:c1e3}]{
\resizebox{3in}{3.1in}{\includegraphics{c1_e3.ps}}
}
\caption{The directed acyclic graph described in Example 1. Clearly there is no ordering between $\rho^2_{ac}$, $\rhsq{a}{c}{z_1}$ and $\rhsq{a}{c}{z_2}$.}
\label{fig:1}
\end{figure}
\subsubsection{Figure \ref{fig:1}} The graph with only the solid edges satisfies the conditions of Theorem \ref{ap:1b}. If an edge between $a$ and $c$ is added, i.e.\ if $\beta_{ac}\ne 0$, but $\beta_{z_1c}=\beta_{z_2a}=0$, then $\cind{a}{c}{x}$ no longer holds.
Figure \ref{fig:c1e1} shows that none of $\rho^2_{ac}$, $\rhsq{a}{c}{z_1}$ and $\rhsq{a}{c}{z_2}$ can be qualitatively compared. Note that, when $\beta_{ac}=0$ the graph satisfies the condition of Theorem \ref{ap:1b}. So we get $\rhsq{a}{c}{z_1}\le\rhsq{a}{c}{z_2}\le\rho^2_{ac}$ as predicted.
If we set $\beta_{ac}=\beta_{z_2a}=0$ and allow $\beta_{z_1c}$ to vary, then for non-zero values of $\beta_{z_1c}$ the condition $\cind{ac}{z_1}{x}$ is violated. So in Figure \ref{fig:c1e2} we see that the concerned squared partial correlation coefficients are not comparable.
When $\beta_{ac}=\beta_{z_1c}=0$ and $\beta_{z_2a}$ varies, the condition $\cind{z_2}{acx}{z_1}$ is potentially violated. The condition $\cind{z_2}{x}{z_1}$ is not required for Theorem \ref{ap:1b} but for most graphical Markov models $\cind{z_2}{ac}{z_1}$ would imply this condition. Figure \ref{fig:c1e3} shows that the squared correlations cannot be qualitatively compared in this case either.
The above examples show that none of the conditions of Theorem \ref{ap:1b} can be relaxed further.
\begin{figure}[t]
\hspace{1in}\subfigure[\label{fig:c2g}]{
$\psmatrix[colsep=.6in,rowsep=.6in]
a&&c\\
&x&\\
b&z_1&\\
&z_2&
\endpsmatrix
$
\psset{nodesep=3pt}
\ncline{->}{1,1}{2,2}
\ncline{->}{1,3}{2,2}
\ncline[linestyle=dashed]{->}{1,1}{1,3}\Aput{$\beta_{ca}$}
\ncline[linestyle=dashed]{->}{1,3}{3,2}\Aput{\small{$\beta_{z_1c}$}}
\ncline{->}{2,2}{3,2}
\ncline{->}{3,2}{4,2}
\ncline{->}{2,2}{3,1}
\nccurve[ncurv=.4,angleB=45,angleA=315,linestyle=dashed]{->}{1,3}{4,2}\Aput{$\beta_{z_2c}$}
}\hfill
\subfigure[\label{fig:c2e1}]{
\resizebox{3in}{3in}{\includegraphics{c2_e1.ps}}
}\hfill
\subfigure[\label{fig:c2e2}]{
\resizebox{3in}{3in}{\includegraphics{c2_e2.ps}}
}\hfill
\subfigure[\label{fig:c2e3}]{
\resizebox{3in}{3in}{\includegraphics{c2_e3.ps}}
}
\caption{The directed acyclic graph described in Example 2. Clearly there is no ordering between $\rhsq{a}{c}{x}$, $\rhsq{a}{c}{z_1}$ and $\rhsq{a}{c}{z_2}$.}
\label{fig:2}
\end{figure}
\subsubsection{Figure \ref{fig:2}} In this figure the graph with solid edges satisfies the conditions of Theorem \ref{ap:2b}. If $\beta_{ca}\ne 0$ then the assumption that $a\perp\!\!\!\perp c$ is violated. As is evident from the plot in Figure \ref{fig:c2e1}, $\rhsq{a}{c}{z_1}$, $\rhsq{a}{c}{z_2}$ and $\rhsq{a}{c}{x}$ cannot be qualitatively compared.
If $\beta_{z_1c}\ne 0$, then $c\not\perp\!\!\!\perp z_1\mid x$, and from Figure \ref{fig:c2e2} it is seen that the squared correlations cannot be qualitatively compared either.
Finally, when $\beta_{z_2c}\ne 0$, $z_2$ becomes conditionally dependent on $c$ given $z_1$. From Figure \ref{fig:c2e3} we once again conclude that the squared correlations under consideration cannot be qualitatively compared.
The above examples show that none of the conditions in Theorem \ref{ap:2b} can be relaxed.
\section{Application to polytree models and model selection}
\begin{figure}[t]
\begin{center}
\subfigure[\label{fig:modsel1}]{
\input{dag_sel1.tex}}\hspace{.1in}
\subfigure[\label{fig:modsel2}]{
\input{dag_sel2.tex}
}\hspace{.1in}
\subfigure[\label{fig:modsel3}]{
\input{dag_sel3.tex}
}
\end{center}
\caption{Examples of polytrees satisfying the conditions of Theorem \ref{thm:modsel} below. In each, $a\in an(c)$ and $c\in an(b)$. In \ref{fig:modsel1} $z_{11}$ and $z_{12}$ satisfy condition $1.$ and $\rhsq{a}{c}{b}<\rhsq{a}{c}{bz_{12}}<\rhsq{a}{c}{bz_{11}}$ (from Theorem \ref{ap:3b} $(ii)$).
In \ref{fig:modsel2} $z_{21}$ and $z_{22}$ satisfy condition $2.$, so $\rhsq{a}{c}{bz_{21}}<\rhsq{a}{c}{bz_{22}}<\rhsq{a}{c}{b}$ (see Theorem \ref{ap:1b} and Figure \ref{fig:scts2}). Each $z_{3k}$, $k=1,\ldots,4$, in \ref{fig:modsel3} satisfies condition $2.$,
ie. $\rhsq{a}{c}{bz_{3k}}<\rhsq{a}{c}{b}$. Note that $b$ cannot be in $an(z)$, since otherwise $\cind{ac}{z}{b}$ and $\rhsq{a}{c}{b}= \rhsq{a}{c}{bz}$.}
\label{fig:modsel}
\end{figure}
A polytree is a DAG such that if we substitute all its directed edges with undirected ones, the resulting graph (ie. its skeleton) would be a tree. Thus on a polytree two vertices $x$ and $y$ can have at most one path \path{x}{y} connecting them.
Here, on a connecting path we disregard the direction of the individual edges.
A vertex $y$ is an ancestor of a vertex $x$ if either $y=x$ or $x$ can be reached from $y$ by following the arrowheads of a directed path (ie. the path $y\rightarrow v_1\rightarrow v_2\rightarrow\cdots\rightarrow v_k\rightarrow x$ exists). The collection of all ancestors of $x$ is denoted by $an(x)$. Furthermore, for a set of vertices $X$ we define $an(X)=\cup_{x\in X}an(x)$.
\begin{theorem}\label{thm:modsel} Suppose that on a Gaussian polytree $a\ne c\ne b$, $a\in an(c)$ and $c\in an(b)$. Further let, for some vertex $z$, $\rhsq{a}{c}{bz}\ne \rhsq{a}{c}{b}$. Then\begin{enumerate}
\item $\rhsq{a}{c}{bz}>\rhsq{a}{c}{b}$, iff $a\perp\!\!\!\perp z$ and $c\not\perp\!\!\!\perp z$.
\item $\rhsq{a}{c}{bz}<\rhsq{a}{c}{b}$ iff either $c\perp\!\!\!\perp z$ or $a\not\perp\!\!\!\perp z$.
\end{enumerate}
\end{theorem}
The condition $\rhsq{a}{c}{bz}\ne \rhsq{a}{c}{b}$ is required in Theorem \ref{thm:modsel}. This implies $ac\not\perp\!\!\!\perp z\mid b$. So $b\not\in an(z)$. It can further be shown (see the proof) that the polytree structure implies $ac\perp\!\!\!\perp z$ iff $c\perp\!\!\!\perp z$.
Thus the condition on the right hand side of $2.$ above equivalently means that either both $a$ and $c$ are independent of $z$ or neither of them is. Examples of graphs satisfying conditions $1.$ and $2.$ can be found in Figure \ref{fig:modsel}.
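As a numerical sanity check of condition $1.$ (an illustration added here, not part of the formal development; the polytree, the unit edge coefficients and the helper name \texttt{r2} are our arbitrary choices), consider the polytree $a\rightarrow c\rightarrow b$, $z\rightarrow c$ with all coefficients and noise variances equal to one. Here $a\perp\!\!\!\perp z$ and $c\not\perp\!\!\!\perp z$, so condition $1.$ predicts $\rhsq{a}{c}{bz}>\rhsq{a}{c}{b}$:

```python
import numpy as np

def r2(S, a, c, Z):
    # squared partial correlation rho^2_{ac|Z} via the precision
    # matrix of the marginal distribution of {a, c} union Z
    idx = [a, c] + list(Z)
    P = np.linalg.inv(S[np.ix_(idx, idx)])
    return P[0, 1] ** 2 / (P[0, 0] * P[1, 1])

# Implied covariance of (a, c, b, z) for a -> c -> b, z -> c,
# with unit edge coefficients and unit noise variances
S = np.array([[1., 1., 1., 0.],
              [1., 3., 3., 1.],
              [1., 3., 4., 1.],
              [0., 1., 1., 1.]])
print(r2(S, 0, 1, [2]))     # rho^2_{ac|b}  = 1/9
print(r2(S, 0, 1, [2, 3]))  # rho^2_{ac|bz} = 1/4 > 1/9, as condition 1. predicts
```

Adding $z$ to the conditioning set strictly increases the squared partial correlation, in line with Theorem \ref{thm:modsel}.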
\begin{figure}[t]
\begin{center}
\includegraphics[width=9cm,keepaspectratio]{avon_map_final_2.ps}
\end{center}
\caption{An illustration of the results in Theorem \ref{thm:modsel} on the river network of Avon river, Hampshire, England (obtained from \citet{jarvie:etal:2005}).}
\label{fig:avon}
\end{figure}
Theorem \ref{thm:modsel} has applications in model selection. An example occurs in the mapping of river flow networks.
Figure \ref{fig:avon} \citep{jarvie:etal:2005} presents a schematic diagram of the network of the Avon basin in Hampshire, England. Suppose it is known that none of the rivers involved has a distributary. Clearly the network, together with the direction of the water flow, forms a polytree.
Measurements can be taken at points $a$ (Netheravon), $b$ (Christchurch), $c$ (Amesbury), $d$ (Downstream of Salisbury STW), $e$ (Longford) and $z$ (Chitterne).
However, because of practical considerations we suppose that the measurements are taken when the water level at Christchurch ($b$) touches certain levels. Let us assume $\rhsq{a}{x}{b}\ne\rhsq{a}{x}{bz}$ for $x=c,d,e$. We want to know where the stream from $z$, ie. Chitterne, meets the river Avon.
It is clear that, since the observations are all conditional on the water level at $b$, in the data neither $z\perp\!\!\!\perp a$ nor $z\perp\!\!\!\perp c$ holds. However, from Theorem \ref{ap:1b} (see also Figure \ref{fig:scts2}) and Theorem \ref{ap:3b}, it follows that $\rhsq{a}{c}{bz}<\rhsq{a}{c}{b}$, $\rhsq{a}{d}{bz}>\rhsq{a}{d}{b}$ and $\rhsq{a}{e}{bz}>\rhsq{a}{e}{b}$.
From Condition $2.$ of Theorem \ref{thm:modsel} it follows that either both $a$ and $c$ are independent of $z$ or none of them are. On the other hand, Condition $1.$ implies that $a\perp\!\!\!\perp z$ but $d$ and $e$ are not independent of $z$.
If neither $a$ nor $c$ is independent of $z$, the point $z$ must be on a distributary stream or on a tributary which meets the Avon north of $a$ (Netheravon). However, by assumption there is no distributary stream. Furthermore, if the tributary from $z$ meets the Avon somewhere north of $a$, by Theorem \ref{ap:1b} both $\rhsq{a}{d}{bz}<\rhsq{a}{d}{b}$ and $\rhsq{a}{e}{bz}<\rhsq{a}{e}{b}$ must hold.
This is a contradiction. Thus $ac\perp\!\!\!\perp z$ must hold. So from Theorem \ref{thm:modsel} we see that the stream from Chitterne, ie. $z$, meets the Avon somewhere between Amesbury (ie. $c$) and Downstream of Salisbury STW (ie. $d$).
\section{Squared partial correlation inequalities}\label{sec:canon}
Suppose $V\sim N\left(\mu,\Sigma\right)$ with a positive definite $\Sigma$. Let $a$, $b$, $c$, $c^{\prime}$, $z$, $z^{\prime}$, $x$ etc. be the components and $B$, $Z$ etc. be the subsets of components of $V$. In this article $V$ will also denote the vertex set of the underlying graph (see supplement for more details). Let $\emptyset$ denote the empty set.
The squared partial correlation coefficient ($\rho^2_{ac|Z}$) between $a$ and $c$ conditional on $Z$ is defined by:
\begin{equation}\label{eq:rho}
\rhsq{a}{c}{Z}=\frac{\left(\sigma_{ac}-\Sigma_{aZ}\Sigma_{ZZ}^{-1}\sigma_{cZ}\right)^2}{\left(\sigma_{aa}-\Sigma_{aZ}\Sigma_{ZZ}^{-1}\sigma_{aZ}\right)\left(\sigma_{cc}-\Sigma_{cZ}\Sigma_{ZZ}^{-1}\sigma_{cZ}\right)}=1-e^{-2Inf\left(\cind{a}{c}{Z}\right)}.
\end{equation}
Here $\sigma_{ab}$ and $\Sigma_{aZ}$ respectively denote the $(a,b)$th element and the $a\times Z$ submatrix of $\Sigma$. $Inf\left(\cind{a}{c}{Z}\right)$ is the mutual information \citep[information proper]{whit} of $a$ and $c$ given $Z$. From \eqref{eq:rho} it follows that the mutual information is a monotone increasing function of the corresponding squared partial correlation.
Thus the qualitative inequalities for $\rhsq{a}{c}{Z}$ presented below apply to $Inf\left(\cind{a}{c}{Z}\right)$ as well.
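The quantity in \eqref{eq:rho} is straightforward to compute from a covariance matrix. The following sketch (an illustration added here, not part of the formal development; the chain model and the helper names are our arbitrary choices) evaluates $\rhsq{a}{c}{Z}$ both directly via \eqref{eq:rho} and via the precision matrix of the marginal distribution of $\{a,c\}\cup Z$, which must agree:

```python
import numpy as np

def sq_partial_corr(Sigma, a, c, Z):
    """rho^2_{ac|Z} computed directly from the formula (eq:rho)."""
    Z = list(Z)
    if not Z:
        return Sigma[a, c] ** 2 / (Sigma[a, a] * Sigma[c, c])
    W = np.linalg.inv(Sigma[np.ix_(Z, Z)])
    # residual covariance of u and v after regressing out Z
    res = lambda u, v: Sigma[u, v] - Sigma[u, Z] @ W @ Sigma[Z, v]
    return res(a, c) ** 2 / (res(a, a) * res(c, c))

def sq_partial_corr_prec(Sigma, a, c, Z):
    """The same quantity via the precision matrix of {a, c} union Z."""
    idx = [a, c] + list(Z)
    P = np.linalg.inv(Sigma[np.ix_(idx, idx)])
    return P[0, 1] ** 2 / (P[0, 0] * P[1, 1])

# Covariance of the Gaussian chain a -> c -> z with unit coefficients and noise
Sigma = np.array([[1., 1., 1.],
                  [1., 2., 2.],
                  [1., 2., 3.]])
print(sq_partial_corr(Sigma, 0, 1, [2]))       # 0.25
print(sq_partial_corr_prec(Sigma, 0, 1, [2]))  # 0.25, the two routes agree
```

By \eqref{eq:rho}, the same numbers also give the mutual information $Inf\left(\cind{a}{c}{Z}\right)=-\tfrac{1}{2}\log\left(1-\rhsq{a}{c}{Z}\right)$.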
\subsection{Comparing conditional dependence with a fixed conditionate}\label{sec:comp1}
We first fix a subset $Z$ to be conditioned and one correlate $a$. The squared partial correlation is compared by changing the other correlate from $c$ to $c^{\prime}$.
\begin{theorem}\label{lem:maincomp1}
Suppose $\cind{c^{\prime}}{a}{cZ}$, then $\rhsq{a}{c^{\prime}}{Z}\le\rhsq{a}{c}{Z}$.
\end{theorem}
Theorem \ref{lem:maincomp1} is a conditional version of the well-known \emph{information inequality} \citep{cover} and holds in general for mutual information of any distribution.
For graphical Markov models the condition holds if $c^{\prime}$ is separated from $a$ given $c$ and $Z$. Further, for trees the condition is satisfied if $c$ lies on the path joining $a$ and $c^{\prime}$.
Thus a longer path implies weaker dependence in this case.
For polytree models the condition depends on the arrangement of the arrows on the path joining $a$, $c$ and $c^{\prime}$. The condition is satisfied if two arrowheads do not meet at $c$ on the path joining $a$ and $c^{\prime}$,
(ie. $c$ is not a \emph{collider} on the path joining $a$ and $c^{\prime}$, see Definition \ref{defn:coll}). For example, in Figure \ref{fig:fxdcond} with $Z=\left\{z_1,z_2,z_3,z_4\right\}$, using the d-separation criterion (see Definition \ref{defn:dconn}) we get $\cind{c_3}{a}{Zc_2}$.
Theorem \ref{lem:maincomp1} ensures that $\rhsq{a}{c_3}{Z}\le\rhsq{a}{c_2}{Z}$. The same d-separation criterion however implies that $c_3\not\perp\!\!\!\perp a|Zc_1$,
so there is no guarantee that $\rhsq{a}{c_1}{Z}$ would be larger than $\rhsq{a}{c_3}{Z}$. This partially justifies the intuitive argument given in \citet{greenland1} (see also \citet{greenland_pearl_2011}).
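For tree models the monotonicity of Theorem \ref{lem:maincomp1} is easy to verify numerically. The sketch below (an added illustration; the chain and its unit parameters are our arbitrary choices) uses the chain $a - c - c^{\prime}$, where $c$ lies on the path joining $a$ and $c^{\prime}$, so the theorem predicts $\rho^2_{ac^{\prime}}\le\rho^2_{ac}$:

```python
import numpy as np

def r2(S, a, c, Z=()):
    # squared partial correlation rho^2_{ac|Z} via the precision matrix
    idx = [a, c] + list(Z)
    P = np.linalg.inv(S[np.ix_(idx, idx)])
    return P[0, 1] ** 2 / (P[0, 0] * P[1, 1])

# Markov chain a - c - c' with unit coefficients and unit noise:
# Var(a) = 1, c = a + e, c' = c + e'
S = np.array([[1., 1., 1.],
              [1., 2., 2.],
              [1., 2., 3.]])
print(r2(S, 0, 1))  # rho^2_{ac}  = 1/2
print(r2(S, 0, 2))  # rho^2_{ac'} = 1/3 <= 1/2: longer path, weaker dependence
```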
\begin{figure}[t]
\begin{center}
\subfigure[\label{fig:fxdcond}]{
\input{fxdcond.tex}
}\hspace{1pt}
\subfigure[\label{fig:figcn1}]{
\input{figcn1.tex}
}
\hspace{1pt}
\subfigure[\label{fig:figap1b2}]{
\input{figap1b2.tex}
}
\caption{\ref{fig:fxdcond} A polytree, $Z=\{z_1,z_2,z_3,z_4\}$, $\rhsq{a}{c_3}{Z}\le\rhsq{a}{c_2}{Z}$, however $\rhsq{a}{c_3}{Z}\le\rhsq{a}{c_1}{Z}$ may not hold.
\ref{fig:figcn1} a UG satisfying the conditions of Theorem \ref{ap:1b}, $\rho^2_{ac}\ge\rhsq{a}{c}{z}\ge\rhsq{a}{c}{z^{\prime}}$ and $\rho^2_{ax}\ge\rhsq{a}{x}{z}\ge\rhsq{a}{x}{z^{\prime}}$.
Further, from Theorem \ref{lem:maincomp1}, $\rho^2_{ac}\le\rho^2_{ax}$, $\rhsq{a}{c}{z}\le\rhsq{a}{x}{z}$ and $\rhsq{a}{c}{z^{\prime}}\le\rhsq{a}{x}{z^{\prime}}$. Exactly the same conclusions hold on the DAG in \ref{fig:figap1b2}.}
\end{center}
\end{figure}
\subsection{Comparing conditional dependence with fixed correlates}\label{sec:appendix} Here two components $a$ and $c$ of $V$ are held fixed. We consider the variation in $\rhsq{a}{c}{Z}$ for different subsets $Z$ of $V$.
Depending on the nature of pairwise unconditional association between $a$, $c$ and the sets conditioned on, three situations may arise.
\begin{figure}[t]
\begin{center}
\subfigure[\label{fig:ap2b}]{
\input{figap2b.tex}
}\hspace{10pt}
\subfigure[\label{nanny}]{
\input{nanny.tex}
}\hspace{10pt}
\subfigure[\label{mag}]{
\input{mag.tex}
}
\end{center}
\caption{Graphical models satisfying the conditions of Theorem \ref{ap:2b}. In each graph $a\perp\!\!\!\perp c$. The graph in \ref{fig:ap2b} is a polytree. Here $B=\{b_1,b_2\}$ and $\rhsq{a}{c}{B}\le\rhsq{a}{c}{Bz^{\prime}}\le\rhsq{a}{c}{Bz}$ holds. In \ref{nanny} it follows that $\rhsq{a}{c}{y}\le\rhsq{a}{c}{x}$ (cf. \citet{wermuth_cox_2008}). The graph in \ref{mag} is a mixed ancestral graph \citep{thomas1} where $\rhsq{a}{c}{z^{\prime}}\le\rhsq{a}{c}{z}$ always holds.}
\label{fig:ap3ba}
\end{figure}
\subsubsection{\bf Situation $1$.} The components $a$, $c$, $z$ and $z^{\prime}$ are unconditionally pairwise dependent.
\begin{theorem}\label{ap:1b}
Suppose for some $x$, $\cind{a}{c}{x}$ and $\cind{ac}{z}{x}$. Then $\rhsq{a}{c}{z}\le\rho^2_{ac}$. In addition, if $\cind{ac}{z^{\prime}}{z}$, then $\rhsq{a}{c}{z}\le\rhsq{a}{c}{z^{\prime}}\le\rho^2_{ac}$.
\end{theorem}
The conditions of Theorem \ref{ap:1b} can be represented by several graphical Markov models, eg. undirected graphs, directed acyclic graphs etc. The conditional independence conditions imply that $a$, $c$ and $z$ have to be pairwise separated given $x$ and $z^{\prime}$ has to be separated from $a$ and $c$ given $z$.
The first part shows that under these conditions the dependence of $a$ on $c$ always reduces on conditioning. For tree and polytree models the conclusion of the second part can be intuitively explained.
Notice that, by assumption $\rho^2_{ac}\ge\rhsq{a}{c}{x}=0$ and the separation criteria imply that $z^{\prime}$ is farther away from $x$ than $z$. Thus $z^{\prime}$ has less information about $x$ than $z$. So $\rhsq{a}{c}{z^{\prime}}$ should be closer to $\rho^2_{ac}$ than $\rhsq{a}{c}{z}$.
In other words, conditioning on the vertices farther away from the path between $a$ and $c$ increases the degree of association.
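This intuition can be checked numerically. In the sketch below (an added illustration; the tree and its unit parameters are our arbitrary choices) $x$ is the centre of a star with children $a$, $c$ and $z$, and $z^{\prime}$ is a child of $z$, so $\cind{a}{c}{x}$, $\cind{ac}{z}{x}$ and $\cind{ac}{z^{\prime}}{z}$ all hold and Theorem \ref{ap:1b} predicts $\rhsq{a}{c}{z}\le\rhsq{a}{c}{z^{\prime}}\le\rho^2_{ac}$:

```python
import numpy as np

def r2(S, a, c, Z=()):
    # squared partial correlation rho^2_{ac|Z} via the precision matrix
    idx = [a, c] + list(Z)
    P = np.linalg.inv(S[np.ix_(idx, idx)])
    return P[0, 1] ** 2 / (P[0, 0] * P[1, 1])

# Tree x - a, x - c, x - z, z - z'; rows ordered a, c, z, z'
# (x itself is marginalised out), unit coefficients and unit noise
S = np.array([[2., 1., 1., 1.],
              [1., 2., 1., 1.],
              [1., 1., 2., 2.],
              [1., 1., 2., 3.]])
print(r2(S, 0, 1, [2]))  # rho^2_{ac|z}  = 1/9
print(r2(S, 0, 1, [3]))  # rho^2_{ac|z'} = 4/25
print(r2(S, 0, 1))       # rho^2_{ac}    = 1/4: the predicted ordering holds
```

Conditioning on the more distant $z^{\prime}$ indeed yields a value closer to $\rho^2_{ac}$ than conditioning on $z$.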
\subsubsection{\bf Situation $2$.} The correlates $a$ and $c$ are independent, but both are dependent on the sets conditioned on.
\begin{theorem}\label{ap:2b}
Suppose $a\perp\!\!\!\perp c$ and for some $x$, the condition $\cind{ac}{zB}{x}$ holds. Then $\rhsq{a}{c}{B}\le\rhsq{a}{c}{Bz}$. Moreover, if $\cind{z^{\prime}}{acB}{z}$ holds, then $\rhsq{a}{c}{B}\le\rhsq{a}{c}{Bz^{\prime}}\le\rhsq{a}{c}{Bz}$.
\end{theorem}
By assumption $0=\rho^2_{ac}\le\rhsq{a}{c}{B}$. Thus the first conclusion shows that conditioning on a larger set produces stronger association.
On a UG, the condition $a\perp\!\!\!\perp c$ implies that $a$ and $c$ cannot be connected. Thus UGs are not useful for representing the conditions in Theorem \ref{ap:2b}.
They are satisfied by several other graphical Markov models like DAGs, MAGs etc.
For polytree models (see Figure \ref{fig:ap2b}) the conclusions of Theorem \ref{ap:2b} can be intuitively explained as well. As before, one can conclude that $z^{\prime}$ is farther away from $x$ and therefore has less information about $x$ than $z$. Here, however, $\rhsq{a}{c}{x}\ne 0$ but $\rho^2_{ac}=0$. Thus by the same argument as for Theorem \ref{ap:1b}, conditioning on $B$ and $z^{\prime}$ should produce weaker association than conditioning on $B$ and $z$.
In the graph in Figure \ref{nanny} the marginal covariance matrix of $a$, $c$, $x$ and $y$ satisfies the conditions of Theorem \ref{ap:2b}. Thus, $\rhsq{a}{c}{y}\le\rhsq{a}{c}{x}$. The graph in Figure \ref{mag} is a mixed ancestral graph (notice the $\leftrightarrow$ edge between $y_1$ and $y_2$ \citep{thomas1}).
Here the marginal covariance matrix of $a$, $c$, $x_2$, $z$ and $z^{\prime}$ would satisfy the conditions of Theorem \ref{ap:2b} (see Appendix \ref{sec:mags}). So we conclude that $\rhsq{a}{c}{z^{\prime}}\le\rhsq{a}{c}{z}\le\rhsq{a}{c}{x_2}$.
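The conclusions of Theorem \ref{ap:2b} can also be verified numerically. The sketch below (an added illustration with $B=\emptyset$; the collider model and its unit parameters are our arbitrary choices) uses the DAG $a\rightarrow x\leftarrow c$ with $x\rightarrow z\rightarrow z^{\prime}$, so $a\perp\!\!\!\perp c$, $\cind{ac}{z}{x}$ and $\cind{z^{\prime}}{ac}{z}$ hold:

```python
import numpy as np

def r2(S, a, c, Z=()):
    # squared partial correlation rho^2_{ac|Z} via the precision matrix
    idx = [a, c] + list(Z)
    P = np.linalg.inv(S[np.ix_(idx, idx)])
    return P[0, 1] ** 2 / (P[0, 0] * P[1, 1])

# DAG a -> x <- c, x -> z -> z'; rows ordered a, c, z, z'
# (x is marginalised out), unit coefficients and unit noise
S = np.array([[1., 0., 1., 1.],
              [0., 1., 1., 1.],
              [1., 1., 4., 4.],
              [1., 1., 4., 5.]])
print(r2(S, 0, 1))       # rho^2_{ac}    = 0
print(r2(S, 0, 1, [3]))  # rho^2_{ac|z'} = 1/16
print(r2(S, 0, 1, [2]))  # rho^2_{ac|z}  = 1/9: conditioning strengthens association
```

Conditioning on the more distant descendant $z^{\prime}$ induces a weaker association than conditioning on $z$, exactly as the theorem predicts.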
\begin{figure}[t]
\begin{center}
\subfigure[\label{fig:ap3}]{
\input{figap3.tex}
}\hspace{1pt}
\subfigure[\label{fig:ap3b1ca}]{
\input{figap3b1ca.tex}
}\hspace{1pt}
\subfigure[\label{fig:ap3b1c}]{
\input{figap3b1c.tex}
}
\end{center}
\caption{Graphical models satisfying the conditions of Theorem \ref{ap:3b}. Each model satisfies the condition $(i)$ of the theorem. \ref{fig:ap3} is a polytree on which $\cind{aczz^{\prime}}{\{b_1,b_2\}}{x}$ holds. In \ref{fig:ap3b1ca}, $\cind{ac}{b}{x}$, but $ac\not\perp\!\!\!\perp b|zx$. In \ref{fig:ap3b1c}, $\cind{ac}{b}{zx}$ but $ac\not\perp\!\!\!\perp b|x$. From Theorem \ref{ap:3b} it follows that $\rhsq{a}{c}{B}\le\rhsq{a}{c}{Bz^{\prime}}\le\rhsq{a}{c}{Bz}$.}
\label{fig:ap3bb}
\end{figure}
\subsubsection{\bf Situation $3$.} At least one of $a$ and $c$ is independent of both the sets conditioned on.
\begin{theorem}\label{ap:3b}
Suppose $a\perp\!\!\!\perp z$. Further suppose that, for some $x$, $\Sigma$ satisfies one of the following two conditions $(i)$ and $(ii)$:
\begin{list}{}{}
\item[$(i)$] $c\perp\!\!\!\perp az$ and one of the following six conditions holds: $(a)$ $\cind{az}{B}{x}$, $(b)$ $\cind{az}{B}{cx}$, $(c)$ $\cind{cz}{B}{x}$, $(d)$ $\cind{cz}{B}{ax}$, $(e)$ $\cind{ac}{B}{x}$, or $(f)$ $\cind{ac}{B}{xz}$,
\item[$(ii)$] $\cind{az}{cB}{x}$.
\end{list}
Then $\rhsq{a}{c}{B}\le\rhsq{a}{c}{Bz}$. Further, if $\cind{z^{\prime}}{acB}{z}$ holds, then in both cases, $\rhsq{a}{c}{B}\le\rhsq{a}{c}{Bz^{\prime}}\le\rhsq{a}{c}{Bz}$.
\end{theorem}
\begin{figure}[t]
\subfigure[\label{fig:ap3b2}]{
\input{figap3b2.tex}
}\hspace{1pt}
\subfigure[\label{fig:ap3b1}]{
\input{figap3b1.tex}
}\hspace{1pt}
\subfigure[\label{marg}]{
\input{marg.tex}
}
\caption{Graphical models satisfying condition $(ii)$ of Theorem \ref{ap:3b}. In each graph the condition $\cind{az}{cB}{x}$ holds. In Figure \ref{fig:ap3b1} $\cind{az}{\{b_1,b_2\}}{cx}$ also holds. The graphs in \ref{fig:ap3b2} ($B=\{b_1,b_2,b_3\}$) and \ref{fig:ap3b1} ($B=\{b_2,b_3\}$) are polytrees. On each, $\rhsq{a}{c}{B}\le\rhsq{a}{c}{Bz^{\prime}}\le\rhsq{a}{c}{Bz}$ holds.}
\label{fig:ap3b}
\end{figure}
The difference between conditions $(i)$ and $(ii)$ in Theorem \ref{ap:3b} is illustrated in Figures \ref{fig:ap3} and \ref{fig:ap3b2}. Under condition $(i)$, $c\perp\!\!\!\perp z$ but the relation $c\not\perp\!\!\!\perp z|x$ does not necessarily hold. On the other hand, under condition $(ii)$, $\cind{c}{z}{x}$ but $c$ may not be independent of $z$ unconditionally.
The six conditions in $(i)$ are in general distinct. For example, from the m-connection rules \citep{thomas1} applied to the MAG in Figure \ref{fig:ap3b1ca} we get (note the paths $(a,c)$$\leftrightarrow$$x$$\leftrightarrow$$z$$\leftrightarrow$$b$) $\cind{ac}{b}{x}$ but $ac\not\perp\!\!\!\perp b|zx$ (see supplement).
On the other hand, on the DAG in Figure \ref{fig:ap3b1c} clearly $\cind{ac}{b}{zx}$ but $ac\not\perp\!\!\!\perp b|x$. Similar examples for the other four conditions can be constructed.
Theorem \ref{ap:3b} goes beyond the DAGs considered by \citet{sctsr3}. One example is considered in Figure \ref{fig:scts1}. Here $a\perp\!\!\!\perp c$, $ac\perp\!\!\!\perp z$ and both $\cind{ac}{b}{x}$ and $\cind{ac}{b}{xz}$ holds.
Consequently, from Theorem \ref{ap:3b}, the relationship $\rhsq{a}{c}{b}\le\rhsq{a}{c}{bz^{\prime}}\le\rhsq{a}{c}{bz}$ follows. Note that $z$ is not an \emph{ancestor} of $x$ but an \emph{ancestor} of $b$ and consequently, $zz^{\prime}\perp\!\!\!\perp x$ also holds. \citet{sctsr3} explicitly exclude conditioning vertices which are independent of $x$.
\begin{cor}\label{cor:3b}
If $B=\emptyset$, under all conditions of Theorem \ref{ap:3b} $(i)$, $\rhsq{a}{c}{z}=\rhsq{a}{c}{z^{\prime}}=\rho^2_{ac}=0$. Under condition $(ii)$, $\rhsq{a}{c}{z}\ge\rhsq{a}{c}{z^{\prime}}\ge\rho^2_{ac}$.
\end{cor}
\subsection{\bf Comparison between Theorems \ref{ap:1b} and \ref{ap:3b} for polytree models}
For polytree models, in view of Theorem \ref{ap:1b}, the conclusion of Theorem \ref{ap:3b} $(ii)$ is a bit counterintuitive.
Note that, under $(ii)$, $\rhsq{a}{c}{x}=0$, which is the same as in Theorem \ref{ap:1b}. However, unlike the latter, conditioning on vertices farther away produces a weaker squared correlation in this case.
The difference seems to be that in Theorem \ref{ap:1b} $a\not\perp\!\!\!\perp z$, but we assume $\cind{a}{z}{x}$. In contrast, Theorem \ref{ap:3b} assumes that $a\perp\!\!\!\perp z$, but in $(ii)$ the condition $a\perp\!\!\!\perp z|x$ does not hold.
\begin{figure}[t]
\subfigure[\label{fig:scts1}]{
\input{figscts1.tex}
}\hspace{10pt}
\subfigure[\label{fig:cor4b}]{
\input{cor4b.tex}
}\hspace{10pt}
\subfigure[\label{fig:scts2}]{
\input{figscts2.tex}
}
\caption{\ref{fig:scts1} a DAG not considered by \citet{sctsr3}. From Theorem \ref{ap:3b}, it follows that $\rhsq{a}{c}{b}\le\rhsq{a}{c}{bz^{\prime}}\le\rhsq{a}{c}{bz}$. \ref{fig:cor4b} A DAG to illustrate the contrast in the conclusion of Theorem \ref{ap:1b} and Theorem \ref{ap:3b} $(ii)$. Here $\rhsq{a}{c}{v}\ge\rhsq{a}{c}{u}\ge\rho^2_{ac}\ge\rhsq{a}{c}{y}\ge\rhsq{a}{c}{w}\ge\rhsq{a}{c}{x}=0$.
From Theorem \ref{ap:1b}, on the DAG in \ref{fig:scts2} it follows that $\rhsq{a}{c}{b}\ge\rhsq{a}{c}{bz^{\prime}}\ge\rhsq{a}{c}{bz}$ always holds.}
\label{fig:extr}
\end{figure}
As an illustration of this contrast we consider the graph in Figure \ref{fig:cor4b}. From Theorem \ref{ap:1b} and Corollary \ref{cor:3b} it follows that the relationship $\rhsq{a}{c}{v}\ge\rhsq{a}{c}{u}\ge\rho^2_{ac}\ge\rhsq{a}{c}{y}\ge\rhsq{a}{c}{w}\ge\rhsq{a}{c}{x}=0$ holds.
Another such example can be constructed from the DAG in Figure \ref{fig:scts1}. We have argued above that from Theorem \ref{ap:3b} it follows that $\rhsq{a}{c}{b}\le\rhsq{a}{c}{bz^{\prime}}\le\rhsq{a}{c}{bz}$. In the DAG in Figure \ref{fig:scts2} the relation $a\perp\!\!\!\perp c$ has been replaced by $\cind{a}{c}{x}$.
From the rules of d-separation, $\cind{a}{c}{bx}$, $\cind{ac}{z}{bx}$ and $\cind{acb}{z^{\prime}}{z}$ hold (see Definition \ref{defn:dconn}). Thus after conditioning on $b$, the covariance matrix of $a$, $x$, $c$, $z$ and $z^{\prime}$ satisfies the conditions of Theorem \ref{ap:1b}.
So the qualitative comparison holds, but in contrast to Figure \ref{fig:scts1}, it follows that $\rhsq{a}{c}{b}\ge\rhsq{a}{c}{bz^{\prime}}\ge\rhsq{a}{c}{bz}$.
\subsection{\bf Comparison between $\rhsq{a}{c}{x}$ and $\rhsq{a}{c}{Bz}$.}
If $z=x$, then in all cases of Theorem \ref{ap:3b} $a\perp\!\!\!\perp Bcz$, so $\rhsq{a}{c}{z^{\prime}}=\rhsq{a}{c}{Bz}=\rhsq{a}{c}{B}=0$. When $x\in V\setminus z$, a comparison between $\rhsq{a}{c}{x}$ and $\rhsq{a}{c}{Bz}$ does not directly follow from Theorem \ref{ap:3b}.
Under condition $(ii)$, $\cind{a}{c}{x}$, so $0=\rhsq{a}{c}{x}\le\rhsq{a}{c}{Bz}$ for any $z$. However, under the conditions in $(i)$, $\rhsq{a}{c}{x}$ and $\rhsq{a}{c}{Bz}$ may not be qualitatively comparable. We show this fact in the following theorem.
\begin{theorem}\label{cl:3b2}
Suppose $a\perp\!\!\!\perp z$, $c\perp\!\!\!\perp az$, and $\cind{acz}{B}{x}$, then $\rhsq{a}{c}{Bz}\ge \rhsq{a}{c}{x}$, iff
\begin{equation*}\label{eq:firstcond}
\left(\sigma_{xx}+\frac{\sigma^2_{xz}}{\sigma_{zz}}\right)\Sigma_{xB}\Sigma^{-1}_{BB}\Sigma_{Bx}\ge \sigma^2_{xx},\text{ or equivalently }\frac{\sigma_{xx}-\sigma_{xx|B}}{\sigma_{xx}}\ge\frac{\sigma_{zz|B}}{\sigma_{zz}}.
\end{equation*}
\end{theorem}
Theorems \ref{ap:1b}, \ref{ap:2b}, \ref{ap:3b} and \ref{cl:3b2} have a curious implication on polytree models. Notice that in Theorems \ref{ap:1b} and \ref{ap:2b} the vertex $z$ is in the set of \emph{descendants} of vertex $x$ (see Figures \ref{fig:figap1b2} and \ref{fig:ap2b}),
whereas in Theorem \ref{ap:3b}, $z$ may be a \emph{parent} of $x$.
The curious fact is that, on a polytree the squared
partial correlations given the descendants of $x$ cannot be compared with the squared partial correlations given the parents (or more generally given the \emph{ancestors of the parents} of $x$).
Furthermore, the behaviour of $\rhsq{a}{c}{x}$ is a continuation of the behaviour of squared partial correlations given its descendants.
In other words, on polytrees, conditioning on the vertices ``above'' the path has different nature than conditioning on the vertices ``below'' or ``on'' the path.
We present an illustrative example in Figure \ref{fig:comps}. We consider the polytree in Figure \ref{fig:compgraph}. In Figure \ref{plot:compgraph} we plot the values of $\rhsq{a}{c}{i}$ for $i\in\{\emptyset,z_4,z_3,z_2,z_1,x,y_1,y_2,y_3,y_4\}$. All parameter values are fixed at $1$. As predicted by Theorem \ref{ap:3b}, the squared partial correlation increases from $i=z_4$ to $i=z_1$, and from Corollary \ref{cor:3b} each of them is larger than $\rho^2_{ac}$.
However, from Theorem \ref{ap:2b}, $\rhsq{a}{c}{i}$ increases as we move from $x$ to $y_4$ and each of them is smaller than $\rho^2_{ac}$. Thus the squared partial correlation drops discontinuously as we move from $z_1$ to $x$ along the $z_4$ to $y_4$ path.
\begin{figure}[t]
\subfigure[\label{fig:compgraph}]{
\input{compgraph.tex}
}\hspace{10pt}
\subfigure[\label{plot:compgraph}]{
\includegraphics[width=2in,height=2in]{compgraph1.ps}
}
\caption{\ref{fig:compgraph} A polytree and \ref{plot:compgraph} the value of $\rhsq{a}{c}{i}$ for $i\in\{\emptyset,z_4,z_3,z_2,z_1,x,y_1,y_2,y_3,y_4\}$. Each parameter is fixed at $1$. \ref{plot:compgraph} illustrates the discontinuous drop in $\rhsq{a}{c}{i}$ as we move from $z_1$ to $x$ along the $z_4$ to $y_4$ path. }
\label{fig:comps}
\end{figure}
\subsection{\bf Further generalisations on comparison with fixed correlates}
Suppose $Z_1=\{z_{11},z_{12},\ldots,z_{1n}\}$ and $Z_2=\{z_{21},z_{22},\ldots,z_{2n}\}$ are two conditionates of cardinality $n$. Then for fixed correlates $a$ and $c$, one can write:
\begin{equation}\label{eq:factmain}
\frac{\rhsq{a}{c}{Z_1}}{\rhsq{a}{c}{Z_2}}=\prod^n_{i=1}\frac{\rhsq{a}{c}{z_{21},z_{22},\ldots,z_{2(i-1)},z_{1i},z_{1(i+1)},\ldots,z_{1n}}}{\rhsq{a}{c}{z_{21},z_{22},\ldots,z_{2(i-1)},z_{2i},z_{1(i+1)},\ldots,z_{1n}}}
\end{equation}
Clearly $\rhsq{a}{c}{Z_1}\le\rhsq{a}{c}{Z_2}$ holds if each factor in the R.H.S. of \eqref{eq:factmain} is bounded by $1$.
Note that in each factor in \eqref{eq:factmain} the conditionate in the numerator and the denominator differ only in one element.
Thus in order to qualitatively compare $\rhsq{a}{c}{Z_1}$ and $\rhsq{a}{c}{Z_2}$ it is sufficient to find an $x_i$ for each factor such that $z_{1i}$ and $z_{2i}$ satisfy the conditions of one of the Theorems $2$ - $4$, possibly with $B\subseteq\{z_{21},z_{22},\ldots,z_{2(i-1)},z_{1(i+1)},\ldots,z_{1n}\}$ whenever necessary.
Using the factorisation in \eqref{eq:factmain} and Theorems $2$ - $4$, structural and path based rules for comparison may be postulated for several graphical models. The choice of $x_i$ and these path based rules depend on the structure of association of the whole vector $V$. We consider the tree models below.
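The identity \eqref{eq:factmain} itself is purely algebraic and can be checked on any positive definite covariance. The sketch below (an added illustration; the randomly generated covariance and the helper name are our choices) verifies the telescoping product for $n=2$:

```python
import numpy as np

def r2(S, a, c, Z):
    # squared partial correlation rho^2_{ac|Z} via the precision matrix
    idx = [a, c] + list(Z)
    P = np.linalg.inv(S[np.ix_(idx, idx)])
    return P[0, 1] ** 2 / (P[0, 0] * P[1, 1])

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
S = A @ A.T + 6 * np.eye(6)     # a generic positive definite covariance
a, c = 0, 1
Z1, Z2 = [2, 3], [4, 5]         # z_11, z_12 and z_21, z_22

lhs = r2(S, a, c, Z1) / r2(S, a, c, Z2)
# swap one coordinate of the conditionate at a time, as in (eq:factmain)
rhs = (r2(S, a, c, [2, 3]) / r2(S, a, c, [4, 3])) * \
      (r2(S, a, c, [4, 3]) / r2(S, a, c, [4, 5]))
print(abs(lhs - rhs) < 1e-8)    # True: the factors telescope
```

Each factor compares two conditionates differing in a single element, which is what lets Theorems $2$ - $4$ be applied factor by factor.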
\section{Application to tree models}\label{sec:tree}
Let $\gr{G}{V}{E}$ be a tree with vertex set $V$ and edge set $E$. For vertices $x\in V$ and $y\in V$, \path{x}{y} denotes the unique path joining $x$ and $y$, which we define as:
\begin{equation}
\begin{aligned}
\epath{x}{y}=\{x=v_1,v_2,\ldots,v_{k-1},v_k=y\text{ such that there is an edge between}&\\\nonumber
\text{$v_i$ and $v_{i+1}$, for each $i=1,2,\ldots,k-1$}\}.&
\end{aligned}
\end{equation}
Notice that, by the above definition, \path{x}{y} is a subset of $V$ which contains the end points $x$ and $y$. Since $G$ is a tree, it has only one connected component and therefore any two vertices $x$ and $y$ are connected by a unique \path{x}{y}.
\begin{defn}\label{defn:sep}
Two vertices $a$ and $c$ on an undirected graph $G$ are said to be \emph{separated} given a subset $Z$ of $V\setminus\{a,c\}$ if every path between $a$ and $c$ intersects $Z$.
Two subsets $A$ and $C$ of $V$ are separated given $Z\subseteq V\setminus(A\cup C)$ if $Z$ separates each $a\in A$ from each $c\in C$.
Two subsets $A$ and $C$ of $V$ are connected given a subset $Z$ if they are not separated given $Z$.
\end{defn}
Clearly on a tree $a$ and $c$ are separated given each $x\in\epath{a}{c}\setminus\{a,c\}$. On the other hand, since any two vertices $a$ and $c$ are connected by a unique path, $a$ and $c$ cannot be separated given the empty set.
The \emph{separation criterion} described above associates a set of conditional independence relations with $G$. This set is described by a collection of \emph{triples}.
\begin{equation}
\indrels{G}=\left\{\trip{T_1}{T_2}{T_3},\text{where $T_1\dot{\cup}T_2\dot{\cup}T_3\subseteq V$ such that $\cind{T_1}{T_2}{T_3}$}\right\}.
\end{equation}
The association of the separation criterion with $\indrels{G}$ can be described as follows:
\[\trip{T_1}{T_2}{T_3}\Leftrightarrow \text{ $T_1$ is separated from $T_2$ given $T_3$ in $G$.}
\]
If $V\sim N\left(0,\Sigma\right)$, then $\Sigma$ satisfies all conditional independence relationships in $\indrels{G}$. This implies that if $\Lambda=\Sigma^{-1}$, for each $\trip{T_1}{T_2}{T_3}\in\indrels{G}$, $\Lambda_{T_1T_2}=0$.
We now define formal operation of conditioning for independence model $\indrels{G}$, on subsets of $V$.
\begin{defn}\label{defn:condrels}
An independence model $\indrels{G}$ {\it after conditioning on a subset} $Z$ is the set of triples defined as follows:
\begin{equation}
\cm{\indrels{G}}{Z}{\empty}\; \equiv\; \Bigl\{ \trip{T_1}{T_2}{T_3}
\; \Bigr|\; \Bigl.
\trip{T_1}{T_2}{T_3\cup Z}
\in {\indrels{G}};\;\, (T_1\cup T_2\cup T_3)\cap Z = \emptyset \Bigr\}.
\end{equation}
\end{defn}
Thus if $\indrels{G}$ contains the independence relations satisfied by a $N\left(0,\Sigma\right)$ on $G$, then
$\cm{\indrels{G}}{Z}{\empty}$ constitutes the subset of independencies holding among the variables in $Z^c=V\setminus Z$, after conditioning on $Z$. Let $G_{Z^c}$ be the subgraph of $G$ with vertex set $Z^c$ and edge set consisting of all edges in $E$ between the vertices in $Z^c$. The following Lemma makes the connection between $\cm{\indrels{G}}{Z}{\empty}$ and $\mathfrak{I}\left(G_{Z^c}\right)$.
\begin{lem}\label{lem:ugcond}
Suppose $\gr{G}{V}{E}$ is a tree. Let $a,c$ be two distinct vertices, $Z\subseteq V\setminus\{a,c\}$ and $Z^c=V\setminus Z$. Then
\begin{equation}
\cm{\indrels{G}}{Z}{}=\mathfrak{I}\left(G_{Z^c}\right).
\end{equation}
\end{lem}
Lemma \ref{lem:ugcond} holds for any UG. It implies that conditioning on $Z$ does not add or delete any edge in $G_{Z^c}$, so if $G$ is a tree, $\cm{\indrels{G}}{Z}{}$ can be represented by a forest. The inverse of the conditional covariance matrix of $Z^c$ given $Z$ is simply $\Lambda_{Z^cZ^c}$.
Separation ensures conditional independence, but even if the separation fails the corresponding conditional covariance can still be zero (implying conditional independence for Gaussian random variables) because of the parameter values. However, Theorem \ref{ap:1b} is still valid in these cases.
For a fixed conditionate, the rules for comparing squared partial correlations on trees follow easily from Theorem \ref{lem:maincomp1} and the separation criterion.
\begin{theorem}\label{thm:cond}
Suppose that, on a Gaussian tree $G$, the vertices $a$, $c$, $c^{\prime}$ are such that $c\in\epath{a}{c^{\prime}}$. Then for any $Z\subseteq V$, $\rhsq{a}{c^{\prime}}{Z}\le\rhsq{a}{c}{Z}$.
\end{theorem}
For fixed correlates $a$ and $c$ and two sets $Z_1$ and $Z_2$ of cardinality more than one, $\rhsq{a}{c}{Z_1}$ and $\rhsq{a}{c}{Z_2}$ can be compared qualitatively. The following result describes a sufficient condition.
\begin{theorem}\label{thm:ugcorr}
Let $\gr{G}{V}{E}$ be a Gaussian tree. Suppose $a$ and $c$ are two vertices on $G$ and $Z_1$ and $Z_2$ are two subsets of $V$ such that $\cind{ac}{Z_2}{Z_1}$. Then $\rhsq{a}{c}{Z_1}\le\rhsq{a}{c}{Z_2}$.
\end{theorem}
From the separation criterion described above, it follows that if the vertices $a$ and $c$ are separated from $Z_2$ given $Z_1$, then $\cind{ac}{Z_2}{Z_1}$ and therefore $\rhsq{a}{c}{Z_1}\le\rhsq{a}{c}{Z_2}$.
The following Corollary gives the corresponding sufficient condition in terms of paths:
\begin{cor}\label{cor:ugcorr}
Suppose $Z_1$ and $Z_2$ are two subsets of $V$ such that for each vertex $z_2\in Z_2$ both paths \path{a}{z_2} and \path{c}{z_2} intersect $Z_1$. Then $\rhsq{a}{c}{Z_1}\le \rhsq{a}{c}{Z_2}$.
\end{cor}
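These path conditions can be verified on a small chain. In the sketch below (an added illustration; the chain and its unit parameters are our arbitrary choices) the tree is the chain $a - c - z_1 - z_2$; both \path{a}{z_2} and \path{c}{z_2} pass through $Z_1=\{z_1\}$, so Corollary \ref{cor:ugcorr} predicts $\rhsq{a}{c}{z_1}\le\rhsq{a}{c}{z_2}$:

```python
import numpy as np

def r2(S, a, c, Z):
    # squared partial correlation rho^2_{ac|Z} via the precision matrix
    idx = [a, c] + list(Z)
    P = np.linalg.inv(S[np.ix_(idx, idx)])
    return P[0, 1] ** 2 / (P[0, 0] * P[1, 1])

# Chain a - c - z1 - z2 with unit coefficients and unit noise
S = np.array([[1., 1., 1., 1.],
              [1., 2., 2., 2.],
              [1., 2., 3., 3.],
              [1., 2., 3., 4.]])
print(r2(S, 0, 1, [2]))  # rho^2_{ac|z1} = 1/4
print(r2(S, 0, 1, [3]))  # rho^2_{ac|z2} = 1/3 >= 1/4, as the corollary predicts
```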
Notice that Theorem \ref{thm:ugcorr} is more general than Corollary \ref{cor:ugcorr}: the theorem also covers the cases where the conditional independence holds due to the choice of parameters. The result in Theorem \ref{thm:ugcorr} is also complete in the following sense.
\begin{theorem}\label{thm:cmplt}
Suppose $\gr{G}{V}{E}$ is a Gaussian tree. Let $Z_1,Z_2\subseteq V$ such that $ac\not\perp\!\!\!\perp Z_2|Z_1$ and $ac\not\perp\!\!\!\perp Z_1|Z_2$. Further, suppose that $\left(Z_1\cup Z_2\right)\cap\epath{a}{c}=\emptyset$.
Then there exists $\Sigma_1$ such that $\rhsq{a}{c}{Z_1}>\rhsq{a}{c}{Z_2}$ and $\Sigma_2$ such that $\rhsq{a}{c}{Z_2}>\rhsq{a}{c}{Z_1}$.
\end{theorem}
Finally, Theorem \ref{thm:cond} and Corollary \ref{cor:ugcorr} can be combined into a general rule for comparing squared partial correlations on trees.
\begin{cor}\label{cor:final}
Suppose $a$, $c$, $c^{\prime}$ are three vertices on a Gaussian tree $G$ and $Z$, $Z^{\prime}$ are two subsets of the vertex set $V$. Further, assume that $c\in\epath{a}{c^{\prime}}$ and the vertices $a$ and $c^{\prime}$ are separated from $Z$ given $Z^{\prime}$. Then $\rhsq{a}{c^{\prime}}{Z^{\prime}}\le \rhsq{a}{c}{Z}$.
\end{cor}
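Corollary \ref{cor:final} can likewise be checked numerically. The sketch below is again a toy example of ours (a small hypothetical tree with arbitrary edge correlations, not taken from the paper): it verifies the combined rule with $a=0$, $c=2$, $c^{\prime}=3$, $Z=\{5\}$ and $Z^{\prime}=\{4\}$, where every path from $\{a,c^{\prime}\}$ to $Z$ passes through $Z^{\prime}$.

```python
import numpy as np

# Toy Gaussian tree of ours: path 0-1-2-3, a leaf 4 on vertex 1,
# and a further leaf 5 on vertex 4.  Edge correlations are arbitrary.
parent = {1: None, 0: 1, 2: 1, 3: 2, 4: 1, 5: 4}
edge_corr = {(0, 1): 0.6, (1, 2): 0.5, (2, 3): 0.7, (1, 4): 0.4, (4, 5): 0.5}

def tree_path(u, v):
    """Vertices on the unique u-v path in the tree."""
    up, w = [], u
    while w is not None:
        up.append(w)
        w = parent[w]
    down, w = [], v
    while w not in up:
        down.append(w)
        w = parent[w]
    return up[:up.index(w) + 1] + down[::-1]

def corr(u, v):
    """corr(u, v) = product of edge correlations along the joining path."""
    rho, p = 1.0, tree_path(u, v)
    for a, b in zip(p, p[1:]):
        rho *= edge_corr.get((a, b), edge_corr.get((b, a), 1.0))
    return rho

Sigma = np.array([[corr(i, j) for j in range(6)] for i in range(6)])

def partial_corr_sq(a, c, Z):
    """Squared partial correlation rho^2_{ac|Z} via the conditional covariance."""
    S = [a, c]
    C = Sigma[np.ix_(S, S)] - Sigma[np.ix_(S, Z)] @ np.linalg.solve(
        Sigma[np.ix_(Z, Z)], Sigma[np.ix_(Z, S)])
    return C[0, 1] ** 2 / (C[0, 0] * C[1, 1])

# c = 2 is on the a-c' path, and {a, c'} = {0, 3} is separated from Z = {5}
# given Z' = {4} (every path to vertex 5 passes through vertex 4), so the
# combined rule predicts rho^2_{a,c'|Z'} <= rho^2_{a,c|Z}:
print(partial_corr_sq(0, 3, [4]) <= partial_corr_sq(0, 2, [5]))
```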
\section{Mixed ancestral graphs}\label{sec:mags}
In this supplement we briefly discuss mixed ancestral graphs. Our discussion closely follows \citet{thomas1}. We also refer to the same text for a more detailed treatment of this class of graphs.
A graph $G$ is an ordered pair $(V,E)$ where $V$ is a set of vertices and $E$ is a set of edges.
A mixed graph is a graph containing three types of edges, undirected ($\hbox{\kern3pt\raise2.5pt\vbox{\hrule width9pt height 0.3pt}\kern3pt}$), directed ($\rightarrow$) and bidirected ($\leftrightarrow$). The following terminology is used to describe relations between variables in such a graph:
\begin{enumerate}
\item If $\alpha\hbox{\kern3pt\raise2.5pt\vbox{\hrule width9pt height 0.3pt}\kern3pt}\beta$ in $G$, then $\alpha$ is a neighbour of $\beta$ and $\alpha\in ne(\beta)$.
\item If $\alpha\rightarrow\beta$ in $G$, then $\alpha$ is a parent of $\beta$ and $\alpha\in pa(\beta)$.
\item If $\beta\rightarrow\alpha$ in $G$, then $\alpha$ is a child of $\beta$ and $\alpha\in ch(\beta)$.
\item If $\alpha\leftrightarrow\beta$ in $G$, then $\alpha$ is a spouse of $\beta$ and $\alpha\in sp(\beta)$.
\end{enumerate}
\begin{defn} A vertex $\alpha$ is said to be an ancestor of a vertex $\beta$ if either there is a directed path $\alpha\rightarrow\cdots\rightarrow\beta$ from $\alpha$ to $\beta$, or $\alpha=\beta$. Further, for $X\subseteq V$ its ancestor set is defined as:
\[an(X)=\{\alpha~:~\alpha\text{ is an ancestor of $\beta$ for some $\beta\in X$}\}.
\]
\end{defn}
\begin{defn} A vertex $\alpha$ is said to be anterior to a vertex $\beta$ if either $\alpha=\beta$, or there is a path \path{\alpha}{\beta} on which every edge is either of the form $\gamma\hbox{\kern3pt\raise2.5pt\vbox{\hrule width9pt height 0.3pt}\kern3pt}\delta$, or $\gamma\rightarrow\delta$ with $\delta$ between $\gamma$ and $\beta$; that is,
there are no edges $\gamma\leftrightarrow\delta$ and no edges $\delta\rightarrow\gamma$ pointing toward $\alpha$. Further, for $X\subseteq V$ its anterior set is defined as:
\[ant(X)=\{\alpha~:~\alpha\text{ is anterior to $\beta$ for some $\beta\in X$}\}.
\]
\end{defn}
\begin{defn} An ancestral graph $G$ is a mixed graph in which the following conditions hold for all vertices $\alpha$ in $G$:
\begin{enumerate}
\item $\alpha\not\in ant\left(pa(\alpha)\cup sp(\alpha)\right)$ and
\item if $ne(\alpha)\ne\emptyset$ then $pa(\alpha)\cup sp(\alpha)=\emptyset$.
\end{enumerate}
\end{defn}
The d-separation criterion for DAGs can be extended to the m-separation criterion for mixed ancestral graphs.
A non-endpoint vertex $\zeta$ on a path is a collider on the path if the edges preceding and succeeding $\zeta$ on the path both have an arrowhead at $\zeta$, i.e., $\rightarrow\zeta\leftarrow$, $\leftrightarrow\zeta\leftrightarrow$, $\leftrightarrow\zeta\leftarrow$, $\rightarrow\zeta\leftrightarrow$.
A non-endpoint vertex $\zeta$ on a path which is not a collider is a noncollider on the path.
A path between vertices $\alpha$ and $\beta$ in an ancestral graph $G$ is said to be m-connecting given a set $Z$ (possibly empty), with $\alpha$, $\beta\not\in Z$ if:
\begin{enumerate}
\item every noncollider on the path is not in $Z$, and
\item every collider on the path is in $ant(Z)$.
\end{enumerate}
If there is no path m-connecting $\alpha$ and $\beta$ given $Z$, then $\alpha$ and $\beta$ are said to be m-separated given $Z$. Non-empty sets $X$ and $Y$ are m-separated given $Z$ if, for every pair $\alpha$, $\beta$ with $\alpha\in X$ and $\beta\in Y$, $\alpha$ and $\beta$ are m-separated given $Z$ ($X$, $Y$ and $Z$ are disjoint sets).
A distribution $F$ is said to satisfy the conditional independence relations represented by a mixed ancestral graph if for disjoint subsets $X$, $Y$ and $Z$, $\cind{X}{Y}{Z}$ according to $F$ whenever $X$ is m-separated from $Y$ given $Z$.
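The m-separation criterion above can be implemented directly by enumerating simple paths and testing the two conditions. The following sketch is our own illustration on a small hypothetical mixed graph (not one of the figures of this paper): it computes anterior sets by reachability and checks m-separation by brute force.

```python
# Each edge is (u, mark_u, mark_v, v) with "-" a tail and ">" an arrowhead,
# so ("a", "-", ">", "x") encodes a -> x, and marks (">", ">") would encode
# a bidirected edge.  Hypothetical graph: a -> x <- c,  x -> z,  b -> z.
edges = [("a", "-", ">", "x"), ("c", "-", ">", "x"),
         ("x", "-", ">", "z"), ("b", "-", ">", "z")]

def anterior(Z):
    """ant(Z): vertices reaching Z along undirected edges or edges directed toward Z."""
    out, frontier = set(Z), list(Z)
    while frontier:
        d = frontier.pop()
        for u, mu, mv, v in edges:
            if v == d and mu == "-" and u not in out:            # u -> d or u -- d
                out.add(u); frontier.append(u)
            if u == d and mu == "-" and mv == "-" and v not in out:   # d -- v
                out.add(v); frontier.append(v)
    return out

def simple_paths(s, t, visited=None):
    """All simple paths from s to t, as lists of oriented (a, ma, mb, b) steps."""
    visited = visited or [s]
    if s == t:
        yield []
        return
    for u, mu, mv, v in edges:
        for a, ma, mb, b in ((u, mu, mv, v), (v, mv, mu, u)):
            if a == s and b not in visited:
                for rest in simple_paths(b, t, visited + [b]):
                    yield [(a, ma, mb, b)] + rest

def m_separated(x, y, Z):
    """True iff no path between x and y is m-connecting given Z (x, y not in Z)."""
    for path in simple_paths(x, y):
        blocked = False
        for i in range(len(path) - 1):
            mid = path[i][3]                                   # interior vertex
            collider = path[i][2] == ">" and path[i + 1][1] == ">"
            if (collider and mid not in anterior(Z)) or (not collider and mid in Z):
                blocked = True
                break
        if not blocked:
            return False          # found an m-connecting path
    return True

# x is a collider on every a-c path, so a and c are m-separated marginally,
# but conditioning on the descendant z opens the collider:
print(m_separated("a", "c", set()), m_separated("a", "c", {"z"}))
```

The brute-force path enumeration is exponential in general and is meant only to make the definition concrete on tiny graphs.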
\begin{figure}[t]
\begin{center}
\subfigure[\label{fig:magb}]{\input{mag.tex}}\hspace{1in}
\subfigure[\label{fig:ap3b1cab}]{\input{figap3b1ca.tex}}
\end{center}
\caption{}
\end{figure}
\subsection{Examples of mixed ancestral graphs in the main text}
\begin{example} Consider the mixed ancestral graph in Figure \ref{fig:magb}. There is more than one path connecting $a$ and $c$, and each of them has a collider on it. For example, $y_1$ is a collider on the path $\{a,y_1,c\}$. So $a$ is m-separated from $c$ given $\emptyset$, and thus $a\perp\!\!\!\perp c$.
Further note that $x_2$ is a noncollider on each path connecting $\{a,c\}$ and $z$. Thus, $\cind{ac}{z}{x_2}$. Similarly, $\cind{ac}{z^{\prime}}{z}$.
\end{example}
\begin{example}
Now we consider the graph in Figure \ref{fig:ap3b1cab}. Clearly $a\perp\!\!\!\perp c$. $x$ is a collider on the paths $\{a,x,z\}$ and $\{c,x,z\}$, and $b$ is a collider on the paths $\{a,x,b,z\}$ and $\{c,x,b,z\}$. So the colliders $b$ and $x$ block every path, i.e., $a$ and $c$ are m-separated from $z$ given $\emptyset$; hence $ac\perp\!\!\!\perp z$.
Now note that $x$ is a noncollider on the paths $\{a,x,b\}$ and $\{c,x,b\}$, and $z$ is a collider on the paths $\{a,x,z,b\}$ and $\{c,x,z,b\}$. This implies that $\{a,c\}$ is m-separated from $b$ given $x$, but not given $zx$.
\end{example}
\section{Introduction}
Facial attributes are descriptions or labels that can be given to a face by describing its appearance \cite{kumar_ttributes}. In the biometrics community, attributes are also referred to as soft-biometrics \cite{softbio}. Various methods have been developed in the literature for predicting facial attributes from images \cite{DeepAtt,kumar2008facetracer,zhang2014panda,ranjan2019hyperface,rudd2016moon,lu2017fully,gunther2017affact}. In this work, we aim to tackle the inverse problem of synthesizing faces from their corresponding attributes (see Fig.~\ref{fig:att_vs_rec}). Visual description-based facial synthesis has many applications in law enforcement and entertainment. For example, visual attributes are commonly used in law enforcement to assist in identifying suspects involved in a crime when no facial image of the suspect is available at the crime scene. This is commonly done by constructing a composite or forensic sketch of the person based on the visual attributes.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{./fig/introduction.pdf}\\
(a)\hskip100pt (b)
\caption{Attribute prediction vs. face synthesis from attributes. (a) Attribute prediction: given a face image, the goal is to predict the corresponding attributes. (b) Face synthesis from attributes: given a list of facial attributes, the goal is to generate a face image that satisfies these attributes.}
\label{fig:att_vs_rec}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{./fig/framework.pdf}\\
\caption{An overview of the proposed synthesis method. Given a noise vector sampled from the normal distribution, the Sketch Generator Network synthesizes a sketch image conditioned on the sketch attributes. The synthesized sketch is then given as an input to the Face Generator Network, which outputs high-quality face images conditioned on the facial attributes.
}
\label{fig:framework}
\end{figure*}
Reconstructing an image from attributes or text descriptions is an extremely challenging problem because the model is required to learn the mapping from a semantic abstract space to a complex RGB image space. This task requires the generated images to be not only realistic but also semantically consistent, i.e., the generated face images should preserve the facial structure as well as the content described in attributes. Several recent works have attempted to solve this problem by using recently introduced CNN-based generative models such as conditional variational auto-encoder (CVAE) \cite{sohn2015learning,yan2016attribute2image,kingma2013auto} and generative adversarial network (GAN) \cite{goodfellow2014generative,reed2016generative,zhang2016stackgan,zhang2017stackgan++,zhang2018photographic,xu2018attngan}.
For instance, Yan \emph{et al.} \cite{yan2016attribute2image} proposed a disentangled CVAE-based method for attribute-conditioned image generation. In a different approach, Reed \emph{et al.} \cite{reed2016generative} introduced a GAN-based method for synthesizing images from detailed text descriptions. Similarly, Zhang \emph{et al.} \cite{zhang2016stackgan} proposed the StackGAN method for synthesizing photo-realistic images from text.
It is well-known that CVAE-based methods often generate blurry images due to the injected noise and imperfect element-wise squared error measure used in training \cite{bao2017cvae}. In contrast, GAN-based methods have shown to generate high-quality images \cite{goodfellow2014generative}. In order to synthesize photo-realistic facial images, rather than directly generating an image from attributes, we first synthesize a sketch image corresponding to the attributes and then generate the facial image from the synthesized sketch. Our approach is motivated by the way forensic sketch artists render the composite sketches of an unknown subject using a number of individually described parts and attributes. Our approach is also inspired by the recent works \cite{Wang_SSGAN2016,sohn2015learning,villegas2017decomposing,Chen_2017_ICCV,Walker_2017_ICCV} that have shown the effectiveness of stage-wise training.
In particular, the proposed framework consists of two stages (see Fig.~\ref{fig:framework}) -- a sketch generator network and a face generator network. Given a noise vector sampled from the normal distribution and sketch attributes, the sketch generator learns to synthesize sketch images. In the second stage, given the synthesized sketch from the first stage, a different generator network is trained to synthesize high-quality face images with the help of attributes. The attribute augmentation module used in both the sketch and face generator networks is adapted from StackGAN \cite{zhang2016stackgan}. This module aims to increase the generation diversity by adding redundant information from standard Gaussian noise. In experiments, we observed that, due to the sparsity of visual attributes, the input attribute values become all zero when the attributes in an input batch are all the same. In order to overcome this ``attribute vanishing issue'', we replace the batch normalization layers with conditional batch normalization layers \cite{de2017modulating}. We refer to this module as the attribute augmentation module.
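A minimal numerical sketch of the conditional batch normalization idea follows. This is our own simplified illustration, not the implementation of \cite{de2017modulating} or of our networks: the per-channel scale and shift of the normalization are predicted from the conditioning attribute vector (here by fixed random matrices standing in for learned layers), so each sample keeps a non-trivial affine transform even when every sample in the batch carries identical attributes.

```python
import numpy as np

rng = np.random.default_rng(0)

def conditional_batch_norm(x, attr, W_gamma, W_beta, eps=1e-5):
    """Normalize x of shape (N, C, H, W) per channel, then scale/shift with
    parameters predicted from the per-sample attribute vectors attr (N, A)."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    gamma = 1.0 + attr @ W_gamma          # (N, C), centred around 1
    beta = attr @ W_beta                  # (N, C)
    return gamma[:, :, None, None] * x_hat + beta[:, :, None, None]

N, C, H, W, A = 4, 8, 16, 16, 10
x = rng.standard_normal((N, C, H, W))
attr = np.tile(rng.standard_normal(A), (N, 1))   # a batch of identical attributes
out = conditional_batch_norm(x, attr,
                             0.1 * rng.standard_normal((A, C)),
                             0.1 * rng.standard_normal((A, C)))
print(out.shape)
```

In a real network the attribute-to-$(\gamma,\beta)$ maps would be trained jointly with the generator; the random matrices above are only for shape checking.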
To summarize, this paper makes the following contributions:
\begin{itemize}
\item We formulate the attribute-to-face generation problem as a stage-wise learning problem, i.e. attribute-to-sketch, and sketch-to-face. The synthesis networks are based on multi-scale generators with an attribute augmentation module for synthesizing photo-realistic images.
\item For the face generator network, we propose a novel visual-attribute-conditioned sketch-to-face synthesis network. The network is composed of an attribute augmentation module and a UNet-shaped \cite{ronneberger2015u} translation network. With the help of the attribute augmentation module, the training stability is improved and the generators are able to synthesize a diverse set of realistic face/sketch images.
\item Extensive experiments are conducted to demonstrate the effectiveness of the proposed image synthesis method. Furthermore, an ablation study is conducted to demonstrate the improvements obtained by different stages of our framework.
\end{itemize}
The rest of the paper is organized as follows. In Section~\ref{sec:related}, we review a few related works. Details of the proposed method for facial composite synthesis from visual attributes are given in Section~\ref{sec:method}.
Experimental results are presented in Section~\ref{sec:expt}, and finally, Section~\ref{sec:con} concludes the paper with a brief summary.
\section{Background and Related Work} \label{sec:related}
Recent advances in deep learning have led to the development of various deep generative models for the problem of text-image synthesis and image-to-image translation \cite{larochelle2011neural}, \cite{kingma2013auto}, \cite{goodfellow2014generative}, \cite{rezende2014stochastic}, \cite{radford2015unsupervised}, \cite{sohn2015learning}, \cite{larsen2015autoencoding}, \cite{denton2015deep}, \cite{dosovitskiy2017learning}, \cite{salimans2016improved}, \cite{metz2016unrolled}, \cite{arjovsky2017towards}, \cite{che2016mode}, \cite{gauthier2014conditional}, \cite{odena2016conditional}.
Among them, variational autoencoder (VAE) \cite{kingma2013auto,rezende2014stochastic,larsen2015autoencoding}, generative adversarial network (GAN) \cite{goodfellow2014generative,radford2015unsupervised,salimans2016improved,metz2016unrolled,arjovsky2017towards,che2016mode,odena2016conditional}, and Autoregression \cite{larochelle2011neural} are the most widely used approaches.
VAEs \cite{kingma2013auto,rezende2014stochastic} are powerful generative models that use deep networks to describe the distribution of observed and latent variables. A VAE model consists of two parts: one network encodes a data sample to a latent representation, and the other network decodes the latent representation back to data space. VAE regularizes the encoder by imposing a prior over the latent distribution. Conditional VAE (CVAE) \cite{sohn2015learning,yan2016attribute2image} is an extension of VAE that models latent variables and data, both conditioned on side information such as a part or label of the image. For example, Yan \emph{et al.} \cite{yan2016attribute2image} proposed a disentangling conditional VAE (disCVAE) model to synthesize facial images from visual attributes. They made the assumption that a face image can be decomposed into two parts: foreground and background. Under this assumption, the disCVAE model is able to generate plausible face images with the corresponding attributes. However, due to the imperfect element-wise squared error measure, the VAE model usually generates blurry images \cite{bao2017cvae}.
\begin{figure*}[htp!]
\centering
\includegraphics[width=.85\linewidth]{./fig/Sketch_Generator_Network.pdf}
\caption{The sketch generator network architecture. The sketch attributes are first augmented by the attribute augmentation module, in which a new latent attribute variable is re-sampled from the estimated latent distribution ($\mu_{\phi(\mathbf{y}_{s})}$ and $\sigma_{\phi(\mathbf{y}_{s})}$) and concatenated with a noise vector. Then, the remaining up-sample modules (\textcolor{orange}{orange}) aim to generate a series of multi-scale sketches with the augmented sketch attributes.}
\label{fig:sketchgeneratornetwork}
\end{figure*}
GANs \cite{goodfellow2014generative} are another class of generative models that are used to synthesize realistic images by effectively learning the distribution of training images \cite{song2018geometry, lu2018conditional, huang2017beyond}. The goal of a GAN is to train a generator $G$ to produce samples from the training distribution such that the synthesized samples are indistinguishable from the actual distribution by the discriminator $D$. Conditional GAN is another variant, where the generator is conditioned on additional variables such as discrete labels, text or images.
The objective function of a conditional GAN is defined as follows
\begin{equation}\label{eq:conditional GAN loss}
\begin{split}
L_{cGAN}(G,D) = E_{\mathbf{x},\mathbf{y} \sim P_{data} (\mathbf{x},\mathbf{y})}[\log D(\mathbf{x},\mathbf{y})]+ \\
E_{\mathbf{x}\sim P_{data}(\mathbf{x}),\mathbf{z}\sim p_{z}(\mathbf{z})}[\log(1-D(\mathbf{x},G(\mathbf{x},\mathbf{z})))],
\end{split}
\end{equation}
where $\mathbf{z}$ is the input noise, $\mathbf{y}$ is the output image, and $\mathbf{x}$ is the observed image. The pair $(\mathbf{x},\mathbf{y})$, sampled from the data distribution $P_{data} (\mathbf{x},\mathbf{y})$, should be classified as real by the discriminator $D$, while the generator tries to fool $D$ with the fake sample $G(\mathbf{x},\mathbf{z})$, where $\mathbf{x}\sim P_{data}(\mathbf{x})$ and $\mathbf{z}\sim p_{z}(\mathbf{z})$.
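As a toy numerical reading of this objective, the two expectations can be estimated by batch averages of the discriminator outputs. The probabilities below are made up purely for illustration; in practice they come from the trained networks.

```python
import numpy as np

# Made-up discriminator probabilities for a batch of real pairs (x, y) and
# fake pairs (x, G(x, z)).
d_real = np.array([0.9, 0.8, 0.95])   # D(x, y) on real pairs
d_fake = np.array([0.2, 0.1, 0.3])    # D(x, G(x, z)) on generated pairs

# Monte-Carlo estimate of L_cGAN(G, D): D wants this large, G wants it small.
loss = np.mean(np.log(d_real)) + np.mean(np.log(1.0 - d_fake))
print(round(float(loss), 4))
```

As the discriminator gets better ($D(\mathbf{x},\mathbf{y})\to 1$ on real pairs, $D(\mathbf{x},G(\mathbf{x},\mathbf{z}))\to 0$ on fakes), this estimate approaches its supremum of $0$.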
Based on these generative models, two common problems have been widely studied by researchers: image-to-image synthesis and text-to-image synthesis.
\noindent \textbf{Image-to-image synthesis:} One important motivation behind the image-to-image synthesis problem is to bridge the gap between different image domains. Image-to-image translation models are often built based on the common networks like UNet \cite{ronneberger2015u} and FCN \cite{long2015fully}. Isola \emph{et al.} \cite{isola2016image} proposed conditional GANs \cite{mirza2014conditional} for several tasks such as labels to street scenes, labels to facades, image colorization, etc. In an another variant, Zhu \emph{et al.} \cite{zhu2017unpaired} proposed CycleGAN that learns image-to-image translation in an unsupervised fashion. Similarly, Yi \emph{et al.} \cite{yi2017dualgan} developed an unsupervised dual image-to-image translation model.
\noindent \textbf{Text-to-image synthesis:} Reed \emph{et al.} \cite{reed2016generative} proposed a conditional GAN network to generate images conditioned on text descriptions. Several text-to-image synthesis works have been proposed in the literature that make use of multi-scale information \cite{li2019global,zhang2016stackgan,zhang2017stackgan++,zhang2018photographic,denton2015deep,korshunova2017fast,banerjee2018hallucinating}. Zhang \emph{et al.} \cite{zhang2016stackgan} proposed a two-stage stacked GAN (StackGAN) method which achieves state-of-the-art image synthesis results. More recently, this work was extended in \cite{zhang2017stackgan++} by using additional losses and better fine-tuning procedures. Xu \emph{et al.} \cite{xu2018attngan} proposed an attention-driven method to improve the synthesis results. Zhang \emph{et al.} \cite{zhang2018photographic} (HDGAN) adopted a multi-adversarial loss to improve the synthesis by leveraging more effective image and text information at multi-scale layers.
\section{Proposed Method}\label{sec:method}
In this section, we provide details of the proposed GAN-based attribute to face synthesis method, which consists of two components: sketch generator and face generator. Note that the training phase of our method requires ground truth attributes and the corresponding sketch and face images. Furthermore, the attributes are divided into two separate groups - one corresponding to texture and the other corresponding to color. Since sketch contains no color information, we use only the texture attributes in the first component (i.e. sketch generator) as indicated in Fig.~\ref{fig:framework}.
In order to explore the multi-scale information during training, inspired by the previous works \cite{zhang2016stackgan,zhang2017stackgan++,wangfg2018high,zhang2018photographic}, we adopt the idea of hierarchically-integrated multiple discriminators at different layers in our generators. The sketch/face generator network learns the training data distribution from low-resolution to high-resolution. This also helps in improving the training stability of the overall network \cite{karras2017progressive}.
\subsection{Stage 1: Attribute-to-Sketch}
An overview of the sketch generator network architecture is shown in Fig.~\ref{fig:sketchgeneratornetwork}. Given the sketch attribute vector $\mathbf{y}_{s}$, the goal of the sketch generator network $G_{s}$ is to produce multi-scale sketch outputs as follows
\begin{equation}\label{multi-scale Gs}
G_{s}(\mathbf{z}_{s}, \mathbf{y}_{s})= \{\mathbf{\hat{x}}_{s}^{1}, \mathbf{\hat{x}}_{s}^{2}, \cdots, \mathbf{\hat{x}}_{s}^{m} \} \triangleq \mathbf{\hat{X}}_{s},
\end{equation}
where $\mathbf{z}_{s}$ is the noise vector sampled from a normal Gaussian distribution, $\{\mathbf{\hat{x}}_{s}^{1}, \mathbf{\hat{x}}_{s}^{2}, \cdots, \mathbf{\hat{x}}_{s}^{m}\}$ are the synthesized sketch images with gradually growing resolutions, and $\mathbf{\hat{x}}_{s}^{m}$ is the final output with the highest resolution. In order to explore the multi-scale information at different image resolutions, a set of distinct discriminators $D_{s} = \{D_{s}^{1},...,D_{s}^{m}\}$ is implemented for each $\mathbf{\hat{x}}_{s}^{i}, i=1,2,3,\cdots, m$. An example of a $3$-scale generator architecture is shown in Fig.~\ref{fig:sketchgeneratornetwork}. It can be observed that the output sketch images are generated from the feature maps with certain resolutions (width $\times$ height) from different layers of the network.
The generator network consists of three modules: the attribute augmentation module (AA), the up-sample module (UP), and the stretching module (STR). The STR module consists of two $1\times 1$ convolution layers followed by a Tanh layer, which aims to convert the feature map into a 3-channel output image. The UP module consists of an up-sampling layer followed by convolutional, batch normalization, and ReLU layers. Between consecutive UP modules, there is an additional residual block (Res) module \cite{he2016identity,he2016deep}.
The AA module consists of a series of fully-connected neural networks which aim to learn a latent representation of the given visual attribute vector $\mathbf{y}$. During training, we randomly sample a latent variable $\mathbf{\hat{y}}$ from an independent Gaussian distribution $\mathcal{N}(\mu_{\phi(\mathbf{y})}, \sigma_{\phi(\mathbf{y})})$, where the mean $\mu_{\phi(\mathbf{y})}$ and the diagonal covariance matrix $\sigma_{\phi(\mathbf{y})}$ are learned as the functions of visual attributes $\mathbf{y}$. In order to avoid over-fitting, the following KL-divergence regularization term is added during training between the augmented visual attribute distribution and the standard Gaussian distribution
\begin{equation}\label{attribute augmentation}
\mathcal{L}_{aug} = \mathcal{D}_{KL}(\mathcal{N}(\mu_{\phi(\mathbf{y})}, \sigma_{\phi(\mathbf{y})}) \| \mathcal{N}(0, \mathcal{I})),
\end{equation}
where $\mathcal{N}(0, \mathcal{I})$ is the normal Gaussian distribution \cite{zhang2016stackgan,kingma2013auto,larsen2015autoencoding}. Different from previous works \cite{zhang2016stackgan,zhang2017stackgan++}, we replace the traditional batch normalization layers with the conditional batch normalization \cite{de2017modulating} in order to overcome the attribute vanishing problem.
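For a diagonal Gaussian, this KL term has the closed form $\frac{1}{2}\sum_j\left(\mu_j^2+\sigma_j^2-\log\sigma_j^2-1\right)$, which vanishes exactly when $\mu_{\phi(\mathbf{y})}=0$ and $\sigma_{\phi(\mathbf{y})}=\mathbf{1}$. A quick numerical sketch, with arbitrary illustrative values in place of the learned $\mu_{\phi(\mathbf{y})}$ and $\sigma_{\phi(\mathbf{y})}$:

```python
import numpy as np

def kl_to_standard_normal(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ) for a diagonal Gaussian."""
    return 0.5 * np.sum(mu ** 2 + sigma ** 2 - np.log(sigma ** 2) - 1.0)

mu = np.array([0.5, -0.2, 0.0])       # arbitrary illustrative values
sigma = np.array([1.0, 0.8, 1.2])
print(kl_to_standard_normal(mu, sigma))   # strictly positive here
```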
As shown in Fig.~\ref{fig:sketchgeneratornetwork}, the overall sketch generator network architecture is as follows:
\noindent AA(512)-UP(256)-Res(256)-UP(128)-Res(128)-UP(64)-Res(64)-UP(32),\\
where the number in round brackets indicates the output channel of the feature maps. As shown in Fig.~\ref{fig:sketchgeneratornetwork}, the three stretching (STR) modules convert the feature maps into 3-channel output sketch images at different resolutions.
\begin{figure*}[t]
\centering
\includegraphics[width=0.85\linewidth]{fig/Sketch_Discriminator_Network}
\caption{Sketch discriminator at $64\times 64$ resolution scale: given a sketch attribute vector, the discriminator is trained using the triplets: (i) real-sketch and real-sketch-attributes, (ii) synthesized-sketch and real-sketch-attributes, (iii) wrong-sketch (real sketch but mismatching attributes) and same real-sketch-attributes. Note that the convolutional layers with dashed lines are removed when training the lower-resolution discriminators. }
\label{fig:sketchdiscriminatornetwork}
\end{figure*}
\noindent \textbf{Discriminator and Training Loss:}
The proposed sketch generator produces multi-scale resolution synthesized sketch images. In order to leverage the hierarchical property of the network, a set of discriminators $D_{s} = \{D_{s}^{1},...,D_{s}^{m}\}$ with similar architectures is designed for each scale. For a particular scale, the sketch discriminator is developed as shown in Fig.~\ref{fig:sketchdiscriminatornetwork}. In order to learn the discrimination in both image content and semantics, we adopt the triplet matching training strategy \cite{zhang2017stackgan++,zhang2018photographic,reed2016generative,di2018apgan}. Specifically, given sketch attributes, the discriminator is trained by using the following triplets: (i) real-sketch and real-sketch-attributes, (ii) synthesized-sketch and real-sketch-attributes, and (iii) wrong-sketch (real sketch but mismatching attributes) and same real-sketch-attributes. As shown in Fig.~\ref{fig:sketchdiscriminatornetwork}, two kinds of errors are used to train the discriminator. They correspond to (i) real/fake sketch images, and (ii) sketch images and attributes.
The architecture of the proposed sketch discriminator for $64\times 64$ resolution is shown in Fig.~\ref{fig:sketchdiscriminatornetwork}. This architecture can be easily adapted for other resolution scales by adding/removing the appropriate convolutional layers. As shown in Fig.~\ref{fig:sketchdiscriminatornetwork}, two branches with different losses are used to train the discriminator at a certain resolution scale. One consists of a series of down-sampling convolutional layers (with filter size 4, stride 2 and padding size 1) to produce a $4 \times 4$ probability map and classify each location as true or false. The other branch first embeds the sketch attributes to a $128\times4\times4$ feature map and concatenates it with the feature maps from the first branch. Another two $1\times 1$ convolutional layers are used to fuse the concatenated feature maps to produce $4\times4$ probability maps for classification. This branch aims to distinguish whether the semantics in sketch images match the sketch attributes or not, through the feedback loss from the second $4\times4$ probability map.
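The $4\times4$ map size follows from standard convolution arithmetic: with kernel size 4, stride 2 and padding 1, each layer exactly halves the spatial resolution, so four such layers map a $64\times64$ input down to $4\times4$. A quick check of this arithmetic:

```python
def conv_out(size, kernel=4, stride=2, pad=1):
    """Standard convolution output-size formula: floor((n + 2p - k)/s) + 1."""
    return (size + 2 * pad - kernel) // stride + 1

size, sizes = 64, [64]
for _ in range(4):          # four stride-2 down-sampling layers
    size = conv_out(size)
    sizes.append(size)
print(sizes)                # 64 -> 32 -> 16 -> 8 -> 4
```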
The overall adversarial loss used to train the network is defined as follows:
\begin{equation} \label{eq: multi-scale adversarial loss}
\begin{split}
\mathcal{L}_{s_{Dis}} &= \min_{G_{s}} \max_{D_{s}} V(G_{s}, D_{s}, \mathbf{X}_{s}, \mathbf{y}_{s}, \mathbf{z}_{s}) \\
&= \sum_{i=1}^{m} \min_{G_{s}} \max_{D_{s}^{i}} (\mathcal{L}_{s_{real}}^{i} + \mathcal{L}_{s_{fake}}^{i} + \mathcal{L}_{s_{wrong}}^{i}), \\
\mathcal{L}_{s_{real}}^{i} &= \mathbb{E}_{\mathbf{x}_{s}^{i}\sim P_{data}(\mathbf{x}_{s}^{i})}[\log D_{s}^{i}(\mathbf{x}_{s}^{i})] \\
&\quad + \mathbb{E}_{\mathbf{x}_{s}^{i} \sim P_{data}(\mathbf{x}_{s}^{i}), \mathbf{y}_{s} \sim P_{data}(\mathbf{y}_{s})}[\log D_{s}^{i}(\mathbf{x}_{s}^{i},\mathbf{y}_{s})], \\
\mathcal{L}_{s_{wrong}}^{i} &= \mathbb{E}_{\mathbf{x}_{s}^{i'} \sim P_{data}(\mathbf{x}_{s}^{i}), \mathbf{y}_{s}\sim P_{data}(\mathbf{y}_{s})}[\log (1-D_{s}^{i}(\mathbf{x}_{s}^{i'},\mathbf{y}_{s}))], \\
\mathcal{L}_{s_{fake}}^{i} &= \mathbb{E}_{\mathbf{\hat{x}}_{s}^{i} \sim P_{G_{s}(\mathbf{y}_{s},\mathbf{z}_{s})}}[\log (1-D_{s}^{i}(\mathbf{\hat{x}}_{s}^{i}))] \\
&\quad + \mathbb{E}_{\mathbf{\hat{x}}_{s}^{i} \sim P_{G_{s}(\mathbf{y}_{s},\mathbf{z}_{s})}, \mathbf{y}_{s}\sim P_{data}(\mathbf{y}_{s})}[\log (1-D_{s}^{i}(\mathbf{\hat{x}}_{s}^{i},\mathbf{y}_{s}))],
\end{split}
\end{equation}
where $\mathbf{\hat{x}}_{s}^{i} \sim P_{G_{s}(\mathbf{y}_{s}, \mathbf{z}_{s})}$ stands for the synthesized (fake) sketch image sampled from the sketch generator at scale $i$, $\mathbf{x}_{s}^{i}\sim P_{data}(\mathbf{x}_{s}^{i})$ stands for the real sketch image sampled from the sketch image data distribution at scale $i$, $\mathbf{x}_{s}^{i'}$ is the attribute-mismatching real sketch image sample at scale $i$, and $\mathbf{y}_{s}$ is the sketch attribute vector. The total objective loss function is given as follows
\begin{equation}\label{generator loss}
\mathcal{L}_{s_{total}} = \sum_{i=1}^{m} \min_{G_{s}} \max_{D_{s}^{i}} (\mathcal{L}_{s_{real}}^{i} + \mathcal{L}_{s_{fake}}^{i} + \mathcal{L}_{s_{wrong}}^{i}) + \lambda_{s} \mathcal{L}_{s_{aug}},
\end{equation}
where the hyperparameter $\lambda_{s}$ is set equal to 0.01 in our experiments, and $\mathcal{L}_{s_{aug}}$ is the KL-divergence regularization term from the AA module, with the sketch attribute $\mathbf{y}_{s}$ and noise $\mathbf{z}_{s}$ as inputs.
\subsection{Stage 2: Sketch-to-Face}
Given the synthesized sketches $\mathbf{\hat{X}}_{s}$ and the facial attributes $\mathbf{y}_{f}$, the face generator network $G_{f}$ aims to produce multi-scale outputs as follows
\begin{equation}\label{multi-scale Gx}
G_{f}(\mathbf{\hat{X}}_{s}; \mathbf{z}, \mathbf{y}_{f}) = \{\mathbf{\hat{\mathbf{x}}}^{1}_{f}, \mathbf{\hat{\mathbf{x}}}^{2}_{f}, \cdots \mathbf{\hat{\mathbf{x}}}^{m}_{f} \} \triangleq \mathbf{\hat{X}}_{f},
\end{equation}
where $\mathbf{z}$ is noise sampled from a normal Gaussian distribution and $\mathbf{\hat{X}}_{f}$ are the synthesized facial images with gradually growing resolutions. Similar to the sketch generation network, a set of distinct discriminators are designed for each scale. The overall objective is given as follows:
\begin{equation}\label{total min-max face generator}
G_{f}^{\star}, D_{f}^{\star} = \arg \min_{G_{f}} \max_{D_{f}} V(G_{f}, D_{f}, \mathbf{X}_{f}; \mathbf{\hat{X}}_{s}, \mathbf{y}_{f}, \mathbf{z}),
\end{equation}
where $D_{f}=\{\mathbb{D}_{1},\cdots, \mathbb{D}_{m} \}$ and $\mathbf{X}_{f}=\{\mathbf{\mathbf{x}}^{1}_{f}, \cdots, \mathbf{\mathbf{x}}^{m}_{f}\}$ denote real training images at multiple scales $1,\cdots, m$. In order to preserve the geometric structure of the synthesized sketch from the attribute-to-sketch stage, we adopt the skip-connection architecture from UNet related works \cite{di2017gp,di2018apgan,ronneberger2015u}. By using skip-connections, the feature maps from the encoding network are concatenated with the feature maps in the decoding network. This way, the geometric structure of the learned sketch image is inherited in the synthesized facial image. The proposed method is trained end-to-end. The lower-resolution outputs fully utilize the top-down knowledge from the discriminators at higher resolutions. Therefore, the synthesized images from different resolutions preserve the geometric structure, which improves the training stability and synthesis quality.
\begin{figure*}
\centering
\includegraphics[width=0.85\linewidth]{fig/Face_Generator_Network.pdf}
\caption{The architecture of Face Generator Network. The facial attributes are first embedded by the attribute augmentation module, similar to the one used in stage 1. The synthesized sketch image is also embedded by \textcolor{blue}{a sequence of down-sample convolutional layers}. These two feature maps are then fused by concatenation. Finally, the fused feature maps are used by the \textcolor{orange}{up-sample module} to synthesize multi-scale face images.}
\label{fig:facegeneratornetwork}
\end{figure*}
The architecture of the face generator network is shown in Fig.~\ref{fig:facegeneratornetwork}. The generator consists of four modules: the AA module, the down-sample module (DO), the UP module, and the STR module. As before, the STR module aims to convert the feature map into a 3-channel output image. It consists of two $1\times 1$ convolutional (conv) layers with one Tanh layer. The UP module consists of an up-sampling layer followed by conv-BN-ReLU layers, and an additional residual block \cite{he2016identity,he2016deep} is placed between consecutive UP modules. The DO module consists of a series of conv-BN-ReLU layers. The overall face generator network architecture consists of the following components
\noindent DO(64)-DO(128)-DO(256)-DO(512)-AA(512)-UP(512)-UP(256)-UP(128)-UP(64)-UP(32),
where the numbers in round brackets indicate the output channels of the feature maps. As shown in Fig.~\ref{fig:facegeneratornetwork}, the three stretching (STR) modules convert the feature maps into 3-channel output face images at different resolutions.
\noindent \textbf{Discriminator and Training Loss:} In the sketch-to-face stage, we use the same discriminator architecture as in stage 1. The input triplets now consist of facial images instead of sketch images, and the sketch attributes are replaced by the facial attributes. Furthermore, the training loss function is the same as the one used in the attribute-to-sketch stage.
\subsection{Testing}
Fig.~\ref{fig:framework} shows the testing phase of the proposed method. A sketch attribute vector $\mathbf{y}_{s}$ and $\mathbf{z}_{s}$ sampled from a normal Gaussian distribution are first passed through the sketch generator network $G_{s}$ to produce a sketch image. Then the synthesized sketch image with the highest resolution, attribute vector $\mathbf{y}_{f}$ and another noise vector $\mathbf{z}_{f}$ are passed through the face generator network to synthesize a face image. In other words, our method takes noise and attribute vectors as inputs and generates high-quality face images via sketch images.
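The two-stage test-time data flow described above can be summarized in code. The following is an illustrative stand-in, not the trained networks: the generator functions below only mimic the input/output contract of $G_{s}$ and $G_{f}$ (23 attributes, 100-dimensional noise, $64\times 64$ outputs), and the assumption that the 17 texture attributes occupy the first entries of the attribute vector is ours.

```python
import numpy as np

ATTR_DIM, NOISE_DIM, IMG = 23, 100, 64   # settings from the experiments

def sketch_generator(y_s, z_s):
    """Stand-in for G_s: texture attributes + noise -> 1-channel sketch."""
    rng = np.random.default_rng(int(abs(1000.0 * (y_s.sum() + z_s.sum()))))
    return rng.random((1, IMG, IMG))

def face_generator(sketch, y_f, z_f):
    """Stand-in for G_f: sketch + all attributes + noise -> 3-channel face."""
    rng = np.random.default_rng(int(abs(sketch.sum() + y_f.sum() + z_f.sum())))
    return rng.random((3, IMG, IMG))

def synthesize_face(y_s, y_f, rng):
    z_s = rng.standard_normal(NOISE_DIM)     # noise for the sketch stage
    sketch = sketch_generator(y_s, z_s)      # stage 1: attribute-to-sketch
    z_f = rng.standard_normal(NOISE_DIM)     # fresh noise for the face stage
    return face_generator(sketch, y_f, z_f)  # stage 2: sketch-to-face

rng = np.random.default_rng(0)
y_f = np.sign(rng.standard_normal(ATTR_DIM))   # +/-1 attribute vector
y_f[y_f == 0] = 1.0
y_s = y_f[:17]   # assumed: the 17 texture attributes come first
face = synthesize_face(y_s, y_f, rng)
```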
\section{Experimental Results} \label{sec:expt}
In this section, experimental settings and evaluation of
the proposed method are discussed in detail. Results are compared with several
related generative models: disCVAE \cite{yan2016attribute2image}, GAN-INT-CLS \cite{reed2016generative}, StackGAN \cite{zhang2016stackgan}, Attribute2Sketch2Face \cite{di2017face}, StackGAN++ \cite{zhang2017stackgan++} and HDGAN \cite{zhang2018photographic}. The entire network in Fig.~\ref{fig:framework} is trained end-to-end using PyTorch. During training, the learning rate for the generator and the discriminator in the first stage is set equal to $0.0002$, while the learning rate in the second stage is set equal to $0.0001$.
We conduct experiments using two publicly available datasets: CelebA \cite{liu2015faceattributes}, and deep funneled LFW \cite{Huang2012a}. The CelebA database contains about 202,599 face images, 10,177 different identities and 40 binary attributes for each face image. The deep funneled LFW database contains about 13,233 images, 5,749 different identities and 40 binary attributes for each face image which are from the LFWA dataset \cite{liu2015faceattributes}.
Note that the training part of our network requires original face images and the corresponding sketch images as well as the corresponding list of visual attributes. The CelebA and the deep funneled LFW datasets consist of both the original images and the corresponding attributes.
To generate the missing sketch images in the CelebA and the deep funneled LFW datasets, we use a public pencil-sketch synthesis method \footnote{http://www.askaswiss.com/2016/01/how-to-create-pencil-sketch-opencv-python.html} to generate the sketch images from the face images.
Fig.~\ref{fig:sketch_example} shows some sample generated sketch images from the CelebA and the deep funneled LFW datasets.
\begin{figure}[htp!]
\centering
\includegraphics[width=1\linewidth]{./fig/sketch_example}
\caption{Sketch images sampled from the LFW and the CelebA datasets are shown in row 1 and row 2 respectively.}
\label{fig:sketch_example}
\end{figure}
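The footnoted pencil-sketch recipe is a standard grayscale/invert/blur/dodge pipeline. A minimal NumPy-only sketch of it is shown below; the original tutorial uses OpenCV (a Gaussian blur followed by a divide-based dodge blend), and the separable box blur here is a simplification of the Gaussian blur.

```python
import numpy as np

def box_blur(img, k=21):
    """Cheap separable box blur standing in for the Gaussian blur
    used in the original OpenCV recipe."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, out)

def pencil_sketch(rgb):
    """Grayscale -> invert -> blur -> 'color dodge' blend."""
    gray = rgb.mean(axis=2)                           # grayscale, values in [0, 255]
    blurred = box_blur(255.0 - gray)                  # blurred negative
    sketch = gray * 255.0 / (255.0 - blurred + 1e-6)  # dodge: brighten by negative
    return np.clip(sketch, 0.0, 255.0)

face = np.random.default_rng(1).random((64, 64, 3)) * 255.0
sketch = pencil_sketch(face)
```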
The MTCNN method \cite{zhang2016joint} was used to detect and crop faces from the original images. The detected faces were scaled to the size of $64\times 64$.
Since many attributes from the original list of 40 attributes were not significantly informative, we selected the 23 most useful attributes for our problem. The selected attributes were further divided into 17 texture and 6 color attributes, as shown in Table~\ref{tab:fine-grained attributes}. During the experiments, the texture attributes were used to train the sketch generator network, while all 23 attributes were used to train the face generator network.
\begin{table}[htp!]
\centering
\caption{List of fine-grained texture and color attributes.}\label{tab:fine-grained attributes}
\begin{tabular}{|c|c|}
\hline Texture & \makecell{5\_o\_Clock\_Shadow, Arched\_Eyebrows, Bags\_Under\_Eyes, \\ Bald, Bangs, Big\_Lips, Big\_Nose, Bushy\_Eyebrows,\\ Chubby, Eyeglasses, Male, Mouth\_Slightly\_Open,\\ Narrow\_Eyes, No\_Beard, Oval\_Face, Smiling, Young} \\
\hline Color& \makecell{Black\_Hair, Blond\_Hair, Brown\_Hair, \\Gray\_Hair, Pale\_Skin, Rosy\_Cheeks} \\
\hline
\end{tabular}
\end{table}
\subsection{CelebA Dataset Results}
\begin{figure*}[htp!]
\centering
\includegraphics[width=0.85\linewidth]{fig/celeba_result}
\caption{Image generation results on the CelebA dataset. First row of each sub-figure shows the reference image and its corresponding attributes. The images generated by different methods are shown in different rows.}
\label{fig:CelebA_result}
\end{figure*}
The CelebA dataset \cite{liu2015faceattributes} consists of 162,770 training samples, 19,867 validation samples and 19,962 test samples. We combine the training and validation splits to train our models. After detection and alignment, we obtain 182,468 samples which we use for training our proposed model. During training, we use a batch size of 40. The ADAM algorithm \cite{adam_opt} with a learning rate of 0.0002 is used to train the network. In total, 20 epochs are used during training, and the initial learning rate is frozen for the first 10 epochs. For the next 10 epochs, the learning rate is decreased by 0.1 of its initial value after every epoch. The latent feature dimension for the sketch/facial attributes is set equal to 128. The noise vector dimension is set equal to 100. Three scales ($16 \times 16$, $32\times 32$, and $64\times 64$) are used in our multi-scale network.
Sample image generation results corresponding to different methods on the CelebA dataset are shown in Fig.~\ref{fig:CelebA_result}. For fair comparison with stage-wise training algorithms, we adapt the StackGAN \cite{zhang2016stackgan} network to two resolution scales, $32\times 32$ and $64\times 64$. Moreover, we adapt StackGAN++ \cite{zhang2017stackgan++} and HDGAN \cite{zhang2018photographic} to the same resolution scales: $16\times 16$, $32\times 32$, and $64\times 64$. Note that these results are obtained by inputting a certain attribute vector along with random noise. As can be seen from this figure, the GAN-INT-CLS and StackGAN methods suffer from mode collapse in this problem. During training, the generator learns to generate a limited number (1 or 2) of image samples corresponding to a certain list of attributes. These synthesized results are good enough to fool the discriminator, so the generator and discriminator networks do not get optimized properly. The disCVAE method is able to reconstruct the images without mode collapse, but they are blurry due to the imperfect $L_{2}$ measure in the Gaussian distribution loss. In addition, some of the attributes, such as the hair color, are difficult to see in the disCVAE reconstructions. This is because of the imperfect latent embedding caused by the variational bound limitation in the VAE. The Attribute2Sketch2Face method is also able to generate realistic results, but the image quality is slightly inferior. The recent state-of-the-art text-to-image synthesis approaches (StackGAN++ and HDGAN) generate plausible facial images from visual attributes. However, the generated facial images do not always preserve the corresponding attributes very well. Compared with all the baselines, the proposed method not only generates realistic facial images but also preserves the attributes better than the others.
We believe that this is mainly due to the way we approach the attribute-to-face synthesis problem by decomposing it into two sub-problems: attribute-to-sketch and sketch-to-face. By factoring the original problem into two separate problems, the model at each stage learns a better conditional data distribution. Furthermore, the use of multi-scale generators in the proposed GANs also helps in improving the performance of our method.
\subsection{LFW Dataset Results}
\begin{figure*}[htp!]
\centering
\includegraphics[width=0.85\linewidth]{fig/lfw_result.pdf}
\caption{Image generation results on the LFWA dataset. First row of each sub-figure shows the reference image and its corresponding attributes. The images generated by different methods are shown in different rows.}
\label{fig:LFW_result}
\end{figure*}
Images in the LFWA dataset come from the LFW dataset \cite{Huang2012a}, \cite{LFWTech}, and the corresponding attributes come from \cite{liu2015faceattributes}. This dataset contains the same 40 binary attributes as the CelebA dataset. After pre-processing, the training and testing subsets contain 6,263 and 6,880 samples, respectively. We use the entire training split to train our model. The ADAM algorithm \cite{adam_opt} with a learning rate of 0.0002 is used for both generators and discriminators. The initial learning rate is frozen for the first 100 epochs and is then decreased by 0.01 of its initial value after every epoch for the remaining 100 epochs. All the other experimental settings are the same as the ones used with the CelebA dataset.
Sample results corresponding to different methods on the LFWA dataset are shown in Fig.~\ref{fig:LFW_result}. For fair comparison, the multi-scale resolution settings are the same as the ones used in the experiments on the CelebA dataset. In particular, we use the $32\times 32$ and $64\times 64$ resolution scales for StackGAN \cite{zhang2016stackgan} training, and the $16\times 16$, $32\times 32$ and $64\times 64$ resolution scales for HDGAN \cite{zhang2018photographic} and StackGAN++ \cite{zhang2017stackgan++} as well as our proposed method. The disCVAE method produces reconstructions which are blurry. Previous conditional GAN-based approaches such as GAN-INT-CLS \cite{reed2016generative} and StackGAN \cite{zhang2016stackgan} also produce poor quality results due to mode collapse during training. The recent StackGAN++ and HDGAN methods generate plausible facial images (HDGAN achieves better color diversity). The earlier Attribute2Sketch2Face method, which is a combination of a CVAE and a GAN, is also able to generate facial images with the corresponding attributes. However, the proposed method is able to reconstruct high-quality attribute-preserving face images better than the previous approaches.
\subsection{CelebA-HQ Dataset Results}
In order to demonstrate how our proposed method works on high-resolution images, we also conduct an experiment on the recently proposed CelebA-HQ dataset. The CelebA-HQ dataset \cite{karras2017progressive} is a high-quality version of the CelebA dataset, which consists of 30,000 images at $1024\times 1024$ resolution. Due to GPU and memory limitations, we conduct experiments on $256\times 256$ resolution images and compare the performance with StackGAN++ \cite{zhang2017stackgan++} and HDGAN \cite{zhang2018photographic}. We chose these two baselines because of their ability to handle high-resolution images. Sample results are shown in Figure~\ref{fig:celebahq_comparison256}.
For fair comparison, we set the number of resolution scales $s=3$ for all methods. In order to adapt our method to this high-resolution dataset, we remove/add UP and DO blocks (as defined in Section~\ref{sec:method}) in the generator and the discriminator. In particular, we place the STR modules at resolutions $64 \times 64$, $128 \times 128$ and $256 \times 256$, respectively. In the experiments, the batch size for our proposed method is set equal to 16, which is smaller than the value of 24 used for StackGAN++ and HDGAN, due to GPU memory limitations. Also, when training on this dataset, we train the sketch generator first and then use the pre-trained model when training the face generator.
As can be seen from Figure~\ref{fig:celebahq_comparison256}, our proposed method can synthesize photo-realistic images at high resolution as well. Moreover, when we compare the attributes of the synthesized images with the given attributes, we observe that our method preserves the attributes better than the other methods. Quantitative comparisons in terms of the FID scores also show that the proposed method performs favorably compared to StackGAN++ and HDGAN. In addition, the comparison in Table~\ref{tab:fid celebahq} of our method with only a single scale shows the significance of our multi-scale network.
\begin{table}
\centering
\begin{tabular}{|c|c|}
\hline
Methods & FID score \\
\hline
HDGAN \cite{zhang2018photographic} & 114.912 \\
\hline
StackGAN++\cite{zhang2017stackgan++} & 35.988 \\
\hline
Single-scale (proposed method) & 37.381 \\
\hline
Proposed method & 30.566 \\
\hline
\end{tabular}
\caption{Quantitative results (FID scores) corresponding to different methods on the CelebA-HQ dataset.}
\label{tab:fid celebahq}
\end{table}
\begin{figure*}[!htb]
\centering
\begin{minipage}{0.95\textwidth}
\centering
\includegraphics[width=0.95\linewidth]{./fig/celebahq_comparison} \\
StackGAN++ \cite{zhang2017stackgan++} \hspace{35mm} HDGAN \cite{zhang2018photographic} \hspace{35mm} Proposed method
\caption{Image synthesis results at $256\times 256$ resolution. The attributes used to generate these images are: Eyeglasses -1 Male -1 Mouth\_Slightly\_Open -1 No\_Beard 1 Oval\_Face -1 Smiling -1 Young 1 Black\_Hair -1 Blond\_Hair -1 Brown\_Hair -1 Gray\_Hair -1 Pale\_Skin 1 Rosy\_Cheeks -1.}
\label{fig:celebahq_comparison256}
\end{minipage}
\end{figure*}
\subsection{Face Synthesis}
\begin{figure*}[htp!]
\centering
\includegraphics[width=0.85\linewidth]{./fig/CelebA_progression.pdf}
\caption{Progressive facial image synthesis on CelebA as attributes are changed. The progressions are obtained by manipulating one particular attribute while keeping the others frozen. (a) Male. (b) Smile. (c) Original skin tone to pale skin tone. (d) Original hair color to black hair color.}
\label{fig:CelebA_progression}
\end{figure*}
\begin{figure*}[htp!]
\centering
\includegraphics[width=0.85\linewidth]{./fig/random_noise.pdf}
\caption{Facial images synthesized when the attributes are kept frozen while the noise vector is changed. Note that the identity, pose, or facial shape changes as we vary the noise vector, but the attributes stay the same in the synthesized images.}
\label{fig:random_noise}
\end{figure*}
\begin{comment}
\begin{table*}[htp!]
\centering
\caption{Quantitative results corresponding to different methods.The Inception Score and Attribute $L_{2}$ measure are used to compare the performance of different methods.}
\label{tab: quantitative result}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
& \multicolumn{3}{c|}{LFW} & \multicolumn{3}{c|}{CelebA} \\ \hline
Baselines & FID Score & Inception Score & Attribute $L_{2}$ & FID Score & Inception Score & Attribute $L_{2}$ \\ \hline
GAN-INT-CLS \cite{reed2016generative} & $85.811$ &$1.510 \pm 0.020$&$0.093 \pm 0.027$& $92.793$ & $1.486 \pm 0.016$&$0.104 \pm 0.024$ \\ \hline
disCVAE \cite{yan2016attribute2image}& $103.855$ &$1.275 \pm 0.005$&$0.086\pm 0.019$& $91.012$ &$1.482 \pm 0.017$&$0.080\pm 0.042$\\ \hline
StackGAN \cite{zhang2016stackgan}& $70.379$ &$1.517 \pm 0.014$ &$0.085\pm 0.029$& $63.816$ &$1.589 \pm 0.018$&$0.091 \mp 0.021$ \\ \hline
StackGAN-v2\cite{zhang2017stackgan++}& $50.360$ &$2.011 \pm 0.027$& & $49.889$ &$2.105 \pm 0.019$& \\ \hline
HDGAN \cite{zhang2018photographic}& $48.930$ &$2.117 \pm 0.027$& & $43.206$ &$2.357 \pm 0.037$& \\ \hline
Attribute2Sketch2Face \cite{di2017face} & $60.487$ & $1.637 \pm 0.025$ &$0.059 \pm 0.034$& $58.896$ & $1.657 \pm 0.011$ &$0.067 \pm 0.022$\\ \hline
Attribute2Sketch2Face-v2 & $\mathbf{43.712}$ & $\mathbf{2.353 \pm 0.029}$ & & $\mathbf{33.497}$ & $\mathbf{2.515 \pm 0.017}$ & \\ \hline
\end{tabular}
\end{table*}
\end{comment}
\begin{table*}[htp!]
\centering
\caption{Quantitative results corresponding to different methods. The FID score and the Attribute $L_{2}$ measure are used to compare the performance of different methods.}
\label{tab: quantitative result}
\begin{tabular}{|c|c|c|c|c|}
\hline
& \multicolumn{2}{c|}{LFW} & \multicolumn{2}{c|}{CelebA} \\ \hline
Baselines & FID Score & Attribute $L_{2}$ & FID Score & Attribute $L_{2}$ \\ \hline
GAN-INT-CLS \cite{reed2016generative} & $85.811$ &$0.093 \pm 0.027$& $92.793$ &$0.104 \pm 0.024$ \\ \hline
disCVAE \cite{yan2016attribute2image}& $103.855$ &$0.086\pm 0.019$& $91.012$ &$0.080\pm 0.042$\\ \hline
StackGAN \cite{zhang2016stackgan}& $70.379$ &$0.085\pm 0.029$& $63.816$ &$0.091 \pm 0.021$ \\ \hline
StackGAN++\cite{zhang2017stackgan++}& $50.360$ & $0.059 \pm 0.026$ & $49.889$ & $0.061 \pm 0.026$ \\ \hline
HDGAN \cite{zhang2018photographic}& $48.930$ & $0.053 \pm 0.023$ & $43.206$ & $0.056 \pm 0.020$ \\ \hline
Attribute2Sketch2Face \cite{di2017face} & $60.487$ &$0.059 \pm 0.034$& $58.896$ & $0.067 \pm 0.022$\\ \hline
Attribute2Sketch2Face-v2 (proposed method) & $\mathbf{43.712}$ & $\mathbf{0.048 \pm 0.020}$ & $\mathbf{33.497}$ & $\mathbf{0.051 \pm 0.019}$ \\ \hline
\end{tabular}
\end{table*}
In this section, we show the image synthesis capability of our network by manipulating the input attribute and noise vectors. Note that the testing phase of our network takes an attribute vector and noise as inputs and produces a synthesized face as the output. In the first set of experiments with image synthesis, we keep the random noise vector frozen and change the weight of a particular attribute as follows: $[-1, -0.1, 0.1, 0.4, 0.7, 1]$. The corresponding results on the CelebA dataset are shown in Fig.~\ref{fig:CelebA_progression}. From this figure, we can see that when we give higher weights to a certain attribute, the corresponding appearance changes. For example, one can
synthesize an image with a different gender by changing the weights corresponding to the gender attribute, as shown in Fig.~\ref{fig:CelebA_progression}(a). Each row shows the progression of the gender change as the attribute weights are varied from -1 to 1 as described above. Similarly, figures (b), (c) and (d) show the synthesis results when a neutral face is transformed into a smiling face, the skin tone is changed to pale, and the hair color is changed to black, respectively. It is interesting to see that when attribute weights other than the gender attribute are changed, the identity of the person does not change much.
In the second set of experiments, we keep the input attribute vector frozen but change the noise vector by inputting different realizations of the standard Gaussian. Sample results corresponding to this experiment on the CelebA dataset are shown in Fig.~\ref{fig:random_noise}. Each column shows how the output changes as we change the noise vector. Different subjects are shown in different rows. It is interesting to note that, as we change the noise vector, the attributes stay the same while the identity changes. This can be clearly seen by comparing the synthesized results in each row.
\subsection{Quantitative Results}
In addition to the qualitative results presented in Figs.~\ref{fig:CelebA_result} and \ref{fig:LFW_result}, we present quantitative comparisons in Table~\ref{tab: quantitative result}. Since the ground-truth images corresponding to the noise-generated images are not available, we choose quantitative criteria based on the Fréchet Inception Distance (FID) \cite{heusel2017gans,lucic2018gans} and the Attribute $L_{2}$-norm.
The FID is a measure of similarity between two datasets of images. It was shown to correlate well with human judgment of visual quality and is most often used to evaluate the quality of samples generated by GANs. Attribute $L_{2}$-norm is used to compare the quality of attributes corresponding to different images. We extract the attributes from the synthesized images as well as the reference image using the MOON attribute prediction method \cite{rudd2016moon}. Once the attributes are extracted, we simply take the $L_{2}$-norm of the difference between the attributes as follows
\begin{equation}
\text{Attribute } L_{2}=\|\hat{a}_{ref}-\hat{a}_{synth}\|_{2},
\end{equation}
where $\hat{a}_{ref}$ and $\hat{a}_{synth}$ are the 23 extracted attributes from the reference image and the synthesized image, respectively. Note that lower values of the FID score and the Attribute $L_{2}$ measure imply better performance. The quantitative results corresponding to different methods on the CelebA and LFW datasets are shown in Table~\ref{tab: quantitative result}. Results are evaluated on the test split of the corresponding dataset, and the average performance along with the standard deviation is reported in Table~\ref{tab: quantitative result}.
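The Attribute $L_{2}$ computation above amounts to a Euclidean distance between two 23-dimensional score vectors. A minimal sketch follows; the MOON attribute predictor used in the paper is replaced here by plain vectors.

```python
import numpy as np

def attribute_l2(a_ref, a_synth):
    """Attribute L2 measure from the equation above: the Euclidean distance
    between the 23 attribute scores of the reference and the synthesized
    image (extracted by MOON in the paper; plain vectors here)."""
    a_ref = np.asarray(a_ref, dtype=float)
    a_synth = np.asarray(a_synth, dtype=float)
    assert a_ref.shape == a_synth.shape == (23,)
    return float(np.linalg.norm(a_ref - a_synth))

a_ref = np.random.default_rng(2).uniform(-1, 1, 23)
perfect = attribute_l2(a_ref, a_ref)   # identical attribute vectors give 0
```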
As can be seen from this table, the proposed method produces the lowest FID scores implying that the images generated by our method are more realistic than the ones generated by other methods. Furthermore, our method produces the lowest Attribute $L_{2}$ scores. This implies that our method is able to generate attribute-preserved images better than the other compared methods. This can be clearly seen by comparing the images synthesized by different methods in Fig.~\ref{fig:CelebA_result} and Fig.~\ref{fig:LFW_result}.
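For reference, the FID between two feature sets is computed by fitting a Gaussian to each set and evaluating the Fréchet distance $\|m_1-m_2\|^2+\operatorname{Tr}(\Sigma_1+\Sigma_2-2(\Sigma_1\Sigma_2)^{1/2})$. The NumPy-only sketch below uses random vectors as stand-ins for the Inception activations used in practice, and computes the matrix square root via eigendecomposition.

```python
import numpy as np

def _sqrtm_psd(a):
    """Matrix square root of a symmetric PSD matrix via eigendecomposition."""
    w, v = np.linalg.eigh(a)
    w = np.clip(w, 0.0, None)   # clip tiny negative eigenvalues from round-off
    return (v * np.sqrt(w)) @ v.T

def fid(feats1, feats2):
    """Frechet distance between Gaussians fitted to two (n, d) feature sets."""
    m1, m2 = feats1.mean(axis=0), feats2.mean(axis=0)
    s1 = np.cov(feats1, rowvar=False)
    s2 = np.cov(feats2, rowvar=False)
    rs1 = _sqrtm_psd(s1)
    covmean = _sqrtm_psd(rs1 @ s2 @ rs1)   # symmetrized form of sqrtm(s1 @ s2)
    diff = m1 - m2
    return float(diff @ diff + np.trace(s1) + np.trace(s2) - 2.0 * np.trace(covmean))

feats = np.random.default_rng(3).standard_normal((500, 8))
same = fid(feats, feats)            # identical sets: numerically zero
shifted = fid(feats, feats + 1.0)   # mean shift of 1 in 8 dims: about 8
```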
\section{Conclusion}\label{sec:con}
We presented a novel deep generative framework for reconstructing face images from visual attributes. Our method makes use of an intermediate sketch representation to generate photo-realistic images. The training part of our method consists of two models: the Sketch Generator Network and the Face Generator Network. Multi-scale hierarchical network architectures are proposed for each generator network. Extensive experiments on three publicly available datasets show the significance of the proposed synthesis framework. In addition, an ablation study was conducted to show the importance of different components of our network. The experiments showed that the proposed method is able to generate high-quality images and achieves significant improvements over state-of-the-art methods.
One of the limitations of this work is that the synthesized images do not preserve identity. In the future, we will develop methods that can synthesize identity-preserving images from visual attributes. Such images can then be used to augment datasets for face recognition \cite{wu2018light}, \cite{JC_WACV2016}.
\section*{Acknowledgment}
This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R\&D Contract No. 2019-19022600002. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.
\begin{IEEEbiography}
[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/XD_JHU_Photo.jpg}}]%
{Xing Di}
is a Ph.D. student in the Department of Electrical and Computer Engineering (ECE) at Johns Hopkins University. Prior to joining Hopkins, he was a Ph.D. student in the Department of ECE at Rutgers University. He completed his M.E. in Electrical Engineering at the Stevens Institute of Technology, Hoboken, NJ, in 2015. His current research interests include machine learning, computer vision and image processing with applications in biometrics. He received a Best Student Paper Award at IAPR ICPR 2018 and the ECE student development award at Rutgers. He also serves as a journal reviewer for IEEE-TIP and PR, and as a conference reviewer for BTAS and ICB.
\end{IEEEbiography}
\begin{IEEEbiography}
[{\includegraphics[width=1in,height=1.25in,clip,keepaspectratio]{./fig/VP_JHU_Photo.jpg}}]%
{Vishal M. Patel}
\text{[SM'15]} is an Assistant Professor in the Department of Electrical and Computer Engineering (ECE) at Johns Hopkins University. Prior to joining Hopkins, he was an A. Walter Tyson Assistant Professor in the Department of ECE at Rutgers University and a member of the research faculty at the University of Maryland Institute for Advanced Computer Studies (UMIACS). He completed his Ph.D. in Electrical Engineering from the University of Maryland, College Park, MD, in 2010. His current research interests include computer vision, image processing, and pattern recognition with applications in biometrics and imaging. He has received a number of awards including the 2016 ONR Young Investigator Award, the 2016 Jimmy Lin Award for Invention, A. Walter Tyson Assistant Professorship Award, Best Paper Award at IEEE AVSS 2017 \& 2019, Best Paper Award at IEEE BTAS 2015, Honorable Mention Paper Award at IAPR ICB 2018, two Best Student Paper Awards at IAPR ICPR 2018, and Best Poster Awards at BTAS 2015 and 2016. He is an Associate Editor of the IEEE Signal Processing Magazine, IEEE Biometrics Compendium, and serves on the Information Forensics and Security Technical Committee of the IEEE Signal Processing Society. He is a member of Eta Kappa Nu, Pi Mu Epsilon, and Phi Beta Kappa.
\end{IEEEbiography}
\bibliographystyle{IEEEtran}
\section{Introduction}
Given a finite Borel measure $\mu$, a standard way to quantify the density of $\mu$ at a given point $x$ in its support is through the \defn{local dimension}, which is the quantity
\begin{equation*}
\dim_{\loc}(\mu,x) = \lim_{r\to 0}\frac{\log \mu(B(x,r))}{\log r}
\end{equation*}
when the limit exists.
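As a concrete check of the definition (a standard example, not from this paper): for the $(\tfrac12,\tfrac12)$ Cantor measure $\mu$ and the point $x=0$, one has $\mu(B(0,3^{-n}))=2^{-n}$, since the ball meets the support in the leftmost level-$n$ construction interval. The ratio in the definition is therefore exactly $\log 2/\log 3$ for every $n$:

```python
import math

# For the standard (1/2, 1/2) Cantor measure mu, the ball B(0, 3^{-n})
# meets the support in the leftmost level-n construction interval, so
# mu(B(0, 3^{-n})) = 2^{-n} and the ratio below is log 2 / log 3 for all n.

def local_dim_ratio(n):
    r = 3.0 ** (-n)
    mass = 2.0 ** (-n)            # mu(B(0, r)) at x = 0
    return math.log(mass) / math.log(r)

est = local_dim_ratio(40)         # already equal to the limit log 2 / log 3
```

At other points of the support the ratio need not be constant in $r$, which is exactly why the level sets studied below are interesting.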
A natural question to ask is the following: what is the structure of the set of points which have a prescribed local dimension $\alpha$?
In many interesting cases, these level sets of local dimensions are uncountable and dense in $\supp \mu$, but have $\mu$-measure zero for most values of $\alpha$.
We will focus on the Hausdorff dimensions of these level sets of local dimensions, which we denote by
\begin{equation*}
f_\mu(\alpha)=\dim_H\{x\in\supp\mu:\dim_{\loc}(\mu,x)=\alpha\}.
\end{equation*}
The function $f_\mu$ is commonly known as the \defn{(fine Hausdorff) multifractal spectrum} of $\mu$.
Related to the multifractal spectrum is the \defn{$L^q$-spectrum} of the measure $\mu$, which is given by
\begin{equation*}
\tau_\mu(q)=\liminf_{r\to 0}\frac{\log\sup\sum_i\mu(B(x_i,r))^q}{\log r}
\end{equation*}
where the supremum is taken over all disjoint families of balls $\{B(x_i,r)\}_i$ with $x_i\in\supp \mu$.
A standard application of Hölder's inequality shows that $\tau_\mu$ is a concave function of $q$.
The $L^q$-spectrum is related to the multifractal spectrum through a heuristic relationship known as the \defn{multifractal formalism}.
It states that if the measure $\mu$ is ``sufficiently nice'', then the multifractal spectrum is the concave function given by
\begin{equation*}
f_\mu(\alpha)=\tau_\mu^*(\alpha)=\inf\{\alpha q-\tau_\mu(q):q\in\R\}.
\end{equation*}
One can think of the $L^q$-spectrum as a sort of box-counting dimension, whereas the multifractal spectrum is a generalization of the Hausdorff dimension.
Of course, the multifractal formalism does not hold in general: for example, in the presence of non-conformality, $f_\mu$ and $\tau_\mu$ can both be concave conjugate functions but $f_\mu(\alpha)<\tau_\mu^*(\alpha)$ for all $\alpha$ \cite{jr2011}.
In some sense, this is a consequence of the fact that the box and Hausdorff dimensions of non-conformal sets are, in general, not the same.
However, even when a measure is ``locally nice'', the multifractal formalism can fail: if $\mu_1$ and $\mu_2$ are probability measures with disjoint supports each satisfying the multifractal formalism and $\nu=(\mu_1+\mu_2)/2$, a straightforward exercise from the definitions shows that
\begin{equation}\label{e:min-formula}
\begin{aligned}
\tau_\nu(q)&=\min\{\tau_{\mu_1}(q),\tau_{\mu_2}(q)\}\\
f_\nu(q) &= \max\{f_{\mu_1}(q),f_{\mu_2}(q)\}.
\end{aligned}
\end{equation}
In particular, $\nu$ satisfies the multifractal formalism if and only if $\tau_{\mu_1}(q)\leq\tau_{\mu_2}(q)$ or $\tau_{\mu_2}(q)\leq\tau_{\mu_1}(q)$.
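The failure mechanism in \cref{e:min-formula} can be checked numerically. Below, two illustrative self-similar measures satisfying the open set condition (assumed translated so that their supports are disjoint; the $L^q$-spectra are translation invariant) have $L^q$-spectra crossing at $q=1$, and the concave conjugate of $\tau_\nu=\min\{\tau_{\mu_1},\tau_{\mu_2}\}$ strictly exceeds $f_\nu=\max\{f_{\mu_1},f_{\mu_2}\}$ at an intermediate $\alpha$:

```python
import numpy as np

LN3, LN4 = np.log(3.0), np.log(4.0)

def tau_two_maps(q, p, log_inv_r):
    """L^q-spectrum of a self-similar measure with two maps of common
    contraction ratio r and weights (p, 1-p) under the open set condition:
    tau(q) = -log(p^q + (1-p)^q) / log(1/r)."""
    return -np.log(p ** q + (1.0 - p) ** q) / log_inv_r

q = np.arange(-5.0, 5.0, 0.001)
tau_1 = tau_two_maps(q, 0.2, LN3)    # mu_1: ratio 1/3, weights (0.2, 0.8)
tau_2 = tau_two_maps(q, 0.05, LN4)   # mu_2: ratio 1/4, weights (0.05, 0.95)
tau_nu = np.minimum(tau_1, tau_2)    # L^q-spectrum of nu = (mu_1 + mu_2)/2

def conj(t, alpha):
    """Numerical concave conjugate inf_q (alpha*q - t(q)) over the q-grid."""
    return float(np.min(alpha * q - t))

alpha0 = 0.3
f_nu = max(conj(tau_1, alpha0), conj(tau_2, alpha0))  # = max{f_1, f_2}(alpha0)
gap = conj(tau_nu, alpha0) - f_nu   # > 0: nu fails the multifractal formalism
```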
Our main result states, for a certain class of conformal measures, that this phenomenon is the only way in which the multifractal formalism can fail.
We will focus on the multifractal analysis of self-similar measures in $\R$, which are defined as follows.
Given a finite set of maps $(S_i)_{i\in\mathcal{I}}$ where each $S_i:\R\to\R$ is given by $S_i(x)=r_i x+d_i$ where $0<|r_i|<1$ and probabilities $(p_i)_{i\in\mathcal{I}}$ with $p_i>0$ and $\sum p_i=1$, the self-similar measure $\mu$ is uniquely defined by
\begin{equation*}
\mu=\sum_{i\in\mathcal{I}}p_i\cdot S_i\mu
\end{equation*}
where $S_i\mu$ is the pushforward of $\mu$ by $S_i$.
Self-similar measures are ``locally nice'' by nature of their construction (indeed, they have equal box and Hausdorff dimensions \cite{fal1997}), so one might be more optimistic for nice multifractal properties.
For example, self-similar measures are exact-dimensional \cite{fh2009}, which means that there is precisely one value $\alpha$ for which the level set $\{x\in\supp\mu:\dim_{\loc}(\mu,x)=\alpha\}$ has full $\mu$-measure.
If there is an open set $U$ satisfying $\bigcup_{i\in\mathcal{I}}S_i(U)\subseteq U$ where the union is disjoint, we say that $\mu$ satisfies the \defn{open set condition} \cite{hut1981}.
For such measures, the $L^q$-spectrum is the unique smooth function satisfying $\sum_{i\in\mathcal{I}}p_i^qr_i^{-\tau_\mu(q)}=1$, and the multifractal formalism holds \cite{cm1992,pat1997}.
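The defining equation $\sum_i p_i^q r_i^{-\tau_\mu(q)}=1$ is easy to solve numerically, since the left-hand side is increasing in $\tau$. The sketch below uses an illustrative open set condition system (weights and ratios chosen for the example, not taken from any cited paper) and recovers $f_\mu=\tau_\mu^*$ on a grid; as a check, $\tau_\mu(1)=0$ and $\max_\alpha f_\mu(\alpha)=-\tau_\mu(0)=\dim_B\supp\mu$, which for ratios $(1/2,1/4)$ equals $\log_2\varphi$ with $\varphi$ the golden ratio.

```python
import numpy as np

P, R = (0.3, 0.7), (0.5, 0.25)   # illustrative OSC system (not from the paper)

def tau_osc(q):
    """Solve sum_i p_i^q r_i^(-tau) = 1 for tau by bisection; the left-hand
    side is increasing in tau because 0 < r_i < 1."""
    lo, hi = -50.0, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if sum(p ** q * r ** (-mid) for p, r in zip(P, R)) > 1.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

qs = np.arange(-10.0, 10.0, 0.02)
taus = np.array([tau_osc(q) for q in qs])

# Multifractal formalism: f(alpha) = tau^*(alpha) = inf_q (alpha*q - tau(q)).
alphas = np.arange(0.2, 1.4, 0.002)
f = np.min(alphas[:, None] * qs[None, :] - taus[None, :], axis=1)

box_dim = -tau_osc(0.0)   # max of the spectrum: here log_2(golden ratio)
```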
However, for self-similar measures with overlaps, the multifractal formalism can fail.
One of the earliest known examples of this fact is due to Hu and Lau \cite{hl2001}, where they show that the three-fold convolution of the Cantor measure has an isolated point in its set of local dimensions, and therefore fails the multifractal formalism.
This measure, and generalizations, have been studied in \cite{flw2005,hhn2018,lw2005,shm2005} among other papers.
Another class of well-studied measures are the Bernoulli convolutions, which is the law of the random variable $\sum_{n=0}^\infty\pm\lambda^n$ for $\lambda\in(0,1)$ where the $+$ and $-$ signs are chosen with equal probabilities.
In this case, for any parameter $\lambda\in(1/\phi,1)$ where $\phi$ is the Golden mean, the set of local dimensions has an isolated point \cite[Prop. 2.2]{hh2019}, and $\phi$ is maximal with this property.
Testud \cite{tes2006a} constructed self-similar measures associated with digit-like sets for which the multifractal spectrum is non-concave and the maximum of two non-trivial concave functions.
Thus behaviour similar to \cref{e:min-formula} can occur for self-similar measures with overlaps.
On the other hand, for Bernoulli convolutions with contraction ratio the reciprocal of a simple Pisot number (the unique positive root of a polynomial $x^k-x^{k-1}-\cdots-x-1$ for some $k\geq 2$), the multifractal formalism is known to hold \cite{fen2005}.
It is also shown in \cite{rut2021} that any self-similar measure associated with the IFS $\{\lambda_1 x,\lambda_2 x+\lambda_1(1-\lambda_2),\lambda_2 x+(1-\lambda_2)\}$ for $\lambda_1,\lambda_2>0$ and $\lambda_1+2\lambda_2-\lambda_1\lambda_2\leq 1$ satisfies the multifractal formalism.
The $L^q$-spectra of self-similar measures also have a certain amount of regularity: the limit defining $\tau_\mu(q)$ is known to exist for any $q\geq 0$ \cite{ps2000}.
We see that, even for self-similar measures, a wide variety of behaviour is possible.
Determining precisely when the multifractal formalism is satisfied, and more generally understanding properties of the $L^q$-spectrum and multifractal spectrum when it is not, is a very challenging question and little is known.
In this paper, we develop a general theory in an attempt to remedy this.
We will show for an important class of self-similar measures that the varied multifractal behaviour observed above follows from a decomposition similar in form to \cref{e:min-formula}.
More precisely, we show that the $L^q$-spectrum of $\mu$ is given by the minimum of a finite set of concave functions, and the multifractal spectrum of $\mu$ is given by the maximum of their concave conjugates.
These concave functions can be loosely interpreted as the $L^q$-spectra of a decomposition of $\mu$ as a sum of subadditive set functions, each satisfying a multifractal formalism.
By standard arguments involving concave functions, this shows that the multifractal formalism holds for $\mu$ in the following generic sense: $\mu$ satisfies the multifractal formalism if and only if $f_\mu$ is a concave function.
This is in stark contrast to measures associated with iterated function systems of non-conformal maps as discussed above.
\subsection{The weak separation condition and finite type conditions}
Many of the examples mentioned in the preceding section satisfy various closely-related finite type conditions \cite{fen2003,hhrtoappear,ln2007,nw2001}.
Heuristically, these finite type conditions require that there are only ``finitely many overlaps'' at any given scale.
These separation conditions are all special cases of the \defn{weak separation condition} of Lau and Ngai \cite{ln1999}, which states that there is a uniform bound on the number of simultaneous ``distinct overlaps'' (see \cref{e:wsc} for a precise statement).
Note that the weak separation condition is strictly weaker than the open set condition.
When the invariant set $\supp\mu$ is a closed interval, the generalized finite type condition coincides with the weak separation condition \cite{fen2016,hhrtoappear}.
It is an open question to determine, outside certain degenerate situations, if these two separation conditions are equivalent in general.
The multifractal analysis of measures satisfying the weak separation condition has since been studied extensively.
Such measures have enough structure to allow strong results, yet the class contains many interesting examples and exceptional behaviour.
The most significant general result to date, due to Feng and Lau \cite{fl2009}, states that for self-similar measures satisfying the weak separation condition, the multifractal formalism holds for any $q\geq 0$, and for $q<0$ there is an open set $U_0$ on which $\mu$ is sufficiently regular so that the $L^q$- and multifractal spectra restricted to $U_0$ satisfy the multifractal formalism.
However, the relatively open set $U_0\cap K$ is almost always a proper subset of $K$, so this result only gives a (somewhat coarse) lower bound for $f_\mu$.
The case for $q<0$ is more challenging to establish in general: indeed, we already saw for such self-similar measures that the multifractal formalism need not hold.
For measures satisfying the weak separation condition in $\R$, the author recently established general conditions based on connectivity properties of an associated graph for which the regularity on the set $U_0$ can be extended to the entire set $K$ \cite{rut2021}.
This can be applied to verify the multifractal formalism for all $q\in\R$ for certain examples such as those discussed in \cite[Prop. 4.3]{lw2004} or \cite[Ex. 8.5]{dn2017}.
Our work here vastly extends these results under a slightly more specialized hypothesis (detailed in \cref{d:pfnc}).
We will discuss our technical conditions and results in detail in the following section.
We are not aware of any IFS satisfying the weak separation condition for which the technical conditions do not hold.
\subsection{Main results and outline of the paper}
\subsubsection{Symbolic encoding and the transition graph}
In \cref{s:gt-construct}, we define a generalized version of the constructions in \cite{fen2003,hhs2018,rut2021} which provides a more cohesive perspective on the ``net interval'' constructions defined therein and simplifies the study of certain examples.
The construction is based on the idea of an \defn{iteration rule} $\Phi$ (see \cref{d:iter}), which describes how to define inductively a nested hierarchy of partitions $\{\mathcal{P}_n\}_{n=0}^\infty$ in a way which depends only on the local geometry of $K$ (see \cref{p:ttype}).
The end result is to construct a rooted directed graph $\mathcal{G}$, which we call the \defn{transition graph}.
The edges of the graph $\mathcal{G}$ are equipped with matrices $T(e)$, such that norms of products of matrices corresponding to finite paths beginning at the root vertex encode the measure $\mu$ on a rich family of subsets (this result is given in \cref{p:mat-mu}).
When the transition graph $\mathcal{G}$ is finite, we say that the IFS satisfies the \defn{finite neighbour condition with respect to $\Phi$}, or the $\Phi$-FNC for short (see \cref{d:pfnc}).
For the remainder of this paper, we will assume that this condition is satisfied.
We denote by $\Omega^\infty$ the set of infinite paths in $\mathcal{G}$ originating at the root vertex, which is equipped with an ``almost injective'' Lipschitz projection $\pi:\Omega^\infty\to K$.
The set $\Omega^\infty$ can be thought of as a ``symbolic'' analogue of $K$, where the weights $W(e)$ encode the metric structure of $K$ and the matrices $T(e)$ encode the self-similar measure $\mu$.
The overarching approach in this paper is to establish results in the space $\Omega^\infty$, and then using the projection $\pi$ obtain corresponding results about the multifractal analysis of the self-similar measure $\mu$.
The main technical challenge is that the map $\pi$ is not, in general, bi-Lipschitz.
The graph $\mathcal{G}$ need not be strongly connected.
We call the non-trivial connected components of $\mathcal{G}$ \defn{loop classes}, which we define fully in \cref{ss:irred}.
Since the tail of any infinite path is an infinite path in a loop class, we obtain a decomposition
\begin{equation*}
\Omega^\infty=\bigcup_{i=1}^m\Omega^\infty_{\mathcal{L}_i}
\end{equation*}
for appropriate sets $\Omega^\infty_{\mathcal{L}_i}$, where the union is disjoint.
This decomposition of $\mathcal{G}$ will correspond directly (outside certain degenerate situations) with the decomposition given in \cref{t:meas-split}.
For example, a graphic of a (hypothetical) transition graph is given in \cref{f:gen-tr-graph} and one can observe that there are 4 non-trivial strongly connected components $\mathcal{L}_i$ for $i=1,\ldots,4$.
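The loop classes are precisely the strongly connected components of $\mathcal{G}$ containing at least one edge, so they can be computed with standard graph algorithms. The following Python sketch illustrates this on a small hypothetical digraph (unrelated to the figure below): vertex $0$ is the root, the components $\{1,2\}$ and $\{3\}$ are loop classes, while $\{0\}$ and $\{4\}$ are trivial.

```python
# Hypothetical transition graph (adjacency lists); vertex 0 is the root.
# Loop classes = strongly connected components containing at least one edge,
# i.e. components with more than one vertex, or a vertex with a self-loop.
def scc(graph):
    """Tarjan's algorithm: returns the strongly connected components of graph."""
    index, low, stack, on_stack, comps = {}, {}, [], set(), []
    counter = [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:           # v is the root of a component
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            comps.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return comps

graph = {0: [1, 3], 1: [2], 2: [1], 3: [3, 4], 4: []}
loop_classes = [c for c in scc(graph)
                if len(c) > 1 or any(v in graph.get(v, []) for v in c)]
print(sorted(sorted(c) for c in loop_classes))    # [[1, 2], [3]]
```

Every infinite path from the root in this toy graph eventually stays in exactly one of the two loop classes, mirroring the decomposition of $\Omega^\infty$ above.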
\begin{figure}[ht]
\input{figures/example_graph}
\caption{A ``generic'' transition graph}
\label{f:gen-tr-graph}
\end{figure}
\subsubsection{Loop classes and the upper bounds}
There can be components $\mathcal{L}_i$ where the corresponding sets $\pi(\Omega_{\mathcal{L}_i})\subseteq K$ have measure 0 (in \cref{f:gen-tr-graph}, this is $\mathcal{L}_1$, $\mathcal{L}_2$, and $\mathcal{L}_3$).
However, even though the measure $\mu$ cannot be restricted to $\pi(\Omega_{\mathcal{L}_i})$ in a sensible way, the corresponding symbolic measure (which we denote by $\rho$) does restrict properly.
In \cref{ss:symb-defs}, we define symbolic analogues $\tau_{\mathcal{L}_i}$ of the $L^q$-spectrum and $f_{\mathcal{L}_i}$ of the multifractal spectrum for the loop classes $\mathcal{L}_i$.
These functions can be interpreted as $L^q$-spectra and multifractal spectra of some appropriate subadditive set functions defined on $\pi(\Omega_{\mathcal{L}_i})$.
In \cref{l:lq-upper-bound} and \cref{t:m-upper-bound}, we establish the following general upper bounds.
\begin{itheorem}\label{t:gen-upper}
Suppose $\mu$ is a self-similar measure satisfying the $\Phi$-FNC with loop classes $\mathcal{L}_1,\ldots,\mathcal{L}_m$ and corresponding symbolic $L^q$-spectra $\tau_{\mathcal{L}_1},\ldots,\tau_{\mathcal{L}_m}$ (abbreviated $\tau_1,\ldots,\tau_m$).
Then
\begin{align*}
f_\mu(\alpha)&\leq\max\{\tau_1^*(\alpha),\ldots,\tau_m^*(\alpha)\}, & \tau_\mu(q)&\leq\min\{\tau_1(q),\ldots,\tau_m(q)\}.
\end{align*}
\end{itheorem}
Unlike the general upper bound $f_\mu\leq\tau_\mu^*$ \cite[Thm. 4.1]{ln1999}, this upper bound for $f_\mu$ follows by an argument which depends sensitively on the existence of the local dimension in the definition of $f_\mu(\alpha)$.
The precise ideas here can be found in \cref{l:approx-reg} and the surrounding discussion.
Note that the upper bound given in \cref{t:gen-upper} is already a non-trivial improvement on the general bound $\tau_\mu^*$ when $\max\{\tau_1^*,\ldots,\tau_m^*\}$ is not a concave function.
Indeed, since $\tau_\mu(q)\leq\tau_i(q)$, we have $\tau_\mu^*(\alpha)\geq\tau_i^*(\alpha)$ so that
\begin{equation*}
\tau_\mu^*\geq\max\{\tau_1^*,\ldots,\tau_m^*\},
\end{equation*}
but $\tau_\mu^*$ is necessarily concave.
\subsubsection{Irreducibility, decomposability, and the lower bounds}
In order to establish the lower bounds, we require two main assumptions.
The first, which we call \defn{irreducibility}, can be interpreted as an internal connectivity property for the loop classes, and depends only on properties of the paths and transition matrices internal to some loop class $\mathcal{L}_i$ (see \cref{sss:irreducibility}).
This property was introduced and studied in \cite{fen2009}; as in that paper, this assumption is essential for establishing the symbolic multifractal formalism in \cref{t:multi-f}.
The irreducibility assumption is also important to resolve the fact that the projection $\pi$ is not, in general, bi-Lipschitz.
This technical result is given in \cref{t:reg-sub}.
While irreducibility formally depends on the choice of probabilities, in practice, every example of which the author is aware can be verified by the slightly stronger hypothesis of \cref{l:irred}, which does not depend on the choice of probabilities.
The second main assumption, which we call \defn{decomposability}, is a statement about the finite paths which do not have any edges in loop classes (see \cref{sss:decomposable}).
This property is closely related to the positive transition matrix assumption in \cite{hhstoappear}, and our proof of \cref{t:lq-lower-bound} largely follows the ideas in that document.
This assumption allows a product-like decomposition of $\Omega^\infty$ as $\Omega_{\mathcal{L}_1}^\infty\times\cdots\times\Omega_{\mathcal{L}_m}^\infty$ in a way which preserves the norms of matrices.
See \cref{e:Psi-def} for the precise statement and application of this idea.
We will also assume a simple non-degeneracy property (given in \cref{d:degen}).
Similar statements can be made assuming some loop classes are degenerate, but we omit this discussion for simplicity.
We then have the following result, proven in \cref{c:m-spectrum} and \cref{t:lq-lower-bound}.
\begin{itheorem}\label{t:meas-split}
Suppose $\mu$ is a self-similar measure satisfying the $\Phi$-FNC with loop classes $\mathcal{L}_1,\ldots,\mathcal{L}_m$ and corresponding symbolic $L^q$-spectra $\tau_{\mathcal{L}_1},\ldots,\tau_{\mathcal{L}_m}$.
Suppose each loop class is non-degenerate.
Then:
\begin{enumerate}[nl,r]
\item If the irreducibility assumption is satisfied,
\begin{equation*}
f_\mu(\alpha)=\max\{\tau_{\mathcal{L}_1}^*(\alpha),\ldots,\tau_{\mathcal{L}_m}^*(\alpha)\}.
\end{equation*}
\item If the decomposability assumption is satisfied, the limit defining $\tau_\mu(q)$ exists for every $q\in\R$.
Moreover,
\begin{equation*}
\tau_\mu(q)=\min\{\tau_{\mathcal{L}_1}(q),\ldots,\tau_{\mathcal{L}_m}(q)\}.
\end{equation*}
\end{enumerate}
\end{itheorem}
Outside the open set condition \cite{ap1996} and the case $q\geq 0$ \cite{ps2000}, there do not appear to be any general existence results for the limit $\tau_\mu(q)$ when $\mu$ is a self-similar measure.
Moreover, the author is not aware of any self-similar measure satisfying the weak separation condition which does not satisfy all the hypotheses in \cref{t:meas-split}.
To provide evidence for this claim, we observe that the hypotheses are satisfied for a number of examples (see \cref{s:multi-examples}).
However, verifying these conditions in general seems to be a very challenging question.
We can now use \cref{t:meas-split} to describe precisely when the multifractal formalism holds.
We say that $\mu$ satisfies the multifractal formalism at $\alpha$ if $f_\mu(\alpha)=\tau_\mu^*(\alpha)$.
Recall that the subdifferential $\partial\tau_{\mathcal{L}_i}(q)$ is the interval from the right derivative to the left derivative of $\tau_{\mathcal{L}_i}$ at $q$.
The following result is proven in \cref{c:multi-validity}.
\begin{icorollary}
Let $\mu$ satisfy the same hypotheses as \cref{t:meas-split}, along with the irreducibility and decomposability assumptions.
Then $\mu$ satisfies the multifractal formalism at $\alpha$ if and only if $\alpha\in\partial\tau_{\mathcal{L}_i}(q)$ for some $1\leq i\leq m$ and $q\in\R$ with $\min\{\tau_{\mathcal{L}_1}(q),\ldots,\tau_{\mathcal{L}_m}(q)\}=\tau_{\mathcal{L}_i}(q)$.
In particular, if the derivative $\alpha=\tau_\mu'(q)$ exists, then $\mu$ satisfies the multifractal formalism at $\alpha$.
\end{icorollary}
In other words, the multifractal formalism fails precisely on phase transitions (values of $\alpha$ corresponding to points of non-differentiability of the $L^q$-spectrum) caused by transitions in $\min\{\tau_{\mathcal{L}_1}(q),\ldots,\tau_{\mathcal{L}_m}(q)\}$ from some $\tau_{\mathcal{L}_i}(q)$ to $\tau_{\mathcal{L}_j}(q)$ for $i\neq j$.
This corollary is illustrated in \cref{f:phase-t} with two loop classes $\mathcal{L}_1$ and $\mathcal{L}_2$ such that $\tau_{\mathcal{L}_1}$ and $\tau_{\mathcal{L}_2}$ intersect.
For values of $\alpha$ corresponding to the phase transition of $\tau_\mu=\min\{\tau_{\mathcal{L}_1},\tau_{\mathcal{L}_2}\}$ at their intersection point $q_0$, we see that $\tau_\mu^*$ differs from $f_\mu=\max\{\tau_{\mathcal{L}_1}^*,\tau_{\mathcal{L}_2}^*\}$.
Here, the multifractal formalism is satisfied at $\alpha$ if and only if $\alpha\notin(\alpha_2,\alpha_1)$.
In fact, $\tau_\mu^*$ is the infimal concave function bounded below by $f_\mu$.
Thus the phase transitions which cause the multifractal formalism to fail are fundamentally linked to the connectivity properties of the transition graph.
For example, this provides a general explanation for the phenomenon observed by Testud \cite{tes2006a} for self-similar measures associated with digit-like sets (see \cref{ss:tes-ex}).
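The mechanism behind \cref{f:phase-t} is easy to reproduce numerically. In the following Python sketch, the two concave functions standing in for $\tau_{\mathcal{L}_1}$ and $\tau_{\mathcal{L}_2}$ are hypothetical (chosen for computational convenience; they do not come from an actual IFS): each $\tau(q)=aq-c\sqrt{1+q^2}$ has concave conjugate $\sqrt{c^2-(\alpha-a)^2}$. At a value $\alpha$ inside the gap created by the crossing, the conjugate of the minimum strictly exceeds the maximum of the conjugates, so the multifractal formalism fails there.

```python
import math

# Two hypothetical concave functions standing in for symbolic L^q-spectra.
# Each tau(q) = a q - c sqrt(1 + q^2) has conjugate sqrt(c^2 - (alpha - a)^2).
a1, c1, a2, c2 = 0.9, 0.8, 0.3, 0.5
qs = [i / 100 for i in range(-6000, 6001)]        # grid on [-60, 60]

def tau1(q): return a1 * q - c1 * math.sqrt(1 + q * q)
def tau2(q): return a2 * q - c2 * math.sqrt(1 + q * q)
def tau_min(q): return min(tau1(q), tau2(q))      # plays the role of tau_mu

def conjugate(tau, alpha):
    """Numerical concave conjugate inf_q (alpha q - tau(q)) over the grid."""
    return min(alpha * q - tau(q) for q in qs)

alpha = 0.25                                      # lies in the gap (alpha_2, alpha_1)
f_mu = max(conjugate(tau1, alpha), conjugate(tau2, alpha))
tau_mu_star = conjugate(tau_min, alpha)
print(round(f_mu, 3), round(tau_mu_star, 3))      # 0.497 0.549: formalism fails here
```

Here the two curves cross at $q_0=1/\sqrt{3}$, and the linear segment of $\tau_\mu^*$ coming from the subdifferential of $\min\{\tau_1,\tau_2\}$ at $q_0$ strictly dominates $\max\{\tau_1^*,\tau_2^*\}$ on the gap.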
\begin{figure}[htp]
\subfloat[$L^q$-spectra]{
\input{figures/lq_example}
}
\subfloat[Concave conjugates and multifractal spectra]{
\input{figures/phase_transition}
}
\caption{An example illustrating a non-trivial phase transition}
\label{f:phase-t}
\end{figure}
There can be phase transitions not of this form: for example, for the Bernoulli convolution associated with the Golden mean, $\tau_\mu=\tau_{\mathcal{L}}$ for a loop class $\mathcal{L}$ but $\tau_\mu(q)$ is not differentiable \cite{fen2005}.
Our results provide some explanation for the phenomenon of self-similar measures with non-differentiable $L^q$-spectra which still satisfy the multifractal formalism.
A \defn{simple} loop class is a loop class where the edges can be ordered to form a cycle which does not repeat vertices.
In \cref{f:gen-tr-graph}, the simple loop classes are given by $\mathcal{L}_1$ and $\mathcal{L}_2$.
As a straightforward application of \cref{t:meas-split} along with basic properties of concave functions, we obtain the following result.
The proof of this result can be found in \cref{ss:cons}.
\begin{icorollary}\label{c:all-simple}
Let $\mu$ satisfy the same hypotheses as \cref{t:meas-split}, along with the irreducibility and decomposability assumptions.
Then $\mu$ satisfies the multifractal formalism if and only if the multifractal spectrum is a concave function.
In particular, if every non-essential loop class is simple, this happens if and only if the set of local dimensions is a closed interval.
\end{icorollary}
\subsubsection{Applications and analysis of examples}
The hypotheses in \cref{c:all-simple} are satisfied in many well-known examples.
Here, we list some IFSs for which \cref{c:all-simple} applies, so that any associated self-similar measure satisfies the multifractal formalism if and only if the set of local dimensions is a closed interval:
\begin{itemize}
\item the family $\{\frac{x}{d}+\frac{j}{md}(d-1):j=0,1,\ldots,m\}$ with $m\geq d-1\geq 1$ integers, which includes the 3-fold convolution of the Cantor measure \cite{hl2001}, and is discussed in detail in \cite[Sec. 5]{hhn2018}.
\item Bernoulli convolutions with parameters that are reciprocals of simple Pisot numbers \cite{fen2005}, or reciprocals of the Pisot roots of the polynomials $x^3 - 2x^2 + x - 1$, $x^4-x^3- 2x^2+ 1$, and $x^4- 2x^3+ x - 1$ (see \cref{ss:bconv-Pisot}).
\item the IFS $\{\rho x,\rho^2 x+\rho-\rho^2,\rho^2x+1-\rho^2\}$ where $1/\rho$ is the Golden mean, considered in \cite[Sec. 5.3.3]{hr2021}.
\end{itemize}
By combining our results with the detailed study of sets of local dimensions contained in the references cited above, we obtain a number of new examples of measures satisfying the multifractal formalism which were not previously known in the literature.
Such results about the validity of the multifractal formalism were previously only known for Bernoulli convolutions associated with simple Pisot numbers \cite{fen2005}.
We refer the reader to \cite{hr2021} for details related to the computation of sets of local dimensions under similar assumptions to this paper.
To conclude this paper, we will provide a detailed study of some examples in \cref{s:multi-examples} to illustrate more concretely how our results may be applied in specific situations.
Our selection of examples does not attempt to be exhaustive, and the examples are primarily chosen to illustrate how our results explain the different multifractal phenomena exhibited by self-similar measures satisfying the weak separation condition.
In \cref{ss:tes-ex}, we study a family of self-similar measures associated with an IFS with maps of the form $x\mapsto x/\ell+i/\ell$ or $x\mapsto -x/\ell+(i+1)/\ell$ where $\ell\geq 2$ is an integer and $i\in\{0,1,\ldots,\ell-1\}$.
Such measures were first studied by Testud \cite{tes2006a}, where he provided some of the first known examples of self-similar measures which exhibit non-trivial non-concave spectra.
Our results extend and contextualize his results, since we do not require any assumptions on the digit sets.
This also extends results obtained by Olsen and Snigireva \cite{os2008} for such measures.
In \cref{ss:bconv-Pisot}, we provide a simple (given our general results) verification of the multifractal formalism for Bernoulli convolutions with parameters that are reciprocals of simple Pisot numbers.
This fact was first observed by Feng \cite{fen2005}.
Our technique is more general and depends only on establishing certain structural properties of the transition graph.
Our results also apply, for example, to the polynomials $x^3 - 2x^2 + x - 1$, $x^4-x^3- 2x^2+ 1$, and $x^4- 2x^3+ x - 1$.
Finally, in \cref{ss:non-e}, we verify the multifractal formalism for any self-similar measure associated with a class of IFS generalizing an example of Lau and Wang \cite{lw2004}, which is the IFS $\{\lambda_1 x, \lambda_2 x+\lambda_1(1-\lambda_2),\lambda_2 x+(1-\lambda_2)\}$.
The multifractal formalism for the self-similar measure studied by Lau and Wang was first verified by the author in a recent paper \cite{rut2021}.
We provide a simplified proof of this fact, which generalizes naturally to a family of related examples (which also includes \cite[Ex. 8.5]{dn2017}).
\subsubsection{Questions}
We conclude this section with three natural questions.
\begin{enumerate}[nl]
\item Are the hypotheses in \cref{t:meas-split} satisfied for every measure $\mu$ satisfying the weak separation condition?
Either a counterexample or a proof that no counterexample exists would be very interesting.
\item In what generality does a version of \cref{t:meas-split} hold?
Is the multifractal spectrum of any self-similar measure always the maximum of a finite set of concave functions?
\item If $\mu$ is a self-similar (or self-conformal) measure and $f_\mu$ is a concave function, is it necessarily true that $\mu$ satisfies the multifractal formalism?
\end{enumerate}
\subsection{Acknowledgements}
The author would like to thank Kathryn Hare for many extensive discussions concerning the topics in this paper.
The author also thanks Jonathan Fraser and Kenneth Falconer for detailed comments on a draft version of this paper, and more generally for helpful comments and suggestions.
\subsection{Notation}
Given a general set $X$, we denote by $\#X$ the cardinality of the set $X$.
We denote by $\R$ the set of reals equipped with the standard Euclidean metric.
All sets and functions considered in this document are Borel unless otherwise noted.
If $\mu$ is a Borel measure and $f$ a measurable function, we denote by $f\mu$ the push-forward of $\mu$ by $f$, which is given by the rule
\begin{equation*}
f\mu(E)=\mu(f^{-1}(E)).
\end{equation*}
Given a Borel set $E$, we write $E^\circ$ to denote the topological interior and $\diam(E)$ the diameter of $E$.
A map $f:\R\to\R$ is Lipschitz if there is a constant $C>0$ such that $|f(x)-f(y)|\leq C|x-y|$ for all $x,y\in\R$.
We say $f$ is bi-Lipschitz if $f$ is invertible with Lipschitz inverse, and a similarity if $|f(x)-f(y)|=C|x-y|$ for all $x,y\in\R$.
Given families $(a_i)_{i\in I}$ and $(b_i)_{i\in I}$ of non-negative real numbers, we write $a_i\preccurlyeq b_i$ if there exists some constant $C>0$ such that $a_i\leq Cb_i$ for each $i\in I$.
We say $a_i\asymp b_i$ if $a_i\preccurlyeq b_i$ and $b_i\preccurlyeq a_i$.
We will always allow such relationships to depend implicitly on the governing iterated function system (including the probabilities) and the transition rule $\Phi$.
Any other dependence, unless otherwise stated, will be indicated explicitly with a subscript.
\section{Some brief preliminaries}
\subsection{Weighted iterated function systems}
In our setting, a \defn{weighted iterated function system} (WIFS) is a tuple $(S_{i},p_i)_{i\in\mathcal{I}}$ where
\begin{equation} \label{e:ifs}
S_{i}(x)=r_{i}x+d_{i}:\mathbb{R}\rightarrow \mathbb{R}\text{ for each }i\in\mathcal{I}
\end{equation}
with $0<\left\vert r_{i}\right\vert <1$, so that each $S_i$ is a contracting similarity in $\R$, and the $p_i$ satisfy $p_i>0$ and $\sum p_i=1$.
We refer to the tuple $(S_i)_{i\in\mathcal{I}}$ simply as an \defn{iterated function system} (IFS).
There are two important invariant objects associated with a WIFS, both of which can be realized as the unique fixed point of a contraction mapping on an appropriate metric space.
The first is a non-empty, compact set $K$ satisfying
\begin{equation*}
K=\bigcup_{i\in\mathcal{I}}S_{i}(K),
\end{equation*}
known as the \defn{self-similar set} associated with the WIFS.
The second is a Borel probability measure $\mu$ satisfying
\begin{equation}\label{e:minv}
\mu(E)=\sum_{i\in\mathcal{I}}p_{i}\cdot S_i\mu(E)
\end{equation}
for any Borel set $E\subseteq K$, where $S_i\mu(E)=\mu(S_i^{-1}(E))$ is the pushforward of $\mu$ by $S_i$.
We say that $\mu$ is the \defn{self-similar measure} associated with the WIFS.
We refer the reader to the book of Falconer \cite{fal1997} for details concerning the existence and uniqueness of these objects.
Note that $\supp\mu=K$.
Throughout this document, we will assume that $K$ is not a singleton, so that $\mu$ is a non-atomic measure.
By conjugating the maps as necessary (which amounts to an appropriate translation of the $d_i$), we may assume that the convex hull of $K$ is $[0,1]$.
Let $\mathcal{I}^*=\bigcup_{n=0}^\infty\mathcal{I}^n$ denote the set of all finite tuples on $\mathcal{I}$.
Given $\sigma = (\sigma_1,\ldots,\sigma_n)\in\mathcal{I}^n$, we write
\begin{equation*}
S_\sigma = S_{\sigma_1}\circ\cdots\circ S_{\sigma_n},\qquad r_\sigma = r_{\sigma_1}\cdots r_{\sigma_n},
\end{equation*}
and
\begin{equation*}
p_\sigma = p_{\sigma_1}\cdots p_{\sigma_n}.
\end{equation*}
Abusing notation slightly, we denote the empty word (the unique word of length zero) by $\emptyset$ and write $S_\emptyset=\id$, $p_\emptyset=1$, and $r_\emptyset=1$.
Given another word $\tau=(\tau_1,\ldots,\tau_m)$, the \defn{concatenation} $\sigma\tau$ is the word $(\sigma_1,\ldots,\sigma_n,\tau_1,\ldots,\tau_m)$.
We say that a word $\sigma$ is a \defn{prefix} of $\tau$ if there exists some $\omega$ such that $\tau=\sigma\omega$.
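These conventions are easy to mechanize. The following Python sketch, written for the Cantor IFS $S_0(x)=x/3$, $S_1(x)=x/3+2/3$ with weights $p_0=p_1=1/2$ (a convenient concrete choice, not tied to the rest of the paper), stores a similarity $rx+d$ as the pair $(r,d)$; note that $S_\sigma$ is a composition of maps, while $r_\sigma$ and $p_\sigma$ are products of scalars.

```python
# Cantor IFS: S_0(x) = x/3, S_1(x) = x/3 + 2/3, weights p_0 = p_1 = 1/2.
# A similarity r x + d is stored as the pair (r, d); composition of maps
# corresponds to (r1, d1) * (r2, d2) = (r1 r2, r1 d2 + d1).
S = {0: (1 / 3, 0.0), 1: (1 / 3, 2 / 3)}
p = {0: 0.5, 1: 0.5}

def compose(f, g):
    (rf, df), (rg, dg) = f, g
    return (rf * rg, rf * dg + df)

def S_word(sigma):
    """S_sigma = S_{sigma_1} o ... o S_{sigma_n}; the empty word gives the identity."""
    f = (1.0, 0.0)
    for i in sigma:
        f = compose(f, S[i])
    return f

r_sigma, d_sigma = S_word((0, 1))
p_sigma = p[0] * p[1]                 # p_sigma is a product, not a composition
print(r_sigma, d_sigma, p_sigma)      # (1/9, 2/9, 1/4)
```

For example, $\sigma=(0,1)$ gives $S_\sigma(x)=x/9+2/9$, so $r_\sigma=1/9$ and $p_\sigma=1/4$.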
\subsection{Concave functions}
Let $f:\R\to\R\cup\{-\infty\}$ be a concave function.
The \defn{subdifferential} of $f$ at $x$ is given by
\begin{equation*}
\partial f(x)=\{\alpha: \alpha(y-x)+f(x)\geq f(y)\text{ for any }y\in\R\}.
\end{equation*}
Of course, if $f$ is differentiable at $x$, then $\partial f(x)=\{f'(x)\}$.
The \defn{concave conjugate} of $f$ is the function
\begin{equation*}
f^*(\alpha):=\inf\{\alpha x-f(x):x\in\R\}.
\end{equation*}
Naturally, the infimum may be $-\infty$.
Note that $f^*$ is always concave, and concave conjugation is involutive (i.e. $f^{**}=f$ when $f$ is a closed concave function).
We will use the fact that $f^*(\alpha)+f(x)=\alpha x$ whenever $\alpha\in\partial f(x)$.
We refer the reader to \cite{roc1970} for more detail and proofs of these facts.
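These identities are straightforward to verify numerically. The sketch below (a minimal illustration, using the concave function $f(x)=-x^2$ for which $f^*(\alpha)=-\alpha^2/4$) approximates the concave conjugate on a grid and checks the identity $f^*(\alpha)+f(x)=\alpha x$ for $\alpha\in\partial f(x)=\{-2x\}$.

```python
# Numerical check of f*(alpha) + f(x) = alpha x for alpha in the subdifferential,
# using the concave function f(x) = -x^2, for which f'(x) = -2x and
# f*(alpha) = -alpha^2 / 4.
xs = [i / 1000 for i in range(-10000, 10001)]     # grid on [-10, 10]

def f(x):
    return -x * x

def conj(g, alpha, grid):
    """Numerical concave conjugate inf_x (alpha x - g(x)) over the grid."""
    return min(alpha * x - g(x) for x in grid)

x0 = 1.5
alpha0 = -2 * x0                                  # the subdifferential at x0 is {-2 x0}
lhs = conj(f, alpha0, xs) + f(x0)
print(round(lhs, 6), round(alpha0 * x0, 6))       # -4.5 -4.5
```

The grid minimum is attained exactly at $x=1.5$, so the identity holds to machine precision here.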
\subsection{Local dimensions and multifractal analysis}\label{ss:mfa}
Let $\mu$ be a finite Borel measure in $\R$ with compact support.
\begin{definition}
Let $x\in \supp\mu$ be arbitrary.
Then the \defn{lower local dimension of $\mu$ at $x$} is given by
\begin{equation*}
\underline{\dim}_{\loc}(\mu,x)=\liminf_{t\to 0}\frac{\log \mu(B(x,t))}{\log t}
\end{equation*}
and the \defn{upper local dimension} $\overline{\dim}_{\loc}(\mu,x)$ is given similarly with the limit inferior replaced by the limit superior.
When the values of the upper and lower local dimension agree, we call the shared value the \defn{local dimension of $\mu$ at $x$}, denoted $\dim_{\loc}(\mu,x)$.
\end{definition}
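As a concrete example, the classical Cantor measure (the self-similar measure with $p_0=p_1=1/2$ for the IFS $\{x/3,\,x/3+2/3\}$) satisfies $\mu(B(0,3^{-n}))=2^{-n}$, so $\dim_{\loc}(\mu,0)=\log 2/\log 3$. The Python sketch below recovers this value from the self-similar recursion for the cumulative distribution function of $\mu$.

```python
import math

# The classical Cantor measure: p_0 = p_1 = 1/2 for the IFS {x/3, x/3 + 2/3}.
# Its cumulative distribution function F(t) = mu([0, t]) satisfies a
# self-similar recursion, which we use to estimate dim_loc(mu, 0).
def cdf(t, depth=80):
    """mu([0, t]) computed from the self-similar structure of mu."""
    if t <= 0 or depth == 0:
        return 0.0
    if t >= 1:
        return 1.0
    if t <= 1 / 3:
        return cdf(3 * t, depth - 1) / 2          # mass of [0, t] sits in S_0(K)
    if t < 2 / 3:
        return 0.5                                # the central gap carries no mass
    return 0.5 + cdf(3 * t - 2, depth - 1) / 2

t = 3.0 ** -25
estimate = math.log(cdf(t)) / math.log(t)         # mu(B(0, t)) = mu([0, t]) here
print(estimate)                                   # ~0.6309 = log 2 / log 3
```

Since $0$ is the leftmost point of the support, the ball $B(0,t)$ carries exactly the mass $\mu([0,t])$, which makes this estimate exact along the scales $t=3^{-n}$.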
We are primarily interested in understanding geometric properties of the level sets of local dimensions.
Define
\begin{equation*}
E_\mu(\alpha) =\bigl\{x\in \supp\mu:\underline{\dim}_{\loc}(\mu,x)=\overline{\dim}_{\loc}(\mu,x)=\alpha\bigr\}.
\end{equation*}
We will focus on the \defn{(fine Hausdorff) multifractal spectrum} of $\mu$, which is the function $f_\mu:\R\to\R\cup\{-\infty\}$ given by
\begin{equation*}
f_\mu(\alpha):=\dim_H E_\mu(\alpha)
\end{equation*}
where, by convention, we write $\dim_H \emptyset = -\infty$.
A different (but related) way to quantify the density of $\mu$ is through the $L^q$-spectrum of the measure.
\begin{definition}
The \defn{$L^q$-spectrum} of $\mu$ is given by
\begin{equation*}
\tau_\mu(q) = \liminf_{t\to 0}\frac{\log \sup \sum_i\mu(B(x_i,t))^q}{\log t}
\end{equation*}
where the supremum is over families of disjoint balls $\{B(x_i,t)\}_i$ centred at points $x_i\in\supp\mu$.
\end{definition}
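For the Cantor measure the limit is easy to compute: at scale $t=3^{-n}$ the $2^n$ level-$n$ cylinders each carry mass $2^{-n}$ and are $3^{-n}$-separated, and they realize the supremum up to constants, which yields $\tau_\mu(q)=(q-1)\log 2/\log 3$. The following sketch evaluates the covering sum directly.

```python
import math

# Sketch for the Cantor measure: at scale t = 3^{-n}, the 2^n level-n cylinders
# each carry mass 2^{-n} and are 3^{-n}-separated, so they realize the supremum
# in the definition of tau_mu up to constants, giving
# tau_mu(q) = (q - 1) log 2 / log 3.
def tau_estimate(q, n=40):
    t = 3.0 ** -n
    covering_sum = (2 ** n) * (2.0 ** -n) ** q    # sum_i mu(B(x_i, t))^q
    return math.log(covering_sum) / math.log(t)

for q in (-1.0, 0.0, 2.0):
    exact = (q - 1) * math.log(2) / math.log(3)
    print(q, round(tau_estimate(q), 4), round(exact, 4))
```

In this fully self-similar situation the estimate is independent of $n$, so no limit needs to be taken; in general the liminf in the definition is essential.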
Standard arguments show that the function $\tau_\mu$ is an increasing concave function of $q$.
Set
\begin{align*}
\alpha_{\min}(\mu) &= \lim_{q\to\infty}\frac{\tau_\mu(q)}{q} & \alpha_{\max}(\mu) &= \lim_{q\to-\infty}\frac{\tau_\mu(q)}{q}.
\end{align*}
When $\mu$ is a self-similar measure, it is known that $\alpha_{\min}$ and $\alpha_{\max}$ are finite real numbers (see, for example, \cite[Cor. 3.2]{fl2009}).
The multifractal formalism is a heuristic relationship introduced in \cite{hjk+1986} which relates the $L^q$-spectrum and the multifractal spectrum of $\mu$ under certain conditions.
\begin{definition}
Given $\alpha\in\R$, we say that the measure $\mu$ satisfies the \defn{multifractal formalism at $\alpha$} if
\begin{equation*}
f_\mu(\alpha)=\tau_\mu^*(\alpha)
\end{equation*}
where $\tau_\mu^*$ is the concave conjugate of $\tau_\mu$.
We say that $\mu$ satisfies the \defn{(complete) multifractal formalism} if $\mu$ satisfies the multifractal formalism at every $\alpha\in\R$.
\end{definition}
In particular, if $\mu$ satisfies the multifractal formalism, then $f_\mu$ is a concave function which takes finite values precisely on the interval $[\alpha_{\min}(\mu),\alpha_{\max}(\mu)]$.
It always holds that $f_\mu(\alpha)\leq\tau_\mu^*(\alpha)$ (see, for example, \cite[Thm. 4.1]{ln1999}) and if $E_\mu(\alpha)$ is non-empty, then $\alpha_{\min}\leq\alpha\leq\alpha_{\max}$ \cite[Cor. 3.2]{fl2009}.
However, as discussed in the introduction, the set of $\alpha$ where $E_\mu(\alpha)\neq\emptyset$ need not be a closed interval and, even if it is, the multifractal formalism need not hold \cite{tes2006a}.
\section{A generalized transition graph construction}\label{s:gt-construct}
Self-similar measures have a natural encoding as a projection of self-similar measures in sequence space.
Let $\mathcal{I}^\infty$ denote the set of all infinite sequences on the alphabet $\mathcal{I}$ equipped with the natural product metric.
Given a sequence $(i_n)_{n=1}^\infty\in\mathcal{I}^\infty$, define the projection $\pi_0:\mathcal{I}^\infty\to K$ by the rule
\begin{equation*}
\pi_0((i_n)_{n=1}^\infty)=\lim_{n\to\infty}S_{i_1}\circ\cdots\circ S_{i_n}(0).
\end{equation*}
When the compact sets $S_i(K)$ are disjoint for distinct $i\in\mathcal{I}$, the map $\pi_0$ is bi-Lipschitz.
In this case, the value of the measure $\mu$ has a simple formula for a rich family of subsets of $K$, namely
\begin{equation}\label{e:meas-p}
\mu\bigl(S_{i_1}\circ\cdots\circ S_{i_n}(K)\bigr)=p_{i_1}\cdots p_{i_n}.
\end{equation}
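For instance, for the Cantor IFS $S_0(x)=x/3$, $S_1(x)=x/3+2/3$ with $p_0=p_1=1/2$, the pieces $S_i(K)$ are disjoint and a $\mu$-random point is $\pi_0$ of an i.i.d.\ digit sequence, so \cref{e:meas-p} predicts $\mu(S_0S_1(K))=p_0p_1=1/4$. The following Monte Carlo sketch (an illustration only; the identity itself is exact) confirms this by sampling.

```python
import random

# Sketch for the Cantor IFS S_0(x) = x/3, S_1(x) = x/3 + 2/3 (p_0 = p_1 = 1/2):
# the pieces S_i(K) are disjoint, and a mu-random point is pi_0 of an i.i.d.
# digit sequence.  Since S_0 S_1([0,1]) = [2/9, 1/3] is disjoint from the other
# level-2 cylinders, the fraction of samples landing there estimates
# mu(S_0 S_1(K)) = p_0 p_1 = 1/4.
random.seed(1)
S = {0: (1 / 3, 0.0), 1: (1 / 3, 2 / 3)}

def sample_point(depth=40):
    x = 0.0                            # any seed point of [0, 1] works
    digits = [random.choice((0, 1)) for _ in range(depth)]
    for i in reversed(digits):         # apply S_{i_1} o ... o S_{i_depth}
        r, d = S[i]
        x = r * x + d
    return x

trials = 50_000
hits = sum(1 for _ in range(trials) if 2 / 9 <= sample_point() <= 1 / 3)
print(hits / trials)                   # close to 0.25
```

A point sampled this way lies in $[2/9,1/3]$ precisely when its first two digits are $(0,1)$, which happens with probability $p_0p_1$ exactly; the Monte Carlo error is of order $1/\sqrt{\text{trials}}$.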
However, when the measure $\mu$ has overlaps, such a simple formula no longer holds since the projection $\pi_0$ fails (in some situations quite badly) to be bi-Lipschitz.
A technique to overcome this limitation was first introduced by Feng \cite{fen2003} and extended in \cite{hhs2018,rut2021}.
In the subsequent sections, we will introduce a convenient framework which generalizes the prior net interval constructions; this will simplify the analysis of the examples in \cref{s:multi-examples}.
As this construction underlies all the results in this paper, we informally summarize the main ideas here.
Recall that the convex hull of $K$ is $[0,1]$.
In \cref{ss:gen-net-iv}, we inductively construct a nested sequence of partitions of $K$ with mesh size tending to 0, which we will denote by $(\mathcal{P}_n)_{n=0}^\infty$.
Here, a partition $\mathcal{P}_n$ is a finite collection of closed intervals $\{\Delta_1,\ldots,\Delta_\ell\}$ where $\Delta_i^\circ\cap\Delta_j^\circ=\emptyset$ for $i\neq j$, $\Delta_i^\circ\cap K\neq\emptyset$, and $K\subset\bigcup_{i=1}^\ell\Delta_i$.
We set $\mathcal{P}_0=\{[0,1]\}$.
We will associate to each $\Delta\in\mathcal{P}_n$ a \defn{neighbour set} $\vs(\Delta)$ (an ordered tuple of similarity maps from $\R$ to $\R$) such that each similarity map is a normalized version of some word $S_\sigma$ with $S_\sigma(K)\cap\Delta^\circ\neq\emptyset$.
In the sense of \cref{l:dk-max}, we also require that $\vs(\Delta)$ does not contain repetitions and satisfies a sort of maximality.
For a given $\Delta\in\mathcal{P}_n$, we want that the \defn{children} $\{\Delta':\Delta'\in\mathcal{P}_{n+1},\Delta'\subset\Delta\}$ depend uniquely on $\vs(\Delta)$, as made precise in \cref{p:ttype}.
The \defn{(basic) iteration rule} given in \cref{d:biter} and \cref{d:iter} underpins this inductive construction (we think of the domain of an iteration rule as the set of all possible neighbour sets of net intervals), and the technical hypotheses ensure that the various properties listed above are satisfied.
Now, in \cref{ss:tg-sr}, we construct a directed \defn{transition graph} $\mathcal{G}$ with root vertex $\vroot$ such that the finite paths in $\mathcal{G}$ of length $n$ are in bijection with the partitions $\mathcal{P}_n$ (see \cref{l:netiv-sr}).
We associate to the edges in $\mathcal{G}$ \defn{transition matrices} such that the $\mu$-measure of a net interval is the norm of the corresponding products of matrices (see \cref{p:mat-mu}).
Our main assumption from this point on will be that the graph $\mathcal{G}$ is finite; more details on this assumption are given in \cref{ss:fnc}.
While $\mathcal{G}$ is not, in general, strongly connected, we can enumerate the non-trivial maximal strongly connected components as $\{\mathcal{L}_1,\ldots,\mathcal{L}_m\}$ (we refer to these as \defn{loop classes}, as defined in \cref{d:loop-class}).
Denote the set of infinite paths in $\mathcal{G}$ beginning at $\vroot$ by $\Omega^\infty$.
Given an infinite path $(e_n)_{n=1}^\infty$ in $\Omega^\infty$, there is a unique loop class $\mathcal{L}_i$ such that for all $n$ sufficiently large, $e_n$ is an edge in $\mathcal{L}_i$.
The bijections from finite paths of length $n$ to $\mathcal{P}_n$ induce a Lipschitz surjection $\pi:\Omega^\infty\to K$ (note that the metric structure on $\Omega^\infty$ is defined in \cref{ss:symb-defs}, and depends on the edge weights).
The space $\Omega^\infty$ equipped with the projection $\pi$ is analogous to the space $\mathcal{I}^\infty$ along with the projection $\pi_0$.
Moreover, since the $\mathcal{P}_n$ are partitions of $K$, $\pi$ is nearly a bijection (it is injective on all but countably many points), and while $\pi$ need not be bi-Lipschitz, it is close to being so in a heuristic sense.
The main cost is that we must replace the products of scalars in \cref{e:meas-p} with norms of products of matrices.
This introduces additional technical challenges, which necessitate the assumptions of irreducibility and decomposability, which are discussed in more detail in \cref{ss:irred}.
\subsection{Partitions and net intervals}\label{ss:gen-net-iv}
We denote by $\Sim(\R)=\{f(x)=ax+b:a\in\R\setminus\{0\},b\in\R\}$ the set of similarity maps from $\R$ to $\R$, and equip $\Sim(\R)$ with the total order induced by the lexicographic order on the pairs $(a,b)$ (or any other fixed total order).
We then denote by $\Sim^*(\R)$ the set of finite tuples $(f_1,\ldots,f_m)$ where $f_1<\cdots<f_m$ and $m\in\N$ is arbitrary.
\begin{definition}\label{d:biter}
A \defn{basic iteration rule} is a map $\Phi$ which associates to each tuple $(f_1,\ldots,f_m)$ in $\Sim^*(\R)$ a tuple $(\mathcal{C}_1,\ldots,\mathcal{C}_m)$ where each $\mathcal{C}_i$ is a finite subset of $\mathcal{I}^*$ satisfying the following condition: for all $n\in\N$ sufficiently large, every $\sigma\in\mathcal{I}^n$ has a unique prefix in $\mathcal{C}_i$.
\end{definition}
A good example to keep in mind is the basic iteration rule $\Phi(f_1,\ldots,f_m)=(\mathcal{I},\ldots,\mathcal{I})$.
This example is discussed in more detail in \cref{ex:uniform-transition}.
Given a closed interval $J\subseteq\R$, we denote by $T_J$ the unique similarity $T_J(x)=rx+a$ with $r>0$ such that
\begin{equation*}
T_J([0,1])=J.
\end{equation*}
Of course, $r=\diam(J)$ and $a$ is the left endpoint of $J$.
Using the notion of a basic iteration rule, we can inductively construct a hierarchy of partitions of $K$ as follows.
First, suppose we are given a pair $(\Delta,v)$ where $\Delta=[a,b]$ is a closed interval and $v=(f_1,\ldots,f_m)\in\Sim^*(\R)$, and write $\Phi(v)=(\mathcal{C}_1,\ldots,\mathcal{C}_m)$.
Let
\begin{align}
\begin{split}\label{e:H-def}
\mathcal{Y} = \mathcal{Y}(\Delta,v) &= \bigcup_{i=1}^{m}\{T_\Delta\circ f_i\circ S_\tau:\tau\in\mathcal{C}_i\}\\
Y = Y(\Delta,v) &= \{a,b\}\cup\{g(z):g\in\mathcal{Y},z\in\{0,1\},g(z)\in\Delta\}
\end{split}
\end{align}
and write the elements of $Y$ as $a=y_1<\cdots<y_{k+1}=b$.
Order the intervals $\{[y_i,y_{i+1}]:(y_i,y_{i+1})\cap K\neq\emptyset\}$ from left to right as $(\Delta_1,\ldots,\Delta_n)$.
We then define the \defn{children} of $\Delta$ (with respect to $\Phi$) as the set of pairs $(\Delta_i,v_i)$ where $v_i$ is given by ordering the distinct elements of the set
\begin{equation}\label{e:ch-nb-def}
\{T_{\Delta_i}^{-1}\circ g: g\in\mathcal{Y}, g(K)\cap\Delta_i^\circ\neq\emptyset\}.
\end{equation}
If $\Delta_i=[a_i,b_i]$, the \defn{position index} is given by $q(\Delta_i,\Delta)=(a_i-a)/\diam(\Delta)$.
The position index is used to distinguish distinct children of $\Delta$ with the same neighbour set.
Now, using the above procedure, we can inductively construct our net intervals and neighbour sets.
Begin with $\mathcal{N}_0=\{([0,1],(\id))\}$.
Having constructed $\mathcal{N}_n$ for some $n\in\N\cup\{0\}$, we denote by $\mathcal{N}_{n+1}$ the set of all children of pairs $(\Delta,v)\in\mathcal{N}_n$, and let $\mathcal{N}=\bigcup_{n=0}^\infty\mathcal{N}_n$.
Set
\begin{equation*}
\mathcal{P}_n=\{\Delta:(\Delta,v)\in\mathcal{N}_n\},
\end{equation*}
which is the set of all net intervals at level $n$.
Since distinct net intervals in $\mathcal{P}_n$ overlap at most on endpoints, and for each $x\in K$ there is some $\Delta\in\mathcal{P}_n$ with $x\in\Delta$, one may think of $\mathcal{P}_n$ as a partition of $K$.
Given some $(\Delta,v)\in\mathcal{N}_n$, we say that $\Delta$ is a \defn{net interval} of level $n$, and that $v$ is the \defn{neighbour set} of $\Delta$.
We refer to a similarity $f\in v$ as a \defn{neighbour} of $\Delta$.
When the level $n$ is implicit, we write $\vs(\Delta)$ to denote the neighbour set $v$.
For an example computing the net intervals and neighbour sets, see \cref{ex:uniform-transition}.
We make two basic observations which follow immediately from the construction by an induction argument.
\begin{itemize}
\item Let $(\Delta,v)\in\mathcal{N}$ with $f\in v$.
Then $T_\Delta\circ f=S_\sigma$ for some $\sigma\in\mathcal{I}^*$.
\item If $[a,b]=\Delta\in\mathcal{P}_m$, there exists some $(\Delta_0,v)\in\mathcal{N}_k$ with $k\leq m$ and $f\in v$ such that $a=T_{\Delta_0}\circ f(z)$ for some $z\in\{0,1\}$.
The same statement also holds for $b$.
\end{itemize}
Here is a short example illustrating the net interval construction, along with these two observations.
\begin{example}\label{e:gen-ifs}
Consider the IFS given by the maps
\begin{align*}
S_1(x) &= \frac{x}{3} & S_2(x) &= \frac{x}{3}+\frac{2}{9} & S_3(x) &= \frac{x}{3}+\frac{2}{3}
\end{align*}
along with the basic iteration rule given by $\Phi(f_1,\ldots,f_m)=(\mathcal{I},\ldots,\mathcal{I})$.
By definition of $\Phi$, we have
\begin{equation*}
\mathcal{Y}([0,1],\{\id\})=\{T_{[0,1]}\circ \id\circ S_i:i\in\mathcal{I}\}=\{S_1,S_2,S_3\}
\end{equation*}
and, expanding the definition,
\begin{equation*}
Y([0,1],\{\id\})=\{S_i(z):i\in\mathcal{I},z\in\{0,1\}\}= \bigl\{0,\frac{2}{9},\frac{1}{3},\frac{5}{9},\frac{2}{3},1\bigr\}.
\end{equation*}
Note that $(5/9,2/3)\cap K=\emptyset$ (since $(5/9,2/3)\cap S_i([0,1])=\emptyset$ for each $i\in\mathcal{I}$), so $[0,1]$ has children $\Delta_1=[0,2/9]$, $\Delta_2=[2/9,1/3]$, $\Delta_3=[1/3,5/9]$, and $\Delta_4=[2/3,1]$.
These net intervals are depicted in \cref{f:ex-net-intervals} along with their positions relative to the intervals $S_i([0,1])$ for $i=1,2,3$.
For illustrative purposes, we also compute $\vs(\Delta_2)$.
Note that $T_{\Delta_2}(x)=x/9+2/9$, so that $T(x):=T_{\Delta_2}^{-1}(x)=9x-2$, and we have
\begin{align*}
\vs(\Delta_2)&=\{T_{\Delta_2}^{-1}\circ g:g\in\mathcal{Y},g(K)\cap\Delta_2^\circ\neq\emptyset\}=\{T\circ S_i:i=1,2\}\\
&= \{x\mapsto T(x/3), x\mapsto T(x/3+2/9)\}=\{x\mapsto 3x-2,x\mapsto 3x\}.
\end{align*}
If, furthermore, we wanted to compute the children of $\Delta_2$ in $\mathcal{P}_2$, we would begin by computing
\begin{align*}
\mathcal{Y}(\Delta_2,\vs(\Delta_2)) &= \{T_{\Delta_2}\circ f\circ S_i:i\in\mathcal{I},f\in\vs(\Delta_2)\}\\
&= \{S_i\circ S_j:i\in\{1,2\},j\in\mathcal{I}\}
\end{align*}
and then continuing as above.
\end{example}
\begin{figure}[ht]
\input{figures/net_interval_example}
\caption{Net intervals in $\mathcal{P}_1$ as described in \cref{e:gen-ifs}.}
\label{f:ex-net-intervals}
\end{figure}
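The computations in \cref{e:gen-ifs} are also easily checked by machine. The following Python sketch (an illustrative aside, not part of the formal development; all names are ad hoc) recomputes the level-one partition and the neighbour set $\vs(\Delta_2)$ for this IFS using exact rational arithmetic, deciding whether an open interval meets $K$ by testing against depth-$8$ cylinder endpoints, which suffices for the interval widths arising here.

```python
from fractions import Fraction as F
from itertools import product

# The IFS of the example: each similarity f(x) = p*x + q is stored as (p, q)
S = [(F(1, 3), F(0)), (F(1, 3), F(2, 9)), (F(1, 3), F(2, 3))]

def comp(f, g):
    # composition of affine maps: (f o g)(x) = f(g(x))
    return (f[0] * g[0], f[0] * g[1] + f[1])

def word(sigma):
    # S_sigma = S_{sigma_1} o ... o S_{sigma_n}
    f = (F(1), F(0))
    for i in sigma:
        f = comp(f, S[i])
    return f

# Endpoints of depth-8 cylinders: a finite subset of K dense enough to decide
# whether the (comparatively wide) open intervals below meet K
K_PTS = sorted({word(s)[1] for s in product(range(3), repeat=8)})

def meets_K(a, b):
    return any(a < y < b for y in K_PTS)

# Level-one endpoint set Y([0,1], (id)) and the resulting net intervals P_1
Y = sorted({f[0] * z + f[1] for f in S for z in (0, 1)})
P1 = [(Y[i], Y[i + 1]) for i in range(len(Y) - 1) if meets_K(Y[i], Y[i + 1])]

# Neighbour set of Delta_2 = [2/9, 1/3]: normalize those S_i for which
# S_i(K) meets the interior of Delta_2
a, b = F(2, 9), F(1, 3)
T_inv = (1 / (b - a), -a / (b - a))          # T_{Delta_2}^{-1}(x) = 9x - 2
vs2 = sorted(comp(T_inv, f) for f in S
             if meets_K((a - f[1]) / f[0], (b - f[1]) / f[0]))
```

Running this sketch reproduces the four net intervals of \cref{f:ex-net-intervals} and the neighbour set $\{x\mapsto 3x-2,\,x\mapsto 3x\}$ computed above.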
In order to avoid certain degenerate situations, we require two additional assumptions on the basic iteration rule $\Phi$.
\begin{definition}\label{d:iter}
Let $\Phi$ be a basic iteration rule.
We say that $\Phi$ is an \defn{iteration rule} if
\begin{enumerate}[nl,r]
\item $\displaystyle\lim_{n\to\infty}\max_{(\Delta,v)\in\mathcal{N}_n}\max_{\{x\mapsto rx+b\}\in v} |r| \diam(\Delta)=0$, and
\item if $(\Delta,v)\in\mathcal{N}$ and $f_1\neq f_2\in v$, then for any $\sigma\in\mathcal{I}^*$, we have $f_1\circ S_\sigma\neq f_2$.
\end{enumerate}
\end{definition}
Note that if $f\in\vs(\Delta)$ is any neighbour, then $f(x)=ax+b$ for some $a$ with $|a|\geq 1$.
Thus, (i) implies that the diameters of net intervals also tend uniformly to zero.
In fact, since $K\subseteq\bigcup_{\Delta\in\mathcal{P}_n}\Delta$ and the endpoints of each $\Delta$ are elements of $K$,
\begin{equation*}
\bigcap_{n=0}^\infty\bigcup_{\Delta\in\mathcal{P}_n}\Delta=K.
\end{equation*}
We now have the following basic lemma.
\begin{lemma}\label{l:dk-max}
Fix some pair $(\Delta,v)\in\mathcal{N}_n$.
Then for each $f\in v$, $f(K)\cap(0,1)\neq\emptyset$, and
\begin{equation}\label{e:dk}
\Delta^\circ\cap K=\Delta^\circ\cap \bigcup_{f\in v}T_\Delta\circ f(K).
\end{equation}
Moreover, if $\sigma\in\mathcal{I}^*$ is any word satisfying $S_\sigma(K)\cap\Delta^\circ\neq\emptyset$, there is a unique word $\tau$ such that $T_{\Delta}^{-1}\circ S_\tau\in v$ and either $\tau$ is a prefix of $\sigma$ or $\sigma$ is a prefix of $\tau$.
\end{lemma}
\begin{proof}
We prove \cref{e:dk} by induction on $n$.
The case $n=0$ is immediate, so now let $(\Delta,v)\in\mathcal{N}_n$ have parent $(\Delta',v')\in\mathcal{N}_{n-1}$.
Write $v'=(f_1,\ldots,f_m)$ and $\Phi(v')=(\mathcal{C}_1,\ldots,\mathcal{C}_m)$.
Note that by definition of $\Phi$, for each $i$,
\begin{equation*}
K=\bigcup_{\sigma\in\mathcal{C}_i}S_\sigma(K).
\end{equation*}
Thus by the inductive hypothesis,
\begin{equation*}
(\Delta')^\circ\cap K = (\Delta')^\circ\cap\bigcup_{i=1}^m\bigcup_{\sigma\in\mathcal{C}_i}T_{\Delta'}\circ f_i\circ S_\sigma(K).
\end{equation*}
But by construction, if $T_{\Delta'}\circ f_i\circ S_\sigma(K)\cap\Delta^\circ\neq\emptyset$, then $T_\Delta^{-1}\circ T_{\Delta'}\circ f_i\circ S_\sigma\in v$, so the result follows.
For the second part, the existence of the word $\tau$ follows by construction, and uniqueness follows from (ii) in \cref{d:iter}.
\end{proof}
We now have the following fundamental result, the proof of which is similar to \cite[Thm. 2.8]{rut2021}.
We include the main details and leave additional verification to the reader.
\begin{proposition}\label{p:ttype}
Let $(S_i)_{i\in\mathcal{I}}$ be an IFS with iteration rule $\Phi$.
Then for any $\Delta^{(1)}\in\mathcal{P}_{n_1}$ with children $(\Delta_1^{(1)},\ldots,\Delta_{m_1}^{(1)})$ and any $\Delta^{(2)}\in\mathcal{P}_{n_2}$ with $\vs(\Delta^{(1)})=\vs(\Delta^{(2)})$ and children $(\Delta_1^{(2)},\ldots,\Delta_{m_2}^{(2)})$, we have $m_1=m_2=:m$ and for each $1\leq i\leq m$,
\begin{enumerate}[nl,r]
\item $\vs(\Delta_i^{(1)})=\vs(\Delta_i^{(2)})$,
\item $q(\Delta_i^{(1)},\Delta^{(1)})=q(\Delta_i^{(2)},\Delta^{(2)})$, and
\item $\diam(\Delta_i^{(1)})/\diam(\Delta^{(1)})=\diam(\Delta_i^{(2)})/\diam(\Delta^{(2)})$.
\end{enumerate}
\end{proposition}
\begin{proof}
For each $j=1,2$, let $\mathcal{Y}_j,Y_j$ be the sets corresponding to the pair $(\Delta^{(j)},v)$, where $v=\vs(\Delta^{(1)})=\vs(\Delta^{(2)})$, as in \cref{e:H-def} in the definition of children.
Then with $\psi=T_{\Delta^{(1)}}\circ T_{\Delta^{(2)}}^{-1}$, we have $\mathcal{Y}_1=\{\psi\circ g:g\in\mathcal{Y}_2\}$ and $\psi(Y_2)=Y_1$.
Thus with the elements of $Y_2$ in order as $y_1<\cdots<y_{k+1}$, the elements of $Y_1$ are given in order as $\psi(y_1)<\cdots<\psi(y_{k+1})$.
Now since
\begin{equation*}
(\Delta^{(j)})^\circ\cap K = (\Delta^{(j)})^\circ\cap\bigcup_{f\in v} T_{\Delta^{(j)}}\circ f(K)
\end{equation*}
from \cref{l:dk-max}, it follows that $\psi:\Delta^{(2)}\cap K\to\Delta^{(1)}\cap K$ is a surjection so that $(y_i,y_{i+1})\cap K\neq\emptyset$ if and only if $(\psi(y_i),\psi(y_{i+1}))\cap K\neq\emptyset$.
Thus $m_1=m_2$.
From here, (i), (ii), and (iii) follow by direct computation.
\end{proof}
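\cref{p:ttype} can be observed concretely in the IFS of \cref{e:gen-ifs} under the uniform rule $\Phi(v)=(\mathcal{I},\ldots,\mathcal{I})$. The following Python sketch (illustrative only; helper names are ad hoc) searches for two distinct net intervals sharing a neighbour set and verifies that their children have identical position indices, contraction ratios, and neighbour sets.

```python
from fractions import Fraction as F
from itertools import product

S = [(F(1, 3), F(0)), (F(1, 3), F(2, 9)), (F(1, 3), F(2, 3))]

def comp(f, g):
    # composition of affine maps stored as (ratio, translation)
    return (f[0] * g[0], f[0] * g[1] + f[1])

def word(sigma):
    f = (F(1), F(0))
    for i in sigma:
        f = comp(f, S[i])
    return f

# depth-8 cylinder endpoints: dense enough in K for the intervals tested here
K_PTS = sorted({word(s)[1] for s in product(range(3), repeat=8)})

def meets_K(a, b):
    return any(a < y < b for y in K_PTS)

def children(delta, nbrs):
    # one step of the child construction under the uniform rule
    a, b = delta
    T = (b - a, a)                                      # T_Delta
    gs = [comp(comp(T, f), Si) for f in nbrs for Si in S]
    pts = sorted({a, b} | {g[0] * z + g[1] for g in gs for z in (0, 1)
                           if a <= g[0] * z + g[1] <= b})
    out = []
    for i in range(len(pts) - 1):
        c, d = pts[i], pts[i + 1]
        if not meets_K(c, d):
            continue
        T_inv = (1 / (d - c), -c / (d - c))
        v = sorted({comp(T_inv, g) for g in gs
                    if meets_K((c - g[1]) / g[0], (d - g[1]) / g[0])})
        out.append(((c, d), v))
    return out

def child_data(delta, v):
    # the data that p:ttype asserts depends only on vs(Delta):
    # (position index, contraction ratio, neighbour set) of each child
    a, b = delta
    return [((c - a) / (b - a), (d - c) / (b - a), w)
            for ((c, d), w) in children(delta, v)]

d1, v1 = children((F(0), F(1)), [(F(1), F(0))])[0]      # Delta = [0, 2/9]
same_vs = [(d, v) for (d, v) in children(d1, v1) if v == v1]
d2, v2 = same_vs[0]                                     # a smaller net interval
```

For this IFS, the net interval $[0,2/9]$ has a child with the same neighbour set, and the two intervals indeed produce matching child data, as \cref{p:ttype} predicts.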
\begin{remark}\label{r:pr-ch}
Sometimes, it can hold that $(\Delta,v)$ has a unique child $(\Delta',v')$ where $\Delta=\Delta'$.
For technical purposes, in order to avoid this degenerate situation, it is convenient to redefine the iteration rule $\Phi$ as follows.
Write $v=(f_1,\ldots,f_m)$ and $v'=(g_1,\ldots,g_k)$, and suppose $\Phi(v)=(\mathcal{C}_1,\ldots,\mathcal{C}_m)$ and $\Phi(v')=(\mathcal{C}_1',\dots,\mathcal{C}_k')$.
Since $\Delta=\Delta'$, for each $f_i$ and $\sigma\in\mathcal{C}_i$, either $f_i\circ S_\sigma(K)\cap\Delta^\circ=\emptyset$ or $f_i\circ S_\sigma = g_j$ for some $j$.
Now for each $1\leq i\leq m$ and $\sigma\in\mathcal{C}_i$, set
\begin{equation*}
\mathcal{U}_{i,\sigma}=\begin{cases}
\{\emptyset\} &: f_i\circ S_\sigma(K)\cap\Delta^\circ=\emptyset\\
\mathcal{C}_j &: f_i\circ S_\sigma=g_j
\end{cases}
\end{equation*}
and define
\begin{equation*}
\widetilde{\mathcal{C}}_i = \bigcup_{\sigma\in\mathcal{C}_i}\{\sigma\tau:\tau\in\mathcal{U}_{i,\sigma}\}.
\end{equation*}
Then define $\widetilde{\Phi}$ by $\widetilde{\Phi}(v)=(\widetilde{\mathcal{C}}_1,\ldots,\widetilde{\mathcal{C}}_m)$, and $\widetilde{\Phi}=\Phi$ otherwise.
It is straightforward to verify that $\widetilde{\Phi}$ is an iteration rule, and with this definition, the children of $(\Delta,v)$ with respect to $\widetilde{\Phi}$ are precisely the children of $(\Delta',v')$ with respect to $\Phi$.
Note that an infinite sequence of children where all the net intervals are identical is disallowed by (i) in \cref{d:iter}.
Repeating this construction, we may thus assume that each net interval $\Delta$ has at least two distinct children, and for any $\Delta\in\mathcal{P}$, there is a unique $n$ such that $\Delta\in\mathcal{P}_n$.
\end{remark}
We conclude with two examples explaining the relationship between our general net interval construction and earlier net interval constructions.
In practice, all iteration rules the author has used fall into these two classes.
\begin{example}\label{ex:uniform-transition}
As discussed, the rule
\begin{equation*}
\Phi(f_1,\ldots,f_m)=(\mathcal{I},\ldots,\mathcal{I})
\end{equation*}
always defines an iteration rule.
Here, the neighbour sets and net intervals can be described in a slightly different way.
Enumerate the points $\{S_\sigma(0),S_\sigma(1):\sigma\in\mathcal{I}^n\}$ in increasing order as $0=y_0<y_1<\cdots<y_{s(n)}= 1$.
We claim that
\begin{equation}\label{e:pn-formula}
\mathcal{P}_n=\{[y_i,y_{i+1}]:(y_i,y_{i+1})\cap K\neq\emptyset,0\leq i < s(n)\}
\end{equation}
and for a net interval $\Delta\in\mathcal{P}_n$,
\begin{equation}\label{e:vs-formula}
\vs(\Delta)=\{T_\Delta^{-1}\circ S_\sigma:\sigma\in\mathcal{I}^n,S_\sigma(K)\cap\Delta^\circ\neq\emptyset\}.
\end{equation}
Let us prove that this holds by induction.
When $n=0$, \cref{e:pn-formula} and \cref{e:vs-formula} both hold trivially.
Now suppose that \cref{e:pn-formula} and \cref{e:vs-formula} hold for some $n\in\N\cup\{0\}$, and let $\Delta=[a,b]\in\mathcal{P}_n$.
From the definition in \cref{e:H-def} along with \cref{e:pn-formula} and \cref{e:vs-formula}, we observe that
\begin{align*}
\mathcal{Y}&=\bigcup_{f\in\vs(\Delta)}\{T_\Delta\circ f\circ S_i:i\in\mathcal{I}\}\\
&= \bigcup_{\{\sigma\in\mathcal{I}^n:S_\sigma(K)\cap\Delta^\circ\neq\emptyset\}}\{S_{\sigma i}:i\in\mathcal{I}\}\\
&= \{S_{\sigma i}:\sigma\in\mathcal{I}^n,S_\sigma(K)\cap\Delta^\circ\neq\emptyset,i\in\mathcal{I}\}\\
&=\{S_\tau:\tau\in\mathcal{I}^{n+1},S_\tau(K)\cap\Delta^\circ\neq\emptyset\}
\end{align*}
and therefore
\begin{equation*}
Y = \{a,b\}\cup\{S_\tau(z):z\in\{0,1\},\tau\in\mathcal{I}^{n+1},S_\tau(z)\in\Delta\}.
\end{equation*}
Thus the children of $\Delta$ in $\mathcal{P}_{n+1}$ are precisely of the form given in \cref{e:pn-formula}, and if $\Delta_i$ is any child of $\Delta$, from the definition \cref{e:ch-nb-def} it has neighbour set
\begin{equation*}
\vs(\Delta_i)=\{T_{\Delta_i}^{-1}\circ g: g\in\mathcal{Y}, g(K)\cap\Delta_i^\circ\neq\emptyset\}=\{T_{\Delta_i}^{-1}\circ S_\tau:\tau\in\mathcal{I}^{n+1},S_\tau(K)\cap\Delta_i^\circ\neq\emptyset\}
\end{equation*}
and \cref{e:vs-formula} holds for $\Delta_i$.
Since any net interval $\Delta$ satisfies $\Delta^\circ\cap K\neq\emptyset$, every net interval in $\mathcal{P}_{n+1}$ must be given in this way.
Thus \cref{e:pn-formula} and \cref{e:vs-formula} hold for $\mathcal{P}_{n+1}$.
If each $S_i(x)=\lambda x+d_i$ for some fixed $0<\lambda<1$, the net intervals are the same as those considered by Feng \cite{fen2003}, and our definition of a neighbour set is closely related to the characteristic vector defined in that paper.
See \cite[Rem. 2.2]{rut2021} for more details on this relationship.
\end{example}
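For the IFS of \cref{e:gen-ifs}, the inductive argument above can be spot-checked numerically. The following Python sketch (an illustrative computation only, with ad hoc names) computes $\mathcal{P}_2$ both directly from \cref{e:pn-formula} and by iterating the child construction of \cref{ss:gen-net-iv}, and confirms that the two computations agree.

```python
from fractions import Fraction as F
from itertools import product

S = [(F(1, 3), F(0)), (F(1, 3), F(2, 9)), (F(1, 3), F(2, 3))]

def comp(f, g):
    # composition of affine maps stored as (ratio, translation)
    return (f[0] * g[0], f[0] * g[1] + f[1])

def word(sigma):
    f = (F(1), F(0))
    for i in sigma:
        f = comp(f, S[i])
    return f

# depth-8 cylinder endpoints; adequate since all intervals tested below
# have width at least 1/81, much larger than 3**-8
K_PTS = sorted({word(s)[1] for s in product(range(3), repeat=8)})

def meets_K(a, b):
    return any(a < y < b for y in K_PTS)

def split(points):
    Y = sorted(set(points))
    return [(Y[i], Y[i + 1]) for i in range(len(Y) - 1) if meets_K(Y[i], Y[i + 1])]

def P_direct(n):
    # the direct description of P_n from e:pn-formula
    return split([word(s)[0] * z + word(s)[1]
                  for s in product(range(3), repeat=n) for z in (0, 1)])

def children(delta, nbrs):
    # one step of the recursive construction under Phi(v) = (I, ..., I)
    a, b = delta
    T = (b - a, a)                                      # T_Delta
    gs = [comp(comp(T, f), Si) for f in nbrs for Si in S]
    pts = [a, b] + [g[0] * z + g[1] for g in gs for z in (0, 1)
                    if a <= g[0] * z + g[1] <= b]
    out = []
    for (c, d) in split(pts):
        T_inv = (1 / (d - c), -c / (d - c))
        v = sorted({comp(T_inv, g) for g in gs
                    if meets_K((c - g[1]) / g[0], (d - g[1]) / g[0])})
        out.append(((c, d), v))
    return out

root = ((F(0), F(1)), [(F(1), F(0))])
P2_recursive = sorted(child[0] for pair in children(*root)
                      for child in children(*pair))
```

In particular, both computations contain the net interval $[2/9,8/27]$, a child of $\Delta_2$ from \cref{e:gen-ifs}.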
\begin{example}\label{ex:weighted-transition}
Given a tuple of similarities $(f_1,\ldots,f_m)$ with each $f_i(x)=a_ix+b_i$, let $a=\max\{|a_i|:1\leq i\leq m\}$ and define
\begin{equation*}
\mathcal{C}_i=
\begin{cases}
\mathcal{I} &: |a_i|=a\\
\{\emptyset\} &: |a_i|<a
\end{cases}
\end{equation*}
where we recall that $\emptyset$ denotes the empty word.
Then the map $\Phi(f_1,\ldots,f_m)=(\mathcal{C}_1,\ldots,\mathcal{C}_m)$ defines an iteration rule which gives the net intervals and neighbour sets as defined in \cite[Sec. 2.2]{rut2021}.
Indeed, with this construction, the rule defining children of $\Delta$ described above coincides exactly with the notion of the child of a net interval from \cite[Sec. 2.3]{rut2021}.
\end{example}
\subsection{The transition graph and symbolic representations}\label{ss:tg-sr}
We begin by introducing some useful terminology from graph theory.
By a rooted graph $\mathcal{G}$, we mean a directed graph (possibly with loops and multiple edges) consisting of a set $V(\mathcal{G})$ of vertices with a distinguished vertex $\vroot\in V(\mathcal{G})$, and a set $E(\mathcal{G})$ of edges.
By an \defn{edge} $e$, we mean a triple $e=(v_1,v_2,q)$ where $v_1\in V(\mathcal{G})$ is the \defn{source}, $v_2\in V(\mathcal{G})$ is the \defn{target}, and $q$ is the \defn{label} of the edge $e$.
The point of the label is to distinguish multiple edges, but it is safe to imagine that the graph does not have multiple edges.
A \defn{finite path} in $\mathcal{G}$ is a sequence $\eta=(e_1,\ldots,e_n)$ of edges in $\mathcal{G}$ such that the target of each $e_i$ is the source of $e_{i+1}$.
We say that the \defn{length} of $\eta$ is $n$, and denote this by $|\eta|$.
A finite path is a \defn{cycle} if, in addition, the source of $e_1$ is the target of $e_n$.
A (one-way) infinite path is a sequence $(e_i)_{i=1}^\infty$ where the target of each $e_i$ is the source of $e_{i+1}$ for $i\in\N$.
Given paths $\eta_1=(e_1,\ldots,e_n)$ and $\eta_2=(e_{n+1},\ldots,e_{n+m})$, if the target of $e_n$ is the source of $e_{n+1}$, the \defn{concatenation} $\eta_1\eta_2$ is the path $(e_1,\ldots,e_{n+m})$.
When it is convenient, we will abuse notation and treat edges as paths of length $1$.
We say that a (finite or infinite) path is \defn{rooted} if it begins at the root vertex $\vroot$, and we denote by $\Omega^\infty$ (resp. $\Omega^*$) the set of all infinite (resp. finite) rooted paths.
For any $n\in\N\cup\{0\}$, $\Omega^n$ denotes the set of all rooted paths of length $n$.
We say that $\eta_1$ is a \defn{prefix} of $\eta$ in $\Omega^*$ (resp. $\Omega^\infty$) if $\eta=\eta_1\eta'$ for some finite (resp. infinite) path $\eta'$.
Given a path $\gamma=(e_i)_{i=1}^\infty\in\Omega^\infty$, we denote the unique prefix of $\gamma$ in $\Omega^n$ by $\gamma|n=(e_1,\ldots,e_n)$.
We now define the main object under consideration in this document.
Fix a WIFS $(S_i,p_i)_{i\in\mathcal{I}}$ along with an iteration rule $\Phi$.
Then the \defn{transition graph} $\mathcal{G}=\mathcal{G}\bigl((S_i,p_i)_{i\in\mathcal{I}},\Phi\bigr)$ is a rooted graph defined as follows.
The vertex set of $\mathcal{G}$ is the set of all neighbour sets $\{v:(\Delta,v)\in\mathcal{N}\}$ with root vertex $\vroot=(\id)$ corresponding to the net interval $[0,1]$.
Now whenever $(\Delta,v)$ has child $(\Delta',v')$, we introduce an edge $(v,v',q(\Delta',\Delta))$, where the position index $q(\Delta',\Delta)$ is the label distinguishing multiple edges between the vertices $v$ and $v'$.
This construction is well-defined by \cref{p:ttype}.
Given a vertex $v\in V(\mathcal{G})$, which is a neighbour set $v=(f_1,\ldots,f_m)$, we write $d(v)=m$.
For the remainder of this document, the set $\Omega^\infty$ (and $\Omega^*$, $\Omega^n$) will always be associated with the transition graph $\mathcal{G}$.
Now given a path $\eta=(e_1,\ldots,e_n)\in\Omega^n$, there is a unique sequence of pairs $(\Delta_i,v_i)_{i=0}^n$ with $\Delta_0=[0,1]$ where each $(\Delta_i,v_i)\in\mathcal{N}_i$, $\Delta_i$ is a child of $\Delta_{i-1}$, and
\begin{equation*}
e_i=\bigl(v_{i-1},v_i,q(\Delta_i,\Delta_{i-1})\bigr).
\end{equation*}
This follows directly by construction of the edge set of the transition graph $\mathcal{G}$.
Since net intervals either coincide or overlap only on endpoints, this path is uniquely determined by the last net interval in the sequence.
Thus we may define a map $\pi:\Omega^n\to\mathcal{P}_n$ by $\pi(\eta)=\Delta_n$.
\begin{lemma}\label{l:netiv-sr}
The map $\pi:\Omega^n\to\mathcal{P}_n$ is a well-defined bijection for each $n\in\N\cup\{0\}$.
\end{lemma}
Given a net interval $\Delta\in\mathcal{P}_n$, the \defn{symbolic representation} of $\Delta$ is the path $\pi^{-1}(\Delta)\in\Omega^n$.
Now given an infinite path $\gamma=(e_i)_{i=1}^\infty\in\Omega^\infty$, there corresponds a sequence of net intervals $(\Delta_i)_{i=0}^\infty$ with $\Delta_i\in\mathcal{P}_i$ where $\Delta_0=[0,1]$ and $\Delta_{i+1}$ is the child of $\Delta_i$ corresponding to the edge $e_{i+1}$.
Of course, $\Delta_n=\pi(\gamma|n)$.
Since $\lim_{i\to\infty}\diam(\Delta_i)=0$, there exists a unique point in $K$, which we call $\pi(\gamma)$, satisfying
\begin{equation*}
\{\pi(\gamma)\}=\bigcap_{i=1}^\infty\Delta_i.
\end{equation*}
In analogy to the net interval case, we refer to a path $\gamma\in\pi^{-1}(x)$ as a \defn{symbolic representation} of $x$.
It is clear by construction of net intervals that the map $\pi:\Omega^\infty\to K$ is surjective.
Note that $\pi$ need not be injective, but if $x\in K$ has fibre $\pi^{-1}(x)$ with cardinality greater than 1, then $x$ must be an endpoint of some net interval $\Delta$.
In this situation, $\pi^{-1}(x)$ contains two paths.
Since there are only countably many net intervals, $\pi$ is injective on all but at most countably many paths.
We say that $x$ is an \defn{interior point} of $K$ if $\pi^{-1}(x)$ has cardinality 1.
\subsection{Edge weights and transition matrices}
For our purposes, perhaps the two most important attributes of a net interval $\Delta$ are its diameter $\diam(\Delta)$ and measure $\mu(\Delta)$.
Moreover, recall that we have a correspondence $\pi:\Omega^n\to\mathcal{P}_n$ taking rooted paths in the transition graph to net intervals in $\R$.
Through this correspondence, we get the corresponding ``symbolic diameter'' $\diam\circ\pi$ and ``symbolic measure'' $\mu\circ\pi$ defined on the set of rooted finite paths $\Omega^*$.
In this section, we define two natural objects taking values on $E(\mathcal{G})$ which allow us to encode the functions $\diam\circ\pi$ and $\mu\circ\pi$ respectively in a way intrinsic to the transition graph.
We first describe $\diam\circ\pi$ as a product of weights on edges.
\begin{definition}\label{d:edge-weight}
The \defn{edge weight function} for $\mathcal{G}$ is the map $W:E(\mathcal{G})\to(0,1)$ such that if the edge $e$ corresponds to the child $\Delta'\subseteq\Delta$, then $W(e)=\diam(\Delta')/\diam(\Delta)$.
Given a path $\eta=(e_1,\ldots,e_n)$, we write $W(\eta)=W(e_1)\cdots W(e_n)$.
\end{definition}
Note that the edge weight is well-defined by \cref{p:ttype} (see also \cref{r:pr-ch}).
Of course, when $\Delta\in\mathcal{P}_n$ has symbolic representation $\eta=(e_i)_{i=1}^n$,
\begin{equation*}
\diam(\Delta)=\diam(\pi(\eta))=W(e_1)\cdots W(e_n)=W(\eta),
\end{equation*}
so that $\diam\circ\pi=W$.
We now describe $\mu\circ\pi$ as the norm of products of matrices associated with edges.
Let $e\in E(\mathcal{G})$ be an edge corresponding to a pair $(\Delta_1,(f_1,\ldots,f_m))$ with child $(\Delta_2,(g_1,\ldots,g_k))$, and let $\Phi(f_1,\ldots,f_m)=(\mathcal{C}_1,\ldots,\mathcal{C}_m)$ where $\Phi$ is the iteration rule.
For each $(i,j)\in\{1,\ldots,m\}\times\{1,\ldots,k\}$, set
\begin{equation*}
\mathcal{E}_{i,j}=\{\omega\in\mathcal{C}_i:T_{\Delta_1}\circ f_i\circ S_\omega=T_{\Delta_2}\circ g_j\}.
\end{equation*}
Then the \defn{transition matrix} is the $m\times k$ matrix $T(e)$ given by
\begin{equation}\label{e:tr-mat}
T(e)_{i,j}=\frac{g_j\mu((0,1))}{f_i\mu((0,1))}\cdot\sum_{\omega\in\mathcal{E}_{i,j}}p_\omega
\end{equation}
where we recall that $f\mu$ is the pushforward of $\mu$ by the function $f$.
It is clear that the transition matrix depends only on the edge $e$.
We note the following important observations.
\begin{lemma}\label{l:pos-col-row}
If $e\in E(\mathcal{G})$ is any edge, then $T(e)$ has a positive entry in each column.
Moreover, for any $v\in V(\mathcal{G})$ and $1\leq i\leq d(v)$, there is an edge $e$ with source $v$ such that $T(e)$ has a positive entry in row $i$.
\end{lemma}
\begin{proof}
Since each neighbour $g_j$ is of the form $T_{\Delta_2}^{-1}\circ T_{\Delta_1}\circ f_i\circ S_\sigma$ for some $i$ and (possibly empty) word $\sigma$, each column of $T(e)$ has a positive entry.
To see the second part, let $\Delta$ be a net interval with $\vs(\Delta)=v$.
Since $\Delta^\circ\cap T_\Delta\circ f_i(K)\neq\emptyset$ for each $1\leq i\leq d(v)$, there is some child $\Delta'\subseteq\Delta$ such that $(\Delta')^\circ\cap T_\Delta\circ f_i(K)\neq\emptyset$.
Then if $e$ is the edge corresponding to $\Delta'\subseteq\Delta$, $T(e)$ has a positive entry in row $i$ by \cref{l:dk-max} and the definition of the transition matrix.
\end{proof}
Given a path $\eta=(e_1,\ldots,e_n)$, we write $T(\eta)=T(e_1)\cdots T(e_n)$.
We write $\norm{T(\eta)}=\sum_{i,j}T(\eta)_{i,j}$ to denote the matrix $1$-norm.
Now fix a pair $(\Delta,(f_1,\ldots,f_m))\in\mathcal{N}_n$ and let $\muv(\Delta)=(q_1,\ldots,q_m)$ where
\begin{equation}\label{e:qi-formula}
q_i=f_i\mu((0,1))\sum_{\substack{\sigma\in\mathcal{I}^*\\S_\sigma=T_\Delta\circ f_i}}p_\sigma.
\end{equation}
Using the self-similarity relation of $\mu$, the definition of the iteration rule $\Phi$, and condition (ii) in \cref{d:iter}, one can verify that
\begin{equation*}
\mu(\Delta)=\norm{\muv(\Delta)}.
\end{equation*}
Now a similar argument as the proof of \cite[Thm. 2.12]{rut2021} gives the following result:
\begin{proposition}\label{p:mat-mu}
Let $(S_i)_{i\in\mathcal{I}}$ have associated self-similar measure $\mu$ and fix an iteration rule $\Phi$.
Then $\muv\circ\pi=T$ on $\Omega^n$ for every $n\in\N$ (note that $T(\eta)$ is a $1\times d(v)$ row vector when $\eta$ ends at the vertex $v$, since $d(\vroot)=1$), so if $\eta\in\Omega^n$,
\begin{equation*}
\mu(\pi(\eta))=\norm{T(\eta)}.
\end{equation*}
\end{proposition}
\begin{proof}
Let $\Delta=\pi(\eta)$ and let $\eta$ end at the vertex $v$.
Given $(\Delta,v)\in\mathcal{N}_n$, there exists a unique sequence $(\Delta_i,v_i)_{i=0}^n$ where $(\Delta_i,v_i)\in\mathcal{N}_i$, $\Delta_{i+1}$ is a child of $\Delta_i$, and $\Delta_n=\Delta$.
Now for $f\in v$, let $T_\Delta\circ f=S_\sigma$ for some $\sigma\in\mathcal{I}^*$.
Then one can write $\sigma=\sigma_1\ldots\sigma_n$ if and only if $\sigma_i\in\mathcal{C}_{j(i)}$ where $j(i)$ satisfies $T_{\Delta_i}^{-1}\circ f^{(i)}_{j(i)}=S_{\sigma_i}$ with $v_i=(f^{(i)}_1,\ldots,f^{(i)}_{m_i})$.
Thus the entry of $T(\eta)$ corresponding to the index $f$ is $f\mu((0,1))$ times the sum of $p_\sigma$ over all $\sigma$ satisfying $T_\Delta\circ f=S_\sigma$, which is precisely the corresponding entry of $\muv(\Delta)$.
\end{proof}
We observe that the transition matrices play a role analogous to that of the probabilities $(p_i)_{i\in\mathcal{I}}$ in \cref{e:meas-p}.
We conclude by mentioning the following straightforward but important property of transition matrices.
\begin{lemma}\label{l:left-prod}
If $\eta=\eta_1\eta_2\in\Omega^*$ with $\eta_1\in\Omega^n$, then $\norm{T(\eta)}\asymp_n\norm{T(\eta_2)}$.
\end{lemma}
\begin{proof}
By \cref{l:pos-col-row}, every transition matrix has a positive entry in each column, so a straightforward calculation shows that there exists some constant $a=a(\eta_1)>0$ such that $\norm{T(\eta_1\eta_2)}\geq a(\eta_1)\norm{T(\eta_2)}$.
On the other hand, $\norm{T(\eta_1\eta_2)}\leq\norm{T(\eta_1)}\norm{T(\eta_2)}$ by submultiplicativity of the matrix norm.
But there are only finitely many paths in $\Omega^n$, giving the result.
\end{proof}
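The inequalities in the proof above depend only on non-negativity and the column condition of \cref{l:pos-col-row}, not on the particular transition matrices. The following Python sketch (using synthetic random matrices, not actual transition matrices of any IFS; all names are ad hoc) illustrates both bounds, where the constant is the minimal column sum of $T(\eta_1)$, positive precisely because every column contains a positive entry.

```python
import random

random.seed(1)

def column_positive(m, k):
    # random sparse non-negative m x k matrix with a positive entry in every
    # column, mimicking the structure guaranteed by l:pos-col-row
    M = [[random.random() if random.random() < 0.4 else 0.0 for _ in range(k)]
         for _ in range(m)]
    for j in range(k):
        if all(M[i][j] == 0.0 for i in range(m)):
            M[random.randrange(m)][j] = random.random() + 0.1
    return M

def matmul(A, B):
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def prod(ms):
    P = ms[0]
    for M in ms[1:]:
        P = matmul(P, M)
    return P

def norm(M):
    # the entrywise sum, as used for transition matrices
    return sum(sum(row) for row in M)

# a "path": matrices with compatible dimensions, as along a path starting at
# the root (which has d(v_root) = 1)
dims = [1, 3, 4, 2, 3, 4, 2]
mats = [column_positive(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]

# split eta = eta1 eta2; the constant c is positive because a product of
# column-positive matrices is again column-positive
T1, T2, T = prod(mats[:3]), prod(mats[3:]), prod(mats)
c = min(sum(T1[i][j] for i in range(len(T1))) for j in range(len(T1[0])))
```

One then checks directly that $c\cdot\norm{T(\eta_2)}\leq\norm{T(\eta_1\eta_2)}\leq\norm{T(\eta_1)}\cdot\norm{T(\eta_2)}$ for this data.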
\subsection{The finite neighbour condition}\label{ss:fnc}
Throughout this section, we have made no assumptions about the IFS $(S_i)_{i\in\mathcal{I}}$ or the transition graph $\mathcal{G}$.
We now introduce the main restriction of this paper.
The finite neighbour condition was introduced in \cite{hhrtoappear} as a variation of the generalized finite type condition introduced by Lau and Ngai \cite{ln2007}.
In general, such ``finite type'' conditions attempt to capture the idea that an IFS only has finitely many possible overlaps.
It is known that the finite neighbour condition is equivalent to the generalized finite type condition holding with respect to the interval $(0,1)$ \cite{hhrtoappear}.
We introduce the following definition, which is a natural generalization of the usual finite neighbour condition with respect to our more general transition graph construction.
\begin{definition}\label{d:pfnc}
We say that the IFS $(S_i)_{i\in\mathcal{I}}$ satisfies the \defn{finite neighbour condition with respect to the iteration rule $\Phi$}, or the $\Phi$-FNC for short, if the corresponding transition graph is a finite graph.
\end{definition}
Closely related to this finite neighbour condition is the \defn{weak separation condition}.
This separation condition is satisfied if
\begin{equation}\label{e:wsc}
\sup_{x\in K,r>0}\#\{S_\sigma:r\cdot r_{\min}<|r_\sigma|\leq r,S_\sigma(K)\cap(x-r,x+r)\neq\emptyset\}<\infty.
\end{equation}
The weak separation condition was introduced by Lau and Ngai in \cite{ln1999}; this definition is not the original but equivalent by \cite[Thm. 1]{zer1996}.
Standard arguments show that any IFS satisfying the $\Phi$-FNC necessarily satisfies the weak separation condition (see, for example, \cite{hhrtoappear,ln2007}).
Moreover, when $K$ is a convex set, the weak separation condition implies that the $\Phi$-FNC holds with respect to the iteration rule $\Phi$ from \cref{ex:weighted-transition} \cite{hhrtoappear}.
\section{Loop classes, irreducibility, and decomposability}
In this section, we introduce the notion of a loop class of the transition graph $\mathcal{G}$, and other related definitions.
These definitions are required to state the main technical assumptions (irreducibility and decomposability) which underpin the main results presented later in this paper.
We also discuss certain general situations in which the technical assumptions are satisfied.
For the remainder of the paper (including this section), we will assume that $(S_i,p_i)_{i\in\mathcal{I}}$ satisfies the $\Phi$-FNC with finite transition graph $\mathcal{G}$.
Note that many concepts in this section hold more generally for an arbitrary transition graph $\mathcal{G}$, but for the sake of simplicity we do not emphasize this in the subsequent discussion.
\subsection{Loop classes}\label{ss:irred}
Let $G$ be a directed multigraph.
Recall that a graph $H$ is an \defn{induced subgraph} of $G$ if $V(H)\subseteq V(G)$ and $E(H)$ consists of every edge $e\in E(G)$ whose source and target both lie in $V(H)$.
\begin{definition}\label{d:loop-class}
Let $H$ be an induced subgraph of $G$.
We say that $H$ is \defn{strongly connected} if for any vertices $v,w\in V(H)$, there is a directed path from $v$ to $w$.
Then $H$ is a \defn{loop class} (in $G$) if it is strongly connected, contains at least one edge, and is maximal with these properties.
Now if $H$ is a loop class, we say that $H$ is \defn{simple} if each vertex in $H$ has exactly one outgoing edge (in $H$).
We say that $H$ is \defn{essential} if for any $v\in V(H)$ and $w\in V(G)$, if there is a directed path from $v$ to $w$, then $w\in V(H)$ as well.
\end{definition}
Since each vertex in $\mathcal{G}$ has at least two outgoing edges (see \cref{r:pr-ch}), any essential loop class is necessarily not simple.
Note that distinct loop classes have disjoint vertex and edge sets, but there may be vertices which do not belong to any loop class.
\begin{remark}
Previous authors (e.g. \cite{hhn2018}) distinguished between loop classes and maximal loop classes.
In this document, our loop classes are always maximal.
\end{remark}
\begin{example}
In \cref{f:gen-tr-graph}, the loop classes are given by $\{\mathcal{L}_1,\mathcal{L}_2,\mathcal{L}_3,\mathcal{L}_4\}$.
The loop classes $\mathcal{L}_1$ and $\mathcal{L}_2$ are simple, while $\mathcal{L}_3$ is not; and $\mathcal{L}_4$, being an essential loop class, is not simple.
\end{example}
Since the transition graph $\mathcal{G}$ is a finite graph, there are only finitely many loop classes.
Given any path $\gamma=(e_i)_{i=1}^\infty\in\Omega^\infty$, there is a unique loop class $\mathcal{L}$ and some $N\in\N$ such that $e_k$ is an edge in $\mathcal{L}$ for all $k\geq N$.
We say that $\gamma$ is \defn{eventually in $\mathcal{L}$} and denote the set of all such $\gamma$ by $\Omega^\infty_{\mathcal{L}}$.
We may now set
\begin{align*}
K_{\mathcal{L}} &= \{x\in K:\pi^{-1}(x)\cap\Omega^\infty_{\mathcal{L}}\neq\emptyset\} & \kint_{\mathcal{L}} &= \{x\in K:\pi^{-1}(x)\subseteq\Omega^\infty_{\mathcal{L}}\}.
\end{align*}
Of course, for each $x\in K$ there is at least one loop class $\mathcal{L}$ such that $x\in K_{\mathcal{L}}$, and at most two such sets.
Note that $\kint_{\mathcal{L}}$ is the topological interior of $K_{\mathcal{L}}$ (relative to $K$) if and only if $\mathcal{L}$ is an essential loop class.
If $x\in K$ is an interior point, then $x\in\kint_{\mathcal{L}}$ for a unique loop class $\mathcal{L}$.
Our analysis is focused on two technical assumptions, which we call irreducibility and decomposability.
We discuss these assumptions in the following two sections.
\subsection{Irreducibility}\label{sss:irreducibility}
Irreducibility can be loosely interpreted as a type of ``measure connectivity'' within the loop class.
\begin{definition}\label{d:irred}
Let $\mathcal{L}$ be a loop class.
We say that $\mathcal{L}$ is \defn{irreducible} if there exists a finite set of paths $\mathcal{H}$ such that for any paths $\eta_1,\eta_2$ in $\mathcal{L}$, there is some $\gamma\in\mathcal{H}$ such that $\eta_1\gamma\eta_2$ is a path and
\begin{equation*}
\norm{T(\eta_1\gamma\eta_2)}\asymp\norm{T(\eta_1)}\norm{T(\eta_2)}.
\end{equation*}
We say that the transition graph $\mathcal{G}$ is \defn{irreducible} if every loop class is irreducible.
\end{definition}
Since $\mathcal{L}$ is a finite graph, by submultiplicativity of the matrix norm, one can always guarantee that
\begin{equation*}
\norm{T(\eta_1\gamma\eta_2)}\preccurlyeq\norm{T(\eta_1)}\norm{T(\eta_2)}.
\end{equation*}
On the other hand, establishing the lower inequality is more challenging.
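As a toy illustration of the failure mode (with hypothetical matrices, chosen for illustration and not arising from any particular IFS), take $\norm{\cdot}$ to be the maximum-entry norm and consider
\begin{equation*}
T(\eta_1)=\begin{pmatrix}1&0\\0&0\end{pmatrix},\qquad T(\eta_2)=\begin{pmatrix}0&0\\0&1\end{pmatrix}.
\end{equation*}
Then $T(\eta_1)T(\eta_2)=0$ even though $\norm{T(\eta_1)}\norm{T(\eta_2)}=1$, since the positive column of the first matrix does not meet the positive row of the second.
Inserting a connecting matrix $T(\gamma)=\begin{pmatrix}0&1\\0&0\end{pmatrix}$ realigns the supports: $T(\eta_1)T(\gamma)T(\eta_2)=T(\gamma)$, so that $\norm{T(\eta_1)T(\gamma)T(\eta_2)}=\norm{T(\eta_1)}\norm{T(\eta_2)}$.
Irreducibility guarantees that such a connecting path can always be chosen from a fixed finite set $\mathcal{H}$.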
This notion of irreducibility is motivated by various hypotheses studied by past authors \cite{fen2009,hr2021}.
We are not aware of any loop class of any IFS satisfying the finite neighbour condition that does not satisfy this irreducibility hypothesis.
In the following lemmas, we observe that this technical hypothesis is satisfied in a number of general cases.
Enumerate $V(\mathcal{L})=\{v_1,\ldots,v_k\}$.
For each $1\leq i,j\leq k$, let
\begin{align*}
\mathcal{A}_{i,j}&=\{e\in E(\mathcal{L}):e\text{ is an edge from }v_i\text{ to }v_j\} & M_{i,j}&=\sum_{e\in\mathcal{A}_{i,j}}T(e)
\end{align*}
and define the block matrix
\begin{equation*}
M=M(\mathcal{L}):=\begin{pmatrix}
M_{1,1}&\cdots&M_{1,k}\\
\vdots&\ddots&\vdots\\
M_{k,1}&\cdots&M_{k,k}
\end{pmatrix}
\end{equation*}
The following lemma is standard, and a proof can be found in, for example, \cite{hr2021}; we include the short argument for completeness.
Recall that the matrix $M$ is \defn{irreducible} if for each pair of indices $i,j$, there exists some $n=n(i,j)$ such that $(M^n)_{i,j}>0$.
\begin{lemma}\label{l:irred}
Suppose the matrix $M$ is irreducible.
Then $\mathcal{L}$ is an irreducible loop class.
\end{lemma}
\begin{proof}
Since $M$ is irreducible, for any vertices $v,w\in V(\mathcal{L})$, $1\leq i\leq d(v)$, and $1\leq j\leq d(w)$, there exists some path $\gamma$ from $v$ to $w$ such that $T(\gamma)_{i,j}>0$.
Let $\mathcal{H}$ be a finite set of paths containing one such path for each choice of $v$, $w$, $i$, and $j$, let $A$ be the smallest strictly positive entry of $T(\phi)$ over all $\phi\in\mathcal{H}$, and let $d_{\max}=\max\{d(v):v\in V(\mathcal{L})\}$ (which is also the maximum number of rows or columns of any transition matrix $T(e)$ where $e\in E(\mathcal{L})$).
Let $\eta_1,\eta_2$ be any finite paths in $\mathcal{L}$.
By the pigeonhole principle, there exist indices $k,i,j,\ell$ such that $T(\eta_1)_{k,i}\geq d_{\max}^{-2}\norm{T(\eta_1)}$ and $T(\eta_2)_{j,\ell}\geq d_{\max}^{-2}\norm{T(\eta_2)}$.
Let $\phi\in\mathcal{H}$ be a path from the target vertex of $\eta_1$ to the source vertex of $\eta_2$ such that $T(\phi)_{i,j}\geq A>0$.
Then
\begin{equation*}
\norm{T(\eta_1\phi\eta_2)}\geq T(\eta_1)_{k,i}T(\phi)_{i,j}T(\eta_2)_{j,\ell}\geq Ad_{\max}^{-4}\norm{T(\eta_1)}\norm{T(\eta_2)}.
\end{equation*}
The upper bound follows since
\begin{equation*}
\norm{T(\eta_1\phi\eta_2)}\leq\norm{T(\eta_1)}\norm{T(\eta_2)}\max\{\norm{T(\phi)}:\phi\in\mathcal{H}\}
\end{equation*}
by submultiplicativity of the norm.
\end{proof}
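For instance, in the simple special case where the loop class $\mathcal{L}$ consists of a single vertex $v$ with self-loop edges $e_1,\ldots,e_r$, the block matrix reduces to the single $d(v)\times d(v)$ non-negative matrix
\begin{equation*}
M=T(e_1)+\cdots+T(e_r),
\end{equation*}
and \cref{l:irred} states that $\mathcal{L}$ is irreducible whenever this matrix is irreducible; this holds, in particular, whenever some $T(e_i)$ is a strictly positive matrix.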
We next observe that an essential loop class is always irreducible.
\begin{lemma}\label{l:ess-irred}
Let $\mathcal{L}$ be an essential loop class of $\mathcal{G}$.
Then $\mathcal{L}$ is irreducible.
\end{lemma}
\begin{proof}
In fact, we will show for any $v,w\in V(\mathcal{L})$ and $1\leq i\leq d(v)$, there exists some path $\gamma$ from $v$ to $w$ such that row $i$ of $T(\gamma)$ is strictly positive.
The required result will then follow by \cref{l:irred}.
Let $\Delta$ be any net interval with $\vs(\Delta)=v$ and neighbour set $v=(f_1,\ldots,f_{d(v)})$.
Since $T_\Delta\circ f_i=S_{\sigma_0}$ for some $\sigma_0\in\mathcal{I}^*$ with $S_{\sigma_0}(K)\cap\Delta^\circ\neq\emptyset$, there exists some word $\sigma$ with prefix $\sigma_0$ such that $S_{\sigma}(K)\subseteq\Delta$.
Let $U=(x-r,x+r)$ attain the supremum in \cref{e:wsc} with words $\tau_1,\ldots,\tau_\ell$ satisfying $r\cdot r_{\min}<|r_{\tau_k}|\leq r$ and $S_{\tau_k}(K)\cap U\neq\emptyset$.
Observe that $S_\sigma(U)$ also attains the supremum in \cref{e:wsc} with words $\sigma\tau_1,\ldots,\sigma\tau_\ell$.
By condition (i) in \cref{d:iter} and since $\mathcal{L}$ is an essential loop class, there exists some net interval $\Delta_1\subseteq S_\sigma(U)$ with $\vs(\Delta_1)=w$ such that if $g$ is any neighbour of $\Delta_1$, then the contraction ratio of $T_{\Delta_1}\circ g$ is less than $|r_\sigma|r$.
Let $\gamma$ be the path corresponding to $\Delta_1\subseteq \Delta$, which is necessarily a path from $v$ to $w$ in $\mathcal{L}$.
It remains to show that row $i$ of $T(\gamma)$ is strictly positive.
Let $g\in\vs(\Delta_1)$ be arbitrary and let $S_\omega=T_{\Delta_1}\circ g$; by choice of $\Delta_1$, we have $|r_\omega|\leq |r_\sigma|r$.
Since $S_\omega(K)\cap\Delta_1^\circ\neq\emptyset$ and $\Delta_1\subseteq S_\sigma(U)$, we have $S_\omega(K)\cap S_\sigma(U)\neq\emptyset$.
Let $\xi$ be the unique prefix of $\omega$ with minimal length satisfying $|r_\xi|\leq|r_\sigma|r$.
In particular, $|r_\xi|>|r_\sigma|r\cdot r_{\min}$ and $S_\xi(K)\cap S_\sigma(U)\neq\emptyset$, forcing $S_\xi=S_{\sigma\tau_j}$ for some $1\leq j\leq\ell$ by maximality of $\ell$.
Unpacking definitions, this means that there is some word $\phi$ such that
\begin{equation*}
T_{\Delta_1}\circ g = S_{\sigma_0}\circ S_\phi=T_\Delta\circ f_i\circ S_\phi.
\end{equation*}
In other words, the entry in $T(\gamma)$ corresponding to the neighbours $f_i$ of $v$ and $g$ of $w$ is strictly positive.
Since $g$ was an arbitrary neighbour of $\Delta_1$, the result follows.
\end{proof}
\subsection{Decomposability}\label{sss:decomposable}
Irreducibility states that, up to a fixed constant multiple, one can join paths within a loop class without changing the norm of the corresponding transition matrix.
In contrast, decomposability states that for a path passing through multiple loop classes, the norm is comparable to the product of the norms of the components of the path within each loop class it passes through.
We begin by defining the notion of an \defn{initial path} and a \defn{transition path} in the transition graph $\mathcal{G}$ as follows.
Let $\mathcal{G}$ have loop classes $\mathcal{L}_1,\ldots,\mathcal{L}_m$ and root vertex $\vroot$.
Let $\psi=(e_1,\ldots,e_n)$ be a path in $\mathcal{G}$ connecting vertices $(v_0,v_1,\ldots,v_n)$.
We say that $\psi$ is a \defn{transition path} if
\begin{enumerate}[nl]
\item $v_0$ is a vertex in $V(\mathcal{L}_j)$ for some $j$,
\item $v_n$ is a vertex in $V(\mathcal{L}_k)$ for some $k\neq j$, and
\item each $v_i$ with $0<i<n$ is not a vertex in any loop class.
\end{enumerate}
Similarly, we say that $\psi$ is an \defn{initial path} if condition (1) is replaced with
\begin{enumerate}[nl,r]
\item[(1')] $v_0=\vroot$.
\end{enumerate}
There are only finitely many initial paths and transition paths since they cannot repeat vertices.
By definition of the loop class, we can sort the loop classes $\mathcal{L}_1,\ldots,\mathcal{L}_m$ in a (not necessarily unique) order such that if $\psi$ is any transition path joining loop classes $\mathcal{L}_i$ and $\mathcal{L}_j$, then $i<j$.
Now suppose $\eta=(e_1,\ldots,e_n)\in\Omega^*$ is any finite rooted path.
Then we can uniquely write
\begin{equation*}
\eta = \phi\lambda_1\psi_1\ldots\psi_{m-1}\lambda_m
\end{equation*}
for possibly empty paths $\phi,\psi_i,\lambda_i$, where $\phi$ is an initial path, each $\lambda_i$ is a path in $\mathcal{L}_i$, and each $\psi_i$ is a transition path.
We call the tuple $(\lambda_1,\ldots,\lambda_m)$ the \defn{decomposition} of the path $\eta$.
\begin{example}
In \cref{f:gen-tr-graph}, an example of a valid order is $\mathcal{L}_1,\mathcal{L}_2,\mathcal{L}_3,\mathcal{L}_4$.
Note that any decomposition can contain a maximum of $3$ non-empty paths $\lambda_i$ (corresponding to the loop classes $\mathcal{L}_1,\mathcal{L}_3,\mathcal{L}_4$).
\end{example}
By convention, if $\lambda_i$ is an empty path, we write $\norm{T(\lambda_i)}=1$.
\begin{definition}
We say that the transition graph $\mathcal{G}$ is \defn{decomposable} if for any path $\eta\in\Omega^*$ with decomposition $(\lambda_1,\ldots,\lambda_m)$, we have
\begin{equation*}
\norm{T(\eta)}\asymp \norm{T(\lambda_1)}\cdots \norm{T(\lambda_m)}
\end{equation*}
with constants depending only on the transition graph $\mathcal{G}$.
\end{definition}
We now discuss a few examples in which the transition graph $\mathcal{G}$ is decomposable.
\begin{lemma}\label{l:pos-tr}
Suppose that $T(\eta)$ is a strictly positive matrix for every transition path $\eta$.
Then $\mathcal{G}$ is decomposable.
\end{lemma}
\begin{proof}
Since there are only finitely many transition paths, there exists a constant $C>0$ such that for any transition path $\psi$ and valid indices $i,j$, we have $T(\psi)_{i,j}\geq C$.
But now if $\eta\in\Omega^*$ has decomposition $(\lambda_1,\ldots,\lambda_m)$, we can uniquely write $\eta=\phi\lambda_1\psi_1\ldots\psi_{m-1}\lambda_m$ where each $\psi_i$ is a transition path.
Then by \cref{l:left-prod},
\begin{align*}
\norm{T(\eta)}&=\norm{T(\phi\lambda_1\psi_1\ldots\psi_{m-1}\lambda_m)} \asymp \norm{T(\lambda_1\psi_1\ldots\psi_{m-1}\lambda_m)}\\
&\geq C^{m-1}\norm{T(\lambda_1)}\cdots\norm{T(\lambda_m)}.
\end{align*}
Of course, $C$ and $m$ depend only on $\mathcal{G}$, so the lower bound holds.
The upper bound always follows by submultiplicativity of the matrix norm, since there are only finitely many choices for the paths $\phi$, $\psi_i$.
\end{proof}
\begin{lemma}\label{l:size-one-loops}
Suppose that every vertex in a non-essential loop class is a neighbour set of size one.
Then $\mathcal{G}$ is decomposable.
\end{lemma}
\begin{proof}
By \cref{l:pos-tr}, it suffices to show for any transition path $\eta$ that $T(\eta)$ is a strictly positive matrix.
By definition of an essential loop class, if $\eta$ is a transition path from the loop class $\mathcal{L}_i$ to $\mathcal{L}_j$, then $\mathcal{L}_i$ is non-essential.
Thus by assumption, $\eta$ is a path beginning at a vertex whose neighbour set consists of a single neighbour, so that $T(\eta)$ is a matrix with a single row.
Since every transition matrix has a positive entry in every column by \cref{l:pos-col-row}, $T(\eta)$ is a strictly positive matrix, so the result follows by \cref{l:pos-tr}.
\end{proof}
We now establish an irreducibility-type condition which guarantees that the transition graph is decomposable when all non-essential loop classes are simple.
We begin with some general observations about non-negative irreducible matrices.
Let $M$ be an irreducible matrix with spectral radius $r$.
It is known that if $M$ is a strictly positive matrix, then the limit $\lim_{k\to\infty} M^k/r^k$ exists \cite{sen1981}.
While this limit need not exist in general if $M$ is irreducible, using similar arguments, one can show that there are constants $c_1,c_2>0$ such that for all $n$ sufficiently large, either $M^n_{i,j}=0$ or
\begin{equation*}
c_1 r^n\leq M^n_{i,j}\leq c_2 r^n.
\end{equation*}
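As a concrete illustration (a standard example, not tied to any particular IFS), the irreducible matrix
\begin{equation*}
M=\begin{pmatrix}0&1\\1&0\end{pmatrix}
\end{equation*}
has spectral radius $r=1$, and $M^n$ alternates between $M$ and the identity, so the limit $\lim_{k\to\infty}M^k/r^k$ does not exist; nevertheless, each entry of $M^n$ is either $0$ or exactly $r^n$, so the above dichotomy holds with $c_1=c_2=1$.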
In particular, suppose $M_1,\ldots,M_k$ are irreducible matrices and $A_1,\ldots,A_{k+1}$ are non-negative matrices such that $A=A_1 M_1^{n_1}\cdots A_kM_k^{n_k}A_{k+1}\neq 0$.
Then if each $M_i$ has spectral radius $r_i$, for all $n_i$ sufficiently large, either $A_{j,\ell}=0$ or
\begin{equation}\label{e:irred-power}
A_{j,\ell}\asymp_{A_1,\ldots,A_{k+1}} r_1^{n_1}\cdots r_k^{n_k}.
\end{equation}
This observation is the main idea in the following result.
\begin{lemma}
Suppose every non-essential loop class is simple.
For each simple loop class $\mathcal{L}$, suppose there is a path $\theta$ in $\mathcal{L}$ beginning and ending at the same vertex such that $T(\theta)$ is an irreducible matrix.
Then $\mathcal{G}$ is decomposable.
\end{lemma}
\begin{proof}
For simplicity, we assume there is a unique essential loop class; the proof in the general case follows similarly.
Denote the simple loop classes by $\mathcal{L}_1,\ldots,\mathcal{L}_k$, and for each $1\leq i\leq k$, let $\theta_i$ be a cycle in $V(\mathcal{L}_i)$ such that $T(\theta_i)$ is an irreducible matrix.
Let $T(\theta_i)$ have spectral radius $r_i$.
If $\eta\in\Omega^*$ is an arbitrary path, it has decomposition of the form $(\lambda_1,\ldots,\lambda_k,\xi)$ where $\lambda_i=\gamma_i^{(1)}\theta_i^{n_i}\gamma_i^{(2)}$ with $n_i$ maximal and $\xi$ is a path in the essential loop class.
Since $n_i$ is maximal and $\mathcal{L}_i$ is simple, the paths $\gamma_i^{(1)}$ and $\gamma_i^{(2)}$ have length at most the length of $\theta_i$, so there are only finitely many possible paths $\gamma_i^{(j)}$.
Thus by \cref{e:irred-power},
\begin{equation*}
\norm{T(\lambda_i)}\asymp r_i^{n_i}.
\end{equation*}
Now, we may write
\begin{equation*}
\eta = \phi\theta_1^{n_1}\psi_1\theta_2^{n_2}\ldots\theta_k^{n_k}\psi_k\xi
\end{equation*}
where $\phi=\phi'\gamma_1^{(1)}$ with $\phi'$ an initial path, and each $\psi_i$ is of the form $\gamma_i^{(2)}\psi_i'\gamma_{i+1}^{(1)}$ for $i<k$ or $\gamma_k^{(2)}\psi_k'$, and the paths $\psi_i'$ are transition paths.
Of course, some of the paths $\gamma_i^{(j)}$ or $\psi_i'$ may be the empty path.
The point here is that there are only finitely many possible choices for the paths $\phi,\psi_1,\ldots,\psi_k$, independent of the choice of $\eta$.
It always holds that $\norm{T(\eta)}\preccurlyeq\norm{T(\lambda_1)}\cdots\norm{T(\lambda_k)}\norm{T(\xi)}$.
Thus it suffices to show that
\begin{equation*}
\norm{T(\eta)}\succcurlyeq r_1^{n_1}\cdots r_k^{n_k}\norm{T(\xi)}.
\end{equation*}
Let $M=T(\phi\theta_1^{n_1}\psi_1\theta_2^{n_2}\ldots\theta_k^{n_k}\psi_k)$.
By \cref{e:irred-power}, for each index $j,\ell$, either $M_{j,\ell}=0$ or
\begin{equation*}
M_{j,\ell}\asymp r_1^{n_1}\cdots r_k^{n_k}.
\end{equation*}
But now if $T(\xi)_{p,q}$ is a maximal entry of $T(\xi)$, we have $T(\xi)_{p,q}\succcurlyeq\norm{T(\xi)}$.
Since column $p$ of the matrix $M$ has a strictly positive entry by \cref{l:pos-col-row}, there exists $p'$ such that $M_{p',p}\succcurlyeq r_1^{n_1}\cdots r_k^{n_k}$ and
\begin{equation*}
\norm{T(\eta)}=\norm{M\cdot T(\xi)}\geq M_{p',p}T(\xi)_{p,q}\succcurlyeq r_1^{n_1}\cdots r_k^{n_k}\norm{T(\xi)}
\end{equation*}
as required.
\end{proof}
\section{Loop class spectra and a multifractal formalism}
\subsection{Measures and metric structure on paths in the transition graph}\label{ss:symb-defs}
The set $\Omega^\infty$ of infinite rooted paths has a natural metric space structure given by the weights.
Given paths $\gamma_1,\gamma_2$, define
\begin{equation*}
d(\gamma_1,\gamma_2)=\inf\{W(\eta):\eta\text{ a common prefix of }\gamma_1\text{ and }\gamma_2\}.
\end{equation*}
The topology is generated by the closed and open cylinders
\begin{equation*}
[\eta]:=\{\gamma\in\Omega^\infty:\eta\text{ a prefix of }\gamma\}.
\end{equation*}
It is easy to see that this space is compact and totally disconnected.
Of course, $\pi([\eta])=\pi(\eta)\cap K$ where we recall that $\pi(\eta)$ is the net interval with symbolic representation $\eta$.
It is productive to interpret the space $\Omega^\infty$ with the above metric as a ``separated'' version of the set $K$.
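For example, if $x\in K$ is a shared endpoint of two adjacent net intervals in every sufficiently deep level, then $\pi^{-1}(x)$ contains two distinct infinite paths, one passing through the net intervals on each side of $x$; these paths are identified by $\pi$ but remain at positive distance in $\Omega^\infty$, since any two distinct paths have a finite maximal common prefix $\eta$ and hence distance $W(\eta)>0$.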
We have the following straightforward result:
\begin{lemma}\label{l:pi-Lip}
The map $\pi:\Omega^\infty\to K$ is Lipschitz with constant $1$.
\end{lemma}
\begin{proof}
Let $\gamma_1$ and $\gamma_2$ be two distinct paths in $\Omega^\infty$ with maximal common prefix $\eta\in\Omega^n$, so that $d(\gamma_1,\gamma_2)=W(\eta)$.
Let $\Delta\in\mathcal{P}_n$ have symbolic representation $\eta$.
By definition, $W(\eta)=\diam(\Delta)$.
But then $\pi(\gamma_1),\pi(\gamma_2)\in\Delta$ so
\begin{equation*}
|\pi(\gamma_1)-\pi(\gamma_2)|\leq\diam(\Delta) = W(\eta)=d(\gamma_1,\gamma_2)
\end{equation*}
as required.
\end{proof}
Our general philosophy is to establish multifractal properties of the space $\Omega^\infty$ (in terms of the corresponding subspaces $\Omega^\infty_{\mathcal{L},\zeta}$ defined below), and then translate these results to the self-similar measure $\mu$.
However, the main difficulty in establishing the corresponding multifractal results is that the map $\pi$ is not in general bi-Lipschitz (even when restricted to the maximal domain on which it is injective).
Many of the technical results in the following sections are established to overcome this.
Since the graph $\mathcal{G}$ is not in general strongly connected, we will study subspaces of $\Omega^\infty$ corresponding to the loop classes.
Fix a loop class $\mathcal{L}$, and let $\zeta\in\Omega^*$ be a fixed path which ends at a vertex $v$ in $\mathcal{L}$.
Recall that $\Omega^\infty_{\mathcal{L}}$ is the set of infinite paths eventually in the loop class $\mathcal{L}$.
Now, define the set
\begin{equation*}
\Omega^\infty_{\mathcal{L},\zeta}:=\{\gamma\in\Omega^\infty_{\mathcal{L}}:\zeta\text{ is a prefix of }\gamma\}
\end{equation*}
This is a compact subspace of $\Omega^\infty$ (note that the sets $\Omega^\infty_{\mathcal{L}}$ need not be compact).
We also define the analogous sets
\begin{equation*}
\Omega^*_{\mathcal{L},\zeta}:=\{\eta\in\Omega^*_{\mathcal{L}}:\zeta\text{ is a prefix of }\eta\}
\end{equation*}
consisting of finite, rather than infinite, paths.
Often, given $\eta\in\Omega^*_{\mathcal{L},\zeta}$, we will abuse notation and write $[\eta]$ to denote the cylinder $[\eta]\cap\Omega^\infty_{\mathcal{L},\zeta}\subseteq\Omega^\infty$.
Suppose $\Delta=\pi(\zeta)$ is the net interval with symbolic representation $\zeta$.
Then one can verify that
\begin{equation*}
\pi\bigl(\Omega^\infty_{\mathcal{L},\zeta}\bigr)=K_{\mathcal{L}}\cap\Delta.
\end{equation*}
We now turn our attention to the measure $\mu$.
Since distinct net intervals in the same level overlap only on endpoints and the self-similar measure $\mu$ is non-atomic, one can verify that the rule
\begin{equation*}
\mu(\pi(\eta))=\norm{T(\eta)}
\end{equation*}
for paths $\eta\in\Omega^*$ extends to a unique Borel measure on $\Omega^\infty$.
We would like to restrict this measure $\mu\circ \pi$ to the subsets $\Omega^\infty_{\mathcal{L},\zeta}$ in a meaningful way.
However, these sets can have measure $0$ in $\Omega^\infty$ (in fact, they have non-zero measure if and only if $\mathcal{L}$ is an essential loop class).
Regardless, it is convenient to simply consider the measure $\mu\circ\pi$ as being defined on finite rooted paths (or the corresponding cylinders).
With this in mind, we define a function $\rho:\Omega^*\to [0,1]$ by the rule
\begin{equation*}
\rho(\eta)=\mu\circ\pi(\eta).
\end{equation*}
Now $\rho$ restricts naturally to a function $\rho:\Omega^*_{\mathcal{L},\zeta}\to[0,1]$, though this restriction is not in general additive.
\subsubsection{Loop class $L^q$-spectra}
We now use the function $\rho$ to define an analogue of the $L^q$-spectrum of measures for loop classes.
To motivate this, we first state an equivalent formulation of the $L^q$-spectrum of $\mu$ using the function $\rho$.
Set
\begin{equation*}
\mathcal{F}(t) = \bigl\{\eta=(e_1,\ldots,e_n)\in\Omega^*:W(e_1\ldots e_n)\leq t<W(e_1\ldots e_{n-1})\bigr\}.
\end{equation*}
One can think of the sets $\mathcal{F}(t)$ as a ``scale-uniform'' analogue in $\Omega^\infty$ of the partitions $\mathcal{P}_n$ (which may contain intervals with vastly different diameters).
We then have the following standard result, which is a weighted version of \cite[Prop. 4.3]{hhstoappear} or \cite[Prop. 5.6]{fen2003}.
We include the main details for the convenience of the reader.
\begin{proposition}\label{p:lq-lim}
Let $\mu$ be a self-similar measure satisfying the finite neighbour condition.
Then
\begin{equation*}
\tau_\mu(q)= \liminf_{t\to 0}\frac{\log\sum_{\eta\in\mathcal{F}(t)}\rho(\eta)^q}{\log t}
\end{equation*}
\end{proposition}
\begin{proof}
First suppose $x\in K$ and $t>0$ is arbitrary, and let $\{B(x_i,t)\}_i$ be any centred packing of $K$.
If $\eta\in\mathcal{F}(t)$ has $x\in\pi(\eta)$, we always have $\pi(\eta)\subseteq B(x,t)$, so for $q<0$
\begin{equation*}
\sum_{\eta\in\mathcal{F}(t)}\rho(\eta)^q\geq\sum_i\mu(B(x_i,t))^q.
\end{equation*}
On the other hand, for $q\geq 0$, since there are only finitely many edge weights, there is some $N$ such that $\#\{\eta\in\mathcal{F}(t):\pi(\eta)\cap B(x,t)\neq\emptyset\}\leq N$.
Since a given net interval $\pi(\eta)$ overlaps with at most 2 distinct balls in $\{B(x_i,t)\}_i$, for $q\geq 0$ by Jensen's inequality
\begin{equation*}
\sum_i\mu(B(x_i,t))^q\leq\sum_i\Bigl(\sum_{\substack{\eta\in\mathcal{F}(t)\\\pi(\eta)\cap B(x_i,t)\neq\emptyset}}\rho(\eta)\Bigr)^q\preccurlyeq_q\sum_{\eta\in\mathcal{F}(t)}\rho(\eta)^q.
\end{equation*}
Thus
\begin{equation*}
\tau_\mu(q) \geq \liminf_{t\to 0}\frac{\log\sum_{\eta\in\mathcal{F}(t)}\rho(\eta)^q}{\log t}.
\end{equation*}
Conversely, suppose $\Delta=\pi(\eta)$ is some net interval, where $\eta\in\mathcal{F}(t)$.
If $f$ is any neighbour of $\Delta$, since $f(K)\cap(0,1)\neq\emptyset$, there exist some $\epsilon>0$ and a word $\tau\in\mathcal{I}^*$ depending only on $f$ such that $f\circ S_\tau(K)\subseteq(\epsilon,1-\epsilon)$ and $0<r_\tau<\epsilon$.
Since there are only finitely many neighbour sets, and hence only finitely many neighbours, we may assume $\epsilon\geq \epsilon_0>0$ and $p_\tau\geq p_0>0$ for some fixed $\epsilon_0,p_0$.
Now by \cref{p:mat-mu} along with \cref{e:qi-formula}, there is some $M>0$ fixed such that there is $f\in\vs(\Delta)$ satisfying
\begin{equation*}
\rho(\eta)=\mu(\Delta)\leq M\sum_{\substack{\sigma\in\mathcal{I}^*\\S_\sigma=T_\Delta\circ f}}p_\sigma.
\end{equation*}
But then by choice of $\tau$, with $x_\eta=T_\Delta\circ f\circ S_\tau(0)\in K$ and $r_\eta=\epsilon|r_\tau|\diam(\Delta)$, we have $B(x_\eta,r_\eta)\subseteq\Delta$ and
\begin{equation*}
\rho(\eta)\geq \mu(B(x_\eta,r_\eta))\geq \mu(T_\Delta\circ f\circ S_\tau(K))\succcurlyeq\sum_{\substack{\sigma\in\mathcal{I}^*\\S_\sigma=T_\Delta\circ f}}p_\sigma p_\tau\succcurlyeq\rho(\eta).
\end{equation*}
Thus the centred packing $\{B(x_\eta,r_\eta)\}_\eta$ satisfies
\begin{equation*}
\sum_{\eta\in\mathcal{F}(t)}\mu(B(x_\eta,r_\eta))^q\asymp_q \sum_{\eta\in\mathcal{F}(t)}\rho(\eta)^q.
\end{equation*}
But $r_\eta\asymp t$, so for any $q\in\R$,
\begin{equation*}
\tau_\mu(q) \leq \liminf_{t\to 0}\frac{\log\sum_{\eta\in\mathcal{F}(t)}\rho(\eta)^q}{\log t}.
\end{equation*}
This gives the desired result.
\end{proof}
When $q\geq 0$, for any ``sufficiently uniform'' (e.g. dyadic) partition of $K$, one may always define the $L^q$-spectrum of any finite measure with respect to such a partition (see, for example, \cite[Prop. 3.1]{ln1999}).
When $q<0$, such an operation is more delicate since the intervals in a partition can intersect $K$ on sets of disproportionately small $\mu$-measure.
\cref{p:lq-lim} essentially states that the partitions $\{\pi(\eta):\eta\in\mathcal{F}(t)\}$ of $K$ for $t>0$ avoid this issue.
Now for $t>0$, and $\zeta$ and $\mathcal{L}$ defined as above, set
\begin{align*}
\mathcal{F}_{\mathcal{L},\zeta}(t)&=\mathcal{F}(t)\cap\Omega^*_{\mathcal{L},\zeta}.
\end{align*}
We then define
\begin{equation*}
\tau_{\mathcal{L},\zeta}(q) = \lim_{t\to 0}\frac{\log\sum_{\eta\in\mathcal{F}_{\mathcal{L},\zeta}(t)}\rho(\eta)^q}{\log t}.
\end{equation*}
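As a sketch of a simple special case (under the additional assumptions, not made in the text, that $T(\theta)$ is irreducible and that the weight of a path is the product of its edge weights), suppose $\mathcal{L}$ is a simple loop class consisting of a single cycle $\theta$ with spectral radius $r$ for $T(\theta)$.
Every path in $\Omega^*_{\mathcal{L},\zeta}$ has the form $\zeta\gamma^{(1)}\theta^n\gamma^{(2)}$ with $\gamma^{(1)},\gamma^{(2)}$ of bounded length, so each $\mathcal{F}_{\mathcal{L},\zeta}(t)$ contains a bounded number of paths $\eta$, each satisfying $W(\eta)\asymp W(\theta)^n$ and, by \cref{e:irred-power} and \cref{l:left-prod}, $\rho(\eta)\asymp r^n$.
Substituting into the definition gives the linear spectrum
\begin{equation*}
\tau_{\mathcal{L},\zeta}(q)=\lim_{t\to 0}\frac{\log\sum_{\eta\in\mathcal{F}_{\mathcal{L},\zeta}(t)}\rho(\eta)^q}{\log t}=q\cdot\frac{\log r}{\log W(\theta)}.
\end{equation*}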
We have the following basic lemma.
The argument giving existence of the limit is similar to \cite[Lem. 2.2]{fen2009}.
\begin{lemma}\label{l:lq-limit}
The function $\tau_{\mathcal{L},\zeta}(q)$ is a concave function of $q$, and the limit exists for any $q\in\R$.
Moreover, if $\zeta'$ is any other path ending in $\mathcal{L}$, then $\tau_{\mathcal{L},\zeta'}=\tau_{\mathcal{L},\zeta}$.
\end{lemma}
\begin{proof}
Concavity is a standard application of Hölder's inequality.
We now show existence of the limit.
Write $A_q(t)=\sum_{\eta\in\mathcal{F}_{\mathcal{L},\zeta}(t)}\rho(\eta)^q$.
All implicit constants below may depend on the choice of $\zeta$.
First suppose $q\geq 0$.
Let $\zeta$ end at the vertex $v\in V(\mathcal{L})$ and, for each $w\in V(\mathcal{L})$, let $\gamma_w$ be a path in $\mathcal{L}$ from $v$ to $w$.
Set $r_0=W_{\min}\cdot\min\{W(\zeta\gamma_w):w\in V(\mathcal{L})\}$ where $W_{\min}=\min\{W(e):e\in E(\mathcal{L})\}$.
Suppose $\eta\in\mathcal{F}_{\mathcal{L},\zeta}(r_0t_1t_2)$, so we may write $\eta=\eta_1\phi$ where $\eta_1\in\mathcal{F}_{\mathcal{L},\zeta}(t_1)$.
If $\phi$ begins at the vertex $w$, by choice of $r_0$ write $\phi=\psi\phi_0$ such that with $\eta_2=\zeta\gamma_w\psi$, $\eta_2\in\mathcal{F}_{\mathcal{L},\zeta}(t_2)$.
Observe that $W(\phi_0)\asymp r_0$, so there are only finitely many possible values of $\norm{T(\phi_0)}$.
Thus by \cref{l:left-prod} we have $\norm{T(\psi)}\asymp \rho(\eta_2)$ so that
\begin{equation*}
\rho(\eta)\leq \rho(\eta_1)\norm{T(\psi)}\norm{T(\phi_0)}\preccurlyeq\rho(\eta_1)\rho(\eta_2).
\end{equation*}
Thus $A_q(r_0t_1t_2)\preccurlyeq_q A_q(t_1)A_q(t_2)$ so the limit exists for $q\geq 0$ by submultiplicativity.
Now suppose $q<0$.
Let $\zeta$ end at the vertex $v\in V(\mathcal{L})$, and for each $w\in V(\mathcal{L})$, let $\gamma_w$ be a path in $\mathcal{L}$ from $v$ to $w$.
Given $\eta_i\in\mathcal{F}_{\mathcal{L},\zeta}(t_i)$ for $i=1,2$ and $t_i$ sufficiently small, write $\eta_2=\zeta\phi_2$ so that $W(\phi_2)\asymp t_2$ and $\norm{T(\phi_2)}\asymp\norm{T(\eta_2)}$ by \cref{l:left-prod}.
Then if the path $\eta_1$ ends at the vertex $w$, $\eta_1\gamma_w\phi_2$ is an admissible path with $W(\eta_1\gamma_w\phi_2)\asymp t_1t_2$, so there exists some fixed $r_0>0$ such that $W(\eta_1\gamma_w\phi_2)\geq r_0 t_1t_2$.
Thus there exists a path $\psi$ with $W(\psi)\asymp r_0$ such that $\eta_1\gamma_w\phi_2\psi\in\mathcal{F}_{\mathcal{L},\zeta}(r_0t_1t_2)$, and by \cref{l:left-prod}
\begin{equation*}
\rho(\eta_1\gamma_w\phi_2\psi)^q\succcurlyeq_q\bigl(\norm{T(\gamma_w)}\norm{T(\psi)}\bigr)^q\cdot\rho(\eta_1)^q\rho(\eta_2)^q.
\end{equation*}
But $\mathcal{L}$ is a finite graph (so there are only finitely many paths $\gamma_w$) and $W(\psi)\asymp r_0$ (so there are only finitely many paths $\psi$).
Thus $A_q(t_1)A_q(t_2)\preccurlyeq_q A_q(r_0 t_1t_2)$ and the limit exists for $q<0$ by supermultiplicativity.
To see the final claim, suppose $\zeta$ ends at the vertex $v$ and $\zeta'$ ends at the vertex $v'$ where $v,v'$ are both vertices in $\mathcal{L}$.
Let $\phi$ be any path in $\mathcal{L}$ from $v$ to $v'$.
Let $\Psi:\Omega^*_{\mathcal{L},\zeta'}\to\Omega^*_{\mathcal{L},\zeta}$ be given by $\Psi(\zeta'\eta)=\zeta\phi\eta$, and note that
\begin{equation}\label{e:incl}
\Psi(\mathcal{F}_{\mathcal{L},\zeta'}(W(\zeta')t))\subseteq\mathcal{F}_{\mathcal{L},\zeta}(W(\zeta\phi)t).
\end{equation}
Now if $\zeta'\eta\in\Omega^*_{\mathcal{L},\zeta'}$, by \cref{l:left-prod},
\begin{equation*}
\rho(\Psi(\zeta'\eta))=\norm{T(\zeta\phi\eta)}\asymp\norm{T(\eta)}\asymp \norm{T(\zeta'\eta)}=\rho(\zeta'\eta)
\end{equation*}
and combining this with \cref{e:incl} yields
\begin{equation*}
\sum_{\eta\in\mathcal{F}_{\mathcal{L},\zeta}(W(\zeta\phi)t)}\rho(\eta)^q\succcurlyeq_q \sum_{\eta\in\mathcal{F}_{\mathcal{L},\zeta'}(W(\zeta')t)}\rho(\eta)^q.
\end{equation*}
Since $\zeta$, $\zeta'$, and $\phi$ are fixed, it follows that $\tau_{\mathcal{L},\zeta}(q)\leq\tau_{\mathcal{L},\zeta'}(q)$.
The reverse inequality follows by the same argument with the roles of $\zeta$ and $\zeta'$ swapped.
\end{proof}
\begin{proposition}\label{p:ess-formula}
Suppose $\mathcal{L}$ is an essential loop class of $\mathcal{G}$.
Then if $\Delta$ is any net interval with neighbour set $\vs(\Delta)\in V(\mathcal{L})$ and $\nu=\mu|_{\Delta}$, we have
\begin{equation*}
\tau_{\mathcal{L}}(q)=\tau_\nu(q).
\end{equation*}
In particular, $\tau_{\mathcal{L}}(q)=\tau_\mu(q)$ for any $q\geq 0$.
\end{proposition}
\begin{proof}
This follows by the same argument as \cref{p:lq-lim}, observing that if $\zeta\in\Omega^*$ is a path ending in an essential loop class $\mathcal{L}$, then $\eta\in\Omega^*_{\mathcal{L},\zeta}$ if and only if $\eta\in\Omega^*$ and $\zeta$ is a prefix of $\eta$.
That $\tau_{\mathcal{L}}(q)=\tau_\mu(q)$ for $q\geq 0$ follows by standard arguments (see, for example, \cite[Prop. 3.1]{fl2009}).
\end{proof}
\begin{remark}\label{r:ess-unique}
In fact, using arguments similar to the proof of \cite[Thm. 4.5]{rut2021}, one can show that if $\mathcal{L}$ and $\mathcal{L}'$ are essential classes, then $\tau_{\mathcal{L}}=\tau_{\mathcal{L}'}$.
In practice, with the standard choices of iteration rules given in \cref{ex:uniform-transition} and \cref{ex:weighted-transition}, there will always be a unique essential class.
\end{remark}
\subsubsection{Loop class local dimensions}
Given an infinite path $\gamma=(e_n)_{n=1}^\infty\in\Omega^\infty$, recall that $\gamma|n=(e_1,\ldots,e_n)\in\Omega^n$.
We then define
\begin{equation*}
\underline{\dim}_{\loc}(\rho,\gamma)=\liminf_{n\to\infty}\frac{\log\rho(\gamma|n)}{\log W(\gamma|n)}
\end{equation*}
with similar definitions for the (upper) local dimension.
With this, we define
\begin{equation*}
E_{\mathcal{L},\zeta}(\alpha) = \{\gamma\in\Omega^\infty_{\mathcal{L},\zeta}:\underline{\dim}_{\loc}(\rho,\gamma)=\overline{\dim}_{\loc}(\rho,\gamma)=\alpha\}.
\end{equation*}
Now let $f_{\mathcal{L},\zeta}:\R\to\R\cup\{-\infty\}$ be given by
\begin{equation*}
f_{\mathcal{L},\zeta}(\alpha):=\dim_H E_{\mathcal{L},\zeta}(\alpha).
\end{equation*}
Note that $E_{\mathcal{L},\zeta}(\alpha)$ may be the empty set; by convention, we write $\dim_H\emptyset=-\infty$.
Following the theme for $L^q$-spectra, we have the following easy result.
\begin{lemma}
If $\mathcal{L}$ is a loop class and $\zeta,\zeta'\in\Omega^*$ both end in $\mathcal{L}$, then
\begin{equation*}
f_{\mathcal{L},\zeta}(\alpha)=f_{\mathcal{L},\zeta'}(\alpha).
\end{equation*}
\end{lemma}
\begin{proof}
Let $\zeta$ end at the vertex $v$ and $\zeta'$ end at the vertex $v'$, and let $\phi$ be a path in $\mathcal{L}$ from $v$ to $v'$.
Now if $\zeta'\gamma\in\Omega^\infty_{\mathcal{L},\zeta'}$, then $\zeta\phi\gamma\in\Omega^\infty_{\mathcal{L},\zeta}$ and by \cref{l:left-prod},
\begin{equation*}
\underline{\dim}_{\loc}(\rho,\zeta'\gamma)=\underline{\dim}_{\loc}(\rho,\zeta\phi\gamma).
\end{equation*}
Moreover, it is straightforward to verify that the map $\zeta'\gamma\mapsto\zeta\phi\gamma$ is bi-Lipschitz on its image, so that $f_{\mathcal{L},\zeta}(\alpha)\geq f_{\mathcal{L},\zeta'}(\alpha)$ for any $\alpha$.
The same argument yields the converse inequality, as required.
\end{proof}
\subsection{Multifractal formalism for irreducible loop classes}\label{ss:mf-Moran}
We maintain notation from the previous section, recalling that $\zeta\in\Omega^*$ is a path ending at a vertex in the loop class $\mathcal{L}$.
In light of the results in the previous section, the following notions are well-defined.
\begin{definition}
We define the \defn{loop class $L^q$-spectrum}, denoted by $\tau_{\mathcal{L}}(q)$, as $\tau_{\mathcal{L}}(q)=\tau_{\mathcal{L},\zeta}(q)$.
Similarly, we define the \defn{loop class multifractal spectrum} by $f_{\mathcal{L}}(\alpha)=f_{\mathcal{L},\zeta}(\alpha)$.
\end{definition}
For convenience, we write
\begin{align}\label{e:a-min-max}
\alpha_{\min}(\mathcal{L})&=\lim_{q\to\infty}\frac{\tau_{\mathcal{L}}(q)}{q}& \alpha_{\max}(\mathcal{L})&=\lim_{q\to-\infty}\frac{\tau_{\mathcal{L}}(q)}{q}.
\end{align}
The limits necessarily exist by concavity of $\tau_{\mathcal{L}}(q)$, and a straightforward argument shows that they are finite.
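Before proceeding, it may help to record what these quantities are in the simplest possible situation; the following special case is purely illustrative and is not used in the sequel.
Suppose that $\mathcal{L}$ consists of a single vertex, that every edge $e$ of $\mathcal{L}$ has the same weight $W(e)=r\in(0,1)$, and that the transition matrices are the $1\times 1$ matrices $(p_e)$ with $\sum_e p_e=1$.
If $\eta\in\mathcal{F}_{\mathcal{L},\zeta}(r^n)$ has loop edges $e_1,\ldots,e_n$ following $\zeta$, then $\rho(\eta)\asymp p_{e_1}\cdots p_{e_n}$ by \cref{l:left-prod}, so that
\begin{equation*}
\tau_{\mathcal{L}}(q)=\lim_{n\to\infty}\frac{1}{\log r^n}\log\sum_{\eta\in\mathcal{F}_{\mathcal{L},\zeta}(r^n)}\rho(\eta)^q=\frac{\log\sum_e p_e^q}{\log r},
\end{equation*}
which is the classical $L^q$-spectrum of a self-similar measure satisfying the strong separation condition.
In this case \cref{e:a-min-max} gives $\alpha_{\min}(\mathcal{L})=\log p_{\max}/\log r$ and $\alpha_{\max}(\mathcal{L})=\log p_{\min}/\log r$, where $p_{\max}$ and $p_{\min}$ denote the largest and smallest of the $p_e$ respectively.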
Our main result in this section is the following multifractal formalism, which relates the multifractal spectrum with the $L^q$-spectrum on the loop class.
\begin{theorem}\label{t:multi-f}
Let $\mathcal{L}$ be an irreducible loop class in $\mathcal{G}$.
Then $f_{\mathcal{L}}=\tau_{\mathcal{L}}^*$.
\end{theorem}
In particular, $f_{\mathcal{L}}$ is a concave function taking finite values precisely on the interval $[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$.
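We recall for clarity that $\tau_{\mathcal{L}}^*$ denotes the concave conjugate of $\tau_{\mathcal{L}}$, so that the conclusion of \cref{t:multi-f} reads
\begin{equation*}
f_{\mathcal{L}}(\alpha)=\tau_{\mathcal{L}}^*(\alpha)=\inf_{q\in\R}\bigl(q\alpha-\tau_{\mathcal{L}}(q)\bigr)\qquad\text{for all }\alpha\in\R,
\end{equation*}
where $\tau_{\mathcal{L}}^*(\alpha)=-\infty$ for $\alpha$ outside $[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$, matching the convention $\dim_H\emptyset=-\infty$.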
By definition, it suffices to show $f_{\mathcal{L},\zeta}=\tau_{\mathcal{L},\zeta}^*$ for a path $\zeta\in\Omega^*$ ending at a vertex in $\mathcal{L}$.
There are many ways to prove this result.
One could use a weighted version of the arguments in \cite{fen2009}, which are proven in a similar irreducible matrix-product setting.
Another option is to follow the arguments in \cite{fl2009}.
We find it most efficient to use the following result, which is a simplified ``symbolic'' version of \cite[Thm. 2.2]{fen2012}.
\begin{proposition}[\cite{fen2012}]\label{p:asymp-good}
Suppose for any $q\in\R$ such that the derivative $\tau_{\mathcal{L},\zeta}'(q)=\alpha$ exists, there exist numbers $b(q,k)$ and $c(q,k)$ such that the following properties hold:
\begin{enumerate}[nl,r]
\item We have $\lim_{k\to\infty}b(q,k)=0$.
\item Suppose $n\in\N$ and $\eta\in\mathcal{F}_{\mathcal{L},\zeta}(2^{-n})$.
Then for any $m\geq c(q,k)$, there are distinct paths $\eta_1,\ldots,\eta_N\in\mathcal{F}_{\mathcal{L},\zeta}(2^{-n-m})$ such that $\eta$ is a prefix of each $\eta_i$,
\begin{equation*}
N \geq 2^{m(\tau^*_{\mathcal{L},\zeta}(\alpha)-b(q,k))},
\end{equation*}
and
\begin{equation*}
2^{-m(\tau_{\mathcal{L},\zeta}'(q)+1/k)}\leq \frac{\rho(\eta_i)}{\rho(\eta)}\leq 2^{-m(\tau_{\mathcal{L},\zeta}'(q)-1/k)}.
\end{equation*}
\end{enumerate}
Then $f_{\mathcal{L},\zeta}(\alpha)=\tau_{\mathcal{L},\zeta}^*(\alpha)$ for each $\alpha\in\R$.
\end{proposition}
We first observe the following standard counting lemma, which is similar to \cite[Prop. 3.3]{fl2009}, but the proof is easier.
\begin{lemma}\label{l:counting}
Suppose $\mathcal{L}$ is any loop class (not necessarily irreducible) and the derivative $\tau_{\mathcal{L},\zeta}'(q)=\alpha$ exists.
Then for any $\delta>0$, there is $t_0=t_0(\delta,q)$ such that for all $0<t<t_0$, there is $F^*(t)\subset \mathcal{F}_{\mathcal{L},\zeta}(t)$ such that
\begin{enumerate}[nl,r]
\item $\# F^*(t)\geq t^{-\tau_{\mathcal{L},\zeta}^*(\alpha)+\delta(|q|+1)}$ and
\item $t^{\alpha+\delta}\leq \rho(\eta)\leq t^{\alpha-\delta}$ for each $\eta\in F^*(t)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Write $A_q(t)=\sum_{\eta\in\mathcal{F}_{\mathcal{L},\zeta}(t)}\rho(\eta)^q$.
Since $\tau_{\mathcal{L},\zeta}'(q)=\alpha$ exists, choose $\epsilon>0$ such that
\begin{align*}
(\alpha-\delta/2)\epsilon&\leq|\tau_{\mathcal{L},\zeta}(q\pm \epsilon)-\tau_{\mathcal{L},\zeta}(q)|\leq(\alpha+\delta/2)\epsilon.
\end{align*}
Let $0<\gamma<\min\{\epsilon\delta/6,\delta/2,1\}$ and observe that $\gamma$ depends only on $\delta$ and $q$.
Then since the limit defining $\tau_{\mathcal{L},\zeta}$ exists by \cref{l:lq-limit}, there is $t_0$ depending only on $\gamma$ and $q$ such that for all $0<t<t_0$,
\begin{equation*}
t^{\tau_{\mathcal{L},\zeta}(q+z)+\gamma}\leq A_{q+z}(t)\leq t^{\tau_{\mathcal{L},\zeta}(q+z)-\gamma}
\end{equation*}
for each $z\in\{0,-\epsilon,\epsilon\}$.
Next, write $\mathcal{F}_{\mathcal{L},\zeta}(t)=F^-(t)\cup F^*(t)\cup F^+(t)$ where
\begin{align*}
F^-(t) &= \{\eta\in\mathcal{F}_{\mathcal{L},\zeta}(t):\rho(\eta)\leq t^{\alpha+\delta}\} &F^+(t) &= \{\eta\in\mathcal{F}_{\mathcal{L},\zeta}(t):\rho(\eta)\geq t^{\alpha-\delta}\}
\end{align*}
and $F^*(t) = \mathcal{F}_{\mathcal{L},\zeta}(t)\setminus(F^-(t)\cup F^+(t))$.
By definition, (ii) holds for $\eta\in F^*(t)$.
Combining the above inequalities with the bound $\rho(\eta)\geq t^{\alpha-\delta}$ for $\eta\in F^+(t)$ gives
\begin{align*}
\sum_{\eta\in F^+(t)}\rho(\eta)^q&\leq A_{q+\epsilon}(t)t^{-\epsilon(\alpha-\delta)}\leq t^{\tau_{\mathcal{L},\zeta}(q)+\epsilon\delta/2-\gamma}
\end{align*}
with the analogous inequality $\sum_{\eta\in F^-(t)}\rho(\eta)^q\leq A_{q-\epsilon}(t)t^{\epsilon(\alpha+\delta)}\leq t^{\tau_{\mathcal{L},\zeta}(q)+\epsilon\delta/2-\gamma}$ for $F^-(t)$.
Then since $\gamma,t\in(0,1)$ and $\gamma<\epsilon\delta/6$, after shrinking $t_0$ if necessary so that $t^{-\gamma}\geq 3$ for all $0<t<t_0$,
\begin{align*}
\sum_{\eta\in F^*(t)}\rho(\eta)^q &\geq t^{\tau_{\mathcal{L},\zeta}(q)+\gamma}-2t^{\tau_{\mathcal{L},\zeta}(q)+\epsilon\delta/2-\gamma} \geq t^{\tau_{\mathcal{L},\zeta}(q)+2\gamma}(t^{-\gamma}-2)\geq t^{\tau_{\mathcal{L},\zeta}(q)+2\gamma}.
\end{align*}
But now for each $\eta\in F^*(t)$, we have $\rho(\eta)^q\leq\max\{t^{(\alpha+\delta)q},t^{(\alpha-\delta)q}\}=t^{\alpha q-\delta|q|}$ so that
\begin{equation*}
\# F^*(t)\geq t^{-\alpha q+\delta|q|}\sum_{\eta\in F^*(t)}\rho(\eta)^q\geq t^{-\tau_{\mathcal{L},\zeta}^*(\alpha)+\delta|q|+2\gamma}\geq t^{-\tau_{\mathcal{L},\zeta}^*(\alpha)+\delta(|q|+1)}
\end{equation*}
since $\gamma<\delta/2$, giving (i).
\end{proof}
\begin{proofref}{t:multi-f}
Let $q\in\R$ with $\tau_{\mathcal{L},\zeta}'(q)=\alpha$, $n\in\N$ and $\eta\in\mathcal{F}_{\mathcal{L},\zeta}(2^{-n})$.
First suppose we are given some large $m\in\N$ and a path $\phi\in\mathcal{F}_{\mathcal{L},\zeta}(2^{-m})$.
We construct a path $\Psi(\phi)$ as follows.
Write $\phi=\zeta\psi$.
By the irreducibility assumption, there is a path $\gamma\in\mathcal{H}$ such that $\eta_0=\eta\gamma\psi$ is an admissible path and by \cref{l:left-prod}
\begin{equation}\label{e:rp-constants}
\rho(\eta_0)\asymp\rho(\eta)\norm{T(\gamma)}\norm{T(\psi)}\asymp \rho(\eta)\rho(\phi).
\end{equation}
Since $W(\eta\gamma)\asymp 2^{-n}$ and $W(\phi)\asymp 2^{-m}$, we have $W(\eta_0)\geq 2^{-n-m-m'}$ for some $m'$ depending only on the (fixed) choice of $\zeta$.
Again by the irreducibility assumption, we can thus obtain $\Psi(\phi)\in\mathcal{F}_{\mathcal{L},\zeta}(2^{-n-m-m'})$ such that $\eta_0$ is a prefix of $\Psi(\phi)$ and $\rho(\Psi(\phi))\asymp\rho(\eta_0)$.
Let $C$ be a fixed constant such that $C^{-1}\rho(\phi)\leq \rho(\Psi(\phi))/\rho(\eta)\leq C\rho(\phi)$.
Now apply \cref{l:counting} with $\delta=1/2k$: there is $c_0(q,k)$ such that for all $m_0\geq c_0(q,k)$ there are distinct paths $\phi_1,\ldots,\phi_N\in \mathcal{F}_{\mathcal{L},\zeta}(2^{-m_0})$ with $N\geq 2^{m_0(\tau_{\mathcal{L},\zeta}^*(\alpha)-(|q|+1)/2k)}$ and $2^{-m_0(\alpha+1/2k)}\leq\rho(\phi_i)\leq 2^{-m_0(\alpha-1/2k)}$.
Now with $m=m_0+m'$ and $\eta_i=\Psi(\phi_i)$, we observe that $\eta_1,\ldots,\eta_N\in\mathcal{F}_{\mathcal{L},\zeta}(2^{-n-m})$ and
\begin{equation*}
N\geq 2^{m_0(\tau_{\mathcal{L},\zeta}^*(\alpha)-(|q|+1)/2k)}\geq 2^{m(\tau_{\mathcal{L},\zeta}^*(\alpha)-(|q|+1)/k)}
\end{equation*}
and
\begin{equation*}
\frac{\rho(\eta_i)}{\rho(\eta)}\leq C\rho(\phi_i)\leq C 2^{-m_0(\alpha-1/2k)}\leq 2^{-m(\alpha-1/k)}
\end{equation*}
with a similar lower bound, for all $m\geq c_0(q,k)+m'$ sufficiently large depending only on fixed quantities.
Thus the conditions for \cref{p:asymp-good} are satisfied, giving the desired result.
\end{proofref}
\subsection{Regular points in level sets of local dimensions}
As before, we fix a path $\zeta\in\Omega^*$ ending at a vertex in the loop class $\mathcal{L}$.
Recall that
\begin{equation*}
E_{\mathcal{L},\zeta}(\alpha) = \{\gamma\in\Omega^\infty_{\mathcal{L},\zeta}:\underline{\dim}_{\loc}(\rho,\gamma)=\overline{\dim}_{\loc}(\rho,\gamma)=\alpha\}.
\end{equation*}
We wish to show that the set $E_{\mathcal{L},\zeta}(\alpha)$ can be approximated (in the sense of dimensions) by sets of points which have particularly nice properties.
\begin{definition}\label{d:xi-reg}
Let $\xi$ be a finite path (not necessarily rooted) in $\mathcal{G}$.
We say that a path $\gamma=(e_n)_{n=1}^\infty\in\Omega^\infty$ is \defn{$\xi$-regular} if there exists a monotonically increasing sequence $(n_j)_{j=1}^\infty\subset\N$ such that $\xi$ is a prefix of $(e_{n_j},e_{n_j+1},\ldots)$ for each $j$ and
\begin{equation*}
\lim_{j\to\infty}\frac{n_{j+1}}{n_j}=1.
\end{equation*}
\end{definition}
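To illustrate the definition, if $\xi$ occurs along an arithmetic progression $n_j=jm$ of positions for some fixed $m\in\N$, then $\gamma$ is $\xi$-regular, since
\begin{equation*}
\lim_{j\to\infty}\frac{n_{j+1}}{n_j}=\lim_{j\to\infty}\frac{j+1}{j}=1;
\end{equation*}
in contrast, a lacunary sequence of occurrences such as $n_j=2^j$ has $n_{j+1}/n_j=2$ for all $j$ and does not witness $\xi$-regularity.
Informally, $\gamma$ is $\xi$-regular when copies of $\xi$ occur along $\gamma$ with gaps satisfying $n_{j+1}-n_j=o(n_j)$.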
This will be of key importance in \cref{s:mf-properties}.
The proof of the following result is very similar to \cite[Prop. 3.2]{fen2009}, so we are somewhat terse with details.
The irreducibility hypothesis is critical in order to obtain this result.
\begin{theorem}\label{t:reg-sub}
Suppose $\mathcal{L}$ is an irreducible loop class and $\zeta\in\Omega^*$ a path ending at a vertex $v$ in $\mathcal{L}$.
Then for any $\alpha\in[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$ and finite path $\xi$ contained in $\mathcal{L}$ beginning at the vertex $v$, there exists $\emptyset\neq\Gamma=\Gamma(\xi)\subset E_{\mathcal{L},\zeta}(\alpha)$ such that
\begin{equation*}
\dim_H\Gamma = \dim_H E_{\mathcal{L},\zeta}(\alpha) = f_{\mathcal{L}}(\alpha)
\end{equation*}
and $\Gamma$ is composed only of $\xi$-regular points.
\end{theorem}
\begin{proof}
If $\mathcal{L}$ is simple, the result is immediate, so for the remainder of the proof we assume that $\mathcal{L}$ is not simple.
All cylinders in the proof are taken relative to $\Omega^\infty_{\mathcal{L},\zeta}$.
Set
\begin{align*}
F(\alpha; t,\epsilon) &:= \bigl\{\eta\in\mathcal{F}_{\mathcal{L},\zeta}(t):t^{\alpha+\epsilon}\leq \rho(\eta)\leq t^{\alpha-\epsilon}\bigr\}\\
G(\alpha; s,\epsilon) &:= \bigcap_{0<t\leq s}\bigcup_{\eta\in F(\alpha;t,\epsilon)}[\eta].
\end{align*}
Of course, if $\gamma\in E_{\mathcal{L},\zeta}(\alpha)$, then for any $\epsilon>0$ we have $\gamma\in G(\alpha; s,\epsilon)$ for all $s>0$ sufficiently small (depending on $\epsilon$) and thus
\begin{equation*}
E_{\mathcal{L},\zeta}(\alpha)\subseteq\bigcup_{s>0}G(\alpha; s,\epsilon).
\end{equation*}
Since each cylinder $[\eta]$ where $\eta\in F(\alpha; t,\epsilon)$ has diameter $W(\eta)\asymp t$, for any $s>0$ and $\epsilon>0$,
\begin{equation*}
\dim_H G(\alpha; s,\epsilon)\leq \underline{\dim}_B G(\alpha; s,\epsilon)\leq\liminf_{t\to 0}\frac{\log \# F(\alpha; t,\epsilon)}{-\log t}.
\end{equation*}
This holds for any $\epsilon>0$.
Thus by countable stability of the Hausdorff dimension,
\begin{equation}\label{e:lim-H}
\dim_H E_{\mathcal{L},\zeta}(\alpha)\leq\liminf_{\epsilon\to 0}\liminf_{t\to 0}\frac{\log \# F(\alpha; t,\epsilon)}{-\log t}.
\end{equation}
We now turn to the construction of the set $\Gamma$.
For the remainder of this proof, unless otherwise stated, all implicit constants may depend on the (fixed) paths $\zeta$ and $\xi$.
Let $\delta>0$ be arbitrary.
By \cref{e:lim-H}, there are strictly monotonic sequences $(t_j)_{j=1}^\infty\subset (0,1)$ and $(\epsilon_j)_{j=1}^\infty$ both tending to 0 such that
\begin{equation}\label{e:tj-def}
\frac{\log \# F(\alpha; t_j,\epsilon_j)}{-\log t_j}>\dim_H E_{\mathcal{L},\zeta}(\alpha)-\delta
\end{equation}
for each $j\in\N$.
Define a sequence $\{t_j^*\}_{j=1}^\infty$ by
\begin{equation*}
\underbrace{t_1,\ldots,t_1}_{N_1},\underbrace{t_2,\ldots,t_2}_{N_2},\ldots,\underbrace{t_i,\ldots,t_i}_{N_i},\ldots
\end{equation*}
where $N_j$ is defined recursively by $N_1=1$ and, for $j\geq 2$,
\begin{equation*}
N_j=2^{-\log t_{j+1}+N_{j-1}}.
\end{equation*}
For each $i$, let $A_i$ denote the set of indices $j\in\N$ where $t_j^*=t_i$.
Set $\epsilon_j^*=\epsilon_i$ when $t_j^*=t_i$.
Since there are only finitely many vertices in $\mathcal{L}$ and finitely many possible dimensions of transition matrices, by the pigeonhole principle, for each $i$ there exists a subset $G_i\subset F(\alpha;t_i,\epsilon_i)$ such that the paths in $G_i$ all begin at a common vertex and all end at a common vertex, and $\# G_i\succcurlyeq\# F(\alpha;t_i,\epsilon_i)$.
Let $G_j^*=G_i$ when $t_j^*=t_i$.
Recall that the path $\xi$ begins at vertex $v$.
There exist constants $C,D>0$ such that by repeatedly applying irreducibility of $\mathcal{L}$, for each path $\eta_j^*\in G_j^*$, there exist paths $\phi(\eta_j^*),\psi(\eta_j^*)\in\mathcal{H}$ such that the following two conditions hold:
\begin{enumerate}[nl,r]
\item the path $\theta(\eta_j^*):=\xi\phi(\eta_j^*)\eta_j^*\psi(\eta_j^*)$ is a cycle beginning and ending at vertex $v$, and
\item for any $\gamma=\theta(\eta_1^*)\ldots\theta(\eta_k^*)\eta'$ where $\eta'$ is a prefix of $\theta(\eta_{k+1}^*)$,
\begin{equation*}
D^k\prod_{i=1}^{k}\norm{T(\eta_i^*)}\geq\norm{T(\gamma)}\geq C^k\prod_{i=1}^{k+1}\norm{T(\eta_i^*)}.
\end{equation*}
\end{enumerate}
Then let for $n\in\N$
\begin{equation*}
\mathcal{G}_n=\bigl\{[\zeta\theta(\eta_1^*)\ldots \theta(\eta_n^*)]:(\eta^*_1,\ldots,\eta^*_n)\in\prod_{i=1}^n G_i^*\bigr\}
\end{equation*}
which is a nested sequence of families of cylinders, and set
\begin{equation*}
\Gamma_\delta= \bigcap_{n=1}^\infty\bigcup_{I\in\mathcal{G}_n}I.
\end{equation*}
A direct computation shows that $\Gamma_\delta\subset E_{\mathcal{L},\zeta}(\alpha)$.
We now show that $\dim_H\Gamma_\delta\geq\dim_H E_{\mathcal{L},\zeta}(\alpha)-\delta$.
By \cite[Prop. 3.1]{flw2002} (the technical assumptions are immediate to verify), $\dim_H\Gamma_\delta=\liminf_{k\to\infty}a_k$ where $a_k$ satisfies
\begin{equation*}
\sum_{(\eta_1^*,\ldots,\eta_k^*)\in G_1^*\times\cdots\times G_k^*}W(\theta(\eta_1^*)\ldots \theta(\eta_k^*))^{a_k}=1.
\end{equation*}
Let $1\leq j\leq k$ and choose $i$ such that $j\in A_i$.
As $\xi$ is fixed, $W(\theta(\eta_j^*))\asymp W(\eta_j^*)\asymp t_j^*=t_i$ for each $\eta_j^*\in G_j^*\subset F(\alpha;t_j^*,\epsilon_j^*)\subset\mathcal{F}_{\mathcal{L},\zeta}(t_j^*)$.
In particular, there is a uniform constant $r>0$ such that $W(\eta_j^*)\geq rt_j^*$.
Thus since $(t_j^*)\to 0$ and $\# F(\alpha; t_j^*,\epsilon_j^*)\to\infty$,
\begin{align*}
\dim_H\Gamma_\delta&\geq\liminf_{k\to\infty}\frac{\log\prod_{j=1}^k \# G_j^*}{-\log\prod_{j=1}^k(rt_j^*)}\\
&\geq \liminf_{k\to\infty}\frac{-k+\log\prod_{j=1}^k\# F(\alpha;t_j^*,\epsilon_j^*)}{k-\log\prod_{j=1}^k t_j^*}\\
&= \liminf_{k\to\infty}\frac{\log\prod_{j=1}^k\# F(\alpha;t_j^*,\epsilon_j^*)}{-\log\prod_{j=1}^k t_j^*}.
\end{align*}
Now by definition of the $N_j$ and \cref{e:tj-def}, it follows that
\begin{align*}
\dim_H\Gamma_\delta&\geq \dim_H E_{\mathcal{L},\zeta}(\alpha)-\delta
\end{align*}
as claimed.
Take $\Gamma=\bigcup_{n=1}^\infty\Gamma_{2^{-n}}$, and the result follows.
\end{proof}
\section{Multifractal analysis of self-similar measures}\label{s:mf-properties}
We continue to use the notation of the previous section.
We fix a WIFS $(S_i,p_i)_{i\in\mathcal{I}}$ with self-similar measure $\mu$.
In particular, we assume that $\Phi$ is an iteration rule with corresponding finite transition graph $\mathcal{G}$, as described in \cref{ss:fnc}.
\subsection{Local dimensions and regular points}
Intuitively, the multifractal analysis of self-similar sets satisfying the finite neighbour condition is related to the multifractal analysis results for loop classes from the preceding section.
However, the exact relationship is somewhat more complicated to establish: while the local dimension of $\rho$ at a path $\gamma$ depends only on the single sequence of edges determining $\gamma$, the local dimension of $\mu$ at a point $x\in K$ can also depend on net intervals which are adjacent to net intervals containing $x$.
This happens when $x$ is the shared boundary point of two distinct net intervals, but it can also happen when $x$ is an interior point approximated very well by boundary points (so that balls $B(x,r)$ overlap significantly with neighbouring net intervals, for infinitely many values of $r$).
In order to better understand this adjacency structure, we introduce the notion of the approximation sequence of an interior point, as well as the set of regular points $K_R\subseteq K$.
Let $x\in K$ be an interior point, which we recall means that $\pi^{-1}(x)=\{\gamma\}$ is a single (infinite) path.
Let $(\Delta_i)_{i=0}^\infty$ with $\Delta_0=[0,1]$ and each $\Delta_{i+1}$ a child of $\Delta_i$ denote the sequence of net intervals corresponding to $\gamma$.
Of course, $\Delta_n=\pi(\gamma|n)$.
Given some $i$ and $[a,b]=\Delta_{i+1}\subseteq\Delta_i=[c,d]$, by the reductions described in \cref{r:pr-ch}, exactly one of $c=a<b<d$, $c<a<b=d$, or $c<a<b<d$ must hold.
Moreover, since $x$ is an interior point, it cannot be that, for all sufficiently large $k$, the net intervals $\Delta_k$ share a common left (resp.\ right) endpoint which is also the right (resp.\ left) endpoint of some adjacent net interval.
In particular, there exists a monotonically increasing infinite sequence $(n_j)_{j=1}^\infty$ such that there exists a neighbourhood of $\Delta_{n_j+2}$ in $K$ which is contained entirely in $\Delta_{n_j}$.
We now make the following definition:
\begin{definition}\label{d:reg-point}
Given an interior point $x\in K$, we call the sequence $(n_j)_{j=1}^\infty$ described above the \defn{approximation sequence} of $x$.
We then say that $x$ is \defn{regular} if its approximation sequence satisfies
\begin{equation*}
\lim_{j\to\infty}\frac{n_{j+1}}{n_j}=1.
\end{equation*}
We denote the set of regular points in $K$ by $K_R$.
\end{definition}
The intuition is that interior points in $K$ which are approximated very well by boundary points are contained in long sequences of net intervals which share left endpoints or right endpoints, so that regular points are those which are poorly approximated by boundary points.
The main point of the approximation sequence is that $x$ is bounded uniformly away from the neighbouring net intervals of $\Delta_{n_j}=\pi(\gamma|n_j)$ for each $j\in\N$.
To be precise, we have the following lemma.
\begin{lemma}\label{l:ap-sub}
Let $x$ be an interior point with approximation sequence $(n_j)_{j=1}^\infty$.
There exists some $R>0$ depending only on the IFS and $m=m(R)\in\N$ such that, for any $j\in\N$,
\begin{equation*}
\Delta_{n_j+m}\subseteq B(x, R\cdot\diam(\Delta_{n_j}))\cap K\subseteq\Delta_{n_j}.
\end{equation*}
\end{lemma}
\begin{proof}
Since there is a neighbourhood of $\Delta_{n_j+2}$ in $K$ contained entirely in $\Delta_{n_j}$, either $\Delta_{n_j+2}\subseteq\Delta_{n_j}^\circ$ or $\Delta_{n_j+2}$ shares an endpoint with $\Delta_{n_j}$ but there is no other adjacent net interval in $\mathcal{P}_{n_j}$.
We only treat the first case; the second follows by similar arguments.
We recall by \cref{p:ttype} that the position index $q(\Delta_{i+1},\Delta_i)$ depends only on the neighbour set of $\Delta_i$.
Thus if we write $\Delta_{n_j+2}=[c,d]$ and $\Delta_{n_j}=[a,b]$ where $a<c<d<b$, there are only finitely many positive values for $(c-a)/(d-c)$ and $(b-d)/(d-c)$.
The existence of $R$ follows.
Moreover, recall that $W(e)=\diam(\Delta_{i+1})/\diam(\Delta_i)$ when $\Delta_{i+1}$ is the child of $\Delta_i$ corresponding to the edge $e$.
Therefore, with $W_{\min}=\min\{W(e):e\in E(\mathcal{G})\}$ and $W_{\max}=\max\{W(e):e\in E(\mathcal{G})\}<1$, it suffices to take $m=m(R)$ such that $W_{\max}^{m}\leq R$, since then $\diam(\Delta_{n_j+m})\leq R\cdot\diam(\Delta_{n_j})$.
\end{proof}
Using the approximation sequence, we can establish some basic relationships between local dimensions and their loop class analogues.
A similar version of the following result was first proven in \cite{hr2021}.
\begin{proposition}\label{p:nper-dim}
Suppose $x$ is an interior point with unique symbolic representation $\gamma$.
\begin{enumerate}[nl,r]
\item We always have
\begin{equation*}
\underline{\dim}_{\loc}(\mu,x)\leq\underline{\dim}_{\loc}(\rho,\gamma)\leq \overline{\dim}_{\loc}(\mu,x)\leq\overline{\dim}_{\loc}(\rho,\gamma).
\end{equation*}
\item If $\dim_{\loc}(\rho,\gamma)$ exists, then $\overline{\dim}_{\loc}(\mu,x)=\dim_{\loc}(\rho,\gamma)$.
\item If $\dim_{\loc}(\mu,x)$ exists, then $\underline{\dim}_{\loc}(\rho,\gamma)=\dim_{\loc}(\mu,x)$.
\end{enumerate}
\end{proposition}
\begin{proof}
It suffices to show (i).
Indeed, if $\dim_{\loc}(\rho,\gamma)$ exists, the chain in (i) pinches $\overline{\dim}_{\loc}(\mu,x)$ between $\underline{\dim}_{\loc}(\rho,\gamma)$ and $\overline{\dim}_{\loc}(\rho,\gamma)$, giving (ii); if $\dim_{\loc}(\mu,x)$ exists, then $\underline{\dim}_{\loc}(\rho,\gamma)$ is pinched between $\underline{\dim}_{\loc}(\mu,x)$ and $\overline{\dim}_{\loc}(\mu,x)$, giving (iii).
For each $t>0$ let $n(t)$ be maximal such that $W(\gamma|n(t))\leq t$.
Then if $\Delta_t=\pi(\gamma|n(t))$, we have $\Delta_t\subseteq B(x,t)$ so that
\begin{align*}
\overline{\dim}_{\loc}(\mu,x)= \limsup_{t\to 0}\frac{\log\mu(B(x,t))}{\log t}&\leq\limsup_{t\to 0}\frac{\log\rho(\gamma|n(t))}{\log t}\\
&=\overline{\dim}_{\loc}(\rho,\gamma).
\end{align*}
Replacing the limit superior with the limit inferior, we also have that $\underline{\dim}_{\loc}(\mu,x)\leq\underline{\dim}_{\loc}(\rho,\gamma)$.
To get the remaining bound, let $x$ have approximation sequence $(n_k)_{k=1}^\infty$ and let $R,m$ be as in \cref{l:ap-sub}.
Since $B(x,R\cdot W(\gamma|n_k))\cap K\subseteq\pi(\gamma|n_k)$ by \cref{l:ap-sub}, we have $\mu(B(x,R\cdot W(\gamma|n_k)))\leq\rho(\gamma|n_k)$, while $\log(R\cdot W(\gamma|n_k))\asymp\log W(\gamma|n_k)$.
Therefore
\begin{align*}
\overline{\dim}_{\loc}(\mu,x)&=\limsup_{t\to 0}\frac{\log\mu(B(x,t))}{\log t}\\
&\geq\limsup_{k\to\infty}\frac{\log\mu(B(x,R\cdot W(\gamma|n_k)))}{\log (R\cdot W(\gamma|n_k))}\\
&\geq\limsup_{k\to\infty}\frac{\log\rho(\gamma|n_k)}{\log W(\gamma|n_k)}\\
&\geq\underline{\dim}_{\loc}(\rho,\gamma)
\end{align*}
as required.
\end{proof}
When the local dimension exists, the following lemma states that we can extend the nice properties along the approximation sequence to net intervals at comparable levels.
A similar statement holds when the loop class local dimension exists.
\begin{lemma}\label{l:approx-reg}
Suppose $x$ is an interior point with symbolic representation $\gamma$.
Let $x$ have approximation sequence $(n_j)_{j=1}^\infty$ and let $(k_j)_{j=1}^\infty\subset\N$ satisfy $\lim_{j\to\infty}\frac{k_j}{n_j}=0$.
\begin{enumerate}[nl,r]
\item Suppose $\dim_{\loc}(\mu,x)$ exists.
Then
\begin{equation*}
\lim_{j\to\infty}\frac{\log\rho(\gamma|n_j-k_j)}{\log\rho(\gamma|n_j)}=1.
\end{equation*}
\item Suppose $\dim_{\loc}(\rho,\gamma)$ exists.
Then with $m$ from \cref{l:ap-sub},
\begin{equation*}
\lim_{j\to\infty}\frac{\log \mu\bigl(B(x, W(\gamma|n_j+m))\bigr)}{\log \mu\bigl(B(x,W(\gamma|n_j+m+k_j))\bigr)}=1
\end{equation*}
\end{enumerate}
\end{lemma}
\begin{proof}
We first see (i).
For each $i$ let $\Delta_i$ have symbolic representation $\gamma|i$, set $t_i=\diam(\Delta_{n_i})$, and let $\alpha=\dim_{\loc}(\mu,x)$.
By \cref{l:ap-sub}, there exists some $R>0$ and $m\in\N$ such that
\begin{equation}\label{e:rho-bd}
B(x,R t_j)\cap K\subseteq\Delta_{n_j}\subseteq\Delta_{n_j-k_j}\subseteq B(x,c_j t_j)
\end{equation}
where $c_j=W_{\min}^{-k_j-1}$.
But then since $-\log (c_jt_j)\asymp n_j-k_j$,
\begin{align*}
\lim_{j\to\infty}\frac{(k_j+1)\log W_{\min}}{\log c_jt_j} =\lim_{j\to\infty}\frac{k_j}{n_j-k_j}=0
\end{align*}
so that
\begin{align*}
\lim_{j\to\infty}\frac{\log\mu(B(x,c_j t_j))}{\log t_j} &= \lim_{j\to\infty}\frac{\log\mu(B(x,c_j t_j))}{(k_j+1)\log W_{\min}+\log c_jt_j}\\
&= \lim_{j\to\infty}\frac{\log\mu(B(x,c_j t_j))}{\log c_j t_j}=\alpha.
\end{align*}
Arguing similarly, we also have $\alpha=\lim_{j\to\infty}\frac{\log\mu(B(x,R t_j))}{\log t_j}$.
Thus
\begin{equation*}
\lim_{j\to\infty}\frac{\log\mu(B(x,c_j t_j))}{\log\mu(B(x,R t_j))}=1
\end{equation*}
and the result follows from \cref{e:rho-bd}.
The proof of (ii) follows similarly after observing that
\begin{align*}
\Delta_{n_j}&\supseteq B(x,R\cdot W(\gamma|n_j))\supseteq B(x,W(\gamma|n_j+m))\\
&\supseteq B(x,W(\gamma|n_j+m+k_j))\supseteq \Delta_{n_j+m+k_j}.
\end{align*}
\end{proof}
Finally, the situation is nicest when $x\in K_R$ is a regular point.
Note that this strengthens the usual observations in \cref{p:nper-dim}.
\begin{corollary}\label{c:reg-loc-dim}
Suppose $x\in K_R$ is a regular point with unique symbolic representation $\gamma$ eventually in the loop class $\mathcal{L}$.
If either $\dim_{\loc}(\mu,x)$ exists or $\dim_{\loc}(\rho,\gamma)$ exists, then they both exist and are equal.
\end{corollary}
\begin{proof}
We see this when $\dim_{\loc}(\rho,\gamma)=\alpha$ exists; the proof when $\dim_{\loc}(\mu,x)$ exists is analogous.
Set $k_j=n_{j+1}-n_j$, where $(n_j)_{j=1}^\infty$ is the approximation sequence of $x$.
Then for all $i$ sufficiently large, we have $n_j + m\leq i\leq n_j+m+k_j$ for some $j$.
Then since $x$ is a regular point, \cref{l:approx-reg} applies with $(k_j)_{j=1}^\infty$ and
\begin{align*}
\limsup_{i\to\infty}\frac{\log\mu\bigl(B(x,W(\gamma|i))\bigr)}{\log W(\gamma|i)}&\leq \limsup_{j\to\infty}\frac{\log \mu\Bigl(B\bigl(x,W(\gamma|(n_j+m))\bigr)\Bigr)}{\log W(\gamma|(n_j+m))}\\
&\leq \limsup_{j\to\infty}\frac{\log\rho(\gamma|n_j)}{\log W(\gamma|n_j)}=\alpha.
\end{align*}
The lower bound follows similarly, so that $\dim_{\loc}(\mu,x)=\alpha$.
\end{proof}
\subsection{The upper bound for the multifractal spectrum}
Set
\begin{equation*}
E_\mu(\alpha;\mathcal{L})=\{x\in\kint_{\mathcal{L}}:\dim_{\loc}(\mu,x)=\alpha\}=\kint_{\mathcal{L}}\cap E_\mu(\alpha).
\end{equation*}
Given a path $\zeta\in\Omega^*$ ending at a vertex in $\mathcal{L}$, one can think of $E_\mu(\alpha;\mathcal{L})\cap\pi(\zeta)$ as an analogue of the set $E_{\mathcal{L},\zeta}(\alpha)$ from \cref{ss:symb-defs}.
In \cref{t:multi-f} the upper bound $f_{\mathcal{L}}(\alpha)\leq\tau_{\mathcal{L}}^*(\alpha)$ always holds, with no assumptions on $\mathcal{L}$.
Here, we show that $\tau_{\mathcal{L}}^*(\alpha)$ is also an upper bound for the Hausdorff dimension of the level sets $E_\mu(\alpha;\mathcal{L})$.
Compare part (i) in \cref{t:m-upper-bound} with \cite[Prop. 4.4]{hr2021}.
Note that our definition of $\alpha_{\min}(\mathcal{L})$ and $\alpha_{\max}(\mathcal{L})$ (as defined in \cref{e:a-min-max}) is formally different from that paper.
Regardless, one can show that they coincide when $\mathcal{L}$ is an irreducible loop class.
\begin{theorem}\label{t:m-upper-bound}
Let $(S_i,p_i)_{i\in\mathcal{I}}$ be a WIFS satisfying the finite neighbour condition with associated self-similar measure $\mu$.
Let $\mathcal{L}$ be a loop class.
Then
\begin{enumerate}[nl,r]
\item $\dim_{\loc}(\mu,x)\in[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$ for any $x\in \kint_{\mathcal{L}}$ for which the local dimension exists, and
\item $\dim_H E_\mu(\alpha;\mathcal{L})\leq\tau_{\mathcal{L}}^*(\alpha)$ for any $\alpha\in\R$.
\end{enumerate}
\end{theorem}
We first recall some notation from \cref{ss:mf-Moran}.
Fix some $\Delta_0\in\mathcal{F}$ such that $\vs(\Delta_0)\in V(\mathcal{L})$.
Let $\Delta_0$ have symbolic representation $\zeta_0$ and set
\begin{equation*}
A_q(t) = \sum_{\eta\in\mathcal{F}_{\mathcal{L},\zeta_0}(t)}\rho(\eta)^q
\end{equation*}
so that
\begin{align*}
\tau_{\mathcal{L}}(q)=\tau_{\mathcal{L},\zeta_0}(q)&=\lim_{t\to 0}\frac{\log A_q(t)}{\log t}.
\end{align*}
The projection $\pi$ taking paths in $\Omega^*$ to net intervals in $\mathcal{P}$ restricts to the map
\begin{equation*}
\pi:\Omega^*_{\mathcal{L},\zeta_0}\to\{\Delta\in\mathcal{P}:\Delta\subseteq\Delta_0,\vs(\Delta)\in V(\mathcal{L})\}.
\end{equation*}
We recall that $\rho=\mu\circ\pi$.
As defined in \cref{t:reg-sub}, we also set
\begin{equation*}
F(\alpha; t,\epsilon) := \bigl\{\eta\in\mathcal{F}_{\mathcal{L},\zeta_0}(t):t^{\alpha+\epsilon}\leq \rho(\eta)\leq t^{\alpha-\epsilon}\bigr\}.
\end{equation*}
We first prove the following standard counting result on the size of the sets $F(\alpha; t,\epsilon)$.
This is essentially the same as, for example, \cite[Lem. 4.1]{ln1999}.
\begin{lemma}\label{l:ft-count}
Let $\alpha\geq 0$ be arbitrary and $q\in\partial \tau_{\mathcal{L}}^*(\alpha)$.
Then there exists some $r>0$ such that for all $0<t<r$,
\begin{equation*}
\#F(\alpha;t,\epsilon)\leq t^{-\tau_{\mathcal{L}}^*(\alpha)-(1+|q|)\epsilon}.
\end{equation*}
\end{lemma}
\begin{proof}
We prove this for $q<0$, but the case $q\geq 0$ follows identically.
To do this, we bound $A_q(t)$ in two ways for $t$ sufficiently small.
On one hand,
\begin{equation*}
A_q(t)\geq\sum_{\eta\in F(\alpha;t,\epsilon)}\rho(\eta)^q\geq t^{q(\alpha-\epsilon)}\#F(\alpha; t,\epsilon).
\end{equation*}
On the other hand, for $t$ sufficiently small (depending on $\epsilon$ and $\Delta_0$), $A_q(t)\leq t^{\tau_{\mathcal{L}}(q)-\epsilon}$.
Combining these observations, we have
\begin{equation*}
\# F(\alpha; t,\epsilon)\leq t^{\tau_{\mathcal{L}}(q)-\epsilon}t^{-q(\alpha-\epsilon)}=t^{-\tau_{\mathcal{L}}^*(\alpha)-(1-q)\epsilon}
\end{equation*}
since $q\in\partial \tau_{\mathcal{L}}^*(\alpha)$ so that $\tau_{\mathcal{L}}^*(\alpha)=\alpha q-\tau_{\mathcal{L}}(q)$.
\end{proof}
We now begin the main proof.
\begin{proofref}{t:m-upper-bound}
To see (i), suppose $x\in \kint_{\mathcal{L}}$ is arbitrary with unique symbolic representation $\gamma=(e_n)_{n=1}^\infty$.
Let $\zeta\in\Omega^*$ be a prefix of $\gamma$ ending in $\mathcal{L}$.
By \cref{p:nper-dim}, $\dim_{\loc}(\mu,x)=\underline{\dim}_{\loc}(\rho,\gamma)$, so there exists an increasing sequence $(n_j)_{j=1}^\infty$ such that
\begin{equation*}
\underline{\dim}_{\loc}(\rho,\gamma)=\lim_{j\to\infty}\frac{\log \rho(\gamma|n_j)}{\log W(\gamma|n_j)}.
\end{equation*}
With $t_j=W(\gamma|n_j)$, since $\gamma|n_j\in\mathcal{F}_{t_j}$, we have for $j$ sufficiently large that $\gamma|n_j\in\Omega^*_{\mathcal{L},\zeta}$ so that
\begin{equation*}
\frac{\log \sum_{\eta\in\mathcal{F}_{\mathcal{L},\zeta}(t_j)}\rho(\eta)^q}{\log t_j}\leq q\frac{\log \rho(\gamma|n_j)}{\log t_j}.
\end{equation*}
Letting $j$ tend to infinity yields
\begin{equation*}
\tau_{\mathcal{L}}(q)= \tau_{\mathcal{L},\zeta}(q)\leq q\dim_{\loc}(\mu,x)
\end{equation*}
where $q\in\R$ is arbitrary.
Dividing by $q$ and letting $q\to\pm\infty$, it follows that $\dim_{\loc}(\mu,x)\in[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$.
We now see (ii).
Since
\begin{equation*}
\kint_{\mathcal{L}}=\bigcup_{\{\Delta\in\mathcal{P}:\vs(\Delta)\in V(\mathcal{L})\}}\Delta\cap \kint_{\mathcal{L}},
\end{equation*}
it suffices to show that $\dim_H E_0\leq \tau_{\mathcal{L}}^*(\alpha)$ where
\begin{equation*}
E_0:= E_\mu(\alpha; \mathcal{L})\cap\Delta_0
\end{equation*}
and $\Delta_0=\pi(\zeta_0)$ is a fixed net interval with neighbour set in $\mathcal{L}$.
We fix notation as above; in particular, we recall that $\zeta_0$ is the symbolic representation of $\Delta_0$.
Again we assume $q< 0$; the case $q\geq 0$ follows similarly.
Fix $\epsilon>0$ and set
\begin{equation*}
\mathcal{G}_n=\{\pi(\eta):\eta\in F(\alpha; 2^{-n},\epsilon)\}
\end{equation*}
where $\pi(\eta)$ is the net interval with symbolic representation $\eta$.
By \cref{l:ft-count}, there exists $N=N(\epsilon)$ such that for all $n\geq N$,
\begin{equation*}
\#\mathcal{G}_n=\# F(\alpha; 2^{-n},\epsilon)\leq 2^{n(\tau_{\mathcal{L}}^*(\alpha)+(1-q)\epsilon)}.
\end{equation*}
Let $\mathcal{G}=\bigcup_{n=N(\epsilon)}^\infty \mathcal{G}_n$.
We first see that $\mathcal{G}$ is a Vitali cover for $E_0$.
Let $x\in E_0$ be arbitrary.
Since $x=\pi(\gamma)$ is an interior point, it has an approximation sequence $(n_j)_{j=1}^\infty$.
Let $m$ be such that any path $\eta\in\Omega^*$ of length at least $m$ has $W(\eta)\leq 1/3$.
Such a constant exists since there are only finitely many possible edge weights $W(e)\in(0,1)$.
The choice of $m$ ensures that there exists some $m_j\in\N$ such that $W(\gamma|n_j)\leq 2^{-m_j}\leq W(\gamma|n_j-m)$.
Since $\dim_{\loc}(\mu,x)$ exists and $R\cdot W(\gamma|n_j)\asymp 2^{-m_j}$ where $R>0$ is a fixed constant,
\begin{equation}\label{e:2-bd}
\lim_{j\to\infty}\frac{\log \mu(B(x,R\cdot W(\gamma|n_j)))}{\log \mu(B(x,2^{-m_j}))}=1.
\end{equation}
Now, by \cref{l:ap-sub} and \cref{l:approx-reg} applied to the constant sequence $k_j=m$, we have for $j$ sufficiently large and $0\leq i\leq m$ arbitrary
\begin{equation*}
\mu(B(x, R\cdot W(\gamma|n_j)))\leq\rho(\gamma|n_j)\leq \rho(\gamma|n_j-i)\leq \rho(\gamma|n_j)^{1-\epsilon}.
\end{equation*}
Moreover, we always have
\begin{equation*}
B(x,2^{-m_j})\supseteq B(x, W(\gamma|n_j))\supseteq \pi(\gamma|n_j)
\end{equation*}
so that $\rho(\gamma|n_j)\leq \mu(B(x,2^{-m_j}))$.
Thus applying \cref{e:2-bd}, for all $j$ sufficiently large,
\begin{equation*}
\mu(B(x,2^{-m_j}))^{1+\epsilon}\leq \rho(\gamma|n_j-i)\leq\rho(\gamma|n_j)^{1-\epsilon}\leq\mu(B(x,2^{-m_j}))^{1-\epsilon}.
\end{equation*}
Finally, since $\dim_{\loc}(\mu,x)=\alpha$, for all $j$ sufficiently large and $0\leq i\leq m$ with $\gamma|(n_j-i)\in\mathcal{F}_{\mathcal{L},\zeta_0}(2^{-m_j})$,
\begin{equation*}
(2^{-m_j})^{\alpha+\epsilon}\leq \rho(\gamma|n_j-i)\leq (2^{-m_j})^{\alpha-\epsilon}.
\end{equation*}
Thus $\pi(\gamma|(n_j-i))\in\mathcal{G}$.
Since this is true for all $j$ sufficiently large, we may take $\diam(\pi(\gamma|(n_j-i)))$ arbitrarily small, so $\mathcal{G}$ is indeed a Vitali cover for $E_0$.
Now suppose $\{E_i\}_{i=1}^\infty$ is any disjoint subcollection of $\mathcal{G}$: then for $s=\tau_{\mathcal{L}}^*(\alpha)+2(1-q)\epsilon$,
\begin{align*}
\sum_{i=1}^\infty\diam(E_i)^s &\leq \sum_{n=N(\epsilon)}^\infty\sum_{\Delta\in \mathcal{G}_n}\diam(\Delta)^s\leq \sum_{n=N(\epsilon)}^\infty 2^{-ns}\#\mathcal{G}_n\\
&\leq \sum_{n=N(\epsilon)}^\infty\bigl(2^{-\tau_{\mathcal{L}}^*(\alpha)-2(1-q)\epsilon}2^{\tau_{\mathcal{L}}^*(\alpha)+(1-q)\epsilon}\bigr)^n\\
&= \sum_{n=N(\epsilon)}^\infty (2^{-(1-q)\epsilon})^n<\infty.
\end{align*}
Thus by the Vitali covering theorem for Hausdorff measure, we must have
\begin{equation*}
\mathcal{H}^s(E_0)\leq\sum_{i=1}^\infty\diam(E_i)^s<\infty
\end{equation*}
so that $\dim_H E_0\leq \tau_{\mathcal{L}}^*(\alpha)+2(1-q)\epsilon$.
Since $\epsilon>0$ was arbitrary, the result follows.
\end{proofref}
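The final convergence step above is simply a geometric series. As a quick numerical confirmation (with hypothetical values of the parameters, chosen only so that $2^{-(1-q)\epsilon}<1$), the partial sums agree with the closed form of the series:

```python
import math

# Partial sums of sum_{n >= N} r^n with r = 2^{-(1-q) epsilon} < 1,
# for hypothetical parameters q = 0.5, epsilon = 0.1, N = 10.
q, eps, N = 0.5, 0.1, 10
r = 2.0 ** (-(1 - q) * eps)
partial = sum(r ** n for n in range(N, 2000))
closed_form = r ** N / (1 - r)  # limit of the geometric series
assert r < 1
assert abs(partial - closed_form) < 1e-6
```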
\subsection{Irreducibility and the lower bound for the multifractal spectrum}
We recall that the notion of irreducibility was introduced in \cref{sss:irreducibility}.
Moreover, recall that a point $x\in K_{\mathcal{L}}$ is said to be an interior point of $K_{\mathcal{L}}$ if it only has symbolic representations that are eventually in $\mathcal{L}$, and the set of such points is denoted by $\kint_{\mathcal{L}}$.
We now introduce the notion of an interior path, and use this to relate the notions of $\xi$-regularity in $\Omega^\infty$ (introduced in \cref{d:xi-reg}) with regular points in $K$ (as defined in \cref{d:reg-point}).
\begin{definition}
We say $\xi$ is an \defn{interior path} if whenever $(\Delta_i)_{i=0}^m$ is a sequence of net intervals where $\Delta_{i+1}$ is a child of $\Delta_i$ corresponding to $\xi$, there is a neighbourhood of $\Delta_m$ in $K$ which is contained entirely in $\Delta_0$.
\end{definition}
Recall that $\Omega^\infty$ is the set of rooted infinite paths in $\mathcal{G}$ and $K_R$ is the set of regular points.
\begin{lemma}\label{l:xi-reg-proj}
Let $\xi$ be an interior path in a loop class $\mathcal{L}$, and let $\gamma\in\Omega^\infty_{\mathcal{L}}$ be $\xi$-regular.
Then for any path $\eta$ such that $\eta\gamma\in\Omega^\infty$, $\pi(\eta\gamma)\in \kint_{\mathcal{L}}\cap K_R$.
\end{lemma}
\begin{proof}
This is a direct application of the definitions, noting that if $\gamma=(e_n)_{n=1}^\infty$, $\eta$ has length $m_1$, $\xi$ has length $m_2$, and $\xi$ appears at some position $n$, then some $j$ with $n+m_1\leq j\leq n+m_1+m_2$ is a point in the approximation sequence of $\pi(\eta\gamma)$.
\end{proof}
Recall that
\begin{equation*}
E_\mu(\alpha;\mathcal{L})=\{x \in \kint_{\mathcal{L}}:\dim_{\loc}(\mu,x)=\alpha\}.
\end{equation*}
We will also need the following result, which follows by a similar argument to \cite[Prop. 3.15]{rut2021} or (in a somewhat more specialized case) \cite[Prop. 2.7]{hhn2018}.
\begin{lemma}\label{l:simple}
Suppose $\mathcal{L}$ is a simple loop class and $x\in K_{\mathcal{L}}$.
If $x$ is an interior point with $\pi^{-1}(x)=\{\gamma\}$, then
\begin{equation*}
\dim_{\loc}(\mu,x)=\dim_{\loc}(\rho,\gamma).
\end{equation*}
Otherwise $x\in K$ is a boundary point with $\pi^{-1}(x)=\{\gamma_1,\gamma_2\}$, and
\begin{equation*}
\dim_{\loc}(\mu,x)=\min\{\dim_{\loc}(\rho,\gamma_1),\dim_{\loc}(\rho,\gamma_2)\}.
\end{equation*}
\end{lemma}
We now show that the regular points in a non-simple $K_{\mathcal{L}}$ are abundant.
\begin{theorem}\label{t:m-lower-bound}
Let $\mathcal{L}$ be an irreducible loop class which is either not simple, or is simple and contains an interior point.
Then $E_\mu(\alpha;\mathcal{L})\neq\emptyset$ if and only if $f_{\mathcal{L}}(\alpha)\geq 0$ if and only if $\alpha\in[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$.
Moreover,
\begin{equation*}
\dim_H E_\mu(\alpha;\mathcal{L})\cap K_R\geq f_{\mathcal{L}}(\alpha)
\end{equation*}
for all $\alpha$.
\end{theorem}
\begin{proof}
If $\mathcal{L}$ is simple, since $\mathcal{L}$ contains interior points, the result follows directly from \cref{l:simple}.
Otherwise, $\mathcal{L}$ is not simple, so there exists some vertex $v\in V(\mathcal{L})$ and an interior path $\xi\in\Omega^*(\mathcal{L},v)$.
Let $\zeta_1\in\Omega^*$ be any path ending at a vertex in $\mathcal{L}$.
By \cref{t:reg-sub}, get $\Gamma\subseteq E_{\mathcal{L},\zeta_1}(\alpha)$ such that $\dim_H\Gamma\geq\dim_H E_{\mathcal{L},\zeta_1}(\alpha)$ and each $\gamma\in\Gamma$ is $\xi$-regular with $\dim_{\loc}(\rho,\gamma)=\alpha$.
By \cref{l:xi-reg-proj} and \cref{c:reg-loc-dim}, $\pi(\Gamma)\subseteq E_\mu(\alpha;\mathcal{L})\cap K_R$.
In particular, this proves that $E_{\mu}(\alpha;\mathcal{L})\cap K_R$ is non-empty whenever $f_{\mathcal{L}}(\alpha)\geq 0$, and
\begin{equation*}
\dim_H E_{\mu}(\alpha;\mathcal{L})\cap K_R\geq\dim_H\pi(\Gamma).
\end{equation*}
We also know by \cref{t:multi-f} that $\alpha\in[\alpha_{\min}(\mathcal{L}),\alpha_{\max}(\mathcal{L})]$ if and only if $f_{\mathcal{L}}(\alpha)\geq 0$.
The remaining implication follows from \cref{t:m-upper-bound}.
It remains to prove that $\dim_H\pi(\Gamma)=\dim_H(\Gamma)$.
We recall from \cref{l:pi-Lip} that $\pi$ is Lipschitz, so $\dim_H\pi(\Gamma)\leq\dim_H\Gamma$.
Conversely, let $\Delta\in\mathcal{P}$ be the net interval with symbolic representation $\zeta_1$ and let $\{U_i\}_{i=1}^\infty$ be some $\epsilon$-cover of $\pi(\Gamma)\subseteq\Delta$.
Without loss of generality, we may assume $U_i\subseteq\Delta$ for each $i\in\N$.
Let $t_i=\diam U_i <\epsilon$ and let $b_i$ denote the number of net intervals of generation $t_i$ which intersect $U_i$.
Note that $b_i\leq 1/a+1$, where $a>0$ is such that the diameter of any generation $t$ net interval is at least $at$.
These net intervals have symbolic representations $\{\zeta_1\eta_{ij}:1\leq j\leq b_i\}$, and the corresponding cylinders $\mathcal{C}=\{[\eta_{ij}]:i\in\N,1\leq j\leq b_i\}$ cover $\Gamma$ and have diameter $W(\eta_{ij})\asymp t_i$.
Thus there exists some $A>0$ such that $\mathcal{C}$ forms an $A\epsilon$-cover of $\Gamma$.
It follows that for a suitable constant $c$,
\begin{equation*}
\sum_{i=1}^{\infty}\sum_{j=1}^{b_{i}}\left(\diam([\eta_{ij}])\right)^{s}\leq cA^{s}\sum_{i}(\diam(U_{i}))^{s}
\end{equation*}
and therefore for each $\epsilon >0$,
\begin{equation*}
H_{\epsilon A}^{s}(\Gamma)\leq cA^{s}H_{\epsilon }^{s}(\pi(\Gamma )).
\end{equation*}
Letting $\epsilon \rightarrow 0$, we deduce that $H^{s}(\pi(\Gamma ))\geq (cA^{s})^{-1}H^{s}(\Gamma )$.
This implies $\dim_{H}\pi(\Gamma )\geq \dim_{H}(\Gamma )$, so that $\dim_H\pi(\Gamma)=\dim_H\Gamma$.
\end{proof}
If $\mathcal{L}$ is an irreducible non-simple loop class, then necessarily $\mathcal{L}$ contains an interior path.
The only additional case occurs when $\mathcal{L}$ is a simple loop class without an interior point.
In this case, it may hold that every $x\in K_{\mathcal{L}}$ has two symbolic representations, and the local dimension is always given by the symbolic representation of the adjacent path not eventually in $\mathcal{L}$.
This motivates the following definition.
\begin{definition}\label{d:degen}
We say that a loop class $\mathcal{L}$ is \defn{non-degenerate} if $\mathcal{L}$ is not simple, or if $\mathcal{L}$ is simple and there exists some $x\in K$ such that
\begin{equation*}
\dim_{\loc}(\mu,x)=\dim_{\loc}(\rho,\gamma)
\end{equation*}
for some $\gamma\in\Omega^\infty_{\mathcal{L}}$.
We say that $\mathcal{L}$ is \defn{degenerate} otherwise.
\end{definition}
\begin{corollary}\label{c:m-spectrum}
Suppose every loop class in $\mathcal{G}$ is irreducible, with non-degenerate loop classes $\mathcal{L}_1,\ldots,\mathcal{L}_m$.
Then the multifractal spectrum of $\mu$ is given by
\begin{equation*}
f_\mu(\alpha)=\max\{f_{\mathcal{L}_1}(\alpha),\ldots,f_{\mathcal{L}_m}(\alpha)\}
\end{equation*}
for each $\alpha\in\R$.
\end{corollary}
\begin{proof}
Combining the general upper bound from \cref{t:m-upper-bound} and the lower bound \cref{t:m-lower-bound} using irreducibility, it follows for each $1\leq i\leq m$ that
\begin{equation*}
\dim_H E_\mu(\alpha;\mathcal{L}_i)=f_{\mathcal{L}_i}(\alpha).
\end{equation*}
Of course,
\begin{equation*}
\bigcup_{i=1}^m E_\mu(\alpha;\mathcal{L}_i)\supseteq E_\mu(\alpha)\cap\kint.
\end{equation*}
Moreover, if $x\notin\kint$, then by \cref{l:simple}, $\dim_{\loc}(\mu,x)=\dim_{\loc}(\rho,\gamma)$ for some infinite path $\gamma\in\Omega^\infty_{\mathcal{L}}$, and such an $\mathcal{L}$ is non-degenerate.
Since $K\setminus\kint$ is countable, it has Hausdorff dimension 0.
Thus
\begin{equation*}
f_\mu(\alpha)=\dim_H E_\mu(\alpha)=\dim_H E_\mu(\alpha)\cap\kint
\end{equation*}
as required.
\end{proof}
\subsection{Decomposability and bounds for the \texorpdfstring{$L^q$}{Lq}-spectrum}
Recall that the notion of decomposability was introduced in \cref{sss:decomposable}.
Similarly to how we bounded the multifractal formalism $f_\mu$ in terms of the functions $f_{\mathcal{L}}$ for loop classes $\mathcal{L}$, in this section, we establish bounds for the $L^q$-spectrum $\tau_{\mu}$ in terms of the functions $\tau_{\mathcal{L}}$.
We first note the following general upper bound.
\begin{lemma}\label{l:lq-upper-bound}
Let $\mu$ be a self-similar measure satisfying the $\Phi$-FNC, with loop classes $\mathcal{L}_1,\ldots,\mathcal{L}_m$.
Then
\begin{equation*}
\tau_\mu(q)\leq\limsup_{t\to 0}\frac{\log\sup\sum_i\mu(B(x_i,t))^q}{\log t}\leq \min\{\tau_{\mathcal{L}_1}(q),\ldots,\tau_{\mathcal{L}_m}(q)\}
\end{equation*}
where the supremum is taken over all centred packings $\{B(x_i,t)\}_i$ of $K=\supp\mu$.
\end{lemma}
\begin{proof}
The first inequality follows by definition.
To see the second inequality, let $\mathcal{L}$ be an arbitrary loop class.
Let $\zeta\in\Omega^*$ be a path ending at a vertex in $\mathcal{L}$.
Then by definition
\begin{equation*}
\sum_{\eta\in\mathcal{F}(t)}\rho(\eta)^q\geq\sum_{\eta\in\mathcal{F}_{\mathcal{L},\zeta}(t)}\rho(\eta)^q.
\end{equation*}
Now the same proof as \cref{p:lq-lim} shows that
\begin{equation*}
\limsup_{t\to 0}\frac{\log\sup\sum_i\mu(B(x_i,t))^q}{\log t}=\limsup_{t\to 0}\frac{\log\sum_{\eta\in\mathcal{F}(t)}\rho(\eta)^q}{\log t}\leq\tau_{\mathcal{L},\zeta}(q)=\tau_{\mathcal{L}}(q)
\end{equation*}
by existence of the limit defining $\tau_{\mathcal{L},\zeta}$ given in \cref{l:lq-limit}.
But $\mathcal{L}$ was arbitrary, so the result follows.
\end{proof}
We now have the following result establishing our lower bound as well.
Note the similarity of this result and proof to \cite[Thm. 5.2]{hhstoappear}.
\begin{theorem}\label{t:lq-lower-bound}
Let $\mu$ be a self-similar measure satisfying the $\Phi$-FNC with decomposable transition graph $\mathcal{G}$.
Let $\mathcal{G}$ have loop classes $\mathcal{L}_1,\ldots,\mathcal{L}_m$.
Then
\begin{equation*}
\tau_\mu(q)=\min\{\tau_{\mathcal{L}_1}(q),\ldots,\tau_{\mathcal{L}_m}(q)\}
\end{equation*}
for any $q\in\R$.
Moreover, the limit defining $\tau_\mu(q)$ exists for any $q\in\R$.
\end{theorem}
\begin{proof}
For each loop class $\mathcal{L}_i$, fix a path $\zeta_i\in\Omega^*$ ending at a vertex $v_i\in V(\mathcal{L}_i)$.
Now for each vertex $w\in V(\mathcal{L}_i)$, let $\gamma_{i,w}$ be a path in $\mathcal{L}_i$ from $v_i$ to $w$.
Let $s_0>0$ be such that
\begin{equation*}
s_0^{1/m}\leq\min_{i}\min_{w\in\mathcal{L}_i}W(\zeta_i\gamma_{i,w}).
\end{equation*}
Similarly, since there are only finitely many initial and transition paths, there is $s_1>0$ such that if $\eta\in\mathcal{F}(t)$ has decomposition $(\lambda_1,\ldots,\lambda_n)$, then
\begin{equation*}
s_1 t\geq W(\lambda_1)\cdots W(\lambda_n).
\end{equation*}
Next, define sets of path weights
\begin{align*}
\Lambda_i&:=\{W(\eta):\eta\in\mathcal{F}_{\mathcal{L}_i,\zeta_i}\}\\
\Lambda(t) &:= \{(t_1,\ldots,t_m)\in\Lambda_1\times\cdots\times\Lambda_m: s_1 t\geq t_1\cdots t_m\geq s_0 t\}.
\end{align*}
Since there are only finitely many edge weights $W(e)$ for $e\in E(\mathcal{G})$, it follows that there is some $k\in\N$ such that $\#\Lambda(t)\leq(-\log t)^k$ for all $t$ sufficiently small.
We now construct a function
\begin{equation}\label{e:Psi-def}
\Psi:\mathcal{F}(t)\to\bigcup_{(t_1,\ldots,t_m)\in\Lambda(t)}\mathcal{F}_{\mathcal{L}_1,\zeta_1}(t_1)\times\cdots\times\mathcal{F}_{\mathcal{L}_m,\zeta_m}(t_m)
\end{equation}
as follows.
Suppose the path $\eta\in\mathcal{F}(t)$ has decomposition $(\lambda_1,\ldots,\lambda_m)$.
Then if the path $\lambda_i$ begins at vertex $w_i\in V(\mathcal{L}_i)$, we set
\begin{equation*}
\Psi(\eta)=(\zeta_1\gamma_{1,w_1}\lambda_1,\ldots,\zeta_m\gamma_{m,w_m}\lambda_m).
\end{equation*}
Note that $\Psi$ is well-defined by choice of $s_0$ and the definition of $\Lambda(t)$.
Since there are only finitely many transition paths, there is a uniform bound on the number of paths with the same decomposition.
Moreover, since there are only finitely many paths $\gamma_{i,w_i}$, for a fixed path $\eta$, the number of distinct decompositions of paths $\eta'$ with $\Psi(\eta)=\Psi(\eta')$ is also uniformly bounded.
Thus, even though $\Psi$ need not be injective, there is some constant $N\in\N$ (independent of $t$) such that each fibre of $\Psi$ has cardinality at most $N$.
Fix
\begin{equation*}
\theta(q):=\min\{\tau_{\mathcal{L}_1}(q),\ldots,\tau_{\mathcal{L}_m}(q)\}.
\end{equation*}
By \cref{l:lq-limit}, for any $\epsilon>0$ and all $t$ sufficiently small,
\begin{equation*}
\sum_{\eta_i\in\mathcal{F}_{\mathcal{L}_i,\zeta_i}(t)}\norm{T(\eta_i)}^q\leq t^{\tau_{\mathcal{L}_i}(q)-\epsilon}\leq t^{\theta(q)-\epsilon}.
\end{equation*}
Moreover, by the decomposability assumption and \cref{l:left-prod}, it follows that if $\Psi(\eta)=(\eta_1,\ldots,\eta_m)$, then
\begin{equation*}
\norm{T(\eta)}^q\preccurlyeq_q\norm{T(\eta_1)}^q\cdots\norm{T(\eta_m)}^q.
\end{equation*}
Thus for all $t$ sufficiently small,
\begin{align*}
\sum_{\eta\in\mathcal{F}(t)}\rho(\eta)^q&\preccurlyeq_q\sum_{(t_1,\ldots,t_m)\in\Lambda(t)}\left(\sum_{\eta_1\in\mathcal{F}_{\mathcal{L}_1,\zeta_1}(t_1)}\cdots\sum_{\eta_m\in\mathcal{F}_{\mathcal{L}_m,\zeta_m}(t_m)}\norm{T(\eta_1)}^q\cdots\norm{T(\eta_m)}^q\right)\\
&= \sum_{(t_1,\ldots,t_m)\in\Lambda(t)}\left(\sum_{\eta_1\in\mathcal{F}_{\mathcal{L}_1,\zeta_1}(t_1)}\norm{T(\eta_1)}^q\right)\cdots\left(\sum_{\eta_m\in\mathcal{F}_{\mathcal{L}_m,\zeta_m}(t_m)}\norm{T(\eta_m)}^q\right)\\
&\preccurlyeq_q \sum_{(t_1,\ldots,t_m)\in\Lambda(t)} t_1^{\theta(q)-\epsilon}\cdots t_m^{\theta(q)-\epsilon}\\
&\preccurlyeq_q\#\Lambda(t) t^{\theta(q)-\epsilon}.
\end{align*}
Since $\#\Lambda(t)$ grows polynomially in $\log t$, it follows by \cref{p:lq-lim} that
\begin{equation*}
\tau_\mu(q)\geq\liminf_{t\to 0}\frac{\log\sum_{\eta\in\mathcal{F}(t)}\rho(\eta)^q}{\log t}\geq\theta(q)-\epsilon.
\end{equation*}
But $\epsilon>0$ was arbitrary, and combining this with \cref{l:lq-upper-bound} yields the desired result.
\end{proof}
\begin{remark}\label{r:q-pos-min}
In fact, since for any path $\eta$ with decomposition $(\lambda_1,\ldots,\lambda_m)$, we have
\begin{equation*}
\norm{T(\eta)}\preccurlyeq\norm{T(\lambda_1)}\cdots\norm{T(\lambda_m)}
\end{equation*}
with no assumptions on the transition graph $\mathcal{G}$, the same proof as above shows that
\begin{equation*}
\tau_{\mathcal{G}_{\ess}}(q)=\tau_\mu(q)=\min\{\tau_{\mathcal{L}_1}(q),\ldots,\tau_{\mathcal{L}_m}(q)\}
\end{equation*}
by \cref{p:ess-formula} for $q\geq 0$ without the decomposability assumption.
\end{remark}
\begin{remark}
Unlike the results for the multifractal formalism in \cref{c:m-spectrum}, we note that \cref{l:lq-upper-bound} and \cref{t:lq-lower-bound} write the $L^q$-spectrum in terms of \emph{all} loop classes, and not just the non-degenerate loop classes.
\end{remark}
\section{Applications and examples}\label{s:multi-examples}
Throughout this section, naturally, $(S_i,p_i)_{i\in\mathcal{I}}$ is a WIFS satisfying the finite neighbour condition with respect to the iteration rule $\Phi$, and has transition graph $\mathcal{G}$ and associated self-similar measure $\mu$.
\subsection{Consequences of the main results}\label{ss:cons}
Our first application, which follows essentially from the bounds in the previous section along with standard properties of concave functions, describes precisely when the multifractal formalism holds.
\begin{corollary}\label{c:multi-validity}
Suppose $\mathcal{G}$ is irreducible and decomposable, and suppose the maximal loop classes $\mathcal{L}_1,\ldots,\mathcal{L}_m$ are non-degenerate.
Then $\mu$ satisfies the multifractal formalism at $\alpha$ if and only if $\alpha\in\partial\tau_{\mathcal{L}_i}(q)$ for some $1\leq i\leq m$ and $q\in\R$ with $\min\{\tau_{\mathcal{L}_1}(q),\ldots,\tau_{\mathcal{L}_m}(q)\}=\tau_{\mathcal{L}_i}(q)$.
In particular, if the derivative $\alpha=\tau_\mu'(q)$ exists at some $q\in\R$, then $\mu$ satisfies the multifractal formalism at $\alpha$.
\end{corollary}
\begin{proof}
Since $\mathcal{G}$ is decomposable, $\tau_\mu(q)=\min\{\tau_{\mathcal{L}_1}(q),\ldots,\tau_{\mathcal{L}_m}(q)\}$ for all $q$ by \cref{t:lq-lower-bound}.
First suppose $f_\mu(\alpha)=\tau_\mu^*(\alpha)$, so there is some $\mathcal{L}_i$ such that
\begin{equation*}
\tau_\mu^*(\alpha)=f_{\mathcal{L}_i}(\alpha)=\tau_{\mathcal{L}_i}^*(\alpha)
\end{equation*}
by \cref{c:m-spectrum} and \cref{t:multi-f}.
Since $\tau_\mu^*(\alpha)=\tau_{\mathcal{L}_i}^*(\alpha)$, there are $q_1,q_2\in\R$ such that $\alpha\in\partial\tau_{\mathcal{L}_i}(q_1)\cap\partial\tau_\mu(q_2)$: therefore, $\tau_{\mathcal{L}_i}(q_1)-\tau_\mu(q_2)=\alpha(q_1-q_2)$.
Without loss of generality, suppose $q_1<q_2$.
Since $\tau_\mu(q_1)\leq \tau_{\mathcal{L}_i}(q_1)$ and $\tau_{\mathcal{L}_i}(q_1)\leq\tau_{\mathcal{L}_i}(q_2)-(q_2-q_1)\alpha$ by concavity, this can only happen when $\tau_{\mathcal{L}_i}(q_1)=\tau_\mu(q_1)$ and $\alpha\in\partial\tau_{\mathcal{L}_i}(q_1)$, as required.
Conversely, suppose $\alpha\in\partial\tau_{\mathcal{L}_i}(q)$ where $\tau_{\mathcal{L}_i}(q)=\tau_\mu(q)$.
Since $\tau_\mu\leq\tau_{\mathcal{L}_i}$, it follows that $\alpha\in\partial\tau_\mu(q)$ so that $\tau_\mu^*(\alpha)=\tau_{\mathcal{L}_i}^*(\alpha)$.
But $\tau_{\mathcal{L}_i}(q)\leq\min\{\tau_{\mathcal{L}_1}(q),\ldots,\tau_{\mathcal{L}_m}(q)\}$ by assumption so
\begin{equation*}
\tau_\mu^*(\alpha)=\tau_{\mathcal{L}_i}^*(\alpha)=\max\{\tau_{\mathcal{L}_1}^*(\alpha),\ldots,\tau_{\mathcal{L}_m}^*(\alpha)\}=f_\mu(\alpha)
\end{equation*}
by \cref{c:m-spectrum}.
If $\tau_\mu'(q)$ exists, it follows immediately that $\alpha\in\partial\tau_{\mathcal{L}_i}(q)$ for any $i$ such that $\tau_{\mathcal{L}_i}(q)=\min\{\tau_{\mathcal{L}_1}(q),\ldots,\tau_{\mathcal{L}_m}(q)\}$.
\end{proof}
Our next result was obtained in \cite{rut2021} under the weak separation condition, but we obtain it here (in a slightly more specialized case) as a direct corollary of the prior results.
\begin{corollary}\label{c:one-loop}
Suppose $\mathcal{G}$ has exactly one loop class $\mathcal{L}$.
Then $\mu$ satisfies the multifractal formalism.
\end{corollary}
\begin{proof}
Since $\mathcal{L}$ is the only loop class, it must be essential, so it is irreducible by \cref{l:ess-irred}.
Since there is only one loop class and therefore no transition paths, the decomposability condition holds vacuously, and by \cref{t:lq-lower-bound}, $\tau_\mu=\tau_{\mathcal{L}}$.
Thus the result follows from the multifractal formalism for irreducible graph-directed systems proven in \cref{t:multi-f}.
\end{proof}
We now prove the following result, which completely characterizes the validity of the multifractal formalism in terms of a qualitative property of the multifractal spectrum.
\begin{corollary}\label{c:ir-de}
Suppose the transition graph $\mathcal{G}$ is irreducible and decomposable, and every loop class is non-degenerate.
Then $\mu$ satisfies the multifractal formalism if and only if $f_\mu$ is a concave function.
\end{corollary}
\begin{proof}
If $\mu$ satisfies the multifractal formalism, then $f_\mu=\tau_\mu^*$ where $\tau_\mu^*$ is a concave function.
Conversely, suppose $f_\mu$ is a concave function.
We have by \cref{t:lq-lower-bound} and \cref{c:m-spectrum} that
\begin{equation}
f_\mu=\max\{f_{\mathcal{L}_1},\ldots,f_{\mathcal{L}_m}\}\text{ and } \tau_{\mu}=\min\{\tau_{\mathcal{L}_1},\ldots,\tau_{\mathcal{L}_m}\},\label{e:taumu}
\end{equation}
where $\mathcal{G}$ has loop classes $\mathcal{L}_1,\ldots,\mathcal{L}_m$.
Now let $\alpha_0\in\R$ be arbitrary.
Let $q$ be the unique value such that $\alpha_0\in\partial\tau_\mu(q)=[\alpha_1,\alpha_2]$.
If $\alpha_1=\alpha_2$, $\tau_\mu$ is differentiable at $q$ and we are done by \cref{c:multi-validity}.
Otherwise, by \cref{e:taumu}, there exist two loop classes, say $\mathcal{L}_1$ and $\mathcal{L}_2$, such that $\alpha_1\in\partial\tau_{\mathcal{L}_1}(q)$, $\alpha_2\in\partial\tau_{\mathcal{L}_2}(q)$ and $\tau_{\mathcal{L}_1}(q)=\tau_{\mathcal{L}_2}(q)=\tau_\mu(q)$.
Observe that $\tau_\mu^*(\alpha)=\alpha q-\tau_\mu(q)$ for any $\alpha\in[\alpha_1,\alpha_2]$.
Moreover, since concave conjugation is order reversing, by \cref{e:taumu}, $f_\mu(\alpha_1)=f_{\mathcal{L}_1}(\alpha_1)=\tau_\mu^*(\alpha_1)$ and $f_\mu(\alpha_2)=f_{\mathcal{L}_2}(\alpha_2)=\tau_\mu^*(\alpha_2)$.
But $f_\mu(\alpha_0)\leq\tau_\mu^*(\alpha_0)$ and $f_\mu$ is concave by assumption, forcing $\tau_\mu^*(\alpha_0)=f_\mu(\alpha_0)$ as required.
\end{proof}
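The order-reversing property of concave conjugation used in the proof above can be checked numerically. The following sketch (with hypothetical probability vectors and grid-based approximations of the conjugates, not data from this paper) verifies on a grid that $\max\{\tau_1^*,\tau_2^*\}\leq(\min\{\tau_1,\tau_2\})^*$, the inequality underlying the general bound $f_\mu\leq\tau_\mu^*$:

```python
import math

# Two model L^q-spectra of the form log(sum_j p_j^q) / (-log ell),
# with hypothetical probability vectors (illustration only).
def make_tau(ps, ell):
    return lambda q: math.log(sum(p ** q for p in ps)) / (-math.log(ell))

tau1 = make_tau([0.7, 0.3], 3)
tau2 = make_tau([0.5, 0.3, 0.2], 3)

qs = [q / 10 for q in range(-50, 51)]

def conjugate(tau, alpha):
    # Concave conjugate tau*(alpha) = inf_q (alpha*q - tau(q)),
    # approximated by a minimum over the grid of q values.
    return min(alpha * q - tau(q) for q in qs)

for a in [k / 20 for k in range(1, 41)]:
    lhs = max(conjugate(tau1, a), conjugate(tau2, a))
    rhs = conjugate(lambda q: min(tau1(q), tau2(q)), a)
    assert lhs <= rhs + 1e-12  # conjugation is order reversing
```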
\begin{remark}
For IFS of the form $(\lambda x+d_i)_{i\in\mathcal{I}}$ satisfying the finite type condition, the following version of the reverse implication was first observed in \cite[Rem. 5.3]{fen2009}: if $\tau_\mu=\tau_{\mathcal{L}}$ for an essential loop class $\mathcal{L}$, then $\mu$ satisfies the multifractal formalism.
This result follows for any IFS satisfying the $\Phi$-FNC by combining \cref{p:ess-formula}, the fact that the essential loop class is always irreducible, and the general upper bound $f_\mu\leq\tau_\mu^*$.
\end{remark}
A sufficient condition for the measure $\mu$ to fail the multifractal formalism is for the set of attainable local dimensions of $\mu$ to not be a closed interval.
In general, this condition is not necessary.
However, in certain situations, we can determine that it is necessary and sufficient.
\begin{corollary}\label{c:loc-dim-set}
Suppose the transition graph $\mathcal{G}$ is decomposable.
Suppose in addition that every non-essential loop class is simple and non-degenerate.
Then $\mu$ satisfies the multifractal formalism if and only if the set of local dimensions
\begin{equation*}
\{\dim_{\loc}(\mu,x):x\in K\}
\end{equation*}
is a closed interval.
\end{corollary}
\begin{proof}
The forward direction is immediate.
Conversely, denote the loop classes by $\{\mathcal{L}_1,\ldots,\mathcal{L}_m\}$.
If $\mathcal{L}$ is any simple loop class, then $f_{\mathcal{L}}(\alpha)=0$ for precisely one value of $\alpha$, and $f_{\mathcal{L}}(\alpha)=-\infty$ otherwise.
Since the essential loop class and any simple loop class are irreducible, by \cref{r:ess-unique} we have
\begin{equation*}
f_\mu(\alpha)=\max\{f_{\mathcal{L}_1}(\alpha),\ldots,f_{\mathcal{L}_m}(\alpha)\}.
\end{equation*}
Thus the result follows by \cref{c:ir-de}.
\end{proof}
\subsection{A family of examples of Testud}\label{ss:tes-ex}
Let $\ell\geq 2$ be a positive integer.
Let $P,N\subseteq\{0,1,\ldots,\ell-1\}$ where $\{0,\ell-1\}\subseteq P\cup N$.
Let $\mathcal{I}=P\times\{1\}\cup N\times\{-1\}$ and for $(i,\pm 1)\in\mathcal{I}$, define
\begin{align*}
S_{(i,1)}(x) &= \frac{x}{\ell}+\frac{i}{\ell} & S_{(i,-1)}(x) &= -\frac{x}{\ell}+\frac{i+1}{\ell}.
\end{align*}
In this subsection, we study the multifractal theory of the IFS $\{S_{\underline{i}}\}_{\underline{i}\in\mathcal{I}}$.
This family of IFS was studied in \cite{tes2006a} and \cite{os2008} under the assumption that $P=\{0,1,\ldots,\ell-1\}$.
We do not require this assumption in our analysis.
Fix the iteration rule $\Phi$ from \cref{ex:uniform-transition}.
Write $V=\{v_{1},v_{-1},v_{\pm 1}\}$ where $v_{1}=\{x\mapsto x\}$, $v_{-1}=\{x\mapsto -x+1\}$ and $v_{\pm 1}=v_{1}\cup v_{-1}$.
Since the images $S_{(i,\pm 1)}((0,1))$ are either disjoint or coincide exactly,
\begin{equation*}
\mathcal{P}_n = \{S_\sigma([0,1]):\sigma\in\mathcal{I}^n\}.
\end{equation*}
In particular, if $\Delta\in\mathcal{P}$ is any net interval, then $\vs(\Delta)\in V$.
Thus $(S_{\underline{i}})_{\underline{i}\in\mathcal{I}}$ satisfies the $\Phi$-FNC.
Note that $v_1=\vroot\in V(\mathcal{G})$.
If $P\cap N=\emptyset$, then the IFS satisfies the open set condition with respect to the open interval $(0,1)$.
Otherwise, there exists some index $i$ such that $(i,1)$ and $(i,-1)$ are both in $\mathcal{I}$, so that $v_{\pm 1}$ is a neighbour set in $V$.
For the remainder of this section, we will assume that this is the case.
The open set condition may hold even when $P\cap N\neq\emptyset$ with respect to an open set that is not an interval (take, for example, $\ell=4$, $P=\{0,1,3\}$, and $N=\{1\}$), but for simplicity we omit this discussion.
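To illustrate the overlap structure for a concrete (hypothetical) parameter choice, say $\ell=3$, $P=\{0,1,2\}$, $N=\{1\}$, the following sketch checks in exact arithmetic that the first-level images $S_{(i,\pm 1)}([0,1])$ are pairwise either disjoint in their interiors or exactly coincident:

```python
from fractions import Fraction
from itertools import combinations

# Hypothetical parameters (illustration only): ell = 3, P = {0,1,2}, N = {1}.
ell = 3
index_set = [(0, 1), (1, 1), (2, 1), (1, -1)]

def image(i, sign):
    # S_{(i,1)}(x) = x/ell + i/ell and S_{(i,-1)}(x) = -x/ell + (i+1)/ell
    # both map [0,1] onto [i/ell, (i+1)/ell].
    return (Fraction(i, ell), Fraction(i + 1, ell))

images = {idx: image(*idx) for idx in index_set}

# S_{(1,1)} and S_{(1,-1)} have identical images; every other pair of
# images has disjoint interiors.
assert images[(1, 1)] == images[(1, -1)]
for a, b in combinations(index_set, 2):
    ia, ib = images[a], images[b]
    assert ia == ib or min(ia[1], ib[1]) <= max(ia[0], ib[0])
```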
\subsubsection{Properties of the transition graph}
We begin with a description of the transition graph $\mathcal{G}$.
\begin{proposition}
Suppose $P\cap N\neq\emptyset$.
There is a unique essential loop class $\mathcal{G}_{\ess}$, and $v_{\pm 1}\in V(\mathcal{G}_{\ess})$.
Moreover, exactly one of the following holds:
\begin{enumerate}[nl,r]
\item We have $P=N$.
Then $\mathcal{G}_{\ess}$ is the only loop class and $V(\mathcal{G}_{\ess})=\{v_{\pm 1}\}$.
\item There is some $i$ such that $i\in P\setminus N$ and $\ell-1-i\notin P$, or some $i\in N\setminus P$ with $\ell-1-i\notin N$.
Then $\mathcal{G}_{\ess}$ is the only loop class and $V(\mathcal{G}_{\ess})=V(\mathcal{G})=\{v_{1},v_{-1},v_{\pm 1}\}$.
\item Otherwise, there is exactly one non-essential loop class $\mathcal{L}$.
In this case, if $P\setminus N\neq\emptyset$, then $v_1\in V(\mathcal{L})$, and if $N\setminus P\neq\emptyset$, then $v_{-1}\in V(\mathcal{L})$.
\end{enumerate}
\end{proposition}
\begin{proof}
If $i\in P\cap N$, then $\{(i,1),(i,-1)\}\subset\mathcal{I}$ so that $S_{(i,1)}([0,1])=S_{(i,-1)}([0,1])=\Delta$ is a net interval with neighbour set $v_{\pm 1}$.
This neighbour set is essential since if $\Delta=S_\sigma([0,1])$ is any net interval, then $S_{\sigma(i,1)}([0,1])$ is a net interval with neighbour set $v_{\pm 1}$.
It is clear that exactly one of the conditions must hold.
We verify corresponding properties of the transition graph $\mathcal{G}$.
\begin{enumerate}[nl,r]
\item If $P=N$, then any net interval $\Delta\in\mathcal{P}_1$ has $\vs(\Delta)=v_{\pm 1}$.
Thus every outgoing edge from $\vroot$ ends at the vertex $v_{\pm 1}$.
\item Suppose there is some $i\in P\setminus N$ with $\ell-1-i\notin P$.
Let $\Delta=S_\sigma([0,1])\in\mathcal{P}_n$ have $\vs(\Delta)=v_{\pm 1}$, and let $\tau$ with $r_\tau<0$ be such that $S_\tau([0,1])=\Delta$.
Then $S_{\sigma(i,1)}([0,1])$ is a net interval with neighbour set $v_1$ and $S_{\tau(i,1)}([0,1])$ is a net interval with neighbour set $v_{-1}$.
The other case follows similarly.
\item Finally, suppose (i) and (ii) do not hold.
Let $S_\sigma([0,1])=\Delta$ be a net interval with $\vs(\Delta)=v_{\pm 1}$, and suppose $r_\sigma>0$, and $\tau$ has $r_\tau<0$ and $S_\tau([0,1])=\Delta$ as well.
Suppose $i\in P$ so that $\Delta'=S_{\sigma(i,1)}([0,1])$ is a child of $\Delta$.
Then negating the condition (ii), we either have $i\in N$ (and $\Delta'$ has neighbours generated by $\sigma(i,1)$ and $\sigma(i,-1)$) or $\ell-1-i\in P$ (and $\Delta'$ has neighbours generated by $\sigma(i,1)$ and $\tau(i,1)$), so $\Delta'$ has neighbour set $v_{\pm 1}$.
The other case $i\in N$, or the cases where $\Delta'=S_{\tau(i,\pm 1)}([0,1])$, follow similarly.
Thus $V(\mathcal{G}_{\ess})=\{v_{\pm 1}\}$.
Since $P\neq N$, if $P\setminus N\neq\emptyset$, there is an edge from $\vroot=v_1$ to $v_1$ and if $N\setminus P\neq\emptyset$, there are edges from $v_1$ to $v_{-1}$ and $v_{-1}$ to $v_1$.
Thus the claim follows.
\end{enumerate}
\end{proof}
We can now observe the following result.
\begin{lemma}\label{l:tes-ir-dec}
With any choice of probabilities, the transition graph $\mathcal{G}$ is irreducible and decomposable.
\end{lemma}
\begin{proof}
The essential loop class $\mathcal{G}_{\ess}$ is always irreducible by \cref{l:ess-irred}.
If there is a loop class $\mathcal{L}$, we observe that either $V(\mathcal{L})$ consists of a single vertex or $V(\mathcal{L})=\{v_1,v_{-1}\}$ and there are edges joining $v_1$ and $v_{-1}$ and $v_{-1}$ and $v_1$.
Since the neighbour sets $v_1$ and $v_{-1}$ have cardinality one, irreducibility follows by \cref{l:irred}.
Decomposability follows directly from \cref{l:size-one-loops}.
\end{proof}
\subsubsection{Multifractal properties of associated measures}
We can compute formulas for the loop class $L^q$-spectra.
\begin{proposition}
\begin{enumerate}[nl,r]
\item Suppose $\mathcal{L}$ is a non-essential loop class.
Let $\mathcal{J}=(P\setminus N)\cup (N\setminus P)$, and for $j\in\mathcal{J}$, write $p_j=p_{(j,1)}$ if $j\in P\setminus N$, and $p_j=p_{(j,-1)}$ if $j\in N\setminus P$.
Then
\begin{equation*}
\tau_{\mathcal{L}}(q)=\frac{\log\sum_{j\in\mathcal{J}}p_j^q}{-\log \ell}.
\end{equation*}
\item Let $T(x)=1-x$.
Then with $\nu=\mu+\mu\circ T$,
\begin{equation*}
\tau_{\mathcal{G}_{\ess}}(q)=\tau_\nu(q).
\end{equation*}
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}[nl,r]
\item Observe that there is a bijection between paths in $\Omega^n_{\mathcal{L}}$ and words in $\mathcal{J}^n$.
Moreover, if $\eta\in\Omega^n_{\mathcal{L}}$ has corresponding sequence $(j_1,\ldots,j_n)\in\mathcal{J}^n$, a direct computation gives that $\rho(\eta)=\norm{T(\eta)}=p_{j_1}\cdots p_{j_n}$.
Thus since $\vroot=v_1\in V(\mathcal{L})$,
\begin{align*}
\tau_{\mathcal{L}}(q)=\tau_{\mathcal{L},\emptyset}(q)&= \lim_{n\to\infty}\frac{\log\sum_{(j_1,\ldots,j_n)\in\mathcal{J}^n}(p_{j_1}\cdots p_{j_n})^q}{-n\log\ell}\\
&= \lim_{n\to\infty}\frac{\log\left(\sum_{j\in\mathcal{J}}p_j^q\right)^n}{-n\log\ell}\\
&= \frac{\log \sum_{j\in\mathcal{J}}p_j^q}{-\log\ell}
\end{align*}
as claimed.
\item Let $\Delta\in\mathcal{P}_1$ have $\vs(\Delta)=v_{\pm 1}$, say $\Delta=S_{(i,1)}([0,1])=S_{(i,-1)}([0,1])$ for some $i\in P\cap N$.
By \cref{p:ess-formula}, $\tau_{\mathcal{G}_{\ess}}(q)=\tau_{\mu|_\Delta}(q)$.
But for any Borel set $E\subseteq[0,1]$, since $\vs(\Delta)=\{\id, T\}$, we have by \cref{e:qi-formula}
\begin{align*}
\mu(S_{(i,1)}(E)) &= \mu(E)p_{(i,1)}+\mu\circ T(E) p_{(i,-1)}\asymp \nu(E).
\end{align*}
Thus $\tau_{\mathcal{G}_{\ess}}(q)=\tau_\nu(q)$ for any $q\in\R$.
\end{enumerate}
\end{proof}
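The closed form for $\tau_{\mathcal{L}}$ in part (i) can be sanity-checked numerically: since the level-$n$ sum factorizes, the finite-$n$ quotients already agree with the limit. A sketch with a hypothetical parameter choice (not one analysed in the text):

```python
import math
from itertools import product

# Hypothetical data: ell = 3, J = {0, 2}, weights p_0 = 0.6, p_2 = 0.4.
ell = 3
p = {0: 0.6, 2: 0.4}

def tau_closed(q):
    # Closed form: log(sum_j p_j^q) / (-log ell).
    return math.log(sum(pj ** q for pj in p.values())) / (-math.log(ell))

def tau_level(q, n):
    # log of sum over J^n of (p_{j_1} ... p_{j_n})^q, normalized by -n log ell.
    total = sum(math.prod(p[j] for j in w) ** q for w in product(p, repeat=n))
    return math.log(total) / (-n * math.log(ell))

for q in (-2.0, -0.5, 0.0, 1.0, 3.0):
    # the factorization makes the level-n quotient exact for every n
    assert abs(tau_level(q, 6) - tau_closed(q)) < 1e-9
```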
We now observe the following conclusion.
\begin{theorem}
If there is no non-essential loop class, then $\mu$ satisfies the multifractal formalism.
Otherwise, there is a single non-essential loop class $\mathcal{L}$.
Then
\begin{align*}
\tau_\mu(q)&=\min\{\tau_{\mathcal{L}}(q),\tau_{\mathcal{G}_{\ess}}(q)\}\\
f_\mu(\alpha) &= \max\{\tau_{\mathcal{L}}^*(\alpha),\tau_{\mathcal{G}_{\ess}}^*(\alpha)\}.
\end{align*}
\end{theorem}
\begin{proof}
This follows directly from \cref{l:tes-ir-dec} by the general results \cref{c:m-spectrum} and \cref{t:lq-lower-bound}.
\end{proof}
\subsection{Bernoulli convolutions with Pisot contractions}\label{ss:bconv-Pisot}
\subsubsection{Simple Pisot contractions}
A simple Pisot number is the unique positive real root of a polynomial
\begin{equation*}
p_k(x)=x^k-x^{k-1}-\cdots-x-1
\end{equation*}
for some $k\geq 2$.
We denote this number by $r_k$.
Naturally, $r_k$ is a \emph{Pisot number}: a real algebraic integer strictly greater than $1$ whose Galois conjugates all have modulus strictly less than 1.
Note that $r_2=(\sqrt{5}+1)/2$ is the Golden ratio, and $1<r_2<r_3<r_4<\cdots<2$.
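The numbers $r_k$ are easily computed by bisection, since $p_k(1)=1-k<0$ and $p_k(2)=1>0$. A quick check (of standard facts, not results of this paper) confirms that $r_2$ is the golden ratio and that the sequence increases toward 2:

```python
import math

def simple_pisot(k, tol=1e-12):
    # Unique root in (1, 2) of p_k(x) = x^k - x^{k-1} - ... - x - 1,
    # found by bisection; p_k(1) = 1 - k < 0 and p_k(2) = 1 > 0.
    def p(x):
        return x ** k - sum(x ** j for j in range(k))
    lo, hi = 1.0, 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if p(mid) > 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

r = [simple_pisot(k) for k in range(2, 7)]
assert abs(r[0] - (1 + math.sqrt(5)) / 2) < 1e-9  # golden ratio
assert all(a < b for a, b in zip(r, r[1:])) and r[-1] < 2
```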
We are interested in the (possibly biased) Bernoulli convolution associated with the parameter $\lambda=1/r_k$, which we view as a self-similar measure associated with the IFS
\begin{align*}
S_1(x)&=\lambda x & S_2(x) &= \lambda x+(1-\lambda)
\end{align*}
and probabilities $p_1,p_2>0$ with $p_1+p_2=1$.
It is known (since at least \cite{nw2001}) that the IFS $(S_i)_{i=1,2}$ satisfies the finite type condition, and thus satisfies the finite neighbour condition with respect to the iteration rule from \cref{ex:uniform-transition}.
In \cite{fen2005}, Feng proved, with probabilities $p_1=p_2=1/2$, that the associated self-similar measure satisfies the multifractal formalism.
Here, we show how this result can be obtained as a special case of our general results.
Fix any probabilities $p_1,p_2>0$ with $p_1+p_2=1$.
We first obtain basic results on the structure of the transition graph $\mathcal{G}$ and some information on sets of local dimensions.
\begin{proposition}\label{p:s-b-set}
The transition graph $\mathcal{G}$ has a unique essential loop class $\mathcal{G}_{\ess}$ and two non-essential simple loop classes $\mathcal{L}_1$ and $\mathcal{L}_2$.
Both loop classes $\mathcal{L}_1$ and $\mathcal{L}_2$ have a single vertex which is a neighbour set which has cardinality one.
Each $\Omega_{\mathcal{L}_i}^\infty$ consists of a single path $\gamma_i$, where $\pi(\gamma_1)=0$ and $\pi(\gamma_2)=1$, and
\begin{equation}\label{e:loc-dim-formula}
\begin{aligned}
\dim_{\loc}(\mu,0)&=\dim_{\loc}(\rho,\gamma_1)=\frac{\log p_1}{\log\lambda}\\
\dim_{\loc}(\mu,1) &= \dim_{\loc}(\rho,\gamma_2)=\frac{\log p_2}{\log\lambda}.
\end{aligned}
\end{equation}
Moreover, there exists a $\gamma\in\Omega^\infty_{\mathcal{G}_{\ess}}$ such that
\begin{equation*}
\dim_{\loc}(\mu,\pi(\gamma))=\frac{\log p_1p_2}{2\log \lambda}.
\end{equation*}
\end{proposition}
\begin{proof}
We will assume that $k\geq 3$; the case $k=2$ is similar, but easier (in fact, full details of the computation are given in \cite[Sec. 5.1]{hr2021}).
By a direct computation, the part of the graph $\mathcal{G}$ spanned by $\Omega^2$ is given in \cref{f:gm-graph}, along with the net intervals in $\mathcal{P}_2$ drawn in \cref{f:netiv-diag}.
The net intervals labelled $\Delta_i$ for $i=1,2,3$ have neighbour sets $\vs(\Delta_i)=v_i$, which are the labelled vertices in the partial transition graph.
We can now see that the leftmost child of $\Delta_1$ has neighbour set $v_2$, and the corresponding edge $e_{12}$ has $T(e_{12})=\begin{pmatrix}a p_1\end{pmatrix}$ and $W(e_{12})=\lambda$ for some constant $a>0$.
Similarly, there is an edge $e_{21}$ from $v_2$ to $v_1$ with $T(e_{21})=\begin{pmatrix}a^{-1} p_2\end{pmatrix}$ and $W(e_{21})=\lambda$.
From here, a straightforward induction argument (using the fact that $\lambda^k+\lambda^{k-1}+\cdots+\lambda-1=0$) yields that, in fact, $v_1,v_2,v_3$ are vertices in a unique essential loop class $\mathcal{G}_{\ess}$, and the cycles labelled as $\mathcal{L}_1$ and $\mathcal{L}_2$ indeed make up simple loop classes.
Since the edge $e_1$ (resp. $e_2$) corresponds to the left-most (resp. right-most) child of the base net interval $[0,1]$ and $\vroot$ is not in any loop class, it follows for $i=1,2$ that $\Omega_{\mathcal{L}_i}^\infty$ consists of a single path $\gamma_i=(e_i',e_i,e_i,\ldots)$ with $\pi(\gamma_1)=0$ and $\pi(\gamma_2)=1$.
Moreover, since $T(e_i)=\begin{pmatrix}p_i\end{pmatrix}$ and $W(e_i)=\lambda$, $\norm{T(\gamma_i|n)}\asymp p_i^{n}$ and thus \cref{e:loc-dim-formula} holds.
Now since $\theta=(e_{12},e_{21})$ is a cycle and an interior path in $\mathcal{G}_{\ess}$, let $\gamma$ denote any path of the form $\gamma_0\theta\theta\ldots$, so $\gamma\in\Omega^\infty_{\mathcal{G}_{\ess}}$ and by \cref{l:left-prod}
\begin{equation*}
\dim_{\loc}(\mu,\pi(\gamma)) = \dim_{\loc}(\rho,\gamma)=\frac{\log p_1p_2}{2\log\lambda}
\end{equation*}
as claimed.
\end{proof}
\begin{figure}[ht]
\input{figures/simple_pisot}
\caption{Partial transition graph for the simple Pisot Bernoulli convolution}
\label{f:gm-graph}
\end{figure}
\begin{figure}[ht]
\input{figures/simple_pisot_intervals}
\caption{Net intervals in $\mathcal{P}_2$ for the simple Pisot Bernoulli convolution}
\label{f:netiv-diag}
\end{figure}
\begin{theorem}\label{t:simple-Pisot-mf}
Let $\mu$ be the Bernoulli convolution associated with the Pisot number $r_k$.
Then $\mu$ satisfies the multifractal formalism if and only if $p_1=p_2=1/2$.
\end{theorem}
\begin{proof}
It follows from a general observation in \cite[Thm. 3.1]{hh2019} that if $p_1\neq 1/2$, then the set of attainable local dimensions of $\mu$ is not a closed interval (this holds for any overlapping biased Bernoulli convolution, with no separation assumptions).
Thus $\mu$ does not satisfy the multifractal formalism.
Conversely, when $p_1=p_2=1/2$, it follows from \cref{p:s-b-set} that the set of local dimensions is a closed interval.
The IFS is decomposable by \cref{l:size-one-loops}, so by \cref{c:loc-dim-set}, $\mu$ satisfies the multifractal formalism.
\end{proof}
\subsubsection{Other Pisot contractions}
More generally, we can take $r\in(1,2)$ to be any Pisot number.
Let $\mu$ be the Bernoulli convolution with parameter $\lambda=1/r$ associated with probabilities $p_1$ and $p_2$.
We have the following result.
\begin{theorem}
Suppose $r$ is the Pisot number which is the unique positive real root of any of the polynomials below:
\begin{itemize}[nl]
\item $x^3-2x^2+x-1$.
\item $x^4-x^3-2x^2+1$.
\item $x^4-2x^3+x-1$.
\end{itemize}
Let $\mathcal{G}$ be the transition graph associated with the Bernoulli convolution with parameter $\lambda=1/r$.
Then $\mathcal{G}$ has one essential loop class $\mathcal{G}_{\ess}$ and two simple loop classes $\mathcal{L}_1$ and $\mathcal{L}_2$, each of which has a single vertex which is a neighbour set of size one.
Moreover, the set of local dimensions is a closed interval with right endpoint $\log 2/\log r$ when $p_1=p_2=1/2$.
In particular, $\mu$ satisfies the multifractal formalism if and only if $p_1=p_2=1/2$.
\end{theorem}
\begin{proof}
This follows by a direct computation, preferably with the aid of a computer: the net intervals in $\mathcal{P}_2$ have the same relative placement and the corresponding transition matrices are the same as given in \cref{p:s-b-set}.
Thus the conclusion follows by the same argument as \cref{t:simple-Pisot-mf}.
\end{proof}
\subsection{A family of non-equicontractive examples}\label{ss:non-e}
Fix parameters $\lambda_1,\lambda_2>0$ and consider the IFS given by
\begin{align}\label{e:wifs-1}
S_1(x) &= \lambda_1 x & S_2(x) &= \lambda_2 x +\lambda_1(1-\lambda_2) & S_3(x) &= \lambda_2 x+(1-\lambda_2)
\end{align}
where $\lambda_1+2\lambda_2-\lambda_1\lambda_2\leq 1$.
Note that the case $\lambda_1=\lambda_2=1/3$ is discussed in \cref{e:gen-ifs}.
This IFS was first introduced in \cite[Prop. 4.3]{lw2004}, and the multifractal analysis of this measure was studied extensively in \cite{dn2017,rut2021}.
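As a quick illustration (with sample parameter values of our own choosing, not taken from the cited papers), the overlap structure of \cref{e:wifs-1} can be checked directly: the words $13$ and $21$ induce the same affine map, i.e.\ $S_1\circ S_3=S_2\circ S_1$, while the constraint $\lambda_1+2\lambda_2-\lambda_1\lambda_2\leq 1$ keeps $S_2([0,1])$ weakly to the left of $S_3([0,1])$.

```python
# Illustration with sample parameters of our own choosing (not from the
# cited papers): the IFS of the displayed family has an exact overlap.

def make_ifs(l1, l2):
    S1 = lambda x: l1 * x
    S2 = lambda x: l2 * x + l1 * (1 - l2)
    S3 = lambda x: l2 * x + (1 - l2)
    return S1, S2, S3

l1, l2 = 0.4, 0.25                    # satisfies l1 + 2*l2 - l1*l2 <= 1
assert l1 + 2 * l2 - l1 * l2 <= 1
S1, S2, S3 = make_ifs(l1, l2)

# exact overlap: the words 13 and 21 induce the same affine map,
# namely x -> l1*l2*x + l1*(1 - l2)
for x in (0.0, 0.3, 0.7, 1.0):
    assert abs(S1(S3(x)) - S2(S1(x))) < 1e-12

# S_1([0,1]) and S_2([0,1]) overlap; S_3([0,1]) sits weakly to the right
assert 0 == S1(0) < S2(0) < S1(1) < S2(1) <= S3(0)
```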
The IFS in \cref{e:wifs-1} is a special case of the following general construction.
Fix parameters $\lambda_1,\lambda_2>0$ and some $k\in\N$, and for $j\in\{0,1,\ldots,k\}$ let $\beta_j=\lambda_1\cdot (\lambda_2/\lambda_1)^{j}$.
Then consider the IFS given by the $k+2$ maps
\begin{equation}\label{e:w-ifs}
\begin{aligned}
S_0(x) &= \lambda_1 x\\
S_i(x) &= \beta_i x+\sum_{j=1}^i\beta_{j-1}(1-\beta_j)\text{ for each }i\in\{1,\ldots,k\}\\
S_{k+1}(x) &= \lambda_2 x + (1-\lambda_2)
\end{aligned}
\end{equation}
under the constraint $S_k(1)+\lambda_2\leq 1$.
This IFS coincides with \cref{e:wifs-1} when $k=1$, and coincides with \cite[Ex. 8.5]{dn2017} when $k=2$.
The author proved in \cite[Thm. 5.7]{rut2021} that any self-similar measure associated with the IFS \cref{e:wifs-1} satisfies the multifractal formalism.
However, the proof in that paper is complicated by the use of the iteration rule given in \cref{ex:weighted-transition}.
If we instead take the iteration rule from \cref{ex:uniform-transition} with corresponding transition graph $\mathcal{G}$, the situation is much more straightforward, even with our general setup.
\begin{proposition}\label{p:w-ifs-graph}
The transition graph $\mathcal{G}$ is strongly connected.
\end{proposition}
\begin{proof}
The definition of the IFS $(S_i)_{i=0}^{k+1}$ ensures for each $i=1,\ldots,k$ that
\begin{equation}\label{e:overlap-exact}
S_{i-1}\circ S_{k+1}=S_0\circ S_i\text{ and }S_{i-1}(0)<S_i(0)<S_{i-1}(1)<S_i(1),
\end{equation}
and by assumption $S_{k}(1)\leq S_{k+1}(0)$.
Thus the net intervals in $\mathcal{P}_1$ are the intervals
\begin{align*}
\Delta_0 &= [0,S_1(0)] & \Delta_k&=[S_{k-1}(1),S_k(1)] & \Delta_{k+1} &= S_{k+1}([0,1])
\end{align*}
and
\begin{align*}
\Delta_{i,i+1} &= S_i([0,1])\cap S_{i+1}([0,1]) = [S_{i+1}(0),S_i(1)] \text{ for }i=0,1,\ldots,k-1\\
\Delta_i &= [S_{i-1}(1),S_{i+1}(0)]\text{ for }i=1,\ldots,k-1
\end{align*}
which are ordered from left to right as $(\Delta_0,\Delta_{0,1},\Delta_1,\ldots,\Delta_{k-1,k},\Delta_k,\Delta_{k+1})$.
Note that $\vroot=\vs(\Delta_{k+1})$, and set $v_i=\vs(\Delta_i)$ for $i=0,\ldots,k$ and $v_{i,i+1}=\vs(\Delta_{i,i+1})$ for $i=0,\ldots,k-1$.
Set $V=\{\vroot\}\cup \{v_i:i=0,\ldots,k\}\cup\{v_{i,i+1}:i=0,\ldots,k-1\}$.
It follows from \cref{e:overlap-exact} that the net intervals in $\mathcal{P}_2$ contained in $S_i([0,1])$ for all $0\leq i\leq k+1$ are just the intervals $S_i(\Delta_j)$ and $S_i(\Delta_{j,j+1})$ with $\vs(S_i(\Delta_j))=v_j$ and $\vs(S_i(\Delta_{j,j+1}))=v_{j,j+1}$ for all valid $j$.
Tracking inclusion of these net intervals in the net intervals in $\mathcal{P}_1$ yields the graph $\mathcal{G}'$ with (unlabelled) edges given by
\begin{itemize}[nl]
\item $(\vroot, v) \text{ for all }v\in V$.
\item $(v_0,v) \text{ for all }v\in V\setminus\{\vroot\}$.
\item $(v_k,v) \text{ for all }v\in V\setminus\{v_0,v_{0,1}\}$.
\item $(v_i,v) \text{ for all }v\in V\setminus\{v_0,v_{0,1},\vroot\}\text{ and }i\in\{1,\ldots,k-1\}$.
\item $(v_{i-1,i}, v)\text{ for all }v\in\{v_0,v_{0,1}\}\text{ and }i\in\{1,\ldots,k\}$.
\end{itemize}
In particular, we observe that $\mathcal{G}'$ is strongly connected.
Note that, for certain choices of $k,\lambda_1,\lambda_2$, the list $V$ of neighbour sets given above may include repetitions.
In any case, the transition graph $\mathcal{G}$ is given by identifying vertices in $\mathcal{G}'$ corresponding to the same neighbour set, so $\mathcal{G}$ is strongly connected.
\end{proof}
\begin{theorem}
Let $\mu$ be any self-similar measure associated with the IFS $(S_i)_{i=0}^{k+1}$ from \cref{e:w-ifs}.
Then $\mu$ satisfies the multifractal formalism.
\end{theorem}
\begin{proof}
This is immediate from \cref{p:w-ifs-graph} and \cref{c:one-loop}.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
In a graph $G$ a \emph{hole} is a chordless cycle of length at least four. A
hole is \emph{even} or \emph{odd} depending on the parity of the size of its
vertex set. An \emph{$n$-hole} is a hole on $n$ vertices. A graph $G$
\emph{contains} a graph $F$, if $F$ is isomorphic to an induced subgraph of
$G$. $G$ is \emph{$F$-free} if it does not contain $F$, and for a family of
graphs $\mathcal{F}$, $G$ is \emph{$\mathcal{F}$-free} if for every $F\in \mathcal{F}$, $G$ does not
contain $F$. A \emph{diamond} is the graph obtained by removing one edge from
a complete graph on four vertices. In this paper we study (diamond, even
hole)-free graphs.
Even-hole-free graphs have been studied considerably in the last two decades
(see surveys \cite{kv-survey1, kv-survey2}), and yet some of the key
algorithmic questions remain open for this class. Finding a largest (weighted)
clique in an even-hole-free graph can be done in polynomial time. As observed
by Farber \cite{farber}, 4-hole-free graphs have $\mathcal{O}(n^2)$ maximal cliques,
so one can list them all in polynomial time. One can do better for
even-hole-free graphs, by exploiting structural properties of the class. In
\cite{daSV} it is shown that every even-hole-free graph has a vertex whose
neighbourhood is \emph{chordal} (i.e.\ hole-free), and in \cite{actv} it is
shown that an ordering of such vertices can be found using LexBFS, resulting
in an $\mathcal{O}(nm)$-time algorithm for maximum weighted clique problem for
even-hole-free graphs. This algorithm is in fact robust: for any input graph
$G$, it either outputs a maximum weighted clique of $G$ or a certificate that
$G$ is not even-hole-free. Even-hole-free graphs can be recognized in
polynomial time, as first shown in \cite{cckv-ehfrecognition}, with currently
best complexity of $\mathcal{O}(n^{11})$ \cite{cl}. This result is based on a
decomposition theorem for even-hole-free graphs from \cite{daSVehfdecomp} that
states that every even-hole-free graph is either simple in some sense, or has
a star cutset or a 2-join. In \cite{achrs} it is shown that every
even-hole-free graph $G$ has a vertex whose neighborhood is a union of two
(possibly empty) cliques, implying that $\chi(G) \le 2\omega(G)-1$. Despite
all these attempts to understand the structure of even-hole-free graphs, the
complexity of the stable set and coloring problems remains open for this
class.
For several subclasses of even-hole-free graphs these problems are solved in
polynomial time. Of particular interest is the class of (diamond, even
hole)-free graphs. The class was first studied in \cite{kmv} where it was
shown that (diamond, even hole)-free graphs can be decomposed by bisimplicial
cutsets (a special type of a star cutset that consists of two, possibly empty,
cliques) and 2-joins. One of the consequences of this decomposition theorem is
the existence of a vertex that is either of degree 2 or is simplicial (i.e.,
its neighborhood is a clique), implying that the class is $\beta$-perfect, and
for every graph $G$ in the class $\chi(G) \le \omega(G)+1$. The
$\beta$-perfection implies that the class can be colored in polynomial time by
coloring greedily on a particular, easily constructible, ordering of vertices.
The complexity of the stable set problem remains open for this class.
One of the motivations for the study of even-hole-free graphs is their
connection to $\beta$-perfect graphs introduced by Markossian, Gasparian and
Reed \cite{mgr}. For a graph $G$, let $\delta(G)$ denote the minimum degree
of a vertex of $G$. Consider the following total order on $V(G)$: order the
vertices by repeatedly removing a vertex of minimum degree in the subgraph of
vertices not yet chosen and placing it after all the remaining vertices but
before all the vertices already removed. Coloring greedily on this order
gives the upper bound: $\chi(G) \le \beta(G)$, where $\beta(G) =
\max\{\delta(H)+1 : H \text{~is an induced subgraph of~} G\}$. A graph is
\emph{$\beta$-perfect} if for each induced subgraph $H$ of $G$,
$\chi(H)=\beta(H)$. It is easy to see that $\beta$-perfect graphs are a
proper subclass of even-hole-free graphs.
\emph{Tree-width} is a well-known graph invariant, introduced by Robertson and
Seymour in~\cite{RS-GM02}. Many problems that are NP-hard in general become
tractable on graph classes of bounded tree-width~\cite{Courcelle90}.
Similarly, \emph{clique-width}, introduced by Courcelle, Engelfriet and
Rozenberg in~\cite{CER-93}, allows for many hard problems to become tractable
on graph classes of bounded clique-width~\cite{CMR00}. This includes finding
the largest clique or independent set, and deciding if a colouring with at
most $k$ colors exists (for fixed $k\in \mathbb N$). While bounded tree-width
implies bounded clique-width, the converse is not true in general. Graph
classes of bounded tree-width are necessarily sparse. In contrast, there
exist dense graph classes with bounded clique-width. This makes clique-width
particularly interesting in the study of algorithmic properties of hereditary
graph classes. The notion of \emph{rank-width} was defined by Oum and Seymour
in~\cite{OS-rw}, where they use it for an approximation algorithm for
clique-width. They also show that rank-width and clique-width are equivalent,
in the sense that a graph class has bounded rank-width if, and only if, it has
bounded clique-width. Meanwhile, the structure of graphs of bounded
rank-width is studied widely, and it turns out that rank-width is an elegant
notion, that also provides a better understanding of graph classes of bounded
clique-width.
Rank-width of subclasses of even-hole-free graphs has also been studied. In
\cite{dsl} it is shown that planar even-hole-free graphs have tree-width at
most 49. In \cite{kl} it is shown that even-hole-free graphs with no star
cutset have bounded rank-width. Even-hole-free graphs in general do not have
bounded tree-, clique-, rank-width, as they contain all chordal graphs.
Algorithms for chordal graphs follow from their decomposition by clique
cutsets, and clique cutsets in general agree well with a number of problems,
including stable set and coloring. An example of even-hole-free graphs with no
clique cutset and unbounded rank-width is given in \cite{kl}, which is a
slight modification of the class of permutation graphs introduced in
\cite{gr}. In \cite{tk} Kloks claims a proof of the fact that (diamond, even
hole)-free graphs can be decomposed by clique cutsets into graphs of bounded
clique-width. In this paper we exhibit a class of (diamond, even hole)-free
graphs with no clique cutset that has unbounded rank-width (and hence
unbounded clique-width), thereby disproving Kloks' claim.
Another interesting subclass of even-hole-free graphs is the class of (cap,
even hole)-free graphs, where a \emph{cap} is a graph made of a hole and a
vertex that has exactly two neighbors on this hole, which are furthermore
adjacent. Cap-free graphs in general are decomposed by amalgams in
\cite{cckv-cap}. Recently, Conforti, Gerards and Pashkovich \cite{cgp}, show
how to obtain a polynomial-time algorithm for solving the maximum weighted
stable set problem on any class of graphs that is decomposable by amalgams
into basic graphs for which one can solve the maximum weighted stable set
problem in polynomial time. This leads to a polynomial-time algorithm for
solving the maximum weighted stable set problem for (cap, even hole)-free
graphs. Subsequently, Cameron, da Silva, Huang and Vu\v{s}kovi\'c \cite{cshv}
give an explicit construction of (cap, even hole)-free graphs, which is then
used to show that (triangle, even hole)-free graphs have tree-width at most 5,
and that (cap, even hole)-free graphs with no clique cutset have clique-width
at most 48 (and hence bounded rank-width). This implies that a number of
problems can be solved efficiently on this class, and in particular the class
can be colored in polynomial time.
\section{Preliminaries}
Graphs are finite, simple and undirected unless stated otherwise. The vertex
set of a graph $G$ is denoted by $V(G)$ and the edge set by $E(G)$. A graph
$H$ is a \emph{subgraph} of a graph $G$, denoted by $H\subseteq G$, if
$V(H)\subseteq V(G)$ and $E(H)\subseteq E(G)$. For a graph $G$ and a subset
$X\subseteq V(G)$, we let $G[X]$ denote the subgraph of $G$ \emph{induced} by
$X$, i.e.\ $G[X]$ has vertex set $X$, and $E(G[X])$ consists of the edges of
$G$ that have both ends in $X$. A graph $H\subseteq G$ is an \emph{induced
subgraph }of $G$, if $H=G[X]$ for some $X\subseteq V(G)$. Moreover, we let
$G\setminus X:= G[V(G)\setminus X]$. The set $X$ is a \emph{clique}, if $G[X]$
contains all possible edges. If $G$ is connected, $X$ is called a
\emph{clique cutset} if $X$ is a clique and $G\setminus X$ is disconnected.
A \emph{tree} is a connected, acyclic graph. A \emph{leaf} of a tree is a node
incident to exactly one edge. For a tree $T$, we let $L(T)$ denote the set of
all leaves of $T$. A tree node that is not a leaf is called \emph{internal}.
A tree is \emph{cubic}, if it has at least two vertices and every internal
node has degree $3$. A \emph{path} is a tree where every node has degree at
most $2$. The (at most $2$) leaves of a path $P$ are called
\emph{end-vertices} of $P$. A \emph{$u,v$-path} is a path with end-vertices
$u$ and $v$. A graph $P$ is a \emph{subpath} of a graph $G$, if $P$ is a path
and $P\subseteq G$.
For a set $X$, let $2^X$ denote the set of all subsets of $X$. For sets $R$
and $C$, an \emph{$(R,C)$-matrix} is a matrix where the rows are indexed by
elements in $R$ and columns indexed by elements in $C$. For an $(R,C)$-matrix
$M$, if $X\subseteq R$ and $Y\subseteq C$, we let $M[X,Y]$ be the submatrix of
$M$ where the rows and the columns are indexed by $X$ and $Y$, respectively.
For a graph $G=(V,E)$, let $A_G$ denote the adjacency matrix of $G$ over the
binary field (i.e.\ $A_G$ is the $(V,V)$-matrix, where an entry is $1$, if and
only if, the column-vertex is adjacent to the row-vertex, and $0$ otherwise).
The \emph{cutrank function} of $G$ is the function
$\cutrk{G}\colon2^{V}\to\mathbb{N}$, given by
\[\cutrk{G}(X)=\operatorname{rank}\left(A_G[X,V\setminus X]\right),\]
where the rank is taken over the binary field.
A \emph{rank decomposition} of a graph $G$ is a pair $(T,\lambda)$, where $T$
is a cubic tree and $\lambda\colon V(G)\to L(T)$ is a bijection. If
$\left|V(G)\right| \le 1$, then $G$ has no rank decomposition. For every edge
$e\in E(T)$, the connected components of $T-e$ induce a partition $(A_e,B_e)$
of $L(T)$. The \emph{width} of an edge $e$ is defined as
$\cutrk{G}(\lambda^{-1}(A_e))$. The \emph{width} of $(T,\lambda)$, denoted by
$\operatorname{width}(T,\lambda)$, is the maximum width over all edges of $T$. The
\emph{rank-width} of $G$, denoted by $\operatorname{rw}(G)$, is the minimum integer $k$,
such that there is a rank decomposition of $G$ of width $k$. (If
$\left|V(G)\right| \le 1$, we let $\operatorname{rw}(G)=0$.)
\begin{remark}\label{rem:isubgraph-monotone}
Let $G$ be a graph and $H\subseteq G$ be an induced subgraph of $G$.
Then $\operatorname{rw}(H) \le \operatorname{rw}(G)$.
\end{remark}
We say that a class $\mathcal{C}$ of graphs has \emph{bounded} rank-width, if
there exists a constant $k\in \mathbb{N}$, such that every $G\in \mathcal{C}$
satisfies $\operatorname{rw}(G) \le k$. If such a constant does not exist, $\mathcal{C}$ has
\emph{unbounded} rank-width.
We conclude the section with two lemmas that we will use in
Section~\ref{sec:lower-bound}.
\begin{lemma}\label{lem:rdec-path}
Let $k\in \mathbb{N}$. Let $G$ be a graph, $P\subseteq G$ an induced path,
$(T,\lambda)$ a rank decomposition of $G$ of width at most $k$, and $e \in
E(T)$. Let $(X,Y)$ be the bipartition of $V(P)$ induced by the two
components of $T-e$. Then the induced graph $P[X]$ has at most $k+1$
connected components.
\end{lemma}
\begin{proof}
Towards a contradiction, assume that $P[X]$ has at least $k+2$ components.
Order the components (which are subpaths of $P$) according to their
appearance along $P$. From each component, except for the first one, pick
the first vertex. In this way we obtain a set $X'\subseteq X$ of at least
$k+1$ vertices, each with one or two neighbours in $Y$ (two neighbours only
if the component is a singleton vertex). Let $Y'$ be the set of vertices in
$Y$ that are adjacent to a vertex in $X'$. Then each row of $A_P[X',Y']$ has
one or two non-zero entries, and no two rows are equal. Ordering the
vertices of $X'$ and $Y'$ according to their appearance on $P$ yields a
matrix with blocks corresponding to subpaths of $P$, such that in each row
the (at most two) non-zero entries appear consecutively. By the choice of
$X'$, within each block there is at most one row with precisely one non-zero
entry, while all other rows in that block have two non-zero entries.
With this it is easy to see that the rows of each block are linearly
independent, and it follows that $A_P[X',Y']$ has rank at least $k+1$. Since
$P$ is induced, we have $A_P[X',Y']=A_G[X',Y']$, and hence the width of $e$
is at least $k+1$, a contradiction to the width of $(T,\lambda)$ being at
most $k$.
\end{proof}
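The rank bound in this proof can be seen concretely on a small, hypothetical instance. The sketch below computes the GF(2) rank of the cut matrix $A_P[X,V(P)\setminus X]$ for a path and checks the inequality underlying Lemma~\ref{lem:rdec-path}: the rank is at least the number of components of $P[X]$ minus one.

```python
# Illustration on a hypothetical small instance: for the path
# 0-1-...-(n-1) and a side X of a vertex bipartition, the GF(2) rank of
# the cut matrix A_P[X, Y] is at least (#components of P[X]) - 1.

def gf2_rank(rows):
    """Rank over GF(2) of row vectors encoded as integer bitmasks."""
    rank = 0
    rows = [r for r in rows if r]
    while rows:
        pivot = rows.pop()          # take any remaining nonzero row
        rank += 1
        lead = pivot.bit_length() - 1
        # eliminate the pivot's leading bit from all other rows
        rows = [r ^ pivot if (r >> lead) & 1 else r for r in rows]
        rows = [r for r in rows if r]
    return rank

def path_cut_rank(n, X):
    """GF(2) rank of A_P[X, Y] for the path on {0,...,n-1}, Y = complement."""
    col = {y: i for i, y in enumerate(v for v in range(n) if v not in X)}
    rows = []
    for x in X:
        r = 0
        for y in (x - 1, x + 1):    # the neighbours of x on the path
            if y in col:
                r |= 1 << col[y]
        rows.append(r)
    return gf2_rank(rows)

def n_components(X):
    """Components of P[X] for a sorted list X of path vertices."""
    return sum(1 for i, v in enumerate(X) if i == 0 or X[i - 1] != v - 1)

X = [0, 1, 4, 5, 8, 9, 12, 13, 16, 17]   # five blocks on a 20-vertex path
```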
We use the following definition, several variants of which exist in the
literature.
\begin{defeng}
Let $T$ be a tree. We call an edge $e \in E(T)$ \emph{balanced}, if the
partition $(A_e,B_e)$ of $L(T)$ satisfies $\frac13\left|L(T)\right| \le
\left|A_e\right|$ and $\frac13\left|L(T)\right| \le \left|B_e\right|$.
\end{defeng}
The following lemma is well-known and we omit the proof.
\begin{lemma}\label{lem:balanced-edge}
Every cubic tree has a balanced edge.
\end{lemma}
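As an illustration of the lemma (a brute-force check on one hypothetical family of cubic trees, not a proof), the sketch below builds cubic ``caterpillar'' trees, counts the leaves on both sides of each edge, and confirms that a balanced edge exists.

```python
# Brute-force check of the lemma on one hypothetical family: cubic
# "caterpillar" trees.  Internal nodes 0,...,m-1 form a path, every
# internal node gets a pendant leaf, and each path end gets one extra
# leaf, so every internal node has degree exactly 3.

def cubic_caterpillar(m):
    """Return (number of nodes, edge list); requires m >= 2."""
    edges = [(i, i + 1) for i in range(m - 1)]
    nxt = m
    for i in list(range(m)) + [0, m - 1]:
        edges.append((i, nxt))
        nxt += 1
    return nxt, edges

def leaf_count_on_side(n, edges, e):
    """Number of leaves of T lying in the component of T - e with e[0]."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(adj[v]) for v in range(n)}   # degrees in the full tree
    adj[e[0]].discard(e[1])
    adj[e[1]].discard(e[0])
    seen, stack = {e[0]}, [e[0]]
    while stack:
        for w in adj[stack.pop()] - seen:
            seen.add(w)
            stack.append(w)
    return sum(1 for v in seen if deg[v] == 1)

def has_balanced_edge(m):
    n, edges = cubic_caterpillar(m)
    leaves = m + 2                  # m pendant leaves plus two extras
    for e in edges:
        a = leaf_count_on_side(n, edges, e)
        if 3 * a >= leaves and 3 * (leaves - a) >= leaves:
            return True
    return False
```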
\begin{lemma}\label{lem:heavy-subpath}
For $m,k\in \mathbb{N}$ with $k>1$, let $G$ be a graph, $P\subseteq G$ be an
induced path and $|V(G)|-|V(P)|=m$. Let $(T,\lambda)$ be a rank
decomposition of $G$ of width at most $k$, and let $e\in E(T)$ be a balanced
edge. Let $(X,Y)$ be the bipartition of $V(P)$ induced by $e$. Then each of
the two induced subgraphs $P[X]$ and $P[Y]$ contains a connected component
with at least $\left\lfloor\frac{|V(G)|-3m}{3(k+1)}\right\rfloor$ vertices.
\end{lemma}
\begin{proof}
Since $e$ is balanced, we have
$\left|X\right|\geq\frac13\left|V(G)\right|-m$ and
$\left|Y\right|\geq\frac13\left|V(G)\right|-m$.
By Lemma~\ref{lem:rdec-path}, both $P[X]$ and $P[Y]$ have at most $k+1$
connected components. Hence, by the pigeonhole principle, each of $P[X]$ and
$P[Y]$ contains a component with at least
\[\frac{1}{k+1}\left(\frac13\left|V(G)\right|-m\right)=\frac{\left|V(G)\right|-3m}{3(k+1)}\]
vertices, which proves the lemma.
\end{proof}
\section{Construction}
In this section we construct a class of (diamond, even-hole)-free graphs
$(G_d)_{d \ge 1}$.
For $1 \le k \le d$, let
\[S_k=\{(a_1,a_2,\dots,a_{k-1},a_k)\,:\,a_1,a_2,\dots,a_{k-1}\in\{1,3\},\,a_k\in\{1,2,3,4\}\},\]
and $S^d=\bigcup_{k=1}^d S_k$. If $u\in S_k$, then we denote $l(u)=k$, and say
that the \emph{length of} $u$ is $k$.
In $S^d$, let $\preccurlyeq$ denote the lexicographical order defined as follows.
For $a=(a_1,a_2,\dots,a_k)\in S^d$ and
$b=(b_1,b_2,\dots,b_l)\in S^d$, $a\preccurlyeq b$ if and only if $k \le l$ and
$a_i=b_i$ for $1\leq i \le k$, or $t=\min\{i\,:\,a_i \ne b_i\}$ is
well-defined and $a_t<b_t$. This order is a total order on the finite set
$S^d$, so we introduce the following notation:
\begin{itemize}
\item for $a\in S^d\setminus\{(4)\}$, $s(a)$ is the smallest element (w.r.t.\
$\preccurlyeq$) of $S^d$ that is greater than $a$;
\item for $a\in S^d\setminus\{(1)\}$, $p(a)$ is the greatest element (w.r.t.\
$\preccurlyeq$) of $S^d$ that is smaller than $a$.
\end{itemize}
Let $P'_d$ denote the path on vertex set $S^d$ connecting the vertices
according to the lexicographic order, and let $P_d$ be the path obtained from
$P'_d$ by subdividing every edge $uv\in E(P'_d)$ twice if $l(u) = l(v)$, and
once, otherwise. Finally, let $W_d=\{v_1,v_2,\dots,v_d\}$ be a set of (new)
vertices, such that $v_k$, for $1 \le k \le d$, is adjacent to all vertices of
$S_k$ and all other vertices of $W_d$. Then, $G_d$ is the graph induced by the
set $W_d\cup V(P_d)$. For vertices of $W_d$ we say that they are
\emph{centers} of $G_d$. Figure~\ref{fig:G4} shows $G_4$.
\begin{remark} \label{rem:size}
For $d \ge 1$, the following hold:
\begin{enumerate}[(i)]
\item \label{s1} $|S^d|=\sum_{k=1}^d 4\cdot 2^{k-1}=4(2^d-1) \ge 2^{d+1}$,
and
\item\label{s2} $3|S^d|+d \ge |V(G_d)| \ge 2|S^d| \ge 2^{d+2}$.
\end{enumerate}
\end{remark}
\begin{proof}
Part (\ref{s1}) follows from the fact that for $k=1$, the set $S_k$ contains
$4$ vertices, and that the number of vertices in the set doubles whenever
$k$ increases by one. Part (\ref{s2}) follows from Part (\ref{s1}) and the
number of subdivision vertices added in the construction of $P_d$.
\end{proof}
\begin{remark} \label{rem:continuous}
For $d \ge 1$, every $u\in S^d$, with $u \ne (4)$, satisfies
$|l(u)-l(s(u))| \le 1$.
\end{remark}
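Remarks~\ref{rem:size} and~\ref{rem:continuous}, as well as the successor relation $s(\cdot)$, can be confirmed by brute force for small $d$ directly from the definitions. The following sketch is an illustration only.

```python
# Brute-force verification for small d (illustration only), built
# directly from the definitions in the text.
from functools import cmp_to_key
from itertools import product

def S_elements(d):
    """S^d: first k-1 coordinates in {1,3}, last coordinate in {1,2,3,4}."""
    return [head + (last,)
            for k in range(1, d + 1)
            for head in product((1, 3), repeat=k - 1)
            for last in (1, 2, 3, 4)]

def leq(a, b):
    """a <= b: a is a prefix of b, or the first differing entry is smaller."""
    if len(a) <= len(b) and a == b[:len(a)]:
        return True
    t = next((i for i in range(min(len(a), len(b))) if a[i] != b[i]), None)
    return t is not None and a[t] < b[t]

def ordered(d):
    """S^d sorted by the lexicographic order from the text."""
    return sorted(S_elements(d),
                  key=cmp_to_key(lambda a, b: 0 if a == b
                                 else (-1 if leq(a, b) else 1)))

def n_vertices(d):
    """|V(G_d)|: S^d, plus the subdivision vertices of P_d, plus d centers."""
    seq = ordered(d)
    subdiv = sum(2 if len(u) == len(v) else 1 for u, v in zip(seq, seq[1:]))
    return len(seq) + subdiv + d

seq = ordered(4)
```

For instance, `seq` starts at $(1)$ and ends at $(4)$, the successor of $(1,1,1,4)$ is $(1,1,2)$, consecutive elements have lengths differing by at most one, and $\left|V(G_4)\right|$ obeys the bounds of Remark~\ref{rem:size}.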
\begin{figure}[htbp]
\begin{center}
\newcommand{\NtoS}[1]{\ifcase #1
(0) \or 0a \or 0b
\or (1) \or 1a
\or (1,1) \or 11a
\or (1,1,1) \or 111a
\or (1,1,1,1) \or 1111a \or 1111b
\or (1,1,1,2) \or 1112a \or 1112b
\or (1,1,1,3) \or 1113a \or 1113b
\or (1,1,1,4) \or 1114a
\or (1,1,2) \or 112a \or 112b
\or (1,1,3) \or 113a
\or (1,1,3,1) \or 1131a \or 1131b
\or (1,1,3,2) \or 1132a \or 1132b
\or (1,1,3,3) \or 1133a \or 1133b
\or (1,1,3,4) \or 1134a
\or (1,1,4) \or 114a
\or (1,2) \or 12a \or 12b
\or (1,3) \or 13a
\or (1,3,1) \or 131a
\or (1,3,1,1) \or 1311a \or 1311b
\or (1,3,1,2) \or 1312a \or 1312b
\or (1,3,1,3) \or 1313a \or 1313b
\or (1,3,1,4) \or 1314a
\or (1,3,2) \or 132a \or 132b
\or (1,3,3) \or 133a
\or (1,3,3,1) \or 1331a \or 1331b
\or (1,3,3,2) \or 1332a \or 1332b
\or (1,3,3,3) \or 1333a \or 1333b
\or (1,3,3,4) \or 1334a
\or (1,3,4) \or 134a
\or (1,4) \or 14a
\or (2) \or 2a \or 2b
\or (3) \or 3a
\or (3,1) \or 31a
\or (3,1,1) \or 311a
\or (3,1,1,1) \or 3111a \or 3111b
\or (3,1,1,2) \or 3112a \or 3112b
\or (3,1,1,3) \or 3113a \or 3113b
\or (3,1,1,4) \or 3114a
\or (3,1,2) \or 312a \or 312b
\or (3,1,3) \or 313a
\or (3,1,3,1) \or 3131a \or 3131b
\or (3,1,3,2) \or 3132a \or 3132b
\or (3,1,3,3) \or 3133a \or 3133b
\or (3,1,3,4) \or 3134a
\or (3,1,4) \or 314a
\or (3,2) \or 32a \or 32b
\or (3,3) \or 33a
\or (3,3,1) \or 331a
\or (3,3,1,1) \or 3311a \or 3311b
\or (3,3,1,2) \or 3312a \or 3312b
\or (3,3,1,3) \or 3313a \or 3313b
\or (3,3,1,4) \or 3314a
\or (3,3,2) \or 332a \or 332b
\or (3,3,3) \or 333a
\or (3,3,3,1) \or 3331a \or 3331b
\or (3,3,3,2) \or 3332a \or 3332b
\or (3,3,3,3) \or 3333a \or 3333b
\or (3,3,3,4) \or 3334a
\or (3,3,4) \or 334a
\or (3,4) \or 34a
\or (4) \or 4a \or 4b
\or (5)
\fi}
\newcommand{\layer}[1]{\ifcase #1 0,15
\or 3,76,79,15
\or 5,38,41,74,81,114,117,15
\or 7,20,23,36,43,56,59,72,83,96,99,112,119,132,135,14
\or 9,12,15,18,25,28,31,34,45,48,51,54,61,64,67,70,85,88,91,94,%
101,104,107,110,121,124,127,130,137,140,143,14
\or 4,6,8,10,11,13,14,16,17,19,21,22,24,26,27,29,30,32,33,35,37,39,%
40,42,44,46,47,49,50,52,53,55,57,58,60,62,63,65,66,68,69,71,73,75,%
77,78,80,82,84,86,87,89,90,92,93,95,97,98,100,102,103,105,106,108,%
109,111,113,115,116,118,120,122,123,125,126,128,129,131,133,134,%
136,138,139,141,142,144,145,147,149,15
\fi}
\newcommand{\col}[1]{\ifcase #1 %
black\or blue\or red\or green!70!black\or yellow!60!red\else none\fi}
\tikzset{v/.style={circle, draw=black, minimum size=1.5mm, inner sep=0pt},
n/.style={draw=none}
}
\begin{tikzpicture}
\node[v, fill=\col{1}, label=180:$v_1$] (x1) at (195:2.0) {};
\node[v, fill=\col{2}, label=135:$v_2$] (x2) at (135:2.5) {};
\node[v, fill=\col{3}, label=105:$v_3$] (x3) at ( 65:2.0) {};
\node[v, fill=\col{4}, label= 0:$v_4$] (x4) at (320:2.0) {};
\edef\nlist{\layer{5}}
\foreach \num in \nlist{
\pgfmathsetmacro{\angle}{270-360/155*\num}
\node[v] (\num) at (\angle : 6.0) {};
}
\foreach \ind in {4,3,2,1}{
\edef\nlist{\layer{\ind}}
\pgfmathsetmacro{\dista}{6.3+\ind/7}
\foreach \num in \nlist{
\pgfmathsetmacro{\angle}{270-360/155*\num}
\node[v, fill=\col{\ind}] (\num) at (\angle:6.0) {};
\draw[\col{\ind}] (x\ind)--(\num);
\node[n,rotate=\angle] () at (\angle:\dista) {\NtoS{\num}};
}
}
\foreach[count=\mum from 3] \num in {4,5,...,152} \draw (\num)--(\mum);
\draw (x1)--(x2)--(x3)--(x4)--(x1)--(x3) (x2)--(x4);
\node[v, fill=\col{4}, label= 0:$v_4$] (x4) at (320:2.0) {};
\end{tikzpicture}
\end{center}
\caption{The graph $G_4$}
\label{fig:G4}
\end{figure}
Let us introduce some additional notation for the elements of $S^d$. For
$a,b\in S^d$, \emph{interval} $[a,b]$ is the set $\{c\in S^d\,:\, a\preccurlyeq
c\preccurlyeq b\}$. We say that an interval $[a,b]$ is \emph{proper} if for all
$c\in [a,b]\setminus\{a,b\}$, $l(c)\not\in\{l(a),l(b)\}$. Note that
$[a,b]=\bigcup_{a\preccurlyeq c\prec b}[c,s(c)]$. For an interval $[c,s(c)]$, $a\preccurlyeq
c\prec b$, we say that it is a \emph{step of $[a,b]$}, and if additionally
$l(c)=l(s(c))$, we say that this step is \emph{flat}.
\begin{lemma}\label{OddEqual}
Let $a,b\in S^d$. If $[a,b]$ is a proper interval such that $l(a)=l(b)$,
then it contains an odd number of flat steps.
\end{lemma}
\begin{proof}
Our proof is by induction on the number of elements of $[a,b]$. If $[a,b]$
has only 2 elements, that is if $b=s(a)$, then the lemma trivially holds.
Let $a=(a_1,a_2,\dots,a_k)$.
\begin{enumerate}[{Case }1.]
\item $a_k=2$.
In this case $b=(a_1,a_2,\dots,a_{k-1},3)$ and $[a,b]=\{a,b\}$, so the
conclusion trivially follows.
\item $a_k\in\{1,3\}$.
In this case $b=(a_1,a_2,\dots,a_{k-1},a_k+1)$. If $k=d$, then
$[a,b]=\{a,b\}$, and the conclusion follows. So, let $k<d$. Then
\[[a,b]=[a,a^{(1)}]\cup [a^{(1)},a^{(2)}]\cup [a^{(2)},a^{(3)}]\cup
[a^{(3)},a^{(4)}]\cup [a^{(4)},b],\]
where $a^{(i)}=(a_1,a_2,\dots,a_k,i)$, for $1 \le i \le 4$.
Since $s(a)=a^{(1)}$ and $s(a^{(4)})=b$, the number of flat steps of
$[a,b]$ is the sum of the numbers of flat steps of $[a^{(i)},a^{(i+1)}]$,
for $1 \le i \le 3$. Note that $a^{(i)}$ and $a^{(i+1)}$, for $1 \le i
\le 3$, are consecutive $(k+1)$-tuples of $S^d$, i.e.\ the interval
$[a^{(i)},a^{(i+1)}]$ is proper. Therefore, by induction, each of the
intervals $[a^{(i)},a^{(i+1)}]$, for $1 \le i \le 3$, has an odd number of
flat steps, and hence so does the interval $[a,b]$.
\item $a_k=4$.
In this case $a_{k-1}\in\{1,3\}$, so
\[a=(a_1,\dots,a_{i-1},1,\underbrace{3,\dots,3}_{k-i-1},4),\]
where $1 \le i \le k-1$ ($a$ has at least one coordinate equal to 1, since
there does not exist a $k$-tuple in $S^d$ which is larger than the $k$-tuple
$(\underbrace{3,\dots,3}_{k-1},4)$).
If $i=k-1$, then $s(a) = (a_1,\dots,a_{i-1},2)$,
$s(s(a)) = (a_1,\dots,a_{i-1},3)$ and
$s(s(s(a))) = (a_1,\dots,a_{i-1},3,1)=b$,
and hence the interval $[a,b]$ has one flat step.
So, let $i<k-1$. Then
\begin{align*}
s(a)&=(a_1,\dots,a_{i-1},1,\underbrace{3,\dots,3}_{k-i-2},4),\\
p(b)&=(a_1,\dots,a_{i-1},3,\underbrace{1,\dots,1}_{k-i-1}),\\
b&=(a_1,\dots,a_{i-1},3,\underbrace{1,\dots,1}_{k-i}).
\end{align*}
So, the number of flat steps of the interval $[a,b]$ is the same as the
number of flat steps of the interval $[s(a),p(b)]$. Since $s(a)$ and $p(b)$
are consecutive $(k-1)$-tuples of $S^d$, the interval $[s(a),p(b)]$ is
proper, and the conclusion follows by induction. \qedhere
\end{enumerate}
\end{proof}
\begin{lemma}\label{ZeroEqual}
Let $a,b\in S^d$. If $[a,b]$ is a proper interval such that $l(a) \ne l(b)$,
then it does not contain a flat step.
\end{lemma}
\begin{proof}
Note that the set $S^d$ is symmetric, so we may assume that $l(a)>l(b)$.
Let $a=(a_1,\dots,a_{k-1},a_k)$.
If $a_k<4$, then $a \prec (a_1,\dots,a_{k-1},a_k+1)$, and hence $[a,b]$ is
not proper, since there does not exist $c\in S^d$ such that
$(a_1,\dots,a_{k-1},a_k)\prec c\prec (a_1,\dots,a_{k-1},a_k+1)$ and $l(c)<k$.
So, $a_k=4$. If $a=(\underbrace{3,\dots,3}_{k-1},4)$, then
$b=(\underbrace{3,\dots,3}_{l-1},4)$, where $l=l(b)$, and the conclusion
follows. So, let
\[a=(a_1,\dots,a_{i-1},1,\underbrace{3,\dots,3}_{k-i-1},4)\,,\]
where $1 \le i \le k-1$, and
\[a'=(a_1,\dots,a_{i-1},3,\underbrace{1,\dots,1}_{k-i})\,.\]
The elements of the interval $[a,a']$ are the following (given in increasing
order):
\begin{align*}
&(a_1,\dots,a_{i-1},1,\underbrace{3,\dots,3}_{k-i-1},4),(a_1,\dots,a_{i-1},1,\underbrace{3,\dots,3}_{k-i-2},4),\dots,(a_1,\dots,a_{i-1},2),\\
&(a_1,\dots,a_{i-1},3),(a_1,\dots,a_{i-1},3,1),\dots,(a_1,\dots,a_{i-1},3,\underbrace{1,\dots,1}_{k-i}).
\end{align*}
Since $[a,b]$ is proper, we have $b \prec a'$. Additionally, since the
interior of $[a,b]$ contains no element of length equal to $l(b)$, $b$ is an element of
$[a,a']$ from the first row of the list above. It is now clear that $[a,b]$
contains no flat step.
\end{proof}
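Both lemmas can be checked by brute force for small $d$. The sketch below is only illustrative: the construction of $S^d$ is not restated in this excerpt, so the generator encodes a depth-first ordering inferred from the successor rules used in the two proofs, and should be read as an assumption rather than the paper's definition.

```python
# Hypothetical reconstruction of S^d (its definition is not restated in
# this excerpt): tuples over {1,2,3,4} whose non-final coordinates lie
# in {1,3}, listed in the depth-first order implied by the successor
# rules used above.  Here l(c) = len(c), and s(c) is simply the next
# element of the returned list.
def build_S(d):
    out = []
    def gen(prefix):
        for c in (1, 2, 3, 4):
            t = prefix + (c,)
            out.append(t)
            if c in (1, 3) and len(t) < d:
                gen(t)
    gen(())
    return out

def is_proper(S, i, j):
    """[S[i], S[j]] is proper: no interior element shares a length
    with either endpoint."""
    return all(len(S[k]) not in (len(S[i]), len(S[j]))
               for k in range(i + 1, j))

def flat_steps(S, i, j):
    """Number of flat steps (consecutive elements of equal length)."""
    return sum(len(S[k]) == len(S[k + 1]) for k in range(i, j))

def check(d):
    """Verify Lemmas OddEqual and ZeroEqual by exhaustive search."""
    S = build_S(d)
    for i in range(len(S)):
        for j in range(i + 1, len(S)):
            if not is_proper(S, i, j):
                continue
            f = flat_steps(S, i, j)
            if len(S[i]) == len(S[j]):
                assert f % 2 == 1   # Lemma OddEqual
            else:
                assert f == 0       # Lemma ZeroEqual
    return len(S)
```

Running `check(d)` for small $d$ exhausts every proper interval and confirms both parity statements under this assumed ordering.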
For an interval $[a,b]$ in $S^d$, let $P_{[a,b]}$ be the path of $G_d$ induced
by $\bigcup_{a\preccurlyeq c\prec b} V(P_{c})$. Since path $P_c$ is of odd length if
and only if $l(c)=l(s(c))$, path $P_{[a,b]}$ is of odd length if and only if
$[a,b]$ contains an odd number of flat steps.
\begin{theorem}\label{theo:gd-ehf}
The graph $G_d$ is (diamond, even hole)-free for all $d \ge 1$ and
$G_d$ has no clique cutset for all $d \ge 2$.
\end{theorem}
\begin{proof}
First, suppose that $G_d$ contains a diamond $D$ for some $d \ge 1$. Since
$P_d$ is a path, $V(D)\not\subseteq V(P_d)$, and since $D$ is not a clique
$V(D)\not\subseteq W_d$. The neighborhood in $P_d$ of every vertex of $W_d$
is a stable set, so $|V(D)\cap V(P_d)| \le 2$. On the other hand, every
vertex of $P_d$ is adjacent to at most one vertex of $W_d$, so $|V(D)\cap
V(W_d)| \le 2$. Hence, $|V(D)\cap V(P_d)|=|V(D)\cap W_d|=2$. But then $D$
has at most 4 edges, a contradiction.
Now, suppose that $G_d$ contains an even hole $H$ for some $d \ge 1$. Since
$P_d$ is a path, $V(H)\cap W_d \ne \varnothing$, and since $W_d$ is a clique
$|V(H)\cap V(W_d)| \le 2$. First suppose that $V(H)\cap V(W_d)=\{v_k\}$,
for some $1 \le k \le d$. Since $v_k$ has exactly two neighbors in $H$,
$V(H)=\{v_k\}\cup V(P_{[a,b]})$, where $a,b\in S^d$ are such that
$l(a)=l(b)=k$ and the interval $[a,b]$ is proper. Then, by Lemma
\ref{OddEqual}, interval $[a,b]$ contains an odd number of flat steps, and
hence path $P_{[a,b]}$ and hole $H$ are of odd length, a contradiction. So,
$V(H)\cap V(W_d)=\{v_k,v_l\}$, for some $1 \le k<l \le d$. Then
$V(H)=\{v_k,v_l\}\cup V(P_{[a,b]})$, where $a,b\in S^d$ are such that
$\{l(a),l(b)\}=\{k,l\}$ and the interval $[a,b]$ is proper. Then, by Lemma
\ref{ZeroEqual}, interval $[a,b]$ does not contain a flat step, and hence
path $P_{[a,b]}$ is of even length, i.e.\ the hole $H$ is of odd length
(since the length of $H$ is by 3 larger than the length of $P_{[a,b]}$), a
contradiction.
Let $d \ge 2$ and suppose that $G_d$ has a clique cutset $K$. We distinguish
between three cases. First, if $K \subseteq W_d$ then $K$ does not separate
$G_d$, since $P_d$ is a path and every vertex in $W_d \setminus K$ has a neighbor in
$P_d$. Second, if $K \subseteq V(P_d)$ then $P_d - K$ has two components.
In $G_d - K$ these are connected via $W_d$ since $d \ge 2$. Hence we are in
the third case and may assume $K \cap W_d \ne \varnothing$ and $K \cap V(P_d) \ne
\varnothing$. By construction, no vertex of $P_d$ is contained in a triangle, and
hence $|K| \le 2$. Consequently $K=\{u,v_i\}$ for $u \in V(P_d)$ and $1 \le
i \le d$. The vertex $u$ is neither (1) nor (4) since both are adjacent to
$v_1 \in W_d$ and neither $\{(1),v_1\}$ nor $\{(4),v_1\}$ are cutsets of
$G_d$. It follows that (1) and (4) are separated by $K$. Since $v_1$ is
adjacent to both (1) and (4) we have $i=1$, and hence $u$ is (2) or (3).
But then $v_2$ has a neighbor in both components of $P_d - u$, a
contradiction.
\end{proof}
\section{Lower bound}\label{sec:lower-bound}
In this section we prove that the rank-width of the class $(G_d)_{d \ge 1}$
constructed in the previous section is unbounded.
\begin{lemma} \label{lem:suffix}
If $d \ge 1$ and $P$ is a subpath of $P_d$ such that $|V(P)\cap S_i| \ge 3$
for some $i$ $(1 \le i \le d)$,
then $V(P)\cap S_j \ne \varnothing$ for every $j$ satisfying $i \le j \le d$.
\end{lemma}
\begin{proof}
Since $|V(P)\cap S_i| \ge 3$, there exist two vertices of the form
$(a_1,\dots,a_{i-1},1)$ and $(a_1,\dots,a_{i-1},2)$, or two vertices of the
form $(a_1,\dots,a_{i-1},3)$ and $(a_1,\dots,a_{i-1},4)$ in $P$, where
$a_k\in \{1,3\}$ for $1 \le k < i$. But then, by the definition of the order
$\preccurlyeq$ for $S^d$, $P$ must contain some vertex of length $j$ for every $j$
satisfying $i \le j \le d$.
\end{proof}
\begin{lemma} \label{lem:large-color}
If $P$ is a subpath of $P_d$ such that $|V(P)| \ge c |V(G_d)|$, where
$0<c<1$ and $d>2\lfloor\log_2{\frac{1}{c}}\rfloor+4$,
then $V(P)\cap S_j \ne \varnothing$ for every $j$ satisfying
$\lfloor\log_2{\frac{1}{c}}\rfloor+3 \le j\le d$.
\end{lemma}
\begin{proof}
If $V(P)\cap S_j \ne \varnothing$ for every $j\in\{1,\dots,d\}$, then the
conclusion trivially holds. Hence, we may assume that $V(P)\cap S_j = \varnothing$
for some $j \in \{1,\dots,d\}$.
\begin{enumerate}[{Claim }1.]
\item\label{claim1} $|V(P)|>6d$.
\emph{Proof of Claim \ref{claim1}:} Suppose that $|V(P)| \le 6d$.
Since $|V(P)| \ge c |V(G_d)| \ge c\cdot 2^{d+2}$ (the first inequality is
by the assumption, and the second by Remark \ref{rem:size}), it follows
that $6d \ge c\cdot 2^{d+2}$, which is equivalent to $\log_2{\frac{1}{c}}
\ge d - \log_2{d} +2-\log_2{6}$. Since $d-\log_2{d} \ge \frac{d}{2}$, for
all $d \ge 4$ (which is the case by assumption), and $2-\log_2{6}>-1$, we
have that $\log_2{\frac{1}{c}}>\frac{d}{2}-1$, which is equivalent to
$d<2\log_2{\frac{1}{c}}+2$, a contradiction. This completes the proof of
Claim \ref{claim1}.
\item\label{claim2} For some $t \in \{1,\dots,d\}$, $|V(P)\cap S_t| \ge 3$.
\emph{Proof of Claim \ref{claim2}:}
Suppose that for all $t \in \{1,\dots,d\}$, $|V(P)\cap S_t| \le 2$.
Let $a'$ and $b'$ be the endnodes of $P$, and let $a$ (resp.~$b$) be the
first (resp.\ last) vertex of $S^d$ encountered when traversing $P$ from
$a'$ to $b'$. Since for some $j$, $V(P)\cap S_j=\varnothing$, the interval
$[a,b]$ contains at most $d-2+1+d-2=2d-3$ steps (note that this bound can
be achieved when $[a,b]$ contains vertices $(2)$ and $(3)$, the $d-2$
elements of $S^d$ that precede $(2)$, and the $d-2$ elements of $S^d$ that
succeed $(3)$). For each step $[u,s(u)]$, the $u,s(u)$-subpath of $P$ is
of length at most three. The $a,a'$-subpath of $P$ and the $b,b'$-subpath
of $P$ are each of length at most two. It follows that the length of $P$
is at most $3(2d-3)+2 \cdot 2 = 6d-5$, and hence $|V(P)| \le 6d$,
contradicting Claim \ref{claim1}.
This completes the proof of Claim \ref{claim2}.
By Claim \ref{claim2} and Lemma \ref{lem:suffix}, for some $i<d$,
$V(P)\cap S_i=\varnothing$ and $V(P)\cap S_j \ne \varnothing$ for $j \in \{i+1,\dots,d\}$.
By Remark~\ref{rem:continuous}, $V(P)\cap S_j=\varnothing$ for $j\in \{1,\dots,i\}$.
Therefore, there exist two vertices $u,v \in S_i$, $u \preccurlyeq v$, such that
$P$ is contained in the subpath $P'$ of $P_d$ from $u$ to $v$ and
$V(P') \cap S_i = \{u,v\}$. Let $u=(a_1,\dots,a_i)$.
\item\label{claim3} $a_i\in \{ 1,3\}$.
\emph{Proof of Claim \ref{claim3}:}
We consider the following cases:
\begin{itemize}
\item If $a_i=2$ then $v=s(u)$. Hence, $|V(P')|= 4$.
\item If $a_i=4$, then $u=(a_1,\dots,a_{i'-1},1,3,\dots,3,4)$, where
$1 \le i' \le i-1$ ($u$ has at least one coordinate equal to $1$,
otherwise there does not exist a tuple in $S_i$ which is larger than $u$).
Since $v$ is the next element in $S_i$ which is larger than $u$,
$v=(a_1,\dots,a_{i'-1},3,1,\dots,1)$. By the discussion in the proof of
Lemma \ref{ZeroEqual}, the number of elements of $S^d$ in the interval
$[u,v]$ is $2(i-i'+1)$ and we have that $2(i-i'+1) \le 2i \le 2d$. Since
there are at most two vertices of $P'$ between any two consecutive
elements in $S_d$, $|V(P')| \le 3\cdot 2d = 6d$.
\end{itemize}
Both cases contradict Claim \ref{claim1}.
This completes the proof of Claim \ref{claim3}.
\end{enumerate}
Since there are at most two vertices of $P'$ between any two consecutive
elements in $S_d$ and by Claim~\ref{claim3}, $|V(P')| \le 3|[u,v]| =
3(\sum_{j=0}^{d-i-1} 4\cdot 2^j+2) < 12(\sum_{j=0}^{d-i-1}2^j + 1) =
12\cdot 2^{d-i} < 2^{d-i+4}$. So by Remark~\ref{rem:size}, we have that
\[ 2^{d-i+4} > |V(P')| \ge |V(P)| \ge c |V(G_d)| \ge c\cdot 2^{d+2}\,. \]
Hence $2^{2-i}>c$, or equivalently $i<2+\log_2{\frac{1}{c}}$,
proving the lemma.
\end{proof}
\begin{lemma}\label{lem:rw-unbounded}
For any $d \ge 22$ we have $\operatorname{rw}(G_d)> d/3$.
\end{lemma}
\begin{proof}
Suppose that $\operatorname{rw}(G_d) \le k = d/3$. Let $(T,\lambda)$ be a rank
decomposition of $G_d$ of width at most $k$. Let $e\in E(T)$ be a balanced
edge (it exists by Lemma \ref{lem:balanced-edge}), and let $M$ be the
adjacency matrix of $G_d$. Let $(X,Y)$ be the bipartition of $V(G_d)$
induced by $e$. Applying Lemma \ref{lem:heavy-subpath} to $G_d$ and the path
$P_d$ ($|V(G_d)|-|V(P_d)|=d$), there exist two subpaths $P_X$, $P_Y$ of
$P_d$ in $G_d[X]$ and $G_d[Y]$, respectively, such that $|V(P_X)|,|V(P_Y)|
\ge \left\lfloor\frac{|V(G_d)|-3d}{3(k+1)}\right\rfloor \ge
\frac{|V(G_d)|}{4(k+1)}$ (note that the second inequality holds by Remark
\ref{rem:size} and the fact that $d \ge 22$). Applying Lemma
\ref{lem:large-color} (using the fact that $d \ge 22$) with
$c=\frac{1}{4(k+1)}$ and letting $c'=\lfloor\log_2(\frac{1}{c})\rfloor +3 =
\lfloor \log_2(k+1) \rfloor +5$, we have $V(P_X)\cap S_j \ne \varnothing$ and
$V(P_Y) \cap S_j \ne \varnothing$ for every $j$ satisfying $c' \le j \le d$.
W.l.o.g.\ let $X$ be the set containing at least half of the center vertices
in $\{v_{c'},\dots,v_d\}$. Let $I = \{i \in \{c',\dots,d\} \mid v_i\in X\}$
(the set of indices of center vertices in $X$), and fix a vertex
$a_i \in Y \cap S_i$ for every $i \in I$, which exists because
$V(P_Y)\cap S_i \ne \varnothing$.
We have $|I| \ge \frac{d-c'+1}{2}$. Let $S_X=\{v_i \mid i\in I\}$ and
$S_Y=\{a_i \mid i\in I\}$.
Note that $S_X\subseteq X$ and $S_Y\subseteq Y$. Because each vertex
$v_i$ in $S_X$ has exactly one neighbor in $S_Y$ (namely $a_i$),
we have that $M[S_X,S_Y]=\mathbf{1}_{|I|}$ (the $|I|\times|I|$ identity matrix, with rows and columns ordered by $i$). Therefore,
$\operatorname{rank}(M[S_X,S_Y])=|I|$. We have
\[ k \ge \operatorname{width}(T,\lambda) \ge \cutrk{G}(X) = \operatorname{rank}(M[X,Y])
\ge \operatorname{rank}(M[S_X,S_Y]) = |I| \ge \frac{d-c'+1}{2}\,,\]
which is equivalent to
$d \le 2k+c'-1 = 2k + \lfloor\log_2(k+1)\rfloor + 4
= 2d/3 + \lfloor\log_2(d/3+1)\rfloor + 4$,
a contradiction since $d \ge 22$.
\end{proof}
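The key step of the proof is that the identity pattern in $M[S_X,S_Y]$ certifies $\operatorname{rank}(M[X,Y]) \ge |I|$, because the rank of any submatrix is a lower bound on the rank of the full matrix. The following toy GF(2) computation illustrates this certificate; the matrix is an arbitrary example, not derived from any particular $G_d$.

```python
def gf2_rank(M):
    """Rank over GF(2) of a 0/1 matrix, by Gaussian elimination."""
    M = [row[:] for row in M]
    rank = 0
    for c in range(len(M[0]) if M else 0):
        piv = next((i for i in range(rank, len(M)) if M[i][c]), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for i in range(len(M)):
            if i != rank and M[i][c]:
                M[i] = [a ^ b for a, b in zip(M[i], M[rank])]
        rank += 1
    return rank

# A toy cut matrix M[X, Y]; rows 0-2 / columns 0-2 carry an identity
# pattern, playing the role of M[S_X, S_Y] in the proof.
M = [[1, 0, 0, 1],
     [0, 1, 0, 0],
     [0, 0, 1, 1],
     [1, 1, 0, 0]]
sub = [[M[i][j] for j in (0, 1, 2)] for i in (0, 1, 2)]
```

Here `gf2_rank(sub)` equals 3, so the cut-rank of the full matrix is at least 3 regardless of the remaining entries.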
From Lemma~\ref{lem:rw-unbounded} and Remark~\ref{rem:size} we obtain that
the rank-width of $G_d$ grows at least logarithmically with $|V(G_d)|$,
since if $d \ge 22$ then $\operatorname{rw}(G_d) > d/3 \ge (\log_2 |V(G_d)| - 4)/3$.
From Theorem~\ref{theo:gd-ehf} and Lemma~\ref{lem:rw-unbounded} we have the
following theorem.
\begin{theorem}
The family of (diamond, even hole)-free graphs $G_d$, $d \ge 2$, without
clique cutsets has unbounded rank-width.
\end{theorem}
For completeness, observe that $\operatorname{rw}(G_d) \le d+1$ for all $d\in\mathbb{N}$.
To see this, take a cubic tree $T$ with $\left| V(G_d)\right|$ leaves,
where the internal nodes form a path. Via the bijection
$\lambda: V(G_d) \to L(T)$, choose the linear ordering of
$W_d \cup V(P_d)$ that starts with $v_1, v_2, v_3, \dots, v_d$, followed
by the vertices of $P_d$ in their canonical order (see Figure
\ref{fig:dec}).
\begin{figure}[htbp]
\begin{center}
\newcommand{\vv}[1]{{\phantom{(}$v_{#1}$\phantom{)}}}
\tikzset{v/.style={circle, draw=black, minimum size=1.5mm, inner sep=0pt}}
\begin{tikzpicture}[scale=0.75]
\node[v, label=below:\vv{1}] (v1) at (0,0) {};
\node[v, label=below:\vv{2}] (v2) at (1,0) {};
\node[v] (pv2) at (1,1) {}; \draw (v2)--(pv2);
\draw[rounded corners=5mm] (v1)--(0,1)--(pv2);
\node[v, label=below:\vv{3}] (v3) at (2,0) {};
\node[v] (pv3) at (2,1) {}; \draw (v3)--(pv3)--(pv2) (pv3)--(2.5,1);
\node[v, label=below:\vv{d}] (vd) at (4,0) {};
\node[v] (pvd) at (4,1) {}; \draw (vd)--(pvd)--(3.5,1);
\draw[dotted] (2.5,1)--(3.5,1);
\node[v, label=below:{(1)}] (1) at (5,0) {};
\node[v] (p1) at (5,1) {}; \draw (1)--(p1)--(pvd);
\node[v] (1a) at (6,0) {};
\node[v] (p1a) at (6,1) {}; \draw (1a)--(p1a)--(p1);
\node[v, label=below:{(1,1)}] (11) at (7,0) {};
\node[v] (p11) at (7,1) {}; \draw (11)--(p11)--(p1a);
\node[v] (11a) at (8,0) {};
\node[v] (p11a) at (8,1) {}; \draw (11a)--(p11a)--(p11);
\node[v, label=below:{(1,1,1)}] (111) at (9,0) {};
\node[v] (p111) at (9,1) {}; \draw (111)--(p111)--(p11a) (p111)--(9.5,1);
\node[v, label=below:{(3,3,4)}] (334) at (12,0) {};
\node[v] (p334) at (12,1) {}; \draw (334)--(p334)--(11.5,1);
\draw[dotted] (9.5,1)--(11.5,1);
\node[v] (334a) at (13,0) {};
\node[v] (p334a) at (13,1) {}; \draw (334a)--(p334a)--(p334);
\node[v, label=below:{(3,4)}] (34) at (14,0) {};
\node[v] (p34) at (14,1) {}; \draw (34)--(p34)--(p334a);
\node[v] (34a) at (15,0) {};
\node[v] (p34a) at (15,1) {}; \draw (34a)--(p34a)--(p34);
\node[v, label=below:{(4)}] (4) at (16,0) {};
\draw[rounded corners=5mm] (4)--(16,1)--(p34a);
\end{tikzpicture}
\caption{A rank decomposition of $G_d$ of width at most $d+1$.}
\label{fig:dec}
\end{center}
\end{figure}
Let $e$ be an edge of $T$ and let $(X,Y)$ be the bipartition of $V(G_d)$
induced by $e$. Since $\operatorname{rank}(M[X,Y]) \le \min(|X|,|Y|)$ we may assume
$|X|,|Y| > d$ and $\{v_1,v_2,\dots,v_d\} \subseteq X$. Now the vertices in $Y$
have at most $d+1$ different neighbors in $X$. Hence the width of $e$ is at
most $d+1$, proving that $\operatorname{rw}(G_d) \le d+1$.
\bibliographystyle{amsplain}
\section{Introduction}
\label{sec:intro}
Accurate measurements of cosmic microwave background (CMB)
anisotropies on scales smaller than the sound horizon at last
scattering are necessary to form a complete understanding of the
initial power spectrum of cosmic inhomogeneities. Secondary
anisotropies---chiefly reionization and Sunyaev-Zel'dovich (SZ)
distortions from large-scale structures---also leave imprints on
these scales which contain clues to the subsequent evolution of
structures from very simple, linear initial conditions to the wealth
of nonlinear structures seen in the nearby universe. Photon diffusive
damping occurring near last scattering strongly
suppresses small-scale intrinsic CMB anisotropies, so precise
measurements on these scales are difficult. A key challenge for
centimeter-wavelength observations on these angular scales is the
foreground presented by faint, discrete radio sources.
The Cosmic Background Imager \citep[CBI;][]{cbi7} is a 30 GHz, small
angular-scale experiment which has detected, at $2.9 \sigma$, power in
excess of intrinsic CMB anisotropy at multipoles $\ell > 2000$. The
accuracy of this result is limited by our knowledge of the
contribution of faint extragalactic radio sources to the measured
power spectrum. The CBI removes all {\it known} radio sources by
``projecting'' them out of the dataset\footnote{``Source projection''
is a procedure equivalent to allowing an arbitrary variance to a
linear combination of the data corresponding to the point source. This
procedure is performed in Fourier space; the analog in the image
domain would be deleting a contaminated pixel.}. The best available
low-frequency radio data covering the CBI fields are from the $1.4 \,
{\rm GHz}$ NRAO VLA Sky Survey \citep[NVSS;][]{nvss}, which is reliable
and complete down to $3.4 \,{\rm mJy} $. Since the source density on the sky
as a function of flux density (the ``source counts'') is well known
down to $50-100 \,{\rm \mu Jy} $, the 31 GHz power spectrum of sources not
detected in the NVSS can be calculated given sufficient knowledge of
the ratio of flux densities at these two frequencies,
$S_{30}/S_{1.4}$. Since the sources are unresolved they will have a
flat power spectrum: $C_{\ell} \sim {\rm const}$, or $\ell (\ell +1)
C_{\ell} \sim \ell^2$ in terms of the prevailing convention for CMB
power spectra. The power spectrum of these sources is directly
subtracted from the CBI power spectrum. In this paper we call this
correction to the power spectrum the ``residual source correction'';
in other papers it is also called the statistical correction and the
isotropic point source correction. The uncertainty in this correction
is comparable to other uncertainties on the smallest angular scales
measured by CBI ($\ell > 2000$, or angular scales smaller than about 5
arcminutes). \citet{cbi7} calculate a value of $96 \pm 48 \,{\rm \mu K^2} $,
where the uncertainty in the correction is dominated by the poorly
constrained extrapolation of $1.4 \, {\rm GHz}$ flux densities to $31
\, {\rm GHz}$. The $\sim 50 \,{\rm \mu K^2} $ uncertainty in the point source
correction contributes a substantial uncertainty to the CBI
measurement at $\ell > 1960$ of $355^{+137}_{-122} \,{\rm \mu K^2} $ (of which
$80 - 90 \,{\rm \mu K^2} $ is expected to be intrinsic anisotropy). New CBI
results presented in \citet{cbi10} show an excess which, to be
explained by point sources, would have a power $\ell
(\ell+1)C^{src}_{\ell}/(2\pi) = 275 \pm 63 \,{\rm \mu K^2} $ at $\ell =2500$,
consistent with the \citet{cbi7} result.
In more detail, if at a given frequency sources above some flux
density $S_{max}$ are removed or projected out, the power spectrum of
the residual sources below $S_{max}$
is proportional to
\citep{myers03}
\begin{equation}
X_{src} = \int_{0}^{S_{max}} \, dS \, S^2 \frac{dN}{dS}
\label{eq:csrc}
\end{equation}
In our case the sources are selected for inclusion at a
different, lower frequency ({\it i.e.}, they are in effect the
sources in our fields that are not detected reliably in the NVSS,
and hence are not projected out). Then, assuming that the spectral
properties of the sources are constant over the range of relevant flux
densities--- in practice, $\sim 1$ to $4$ mJy at $1.4$ GHz---
Eq.~\ref{eq:csrc} becomes
\begin{equation}
X_{src} = \langle \left(\frac{S_{31}}{S_{1.4}}\right)^2 \rangle \int_{0}^{S_{max,1.4}} \, dS_{1.4} \,
S_{1.4}^2 \, \frac{dN}{dS_{1.4}}
\end{equation}
The low-frequency source counts $dN/dS$ are well-known. To accurately
correct 31 GHz CMB measurements for point source contamination we need
to determine the mean value of $ (S_{31}/S_{1.4})^2 $, and to
determine the uncertainty in $X_{src}$ and therefore in the
source-corrected CMB power spectrum, we must know its distribution.
For the shallow source counts [$N(>S) \propto S^{-0.7}$] observed at
mJy levels the sources immediately below the projection threshold
dominate the sky variance and our surveys target this population. For
the CBI, for instance, 75\% of the correction comes from sources in
the range $1 < S_{1.4} < 3.4 \,{\rm mJy} $. At 31 GHz the sky variance can be
strongly influenced by the abundance of comparatively rare flat or
inverted spectrum sources, so a large sample is needed to place useful
constraints on the abundance of these sources.
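As a rough consistency check on the 75\% figure, Eq.~2 can be evaluated for a pure power law $N(>S)\propto S^{-0.7}$, i.e.\ $dN/dS \propto S^{-1.7}$; the real counts flatten below $\sim 1$ mJy, so this idealization slightly overestimates the fraction.

```python
def xsrc_powerlaw(s_lo, s_hi, gamma=1.7):
    """Integral of S^2 (dN/dS) dS for dN/dS proportional to S^-gamma,
    up to an overall normalization that cancels in ratios."""
    p = 3.0 - gamma
    return (s_hi**p - s_lo**p) / p

# Fraction of X_src contributed by 1 < S_1.4 < 3.4 mJy sources,
# for a projection threshold at the NVSS completeness limit.
frac = xsrc_powerlaw(1.0, 3.4) / xsrc_powerlaw(0.0, 3.4)  # ~0.80
```

The idealized power law gives $\sim 80\%$, in reasonable agreement with the quoted 75\% from the measured counts.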
In this paper we present a detailed characterization of the impact of
the discrete source foreground on arcminute-scale 31 GHz anisotropy
measurements based upon two observational campaigns. The first
campaign was carried out with the Owens Valley Radio Observatory
(OVRO) 40-meter telescope at 31 GHz from September 2000 through
December 2002. The second campaign used the Robert C. Byrd
Green Bank Telescope (GBT) from February to May of 2006. This work
was undertaken with the specific aim of improving the accuracy of CBI
microwave background anisotropy measurements. A companion paper
\citep{cbi10} presents the 5-year CBI total intensity power spectrum
incorporating the results of the point source measurements discussed
here.
The structure of this paper is as follows. In \S~\ref{sec:instr} we
describe the instrumentation used in both surveys. \S~\ref{sec:samp}
describes the source lists and sample selection, and
\S~\ref{sec:obsreduc} describes the OVRO 40-m and GBT observations and
data reduction and presents catalogs of the
observations. \S~\ref{sec:results} presents a determination of the
$1.4$ to $31$ GHz spectral index of NVSS sources and determines the
implications of these measurements for 31 GHz CMB observations, for
the case of the CBI in particular; here we also present a
determination of the 31 GHz source counts. Finally
\S~\ref{sec:summary} reviews our main conclusions.
\section{Instrumentation}
\label{sec:instr}
\subsection{OVRO 40-meter and 31 GHz Receiver}
\label{subsec:ovroinstr}
In 1999 the OVRO 40-meter telescope was outfitted with a
Dicke-switching, dual horn receiver operating in four $2 {\, \rm GHz}$ bands
between 26 and 34 GHz. The Dicke switch alternates between the two
horns at a rate of $125 \, {\rm Hz} $, sampling two beams separated by $7'.8$
in cross-elevation on the sky. Each beam has a FWHM of $1'.36$ at 31
GHz; this is somewhat larger than what would be expected for a
40-meter dish since only the central 30 meters are illuminated. The
measured receiver temperature in the 31 GHz band is $23 \,{\rm K} $. The
statistics of (noise) samples taken against both ambient temperature
and $77 \,{\rm K} $ beam-filling loads are consistent with the receiver
temperature. Including $ 13 \,{\rm K} $ per airmass due to the atmosphere,
$2.7 \,{\rm K} $ from the CMB, and a fixed ground contribution of $ 10 \,{\rm K} $,
the system temperature at zenith is $\sim 50 \,{\rm K} $. Calibration is
facilitated by two broad-band noise diodes; cross-guide couplers
before the Dicke switch allow for the insertion of signals from these
devices.
The telescope is an alt-azimuth instrument consisting of a
paraboloidal dish reflector, primary focus feed, and supports, mounted
on an alidade and base pedestal. Our observing frequency of $ 31
{\, \rm GHz}$ is beyond the design specification of the 40-meter telescope,
resulting in aperture efficiencies of only $15\%$. The gain of the
40-m changes as a function both of zenith angle (ZA), due to
gravitational deformations of the dish structure, and of the angle of the
Sun from the optical axis, due to thermal deformations. These
variations were characterized by long tracks on bright calibrators and
the resulting gain corrections are applied offline in the data
reduction. The focus position also varies with zenith angle. The
focus position corrections are applied in real time and checked
periodically for consistency. The peak sensitivity that the 40-meter
achieves is $12$ mJy RMS in one minute at 40 degrees elevation in the
31 GHz band. The outer bands had substantially higher noise
levels. The analysis in this paper relies upon data from the 31 GHz
channel only.
\subsection{GBT, 31 GHz Receiver and Continuum Backend}
\label{subsec:gbtinstr}
The Robert C. Byrd Green Bank Telescope (GBT) is a 100-meter off-axis
Gregorian telescope located in Green Bank, West
Virginia \citep{gbtspie}. One of the GBT's distinguishing features is
the fact that it was designed to operate efficiently at frequencies up
through 115 GHz. For the observations presented here the GBT's 31 GHz
aperture efficiency was 50\%. The 2-dimensional RMS referenced
pointing accuracy is $4''$ on timescales of half an hour to an
hour. Corrections to the focus tracking model on the same timescale
are typically a few millimeters. Owing to the primary reflector's
remotely actuated positioning system, the telescope gain does not vary
significantly at elevations greater than 20 degrees.
Broadband measurements of the radio-frequency continuum are affected
by systematic effects such as gain fluctuations and variations in the
emissivity of the atmosphere. To mitigate these effects, the receiver built for the
OVRO 40-m employed a Dicke switch, enabling two feeds to be sampled
rapidly in series. For the GBT receiver, we chose an electronic
beamswitching arrangement employing $180^{\circ}$ hybrids similar to
the WMAP radiometers \citep{jarosik, padinka}. This permits wider
bandwidth coverage, avoids expensive and difficult to procure
Dicke switches, and permits signals from both feeds to be
simultaneously measured at all times. The receiver provides 16
continuum channels, one for each of four frequency bands, two feeds,
and both circular polarizations. Receiver temperatures range from $20
\, {\rm K}$ to $40 \, {\rm K}$ across the band, resulting in $T_{sys}$
values on-sky of $35 \, {\rm K}$ to $65 \, {\rm K}$. Broadband noise
diodes are coupled in to the signal path prior to the first hybrid tee
and permit monitoring the total gain of the system for calibration
purposes.
The Caltech Continuum Backend (CCB) is a digital backend which
controls the GBT 26-40 GHz receiver and synchronously reads out and
demodulates the beamswitched signal. Lab tests conducted with the CCB
connected to the receiver show that the noise obtained is 15 to 30\%
above the theoretical noise minimum given by the radiometer equation.
For our observations we operated the receiver with a 4 kHz beam switch
rate in order to avoid excessive loss of time to blanking after phase
switch transitions. The CCB was constructed by NRAO and the California
Institute of Technology, with funding provided to Caltech through the
NRAO University Instrumentation Program. The 26-40 GHz receiver and
CCB are available as facility instruments on the GBT.
With the 31 GHz receiver and continuum backend, we attain nearly
thermal noise limited performance ($\sim 0.15 \,{\rm mJy} $ RMS in one minute)
a small fraction of the time, when the atmosphere is very dry and
stable. More typically the thermal noise RMS in a 60 second
double-differenced observation (essentially identical to that
described in \S~\ref{sec:ovroobs}) is $0.4 - 0.5 \,{\rm mJy} $ (RMS). The
results in this paper depend only on data from the 31 GHz channels.
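As a sanity check, the quoted best-case sensitivity is roughly what the radiometer equation predicts. All inputs below are round-number assumptions for illustration (mid-range $T_{sys}$, a single 2 GHz channel, a 100 m dish at 50\% aperture efficiency, and a factor of $\sim 2$ penalty for differencing), not measured survey parameters.

```python
import math

# Back-of-envelope radiometer-equation estimate of the ~0.15 mJy RMS
# in one minute quoted above.  Every number here is an assumption.
k_B   = 1.380649e-23                 # Boltzmann constant, J/K
T_sys = 40.0                         # K, mid-range of the quoted 35-65 K
bw    = 2e9                          # Hz, one 2 GHz channel
tau   = 60.0                         # s, integration time
A_eff = 0.5 * math.pi * 50.0**2      # m^2: 50% efficiency, 100 m dish
gain  = A_eff * 1e-26 / (2 * k_B)    # K/Jy (1 Jy = 1e-26 W m^-2 Hz^-1)

dT     = T_sys / math.sqrt(bw * tau) # K, radiometer equation
dS_mJy = 2 * dT / gain * 1e3         # factor ~2 for beam differencing
```

With these assumptions `dS_mJy` comes out near $0.16$ mJy, consistent with the quoted best-case figure.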
\section{Source Lists and Sample Selection}
\label{sec:samp}
The CBI total intensity mosaic fields \citep{cbi7} covered $98 \, {\rm
deg^2}$ of sky, including a $45'$ buffer zone. The OVRO 40m survey
targeted all $S_{1.4} > 6 {\rm mJy}$ sources in this region. The CBI
polarization observations \citep{cbipol} covered $115 \, {\rm deg^2}$
in all, also including a $45'$ buffer zone; accounting for the overlap
between these two datasets the total sky coverage is $143 \, {\rm
deg^2}$. The GBT survey targeted $S_{1.4} > 3.4 \,{\rm mJy} $ (the NVSS 99\%
completeness limit) sources in this total region, although full
coverage was not achieved. Source selection proceeded from areas of
sky with the lowest CBI map noise. Sources detected at $3\sigma$ or
greater in the OVRO survey were, as a rule, avoided in the GBT survey.
Sources in the CBI fields were observed from September 1999 through
December 2001 with the OVRO 40-meter telescope in support of ongoing
CBI observations. The 40-m observations preferentially targeted
sources in the original \citep{cbi3,cbi7} CBI total intensity fields;
in all, $2,315$ sources were observed by the 40-m. With the typical
RMS sensitivity of the OVRO survey ($2.5 \, {\rm mJy}$ -- see
\S~\ref{sec:ovroobs}), this resulted in 180 detections at $4\sigma$ or
greater significance (363 at $3\sigma$ or greater).
Our aim with the GBT survey was to measure {\it all} of the NVSS
sources in the CBI fields with a sensitivity comparable to the RMS
noise in the CBI maps. The 363 sources previously
detected\footnote{Due to a software bug, GBT observations of sources
between $+01^{\circ}$ and $-01^{\circ}$ were made without regard
for the OVRO 40m measurements, {\it i.e.} observations in this
range were not pre-censored by the measured OVRO flux value.} at
$3\sigma$ or greater in the OVRO 40m survey were not observed by the
GBT, leaving $5,636$ sources. Useful GBT data were collected on
$1,490$ of these (\S~\ref{sec:gbtobs}). Faint NVSS sources, and
sources in areas where the CBI maps are most sensitive, were
preferentially targeted. Of the OVRO observations, 640 sources' data
were superseded by more sensitive GBT measurements, leaving $1675$
unique observations in the OVRO dataset. In all, useful data were
obtained on $3,165$ NVSS sources. The distribution of $1.4$ GHz flux
densities of the sources measured is shown in
Figure~\ref{fig:sourcefluxdist}.
\begin{figure}[htbp]
\vspace{3in}
\plotone{surveylowfreqflux.eps}
\caption{NVSS flux densities for sources measured in the GBT and OVRO 40-m surveys.}
\label{fig:sourcefluxdist}
\end{figure}
\section{Observations}
\label{sec:obsreduc}
\subsection{OVRO Observations}
\label{sec:ovroobs}
All NVSS sources brighter than 6 mJy were observed to a typical RMS
sensitivity of $2.4 \,{\rm mJy} $, requiring $30$ five-minute observations on
average. Five-minute observations of NVSS-selected sources were
interleaved with daily measurements of 3C286, 3C147, 3C279, and other
bright sources to monitor the system's
performance. In between calibrator observations, the system gain was
monitored with the noise diodes internal to the receiver. Every 40
minutes, a calibrator source within $\sim 15^{\circ}$ of the field
being observed was measured to determine the telescope pointing
offset. For each flux density measurement, the online system reports
an estimate of the measurement error from the variance in the 1-second
samples within the integration period. During the course of all
observations weather data are collected and recorded for later
correlation with the astronomical datastream. The basic measurement
consists of four beamswitched integration periods wherein the
source of interest is alternately placed in the two beams of the
telescope in an A/B/B/A pattern. This symmetric double-differencing
scheme is effective at cancelling constant and gradient terms in
atmospheric emission \citep{readheadovro}.
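The cancellation can be made explicit: if each beamswitched integration period yields one sample and the atmospheric contribution drifts linearly across the four periods, the symmetric A/B/B/A combination recovers the source flux exactly. The sign and normalization conventions below are illustrative assumptions, not those of the 40-m pipeline.

```python
def double_difference(d):
    """Symmetric A/B/B/A double difference of four beamswitched samples.
    Cancels constant and linear-gradient terms in atmospheric emission."""
    d1, d2, d3, d4 = d
    return (d1 - d2 - d3 + d4) / 4.0

# One sample per phase: source of flux s in beam A (+s), then B (-s),
# then B, then A, on top of an atmosphere with offset a and linear
# drift b per phase.
s, a, b = 5.0, 100.0, 3.0
samples = [s + a, -s + a + b, -s + a + 2 * b, s + a + 3 * b]
```

Both the offset `a` and the drift `b` cancel identically, leaving only the source flux `s`.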
Our flux density scale is based on the WMAP 5-year
measurement of the 32 GHz brightness temperature of Jupiter $T_J =
146.6 \pm 0.75 \,{\rm K} $ \citep{hill}, extrapolated across the receiver bandpass with
3C286 using an assumed spectral index of $\alpha = -0.827$
\citep{ring5m}. Observations of 3C286, 3C48, and 3C147 from Sep 1999
through May of 2000 showed uncorrelated RMS variations in flux density
of $\sim 4 \%$. Together with the $3\%$ uncertainty in the absolute
flux density of Jupiter, this gives a $5\%$ calibration uncertainty
for the OVRO 40-m.
\subsection{OVRO Data Reduction}
\label{sec:ovroreduc}
OVRO data were reduced as follows. Observations of the noise diodes
several times per hour are first used to remove electronic gain
fluctuations and scale the data to antenna temperature. Corrections
for the telescope gain as a function of elevation, determined from
long tracks on bright calibrators, are applied, as is a correction to
account for atmospheric opacity as a function of elevation.
Time-variable weather conditions determine the sensitivity of OVRO
40-m flux density measurements. Typically sources were observed in
rotation, with five 1-minute observations of each source. The
timescale on which changes in the weather affect the 40-m photometric
sensitivity is typically an hour, which sets the timescale on which we wish to
estimate the measurement noise. During this period of time $\sim 10$
separate sources with different mean flux densities were measured.
Sources in the OVRO sample are expected to have 31 GHz flux
densities of $1-2$ mJy, small in comparison to the $10-15$ mJy noise
level achieved on a per-observation basis. To determine the
measurement noise for a single observation, we wished to compute the
characteristic scatter of similar observations nearby in time with no
contribution due to the scatter in the source flux densities
themselves, our main concern being the power-law tail of the source
flux density distribution which results in rare objects comparable to
or greater than the per-observation noise level which could bias a
scatter-based noise estimate. We chose to determine the noise level
of each observation by computing the median absolute deviation of
similar observations within a one hour buffer centered on the
observation under consideration. The median absolute deviation (MedAD)
of some data $x_i$ is defined as
\begin{equation}
{\rm MedAD} (x_i) = {\rm Median} ( | x_i - {\rm Median}(x_i) | )
\end{equation}
and is a measure of the dominant noise scale of a distribution which
is extremely resistant to the presence and magnitude of outliers
\citep{robust}. For a Gaussian distribution of width $\sigma_g$, the
${\rm MedAD} = 0.6745 \sigma_g$. We used this relation to rescale
the MedAD into an effective Gaussian $\sigma$ value for each
observation. An independent noise estimate was provided by the scatter
of 1-second integrations {\it within} each measurement. The internal
noises gave results comparable to, but generally 15\%-30\% lower than,
the scatter between observations. This is consistent with the
expectation that except in the very best observing conditions,
low-level photometric instabilities ({\it e.g.} a slow variation in
the gradient of the sky emission) on timescales longer than that of an
individual source observation contribute to the measurement noise. We
confirmed with simulations that to within a few percent this approach
yields an unbiased estimate of the RMS of a Gaussian noise
distribution in the presence of power-law source populations with
typical per-measurement signal-to-noise ratios of $0.1$ to $0.2$.
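The MedAD-based estimator can be sketched as follows (our illustration, not the survey pipeline; the numbers are invented). Note how a single bright outlier barely moves the estimate:

```python
import statistics

def medad_sigma(values):
    """Median absolute deviation rescaled to a Gaussian-equivalent sigma.

    For Gaussian noise, MedAD = 0.6745*sigma, so dividing by 0.6745
    recovers sigma even when a few bright sources contaminate the window.
    """
    med = statistics.median(values)
    medad = statistics.median(abs(v - med) for v in values)
    return medad / 0.6745

# Ten quiet measurements (mJy scale, made up) plus one 60 mJy outlier:
quiet = [-12.0, 8.0, 3.0, -5.0, 10.0, -9.0, 4.0, -2.0, 7.0, -6.0]
print(medad_sigma(quiet), medad_sigma(quiet + [60.0]))
```

An RMS-based estimate of the same contaminated window would be inflated by the outlier; the MedAD changes by only a few percent.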
Over the course of the observing campaign individual sources in our
sample were observed between a few and about 100 times, most commonly
about thirty times. All observations of a given source were combined
to form a weighted mean and an uncertainty on this mean was computed
by propagation of the error bars of each measurement. The reduced
$\chi^2$ of the data about this average was also computed; these
values are shown in Figure~\ref{fig:ovrochi2}. For reference the
expected distribution of $\chi^2_{\nu}$ for 30 DOF is shown. There is
a fairly broad distribution of final sensitivities, owing largely to the
range of total integration times per source, but also to the range of
photometric conditions. For our final source catalog
(\S~\ref{subsec:catalogs}) we adopted a $4\sigma$ detection threshold;
the probability of detecting a random source of a given flux density,
allowing for the distribution of noises in our dataset, is shown in
Figure~\ref{fig:ovrosens}.
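As a concrete sketch of this combination step (our illustration, not the survey code), the inverse-variance-weighted mean, its propagated error, and the reduced $\chi^2$ about the mean can be computed as:

```python
def weighted_mean_chi2(x, e):
    """Inverse-variance-weighted mean, propagated error, and reduced chi^2.

    x: list of measurements; e: list of per-measurement 1-sigma errors.
    """
    w = [1.0 / ei ** 2 for ei in e]
    wsum = sum(w)
    mean = sum(wi * xi for wi, xi in zip(w, x)) / wsum
    err = wsum ** -0.5                       # error on the weighted mean
    dof = len(x) - 1
    chi2_nu = sum(wi * (xi - mean) ** 2 for wi, xi in zip(w, x)) / dof
    return mean, err, chi2_nu

# Three hypothetical measurements of one source, each with 1 mJy error:
mean, err, chi2_nu = weighted_mean_chi2([10.0, 12.0, 11.0], [1.0, 1.0, 1.0])
print(mean, err, chi2_nu)
```

A $\chi^2_{\nu}$ far above unity for a given source then flags either underestimated errors or genuine variability.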
We reject data within 5 degrees of the sun or the moon, as well as
data for which the preceding pointing calibrator observations were not
successful or did not occur within one hour. Since high winds can
affect the telescope pointing, observations during which the wind was
greater than 15 mph are discarded. Reducing the pointing requirement
from an hour to a half-hour, and the wind limit from 15 to $7.5$ mph
(reducing the force of the wind on the telescope by roughly a factor
of four) did not significantly affect the measured flux densities of
the OVRO-detected sources.
The mean flux density for each source is computed by averaging over
the entire observing epoch, with each point inversely weighted by the
estimated measurement variance. For our final results we used only the
$30-32$ GHz band, which we take to have a nominal center frequency of
$31.0$ GHz. This is the center of the CBI observing band and
corresponds closely to the most sensitive channel of the GBT receiver,
readily allowing direct comparisons.
\begin{figure}
\vspace{4in}
\special{psfile=ovrochi2b.eps}
\caption{The $\chi^2_{\nu}$ distribution for OVRO 40m measurements, each individual
$\chi^2_{\nu}$ value being derived from the combination of all data in the given
frequency band on a given source. Also shown is the theoretical reduced $\chi^2$
distribution for 30 degrees of freedom, which is typical, although the number of
observations on a given source ranges from three to a hundred in some cases.}
\label{fig:ovrochi2}
\end{figure}
\begin{figure}
\vspace{4in}
\special{psfile=cumulant.vps}
\caption{The probability that a given source is detected in the OVRO
survey as a function of 31 GHz flux density. The 90\% (dotted) and
95\% (dash-dotted) completeness levels are shown for clarity. The GBT survey
is essentially complete above $2.5 \,{\rm mJy} $.}
\label{fig:ovrosens}
\end{figure}
\subsection{GBT Observations}
\label{sec:gbtobs}
Test observations of the CCB and 26-40 GHz receiver on the GBT were
conducted in November and December of 2005, and January of 2006; these
observations confirmed lab measurements of the system performance.
The science observations ran from 02 February, 2006 through 07 May
2006. We collected 3198 observations of 3040 NVSS sources; after the
data filtering described below, 1567 observations of 1490 sources
remain in the final dataset.
For these observations we developed an ``On-the-Fly Nod'' variant of
the double-differencing technique described in \S~\ref{sec:ovroobs}
and \citet{readheadovro}. With this technique data are collected
continuously through the entire observation, including the slews
between beams. The recorded (10 Hz) antenna positions are used with
the target source coordinates to construct a template beamswitched
signal which is fitted to the differenced data. This approach
minimizes scan-start overheads and provides a conveniently continuous
datastream for each source; it also allows us to carefully account for
imperfect source acquisition and settling times offline. An example
observation of a bright source is shown in Figure~\ref{fig:nod}. A
typical feature is the spike at $\sim 30\,{\rm s}$. In actuality the spike
is a dip in the source signal coming from the negative beam, and
arises from stiction in the telescope servo resulting in overshooting
the source slightly when slewing in one direction. Since this
overshoot was recorded by the antenna position encoders it is reflected
in the template model and has minimal effect on our observations,
particularly of our much weaker science sources. The on-source dwell
time in each phase was 10 seconds; there was a 10-second slew between A
and B phases. A 10 second settle was also allowed at the start of each
scan in order to allow possible GBT feedarm vibrations (occasionally
excited by the servo system at the start of a scan) to settle, but
this was never seen to be an issue in our frequent bright-source
calibration checks. The average slew-time between program sources was
20 seconds, for an average total elapsed time per source measurement
of 90 seconds. During the nod measurement the detected RF power in
each of 16 channels was recorded at 200 Hz by the CCB.
\begin{figure}[htbp]
\epsfxsize=6.2in
\epsffile{prettyNod.ps}
\caption{Measurement of steep spectrum source $085328-0341$ at $31.25
\, {\rm GHz}$. The left hand plot shows the individual 5
millisecond beamswitched integrations calibrated via the noise diode
to antenna temperature. The right hand plot shows the individual
beamswitched integrations averaged into 0.5 second measurements with
error bars given by the internal scatter of each 0.5 second
measurement and flux density calibrated via 3C286; here the initial
10 second settle period has been excised. The remaining data
illustrate the symmetric A/B/B/A nod pattern in which the source
was placed in the two beams of the receiver. The solid line in the
right panel shows the template fit (Eq.~\ref{eq:gbtfit}).}
\label{fig:nod}
\end{figure}
During daylight the GBT pointing and focus was checked every half hour
on a nearby bright source ($S_{31 {\, \rm GHz}} > 500 \,{\rm mJy} $); at night, this
was relaxed to every 45 minutes. During each pointing and focus check
a nod observation was also collected to monitor intraday stability of
our measurements. To monitor interday repeatability of our
calibration, we selected a steep-spectrum (NVSS/OVRO) source near each
CBI field as well and measured it at least once per observing session.
\subsection{GBT Data Reduction}
\label{sec:gbtreduc}
While the GBT observations are similar to the OVRO observations,
several important differences led to a different approach to
reducing the data. First, substantially less ``data reduction''
(averaging) was performed by the online data acquisition system,
enabling preservation of more data for offline diagnostics. Second,
the typical signal-to-noise in a GBT observation was of order unity
owing to the higher sensitivity of the telescope -- see
Figure~\ref{fig:gbtsnr}. Finally the dataset was substantially smaller,
less than a week in total in contrast to several years of OVRO
observations.
The CCB data were first averaged from 5 ms to $0.5 \, {\rm sec}$
integrations $d(t_i)$ and each integration was assigned an error
estimate based on the internal scatter of the beamswitched 5 millisecond
integrations. The 10 Hz recorded antenna positions were interpolated
onto the same time sampling as the CCB data resulting in a time series
of positions $\vec{x}_j(t_i) \equiv \vec{x}_{j,i}$ for a given feed
indexed by $j=1,2$ for a given observation. The beam locations on the
sky and measured GBT beam pattern $B$ as a function of frequency were
used to compute the expected beamswitched response of the receiver to
a point source of flux density $s_o$ at the location of the source of
interest, $\vec{x}_o$
\begin{equation}
d_i = \, s_o \times [ B( |\vec{x}_{1,i} - \vec{x}_o| ) - B( |\vec{x}_{2,i} - \vec{x}_o| )]
+ \langle d_i \rangle + \frac{d d_i}{dt}
\label{eq:gbtfit}
\end{equation}
where the difference in square brackets comes about due to the beam
switching. The last two terms are mean (radiometric offset) and
gradient terms allowed in the fit and which, due to the symmetry of
the nod pattern, are approximately orthogonal to the source flux
density parameter $s_o$. This template was fit directly to the beamswitched
data -- refer again to Figure~\ref{fig:nod} for an example. The
$\chi_{\nu}^2$ of the fit is a good diagnostic of data quality; for
sources weaker than 10 mJy, $\chi_{\nu}^2$ was close to unity under
good conditions. As weather conditions degrade, our simple model
fails, resulting in an appreciable increase in $\chi_{\nu}^2$. For
sources brighter than $\sim 10 \, {\rm mJy}$, $\chi_{\nu}^2$ rarely
approached unity even under excellent observing conditions due to
imperfections in the beam model and residual pointing errors. These
observations required separate consideration in
$\chi_{\nu}^2$-based filters.
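Because the model of Eq.~\ref{eq:gbtfit} is linear in the source amplitude, offset, and gradient, the fit reduces to weighted linear least squares. The following sketch (ours; the square-wave template and all numbers are stand-ins for the measured GBT beam response) recovers known parameters from noiseless synthetic data:

```python
import numpy as np

def fit_nod(t, data, template, sigma=1.0):
    """Weighted least-squares fit of [source template, offset, drift]."""
    A = np.column_stack([template, np.ones_like(t), t]) / sigma
    coef, *_ = np.linalg.lstsq(A, data / sigma, rcond=None)
    s_o, offset, drift = coef
    return s_o, offset, drift

# Idealized A/B/B/A template: +1 with the source in beam 1, -1 in beam 2.
t = np.linspace(0.0, 50.0, 100, endpoint=False)
template = np.where((t // 12.5) % 3 == 0, 1.0, -1.0)

# Synthetic data: 4.2 mJy source plus an offset and a slow drift.
data = 4.2 * template + 1.0 + 0.02 * t
s_o, offset, drift = fit_nod(t, data, template)
print(s_o, offset, drift)
```

The near-orthogonality of the symmetric template to the offset and drift columns is what makes $s_o$ insensitive to slow radiometric variations.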
We performed this procedure separately on each of the 16 CCB data
channels. Using observations of 3C286 throughout the observing
campaign a single mean calibration, referenced to the WMAP-5
Jupiter-temperature scale described in \S~\ref{sec:ovroobs}, was
determined for each channel. For the final processing this calibration
was applied to the individual detector timestreams and all timestreams
for a given frequency were averaged in the time domain before
performing the source flux density fit of Eq.~\ref{eq:gbtfit}. This
ensured that noise fluctuations correlated between feeds or
polarizations were correctly accounted for in the noise estimate that
follows.
Repeated observations of bright, steep-spectrum sources near our
fields (also of fainter steep-spectrum sources) provided a check on
the validity of our pointing filters. We found that the accuracy of
our calibration for an individual source is 10\% (RMS) at 31 GHz,
dominated by uncertainties in the GBT pointing.
\begin{figure}[htbp]
\epsfxsize=6.2in
\epsffile{newgbtsurveyhist2.eps}
\caption{The distribution of the signal-to-noise ratio for the
measurements in the GBT survey. Compare with blank-field
measurements in Fig.~\ref{fig:gbtnoisetest}.}
\label{fig:gbtsnr}
\end{figure}
Since the signal-to-noise in a typical 70-second GBT observation was of
order unity we could not straightforwardly compute the noise in our data
from the scatter of the measurements ({\it i.e.}, the fitted values of
the source flux densities $s_o$) as we did for the OVRO
measurements. Instead we formed alternate combinations of the
individual segments of the symmetric Nod procedure to quantify the
photometric stability of the measurement. Designating the average
value of integrations within individual segments of the nod by
$A_1,B_1,B_2,A_2$, then $\Delta A = A_1-A_2$ is a source-signal-free
combination measuring the photometric fluctuation over 50 seconds,
and $\Delta B = B_1-B_2$ is a source-signal-free combination measuring
the photometric fluctuation over 10 seconds. From test observations of
blank patches of sky under a wide range of conditions we found that the
average (over some window of many observations) of the
root-mean-squares of $\Delta A$ and $\Delta B$ gave an estimate of
the RMS of the measured flux density values $s_o$ accurate to within 10\%.
Note that since both $\Delta A$ and $\Delta B$
respond to time gradients of the beamswitched signal, whereas the
fitted source flux density ($s_o$ in Eq.~\ref{eq:gbtfit}) does not,
we might expect them to slightly overestimate the noise.
To improve the robustness of this
noise measure, we computed the mean absolute deviation
(MnAD) of $\Delta A$ and $\Delta B$ rather than the RMS and
renormalized to a Gaussian equivalent (${\rm MnAD} = \sqrt{2/\pi}\, \sigma_g$);
we chose the somewhat less robust but lower variance MnAD over the
MedAD used in \S~\ref{sec:ovroreduc} because outliers are a lesser
concern for the signal-free estimators $\Delta A , \Delta B$ than in
the flux density measurements themselves. For each measurement we
computed these quantities in a one-hour buffer centered on the
observation in question. With the appropriate normalization constants,
derived in the white-noise limit, this approach gave a noise estimate
\begin{equation}
N(t_i) = \frac{\sqrt{\pi}}{8} [MnAD(\Delta A) + MnAD(\Delta B) ]
\label{eq:gbtnoiseest}
\end{equation}
(where the mean absolute deviations were computed from all observations
within plus or minus a half hour of $t_i$) close to the RMS that
signal-free nod measurements would have given under the same conditions.
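A minimal sketch of this estimator (our reconstruction of Eq.~\ref{eq:gbtnoiseest}, not the pipeline code; the normalization assumes white Gaussian noise as stated above):

```python
import math

def mnad(values):
    """Mean absolute deviation about the mean."""
    mean = sum(values) / len(values)
    return sum(abs(v - mean) for v in values) / len(values)

def nod_noise(delta_a, delta_b):
    """Per-measurement noise from the signal-free nod combinations.

    delta_a: A1-A2 values, delta_b: B1-B2 values, drawn from all nods
    within +/- 30 minutes of the observation in question.
    """
    return math.sqrt(math.pi) / 8.0 * (mnad(delta_a) + mnad(delta_b))

# Toy inputs (invented): alternating-sign deltas of unit magnitude.
d = [1.0, -1.0, 1.0, -1.0]
print(nod_noise(d, d))
```

Since $\Delta A$ and $\Delta B$ contain no source signal by construction, bright sources in the hourly buffer cannot bias this estimate.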
The data from the blank-field test observations are shown in
Figure~\ref{fig:gbtnoisetest}, normalized by the individual estimated
measurement errors. The actual per-measurement noises varied by a
factor of four. The dispersion in the noise-normalized data is $\sigma
= 0.91$, acceptably close to the $\sigma = 1.0$ which would result
from perfectly measured Gaussian noise. As expected the noise was
slightly overestimated due to the presence of radiometric
gradients.
The mean noise level in the GBT data which pass the data filters
described below is $390 \,{\rm \mu Jy} $. The distribution is shown in
Figure~\ref{fig:gbtnoisedist}.
\begin{figure}[htbp]
\epsfxsize=6.2in
\epsffile{newBubbleSnrPlot.eps}
\caption{Distribution of blank-field measurements divided by their
individual measurement noises estimated by
Eq.~\ref{eq:gbtnoiseest}. The dashed line shows the best fitting
Gaussian which has $\sigma = 0.91$, close to the expected value of
unity which would be obtained for perfectly estimated Gaussian noise;
the noise values themselves varied by a factor of 4. Compare to the
corresponding distribution of measurements that targeted NVSS sources
in the survey, Figure~\ref{fig:gbtsnr}.}
\label{fig:gbtnoisetest}
\end{figure}
\begin{figure}[htbp]
\epsfxsize=6.2in
\epsffile{gbterrhist.eps}
\caption{Distribution of RMS measurement errors in the GBT survey,
estimated as described in the text.}
\label{fig:gbtnoisedist}
\end{figure}
The main time-variable systematics affecting our data were
departures from good photometric conditions, caused by atmospheric
clouds and water vapor, and errors in the telescope pointing and focus. A series of
filters excised compromised observations. To identify periods of poor
photometry we computed the median $\chi_{\nu}^2$ in sliding one-hour
buffers -- excluding measurements brighter than 10 mJy -- and rejected
observations for which this value exceeds $1.4$. To catch rare
isolated glitches, individual observations with $\chi_{\nu}^2 > 1.4$
were also rejected {\it unless} the fitted flux density exceeded 10
mJy. In this case the observation was inspected manually; all passed
inspection so were retained.
In the course of commissioning and operations the pointing and focus
performance of the GBT was extensively characterized, with
results summarized by \citet{ptcs26} and \citet{ptcs49}. Under calm
conditions (winds under $3.0$ m/s) the observed RMS in the radial
pointing offset is $2.7''$, corresponding to less than 5\% loss of
peak signal for the GBT's $24''$ (FWHM) beam at 31 GHz. When the
windspeed is greater than a few meters per second the GBT pointing
performance degrades, principally due to excitation of feedarm
vibrations; up to $4.0$ m/s, the pointing accuracy is still acceptable
for 31 GHz observations. We rejected individual observations with mean
wind speeds over 3 meters per second, as well as individual
observations with peak wind speeds over 5 meters per second. Every
half hour (during the day) to 45 minutes (at night), the telescope
pointing and focus was updated from observations of a nearby
calibrator. Science program observations that were
not preceded by a successful peak and focus correction within these
periods of time were also rejected. There were no GBT data for which
the sun or the moon were closer than 10 degrees away. The data filters
are summarized in Table~\ref{tbl:filters}.
To check the effectiveness of our pointing update criteria we
considered the ratio $r$ of measured flux density in the 38 GHz band
to that measured in the 31 GHz band for all sources detected at
$4\sigma$ or greater. The flux density measured in the high frequency
band will fall more for a given pointing offset than will the lower
frequency band. For the dataset as a whole $r=0.84 \pm 0.03$, where
the uncertainty is the error in the mean assuming Gaussian statistics
from the RMS of the distribution. From the spectral index analysis of
\S~\ref{sec:maxlike} we calculated an expected $r=0.83 \pm 0.04$,
where the error bar in this case is the RMS of the distribution
(indicating its intrinsic width) predicted by our Maximum Likelihood
spectral index distribution under the assumption of a single power-law
extrapolation. Note that selecting $4\sigma$ detections will bias $r$
slightly high relative to the full distribution. Splitting the data
into two halves based on the time since the last pointing calibration,
the data with more recent corrections has an average flux density
ratio of $0.84$, while data with less-fresh pointing corrections have
an average flux density ratio of $0.83$, indicating no significant
change in the telescope pointing between pointing checks. Similarly
the daytime data have $r=0.83$ and the night-time data $r=0.84$,
indicating that on average the thermal effects prevalent during the
day do not significantly affect the telescope pointing. The $\Delta r
= 0.01$ differences seen correspond to average radial pointing offsets
of $2''$ or $2\%$ gain effects; a 10\% loss in gain overall, if caused
by a pointing offset, would correspond to $\Delta r = 0.05$ (a $4.6''$
radial offset). Note that this test is sensitive to any error which
causes relative changes in observed power across the receiver
band. For instance, variations in the Ruze-equivalent RMS deviation of
the telescope surface from a perfect paraboloid scale as $\exp(-(4\pi
\epsilon/\lambda)^2)$ and will also affect the $38$ GHz channel more
than the $31$ GHz channel.
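The frequency-dependent gain loss underlying this test can be checked with a back-of-the-envelope calculation (ours, assuming Gaussian beams whose widths scale inversely with frequency); it approximately reproduces the $\Delta r$ values quoted above.

```python
import math

def gain(offset_arcsec, fwhm_arcsec):
    """Gaussian-beam gain at a given radial pointing offset."""
    return math.exp(-4.0 * math.log(2.0)
                    * (offset_arcsec / fwhm_arcsec) ** 2)

def ratio_shift(offset, r0=0.84, fwhm31=24.0):
    """Observed S38/S31 ratio for a source with true ratio r0."""
    fwhm38 = fwhm31 * 31.0 / 38.0   # assume beam width scales as 1/frequency
    return r0 * gain(offset, fwhm38) / gain(offset, fwhm31)

for off in (2.0, 4.6):   # arcsec radial offsets discussed in the text
    print(off, ratio_shift(off))
```

A $2''$ offset moves $r$ by about $0.01$ and a $4.6''$ offset by about $0.04$, consistent with the sensitivity of the test as described.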
\begin{table}
\begin{tabular}{|c||c|c|} \hline
Filter & Criterion & fraction of data passed \\ \hline
Wind & max $< 5.0 {\rm m/s}$, mean $< 3.0 {\rm m/s}$ & $79\%$ \\
$\langle \chi_{\nu}^2 \rangle$ & $< 1.4$ & 54\% \\
Pointing \& Focus Updates & within 30 (day) or 45 (night) minutes & $86\%$ \\ \hline
Total & & 39\% \\
\hline
\end{tabular}
\caption{Summary of GBT data filters. We show the fraction of the
total dataset that passes each individual filter; since there are
correlations between the filtered variables the filters are not
statistically independent.}
\label{tbl:filters}
\end{table}
To further check the accuracy of our noise estimate we made use of the
fact that in the course of the survey 50 weak ($S_{31} < 15 \,{\rm mJy} $)
sources were observed more than once; we selected weak sources in order
that the effect of gain errors, which can be correlated between
observations, not be a dominant effect. In all there are 122 such
observations. Subtracting the per-source means from each we calculate
a $\chi_{\nu}^2 = 0.95$ for $\nu = 72$. The probability to exceed this
by chance is 59\%.
\subsection{The Effect of Finite Source Size}
While most radio sources are compact compared to the GBT and OVRO
beams, a handful are sufficiently extended that flux density is lost
in targeted observations. On angular scales at which the GBT begins
to lose flux density ($\sim 10''$) most of the extended emission seen
in extragalactic sources originates in radio jets with a synchrotron
spectrum $\propto \nu^{-0.6}$ to $ \nu^{-1}$
\citep{laingpeacock80,dennetthorpe99}; the 31 GHz emission is
dominated by the compact, flatter-spectrum cores. Consider for example
that the NVSS at $1.4$ GHz, with a $45''$ (FWHM) beam, finds $20\%$ of
sources to be resolved, while 9C at 15 GHz, with a $25''$ FWHM beam,
finds only $9\%$ of sources to be detectably resolved
\citep{lizsizes}. Consequently we expect that most of the flux
density in the 31 GHz sky will be in compact sources which are
accurately measured by the GBT.
To test for lost flux in the dataset, NVSS sources with useful GBT
measurements were divided into two groups--- those resolved by NVSS
and those not resolved by NVSS--- and from these groups we constructed
two CBI visibility templates (actually, gridded estimator templates).
We fit for a scale factor $S_{CBI} = f \times S_{GBT}$ for each
template using the CBI visibility data in aggregate. For sources
unresolved by NVSS we find $f = 0.96 \pm 0.04$, indicating that there
is no systematic bias between the flux density scales and that there
is no significant degree of source extension that is not detected by
NVSS. Fitting for a CBI/GBT scale factor using only the NVSS-{\it
resolved} sources yields $f=1.18 \pm 0.10$. The larger error bar in
this is consistent with the (smaller) number of extended sources in
comparison to the number of compact sources in the NVSS catalog.
We found that excising the extended sources from the spectral index
analysis of \S~\ref{sec:maxlike} did not significantly change the
spectral index probability density function (PDF) or final result for
the residual source correction. This is consistent with the expectation
that the 31 GHz sky variance is dominated by the flat, compact
sources.
\subsection{Confusion}
Since the angular resolution of the GBT and, especially, OVRO surveys
are comparatively low ($24''$ and $1'.3$, respectively) chance
superpositions of radio sources occasionally occur and must be
considered. To assess the effects of source confusion we combined the
OVRO and GBT catalogs, giving precedence to GBT measurements where
present, and selected $3\sigma$ or greater detections. To this we
added the NVSS sources with $1.4 \, {\rm GHz}$ flux densities $> 3.4
\,{\rm mJy} $ and multiplied their flux densities by $0.1$ (the mean ratio of
$S_{31}$ to $S_{1.4}$ determined from the Maximum Likelihood analysis
of \S~\ref{sec:maxlike}), thereby obtaining our best estimate of the
point-source 31 GHz sky in the regions observed. We call this our
``reference'' catalog.
Using the reference catalog we scanned the full set of OVRO observations
and identified those for which the sum of the absolute values of
beam-weighted confusing source flux densities in the reference catalog
amounts to more than half the measurement error for that source. This
amounts to $1.3\%$ of the OVRO catalog. For these measurements a
correction is calculated using the procedure described in
Appendix~\ref{appendix:confusion}.
Owing to the smaller beam size and beam throw the level of source
confusion in the GBT data was much lower. Using the same criteria
only two observations were significantly confused; for these sources
the correction described in Appendix~\ref{appendix:confusion} was also
performed.
\subsection{Source Catalogs}
\label{subsec:catalogs}
The full set of OVRO observations is presented in
Table~\ref{tbl:ovroresults}, and the GBT survey results are presented
in Table~\ref{tbl:gbtresults}. Reported error bars include a 10\% and
5\% RMS gain uncertainty for GBT and OVRO measurements,
respectively. Sources detected at greater than $4\sigma$ at 31 GHz are
marked; for this calculation the random gain uncertainty is excluded.
In all $3,165$ sources were observed. The GBT catalog contains
$1,490$ sources. Of the $2,315$ useful OVRO observations many of the
non-detections (and a few detections) are superseded by more sensitive
GBT observations; the OVRO catalog therefore contains data on $1,675$
sources. The detection rate of the OVRO measurements was $11\%$, and
that of the GBT measurements $25\%$. In all $18\%$ of sources were
detected at 31 GHz.
Also included in the table are the $1.4$ GHz flux densities, source
sizes from the NVSS catalog, flags to indicate $4\sigma$ detections,
and flags to indicate which observations have been corrected for the
effects of source confusion in either the main or reference beams.
The catalogs presented here are based on the processing described in
\S~\ref{sec:ovroreduc} and \ref{sec:gbtreduc}. The analysis in
\S~\ref{subsec:srccounts} and \S~\ref{subsec:cbipow} used an earlier
processing with slightly less strict filters; that earlier catalog
contained $3,562$ sources. The spectral index distributions
obtained from these two versions of the source lists are consistent.
\begin{table}
\begin{verbatim}
Name RA/J2000 Dec/J2000 S30 E(S30) S(1.4) E(S1.4) Maj Min D C
085057-0150 08 50 57.06 -01 50 38.6 5.30 1.91 46.90 1.50 0.0 0.0
085101-0509 08 51 01.80 -05 09 52.9 3.00 3.40 36.30 1.20 0.0 0.0
085103-0303 08 51 03.94 -03 03 35.5 2.00 2.20 17.80 1.40 71.4 0.0
085118-0419 08 51 18.93 -04 19 10.3 -3.13 2.81 62.70 2.30 15.3 0.0 *
085121-0418 08 51 21.49 -04 18 25.7 11.74 2.79 66.00 2.40 45.5 0.0 * *
085127-0156 08 51 27.01 -01 56 09.3 4.00 5.00 18.70 1.00 22.7 0.0
085130-0155 08 51 30.58 -01 55 49.4 -4.30 3.80 22.70 0.80 0.0 0.0
085135-0150 08 51 35.60 -01 50 44.9 5.80 2.71 55.20 1.70 0.0 0.0
085137-0405 08 51 37.67 -04 05 00.7 3.40 3.30 27.40 0.90 0.0 0.0
085138-0451 08 51 38.97 -04 51 23.8 14.20 3.25 78.40 2.40 0.0 0.0 *
085141-0424 08 51 41.73 -04 24 35.6 2.80 2.10 13.70 0.60 0.0 0.0
085149-0314 08 51 49.11 -03 14 57.0 0.70 2.50 81.40 2.90 29.8 0.0
085157-0408 08 51 57.40 -04 08 01.2 -0.10 1.80 17.30 0.70 0.0 0.0
\end{verbatim}
\caption{Excerpt of OVRO 40-m survey results. Positions and $1.4$ GHz
flux densities are from NVSS. Columns are: NVSS name; Right Ascension
(J2000); Declination (J2000); 31 GHz flux density and uncertainty in mJy;
NVSS integrated flux density and uncertainty; NVSS Major axis in
arcseconds ($0.0$ indicates no detected size); NVSS minor axis in
arcseconds. A flag in the ``D'' column indicates a $4\sigma$
detection, and a flag in the ``C'' column indicates that a confusion
correction has been performed by the method described in the text. The
full version of this table is available on line.}
\label{tbl:ovroresults}
\end{table}
\begin{table}
\begin{verbatim}
Name RA/J2000 Dec/J2000 S30 E(S30) S(1.4) E(S1.4) Maj Min D C
024033-0430 02 40 33.46 -04 30 00.5 0.17 0.30 3.60 0.60 0.0 0.0
024033-0432 02 40 33.53 -04 32 47.5 0.37 0.32 5.50 0.50 0.0 0.0
024038-0425 02 40 38.89 -04 25 54.5 0.81 0.32 4.00 0.50 0.0 0.0
024055-0428 02 40 55.18 -04 28 36.4 0.34 0.30 4.50 0.50 0.0 0.0
024108-0422 02 41 08.83 -04 22 51.4 5.32 0.61 6.90 0.50 0.0 0.0 *
024111-0425 02 41 11.61 -04 25 21.4 1.23 0.32 26.80 0.90 0.0 0.0 *
024119-0421 02 41 19.39 -04 21 44.4 2.71 0.43 17.20 0.70 0.0 0.0 *
024129-0003 02 41 29.53 -00 03 27.4 -2.40 1.19 4.70 0.50 0.0 0.0
024137-0039 02 41 37.85 -00 39 19.4 0.56 0.61 14.60 0.60 0.0 0.0
024144-0416 02 41 44.17 -04 16 48.0 5.88 0.67 84.50 3.30 32.6 16.5 *
024146-0025 02 41 46.15 -00 25 01.7 0.89 0.61 5.70 0.50 0.0 0.0
024153-0105 02 41 53.70 -01 05 43.3 0.15 0.24 3.40 0.50 0.0 0.0
024204-0053 02 42 04.56 -00 53 33.7 0.26 0.24 5.60 0.50 0.0 0.0
\end{verbatim}
\caption{Excerpt of GBT 31 GHz survey results. Columns are as in Table~\ref{tbl:ovroresults}.}
\label{tbl:gbtresults}
\end{table}
\section{Interpretation}
\label{sec:results}
\subsection{Spectral Index Distribution from GBT, OVRO, and NVSS Data}
\label{sec:maxlike}
In order to determine the contribution of radio sources below the NVSS
completeness limit to the sky variance measured by 31 GHz CMB
experiments such as the CBI, it is necessary to understand the $1.4$
to $31$ GHz spectral index distribution --- or equivalently, the
probability density function (PDF) of $S_{31}$ given $S_{1.4}$. The
data from the survey presented in this paper are the best currently
available for this purpose. Since sources detected at 31 GHz are preferentially flat-spectrum, it is
necessary to include non-detections in this analysis to obtain an
unbiased result. Because we know the low-frequency flux densities of
these sources and the 31 GHz measurement noise, these non-detected
sources will impose constraints on the spectral index distribution.
We adopted a Bayesian Maximum Likelihood approach.
We wish to find the spectral index distribution that maximizes the
likelihood of measuring the observed 31 GHz flux densities given their observed
1.4 GHz NVSS flux densities.
The general form of the likelihood of measuring 31 GHz flux density $ S_{31, obs}$
given 1.4 GHz flux density $ S_{1.4, obs}$ is:
\begin{equation}
P\left( S_{31, obs} | S_{1.4, obs} \right) = \int \int P\left( S_{31, obs} | S_{31, T} \right)
P\left( S_{31, T} | S_{1.4, T}\right) P \left( S_{1.4, T} | S_{1.4, obs} \right) d S_{31, T} d S_{1.4, T}
\end{equation}
integrating over the unknown values of the ``true'' flux densities
$ S_{31, T}$ and $ S_{1.4, T}$, and with the
(unknown) 1.4-31 GHz spectral index function $ P\left( S_{31, T} |
S_{1.4, T} \right) $. We parameterized $\frac{S_{31,T}}{S_{1.4,T}}$ with a set of
$N_{bin}=17$ points in the frequency spectral index $\alpha$ evenly
spaced between $\alpha=-1.6$ and $\alpha=+1$. Appendix~\ref{appendix:maxlike}
contains a more detailed discussion of the evaluation of this
likelihood function.
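To make the structure of this likelihood concrete, the following is a minimal numerical sketch (Python). The uniform bin weights are placeholders rather than our fitted values, and the (small) $1.4$ GHz measurement noise is neglected for brevity:

```python
import math

NU_RATIO = 31.0 / 1.4

def gauss(x, mu, sigma):
    """Gaussian density, modeling the 31 GHz measurement noise."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def likelihood(s31_obs, s14_obs, alphas, weights, sigma31):
    """P(S31_obs | S1.4_obs), marginalized over a binned spectral-index
    PDF; the 1.4 GHz measurement noise is neglected here for brevity."""
    total = sum(weights)
    return sum((w / total) * gauss(s31_obs, s14_obs * NU_RATIO ** a, sigma31)
               for a, w in zip(alphas, weights))

# 17 alpha bins spaced as in the binned results table; uniform weights
# are placeholders for the fitted distribution.
alphas = [-1.6 + 0.15 * i for i in range(17)]
weights = [1.0] * 17
print(likelihood(1.0, 10.0, alphas, weights, 0.3) > 0.0)  # True
```

Because each mixture component is a normalized Gaussian, the likelihood integrates to unity over the observed 31 GHz flux density, which provides a quick sanity check on any implementation.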
To measure the PDF and its uncertainty, we used the publicly
available Markov-Chain Monte-Carlo (MCMC) code COSMOMC \citep{cosmomc}
adapted for use with a generic likelihood function. The MCMC
algorithm draws samples from a multi-dimensional parameter space, in this
case the space of $N_{bin}$ parameters representing the spectral index
distribution, with a specified distribution, in this case the
likelihood of the parameters given the data. This procedure permits
easy evaluation of the uncertainties and covariances in parameters of
interest (the spectral index distribution); it also makes evaluating
the uncertainties and covariances in functions of these parameters
straightforward ({\it e.g.}, \S~\ref{subsec:srccounts}).
We show the marginalized posterior distributions of the parameters of
the spectral index PDF in Figure~\ref{fig:specind}. This is our best
description of $1.4$ to 31 GHz source spectral indices. This figure
also shows the spectral index distribution weighted by $(31/1.4)^{2
\alpha}$, and therefore proportional to the variance in the 31 GHz sky
brightness that residual point sources of a given spectral index
contribute. Table~\ref{tbl:specind} summarizes the results in binned
form, calculated from $100,000$ samples from the MCMC chains. We
found that the mean $31$ to $1.4$ GHz flux density ratio is $0.111 \pm
0.003$, corresponding to a spectral index $-0.71 \pm 0.01$. The
distribution is heavily skewed towards steep spectral indices with a
long tail in the flat direction, resulting in a {\it mean spectral
index} steeper than the value $\alpha=-0.71$ that corresponds to the
mean flux density ratio ($<\alpha>= -0.92^{+0.29}_{-0.30}$, $68.5 \%$ confidence
interval). $9.0 \pm 0.8\%$ of sources have spectral indices flatter
than $\alpha = -0.5$ and $1.2 \pm 0.2\%$ have inverted spectral
indices, $\alpha > 0$.
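The correspondence between the mean flux density ratio and its equivalent spectral index follows from $\alpha = \ln(S_{31}/S_{1.4})/\ln(31/1.4)$; a one-line check in Python, using the numbers quoted above:

```python
import math

def flux_ratio_to_alpha(ratio, nu1=1.4, nu2=31.0):
    """Spectral index alpha defined by S_nu ~ nu^alpha."""
    return math.log(ratio) / math.log(nu2 / nu1)

# Mean 31-to-1.4 GHz flux density ratio from the text:
alpha = flux_ratio_to_alpha(0.111)
print(round(alpha, 2))  # -0.71
```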
To check the important assumption that the spectral index does not
change as a function of $1.4$ GHz flux density we split the sample
into $S_{1.4} > 10 \,{\rm mJy} $ and $S_{1.4} < 10 \,{\rm mJy} $ subsamples and
estimated the spectral index distribution for each of the bright and
faint samples separately, with results shown in
Figure~\ref{fig:specindsplit}. At low flux densities the faint
subsample provided little constraint on the spectrally steep end of
the distribution, reflected by large error bars in that regime. The
consistency of the subsamples supports the assumption that the
spectral index distribution is constant over the range of flux
densities of interest. To further assess the robustness of our
conclusions we re-ran the spectral index distribution chains varying
the assumed noise level of the OVRO and GBT data by $\pm 20\%$ and
excised potentially confused sources. The results, along with the
nominal case, are summarized in Table~\ref{tbl:sitests}. We show the
mean spectral index and its RMS, the fraction of sources with flat or
rising spectra, the mean 31 to $1.4$ GHz flux density ratio, and the
mean-square flux density ratio, which is more indicative of the
variance of the source population.
\begin{figure}
\centering
\includegraphics[width=16cm]{sipdf.eps}
\caption{The $1.4$ to 31 GHz spectral index distribution determined from
GBT, OVRO, and NVSS data. Error bars are the 1-parameter marginalized
$1\sigma$ uncertainties for the individual points parameterizing the
spectral index PDF. Also shown is the PDF weighted by $(31/1.4)^{2\alpha}$,
which is proportional to the contribution that sources of a given
$\alpha$ make to the variance of the 31 GHz sky intensity.}
\label{fig:specind}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=16cm]{brightfaintpdf.eps}
\caption{The spectral index from $1.4$ to 31 GHz determined from
bright ($S_{1.4} > 10 \,{\rm mJy} $) and faint ($S_{1.4} < 10 \,{\rm mJy} $)
subsets of the full dataset. We also show the spectral index
distribution from the full dataset.}
\label{fig:specindsplit}
\end{figure}
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
Test: & Noise$\times 0.8$ & Nominal & Noise $\times 1.2$ \\ \hline
$f_{\alpha \ge 0.0}$ & $1.24 \pm 0.15 \%$ & $1.17 \pm 0.15 \%$ & $0.92 \pm 0.18 \%$ \\
$\langle S_{31}/S_{1.4}\rangle$ & $0.117$ & $0.111$ & $0.101$ \\
$\langle (S_{31}/S_{1.4})^2 \rangle$ & $0.099$ & $0.092$ & $0.084$ \\
$\langle \alpha\rangle$ & $-0.911$ & $-0.917$ & $-0.925$ \\
$\langle\sigma_{\alpha}\rangle$ & $0.336$ & $0.311$ & $0.292$ \\
\hline
\end{tabular}
\caption{Results of tests of the spectral index distribution
estimate. We show the fraction of rising spectrum sources, the mean
flux density ratio ($1.4$ to $31$ GHz), the mean of the square of the
flux density ratio (which is directly relevant to the residual source
variance), and the mean spectral index for a range of perturbed GBT
and OVRO noise levels, and for the case where all measurements
potentially affected by confusion are excised.}
\label{tbl:sitests}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|} \hline
Spectral Index & $f/\%$ \\ \hline
-1.60 & $<1.8$ \\
-1.45 & $3.1 \pm 1.6$ \\
-1.30 & $8.8 \pm 2.8$ \\
-1.15 & $16.9 \pm 3.3$ \\
-1.0 & $27.4 \pm 3.0$ \\
-0.85 & $18.1 \pm 2.4$ \\
-0.7 & $10.6 \pm 1.6$ \\
-0.55 & $5.2 \pm 1.0$ \\
-0.40 & $3.5 \pm 0.6$ \\
-0.25 & $2.8 \pm 0.4$ \\
-0.10 & $1.6 \pm 0.3$ \\
0.05 & $0.7 \pm 0.2$ \\
0.20 & $0.3 \pm 0.1$ \\
0.35 & $0.1 \pm 0.1$ \\
0.50 & $<0.05$ \\
0.65 & $<0.05$ \\
0.80 & $<0.04$ \\
\hline
\end{tabular}
\end{center}
\caption{ The $1.4$ to $31$ GHz spectral index distribution determined
from GBT, OVRO and NVSS data. For bins consistent with zero at
$1\sigma$, $2\sigma$ upper limits computed from the Likelihood
function are listed.}
\label{tbl:specind}
\end{table}
\subsection{31 GHz Source Counts}
\label{subsec:srccounts}
We estimated the 31 GHz counts by drawing flux densities
between $50 \, {\rm \mu Jy}$ and $1 \, {\rm Jy}$ from the \citet{hopkins03}
$1.4$ GHz counts and extrapolating them to 31 GHz with samples
from our $1.4$ to $31$ GHz PDF. Over the range $1 \, {\rm mJy} <
S_{31} < 15 \, {\rm mJy}$ the counts follow a power law $dN/dS \propto
S_{31}^{-1.8}$, with a normalization at 1 mJy of $S^{5/2} \, dN/dS =
1.18 \pm 0.05 \, {\rm Jy^{-1.5} \, sr^{-1}}$. Directly summing the
simulated source populations gives an integrated source count of
\begin{equation}
N(>S_{31}) = (16.7 \pm 1.7) \times (S_{31}/{\rm 1 mJy})^{-0.80 \pm 0.07} \, {\rm deg^{-2}}.
\end{equation}
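For reference, the integrated-count power law above is straightforward to evaluate; a small Python helper, using the normalization and slope of the equation above:

```python
def n_cumulative(s31_mjy, norm=16.7, slope=-0.80):
    """Integrated 31 GHz counts N(>S) in deg^-2, per the power-law fit."""
    return norm * s31_mjy ** slope

print(round(n_cumulative(1.0), 1))  # 16.7 sources per deg^2 above 1 mJy
print(round(n_cumulative(4.0), 1))  # 5.5 sources per deg^2 above 4 mJy
```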
As shown in Figure~\ref{fig:counts} our 31 GHz counts compare
favorably with other measurements \citep{cbideep,cleary05,kovac02}.
We can apply this procedure over an arbitrary range of flux densities
but it is only valid over the range that our spectral index PDF is
valid. Based on the observed $31$ to $1.4$ GHz PDF we estimate that
the potentially distinct source populations at $S_{1.4} < 1 \,{\rm mJy} $ and
$S_{1.4} > 100 \,{\rm mJy} $ could contribute 10\% or more of the sources at
$S_{31} < 1 \,{\rm mJy} $ and $S_{31} > 4 \,{\rm mJy} $, respectively, so we take this to
define the range over which our counts are valid. These potential contributions
to the counts also set the uncertainty in the power law slope. Error bars were
checked using the set of MCMC-sampled spectral-index PDFs, but are
dominated by the assumed 10\% systematic uncertainty. However the
agreement with models and data is good over a much wider range of flux
densities, suggesting that the change of the source spectral indices
is not especially strong. The GBT counts have a similar slope to the
model of \citet{dezotti05}, although the normalization of the GBT
counts is $15\%$ lower in the $1-10$ mJy range. The
\citet{toffolatti98} counts are substantially higher than both in this
range. Below $0.3 \, {\rm mJy}$ the 31 GHz counts in our model show a
weak turn-up due to the sub-mJy population but the precise location
and magnitude of this turn-up depends on the assumed spectra of these
sources (see \S~\ref{subsec:otherpops}).
\begin{figure}[htbp]
\epsfxsize=6.2in
\epsffile{counts30may09.eps}
\caption{A summary of 31 GHz source count measurements and models. CBI results are
from \citet{cbideep}, VSA results from \citet{cleary05}, and DASI
results from \citet{kovac02}. All data are at 31 GHz except for VSA,
which is at 33 GHz. Also shown is the \citet{toffolatti98} 31 GHz
model scaled down by a factor of $0.64$ and the \citet{dezotti05} 33
GHz model. The derivation of the GBT error box is described in the
text; the solid red box shows our best estimate of the counts, valid
over $1 \,{\rm mJy} < S_{30} < 4 \,{\rm mJy} $ and the red dashed line shows the
result over the full range $0.25 \,{\rm mJy} < S_{30} < 100 \,{\rm mJy} $. Other
experiments' errors are taken from the Poisson error in the count
normalization.}
\label{fig:counts}
\end{figure}
\subsection{The Effect of Unidentified Sources on CBI Measurements}
\label{subsec:cbipow}
\subsubsection{Simulations}
\label{subsec:cbisims}
Point sources are the largest astrophysical foreground in the CBI
data, and are especially critical at high-$\ell$. An accompanying
paper \citep{cbi10} presents the power spectrum from 5 years of CBI
observations. We have used the results of GBT and OVRO 40-m
measurements presented here to quantify the impact of discrete sources
on the power spectrum. As discussed in \S~\ref{sec:intro} there are
two distinct classes of sources to be treated: those which are
individually known and identified from imaging surveys (NVSS); and
sources in our fields not detected in any survey, but expected to be
present based on source counts extending below the survey detection
limits in other fields. Very conservatively, all known sources are
projected out of the CBI dataset; the efficacy of this procedure is
quantified in \citet{cbi10}. Our task, and the fundamental aim of this
paper, is to quantify the statistical contribution of the fainter
sources. This population is a {\it low-frequency selected population}
(sources below the NVSS detection threshold), and we must calculate
the variance of its sky brightness at 31 GHz.
To do this we undertook an extensive suite of simulations. We created
realistically constrained realizations of the sub-NVSS populations and
ran these realizations through the full CBI power spectrum pipeline;
the procedure is schematically illustrated in
Figure~\ref{fig:sourceplan}. We first drew $1.4$ GHz populations down
to 0.2 mJy using a power-law fit to the FIRST counts \citep{white97}
between 2 and 100 mJy, representing the dominant contribution of
mJy-level AGN. The contribution of sources below 1 mJy at $1.4$ GHz,
which likely have different spectral indices than the mJy AGN, is
considered separately in \S~\ref{subsec:otherpops}. We then simulated
NVSS observations of these source realizations, adding Gaussian noise
typical of the NVSS thermal noise ($0.6$ mJy). Any source that has an
observed (noisy) flux density greater than our NVSS projection
threshold of $3.4$ mJy was then removed, leaving a realistic
population of $1.4$ GHz sources that would not have appeared in NVSS.
Sources on the power-law $dN/dS$ with $S_{1.4} < 0.2 \,{\rm mJy} $, which we
do not simulate, will contribute $< 2\%$ of the power. We then drew
one $1.4$-$31$ GHz spectral index distribution from the Markov chains
in \S~\ref{sec:maxlike}, with the assumption that the spectral index
distribution is independent of $1.4$ GHz flux density, and assigned
each source a spectral index drawn from the distribution. The signal
from these faint-source realizations was added to CMB+noise
simulations of the CBI dataset.
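The realization step of this pipeline can be sketched in a few lines (Python/NumPy). The power-law slope and the Gaussian stand-in for the spectral-index PDF are illustrative assumptions, not the actual FIRST fit or the MCMC samples:

```python
import numpy as np

rng = np.random.default_rng(1)

# Step 1: draw 1.4 GHz flux densities (mJy) from a power law dN/dS ~ S^-gamma
# between 0.2 mJy and 1 Jy via inverse-CDF sampling. gamma = 1.8 is an
# illustrative choice, not the actual fit to the FIRST counts.
gamma, n_src = 1.8, 100_000
s_lo, s_hi = 0.2, 1000.0
u = rng.uniform(size=n_src)
s14 = (s_lo**(1 - gamma) + u * (s_hi**(1 - gamma) - s_lo**(1 - gamma)))**(1 / (1 - gamma))

# Step 2: simulate NVSS observations (0.6 mJy Gaussian noise) and keep only
# sources that would NOT have been projected out (observed flux < 3.4 mJy).
s14_obs = s14 + rng.normal(0.0, 0.6, n_src)
residual = s14[s14_obs < 3.4]

# Step 3: assign spectral indices (a Gaussian stand-in for the MCMC-sampled
# PDF, using the mean and RMS quoted in the text) and extrapolate to 31 GHz.
alpha = rng.normal(-0.92, 0.31, residual.size)
s31 = residual * (31.0 / 1.4)**alpha
```

The resulting `s31` array would then be injected into CMB+noise simulations; the full pipeline additionally applies the orphan-source map test described below.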
An additional constraint came from the fact that the CBI maps, at a
typical $5\sigma$ level of $S_{31} = 20 \,{\rm mJy} $, show no sources that
are not present in NVSS. This limits the strongly inverted-spectrum
tail of the spectral index distribution. At the NVSS lower flux limit
roughly one in three CBI synthesized beams ($\sim 5'$ FWHM) has an
NVSS source in it, so at the fainter $1.4$ GHz flux densities
characteristic of the residual sources, chance superpositions
will be common. Such a blend would appear to the CBI as a single
orphan source. Therefore the absence of non-NVSS sources in the 31
GHz CBI maps also constrained the abundance of more modestly inverted
spectrum sources {\it for the particular realization of sources
present in the CBI fields}. In this analysis we found that the
latter was the more important constraint.
To fold in this constraint we imaged each simulation and searched it
for ``orphan'' sources using the method described in \citet{cbi12},
and rejected any realization where such a source is found. There were
four $\sim 5\sigma$ features in the CBI maps that were marginally
inconsistent with being associated with an NVSS source. We
followed each of these up with the GBT and did not detect them,
implying that they are probably noise fluctuations. It is still
possible that these fluctuations were chance superpositions of
multiple faint sources in the same CBI beam. The GBT could miss such
a superposition if no individual source falls within the GBT's
much-smaller beam when pointed at the effective emission center. To
account for this possibility, we carried out mock GBT observations on
fluctuations in the simulated maps that were classified as orphan
sources, and did \textit{not} reject any simulation based on map
fluctuations that would not have been seen by the GBT. This gave us a
set of simulated source catalogs that are consistent with the observed
$1.4$-$31$ GHz spectral index distribution and the fact that CBI
detects no orphan 31 GHz sources.
We created 500 simulations, 250 for each of the binnings in
\S~\ref{sec:maxlike}, and subjected them to the map test described
above. In 215 of the 500 (the ``clean'' simulations), no orphan
sources were detected in the simulated CBI maps. In 285 of them (the
``dirty'' simulations) one or more orphan sources were detected. We take
the visibilities from the source simulations and run them through the
full CBI power spectrum pipeline, fitting an $\ell^2$ model to
determine the residual source power spectrum.
The resulting estimate of the CBI residual source contamination is
shown in Figure~\ref{fig:isohist}. We find the mean
signal\footnote{Previous CBI analyses expressed the residual source
correction in the units implied by Eq.~\ref{eq:csrc}, while here we
express them in terms of $C_{\ell}$, which is also independent of
$\ell$ for unresolved sources but is more readily compared to other
experiments. The conversion between $X_{src}$ and $C_{\ell}^{src}$
is given by $C_{\ell}^{src} = X_{src} \times \left[\frac{2 k_B}{c^2}
\left(\frac{k_B T_{cmb}}{h}\right)^2 \, \frac{x^4 e^x}{(e^x-1)^2}
\right]^{-2}$. Note that CMB bandpowers are typically expressed
with the normalization $\ell (\ell+1) C_{\ell}/(2\pi)$. For
reference our result $C_{\ell}^{src} = 43.0 \,{\rm n K} ^2$ corresponds to
$X_{src} = 0.036 \, {\rm Jy^2/Sr}$, and at $\ell = 2500$, $\ell
(\ell+1)C_{\ell}^{src}/(2 \pi) = 43.2 \,{\rm \mu K^2} $.} from the clean
simulations is $C_{\ell}^{src} = 44 \pm 14 \,{\rm n K} ^2$; this is our best
estimate of the residual point source contamination in the CBI
fields. In comparison, the mean signal from all simulations
(neglecting the CBI/GBT orphan-source constraints) is $63^{+24}_{-30}
\,{\rm n K} ^2$. The $95\%$ upper limits for the clean and total distributions
are $80 \,{\rm n K} ^2$ and $204 \,{\rm n K} ^2$, respectively. The maximum power in
any of the 215 clean simulations is $112 \,{\rm n K} ^2$, or almost exactly
half of what is needed to explain the excess power observed by CBI
over intrinsic anisotropy, compared to $519 \,{\rm n K} ^2$ in the total set of
simulations. For the total set of simulations $2.2\%$ of instances
give power equal to or exceeding what is needed to account for the CBI
high-$\ell$ excess. The long tail to high power in the total
distribution comes from {\it one to a few individual bright 30 GHz
sources} which would have been detected in the CBI maps were they
present. The non-Gaussian nature of the distribution is substantial:
the scatter in Gaussian simulations with the same average power as the
clean simulations is a factor of $5.5$ lower.
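The unit conversion in the footnote above can be verified numerically; the sketch below (Python, SI constants, assuming $T_{cmb} = 2.725$ K) reproduces the quoted correspondence between $X_{src} = 0.036 \, {\rm Jy^2/Sr}$ and $C_{\ell}^{src} \approx 43 \,{\rm nK}^2$ to rounding:

```python
import math

h, k_B, c = 6.62607015e-34, 1.380649e-23, 2.99792458e8  # SI
T_cmb = 2.725        # K (assumed)
nu = 31e9            # Hz
JY = 1e-26           # W m^-2 Hz^-1 per Jy

x = h * nu / (k_B * T_cmb)
# The bracketed dB/dT factor from the footnote, converted to Jy sr^-1 K^-1:
f = (2 * k_B / c**2) * (k_B * T_cmb / h)**2 \
    * x**4 * math.exp(x) / (math.exp(x) - 1.0)**2 / JY

X_src = 0.036              # Jy^2 / sr
C_ell = X_src / f**2       # K^2
print(round(C_ell * 1e18, 1))  # ~43 nK^2, matching the footnote to rounding
```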
Our observed level is in good agreement with, though generally lower
than, past measurements. \citet{cbideep} found $0.08 \pm 0.04 \, {\rm Jy^2/Sr}
\, (C_{\ell}^{src} = 96 \pm 48 \,{\rm n K} ^2)$, the value used in previous CBI
analyses, based on Owens Valley 40-m measurements. The data in
\citet{cleary05} predict a mean level of $0.03 \, \, {\rm Jy^2/Sr} \,
(C_{\ell}^{src} = 36 \,{\rm n K} ^2)$, with no uncertainty stated. These
results are consistent with what we report here.
\citet{sza} recently reported a determination of residual source power
$\ell (\ell +1) C_{\ell}^{src}/(2\pi) = 378 \pm 87 \,{\rm \mu K^2} $ following a
procedure similar to ours with data from the Sunyaev-Zel'dovich Array
(SZA). These results are also at $30$ GHz but on smaller angular
scales ($\ell \sim 4500$) where the discrete source power will be
higher. Under the approximation that the flat-bandpower CMB window
functions adequately represent the effect of sources on the power
spectrum we can compare these results to the results of our
simulations above. Our best estimate of the residual (sub-$3.4 \,{\rm mJy} $
in NVSS) source contamination at 31 GHz for {\it random} fields on the
sky is the total distribution neglecting the CBI map constraints,
resulting in a predicted $\ell (\ell +1) C_{\ell}^{src}/2\pi =
242^{+77}_{-97} \,{\rm \mu K^2} $ for the SZA measurement (including the extra
contribution estimated in \S~\ref{subsec:otherpops}). Taking the SZA
error bar at face value these measurements are consistent at
$1.2\sigma$. The SZA error bar is based on Gaussian statistics so will
be a significant underestimate of the true uncertainty in the residual
source power spectrum. The underestimate is a factor of $5.5$ for the
CBI fields but will be different for the SZA since the sizes of the
areas covered are very different. Conversely, were the entire CBI
excess to be explained by discrete sources the SZA should see $\ell
(\ell +1) C_{\ell}^{src}/(2\pi) = 875 \pm 221 \,{\rm \mu K^2} $. This is only
marginally ($2.1 \sigma$) consistent with the lower power level seen
by the SZA.
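The flat-bandpower scalings used in this comparison amount to multiplying a flat $C_{\ell}$ by $\ell(\ell+1)/2\pi$; for instance, the SZA-scale prediction quoted above follows from the total-distribution mean plus the sub-mJy estimate of \S~\ref{subsec:otherpops} (a Python sketch):

```python
import math

def bandpower(c_ell_nk2, ell):
    """Convert a flat C_ell (in nK^2) to ell(ell+1)C_ell/(2 pi) in uK^2."""
    return ell * (ell + 1) * c_ell_nk2 * 1e-6 / (2.0 * math.pi)

# Total-distribution mean (63 nK^2) plus the sub-mJy estimate (12 nK^2),
# evaluated at the SZA's characteristic multipole:
print(round(bandpower(63.0 + 12.0, 4500)))  # 242 uK^2, as quoted
```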
\subsubsection{The Contribution of sub-millijansky Galaxies}
\label{subsec:otherpops}
\begin{figure}[htbp]
\epsfxsize=6in
\epsffile{dnds.eps}
\caption{$1.4$ GHz source counts. The solid line
is the model of \citet{hopkins03} and triangles are their
measurements in the Phoenix Deep Field; x's are the source
counts from the COSMOS field \citep{bondiCosmos08}; and
squares are source counts from FIRST \citep{white97}. The dashed
line is a power-law fit to the FIRST source counts at flux densities
fainter than $100 \, {\rm mJy}$ and the dot-dash line is the excess of
the full source counts over the power law form. The vertical
dash-triple-dot line is the CBI projection threshold: the sources of
interest are those to the left of this line. The power law behavior
persists up to $100$ mJy.}
\label{fig:dnds}
\end{figure}
Our simulations considered only the power-law-distributed source
population which is seen at mJy levels and higher in low frequency
surveys. We must also consider the contribution of fainter sources
likely belonging to a different population and having different
spectral properties. At $1.4$ GHz flux densities under $\sim 1 \,
{\rm mJy}$ the source counts turn up due to the emergence of the
high-$z$ starbursting galaxies \citep[e.g.][]{windhorst85} --- see
Figure~\ref{fig:dnds}. We can estimate the impact of this population
by first considering the source correction at low frequency. Explicitly
integrating the source count of \cite{hopkins03} from $50 \, {\rm \mu
Jy}$ to $3.4 \, {\rm mJy}$ we find a total $1.4 \, {\rm GHz}$
residual source contribution of $C_{\ell}^{src} = 1038 \,{\rm n K} ^2$. We
assume that the turn-up is due to a distinct population. Integrating
the power law over this range gives $869 \,{\rm n K} ^2$, thus, the
contribution of the sources responsible for the turn-up in the counts
below a millijansky is $170 \,{\rm n K} ^2$ at $1.4 \, {\rm GHz}$. This implies
that if every single one of these sources had a flat spectrum between
$1.4$ and $31$ GHz they would account for less than two-thirds of the
small-scale power in excess of the CMB observed by CBI of $270
\,{\rm n K} ^2$. In reality, observations of $\mu {\rm Jy}$ sources
\citep{richards00} show typical spectral indices between $1.4$ and $8$
GHz of $-0.8$, consistent with the observed dominance of synchrotron
in nearby starbursting galaxies \citep{yuncarilli,condonreview}.
Assuming the distribution of spectral indices which we determined for
mJy level radio galaxies we obtain a 31 GHz value $C_{\ell}^{src} = 12
\,{\rm n K} ^2$, a small correction to the mJy-AGN contribution of $44
\,{\rm n K} ^2$. We include this contribution in the power spectrum analysis
for a final correction of $56 \,{\rm n K} ^2$. \citet{dezotti05} also find that
sub-mJy galaxies make a minor contribution in comparison to mJy level
AGN.
It is possible that the sources responsible for the turn-up in the low
frequency counts could have a high incidence of inverted-spectrum
sources extending to 31 GHz thus contributing more to the high-$\ell$
source correction. Using simulations similar to those in
\S~\ref{subsec:srccounts} with modified spectral index distributions
we estimate that were these sources to have moderately inverted
spectra ($\alpha \sim 0.2$) they would need to constitute 40\% of the
sub-mJy population in order to fully explain the CBI excess. Were they
to have strongly inverted spectra ($\alpha \sim 0.8$), 2\% of the
population is required. In contrast, the
most steeply inverted spectrum source in the GBT+OVRO surveys
had $\alpha = 0.49$, and $<0.1\%$
of sources had $\alpha > 0.3$. Both cases would give rise to
substantial enhancements (factors of $1.5$ and $3$ for the strongly
and moderately inverted cases, respectively) of the 31
GHz source counts over those reported in \S~\ref{subsec:srccounts} in
the 1 to 10 mJy range.
Preliminary analysis of deeper 31 GHz GBT \citep{masoninprep} and ATCA
\citep{taylorinprep} observations targeting a small sample of $\sim
40$ sources with $S_{1.4} \sim 1 \, {\rm mJy}$ indicates that the
mean $1.4-31$ GHz spectral index of these sources is comparable to
that of the AGN population and that there is not a substantial
population with inverted spectra continuing to 31 GHz.
\begin{figure}
\centering
\includegraphics[width=16cm]{cbiGbtSims.eps}
\caption{Schematic of the simulation pipeline we use to estimate the
distribution of residual source power in the CBI power spectrum.}
\label{fig:sourceplan}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=16cm]{isohistMay09.eps}
\caption{The CBI residual source contamination, determined from
low-frequency source counts plus GBT and OVRO 31 GHz information, via
simulations described in the text. The heavy black line is the
full distribution and is our best prediction for random
$\sim 140 \, {\rm deg^2}$ of sky. The light green distribution is for
realizations which have no ``orphan'' sources which would have been
detected in the CBI maps, and represents our best prediction for the
residual source contribution to the CBI data. The residual source
correction used in CBI analyses prior to this work is shown as a
dotted blue line; the dash-dot purple line is that calculated by
\citet{cleary05} from VSA source counts at 31 GHz; and the red
dash-triple-dot line is the level of source contamination needed to
fully account for the power that CBI observes in excess of intrinsic
CMB anisotropy \citep{cbi10}. Note that the units of the x-axis are $C_{\ell}$
rather than $\ell (\ell +1) C_{\ell}/2\pi$.}
\label{fig:isohist}
\end{figure}
\subsubsection{Source Clustering}
\label{subsec:clustering}
In addition to the Poisson or shot noise contribution of discrete
sources to the power spectrum there will also be a contribution due to
their spatial correlations, given by the second term of
\citep{scott99,oh03}
\begin{eqnarray}
C_{\ell}^{src} & \propto &
\int_{0}^{s_{max}} \, ds \, s^2 \frac{dN}{ds} +
\omega_{\ell} \, \left(\int_{0}^{s_{max}} \, ds \, s \frac{dN}{ds}\right)^2 \\
& \propto & \langle s^2\rangle + \langle s \rangle^2 \, \omega_{\ell}
\end{eqnarray}
where $\omega_{\ell}$ are the coefficients of the Legendre polynomial
expansion of the discrete source angular correlation function (ACF)
$\omega(\theta)$. The first term on the right hand side of this
equation is simply the shot noise contribution which has already been
considered in detail. Using the \citet{blakewall} measurement of the
NVSS ACF, and the value $\langle S_{31} \rangle^2/\langle S_{31}^2
\rangle = 0.04$ determined from the 31 GHz flux density PDFs
determined in \S~\ref{sec:maxlike} we find that clustering is a
negligible contribution in comparison to the Poisson term at these
faint flux densities. The sub-mJy sources, which have a substantial
contribution from starbursting galaxies, could be more strongly
clustered than mJy AGN. Applying the $3\sigma$ upper limit of
\citet{webb03} on the clustering amplitude of submillimeter galaxies
to the sources lying above the AGN differential counts power law does
not change the conclusion from the calculation.
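Numerically, the suppression of the clustering term relative to the shot noise is just the product $\left(\langle S_{31}\rangle^2/\langle S_{31}^2\rangle\right)\omega_{\ell}$. In the sketch below the value of $\omega_{\ell}$ is a hypothetical stand-in, for illustration only; the flux-moment ratio is the value quoted above:

```python
# Ratio of the clustering term to the Poisson term in C_ell^src:
#   ratio = (<S31>^2 / <S31^2>) * omega_ell
flux_ratio = 0.04   # <S31>^2 / <S31^2>, from the text
omega_ell = 1e-2    # hypothetical ACF coefficient (illustrative only)
ratio = flux_ratio * omega_ell
print(f"{ratio:.0e}")  # 4e-04: clustering is strongly suppressed here
```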
\subsubsection{Source Variability}
\label{subsec:variability}
Some sources exhibit significant time variability, with variability
measures increasing to timescales of up to $\sim 2 $ years
\citep{hughesAllerAller92}. The effect of this will be to broaden our
measurement of the apparent $1.4$ to $31$ GHz flux density ratio
$r(t_1-t_2)=S_{31}(t_2)/S_{1.4}(t_1)$ over what would be observed in a
commensal multi-frequency survey, $r_0 = S_{31}(t_1)/S_{1.4}(t_1)$.
Provided the 31 GHz measurements are separated from the $1.4$ GHz
measurements by a period of time greater than the longest
characteristic timescale for variations, our measured distribution of
$r$ will be a statistically fair description of
$S_{31}(t_3)/S_{1.4}(t_1)$ for other 31 GHz measurements occurring at some time
$t_3$ also separated from $t_1$ by greater than the longest
characteristic timescale for variation. Considering that the $1.4$ GHz
NVSS observations were collected from September 1993 to October 1996, the
CBI observations from November 1999 to April 2005, and the GBT
observations from February to May of 2006, this will largely be the
case. There could be a small number of variable sources with
variability measures increasing beyond time spans of 10 years but
compared to the sample as a whole these are rare and will have little
impact on our results.
We note that while $r$ is a fair sample to describe the CBI residual
source power, extrapolations between frequencies other than $1.4$ and
$30$ GHz will be biased. Even assuming that all sources
instantaneously have true power-law spectra, sources whose apparent
spectral indices fluctuate flatwards due to variability make very
different marginal contributions to the calculated sky variance at
another frequency than those whose apparent spectral indices fluctuate
steepwards.
\section{Conclusions}
\label{sec:summary}
By measuring the 31 GHz flux densities of a large sample of $1.4$ GHz
selected sources we have for the first time characterized the 31 GHz
properties of a large sample of mJy-level radio galaxies. Our sample
was large enough to place significant limits upon the frequency of
rare inverted spectrum sources which can contribute significantly to
the 31 GHz counts, and even more to the 31 GHz sky variance. We find
the mean $31$ to $1.4$ GHz flux density ratio is $0.111 \pm 0.003$,
corresponding to a spectral index $-0.71 \pm 0.01$, and the mean
spectral index is $<\alpha>= -0.92^{+0.29}_{-0.30}$. The fraction of
sources with $\alpha > -0.5$ is $9.0 \pm 0.8\%$ and the fraction with
inverted spectral indices $\alpha > 0$ is $1.2 \pm 0.2\%$. This has
allowed us to greatly improve the accuracy with which we calculate the
statistical point source correction for the Cosmic Background Imager
experiment. We find that residual mJy-level AGN contribute a power of
$C_{\ell}^{src} = 44 \pm 14 \,{\rm n K} ^2$. Including an additional estimated
$12 \,{\rm n K} ^2$ contribution from faint ``sub-mJy'' sources, residual
sources account for $21 \pm 7 \%$ of the amplitude of the power seen
in excess of intrinsic anisotropy by CBI at $\ell > 2000$. We place a
95\% upper limit on residual source contamination of $C_{\ell}^{src} =
92 \,{\rm n K} ^2$ or $34\%$ of the total excess power. By way of comparison, a
total residual point source correction $270 \pm 60 \,{\rm n K} ^2$ is needed to
fully account for the observed CBI excess power. All of these results
are consistent (at $1.2\sigma$) with the recent SZA result of
\citet{sza}; a detailed comparison is given at the end of
\S~\ref{subsec:cbisims}. Note that we express our results in terms of
$C_{\ell}$, which is appropriate for unresolved sources, rather than
$\ell (\ell+1) C_{\ell}/2\pi$.
A population of faint inverted-spectrum sources not present at
milli-Jansky levels could compromise these conclusions, but the
requirements are substantial: 20\% of $S_{1.4} < 1 \,{\rm mJy} $ sources would
need to have $\alpha_{1.4-30} = +0.2$ for instance, a factor of twenty
more than is observed at mJy levels in this survey; or 2\% of $S_{1.4}
< 1 \,{\rm mJy} $ sources would need to have $\alpha_{1.4-30} = +0.8$,
resulting in $\sim 8$ of these sources per square degree above a
milli-Jansky at 30 GHz. In this survey of $3,165$ sources only a
single source was as steeply inverted as $\alpha = 0.5$. Both
scenarios would imply enhancements of the 31 GHz source counts over
what we have measured by at least 50\% at 1 mJy.
It is worth noting several points in connection with these
conclusions. First, it is essential to appreciate that the residual
sources are fundamentally selected by $1.4$ GHz flux density. Second,
for an unbiased calculation of the 31 GHz contribution of these
sources a complete $1.4$ to $31$ GHz (effective) spectral index
distribution must be used. Populations {\it selected} at a higher
frequency will have preferentially flatter spectral indices; and
spectral indices measured between $1.4$ and a higher frequency less
than $31$ GHz will not in general be representative, and in
particular, will not reflect the steepening of synchrotron spectral
indices to higher frequencies. Both of these effects, due to the large
lever arm in frequency involved, have a significant impact. Third, it
is essential to avoid selection biases in estimating the spectral
index distribution. In the absence of better information previous CBI
results used an incomplete sample of $1.4$ to $31$ GHz spectral
indices (biased flat) resulting in an overestimate of the point source
contribution. The Bayesian analysis of \S~\ref{sec:maxlike}
eliminates any bias due to censoring (non-detections) at 31 GHz.
Fourth, as discussed in \S~\ref{subsec:variability}, spectral index
extrapolations from other frequencies are biased by source
variability. Finally it is important to use an accurate form of the
well-known low-frequency counts rather than simple approximations to
them. All of these considerations are independent of the 31 GHz
counts {\it per se}, which are only indirectly related to the
conclusions reached: to calculate the CBI residual source correction from
31 GHz counts requires the same additional information (the
distribution of $S_{31}/S_{1.4}$) as calculating the statistical
correction from the $1.4$ GHz counts.
We have also computed 31 GHz counts based on $1.4$ GHz source counts
and our distribution of spectral indices, finding $N(>S_{31}) = (16.7
\pm 1.7) \times (S_{31}/{\rm 1 \, mJy})^{-0.80 \pm 0.07} \, {\rm
deg^{-2}}$ for $1 \,{\rm mJy} < S_{31} < 4 \,{\rm mJy} $, in good agreement with
observed 31 GHz source counts at higher flux densities, as well as the
model of \citet{dezotti05}.

The National Radio Astronomy Observatory is a facility of the National
Science Foundation operated under cooperative agreement by Associated
Universities, Inc. We thank the GBT and OVRO science and engineering
staff for outstanding contributions to both survey projects and
acknowledge support from NSF grants AST-9413935, AST-9802989,
AST-0098734, and AST-0206416. We thank Gianfranco De Zotti for providing
us with his most recent 30 GHz source count model; Dan Marrone for
providing the SZA window functions; Jim Condon, Bill Cotton, Mike
Jones, and Angela Taylor for helpful discussions; and Rachel Rosen for
carefully proofreading the manuscript. We thank Liz Waldram and Guy
Pooley for providing unpublished source size information from the 9C
survey. Finally we thank an anonymous referee for thorough comments
which helped to improve the paper.
\section{Introduction}
\subsection{Oscillatory integrals}
We study the limiting behavior of the zeros of the polynomials
that are orthogonal with respect to an oscillatory weight function
of exponential type along a path $\Gamma$ in the complex plane,
\begin{equation}\label{E:orthogonality}
\int_{\Gamma} \pi_n(z)z^k e^{i z^r}d{z}=0, \qquad k=0,1,\ldots,n-1,
\end{equation}
such that the integral is well defined. The parameter $r \geq 2$ is an
integer, and we will focus mainly on the case $r=3$.
Our motivation originates in a Fourier-type integral on a finite interval of the real axis of the general form
\begin{equation}\label{E:I_general}
I[f] = \int_a^b f(x) e^{i \omega g(x)} d{x},
\end{equation}
where $\omega > 0$ is a frequency parameter, $f$ is called the \emph{amplitude} and $g$ is the \emph{phase} or \emph{oscillator}. Integrals of this type appear in many scientific disciplines involving wave phenomena, such as acoustics, electromagnetics and optics (see for example \cite{huybrechs:2009:hoq} and references therein). For $\omega\gg 1$, integrals of this kind are a recurring topic in asymptotic analysis, and we recall in particular the classical method of steepest descent, which can be applied when $f$ and $g$ are analytic in a neighbourhood of $[a,b]$, see for instance \cite{wong:2001:asymptotic}.
We will concentrate on the case where the oscillator $g$ has a single stationary point $\xi$ of order $r-1$ inside the interval $[a,b]$, with $r\geq 2$, i.e., $g^{(j)}(\xi)=0$, $j=1,\ldots,r-1$ but $g^{(r)}(\xi) \neq 0$. Without loss of generality, we take this point $\xi$ to be the origin and the canonical example is the following:
\begin{equation}\label{E:I_canonical}
I[f] := \int_a^b f(x) e^{i \omega x^{r}} d{x},
\end{equation}
with $a < 0$ and $b > 0$. Assuming a single stationary point, the general form~\eqref{E:I_general} can always be brought into this form by a change of variables.
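As a quick illustration of why \eqref{E:I_canonical} is numerically challenging for large $\omega$, the following small Python sketch (added here for illustration; all function names are ad hoc) compares a fixed-size Gauss--Legendre rule against a brute-force reference on $\int_{-1}^1 e^{i\omega x^3}\,dx$. The error of the fixed rule deteriorates badly as $\omega$ grows.

```python
import numpy as np

def I_ref(omega, r=3, a=-1.0, b=1.0, N=200001):
    # brute-force reference value on a very fine trapezoid grid
    x = np.linspace(a, b, N)
    f = np.exp(1j * omega * x**r)
    h = x[1] - x[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

def I_gauss(omega, n, r=3, a=-1.0, b=1.0):
    # n-point Gauss-Legendre applied blindly to the oscillatory integrand
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (b - a) * x + 0.5 * (a + b)
    return 0.5 * (b - a) * np.sum(w * np.exp(1j * omega * x**r))

for omega in (1.0, 200.0):
    print(omega, abs(I_gauss(omega, 20) - I_ref(omega)))
```

This motivates the steepest-descent reformulation described next, whose cost is essentially independent of $\omega$.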
\subsection{Numerical evaluation}
We assume $f$ analytic in a complex neighbourhood of the interval
$[a,b]$. As shown in \cite{Huybrechs:06:osc1}, one possible
numerical strategy for the evaluation of~\eqref{E:I_canonical} is
to consider paths of steepest descent stemming from the endpoints
and from the stationary point. In this way, we can decompose the
original integral as follows:
$$
\int_a^b f(x)e^{i\omega x^r}d x=
\left(\int_{\Gamma_a}+\int_{\Gamma^{-}_0}+\int_{\Gamma^{+}_0}+\int_{\Gamma_b}\right)f(x)e^{i\omega x^r}d x,
$$
where the paths are depicted in Fig.~\ref{F:saddlepoint_contours}.
Making an appropriate change of variables, the line integrals
along these paths have the form
\begin{equation*}
\int_0^\infty u(z) e^{-\omega z^\mu} d{z},
\end{equation*}
with $\mu=1$ for the endpoints and $\mu=r$ for a stationary
point of order $r-1$. Each of these integrals can be efficiently approximated
using Gaussian quadrature, because the optimal
polynomial order of Gaussian quadrature translates into optimal
asymptotic order in this setting: for $n$ quadrature points the
error behaves like $\mathcal{O}(\omega^{-\frac{2n+1}{\mu}})$ as
$\omega \to \infty$, see \cite{DH:2008:CG}. This order is
approximately twice that of a classical asymptotic expansion
truncated after $n$ terms.
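The Gaussian rules for the weight $e^{-z^\mu}$ on $[0,\infty)$ can be generated from the moments $m_k=\Gamma((k+1)/\mu)/\mu$. The following Python sketch (not part of the quoted references; a standard moment-based Golub--Welsch construction, suitable only for small $n$ because Hankel moment matrices are ill-conditioned) illustrates this; for $\mu=1$ it reproduces Gauss--Laguerre.

```python
import math
import numpy as np

def gauss_halfline(n, mu):
    # Gauss rule for the weight exp(-z**mu) on [0, infinity), built from
    # the moments m_k = Gamma((k+1)/mu)/mu via Cholesky of the Hankel
    # moment matrix (Golub-Welsch).  Ill-conditioned for large n; this
    # is only meant as a small-n sketch.
    m = np.array([math.gamma((k + 1) / mu) / mu for k in range(2 * n + 1)])
    M = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(M).T            # M = R^T R, R upper triangular
    alpha = np.empty(n)
    beta = np.empty(max(n - 1, 0))
    alpha[0] = R[0, 1] / R[0, 0]
    for k in range(1, n):
        alpha[k] = R[k, k + 1] / R[k, k] - R[k - 1, k] / R[k - 1, k - 1]
        beta[k - 1] = R[k, k] / R[k - 1, k - 1]
    # Jacobi matrix: eigenvalues are the nodes, weights from eigenvectors
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, V = np.linalg.eigh(J)
    weights = m[0] * V[0, :] ** 2
    return nodes, weights

# mu = 1 reproduces Gauss-Laguerre; mu = r gives the stationary-point rule
print(gauss_halfline(4, 1.0)[0])
```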
\begin{figure}[t]
\begin{center}
\includegraphics{figure1.eps}
\caption{Approximate contours of integration in the complex plane
corresponding to even $r$ (left) and odd $r$
(right).}\label{F:saddlepoint_contours}
\end{center}
\end{figure}
There are two paths of steepest descent originating from the
stationary point, called $\Gamma^{-}_0$ and $\Gamma^{+}_0$ in
Fig.~\ref{F:saddlepoint_contours}. Both paths are straight lines
and their structure depends essentially on the parity of $r$: for
odd $r$, these lines form an angle equal to $\pi-\pi/r$, whereas
for even $r$ they form one straight line in the complex plane.
In order to keep the total number of function evaluations to a
minimum, it is desirable to evaluate both line integrals using only one
quadrature rule of Gaussian type~\cite{DH:2008:CG}. This amounts
to constructing a quadrature rule for the functional
\begin{equation} \label{E:Mf}
M[f] = \int_{\Gamma} f(z) e^{i z^r} d{z},
\end{equation}
where $\Gamma=\Gamma^{-}_0\cup\Gamma^{+}_0$ is the concatenation of the two steepest descent paths through the origin.
In the case $r=2$, this leads to classical Gauss-Hermite
quadrature, which involves the weight function $e^{-x^2}$ on the
real line $(-\infty,\infty)$. Higher even values of $r$ lead to
straightforward generalizations, and in all cases the quadrature points lie on the paths of steepest descent.
For odd $r$, the functional \eqref{E:Mf} is indefinite and the
existence of orthogonal polynomials is not guaranteed a priori.
Nevertheless, the orthogonal polynomials and their zeros can be
computed numerically. However, one finds that the zeros, which are
the complex quadrature points for the integral \eqref{E:Mf}, no longer
lie on the paths of steepest descent. Instead, they seem to lie
on a curve in a sector of the complex plane bounded by the paths
of steepest descent. Their location for $r=3$ is shown in
Fig.~\ref{F:indefinite_r3} for several values of $n$. Similar
phenomena are observed for larger odd values of $r$.
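The computation behind Fig.~\ref{F:indefinite_r3} can be sketched as follows (an illustrative Python fragment added here, not taken from the paper). On the two steepest descent rays $\arg z=\pi/6$ and $\arg z=5\pi/6$, the moments of \eqref{E:Mf} for $r=3$ reduce to explicit Gamma-function values; the monic orthogonal polynomial is then obtained from a Hankel linear system, with $\Gamma$ traversed from the ray at $\arg z=5\pi/6$ to the ray at $\arg z=\pi/6$ (the overall orientation does not affect the zeros).

```python
import math
import numpy as np

def moment(k):
    # m_k = int_Gamma z^k e^{i z^3} dz, Gamma = the two steepest descent
    # rays arg z = 5*pi/6 (incoming) and arg z = pi/6 (outgoing); on each
    # ray the integral reduces to Gamma((k+1)/3)/3 times a phase
    g = math.gamma((k + 1) / 3) / 3
    return (np.exp(1j * (k + 1) * np.pi / 6)
            - np.exp(5j * (k + 1) * np.pi / 6)) * g

def monic_op_zeros(n):
    # zeros of the monic pi_n orthogonal w.r.t. e^{i z^3}: solve the
    # Hankel system for the lower-order coefficients, then take roots.
    # Hankel systems are badly conditioned, so keep n moderate.
    m = np.array([moment(k) for k in range(2 * n)])
    H = np.array([[m[j + k] for j in range(n)] for k in range(n)])
    c = np.linalg.solve(H, -m[n:2 * n])    # pi_n(z) = z^n + sum_j c_j z^j
    return np.roots(np.concatenate(([1.0], c[::-1])))

print(np.sort_complex(monic_op_zeros(10)))   # compare with Fig. 2, n = 10
```

The computed zeros lie in the upper half plane, symmetric with respect to the imaginary axis, in agreement with the figure.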
\begin{figure}[t]
\centerline{\includegraphics[height=65mm,width=160mm]{figure2.eps}}
\caption{Location of the quadrature nodes for $r=3$ on $[-1,1]$,
corresponding to $n=10$ (left), $n=20$ (center) and $n=40$
(right). In dashed line, the paths of steepest descent from the
origin.}\label{F:indefinite_r3}
\end{figure}
\subsection{Orthogonality in the complex plane}
The problem of Gaussian quadrature leads to the study of
polynomials $\pi_n(z)$ that are orthogonal in the sense of
\eqref{E:orthogonality}, where $r$ is a positive integer ($r\geq
3$ for a non-classical case) and $\Gamma$ is the combination of two
paths of steepest descent of the exponential function $e^{i
z^r}$ from the origin, so
$$
\arg z=\frac{\pi}{2r}+\frac{2m\pi}{r}, \qquad m=0,1,\ldots,r-1.
$$
The straight lines $\Gamma^{-}_0$ and $\Gamma^{+}_0$ in
Fig.~\ref{F:indefinite_r3} correspond respectively to $m=0$ and
$m=\lfloor \frac{r}{2} \rfloor$. In the case $r=3$, these lines
form angles of $\pi/6$ and $5\pi/6$ with respect to the positive
real axis.
Putting $\lambda_n = (n/r)^{1/r}$ and
\begin{equation} \label{E:Pn}
P_n(z) = \lambda_n^{-n} \pi_n(\lambda_n z)
\end{equation}
we note that~\eqref{E:orthogonality} can be written in the form
\begin{equation}
\label{E:varying_weight}
\int_{\Gamma} P_n(z) z^k e^{-nV(z)}d z=0, \qquad k=0, \ldots, n-1,
\end{equation}
where
\begin{equation}\label{E:V}
V(z)=-i z^r/r.
\end{equation}
Note that $P_n$ in \eqref{E:Pn} is again a monic polynomial, and the zeros of
$P_n(z)$ and $\pi_n(z)$ coincide up to rescaling by the
parameter $\lambda_n$.
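For the reader's convenience, here is the one-line substitution behind this reformulation: putting $z=\lambda_n u$ with $\lambda_n^r=n/r$, and using that $\Gamma$ is a union of rays through the origin (hence invariant under the rescaling),

```latex
\int_{\Gamma} \pi_n(z)\, z^k e^{i z^r}\, dz
  = \lambda_n^{\,n+k+1} \int_{\Gamma} P_n(u)\, u^k e^{\,i\frac{n}{r} u^r}\, du
  = \lambda_n^{\,n+k+1} \int_{\Gamma} P_n(u)\, u^k e^{-nV(u)}\, du = 0,
```

with $V(u)=-iu^r/r$ as in \eqref{E:V}, so the two orthogonality conditions are equivalent.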
The orthogonality \eqref{E:varying_weight} is an example of
non-Hermitian orthogonality with respect to a varying weight on a
curve in the complex plane. A basic observation is that the path
$\Gamma$ of the integral in \eqref{E:varying_weight} can be
deformed into any other curve that is homotopic to it in the
finite plane, and that connects the same two sectors at infinity. For
any such deformed $\Gamma$ we still have the orthogonality
condition \eqref{E:varying_weight}.
In order to find where the zeros of $P_n(z)$ lie for large $n$, we
have to select the `right' contour. Stahl \cite{Stahl:1986} and
Gonchar--Rakhmanov \cite[Sec.~3]{GR:1989:eq} studied and solved this problem, and from their works it is known that the appropriate contour should have a
symmetry property (the so-called $S$-property) in the sense of logarithmic
potential theory with external fields.
We recall this concept in the next subsection.
In the case \eqref{E:V} with $r=3$, we can identify the
curve with the $S$-property explicitly as a critical trajectory
of a quadratic differential. Other cases where the potential problem is explicitly solved include
\cite{GR:1989:eq}, \cite{MF:1997} and \cite{Apt:2002} in connection
with best rational approximation of $e^{-x}$ on $[0,\infty)$,
\cite{Baik:2001:random} in connection with a last passage percolation problem, and \cite{KML:2001:Laguerre}, \cite{KML:2004:Laguerre}, \cite{KMF:2004:Jacobi}, \cite{MFMGO:2001:Laguerre}, \cite{MFO:2005:Jacobi} in connection with classical orthogonal polynomials (Laguerre and Jacobi) with non-standard parameters.
Varying orthogonality on complex curves is treated in
detail in the more recent accounts \cite{Apt:2007:complex}
and \cite{BertolaMo:2009}, which contain a Riemann-Hilbert
steepest descent analysis in a fairly general setting, assuming
the knowledge of the curve with the $S$-property. See also
\cite{Bertola:Boutroux} for an approach based on algebraic
geometry and Boutroux curves and \cite{Bertola:Lp} for
extensions to $L_p$ optimal polynomials.
\subsection{The $S$ property}
Let $V$ be a polynomial.
We consider a smooth curve $\Gamma\subset\mathbb{C}$, such that the integral
in \eqref{E:varying_weight} is well-defined, and we want to minimize the
weighted energy:
\begin{equation} \label{E:energyonGamma}
I_V(\nu)
= \iint \log\frac{1}{|z-s|}d\nu(z)d\nu(s)+ \Re \int
V(s)d\nu(s),
\end{equation}
among all Borel probability measures $\nu$ supported on $\Gamma$.
Following the general theory of logarithmic potential theory with
external fields, see \cite{ST:1997:Pot}, this problem
has a unique solution, which is called the equilibrium measure on
$\Gamma$ in the presence of the external field $\Re V$. We denote this equilibrium measure by $\mu$.
Let
\begin{equation} \label{E:Umupotential}
U^{\mu}(z) = \int \log \frac{1}{|z-s|} d\mu(s)
\end{equation}
be the logarithmic potential of $\mu$. It satisfies
\begin{equation} \label{E:Uequilibrium}
\begin{aligned}
2 U^{\mu}(z) + \Re V(z) & = \ell, \qquad z\in \supp \mu, \\
2 U^{\mu}(z) + \Re V(z) & \geq \ell, \qquad z \in \Gamma \setminus \supp \mu,
\end{aligned}
\end{equation}
for some constant $\ell$, see \cite{ST:1997:Pot}. If $\Gamma$ is an analytic contour, then $\supp \mu$ will consist
of a finite union of analytic arcs. Now we can define the $S$-property.
\begin{definition}
The analytic contour $\Gamma$ has the $S$-property in the external field
$\Re V$ if for every $z$ in the interior of the analytic arcs that
constitute $\supp \mu$, we have
\begin{align} \label{E:Sproperty}
\frac{\partial}{\partial n_+} \left[ 2 U^{\mu}(z) + \Re V(z) \right] =
\frac{\partial}{\partial n_-} \left[ 2 U^{\mu}(z) + \Re V(z) \right].
\end{align}
Here $\frac{\partial}{\partial n_{\pm}}$ denote the two normal
derivatives taken on either side of $\Gamma$.
\end{definition}
The result of Gonchar--Rakhmanov then reads (for the special
case of a polynomial $V$):
\begin{theorem} \textbf{Gonchar--Rakhmanov \cite[Sec.~3]{GR:1989:eq}} \label{th:Sproperty}
If $\Gamma$ is a contour with the $S$-property \eqref{E:Sproperty}
in the external field $\Re V$, then the equilibrium measure $\mu$ on $\Gamma$ in the
external field $\Re V$ is the weak limit of the
normalized zero counting measures of the polynomials $P_n$
defined by the orthogonality \eqref{E:varying_weight}.
\end{theorem}
\subsection{Outline of the paper}
In the next section we present the main results of this paper,
corresponding to \eqref{E:V} with $r=3$. These can be summarized in the following points:
\begin{itemize}
\item We present a finite curve $\gamma\subset\mathbb{C}$, which
is a critical trajectory of a certain quadratic differential $Q(z)d z^2$,
see Theorem \ref{theoremgamma}.
\item We prove that this curve $\gamma$ can be prolonged to $\infty$ in a
suitable way, thus obtaining a curve $\Gamma$ with the $S$-property in the
presence of the external field $\Re V$, see Theorem \ref{theo:Sproperty}.
\item As a consequence of the Gonchar--Rakhmanov theorem, it is possible to obtain the weak limit distribution of the zeros of $P_n(z)$ as $n\to\infty$, see Theorem \ref{theoremr3}.
\item Additionally, a full Riemann--Hilbert analysis of this problem is feasible and yields both existence of the sequence of orthogonal polynomials $P_n(z)$ for large enough $n$ and the asymptotic behavior of $P_n(z)$ in various regions of the complex plane as $n\to\infty$, see Theorem \ref{th:strongasymptotics}.
\end{itemize}
\section{Statement of results}
\subsection{Definition of the curve $\gamma$}
In the case \eqref{E:V} with $r=3$, the curve with the $S$-property is given in terms of the critical trajectory of the
quadratic differential $Q(z)d z^2$, where
\begin{equation} \label{Qr3}
Q(z)=-\frac 14(z+i)^2 (z^2-2i z-3).
\end{equation}
The polynomial \eqref{Qr3} has a double root at $z=-i$ and two
simple roots at $z_1 = -\sqrt{2} + i$ and $z_2 = \sqrt{2} +
i$.
\begin{figure}[t]
\begin{pspicture}(0,0)(10,3.5)
\pscurve{->,arrowsize=0.25}(3,1.5)(4.9,0.8)(6,0.65)
\pscurve(5.9,0.65)(7.1,0.8)(9,1.5)
\psline(1,2.5)(3,1.5)
\psline(9,1.5)(11,2.5)
\psdot(3,1.5)
\psdot(9,1.5)
\put(3.1,1.75){$z_1 = -\sqrt{2} + i$}
\put(7.25,1.75){$z_2 = \sqrt{2} + i$}
\put(4,0.5){$\gamma$}
\put(7,1){$+$}
\put(7,0.4){$-$} \put(1.75,1.5){$\gamma_1$}
\put(10,1.5){$\gamma_2$}
\end{pspicture}
\caption{The contour $\Gamma$ consists of the
critical trajectory $\gamma$ and its analytic
continuations $\gamma_1$ and $\gamma_2$. \label{Figure3}}
\end{figure}
The critical trajectory $\gamma$ is an analytic arc from
$z_1$ to $z_2$ so that
\begin{equation} \label{E:Dz}
\frac{1}{\pi i} \int_{z_1}^z Q^{1/2}(s) d s
\end{equation}
is real for $z\in\gamma$, see \cite{strebel:1984:quad}. We first show that this curve indeed exists.
\begin{theorem} \label{theoremgamma}
There exists a critical trajectory $\gamma$ of
the quadratic differential $Q(z)d z^2$, where $Q(z)$ is given in \eqref{Qr3},
that connects the two zeros $z_1=-\sqrt{2}+i$ and $z_2=\sqrt{2}+i$ of $Q$.
\end{theorem}
The proof of the theorem is contained in Section \ref{s:proof1}.
\subsection{Contour with $S$-property}
In what follows we use the analytic arc $\gamma$, whose existence is guaranteed
by Theorem \ref{theoremgamma}, with an orientation so that $z_1$ is the starting point of $\gamma$
and $z_2$ is the ending point. The $+$ side ($-$ side) of $\gamma$ is on the left (right)
as we traverse $\gamma$ according to its orientation, as shown in Figure~\ref{Figure3}.
From now on the square root $Q^{1/2}(z)$ is defined with
a branch cut along $\gamma$ and so that
\begin{equation} \label{E:Qsqrt}
Q^{1/2}(z) = - \frac{1}{2} i z^2 - \frac{1}{z} + \mathcal{O}\left(\frac{1}{z^2}\right)
\end{equation}
as $z \to \infty$. This branch is then used for
example in \eqref{E:Dz}. We use $Q^{1/2}_+(s)$, when $s \in \gamma$, to denote the limiting value of $Q^{1/2}(z)$ as $z$ approaches $s \in \gamma$
from the $+$ side.
The curve $\gamma$ has an analytic extension to an
unbounded oriented contour
\begin{equation} \label{E:Gamma}
\Gamma = \gamma_1\cup\gamma\cup\gamma_2
\end{equation}
that we use for the orthogonality \eqref{E:varying_weight}.
The parts $\gamma_1$ and $\gamma_2$ are such that
\begin{equation} \label{E:isreal1}
\phi_1(z) = \int_{z_1}^z Q^{1/2}(s) ds \quad \text{ is real and positive for } z \in \gamma_1,
\end{equation}
and
\begin{equation} \label{E:isreal2}
\phi_2(z) = \int_{z_2}^z Q^{1/2}(s) ds \quad \text{ is real and positive for } z \in \gamma_2.
\end{equation}
The main result of the paper is then the following.
\begin{theorem} \label{theo:Sproperty}
The contour $\Gamma$ is a curve with the $S$-property in
the external field $\Re V$. In addition, we have that the equilibrium measure on $\Gamma$ in the external field $\Re V$ is given by
the probability measure
\begin{equation} \label{E:mu}
d\mu(s) = \frac{1}{\pi i}\, Q^{1/2}_+(s) d s.
\end{equation}
\end{theorem}
The proof of this theorem is presented in Section \ref{s:proof2}.
The general result of Gonchar--Rakhmanov, see Theorem \ref{th:Sproperty}, then implies:
\begin{theorem} \label{theoremr3}
Assume $r=3$. For large enough $n$ the monic polynomial $P_n(z)$ of degree $n$
satisfying the orthogonality relation \eqref{E:varying_weight}
with $V(z)$ given by \eqref{E:V} exists uniquely. Furthermore, denoting by
$$
z_{1,n}, \ldots, z_{n,n}
$$
its $n$ zeros in the complex plane, we have
\begin{enumerate}
\item[\rm (a)] as $n\to\infty$ the zeros accumulate on $\gamma$;
\item[\rm (b)] the normalized zero
counting measures have a weak limit
\[ \frac 1n \sum_{k=1}^n \delta_{z_{k,n}}
\stackrel{*}{\longrightarrow} d \mu,
\]
where $\mu$ is given by \eqref{E:mu}.
\end{enumerate}
\end{theorem}
\subsection{Riemann-Hilbert analysis}
Theorem \ref{theoremr3} also follows from a steepest descent analysis
for the Riemann-Hilbert problem that characterizes the polynomials $P_n$.
We include an exposition of this method in Section \ref{s:proof4} for two main reasons:
\begin{itemize}
\item The Riemann-Hilbert analysis not only provides the limit behavior
of the distribution of zeros of $P_n(z)$, but also
strong asymptotics of the orthogonal polynomials in the complex plane.
\item The present case can be viewed as a simple model problem
for Riemann-Hilbert analysis in the complex plane. We hope that the
present analysis can be useful as an introduction to this
powerful method.
\end{itemize}
The Riemann-Hilbert problem
for orthogonal polynomials was found by Fokas, Its and Kitaev
\cite{fokas:1992:isomonodromy}, and the steepest descent analysis of Riemann-Hilbert problems is due
to Deift and Zhou \cite{deift:1993:steepestdescent}. The steepest descent analysis
for orthogonal polynomials with varying weights on the real line
is due to Deift et al., see \cite{DKMVZ:1999:varying} and \cite{Deift:2000:RH}.
The extension of this method to orthogonal polynomials on curves
in the complex plane is not new. It has already been presented in various
papers, see for example \cite{Apt:2002},
\cite{Apt:2007:complex}, \cite{BertolaMo:2009}, \cite{DKMVZ:1999:varying} and \cite{KML:2001:Laguerre}.
However, an attractive feature of the example treated here is
that all quantities in the analysis can be computed explicitly.
In that respect it is similar to \cite{KML:2001:Laguerre}.
In order to formulate the additional asymptotic results that
follow from the steepest descent analysis, we introduce some
more notation. We use the function $\phi_2(z)$ defined in
\eqref{E:isreal2} and the related function (the $g$-function)
\begin{equation} \label{E:defg}
g(z) =\frac 12 V(z)-\phi_2(z)-l, \qquad l = \frac{1}{3} + \frac{1}{2} \log 2,
\end{equation}
which has an alternative expression
\[ g(z) = \int \log(z-s) d\mu(s) \]
in terms of the equilibrium measure $\mu$ on $\gamma$.
In the case $r=3$ there is an explicit expression for $\phi_2(z)$:
\begin{multline} \label{E:phi_explicit}
\phi_2(z) = -\frac{i}{6}z(z+i)\sqrt{z^2-2i z-3} \\ -
\log(z-i+ \sqrt{z^2-2i z-3})+\frac{1}{2} \log 2.
\end{multline}
The so-called global parametrix $N(z)$ is defined in terms of the function
\begin{equation} \label{E:beta}
\beta(z) = \left(\frac{z-z_2}{z-z_1}\right)^{1/4}, \qquad z \in \mathbb C \setminus \gamma,
\end{equation}
with the branch cut taken along $\gamma$. The global parametrix $N(z)$ is a
$2 \times 2$ matrix-valued function with entries
$$
N_{11}(z) = N_{22}(z) = \frac{\beta(z)+\beta(z)^{-1}}{2}, \qquad
N_{12}(z)= - N_{21}(z) = \frac{\beta(z)-\beta(z)^{-1}}{2i},
$$
that also appear in the asymptotic formulas in Theorem \ref{th:strongasymptotics}.
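A short algebraic consequence of these formulas is that $\det N(z) = N_{11}^2 + N_{12}^2 = 1$ for every $z$, and that $N(z)\to I$ as $z\to\infty$ since $\beta(z)\to 1$. The following Python fragment (an added check; the principal branch of the fourth root is used as a stand-in for the cut along $\gamma$) verifies both properties numerically.

```python
import numpy as np

z1, z2 = -np.sqrt(2.0) + 1.0j, np.sqrt(2.0) + 1.0j

def beta(z):
    # ((z - z2)/(z - z1))^(1/4); principal branch used here as a
    # stand-in for the branch cut along gamma
    return ((z - z2) / (z - z1)) ** 0.25

def N(z):
    b = beta(z)
    n11 = 0.5 * (b + 1.0 / b)
    n12 = (b - 1.0 / b) / 2j
    return np.array([[n11, n12], [-n12, n11]])

z0 = 2.0 - 3.0j
print(np.linalg.det(N(z0)))    # det N = 1 identically, N -> I at infinity
```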
Finally, near the endpoint $z_2$ we require a conformal map
\begin{equation}
f(z) = \left[ \frac{3}{2} \phi_2(z) \right]^{2/3},
\end{equation}
which maps $\gamma$ and $\gamma_2$ near $z_2$ into the real line.
A local Riemann-Hilbert problem is solvable
explicitly in terms of the usual Airy function $\Ai(z)$ and its
derivative $\Ai'(z)$. These functions appear in the asymptotic
formula in part (c) of Theorem \ref{th:strongasymptotics} that is
valid in a neighborhood of $z_2$.
The steepest descent analysis of the Riemann-Hilbert problem then
leads to the following result:
\begin{theorem} \label{th:strongasymptotics}
Assume $r=3$. Let $U_{\delta}(z_1)$
and $U_{\delta}(z_2)$ be small neighbourhoods of the points $z_1$
and $z_2$ introduced above. As $n\to\infty$ the polynomial $P_n(z)$
has the following asymptotic behavior:
\begin{itemize}
\item[\rm (a)] Uniformly for $z$ in compact subsets of $\overline{\mathbb C}
\setminus \gamma$, we have
\begin{equation} \label{E:asymptotic outside}
P_n(z) = N_{11}(z) e^{n g(z)} \left(1 + O(1/n)\right)
\end{equation}
as $n \to \infty$;
\item[\rm (b)] There is a neighbourhood $U$ of $\gamma$ in the complex
plane, so that, uniformly for $z \in U \setminus (U_{\delta}(z_1)\cup U_{\delta}(z_2))$:
\begin{equation} \label{E:asymptotic_away}
P_n(z) = e^{n\left[\frac {V(z)}{2}-l\right]}
\left(e^{-n\phi_2(z)}N_{11}(z) \pm i e^{n\phi_2(z)}N_{12}(z) +
O(1/n) \right),
\end{equation}
where the $+$ ($-$) sign in \eqref{E:asymptotic_away} is valid for $z$ in the part of
$U$ that lies above (below) the curve $\gamma$;
\item[\rm (c)] Uniformly for $z\in U_{\delta}(z_2)$, we have as $n \to \infty$,
\begin{align*}
P_n(z) & = \sqrt{\pi}e^{n\left[\frac{V(z)}{2} - l \right]}
\left( n^{1/6} f^{1/4}(z) \beta^{-1}(z)\Ai(n^{2/3}f(z))\left(1 + \mathcal{O}(1/n)\right) \right. \\
& \left. \hspace*{25mm} - n^{-1/6} f^{-1/4}(z) \beta(z) \Ai'(n^{2/3} f(z)) \left(1 + \mathcal{O}(1/n)\right)\right),
\end{align*}
with the same constant $l$ as in \eqref{E:defg}.
\end{itemize}
\end{theorem}
\section{Proof of Theorem \ref{theoremgamma}}\label{s:proof1}
It follows from the general theory, see \cite{strebel:1984:quad}, that
three trajectories of the quadratic differential $Q(z)d z^2$ emanate from
each simple zero of $Q$. The three trajectories through $z_1$ emanate from
$z_1$ at angles $\theta$ that satisfy
\[ 3 \theta = \pi - \arg Q'(z_1) \quad (\textrm{mod}\, 2\pi). \]
From the explicit formula of $Q$ and $z_1$ we find $Q'(z_1) = - \sqrt{2} - 4 i$
and the three angles at $z_1$ are
\[ \theta = -\frac{1}{3} \arctan(2 \sqrt{2}) + \frac{2 k\pi}{3}, \qquad k=0,1,2. \]
We let $\gamma$ be the trajectory that emanates from $z_1$ at angle
\[ \theta_0 = -\frac{1}{3} \arctan(2\sqrt{2}) = -0.4103\cdots. \]
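The value of $Q'(z_1)$ and the resulting angle $\theta_0$ can be checked numerically; the following Python fragment (added here as a sanity check, not part of the original argument) evaluates $Q$ directly from \eqref{Qr3} and recovers both numbers.

```python
import numpy as np

# Q(z) = -(1/4)(z + i)^2 (z^2 - 2iz - 3); the local angles at the simple
# zero z1 follow from arg Q'(z1)
z1 = -np.sqrt(2.0) + 1.0j

def Q(z):
    return -0.25 * (z + 1j) ** 2 * (z * z - 2j * z - 3.0)

h = 1e-6
Qp = (Q(z1 + h) - Q(z1 - h)) / (2.0 * h)       # central difference
theta0 = (np.pi - np.angle(Qp)) / 3.0 - 2.0 * np.pi / 3.0
print(Qp, theta0)    # Q'(z1) = -sqrt(2) - 4i, theta0 = -0.4103...
```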
\begin{figure}[t]
\centerline{\includegraphics[height=65mm,width=90mm]{figure4.eps}}
\vspace{-15mm}
\caption{Contour lines of $D(z)$ in the cubic case.
In solid line $\Im D(z)=0$, and in dashed line $\Re D(z)=0$ and
$\Re D(z)=1$ (left and right of the figure, respectively).}
\label{levelcurves_r3}
\end{figure}
Let
\[ D(z) = \frac{1}{\pi i} \phi_1(z) = \frac{1}{\pi i} \int_{z_1}^z Q^{1/2}(s) d s,
\qquad z \in \mathbb C \setminus \gamma. \]
Figure \ref{levelcurves_r3} shows the level curves $\Re
D(z)=0$, $\Re D(z)=1$, and $\Im D(z)=0$. In order to prove that
$z_1$ and $z_2$ are indeed connected by $\gamma$ (as suggested by the figure),
we use arclength parametrization of $\gamma$
\[ \gamma: \quad z = z(t), \qquad z(0) = z_1. \]
Then
$$ \int_{z(0)}^{z(t)} Q^{1/2}(s)d s = \pi i h(t),
$$
where $h(t)$ is real. Differentiating and squaring, we obtain
$$
Q(z(t))[z'(t)]^2=-\pi^2 [h'(t)]^2,
$$
which implies that
$$
\arg Q(z(t))+2\arg z'(t)=\pi \quad (\textrm{mod}\, 2\pi).
$$
\begin{figure}[t]
\centerline{\includegraphics[height=65mm,width=90mm]{figure5.eps}}
\caption{Contour lines of $Q(z)$ in the cubic case. In solid line
$\Re Q(z)=0$, and in dashed line $\Im Q(z)=0$.}
\label{levelcurves_Q}
\end{figure}
The level lines $\Re Q(z) = 0$ and $\Im Q(z) = 0$
are shown in Fig.~\ref{levelcurves_Q}. The two level lines
intersect of course at the zeros $z_1$, $z_2$ and $-i$ of $Q$.
The critical trajectory $\gamma$ starts at $z_1$ at an
angle $\theta_0 = -\frac{1}{3} \arctan(2\sqrt{2})$, and
therefore $\gamma$ enters the shaded region of Fig.~\ref{levelcurves_Q}, which is the region where $\Re Q(z)<0$ and $\Im Q(z)<0$,
hence $-\pi<\arg Q(z)<-\pi/2$. As a consequence
we have that
$$
-\frac{\pi}{4}<\arg z'(t)<0
$$
as long as $z(t)$ is in the shaded region. This implies that
the real part of $z(t)$ increases faster than the imaginary
part decreases. It follows that the part of $\gamma$ that
is in the shaded region is contained in the triangle with
vertices $z_1=-\sqrt{2}+i$, $i$ and $(1-\sqrt{2})i$.
Hence $\gamma$ leaves the shaded region at a point
on the imaginary axis above the other critical
point $z_0=-i$. Then by the symmetry with respect to the imaginary
axis we conclude that $\gamma$ indeed connects $z_1$ and
$z_2$.
This completes the proof of Theorem \ref{theoremgamma}.
\section{Proof of Theorem \ref{theo:Sproperty}}\label{s:proof2}
We start by presenting a characterization of the curve $\Gamma$ that is equivalent to the $S$-property.
For complex $z$, we define the $g$-function
\begin{equation} \label{E:gfunction}
g(z)=\int_{\Gamma} \log(z-s) d \mu(s),
\end{equation}
which is analytic when $z\in\mathbb{C}\setminus\Gamma$. We observe
that $\Re g(z)=-U^{\mu}(z)$, where $U^{\mu}$ is the logarithmic
potential as defined in \eqref{E:Umupotential}. The equilibrium
properties \eqref{E:Uequilibrium} then translate into
\begin{equation} \label{E:gequilibrium}
\begin{aligned}
\Re(-g_+(z)-g_-(z)+V(z)) & = \ell, \qquad z\in \supp \mu,\\
\Re(-g_+(z)-g_-(z)+V(z)) & \geq \ell, \qquad z\in\Gamma\setminus\supp \mu.
\end{aligned}
\end{equation}
Let us write $\gamma = \supp \mu$, then from the Cauchy-Riemann equations it follows that the $S$-property
\eqref{E:Sproperty} is equivalent to the property that the
imaginary part of $-g_+ - g_- + V$ is locally constant on $\gamma$,
that is
\begin{align} \label{E:gSproperty}
\Im (-g_+(z)-g_-(z)+V(z)) = \tilde{\ell}, \qquad z \in \gamma
\end{align}
with a possibly different constant $\tilde{\ell}$ on the different
components of $\gamma$. Then as a consequence of \eqref{E:gequilibrium}
and \eqref{E:gSproperty} we have that
\begin{equation} \label{E:constantl}
-g_+(z)-g_-(z)+V(z) = \ell+ i\tilde{\ell}
\end{equation}
is constant on each connected component of $\gamma$.
Differentiating \eqref{E:constantl} we obtain
\begin{equation}\label{diffg}
-g'_+(z) - g'_-(z) + V'(z) = 0, \qquad z\in \gamma.
\end{equation}
Next we observe that the function $\tfrac{1}{2} V'(z)-g'(z)$ is
analytic for $z\in \mathbb C \setminus \gamma$, and furthermore using
\eqref{diffg}:
\begin{equation*}
\left(\tfrac 12 V'(z)-g'(z)\right)_+=-\tfrac 12 V'(z)+g'_-(z)
=-\left(\tfrac 12 V'(z)-g'(z)\right)_-
\end{equation*}
for $z \in \gamma$.
Hence $\tfrac 12 V'(z)-g'(z)$ has a multiplicative jump of $-1$ on $\gamma$, and therefore
\begin{equation} \label{E:Qdefinition}
Q(z):=\left(\tfrac 12 V'(z)-g'(z)\right)^2
\end{equation}
is analytic in the whole complex plane. The asymptotic behavior
of $Q(z)$ for $z \to\infty$ follows from the fact that
$V(z)$ is a polynomial and
\begin{equation} \label{E:gprime}
g'(z)=\int \frac{1}{z-s} d\mu(s) = \frac{1}{z}
+ \mathcal{O} \left( \frac{1}{z^2} \right), \qquad z \to \infty,
\end{equation}
since $\mu$ is a probability measure on $\gamma$. An extended form
of Liouville's theorem then implies that $Q$ is a polynomial of degree
$2r-2$ if $\deg V = r$.
In general this is not enough to determine the curve $\gamma$, and we need more information on the roots of $Q(z)$ (or extra assumptions, such as that we are in the one-cut case). Since $Q^{1/2}(z)$ is analytic in $\mathbb C \setminus \gamma$
we can deduce that any zero of odd multiplicity of $Q$ is in $\gamma$. Zeros of even multiplicity can be anywhere and are typically not in $\gamma$.
Let $z_1$ be a zero of $Q$ of odd multiplicity.
From \eqref{E:gprime} we see that for $z \in \gamma$,
in the same connected component as $z_1$, we have
\[ \int_{z_1}^z Q^{1/2}_+(s) d s \in i \mathbb R. \]
This is the condition that characterizes a trajectory of
the quadratic differential $Q(z) d z^2$, emanating from a zero $z_1$ of $Q$.
In the case $V(z)=-iz^3/3$, it follows from \eqref{E:Qdefinition}
and \eqref{E:gprime} that $Q$ should be taken as a polynomial
of degree $4$ so that
\begin{equation} \label{E:QformulaC}
Q(z) = \left(-\frac{i z^2}{2}-\frac{1}{z} +
\mathcal{O}\left(\frac{1}{z^2}\right)\right)^2 = -\frac{z^4}{4}+i z + C,
\end{equation}
where $C$ needs to be determined. In order to do this, we make the assumption (to be justified later)
that we are in the one-cut case, that
is we assume that $\gamma$ is a single curve. The endpoints of
the curve are then simple zeros of $Q$. Since $Q$ has degree four
there are two more zeros, which, in the one-cut case, should
combine into a double zero.
In our case, there is a symmetry about the imaginary axis, and
therefore the double root should be on the imaginary axis, say $z=z_0$,
and two simple roots are symmetric with respect to the imaginary
axis, say $z_1$ and $z_2=-\overline{z}_1$. This leads to
$$
Q(z)=-\frac 14(z-z_0)^2 (z-z_1)(z+\bar{z}_1),
$$
which combined with \eqref{E:QformulaC} yields
\[ z_0=-i, \qquad \text{ and } \qquad z_1=-\sqrt{2}+i,
\qquad z_2 = \sqrt{2} + i. \]
The free constant is $C =-3/4$. Therefore
\begin{equation} \label{e:Qformula}
Q(z)=-\frac 14(z+i)^2 (z^2-2i z-3),
\end{equation}
and we recover \eqref{Qr3}.
Once we have $Q(z)$, we may obtain $\mu$ in the following way.
From \eqref{E:Qdefinition} it follows that there is an analytic
branch of $Q^{1/2}(z)$ for $z \in \mathbb C \setminus \gamma$
which behaves as $\tfrac{1}{2} V'(z)$ for large $z$. Choose an orientation
on $\gamma$. The orientation induces a $+$-side and a $-$-side on $\gamma$,
where the $+$-side ($-$-side) is on the left (right) as one traverses the contour according to
its orientation.
\begin{lemma}\label{lemma:probmeasure}
Given the critical trajectory $\gamma$ and the polynomial $Q(z)$, then
\begin{equation} \label{E:muQ}
\frac{1}{\pi i} \, Q_+^{1/2}(s) \, ds = d\mu
\end{equation}
is a probability measure on $\gamma$.
\end{lemma}
\begin{proof}
The measure $\mu$ is a priori complex, however, by the
construction of $\gamma$ we have that
\[ \int_{z_1}^z d\mu(s) = \frac{1}{\pi i} \int_{z_1}^z Q_+^{1/2}(s)
d s \in \mathbb R \]
for every $z \in \gamma$, so that $\mu$ is a real measure.
Taking $z = z_2$ we can compute
\[ \int_{z_1}^{z_2} d\mu(s) = \frac{1}{\pi i} \int_{z_1}^{z_2} Q_+^{1/2}(s) d s \]
by contour integration. Indeed, we have
\[ \frac{1}{\pi i} \int_{z_1}^{z_2} Q_+^{1/2}(s) d s = \frac{1}{2\pi i} \int_C Q^{1/2}(s) d s \]
where $C$ is a closed contour in $\mathbb C \setminus \gamma$ that encircles $\gamma$ once
in the clockwise direction. Moving the contour to infinity, and using
the behavior of $Q^{1/2}$ at infinity, see \eqref{E:Qsqrt}, we find
\begin{equation} \label{E:mumass}
\mu(\gamma) = \int_{z_1}^{z_2} d\mu(s) = 1.
\end{equation}
Then if $t \in [0,1] \mapsto z=z(t)$ is a smooth parametrization of $\gamma$ with $z(0)=z_1$
and $z(1) = z_2$ we have that
\[ t \in [0,1] \mapsto \frac{1}{\pi i} \int_{z_1}^{z(t)} Q^{1/2}_+(s) d s \]
is real valued, with values $0$ for $t=0$ and $1$ for $t=1$.
The derivative $z'(t) Q^{1/2}(z(t))$ is non-zero for $0 < t < 1$. Therefore
\[ \frac{1}{\pi i} \int_{z_1}^{z(t)} Q^{1/2}_+(s) d s \]
is strictly increasing, and it follows that $\mu$ is a probability measure.
\end{proof}
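The residue computation \eqref{E:mumass} is easy to reproduce numerically. In the Python fragment below (an added check, not part of the proof) the product of principal square roots places the branch cut on the straight segment joining $z_1$ and $z_2$; since that segment is homotopic to $\gamma$ inside the contour of integration, the contour integral is unchanged.

```python
import numpy as np

z1, z2 = -np.sqrt(2.0) + 1.0j, np.sqrt(2.0) + 1.0j

def Qhalf(z):
    # branch with Qhalf(z) ~ -(i/2) z^2 - 1/z at infinity; the product of
    # principal square roots places the cut on the segment joining z1 and
    # z2, which is homotopic to gamma inside the circle used below
    return -0.5j * (z + 1j) * np.sqrt(z - z1) * np.sqrt(z - z2)

# mu(gamma) = (1/(2 pi i)) * clockwise integral of Qhalf over a circle
# enclosing the cut; the trapezoid rule on a periodic integrand is
# spectrally accurate
theta = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
z = 1j + 5.0 * np.exp(1j * theta)
dz = 5j * np.exp(1j * theta)
mass = 1j * np.mean(Qhalf(z) * dz)
print(mass)          # total mass mu(gamma) = 1
```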
\begin{lemma}\label{lemma:eqmeasure}
Let $\Gamma=\gamma_1\cup\gamma\cup\gamma_2$ be defined as in \eqref{E:Gamma}, \eqref{E:isreal1} and \eqref{E:isreal2}, then the measure $\mu$ defined by \eqref{E:muQ} is the equilibrium measure on $\Gamma$ in the external field $\Re V$.
\end{lemma}
\begin{proof}
From another residue calculation, similar to the one leading to \eqref{E:mumass}
and based on \eqref{E:Qsqrt}, it follows that
\begin{equation} \label{E:muresidue}
\int_{\gamma} \frac{1}{z-s} d\mu(s) = \frac{1}{2} V'(z) - Q^{1/2}(z),
\qquad z \in \mathbb C \setminus \gamma.
\end{equation}
Then we have
\[ g(z) = \int \log(z-s) d\mu(s) \]
is such that \eqref{diffg} holds, which after integration
leads to \eqref{E:gSproperty} and to the first line of
\eqref{E:gequilibrium}.
We extend $\gamma$ to an unbounded contour $\Gamma = \gamma \cup \gamma_1 \cup \gamma_2$
as in section 2.2. The unbounded pieces $\gamma_1$ and $\gamma_2$ are such that
\eqref{E:isreal1} and \eqref{E:isreal2} hold.
This leads to the second line of \eqref{E:gequilibrium}.
For example, if $z \in \gamma_2$, then by \eqref{E:isreal2} and \eqref{E:muresidue}
\begin{align*}
0 < 2 \int_{z_2}^z Q^{1/2}(s) d s & =
\int_{z_2}^z (V'(s) - 2 g'(s)) d s \\
& =
(V(z) - 2 g(z)) - (V(z_2) - 2g(z_2)) \\
& = V(z) - 2 g(z) - (\ell + i \tilde{\ell}),
\end{align*}
which by taking the real part indeed leads to the inequality
in \eqref{E:gequilibrium}.
Because of \eqref{E:gequilibrium} we have that $\mu$ is the
equilibrium measure on $\Gamma$ in the external field $\Re V$.
\end{proof}
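The identity \eqref{E:muresidue} can likewise be tested in the explicit Gaussian toy case $V(z) = z^2/2$, where $d\mu$ is the semicircle law on $[-2,2]$ and $Q^{1/2}(z) = \tfrac12\sqrt{z-2}\sqrt{z+2}$ (principal square roots, so that $Q^{1/2}(z) \sim z/2$ at infinity). A numerical sketch:

```python
import cmath
import math

# Toy check of  int dmu(s)/(z - s) = V'(z)/2 - Q^{1/2}(z)  in the
# Gaussian case V(z) = z^2/2, where dmu is the semicircle law on [-2, 2].
def stieltjes_numeric(z, n=4000):
    # substitute s = 2*cos(t); the integral becomes
    # (2/pi) * int_0^pi sin(t)^2 / (z - 2*cos(t)) dt  (midpoint rule)
    h = math.pi / n
    total = 0.0 + 0.0j
    for k in range(n):
        t = (k + 0.5) * h
        total += math.sin(t) ** 2 / (z - 2.0 * math.cos(t))
    return (2.0 / math.pi) * h * total

def stieltjes_exact(z):
    # V'(z)/2 - Q^{1/2}(z); the product of principal square roots
    # gives the branch with Q^{1/2}(z) ~ z/2 at infinity
    return z / 2.0 - 0.5 * cmath.sqrt(z - 2.0) * cmath.sqrt(z + 2.0)

z0 = 1.0 + 1.0j
err = abs(stieltjes_numeric(z0) - stieltjes_exact(z0))
```

The substitution $s = 2\cos t$ removes the endpoint singularities, so the midpoint rule converges rapidly.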
Finally, because of \eqref{E:gSproperty} we conclude that the contour $\Gamma$ has the $S$-property,
and this completes the proof of Theorem \ref{theo:Sproperty}.
\section{Proof of Theorem \ref{th:strongasymptotics}} \label{s:proof4}
\subsection{Riemann--Hilbert problem}
The orthogonal polynomial $P_n(z)$ characterized by \eqref{E:varying_weight} appears as the
$(1,1)$ entry of the solution $Y(z)$ of a $2\times 2$ matrix-valued Riemann--Hilbert problem,
see \cite{fokas:1992:isomonodromy}.
From this Riemann--Hilbert problem, the Deift-Zhou steepest descent method performs several explicit
and invertible transformations that allow us to obtain asymptotic results for
the entries of the matrix $Y$, and in particular for $P_n(z)$, as $n\to\infty$ uniformly in
different regions of $\mathbb{C}$, see \cite{DKMVZ:1999:varying}.
In the present case the analysis is quite standard, except for the fact that we are working
on a complex curve $\Gamma$ instead of on a part of the real line. For this reason,
we give a brief sketch of the method and refer the reader to \cite{DKMVZ:1999:varying}, \cite{Deift:2000:RH}
and \cite{KML:2001:Laguerre} for the general theory involving orthogonality with respect
to exponential weights and also for more details on a similar problem.
We are interested in a matrix-valued function $Y : \mathbb C \setminus \Gamma \to \mathbb{C}^{2\times 2}$ such that
\begin{itemize}
\item $Y(z)$ is analytic for $z \in \mathbb{C}\setminus \Gamma$;
\item $Y_+(z)=Y_-(z) \begin{pmatrix} 1 & e^{-nV(z)} \\ 0 & 1 \end{pmatrix}$, for $z \in \Gamma$;
\item $Y(z)=\left(I+\mathcal{O}\left(\frac 1z\right)\right)
\begin{pmatrix} z^n & 0 \\ 0 & z^{-n} \end{pmatrix}$, as $z\to\infty$.
\end{itemize}
As before, $\Gamma$ is the contour $\Gamma = \gamma_1 \cup \gamma \cup \gamma_2$
consisting of the critical trajectory $\gamma$ and its analytic extensions
$\gamma_1$ and $\gamma_2$. See Figure~\ref{Figure3}.
This Riemann--Hilbert problem has a unique solution if and only if
the monic polynomial $P_n(z)$, orthogonal with respect to the
weight function $w(z)$, exists uniquely. If additionally
$P_{n-1}(z)$ exists, then the solution of the Riemann--Hilbert
problem is given by:
\begin{equation*}
Y(z)= \begin{pmatrix} P_n(z) & (\mathcal{C}P_n w)(z) \\
-2\pi i\gamma_{n-1}P_{n-1}(z) & -\gamma_{n-1}(\mathcal{C}P_{n-1}w)(z)
\end{pmatrix},
\end{equation*}
where
\begin{equation}\label{Cauchy}
(\mathcal{C}f)(z)=\frac{1}{2\pi i}
\int_{\Gamma}\frac{f(s)}{s-z}\ d s
\end{equation}
is the Cauchy transform on $\Gamma$, and the coefficient
$\gamma_{n-1}$ is defined as
$$
\gamma_{n-1}=\left[\int_{\Gamma}P^2_{n-1}(s)w(s)d s\right]^{-1}.
$$
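To make the normalization $\gamma_{n-1}$ concrete, here is a toy illustration with the classical Hermite weight $w(x) = e^{-x^2}$ on the real line (the weight of this paper lives on a complex contour and varies with $n$; this sketch is only meant to fix the meaning of the formula):

```python
import math

# Toy illustration of gamma_{n-1} = [ int P_{n-1}^2(s) w(s) ds ]^{-1}
# for the classical Hermite weight w(x) = exp(-x^2) on the real line.
def integrate(f, a=-8.0, b=8.0, n=40_000):
    # midpoint rule; the weight is negligible outside [-8, 8]
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

w = lambda x: math.exp(-x * x)
p1 = lambda x: x               # monic orthogonal polynomial of degree 1
p2 = lambda x: x * x - 0.5     # monic orthogonal polynomial of degree 2

gamma1 = 1.0 / integrate(lambda x: p1(x) ** 2 * w(x))  # equals 2/sqrt(pi)
ortho = integrate(lambda x: p1(x) * p2(x) * w(x))      # vanishes
```

Here $\int x^2 e^{-x^2}\,dx = \sqrt{\pi}/2$, so $\gamma_1 = 2/\sqrt{\pi}$, and the orthogonality of the monic polynomials is visible numerically.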
\subsection{First transformation}
The first transformation $Y\mapsto T$ is a normalization at $\infty$. We
use the functions $\phi_1$ and $\phi_2$ as in \eqref{E:isreal1} and \eqref{E:isreal2}
which are analytic in $\mathbb{C}\setminus(\gamma_1\cup\gamma)$ and $\mathbb{C}\setminus(\gamma_2\cup\gamma)$ respectively and satisfy $\phi_2(z)-\phi_1(z) = \pm \pi i$ for $z\in\mathbb{C}\setminus\Gamma$. We set
\begin{equation}\label{YT}
T(z)=\begin{pmatrix} e^{nl} &0\\0 & e^{-nl}\end{pmatrix}
Y(z)\begin{pmatrix} e^{n[\phi_2(z)-\tfrac 12 V(z)]} & 0\\ 0 &
e^{-n[\phi_2(z)-\tfrac 12 V(z)]}\end{pmatrix}.
\end{equation}
Now, using (\ref{E:Qdefinition}), we obtain by direct integration from \eqref{E:isreal2} that
\begin{equation}
\phi_2(z)=\tfrac 12 V(z)-\log(z)-l+\mathcal{O}\left(\frac 1z\right), \qquad z\to\infty,
\end{equation}
for some constant of integration $l$. It follows that
$$
e^{n[\phi_2(z)-\tfrac 12 V(z)]}=z^{-n}e^{-nl}\left(1+\mathcal{O}\left(\frac 1z\right)\right), \qquad z\to\infty.
$$
Hence $T$ satisfies the following Riemann--Hilbert problem:
\begin{itemize}
\item $T(z)$ is analytic for $z$ in $\mathbb{C}\setminus \Gamma$;
\item $T$ has the jumps indicated in Figure \ref{Figure6};
\item $T(z) = I + \mathcal{O}\left(\frac 1z\right)$ as $z\to\infty$.
\end{itemize}
\begin{figure}[t]
\begin{pspicture}(0,0)(10,3.5)
\pscurve{->,arrowsize=0.25}(3,1.5)(4.9,0.8)(6,0.65)
\pscurve(5.9,0.65)(7.1,0.8)(9,1.5)
\psline(1,2.5)(3,1.5)
\psline(9,1.5)(11,2.5)
\psdot(3,1.5)
\psdot(9,1.5)
\put(2.9,1.75){$z_1$}
\put(8.8,1.75){$z_2$}
\put(5.5,0.3){$\gamma$}
\put(1.8,2.3){$\gamma_1$}
\put(9.8,2.3){$\gamma_2$}
\put(9.5,1){$\begin{pmatrix} 1 & e^{-2n\phi_2} \\ 0 & 1 \end{pmatrix}$}
\put(4.5,1.6){$\begin{pmatrix} e^{2n\phi_{2+}} & 1 \\ 0 & e^{2n\phi_{2-}} \end{pmatrix}$}
\put(0.25,1){$\begin{pmatrix} 1 & e^{-2n\phi_1} \\ 0 & 1 \end{pmatrix}$}
\end{pspicture}
\caption{The jump matrices for the Riemann-Hilbert problem for $T$. \label{Figure6}}
\end{figure}
\subsection{Second transformation}
The second transformation of the Riemann-Hilbert problem is
the so-called opening of lenses. From the Cauchy--Riemann equations,
it is possible to show that the sign pattern for $\Re \phi_2$ is as shown in Figure~\ref{Figure7}.
Since $\phi_1 = \phi_2 \pm \pi i$, the sign pattern for $\Re \phi_1$
is exactly the same.
\begin{figure}[ht]
\begin{pspicture}(0,0)(10,3.5)
\pscurve(3,1.5)(5,0.75)(7,0.75)(9,1.5)
\psline[linestyle=dashed](1,2.5)(3,1.5)
\psline[linestyle=dashed](9,1.5)(11,2.5)
\psdot(3,1.5)
\psdot(9,1.5)
\put(3.1,1.75){$z_1$}
\put(8.5,1.75){$z_2$}
\psline(3,1.5)(1.75,0.75)
\psline(3,1.5)(2.8,2.6)
\psline(9,1.5)(10.25,0.75)
\psline(9,1.5)(9.2,2.6)
\put(0.7,1.7){$\Re \phi_1 >0$}
\put(5.5,1.25){$\Re \phi_2<0$}
\put(5,0.2){$\Re \phi_2<0$}
\put(9.5,1.8){$\Re \phi_2>0$}
\end{pspicture}
\caption{The sign of $\Re \phi_1 = \Re \phi_2$ in various parts of the complex plane.
The solid curves are where $\Re \phi_2 = 0$.
The curves $\gamma_1$ and $\gamma_2$ are shown with dashed lines.
We have that $\phi_1$ is real and positive on $\gamma_1$
and $\phi_2$ is real and positive on $\gamma_2$. \label{Figure7}}
\end{figure}
In the second transformation we open a lens-shaped region around $\gamma$ as in Fig.~\ref{Figure8}, so that the lens is contained in the region where $\Re \phi_2 < 0$:
\begin{figure}[t]
\begin{pspicture}(0,-1)(10,4.5)
\pscurve{->,arrowsize=0.25}(3,1.5)(4.75,0.8)(6,0.65)
\pscurve(5.9,0.65)(7.25,0.8)(9,1.5)
\psline{->,arrowsize=0.25}(0.5,2.75)(1.5,2.25)
\psline(1.5,2.25)(3,1.5)
\psline{->,arrowsize=0.25}(9,1.5)(10.5,2.25)
\psline(10.5,2.25)(11,2.5)
\psdot(3,1.5)
\psdot(9,1.5)
\pscurve{->,arrowsize=0.25}(3,1.5)(3.5,1.75)(4.6,2.15)(6,2.35)
\pscurve(5.9,2.35)(7.4,2.15)(8.5,1.75)(9,1.5)
\pscurve{->,arrowsize=0.25}(3,1.5)(3.5,0.9)(4.5,0.2)(6,-0.15)
\pscurve(5.9,-0.15)(7.5,0.2)(8.5,0.9)(9,1.5)
\put(3,1.75){$z_1$}
\put(8.75,1.75){$z_2$}
\put(9.5,1){$\begin{pmatrix} 1 & e^{-2n\phi_2} \\ 0 & 1 \end{pmatrix}$}
\put(1,3){$\begin{pmatrix} 1 & e^{-2n\phi_1} \\ 0 & 1 \end{pmatrix}$}
\put(5.5,3){$\begin{pmatrix} 1 & 0 \\ e^{2n\phi_2} & 1 \end{pmatrix}$}
\put(5.5,1.25){$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$}
\put(1.75,-0.25){$\begin{pmatrix} 1 & 0 \\ e^{2n\phi_2} & 1 \end{pmatrix}$}
\put(1,1.85){$\gamma_1$}
\put(10,2.5){$\gamma_2$}
\put(4.5,1.1){$\gamma$}
\end{pspicture}
\caption{The contour $\Gamma_S$
and the jump matrices on $\Gamma_S$ in the Riemann-Hilbert problem for $S$.
\label{Figure8}}
\end{figure}
We define
\begin{equation}
\label{matrixS}
S = \begin{cases}
\, T \begin{pmatrix} 1 & 0 \\ -e^{2n\phi_2} & 1 \end{pmatrix} & \text{in the upper part of the lens}, \\
\,T \begin{pmatrix} 1 & 0 \\ e^{2n\phi_2} & 1 \end{pmatrix} & \text{in the lower part of the lens}, \\
\, T & \text{elsewhere}.
\end{cases}
\end{equation}
Then $S$ satisfies the following Riemann--Hilbert problem:
\begin{itemize}
\item $S(z)$ is analytic for $z \in \mathbb{C}\setminus \Gamma_S$,
where $\Gamma_S$ consists of $\Gamma$ plus the lips of the lens;
\item $S$ has the jumps indicated in Figure \ref{Figure8};
\item $S(z) = I +\mathcal{O}\left(\frac 1z\right)$ as $z\to\infty$.
\end{itemize}
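The opening of the lens rests on an algebraic factorization of the jump matrix on $\gamma$, using that $e^{2n\phi_{2+}} e^{2n\phi_{2-}} = 1$ on $\gamma$: with $a = e^{2n\phi_{2+}}$,
\[ \begin{pmatrix} a & 1 \\ 0 & a^{-1} \end{pmatrix}
= \begin{pmatrix} 1 & 0 \\ a^{-1} & 1 \end{pmatrix}
\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}
\begin{pmatrix} 1 & 0 \\ a & 1 \end{pmatrix}. \]
A quick numerical check of this $2\times 2$ identity:

```python
# Numerical check of the 2x2 factorization used to open the lens.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def lower(b):
    return [[1.0, 0.0], [b, 1.0]]

J = [[0.0, 1.0], [-1.0, 0.0]]  # the jump of S on gamma

a = 2.7  # stand-in for e^{2n phi_{2+}}
F = matmul(matmul(lower(1.0 / a), J), lower(a))
target = [[a, 1.0], [0.0, 1.0 / a]]  # the jump of T on gamma
```

The two lower-triangular factors are precisely the matrices by which $T$ is multiplied in the upper and lower parts of the lens in \eqref{matrixS}.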
\subsection{Construction of parametrices}
\subsubsection{Global parametrix}
Now we seek an approximation to $S$ that is valid for large $n$.
The approximation will consist of two parts, a global parametrix
$N$ away from the endpoints $z_1$ and $z_2$ and local parametrices $P$ at $z_1$ and $z_2$.
The global parametrix $N$ satisfies a Riemann--Hilbert problem
with the same constant jump on $\gamma$. Then $R=SN^{-1}$ will be
analytic across $\gamma$. We define $N$ as
\begin{equation} \label{E:defN}
N(z) =
\begin{pmatrix} \frac 12(\beta(z)+\beta(z)^{-1}) & \frac {1}{2i}(\beta(z)-\beta(z)^{-1}) \\[5pt]
-\frac {1}{2i}(\beta(z)-\beta(z)^{-1}) & \frac 12(\beta(z)+\beta(z)^{-1}) \end{pmatrix},
\end{equation}
where $\beta$ is given by \eqref{E:beta}, see \cite{Deift:2000:RH}, \cite{DKMVZ:1999:varying} and \cite{Arno:2003:RH}.
Then $N$ satisfies the following Riemann--Hilbert problem:
\begin{itemize}
\item $N(z)$ is analytic for $z \in \mathbb{C}\setminus \gamma$;
\item $N_+ = N_- \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$ on $\gamma$;
\item $N(z)=I+\mathcal{O}\left(\frac 1z\right)$ as $z\to\infty$;
\item $N(z)=\mathcal{O}(|z-z_j|^{-1/4})$ as $z\to z_j$ for $j=1,2$.
\end{itemize}
It is clear that $N$ cannot be a good approximation to $S$ near the endpoints of $\gamma$,
since it blows up at $z=z_1$ and $z=z_2$, while $S$ remains bounded there.
For this reason we need a different local approximation near the endpoints.
\subsection{Local parametrix}
The local parametrix $P$ is constructed in neighbourhoods of the endpoint $z=z_j$, $j=1,2$, say
$$
U_{\delta}(z_j)=\{z\in\mathbb{C} \mid |z-z_j|<\delta\}, \qquad j =1,2,
$$
with some small but fixed $\delta > 0$. We describe here the
construction of $P$ in $U_{\delta}(z_2)$, the construction in $U_{\delta}(z_1)$
being similar.
The local parametrix $P$
should satisfy the following Riemann--Hilbert problem:
\begin{itemize}
\item $P(z)$ is analytic for $z \in U_{\delta}(z_2)\setminus \Gamma_S$
with a continuous extension to $\overline{U_{\delta}(z_2)} \setminus \Gamma_S$;
\item $P$ has the jumps on $\Gamma_S \cap U_{\delta}(z_2)$ as shown in Fig.~\ref{Figure9} (these are the same jump
matrices as in the RH problem for $S$);
\item $P(z) = \left(I+\mathcal{O}\left(\frac 1n\right)\right)N(z)$ as $n\to\infty$,
uniformly for $z \in \partial U_{\delta}(z_2)$;
\item $P(z)$ remains bounded as $z\to z_2$.
\end{itemize}
\begin{figure}[t]
\begin{pspicture}(0,0)(10,6.5)
\psdot(7,3)
\pscircle(7,3){3}
\pscurve{->,arrowsize=0.25}(3.5,2.35)(4.5,2.4)(5.5,2.5)
\pscurve(5.45,2.5)(6.5,2.775)(7,3)
\pscurve{->,arrowsize=0.25}(3.9,5)(4.5,4.75)(5.8,4)
\pscurve(5.7,4.075)(6.5,3.45)(7,3)
\pscurve{->,arrowsize=0.25}(4.5,0.25)(5.5,1)(6,1.5)
\pscurve(6,1.5)(6.5,2.15)(7,3)
\pscurve{->,arrowsize=0.25}(7,3)(7.75,3.5)(8,3.7)
\pscurve(8,3.7)(8.5,4.1)(9,4.55)(10,5.5)
\put(6.75,3.4){$z_2$}
\put(3.5,1.9){$\gamma$}
\put(4.25,3){$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$}
\put(7.75,2.75){$\begin{pmatrix} 1 & e^{-2n\phi_2} \\ 0 & 1 \end{pmatrix}$}
\put(5.5,4.75){$\begin{pmatrix} 1 & 0 \\ e^{2n\phi_2} & 1\end{pmatrix}$}
\put(6,1){$\begin{pmatrix} 1 & 0 \\ e^{2n\phi_2} & 1 \end{pmatrix}$}
\end{pspicture}
\caption{The jump matrices in the Riemann-Hilbert problem for $P$
defined in the neighbourhood $U_{\delta}(z_2)$ of $z_2$. \label{Figure9}}
\end{figure}
The construction of $P$ is given in terms of the Airy function $\Ai$
and its derivative. We put
\begin{equation} \label{E:defP}
P(z) = E_n(z)A(n^{2/3} f(z)) \begin{pmatrix} e^{n\phi_2(z)} & 0 \\ 0 & e^{-n\phi_2(z)} \end{pmatrix},
\end{equation}
where $A(\zeta)$, $f(z)$, and $E_n(z)$ are described below.
\paragraph{Airy parametrix $A(\zeta)$}
The matrix-valued function $A(\zeta)$ is the solution
of the Airy Riemann-Hilbert problem, which is posed on
four infinite rays in an auxiliary $\zeta$-plane as follows:
\begin{itemize}
\item $A(\zeta)$ is analytic for $\zeta \in \mathbb C$, $\arg \zeta \not\in \{0, 2\pi/3, -2\pi/3, \pi\}$;
\item $A$ has the jumps on the four rays as shown in Fig.~\ref{Figure10};
\item As $\zeta \to \infty$, we have
\begin{equation}\label{asympA}
A(\zeta)= \begin{pmatrix} \zeta^{-1/4} & 0 \\ 0 & \zeta^{1/4} \end{pmatrix}
\frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i \\ i & 1 \end{pmatrix}
\left(I+\mathcal{O}\left(\frac{1}{\zeta^{3/2}}\right)\right)
\begin{pmatrix} e^{-\frac 23 \zeta^{3/2}} & 0 \\ 0 & e^{\frac 23 \zeta^{3/2}} \end{pmatrix}
\end{equation}
\item $A(\zeta)$ remains bounded as $\zeta \to 0$.
\end{itemize}
\begin{figure}[t]
\begin{pspicture}(0,0)(10,7)
\psline{->,arrowsize=0.25}(3.25,6)(4.125,4.5)
\psline(4.125,4.5)(5,3)
\psline{->,arrowsize=0.25}(3.15,0)(4.125,1.5)
\psline(4.125,1.5)(5,3)
\psline{->,arrowsize=0.25}(5,3)(7,3)
\psline(7,3)(9,3)
\psline{->,arrowsize=0.25}(1,3)(3,3)
\psline(3,3)(5,3)
\psarc(5,3){1}{0}{120}
\put(7,3.5){$\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}$}
\put(4.25,5){$\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$}
\put(4.25,1){$\begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}$}
\put(2,3.5){$\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}$}
\put(5.75,3.8){$2\pi/3$}
\end{pspicture}
\caption{Contours and jump matrices in the Riemann-Hilbert problem
for Airy functions. \label{Figure10}}
\end{figure}
The solution of this Riemann--Hilbert problem is given by
the Airy function $\Ai(\zeta)$ and rotated versions of it, see \cite{DKMVZ:1999:varying}
and \cite[Sec.~10.4]{Abramowitz:1964:HMF}. Let
\begin{equation*}
y_0(\zeta)=\Ai(\zeta), \quad y_1(\zeta)=\omega\Ai(\omega\zeta), \quad y_2(\zeta)=\omega^2\Ai(\omega^2\zeta),
\end{equation*}
where $\omega=e^{\tfrac{2\pi i}{3}}$. These are three solutions of the Airy differential
equation $y'' = \zeta y$ satisfying the connection formula
$y_0(\zeta)+y_1(\zeta)+y_2(\zeta)=0$.
For instance, in the sector $0<\arg\zeta<2\pi/3$ we have
\begin{equation} \label{Ainfirstsector}
A(\zeta)=\sqrt{2\pi} \begin{pmatrix} y_0(\zeta) & -y_2(\zeta) \\
-i y_0'(\zeta) & i y_2'(\zeta) \end{pmatrix}.
\end{equation}
The
solution in the other sectors is obtained from this by applying
the appropriate jump matrices.
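The connection formula $y_0 + y_1 + y_2 = 0$ can be checked numerically. The following sketch implements $\Ai$ through its Maclaurin series (adequate for moderate $|\zeta|$; the constants are $\Ai(0)$ and $-\Ai'(0)$) and verifies both the connection formula and the Airy equation $y'' = \zeta y$:

```python
import cmath
import math

def airy_ai(z, terms=80):
    # Maclaurin series Ai(z) = c1*f(z) - c2*g(z), adequate for moderate |z|;
    # c1 = Ai(0), c2 = -Ai'(0).
    c1 = 3.0 ** (-2.0 / 3.0) / math.gamma(2.0 / 3.0)
    c2 = 3.0 ** (-1.0 / 3.0) / math.gamma(1.0 / 3.0)
    f, g = 0j, 0j
    tf, tg = 1.0 + 0j, z  # current terms of the two series
    for k in range(terms):
        f += tf
        g += tg
        n = 3 * k
        tf *= z ** 3 / ((n + 2) * (n + 3))
        tg *= z ** 3 / ((n + 3) * (n + 4))
    return c1 * f - c2 * g

omega = cmath.exp(2j * math.pi / 3)

def connection_defect(z):
    # y0 + y1 + y2 with y0 = Ai(z), y1 = omega*Ai(omega*z),
    # y2 = omega^2*Ai(omega^2*z)
    return (airy_ai(z) + omega * airy_ai(omega * z)
            + omega ** 2 * airy_ai(omega ** 2 * z))

def ode_defect(z, h=1e-4):
    # second-difference check of the Airy equation y'' = z*y
    ypp = (airy_ai(z + h) - 2.0 * airy_ai(z) + airy_ai(z - h)) / h ** 2
    return abs(ypp - z * airy_ai(z))
```

Since the series for $f$ contains only powers $z^{3k}$ and that for $g$ only powers $z^{3k+1}$, the connection formula even holds exactly for the truncated series.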
\paragraph{Conformal map $f(z)$}
The map $f(z)$ is defined by
\begin{equation}
f(z)= \left[\frac {3}{2}\phi_2(z)\right]^{2/3},
\end{equation}
which is a conformal map in a neighbourhood of $z=z_2$. It is assumed that $\delta > 0$
is sufficiently small so that $f$ is indeed a conformal map on $U_{\delta}(z_2)$, and also that the lens around $\gamma$ is opened in such a way that the
lips of the lens inside $U_{\delta}(z_2)$ are mapped by $\zeta = f(z)$ to
the rays $\arg \zeta = \pm 2\pi/3$. This can be done without any loss of generality.
\paragraph{Analytic prefactor $E_n(z)$}
The prefactor $E_n(z)$ in \eqref{E:defP} is defined by
\begin{align}
E_n(z) & = \nonumber
N(z)\frac{1}{\sqrt{2}} \begin{pmatrix} 1 & - i \\ -i & 1 \end{pmatrix}
\begin{pmatrix} n^{1/6} f(z)^{1/4} & 0 \\ 0 & n^{-1/6} f(z)^{-1/4} \end{pmatrix} \\
& = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & - i \\ -i & 1 \end{pmatrix}
\begin{pmatrix} n^{1/6} f(z)^{1/4} \beta^{-1}(z) & 0 \\
0 & n^{-1/6} f(z)^{-1/4} \beta(z) \end{pmatrix}, \label{matrixEn}
\end{align}
which is analytic in $U_{\delta}(z_2)$.
It is chosen so that the matching condition $P(z) = (I + \mathcal{O}(1/n)) N(z)$
for $z \in \partial U_{\delta}(z_2)$ is satisfied.
Then with these definitions it can be shown that $P$ defined by \eqref{E:defP}
indeed satisfies the Riemann-Hilbert problem for $P$.
\subsection{Third transformation}
In the third and final transformation we use the global parametrix $N(z)$
and the local parametrices $P(z)$ to define
\begin{align} \label{matrixR}
R(z) = \begin{cases} S(z)N(z)^{-1}, & \quad z\in\mathbb{C} \setminus (\Gamma_S \cup
\overline{U_{\delta}(z_1)} \cup \overline{U_{\delta}(z_2)}), \\[5pt]
S(z)P(z)^{-1}, & \quad z\in(\overline{U_{\delta}(z_1)}
\cup \overline{U_{\delta}(z_2)}) \setminus \Gamma_S.
\end{cases}
\end{align}
Then $R$ has an analytic continuation across $\gamma$ and across the
parts of $\Gamma_S$ that are inside the disks $U_{\delta}(z_1)$
and $U_{\delta}(z_2)$. It satisfies the following Riemann--Hilbert problem:
\begin{itemize}
\item $R(z)$ is analytic for $z \in \mathbb{C}\setminus \Gamma_R$, where $\Gamma_R$ is
the contour shown in Fig.~\ref{Figure11};
\item $R$ has jumps on each part of $\Gamma_R$ with jump matrices
as indicated in Fig.~\ref{Figure11};
\item $R(z)=I+\mathcal{O}\left(\frac 1z\right)$ as $z\to\infty$.
\end{itemize}
\begin{figure}[t]
\begin{pspicture}(0,0)(15,7)
\psdot(3,3.5)
\psdot(9,3.5)
\pscurve{->,arrowsize=0.25}(3.8,4.1)(4.5,4.3)(5,4.4)(6,4.5)
\pscurve(6,4.5)(7,4.4)(7.5,4.3)(8.2,4.1)
\pscurve{->,arrowsize=0.25}(3.65,2.75)(4.5,2.3)(5,2.15)(6,2)
\pscurve(6,2)(7,2.15)(7.5,2.3)(8.35,2.75)
\put(2.8,3.75){$z_1$}
\put(8.8,3.75){$z_2$}
\psarc(9,3.5){1}{-90}{90}
\psarc{<-,arrowsize=0.25}(9,3.5){1}{90}{-90}
\psarc(3,3.5){1}{-90}{90}
\psarc{<-,arrowsize=0.25}(3,3.5){1}{90}{-90}
\pscurve{->,arrowsize=0.25}(9.95,3.75)(10.5,4)(11,4.25)
\pscurve(11,4.25)(11.5,4.6)(12,5)
\pscurve(2.05,3.75)(1.5,4)(1,4.25)
\pscurve{<-,arrowsize=0.25}(1,4.25)(0.5,4.6)(0,5)
\put(4.5,5){$N\begin{pmatrix} 1 & 0 \\ e^{2n\phi_2} & 1 \end{pmatrix} N^{-1}$}
\put(4.5,1){$N\begin{pmatrix} 1 & 0 \\ e^{2n\phi_2} & 1 \end{pmatrix} N^{-1}$}
\put(10,2.75){$PN^{-1}$}
\put(1,2.75){$PN^{-1}$}
\put(8.5,5.5){$N\begin{pmatrix} 1 & e^{-2n\phi_2} \\ 0 & 1 \end{pmatrix} N^{-1}$}
\put(0,5.5){$N\begin{pmatrix} 1 & e^{-2n\phi_1} \\ 0 & 1 \end{pmatrix} N^{-1}$}
\end{pspicture}
\caption{The contour $\Gamma_R$
and the jump matrices on $\Gamma_R$ in the Riemann-Hilbert problem for $R$.
\label{Figure11}}
\end{figure}
The jump matrices in the Riemann-Hilbert problem for $R$ tend
to the identity matrix as $n \to \infty$.
Indeed, since $P(z) = (I+\mathcal{O}(1/n))N(z)$ as $n\to\infty$, uniformly
for $z \in \partial U_{\delta}(z_1)\cup\partial U_{\delta}(z_2)$, we have that
\[
R_+(z)=R_-(z)\left(I+\mathcal{O}\left(\frac 1n\right)\right),
\qquad z\in \partial U_{\delta}(z_1)\cup\partial U_{\delta}(z_2)
\]
as $n \to \infty$. On the remaining parts of $\Gamma_R$ we even have
for some positive constant $c > 0$,
\[
R_+(z) = R_-(z)\left(I+\mathcal{O}(e^{-cn})\right),
\qquad z\in \Gamma_R\setminus(\partial U_{\delta}(z_1)\cup\partial U_{\delta}(z_2)),
\]
as $n \to \infty$. Thus the jumps on $R$ tend to the identity matrix uniformly,
and in fact also in $L^2(\Gamma_R)$.
It then follows from the general theory, see \cite{Deift:2000:RH}
and \cite{Arno:2003:RH}, that
the solution to the Riemann-Hilbert problem for $R$ exists for all large
enough $n$ with
\begin{equation} \label{E:Rasymptotics}
R(z)=I+\mathcal{O}\left(\frac{1}{n}\right), \qquad \text{as } n\to\infty,
\end{equation}
uniformly for $z\in\mathbb{C}\setminus\Gamma_R$.
\subsection{Proof of Theorem \ref{th:strongasymptotics}}
Once we arrive at this result for $R$, it is possible to reverse
all the transformations $Y \mapsto T \mapsto S \mapsto R$, since they are all explicit and
invertible. The first thing that follows is that the original Riemann-Hilbert problem for $Y$
has a unique solution for large enough $n$.
Since
\[ P_n(z) = Y_{11}(z), \]
this proves that the orthogonal polynomials $P_n$
indeed exist for every large enough $n$.
The asymptotic formula \eqref{E:Rasymptotics} for $R$ further yields the
first term in an asymptotic expansion of $Y$ as $n \to \infty$.
Following the effect of the inverse transformations $R \mapsto S \mapsto T \mapsto Y$
on the asymptotic formula \eqref{E:Rasymptotics} for $R$,
we obtain the asymptotics of $Y$ and therefore of $P_n$ in the various
regions of the complex plane.
This will give the different parts of Theorem \ref{th:strongasymptotics}.
\subsubsection{Proof of part (a)}
Let $z \in \mathbb C \setminus \gamma$. We then may and do assume
that the lens around $\gamma$ and the neighborhoods $U_{\delta}(z_1)$
and $U_{\delta}(z_2)$ are chosen so that $z$ is in the outside region.
From \eqref{matrixR} we have that $S(z) = R(z) N(z)$.
Also, because we are outside the lens, we have $S(z) = T(z)$ from (\ref{matrixS})
and $T(z)$ in terms of $Y(z)$ follows from (\ref{YT}). Combining all this we find
\[ Y(z) = \begin{pmatrix} e^{-nl} & 0 \\ 0 & e^{nl} \end{pmatrix}
\left(I + \mathcal O\left(\frac{1}{n}\right)\right) N(z)
\begin{pmatrix} e^{-n\left[\phi_2(z)-\frac{1}{2} V(z)\right]} & 0 \\
0 & e^{n\left[\phi_2(z)-\frac{1}{2} V(z)\right]} \end{pmatrix}.
\]
Then, for the $(1,1)$-entry the first part of the theorem follows
in a straightforward way, since
\[ g(z) = \frac{1}{2} V(z) - \phi_2(z) - l. \]
\subsubsection{Proof of part (b)}
For $z$ inside the lens, but outside of the two disks, we
have $S(z) = R(z) N(z)$ as before. From \eqref{matrixS} we
then get
\begin{equation*}
T(z)=S(z) \begin{pmatrix} 1 & 0 \\ \pm e^{2n\phi_2} & 1 \end{pmatrix} =
R(z)N(z) \begin{pmatrix} 1 & 0 \\ \pm e^{2n\phi_2} & 1 \end{pmatrix}
\end{equation*}
where the $+$ sign ($-$ sign) is taken in the upper (lower) part of the lens.
Using \eqref{YT} and \eqref{E:Rasymptotics} we then find
\begin{equation*}
Y(z)= \begin{pmatrix} e^{-nl} & 0 \\ 0 & e^{nl} \end{pmatrix} \left(I + \mathcal O\left(\frac{1}{n}\right)\right) N(z)
\begin{pmatrix} e^{-n\left[\phi_2(z)-\frac{1}{2} V(z)\right]} & 0 \\
\pm e^{n\left[\phi_2(z)+\frac{1}{2} V(z) \right]} & e^{n\left[\phi_2(z)-\frac{1}{2} V(z)\right]} \end{pmatrix}.
\end{equation*}
Then for the $(1,1)$-entry we obtain from this
\begin{align*}
P_n(z) & = Y_{11}(z) = e^{n \left[\frac{V(z)}{2} - l\right]}
\begin{pmatrix} 1 + \mathcal O\left(\frac{1}{n}\right), & \mathcal O\left(\frac{1}{n}\right) \end{pmatrix}
N(z) \begin{pmatrix} e^{-n \phi_2(z)} \\ \pm e^{n \phi_2(z)} \end{pmatrix} \\
& = e^{n\left[\frac {V(z)}{2}-l\right]}
\left( e^{-n \phi_2(z)} N_{11}(z) \pm e^{n\phi_2(z)} N_{12}(z) + \mathcal O\left(\frac{1}{n}\right)
\right)
\end{align*}
as $n\to\infty$. This proves part (b) of the theorem.
\subsubsection{Proof of part (c)}
In the neighbourhoods $U_{\delta}(z_1)$ and $U_{\delta}(z_2)$ of
the endpoints $z_1$ and $z_2$ we use the local parametrix $P(z)$
to obtain an approximation for $P_n(z)$ in terms of Airy
functions. Indeed, by \eqref{matrixR} and \eqref{E:Rasymptotics},
\begin{equation*}
S(z) = R(z) P(z) = \left( I + \mathcal O\left(\frac{1}{n}\right)\right) P(z)
\end{equation*}
for $z \in U_{\delta}(z_1) \cup U_{\delta}(z_2)$. If we assume that
$z$ is inside the disk $U_{\delta}(z_2)$ but outside the lens around $\gamma$,
then we find by following the transformations \eqref{matrixS} and \eqref{YT} that
\begin{align*}
Y(z)= \begin{pmatrix} e^{-nl} & 0 \\ 0 & e^{nl} \end{pmatrix}
\left( I + \mathcal O\left(\frac{1}{n}\right)\right)
P(z) \begin{pmatrix} e^{-n\left[\phi_2(z)-\frac{V(z)}{2}\right]} & 0 \\
0 & e^{n\left[\phi_2(z)-\frac{V(z)}{2}\right]} \end{pmatrix}.
\end{align*}
Using \eqref{E:defP} and \eqref{matrixEn}, we obtain from this that
\begin{multline*}
Y(z)= \frac{1}{\sqrt{2}} \begin{pmatrix} e^{-nl} & 0 \\ 0 & e^{nl} \end{pmatrix}
\left( I + \mathcal O\left(\frac{1}{n}\right)\right) \begin{pmatrix} 1 & -i \\ -i & 1 \end{pmatrix} \\
\times \begin{pmatrix} n^{1/6} f(z)^{1/4} \beta(z)^{-1} & 0 \\
0 & n^{-1/6} f(z)^{-1/4} \beta(z) \end{pmatrix} \\
\times A(n^{2/3} f(z))
\begin{pmatrix} e^{n\frac{V(z)}{2}} & 0 \\ 0 & e^{-n\frac{V(z)}{2}} \end{pmatrix}.
\end{multline*}
To evaluate $A(n^{2/3} f(z))$ we use \eqref{Ainfirstsector} and it follows that
\begin{align*}
P_{n}(z) & = \begin{pmatrix} 1 & 0 \end{pmatrix} Y(z) \begin{pmatrix} 1 \\ 0 \end{pmatrix} \\
&=
\sqrt{\pi} e^{n \left[\frac{V(z)}{2} - l\right]}
\begin{pmatrix} 1 + \mathcal O\left(\frac{1}{n} \right), & \mathcal O\left(\frac{1}{n}\right) \end{pmatrix}
\begin{pmatrix} 1 & -i \\ -i & 1 \end{pmatrix} \\
& \qquad \times \begin{pmatrix} n^{1/6} f(z)^{1/4} \beta(z)^{-1} & 0 \\
0 & n^{-1/6} f(z)^{-1/4} \beta(z) \end{pmatrix}
\begin{pmatrix} \Ai(n^{2/3} f(z)) \\ - i \Ai'(n^{2/3} f(z)) \end{pmatrix}
\end{align*}
as $n\to\infty$.
This proves part (c) of the theorem in case $z \in U_{\delta}(z_2)$ is outside the lens.
A similar calculation leads to the same expression in case $z$ is inside the lens.
This completes the proof of part (c) of Theorem \ref{th:strongasymptotics}.
\section{Concluding remarks}
We have presented a Riemann--Hilbert analysis of a family of
polynomials orthogonal with respect to a varying exponential
weight on certain curves of the complex plane. The problem was
motivated by the fact that the zeros of these polynomials are
complex Gaussian quadrature points for an oscillatory integral on
an interval $[a,b]\subset\mathbb{R}$. The zeros cluster on
analytic arcs in the complex plane, which are given by a critical
trajectory of a suitable quadratic differential.
We have focused on the case where the weight function is
$V(z)= - i z^3/3$, for which we were able to obtain explicit expressions
throughout the Riemann-Hilbert analysis. A similar procedure (with
more complicated computations) can be applied in principle to the more general
case $V(z)= - i z^r/r$ with $r\geq 5$ and odd. The only difficulty
is the determination of a curve with the $S$-property in this
more general case. It would be interesting to know
if we are in the one-cut case for every odd $r$.
It is also worth remarking that the Riemann--Hilbert analysis can
provide more detailed asymptotic information than that given
here, following the ideas presented in \cite{DKMVZ:1999:varying}, \cite{KMVV:2004:Jacobi}. The
importance of these results from a numerical point of view is
currently under investigation.
\section*{Acknowledgements}
The authors acknowledge useful discussions with
A.~Mart\'{i}nez-Finkelshtein and H.~Stahl. A. Dea\~{n}o
acknowledges financial support from the programme of postdoctoral
grants of the Spanish Ministry of Education and Science and
project MTM2006-09050. D. Huybrechs is a Postdoctoral Fellow of the Research Foundation
Flanders (FWO) and is supported by FWO-Flanders project G061710N.
A.B.J.~Kuijlaars
is supported by K.U. Leuven research grant OT/08/33, FWO-Flanders project
G.0427.09, by the Belgian Interuniversity Attraction Pole P06/02, by the
European Science Foundation Program MISGAM, and by grant
MTM2008-06689-C02-01 of the Spanish
Ministry of Science and Innovation.
\section{Introduction}
\textbf{Generative adversarial networks (GANs):} GANs \citep{goodfellow2014generative} are a class of generative models based on a competitive game between a \emph{generator} that tries to generate realistic new data, and a \emph{discriminator} that tries to distinguish generated from real data. In practice, both players are parameterized by neural networks that are trained simultaneously by a variant of stochastic gradient descent.
\textbf{The minimax interpretation:}
Presently, the success of GANs is mostly attributed to properties of the divergence or metric obtained under an optimal discriminator.
For instance, an optimal discriminator in the original GAN leads to a generator loss equal to the Jensen-Shannon divergence between the real and the generated distribution.
Optimization over the generator is then seen as approximately minimizing this divergence.
We refer to this point of view as the \emph{minimax interpretation}.
The minimax interpretation has led to the development of numerous GAN variants that aim to use divergences or metrics with better theoretical properties.
\textbf{The GAN-dilemma:}
However, every attempt to explain GAN performance with the minimax interpretation faces one of the two following problems:
\begin{enumerate}[wide, labelwidth=!, labelindent=0pt, label=\textbf{\arabic*.}]
\item \label{item:dilemma1} \textbf{Without regularity constraints, the discriminator can always be perfect.} This is because it can selectively assign a high score to the finite amount of real data points while assigning a low score on the remaining support of the generator distribution, as illustrated in Figure~\ref{fig:picking_out}.
Therefore, the Jensen-Shannon divergence between a continuous and a discrete distribution always achieves its maximal value, a property that is shared by all divergences that do not impose regularity constraints on the discriminator.
Thus, these divergences cannot meaningfully compare the quality of different generators.
\item \label{item:dilemma2} \textbf{Imposing regularity constraints needs a measure of similarity of images.} Imposing regularity on the discriminator amounts to forcing it to map similar images to similar results. To do so, we would require a notion of similarity between images that is congruent with human perception.
This is a longstanding unsolved problem in computer vision.
Commonly used gradient penalties are based on the Euclidean norm, which is known to poorly capture visual similarity, as illustrated in Figure~\ref{fig:deception}.
\end{enumerate}
\begin{figure*}
\centering
\includegraphics[width=0.30\textwidth]{figures/not_picking_out.pdf}
\hspace{0.2\textwidth}
\includegraphics[width=0.30\textwidth]{figures/picking_out.pdf}
\caption{\textbf{The discriminator can always improve:} We want the discriminator confidence to reflect the relative abundance of true and fake data (left). But by picking out individual data points, the discriminator can almost always achieve arbitrarily low loss on any finite data set (right).
Even in the limit of infinite data, the slightest misalignment of the supports of generated and real data can be exploited in a similar way.}
\label{fig:picking_out}
\end{figure*}
We believe that the different divergences underlying the various GAN formulations have little to do with their ability to produce realistic images.
This view is supported by the large-scale studies of \citet{lucic2017gans}, which did not find systematic differences in the performance of GANs associated with different divergence measures.
Understanding GAN performance is crucial for improving training stability and reducing the required amount of hyperparameter tuning.
\textbf{A way out?}
Due to the GAN-dilemma, every attempt at explaining the performance of GANs needs to go beyond the minimax interpretation and consider the \emph{dynamics} of the training process.
In this work, we argue that an implicit regularization due to the simultaneous\footnote{Here and in the following, when talking about simultaneous training, we include variants such as alternating gradient descent.} training of generator and discriminator allows GANs to use the inductive biases of neural networks for the generation of realistic images.
\textbf{Implicit competitive regularization:}
We define \emph{implicit competitive regularization} (ICR) as the introduction of additional stable points or regions due to the simultaneous training of generator and discriminator that do not exist when only training the generator (or discriminator) with gradient descent while keeping the discriminator (or generator) fixed. \\
It has been previously observed that performing simultaneous gradient descent (SimGD) on both players leads to stable points that are not present when performing gradient descent with respect to either player, while keeping the other player fixed \citep{mazumdar2018convergence}.
These stable points are not local Nash equilibria, meaning that they are not locally optimal for both players.
This phenomenon is commonly seen as a shortcoming of SimGD, and modifications that promote convergence only to local Nash equilibria have been proposed by, for instance, \citet{balduzzi2018mechanics,mazumdar2019finding}.
In contrast to this view we believe that ICR is crucial to overcoming the GAN-dilemma and hence to explaining GAN performance in practice by allowing the inductive biases of the discriminator network to inform the generative model.
\subsection*{Summary of Contributions}
In this work, we point out that a fundamental dilemma prevents the common minimax interpretation of GANs from explaining their successes.
We then show that implicit competitive regularization (ICR), which so far was believed to be a \emph{flaw} of SimGD, is key to overcoming this dilemma.
Based on simple examples and numerical experiments on real GANs we illustrate how it allows GANs to use the inductive biases of neural networks for generative modelling, resulting in their spectacular performance.\\
We then use this understanding to improve GAN performance in practice.
Interpreting ICR from a game-theoretic perspective, we reason that strategic behavior and opponent-awareness of generator and discriminator during the training procedure can strengthen ICR.
These elements are present in competitive gradient descent (CGD) \citep{schaefer2019competitive}, which is based on the two players solving for a local Nash equilibrium at each step of training.
Accordingly, we observe that CGD greatly strengthens the effects of ICR.
In comprehensive experiments on CIFAR 10, competitive gradient descent stabilizes previously unstable GAN formulations and achieves higher inception score compared to a wide range of explicit regularizers, using both WGAN loss and the original saturating GAN loss of \citet{goodfellow2014generative}.
In particular, taking an existing WGAN-GP implementation, dropping the gradient penalty, and training with CGD leads to the highest inception score in our experiments.
We interpret this as additional evidence that ICR, as opposed to explicit regularization, is the key mechanism behind GAN performance.
\section{The GAN-dilemma}
\label{sec:gan_dilemma}
In this section, we study in more detail the fundamental dilemma that prevents the common minimax interpretation from explaining the successes of GANs.
In particular, we show how the existing GAN variants fall into one or the other side of the GAN-dilemma.
\textbf{Metric-agnostic GANs:}
In the original formulation due to \citet{goodfellow2014generative}, the two players are playing a zero-sum game with the loss function of the generator given by the binary cross entropy
\begin{figure*}
\centering
\includegraphics[width=0.80\textwidth]{figures/deception.png}
\caption[]{\textbf{The Euclidean distance is not perceptual:} We would like to challenge the reader to order the above three pairs of images according to the Euclidean distance of their representation as vectors of pixel-intensities. \footnotemark}
\label{fig:deception}
\end{figure*}
\footnotetext{The pairs of images are ordered from left to right, in increasing order of distance. The first pair is identical, while the third pair differs by a tiny warping.}
\begin{equation}
\label{eqn:ogan}
\min \limits_{\mathcal{G}} \max \limits_{\mathcal{D}} \frac{1}{2} \mathbb{E}_{x \sim P_{\operatorname{data}}} \left[ \log \mathcal{D}(x) \right] + \frac{1}{2} \mathbb{E}_{x \sim P_{\mathcal{G}}}\left[ \log\left(1 - \mathcal{D}(x) \right) \right].
\end{equation}
Here, $P_{\mathcal{G}}$ is the probability distribution produced by the generator $\mathcal{G}$, $\mathcal{D}$ is the classifier provided by the discriminator, and $P_{\operatorname{data}}$ is the target measure, for example the empirical distribution of the training data.
A key feature of the original GAN is that it depends on the discriminator only through its output when evaluated on samples. This property is shared, for instance, by the more general class of $f$-divergence GANs \citep{nowozin2016f}.
We call GAN formulations with this property \emph{metric-agnostic}.
\textbf{Metric-informed GANs:}
To address instabilities observed in the original GAN, \citet{arjovsky2017wasserstein} introduced WGAN, with loss function given by
\begin{equation}
\label{eqn:wgan}
\min \limits_{\mathcal{G}} \max \limits_{\mathcal{D}} \mathbb{E}_{x \sim P_{\operatorname{data}}} \left[ \mathcal{D}(x) \right] - \mathbb{E}_{x \sim P_{\mathcal{G}}}\left[ \mathcal{D}(x) \right] + \mathcal{F}\left(\nabla \mathcal{D}\right)
\end{equation}
where $\mathcal{F}(\nabla \mathcal{D})$ is infinity if $\sup_x \left\|\nabla \mathcal{D}(x) \right\| > 1$ and zero otherwise.
\citet{gulrajani2017improved} propose WGAN-GP, where this inequality constraint is relaxed by replacing $\mathcal{F}$ with a penalty, for instance $\mathcal{F}(\nabla \mathcal{D}) = \mathbb{E} \left[ \left( \left\|\nabla_x \mathcal{D} \right\| - 1 \right)^2\right]$.
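As a minimal numerical sketch (our own illustration, not part of any GAN implementation), the penalty term above can be estimated by Monte Carlo over sample points, using central differences for the input gradient. For an exactly $1$-Lipschitz linear critic the penalty vanishes, while scaling the critic up is penalized:

```python
import numpy as np

def grad_penalty(D, xs, eps=1e-5):
    """Monte Carlo estimate of E[(||grad_x D(x)|| - 1)^2], the relaxed
    Lipschitz penalty of WGAN-GP, with gradients via central differences."""
    pens = []
    for x in xs:
        g = np.array([(D(x + eps * e) - D(x - eps * e)) / (2 * eps)
                      for e in np.eye(len(x))])
        pens.append((np.linalg.norm(g) - 1.0) ** 2)
    return float(np.mean(pens))

w = np.array([0.6, 0.8])      # ||w|| = 1, so D(x) = w . x is exactly 1-Lipschitz
xs = np.random.randn(4, 2)    # points at which the penalty is sampled

pen_lip = grad_penalty(lambda x: float(w @ x), xs)        # ~0: no penalty
pen_steep = grad_penalty(lambda x: float(2 * w @ x), xs)  # ||grad|| = 2 -> ~1
```

Note that for a linear critic the gradient, and hence the penalty, is independent of the sample points; for a neural network critic the penalty is typically evaluated at interpolates between real and generated samples.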
These GAN formulations are fundamentally different from metric-agnostic GANs in that they depend explicitly on the gradient of the discriminator.
In particular, they depend on the choice of metric used to measure the size of $\nabla \mathcal{D}$.
Subsequent to WGAN(-GP), which uses the Euclidean norm, other variants such as Sobolev-GAN \citep{mroueh2017sobolev}, Banach-GAN \citep{adler2018banach}, or Besov-GAN \citep{uppal2019nonparametric} have been proposed that use different metrics to measure gradient size.
We refer to these types of GAN formulations as \emph{metric-informed} GANs.
\textbf{The problem with metric-agnostic GANs:}
GANs are able to generate highly realistic images, but they suffer from unstable training and mode collapse that often necessitates extensive hyperparameter tuning.
Beginning with \citet{arjovsky2017towards}, these problems of the original GAN have been explained by the fact that the supports of the generator distribution and the training data are almost never perfectly aligned.
For any fixed generator, the discriminator can take advantage of this fact to achieve arbitrarily low loss, as illustrated in Figure~\ref{fig:picking_out}.
In the case of Formulation~\eqref{eqn:ogan}, this corresponds to the well-known fact that the Jensen-Shannon divergence between mutually singular measures is always maximal.
This result extends to \emph{all} metric-agnostic divergences, simply because they have no way of accessing the degree of similarity between data points on disjoint supports.
\citet{arora2017generalization,huang2017parametric} emphasize that the discriminator is restricted to a function class parameterized by a neural network.
However, the experiments of \citet{arjovsky2017towards} as well as our own in Figure~\ref{fig:overtraining_1} clearly show the tendency of the discriminator to diverge as it achieves near-perfect accuracy.
This is not surprising since \citet{zhang2016understanding} observed that modern neural networks are able to fit even \emph{random} data perfectly.
\citet{arjovsky2017towards} also show that as the discriminator improves its classification loss, the generator receives less and less useful gradient information.
This is again not surprising, since confidence scores of deep neural networks are known to be poorly calibrated \citep{guo2017calibration}.
Therefore, the outputs of a near-perfect discriminator can not be expected to provide a useful assessment of the quality of the generated samples.
Since GAN optimization is highly non-convex it is natural to ask if GANs find \emph{locally} optimal points in the form of local Nash or Stackelberg equilibria.
This local minmax interpretation has been emphasized by \citet{fiez2019convergence,jin2019minmax}, but the experiments of \citet{berard2019closer} as well as our own in Figure~\ref{fig:overtraining_1} suggest that good GAN solutions for metric-agnostic GANs are typically not locally optimal for both players.
It seems plausible that the discriminator, being highly overparameterized, can find a direction of improvement against most generators.
\textbf{The problem with metric-informed GANs:}
The above observation has motivated the introduction of metric-informed GANs that restrict the size of the gradient of the discriminator (as a function mapping images to real numbers).
This limits the discriminator's ability to capitalize on small misalignments between $P_{\mathcal{G}}$ and $P_{\operatorname{data}}$ and thus makes for a meaningful minimax interpretation even if the two measures have fully disjoint support.
However, the Achilles heel of this approach is that it needs to choose a metric to quantify the magnitude of the discriminator's gradients.
Most of the early work on metric-informed GANs chose to measure the size of $\nabla \mathcal{D}$ using the Euclidean norm \citep{arjovsky2017towards,arjovsky2017wasserstein,gulrajani2017improved,roth2017stabilizing,kodali2017convergence,miyato2018spectral}.
However, since the discriminator maps images to real numbers, this corresponds to quantifying the similarity of images at least locally by the Euclidean distance of vectors containing the intensity values of each pixel.
As illustrated in Figure~\ref{fig:deception}, this notion of similarity is poorly aligned with visual similarity even locally.
From this point of view it is not surprising that the generative model of \citet{chen2019gradual}, based on a differentiable optimal transport solver, produced samples of lower visual quality than WGAN-GP, despite achieving a better approximation in the Wasserstein metric.
As noted by \citet{chen2019gradual}, these observations suggest that the performance of WGAN can not be explained by its relationship to the Wasserstein distance.
When comparing a variety of GAN formulations with a fixed budget for hyperparameter tuning, \citet{lucic2017gans} did not find systematic differences in their performance.
This provides additional evidence that the key to GAN performance does not lie in the choice of a particular divergence between probability measures.
The metric-informed divergences considered so far were all based on the Euclidean distance between images.
Other researchers have tried using different metrics on image space such as Sobolev or Besov norms \citep{adler2018banach,mroueh2017sobolev,uppal2019nonparametric}, or kernel maximum mean discrepancy distances \citep{li2015generative,li2017mmd,binkowski2018demystifying}.
However, none of these metrics do a good job at capturing perceptual similarity either, which explains why these variants have not been observed to outperform WGAN(-GP) in general.
Researchers in computer vision have proposed more sophisticated domain-specific distance measures \citep{simard1998transformation}, kernel functions \citep{haasdonk2007invariant,song2014local}, and feature maps \citep{dalal2005histograms}.
Although computationally expensive, methods from differential geometry have been used for image inter- and extrapolation \citep{trouve2005metamorphoses,berkels2015time,effland2018image}.
However, none of these classical methods achieve performance comparable to that of neural network based models, making them unlikely solutions for the GAN dilemma.
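The shortcoming of pixel-space metrics discussed above can be made concrete in a few lines (a toy version of the effect shown in Figure~\ref{fig:deception}): for a synthetic image containing a bright stripe, shifting the stripe by a single pixel (visually almost imperceptible) is farther in Euclidean distance than halving its brightness (a clearly visible change):

```python
import numpy as np

# An 8x8 "image" with a bright vertical stripe in column 3.
img = np.zeros((8, 8))
img[:, 3] = 1.0

shifted = np.roll(img, 1, axis=1)  # visually near-identical: stripe moved 1 px
dimmed = 0.5 * img                 # visually different: stripe at half brightness

d_shift = np.linalg.norm(img - shifted)  # sqrt(8 + 8) = 4.0
d_dim = np.linalg.norm(img - dimmed)     # sqrt(8 * 0.25) ~ 1.41
```

Pixel-space Euclidean distance thus ranks the imperceptible shift as "more different" than the obvious dimming.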
\textbf{A way out:}
Generative modelling means producing new samples that are \emph{similar} to the training samples, but \emph{not too similar} to each other.
Thus, every generative method needs to choose how to measure similarity between samples, implicitly or explicitly.\\
When analyzing GANs from the minimax perspective this assessment of image similarity seems to rely exclusively on the classical metrics and divergences used for their formulation.
But modeling perceptual similarity is hard and most commonly used GAN formulations are based on measures of similarity that are known to be terrible at this task.
Thus, the minimax point of view can not explain why GANs produce images of higher visual quality than any other method.
The key to image classification is to map \emph{similar} images to \emph{similar} labels.
The fact that deep neural networks drastically outperform classical methods in this task leads us to believe that they capture perceptual similarity between images far better than any classical model.
We believe that the success of GANs is due to their ability to implicitly use the inductive biases of the discriminator network as a notion of similarity.
They create images that \emph{look real} to a neural network, which acts as a proxy for \emph{looking real} to the human eye.
In the next section we propose a new mechanism, implicit competitive regularization, to explain this behavior.
\section{Implicit competitive regularization (ICR)}
\textbf{Implicit regularization:}
Based on the discussion in the last section, any attempt at understanding GANs needs to involve the inductive biases of the discriminator.
However, there is ample evidence that the inductive biases of neural networks do not arise from a limited ability to represent certain functions.
Indeed, it is known that modern neural networks can fit almost arbitrary functions \citep{kolmogorov1956representation,cybenko1989approximation,zhang2016understanding}.
Rather, they seem to arise from the dynamics of gradient-based training that tends to converge to classifiers that generalize well, a phenomenon commonly referred to as \emph{implicit regularization} \citep{neyshabur2017implicit,gunasekar2017implicit,ma2017implicit,azizan2019stochastic,kubo2019implicit,arora2019implicit}.
\textbf{Implicit regularization is not enough for GANs:}
The implicit regularization induced by gradient descent lets neural networks prefer sets of weights with good generalization performance.
However, the outputs of even a well-trained neural network are typically not informative about the confidence of the predicted class \citep{guo2017calibration}.
Thus, a discriminator trained on finite amounts of real data and data generated by a given generator can be expected to distinguish new real data from new data generated by a similar generator with high accuracy.
However, its outputs do not quantify the confidence of its prediction and thus of the visual quality of the generated samples.
Therefore, even considering implicit regularization, a fully trained discriminator does not provide useful gradients for training the generator.
\textbf{Implicit \emph{competitive} regularization:}
We think that GAN training relies on \emph{implicit competitive regularization} (ICR), an additional implicit regularization due to the simultaneous training of generator and discriminator.
When training generator and discriminator simultaneously, ICR selectively stabilizes good generators that would not be stable when training one player while keeping the other player fixed.
Consider the game given by
\begin{equation}
\label{eqn:basicICR}
\min \limits_x \max \limits_{y} x^2 + 10 xy + y^2.
\end{equation}
In this problem, for any fixed $x$, any choice of $y$ will be sub-optimal and gradient ascent on $y$ (with $x$ fixed) will diverge to infinity for almost all initial values.
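Both behaviors can be reproduced in a few lines (a sketch of the experiment behind Figure~\ref{fig:cycling}, using the same step sizes):

```python
def grad(x, y):
    """Gradients of f(x, y) = x**2 + 10*x*y + y**2."""
    return 2 * x + 10 * y, 10 * x + 2 * y

# Gradient ascent on y alone (x fixed at 1) diverges:
y_alone = 1.0
for _ in range(500):
    y_alone += 0.01 * grad(1.0, y_alone)[1]

# Simultaneous gradient descent/ascent with eta_x = 0.09, eta_y = 0.01
# converges to (0, 0), even though (0, 0) is locally worst for y:
x, y = 1.0, 1.0
for _ in range(2000):
    gx, gy = grad(x, y)
    x, y = x - 0.09 * gx, y + 0.01 * gy

print(y_alone)          # huge: individual training diverges
print(abs(x) + abs(y))  # essentially zero: joint training converges
```

The convergence of the joint iteration, despite $y = 0$ being sub-optimal for the maximizing player, is the simplest instance of ICR.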
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/plot_combined_beta_1.pdf}
\caption{\textbf{ICR in the quadratic case:} When optimizing only $y$ in Equation~\eqref{eqn:basicICR}, it diverges rapidly to infinity, for any fixed $x$. If, however, we simultaneously optimize $x$ and $y$ with respective step sizes $\eta_{x} = 0.09$ and $\eta_{y} = 0.01$, we converge to $(0,0)$.}
\label{fig:cycling}
\end{figure}
What about simultaneous gradient descent?
As has been observed before \citep{mazumdar2018convergence}, simultaneous gradient descent with step sizes $\eta_x = 0.09$ for $x$ and $\eta_y = 0.01$ for $y$ will converge to $(0,0)$, despite this being a locally \emph{worst} strategy for the maximizing player. (See Figure~\ref{fig:cycling} for an illustration.)
This is a first example of ICR, whereby the simultaneous optimization of the two agents introduces additional attractive points to the dynamics that are \emph{not} attractive when optimizing one of the players using gradient descent while keeping the other player fixed.
As outlined in Section~\ref{sec:gan_dilemma}, the key to the performance of GANs has to lie in the simultaneous optimization process.
We now provide evidence that the solutions found by GANs are indeed stabilized by ICR.
To this end, we train a GAN on MNIST until it creates good images.
We refer to the resulting generator and discriminator as the \emph{checkpoint} generator and discriminator.
We observe that the losses of generator and discriminator, as well as the image quality, are somewhat stable, although they would eventually diverge after much longer training.
If instead, starting at the checkpoint, we optimize only the discriminator while keeping the generator fixed, we observe that the discriminator loss drops rapidly.
For the same number of iterations and using the same learning rate, the discriminator moves away from the checkpoint significantly faster, as measured both by the Euclidean norm of the weights and by its output on real and fake images.
The observation that the discriminator diverges from the checkpoint faster when trained individually than when trained simultaneously with the generator suggests that the checkpoint, which produced good images, was stabilized by ICR.
\begin{figure}
\centering
\includegraphics[width=0.49\columnwidth]{figures/loss_compare.pdf}
\includegraphics[width=0.49\columnwidth]{figures/pred_D.pdf}
\caption{\textbf{ICR on MNIST:} We train a GAN on MNIST until we reach a \emph{checkpoint} where it produces good images.
(First image:) We fix the generator and only train the discriminator, observing that it can reach near-zero loss. When instead training generator and discriminator jointly, the loss stays stable.
(Second image:) The discriminator moves significantly more slowly when trained jointly with the generator than when trained individually, as measured by its output on a set of a thousand reference images.}
\label{fig:overtraining_1}
\end{figure}
\section{How ICR lets GANs generate}
\textbf{A (hypo)thesis:}
In the example in the last section, the checkpoint producing good images was stabilized by ICR.
However, we have not yet given a reason why points stabilized by ICR should have better generators, in general.
For GANs to produce visually plausible images, there has to be some correspondence between the training of neural networks and human visual perception.
Since learning and generalization are poorly understood even for ordinary neural network classifiers, we can not avoid making an assumption on the nature of this relationship.
This section relies on the following hypothesis.
\textbf{Hypothesis:} How quickly the discriminator can pick up on an imperfection of the generator is correlated with the visual prominence of said imperfection.
It is common intuition in training neural network classifiers that more visually obvious patterns are learned in fewer iterations and from less data.
It is also in line with the \emph{coherent gradient hypothesis} of \citet{chatterjee2020coherent} that explains generalization performance of neural networks with the fact that systematic patterns in the data generate more coherent gradients and are therefore learned faster.
While a thorough verification of the hypothesis is beyond the scope of this work we provide some empirical evidence in Figure~\ref{fig:pretrain}.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/Accuracy_pretrain.pdf}
\caption{\label{fig:pretrain} By prematurely stopping the training process, we obtain generators of different image-quality on CIFAR10 (higher inception score (IS) reflects better image quality).
We then train a new discriminator against this \emph{fixed} generator and measure how quickly it increases its classification performance.
We use a model trained on the 10-class classification task as a starting point for the discriminator to prevent the initial phase of training from polluting the measurements.
While all discriminators achieve near-perfect accuracy eventually, the \emph{rate} of improvement is inversely correlated with the inception score of the generator.}
\end{figure}
This section argues for the following thesis.
\textbf{Thesis:} ICR selectively stabilizes generators for which the discriminator can only improve its loss \emph{slowly}.
By the hypothesis, these generators will produce high quality samples.
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/plot_combined_beta_0.pdf}
\includegraphics[width=0.9\columnwidth]{figures/plot_combined_beta_-1.pdf}
\caption{\textbf{ICR depends on speed of learning:} When changing the learning rates to $\left(\eta_{x}, \eta_{y}\right) = \left(0.03, 0.03\right)$ (top) or $\left(\eta_{x}, \eta_{y}\right) = \left(0.01, 0.09\right)$ (bottom), SimGD diverges.}
\label{fig:step_sizes}
\end{figure}
\textbf{An argument in the quadratic case:}
We begin with the quadratic problem in Equation~\eqref{eqn:basicICR} and model the different speeds of learning of the two agents by changing their step sizes $\eta_x$ and $\eta_y$.
In Figure~\ref{fig:step_sizes} we see that for $\left(\eta_{x}, \eta_{y}\right) = (0.03, 0.03)$ the two agents slowly diverge to infinity and for $\left(\eta_{x}, \eta_{y}\right) = \left(0.01, 0.09\right)$, divergence occurs rapidly.
In general, stable points $\bar{x}$ of an iteration $x_{k + 1} = x_{k} + F\left(x_{k}\right)$ are characterized by \textbf{(1):} $F\left(\bar{x}\right) = 0$ and \textbf{(2):} $D_{x}F\left(\bar{x}\right)$ having spectral radius smaller than one \citep[Proposition 3]{mescheder2017numerics}.
For SimGD applied to a zero sum game with the loss of $x$ given by $f$, these are points with vanishing gradients such that
\begin{equation*}
\operatorname{Id} - M
\coloneqq
\operatorname{Id} -
\begin{pmatrix}
\eta_x D_{xx}^2 f & \eta_{x} D_{xy}^2 f \\
- \eta_y D_{yx}^2 f & -\eta_{y} D_{yy}^2 f
\end{pmatrix}
\end{equation*}
has spectral radius smaller than one.
For univariate $x$, $y$ we can set $a \coloneqq D_{xx}^2f$, $b \coloneqq D_{xy}^2f$, and $c \coloneqq D_{yy}^2f$ and compute the characteristic polynomial of $M$ as
\begin{equation*}
p(\lambda) = \lambda^2 - (\eta_x a - \eta_y c) \lambda + (- \eta_x \eta_y a c + \eta_x \eta_y b^2).
\end{equation*}
For $\eta_{x} a > \eta_{y}c$ and $ \eta_x \eta_y b^2 > \eta_x \eta_y a c$ the solutions of this equation have positive real part and therefore the eigenvalues of $M$ have positive real part.
By multiplying $\eta_{x}$ and $\eta_{y}$ by a small enough factor we can obtain a spectral radius smaller than one (cf.\ \citet{mazumdar2018convergence}).
Thus, a small enough $\eta_{y}$ and large enough mixed derivative $b$ can ensure convergence even for positive $c$.
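This stability criterion is easy to check numerically. For the game in Equation~\eqref{eqn:basicICR} we have $a = c = 2$ and $b = 10$; in the following sketch (using the same step-size pairs as Figures~\ref{fig:cycling} and~\ref{fig:step_sizes}), the spectral radius of $\operatorname{Id} - M$ crosses one exactly where SimGD changes from convergent to divergent:

```python
import numpy as np

a, b, c = 2.0, 10.0, 2.0  # Hessian entries of f(x, y) = x^2 + 10xy + y^2

def spectral_radius(eta_x, eta_y):
    """Spectral radius of Id - M for the SimGD iteration."""
    M = np.array([[eta_x * a, eta_x * b],
                  [-eta_y * b, -eta_y * c]])
    return max(abs(np.linalg.eigvals(np.eye(2) - M)))

r_converge = spectral_radius(0.09, 0.01)  # ~0.96 < 1: SimGD converges
r_slow = spectral_radius(0.03, 0.03)      # ~1.04 > 1: SimGD diverges
r_fast = spectral_radius(0.01, 0.09)      # ~1.12 > 1: diverges rapidly
```

The slower the maximizing player learns relative to the minimizing player, the smaller the spectral radius, matching the behavior observed in the figures.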
If we think of the maximizing player as the discriminator, slow learning (modelled by small $\eta_{y}$) is correlated with good images produced by the generator.
Thus, in this interpretation, a good generator leads to ICR stabilizing the point $(0,0)$ more strongly.
\begin{figure}
\centering
\includegraphics[scale=0.18]{figures/plot_oscillate.pdf}
\includegraphics[scale=0.18]{figures/plot_project.pdf}
\includegraphics[scale=0.25]{figures/scatter_oscillate.pdf}
\includegraphics[scale=0.25]{figures/scatter_project.pdf}
\caption{\textbf{Approximate projection via adversarial training:} In the left column, the discriminator picks up on errors in the $x$- and $y$-direction equally quickly. Therefore, the generator tries to satisfy the two criteria alternately, leading to a cyclic pattern. In the right column, the discriminator picks up on errors in the $x$-direction much more quickly. This causes the generator to try to stay accurate in the $x$-direction.}
\label{fig:toyGAN}
\end{figure}
\textbf{Adversarial training as projection:}
Surprisingly, ICR allows us to compute a projection with respect to the perceptual distance of a neural network, without quantifying this distance explicitly.
Let us consider the following example.
We construct a \emph{generator} $\mathcal{G}$ that maps its $28$ weights to a bivariate output.
This nonlinear map is modelled as a tiny neural network with two hidden layers, with the final layer restricting the output to the set $\mathcal{S} \coloneqq \left\{ (e^{s+t}, e^{s-t}) \middle| s \in \left[-\frac{1}{2}, \frac{1}{2}\right], t \in \mathbb{R} \right\} \subset \mathbb{R}^2$.
We think of this as mapping a set of weights to a generative model that is characterized by only two parameters.
In this parameterization, we assume that the target distribution is represented by the point $P_{\operatorname{data}} = (2,2)$.
Importantly, as shown in Figure~\ref{fig:toyGAN}, there is no set of weights that allow the generator to output \emph{exactly} $P_{\operatorname{data}}$.
This is to model the fact that in general, the generator will not be able to exactly reproduce the target distribution.
We construct a \emph{discriminator} $\mathcal{D}$ that maps a generative model (a pair of real numbers) and a set of 28 weights to a real number, by a small densely connected neural network.
We want to model the difference in visual prominence of the two components of $P_{\operatorname{data}}$. To this end, we assume that, before being passed to the discriminator, $\mathcal{G}$ and $P_{\operatorname{data}}$ are rescaled by a diagonal matrix $\eta \in \mathbb{R}^{2 \times 2}$.
Thus, $\eta$ determines the relative size of the gradients of $\mathcal{D}$ with respect to the first and second components of the input data.
This models the hypothesis that a real discriminator will pick up more quickly on visually prominent features.
Importantly, we assume $\eta$ to be unknown, since we do not have access to a metric measuring ``visual similarity to a neural network''.\\
We will now show how adversarial training can be used to approximate a projection with respect to $\eta$, without knowing $\eta$.
We use the loss
\begin{equation}
\min \limits_{w_{\mathcal{G}} \in \mathbb{R}^{28}} \max \limits_{w_{\mathcal{D}} \in \mathbb{R}^{28}} \mathcal{D}\left(\eta P_{\operatorname{data}}, w_{\mathcal{D}} \right) - \mathcal{D}\left(\eta \mathcal{G}\left(w_{\mathcal{G}}\right), w_{\mathcal{D}}\right)
\end{equation}
and train the two networks using simultaneous gradient descent.
For $\eta$ equal to the identity, we see oscillatory training behavior as $\mathcal{G}$ tries to be accurate first in one direction, then in the other.
If we instead use $\eta = \left(\begin{smallmatrix} 1 & 0\\ 0 & 10^{-2} \end{smallmatrix}\right)$, we are modelling the first component as being more visually prominent.
Instead of the oscillatory patterns from before, we observe long periods where the value of the first component of $\mathcal{G}\left(w_{\mathcal{G}}\right)$ is equal to the first component of $P_{\operatorname{data}}$ (see Figure~\ref{fig:toyGAN}).
Without knowing $\eta$, we have approximated the projection of $P_{\operatorname{data}}$ onto $\mathcal{S}$ with respect to the metric given by $(x,y) \mapsto \|\eta (x,y)\|$.
To do so, we used the fact that this point is subject to the slowest-learning discriminator, and thus the strongest ICR.
We believe that GANs use the same mechanism to compute generators that are close to the true data in the perceptual distance of the discriminator, which in turn acts as a proxy for the perceptual distance of humans.
\section{Competitive gradient descent amplifies ICR}
\textbf{How to strengthen ICR:}
We have provided evidence that GANs' ability to generate visually plausible images can be explained by ICR selectively stabilizing good generators.
It is well known that GANs often exhibit unstable training behavior, which is mirrored by the observations in Figures~\ref{fig:cycling}, \ref{fig:overtraining_1} and~\ref{fig:toyGAN} that ICR often only leads to weak, temporary stability.
Thus, it would be desirable to find algorithms that induce stronger ICR than SimGD.
To this end, we will find a game-theoretic point of view useful.
\textbf{Cooperation in a zero-sum game?}
As discussed in the last section, ICR can stabilize solutions that are locally suboptimal for at least one of the players.
Since we did not model either of the two players as altruistic, this behavior may seem puzzling.
It is likely for this reason that ICR has mostly been seen as a flaw, rather than a feature of SimGD.
\textbf{Convergence by competition:}
The quadratic example in Equation~\eqref{eqn:basicICR} shows that the bilinear term $xy$ is crucial for the presence of ICR: without it, SimGD reduces to each player moving independently according to gradient descent.
Writing the general quadratic game as $\min_x \max_y \alpha x^2 + 10 xy + \beta y^2$ (Equation~\eqref{eqn:basicICR} corresponds to $\alpha = \beta = 1$), the strength of ICR decreases rapidly as $\left|\alpha\right|$ and $\left|\beta\right|$ diverge to infinity, since the players' own terms then dominate the mixed term.
The mixed term $xy$ models the ability of each player to retaliate against actions of the other player.
In the case of $\beta > 0$, as the maximizing player $y$ moves towards plus infinity in order to maximize its reward $\beta y^2$, it becomes a locally optimal strategy for the minimizing player $x$ to move towards negative infinity in order to minimize the dominant term $xy$.
If $|\beta| \ll 1$, it then becomes favorable for the maximizing player to move back towards zero in order to maximize the now-dominant term $xy$.
The reason for the maximizing player to stay in the sub-optimal point $y=0$ (the \emph{maximizer} of its loss, for $x = 0$) is that the minimizing player can use the mixed term $xy$ to punish every move of $y$ with a counterattack.
Thus, the need to avoid counterattacks justifies the seemingly sub-optimal decision of the maximizing player to stay in $y=0$.
\textbf{The generator strikes back!}
This phenomenon is also present in the example of Figure~\ref{fig:overtraining_1}.
Consider the checkpoint generator from Figure~\ref{fig:overtraining_1} and the over-trained discriminator that achieves a near-perfect score against it.
As we can see in Figure~\ref{fig:overtraining_2}, training the generator while keeping the over-trained discriminator fixed leads to a rapidly increasing discriminator loss.
The over-trained discriminator has become vulnerable to counterattack by the generator!
If instead the generator is trained against the checkpoint discriminator, the loss increases only slowly.
Thus, ICR can be interpreted as the discriminator trying to avoid counterattack by the generator.
\textbf{Agent modelling for stronger ICR:}
The updates of SimGD applied to the loss function $f$ can be interpreted as the two players solving, at each step, the local optimization problems
\begin{equation*}
\min \limits_x x^{\top}\nabla_x f(x_k, y_k) + \frac{\|x\|^2}{2\eta}, \ \ \ \
\max \limits_y y^{\top} \nabla_y f(x_k, y_k) - \frac{\|y\|^2}{2\eta}
\end{equation*}
The terms $x^{\top}\nabla_x f(x_k, y_k)$, $y^{\top} \nabla_y f(x_k, y_k)$ express the \emph{belief} about the loss associated to different actions, based on local information.
The quadratic regularization terms express their \emph{uncertainty} about these beliefs, letting them avoid extreme actions (large steps).
However, $y$ ($x$) does not appear in the local optimization problem of $x$ ($y$).
Thus, the two players are not taking the presence of their opponent into account when choosing their actions.
Accordingly, ICR arises only because of the players' reaction to, rather than anticipation of, each other's actions.
We propose to strengthen ICR by using local optimization problems that model the players' \emph{anticipation} of each other's action.
\textbf{Competitive gradient descent:}
The updates of competitive gradient descent (CGD) \citep{schaefer2019competitive} are obtained as Nash equilibria of the local game
\begin{align*}
&\min \limits_x x^{\top}\nabla_x f(x_k, y_k) + x^{\top} [D_{xy}f(x_k, y_k)] y + \frac{\|x\|^2}{2\eta}, \\
&\max \limits_y y^{\top} \nabla_y f(x_k, y_k) + y^{\top} [D_{yx}f(x_k, y_k)] x - \frac{\|y\|^2}{2\eta}.
\end{align*}
Under CGD, the players are aware of each other's presence at every step, since the mixed Hessian term $x^{\top} [D_{xy}f(x_k,y_k)]y$ informs each player how the simultaneous action of the other player could affect the loss incurred due to their own action.
This element of anticipation strengthens ICR, as indicated by the convergence results provided by \citet{schaefer2019competitive}.
Providing additional evidence, we see in Figure~\ref{fig:overtraining_2} that attempting to over-train the discriminator using CGD leads to a discriminator that is even more robust than the checkpoint discriminator.
Applying CGD to the example of Figure~\ref{fig:toyGAN} also increases the stability of the approximate projection of $P_{\operatorname{data}}$ onto $\mathcal{S}$ according to the metric implicit in the discriminator.
These results suggest to use CGD to strengthen ICR in GAN training, which we will investigate in the next section.
We also expect methods such as LOLA \citep{foerster2018learning} or SGA \citep{balduzzi2018mechanics,gemp2018global} to strengthen ICR, but a detailed comparison is beyond the scope of this work.
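For intuition, here is a minimal scalar sketch of ours (not the implementation of \citet{schaefer2019competitive}): solving the local game above in closed form for $f(x,y)=xy$ gives the CGD update below, and the anticipation terms turn the outward spiral of SimGD into a contraction that shrinks the squared distance to the equilibrium by $1/(1+\eta^2)$ per step.

```python
def cgd_step(x, y, eta=0.1):
    """One CGD step for f(x, y) = x * y (x minimizes, y maximizes).

    Closed-form Nash equilibrium of the local game:
    dx = -eta/(1 + eta^2 * Dxy * Dyx) * (df/dx + eta * Dxy * df/dy)
    dy = +eta/(1 + eta^2 * Dyx * Dxy) * (df/dy - eta * Dyx * df/dx)"""
    gx, gy = y, x                  # gradients of f
    dxy = dyx = 1.0                # mixed second derivatives of x * y
    inv = 1.0 / (1.0 + eta ** 2 * dxy * dyx)
    return (x - eta * inv * (gx + eta * dxy * gy),
            y + eta * inv * (gy - eta * dyx * gx))

x, y = 1.0, 1.0
for _ in range(100):
    x, y = cgd_step(x, y)

# Anticipation contracts r^2 = x^2 + y^2 by 1/(1 + eta^2) per step,
# so the iterates converge to the equilibrium instead of spiraling away.
assert x * x + y * y < 1.0
```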
\begin{figure}
\centering
\includegraphics[width=0.49\columnwidth]{figures/G3_Dloss3.pdf}
\includegraphics[width=0.49\columnwidth]{figures/plot_projectCGD.pdf}
\caption{\textbf{ICR and opponent-awareness:} When training the generator for just a few iterations against the over-trained discriminator of Figure~\ref{fig:overtraining_1}, the discriminator loss increases rapidly. When attempting to over-train with CGD instead of Adam, the resulting discriminator is even more robust.
Similarly, CGD is able to significantly increase the duration for which the generator stays accurate in the (more important) $x$-direction in Figure~\ref{fig:toyGAN}.}
\label{fig:overtraining_2}
\end{figure}
\section{Empirical study on CIFAR10}
\textbf{Experimental setup:} Based on the last section, we expect CGD to strengthen the effects of ICR and therefore to improve GAN performance.
We will now investigate this question empirically.
In order to make for a fair comparison with Adam, we combine CGD with a simple RMSprop-type heuristic to adjust learning rates, obtaining adaptive CGD (ACGD, see supplement for details).
As loss functions, we use the original GAN loss (OGAN) of \eqref{eqn:ogan} and the Wasserstein GAN loss function (WGAN)
given by
\begin{equation*}
\label{eqn:minmaxganws}
\min \limits_{\mathcal{G}} \max \limits_{\mathcal{D}} ~ \mathbb{E}_{x \sim P_{\operatorname{data}}} \left[\mathcal{D}(x)\right] - \mathbb{E}_{x \sim P_{\mathcal{G}}} \left[\mathcal{D}(x)\right].
\end{equation*}
When using Adam on OGAN, we stick to the common practice of replacing the generator loss by $\mathbb{E}_{x \sim P_{\mathcal{G}}}\left[-\log\left(\mathcal{D}(x)\right)\right]$, as this has been found to improve training stability \citep{goodfellow2014generative,goodfellow2016deep}.
In order to be generous to existing methods, we use an existing architecture
intended for use with WGAN gradient penalty \citep{gulrajani2017improved}.
As regularizers, we consider no regularization (NOREG), $\ell_2$ penalty on the discriminator with different weights (L2), Spectral normalization \citep{miyato2018spectral} on the discriminator (SN), or $1$-centered gradient penalty on the discriminator, following \cite{gulrajani2017improved} (GP).
Following the advice in \citep{goodfellow2016deep} we train generator and discriminator simultaneously, with the exception of WGAN-GP and Adam, for which we follow \citep{gulrajani2017improved} in making five discriminator updates per generator update.
We use the Pytorch implementation of inception score (IS) \citep{salimans2016improved} to compare generator quality.\footnote{The Pytorch implementation gives slightly different scores than Tensorflow. We report Tensorflow IS in the supplementary material showing that the relative performance is largely the same.
}
\begin{figure*}
\centering
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fig-a.pdf}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fig-b.pdf}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=0.70\textwidth]{figures/sample.png}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fig-d.pdf}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fig-e.pdf}
\end{minipage}
\begin{minipage}{0.32\textwidth}
\centering
\includegraphics[width=\textwidth]{figures/fig-f.pdf}
\end{minipage}
\caption{We plot the inception score (IS) against the number of iterations (first panel) and the number of gradient or Hessian-vector product computations (second panel). In the third panel we show final samples of WGAN trained with ACGD and without explicit regularization.
In panel four, we measure image quality using the Fr\'echet inception distance (FID, smaller is better). The results are consistent with those obtained using IS.
In panel five, we plot the difference between inception scores between ACGD and Adam (positive values correspond to a larger score for ACGD) over different iterations and models.
The only cases where we observe nonconvergence of ACGD are OGAN without regularization or with weight decay of weight $0.0001$, as shown in the last panel. The inception score is however still higher than for the same model trained with Adam.
When using Adam on the original saturating GAN loss (which we used with ACGD), training breaks down completely.}
\label{fig:summary}
\end{figure*}
\textbf{Experimental results:} We will now summarize our main experimental findings (see Figure~\ref{fig:summary}).
\textbf{(1:)} When restricting our attention to the top performing models, we observe that the combination of ACGD with the WGAN loss and without any regularization achieves higher inception score than all other combinations tested.
\textbf{(2:)} The improvement obtained from training with ACGD persists when measuring image quality according to the Fr\'echet inception distance (FID) \citep{heusel2017gans}.
\textbf{(3:)} When comparing the number of gradient computations and Hessian-vector products, ACGD is significantly slower than the WGAN loss with spectral normalization trained with Adam, because of the iterative solution of the linear system in ACGD's update rule.
\textbf{(4:)} The only instance where we observe erratic behavior with ACGD is when using OGAN without regularization, or with a small $\ell_2$ penalty. However, ACGD still outperforms Adam in those cases. In particular, training with Adam breaks down completely when using the original saturating loss (as we do for ACGD).
\textbf{(5:)} When plotting the difference between the inception scores obtained by ACGD and Adam for the same model over the number of iterations, for all models, we observe that ACGD often performs significantly better, and hardly ever significantly worse.
Since CGD strengthens the effects of ICR, the performance improvements obtained with CGD provide further evidence that ICR is a key factor to GAN performance.
\section{Conclusion and outlook}
In this work, we have pointed out a fundamental flaw present in the static minimax approach to understanding GANs.
As an alternative we explain GAN performance with ICR, a mechanism that focuses on the \emph{dynamics} of simultaneous training.
While there is more work left to be done in order to characterize ICR, we provide a number of illustrative experiments on low-dimensional examples and real GANs that support our conclusions.
We also use a game-theoretic interpretation of ICR to identify algorithms such as CGD that can lead to stronger ICR.
Indeed, comprehensive experiments on CIFAR10 show systematically improved inception scores and stability when training with CGD, adding further support to our findings.\\
An important direction for future work is the closer investigation of the generator.
Recent work on variational autoencoders \citep{razavi2019generating} and GANs \citep{karras2019style} suggests that the inductive biases of the generator play an important role, as well.
Understanding their interaction with ICR is an important direction of future work.
We also hope to better understand the relationship of ICR with local solution concepts such as \emph{``proximal equilibria''} \citep{farnia2020gans} that emphasize slow improvement of the discriminator.
\newpage
\subsubsection*{Acknowledgments}
We would like to thank Houman Owhadi for helpful discussions.
A. Anandkumar is supported in part by Bren endowed chair, DARPA PAIHR00111890035, Raytheon, and Microsoft, Google and Adobe faculty fellowships.
F. Sch{\"a}fer gratefully acknowledges support by the Air Force Office of Scientific Research under award number FA9550-18-1-0271 (Games for Computation and Learning) and the Ronald and Maxine Linde Institute of Economic and Management Sciences at Caltech.
H. Zheng is supported by Zhiyuan College, Shanghai Jiao Tong University.
\section{Introduction}\label{sec:intro}
The $p{}+^{11}\text{B}$ reaction has been extensively used to study the excitation structure of the $^{12}$C nucleus. This includes measurement of proton widths $\Gamma_p$, the partial $\gamma$ widths $\Gamma_{\gamma_0}$ and $\Gamma_{\gamma_1}$ to the two lowest levels in $^{12}$C, and the partial $\alpha$ widths $\Gamma_{\alpha_0}$ and $\Gamma_{\alpha_1}$ to the two lowest levels in $^{8}$Be~\cite{symons1963,segel1965,Becker:1987fk}. %
The focus of the present work is the two isospin $T=1$ resonances occurring at proton energies of $E_p=2.00$~MeV and $2.64$~MeV, which correspond to the levels \level{17.76}{0}{+} and \level{18.35}{3}{-}.\footnote{Throughout this paper the notation {\level{E_x}{J}{\pi}} is used to denote excited nuclear levels, $E_x$ being the excitation energy in MeV and $J^{\pi}$ the spin and parity.} %
The $\gamma$ decay of these levels to lower-lying, unbound levels in $^{12}$C was studied by Hanna {\it et al.}~\cite{hanna1982} who identified two rather strong transitions feeding two narrow levels above the $3\alpha$ threshold: \level{17.76}{0}{+}$\rightarrow\;$\level{12.71}{1}{+} and \level{18.35}{3}{-}$\rightarrow\;$\level{9.64}{3}{-}.
Using the conventional approach of detecting the $\gamma$ transitions with a large scintillator, Hanna {\it et al.}\ could not have identified weak transitions or transitions to broad levels. %
Recently, such transitions have been studied using a technique where the final level is identified by measuring the momenta of the three $\alpha$ particles resulting from its breakup~\cite{NIM_alcorta,kirsebom09_plb,laursen2016_2}. %
Here we wish to explore, first, whether $\gamma$ transitions from the levels \level{17.76}{0}{+} and \level{18.35}{3}{-} to broad, lower-lying levels similar to those observed in Ref.~\cite{laursen2016_2} can be identified, and second, whether the strength of the transitions already observed by Hanna {\it et al.}\ can be confirmed with this indirect detection method.
\section{Experiment}\label{sec:exp}
The experiment was performed at the 5~MV Van de Graaff accelerator at the Department of Physics and Astronomy at Aarhus University. The proton beam was directed onto the target using electrostatic deflection plates and a magnetic bending stage. The beam size was defined by two variable apertures placed after the magnet, both set to an opening of 2\,mm and placed 0.5\,m apart.
The ion energy was adjusted by means of a generating voltmeter, which was calibrated on an absolute scale using the $^{27}\text{Al}(p,\alpha)^{24}\text{Mg}$ and $^{27}\text{Al}(p,\gamma)^{28}\text{Si}$ reactions. The energy spread of the beam was less than 1\,keV. Beam intensities of several 100\,nA can be delivered by the accelerator, but only beams of less than 1~nA were used for the experiment discussed here. The beam current was measured by a Faraday cup placed in a 1\,m long beam pipe downstream of the target chamber, specially designed to reduce the amount of beam back-scattered from the Faraday cup to the detector setup.
Long measurements were performed at proton energies of $E_p=2.00$~MeV and $2.64$~MeV. At the lower energy, a total of 295~$\mu$C was directed on the target over a period of 211~hours, which corresponds to an average current of 0.39~nA. For the higher energy setting, the corresponding numbers are 124~$\mu$C, 77~hours, and 0.45~nA. Additionally, multiple, short measurements were performed across the energy range 0.5--3.5~MeV as reported in Ref.~\cite{munch2020}.
The target consisted of a layer of 12.6(1.2)~$\mu$g/cm$^2$ isotope-enriched $^{11}$B deposited on a 4~$\mu$g/cm$^2$ carbon backing~\cite{munch2020}. %
The target was manufactured from 99\% enriched $^{11}\text{B}$ by slow evaporation in a Cu crucible. In addition to B, C, and Cu, the target was found to contain H and O impurities, likely due to condensation of water vapor. The presence of these impurities was inferred from the corresponding Rutherford scattering peaks in the singles spectra. Considering the known target constituents, the only open three-body channels at the beam energies used in this study are $p+{}^{10}\text{B} \rightarrow 2\alpha + {}^3\text{He}$ and $p+{}^{11}\text{B} \rightarrow 3\alpha$.%
The target was placed in the middle of a compact array of double sided Si strip detectors (DSSDs) at an angle of 45$^{\circ}$ with respect to the axis defined by the beam, as shown in Fig.~\ref{fig:setup}. %
Annular DSSDs with 24 ring strips and 32 annular strips were placed upstream and downstream of the target, and two square DSSDs with 16 horizontal strips and 16 vertical strips were placed on either side of the target orthogonal to the beam axis.
\begin{figure}[t!]
\includegraphics[width=\columnwidth]{setup_ruler.pdf}
\caption{\label{fig:setup} Schematic illustration of the detector setup which consisted of two annular and two square double-sided silicon strip detectors. The proton beam enters the setup through one annular detector and exits through the other. The $^{11}\text{B}$ target is placed in the middle of the setup at a 45$^{\circ}$
angle with respect to the beam axis.}
\end{figure}
The electronics and data acquisition consisted of a VME based system with ADCs and TDCs fed by signals from a chain of preamplifiers and amplifiers.
The dead time was around 10\% with trigger rates of several kHz.
\section{Event selection}
The data is analyzed following an approach similar to that of Laursen {\it et al.}~\cite{laursen2016_2}. Particle energies and hit positions on the DSSDs are determined by requiring that the energies recorded in the front and back strips agree within $\pm 50$~keV. Energy conservation cannot be used as a condition to reduce unwanted background because we are searching for events where some of the energy is carried away by a $\gamma$ ray. However, the momentum carried away by the $\gamma$ ray is sufficiently small that we can require momentum conservation of the three $\alpha$ particles. Hence, $3\alpha$ events are identified as triple-coincidence events fulfilling both a TDC cut of $\pm 15$~ns and momentum conservation, but not necessarily energy conservation.
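Schematically, the selection can be sketched as follows (an illustrative sketch; the momenta and the numerical tolerance below are hypothetical, the actual cut being set by the contours in the figures): a triple coincidence is accepted when the vector sum of the three $\alpha$ momenta balances the beam momentum, with no condition imposed on the summed energies.

```python
import math

def is_3alpha_candidate(p_alphas, p_beam, p_tol=40.0):
    """Accept a triple coincidence on momentum balance alone (MeV/c).

    p_alphas: list of three lab-frame momentum vectors [px, py, pz];
    p_beam:   beam momentum vector; p_tol is a hypothetical tolerance.
    The gamma ray carries away energy but comparatively little momentum,
    so energy conservation is deliberately NOT required."""
    residual = [sum(p[i] for p in p_alphas) - p_beam[i] for i in range(3)]
    return math.sqrt(sum(r * r for r in residual)) < p_tol

p_beam = [0.0, 0.0, 70.0]                    # hypothetical beam momentum
balanced = [[30.0, 0.0, 20.0], [-30.0, 0.0, 20.0], [0.0, 0.0, 30.0]]
random_bg = [[100.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
assert is_3alpha_candidate(balanced, p_beam)       # momenta balance the beam
assert not is_3alpha_candidate(random_bg, p_beam)  # random coincidence fails
```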
Figures \ref{fig:dPEx1} and \ref{fig:dPEx2} show scatter plots of the total momentum in the centre of mass (c.m.) frame versus the $^{12}$C excitation energy calculated from the triple-coincidence events.
\begin{figure}[h!]%
\includegraphics[width=0.99\columnwidth,clip=true,trim=0 0 0 50]{dPEx_2000keV.pdf}
\caption{\label{fig:dPEx1} Triple-coincidence data obtained at $E_p=2.00$~MeV. The $x$ axis is the excitation energy in $^{12}$C, and the $y$ axis is the total momentum in the centre of mass frame, both determined from the energies and positions of the three detected particles. The events enclosed by the red contour fulfill momentum conservation, but not energy conservation, and are therefore interpreted as $\gamma$-delayed 3$\alpha$ emissions from $^{12}$C.}
\end{figure}%
\begin{figure}[h!]%
\includegraphics[width=0.99\columnwidth,clip=true,trim=0 0 0 50]{dPEx_2640keV.pdf}
\caption{\label{fig:dPEx2} Triple-coincidence data obtained at $E_p=2.64$~MeV. The axes are the same as in Figure \ref{fig:dPEx1}.}
\end{figure}%
The intense groups of events just below and just above $E_x = 18$~MeV in the two figures correspond to 3$\alpha$ decays directly from the levels \level{17.76}{0}{+} and \level{18.35}{3}{-}, respectively. These events fulfill both energy and momentum conservation. The events further to the left of these intense regions, enclosed by the red contours, are interpreted as events where some of the energy is carried away by a $\gamma$ ray, and they are therefore the events of interest.
To verify that these events are in fact genuine triple-$\alpha$ coincidences as opposed to, say, two $\alpha$ particles in coincidence with a noise signal, the following checks were made: First, the energy distribution of the individual detections was inspected to ensure that the energies were comfortably above the ADC thresholds. Second, the spatial distribution of the detections across the surface of the DSSDs was inspected to verify that the events were not caused by a single or a few noisy strips. Third, the effect on the event rate of widening the TDC cut was studied. For genuine coincidences one expects the event rate to plateau once the width of the TDC cut exceeds the experimental resolution, whereas for random coincidences one expects the event rate to continue increasing.
Looking at Figures \ref{fig:dPEx1} and \ref{fig:dPEx2} one notes the occurrence of a number of clusters of events at low excitation energy ($E_x \sim 7$--9~MeV) which exhibit a considerable momentum mismatch ($\Delta P \sim 50$--250~MeV/c). Each of these clusters was subject to a careful analysis, which revealed all but one of the clusters to be comprised of random coincidences, in most cases involving $p+{}^{11}$B coincidences or $p+p$ coincidences due to elastic scattering on H impurities in the target.
A dedicated analysis followed to clarify the origin of the single cluster that could not be attributed to random coincidences. This cluster was found to consist of $p+p$ coincidences in which one of the protons, having penetrated into the active volume of one of the DSSDs, is backscattered into a second DSSD, thus producing two separate detections with a combined energy close to the original proton energy. Having identified $p+p$ and $p+{}^{11}$B coincidences as significant sources of background, dedicated kinematic cuts were implemented to selectively remove such events. As a result, the event density within the clusters was substantially reduced and some clusters were fully removed, leaving only the few clusters visible in Figures \ref{fig:dPEx1} and \ref{fig:dPEx2}.%
Figures \ref{fig:2MeV} and \ref{fig:2.64MeV} focus specifically on those events fulfilling momentum conservation, but not energy conservation. The upper panels show scatter plots of the excitation energy in $^{12}$C versus the individual energies of the three $\alpha$-particles in the $^{12}$C rest frame. These scatter plots show the different $3\alpha$ breakup mechanisms of the levels in $^{12}$C populated in the $\gamma$ decays. The diagonal lines from the lower left to the upper right represent breakups that proceed by $\alpha$ decay to the ground state of $^8$Be. Owing to parity and angular momentum conservation this decay mechanism is only allowed for natural-parity levels in $^{12}$C. The two $\alpha$ particles from the subsequent breakup of $^8$Be, detected in coincidence with the primary $\alpha$ particle, form a broad band running from left to right with half the slope of the upper diagonal. The positions of known levels in $^{12}$C are indicated on the scatter plots.
\begin{figure}[t!]
\includegraphics[width=0.99\columnwidth,clip=true,trim=0 50 0 50]{fynbo1.pdf}
\includegraphics[width=0.99\columnwidth,clip=true,trim=0 0 0 40]{ExSpec1.pdf}
\caption{\label{fig:2MeV} $\gamma$-delayed $3\alpha$ spectra obtained at $E_p=2.00$~MeV. Panel (a) is a scatter plot of the excitation energy in $^{12}$C versus the individual energies of the three detected $\alpha$ particles in the $^{12}$C rest frame. The positions of selected known levels in $^{12}$C are indicated. Panel (b) is the projection of the scatter plot on the excitation energy axis. The shaded histogram is obtained by selectively projecting events in which the $3\alpha$ breakup proceeds via the ground state of $^8$Be. The green and magenta (short- and long-dashed) curves show the detection efficiencies determined from Monte-Carlo simulations.}
\end{figure}
\begin{figure}[t!]
\includegraphics[width=0.99\columnwidth,clip=true,trim=0 50 0 50]{fynbo2.pdf}
\includegraphics[width=0.99\columnwidth,clip=true,trim=0 0 0 40]{ExSpec2.pdf}
\caption{\label{fig:2.64MeV} Similar to Figure \ref{fig:2MeV} but for $E_p=2.64$~MeV.}
\end{figure}
The lower panels of Figures \ref{fig:2MeV} and \ref{fig:2.64MeV} show the projections of the scatter plots on the excitation energy axes with the shaded histograms providing the projection selectively for the events on the diagonals, which fulfill the condition $E_{2\alpha}<210$~keV for at least one pair of $\alpha$ particles, $E_{2\alpha}$ being the relative kinetic energy of the pair. The coloured curves on these plots will be discussed later. From the trigger rate and the width of the TDC gate we estimate the number of random coincidences to be 6 events in Figure \ref{fig:2MeV} and 9 events in Figure \ref{fig:2.64MeV}.
The transitions identified by Hanna {\it et al.} \cite{hanna1982}, namely \level{17.76}{0}{+}$\rightarrow\;$\level{12.71}{1}{+} and \level{18.35}{3}{-}$\rightarrow\;$\level{9.64}{3}{-}, are clearly identifiable in Figures \ref{fig:2MeV} and \ref{fig:2.64MeV}, respectively. In addition to these, there is clear evidence for additional transitions, which will be discussed in more detail below.
\section{Cross sections}
We determine the capture cross section, $\sigma_{\gamma}$, from the number of observed events in each excitation energy bin, taking into account the triple-$\alpha$ detection efficiency, the target thickness, the integrated charge on the target, and the dead-time of the data acquisition system.
The cross sections thus obtained at $E_p=2.00$~MeV and $2.64$~MeV are summarized in Tables \ref{tbl:tab3} and \ref{tbl:tab4}, respectively.
\begin{table}[b]
\setlength{\tabcolsep}{4pt}
{\footnotesize
\centering
\caption{$^{11}\text{B}(p,3\alpha)\gamma$ cross section at $E_p=2.00$~MeV. $E_x$ is the $^{12}$C excitation energy inferred from the momenta of the
three $\alpha$ particles; $\sigma_{\gamma}$ is the cross section and is subject to an additional 10\% systematic uncertainty from the target thickness;
the events are divided into two groups: those that correspond to breakups proceeding via the $^8$Be ground state (gs) and those that do not (exc). The first energy bin (I) is not included since all the events in this bin are attributed to $p+{}^{10}$B.}
\label{tbl:tab3}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}cc *{1}{d{1.6}} *{2}{d{2.6}} *{1}{d{1.6}}}
\toprule
\multirow{2}{*}{Bin} & \multirow{2}{*}{$E_x$~(MeV)} & \multicolumn{4}{c}{$\sigma_{\gamma}$~($\mu$b)} \\ \cline{3-6}
& & \mc{gs} & \mc{exc} & \mc{tot} & \mc{Ref.~\cite{hanna1982}}\\
\midrule
II & 9.2--10.0 & 0.18(9) & \mc{0.005--0.11} & 0.24(10) & \\
III & 10.0--11.3 & 1.56(27) & 0.21(11) & 1.77(29) & \\
IV & 11.3--12.3 & 0.27(8) & 1.6(5) & 1.8(5) & \\
V & 12.3--13.0 & 0.09(4) & 13.9(26) & 14.0(26) & 6.5(26) \\
VI & 13.0--14.5 & 0.25(7) & 0.9(4) & 1.2(4) & \\
VII & 14.5--16.0 & 0.13(5) & 1.6(5) & 1.8(5) & \\
\bottomrule
\end{tabular*}
}
\end{table}
\begin{table}[h]
\setlength{\tabcolsep}{5pt}
{\footnotesize
\caption{$^{11}\text{B}(p,3\alpha)\gamma$ cross section at $E_p=2.64$~MeV. See caption of Table~\ref{tbl:tab3} for further information.}
\label{tbl:tab4}
\centering
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}cc *{1}{d{1.6}} *{3}{d{1.5}} }
\toprule
\multirow{2}{*}{Bin} & \multirow{2}{*}{$E_x$~(MeV)} & \multicolumn{4}{c}{$\sigma_{\gamma}$~($\mu$b)} \\ \cline{3-6}
& & \mc{gs} & \mc{exc} & \mc{tot} & \mc{Ref.~\cite{hanna1982}}\\
\midrule
II & 9.3--9.9 & 2.4(7) & \mc{0.08--0.5} & 2.7(7) & 4.2(17) \\
III & 9.9--11.3 & 1.3(4) & \mc{0.06--0.4} & 1.6(4) & \\
IV & 11.3--12.3 & 0.59(18) & 1.6(7) & 2.1(7) & \\
V & 12.3--13.0 & 0.15(7) & 1.6(6) & 1.7(6) & \\
VI & 13.0--14.5 & 0.26(11) & 1.3(5) & 1.5(6) & \\
VII & 14.5--16.5 & 0.71(19) & 4.5(15) & 5.2(15) & \\
\bottomrule
\end{tabular*}
}
\end{table}
The detection efficiency depends on the 3$\alpha$ breakup mechanism and differs significantly between breakups that do and do not proceed via the ground state (g.s.) of $^8$Be. The green and magenta (short- and long-dashed) curves in the lower panels of Figures \ref{fig:2MeV} and \ref{fig:2.64MeV} show the detection efficiencies determined from Monte-Carlo simulations. %
For the excited channel, phase-space ($\Phi$) simulations were used to estimate the detection efficiency in all excitation energy bins, except the bins containing the \level{11.83}{2}{-} and \level{12.71}{1}{+} levels, where more accurate models~\cite{fynbo03} were used. The error resulting from adopting the phase-space approximation is estimated to be at most $\sim 15$\%, which we include as an additional uncertainty on the detection efficiency for those energy bins where phase-space simulations were used. For the other bins, and for the g.s.\ channel where the angular distributions of Ref.~\cite{munch2020} were used, we adopt a 5\% model uncertainty.
We note that the ratio of triple-coincidence events to single events predicted by the simulation for the g.s.\ channel is 15\% below the experimental ratio. We ascribe this to inaccuracies in the representation of the beam-target-detector geometry in the simulation and account for it by including an additional 15\% uncertainty on our efficiency estimate.
We find the detection efficiency to be insensitive to uncertainties in the ADC thresholds, except for the lowest excitation energy bin ($E_x < 9.2$~MeV) where ADC thresholds contribute an estimated 8\% to the overall uncertainty.
These uncertainty contributions are all added up in quadrature, and finally added linearly with the statistical counting uncertainty to obtain the overall uncertainty on the cross section in each excitation energy bin.
\section{Deduced $\gamma$-ray widths}
The excitation functions of the $\gamma$ rays to the \level{9.64}{3}{-} and \level{12.71}{1}{+} levels have been measured in considerable detail in the energy range $E_p = 1.8$--$3.0$~MeV by Hanna {\it et al.}~\cite{hanna1982} by means of conventional $\gamma$-ray spectroscopy. Both excitation functions were found to be resonant, allowing the authors to attribute the $\gamma$ rays to the transitions \level{17.76}{0}{+}$\rightarrow\;$\level{12.71}{1}{+} and \level{18.35}{3}{-}$\rightarrow\;$\level{9.64}{3}{-}, respectively. %
One drawback of the indirect experimental approach adopted in the present work, which involves detecting the three $\alpha$ particles rather than the $\gamma$ ray, is the reduced event rate compared to conventional $\gamma$-ray spectroscopy. Therefore, excitation functions could not be obtained in a reasonable amount of time and measurements were limited to a few selected beam energies. %
In the absence of excitation functions to support a resonant interpretation of the measured cross sections, we rely on the findings of Hanna {\it et al.}~\cite{hanna1982} concerning the resonant character of the $\gamma$ rays to the \level{9.64}{3}{-} and \level{12.71}{1}{+} levels, as well as theoretical estimates of the direct-capture cross section, to justify a resonant interpretation of the new $\gamma$ rays observed in this work. These theoretical estimates are discussed next. %
\subsection{Direct capture}
For the purpose of estimating the (E1) direct-capture cross section, we adopt the
model of Rolfs~\cite{rolfs1973} which approximates the many-nucleon problem
by a two-body problem in which the projectile and target are treated as inert cores
and their interaction is described by a square-well potential with the depth adjusted
to reproduce the binding energy of the final state. This simple model was found
to yield accurate results for the capture reaction $^{16}\text{O}(p,\gamma)$ to
the two bound levels in $^{17}$F, which are both well described by simple
single-particle configurations involving only a single orbital~\cite{rolfs1973}.
Here, we apply the model to capture transitions to levels in $^{12}$C which are
not well described by single-particle configurations and also are unbound with
respect to decay to the $3\alpha$ final state. Therefore, we do not expect
the model to be very accurate and will use its predictions merely as
order-of-magnitude estimates, accurate only within a factor of 2--3 or so.
Estimates of the direct-capture cross section to four known levels
in $^{12}$C, computed with the model of Rolfs using the parameters
listed in Table~\ref{tbl:rolfs}, are shown in Fig.~\ref{fig:dc-cross-sec}. %
The computed cross sections are proportional
to the assumed spectroscopic factor, which is not predicted by the model
itself. For the \level{12.71}{1}{+} level we take the spectroscopic factor
from Ref.~\cite{adelberger77}. For the remaining levels we use
the average values of the spectroscopic factors compiled in
Ref.~\cite{tunl12}, noting that there is a substantial spread
($\sim 50\%$) in the spectroscopic factors obtained by different
authors. %
In all cases, we assume a single-orbital configuration, with
$\ell_{\text{i}}=1$ for the \level{12.71}{1}{+} level and
$\ell_{\text{i}}=2$ for the remaining levels. %
The channel radius was taken to be 4.38~fm.
\begin{table}[b]
\setlength{\tabcolsep}{5pt}
\centering
{\footnotesize
\caption{Parameters used for estimating the cross section
for direct capture to four levels in $^{12}$C based on
the model of Ref.~\cite{rolfs1973}. $\ell_{\text{i}}$ are
the orbital angular momenta in the entrance channel,
$\ell_{\text{f}}$ is the orbital angular momentum
assumed for the final state, and $S$ is the spectroscopic
factor. The spectroscopic factor of the \level{12.71}{1}{+}
level was taken from Ref.~\cite{adelberger77}; for the
remaining levels we use the average values of the spectroscopic
factors reported in Ref.~\cite{tunl12}. The channel radius was
taken to be 4.38~fm.}
\label{tbl:rolfs}
\begin{tabular*}{0.55\linewidth}{@{\extracolsep{\fill}}ccccc}
\toprule
\mc{$E_x$~(MeV)} & \mc{$J^{\pi}$} &
\mc{$\ell_{\text{i}}$} & \mc{$\ell_{\text{f}}$} &
\mc{$S$} \\
\midrule
9.64 & $3^-$ & 1,3 & 2 & 0.30 \\
10.84 & $1^-$ & 1,3 & 2 & 0.23 \\
11.83 & $2^-$ & 1,3 & 2 & 0.11 \\
12.71 & $1^+$ & 0,2 & 1 & 0.86 \\
\bottomrule
\end{tabular*}
}
\end{table}
\begin{figure}[h!]
\includegraphics[width=0.99\columnwidth,clip=true,trim=0 0 0 0]{dc_cross_sec.pdf}
\caption{\label{fig:dc-cross-sec} Estimates of the cross section for $p+{}^{11}\text{B}$ direct capture to four selected levels in $^{12}$C based on the model of Ref.~\cite{rolfs1973}.}
\end{figure}
The excitation functions measured by Hanna {\it et al.}~\cite{hanna1982}
(at 90$^{\circ}$) indicate that direct capture contributes at most $\sim 10\%$
to the total capture cross section to the \level{12.71}{1}{+} level at
$E_p=2.00$~MeV, corresponding to 1.4~$\mu$b, which is within a factor of two of
the cross section predicted by the model (2.6~$\mu$b). %
Similarly, the direct-capture contribution to the cross section to the \level{9.64}{3}{-}
level can be estimated to be at most $\sim 15\%$ of the total capture cross section at 2.64 MeV,
corresponding to 0.4~$\mu$b, a factor of four below the model prediction (1.6~$\mu$b). %
Thus, we conclude that our rather crude model provides reasonable estimates
of the direct-capture cross section, with a tendency to overestimate the actual
cross section by a factor of two to four. Comparing the predicted direct-capture
cross sections (Fig.~\ref{fig:dc-cross-sec}) to the measured total capture cross
sections (Tables~\ref{tbl:tab3} and \ref{tbl:tab4}), we conclude that resonant
capture is likely to be the dominant mechanism in most energy bins, but with
a substantial contribution from direct capture.
\subsection{Resonant capture}
The goal of the analysis is to calculate the partial $\gamma$ widths of the
levels in $^{12}$C mediating the observed (resonant) capture transitions. For this we use
the resonant cross section formula,
\begin{equation}\label{eq:resonant}
\sigma_{\gamma,\textrm{R}} = 4\pi {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}^2 \omega \Gamma_p \Gamma_{\gamma} / \Gamma^2
\end{equation}
where $\omega = \tfrac{1}{8}(2J+1)$ is the spin statistical factor appropriate for $p+{}^{11}\text{B}$.
Using this equation, the partial $\gamma$-decay widths can be determined from the measured
cross sections, provided the partial proton decay widths ($\Gamma_p$) and the total widths
($\Gamma$) are known.
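Eq.~(\ref{eq:resonant}) is straightforward to evaluate numerically. The following sketch is illustrative only (the masses and unit conversions are rounded, and the function is not part of the analysis code):

```python
import math

HBARC = 197.327                  # MeV fm
M_P, M_B11 = 938.272, 10252.5    # approximate nuclear masses in MeV/c^2

def sigma_resonant_ub(E_cm, J, Gamma_p, Gamma_gamma, Gamma):
    """On-resonance capture cross section of Eq. (resonant), in microbarn.

    E_cm is the centre-of-mass energy in MeV; the widths may be in any
    common unit, since only their ratio enters.
    """
    mu_c2 = M_P * M_B11 / (M_P + M_B11)                # reduced mass (MeV)
    lambda_bar_sq = HBARC**2 / (2.0 * mu_c2 * E_cm)    # (lambda-bar)^2 in fm^2
    omega = (2 * J + 1) / 8.0   # spin factor for p (1/2+) on 11B (3/2-)
    sigma_fm2 = (4.0 * math.pi * lambda_bar_sq * omega
                 * Gamma_p * Gamma_gamma / Gamma**2)
    return sigma_fm2 * 1.0e4    # 1 fm^2 = 10^4 microbarn
```

The linear dependence on $\Gamma_{\gamma}$ is what allows the partial $\gamma$ widths to be read off directly from the measured cross sections.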
In Table~\ref{tbl:known-res}, we list known levels in the excitation region $E_x = 16.5$--$18.5$~MeV, which can mediate resonant captures to lower-lying levels at the beam energies investigated in this work. The levels and their properties are obtained from the most recent TUNL compilation~\cite{tunl12} with a few exceptions, as discussed below. %
Fig.~\ref{fig:reson-schem} gives a schematic representation of the levels listed in Table~\ref{tbl:known-res}. The quantity $y$, shown on the abscissa, is calculated from the expression,
\begin{equation}\label{eq:y}
y(E_x) \; = \; 4\pi {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}^2 \omega f(E_x) / \Gamma \; ,
\end{equation}
where the resonance shape is approximated as a Breit-Wigner distribution multiplied by the penetrability for the lowest possible relative orbital angular momentum,
\begin{equation}\label{eq:bw}
f(E_x) \; = \; \frac{P_{\ell}}{\hat{P}_{\ell}} \times \frac{(\Gamma/2)^2}{(E_x - \hat{E}_x)^2 + (\Gamma/2)^2} \; .
\end{equation}
We note that on resonance, $\sigma_{\gamma,\text{R}} = y \Gamma_{\gamma} \Gamma_p / \Gamma $.
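A minimal sketch of the profile in Eq.~(\ref{eq:bw}) follows, with the penetrability ratio $P_{\ell}/\hat{P}_{\ell}$ supplied as a precomputed factor (evaluating it properly requires Coulomb wave functions; it equals one on resonance):

```python
def resonance_shape(E_x, E_hat, Gamma, penetrability_ratio=1.0):
    """Breit-Wigner profile of Eq. (bw), equal to 1 on resonance.

    penetrability_ratio stands in for P_l / P_l(E_hat); the default of 1
    is exact only at E_x = E_hat.
    """
    half_width = 0.5 * Gamma
    return (penetrability_ratio * half_width**2
            / ((E_x - E_hat)**2 + half_width**2))
```

The profile equals one at $\hat{E}_x$ and one half at $\hat{E}_x \pm \Gamma/2$, so $\Gamma$ retains its meaning as the full width at half maximum.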
\begin{figure}[t]
\includegraphics[width=0.99\columnwidth]{reson-schem.pdf}
\caption{\label{fig:reson-schem}Schematic representation of the known levels in $^{12}$C in the excitation region $E_x = 16.5$--$18.5$~MeV. The quantity $y$, shown on the abscissa, is calculated from Eq.~(\ref{eq:y}). The energies studied in this work are indicated by the arrows. In each case, we give the integrated charge, corrected for the dead time of the data acquisition system. Dashed lines indicate levels that were not found to contribute to the cross sections measured in this work. Note that the 16.62-MeV level has been downscaled by a factor of five for improved display.}
\end{figure}
The energies ($\hat{E}_x$) and total widths ($\Gamma$) of the levels listed in Table~\ref{tbl:known-res} are generally well constrained, whereas proton widths ($\Gamma_p$) are either missing or quoted without uncertainties. Proton widths have typically been determined by subtracting the $\alpha$ widths ($\Gamma_{\alpha_0}$, $\Gamma_{\alpha_1}$) from the total width. In particular, $\Gamma_{\alpha_1}$ has been poorly constrained in previous experiments due to the complex $3\alpha$ correlations in this channel~\cite{segel1965}, and therefore the proton widths should be used with some caution. Also, the possibility should not be discounted that the excitation region $16.5$--$18.5$~MeV contains broad $T=0$ levels with large $\alpha$ widths ($\Gamma_{\alpha} > 1$~MeV), which have not been clearly resolved in previous studies.
\begin{table}[b]
{\footnotesize
\begin{minipage}{\linewidth}
\renewcommand{\thefootnote}{\alph{footnote}}
\centering
\caption{\label{tbl:known-res}Known levels in $^{12}$C between 16.5~MeV and 18.5~MeV. Properties obtained from Ref.~\cite{tunl12} with a few exceptions, as discussed in the text. Values in parentheses indicate uncertain assignments.}
\begin{tabular}{ccccc}
\toprule
$\hat{E}_x$~(MeV) & $\Gamma$ (keV) & $\Gamma_p$ (keV) & $J^{\pi}$ & $T$ \\
\midrule
16.62(5) & 280(28) & 150 & $2^-$ & 1 \\
17.23 & 1150 & 1000 & $1^-$ & 1 \\
17.768 & 96(5) & 76 & $0^+$ & 1 \\
18.13 & 600(100) & - & ($1^+$) & (0) \\
18.16(7) & 240(50) & - & ($2^-$) & (0) \\
18.35(5) & 350(50) & 68 & $3^-$ & 1 \\
18.35(5) & 350(50) & - & $2^-,2^+$ & $0+1$ \\
(18.39) & 42 & 33 & $0^-$ & (1)\\
\bottomrule
\end{tabular}
\end{minipage}
}
\end{table}
We proceed by briefly reviewing the available data for each of the levels in Table~\ref{tbl:known-res}. Unless otherwise stated, the data are taken directly from the most recent TUNL compilation~\cite{tunl12}.
\paragraph{16.62, 2$^-$}
The properties of this level are well established, although the precision of $\Gamma_p$ is unclear. The level is clearly observed in $(p,p)$, $(p,\alpha_1)$, and $(p,\gamma_1)$, as established already in the 1950s and 1960s, {\it e.g.}, Refs.~\cite{dearnaley1957, segel1965}. There is also compelling evidence for smaller $\gamma$ branches to the ground state and the \level{12.71}{1}{+} and \level{15.11}{1}{+} levels~\cite{zijderhand1990}, but since the excitation functions were not measured the evidence is not conclusive.
\paragraph{17.23, 1$^-$}
Owing to its large width, the level is not easily resolved. It is most clearly seen in $(p,\gamma_0)$~\cite{segel1965}, while its precise contribution to $(p,p)$, $(p,\alpha_0)$, and $(p,\alpha_1)$ remains somewhat uncertain. There is compelling evidence for smaller $\gamma$ branches to the \level{4.44}{2}{+}, \level{7.65}{0}{+}, \level{12.71}{1}{+}, and \level{15.11}{1}{+} levels~\cite{zijderhand1990}, but since excitation functions were not measured the evidence is not conclusive.
\paragraph{17.76, 0$^+$}
The level is seen very clearly in $(p,p)$, $(p,\alpha_0)$, and $(p,\gamma_{12.71})$. The total width has been determined rather accurately by Hanna {\it et al.} and the proton width appears reliable. The level energy of 17.768~MeV was determined from the centroid of the resonance peak in the $(p,\alpha_0)$ spectrum of Ref.~\cite{munch2020}.
\paragraph{18.13, 1$^+$}
Evidence for the existence of this level comes from a single study of $(p,\gamma_{15.11})$~\cite{suffert1972}. There are no constraints on the proton width and the spin-parity and isospin assignments are not conclusive.
\paragraph{18.16, 2$^-$}
Evidence for the existence of this level also comes from a single study, in this case of $(p,d)$~\cite{lewis1987}. There are no constraints on the proton width and the spin-parity and isospin assignments are not conclusive. It was suggested in Ref.~\cite{lewis1987} that the \level{18.16}{2}{-} and \level{18.13}{1}{+} levels might be one and the same level. Indeed, a spin-parity assignment of $2^-$ appears compatible with the data of Ref.~\cite{suffert1972}. In the TUNL compilation~\cite{tunl12}, the two levels are assumed to be one and the same, with the 1$^+$ spin-parity assignment of Ref.~\cite{suffert1972} adopted and the level energy and width taken from Ref.~\cite{lewis1987}. However, the very different widths reported in the two studies contradict a single-level interpretation. Therefore, we assume the resonances reported in Refs.~\cite{suffert1972, lewis1987} to correspond to distinct levels.
\paragraph{18.35, 3$^-$ \& 2$^-$,2$^+$}
A multitude of experimental probes provide evidence for the existence of at least two, if not three, levels at 18.35~MeV, cf.\ the discussion in Ref.~\cite{Neuschaefer:1983lr}. One of these levels, which is observed both in the spectra of $(p,\alpha_0)$ and $(p,\alpha_1)$ and in the excitation curves of $(p,\gamma_0)$, $(p,\gamma_1)$, and $(p,\gamma_{9.64})$, has been firmly assigned as $3^-$ and isospin $T=1$, with additional evidence to support this assignment coming from $(e,e^{\prime})$ and $^{11}$B$(d,n\alpha_0)$ data~\cite{Neuschaefer:1983lr}. %
On the other hand, $(p,p^{\prime})$ and $(\pi,\pi^{\prime})$ data provide substantial evidence for the presence of an isospin-mixed $2^-$ level at 18.35~MeV with a width similar to that of the $3^-$ level, while $(\alpha,\alpha^{\prime})$ data suggest a $2^+$ level at this energy with isospin $T=0$~\cite{kiss1987}. %
$\gamma$ rays to the \level{12.71}{1}{+} and \level{15.11}{1}{+} levels have also been observed at this energy~\cite{zijderhand1990}, but in the absence of yield-curve measurements they cannot be attributed to the 18.35-MeV level(s) with certainty. %
Given the complicated situation with two or possibly three overlapping levels, the widths quoted in Table~\ref{tbl:known-res} should be used with some caution.
\paragraph{18.39, 0$^-$}
The level has only been observed in $(p,p^{\prime})$. Its spin-parity assignment appears firm although it is based solely on cross-section arguments~\cite{segel1965}, while the isospin remains unknown. The total width and proton width both appear reliable.
\subsection{Partial $\gamma$ widths}
In the following, we provide a resonant interpretation of the observed capture cross
sections that ignores the sub-dominant direct-capture component, {\it i.e.}, $\sigma_{\gamma} \approx \sigma_{\gamma,\textrm{R}}$. With this approximation, partial $\gamma$-decay widths can be deduced directly from Eq.~(\ref{eq:resonant}). For those levels where the proton width is unknown, we adopt $\Gamma_p = \Gamma$. This effectively renders the $\gamma$-ray widths deduced for these levels lower limits. For the purpose of estimating off-resonance contributions, we adopt the resonance shapes shown in Fig.~\ref{fig:reson-schem}, taking into account the energy-dependence of the $\gamma$-ray transition rate. We discuss the energy bins I--VII separately, starting with the lowest-energy bin. The deduced $\gamma$-ray widths are summarized in Table~\ref{tbl:gamma-widths}.
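The inversion of Eq.~(\ref{eq:resonant}) for $\Gamma_{\gamma}$ can be sketched as follows (illustrative helpers, not the analysis code; the kinematic factor $4\pi {\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}^2 \omega$ is assumed to be precomputed and supplied in the same area unit as the cross section):

```python
def gamma_width_from_sigma(sigma, kin_factor, Gamma_p, Gamma):
    """Solve Eq. (resonant) for the partial gamma width.

    kin_factor is the precomputed quantity 4*pi*lambdabar^2*omega.
    """
    return sigma * Gamma**2 / (kin_factor * Gamma_p)

def sigma_from_gamma_width(Gamma_gamma, kin_factor, Gamma_p, Gamma):
    """Forward evaluation of Eq. (resonant), for a round-trip check."""
    return kin_factor * Gamma_p * Gamma_gamma / Gamma**2
```

Since $\Gamma_{\gamma} \propto 1/\Gamma_p$, adopting $\Gamma_p = \Gamma$ for levels with unknown proton width yields the smallest possible $\Gamma_{\gamma}$, which is why those values are quoted as lower limits.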
\begin{table*}[h]
\begin{minipage}{\linewidth}
{\footnotesize
\renewcommand{\thefootnote}{\alph{footnote}}
\centering
\caption{Transitions observed in the present work. The direct capture component was not considered in the derivation of the partial $\gamma$-ray widths ($\Gamma_{\gamma}$). For those initial levels where the proton width ($\Gamma_p$) is unknown, the derived $\gamma$-ray widths are lower limits. Uncertainties on $\Gamma$ and $\Gamma_p$ have not been taken into account in the estimation of the uncertainty on $\Gamma_{\gamma}$.}
\label{tbl:gamma-widths}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}cccc *{2}{d{1.6}} }
\toprule
$E_p$~(MeV) & Final level & Initial level\footnotemark[1] & $ML$ & \mc{$\Gamma_{\gamma}$~(eV)} & \mc{$\Gamma_{\gamma}$~(W.u.)} \\
\midrule
\multirow{2}{*}{2.00, 2.64} & \multirow{2}{*}{9.64, $3^-$} & 18.35, $3^-$ & $M1$ & 4.7(13) & 0.34(10)\footnotemark[2] \\
& & 18.35, $2^-$ & $M1$ & 1.3(4) & 0.095(26)\footnotemark[2] \\
\midrule
2.00 & 10.84, $1^-$ & 17.76, $0^+$ & $E1$ & \mc{1.11(21)\footnotemark[3]} & \mc{0.0093(18)\footnotemark[3]} \\
\midrule
\multirow{2}{*}{2.64} & \multirow{2}{*}{10.84, $1^-$} & 18.35, $2^-$ & $M1$ & 0.75(20) & 0.086(23)\footnotemark[2] \\
& & 18.39, $0^-$ & $M1$ & 0.80(21) & 0.090(24)\footnotemark[2] \\
\midrule
2.00 & 11.83, $2^-$ & 17.23, $1^-$ & $M1$ & 4.4(14)\footnotemark[4] & 1.4(4)\footnotemark[4] \\
\midrule
2.00 & 12.71, $1^+$ & 17.76, $0^+$ & $M1$ & \mc{8.7(19)} & \mc{3.2(7)} \\
\midrule
\multirow{2}{*}{2.64} & \multirow{2}{*}{12.71, $1^+$} & 18.35, $2^-$ & $E1$ & 0.77(28) & 0.012(4) \\
& & 18.39, $0^-$ & $E1$ & 0.81(3) & 0.013(5) \\
\bottomrule
\end{tabular*}
\footnotetext[1]{In those cases where several initial levels can account for the observed feeding of the final level, all initial levels are given, and the widths are computed assuming that each transition accounts for the full cross section.}
\footnotetext[2]{Some evidence for a smaller contribution from an isoscalar $E1$ transition from the \level{18.35}{2}{+} level.}
\footnotetext[3]{A substantial contribution from an $M1$ transition from the \level{17.23}{1}{-} level cannot be ruled out.}
\footnotetext[4]{Contributions from transitions from the \level{16.62}{2}{-} and \level{18.13}{1}{+} levels cannot be ruled out.}
}
\end{minipage}
\end{table*}
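The eV and W.u.\ columns of Table~\ref{tbl:gamma-widths} are related through the standard single-particle (Weisskopf) estimates. The sketch below uses the familiar textbook coefficients for $E1$ and $M1$ transitions ($\Gamma_{\text{W}}$ in eV, $E_{\gamma}$ in MeV) and reproduces, for example, the listed values for the \level{17.76}{0}{+}$\rightarrow\;$\level{12.71}{1}{+} transition:

```python
def weisskopf_ev(multipole, E_gamma, A=12):
    """Single-particle (Weisskopf) width estimate in eV (E_gamma in MeV).

    Textbook coefficients: Gamma_W(E1) = 6.75e-2 * A**(2/3) * E**3 and
    Gamma_W(M1) = 2.07e-2 * E**3.
    """
    if multipole == "E1":
        return 6.75e-2 * A ** (2.0 / 3.0) * E_gamma**3
    if multipole == "M1":
        return 2.07e-2 * E_gamma**3
    raise ValueError("unsupported multipole: " + multipole)

# 17.768 -> 12.71 MeV, M1: 8.7 eV corresponds to about 3.2 W.u.
strength_wu = 8.7 / weisskopf_ev("M1", 17.768 - 12.71)
```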
\paragraph{I)} The yield in the lowest-energy bin is attributed entirely to $p+{}^{10}\text{B}\rightarrow 2\alpha + {}^3\text{He}$, as confirmed by separate measurements performed on an isotope-enriched $^{10}$B target.
\paragraph{II)} At $E_p=2.64$~MeV, the \level{9.64}{3}{-} level is observed very clearly in the $^8$Be$_{\text{gs}}$ channel. The inferred cross section is somewhat smaller than that of Hanna {\it et al.}~\cite{hanna1982}, but consistent within uncertainties. The cross section may be accounted for by isovector $M1$ transitions from the negative-parity levels at 18.35~MeV. An isoscalar $E1$ transition from the $2^+$ level cannot by itself account for the full cross section, as this would require a strength of 0.0055(15)~W.u., exceeding the upper limit of 0.002~W.u.\ recommended for such transitions~\cite{endt1993}. %
However, it was noted by Hanna {\it et al.}\ that the angular distribution of the $\gamma$ ray to the \level{9.64}{3}{-} level is suggestive of mixing between two opposite-parity levels, which provides some evidence for a sub-dominant contribution from the $2^+$ level. The two events observed in the $^8$Be$_{\text{exc}}$ channel may be attributed to $\alpha$ decay of the \level{9.64}{3}{-} level via the ghost of the $^8$Be ground state, which has been estimated to account for 2\% of the $\alpha$-decay intensity~\cite{alcorta12}. In Table~\ref{tbl:gamma-widths}, we give the widths required for each of the two candidate transitions to produce the full observed cross section. The width of $4.7(13)$~eV obtained for the \level{18.35}{3}{-}$\rightarrow\;$\level{9.64}{3}{-} transition agrees within uncertainties with the less precise width of $5.7(23)$~eV reported by Hanna {\it et al.}~\cite{hanna1982}. Another estimate of this width can be obtained by combining $\Gamma_{\gamma_1}=3.2(10)$~eV from Ref.~\cite{segel1965} with the intensity ratio $I_{\gamma_{9.64}} / I_{\gamma_1} = 0.68$ from Ref.~\cite{zijderhand1990} measured at $\theta=55^{\circ}$. This yields $\sim 2.2$~eV, in reasonable agreement with our value and that of Hanna {\it et al.} Finally, we note that the cross section at $E_p=2.00$~MeV is consistent with feeding of the \level{9.64}{3}{-} level via the low-energy tails of the 18.35-MeV levels.
\paragraph{III)} Feeding to the \level{10.84}{1}{-} level is observed both at $E_p=2.00$~MeV and 2.64~MeV. At the lower proton energy, where the level is seen very clearly, the cross section is most readily accounted for by an isovector $E1$ transition from the \level{17.76}{0}{+} level with a strength of $0.0128(25)$~W.u., which is typical for such transitions in light nuclei~\cite{endt1993}. An isovector $M1$ transition from the broad \level{17.23}{1}{-} level is also a possibility, although the short measurements performed at $E_p\sim 1.4$~MeV and 2.37~MeV indicate that such a transition could not be the dominant contribution at $E_p=2.00$~MeV. Assuming this were the case, we would expect to observe 3.0--3.5 events at $E_p\sim 1.4$~MeV whereas only one event was observed ($1.8\sigma$ discrepancy~\cite{rolke2005}) and 2.5--3.0 events at $E_p=2.37$~MeV whereas only one event was observed ($1.3\sigma$ discrepancy). %
(We note that there is a slight mismatch between the energies of the two observed events, $E_x = 11.15$~MeV and 11.25~MeV, respectively, and the energy of the \level{10.84}{1}{-} level, leading to some uncertainty in their interpretation.) %
The feeding observed in the $^8$Be$_{\text{exc}}$ channel may be attributed to $\alpha$ decay of the \level{10.84}{1}{-} level via the ghost of the $^8$Be ground state, which has been estimated to account for 8\% of the $\alpha$-decay intensity~\cite{alcorta12}.
At $E_p=2.64$~MeV, where the feeding of the \level{10.84}{1}{-} level is less pronounced, the cross section is consistent with isovector $M1$ transitions from the \level{18.35}{2}{-} level or the \level{18.39}{0}{-} level, while an isoscalar $E1$ transition from the \level{18.35}{2}{+} level is ruled out because the required strength exceeds the recommended upper limit for such transitions~\cite{endt1993}. %
Finally, we note that in the short measurement performed at $E_p\sim 0.65$~MeV, a single event was detected in the $^8$Be$_{\text{gs}}$ channel. This event had an energy consistent with that of the \level{10.84}{1}{-} level and could be accounted for by an isovector $M1$ transition from the \level{16.62}{2}{-} level with a strength of 0.070--1.00~W.u., which is typical for transitions of this kind in light nuclei~\cite{endt1993}.
\paragraph{IV)} At $E_p=2.00$~MeV, a peak occurs in the cross section at $E_x\sim 11.8$~MeV in both the $^8$Be$_{\text{gs}}$ and $^8$Be$_{\text{exc}}$ channels. While the \level{11.83}{2}{-} level provides a natural explanation for the peak in the $^8$Be$_{\text{exc}}$ channel, this level cannot account for the peak in the $^8$Be$_{\text{gs}}$ channel, which requires a level of natural parity. At $E_p=2.64$~MeV, strength is observed in both channels, but there is no clear indication of a peak at 11.8~MeV, suggesting that the \level{11.83}{2}{-} level only makes a minor contribution to the cross section at this proton energy. %
The feeding to the \level{11.83}{2}{-} level at $E_p=2.00$~MeV is most naturally accounted for by an isovector $M1$ transition from the \level{17.23}{1}{-} level with a strength of 1.4(4)~W.u. An $E1$ transition from the \level{18.13}{1}{+} level could also be contributing, but cannot account for the entire feeding. If it did, we would expect to observe 43(7) events at $E_p=2.64$~MeV whereas only 19 events were observed in the $^8$Be$_{\text{exc}}$ channel (3.0$\sigma$ discrepancy). An isovector $M1$ transition from the \level{16.62}{2}{-} level provides yet another potential feeding mechanism, inconsistent only at the level of 2.2$\sigma$ with the low-statistics data collected at $E_p=0.65$~MeV, but requires a rather large strength of 5.4~W.u.\ to account for the entire cross section. %
We now turn to the observation of a peak-like structure at $E_x\sim 11.8$~MeV in the $^8$Be$_{\text{gs}}$ channel at $E_p=2.00$~MeV, which is intriguing since no narrow levels of natural parity are known to exist at this energy in $^{12}$C. Unfortunately, the data provide few constraints on the quantum numbers of the level, only ruling out spins $J \geq 4$: Feeding of a $0^+$ level can be accounted for by an $M1$ transition from the \level{17.23}{1}{-} level; feeding of a $1^-$ level by $M1$ transitions from the \level{16.62}{2}{-} and \level{17.23}{1}{-} levels or an $E1$ transition from the \level{17.76}{0}{+} level; feeding of a $2^+$ level by $E1$ transitions from the \level{16.62}{2}{-} and \level{17.23}{1}{-} levels; and feeding of a $3^-$ level by an $M1$ transition from the \level{16.62}{2}{-} level. In all cases, the required strengths are within expectations for light nuclei~\cite{endt1993} and consistent with the cross sections measured at the other beam energies. %
The cross section measurement at $E_p=2.64$~MeV also provides limited insight into the properties of the final level: Any of the spin-parities $1^-$, $2^+$, $3^-$, and $4^+$ can be accounted for by more than one transition. Only a $0^+$ assignment seems improbable as it requires an isoscalar $E2$ transition from the \level{18.35}{2}{+} level with a rather large strength of 18(6)~W.u.
\paragraph{V)} Feeding to the \level{12.71}{1}{+} level is observed very clearly at $E_p=2.00$~MeV and also at $E_p=2.64$~MeV albeit less clearly. Some cross section is also observed in the $^8$Be$_{\text{gs}}$ channel which cannot be accounted for by the \level{12.71}{1}{+} level. %
The cross section obtained at the lower proton energy is about two times larger than that of Hanna {\it et al.} Even considering the substantial uncertainty on the value of Hanna {\it et al.}, the discrepancy is significant. However, we note that Hanna {\it et al.} relied on the $\gamma_1$ yield reported by Segel {\it et al.}~\cite{segel1965} for normalizing their data, and this yield disagrees with other measurements by up to 50\%, as discussed in Ref.~\cite{segel1965}. Also, the $(p,\alpha_0)$ cross section reported by Segel {\it et al.}\ has recently been found to be underestimated by a factor of $1.50^{+0.15}_{-0.11}$~\cite{munch2020}. Taken together, these observations cast doubt on the accuracy of the normalization of the measurements of Hanna {\it et al.}, indicating a potential $\sim 50$\% underestimation.
As already noted by Hanna {\it et al.}, the feeding to the \level{12.71}{1}{+} level at $E_p=2.00$~MeV can be accounted for by a rather strong isovector $M1$ transition from the \level{17.76}{0}{+} level. Indeed, the feeding cannot be accounted for in any other way. Adopting our larger cross section, the required strength is 4.4(9)$^{+2.2}_{-1.1}$~W.u., making the transition one of the strongest of its kind~\cite{endt1993}. %
The feeding observed at $E_p=2.64$~MeV cannot be accounted for by the high-energy tail of the \level{17.76}{0}{+} level, but requires an isovector $E1$ transition from either the \level{18.35}{2}{-} or the \level{18.39}{0}{-} level. %
While the cross section observed in the $^8$Be$_{\text{gs}}$ channel is relatively small, it is of substantial interest since no natural-parity levels are known at $E_x\sim 12.7$~MeV. There is, however, evidence for a broad ($\Gamma=1.7$~MeV) level at $E_x=13.3$~MeV with spin-parity $4^+$, the low-energy tail of which could potentially account for the observed cross section. This possibility will be explored further below. %
\paragraph{VI)} The excitation region $E_x = 13$--$14.5$~MeV is known to contain a $4^-$ level at 13.32~MeV, which decays entirely via the $^8$Be$_{\text{exc}}$ channel, and a $4^+$ level at 14.08~MeV, which decays predominantly via the $^8$Be$_{\text{exc}}$ channel (78\%). Recently, evidence has been found for a very broad ($\Gamma = 1.7$~MeV) $4^+$ level at 13.3~MeV. We observe relatively little feeding into this region, consistent with the expected inhibition of $\gamma$ transitions that require large changes in spin. We note that the factor of $\sim 4$ enhancement of the cross section in the $^8$Be$_{\text{exc}}$ channel compared to the $^8$Be$_{\text{gs}}$ channel appears consistent with the known decay properties of these levels, especially if the broad 13.3-MeV level is assumed to have a substantial decay component to the $^8$Be ground state. %
Only isovector $E1$/$M1$ transitions from the \level{18.35}{3}{-} level can account for the feeding to the $4^{\pm}$ levels. However, this mechanism should produce a factor of $\sim 15$ enhancement of the cross section at $E_p=2.64$~MeV relative to 2.00~MeV, which is not observed. The discrepancy could potentially be reduced somewhat if the asymmetric shape of the 18.35-MeV level were taken into account, but it seems unlikely that this can fully explain the discrepancy. This suggests two possibilities: some of the cross section observed at the lower proton energy is to be attributed to {\it (i)} feeding {\it to} an unknown natural-parity level with $E_x \sim 13$--14~MeV and $J \leq 2$, or {\it (ii)} feeding {\it from} an unknown level with $E_x \sim 17$--18~MeV and $J \geq 2$.
\paragraph{VII)} At both $E_p=2.00$~MeV and 2.64~MeV, we observe substantial feeding to the excitation region above 14.5~MeV, especially in the $^8$Be$_{\text{exc}}$ channel. It seems natural to ascribe the majority of this cross section to the broad level at 15.44~MeV, tentatively assigned as $2^+$ although $0^+$ has also been proposed, but the feeding to this level is problematic:
Adopting the $2^+$ assignment, the cross section observed at $E_p=2.00$~MeV can only be accounted for by an isovector $E1$ transition from the \level{17.23}{1}{-} level, but we dismiss this possibility because the required strength of 2.5(8)~W.u.\ exceeds the recommended upper limit of 0.5~W.u.~\cite{endt1993} by a factor of five. (Transitions from the levels above 18~MeV can be dismissed because they overpredict the cross section at $E_p=2.64$~MeV.) %
Adopting instead the $0^+$ assignment, the conclusion is the same: no transition from any of the known levels can account for the observed feeding while conforming to the recommended upper limits of Ref.~\cite{endt1993}.
At $E_p=2.64$~MeV, the feeding can be accounted for by a rather strong $M1$ transition from the \level{18.35}{2}{-} level with a strength of (at least) 0.29(9)~W.u., but only if the $2^+$ assignment is adopted for the 15.44-MeV level.
\section{Summary and conclusions}
We summarize our findings as follows: The $^{11}\text{B}(p,3\alpha)\gamma$ cross sections measured at $E_p=2.00$~MeV and 2.64~MeV give clear evidence of feeding to the four known levels \level{9.64}{3}{-}, \level{10.84}{1}{-}, \level{11.83}{2}{-}, and \level{12.71}{1}{+}, but by themselves these levels cannot fully account for the observed cross sections. In particular, we find evidence for feeding to a natural-parity level near $E_x \sim 11.8$~MeV. Evidence for natural-parity strength in this region was also found in a previous study of the $\gamma$ de-excitations of the \level{16.11}{2}{+} level \cite{laursen2016_2} and in studies of the $\beta$ decays of $^{12}$B and $^{12}$N \cite{hyldegaard2010}.
The feeding to the \level{9.64}{3}{-}, \level{10.84}{1}{-}, \level{11.83}{2}{-}, and \level{12.71}{1}{+} levels can be explained in terms of isovector $M1$ and $E1$ transitions from the known levels above the $p+{}^{11}$B threshold. The transitions proposed to account for the feeding to the \level{10.84}{1}{-} and \level{12.71}{1}{+} levels at $E_p=2.64$~MeV are of some interest, as they provide evidence for significant $T=1$ admixture in the \level{18.35}{2}{-} level and/or the \level{18.39}{0}{-} level. It is also worth noting that the larger and more precise width obtained for the \level{17.76}{0}{+}$\rightarrow\;$\level{12.71}{1}{+} transition makes this one of the strongest $M1$ transitions in any nucleus~\cite{endt1993}.
Higher-statistics measurements at $E_p = 0.65$~MeV and 1.4~MeV would be highly desirable to confirm the tentative observation of $M1$ transitions from the \level{16.62}{2}{-} and \level{17.23}{1}{-} levels, both feeding into the \level{10.84}{1}{-} level. Such measurements would also yield improved constraints on the spin-parity of the natural-parity level observed at $E_x \sim 11.8$~MeV. For these studies, it could prove advantageous to adopt a detector geometry similar to that of Ref.~\cite{laursen2016_2}, which allows significantly larger beam currents at the cost of a substantial reduction of the detection efficiency in the $^8$Be$_{\text{exc}}$ channel.
The interpretation of the feeding observed into the excitation region above 13~MeV remains unclear, especially at $E_p=2.00$~MeV where the measured cross section could not be explained in terms of transitions between known levels. Here, too, additional measurements would be desirable.
An analysis of new complete-kinematics data on the $^{11}$B$(p,3\alpha)$ reaction, currently in progress, will provide an improved understanding of the $\alpha_1$ channel. This, together with a multi-channel $R$-matrix analysis that includes recent data on $(p,p)$ and $(p,\alpha_0)$ as well as existing data on other channels, should lead to an improved understanding of the excitation region $E_x\sim \;$16--18 MeV, which may require revision of some of the conclusions drawn from the present study.
While theoretical estimates suggest the resonant capture component to be dominant, the direct
capture component is not negligible and could in some instances make a substantial contribution
to the observed cross section. Such direct contributions were not considered in the derivation of the partial $\gamma$-ray widths given in Table~\ref{tbl:gamma-widths}. Improved theoretical calculations of the direct component would be of significant interest. Theoretical calculations of the radiative widths deduced in this work would also be of interest.
Finally, we remark that the $2^+\rightarrow 0^+$ and $4^+\rightarrow 2^+$ transitions in $^8$Be contribute only at the sub-nb level to the cross sections measured in this work, and hence can be safely ignored~\cite{datar2005}.
\section*{Acknowledgements}
We would like to thank Folmer Lyckegaard for manufacturing the target. %
This work has been supported by the European Research Council under ERC
starting grant LOBENA, No. 307447. %
OSK acknowledges support from the Villum Foundation through
Project No.\ 10117.
\section{Appendix: Magma program for Lemma \ref{lemma:5x5} (\ref{5x5-clique})} \label{sec:appendix}
Assume that $\Gamma$ is locally $5 \times 5$ grid, and that all $\mu$-graphs of $\Gamma$ have order at least $8$. Let $x \in \V(\Gamma)$. We want to determine if some $6$-clique $C$ is contained in $\Gamma_2(x)$. If such a clique exists, then each $\mu$-graph $\mu(x,y)$, for $y \in C$, is an induced subgraph of $K_5 \square K_5$ and is either an $8$-cycle or a disjoint union of two $4$-cycles, and the set of these six $\mu$-graphs satisfies the conditions in Lemma \ref{lemma:mu-clique}, namely:
\begin{enumerate}
\item If $S$ is the union of the vertex sets of these six $\mu$-graphs, then each vertex in $S$ lies in exactly two $\mu$-graphs.
\item Any two distinct $\mu$-graphs are either disjoint or have exactly one common edge.
\item The set of edges which lie in two $\mu$-graphs (as in (2)) forms a perfect matching of $S$ (since $C \cap \Gamma(x) = \varnothing$).
\end{enumerate}
For the computation we replaced condition (1) with the following weaker condition:
\begin{enumerate}
\item[(1')] Each vertex in S lies in at most two $\mu$-graphs.
\end{enumerate}
We denoted by $\texttt{Cyc8}$ and $\texttt{Cyc44}$, respectively, the set of all induced subgraphs of $K_5 \square K_5$ which are $8$-cycles, and the set of all induced subgraphs of $K_5 \square K_5$ which are unions of two disjoint $4$-cycles. Since $\Aut(K_5 \square K_5)$ is transitive on each of the sets $\texttt{Cyc8}$ and $\texttt{Cyc44}$, we assumed without loss of generality that:
\begin{enumerate}
\item[(4)] One of the six graphs is a fixed graph $\texttt{mu}$.
\end{enumerate}
We considered two cases, one with $\texttt{mu} \in \texttt{Cyc8}$ and the other with $\texttt{mu} \in \texttt{Cyc44}$. For each of these cases we used \textsc{Magma} to enumerate all sets consisting of $6$ induced subgraphs of $K_5 \square K_5$ satisfying the conditions (1'), (2), and (4). In each case we found sets of as many as five such subgraphs, but no sets of six.
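Before listing the \textsc{Magma} code, the grid adjacency and the pairwise test of condition (2) can be sketched in Python (illustrative only; the enumeration itself was carried out in \textsc{Magma}):

```python
# Vertices of K5 x K5 (the 5x5 rook's graph): pairs (a, b); two vertices
# are adjacent iff they share a row or a column.
N = 5
VERTICES = [(a, b) for a in range(1, N + 1) for b in range(1, N + 1)]

def adjacent(u, v):
    return u != v and (u[0] == v[0] or u[1] == v[1])

def compatible(X1, X2):
    """Condition (2): vertex sets are disjoint or share exactly one edge."""
    common = set(X1) & set(X2)
    if not common:
        return True
    if len(common) != 2:
        return False
    u, v = common
    return adjacent(u, v)
```

For instance, the fixed $8$-cycle $\texttt{mu}$ of Step 3 is accepted when paired with any subgraph meeting it exactly in the edge $\{(1,1),(1,2)\}$, and rejected when the two subgraphs share a non-edge or a single vertex.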
The following is our \textsc{Magma} code.
\subsection*{Step 1.} We constructed the graph $K_5 \square K_5$ and the sets $\texttt{Cyc8}$ and $\texttt{Cyc44}$.
\bigskip \noindent \tt
n := 5; \\[6pt]
vertices := \{ <a,b> : a,b in [1..n] \}; \\
edges := \{\{u,v\} : u,v in vertices | u ne v and (u[1] eq v[1] or u[2] eq v[2])\}; \\
grid,V,E := Graph< vertices | edges >; \\[6pt]
Cyc8 := \{ X : X in Subsets(Set(V),2*(n-1)) | IsIsomorphic(sub< grid | X >, \\ PolygonGraph(2*(n-1))) \}; \\[6pt]
Cyc44 := \{ X : X in Subsets(Set(V),2*(n-1)) | IsIsomorphic(sub< grid | X >, \\ Union(PolygonGraph(4),PolygonGraph(4))) \};
\subsection*{Step 2.} \rm We constructed the set of all $2$-sets of graphs in $\texttt{Cyc8} \cup \texttt{Cyc44}$ which satisfy (2):
\bigskip \noindent \tt
U := \{ \{@ X1,X2 @\} : X1,X2 in Cyc44 join Cyc8 | IsDisjoint(X1,X2) or ( \#(X1 \\ meet X2) eq 2 and IsIsomorphic(sub< grid | X1 meet X2 >, CompleteGraph(2)) ) \};
\subsection*{Step 3.} \rm We constructed the fixed graph $\texttt{mu} \in \texttt{Cyc8}$:
\bigskip \noindent \tt
mu := \{V!<1,1>, V!<1,2>, V!<2,2>, V!<2,3>, V!<3,3>, V!<3,4>, V!<4,4>, V!<4,1>\};
\subsection*{Step 4.} \rm We constructed the sets $\texttt{W}$, $\texttt{X}$, $\texttt{Y}$, and $\texttt{Z}$ of all $3$-, $4$-, $5$-, and $6$-sets, respectively, of $\mu$-graphs satisfying (1'), (2), and (4). Note that \textsc{Magma} returned non-empty sets $\texttt{W}$, $\texttt{X}$, and $\texttt{Y}$, but that $\texttt{Z}$ is empty.
\bigskip \noindent \tt
W := \{ \{@ mu, x[1], x[2] @\} : x in U | mu notin x and forall(i)\{ i : i in \\ {[1..\#x]} | \{@ x[i], mu @\} in U \} and IsDisjoint(mu, x[1] meet x[2]) \}; \\[6pt]
X := \{ \{@ x[1], x[2], x[3], z @\} : x in W, z in Cyc44 join Cyc8 | z notin x \\ and forall(i)\{ i : i in [1..\#x] | \{@ x[i], z @\} in U \} and forall(i)\{ \{i,j\} : \\ i,j in [1..\#x] | i eq j or IsDisjoint(z, x[i] meet x[j]) \} \}; \\[6pt]
Y := \{ \{@ x[1], x[2], x[3], x[4], z @\} : x in X, z in Cyc44 join Cyc8 | z \\ notin x and forall(i)\{ i : i in [1..\#x] | \{@ x[i], z @\} in U \} and forall(i)\{ \\ \{i,j\} : i,j in [1..\#x] | i eq j or IsDisjoint(z, x[i] meet x[j]) \} \}; \\[6pt]
Z := \{ \{@ x[1], x[2], x[3], x[4], x[5], z @\} : x in Y, z in Cyc44 join Cyc8 | \\ z notin x and forall(i)\{ i : i in [1..\#x] | \{@ x[i], z @\} in U \} and \\ forall(i)\{ \{i,j\} : i,j in [1..\#x] | i eq j or IsDisjoint(z, x[i] meet x[j]) \} \};
\subsection*{Step 5.} \rm Finally, we repeated Steps 3 and 4 for the fixed graph $\texttt{mu} \in \texttt{Cyc44}$, namely
\bigskip \noindent \tt
mu := \{V!<1,1>, V!<1,2>, V!<2,2>, V!<2,1>, V!<3,3>, V!<3,4>, V!<4,4>, V!<4,3>\};
\bigskip \noindent \rm
and again the collection \texttt{Z} of $6$-sets was empty.
\rm
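As an independent cross-check of Steps 1, 3, and 5 (illustrative only, and no substitute for the \textsc{Magma} search), the following Python sketch encodes the adjacency of $K_5 \square K_5$ directly and confirms that the two fixed sets \texttt{mu} above induce a single $8$-cycle and a disjoint union of two $4$-cycles, respectively.

```python
def adj(u, v):
    """Adjacency in K5 [] K5: vertices are pairs (a, b) with a, b in 1..5,
    joined exactly when they agree in one coordinate."""
    return u != v and (u[0] == v[0] or u[1] == v[1])

def cycle_lengths(S):
    """Component lengths of the subgraph induced on S, checking first that
    every vertex of S has exactly two neighbours in S (a union of cycles)."""
    S = set(S)
    assert all(sum(adj(u, v) for v in S) == 2 for u in S)
    lengths, seen = [], set()
    for start in S:
        if start in seen:
            continue
        comp = {start}
        frontier = {start}
        while frontier:
            frontier = {v for u in frontier for v in S
                        if adj(u, v) and v not in comp}
            comp |= frontier
        seen |= comp
        lengths.append(len(comp))
    return sorted(lengths)

mu8 = [(1, 1), (1, 2), (2, 2), (2, 3), (3, 3), (3, 4), (4, 4), (4, 1)]   # Step 3
mu44 = [(1, 1), (1, 2), (2, 2), (2, 1), (3, 3), (3, 4), (4, 4), (4, 3)]  # Step 5
print(cycle_lengths(mu8))   # [8]
print(cycle_lengths(mu44))  # [4, 4]
```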
\section{Basic properties of locally $n \times n$ grid graphs} \label{sec:basics}
In this section we establish some basic properties of locally $n \times n$ grid graphs and prove the first statement of Theorem \ref{maintheorem:order-diameter}.
The first result is a generalisation of \cite[Lemma 1, Section 5]{4by4}.
\begin{lemma} \label{lemma:basic-nxn}
Assume that $\Gamma$ is locally $n \times n$ grid. Then the following hold:
\begin{enumerate}[(1)]
\item \label{basic-edges} Each edge $\{x,y\}$ is in $2(n-1)$ triangles, and $[\Gamma(x) \cap \Gamma(y)] \cong 2\,K_{n-1}$.
\item \label{basic-maxcliques} A maximal clique in $\Gamma$ has size $n+1$. Each vertex is in $2n$ maximal cliques, each edge is in two maximal cliques, and each triangle is in a unique maximal clique.
\item \label{basic-triangles} The number of maximal cliques is $|\V(\Gamma)| \cdot 2n/(n+1)$ and the number of triangles is $|\V(\Gamma)| \cdot n^2(n-1)/3$. Hence $n+1$ divides $2|\V(\Gamma)|$, and if $n \equiv 2\pmod{3}$ then $3$ divides $|\V(\Gamma)|$.
\item \label{basic-mu} Each $\mu$-graph is a union of $\ell$ cycles, say of lengths $2m_1, \ldots, 2m_\ell$, where each $m_i \geq 2$ and $\sum_{i=1}^\ell m_i \leq n$. No two edges of $\mu(x,y)$ lie in the same $n$-clique in $[\Gamma(x)]$ or $[\Gamma(y)]$.
\end{enumerate}
\end{lemma}
\begin{proof}
Statements \ref{basic-edges} and \ref{basic-maxcliques} follow easily from the fact that $\Gamma$ is locally $n \times n$ grid.
By statement (\ref{basic-maxcliques}) each vertex is in $2n$ maximal cliques, and each maximal clique contains $n+1$ vertices. Hence there are $|\V(\Gamma)| \cdot 2n/(n+1)$ maximal cliques. To count triangles, we count the triples $(v,e,T)$ where $T$ is a triangle, $e$ is an edge of $T$, and $v$ is an endpoint of $e$. Each vertex lies on $n^2$ edges and, by statement (\ref{basic-edges}), each edge lies in $2(n-1)$ triangles, so there are $|\V(\Gamma)| \cdot n^2 \cdot 2(n-1)$ such triples. On the other hand, each triangle has three edges, each with two endpoints, and so accounts for six triples. Therefore the number of triangles is $|\V(\Gamma)| \cdot n^2 \cdot 2(n-1)/6 = |\V(\Gamma)| \cdot n^2(n-1)/3$, and statement (\ref{basic-triangles}) follows.
Let $(x,y) \in \Gamma_2$. Then by Lemma \ref{lemma:basic-mxn} (see also Figure \ref{figure:mu-component}), $|\Gamma(x) \cap \Gamma(y)| = |\mu(x,y)| = 2m$ for some $m \geq 2$. Also no two edges of $\mu(x,y)$ belong to the same $n$-clique in $[\Gamma(y)]$, so the edges of $\mu(x,y)$ determine $m$ distinct ``horizontal'' cliques and $m$ distinct ``vertical'' cliques. Each connected component of length $2m_i$ determines $m_i$ of the horizontal and $m_i$ of the vertical cliques, and $m_i \geq 2$. Since $[\Gamma(y)]$ contains only $n$ horizontal $n$-cliques, it follows that if $\ell$ is the number of connected components of $\mu(x,y)$, then $\sum_{i=1}^\ell m_i = m \leq n$. This proves statement (\ref{basic-mu}).
\end{proof}
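Statement (\ref{basic-edges}) can also be checked directly inside a single local grid: identifying $[\Gamma(y)]$ with $K_n \square K_n$, the neighbours of a vertex $x$ of the grid are the rest of its row and the rest of its column, which form two disjoint $(n-1)$-cliques with no edges between them. A small Python sketch of this check (purely illustrative, and not part of the proof):

```python
from itertools import product

def check_local_neighbourhood(n):
    """In the rook's graph K_n [] K_n, the neighbourhood of any vertex is the
    rest of its row plus the rest of its column: two disjoint (n-1)-cliques
    with no edges between them, i.e. a copy of 2 K_{n-1}."""
    V = list(product(range(n), repeat=2))
    adj = lambda u, v: u != v and (u[0] == v[0] or u[1] == v[1])
    for x in V:
        nbrs = [v for v in V if adj(x, v)]
        assert len(nbrs) == 2 * (n - 1)
        row = [v for v in nbrs if v[0] == x[0]]
        col = [v for v in nbrs if v[1] == x[1]]
        assert len(row) == len(col) == n - 1
        assert all(adj(u, v) for u in row for v in row if u != v)  # a clique
        assert all(adj(u, v) for u in col for v in col if u != v)  # a clique
        assert not any(adj(u, v) for u in row for v in col)        # no cross edges
    return True

print(all(check_local_neighbourhood(n) for n in range(2, 7)))  # True
```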
\begin{remark} \label{remark:distancediagram}
Let $x \in \V(\Gamma)$, with eccentricity $\epsilon(x)$ as in (\ref{eq:epsilon}), and $2 \leq i \leq \epsilon(x)$. Counting in two ways the number of edges between $\Gamma_{i-1}(x)$ and $\Gamma_i(x)$ yields the equality
\begin{equation} \label{eq:b=c}
\sum_{y \in \Gamma_{i-1}(x)} b_{i-1}(x,y) = \sum_{z \in \Gamma_i(x)} c_i(x,z).
\end{equation}
By Lemma \ref{lemma:basic-mxn} (\ref{mu-cycles}), for any $z \in \Gamma_2(x)$ we have $c_2(x,z) = 2m$ for some $m \in \{2, \ldots, n\}$, where $m$ may depend on $x$ and $z$. For $2 \leq m \leq n$ define
\begin{equation} \label{eq:k_2,2m}
k_{2,2m}(x) := \big|\big\{ z \in \Gamma_2(x) \ : \ c_2(x,z) = 2m \big\}\big|,
\end{equation}
so that
\begin{equation} \label{eq:k2-sum}
k_2(x) = \sum_{m=2}^n k_{2,2m}(x).
\end{equation}
Also $\sum_{z \in \Gamma_2(x)} c_2(x,z) = \sum_{m=2}^n 2m\,k_{2,2m}(x)$, and since $k_1(x) = n^2$ and $b_1(x,y) = (n-1)^2$ for all $y \in \Gamma(x)$, we have $\sum_{y \in \Gamma(x)} b_1(x,y) = n^2(n-1)^2$. Thus, for $i = 2$, equation (\ref{eq:b=c}) becomes
\begin{equation} \label{eq:sum-k_2,2m}
n^2(n-1)^2 = \sum_{m=2}^n 2m\,k_{2,2m}(x).
\end{equation}
\end{remark}
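Equation (\ref{eq:b=c}) is purely a double count and holds in any connected graph, not only in locally grid graphs. The following Python sketch (illustrative only) verifies it on the $4$-dimensional hypercube $Q_4$, chosen solely for convenience:

```python
from collections import deque

def bfs_distances(adjacency, x):
    """Distances from x in a graph given as {vertex: set of neighbours}."""
    dist = {x: 0}
    queue = deque([x])
    while queue:
        u = queue.popleft()
        for v in adjacency[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Sample graph: the 4-dimensional hypercube Q_4 (vertices are 4-bit integers).
V = range(16)
adjacency = {u: {u ^ (1 << k) for k in range(4)} for u in V}
dist = bfs_distances(adjacency, 0)
for i in range(2, max(dist.values()) + 1):
    # sum of b_{i-1}(x, y) over y in Gamma_{i-1}(x) ...
    lhs = sum(sum(1 for w in adjacency[y] if dist[w] == i)
              for y in V if dist[y] == i - 1)
    # ... equals the sum of c_i(x, z) over z in Gamma_i(x)
    rhs = sum(sum(1 for w in adjacency[z] if dist[w] == i - 1)
              for z in V if dist[z] == i)
    assert lhs == rhs
print("both counts of edges between consecutive distance classes agree")
```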
\begin{lemma} \label{lemma:maxcliques}
Assume that $\Gamma$ is locally $n \times n$ grid, and let $(x,y) \in \Gamma_2$. Then $c_2(x,y) = 2m$ for some $m \in \{2, \ldots, n \}$, and the following hold:
\begin{enumerate}[(1)]
\item \label{2n} If $c_2(x,y) = 2n$ then $d_\Gamma(x,C) = 1$ for any maximal clique $C$ containing $y$.
\item \label{2m} If $c_2(x,y) = 2m \leq 2(n-1)$, then, of the $2n$ maximal cliques $C$ containing $y$, $d_\Gamma(x,C) = 1$ for $2m$ cliques and $d_\Gamma(x,C) = 2$ for the remaining $2(n-m)$ cliques.
\end{enumerate}
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:basic-mxn} (\ref{mu-cycles}), $c_2(x,y) = 2m$ for some $m \in \{2, \ldots, n\}$ and no two edges of $\mu(x,y)$ lie in the same $n$-clique of $[\Gamma(y)]$. So in $[\Gamma(y)]$ there are $m$ horizontal $n$-cliques and $m$ vertical $n$-cliques that contain an edge of $\mu(x,y)$, as illustrated on the left in Figure \ref{figure:maxcliques-1}. If $m = n$ then each $n$-clique in $[\Gamma(y)]$ contains an edge of $\mu(x,y)$, so that each $(n+1)$-clique containing $y$ is adjacent to $x$, proving statement (\ref{2n}). If $m < n$ then the remaining $n-m$ horizontal $n$-cliques and $n-m$ vertical $n$-cliques in $[\Gamma(y)]$ do not contain any vertex of $\mu(x,y)$, as illustrated on the right in Figure \ref{figure:maxcliques-1}, but each of these cliques contains at least one vertex that is adjacent to a vertex of $\mu(x,y)$. Hence $d_\Gamma(x,C) = 2$ for these cliques $C$, as required.
\end{proof}
\begin{center}
\begin{figure}
\begin{pspicture}(-6,-2.25)(6,2.5)
\rput(-4.75,0){
\localgrid
\rput(-0.75,0.75){\scalebox{0.55}{\mucomponent}}
\rput(0,1.9){\small $[\Gamma(y)]$}
\rput(-2.2,0.5){\small $\mu(x,y)$} \psline[linewidth=0.25pt](-1.6,0.5)(-1.35,0.5)
\pnode(1.65,1.5){A} \pnode(1.65,0){B} \ncbar[nodesep=0pt,angle=0,armA=0.1cm,armB=0.1cm]{A}{B} \rput(3.5,0.75){\parbox{3cm}{\small $m$ horizontal \\ cliques}}
\pnode(-1.5,-1.65){C} \pnode(0,-1.65){D} \ncbar[nodesep=0pt,angle=-90,armA=0.1cm,armB=0.1cm]{C}{D} \rput(-0.75,-2.1){\small $m$ vertical cliques}
}
\rput(3.25,0){
\psline[linecolor=lightgray](-1.5,0)(-1.5,1.5)(0,1.5)
\pspolygon[linecolor=lightgray,fillstyle=vlines,hatchcolor=lightgray,hatchangle=0](0,1.5)(1.5,1.5)(1.5,-1.5)(0,-1.5)
\pspolygon[linecolor=lightgray,fillstyle=hlines,hatchcolor=lightgray,hatchangle=0](-1.5,0)(-1.5,-1.5)(1.5,-1.5)(1.5,0)
\rput(-0.75,0.75){\scalebox{0.55}{\mucomponent}}
\rput(0,1.9){\small $[\Gamma(y)]$}
\rput(-2.2,0.5){\small $\mu(x,y)$} \psline[linewidth=0.25pt](-1.6,0.5)(-1.35,0.5)
\pnode(1.65,0){A} \pnode(1.65,-1.5){B} \ncbar[nodesep=0pt,angle=0,armA=0.1cm,armB=0.1cm]{A}{B} \rput(3.5,-0.75){\parbox{3cm}{\small $n-m$ horizontal \\ cliques}}
\pnode(0,-1.65){C} \pnode(1.5,-1.65){D} \ncbar[nodesep=0pt,angle=-90,armA=0.1cm,armB=0.1cm]{C}{D} \rput(0.75,-2.1){\small $n-m$ vertical cliques}
}
\end{pspicture}
\caption{Maximal cliques $C$ satisfying $d_\Gamma(x,C) = 1$ (left) and $d_\Gamma(x,C) = 2$ (right)} \label{figure:maxcliques-1}
\end{figure}
\end{center}
Statement (\ref{distance1}) of the next lemma is the third assertion in \cite[Lemma, Section 1]{4by4}. The second part of statement (\ref{distance2}) generalises the first assertion in \cite[Lemma 2, Section 5]{4by4}.
\begin{lemma} \label{lemma:maxcliques2}
Assume that $\Gamma$ is locally $n \times n$ grid. Let $x \in \V(\Gamma)$ and $C$ a maximal clique in $\Gamma$ not containing $x$.
\begin{enumerate}[(1)]
\item \label{distance1} If $d_\Gamma(x,C) = 1$ then $|C \cap \Gamma(x)| = 2$ and $|C \cap \Gamma_2(x)| = n-1$.
\item \label{distance2} If $d_\Gamma(x,C) = 2$ then each $y \in C \cap \Gamma_2(x)$ satisfies $c_2(x,y) \leq 2(n-1)$, and if $c_2(x,y) = 2m$ then $|C \cap \Gamma_2(x)| \geq m+1$.
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that $d_\Gamma(x,C) = 1$, and let $y \in C \cap \Gamma(x)$. Then $x \in \Gamma(y)$, and $C \setminus \{y\}$ is an $n$-clique in $\Gamma(y)$ not containing $x$. We see from the $n \times n$ grid $[\Gamma(y)]$ that $x$ is adjacent to a unique vertex in $C \setminus \{y\}$ and is at distance two from any other vertex in $C \setminus \{y\}$. Therefore $|C \cap \Gamma(x)| = 2$ and $|C \cap \Gamma_2(x)| = n-1$, which proves statement (\ref{distance1}).
Suppose now that $d_\Gamma(x,C) = 2$. Let $y \in C \cap \Gamma_2(x)$ and let $C' = C \setminus \{y\}$. Then $c_2(x,y) = 2m \leq 2(n-1)$ (because otherwise by Lemma \ref{lemma:maxcliques} (\ref{2n}) all cliques containing $y$ are at distance $1$ from $x$, and in particular $d_\Gamma(x,C) = 1$, contradiction), and $C'$ is an $n$-clique in $\Gamma(y)$. Since $C' \subseteq C$ we have $d_\Gamma(x,C') \geq 2$, so $C'$ does not contain an edge of $\mu(x,y)$. In $[\Gamma(y)]$ there are $n-1$ cliques of size $n$ that are disjoint from $C'$, and $n$ cliques of size $n$ that meet $C'$ in a unique vertex. Of the $n$ cliques that intersect $C'$, there are $m$ cliques $C_1, \ldots, C_m$ each of which contains an edge of $\mu(x,y)$. Any two of the cliques $C_i$ are disjoint, so that $|C' \cap (C_1 \cup \ldots \cup C_m)| = m$ as illustrated in Figure \ref{figure:maxcliques-2}. Thus $|C \cap \Gamma_2(x)| \geq |(C' \cap (C_1 \cup \ldots \cup C_m)) \cup \{y\}| = m+1$, which proves statement (\ref{distance2}).
\end{proof}
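The key computation in the proof of statement (\ref{distance1}), namely that inside the grid $[\Gamma(y)]$ a vertex $x$ is adjacent to exactly one vertex of an $n$-clique avoiding $x$, is easy to confirm mechanically. A Python sketch (illustrative only; by symmetry it suffices to check the ``horizontal'' $n$-cliques):

```python
from itertools import product

def check_rows(n):
    """In the n x n grid K_n [] K_n: for a vertex x and any 'horizontal'
    n-clique R avoiding x, x is adjacent to exactly one vertex of R (the one
    in x's column), hence at distance two from the other n - 1 vertices."""
    V = list(product(range(n), repeat=2))
    adj = lambda u, v: u != v and (u[0] == v[0] or u[1] == v[1])
    for x in V:
        for r in range(n):
            if r == x[0]:
                continue                    # skip the row through x itself
            R = [(r, c) for c in range(n)]  # an n-clique not containing x
            assert sum(adj(x, y) for y in R) == 1
    return True

print(all(check_rows(n) for n in range(2, 7)))  # True
```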
\begin{center}
\begin{figure}
\begin{pspicture}(-2.5,-1.8)(2.5,2.5)
\multido{\n=-1.5+0.4}{8}{\psline[linecolor=lightgray](\n,1.5)(\n,-1.5)}
\multido{\n=-1.3+0.4}{8}{\psline[linecolor=lightgray](1.5,\n)(-1.5,\n)}
\psline[linewidth=1.3pt,linecolor=lightgray](-1.5,-0.9)(1.5,-0.9) \multido{\n=-1.1+0.4}{5}{\qdisk(\n,-0.9){2pt}}
\rput(-2,-0.9){\small $C'$}
\rput(-0.3,0.3){\scalebox{0.8}{\mucomponent}}
\rput(0,1.9){\small $[\Gamma(y)]$}
\rput(-2.3,0.25){\small $\mu(x,y)$} \psline[linewidth=0.25pt](-1.7,0.25)(-1.2,0.25)
\pnode(-1.1,-1.5){a} \pnode(0.5,-1.5){b} \ncbar[nodesep=2pt,angle=-90,armA=0.1cm,armB=0.1cm]{a}{b} \rput(-0.3,-1.9){\small $m$}
\end{pspicture}
\caption{Vertices in $\Gamma_2(x) \cap C'$} \label{figure:maxcliques-2}
\end{figure}
\end{center}
The next result generalises \cite[Lemmas 1 (iv) and 2]{4by4}.
\begin{lemma} \label{lemma:parameters}
Assume that $\Gamma$ is locally $n \times n$ grid. Let $x \in \V(\Gamma)$ with eccentricity $\epsilon(x)$ as in (\ref{eq:epsilon}).
\begin{enumerate}[(1)]
\item \label{b2} Let $y \in \Gamma_2(x)$. If $c_2(x,y) = 2m$ then $b_2(x,y) \leq (n-m)^2$. In particular, if $c_2(x,y) = 2n$ then $b_2(x,y) = 0$.
\item \label{b3,c3} Assume that $\epsilon(x) \geq 3$ and let $z \in \Gamma_3(x)$. If $c_2(x,y) \geq 2m$ for all $y \in \Gamma_2(x)$ then $c_3(x,z) \geq (m+1)^2$ and $b_3(x,z) \leq (n-m-1)^2$.
\item \label{bi,ci} Assume that $\epsilon(x) \geq i \geq 4$ and let $z \in \Gamma_i(x)$. If $c_2(x,y) \geq 2m$ for all $y \in \Gamma_2(x)$ then $c_i(x,z) \geq (m+1)^2$ and $b_i(x,z) \leq (n-m-1)^2$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $y \in \Gamma_2(x)$ and suppose that $c_2(x,y) = 2m$. Then $\Gamma_3(x) \cap \Gamma(y)$ is contained in the set of all vertices in $\Gamma(y)$ that are not adjacent to any vertex in $\mu(x,y)$, as illustrated in Figure \ref{figure:b,c}. Hence $\Gamma_3(x) \cap \Gamma(y)$ lies in an $(n-m) \times (n-m)$ subgrid of $[\Gamma(y)]$, so it follows that $b_2(x,y) = |\Gamma_3(x) \cap \Gamma(y)| \leq (n-m)^2$. This proves statement (\ref{b2}).
Let $z \in \Gamma_3(x)$ and assume that $c_2(x,y) \geq 2m$ for all $y \in \Gamma_2(x)$. Then $d_\Gamma(x,C) = 2$ for some $(n+1)$-clique $C$ containing $z$. Hence $|C \cap \Gamma_2(x)| \geq m+1$ by Lemma \ref{lemma:maxcliques2} (\ref{distance2}). Let $C' = C \setminus \{z\}$, which is an $n$-clique in $\Gamma(z)$. Since $z \notin \Gamma_2(x) \cap C$, we also have $|C' \cap \Gamma_2(x)| \geq m+1$. Without loss of generality suppose that $C'$ is a ``horizontal'' $n$-clique. Then any ``vertical'' $n$-clique $C''$ containing a point in $C' \cap \Gamma_2(x)$ also satisfies $|C'' \cap \Gamma_2(x)| \geq m+1$, as illustrated in Figure \ref{figure:b,c}. Therefore $c_3(x,z) = |\Gamma_2(x) \cap \Gamma(z)| \geq (m+1)^2$, and $[\Gamma_2(x) \cap \Gamma(z)]$ contains an $(m+1) \times (m+1)$ subgrid. No vertex in $\Gamma_4(x) \cap \Gamma(z)$ is adjacent to any vertex in $\Gamma_2(x) \cap \Gamma(z)$, hence $\Gamma_4(x) \cap \Gamma(z)$ lies in an $(n-m-1) \times (n-m-1)$ subgrid of $\Gamma(z)$. So $b_3(x,z) = |\Gamma_4(x) \cap \Gamma(z)| \leq (n-m-1)^2$, and statement (\ref{b3,c3}) holds.
Finally, let $z \in \Gamma_i(x)$ with $4 \leq i \leq \epsilon(x)$, and suppose that $c_2(x,y) \geq 2m$ for all $y \in \Gamma_2(x)$. Take $w \in \Gamma_{i-3}(x)$ such that $d_\Gamma(w,z) = 3$. Then by statement (\ref{b3,c3}) above, $c_3(w,z) \geq (m+1)^2$ and $b_3(w,z) \leq (n-m-1)^2$. Clearly $\Gamma_2(w) \cap \Gamma(z) \subseteq \Gamma_{i-1}(x) \cap \Gamma(z)$ and $\Gamma_{i+1}(x) \cap \Gamma(z) \subseteq \Gamma_4(w) \cap \Gamma(z)$, so that $c_i(x,z) \geq c_3(w,z) \geq (m+1)^2$ and $b_i(x,z) \leq b_3(w,z) \leq (n-m-1)^2$. Statement (\ref{bi,ci}) follows.
\end{proof}
\begin{center}
\begin{figure}
\begin{pspicture}(-6,-2.5)(6,2.5)
\rput(-4.5,0){
\localgrid
\psellipse(0.75,-0.75)(0.6,0.5)
\rput(-0.75,0.75){\scalebox{0.55}{\mucomponent}}
\rput(0,1.9){\small $[\Gamma(y)]$}
\rput(-2.2,0.5){\small $\mu(x,y)$} \psline[linewidth=0.25pt](-1.6,0.5)(-1.35,0.5)
\rput(2.7,-0.75){\small $\Gamma_3(x) \cap \Gamma(y)$} \psline[linewidth=0.25pt](0.75,-0.75)(1.6,-0.75)
\pnode(1.65,1.5){A} \pnode(1.65,0){B} \ncbar[nodesep=0pt,angle=0,armA=0.1cm,armB=0.1cm]{A}{B} \rput(3.2,0.75){\parbox{2.5cm}{\small $m$ horizontal \\ cliques}}
\pnode(-1.5,-1.65){C} \pnode(0,-1.65){D} \ncbar[nodesep=0pt,angle=-90,armA=0.1cm,armB=0.1cm]{C}{D} \rput(-0.75,-2.1){\small $m$ vertical cliques}
}
\rput(3.5,0){
\localgrid
\psellipse(0.75,-0.75)(0.6,0.5)
\multido{\n=0.15+0.40}{4}{\rput(0,\n){\multido{\n=-1.35+0.40}{4}{\qdisk(\n,0){2pt}}}}
\pnode(-1.5,1.5){A} \pnode(-1.5,0){B} \ncbar[nodesep=4pt,angle=180,armA=0.1cm,armB=0.1cm]{A}{B} \rput(-2.4,0.8){\small $\geq m+1$}
\pnode(0,1.5){C} \ncbar[nodesep=4pt,angle=90,armA=0.1cm,armB=0.1cm]{A}{C} \rput(-0.8,2){\small $\geq m+1$}
\psline[linewidth=0.25pt](-1.35,0.15)(-1.7,-0.25) \rput(-2.4,-0.35){\small $\Gamma_2(x) \ni$}
\rput(2.7,-0.75){\small $\Gamma_4(x) \cap \Gamma(z)$} \psline[linewidth=0.25pt](0.75,-0.75)(1.6,-0.75)
\rput(0,-1.9){\small $[\Gamma(z)]$}
}
\end{pspicture}
\caption{$[\Gamma(y)]$ for $y \in \Gamma_2(x)$ and $[\Gamma(z)]$ for $z \in \Gamma_3(x)$} \label{figure:b,c}
\end{figure}
\end{center}
\begin{lemma} \label{lemma:k-bounds}
Assume that $\Gamma$ is locally $n \times n$ grid. Suppose that there exists a constant $m \in \{2, \ldots, n-1\}$ such that $c_2(x,y) \geq 2m$ for all $(x,y) \in \Gamma_2$. Then for any $x \in \V(\Gamma)$,
\begin{align}
k_2(x) &\leq \frac{n^2(n-1)^2}{2m}, \label{eq:k2} \\
k_3(x) &\leq k_2(x) \cdot \frac{(n-m)^2}{(m+1)^2} \leq \frac{n^2(n-1)^2(n-m)^2}{2m(m+1)^2}, \label{eq:k3} \\
\intertext{and for any $4 \leq i \leq \epsilon(x)$, with $\epsilon$ as in (\ref{eq:epsilon}),}
k_i(x) &\leq k_{i-1}(x) \cdot \frac{(n-m-1)^2}{(m+1)^2}. \label{eq:ki}
\end{align}
\end{lemma}
\begin{proof}
It follows from the hypothesis and equation (\ref{eq:b=c}) with $i = 2$ that
\[ n^2(n-1)^2 = \sum_{y \in \Gamma(x)} b_1(x,y) = \sum_{z \in \Gamma_2(x)} c_2(x,z) \geq k_2(x) \cdot 2m, \]
which then yields (\ref{eq:k2}). By Lemma \ref{lemma:parameters} (\ref{b2}) and (\ref{b3,c3}) we have $b_2(x,y) \leq (n-m)^2$ and $c_3(x,z) \geq (m+1)^2$ for any $y \in \Gamma_2(x)$ and $z \in \Gamma_3(x)$. So with $i = 3$ in (\ref{eq:b=c}) we obtain
\[ k_2(x) \cdot (n-m)^2 \geq \sum_{y \in \Gamma_2(x)} b_2(x,y) = \sum_{z \in \Gamma_3(x)} c_3(x,z) \geq k_3(x) \cdot (m+1)^2, \]
which then yields (\ref{eq:k3}). Similarly, if $4 \leq i \leq \epsilon(x)$, then by Lemma \ref{lemma:parameters} (\ref{b3,c3}) and (\ref{bi,ci}) we have $b_{i-1}(x,y) \leq (n-m-1)^2$ and $c_i(x,z) \geq (m+1)^2$ for any $y \in \Gamma_{i-1}(x)$ and $z \in \Gamma_i(x)$. So (\ref{eq:b=c}) gives us
\[ k_{i-1}(x) \cdot (n-m-1)^2 \geq \sum_{y \in \Gamma_{i-1}(x)} b_{i-1}(x,y) = \sum_{z \in \Gamma_i(x)} c_i(x,z) \geq k_i(x) \cdot (m+1)^2, \]
and (\ref{eq:ki}) follows.
\end{proof}
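To make the quantitative content of Lemma \ref{lemma:k-bounds} concrete: the ratio $(n-m-1)^2/(m+1)^2$ in (\ref{eq:ki}) is less than $1$ exactly when $m > (n-2)/2$, in which case the upper bounds on $k_i(x)$ decay geometrically. The following Python sketch (purely illustrative) tabulates the sequence of bounds:

```python
from fractions import Fraction as F

def k_bound_sequence(n, m, terms=8):
    """Upper bounds on k_2, k_3, k_4, ... given c_2 >= 2m everywhere:
    k_2 <= n^2 (n-1)^2 / (2m), k_3 <= k_2 (n-m)^2/(m+1)^2, and each later
    term is the previous one times (n-m-1)^2/(m+1)^2."""
    bounds = [F(n**2 * (n - 1)**2, 2 * m)]
    bounds.append(bounds[-1] * F((n - m)**2, (m + 1)**2))
    ratio = F((n - m - 1)**2, (m + 1)**2)
    while len(bounds) < terms:
        bounds.append(bounds[-1] * ratio)
    return bounds

# n = 5, m = 2: ratio = (5-2-1)^2/(2+1)^2 = 4/9 < 1, so the bounds on k_i
# shrink geometrically and only finitely many distance classes are nonempty.
bounds = k_bound_sequence(5, 2)
print([float(b) for b in bounds[:4]])
```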
The distance diagram of a locally grid graph $\Gamma$ satisfying the hypotheses of Lemma \ref{lemma:k-bounds} is shown in Figure \ref{figure:distancediagram}.
\begin{center}
\begin{figure}[ht]
\begin{pspicture}(-7,-0.7)(7,1.25)
\distancediagram
\rput(-2.6,0.25){\small $\geq 2m$} \rput(0.3,0.25){\small $\leq (n-m)^2$}
\rput(2.45,0.25){\small $\geq (m+1)^2$} \rput(6,0.25){\small $\leq (n-m-1)^2$}
\end{pspicture} \\
\begin{pspicture}(-7,-1.25)(7,0.7)
\distancediagramsmall
\rput(-1.6,0.25){\small $\geq (m+1)^2$}
\rput(2,0.25){\small $\leq (n-m-1)^2$}
\end{pspicture}
\caption{Distance diagram for $\Gamma$ with respect to the vertex $x$, assuming $c_2(x',y') \geq 2m$ for all $(x',y') \in \Gamma_2$}
\label{figure:distancediagram}
\end{figure}
\end{center}
\begin{proof}[Proof of Theorem \ref{maintheorem:order-diameter} (\ref{>n-1})]
By Lemmas \ref{lemma:basic-mxn} (\ref{mu-cycles}) and \ref{lemma:basic-nxn} (\ref{basic-mu}), each $\mu$-graph has order at least $4$ and at most $2n$. Assume that each $\mu$-graph has order at least $n-1$. Let $(x,y) \in \Gamma_2$. Since $|\mu(x,y)| = c_2(x,y)$ is even, we have $c_2(x,y) \geq n-1$ if $n$ is odd and $c_2(x,y) \geq n$ if $n$ is even. Thus $c_2(x,y) \geq 2m$, where $m = \lceil (n-1)/2 \rceil$. Let $M = (n-1)/2$, $\alpha_0 = (n-m-1)^2/(m+1)^2$, and $\alpha = (n-M-1)^2/(M+1)^2 = (n-1)^2/(n+1)^2$. Then $M \leq m$ and $\alpha_0 \leq \alpha < 1$, and applying Lemma \ref{lemma:k-bounds} we get
\begin{align*}
|\V(\Gamma)|
&= 1 + k_1(x) + k_2(x) + k_3(x) + \ldots \\
&\leq 1 + n^2 + \frac{n^2(n-1)^2}{2m} + \frac{n^2(n-1)^2}{2m} \cdot \frac{(n-m)^2}{(m+1)^2} \, (1 + \alpha_0 + \alpha_0^2 + \ldots) \\
&\leq 1 + n^2 + \frac{n^2(n-1)^2}{2M} + \frac{n^2(n-1)^2}{2M} \cdot \frac{(n-M)^2}{(M+1)^2} \, (1 + \alpha + \alpha^2 + \ldots) \\
&\leq 1 + n^2 + n^2(n-1) + n^2(n-1) \cdot 1 \cdot \frac{1}{1-\alpha} \\
&= \frac{n^4 + 5n^3 - n^2 - n + 4}{4} \\
&\leq \frac{n^4 + 5n^3}{4}.
\end{align*}
Thus there are only finitely many graphs $\Gamma$ with these properties. Now let
\begin{equation} \label{eq:f}
f(n) = \frac{\ln\left(n^2(n-1)\right)}{2\,\ln\left(\frac{n+1}{n-1}\right)}
\end{equation}
and $D = \lceil f(n) \rceil$. Take $i = 3+D$. Then $i \geq 4$, so that
\[ k_i(x) \leq \frac{n^2(n-1)^2}{2M} \cdot \frac{(n-M)^2}{(M+1)^2} \cdot \alpha^{i-3} = n^2(n-1)\,\alpha^D. \]
It can be shown that $f(n) \notin \mathbb{Z}$ for any $n \geq 2$, so $D > f(n)$ and
\[ 2D \ln\left(\frac{n+1}{n-1}\right) > \ln\left(n^2(n-1)\right). \]
Hence
\[ \alpha^D = \left(\frac{n-1}{n+1}\right)^{2D} < \frac{1}{n^2(n-1)} \]
and we obtain
\[ k_i(x) \leq n^2(n-1)\,\alpha^D < 1. \]
Therefore $k_i(x) = 0$. It follows that $\diam(\Gamma) \leq i-1 = D+2$, which completes the proof.
\end{proof}
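The algebra in the last three lines of the display above can be confirmed by exact rational arithmetic; the following Python sketch (illustrative only) checks that $1 + n^2 + n^2(n-1) + n^2(n-1)/(1-\alpha) = (n^4 + 5n^3 - n^2 - n + 4)/4$ for $\alpha = (n-1)^2/(n+1)^2$:

```python
from fractions import Fraction as F

def order_bound_identity(n):
    """Check 1 + n^2 + n^2(n-1) + n^2(n-1)/(1 - alpha)
           = (n^4 + 5n^3 - n^2 - n + 4)/4 for alpha = (n-1)^2/(n+1)^2."""
    alpha = F(n - 1, n + 1) ** 2
    lhs = 1 + n**2 + n**2 * (n - 1) + n**2 * (n - 1) / (1 - alpha)
    rhs = F(n**4 + 5 * n**3 - n**2 - n + 4, 4)
    return lhs == rhs

print(all(order_bound_identity(n) for n in range(2, 100)))  # True
```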
\begin{remark} \label{remark:diambounds}
It follows from inequality (3) in \cite{logbounds} that
\[ \frac{2}{n} \leq \ln\left(\frac{n+1}{n-1}\right) \leq \frac{2n}{n^2-1}. \]
Hence the function $f(n)$ in (\ref{eq:f}) satisfies
\[ \frac{1}{4}\left(\frac{n^2-1}{n}\right)\ln\left(n^2(n-1)\right) \leq f(n) \leq \frac{1}{4}\,n\ln\left(n^2(n-1)\right), \]
from which we deduce
\[ \frac{3}{4}\,(n-1)\ln(n-1) < f(n) < \frac{3}{4}\,n\ln(n). \]
Thus in Theorem \ref{maintheorem:order-diameter} (\ref{>n-1}), the upper bound on $\diam(\Gamma)$ is $O(n\ln(n))$.
\end{remark}
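The inequalities of this remark are straightforward to confirm numerically; a Python sketch (floating point, illustrative only):

```python
from math import log

def f(n):
    """The function f(n) = ln(n^2 (n-1)) / (2 ln((n+1)/(n-1)))."""
    return log(n**2 * (n - 1)) / (2 * log((n + 1) / (n - 1)))

def bounds_hold(n):
    """Both chains of bounds on f(n) stated in the remark."""
    lower = 0.25 * ((n**2 - 1) / n) * log(n**2 * (n - 1))
    upper = 0.25 * n * log(n**2 * (n - 1))
    return (lower <= f(n) <= upper
            and 0.75 * (n - 1) * log(n - 1) < f(n) < 0.75 * n * log(n))

print(all(bounds_hold(n) for n in range(2, 1000)))  # True
```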
\section{Results on maximal cliques} \label{sec:cliques}
In this section we prove some technical results on maximal cliques of locally $n \times n$ grid graphs.
Lemma \ref{lemma:mu-clique} and Corollary \ref{corollary:mu-clique} generalise the result in the first part of the proof of \cite[Lemma 6]{4by4}, and use very similar arguments. If $C$ is a maximal clique in $\Gamma$ and $x$ is a vertex not contained in $C$, it follows from Lemma \ref{lemma:maxcliques2} (\ref{distance1}) that $|C \cap \Gamma(x)| = 0$ or $2$. In either case, $C$ contains vertices at distance at least $2$ from $x$. In particular, $C \cap \Gamma_2(x)$ is nonempty exactly when $d_\Gamma(x,C) = 1$ or $2$; in these cases the set $S$ defined by
\begin{equation} \label{eq:S}
S := \big\{ w \in \Gamma(x) \ : \ w \notin C \text{ and } w \in \mu(x,y) \text{ for some } y \in C \cap \Gamma_2(x) \big\}
\end{equation}
has at least $4$ vertices.
\begin{lemma} \label{lemma:mu-clique}
Assume that $\Gamma$ is locally $n \times n$ grid with $n \geq 3$. Let $x \in \V(\Gamma)$ and $C$ a maximal clique in $\Gamma$ with $d_\Gamma(x,C) = 1$ or $2$. Let $\Delta$ be the union of all graphs $\mu(x,y)$ where $y \in C \cap \Gamma_2(x)$, and let $S$ be as in (\ref{eq:S}). Define
\[ T = \left\{ \begin{aligned} &\varnothing &&\text{if } d_\Gamma(x,C) = 2, \\ &S \cap (\Gamma(u) \cup \Gamma(v)) &&\text{if } C \cap \Gamma(x) = \{u,v\}. \end{aligned} \right. \]
Then the following hold:
\begin{enumerate}[(1)]
\item \label{mu-number} Each vertex $w \in S$ lies in a unique $\mu$-graph in $\Delta$ if $w \in T$, and in exactly two $\mu$-graphs in $\Delta$ if $w \in S \setminus T$.
\item \label{mu-meet} The intersection of any two distinct $\mu$-graphs in $\Delta$ is either $C \cap \Gamma(x)$, or the union of $C \cap \Gamma(x)$ and an edge in $S \setminus T$.
\item \label{matching} The set of all edges $e$ such that $e$ is in exactly two $\mu$-graphs in $\Delta$ is a perfect matching on $S \setminus T$.
\end{enumerate}
\end{lemma}
\begin{proof}
We first prove statement (\ref{mu-number}). Let $w \in S$. Then $w \sim_\Gamma y$ for some $y \in C$, so that $d_\Gamma(w,C) = 1$. Thus $|\Gamma(w) \cap C| = 2$ by Lemma \ref{lemma:maxcliques2} (\ref{distance1}), and there is a unique vertex $z \in \Gamma(w) \cap C$ distinct from $y$. Hence $w$ lies in at most two $\mu$-graphs in $\Delta$, since each such $\mu$-graph is $\mu(x,y')$ for some $y' \in \Gamma(w) \cap C \cap \Gamma_2(x)$. If $w \notin T$ then $\Gamma(w) \cap C \cap \Gamma(x) = \varnothing$, so we must have $z \in \Gamma_2(x)$ and $w \in \mu(x,z)$. Otherwise $z \in \Gamma(x)$, so $\mu(x,y)$ is the unique $\mu$-graph in $\Delta$ that contains $w$. This proves statement (\ref{mu-number}).
We now prove statement (\ref{mu-meet}). First observe that $C \cap \Gamma(x)$ is contained in every $\mu$-graph in $\Delta$: this is vacuously true if $d_\Gamma(x,C) = 2$ since then $C \cap \Gamma(x) = \varnothing$, while if $d_\Gamma(x,C) = 1$ then $C \cap \Gamma(x) \subseteq \Gamma(x) \cap \Gamma(y) = \mu(x,y)$ for each $y \in C \setminus \Gamma(x)$, since $C$ is a clique. Let $\mu(x,y)$ and $\mu(x,z)$ be distinct $\mu$-graphs in $\Delta$ with a common vertex $w \notin C \cap \Gamma(x)$. Then $\Gamma(w) \cap C = \{y,z\} \subseteq \Gamma_2(x)$ by Lemma \ref{lemma:maxcliques2}, so $\Gamma(w) \cap C \cap \Gamma(x) = \varnothing$ and hence $w \in S \setminus T$. Also $\{w,y,z\}$ is a triangle, so by Lemma \ref{lemma:basic-nxn} (\ref{basic-maxcliques}) there is a unique maximal clique $C'$ of $\Gamma$ which contains $\{w,y,z\}$. Then $w \in C' \cap \Gamma(x)$, so $d_\Gamma(x,C') = 1$ and $|C' \cap \Gamma(x)| = 2$ by Lemma \ref{lemma:maxcliques2} (\ref{distance1}). Let $w'$ be the unique vertex in $C' \cap \Gamma(x)$ distinct from $w$. Then $w'$ is a common vertex of $\mu(x,y)$ and $\mu(x,z)$, and $w' \in \Gamma(w)$. So $w' \in S \setminus T$ by statement (\ref{mu-number}) above, as illustrated in Figure \ref{figure:mu-clique} on the left. Suppose that there is a third vertex $w''$ common to $\mu(x,y)$ and $\mu(x,z)$ with $w'' \notin C \cap \Gamma(x)$. Then again there is a unique maximal clique $C''$ of $\Gamma$ containing $\{w'',y,z\}$. Now $C' \neq C$, and it follows from Lemma \ref{lemma:basic-nxn} (\ref{basic-maxcliques}) that $C$ and $C'$ are the only two maximal cliques containing the edge $\{y,z\}$. So $C''$ is either $C$ or $C'$. Since $w'' \notin \{w,w'\} = C' \cap \Gamma(x)$, $C'' \neq C'$. Therefore $C'' = C$, so $w'' \in C \cap \Gamma(x)$, contradiction. Hence $\mu(x,y) \cap \mu(x,z) = (C \cap \Gamma(x)) \cup \{w,w'\}$, and $\{w,w'\}$ is an edge in $S \setminus T$. This proves statement (\ref{mu-meet}).
Finally we prove statement (\ref{matching}). Let $\Delta'$ denote the subgraph of $\Gamma$ consisting of all edges $e$ such that $e$ is in exactly two $\mu$-graphs in $\Delta$. By statement (\ref{mu-number}) above we have $\V(\Delta') \subseteq S \setminus T$. Also each $w \in S \setminus T$ is contained in exactly two $\mu$-graphs $\mu(x,y)$ and $\mu(x,z)$ in $\Delta$, and by statement (\ref{mu-meet}) we have $\mu(x,y) \cap \mu(x,z) = (C \cap \Gamma(x)) \cup e$ for some edge $e$ in $S \setminus T$. So $e = \{w,w'\}$ for some $w' \in S \setminus T$. It follows from statement (\ref{mu-number}) that $\mu(x,y)$ and $\mu(x,z)$ are the only $\mu$-graphs in $\Delta$ which contain $w'$, and thus $\{w,w'\} \in \E(\Delta')$ by the definition of $\Delta'$. So $w \in \V(\Delta')$ and $w$ is contained in an edge of $\Delta'$. Since $w$ is arbitrary, it follows that $S \setminus T \subseteq \V(\Delta')$, so $S \setminus T = \V(\Delta')$, and each vertex of $S \setminus T$ lies in an edge of $\Delta'$. Suppose that there are two distinct edges $e_1$ and $e_2$ of $\Delta'$ with a common vertex $w$. Then $e_1 = \mu(x,y) \cap \mu(x,z)$ and $e_2 = \mu(x,y') \cap \mu(x,z')$ for some $y, z, y', z' \in C \cap \Gamma_2(x)$ with $\{y,z\} \neq \{y',z'\}$. Thus $\{y,z,y',z'\} \subseteq \Gamma(w) \cap C$, as illustrated in Figure \ref{figure:mu-clique} on the right. This implies that $|\Gamma(w) \cap C| \geq 3$, contradiction. Therefore no two edges in $\Delta'$ have a common vertex, and so $\Delta'$ is a perfect matching on $S \setminus T$. This proves statement (\ref{matching}).
\end{proof}
\begin{center}
\begin{figure}
\begin{pspicture}(-6,-1.5)(6,2.5)
\rput(-4,0){
\xdiagram
\rput(-1.5,-0.5){\pnode(0,0.25){w} \pnode(0,-0.25){w'}} \rput(1.5,0){\pnode(0,0.25){y} \pnode(0,-0.25){z}}
\ncline[linestyle=dashed]{w}{y} \ncline[linestyle=dashed]{w'}{z}
\psellipse[linestyle=dashed,fillstyle=solid,fillcolor=white](-1.5,-0.5)(0.55,0.65)
\psellipse[linestyle=dashed](1.5,0)(0.75,0.85)
\rput{7}{\psellipse*[linecolor=white](-1.05,-0.25)(0.1,0.22)} \rput{9}{\psellipse*[linecolor=white](0.755,-0.27)(0.1,0.203)} \pscircle*[linecolor=white](0.9,-0.35){2pt} \psline[linewidth=1.1pt,linecolor=lightgray](0.7,0.09)(0.8,0.065) \psline[linewidth=1.1pt,linecolor=lightgray](0.7,0.115)(0.97,0.16)
\psellipse*[linecolor=lightgray](1.5,0)(0.55,0.65)
\rput(-1.5,-0.5){\cnode*(0,0.25){2pt}{w} \cnode*(0,-0.25){2pt}{w'} \ncline{w}{w'} \rput(-0.3,0.25){\small $w$} \rput(-0.25,-0.2){\small $w'$}}
\rput(1.5,0){\cnode*(0,0.25){2pt}{y} \cnode*(0,-0.25){2pt}{z} \ncline{y}{z} \rput(0.25,0.25){\small $y$} \rput(0.25,-0.25){\small $z$}}
\rput(0,-0.25){\small $C'$}
}
\rput(4,0){
\xdiagram
\psellipse*[linecolor=lightgray](1.5,0)(0.75,0.9)
\rput(-1.2,-0.25){\cnode*(-0.9,0.25){2pt}{e1} \cnode*(0,0){2pt}{w} \cnode*(-0.25,-0.9){2pt}{e2} \ncline{e1}{w} \ncline{w}{e2}
\rput(-0.45,-0.05){\small $e_1$} \rput(0.1,-0.55){\small $e_2$} \rput(0,0.25){\small $w$}}
\rput(1.5,0){\cnode*(-0.2,0.5){2pt}{y} \cnode*(0.3,0.2){2pt}{z} \cnode*(0.3,-0.2){2pt}{y'} \cnode*(-0.2,-0.5){2pt}{z'}
\rput(-0.2,0.8){\small $y$} \rput(0.3,0.45){\small $z$} \rput(0.3,-0.45){\small $y'$} \rput(-0.2,-0.8){\small $z'$}}
\ncline{w}{y} \ncline{w}{z} \ncline{w}{y'} \ncline{w}{z'}
}
\end{pspicture}
\caption{$\Gamma$ and $C$ as in Lemma \ref{lemma:mu-clique}, with $C$ shown in gray and $\Gamma(x) \cap C = \varnothing$ if $d_\Gamma(x,C) = 2$}
\label{figure:mu-clique}
\end{figure}
\end{center}
\begin{corollary} \label{corollary:mu-clique}
Assume that $\Gamma$ is locally $n \times n$ grid with $n \geq 3$. Let $x$, $C$, and $S$ be as in Lemma \ref{lemma:mu-clique}. Then
\[ 2|S| = \sum_{y \in C \cap \Gamma_2(x)} c_2(x,y) \equiv 0 \pmod{4}. \]
\end{corollary}
\begin{proof}
We count in two ways the number $\sigma := \big|\{ (w,y) \ : \ y \in C \cap \Gamma_2(x), \ w \in \Gamma(x) \cap \Gamma(y) \}\big|$. First we have
\[ \sigma = \sum_{y \in C \cap \Gamma_2(x)} \big|\{ w \in \Gamma(x) \ : \ w \sim_\Gamma y \}\big| = \sum_{y \in C \cap \Gamma_2(x)} |\mu(x,y)|. \]
Next we have
\begin{equation}
\sigma
= \sum_{w \in \Gamma(x)} \big|\{ y \in C \cap \Gamma_2(x) \ : \ y \sim_\Gamma w \}\big|
= \sum_{w \in \Gamma(x)} \big|\{ y \in C \cap \Gamma_2(x) \ : \ w \in \mu(x,y) \}\big|. \label{eq:sigma}
\end{equation}
The nonzero terms in the sum on the right side of (\ref{eq:sigma}) correspond exactly to those $w \in \V(\Delta) = S \cup (C \cap \Gamma(x))$. We apply Lemma \ref{lemma:mu-clique} to the right side of (\ref{eq:sigma}). If $d_\Gamma(x,C) = 2$ then $C \cap \Gamma(x) = \varnothing$, and each vertex in $S$ lies in exactly two $\mu$-graphs in $\Delta$. Hence
\[ \sum_{w \in \Gamma(x)} \big|\{ y \in C \cap \Gamma_2(x) \ : \ w \in \mu(x,y) \}\big| = \sum_{w \in S} \big|\{ y \in C \cap \Gamma_2(x) \ : \ w \in \mu(x,y) \}\big| = 2|S|. \]
Now suppose that $d_\Gamma(x,C) = 1$. Then $|C \cap \Gamma(x)| = 2$; let $\{u,v\} = C \cap \Gamma(x)$. Let $T$ denote the set of all vertices $w \in S$ such that $\Gamma(w) \cap C \cap \Gamma(x) \neq \varnothing$, that is, $T = S \cap (\Gamma(u) \cup \Gamma(v))$. By Lemma \ref{lemma:mu-clique} (\ref{mu-number}) each vertex in $S \setminus T$ is in exactly two $\mu$-graphs in $\Delta$, while each vertex in $T$ is in a unique $\mu$-graph in $\Delta$. Note also that no vertex in $T$ is adjacent to both $u$ and $v$, for otherwise such a vertex would have three neighbours in $C$, contradiction. It follows that $S \cap \Gamma(u) \cap \Gamma(v) = \varnothing$ and $T$ is the disjoint union of $S \cap \Gamma(u)$ and $S \cap \Gamma(v)$, as illustrated in Figure \ref{figure:mu-vertices-cor}. By Lemma \ref{lemma:maxcliques2} (\ref{distance1}) we have $|C \cap \Gamma_2(x)| = n-1$, so there are $n-1$ $\mu$-graphs in $\Delta$, each of which contains $\{u,v\}$. Since each vertex in $T$ lies in a unique $\mu$-graph in $\Delta$, it follows that $|S \cap \Gamma(u)| = |S \cap \Gamma(v)| = n-1$. Thus $|T| = 2(n-1)$. Recalling that in the right side of (\ref{eq:sigma}), the nonzero contributions come from $w \in S \cup \{u,v\}$, we have
\begin{align*}
\sum_{w \in \Gamma(x)} \big|\{ y \in C \cap \Gamma_2(x) \ : \ y \sim_\Gamma w \}\big|
&= \sum_{w \in S \cup \{u,v\}} \big|\{ y \in C \cap \Gamma_2(x) \ : \ y \sim_\Gamma w \}\big| \\
&= 2|S \setminus T| + (n-1) |\{u,v\}| + 1 \cdot |T| \\
&= 2(|S| - 2(n-1)) + (n-1) \cdot 2 + 1 \cdot 2(n-1) \\
&= 2|S|.
\end{align*}
Therefore in both cases $2|S| = \sigma$. By Lemma \ref{lemma:mu-clique} (\ref{matching}) the set $S \setminus T$ can be partitioned into subsets of size $2$, so $|S \setminus T|$ must be even. Since also $|T| = 2(n-1)$ is even, it follows that $|S|$ is even. Hence $\sigma = 2|S| \equiv 0 \pmod{4}$, which completes the proof.
\end{proof}
\begin{center}
\begin{figure}
\begin{pspicture}(-2.7,-1.5)(2.7,2.5)
\psellipse(0,0)(1,1.5) \rput(0,1.8){\small $\Gamma(x)$}
\cnode*(-0.2,0.9){2pt}{u} \cnode*(0.2,0.9){2pt}{v} \rput(-0.2,1.15){\small $u$} \rput(0.2,1.15){\small $v$} \ncline{u}{v}
\pnode(-0.5,0.55){Tu1} \pnode(0.5,0.55){Tv1} \ncline{u}{Tu1} \ncline{v}{Tv1}
\pnode(-0.5,-0.25){Tu2} \pnode(0.5,-0.25){Tv2} \pnode(-0.35,-0.9){Su} \pnode(0.35,-0.9){Sv} \ncline{Tu2}{Su} \ncline{Tv2}{Sv}
\psellipse[fillcolor=white,fillstyle=solid](-0.5,0.15)(0.25,0.4) \psline(-0.65,0.15)(-1.25,0.15) \rput(-2,0.15){\small $S \cap \Gamma(u)$}
\psellipse[fillcolor=white,fillstyle=solid](0.5,0.15)(0.25,0.4) \psline(0.65,0.15)(1.25,0.15) \rput(2,0.15){\small $S \cap \Gamma(v)$}
\rput(0,-0.9){\psellipse[fillcolor=white,fillstyle=solid](0,0)(0.55,0.35) \rput(0,0){\small $S \setminus T$}}
\end{pspicture}
\caption{$\V(\Delta) = S \cup \{u,v\}$, $T = (S \cap \Gamma(u)) \cup (S \cap \Gamma(v))$} \label{figure:mu-vertices-cor}
\end{figure}
\end{center}
For the next result we assume that all $\mu$-graphs have order at least $2(n-1)$. In this case Lemma \ref{lemma:maxcliques2} states that any $(x,y) \in \Gamma_2$ satisfies the following: if $c_2(x,y) = 2n$ then $d_\Gamma(x,C) = 1$ for all maximal cliques $C$ of $\Gamma$ containing $y$, and if $c_2(x,y) = 2(n-1)$ then $d_\Gamma(x,C) = 1$ for $2(n-1)$ maximal cliques $C$ containing $y$ and $d_\Gamma(x,C) = 2$ for the remaining two cliques.
\begin{lemma} \label{lemma:cliques-bounded}
Assume that $\Gamma$ is connected and locally $n \times n$ grid, and that all $\mu$-graphs in $\Gamma$ have order at least $2(n-1)$. Then $\diam(\Gamma) \leq 3$, and $d_\Gamma(x,C) = 1$ or $2$ for any $x \in \V(\Gamma)$ and maximal clique $C$ not containing $x$. Furthermore, the following hold:
\begin{enumerate}[(1)]
\item \label{distance1-bounded} If $d_\Gamma(x,C) = 1$ then the number of vertices $y \in C \cap \Gamma_2(x)$ satisfying $c_2(x,y) = 2(n-1)$ is even. Moreover, if $n$ is even then $c_2(x,z) = 2n$ for some $z \in C \cap \Gamma_2(x)$.
\item \label{distance2-bounded} If $d_\Gamma(x,C) = 2$ then each vertex $y \in C \cap \Gamma_2(x)$ satisfies $c_2(x,y) = 2(n-1)$. Moreover, if $n$ is even then $C \nsubseteq \Gamma_2(x)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $x \in \V(\Gamma)$ and let $C$ be a maximal clique in $\Gamma$ not containing $x$. It follows from Lemma \ref{lemma:parameters} (\ref{b3,c3}) that $b_3(x,y) \leq (n-(n-1)-1)^2 = 0$ for all $y \in \Gamma_3(x)$, which implies that $k_i(x) = 0$ for all $i \geq 4$. By connectedness $\diam(\Gamma) \leq 3$. Using $m = n-1$ in (\ref{eq:k3}) we have
\[ |\Gamma_3(x)| = k_3(x) \leq \frac{n-1}{2} < n+1 = |C|, \]
and thus $C \nsubseteq \Gamma_3(x)$. Since $\diam(\Gamma) \leq 3$, either $d_\Gamma(x,C) = 1$ or $d_\Gamma(x,C) = 2$. Let $r = |C \cap \Gamma_2(x)|$ and let $s$ be the number of vertices $y \in C \cap \Gamma_2(x)$ with $c_2(x,y) = 2(n-1)$. Then $c_2(x,y) = 2n$ for the remaining $r-s$ vertices, and
\[ \sum_{y \in C \cap \Gamma_2(x)} c_2(x,y) = s \cdot 2(n-1) + (r-s) \cdot 2n = 2(rn - s). \]
By Corollary \ref{corollary:mu-clique} this number is divisible by $4$. Hence $t := rn - s$ is even. Suppose first that $d_\Gamma(x,C) = 1$. Then $r = n-1$ by Lemma \ref{lemma:maxcliques2} (\ref{distance1}), so $t = n(n-1) - s$, and since $t$ is even, $s$ must also be even. This proves the first part of statement (\ref{distance1-bounded}). If in this case $n$ is even then $r = n-1$ is odd and so $r \neq s$ since $s$ is even, whence $c_2(x,z) = 2n$ for some $z \in C \cap \Gamma_2(x)$ and statement (\ref{distance1-bounded}) is proved. Now suppose that $d_\Gamma(x,C) = 2$. Then $c_2(x,y) \leq 2(n-1)$ for any $y \in C \cap \Gamma_2(x)$ by Lemma \ref{lemma:maxcliques2} (\ref{distance2}), and together with the hypothesis this gives us $c_2(x,y) = 2(n-1)$ for all $y \in C \cap \Gamma_2(x)$. So the first part of statement (\ref{distance2-bounded}) holds. Thus $s = r$; also $r \geq (n-1) + 1 = n$ by the second part of Lemma \ref{lemma:maxcliques2} (\ref{distance2}), so $r = n$ or $n+1$. If $r = n$ then $t = n^2 - n$, which is even for any $n$. If $r = n+1$ then $t = n^2 - 1$, which is even exactly when $n$ is odd; it follows that if $n$ is even then $|C \cap \Gamma_2(x)| = r \neq n+1 = |C|$, and thus $C \nsubseteq \Gamma_2(x)$. This completes the proof of statement (\ref{distance2-bounded}).
\end{proof}
\section{Introduction}
Throughout this paper all graphs are finite, simple, and undirected.
Let $m$ and $n$ be integers. An \emph{$m \times n$ grid} (also known as the \emph{$m \times n$ lattice graph}) is the Cartesian product $K_m \square K_n$ of two complete graphs, one with order $m$ and the other with order $n$. It has as vertices all ordered pairs $(i,j)$, $i \in \{1, \ldots, m\}$ and $j \in \{1, \ldots, n\}$, and as edges all $2$-sets of ordered pairs that agree in exactly one coordinate. If $m, n \geq 2$ then an $m \times n$ grid has diameter $2$. The $n \times n$ grid, sometimes called the lattice graph $L_2(n)$ of order $n$, is isomorphic to the Hamming graph $H(2,n)$, and has automorphism group $S_n \wr S_2$ which acts transitively of rank $3$ on its vertex set.
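As a quick computational illustration (an aside of ours, not part of the paper's development), the following sketch builds $K_m \square K_n$ directly from the definition above and checks the order, the regularity, and the diameter-$2$ claim in a small case.

```python
from itertools import product

def grid_graph(m, n):
    """The m x n grid K_m [] K_n: vertices are ordered pairs (i, j);
    two pairs are adjacent exactly when they agree in one coordinate."""
    V = list(product(range(m), range(n)))
    adj = {u: {v for v in V
               if u != v and (u[0] == v[0]) != (u[1] == v[1])}
           for u in V}
    return V, adj

V, adj = grid_graph(3, 4)
assert len(V) == 3 * 4                                    # order mn
assert all(len(adj[u]) == (3 - 1) + (4 - 1) for u in V)   # ((m-1)+(n-1))-regular
# Diameter 2 for m, n >= 2: any two distinct non-adjacent
# vertices differ in both coordinates and share a common neighbour.
assert all(u == v or v in adj[u] or adj[u] & adj[v]
           for u in V for v in V)
```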
For any class $\mathcal{G}$ of graphs, a graph is said to be \emph{locally $\mathcal{G}$} if the induced subgraph on the neighbourhood of any vertex is isomorphic to a graph in $\mathcal{G}$. In particular, a graph is said to be \emph{locally grid} if $\mathcal{G}$ is the class of all grid graphs. Locally grid graphs were first studied in 1977 by Buekenhout and Hubaut in \cite{localpolar}, where they arise as adjacency graphs of certain locally polar spaces. In particular, they exhibit two infinite families of graphs which provide examples of locally $n \times n$ grid graphs for all $n \geq 4$ \cite[Section 2.3]{localpolar}. (One of these families consists of the Johnson graphs, and the other consists of quotients of the Johnson graphs by an antipodal partition.) Families of locally $m \times n$ grid graphs which have been completely classified include the subcases where $m = 2$ \cite{4by4}, $m = 3$ \cite{3byq} (see also Remark \ref{remark:3x3}), and $m = n = 4$ \cite{4by4}. The first are all triangular graphs, and the second are line graphs of certain connected partial linear spaces. The third classification, for $m = n = 4$, yields exactly four graphs, namely the Johnson graph $J(8,4)$ and its quotient $\frac{1}{2}J(8,4)$, and two graphs on $40$ vertices.
A \emph{$\mu$-graph} of a non-complete graph is an induced subgraph on the set of common neighbours of two vertices at distance two. Blokhuis and Brouwer showed in \cite{4by4} that any $\mu$-graph of a locally grid graph is a union of cycles of even length, and that if each $\mu$-graph is a union of $4$-cycles then the graph is either a Johnson graph or a quotient of a Johnson graph. Furthermore, if all $\mu$-graphs have the maximum possible order (that is, $2m$ if the graph is locally $m \times n$ grid with $m \leq n$) then the graph is strongly regular and the parameters are known. In \cite{bilinear}, Gavrilyuk and Koolen considered locally $m \times n$ grid graphs with $n \geq m \geq 3$ whose $\mu$-graphs are all $6$-cycles, and with the additional condition that for each pair of vertices $x$ and $y$ at distance two, there are $(m-3)(n-3)$ vertices adjacent to $y$ and at distance three from $x$. They characterised such graphs as certain quotients of the graph of bilinear $(d \times e)$-forms over the field $\F_2$ where $m = 2^d-1$ and $n = 2^e-1$.
In this paper we undertake a general study of locally $n \times n$ grid graphs extending some of the results in \cite{4by4}. Our first result is a general characterisation for the case where all $\mu$-graphs are large enough. We denote the vertex set of the graph $\Gamma$ by $\V(\Gamma)$ and its diameter by $\diam(\Gamma)$.
\begin{theorem} \label{maintheorem:order-diameter}
Assume that $\Gamma$ is connected and locally $n \times n$ grid for some $n \geq 2$. Then any $\mu$-graph has even order at least $4$ and at most $2n$.
\begin{enumerate}[(1)]
\item \label{>n-1} If all $\mu$-graphs of $\Gamma$ have order at least $n-1$, then there are only finitely many such graphs $\Gamma$ and these satisfy
\[ |\V(\Gamma)| \leq \frac{n^3(n+5)}{4} \quad \text{and} \quad \diam(\Gamma) \leq 2 + \left\lceil \frac{\ln(n^2(n-1))}{2\,\ln\left(\frac{n+1}{n-1}\right)} \right\rceil. \]
\item \label{>2(n-1)} Further, if all $\mu$-graphs of $\Gamma$ have order at least $2(n-1)$, then
\[ |\V(\Gamma)| \leq \left\lfloor \frac{(n^2 + 1)(n+1)}{2} \right\rfloor \quad \text{and} \quad \diam(\Gamma) \leq 3. \]
\item \label{=2(n-1)} In part (\ref{>2(n-1)}), $|\V(\Gamma)| = \lfloor (n^2 + 1)(n+1)/2 \rfloor$ if and only if all $\mu$-graphs have order equal to $2(n-1)$, and in this case $n$ is odd, $\diam(\Gamma) = 3$, and $\Gamma$ is a distance-regular antipodal $((n+1)/2)$-cover of $K_{n^2+1}$ with intersection array $\left(n^2,\, (n-1)^2,\, 1;\, 1,\, 2(n-1),\, n^2\right)$.
\end{enumerate}
\end{theorem}
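For illustration, consider the smallest case covered by part (\ref{=2(n-1)}) of Theorem \ref{maintheorem:order-diameter}, namely $n = 3$. Here
\[ |\V(\Gamma)| = \frac{(3^2+1)(3+1)}{2} = 20, \]
and $\Gamma$ is a distance-regular antipodal double cover of $K_{10}$ with intersection array $(9,\,4,\,1;\,1,\,4,\,9)$, so that $k_1 = 9$, $k_2 = 9 \cdot 4/4 = 9$ and $k_3 = 9 \cdot 1/9 = 1$, giving $1 + 9 + 9 + 1 = 20$ as claimed. The Johnson graph $J(6,3)$ realises these parameters (see Proposition \ref{proposition:3x3}).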
The upper bound on $\diam(\Gamma)$ in Theorem \ref{maintheorem:order-diameter} (\ref{>n-1}) is $O(n\ln(n))$ (see Remark \ref{remark:diambounds}).
There are examples of graphs satisfying the conditions of Theorem \ref{maintheorem:order-diameter} (\ref{=2(n-1)}). An infinite family of such graphs arises from a construction of Godsil and Hensel \cite{DRcovers}, which in turn is a special case of the construction given in \cite[Proposition 12.5.3]{BCN}. We describe this in Construction \ref{example:drg}.
\begin{theorem} \label{maintheorem:family}
For each odd prime power $n$ the graph $\Gamma^{(n)}$ in Construction \ref{example:drg} is locally $n \times n$ grid and satisfies the conditions in Theorem \ref{maintheorem:order-diameter} (\ref{=2(n-1)}). Furthermore each $\mu$-graph in $\Gamma^{(n)}$ is either connected or a union of cycles of equal length.
\end{theorem}
Theorem \ref{maintheorem:family} will follow from the technical Proposition \ref{proposition:family} which gives, in addition, local structural information and describes the $\mu$-graphs for the graphs in Construction \ref{example:drg}. In particular we show that the number of cycles in a $\mu$-graph is unbounded (see Proposition \ref{proposition:family} (\ref{cycles-lowerbound})).
In addition to the above, we also obtain technical results about maximal cliques in general locally $n \times n$ grid graphs. We apply these together with Theorem \ref{maintheorem:order-diameter} to the case where $n = 5$, and obtain the following.
\begin{theorem} \label{maintheorem:5by5}
Assume that $\Gamma$ is connected and locally $5 \times 5$ grid.
\begin{enumerate}[(1)]
\item \label{mu>8} If all $\mu$-graphs in $\Gamma$ have order at least $8$, then $\Gamma$ has diameter $3$ and all $\mu$-graphs of $\Gamma$ have order equal to $8$, and $\Gamma$ is a distance-regular antipodal triple cover of $K_{26}$ with diameter $3$ and intersection array $(25, 16, 1; 1, 8, 25)$.
\item \label{mu-constant} If all $\mu$-graphs in $\Gamma$ have constant order $|\mu|$, then either $|\mu| = 8$ and $\Gamma$ is as in part (\ref{mu>8}), or $|\mu| = 4$ and $\Gamma$ is the Johnson graph $J(10,5)$.
\end{enumerate}
\end{theorem}
There is at least one graph satisfying the conditions of Theorem \ref{maintheorem:5by5} (\ref{mu>8}) with all $\mu$-graphs of order $8$, namely, the graph in Construction \ref{example:drg} with $n = 5$.
The rest of the paper is organised as follows: In Section \ref{sec:prelims} we list elementary properties of locally $m \times n$ grid graphs. In Section \ref{sec:family} we introduce the infinite family of graphs mentioned above, and prove Theorem \ref{maintheorem:family}. We then restrict ourselves to the case where $m = n$, and in Section \ref{sec:basics} prove Theorem \ref{maintheorem:order-diameter} (\ref{>n-1}). We look at maximal cliques of locally $n \times n$ grid graphs in Section \ref{sec:cliques}. Finally, in Section \ref{sec:lowerbound} we restrict further to the case where all $\mu$-graphs have order at least $2(n-1)$, and prove Theorem \ref{maintheorem:order-diameter} (\ref{=2(n-1)}) and \ref{maintheorem:order-diameter} (\ref{>2(n-1)}). We apply some of these results to the case where $n = 5$ and prove Theorem \ref{maintheorem:5by5}.
\subsection*{Acknowledgement}
The first and second authors acknowledge the hospitality of the Centre for the Mathematics of Symmetry and Computation of UWA, where this research was carried out. The first author was supported by a Post-doctoral Research Award (FRASDP) of the University of the Philippines. The second author was supported by NSFC (11661039) and NSF of Jiangxi (2018ACB21001, 20171BAB201010, 20171BCB23046). The third author was supported by Australian Research Council grant DP130100106. The authors are grateful to Gordon Royle for pointing out the examples in Construction \ref{example:drg}, to Jonathan Hall for generously sharing his paper \cite{3byq}, and to Aart Blokhuis and Andries Brouwer, whose paper \cite{4by4} is the basis of this work and the source of many hours of mathematical joy.
\section{Graphs with $|\mu| \geq 2(n-1)$} \label{sec:lowerbound}
In this section we consider the special case where all $\mu$-graphs have order at least $2(n-1)$. The main results are Theorem \ref{maintheorem:order-diameter} (\ref{>2(n-1)}) and \ref{maintheorem:order-diameter} (\ref{=2(n-1)}). We apply these results to locally $3 \times 3$ grid and locally $5 \times 5$ grid graphs.
The subcase where all $\mu$-graphs have the maximum possible order $2n$ is covered by the remarks following \cite[Lemma, Section 2]{4by4}. We state this result below for locally $n \times n$ grid graphs.
\begin{theorem} \label{theorem:srg} \cite[Section 2]{4by4}
Assume that $\Gamma$ is connected and locally $n \times n$ grid, and that all $\mu$-graphs of $\Gamma$ have order $2n$. Then $\diam(\Gamma) = 2$ and $\Gamma$ is strongly regular with parameters
\[ \left( \frac{n^3 + n + 2}{2}, \ n^2, \ 2(n-1), \ 2n \right). \]
\end{theorem}
Indeed, in this subcase equation (\ref{eq:b=c}) with $i = 2$ and $i = 3$ gives us
\[ k_2(x) = \frac{n(n-1)^2}{2} \ \ \text{and} \ \ k_3(x) = 0 \]
for all $x \in \V(\Gamma)$, so $\diam(\Gamma) = 2$. All $\mu$-graphs have the same order, so $c_2(x,y)$ is constant for all $\{x,y\} \in \Gamma_2$; this together with Lemma \ref{lemma:basic-nxn} (\ref{basic-edges}) implies that $\Gamma$ is strongly regular.
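For illustration, with $n = 3$ the parameters in Theorem \ref{theorem:srg} become
\[ \left( \frac{3^3 + 3 + 2}{2}, \ 3^2, \ 2(3-1), \ 2 \cdot 3 \right) = (16, \ 9, \ 4, \ 6), \]
which are realised by the complement $\overline{K_4 \square K_4}$ of the $4 \times 4$ grid (see Proposition \ref{proposition:3x3}).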
Suppose now that some $\mu$-graph in $\Gamma$ has order $2(n-1)$. By (\ref{eq:k2}) and (\ref{eq:k3})
\begin{equation} \label{eq:k2,k3}
k_2(x) \leq \frac{n^2(n-1)}{2} \ \ \text{and} \ \ k_3(x) \leq \frac{n-1}{2}
\end{equation}
for all $x \in \V(\Gamma)$. With $k_{2,2(n-1)}(x)$ and $k_{2,2n}(x)$ as in (\ref{eq:k_2,2m}), counting the number of edges between $\Gamma(x)$ and $\Gamma_2(x)$ yields the following special case of (\ref{eq:sum-k_2,2m}):
\begin{equation} \label{eq:edges-k2}
n^2(n-1)^2 = 2(n-1)\,k_{2,2(n-1)}(x) + 2n\,k_{2,2n}(x).
\end{equation}
The left side of (\ref{eq:edges-k2}) is divisible by $2n(n-1)$. Hence $k_{2,2(n-1)}(x) \equiv 0 \pmod{n}$, $k_{2,2n}(x) \equiv 0 \pmod{n-1}$, and
\[ \frac{n(n-1)}{2} = \frac{k_{2,2(n-1)}(x)}{n} + \frac{k_{2,2n}(x)}{n-1}. \]
By (\ref{eq:k2-sum}) we have $k_2(x) = k_{2,2(n-1)}(x) + k_{2,2n}(x)$, and substituting from this into the equation above gives us
\begin{equation} \label{eq:k2-sum2}
k_2(x) = \frac{n(n-1)^2}{2} + \frac{k_{2,2(n-1)}(x)}{n}.
\end{equation}
\begin{proof}[Proof of Theorem \ref{maintheorem:order-diameter} (\ref{>2(n-1)}) and \ref{maintheorem:order-diameter} (\ref{=2(n-1)})]
Assume that all $\mu$-graphs have order at least $2(n-1)$. By Lemma \ref{lemma:cliques-bounded}, $\diam(\Gamma) \leq 3$. Thus $|\V(\Gamma)| = 1 + n^2 + k_2(x) + k_3(x)$ for any $x \in \V(\Gamma)$; this together with (\ref{eq:k2,k3}) gives the bound
\[ |\V(\Gamma)| \leq 1 + n^2 + \frac{n^2(n-1)}{2} + \frac{n-1}{2} = \frac{(n^2 + 1)(n+1)}{2} \]
and hence
\begin{equation} \label{eq:order-bounded}
|\V(\Gamma)| \leq \left\lfloor\frac{(n^2 + 1)(n+1)}{2}\right\rfloor.
\end{equation}
This proves Theorem \ref{maintheorem:order-diameter} (\ref{>2(n-1)}).
We claim that for any $x \in \V(\Gamma)$, $k_2(x) = n^2(n-1)/2$ if and only if $c_2(x,y) = 2(n-1)$ for all $y \in \Gamma_2(x)$. Indeed, if $c_2(x,y) = 2(n-1)$ for all $y \in \Gamma_2(x)$ then $k_{2,2(n-1)}(x) = k_2(x)$ and $k_{2,2m}(x) = 0$ for all $m \neq n-1$, and it follows from (\ref{eq:k2-sum2}) that
\[ k_2(x) = \frac{n(n-1)^2}{2} \cdot \frac{n}{n-1} = \frac{n^2(n-1)}{2}. \]
Conversely, suppose that $k_2(x) = n^2(n-1)/2$. Then from (\ref{eq:k2-sum2}) we get
\begin{align*}
k_{2,2(n-1)}(x)
&= n\,\left(k_2(x) - \frac{n(n-1)^2}{2}\right) \\
&= n\,\left(\frac{n^2(n-1)}{2} - \frac{n(n-1)^2}{2}\right) \\
&= \frac{n^2(n-1)}{2} \\
&= k_2(x).
\end{align*}
So $c_2(x,y) = 2(n-1)$ for all $y \in \Gamma_2(x)$, which proves the claim.
We now prove Theorem \ref{maintheorem:order-diameter} (\ref{=2(n-1)}). Assume first that equality holds in (\ref{eq:order-bounded}). Let $x \in \V(\Gamma)$. It follows from (\ref{eq:k2,k3}) that $k_2(x) = n^2(n-1)/2$. Thus, by the claim, $c_2(x,y) = 2(n-1)$ for all $y \in \Gamma_2(x)$. Since $x$ is arbitrary this proves that all $\mu$-graphs have order $2(n-1)$.
Conversely, assume that all $\mu$-graphs have order $2(n-1)$. Let $x \in \V(\Gamma)$ be arbitrary. Then $c_2(x,y) = 2(n-1)$ for all $y \in \Gamma_2(x)$, and thus $k_2(x) = n^2(n-1)/2$ by the claim. Suppose that $k_3(x) = 0$. Then
\[ |\V(\Gamma)| = 1 + n^2 + \frac{n^2(n-1)}{2} = \frac{n^3 + n^2 + 2}{2}. \]
By Lemma \ref{lemma:basic-nxn} (\ref{basic-triangles}), $n+1$ divides $2|\V(\Gamma)| = n^3 + n^2 + 2$, and hence $n+1$ divides $2$, contradiction. Thus $k_3(x) \neq 0$ and $\diam(\Gamma) = 3$. By (\ref{eq:k2,k3}) we have $2\,k_3(x) \leq n-1$, so that $2\,k_3(x) = n-s$ for some $s \in \{1, \ldots, n-1\}$. Therefore
\[ 2|\V(\Gamma)| = 2 + 2n^2 + 2\,k_2(x) + 2\,k_3(x) = n^3 + n^2 + n - s + 2, \]
which is divisible by $n+1$ if and only if $s = 1$. So $k_3(x) = (n-1)/2$ and $n$ is odd, and consequently equality holds in (\ref{eq:order-bounded}). This proves the first part of Theorem \ref{maintheorem:order-diameter} (\ref{=2(n-1)}), and also that in case of equality we have $\diam(\Gamma) = 3$ and $n$ odd.
It remains to show that $\Gamma$ is a distance-regular antipodal cover of $K_{n^2+1}$ whenever all $\mu$-graphs have order $2(n-1)$. By the hypothesis $c_2$ is constant on $\Gamma_2$, so we need only show that $b_2$ and $c_3$ are constant on $\Gamma_2$ and $\Gamma_3$, respectively. Let $x \in \V(\Gamma)$. By Lemma \ref{lemma:parameters} (\ref{b3,c3}) any $y \in \Gamma_3(x)$ satisfies $c_3(x,y) \geq ((n-1)+1)^2 = n^2 = |\Gamma(y)| \geq c_3(x,y)$, so $c_3(x,y) = n^2$. Thus $c_3$ is constant for any pair of vertices in $\Gamma_3$. By Lemma \ref{lemma:parameters} (\ref{b2}) any $y \in \Gamma_2(x)$ satisfies $b_2(x,y) \leq (n-(n-1))^2 = 1$. Letting $b_{2,1}(x) = \big|\{ y \in \Gamma_2(x) \ : \ b_2(x,y) = 1 \}\big|$ and applying (\ref{eq:b=c}) with $i = 3$ we get
\[ b_{2,1}(x) = \sum_{y \in \Gamma_2(x)} b_2(x,y) = \sum_{z \in \Gamma_3(x)} c_3(x,z) = n^2 k_3(x) = \frac{n^2(n-1)}{2} = k_2(x). \]
Hence $b_2(x,y) = 1$ for all $y \in \Gamma_2(x)$, which shows that $b_2$ is constant on $\Gamma_2$. Therefore $\Gamma$ is distance-regular. For all $z \in \Gamma_3(x)$ we have $a_3(x,z) = n^2 - c_3(x,z) = 0$, so no two vertices $w, z \in \Gamma_3(x)$ are adjacent in $\Gamma$. If $d_\Gamma(w,z) = 2$ then there is a vertex $y \in \Gamma_2(x)$ such that $w, z \in \Gamma(y)$; in this case $b_2(x,y) > 1$, contradiction. So $d_\Gamma(w,z) = 3$ for any distinct $w, z \in \Gamma_3(x)$. Therefore $\{x\} \cup \Gamma_3(x)$ is an antipodal block for all $x \in \V(\Gamma)$, and the quotient graph with respect to the resulting partition is $K_{n^2+1}$. This completes the proof of Theorem \ref{maintheorem:order-diameter} (\ref{=2(n-1)}).
\end{proof}
In the remainder of this section we apply the above results to locally $n \times n$ grid graphs for $n \in \{3,5\}$. We will use the following technical lemma:
\begin{lemma} \label{lemma:ell}
Assume that $\Gamma$ is locally $n \times n$ grid, and that all $\mu$-graphs in $\Gamma$ have order at least $2(n-1)$. Let $x \in \V(\Gamma)$ and let $k_{2,2(n-1)}(x)$ be as in (\ref{eq:k_2,2m}). Then $k_{2,2(n-1)}(x) = \ell_x n$ for some integer $\ell_x \leq n(n-1)/2$ such that
\[ \ell_x + k_3(x) \equiv \left\{\begin{aligned} &0 \pmod{n+1} &&\text{if $n$ is even}; \\ &0 \pmod{(n+1)/2} &&\text{if $n$ is odd}. \end{aligned}\right. \]
\end{lemma}
\begin{proof}
Recall from the remarks after Theorem \ref{theorem:srg} that $k_{2,2(n-1)}(x) \equiv 0 \pmod{n}$, so indeed $k_{2,2(n-1)}(x) = \ell_x n$ for some $\ell_x$. From (\ref{eq:k2,k3}) and the definition of $k_{2,2(n-1)}(x)$ we have
\[ k_{2,2(n-1)}(x) \leq k_2(x) \leq \frac{n^2(n-1)}{2}, \]
and hence $\ell_x \leq n(n-1)/2$. By (\ref{eq:k2-sum2}) we have $k_2(x) = n(n-1)^2/2 + \ell_x$, and since $\diam(\Gamma) \leq 3$ by Theorem \ref{maintheorem:order-diameter} (\ref{>2(n-1)}),
\[ |\V(\Gamma)| = 1 + n^2 + k_2(x) + k_3(x) = \frac{(n+1)(n^2 - n + 2)}{2} + \ell_x + k_3(x). \]
Recall from Lemma \ref{lemma:basic-nxn} (\ref{basic-triangles}) that $n+1$ divides $2|\V(\Gamma)|$. So $2(\ell_x + k_3(x)) \equiv 0 \pmod{n+1}$, and the result follows.
\end{proof}
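For illustration, we record the specialisations of Lemma \ref{lemma:ell} relevant to the subsections below: at $n = 3$ it gives $k_{2,4}(x) = 3\ell_x$ with $\ell_x \leq 3$ and $\ell_x + k_3(x) \equiv 0 \pmod{2}$, while at $n = 5$ it gives $k_{2,8}(x) = 5\ell_x$ with $\ell_x \leq 10$ and $\ell_x + k_3(x) \equiv 0 \pmod{3}$.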
\subsection{The subcase $n = 3$}
If $\Gamma$ is locally $3 \times 3$ grid then by Lemma \ref{lemma:basic-mxn} (\ref{mu-cycles}) any $\mu$-graph of $\Gamma$ has order at least $4 = 2(n-1)$. Hence Theorem \ref{maintheorem:order-diameter} (\ref{>2(n-1)}) may be applied. The locally $3 \times 3$ grid graphs belong to a more general family classified by Hall in \cite{3byq}.
\begin{proposition} \label{proposition:3x3}
Assume that $\Gamma$ is connected and locally $3 \times 3$ grid. Then all $\mu$-graphs of $\Gamma$ have the same order $|\mu| \in \{4,6\}$. Moreover:
\begin{enumerate}[(1)]
\item \label{mu=4} If $|\mu| = 4$ then $\Gamma \cong J(6,3)$ (equivalently, $\Gamma$ is isomorphic to the graph in Construction \ref{example:drg} with $n = 3$).
\item \label{mu=6} If $|\mu| = 6$ then $\Gamma$ is isomorphic to the complement $\overline{K_4 \square K_4}$ of the $4 \times 4$ grid graph.
\end{enumerate}
\end{proposition}
\begin{remark} \label{remark:3x3}
Proposition \ref{proposition:3x3} follows from more general results \cite[Theorems 1 and 2]{3byq} concerning line graphs of certain partial linear spaces of order $2$, which are locally $3 \times n$ grid for some $n$. We give a self-contained elementary proof for the subclass of locally $3 \times 3$ grid graphs based on the theory developed in our paper. We note that the two examples we obtain in Proposition \ref{proposition:3x3} both come from partial linear spaces $\mathscr{T}(\Omega,\Omega')$ in \cite[Theorem 1]{3byq}, in particular, $J(6,3)$ arises from $|\Omega| = 6$, $\Omega' = \varnothing$; and $\overline{K_4 \square K_4}$ arises in two ways, namely, $(|\Omega|,|\Omega'|) = (4,1)$ or $(3,2)$. The graph $J(6,3)$ also arises, for example, from the space $\mathscr{S}p(V,f)$ where $f$ is nondegenerate and $V = \F_2^4$.
\end{remark}
\begin{proof}[Proof of Proposition \ref{proposition:3x3}]
Assume that $\Gamma$ is locally $3 \times 3$ grid. By Theorem \ref{maintheorem:order-diameter} (\ref{>2(n-1)}), we have $\diam(\Gamma) \leq 3$ and $|\V(\Gamma)| \leq 20$. Applying (\ref{eq:k2,k3}), we obtain for any vertex $x$ that $k_2(x) \leq 9$ and $k_3(x) \leq 1$. Also, by Lemma \ref{lemma:parameters} (\ref{b2}) and \ref{lemma:parameters} (\ref{b3,c3}), $b_2(x,y) \leq 1$ for any $y \in \Gamma_2(x)$, and $c_3(x,y) \geq 9$ for any $y \in \Gamma_3(x)$.
We claim that if $\epsilon(x) = 2$, where $\epsilon$ is as in (\ref{eq:epsilon}), then $\big(k_{2,4}(x), \, k_{2,6}(x)\big)$ is either $(0,6)$ or $(6,2)$. Indeed, by Lemma \ref{lemma:ell} we have $k_{2,4}(x) = 3\ell_x$ for some integer $\ell_x$ satisfying $\ell_x \leq 3$ and $\ell_x + k_3(x) \equiv 0 \pmod{2}$. Now $k_3(x) = 0$, so $\ell_x \in \{0,2\}$, and from (\ref{eq:edges-k2}),
\[ k_{2,6}(x) = 6 - \frac{2}{3}\, k_{2,4}(x) = 6 - 2\ell_x. \]
The claim follows.
We consider two cases.
\emph{Case 1.} Suppose that $\diam(\Gamma) = 2$. Then $k_2(x) = |\V(\Gamma)| - 10$ for any $x \in \V(\Gamma)$, so that $k_2(x)$ is constant. Also $k_3(x) = 0$, so $\epsilon(x) = 2$, and thus by the claim above $\big(k_{2,4}(x), \, k_{2,6}(x)\big) \in \{(0,6), (6,2)\}$.
Suppose first that $k_{2,4}(x) = 6$ and $k_{2,6}(x) = 2$. Then we can denote the elements of $\Gamma_2(x)$ by $y_i$ ($1 \leq i \leq 6$) and $z_j$ ($1 \leq j \leq 2$), where $c_2(x,y_i) = 4$ and $c_2(x,z_j) = 6$ for each $i$ and each $j$. Let $S_i = \Gamma_2(x) \cap \Gamma(y_i)$ and $T_j = \Gamma_2(x) \cap \Gamma(z_j)$. Notice that $[S_i]$ and $[T_j]$ are subgraphs of $[\Gamma(y_i)]$ and $[\Gamma(z_j)]$, respectively, where $[\Gamma(y_i)] \cong [\Gamma(z_j)] \cong K_3 \square K_3$. By Lemma \ref{lemma:basic-nxn} (\ref{basic-mu}), any $4$-cycle in $K_3 \square K_3$ has two edges in two distinct vertical cliques and two edges in two distinct horizontal cliques, and so its complement consists of one vertical and one horizontal clique in $K_3 \square K_3$. So for each $i \in \{1, \ldots, 6\}$, each induced subgraph $[S_i]$ has five vertices and is isomorphic to $\overline{C_4 \cup K_1}$, as illustrated in Figure \ref{figure:3by3}. Likewise, any $6$-cycle in $K_3 \square K_3$ has three edges in three distinct horizontal cliques and three edges in three distinct vertical cliques, so each clique in $K_3 \square K_3$ contains two vertices of the $6$-cycle. It follows that its complement consists of three vertices no two of which belong to the same clique, that is, no two of which are adjacent. Thus each $[T_j]$ is an empty graph of order three, $3K_1$. Each $[S_i \cup \{y_i\}]$ has two vertices of valency $5$ (including $y_i$) and four vertices of valency $3$, and the neighbourhood in $[S_i \cup \{y_i\}]$ of any of these vertices contains an edge. It follows that for all $i$ and all $j$ we have $z_j \notin S_i$, which implies that $z_j \not\sim_\Gamma y_i$. Hence $T_1, T_2 \subseteq \{z_1,\,z_2\}$, a contradiction since $T_1$ and $T_2$ have three elements each.
It follows that $k_{2,4}(x) = 0$ and $k_{2,6}(x) = 6$. Hence $c_2(x,y) = 6$ for all $y \in \Gamma_2(x)$, and since $x$ is arbitrary this holds for all pairs $(x,y) \in \Gamma_2$. Therefore all $\mu$-graphs of $\Gamma$ have size $|\mu| = 6$. By Theorem \ref{theorem:srg}, $\Gamma$ is strongly regular with parameters $(16, 9, 4, 6)$. Up to isomorphism there are exactly two such graphs \cite{srgdatabase}, and of these only $\overline{K_4 \square K_4}$ is locally $3 \times 3$ grid. Thus part (\ref{mu=6}) of the statement holds.
\emph{Case 2.} Suppose now that $\diam(\Gamma) = 3$. We show that $k_3(x) \neq 0$ for all $x \in \V(\Gamma)$. Indeed, $\diam(\Gamma) = 3$ implies that there exists $x \in \V(\Gamma)$ such that $k_3(x) \neq 0$. Then $k_3(x) = 1$, so that $\Gamma(z) \subseteq \Gamma_2(x)$ for the unique $z \in \Gamma_3(x)$. Thus $k_2(x) \geq |\Gamma(z)| = 9$. If $k_3(y) = 0$ for some vertex $y$ then $\epsilon(y) = 2$, and it follows from the claim above that $k_2(y) = k_{2,4}(y) + k_{2,6}(y) \in \{6, 8\}$. Hence $|\V(\Gamma)| \in \{16, 18\}$ and $9 \leq k_2(x) = |\V(\Gamma)| - (1 + k_1(x) + k_3(x)) = |\V(\Gamma)| - 11 \in \{5, 7\}$, contradiction. Therefore no such $y$ exists, and $k_3(x) \neq 0$ for any $x \in \V(\Gamma)$.
It follows that any $x \in \V(\Gamma)$ satisfies $k_3(x) = 1$, say $\Gamma_3(x) = \{z\}$, so that $\Gamma_2(x) \supseteq \Gamma(z)$ and $k_2(x) \geq 9$. Also, by equation (\ref{eq:k2-sum2}), we have $k_2(x) = 6 + \ell_x$, and as $\ell_x \leq 3$ we conclude that $\ell_x = 3$ and $k_2(x) = 9$. Hence $k_{2,4}(x) = 3\ell_x = 9$ and $k_{2,6}(x) = 0$, so $c_2(x,y) = 4$ for all $y \in \Gamma_2(x)$. Since $x$ is arbitrary this holds for all pairs $(x,y) \in \Gamma_2$. Therefore all $\mu$-graphs of $\Gamma$ have size $|\mu| = 4$. By Theorem \ref{maintheorem:order-diameter} (\ref{=2(n-1)}), $\Gamma$ is a distance-regular antipodal double cover of $K_{10}$, and hence has $20$ vertices. Applying \cite[Theorem 1]{4by4} we conclude that $\Gamma$ is the Johnson graph $J(6,3)$ as in part (\ref{mu=4}) of the statement.
\end{proof}
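Both graphs in Proposition \ref{proposition:3x3} are small enough to verify by machine. The following sketch (our illustrative check, independent of the proof) confirms that the Johnson graph $J(6,3)$ is $9$-regular on $20$ vertices, that every $\mu$-graph has order $4$, that each vertex has a unique antipode, and that every vertex link is strongly regular with parameters $(9,4,1,2)$, the parameters of the $3 \times 3$ grid.

```python
from itertools import combinations

# Vertices of the Johnson graph J(6,3): the 3-subsets of a 6-set;
# two vertices are adjacent exactly when they share two points.
V = [frozenset(c) for c in combinations(range(6), 3)]
adj = {v: {w for w in V if len(v & w) == 2} for v in V}

assert len(V) == 20                          # |V(J(6,3))| = 20
assert all(len(adj[v]) == 9 for v in V)      # 9-regular

# Every mu-graph has order 4: vertices at distance 2 share one point.
for v, w in combinations(V, 2):
    if len(v & w) == 1:
        assert len(adj[v] & adj[w]) == 4

# Each vertex has a unique antipode (its complement) at distance 3.
assert all(sum(1 for w in V if not (v & w)) == 1 for v in V)

# Each link is strongly regular with parameters (9,4,1,2),
# which characterise the 3x3 grid (uniqueness of L_2(3)).
for v in V:
    N = adj[v]
    for u in N:
        assert len(adj[u] & N) == 4          # 4-regular link
        for w in N:
            if w != u:
                common = len(adj[u] & adj[w] & N)
                assert common == (1 if w in adj[u] else 2)
```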
\begin{center}
\begin{figure}
\begin{pspicture}(-7,-2)(7,2)
\rput(-4,0){
\gridgraph3
\pspolygon[linewidth=1pt](-1,1)(0,1)(0,0)(-1,0)(-1,1)
\psarc[linestyle=dashed,linewidth=1pt](0,-4.732){3.864}{75}{105} \psline[linestyle=dashed,linewidth=1pt](-1,-1)(1,-1)
\psarc[linestyle=dashed,linewidth=1pt](4.732,0){3.864}{165}{195} \psline[linestyle=dashed,linewidth=1pt](1,-1)(1,1)
\multido{\n=-1+1}{3}{\rput(\n,0){\multido{\n=-1+1}{3}{\qdisk(0,\n){2pt}}}}
\rput(-0.5,1.75){\small $\mu(x,y_i)$} \psline(-0.5,1.5)(-0.5,1)
\rput(0.5,-1.5){\small $[S_i] \cong \overline{C_4 \cup K_1}$} \psline(0.5,-1)(0.5,-1.25)
\rput(-2,0){\small $\Gamma(y_i)$}
}
\multido{\n=0+60}{6}{\rput{\n}(0,0){\qdisk(0.87,0.5){2pt} \psline(0.87,-0.5)(0.87,0.5)}}
\psline(0,1)(-0.87,-0.5) \psline(0,1)(0,-1) \psline(0,1)(0.87,-0.5) \psline(-0.87,0.5)(0,-1) \psline(0.87,0.5)(0,-1)
\rput(0,1.25){\small $y_i$} \rput(0,-1.5){\small $[S_i \cup \{y_i\}]$}
\rput(4,0){
\gridgraph3
\psline[linewidth=1pt](-1,1)(0,1)(0,0)(1,0)(1,-1)
\psarc[linewidth=1pt](0,-4.732){3.864}{75}{105}
\psarc[linewidth=1pt](2.732,0){3.864}{165}{195}
\multido{\n=-1+1}{3}{\rput(\n,0){\multido{\n=-1+1}{3}{\qdisk(0,\n){2pt}}}}
\rput(2,0){\small $\Gamma(z_j)$}
\rput(-0.5,1.75){\small $\mu(x,z_j)$} \psline(-0.5,1.5)(-0.5,1)
\rput(0,-1.5){\small $[T_j] \cong \overline{K_3}$}
}
\end{pspicture}
\caption{$S_i$ and $T_j$ as in Case 1 of the proof of Proposition \ref{proposition:3x3}} \label{figure:3by3}
\end{figure}
\end{center}
\subsection{The subcase $n = 5$}
If $\Gamma$ is connected and locally $5 \times 5$ grid then Theorem \ref{maintheorem:order-diameter} (\ref{>n-1}) gives the bounds $|\V(\Gamma)| \leq 312$ and $\diam(\Gamma) \leq 8$. In Lemma \ref{lemma:order-bound} we improve this upper bound for the order of $\Gamma$.
\begin{lemma} \label{lemma:order-bound}
Assume that $\Gamma$ is connected and locally $5 \times 5$ grid. Then $|\V(\Gamma)| \leq 300$ and $|\V(\Gamma)| \equiv 0 \pmod{6}$.
\end{lemma}
\begin{proof}
Applying (\ref{eq:k2}), (\ref{eq:k3}), and (\ref{eq:ki}) yields
\begin{align*}
k_2 &\leq \frac{5^2 \cdot 4^2}{4} = 100, & k_3 &\leq \frac{100 \cdot 9}{9} = 100, & k_4 &\leq \left\lfloor \frac{100 \cdot 4}{9} \right\rfloor = 44, & k_5 &\leq \left\lfloor \frac{44 \cdot 4}{9} \right\rfloor = 19, \\
k_6 &\leq \left\lfloor \frac{19 \cdot 4}{9} \right\rfloor = 8, & k_7 &\leq \left\lfloor \frac{8 \cdot 4}{9} \right\rfloor = 3, & k_8 &\leq \left\lfloor \frac{3 \cdot 4}{9} \right\rfloor = 1.
\end{align*}
Hence $|\V(\Gamma)| \leq 1 + 25 + k_2(x) + \ldots + k_8(x) = 301$. Now $5 \equiv 2 \pmod{3}$, so $3$ divides $|\V(\Gamma)|$ by Lemma \ref{lemma:basic-nxn} (\ref{basic-triangles}), and $|\V(\Gamma)|$ is even since $\Gamma$ has odd valency. Thus $|\V(\Gamma)| \equiv 0 \pmod{6}$, and the result follows.
\end{proof}
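The arithmetic in the chain of bounds above is mechanical; the following re-computation (ours, for illustration) reproduces the displayed values.

```python
# Re-derive the chain k_{i+1} <= floor(4*k_i/9) used in the proof above,
# starting from k_0 = 1, k_1 = 25 and the bounds k_2, k_3 <= 100
# for a locally 5x5 grid graph.
k = [1, 25, 100, 100]
while k[-1] > 0:
    k.append(4 * k[-1] // 9)        # bound on k_4, k_5, ...
assert k[4:9] == [44, 19, 8, 3, 1]
assert sum(k[:9]) == 301            # 1 + 25 + k_2 + ... + k_8 <= 301
```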
\begin{lemma} \label{lemma:mu-constant}
Assume that $\Gamma$ is connected and locally $5 \times 5$ grid. If all $\mu$-graphs in $\Gamma$ have constant order $|\mu|$, then either $|\mu| = 4$ and $\Gamma \cong J(10,5)$, or $|\mu| = 8$.
\end{lemma}
\begin{proof}
By Lemma \ref{lemma:basic-nxn} (\ref{basic-mu}), $|\mu| = 2m$ for some $m \in \{2, \ldots, 5\}$. Let $x \in \V(\Gamma)$. We apply (\ref{eq:b=c}) with $i = 2$ and $c_2(x,y) = |\mu| = 2m$ for all $y \in \Gamma_2(x)$ to count the number of edges between $\Gamma(x)$ and $\Gamma_2(x)$, and obtain
\[ k_2(x) = \frac{1}{2m} \sum_{y \in \Gamma(x)} b_1(x,y) = \frac{1}{2m} \cdot 5^2(5-1)^2 = \frac{200}{m}. \]
So $m$ divides $200$, and thus $m \neq 3$. If $m = 5$ then Theorem \ref{theorem:srg} states that $\Gamma$ is strongly regular with parameters $(N, k, \lambda, \nu) = (66, 25, 8, 10)$. However
\[ \frac{1}{2} \left( N - 1 \pm \frac{(N-1)(\nu-\lambda) - 2k}{\sqrt{(\nu-\lambda)^2 + 4(k-\nu)}} \right) = \frac{1}{2}(65 \pm 10), \]
neither of which is an integer, so there is no strongly regular graph having these parameters by \cite[Theorem 3.1]{srg}. Thus $m \neq 5$. Hence $m = 2$ or $4$, and $|\mu| = 4$ or $8$. If $|\mu| = 4$ then $\Gamma \cong J(10,5)$ by \cite[Theorem 1]{4by4}.
\end{proof}
\begin{lemma} \label{lemma:5x5}
Assume that $\Gamma$ is connected and locally $5 \times 5$ grid, and that all $\mu$-graphs in $\Gamma$ have order at least $8$. For any $x \in \V(\Gamma)$:
\begin{enumerate}[(1)]
\item \label{5x5-clique} $\Gamma_2(x)$ does not contain any $6$-clique of $\Gamma$; and
\item \label{eccentricity} $\epsilon(x) = 3$, where the eccentricity $\epsilon$ is as defined in (\ref{eq:epsilon}).
\end{enumerate}
\end{lemma}
\begin{proof}
Suppose that $C \subseteq \Gamma_2(x)$ for some vertex $x$ and $6$-clique $C$. Then by Lemma \ref{lemma:cliques-bounded} (\ref{distance2-bounded}), all $y \in C$ satisfy $c_2(x,y) = 8$. Furthermore the six graphs $\mu(x,y)$, for $y \in C$, satisfy the conditions described in Lemma \ref{lemma:mu-clique}, namely, each pair of these six $\mu$-graphs of order $8$ is either disjoint or intersects in an edge (Lemma \ref{lemma:mu-clique} (\ref{mu-meet})) and the set of such edges forms a matching of $\left(\bigcup_{y \in C \cap \Gamma_2(x)} \mu(x,y)\right) \setminus C$ (Lemma \ref{lemma:mu-clique} (\ref{matching}); note that $C \cap \Gamma(x) = \varnothing$ since $C \subseteq \Gamma_2(x)$). However, a computer search using \textsc{Magma} \cite{magma} establishes that there is no set of subgraphs of $K_5 \square K_5$ that satisfy these conditions. (See Section \ref{sec:appendix} for the \textsc{Magma} code used.) Therefore $C \nsubseteq \Gamma_2(x)$. This proves statement (\ref{5x5-clique}).
To prove statement (\ref{eccentricity}), assume first that $\epsilon(x) = 2$ for some $x \in \V(\Gamma)$. Then $k_3(x) = 0$, so that for any $y \in \Gamma_2(x)$ and $6$-clique $C$ containing $y$, either $d_\Gamma(x,C) = 1$ or $C \subseteq \Gamma_2(x)$. But $C \nsubseteq \Gamma_2(x)$ by statement (\ref{5x5-clique}). So $d_\Gamma(x,C) = 1$ for any such $C$, and it follows that $c_2(x,y) = 10$ (for otherwise $c_2(x,y) = 8$, and so $d_\Gamma(x,C) = 2$ for some $6$-clique $C$ containing $y$ by Lemma \ref{lemma:maxcliques} (\ref{2m}), contradiction). Since $y$ is arbitrary, we then obtain $k_2(x) = 25(16)/10 = 40$ and
\[ |\V(\Gamma)| = 1 + 25 + k_2(x) = 1 + 25 + 40 = 66. \]
If all vertices in $\Gamma$ have eccentricity $2$, then the above implies that $c_2(x,y) = 10$ for all $(x,y) \in \Gamma_2$. However this is impossible by Lemma \ref{lemma:mu-constant}. Thus $\epsilon(x') \geq 3$ for some $x' \in \V(\Gamma)$; since $\diam(\Gamma) \leq 3$ by Theorem \ref{maintheorem:order-diameter} (\ref{>2(n-1)}), we must then have $\epsilon(x') = 3$. In this case $k_3(x') \neq 0$, so by inequality (\ref{eq:k2,k3}) we have $k_3(x') = 1$ or $2$. Thus for some $y' \in \Gamma_2(x')$ and $6$-clique $C$ containing $y'$, we have $C \cap \Gamma_3(x') \neq \varnothing$, so that $d_\Gamma(x',C) = 2$. It follows from Lemma \ref{lemma:maxcliques} that $c_2(x',y') \neq 10$, so $c_2(x',y') = 8$. Hence $k_{2,8}(x') \neq 0$, where $k_{2,8}$ is as defined in (\ref{eq:k_2,2m}), and $\ell_{x'} := k_{2,8}(x')/5 \neq 0$. By equation (\ref{eq:k2-sum2}), $k_2(x') = 40 + \ell_{x'}$. Hence $k_2(x') > 40$, so that
\[ |\V(\Gamma)| = 1 + 25 + k_2(x') + k_3(x') > 1 + 25 + 40 = 66, \]
contradiction. Therefore no vertex in $\Gamma$ has eccentricity $2$.
\end{proof}
For $x \in \V(\Gamma)$ and $m \in \{2, \ldots, n\}$ let
\begin{equation} \label{eq:Gam_2,2m}
\Gamma_{2,2m}(x) := \big\{ y \in \Gamma_2(x) \ : \ c_2(x,y) = 2m \big\}.
\end{equation}
\begin{lemma} \label{lemma:5x5-order}
Assume that $\Gamma$ is locally $5 \times 5$ grid, and that all $\mu$-graphs in $\Gamma$ have order at least $8$. Then either:
\begin{enumerate}[(1)]
\item \label{order78} $|\V(\Gamma)| = 78$ and $c_2(x,y) = 8$ for all $x, y \in \V(\Gamma)$ with $d_\Gamma(x,y) = 2$, or
\item \label{order72} $|\V(\Gamma)| = 72$, and with respect to any vertex $x$, $\Gamma$ has distance diagram as in Figure \ref{figure:diagram-5x5}.
\end{enumerate}
\end{lemma}
\begin{figure}
\begin{center}
\begin{pspicture}(-4.5,-2.5)(4.5,0.7)
\rput(-4.5,0){\rput(0,0){\ovalnode{k0}{$1$}} \rput(0.6,0.25){\small $25$}}
\rput(-1.5,0){\rput(0,0){\ovalnode{k1}{$25$}} \rput(-0.6,0.25){\small $1$} \rput(0,0.55){\small $8$} \rput(0.6,0.25){\small $8$} \rput(0,-0.55){\small $8$}}
\rput(0,-2){\rput(0,0){\ovalnode{k210}{$20$}} \rput(0,-0.55){\small $5$} \rput(-0.65,0.35){\small $10$} \rput(0.65,0.35){\small $10$}}
\rput(1.5,0){\rput(0,0){\ovalnode{k28}{$25$}} \rput(0.6,0.25){\small $1$} \rput(0,0.55){\small $8$} \rput(-0.6,0.25){\small $8$} \rput(0,-0.55){\small $8$}}
\rput(4.5,0){\rput(0,0){\ovalnode{k3}{$1$}} \rput(-0.6,0.25){\small $25$}}
\ncline{k0}{k1} \ncline{k1}{k28} \ncline{k28}{k3} \ncline{k1}{k210} \ncline{k28}{k210}
\end{pspicture}
\end{center}
\caption{Distance diagram for $\Gamma$ in Lemma \ref{lemma:5x5-order} (\ref{order72})}
\label{figure:diagram-5x5}
\end{figure}
\begin{proof}
Let $x \in \V(\Gamma)$. Then $\epsilon(x) = 3$ by Lemma \ref{lemma:5x5} (\ref{eccentricity}), and hence $k_3(x) = 1$ or $2$ by the second inequality in (\ref{eq:k2,k3}). Also it follows from Lemma \ref{lemma:maxcliques} that $c_2(x,y) = 8$ for some $y \in \Gamma_2(x)$, and thus $k_{2,8}(x) \neq 0$. By (\ref{eq:k2-sum2}) we have $k_2(x) = 40 + \ell_x$ where $\ell_x := k_{2,8}(x)/5$, and $\ell_x \leq 5(4)/2 = 10$ by Lemma \ref{lemma:ell}. Hence
\[ |\V(\Gamma)| = 1 + 25 + k_2(x) + k_3(x) = 1 + 25 + (40 + \ell_x) + k_3(x) = 66 + \ell_x + k_3(x). \]
Recall that $6$ divides $|\V(\Gamma)|$ by Lemma \ref{lemma:order-bound}. Hence $6$ divides $\ell_x + k_3(x)$, so the only possibilities for $(k_3(x),\ell_x)$ are $(1,5)$, $(2,4)$, and $(2,10)$. These yield $|\V(\Gamma)| = 72$ for $(k_3(x),\ell_x) \in \{ (1,5), (2,4) \}$, and $|\V(\Gamma)| = 78$ for $(k_3(x),\ell_x) = (2,10)$.
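The case analysis above is a finite check, and can be verified mechanically. The sketch below (our verification aid, not part of the proof) enumerates the admissible pairs directly from the constraints $k_3(x) \in \{1,2\}$, $1 \leq \ell_x \leq 10$, and $6 \mid \ell_x + k_3(x)$.

```python
# Direct enumeration (our check, not in the paper) of the constraints used
# above: k3 in {1, 2}, 1 <= l <= 10, and 6 | (l + k3), with |V| = 66 + l + k3.
pairs = [(k3, l) for k3 in (1, 2) for l in range(1, 11) if (l + k3) % 6 == 0]
assert pairs == [(1, 5), (2, 4), (2, 10)]
orders = sorted({66 + l + k3 for (k3, l) in pairs})
assert orders == [72, 78]
```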
Assume that $|\V(\Gamma)| = 78$. It follows from the above that for any $x \in \V(\Gamma)$ we have $(k_3(x),\ell_x) = (2,10)$, so $k_2(x) = 40 + \ell_x = 50$ and $k_{2,8}(x) = 5\ell_x = 50$. Thus $k_{2,10}(x) = k_2(x) - k_{2,8}(x) = 0$, implying that $c_2(x,y) = 8$ for all $y \in \Gamma_2(x)$. Since $x$ is arbitrary, this means that $c_2$ is independent of $x$ and $y$, and thus all $\mu$-graphs in $\Gamma$ have order $8$. This proves statement (\ref{order78}).
For the remainder of the proof assume that $|\V(\Gamma)| = 72$. Then for any $x \in \V(\Gamma)$ we have $(k_3(x),\ell_x) \in \{(1,5), (2,4)\}$. In each case $\ell_x < 10$, and hence
\[ k_{2,10}(x) = k_2(x) - k_{2,8}(x) = (40 + \ell_x) - 5\ell_x = 40 - 4\ell_x > 40 - 4(10) = 0. \]
So $\Gamma_{2,10}(x) \neq \varnothing$. By Lemma \ref{lemma:parameters} (\ref{b2}), $b_2(x,y) = 0$ for any $y \in \Gamma_{2,10}(x)$, and hence for any $x' \in \Gamma_3(x)$, $\Gamma(x') \subseteq \Gamma_{2,8}(x) \cup \Gamma_3(x)$. Thus $|\Gamma_{2,8}(x) \cup \Gamma_3(x)| \geq |\Gamma(x') \cup \{x'\}| = 26$. If $(k_3(x),\ell_x) = (2,4)$ for some $x \in \V(\Gamma)$, then $k_{2,8}(x) = 5\ell_x = 20$ and $|\Gamma_{2,8}(x) \cup \Gamma_3(x)| = k_{2,8}(x) + k_3(x) = 22$, contradiction. It follows that $(k_3(x),\ell_x) = (1,5)$ for all $x \in \V(\Gamma)$. Thus $k_{2,8}(x) = 5\ell_x = 25$, $k_{2,10}(x) = 40 - 4\ell_x = 20$, and there is a unique vertex $x' \in \Gamma_3(x)$. Hence, replacing $x$ by $x'$ in the above, $\Gamma(x') \subseteq \Gamma_{2,8}(x) \cup \Gamma_3(x) = \Gamma_{2,8}(x) \cup \{x'\}$. So $\Gamma(x') \subseteq \Gamma_{2,8}(x)$. Since $|\Gamma_{2,8}(x)| = k_{2,8}(x) = 25 = |\Gamma(x')|$, it follows that $\Gamma(x') = \Gamma_{2,8}(x)$, which in turn implies that $\Gamma_2(x') = \Gamma(x) \cup \Gamma_{2,10}(x)$.
Counting the number of edges between $\Gamma(x)$ and $\Gamma(x')$, and using the fact that $c_2(x,z) = 8$ for all $z \in \Gamma(x')$, we find that
\[ \sum_{y \in \Gamma(x)} c_2(x',y) = \sum_{z \in \Gamma(x')} c_2(x,z) = 25(8). \]
Since $|\Gamma(x)| = 25$ and $c_2(x',y) \geq 8$ for all $y \in \Gamma(x)$, we must have $c_2(x',y) = 8$ for any $y \in \Gamma(x)$. Thus $\Gamma(x) \subseteq \Gamma_{2,8}(x')$. Note that also $(k_3(x'),\ell_{x'}) = (1,5)$, and thus $|\Gamma_{2,8}(x')| = k_{2,8}(x') = 5\ell_{x'} = 25 = |\Gamma(x)|$. Therefore $\Gamma_{2,8}(x') = \Gamma(x)$; since $\Gamma_2(x') = \Gamma_{2,8}(x') \cup \Gamma_{2,10}(x')$, this implies that $\Gamma_{2,10}(x') = \Gamma_{2,10}(x)$. This yields the distance diagram in Figure \ref{figure:diagram-5x5-proof}. Using the fact that $b_1 = 16$, we find that $r = s = b_1 - 8 = 8$.
Since $x$ is arbitrary, we then get the distance diagram in Figure \ref{figure:diagram-5x5}, with the remaining parameters obtained using the fact that $\val(\Gamma) = 25$.
\end{proof}
\begin{figure}
\begin{center}
\begin{pspicture}(-5,-3)(5,1.5)
\rput(-5,0){\cnode*(0,0){3pt}{k0} \rput(-0.6,0){$x$}}
\rput(-1.9,0){\rput(0,0){\ovalnode{k1}{\parbox[c]{1.1cm}{\centering\small $\Gamma(x)$ \\ ($25$)}}} \rput(0.4,-0.85){\small $r$} \rput(1.1,0.25){\small $8$} \rput(0,1.3){\small $\Gamma_{2,8}(x')$} \rput{90}(0,0.9){\small $=$}}
\rput(0,-2.25){\rput(0,0){\ovalnode{k210}{\parbox[c]{1.2cm}{\centering\small $\Gamma_{2,10}(x)$ \\ $(20)$}}} \rput(-1,0.65){\small $10$} \rput(1,0.65){\small $10$} \rput(1.9,0){\small $= \Gamma_{2,10}(x')$}}
\rput(1.9,0){\rput(0,0){\ovalnode{k28}{\parbox[c]{1.1cm}{\centering\small $\Gamma_{2,8}(x)$ \\ ($25$)}}} \rput(0,1.3){\small $\Gamma(x')$} \rput{90}(0,0.9){\small $=$} \rput(-1.1,0.25){\small $8$} \rput(-0.4,-0.85){\small $s$}}
\rput(5,0){\cnode*(0,0){3pt}{k3} \rput(0.6,0){$x'$}}
\ncline{k0}{k1} \ncline{k1}{k28} \ncline{k28}{k3} \ncline{k1}{k210} \ncline{k28}{k210}
\end{pspicture}
\end{center}
\caption{Distance diagram for $\Gamma$ in the proof of Lemma \ref{lemma:5x5-order} (\ref{order72})}
\label{figure:diagram-5x5-proof}
\end{figure}
\begin{proof}[Proof of Theorem \ref{maintheorem:5by5}]
We know from Theorem \ref{maintheorem:order-diameter} (\ref{>2(n-1)}) that $\diam(\Gamma) \leq 3$. By Lemma \ref{lemma:mu-constant} not all $\mu$-graphs can have order $10$, so there exists $(x,y) \in \Gamma_2$ such that $\mu(x,y)$ has order $c_2(x,y) \leq 8$. If all $\mu$-graphs have constant order $|\mu|$ then by Lemma \ref{lemma:mu-constant} either $|\mu| = 4$ and $\Gamma \cong J(10,5)$, or $|\mu| = 8$. Hence Theorem \ref{maintheorem:5by5} (\ref{mu-constant}) holds.
Assume now that all $\mu$-graphs have order at least $8$. Then $\Gamma$ satisfies the conditions of Lemma \ref{lemma:5x5-order}. We claim that $|\V(\Gamma)| \neq 72$. Suppose otherwise. Then with respect to any $x \in \V(\Gamma)$, $\Gamma$ has distance diagram as in Figure \ref{figure:diagram-5x5}. Let $y \in \Gamma(x)$, let $x'$ be the unique vertex in $\Gamma_3(x)$, and let $y'$ be the unique vertex in $\Gamma_3(y)$. Clearly $y' \notin \Gamma(x)$, since otherwise $x$ would be a common neighbour of $y$ and $y'$, contradicting $d_\Gamma(y,y') = 3$. Also $y' \neq x'$, since $d_\Gamma(y,x') = 2$. Suppose that $y' \in \Gamma_{2,10}(x)$. Since $\Gamma_{2,10}(x) = \Gamma_{2,10}(x')$, we have $c_2(x',y') = 10$. So $x' \in \Gamma_{2,10}(y')$. Applying Lemma \ref{lemma:5x5-order} (\ref{order72}) using $y$ in the place of $x$, we find that $\Gamma_{2,10}(y') = \Gamma_{2,10}(y)$, so $c_2(x',y) = 10$. But $y \in \Gamma(x) = \Gamma_{2,8}(x')$, so $c_2(x',y) = 8$, contradiction. Therefore $y' \notin \Gamma_{2,10}(x)$, and so $y' \in \Gamma_{2,8}(x) = \Gamma(x')$. Consequently, for any $z \in \Gamma_{2,10}(x)$, the unique $z' \in \Gamma_3(z)$ also lies in $\Gamma_{2,10}(x)$. Moreover, for any $x, x', y, y' \in \V(\Gamma)$ with $d_\Gamma(x,x') = d_\Gamma(y,y') = 3$, we have $x \sim_\Gamma y$ if and only if $x' \sim_\Gamma y'$.
Now consider the quotient graph $\Gamma_\mathcal{P}$ of $\Gamma$ with respect to the partition $\mathcal{P} = \big\{ \{x,x'\} \ : \ (x,x') \in \Gamma_3 \big\}$. It follows from the above that with respect to any $\overline{x} = \{x,x'\} \in \V(\Gamma_\mathcal{P})$, $\big[\Gamma_\mathcal{P}(\overline{x})\big] \cong [\Gamma(x)] \cong [\Gamma(x')] \cong K_5 \square K_5$. Thus $\Gamma_\mathcal{P}$ is locally $5 \times 5$ grid, and since $k_3(x) = 1$ for all $x \in \V(\Gamma)$, $\diam(\Gamma_\mathcal{P}) = 2$. Also it follows from the preceding paragraph that for any $\overline{x} = \{x,x'\}, \overline{y} = \{y,y'\} \in \V(\Gamma_\mathcal{P})$ we have $d_{\Gamma_\mathcal{P}}(\overline{x},\overline{y}) = 2$ if and only if $y, y' \in \Gamma_{2,10}(x) = \Gamma_{2,10}(x')$, so that $c_2(\overline{x},\overline{y}) = 10$. Since $\overline{x}$ and $\overline{y}$ are arbitrary it follows that all $\mu$-graphs in $\Gamma_\mathcal{P}$ have order $10$. However, by Lemma \ref{lemma:mu-constant} no such graph exists. Therefore $|\V(\Gamma)| \neq 72$, as claimed.
Thus, by Lemma \ref{lemma:5x5-order}, $|\V(\Gamma)| = 78$. Theorem \ref{maintheorem:5by5} (\ref{mu>8}) follows from Lemma \ref{lemma:5x5-order} (\ref{order78}) and Theorem \ref{maintheorem:order-diameter} (\ref{=2(n-1)}).
\end{proof}
\section{Preliminaries} \label{sec:prelims}
Let $\Gamma$ be a graph. The \emph{order} $|\Gamma|$ of $\Gamma$ is the cardinality of $\V(\Gamma)$. For any $x, y \in \V(\Gamma)$, the \emph{distance} $d_\Gamma(x,y)$ between $x$ and $y$ in $\Gamma$ is the length of a shortest path in $\Gamma$ between $x$ and $y$. The \emph{diameter} $\diam(\Gamma)$ of $\Gamma$ is the maximum distance between two vertices of $\Gamma$.
Throughout we use the following notation: For $0 \leq i \leq \diam(\Gamma) = D$ and $x \in \V(\Gamma)$ we write $\Gamma_i(x) = \{ y \ : \ d_\Gamma(x,y) = i \}$; we often write $\Gamma(x) = \Gamma_1(x)$. For $y \in \Gamma_i(x)$, \\
\begin{minipage}[c]{0.48\textwidth}
\begin{align*}
k_i(x) &:= |\Gamma_i(x)| \\
a_i(x,y) &:= |\Gamma_i(x) \cap \Gamma_1(y)| \\
b_i(x,y) &:= |\Gamma_{i+1}(x) \cap \Gamma_1(y)|, \ 0 \leq i \leq D - 1 \\
c_i(x,y) &:= |\Gamma_{i-1}(x) \cap \Gamma_1(y)|, \ 1 \leq i \leq D
\end{align*}
\end{minipage}
\hfill
\begin{minipage}[c]{0.48\textwidth}
\begin{center}
\begin{pspicture}(-2.5,-2)(2.5,1)
\pnode(-2.5,0){C} \pnode(2.5,0){B} \pnode(0,-0.37){K2}
\rput(0,0){\ovalnode{K}{$k_i(x)$}}
\ncline[arrows=<-]{C}{K} \ncline[arrows=->]{K}{B} \nccircle[angleA=180,nodesepA=3pt,arrows=->]{K2}{0.25cm}
\rput(-1.5,0.25){\small $c_i(x,y)$} \rput(1.5,0.25){\small $b_i(x,y)$} \rput(0,-1.2){\small $a_i(x,y)$}
\end{pspicture}
\end{center}
\end{minipage}
In particular, if $\Gamma$ is locally $n \times n$ grid, then $k_1(x) = |K_n \square K_n| = n^2$ for each $x$, and since each vertex in $K_n \square K_n$ has $2(n-1)$ neighbours we have $a_1(x,y) = 2(n-1)$ for each $x \in \V(\Gamma)$ and $y \in \Gamma(x)$. Thus
\[ b_1(x,y) = k_1(x) - a_1(x,y) - 1 = (n-1)^2. \]
If $d_\Gamma(x,y) = 2$ we usually write $\mu(x,y) = \Gamma(x) \cap \Gamma(y)$ for the $\mu$-graph, and so $c_2(x,y) = |\Gamma(x) \cap \Gamma(y)| = |\mu(x,y)|$.
In general the parameters $k_i$, $a_i$, $b_i$, and $c_i$ may be non-constant: $k_i(x)$ may depend on $x$, and $a_i(x,y)$, $b_i(x,y)$, and $c_i(x,y)$ may depend on both $x$ and $y$. When they are independent of $x$ or $y$ we sometimes omit the $x$ or $y$. So for example, if $\Gamma$ is locally $n \times n$ grid, we often write $k_1 = n^2$, $a_1 = 2(n-1)$, and $b_1 = (n-1)^2$.
If $b_i$ and $c_i$ are independent of $x$ and $y$ for all $i \in \{0, \ldots, \diam(\Gamma)\}$, then $\Gamma$ is \emph{distance-regular} with \emph{intersection array} $(b_0, b_1, \ldots, b_{D-1}; c_1, c_2, \ldots, c_D)$. In this case the parameters $k_i$ and $a_i$ are also independent of $x$ and $y$, and are determined by the intersection array.
For any $S \subseteq \V(\Gamma)$, we denote by $[S]$ the induced subgraph of $\Gamma$ on $S$. For any $i \in \{2, \ldots, \diam(\Gamma)\}$, denote by $\Gamma_i$ the set of all pairs of vertices $(x,y)$ such that $d_\Gamma(x,y) = i$. For any $x \in \V(\Gamma)$ define the \emph{eccentricity} $\epsilon(x)$ of $x$ as
\begin{equation} \label{eq:epsilon}
\epsilon(x) := \max\{ i \ : \ \Gamma_i(x) \neq \varnothing \}.
\end{equation}
Clearly $\epsilon(x) \leq \diam(\Gamma)$ for any vertex $x$.
The following result from \cite{4by4} lists basic properties of locally grid graphs. We state it here and include a detailed proof, as the arguments used give additional insight into the structure of locally grid graphs and similar techniques will be used repeatedly in proofs of later results.
\begin{lemma} \cite[Lemma, Section 1, p. 231]{4by4} \label{lemma:basic-mxn}
Let $\Gamma$ be connected and locally grid. Then:
\begin{enumerate}[(1)]
\item \label{loc_mxn} There exist integers $m$ and $n$ such that $\Gamma$ is locally $m \times n$ grid.
\item \label{cliques-per-edge} Each edge is in exactly two maximal cliques: one of size $m+1$ and one of size $n+1$.
\item \label{cliques-per-triangle} Each triangle is in a unique maximal clique.
\item \label{mu-cycles} Each $\mu$-graph is a union of cycles, each of even length at least $4$. No two edges of a $\mu$-graph $\mu(x,y)$ lie in the same clique of size $m$ or $n$ in $[\Gamma(x)]$ or $[\Gamma(y)]$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $x \in \V(\Gamma)$, and suppose that $[\Gamma(x)] \cong K_m \square K_n$. Any vertex $y \in \Gamma(x)$ lies in exactly two maximal cliques of $K_m \square K_n$, one of size $m$ and one of size $n$. So the edge $\{x,y\}$ lies in exactly two maximal cliques of $\Gamma$, of sizes $m+1$ and $n+1$, and the same is true for each edge $\{x',y'\}$ with $[\Gamma(x')] \cong K_m \square K_n$. Now $[\Gamma(y)] \cong K_{m'} \square K_{n'}$ for some $m'$ and $n'$, so the edge $\{y,x\}$ lies in maximal cliques of sizes $m'+1$ and $n'+1$; comparing these sizes gives $\{m',n'\} = \{m,n\}$, and hence $[\Gamma(y)] \cong K_m \square K_n$ for each $y \in \Gamma(x)$. Since $\Gamma$ is connected, statement (\ref{loc_mxn}) holds, and so does statement (\ref{cliques-per-edge}).
Let $\{x,y,z\}$ be a triangle in $\Gamma$. Then $\{y,z\}$ is an edge in $[\Gamma(x)] \cong K_m \square K_n$, and so $\{y,z\}$ is contained in a unique maximal clique $C$ in $[\Gamma(x)]$. Thus $C \cup \{x\}$ is a maximal clique in $\Gamma$, and is the unique maximal clique in $\Gamma$ containing $\{x,y,z\}$. Statement (\ref{cliques-per-triangle}) follows.
Let $x, y \in \V(\Gamma)$ with $d_\Gamma(x,y) = 2$, and let $z \in \Gamma(x) \cap \Gamma(y)$. Then $x$ and $y$ are vertices at distance $2$ in $[\Gamma(z)] \cong K_m \square K_n$, and thus $x$ and $y$ have exactly two common neighbours $u$ and $v$ in $[\Gamma(z)]$. The vertices $u$ and $v$ are non-adjacent in $\Gamma$, and are precisely the neighbours of $z$ in $\mu(x,y)$. Hence each vertex of $\mu(x,y)$ has valency $2$ in $\mu(x,y)$, and its two neighbours there are non-adjacent; so $\mu(x,y)$ is triangle-free and is a union of cycles, each of length at least $4$. Now $\mu(x,y)$ is an induced subgraph of $[\Gamma(x)] \cong K_m \square K_n$; since $\mu(x,y)$ has no triangles, no two of its edges can belong to the same clique of $[\Gamma(x)]$. Thus a connected component of $\mu(x,y)$ has the form given in Figure \ref{figure:mu-component}, and must have even length. This proves statement (\ref{mu-cycles}).
\end{proof}
\begin{figure}
\begin{center}
\begin{pspicture}(-1,-1)(1,1)
\multido{\n=-1+0.4}{5}{\psline[linecolor=lightgray](\n,1)(\n,-1)}
\multido{\n=-0.6+0.4}{5}{\psline[linecolor=lightgray](-1,\n)(1,\n)}
\scalebox{0.8}{\rput(-0.25,0.25){\mucomponent}}
\rput(2.5,0){\small $[\Gamma(x)] \cong K_m \square K_n$}
\end{pspicture}
\end{center}
\caption{Connected component of $\mu(x,y)$} \label{figure:mu-component}
\end{figure}
\section{A family of examples} \label{sec:family}
\begin{construction} \label{example:drg} \cite[Construction 4.1]{DRcovers}
Let $n$ be a power of an odd prime, and let $q = n^2$ and $r = (n+1)/2$. Let $V$ be a vector space of dimension $2$ over the finite field $\F_q$ of order $q$, let $V^*$ be the set of all nonzero vectors, let $B$ be a nondegenerate symplectic form on $V$, and let $R$ be the subgroup of index $r$ in the multiplicative group $\F^*_q$ of $\F_q$. The graph $\Gamma^{(n)}$ has vertex set
\[ \V\big(\Gamma^{(n)}\big) = \{ Ru \ : \ u \in V^* \} \]
and edge set
\[ \E\big(\Gamma^{(n)}\big) = \big\{ \{Ru,Rv\} \ : \ B(u,v) \in R \big\}. \]
\end{construction}
By \cite{DRcovers} the graph $\Gamma^{(n)}$ has diameter $3$, and is a distance-regular antipodal cover of $K_{q+1}$ with antipodal blocks of size $r$ and $c_2 = 2(n-1)$. Its intersection array is $\big(q,\, (r-1)c_2,\, 1;\, 1,\, c_2,\, q\big)$. In particular, the graph $\Gamma^{(3)}$ is isomorphic to the Johnson graph $J(6,3)$.
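For the smallest case $n = 3$ the construction is small enough to verify by direct computation. The sketch below (an illustrative check of ours, separate from the paper's \textsc{Magma} code) models $\F_9$ as $\F_3[i]$ with $i^2 = -1$ (our choice of irreducible polynomial), builds $\Gamma^{(3)}$, and confirms that it has $20$ vertices, valency $q = 9$, diameter $3$, antipodal blocks of size $r = 2$, and $c_2 = 4$, and that it is locally $3 \times 3$ grid, using the well-known fact that the unique strongly regular graph with parameters $(9,4,1,2)$ is $K_3 \square K_3$.

```python
# Illustrative check of Construction 4.1 for n = 3 (q = 9, r = 2).
from collections import deque

# GF(9) as F_3[i] with i^2 = -1 (x^2 + 1 is irreducible over F_3).
def add(a, b): return ((a[0] + b[0]) % 3, (a[1] + b[1]) % 3)
def neg(a):    return ((-a[0]) % 3, (-a[1]) % 3)
def mul(a, b): return ((a[0]*b[0] - a[1]*b[1]) % 3, (a[0]*b[1] + a[1]*b[0]) % 3)

ZERO = (0, 0)
Fq = [(a, b) for a in range(3) for b in range(3)]
R = {mul(a, a) for a in Fq if a != ZERO}       # index-2 subgroup: the squares
assert len(R) == 4

def B(u, v):                                   # symplectic form on GF(9)^2
    return add(mul(u[0], v[1]), neg(mul(u[1], v[0])))

Vstar = [(u0, u1) for u0 in Fq for u1 in Fq if (u0, u1) != (ZERO, ZERO)]
# Vertices are the R-cosets Ru of nonzero vectors: (q^2 - 1)/|R| = 80/4 = 20.
verts = list({frozenset((mul(r, u[0]), mul(r, u[1])) for r in R) for u in Vstar})
assert len(verts) == 20

# Adjacency: Ru ~ Rv iff B(u, v) in R (well defined since R is a subgroup).
adj = {C: set() for C in verts}
for C in verts:
    for D in verts:
        if C != D and B(next(iter(C)), next(iter(D))) in R:
            adj[C].add(D)
assert all(len(adj[C]) == 9 for C in verts)    # valency q = 9

# Each neighbourhood is srg(9, 4, 1, 2), i.e. the 3x3 grid K_3 box K_3.
for x in verts:
    N = adj[x]
    assert all(len(adj[y] & N) == 4 for y in N)
    for y in N:
        for z in N:
            if y != z:
                assert len(adj[y] & adj[z] & N) == (1 if z in adj[y] else 2)

def dists(x):                                  # BFS distances from x
    d = {x: 0}; q = deque([x])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in d:
                d[w] = d[v] + 1; q.append(w)
    return d

for x in verts:
    d = dists(x)
    assert max(d.values()) == 3                        # diameter 3
    assert sum(1 for v in d if d[v] == 3) == 1         # antipodal blocks of size 2
    assert all(len(adj[x] & adj[v]) == 4 for v in d if d[v] == 2)  # c_2 = 2(n-1)
```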
Our aim in this section is to prove Theorem \ref{maintheorem:family}. It will follow from Proposition \ref{proposition:family}.
\begin{proposition} \label{proposition:family}
Let $n$, $q$, $r$, and $\Gamma^{(n)}$ be as in Construction \ref{example:drg}. Then the following hold:
\begin{enumerate}[(1)]
\item \label{trans} The graph $\Gamma^{(n)}$ is vertex-transitive and arc-transitive.
\item \label{locgrid} The graph $\Gamma^{(n)}$ is locally $n \times n$ grid.
\item \label{cycles-divisors} For each $\mu$-graph of $\Gamma^{(n)}$ there is an odd divisor $d$ of $n-1$ such that the $\mu$-graph is a union of $d$ cycles of length $2(n-1)/d$. Conversely, for each odd divisor $d$ of $n-1$, there is a $\mu$-graph of $\Gamma^{(n)}$ which is a union of $d$ cycles of length $2(n-1)/d$.
\item \label{cycles-lowerbound} For each $N > 0$ there exists $n \geq N$ such that some $\mu$-graph of $\Gamma^{(n)}$ is a union of more than $\log(N)$ cycles.
\end{enumerate}
\end{proposition}
The proof of Proposition \ref{proposition:family} is given at the end of the section, and relies on several intermediate results.
Let $\omega$ be a primitive element of $\F_q$, so that $\omega^{2r}$ is a primitive element of $\F_n$, where $\F_n = \F_{\sqrt{q}} = \left\langle \omega^{2r} \right\rangle \cup \{0\}$ is the subfield of $\F_q$ of index $2$. Then $R = \langle \omega^r \rangle = \F^*_n \,\dot{\cup}\, \F^*_n \omega^r$. The set $\{1, \omega^r\}$ is a basis for $\F_q$ as a vector space over $\F_n$, so $\F_q = \F_n + \F_n \omega^r$ and each $\alpha \in \F_q$ can be written uniquely as
\begin{equation} \label{eq:E+O}
\alpha = \alpha_{ev} + \alpha_{odd}, \quad \text{for } \alpha_{ev} \in \F_n \text{ and } \alpha_{odd} \in \F_n\omega^r.
\end{equation}
Observe that $-1 = \omega^{r(n-1)}$, so $-1 \in \F_n$ (since $n$ is odd) and in particular $-1 \in R$. Also note that $\alpha_{ev}\alpha_{odd}^{-1} \in \F^*_n\omega^r$ whenever $\alpha_{ev}$ and $\alpha_{odd}$ are both nonzero.
In what follows $\{e,f\}$ is a symplectic basis for $V$ with respect to the form $B$, that is, $e$ and $f$ are nonzero vectors satisfying $B(e,e) = B(f,f) = 0$ and $B(e,f) = -B(f,e) = 1$. Note that $B(u,u) = 0$ for all $u \in V$.
Since $\Gamma^{(n)}$ is an antipodal distance-regular graph of diameter $3$, the antipodal block containing any vertex $u$ is $\{u\} \cup \Gamma^{(n)}_3(u)$.
\begin{lemma} \label{lemma:antipodes}
Let $\Gamma^{(n)}$ be as in Construction \ref{example:drg}. For any $u \in V^*$, the antipodal block containing $Ru$ is $\{ R'u \ : \ R' \text{ an $R$-coset in $\F_q$} \}$ and this block is $\{Ru\} \cup \Gamma^{(n)}_3(Ru)$.
\end{lemma}
\begin{proof}
Let $R' \neq R$ be an $R$-coset in $\F_q$. Then $R' = R\gamma$ for some $\gamma \notin R$, and $R'u = R(\gamma u)$. Now $B(u,\gamma u) = \gamma B(u,u) = 0 \notin R$, so $R'u \notin \Gamma^{(n)}(Ru)$. Let $Rv \in \Gamma^{(n)}(Ru)$. Then $B(u,v) \in R$ so that $B(\gamma u,v) = \gamma B(u,v) \notin R$, and hence $Rv \notin \Gamma^{(n)}(R(\gamma u)) = \Gamma^{(n)}(R'u)$. Thus $\Gamma^{(n)}(Ru) \cap \Gamma^{(n)}(R'u) = \varnothing$, so that $R'u \notin \Gamma^{(n)}_2(Ru)$. Since $\diam\big(\Gamma^{(n)}\big) = 3$ it follows that $R'u \in \Gamma^{(n)}_3(Ru)$. Therefore $Ru$ and $R'u$ are at maximum distance in $\Gamma^{(n)}$. As mentioned above $\Gamma^{(n)}$ is antipodal and its antipodal blocks have size $r$; since $R$ has index $r$ in $\F^*_q$ the result follows.
\end{proof}
The action on vectors of the isometry group $\Sp_2(q)$ of $B$ induces an action on $\V\big(\Gamma^{(n)}\big)$ which preserves $\E\big(\Gamma^{(n)}\big)$. This together with the subgroup of scalars isomorphic to $R$ generates $G := R \circ \Sp_2(q)$; again the $G$-action on vectors induces an action on $\V\big(\Gamma^{(n)}\big)$ whose kernel is $R$. (It is convenient to work with this unfaithful action rather than the induced group $\textnormal{PSp}_2(q)$.) We represent vectors in $V$ as row vectors, so $\alpha e + \beta f$ is represented as $(\alpha,\beta)$, and then $G$ acts by matrix multiplication.
\begin{lemma} \label{lemma:family1}
Let $n$, $q$, $r$, and $\Gamma^{(n)}$ be as in Construction \ref{example:drg}. Set $x := Re \in \V\big(\Gamma^{(n)}\big)$ and let $G = R \circ \Sp_2(q)$.
\begin{enumerate}[(1)]
\item \label{Gam(x)} $\Gamma^{(n)}(x) = \{ R(\alpha e + f) \ : \alpha \in \F_q \}$, and the stabiliser $G_x$ of $x$ is transitive on $\Gamma^{(n)}(x)$.
\item \label{Gam(x)-adj} Two distinct vertices $R(\alpha e + f), R(\alpha'e + f) \in \Gamma^{(n)}(x)$ are adjacent in $\Gamma^{(n)}$ if and only if either $\alpha_{ev} = \alpha'_{ev}$ or $\alpha_{odd} = \alpha'_{odd}$ (but not both), where $\alpha_{ev}$, $\alpha'_{ev}$, $\alpha_{odd}$, and $\alpha'_{odd}$ are as in equation (\ref{eq:E+O}). The maximal cliques in $\Gamma^{(n)}(x)$ which contain $R(\alpha e + f)$ are $\big\{ R((\alpha_{ev} + \gamma)e + f) \ : \ \gamma \in \F_n\omega^r \big\}$ and $\big\{ R((\gamma + \alpha_{odd})e + f) \ : \ \gamma \in \F_n \big\}$.
\item \label{Gam2(x)} $\Gamma^{(n)}_2(x) = \big\{ R(\alpha e + \beta f) \ : \ \alpha \in \F_q,\, \beta \in \F^*_q \setminus R \big\}$. For any $R(\alpha e + \beta f) \in \Gamma^{(n)}_2(x)$, there exists $g \in G_x$ such that $(R(\alpha e + \beta f))^g = R(\beta f)$.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $\alpha, \beta \in \F_q$. The vertex $R(\alpha e + \beta f)$ is adjacent to $x$ if and only if $\beta = B(e, \alpha e + \beta f) \in R$. Since $0 \notin R$, each such vertex in $\Gamma^{(n)}$ has a unique representative of the form $\alpha e + f$, which proves the first part of statement (\ref{Gam(x)}). For any $\alpha, \alpha' \in \F_q$ the element
\[ \left(\begin{array}{cc} 1 & 0 \\ \alpha'-\alpha & 1 \end{array}\right) \]
of $G$ fixes $x$ and sends the vertex $R(\alpha e + f)$ to $R(\alpha' e + f)$. This completes the proof of statement (\ref{Gam(x)}).
Let $y = R(\alpha e + f)$ and $z = R(\alpha' e + f)$. Then $y \sim_{\Gamma^{(n)}} z$ if and only if
\[ \alpha - \alpha' = B(\alpha e + f, \, \alpha'e + f) \in R = \F^*_n \,\dot{\cup}\, \F^*_n\omega^r. \]
Using the representation in equation (\ref{eq:E+O}), $\alpha - \alpha' = (\alpha_{ev} - \alpha'_{ev}) + (\alpha_{odd} - \alpha'_{odd})$. Both $\F_n$ and $\F_n\omega^r$ are closed under addition, so $\alpha_{ev} - \alpha'_{ev} \in \F_n$ and $\alpha_{odd} - \alpha'_{odd} \in \F_n\omega^r$. Thus $\alpha - \alpha' \in \F^*_n$ if and only if $\alpha_{odd} - \alpha'_{odd} \in \F_n$, or equivalently $\alpha_{odd} - \alpha'_{odd} \in \F_n \cap \F_n\omega^r = \{0\}$. Similarly $\alpha - \alpha' \in \F^*_n\omega^r$ if and only if $\alpha_{ev} - \alpha'_{ev} \in \F_n\omega^r$, that is, $\alpha_{ev} - \alpha'_{ev} \in \F_n\omega^r \cap \F_n = \{0\}$. Hence $y \sim_{\Gamma^{(n)}} z$ if and only if either $\alpha_{ev} = \alpha'_{ev}$ or $\alpha_{odd} = \alpha'_{odd}$, but not both (since $y \neq z$). This proves the first part of statement (\ref{Gam(x)-adj}). The second part follows immediately.
The vertex $R(\alpha e + \beta f) \in \Gamma^{(n)}_2(x)$ if and only if $\beta \neq 0$ (for otherwise $R(\alpha e + \beta f) \in \Gamma^{(n)}_3(x)$ by Lemma \ref{lemma:antipodes}) and $\beta \notin R$ (else $R(\alpha e + \beta f) \in \Gamma^{(n)}(x)$ by the above). Hence we obtain the first part of statement (\ref{Gam2(x)}). For any $\alpha \in \F_q$ and $\beta \in \F^*_q \setminus R$ the stabiliser $G_x$ contains the element
\[ \left(\begin{array}{cc} 1 & 0 \\ -\alpha\beta^{-1} & 1 \end{array}\right), \]
and this sends $R(\alpha e + \beta f)$ to $R(\beta f)$. This completes the proof of statement (\ref{Gam2(x)}).
\end{proof}
By Lemma \ref{lemma:family1} (\ref{Gam2(x)}) each $G_x$-orbit in $\Gamma^{(n)}_2(x)$ contains a vertex $R\beta f$ for some $\beta \in \F^*_q \setminus R$. Hence to determine the structure of the $\mu$-graphs $\mu(x,y)$ for any $y$ we may assume that $y \in \Gamma^{(n)}_2(x)$ is $R\beta f$. This is what we do in the next result.
We denote the multiplicative order of $\alpha \in \F^*_q$ by $|\alpha|$.
\begin{lemma} \label{lemma:family2}
Let $n$, $q$, $r$, and $\Gamma^{(n)}$ be as in Construction \ref{example:drg}. Set $x := Re \in \V\big(\Gamma^{(n)}\big)$ and $y = R(\beta^{-1}f)$ where $\beta \in \F^*_q \setminus R$, and let $G = R \circ \Sp_2(q)$.
\begin{enumerate}[(1)]
\item \label{mu(x,y)-adj} $y \in \Gamma^{(n)}_2(x)$ and $\Gamma^{(n)}(x) \cap \Gamma^{(n)}(y) = \{ R(\alpha e + f) \ : \ \alpha \in R\beta \}$, a set of size $2(n-1)$. Two distinct vertices $R(\alpha e + f), R(\alpha'e + f) \in \Gamma^{(n)}(x) \cap \Gamma^{(n)}(y)$ are adjacent in $\Gamma^{(n)}$ if and only if $\alpha' = \alpha\big(\beta_{ev}\beta_{odd}^{-1}\big)^{\pm 1}$.
\item \label{mu(x,y)-cycles} The $\mu$-graph $\mu(x,y)$ is a union of $d := 2(n-1)/\left|\beta_{ev}\beta_{odd}^{-1}\right|$ cycles of length $2(n-1)/d$.
\end{enumerate}
\end{lemma}
\begin{proof}
Since $\beta \notin R \cup \{0\}$ neither is $\beta^{-1}$, so $y \in \Gamma^{(n)}_2(x)$ by Lemma \ref{lemma:family1} (\ref{Gam2(x)}). For any $\alpha', \beta' \in \F_q$, we have $R(\alpha'e + \beta'f) \in \Gamma^{(n)}(y)$ if and only if $-\beta^{-1}\alpha' = B(\beta^{-1}f, \, \alpha'e + \beta'f) \in R$, or equivalently $\alpha' \in R\beta$ (since $-1 \in R$). Thus $\Gamma^{(n)}(y) = \big\{ R(\alpha'e + \beta'f) \ : \ \alpha' \in R\beta, \, \beta' \in \F_q \big\}$, and we conclude that
\[ \Gamma^{(n)}(x) \cap \Gamma^{(n)}(y) = \big\{ R(\alpha' e + \beta' f) \ : \ \alpha' \in R\beta, \, \beta' \in R \big\} = \big\{ R(\alpha' e + f) \ : \ \alpha' \in R\beta \big\}, \]
a set of size $|R\beta| = 2(n-1)$. This proves the first part of statement (\ref{mu(x,y)-adj}).
Next let $w_1 = R(\alpha_1 e + f)$ and $w_2 = R(\alpha_2 e + f)$ be distinct vertices in $\Gamma^{(n)}(x) \cap \Gamma^{(n)}(y)$. Then $\alpha_1$ and $\alpha_2$ are distinct elements of $R\beta$, and for each $i \in \{1,2\}$ we can write $\alpha_i = \rho_i\beta$ for some $\rho_i \in R$. Hence $\alpha_i = \rho_i(\beta_{ev} + \beta_{odd})$ for $i = 1, 2$. If both $\rho_1, \rho_2 \in \F^*_n$ then $(\alpha_i)_{ev} = \rho_i\beta_{ev}$ and $(\alpha_i)_{odd} = \rho_i\beta_{odd}$, and since $\alpha_1 \neq \alpha_2$ either $(\alpha_1)_{ev} \neq (\alpha_2)_{ev}$ or $(\alpha_1)_{odd} \neq (\alpha_2)_{odd}$. So $\rho_1 \neq \rho_2$, and both $(\alpha_1)_{ev} \neq (\alpha_2)_{ev}$ and $(\alpha_1)_{odd} \neq (\alpha_2)_{odd}$ hold. Thus $w_1 \nsim_{\Gamma^{(n)}} w_2$ by Lemma \ref{lemma:family1} (\ref{Gam(x)-adj}), and a similar argument shows that $w_1 \nsim_{\Gamma^{(n)}} w_2$ whenever both $\rho_1, \rho_2 \in \F^*_n \omega^r$. Let us therefore assume that $\rho_1$ and $\rho_2$ belong to different $\F^*_n$-cosets in $R$; without loss of generality suppose that $\rho_1 \in \F^*_n$ and $\rho_2 \in \F^*_n\omega^r$. Then $(\alpha_1)_{ev} = \rho_1\beta_{ev}$, $(\alpha_1)_{odd} = \rho_1\beta_{odd}$, $(\alpha_2)_{ev} = \rho_2\beta_{odd}$, and $(\alpha_2)_{odd} = \rho_2\beta_{ev}$. By Lemma \ref{lemma:family1} (\ref{Gam(x)-adj}) the vertices $w_1$ and $w_2$ are adjacent if and only if either $\rho_1\beta_{ev} = \rho_2\beta_{odd}$ or $\rho_1\beta_{odd} = \rho_2\beta_{ev}$ (but not both), which is equivalent to $\rho_2 = \rho_1\left(\beta_{ev}\beta_{odd}^{-1}\right)^{\pm 1}$. So $w_1 \sim_{\Gamma^{(n)}} w_2$ if and only if
\[ \alpha_2 = \rho_1\left(\beta_{ev}\beta_{odd}^{-1}\right)^{\pm 1}\beta = \alpha_1\big(\beta_{ev}\beta_{odd}^{-1}\big)^{\pm 1}. \]
This completes the proof of statement (\ref{mu(x,y)-adj}). (Recall that $\beta_{ev}\beta^{-1}_{odd} \in \F^*_n \omega^r \subseteq R$ by the remark following equation (\ref{eq:E+O}).)
Set $\gamma = \beta_{ev}\beta_{odd}^{-1}$. It follows from the above that
\[ R(\alpha e + f) \sim_{\Gamma^{(n)}} R(\alpha\gamma e + f) \sim_{\Gamma^{(n)}} R(\alpha\gamma^2 e + f) \sim_{\Gamma^{(n)}} \ldots, \]
that is, each connected component of $\mu(x,y)$ has vertex set $\big\{ R(\alpha' e + f) \ : \ \alpha' \in \alpha \big\langle \beta_{ev}\beta_{odd}^{-1} \big\rangle \big\}$ for some $\alpha \in \F_q$. Note that $R(\alpha\gamma^k e + f) = R(\alpha e + f)$ if and only if $\gamma^k = 1$, by the uniqueness of the coset representative of the form $\alpha e + f$. Thus the length of each component is $|\gamma|$, and the number of components is $d = 2(n-1)/|\gamma|$. This proves statement (\ref{mu(x,y)-cycles}).
\end{proof}
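The cycle decomposition in Lemma \ref{lemma:family2} (\ref{mu(x,y)-cycles}) can be modelled concretely: writing the elements of $R\beta$ as exponents modulo $2(n-1)$, multiplication by $\gamma^{\pm 1} = \omega^{\pm ri}$ becomes addition of $\pm i$, so $\mu(x,y)$ becomes a circulant graph on $\mathbb{Z}_{2(n-1)}$ with connection set $\{\pm i\}$. The following sketch (our illustration, not from the paper) confirms that such a graph splits into $\gcd(i, 2(n-1))$ components of equal size.

```python
# Toy model (our illustration): mu(x,y) as the graph on Z_m, m = 2(n-1),
# with a ~ a +/- i; its components are the cosets of <i>, so there are
# gcd(i, m) of them, each of size m / gcd(i, m).
from math import gcd

def component_sizes(m, i):
    """Sizes of the connected components of the graph on Z_m with a ~ a +/- i."""
    seen, sizes = set(), []
    for start in range(m):
        if start not in seen:
            a, size = start, 0
            while a not in seen:
                seen.add(a)
                size += 1
                a = (a + i) % m      # following +i suffices: same orbits as +/- i
            sizes.append(size)
    return sizes

for n in (5, 9, 13, 25):             # odd prime powers, as in the construction
    m = 2 * (n - 1)
    for i in range(1, m):
        d = gcd(i, m)
        sizes = component_sizes(m, i)
        assert len(sizes) == d and all(s == m // d for s in sizes)
```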
\begin{proof}[Proof of Proposition \ref{proposition:family}]
Let $G = R \circ \Sp_2(q)$. Then the $G$-action on $V^*$ induces an action on $\V\big(\Gamma^{(n)}\big)$ which preserves $\E\big(\Gamma^{(n)}\big)$, and the kernel of this action is $R$. Since $\Sp_2(q)$ acts transitively on $V^*$, the group $G$ is transitive on $\V\big(\Gamma^{(n)}\big)$. By Lemma \ref{lemma:family1} (\ref{Gam(x)}) the stabiliser in $G$ of the vertex $x = Re$ is transitive on $\Gamma^{(n)}(x)$; since $G$ is vertex-transitive, it follows that $G$ is also arc-transitive on $\Gamma^{(n)}$. This proves statement (\ref{trans}).
It is easy to see from Lemma \ref{lemma:family1} (\ref{Gam(x)-adj}) that $[\Gamma^{(n)}(x)] \cong K_n \square K_n$, so by vertex-transitivity $\Gamma^{(n)}$ is locally $n \times n$ grid. Hence statement (\ref{locgrid}) holds.
To prove statement (\ref{cycles-divisors}), first let $x', y' \in \V\big(\Gamma^{(n)}\big)$ with $d_{\Gamma^{(n)}}(x',y') = 2$. By vertex-transitivity and Lemma \ref{lemma:family1} (\ref{Gam2(x)}) there exist $g \in G$ and $h \in G_x$ such that $(x')^g = x = Re$ and $(y')^{gh} = R\beta^{-1}f =: y$ for some $\beta \in \F^*_q \setminus R$. That is, $(x',y')^{gh} = (x,y)$, so that $\mu(x',y') \cong \mu(x,y)$. By Lemma \ref{lemma:family2} (\ref{mu(x,y)-cycles}), the graph $\mu(x,y)$ is a union of $d = 2(n-1)/\big|\beta_{ev}\beta_{odd}^{-1}\big|$ cycles of length $\big|\beta_{ev}\beta_{odd}^{-1}\big|$. Since $\beta_{ev}\beta_{odd}^{-1} \in \F^*_n\omega^r$, we have $\beta_{ev}\beta_{odd}^{-1} = \omega^{ri}$ for some odd $i$. Thus $\big|\beta_{ev}\beta_{odd}^{-1}\big| = (q-1)/\gcd(ri,q-1) = 2(n-1)/\gcd(i,2(n-1))$, implying that $d = \gcd(i,2(n-1))$. Further, since $i$ is odd, $d = \gcd(i,n-1)$ is odd. Thus $d$ is an odd divisor of $n-1$. This proves the first part of statement (\ref{cycles-divisors}).
For the converse, let $d$ be an odd divisor of $n-1$. Take $x = Re$ and $y = R\beta^{-1}f$, where $\beta = 1 + \omega^{-rd}$. Note that $\omega^{-rd} = \omega^{r(q-1-d)}$; since both $q$ and $d$ are odd, so is $q-1-d$, and thus $\omega^{-rd} \in \F^*_n\omega^r$. Hence $\beta_{ev} = 1$ and $\beta_{odd} = \omega^{-rd}$. Since for $\gamma = 0$ we have $\gamma_{ev} = \gamma_{odd} = 0$, it follows from the uniqueness of the expression (\ref{eq:E+O}) for $\beta$ that $\beta \neq 0$. Also $\beta \notin \F^*_n \cup \F^*_n\omega^r = R$ since $\beta_{ev}$ and $\beta_{odd}$ are both nonzero. Therefore $\beta \in \F^*_q \setminus R$, so $y \in \Gamma^{(n)}_2(x)$ by Lemma \ref{lemma:family1} (\ref{Gam2(x)}). Now $\big|\beta_{ev}\beta_{odd}^{-1}\big| = \big|\omega^{rd}\big| = 2(n-1)/d$, so by Lemma \ref{lemma:family2} (\ref{mu(x,y)-cycles}) the graph $\mu(x,y)$ is a union of $d$ cycles of length $2(n-1)/d$. This completes the proof of statement (\ref{cycles-divisors}).
It follows from statement (\ref{cycles-divisors}) that there is no absolute upper bound on the number of cycles in a $\mu$-graph in Construction \ref{example:drg}. Indeed, if $n = p^m$ for some odd prime $p$ and $m \geq 3$, then $p^m - 1$ has a prime divisor $d$ that does not divide $p^i - 1$ for any $i < m$ by \cite{ppd-zsig} (see also \cite[Theorem 2.1]{ppd}), and such a prime is at least $m+1 > \log_p(n)$. This proves statement (\ref{cycles-lowerbound}).
\end{proof}
\begin{proof}[Proof of Theorem \ref{maintheorem:family}]
As mentioned before Proposition \ref{proposition:family}, $\diam\big(\Gamma^{(n)}\big) = 3$. Also $\Gamma^{(n)}$ is an antipodal cover of $K_{q+1}$ with antipodal blocks of size $r$, so that $|\V\big(\Gamma^{(n)}\big)| = (q+1)r = (n^2+1)(n+1)/2$. It is distance-regular with parameter $c_2 = 2(n-1)$, so all of its $\mu$-graphs have order $2(n-1)$, and its intersection array is $\big(q,\, (r-1)c_2,\, 1;\, 1,\, c_2,\, q\big) = \big(n^2,\, (n-1)^2,\, 1;\, 1,\, 2(n-1),\, n^2\big)$. It is locally $n \times n$ grid by Proposition \ref{proposition:family} (\ref{locgrid}). Thus $\Gamma^{(n)}$ satisfies the conditions of Theorem \ref{maintheorem:order-diameter} (\ref{=2(n-1)}). The last part of Theorem \ref{maintheorem:family} follows immediately from Proposition \ref{proposition:family} (\ref{cycles-divisors}).
\end{proof}
\section{Introduction}
\input{intro.tex}
\section{Notations and definitions}
\input{notdefs.tex}
\section{Core Library}
\input{corelib.tex}\label{sec:corelib}
\section{Invariants Database}
\input{database.tex}\label{sec:database}
\section{Convex Hull}\label{sec:convexhull}
\input{convexhull.tex}
\section{Forbidden Graph Characterization}
\input{fgc.tex}\label{sec:fgc}
\section{Transproof}\label{sec:transproof}
\input{transproof.tex}
\section{Conclusion and future work}
\input{conclusion.tex}
\bibliographystyle{IEEEtran}
\section{\label{sec:intro} Introduction}
Semiconductor quantum rings allow for the observation of electron self-interference.
When an electron traverses a ring threaded by a magnetic field, it is subject
to constructive or destructive interference, which manifests itself as conductance oscillations. This effect,
predicted by Aharonov and Bohm\cite{theoryAB_1}, was observed in many experiments with quantum
rings\cite{ex_AB_1}. Manipulation of the electron wave function phase in both arms of the quantum ring
makes it possible to obtain strong or weak coupling between the ring and the leads by tuning
the magnetic field. Recently, besides the most intensively examined two-terminal open quantum
rings\cite{ex_2_terminal_1,ex_2_terminal_2,ex_2_terminal_3} this kind of coupling was experimentally
tested in three-terminal \cite{3lorentz,ex_3_terminal_2} as well as in four-terminal quantum
rings\cite{ex_4_terminal_1}. The current oscillations are highly sensitive to decoherence resulting from
interactions with the environment, such as electron-phonon or electron-electron interactions.
The effect of electrostatic interaction on magnetotransport was observed experimentally for a ring
with a quantum dot placed in one of its arms\cite{ex_dot_ring_2}, for a ring capacitively coupled to a
quantum dot placed beside it\cite{ex_dot_ring_3,ex_dot_ring_1}, as well as for rings working in the
Coulomb blockade regime and confining from a few\cite{ex_cotunneling} to several hundred
electrons.\cite{ex_interaction_ring_2,ex_interaction_ring_1} The weak localization theory
predicts that the phase coherence time against the effects of the electron-electron interaction approaches infinity at
zero temperature\cite{mohanty}. However, besides the decoherence, which is suppressed at low
temperature, the electrostatic interaction is also responsible for the existence of spatial correlations
between charged particles. Correlations induced by the electrostatic interaction may be crudely divided
into two types: (i) the Coulomb correlation, which introduces a dependence between the mutual particle
positions due to the repulsive or attractive interaction, and (ii) the Pauli correlation, which arises
directly from the Pauli exclusion principle.
An extremely strong effect of the Coulomb correlation on magnetotransport in a quantum ring was
experimentally observed by M\"uhle et al.\cite{ex_ring_ring_1} Measurements of the magnetic field
dependence of the conductance for a system of two concentric, capacitively coupled quantum rings revealed
two-period oscillations, which the authors ascribed to the presence of AB effects in the inner and in
the external ring.
This experiment explicitly proves that the Coulomb correlation may greatly affect electron transport in
a quantum ring even at low temperatures, when the decoherence due to electron-electron scattering
vanishes.
In this paper we study single-electron transport through a two-terminal quantum ring
in an external magnetic field, taking into account the Coulomb interaction with another charge
carrier. The second particle (electron or hole) is confined within a dot placed in the center of the ring.
We assume that the barrier between the ring and the dot is thick enough to neglect the tunnel coupling.
For this confinement potential
model, we perform the time evolution of the two-particle wave function by solving the time-dependent
Schr\"odinger equation. We observe AB oscillations in the electron transfer probability. We also find
that the Coulomb correlation modifies the AB effect in the following way: (i) the maxima of the transmission
probability grow when the transferred electron is attracted by the charged dot, while a repulsive interaction
lowers them, and (ii) the probability of electron transfer may grow for $(n+1/2)$ magnetic flux quanta piercing the
ring when the interaction is strong enough to excite the carrier confined in the inner dot.
In the latter case, the electron transfers part of its energy to the dot. We find that the energy
transfer depends on the magnetic field through both the AB effect and the Lorentz force. The electrostatic
interaction also causes a positive feedback between the transferred electron and the second particle:
even a small oscillation of the charge in the dot can perturb the potential felt by the transferred
electron, which may change the phase of the electron wave function in both ring arms. Finally, inelastic
scattering of the electron on the oscillating Coulomb potential leads to a suppression of the AB effect.
The paper is organized in the following way. We define the confinement potential of the considered
system and present our theoretical model in Sec.\ref{sec:theory}. The effects of the repulsive and
of the attractive interaction on the transmission probability are presented in Sec.\ref{sec:ee} and
in Sec.\ref{sec:eh}, respectively. Inelastic scattering of the transferred electron on the Coulomb potential
in the two-terminal quantum ring is analyzed in Sec.\ref{sec:inelastic}. Discussion and conclusions are
provided in Sec.\ref{sec:con}.
\section{\label{sec:theory} Theory}
Our confinement potential model consists of a quantum ring connected to left and right
leads of finite length, and of a closed circular quantum dot placed in the center of the ring. Tunneling
between the ring and the dot is neglected due to the wide barrier, so that the first particle (electron)
can only move in the leads and in the ring, while the second (electron or hole) cannot leave the dot.
The whole system is placed in a homogeneous magnetic field perpendicular to the plane of the ring.
The confinement potential is schematically depicted in Fig.\ref{Fig:pot_uw}.
\begin{figure}[htb!]
\hbox{
\epsfxsize=80mm
\epsfbox[10 220 600 560] {fig1.eps}
\hfill}
\caption{Confinement potential of the two-terminal quantum ring with a quantum dot placed inside. Arrows
indicate the spatial limits of the three single-electron basis sets used to describe the
electron wave packet in the leads and in the ring (see text below). Labels $x_{l}$ and $x_{r}$ mark
the left and the right limits of the ring.}
\label{Fig:pot_uw}
\end{figure}
We assume that the confinement is much stronger in the growth (z) direction than in the (x-y) plane
of the ring, and that both particles occupy the frozen ground state of the quantization in the growth
direction. The system of two interacting particles can therefore be described within an effective
two-dimensional model. We define the confinement potentials of the ring ($V_{r}$), of the leads
($V_{l}$) and of the dot ($V_{d}$) as:
\begin{eqnarray}
V_{r}(\mathbf{r})&=&V_{e}\exp \left(
-\frac{\left|\left|\mathbf{r}-\mathbf{r}_{0}\right|-r_{r}\right|^{p}}{\sigma_{r}
^{p}}
\right)
\label{eq:potring}\\
V_{l}(\mathbf{r})&=&V_{e}\exp\left(-\frac{\left|y\right|^{p}}{\sigma_{r}^{p}}
\right)
\label{eq:potkan}\\
V_{d}(\mathbf{r})&=&V_{e(h)}\exp \left(
-\frac{\left| \mathbf{r}-\mathbf{r}_{0}\right|^{p}}{\sigma_{d}^{p}} \right)
\label{eq:potdot}
\end{eqnarray}
In the above equations, $V_{e(h)}$ is the maximal depth of the potential for the electron (hole),
$\mathbf{r}_{0}$ is the common center of the ring and the dot, $\sigma_{d}$ is the radius of the dot,
$\sigma_{r}$ is the width of the ring arms and of both leads, and $r_{r}$ is the radius of the ring. The
value
of the parameter $p$ defines the smoothness of the quantum dot walls. In the calculation we use the following
values: $V_{e}=-200\,\textrm{meV}$, $V_{h}=-140\,\textrm{meV}$,
$\mathbf{r}_{0}=[3.195\,\mu\textrm{m},0]$,
$\sigma_{d}=55\,\textrm{nm}$, $p=8$, $\sigma_{r}=25\,\textrm{nm}$ and $r_{r}=130\,\textrm{nm}$. The
length of the left lead is equal to $3\,\mu\textrm{m}$ while the length of the right lead is
$3.2\,\mu\textrm{m}$.
The main aim of this work is to investigate the role of the Coulomb correlation in
single-electron transport through the ring. For this purpose we follow the time evolution of the
two-particle wave function, which fulfills the Schr\"odinger equation:
\begin{equation}
i\hbar\frac{\partial}{\partial t}
|\Psi(\mathbf{r}_{1},\mathbf{r}_{2},t)\rangle
=\widehat{H}|\Psi(\mathbf{r}_{1},\mathbf{r}_{2},t)\rangle
\label{eq:schrodinger}
\end{equation}
where the two-particle Hamiltonian is defined as:
\begin{equation}
\widehat{H}=\widehat{h}_{1}+\widehat{h}_{2}+\frac{q_{1}q_{2}}{4\pi
\epsilon \epsilon_{0}r_{12}}
\label{eq:ham12}
\end{equation}
The Hamiltonians $\widehat{h}_{1}$ and $\widehat{h}_{2}$ are the single-particle
energy operators. The third term on the right-hand side of the above equation
describes the electrostatic interaction between the particles, which introduces
spatial correlations between their positions. We use single-particle Hamiltonians of the following form:
\begin{equation}
\widehat{h}_{i}=\frac{(\widehat{\mathbf{p}}_{i}-q_{i}\mathbf{A}(\mathbf{r}_{i}
))^{2}}{2m_{i}^{*}}+V_{o(d)}(\mathbf{r}_{i})
\label{eq:ham1}
\end{equation}
where $\widehat{\mathbf{p}}_{i}=-i\hbar\nabla_{\mathbf{r}_{i}}$ is the particle momentum operator, $m_{i}^{*}$
is the effective mass, $q_{i}$ is the charge of the particle, $\mathbf{A}(\mathbf{r})$ is the vector
potential,
$V_{d}(\mathbf{r}_{i})$ is the confinement potential of the dot, and
$V_{o}(\mathbf{r}_{i})=V_{r}(\mathbf{r}_{i})+V_{l}(\mathbf{r}_{i})$ is the sum of the confinement potentials
of the ring and the leads. Since the tunnel coupling between the ring and the dot is neglected in our
theoretical model, the particles confined in the spatially separated regions cannot exchange their spins.
In other words, the exchange interaction between the electron in the ring and the particle confined in
the dot vanishes exactly. Therefore, all the effects due to the presence of the charged dot inside the
ring result solely from the Coulomb coupling. For a non-negligible tunnel coupling between the ring and
the dot the single-particle wave functions of the ring and the dot would overlap. In this case
the exchange correlation would lead to a dependence of the transmission probability on the relative spin
arrangement. Moreover, for a nonzero overlap between the ring and dot wave functions the
particle confined in the dot might tunnel out into the ring.
The
Hamiltonian (\ref{eq:ham12}) does not depend on the spin coordinates. According to the
superposition principle, we expand the correlated wave function of the two spinless particles as the linear
combination:
\begin{equation}
\Psi(\mathbf{r}_{1},\mathbf{r}_{2},t)=
\sum_{i}^{M} c_{i}(t) \psi_{i}(\mathbf{r}_{1},\mathbf{r}_{2})
\label{eq:funkcja2}
\end{equation}
where $c_{i}(t)$ are time-dependent coefficients and $M$ is the size of the two-particle
basis. The elements $\psi_{i}$ are expressed as products of single-particle wave functions:
\begin{equation}
\psi_{i}(\mathbf{r}_{1},\mathbf{r}_{2})=\varphi_{k(i)}(\mathbf{r}_{1})
\phi_{m(i)}(\mathbf{r}_{2})
\label{eq:baza2}
\end{equation}
where each index $i$ corresponds to a particular combination of the indices $k$ and
$m$. The
index $k$ numbers the states of the first particle, which moves in the ring and in the leads,
while
$m$ numbers the states of the second particle in the quantum dot. In order to find the wave
functions
$\varphi_{k}$ and $\phi_{m}$ we first express them as linear combinations of centered Gaussian functions. For
example, the $k$-th quantum dot state can be written as:
\begin{equation}
\phi_{k}(\mathbf{r})=\sum_{i}a_{i}^{k}
\exp\bigg(
-\frac{(\mathbf{r}-\mathbf{r}_{i})^{2}}{2\sigma^{2}_{g}}
-\frac{iqB}{2\hbar}(x-x_{i})y_{i}
\bigg)
\label{eq:bazagauss}
\end{equation}
where $a_{i}^{k}$ are the linear combination coefficients, $\mathbf{r}_{i}=[x_{i},y_{i}]$ are the position
vectors of the Gaussian centers, $B$ is the value of the magnetic field, and $q$ is the charge of the
particle ($+e$ for the hole and $-e$ for the electron). The nodes $\mathbf{r}_{i}$ form a two-dimensional
square mesh with the distance
$\Delta_{g}=\sqrt{2}\sigma_{g}$ between neighboring nodes. In the next step, we diagonalize the single-particle
Hamiltonians (\ref{eq:ham1}) in the Gaussian basis (\ref{eq:bazagauss}) in order to find the
coefficients $a_{i}^{k}$. In
the calculations we used material parameters for GaAs, i.e. the effective electron mass $m_{e}^{*}=0.067m_{0}$,
the effective heavy hole mass $m_{h}^{*}=0.5m_{0}$ ($m_{0}$ is the bare electron mass), and the dielectric
constant $\epsilon=12.9$. We used the nonsymmetric gauge $\mathbf{A}(\mathbf{r})=B[-y,0,0]$, for which
the magnetic field vector $\vec{B}$ is parallel to the z-axis (perpendicular to the ring plane). The value
of the parameter $\sigma_{g}$ was estimated variationally and equals $\sigma_{g}=5.16\,\textrm{nm}$.
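The basis element of Eq. (\ref{eq:bazagauss}) is a Gaussian envelope multiplied by a node-dependent magnetic phase factor. As a rough illustration (a sketch under our own assumptions; the function name, units, and sample values are ours, not taken from the paper), such an element can be evaluated as:

```python
import numpy as np

HBAR = 1.054571817e-34      # J*s
E_CHARGE = 1.602176634e-19  # C

def gaussian_element(x, y, xi, yi, sigma_g, B, q=-E_CHARGE):
    """Centered Gaussian of Eq. (bazagauss): envelope times magnetic phase.

    (x, y): evaluation point [m]; (xi, yi): mesh node [m];
    sigma_g: Gaussian width [m]; B: magnetic field [T]; q: charge [C].
    """
    envelope = np.exp(-((x - xi) ** 2 + (y - yi) ** 2) / (2.0 * sigma_g ** 2))
    phase = np.exp(-1j * q * B / (2.0 * HBAR) * (x - xi) * yi)
    return envelope * phase

# Square mesh spacing Delta_g = sqrt(2) * sigma_g, as stated in the text.
sigma_g = 5.16e-9
delta_g = np.sqrt(2.0) * sigma_g
```

At its own node the element equals unity, and the phase factor has unit modulus everywhere, so only the envelope controls the overlap between neighboring mesh nodes.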
We determine the single-particle states in the dot by diagonalizing the single-particle Hamiltonian with
the confinement potential given by Eq.(\ref{eq:potdot}). The wave functions $\phi_{m}$ were therefore
determined only in the quantum dot and in its close surroundings, i.e. in the barrier which separates the
dot from the ring. Due to the extremely large span of the external subsystem (leads and ring) we divided
it into three overlapping parts. In each spatial part a separate single-electron basis is introduced, i.e.
$\{\varphi^{(1)} \}$, $\{\varphi^{(2)} \}$ and $\{\varphi^{(3)} \}$, as shown in Fig.\ref{Fig:pot_uw}. We
find the elements of these three bases in a way similar to the one applied for the quantum dot, i.e. by
diagonalizing the Hamiltonian
(\ref{eq:ham1}) for the confinement potential $V_{o}=V_{r}+V_{l}$.
For the preparation of each basis we assumed a different external potential $V_o$: in order to determine
the basis elements of a given region we set $V_o=0$ outside this region, which spatially limits the basis
wave functions to that region.
As the first, second, and third regions (bases $\{\varphi^{(1)} \}$, $\{\varphi^{(2)} \}$,
$\{\varphi^{(3)} \}$) we take $0<x<2865\,\textrm{nm}$,
$2765\,\textrm{nm}<x<3625\,\textrm{nm}$, and $3525\,\textrm{nm}<x<6600\,\textrm{nm}$, respectively (see
Fig.\ref{Fig:pot_uw}).
The wave functions
$\{\varphi^{(1)}\}$ and $\{\varphi^{(3)}\}$ are defined in the left (region 1) and in the right
(region 3) leads,
respectively, at distances larger than $200\,\textrm{nm}$ from the outermost parts of the ring (parameters
$x_{l}$ and $x_{r}$ in Fig.\ref{Fig:pot_uw}). In a similar way, the elements $\{\varphi^{(2)}\}$ are
defined in the
second region, which covers the ring (without the dot) together with parts of both leads up to a distance
of $300\,\textrm{nm}$ from the ring. The ranges of these three regions are schematically marked in
Fig.\ref{Fig:pot_uw}. Notice that the elements of two adjacent bases overlap, e.g. the first with the
second as well as the second with the third, over a length of $100\,\textrm{nm}$. We carefully checked
that these connections do not perturb the motion of the electron in both channels.\cite{length} For the
construction of the two-particle wave function (\ref{eq:baza2}) we use the lowest-energy states obtained
from the single-particle Hamiltonian diagonalization. In the calculations we use $N_{d}=20$ dot states and
$N_{1}=140$, $N_{2}=60$ and
$N_{3}=150$ states for the bases $\{\varphi^{(1)} \}$, $\{\varphi^{(2)} \}$ and $\{\varphi^{(3)} \}$,
respectively.
In the Schr\"odinger equation (\ref{eq:schrodinger}) we substitute for
$|\Psi(\mathbf{r}_{1},\mathbf{r}_{2},t)\rangle$ its expansion (\ref{eq:funkcja2}) and then multiply both
sides of the resulting equation by $\langle \psi_{k}(\mathbf{r}_{1},\mathbf{r}_{2})|$. We obtain the
following matrix equation: \begin{equation} i\hbar \mathbf{S\dot{c}}= \mathbf{Hc} \label{eq:hkm}
\end{equation}
where $\mathbf{S}$ is the overlap matrix of the two-particle basis elements (\ref{eq:baza2}),
defined as $S_{km}=\langle \psi_{k}|\psi_{m} \rangle$, while $\mathbf{H}$ is the matrix of the two-particle
Hamiltonian (\ref{eq:ham12}) with elements $H_{km}=\langle \psi_{k}|\widehat{H}|\psi_{m} \rangle$. Details
of the calculation of the matrix elements of the electrostatic interaction are given in a previous
work.\cite{chwiej1}
The determination of these matrix elements is very time consuming, and therefore we were forced to limit
the range of the Coulomb interaction in the system. We assume that the transferred electron does not
interact with the particle confined in the dot if the distance between its position and the dot center
exceeds
$390\,\textrm{nm}$. In other words, when the electron moves towards the ring it may be partly reflected
from a smooth potential step of height $\Delta V=0.28\,\textrm{meV}$. The presence of this potential step
does not influence the electron transfer probability, since the initial kinetic energy of an electron at
the Fermi level
($E_{F}=1.42\,\textrm{meV}$), considered in this work, is several times larger.\cite{reflection}
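The quoted step height can be cross-checked: cutting off the interaction at $390\,\textrm{nm}$ leaves a residual Coulomb energy of two elementary charges at that distance, screened by $\epsilon=12.9$. A quick numerical check (our own arithmetic, not part of the paper's code):

```python
import math

E_CHARGE = 1.602176634e-19  # C
EPS0 = 8.8541878128e-12     # F/m

def coulomb_energy_meV(r_m, eps=12.9):
    """Coulomb energy e^2/(4*pi*eps*eps0*r) expressed in meV."""
    energy_j = E_CHARGE ** 2 / (4.0 * math.pi * eps * EPS0 * r_m)
    return energy_j / E_CHARGE * 1e3  # J -> eV -> meV

step_meV = coulomb_energy_meV(390e-9)  # close to the quoted Delta V = 0.28 meV
ratio = 1.42 / step_meV                # Fermi energy is about five times larger
```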
Equation (\ref{eq:hkm}) can be solved numerically using an iterative method, as was shown in
Ref. \cite{szafran_1el} for the time evolution of an electron wave packet in a two-terminal quantum ring.
Notice, however, that every iterative method requires a very large number of matrix-vector multiplications
in order to retain stability and to keep the numerical errors as small as possible. Since the sizes of the
matrices
\textbf{H} and \textbf{S} are equal to $7000$, the use of an iterative scheme in our two-particle problem
would be inefficient. Instead, we performed the time evolution of the two-particle wave function in a
different, non-iterative way. For this purpose, we first diagonalized the two-particle Hamiltonian
(\ref{eq:ham12}) in the basis (\ref{eq:baza2}), i.e. solved the generalized eigenproblem
$\mathbf{Hv}=E\mathbf{Sv}$, and put all the obtained
eigenvectors in the columns of a new matrix \textbf{U} (of the same size as \textbf{H}). Next, we use this
matrix \textbf{U} to transform Eq.(\ref{eq:hkm}): substituting $\mathbf{c}=\mathbf{Ub}$ and multiplying
both sides from the left by $\mathbf{U^{+}}$ gives
\begin{equation}
i\hbar \bigg(\mathbf{U^{+} SU}\bigg) \mathbf{\dot{b}}=
\bigg( \mathbf{U^{+}HU}\bigg) \mathbf{b}
\label{eq:hkmu}
\end{equation}
Let us notice that $\mathbf{U^{+}SU}=\mathbf{I}$, where \textbf{I} is the identity matrix, and
$\mathbf{U^{+}HU}=\mathbf{D}$, where \textbf{D} is a diagonal matrix with the eigenvalues of the energy
operator (\ref{eq:ham12}) on the diagonal. Due to the diagonal form of both matrices, the system of
$M=7000$ coupled equations given by Eq.(\ref{eq:hkm}) transforms into a system of $M$ decoupled
differential equations:
\begin{equation}
i\hbar \frac{\partial b_{k}}{\partial t}= D_{kk}b_{k}
\label{eq:bmm}
\end{equation}
where $\mathbf{c}=\mathbf{Ub}$, with solutions:
\begin{equation}
b_{k}(t)=b_{k}(t=0)\exp\bigg(-\frac{iD_{kk}t}{\hbar} \bigg)
\end{equation}
Obviously, in order to obtain the solution of the original problem defined by Eq.(\ref{eq:hkm}), i.e. the
values of the coefficients $c_{k}$, one performs the backward transformation $\mathbf{c}=\mathbf{Ub}$. We
performed the diagonalization of \textbf{H} numerically. Therefore, in order to estimate the numerical
errors which may appear due to the unitary transformation, we always checked the energy and norm
conservation of the two-particle wave function. The relative errors do not exceed $10^{-6}$.
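In practice the procedure above is a generalized symmetric eigenproblem $\mathbf{Hv}=E\mathbf{Sv}$ solved once, after which the time evolution is a pure phase rotation of the mode coefficients. A small-scale sketch (toy $2\times 2$ matrices of our own choosing, in units with $\hbar=1$; not the $M=7000$ problem of the paper):

```python
import numpy as np
from scipy.linalg import eigh

hbar = 1.0

# Toy Hermitian Hamiltonian H and positive-definite overlap matrix S.
H = np.array([[1.0, 0.2], [0.2, 2.0]])
S = np.array([[1.0, 0.1], [0.1, 1.0]])

# Generalized eigenproblem H U = S U D; eigh returns S-orthonormal columns,
# so that U^+ S U = I and U^+ H U = D (diagonal).
D, U = eigh(H, S)

def propagate(c0, t):
    """Evolve c(t) for i*hbar*S*dc/dt = H*c via the decoupled modes b."""
    b0 = U.conj().T @ (S @ c0)            # b = U^+ S c, the inverse of c = U b
    bt = b0 * np.exp(-1j * D * t / hbar)  # each mode only rotates its phase
    return U @ bt                         # back-transform c = U b

c0 = np.array([1.0 + 0j, 0.0 + 0j])
ct = propagate(c0, 3.0)
```

Both the norm $\mathbf{c^{+}Sc}$ and the energy $\mathbf{c^{+}Hc}$ are conserved up to rounding, which is the same consistency check used in the text.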
For $t=0$ we use the following form of the initial two-particle wave function:
\begin{equation}
\Psi_{s}=\Psi(\mathbf{r}_{1},\mathbf{r}_{2},t=0)=\varphi_{0}(\mathbf{r}_{1})
e^{ik_{0}x_{1}}
\phi_{0}(\mathbf{r}_2)
\label{eq:start}
\end{equation}
The wave function $\phi_{0}(\mathbf{r}_{2})$ describes the particle (electron or hole) confined in the
dot in its ground state, while $\varphi_{0}(\mathbf{r}_{1})e^{ik_{0}x_{1}}$ is the wave function of the
electron moving in the left channel towards the ring with an average momentum depending on the value of
$k_{0}$. We determined
$\varphi_{0}(\mathbf{r}_{1})$ by diagonalizing the Hamiltonian (\ref{eq:ham1}) in the centered Gaussian
basis (\ref{eq:bazagauss}) with the confinement potential: \begin{equation}
V_{s}(\mathbf{r})=V_{l}(\mathbf{r})+\frac{m_{e}^{*}\omega^2}{2}(x-x_{s})^2 \end{equation} where $x_{s}$ is
the center position of the harmonic oscillator in the left channel, situated at a distance of
$995\,\textrm{nm}$ from the center of the ring (dot). The strength of the harmonic oscillator depends
on the oscillator length $l_{e}$, i.e. $\hbar\omega=\hbar^{2}/(m_{e}^{*}l_{e}^{2})$. In the calculations
we used
$l_{e}=50\,\textrm{nm}$. This way of determining $\varphi_{0}$ inherently includes the magnetic
translation phase.
For $t=0$ we give the electron in the left channel the momentum $\hbar k_{0}$ with
$k_{0}=0.05\,\textrm{nm}^{-1}$, which
corresponds to the average energy at the Fermi level ($E_{F}=1.42\,\textrm{meV}$) in a
two-dimensional electron gas with
density\cite{szafran_1el} $n=4\times10^{10}/\textrm{cm}^{2}$. The choice of the initial conditions,
i.e. the values of parameters such as $x_{s}$ or $l_{e}$, is quite arbitrary. In Sec.\ref{sec:eh} we
briefly comment on the results obtained for other sets of initial parameters.
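The quoted numbers are mutually consistent: $k_{0}=0.05\,\textrm{nm}^{-1}$ gives $\hbar^{2}k_{0}^{2}/2m_{e}^{*}\approx 1.42\,\textrm{meV}$, and the same $k_{0}$ follows from the 2DEG Fermi wave vector $k_{F}=\sqrt{2\pi n}$ for $n=4\times 10^{10}\,\textrm{cm}^{-2}$. A quick check (our own arithmetic):

```python
import math

HBAR = 1.054571817e-34      # J*s
M0 = 9.1093837015e-31       # kg
E_CHARGE = 1.602176634e-19  # C

def kinetic_energy_meV(k0_per_nm, m_eff=0.067):
    """hbar^2 k0^2 / (2 m*) in meV, with the GaAs effective mass by default."""
    k0 = k0_per_nm * 1e9  # nm^-1 -> m^-1
    return (HBAR * k0) ** 2 / (2.0 * m_eff * M0) / E_CHARGE * 1e3

def fermi_wave_vector_per_nm(n_per_cm2):
    """Fermi wave vector of a spin-degenerate 2DEG, k_F = sqrt(2*pi*n), in nm^-1."""
    n = n_per_cm2 * 1e4  # cm^-2 -> m^-2
    return math.sqrt(2.0 * math.pi * n) * 1e-9

ef = kinetic_energy_meV(0.05)        # ~1.42 meV, as quoted in the text
kf = fermi_wave_vector_per_nm(4e10)  # ~0.05 nm^-1
```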
\section{{\label{sec: res}}Results}
Below we denote by $P_{A}$, $P_{B}$
and $P_{C}$ the probabilities of finding the transferred electron in the left channel, within the ring,
and in the right channel, respectively. For these quantities we define the auxiliary
operators:
\begin{eqnarray}
\widehat{P}_{A}&=&\Theta(x_{l}-x_{1})\\
\widehat{P}_{B}&=&\Theta(x_{1}-x_{l})+\Theta(x_{r}-x_{1})-1\\
\widehat{P}_{C}&=&\Theta(x_{1}-x_{r})
\label{eq:heaviside}
\end{eqnarray}
In the above definitions $\Theta(x)$ is the Heaviside step function, while
$x_{l}=3040\,\textrm{nm}$ and
$x_{r}=3350\,\textrm{nm}$ are the left and the right limits of the ring in the x
direction, respectively,
as shown in Fig.\ref{Fig:pot_uw}. Each $P_{i}$ can simply be computed at any
time as the expectation
value of the corresponding operator $\widehat{P}_{i}$, i.e. $P_{i}(t)=\langle
\Psi(\mathbf{r}_{1},\mathbf{r}_{2},t) |\widehat{P}_{i}|
\Psi(\mathbf{r}_{1},\mathbf{r}_{2},t)\rangle$ (for $i=A,B,C$). We treat $P_{C}$ and $P_{A}$ as
lower bounds for the probabilities of electron transfer and backscattering, respectively, since
the ring is not completely empty at the end of the simulations. A part (less than 5\%) of the packet
always stays inside the ring, since the lengths of the channels are limited.
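On a discretized wave function these projectors reduce to window sums of the probability density over the $x$ coordinate of the transferred electron. A sketch of the bookkeeping (the grid, names, and test density are ours, not the paper's):

```python
import numpy as np

X_L, X_R = 3040.0, 3350.0  # ring limits in nm, as in the text

def region_probabilities(rho, x):
    """Split a normalized density over the electron x coordinate into
    P_A (left lead, x < x_l), P_B (ring), P_C (right lead, x > x_r)."""
    p_a = rho[x < X_L].sum()
    p_b = rho[(x >= X_L) & (x <= X_R)].sum()
    p_c = rho[x > X_R].sum()
    return p_a, p_b, p_c

# Example: a packet localized in the left lead.
x = np.linspace(0.0, 6600.0, 6601)
rho = np.exp(-((x - 1000.0) / 200.0) ** 2)
rho /= rho.sum()
p_a, p_b, p_c = region_probabilities(rho, x)
```

By construction $P_A+P_B+P_C=1$ for a normalized packet; in the two-particle case the same windows would be applied after tracing out the dot coordinate.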
\subsection{{\label{sec:single_el}}Electron transfer without interaction}
We start the presentation
with the case in which the transferred electron does not
interact with the charged dot. These results will serve as a reference point for the main calculations,
in which the interaction is included.
The electrostatic interaction was turned off simply by removing its matrix elements from the
two-particle Hamiltonian in Eq.(\ref{eq:hkm}).
The probabilities $P_{A}$, $P_{B}$ and
$P_{C}$ as functions of time and magnetic field for this case are depicted in
Fig.\ref{Fig:1e_praw}.
\begin{figure*}[htbp!]
\hbox{
\epsfxsize=140mm
\epsfbox {fig2.eps}
\hfill}
\caption{(Color online)
Probabilities $P_{A}$, $P_{B}$ and $P_{C}$ as functions of time and magnetic field.
The Coulomb interaction between the transferred electron and the charged dot is neglected.}
\label{Fig:1e_praw}
\end{figure*}
All probabilities oscillate strongly with the magnetic field, which is a typical manifestation of the
Aharonov-Bohm effect. The period of these oscillations is $\Delta B=78\,\textrm{mT}$. This value is
close to $\Delta B_{T}=77.98\,\textrm{mT}$ obtained for a one-dimensional ring from the formula:
\begin{equation} \Delta B_{T}=\frac{h}{e}\frac{1}{\pi r^{2}} \label{eq:deltab} \end{equation} for
the ring radius $r=130\,\textrm{nm}$. The AB pattern is most pronounced for the
probability of electron transfer [see Fig.\ref{Fig:1e_praw}(c)], i.e. there are distinct maxima at integer
multiples of the magnetic flux quantum ($\phi_{n}=n(h/e)$ with $n=0,1,2,\ldots$) and blockades of
electron transfer halfway between adjacent maxima. The presented time-magnetic field
characteristics of the probabilities clearly show the dynamics of the wave packet motion. During the
first 6~ps the most energetic part of the electron wave packet reaches the left entrance of the ring, and
it then takes about 4~ps to pass through the ring to the second junction. This is visible as a large
growth of the $P_{C}$ value in Fig.\ref{Fig:1e_praw}(c) for $t\approx 10\,\textrm{ps}$. One can also see
that the electron wave packet leaves the ring more quickly when $P_{C}$ is close to its maximum
than when it is close to its minimum. Besides the AB effect, the probabilities of finding the electron in
the left and in the right leads depend on the magnetic field also through the Lorentz
force.\cite{szafran_1el} In order to show the magnetic field effect on electron transport, we made
cross sections of the $P_{A}$, $P_{B}$ and $P_{C}$ distributions shown in Fig.\ref{Fig:1e_praw} at
$t=50\,\textrm{ps}$. These cross sections are shown in Fig.\ref{Fig:1e_widma}(a).
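Equation (\ref{eq:deltab}) can be evaluated directly; for $r=130\,\textrm{nm}$ it reproduces the $\approx 78\,\textrm{mT}$ period read off from Fig.\ref{Fig:1e_praw}. A quick check (our own arithmetic):

```python
import math

H_PLANCK = 6.62607015e-34   # J*s
E_CHARGE = 1.602176634e-19  # C

def ab_period_mT(radius_nm):
    """AB period Delta B_T = (h/e) / (pi r^2) of Eq. (deltab), in mT."""
    r = radius_nm * 1e-9
    return H_PLANCK / E_CHARGE / (math.pi * r * r) * 1e3

db = ab_period_mT(130.0)  # ~78 mT for r = 130 nm
```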
\begin{figure}[htbp!]
\hbox{
\epsfxsize=80mm
\epsfbox[135 131 495 311] {fig3.eps}
\hfill}
\caption{(Color online) a) Probabilities $P_{A}$ (black), $P_{B}$ (red) and $P_{C}$ (blue)
as functions of magnetic field for $t=50\,\textrm{ps}$. Elements of the bases
$\left\{\varphi_{1}\right\}$ and
$\left\{\varphi_{2}\right\}$, as well as elements of $\left\{\varphi_{2}\right\}$ and
$\left\{\varphi_{3}\right\}$, overlap over a length of $100\,\textrm{nm}$ (solid lines) or
$200\,\textrm{nm}$ (dots). b) Probability of electron transfer as a function of the initial wave
vector $k_{0}$ for several combinations of the parameters $l_{e}$ and $x_{s}$, which define the shape of
the single-electron wave packet and its center position at $t=0$. In both cases the electron does not
interact with the charged dot.}
\label{Fig:1e_widma}
\end{figure}
One may notice that electron transfer through the ring is completely blocked by the AB effect
only in low magnetic fields. For example, for $B=39\,\textrm{mT}$ the probability of electron transfer
is of the order of $10^{-4}$. At high magnetic fields, however, the probability of electron transfer
does not drop to zero at all, which means that the AB effect is perturbed by the Lorentz force. Due to
the narrow cross sections of the leads and of the ring arms there are no significant changes in the
maxima of the transmission probability, as theoretically predicted\cite{szafran_1el} and
experimentally observed\cite{3lorentz} for rings with wider arms.
Figure \ref{Fig:1e_widma}(a) also shows a comparison of the results obtained for $100\,\textrm{nm}$ and
for $200\,\textrm{nm}$ wide overlap regions. The probabilities $P_{A}$ and $P_{C}$ are the same, which
proves that the electron moves smoothly between neighboring regions without reflection.
In order to check the influence of the initial conditions on the probability of electron
transfer we performed additional simulations for several different values of the initial parameters,
i.e. $k_{0}$, $l_{e}$ and $x_{s}$. The results are presented in Fig.\ref{Fig:1e_widma}(b). We see
that the probability of electron transfer depends strongly on the spatial spread of the initial wave
packet and on its initial momentum, rather than on its distance from the ring. When the initial wave
packet becomes wider (larger value of $l_{e}$), the probability of electron transfer grows, even
by several percent. On the other hand, the transmission probability is less sensitive to a change in
the distance between the initial position of the wave packet and the center of the ring. The results
obtained for $1.7\,\mu\textrm{m}$ and $1\,\mu\textrm{m}$ are nearly the same, i.e. the difference
is only about $1\%$.
\subsection{{\label{sec:ee}} Effect of repulsive interaction on electron
transport }
\begin{figure*}[ht!]
\hbox{
\epsfxsize=150mm
\epsfbox{fig4.eps}
\hfill}
\caption{(Color online)
Probabilities $P_{A}$, $P_{B}$, $P_{C}$ as functions of time and
magnetic
field for the case when the transferred electron interacts electrostatically with the
charged dot.
Panels (a-c) were obtained for the repulsive interaction (electron confined in the dot), while
panels (d-f) were obtained for the attractive interaction (heavy hole confined in the dot).}
\label{Fig:ee_eh_p}
\end{figure*}
In order to investigate the correlation effects due to the repulsive interaction,
we put a single electron into the dot and turned on the interaction in the system. The transferred
electron feels a growing repulsive electrostatic potential as it approaches the ring.
The probabilities $P_{A}$, $P_{B}$, $P_{C}$ as functions of the evolution time and the magnetic field
obtained for this two-electron system are shown in Fig.\ref{Fig:ee_eh_p}. Comparison of the
probability distributions obtained for an electron subject to the repulsive interaction
[Fig.\ref{Fig:ee_eh_p}(a-c)] with those obtained for noninteracting electrons
[Fig.\ref{Fig:1e_praw}] allows us to distinguish
several differences between these two cases. The maxima of the transmission probability are lowered by
the repulsive interaction relative to the noninteracting case. Consequently, the interaction is also
responsible for the growth of the probability $P_{A}$ of finding the electron in the left lead
[cf. Figs. \ref{Fig:1e_praw}(a) and \ref{Fig:ee_eh_p}(a)] and for a faster electron wave
packet leakage from the ring for $t>40\,\textrm{ps}$. However, the interaction does not change the
period of the AB oscillations: the probabilities $P_{C}$ shown in Fig.\ref{Fig:1e_praw}(c) and in
Fig.\ref{Fig:ee_eh_p}(c) oscillate with the same frequency. For a quantitative analysis of the influence
of the interaction on $P_{A}$, $P_{B}$ and $P_{C}$ we made cross sections of the
probability distributions at $t=50\,\textrm{ps}$. These cross sections are presented in
Fig.\ref{Fig:ee_widma}(a).
\begin{figure}[ht!]
\hbox{
\epsfxsize=80mm
\epsfbox{fig5.eps}
\hfill}
\caption{(Color online)
Probabilities $P_{A}$ (black line), $P_{B}$ (red line), $P_{C}$ (blue line)
as functions of magnetic field for $t=50\,\textrm{ps}$ and for (a) $k_{0}=0.05\,\textrm{nm}^{-1}$ and
(b) $k_{0}=0.063\,\textrm{nm}^{-1}$. The
transferred electron interacts electrostatically with the negatively charged dot.
c) Energy spectrum of a single electron confined in the dot.
d) Interaction energies for the lowest-energy states of two electrons confined in the closed quantum
ring-quantum dot system.}
\label{Fig:ee_widma}
\end{figure}
Comparison of the $P_{C}$ cross sections shown in Figs. \ref{Fig:1e_widma}(a) and \ref{Fig:ee_widma}(a)
reveals that the repulsive interaction is responsible for about a $10\%$ decrease in the
transmission probability. This results from the fact that as the electron approaches the ring, it
simultaneously climbs the growing slope of the Coulomb potential of the second electron
and converts part of its kinetic energy into potential energy. Therefore the electron wave packet
enters the ring with a lower average wave vector $k$ than its initial value $k_{0}$. Since the
probability of electron transfer strongly depends on the $k$ value [see Fig.\ref{Fig:1e_widma}(b)],
the lower average $k$ value brings the transmission probability down. This situation is clearly
visible in Fig.\ref{Fig:1e_widma}(b) for $k_{0}<k_{F}$. In order to check this hypothesis we
performed an additional time evolution of the two-electron wave function, giving the transferred
electron an initial momentum just high enough to overcome the repulsive interaction
[see Fig.\ref{Fig:ee_widma}(b)].
We assumed $0.86\,\textrm{meV}$ as the average value of the interaction energy, which gives the
initial momentum $k_{0}=0.063\,\textrm{nm}^{-1}$ and corresponds to $E_{F}=2.28\,\textrm{meV}$.
The probabilities $P_{A}$, $P_{B}$, $P_{C}$
as functions of magnetic field for this case at $t=50\,\textrm{ps}$ are presented in
Fig.\ref{Fig:ee_widma}(b). We notice that this picture is almost identical to the results obtained
for electron transport without interaction [cf. Figs. \ref{Fig:1e_widma}(a) and
\ref{Fig:ee_widma}(b)].
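The quoted numbers can be reproduced from the free-particle relation
$E=\hbar^{2}k_{0}^{2}/2m^{*}$; the effective mass $m^{*}=0.067\,m_{e}$ used in this sketch is an
assumption (a GaAs-like value consistent with the masses quoted in Sec.\ref{sec:eh}):

```python
# Physical constants (SI)
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
e = 1.602176634e-19      # C

m_eff = 0.067 * m_e      # assumed GaAs-like electron effective mass

def kinetic_meV(k0_per_nm):
    """Kinetic energy E = hbar^2 k^2 / (2 m*) in meV, for k given in nm^-1."""
    k = k0_per_nm * 1e9  # nm^-1 -> m^-1
    return (hbar * k) ** 2 / (2.0 * m_eff) / e * 1e3

print(kinetic_meV(0.05))                       # ~1.42 meV
print(kinetic_meV(0.063))                      # ~2.26 meV, close to the quoted E_F = 2.28 meV
print(kinetic_meV(0.063) - kinetic_meV(0.05))  # ~0.84 meV, near the 0.86 meV interaction energy
```

The small residual discrepancies (a few hundredths of a meV) come from rounding $k_{0}$ to two
significant digits.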
As one may notice, the repulsive interaction does not change the frequency of the AB oscillations
[cf. Fig.\ref{Fig:1e_widma}(a) and Fig.\ref{Fig:ee_widma}(b) with Fig.\ref{Fig:ee_widma}(a)]. This
results from the fact that we did not include the term describing the interaction between the
magnetic dipole moments in the two-particle Hamiltonian (\ref{eq:ham12}). In addition, the
electrostatic potential originating from the charged dot is too weak to induce a redistribution of
the electron density along the ring radius and thus does not change the effective ring radius
[see Eq.\ref{eq:deltab}]. In order to get deeper insight into the dynamics of the two-electron
wave packet, we calculated the total two-electron probability densities and the current densities.
The results obtained for $t=8,10,14,20\,\textrm{ps}$ are shown in Fig.\ref{Fig:eegj}.
\begin{figure*}[ht!]
\hbox{
\epsfxsize=150mm
\epsfbox{fig6.eps}
\hfill}
\caption{(Color online)
Two-electron probability density (odd columns) and current density (even columns) calculated for
$t=8,10,14,20\,\textrm{ps}$. The red color indicates the current flowing to the right lead while the blue color
marks the current flowing to the left. Intensity of the colors is proportional to the amplitude of
current. The color scales for two lowest rows are enhanced by the multipliers which are shown in
top right corners.}
\label{Fig:eegj}
\end{figure*}
When the magnetic field is absent from the system, the total electronic density as well as the
current remain symmetric under $y\to -y$ reflection during the whole time evolution.
This results from the fact that the electrostatic interaction term in the two-particle
Hamiltonian (\ref{eq:ham12}) preserves this symmetry and thus does not change the symmetry of
the two-particle wave function. A detailed analysis of the currents for $B=0$ reveals that the
electrostatic interaction does not induce a current inside the dot, as one might at first expect.
When the transferred electron approaches the ring, the two electrons repel each other in opposite
directions. Since we do not see any current induced in the dot for $B=0$, we can state that the
electron confined in the dot does not react to the presence of the first electron. This lack of
reaction of the second electron stems from the fact
that the interaction is small in comparison with the lowest single-electron excitation energies in
the dot.
We see in Fig.\ref{Fig:ee_widma}(d) that the interaction energy between two electrons
confined in the closed quantum ring-quantum dot structure is about $0.86\,\textrm{meV}$. On the
other hand, the energy spacings between the first two excited states and the ground state of a
single electron confined in the quantum dot [see Fig.\ref{Fig:ee_widma}(c)] are equal to
$3.1\,\textrm{meV}$ for $B=0$. The interaction energy is more than three times smaller than even
the lowest single-electron excitation energies and therefore can hardly mix the quantum dot states.
Since the transferred electron cannot excite the second electron, there is no
energy transfer to the dot. The transferred electron scatters only elastically on the static
repulsive potential created in the leads and in the ring by the second electron confined in the
inner dot. The magnetic field breaks the symmetry of the confinement potential and favors the upper
arm: the larger part of the electron wave packet is directed into this arm [see
Fig.\ref{Fig:eegj} for $B=0.039\,\textrm{T}$ and $B=0.429\,\textrm{T}$]. Let us notice that the
current in the dot is more intense for stronger magnetic field. Since the electrostatic interaction
is too weak to induce it, only the external magnetic field can be responsible for its existence. We
explain this by analyzing the matrix elements of the probability current:
\begin{equation}
\mathbf{j}_{km}=
\frac{i\hbar}{2m^{*}}
\bigg( \phi_{m}\nabla \phi_{k}^{*}- \phi_{k}^{*}\nabla \phi_{m} \bigg)
-\frac{q}{m^{*}}\mathbf{A}\phi^{*}_{k}\phi_{m}
\label{eq:prad}
\end{equation}
The first term on the right-hand side of Eq.\ref{eq:prad} is the paramagnetic part
of the current, while the second term is diamagnetic. Now, if we notice that the electron
occupies exclusively the ground state of s-symmetry for $B=0$, we see that the paramagnetic
current completely disappears.\cite{chwiej2} One may notice in
Fig.\ref{Fig:ee_widma}(c) that even for $B=0.5\,\textrm{T}$ the energy spacing between the first
excited state and the ground state ($E_{1}-E_{0}=2.7\,\textrm{meV}$) is still much larger than the
interaction energy.
Since the interaction is not able to mix the dot states, the electron confined in the dot still
occupies the ground state and there is no paramagnetic contribution to the current even in high
magnetic field.
Since the diamagnetic current depends on the product of the probability density and the magnetic
field, its contribution increases for stronger magnetic field, which is clearly visible when
comparing the dot currents depicted in the fourth and sixth columns of Fig.\ref{Fig:eegj}.
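The vanishing of the paramagnetic term for a real s-symmetric state, and the survival of the
diamagnetic one, can be checked directly on a grid. The Gaussian orbital and the symmetric gauge
$\mathbf{A}=(-By/2,\,Bx/2,\,0)$ below are illustrative choices of this sketch, not the confinement
model of the paper:

```python
import numpy as np

hbar = 1.054571817e-34
q = -1.602176634e-19               # electron charge (C)
m_eff = 0.067 * 9.1093837015e-31   # assumed GaAs-like effective mass (kg)

# 2D grid, 120 nm x 120 nm
n = 128
x = np.linspace(-60e-9, 60e-9, n)
X, Y = np.meshgrid(x, x, indexing='ij')
dx = x[1] - x[0]

# real s-symmetric "ground state" (illustrative Gaussian), normalized on the grid
l0 = 15e-9
phi = np.exp(-(X**2 + Y**2) / (2.0 * l0**2))
phi = phi / np.sqrt((np.abs(phi)**2).sum() * dx * dx)

B = 0.5                            # Tesla
Ax = -0.5 * B * Y                  # symmetric gauge, x component

gx, gy = np.gradient(phi, dx)
# paramagnetic term of Eq. (prad) for k = m: (i hbar / 2 m*)(phi grad phi* - phi* grad phi)
jp_x = (1j * hbar / (2.0 * m_eff)) * (phi * np.conj(gx) - np.conj(phi) * gx)
# diamagnetic term: -(q / m*) A_x |phi|^2
jd_x = -(q / m_eff) * Ax * np.abs(phi)**2

print(np.abs(jp_x).max())  # exactly 0 for a real wave function
print(np.abs(jd_x).max())  # nonzero; scales linearly with B and with 1/m*
```

The explicit $1/m^{*}$ factor in the diamagnetic term also encodes the
$m_{h}/m_{e}\approx 0.5/0.067\approx 7.5$ ratio invoked later when the dot currents for the
electron and the hole are compared.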
\subsection{{\label{sec:eh}}Effect of attractive interaction on electron
transport }
\begin{figure*}[htb!]
\hbox{
\epsfxsize=140mm
\epsfbox[125 121 590 268] {fig7.eps}
\hfill}
\caption{(Color online)
Probabilities $P_{A}$ (black), $P_{B}$ (red) and $P_{C}$ (blue) as functions of magnetic field
calculated for a system: (a) with positively charged dot and (b) with electron or hole frozen in the
dot ground state (solid line for electron and dotted one for hole). c) Effect of the Coulomb
correlation in the dot on the probability of electron transfer for $B=(n+\frac{1}{2})\Delta B$ with
integer $n$. In (c) the dot is occupied by an electron (black color) or by a hole (red
color). Lines are guides to the eye.}
\label{Fig:eh_widma}
\end{figure*}
In the preceding section we showed that the repulsive interaction is responsible for a
decrease in the probability of electron transfer through the ring. When the electron approaches the
ring and the negatively charged dot, it is scattered elastically on a static potential. A part of
the electron kinetic energy is converted into potential energy. The average momentum of the packet
is decreased, which leads [Fig.\ref{Fig:1e_widma}(b)] to a decrease in the probability of electron
transfer. Let us notice that this mechanism may presumably lead to an increase in the transmission
probability provided that the electron is attracted by a positively charged dot. In order to check
this conjecture we put a heavy hole in the dot and performed the time evolution of the wave
function for this electron-hole system.
The distributions of the probabilities $P_{A}$, $P_{B}$, $P_{C}$ as functions of evolution time and
magnetic field obtained for this case
are shown in Fig.\ref{Fig:ee_eh_p}(d-f). Comparison of the results obtained for the repulsive
interaction [Fig.\ref{Fig:ee_eh_p}(a-c)] with those obtained for the attractive one
[Fig.\ref{Fig:ee_eh_p}(d-f)] shows that the probability of electron transfer through the ring is
indeed larger in the latter case. Moreover, when the transferred electron feels the presence
of the positively charged dot, it spends less time in the ring. The attractive interaction increases
the average momentum and velocity of the packet; therefore, the electron traverses the ring in a
shorter time than when it is repelled by the negatively charged dot. Cross sections of these
probability distributions, depicted in Fig.\ref{Fig:eh_widma}(a), indicate that the AB oscillations
are independent of the electrostatic interaction. The probabilities $P_{A}$ and $P_{C}$ oscillate
with the same frequency as those shown in Fig.\ref{Fig:ee_widma}(a) for the repulsive
interaction; the period of the AB oscillation is still equal to $\Delta B=78\,\textrm{mT}$.
The electrostatic interaction does not change the frequency of the AB oscillations,
but it may significantly influence the electron transfer probability provided that the confinement
along the ring radius is strong. Changing the character of the electrostatic interaction from
repulsive to attractive makes the maxima of the probability of electron transfer grow by more than
$20\%$.
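The measured period fixes the effective ring radius, if one assumes that one AB period corresponds
to one flux quantum $h/e$ threading the effective ring area, $\Delta B\,\pi r^{2}=h/e$ (a standard
relation, used here only as a consistency check, since the paper does not state the radius in this
section):

```python
import math

h = 6.62607015e-34    # Planck constant, J s
e = 1.602176634e-19   # elementary charge, C

def effective_radius_nm(delta_B_T):
    """Radius r (nm) such that Delta_B * pi * r^2 equals one flux quantum h/e."""
    return math.sqrt(h / (e * math.pi * delta_B_T)) * 1e9

print(effective_radius_nm(0.078))  # ~130 nm for the quoted 78 mT period
```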
A repulsive or attractive potential changes the wave vector
distribution in the electron wave packet through deceleration or acceleration by
the electrostatic potential. As is clearly visible in Fig.\ref{Fig:1e_widma}(b), such a change in
the average value of the electron wave vector influences, to a large extent, the probability of
electron transfer. However, when the electron interacts with the positively charged dot, the minima
of the transfer probability at half flux quanta become shallower.
For example, for $B=39\,\textrm{mT}$ the transmission probability
falls only to $8.4\%$, while the electron transfer is completely blocked when the
electron does not interact with a particle confined in the dot [Fig.\ref{Fig:1e_widma}(a)] or
is repelled
by a negatively charged dot [Fig.\ref{Fig:ee_widma}(a)]. Since the Lorentz force is negligible at
low magnetic field, this weakening of the AB blockade stems only from the interaction of the
electron wave packet with the positively charged dot. Figure \ref{Fig:eh_widma}(b) shows the
probabilities obtained for attractive and repulsive interaction between the transferred electron
and a second particle frozen in the ground state of the dot. A frozen electron or hole confined in
the dot cannot move, and thus we may neglect the correlation effects in the
dot. Despite this fact, the two-particle wave function is still partly correlated, since the
transferred electron interacts with the charged dot and its behavior depends on the distance from
the dot due to the Coulomb interaction. We see in Fig.\ref{Fig:eh_widma}(b) that the
electron cannot be transferred through the ring for $\Delta B/2$ independently of
the character of the electrostatic interaction. This means that the Coulomb
correlation in the dot is entirely responsible for the weakening of the AB blockade at low
magnetic field.
Comparison of the results shown in Figs. \ref{Fig:ee_widma}(a) and \ref{Fig:eh_widma}(a)
suggests that the effect of the Coulomb correlation on the transmission probability also
depends on the effective mass of the particle confined in the dot. We demonstrate this dependence in
Fig.\ref{Fig:eh_widma}(c) for an electron (black crosses), a frozen electron (black empty circles),
and an electron with large effective mass ($m_{e}^{*}=0.5$, black dots), as well as for a hole (red
crosses),
a frozen hole (red empty circles), and a hole with small effective mass ($m_{h}^{*}=0.067$, red
dots) confined in the dot. The results for the frozen hole and the hole with small mass are
identical, which shows that the small effective
mass prevents the particle from moving inside the dot regardless of the character of the
electrostatic interaction. For a heavier particle, i.e. an electron or a hole confined in the dot
with an effective mass of about $0.5$, the probability of electron transfer for
$B=39\,\textrm{mT}$ is increased.
However, this growth is larger for the attractive ($8\%$) than for the repulsive ($2\%$)
interaction.
Notice also that the transmission probability
grows faster for the attractive interaction than for the repulsive one.
\begin{figure}[htb!]
\hbox{
\epsfxsize=80mm
\epsfbox{fig8.eps}
\hfill}
\caption{a) Energy spectrum of the heavy hole confined in the dot and b)
the electron-hole interaction energy
in the closed quantum ring-quantum dot system.}
\label{Fig:veh}
\end{figure}
The relatively large effective mass of the heavy hole leads to its stronger localization in the dot.
This results in smaller spacings between the lowest energy levels than those for the electron [cf.
Fig.\ref{Fig:veh}(a) for the hole and Fig.\ref{Fig:ee_widma}(c) for the electron]. For the hole
confined in
the dot, these spacings are comparable with the average absolute value of the attractive
interaction energy.
For example, in the absence of magnetic field, the two lowest excited states shown in
Fig.\ref{Fig:veh}(a) lie only $0.59\,\textrm{meV}$ above the ground state, while the average
absolute value of the interaction energy between the electron and the hole, shown in
Fig.\ref{Fig:veh}(b) for the closed
quantum dot-quantum ring system, is about $0.885\,\textrm{meV}$. Therefore, when the transferred
electron approaches the positively charged dot placed in the center of the ring, it may quite
easily excite the hole in the dot.
\begin{figure*}[ht!]
\hbox{
\epsfxsize=150mm
\epsfbox{fig9.eps}
\hfill}
\caption{(Color online) The electron-hole probability density (odd columns) and current density
(even columns) for $t=8,10,14,20\,\textrm{ps}$. Red and blue colors indicate the directions of
current flow, to the right and to the left respectively. The intensities of the colors are
proportional to the amplitudes of the currents. The scale for the currents in the dot was enhanced
four times (square region) for $B=0$ and $B=39\,\textrm{mT}$.}
\label{Fig:ehgj}
\end{figure*}
Figure \ref{Fig:ehgj} shows the total probability density and current density distributions
obtained for the fully correlated electron-heavy-hole system. For $B=0$ and $t=8\,\textrm{ps}$,
when the electron
enters the ring through the left junction, the hole is attracted by the electron and starts to
move, which induces a current in the dot. At first this current flows to the left (blue color in
Fig.\ref{Fig:ehgj}), but when the electron fills the ring more or less uniformly
($t=10\,\textrm{ps}$),
so that the potential in the dot is less perturbed, the hole reflects from the wall. Then the
current within the dot flows to the right (red color). The hole is excited in the dot and starts to
oscillate. Its spatial oscillations do not fade out even for long times, e.g.
$t=20\,\textrm{ps}$. This indicates that when the electron passes through the ring, it transfers
part of its energy to the dot. As the energy of the electron is changed permanently, we may state
that it scatters inelastically on the Coulomb potential generated by the oscillating
hole. Similar horizontal oscillations of the current in the dot are also visible for
$B=39\,\textrm{mT}$ (fourth column of Fig.\ref{Fig:ehgj}).
We will analyze this process of energy transfer between
the electron and the dot in detail in the next section.
Horizontal oscillations of the hole in the dot perturb the potential in both arms of the ring.
Although the confinement potential of the ring is perturbed, it remains symmetric under
$y\to -y$ reflection. This produces identical phase shifts in both parts of the electron wave
packet, i.e. in the upper and in the lower ring
arms. In other words, the weakening of the AB blockade observed in Fig.\ref{Fig:eh_widma}(a) is not
a result of dephasing,\cite{mohanty} because the phases in the upper and in the lower parts of the
electron wave packet still change coherently. In consequence, when they meet at the second junction
for $B=(n+1/2)\Delta B$, their phase difference is no longer equal to $\pi$ due to the
potential perturbation. This effect was recently
predicted by Chaves et al.,\cite{chaves} who obtained a very similar dependence of the transmission
probability on magnetic field to that shown in Fig.\ref{Fig:eh_widma}(a) for an open
two-dimensional ring with two static impurities put near both arms of the ring and placed
symmetrically with respect to its center.
For high magnetic field, e.g. $B=0.429\,\textrm{T}$ [the last column of Fig.\ref{Fig:ehgj}], these
current oscillations become invisible, and the current encircles the dot in the clockwise
direction, i.e. opposite to that in the last column of Fig.\ref{Fig:eegj}, where the electron
occupies the dot. This does not mean
that the oscillations disappear entirely, but only that the diamagnetic contribution to the current
in the dot is much larger than the paramagnetic one. Such a large diamagnetic current was also
induced by the magnetic field when the electron was confined in the dot. However, if we compare the
dot
currents in the last columns of Fig.\ref{Fig:eegj} and of Fig.\ref{Fig:ehgj}, we see that the
current is less intense for the hole (the color scales in both figures are the same). To explain
this fact we assume that the densities of the electron and the hole in the dot do not differ much
for the same magnetic field, which seems reasonable in our case, since the confinement potential of
the dot is quite strong. With this assumption, and for a fixed value of the magnetic field, the
absolute value of the diamagnetic term in Eq.\ref{eq:prad} depends only on the effective mass of
the particle. Since
the diamagnetic current is inversely proportional to the effective mass, and the effective mass of
the electron
used in the calculation was about $m_{h}/m_{e}=7.5$ times smaller than the effective mass of the
heavy hole,
the diamagnetic contribution to the current is about $m_{h}/m_{e}$ times larger
for the electron than for the hole.
\begin{figure}[ht!]
\hbox{
\epsfxsize=80mm
\epsfbox[150 398 380 620]{fig10.eps}
\hfill}
\caption{(Color online) The $P_{A}$ (black), $P_{B}$ (red) and $P_{C}$ (blue) probabilities as
functions of magnetic field for four combinations of the initial parameters $|x_{s}-x_{0}|$ and
$l_{e}$.
The results were obtained for $t=50\,\textrm{ps}$. Panels (a) and (b) are for the repulsive
interaction
while (c) and (d) are for the attractive interaction. Solid lines are for $l_{e}=30\,\textrm{nm}$
and dotted ones for $l_{e}=50\,\textrm{nm}$.}
\label{Fig:wp_abcd}
\end{figure}
The probability of electron transfer depends also on the initial conditions, which we have chosen
quite arbitrarily.
In order to check the sensitivity of the transmission probability to the initial conditions, we
studied
the time evolution of the two-particle wave function (\ref{eq:funkcja2}) for four combinations of
the distance between the initial position of the electron wave packet and the center of the ring
($|x_{s}-x_{0}|$) and of its spatial spread $l_{e}$.
The probabilities $P_{A}$, $P_{B}$ and $P_{C}$ calculated for these new initial parameters
and for $t=50\,\textrm{ps}$ are shown in Fig.\ref{Fig:wp_abcd}(a-d). If the transferred electron
interacts with the negatively charged dot, these probabilities are only slightly
sensitive to the change of the initial conditions [see Figs.\ref{Fig:wp_abcd}(a) and
\ref{Fig:wp_abcd}(b)]. For example, the transmission probability increases by only about $1\%$
when the
parameter $l_{e}$ changes from $30\,\textrm{nm}$ to $50\,\textrm{nm}$ for
$|x_{s}-x_{0}|=0.995\,\mu\textrm{m}$. Much larger differences in the transmission probability
were found for the system with the positively charged dot.
Generally, the amplitude of the AB oscillations is larger for larger $l_{e}$ and when the
electron wave packet starts closer to the ring at $t=0$. For example, for $B=0$ and
$|x_{s}-x_{0}|=1.7\,\mu\textrm{m}$, the transmission probability grows from about $0.46$ for
$l_{e}=30\,\textrm{nm}$ to about $0.52$ for $l_{e}=50\,\textrm{nm}$, which gives a growth
of about $6\%$, while it is equal to about $4.7\%$ for $|x_{s}-x_{0}|=0.995\,\mu\textrm{m}$.
When the parameter $l_{e}$ is fixed, a change in the $|x_{s}-x_{0}|$ value has less impact on
the transmission probability. For example, for $l_{e}=30\,\textrm{nm}$, we get an increase
in the transmission probability of about $3\%$ when the initial position
of the electron wave packet is shifted by about $0.7\,\mu\textrm{m}$ closer to the ring,
whereas for
$l_{e}=50\,\textrm{nm}$ the increase in the $P_{C}$ value is less distinct and is equal to about
$1.1\%$. On the other hand, the weakening of the AB blockade for the electron-hole system is
independent of the
initial position of the transferred electron wave packet but grows by $2\%$ when the value of the
parameter $l_{e}$ changes from $30\,\textrm{nm}$ to $50\,\textrm{nm}$ for $B=39\,\textrm{mT}$.
\section{\label{sec:inelastic} Elastic and inelastic scattering}
In the previous section we showed that during the electron transition through the ring, the
particle confined in the dot may start to move. Its spatial oscillations within the dot are induced
by the
electrostatic interaction between the charged particles and are due to excitation to higher
energy states in the dot. During the excitation process, the transferred electron loses a part of
its kinetic energy, which is gained by the second particle. If this energy loss is permanent, i.e.
the electron does not recover it after it leaves the ring, then the scattering of the electron on
the Coulomb
potential is inelastic. Figure \ref{Fig:transfer}(a) shows the probabilities of
occupation of the low-energy quantum dot states
as functions of evolution time, obtained for $B=39\,\textrm{mT}$. In order to find
the probability of
occupation of a particular dot state, we projected the two-particle wave function
(\ref{eq:funkcja2}) onto that state:
\begin{equation}
{p}_{i}(t)=
\langle \Psi(\mathbf{r}_{1},\mathbf{r}_{2},t)|\widehat{p}_{i}|
\Psi(\mathbf{r}_{1},\mathbf{r}_{2},t) \rangle
\end{equation}
where $\widehat{p}_{i}=|\phi_{i}\rangle \langle\phi_{i}|$ is the projection
operator.
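On a discretized wave function this projection reduces to an overlap integral over the dot
coordinate followed by a norm over the remaining coordinate. A 1D toy sketch (the two basis states
and the correlated trial wave function below are illustrative, not the actual eigenstates used in
the paper):

```python
import numpy as np

# 1D grids for the transferred particle (x1) and the dot particle (x2)
n = 400
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]

# two orthonormal "dot states" (even/odd, harmonic-oscillator-like)
phi0 = np.exp(-x**2 / 0.02)
phi0 = phi0 / np.sqrt((phi0**2).sum() * dx)
phi1 = x * np.exp(-x**2 / 0.02)
phi1 = phi1 / np.sqrt((phi1**2).sum() * dx)

# toy correlated two-particle wave function Psi(x1, x2): the dot particle is
# mostly in phi0, with a small phi1 admixture tied to the position of particle 1
env = np.exp(-x**2 / 0.1)
Psi = np.outer(env, phi0) + 0.3 * np.outer(x * env, phi1)
Psi = Psi / np.sqrt((np.abs(Psi)**2).sum() * dx * dx)

def occupation(Psi, phi):
    """p_i = <Psi| (1 (x) |phi_i><phi_i|) |Psi>: overlap over x2, norm over x1."""
    amp = Psi @ phi * dx                   # <phi_i|Psi> at each x1
    return float((np.abs(amp)**2).sum() * dx)

p0, p1 = occupation(Psi, phi0), occupation(Psi, phi1)
print(p0, p1, p0 + p1)  # sums to 1 here: Psi lives entirely in span{phi0, phi1}
```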
\begin{figure*}[htb!]
\hbox{
\epsfxsize=150mm
\epsfbox[15 416 829 680]{fig11.eps}
\hfill}
\caption{(Color online) a) Probabilities of occupation of the low-energy quantum dot states by
the electron (red) and by the hole (black) as functions of the evolution time. In (a) the results
were obtained for
$B=39\,\textrm{mT}$. b) Energy gained by the hole confined in the dot as a function of time.
c) Permanent energy transfer to the positively charged dot as a function of magnetic field.}
\label{Fig:transfer}
\end{figure*}
When the electron is confined in the dot, the probabilities of occupation of the dot states do not
change in time.
For the confinement potential model considered here, the electron always stays in the ground
state. As mentioned in Sec.\ref{sec:ee}, the electrostatic interaction is too weak to excite
the electron within the dot, which is the reason we do not see any current in the dot in
Fig.\ref{Fig:eegj} for $B=0$. The situation changes dramatically if we consider the hole confined
in the dot. We see in Fig.\ref{Fig:transfer}(a) that after a few ps the hole starts to be excited,
since the probabilities of the two lowest excited states with angular momentum $L=1$ grow with time.
The contributions of these states are identical, since their linear combination gives the hole
oscillation in the horizontal direction. Obviously, this results from the symmetry of the
confinement potential model under $y \to -y$ reflection and from the absence of the Lorentz force
in the system for such a
small magnetic field. The other hole states in the dot remain unoccupied. The process of hole
excitation
ends at $t=25\,\textrm{ps}$. During the next $15\,\textrm{ps}$, the hole partly de-excites and
the probability of finding it in the ground state increases. For $t>40\,\textrm{ps}$, the
contributions of the low-energy dot states stabilize. These changes of the probabilities of
occupation of the dot states influence the energy of the hole. We calculated the energy
gained by the hole, i.e. the energy transferred to the dot, from the following formula:
\begin{equation}
E_{t}(t)=\sum_{i=0}^{19} p_{i}(t) E_{i}^{dot}-E_{0}^{dot}
\end{equation}
where $E_{i}^{dot}$ are the eigenenergies of the particle confined in the dot. Figure
\ref{Fig:transfer}(b) shows the time dependence of the energy transferred to the dot occupied
by the hole for $B=0$ and $B=39\,\textrm{mT}$. We see that the two cases differ qualitatively
as well as quantitatively. In the absence of magnetic field, the energy is transferred to the dot
for $t<20\,\textrm{ps}$. Then the hole energy changes only slightly, and for $t=50\,\textrm{ps}$
it stabilizes at about $0.08\,\textrm{meV}$. Thus the transferred electron loses $5.6\%$ of its
original kinetic energy. For $B=39\,\textrm{mT}$ the energy transfer in the first $25\,\textrm{ps}$
is twice that observed for $B=0$. Next, the hole gives back a part of the gained energy to the
electron, but for $t=50\,\textrm{ps}$ it is still much larger than in the case of $B=0$.
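The quoted percentages follow from the energy-transfer formula above combined with the initial
kinetic energy $E_{0}=\hbar^{2}k_{0}^{2}/2m^{*}\approx 1.42\,\textrm{meV}$ for
$k_{0}=0.05\,\textrm{nm}^{-1}$; the GaAs-like mass $m^{*}=0.067\,m_{e}$ is an assumption of this
sketch:

```python
hbar = 1.054571817e-34   # J s
m_e = 9.1093837015e-31   # kg
e = 1.602176634e-19      # C

def kinetic_meV(k0_per_nm, m_rel=0.067):
    """Initial kinetic energy hbar^2 k^2 / (2 m*) in meV, k in nm^-1."""
    return (hbar * k0_per_nm * 1e9) ** 2 / (2.0 * m_rel * m_e) / e * 1e3

def energy_transfer_meV(p, E_dot_meV):
    """E_t = sum_i p_i E_i^dot - E_0^dot, as in the formula above (energies in meV)."""
    return sum(pi * Ei for pi, Ei in zip(p, E_dot_meV)) - E_dot_meV[0]

E0 = kinetic_meV(0.05)
print(0.08 / E0 * 100)    # ~5.6 %  : plateau for B = 0
print(0.278 / E0 * 100)   # ~19.5 % : value reached at B = 0.453 T (quoted below)
# sanity check: with the full population in the ground state no energy is transferred
print(energy_transfer_meV([1.0, 0.0, 0.0], [0.0, 0.59, 0.59]))  # 0.0
```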
The occurrence of such a distinct difference in the energy transfer is not incidental.
The magnetic field dependence of the energy
transferred to the dot occupied by the hole, depicted in Fig.\ref{Fig:transfer}(c), indeed has
minima for $B=n\Delta B$, i.e. at the maxima of the transmission probability. On the other hand,
the maxima of the energy transfer do not appear exactly at $B=(n+1/2)\Delta B$, as one might
expect, but they
are shifted towards higher magnetic fields. In Fig.\ref{Fig:transfer}(c) we see that, besides
the oscillatory character of the magnetic field dependence of the energy transfer, which is a
signature of the
AB effect, the minima and maxima of the energy gained by the hole lie higher in energy as the
magnetic field becomes stronger. This nonlinear effect is a signature of the presence of the
magnetic force in the system. The magnetic field breaks the symmetry of the confinement potential
of the ring, and
consequently the Lorentz force injects a larger part of the electron wave packet into the upper arm
of the ring [see the density distributions in Fig.\ref{Fig:eegj} and Fig.\ref{Fig:ehgj} for
$B=0.429\,\textrm{T}$ and $t=8\,\textrm{ps}$]. In this case, the Coulomb interaction
between the two particles becomes stronger, which results in a larger amount of energy transferred
to the dot. For example, for $B=0.453\,\textrm{T}$ the energy gained by the hole reaches
$0.278\,\textrm{meV}$, which is $19.5\%$ of the original kinetic energy of the transferred electron.
\section{\label{sec:con} Discussion and conclusions}
The presence of a charged dot in the center of the ring significantly influences the probability of
electron transmission. The maxima of transmission probability observed in the Aharonov-Bohm effect are
shifted down (up) for repulsive (attractive) interaction between transferred electron and the
charged dot [cf. Fig.\ref{Fig:1e_widma}(a) for empty dot with Figs. \ref{Fig:ee_widma}(a) and
\ref{Fig:eh_widma}(a)]. The reduction of transmission probability stems from
lowering the average value of
wave vector in the electron wave packet [see Fig.\ref{Fig:1e_widma}(b)] due to
deceleration of its
motion when it moves towards the ring. The magnitude of this probability reduction depends in
particular on the radius of the ring and on the number of particles confined in the dot.
Interaction should be stronger for smaller rings due to stronger Coulomb coupling of the ring and
the dot, and for multiple charged dot. Moreover, the probability of electron transmission may also
be decreased when the kinetic energy of electron is of the same order as the interaction energy.
Then, the low-energy part of the electron wave packet should be reflected back from repulsive
Coulomb potential before it get closer to the ring.
Single-electron transport in the quantum ring depends strongly on the relation between the
magnitude of the interaction energy and the lowest excitation energies of the particle confined in
the quantum
dot. When the spacings between the two lowest excited states and the ground state in the dot are
several times larger than the interaction energy [cf. Figs. \ref{Fig:ee_widma}(c) and
\ref{Fig:ee_widma}(d)], the electron transport through the ring is blocked for $(n+1/2)\phi_{0}$
flux quanta at
low magnetic field [see the magnetic field dependence of $P_{C}$ (blue color) in Figs.
\ref{Fig:ee_widma}(a) and \ref{Fig:ee_widma}(b)]. In this case, the transferred electron is not
able to excite the second particle, which stays in the ground state [the $p_{i}$ do not change for
the repulsive interaction (red color) in Fig.\ref{Fig:transfer}(a)], and the Coulomb potential
originating
from the charged dot keeps its azimuthal symmetry. In consequence, the quantum interference is not
perturbed by the interaction, because the transmitted electron scatters elastically in the quantum
ring, i.e. there is no permanent energy transfer between the electron and the charged dot.
The situation changes significantly when the interaction energy becomes comparable to the
excitation energies.
For example, the magnetic field dependence of the transmission probability obtained for the
attractive interaction and presented in Fig.\ref{Fig:eh_widma}(a) reveals a weakening of the AB
blockade for
$(n+1/2)\phi_{0}$ even at low magnetic field. The heavy hole is excited by the transferred electron
due to their Coulomb interaction [see $p_{i}$ for the attractive interaction (black color) in
Fig.\ref{Fig:transfer}(a)] and starts to oscillate horizontally within the dot [see the currents in
Fig.\ref{Fig:ehgj} for $B=0$ and $B=39\,\textrm{mT}$]. The Coulomb potential originating from the
oscillating
hole charge breaks the azimuthal symmetry of the confinement potential in the ring. This dynamical
charge redistribution inside the dot perturbs the quantum interference in the ring. The electron
scatters
inelastically on the oscillating Coulomb potential, which coherently changes
the phase of the electron wave packet in both arms of the ring. Finally, this leads to a
suppression of the AB effect, i.e. the maximum-to-minimum ratio is decreased although the amplitude
of the transmission
probability does not change much. Interestingly, the energy gained by the charge confined in the
dot shows strong oscillations in the magnetic field [see Fig.\ref{Fig:transfer}(c)]. The maxima are
localized in
the proximity
of $(n+1/2)\phi_{0}$ and are slightly shifted towards higher magnetic fields.
Generally, the AB oscillation period depends on (i) the effective radius of the ring and (ii) the vector
potential. Oscillation of the charged particle within the dot creates an additional magnetic field and vector
potential. However, this induced magnetic field is very small\cite{chwiej2}, i.e. of the order of a few hundred
nT, and therefore cannot perturb the AB period significantly. The Coulomb interaction may potentially
influence the effective ring radius, since the electron traversing the ring tends to move closer to the positively charged dot
due to the attractive interaction, whereas it keeps away from the negatively charged dot due to the repulsive
interaction. However, the effective ring radius can change only if the
interaction is strong enough to modify the electron density along the ring radius, which is possible for very
wide ring arms. For the confinement potential model considered here, the interaction is too weak to produce a
noticeable redistribution of the electron density in the ring, and we did not observe any change in the AB
period due to the Coulomb interaction.
A similar effect, i.e. suppression of the AB oscillation in conductance, was observed in the experiment of
M\"uhle\cite{ex_ring_ring_1} on two capacitively coupled quantum rings. They obtained much less distinct AB
oscillations for the outer ring than the conductance oscillations arising from the AB effect for the inner ring. The
authors ascribe this effect to imperfections of the confinement potential of the outer ring. However, this
does not explain the large amplitude of the oscillation induced by the inner ring, since the charge redistribution
in the inner ring perturbs the Coulomb potential felt by the electrons in the outer ring.
In our opinion, besides the imperfections of the outer ring, the difference in the amplitudes of the AB oscillations
observed in the experiment also results from inelastic scattering of the transferred electrons on
the Coulomb potential.
Since the energy gaps between the ground state and the first excited state are much smaller in the ring than in
the dot, a particle confined in the inner ring should be much more easily excited, i.e. much more energy
may be transferred to the inner ring than to the dot. In such a case, strong spatial oscillations of the
particle within the inner ring may govern the motion of the electron injected into the outer
ring. Finally, this strongly inelastic scattering process can suppress the AB
oscillation of the outer ring rather than the amplitude of the electron transmission probability.
In conclusion, the effect of Coulomb correlation on single-electron transport in a
two-terminal quantum ring capacitively coupled to a charged dot was
theoretically investigated. The Coulomb interaction between the transferred electron
and the charged particle confined in the dot significantly influences the
maxima of the transmission probability in the Aharonov-Bohm effect. When the
interaction energy is comparable to the lowest excitation energies in
the dot, the electron transfers part of its energy to the dot. The
electron then scatters inelastically in the ring, which finally leads to a reduction of the
Aharonov-Bohm blockade and suppression of the Aharonov-Bohm oscillation.
\begin{acknowledgements}
We are grateful to B. Szafran for useful discussions.
This work was supported by Polish Ministry of Science and Higher Education
within the project N N202 103938 (2010-2013).
Calculations were performed in
ACK CYFRONET AGH on the RackServer Zeus.
\end{acknowledgements}
This article describes a method for using optimization to derive efficient independent transition functions for Markov chain Monte Carlo simulations. Our interest is in sampling from a posterior density $\pi(x)$ for problems in which the dimension of the model space is large, $\pi(x)$ is multimodal with regions of low probability separating the modes, and evaluation of the likelihood is expensive.
We restrict our attention to the special case for which the target density is the product of a multivariate Gaussian prior and a likelihood function for which the errors in observations are additive and Gaussian.
\section*{Introduction}
Many geoscience inverse problems are characterized by large numbers of parameters and nonlinearity in the relationship between parameters and observations. In some cases, the posterior pdf for the model parameters after assimilation of observations is multimodal, although the presence or absence of multiple modes in large models is seldom verified. Examples of smaller models in which multiple modes have been verified include flow and transport problems in porous media, where the nonlinearity resulting from uncertainty in connections in layered media and observation operators that average over layers can both lead to multiple modes \cite{christie:06,oliver:11b}.
Monte Carlo sampling is often the only viable method of quantifying uncertainty in the posterior distribution for problems of this type, but sampling from multi-modal distributions in high dimensions is extremely difficult.
In order to obtain a reasonably high probability of generating acceptable random-walk transitions in Metropolis sampling, the transitions must generally be small: for small enough transition distances, almost all proposals will be accepted. In that case, however,
it is possible to remain trapped in some modes for very long times before jumping to another mode and the time required to obtain useful Monte Carlo estimates can be impractical \cite{fort:14}. The problem of optimal scaling of the proposal distribution to maximize the efficiency of mixing of a random walk Metropolis algorithm has been solved for certain classes of target distributions \cite{roberts:97,roberts:01}. These scaling rules are not applicable in all contexts, and in particular are not appropriate for multimodal distributions \cite{fort:14a}.
The ability to design a proposal distribution that mixes well is key to the efficiency of the MCMC algorithm for multimodal distributions.
In this paper, we discuss an augmented variable independence Metropolis sampler that uses minimization to place proposals in regions of high probability.
Augmented variable methods have been shown to be useful in MCMC to construct efficient sampling algorithms and in particular for sampling from multimodal distributions \cite{besag:93}, where the auxiliary variables might allow for large jumps between modes \cite{tjelmeland:04}.
Storvik \cite{storvik:11} suggests that auxiliary variables can be used in MH either for generating better proposals or for calculation of acceptance probabilities and points out that the use of auxiliary variables allows flexibility in choosing the target distribution in the augmented space, as long as the marginal distribution for the variables of interest is unchanged. In this paper, the augmented variables are useful for obtaining a marginal proposal density that is close to the target marginal density, and for simplifying evaluation of the MH ratio.
Independence Metropolis samplers are Metropolis-Hastings samplers for which the transition probability does not depend on the current state of the chain ($q(y|x) = q(y)$) \cite{tierney:94}.
For independence sampling, it is useful to choose the proposal density $q(x)$ such that $\pi(x) / q(x)$ is bounded and as nearly constant as possible, in much the same way that optimal proposal densities would be chosen for importance sampling or for rejection sampling.
Liu \cite{liu:96} compares the efficiency of the three methods, concluding that the independence Metropolis sampling is asymptotically comparable to rejection sampling, but that independence Metropolis sampling is simpler to implement as it does not require knowledge of the envelope constant.
A number of methods have been developed for MH that use local gradient information either to make transitions to regions of high probability \cite{geweke:89,martin:12} or to allow better exploration of the target density than can be obtained by random-walk sampling \cite{duane:87,cunha:98,dostert:06}. In several methods, the transition is based on an optimization step.
The Multiple-Try method \cite{liu:00} uses local minimization along a line in state space to propose candidate states for updating one of the current population of states. Alternatively, Tjelmeland and Hegstad \cite{tjelmeland:01} described a two-step transition in the Metropolis-Hastings method in which mode-jumping transitions alternated with mode-exploring transitions.
The Randomized Maximum Likelihood (RML) method \cite{oliver:96e} was developed as an independence Metropolis sampler for the special case in which the target density $\pi(x) $ is the product of a multivariate Gaussian prior $p(x)$ and a likelihood function $p(d|x)$ for which the errors in observations $d$ are Gaussian.
Let $X$ be multivariate normal with mean $\mu$ and covariance $C_x$ such that the prior probability density for $X$ is
\[ p(x) = c_p \exp \left( - \frac{1}{2} (x-\mu)^{\text{\scriptsize T}} C_x^{-1} (x-\mu) \right). \]
Observations $d^o = g(x) + \epsilon_d$ with $\epsilon_d \sim N(0,C_d)$ are assimilated resulting in a posterior density
\begin{equation}
\pi(x) \propto \exp \left( -\frac{1}{2} (x - \mu )^{\text{\scriptsize T}} C_x^{-1} (x - \mu) - \frac{1}{2} (g(x)-d^o)^{\text{\scriptsize T}} C_d^{-1} (g(x)-d^o) \right) .
\label{eq:pi_x}
\end{equation}
Candidate samples in RML are obtained by a two-step process. In the first step, randomized samples are drawn from the prior distributions for the model variables and the data variables. In the second step, the candidate state is obtained by minimization of a stochastic objective function.
The distribution of candidate states $q(x)$ depends on both the model and data variables, so computation of the transition probability requires marginalization, $q(x) = \int q(x,d) \, dd$. For problems in which the prior is Gaussian, the observation operator is linear, and errors in observations are additive and Gaussian, the proposal density $q(x)$ is equal to the target density $\pi(x)$, so that all independently proposed candidate states are accepted in Metropolis-Hastings. For problems in which the observation operator is nonlinear, proposed candidates are always placed in regions of high probability density, but evaluation of the Metropolis-Hastings acceptance test is difficult, as it requires evaluation of the marginal density for model variables. Consequently, in practice the MH test is ignored for large geoscience problems. Bardsley et al.
\cite{bardsley:14} showed that for non-Gaussian but unimodal problems, the MH acceptance test could be based on a Jacobian evaluated at the maximizer of Eq.~\ref{eq:pi_x}. In this case, the probability of candidate states could be computed relatively easily, and sampling was shown to be correct for several test problems.
Here we describe a modified Metropolization approach in which the randomized data are included in the state vector. The proposal of model variables is accomplished via a local optimization, but calibrated data variables are retained in the state space and used for evaluation of the probability of acceptance of the state. By doing this, the method allows sampling from multimodal distributions and the need to compute the marginal density for model variables is avoided. The key to the use of RML for correct sampling is determining the distribution of the proposals. This is easier when the state space is augmented with data variables.
The paper is organized as follows. In section~\ref{sec:old_rml} we summarize the original RML algorithm as an independence Metropolis sampler for which only model variables are included in the state of the Markov chain. In section~\ref{sec:new_rml} we describe an augmented state version of the RML independence Metropolis sampler. For this version, computation of the Metropolis-Hastings ratio is simpler as it only requires the Jacobian of the transformation of the augmented state variables. Section~\ref{sec:examples} provides several examples for validation of the algorithm.
\section{RML for model variables} \label{sec:old_rml}
In both the quasi-linear estimation method \cite{kitanidis:95} and the randomized maximum likelihood method \cite{oliver:96e}, samples are generated in high-probability regions of the posterior pdf by the simple expedient of simultaneously minimizing the magnitude of the misfit of the model with observations and the magnitude of the change in the model variables from an unconditional sample. Unconditional realizations of model variables ($x\sbr{uc}$) are sampled from the prior and realizations of observations ($d\sbr{uc}$) are sampled from the distribution of observation noise
\[ x\sbr{uc} \sim N[ \mu, C_x] \quad \text{ and } \quad
d\sbr{uc} \sim N[ d\sbr{obs}, C_d] . \]
A sample from a high probability region is generated by minimizing a nonlinear least-squares functional:
\begin{equation}%
x_\ast = \argmin_{x} \Bigl[ \frac{1}{2} (x - x\sbr{uc})^{\text{\scriptsize T}}
C_x^{-1} (x - x\sbr{uc})
+\frac{1}{2} (g(x) - d\sbr{uc})^{\text{\scriptsize T}} C_d^{-1} (g(x) - d\sbr{uc}) \Bigr]
\label{eq:fYus}
\end{equation}
Although the samples generated in this way tend to be located in parts of parameter space with high posterior probability, the samples are not necessarily distributed according to the posterior distribution. The distribution of RML samples can, however, be computed if the inverse transformation from the calibrated samples to the prior samples is known.
If the objective function (Eq.~\ref{eq:fYus}) is differentiable, then a necessary condition for $x_\ast$ to be a minimizer is that
\begin{equation} x\sbr{uc} =
x_\ast + C_x G^{\text{\scriptsize T}}
C_d^{-1} ( G(x_\ast ) - d\sbr{uc} ) .
\label{eq:Xus}
\end{equation}
where $G(x_\ast ) = \partial g/\partial x |_{x_\ast}$.
Eq.~\ref{eq:Xus} provides the inverse transformation for the unconditional samples.
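As a concrete illustration, the candidate-generation step of Eq.~\ref{eq:fYus} and the stationarity condition of Eq.~\ref{eq:Xus} can be checked numerically. The sketch below uses a hypothetical one-dimensional setup with $g(x)=x^2$; the observation operator and all numerical values are illustrative, not taken from the text.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 1-D setup: prior x ~ N(mu, C_x), observation d = g(x) + eps,
# eps ~ N(0, C_d).  The choice g(x) = x^2 is illustrative only.
mu, C_x = 0.0, 1.0
d_obs, C_d = 2.0, 0.5**2
g = lambda x: x**2          # nonlinear observation operator
gprime = lambda x: 2.0*x    # G(x) = dg/dx

rng = np.random.default_rng(0)
x_uc = rng.normal(mu, np.sqrt(C_x))     # unconditional model sample
d_uc = rng.normal(d_obs, np.sqrt(C_d))  # unconditional data sample

# Stochastic objective of Eq. (fYus)
obj = lambda x: 0.5*(x - x_uc)**2/C_x + 0.5*(g(x) - d_uc)**2/C_d
x_star = minimize(lambda v: obj(v[0]), x0=[x_uc]).x[0]

# Stationarity, Eq. (Xus): x_uc = x_* + C_x G^T C_d^{-1} (g(x_*) - d_uc)
x_uc_recovered = x_star + C_x*gprime(x_star)/C_d*(g(x_star) - d_uc)
print(abs(x_uc_recovered - x_uc))  # ~0 at any stationary point
```

The inverse relation holds at any local minimizer, which is what makes the push-forward density of the candidates computable.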
The joint distribution for $x\sbr{uc}$ and $d\sbr{uc}$ is
\begin{equation*}%
f(x\sbr{uc},d\sbr{uc}) \propto \exp \Bigl( -\frac{1}{2} (x\sbr{uc} - \mu)^{\text{\scriptsize T}}
C_x^{-1} (x\sbr{uc} - \mu)
-\frac{1}{2} (d\sbr{uc} - d\sbr{obs})^{\text{\scriptsize T}} C_d^{-1} (d\sbr{uc} - d\sbr{obs}) \Bigr)
\end{equation*}
and hence the joint
probability of proposing $(x_\ast,d\sbr{uc})$ is
\begin{equation}
q(x_\ast,d\sbr{uc}) = f (x\sbr{uc} (x_\ast ),d\sbr{uc}) |J|
\label{eq:q_RML_old}
\end{equation}
where $J$ is the Jacobian of the inverse transformation from $(x_\ast,d_\ast)$ to $(x\sbr{uc},d\sbr{uc})$. By choosing $d_\ast = d\sbr{uc}$, the Jacobian determinant requires only computation of
\[
J = \biggl| \frac{\partial (x\sbr{uc})}{\partial (x_\ast)} \biggr|.
\]
The marginal probability of proposing $x_\ast$ is then obtained by integrating
$q(x_\ast,d\sbr{uc})$ over the data space,
\begin{equation}
q_m(x_\ast) = \int_{D} q(x_\ast,d\sbr{uc}) \, dd\sbr{uc} .
\label{eq:marginal_old_RML}
\end{equation}
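For intuition about the cost of Eq.~\ref{eq:marginal_old_RML}, the marginal proposal density can be computed by quadrature in one dimension. The sketch below assumes a hypothetical scalar problem with $g(x)=x^2$; note that the absolute value of the Jacobian is used here, whereas the original analysis restricts the integration to the region where the Jacobian is positive. One such quadrature is needed for every candidate state.

```python
import numpy as np

# Hypothetical 1-D illustration of Eq. (marginal_old_RML): the marginal
# proposal density q_m(x_*) obtained by quadrature over the data variable.
mu, C_x, d_obs, C_d = 0.0, 1.0, 2.0, 0.25
g, g1, g2 = (lambda x: x**2), (lambda x: 2.0*x), (lambda x: 2.0)

def q_joint(x_s, d_uc):
    # x_uc as a function of (x_*, d_uc), Eq. (Xus), and the 1-D Jacobian;
    # |J| is used here for simplicity.
    x_uc = x_s + C_x*g1(x_s)/C_d*(g(x_s) - d_uc)
    J = 1 + C_x/C_d*(g1(x_s)**2 + g2(x_s)*(g(x_s) - d_uc))
    f = np.exp(-0.5*(x_uc - mu)**2/C_x - 0.5*(d_uc - d_obs)**2/C_d)
    return f*np.abs(J)

d_grid = np.linspace(d_obs - 6.0, d_obs + 6.0, 2001)
x_s = 1.2
q_m = np.sum(q_joint(x_s, d_grid))*(d_grid[1] - d_grid[0])
print(q_m)  # one quadrature like this per candidate state
```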
For an independence sampler in the Metropolis-Hastings algorithm \cite{tierney:94}, the probability of proposing a transition to state $x_\ast$ is
independent of the current state
$x$, and the proposed state $x_\ast$ is accepted with probability
\begin{equation}%
\alpha(x, x_\ast) = \min \left( 1,
\frac{\pi(x_\ast ) q(x) }{\pi(x) q(x_\ast) } \right).
\label{eq:hastingsaij}
\end{equation}
The
probability density for the proposed model, $\pi(x_\ast)$, is
computed from Eq.~\ref{eq:pi_x}.
Note that the probability $\pi$ is not based on the quality of the match to the perturbed data
obtained in the minimization, but on the quality of the match to the
prior model and the actual observed data.
The marginal density $q_m(x_\ast)$ can be used as the proposal density in the MH test, or to provide weights for importance sampling. Algorithm~\ref{alg:RML_old} describes the method for using RML as a proposal mechanism in a Metropolis-Hastings procedure \cite{oliver:96e}.
\begin{algorithm}
\caption{RML for model variables}
\label{alg:RML_old}
\begin{algorithmic}[1]
\State Generate initial state $(x_0, d_0)$
\For{$i \le i\sbr{max}$ }
\Procedure{Generate candidate state}{}
\State Generate $x_{uc} \sim N[\mu , C_x]$ and $d_{uc} \sim N[d\sbr{obs} , C_d]$
\State Compute
\State $x_\ast = \argmin_{x} \Bigl[ \frac{1}{2} (x - x\sbr{uc})^{\text{\scriptsize T}}
C_x^{-1} (x - x\sbr{uc})
+\frac{1}{2} (g(x) - d\sbr{uc})^{\text{\scriptsize T}} C_d^{-1} (g(x) - d\sbr{uc}) \Bigr]$
\EndProcedure
\State Compute proposal density $q(x_\ast)= \int_{D} q(x_\ast,d) \, dd $ using Eq.~\ref{eq:q_RML_old} for $q(x_\ast,d)$.
\State Compute $\alpha(x_{i},x_\ast) = \min \left( 1, \frac{\pi(x_\ast) q(x_{i}) }{\pi(x_i) q(x_\ast)} \right)$
\State Generate $u$ from $U(0,1)$
\If {$u \le \alpha(x_{i},x_\ast)$}
\State $x_{i+1} \gets x_\ast$
\Else
\State $x_{i+1} \gets x_i$
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
When the relationship of model variables to observations is linear and observation errors are Gaussian, it is straightforward to show that the minimizers of Eq.~\ref{eq:fYus} are distributed as the posterior distribution for $x$ \cite{oliver:96b,bardsley:13} and hence
all proposed transitions are accepted when Algorithm~\ref{alg:RML_old} is used for Gauss-linear inverse problems.
When Algorithm~\ref{alg:RML_old} was applied to a problem for which the posterior distribution was bimodal (as in Example 2, below),
the acceptance rate for proposed independent transitions was still very high (76\%) \cite{oliver:96e}.
Although the method was shown to sample correctly, the work required for
computing the marginal distribution for the candidate states made application of the full method impractical in real problems. Consequently, when the method has been applied to large problems, the MH acceptance test was omitted \cite{gao:06a,calverley:11,chen:14}, with the understanding that the sampling method is then only approximately correct.
\section{Augmented state RML} \label{sec:new_rml}
Because the major challenge with the use of RML as a method for generating candidate states in Algorithm~\ref{alg:RML_old} is the computation of the marginal probability of proposing the candidate model variables, it is beneficial to modify the method to avoid the need for computation of the marginal proposal density.
Instead of letting the state of the chain be $x_\ast$, we augment the state with suitably defined data variables $d_\ast$. Although auxiliary variables can improve sampling in several ways \cite{storvik:11}, the purpose of including auxiliary variables in this case is primarily to simplify computation of the MH acceptance criterion.
\subsection{Posterior (target) distribution}
Consider the joint space of model variables and data in the case for which the observation errors and the modelization errors can both be modeled as Gaussian. The prior distribution of model variables is assumed Gaussian with mean $\mu$ and covariance $C_x$; the prior distribution of observations is assumed to be Gaussian with mean $d\sbr{obs}$ and covariance $(1-\gamma) C_d$; modelization errors are Gaussian with mean 0 and covariance $ \gamma C_d$.
The posterior joint probability of $(x, d)$, obtained by combining states of information is
\begin{multline}%
\pi(x,d) \propto \exp \Bigl[ -\frac{1}{2} (x - \mu)^{\text{\scriptsize T}}
C_x^{-1} (x - \mu) -\frac{1}{2 \gamma} ( g(x) - d)^{\text{\scriptsize T}} C_d^{-1} (g(x) - d) \\
-\frac{1}{2 (1-\gamma)} (d - d\sbr{obs})^{\text{\scriptsize T}} C_d^{-1} (d - d\sbr{obs}) \Bigr].
\label{eq:post_md}
\end{multline}
The marginal posterior density for model variable $x$ can be shown to be \cite{tarantola:87},
\begin{equation}%
\pi(x) \propto \exp \Bigl[ -\frac{1}{2} (x - \mu)^{\text{\scriptsize T}}
C_x^{-1} (x - \mu)
-\frac{1}{2 } ( g(x) - d\sbr{obs})^{\text{\scriptsize T}} C_d^{-1} (g(x) - d\sbr{obs}) \Bigr]
\label{eq:post_m}
\end{equation}
which is independent of $\gamma$.
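The claim that the marginal for $x$ is independent of $\gamma$ is easy to verify numerically. The following sketch (a hypothetical 1-D nonlinear example with $g(x)=x^2$; all values illustrative) integrates the joint density of Eq.~\ref{eq:post_md} over $d$ on a grid for two values of $\gamma$ and compares the normalized marginals.

```python
import numpy as np

# Numerical check that the marginal of Eq. (post_md) over d does not
# depend on gamma, as stated for Eq. (post_m).  Hypothetical 1-D example.
mu, C_x, d_obs, C_d = 0.0, 1.0, 2.0, 0.25
g = lambda x: x**2

x = np.linspace(-4.0, 4.0, 801)
d = np.linspace(-6.0, 10.0, 1601)
dx, dd = x[1] - x[0], d[1] - d[0]
X, D = np.meshgrid(x, d, indexing="ij")

def marginal(gamma):
    logp = (-0.5*(X - mu)**2/C_x
            - 0.5*(g(X) - D)**2/(gamma*C_d)
            - 0.5*(D - d_obs)**2/((1 - gamma)*C_d))
    p = np.exp(logp - logp.max())
    m = p.sum(axis=1)*dd        # integrate out the data variable
    return m/(m.sum()*dx)       # normalized marginal for x

m1, m2 = marginal(0.1), marginal(0.7)
print(np.max(np.abs(m1 - m2)))  # close to zero: independent of gamma
```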
Although the introduction of Gaussian modelization error does not affect the posterior marginal distribution of $x$ as long as it is compensated for by a reduction in observation error, the posterior joint probability of $(x, d)$ does depend strongly on the value of $\gamma$. Figure~\ref{fig:dependence_gamma} shows the dependence of the joint density on $\gamma$ for a simple example in which the observation is related linearly to the model variable by $d = 2x$. As modelization error increases, the variables $x$ and $d$ become more independent.
\begin{figure}[htbp!]
\begin{tabular}{ccc}
\begin{overpic}[width=0.31\textwidth]{Figures/Gamma_eq_02.pdf}
\put(-6.,50.){\footnotesize{$d$}}
\put(50,-6){\footnotesize{$x$}}
\put(10,87){\footnotesize{$\gamma = 0.02$}}
\end{overpic}
&
\begin{overpic}[width=0.31\textwidth]{Figures/Gamma_eq_40.pdf}
\put(50,-6){\footnotesize{$x$}}
\put(10,87){\footnotesize{$\gamma = 0.40$}}
\end{overpic}
&
\begin{overpic}[width=0.31\textwidth]{Figures/Gamma_eq_90.pdf}
\put(50,-6){\footnotesize{$x$}}
\put(10,87){\footnotesize{$\gamma = 0.90$}}
\end{overpic}
\end{tabular}
\caption{Dependence of the joint density for $(x, d)$ on the magnitude of the modelization error $\gamma$.}
\label{fig:dependence_gamma}
\end{figure}
\subsection{Proposal distribution}
As in the original version of RML,
we draw unconditional samples from the prior distribution of model and data variables,
\begin{equation}
p(x\sbr{uc},d\sbr{uc}) = c_p \exp \left[ -\frac{1}{2} (x\sbr{uc} - \mu)^{\text{\scriptsize T}} C_x^{-1} (x\sbr{uc} - \mu)
-\frac{1}{2} (d\sbr{uc} - d\sbr{obs})^{\text{\scriptsize T}} C_d^{-1} (d\sbr{uc} - d\sbr{obs}) \right]
\label{eq:joint_prior_xd}
\end{equation}
which is equivalent to drawing samples from the normal distributions
$x\sbr{uc} \sim N[ \mu , C_x]$ and
$d\sbr{uc} \sim N[ d\sbr{obs} , C_d]$.
%
Candidate transitions in an MH algorithm are obtained by minimizing a nonlinear least-squares function
\begin{multline*}%
(x_\ast, d_\ast) = \argmin_{x,d} \Bigl[ \frac{1}{2} (x - x\sbr{uc})^{\text{\scriptsize T}}
C_x^{-1} (x - x\sbr{uc}) +\frac{1}{2 \rho} (g(x) - d)^{\text{\scriptsize T}} C_d^{-1} (g(x) - d) \\
+\frac{1}{2 (1-\rho)} (d - d\sbr{uc})^{\text{\scriptsize T}} C_d^{-1} (d - d\sbr{uc})
\Bigr]
\end{multline*}
where $(x_\ast, d_\ast)$ are clearly functions of $(x\sbr{uc},d\sbr{uc})$.
Implicit expressions for the relationship between $(x_\ast, d_\ast)$ and $(x\sbr{uc},d\sbr{uc})$ are derived from the requirement that the gradient at the minimum must vanish. At the minimum, the following relationships hold:
\begin{equation}
x_\ast - x\sbr{uc} +\frac{1}{\rho} C_x G^{\text{\scriptsize T}} C_d^{-1} (g(x_\ast) - d_\ast) = 0
\label{eq:m}
\end{equation}
and
\begin{equation} d_\ast - \rho d\sbr{uc}
- (1- \rho) g(x_\ast) = 0.
\label{eq:d}
\end{equation}
Using Eq.~\ref{eq:d} to eliminate $ d_\ast$ from Eq.~\ref{eq:m} gives
\begin{equation}
x_\ast - x\sbr{uc} + C_x G^{\text{\scriptsize T}} C_d^{-1} ( g(x_\ast) - d\sbr{uc}
) = 0
\label{eq:m2}
\end{equation}
which shows that the marginal distribution of $x_\ast$ is independent of $\rho$.
Since this is the same relationship that is obtained from the standard RML, $x_\ast$ must be distributed according to the posterior marginal distribution of $x$ (Eq.~\ref{eq:post_m}) for Gauss-linear problems. The advantage of the joint state space is that the proposal density for $(x_\ast, d_\ast)$ can be easily computed.
The inverse transformation is more straightforward than the forward map, and is needed for computation of the probability density of candidate states. From Eq.~\ref{eq:m}
\begin{equation}
x\sbr{uc} = x_\ast + \frac{1}{\rho} C_x G^{\text{\scriptsize T}} C_d^{-1} (g(x_\ast) - d_\ast)
\label{eq:muc}
\end{equation}
and from Eq.~\ref{eq:d}
\begin{equation} d\sbr{uc} = \frac{1}{\rho} d_\ast
- \left(\frac{1- \rho}{\rho} \right) g(x_\ast) .
\label{eq:duc}
\end{equation}
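A quick numerical round trip: minimizing the augmented objective for a hypothetical scalar problem (again $g(x)=x^2$, illustrative values) and then applying Eqs.~\ref{eq:muc} and \ref{eq:duc} should recover the unconditional samples up to the optimizer tolerance.

```python
import numpy as np
from scipy.optimize import minimize

# Round-trip check (hypothetical 1-D example): minimize the augmented
# objective to get (x_*, d_*), then recover (x_uc, d_uc) from the
# inverse transformation, Eqs. (muc) and (duc).
mu, C_x, d_obs, C_d, rho = 0.0, 1.0, 2.0, 0.25, 0.5
g, gprime = (lambda x: x**2), (lambda x: 2.0*x)

rng = np.random.default_rng(1)
x_uc = rng.normal(mu, np.sqrt(C_x))
d_uc = rng.normal(d_obs, np.sqrt(C_d))

def obj(v):
    x, d = v
    return (0.5*(x - x_uc)**2/C_x
            + 0.5*(g(x) - d)**2/(rho*C_d)
            + 0.5*(d - d_uc)**2/((1 - rho)*C_d))

x_star, d_star = minimize(obj, x0=[x_uc, d_uc]).x

# Inverse transformation
x_back = x_star + C_x*gprime(x_star)/(rho*C_d)*(g(x_star) - d_star)
d_back = d_star/rho - (1 - rho)/rho*g(x_star)
print(abs(x_back - x_uc), abs(d_back - d_uc))  # both ~0
```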
The distribution of candidate states is then obtained by substitution of the expressions for $x\sbr{uc}$ and $d\sbr{uc}$ as functions of $x_\ast$ and $d_\ast$ into the expression for the probability density of $x\sbr{uc}$ and $d\sbr{uc}$ (Eq.~\ref{eq:joint_prior_xd}).
\begin{equation}
\begin{split}
q(x_\ast,d_\ast) & = p \left( x\sbr{uc} (x_\ast, d_\ast), d\sbr{uc} (x_\ast, d_\ast) \right) \det J \\
& = c_p \exp \left[ -\frac{1}{2} (x\sbr{uc} (x_\ast, d_\ast) - \mu)^{\text{\scriptsize T}} C_x^{-1} (x\sbr{uc} (x_\ast, d_\ast) - \mu) \right.
\\
& \qquad \qquad
\left. -\frac{1}{2} (d\sbr{uc} (x_\ast, d_\ast) - d\sbr{obs})^{\text{\scriptsize T}} C_d^{-1} (d\sbr{uc} (x_\ast, d_\ast) - d\sbr{obs}) \right] \det J
\end{split}
\label{eq:q_s_joint}
\end{equation}
where $x\sbr{uc} (x_\ast, d_\ast)$ and $d\sbr{uc} (x_\ast, d_\ast)$ are defined in Eqs.~\ref{eq:muc} and \ref{eq:duc} and the dependence of the expressions on $\rho$ has been suppressed for clarity.
Because the state is composed of both model and data variables, the Jacobian matrix takes a block form with
\[ \begin{split}
\frac{\partial x\sbr{uc}^\alpha}{\partial x_\ast^\beta} & = I^{\alpha \beta} + \frac{1}{\rho} \sum_\gamma \sum_i \sum_j C_x^{\alpha \gamma} \left[ G^{i \gamma} \left[ C_d^{-1} \right]^{ij} G^{j \beta} + \frac{\partial G^{i \gamma}}{\partial x^\beta} \left[ C_d^{-1} \right]^{ij} (g^j(x_\ast) - d^j_\ast) \right]
\end{split}
\]
and
\[ G^{i \alpha} = \frac{\partial g^i}{\partial x^\alpha} . \]
Other derivatives required for the Jacobian can be compactly written in matrix notation.
\[ \frac{\partial x\sbr{uc} }{ \partial d_\ast } = - \frac{1}{\rho} C_x G^{\text{\scriptsize T}} C_d^{-1} ,
\]
\[ \frac{\partial d\sbr{uc} }{ \partial x_\ast } = - \left(\frac{1- \rho}{\rho} \right) G
\]
and
\[ \frac{\partial d\sbr{uc} }{ \partial d_\ast } = \frac{1}{\rho} I.
\]
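In the scalar case the four blocks reduce to simple expressions, which can be checked against finite differences of the inverse map of Eqs.~\ref{eq:muc} and \ref{eq:duc}. The sketch below uses hypothetical values and $g(x)=x^2$.

```python
import numpy as np

# Scalar sanity check of the Jacobian blocks: compare analytic derivatives
# of the inverse map with central finite differences.  Hypothetical values.
C_x, C_d, rho = 1.0, 0.25, 0.5
g, g1, g2 = (lambda x: x**2), (lambda x: 2.0*x), (lambda x: 2.0)

def inverse_map(x_s, d_s):
    x_uc = x_s + C_x*g1(x_s)/(rho*C_d)*(g(x_s) - d_s)   # Eq. (muc)
    d_uc = d_s/rho - (1 - rho)/rho*g(x_s)               # Eq. (duc)
    return np.array([x_uc, d_uc])

x_s, d_s = 0.7, 1.3
J_analytic = np.array([
    [1 + C_x/(rho*C_d)*(g1(x_s)**2 + g2(x_s)*(g(x_s) - d_s)),
     -C_x*g1(x_s)/(rho*C_d)],
    [-(1 - rho)/rho*g1(x_s), 1/rho],
])

h = 1e-6
J_fd = np.column_stack([
    (inverse_map(x_s + h, d_s) - inverse_map(x_s - h, d_s))/(2*h),
    (inverse_map(x_s, d_s + h) - inverse_map(x_s, d_s - h))/(2*h),
])
print(np.max(np.abs(J_analytic - J_fd)))  # ~0
```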
Figure~\ref{fig:dependence_rho} shows the dependence of the joint proposal density on $\rho$ for the same simple example illustrated in Fig.~\ref{fig:dependence_gamma} in which the observation is related to the model variable by $d = 2x$.
\begin{figure}[htbp!]
\begin{tabular}{ccc}
\begin{overpic}[width=0.31\textwidth]{Figures/proposal_beta_eq_10.pdf}
\put(-6.,50.){\footnotesize{$d_\ast$}}
\put(50,-6){\footnotesize{$x_\ast$}}
\put(10,87){\footnotesize{$\rho = 0.10$}}
\end{overpic}
&
\begin{overpic}[width=0.31\textwidth]{Figures/proposal_beta_eq_50.pdf}
\put(50,-6){\footnotesize{$x_\ast$}}
\put(10,87){\footnotesize{$\rho = 0.50$}}
\end{overpic}
&
\begin{overpic}[width=0.31\textwidth]{Figures/proposal_beta_eq_99.pdf}
\put(50,-6){\footnotesize{$x_\ast$}}
\put(10,87){\footnotesize{$\rho = 0.99$}}
\end{overpic}
\end{tabular}
\caption{Dependence of the joint density for proposed transitions $(x_\ast, d_\ast)$ on the magnitude of the modelization error partitioning parameter $\rho$.}
\label{fig:dependence_rho}
\end{figure}
The effect of $\rho$ on the joint distribution for a linear Gaussian problem is easily seen by examination of the covariance of the joint variables. It is straightforward to show that the relationship between the proposed states and the unconditional states is
\[
\begin{split}
\begin{bmatrix} x_\ast \\ d_\ast \end{bmatrix}
& =
\begin{bmatrix}
I - C_x G^{\text{\scriptsize T}} \left( G C_x G^{\text{\scriptsize T}} + C_d \right)^{-1} G &
C_x G^{\text{\scriptsize T}} \left( G C_x G^{\text{\scriptsize T}} + C_d \right)^{-1} \\
(1-\rho ) C_d \left( G C_x G^{\text{\scriptsize T}} + C_d \right)^{-1} G &
\left( G C_x G^{\text{\scriptsize T}} + \rho C_d \right) \left( G C_x G^{\text{\scriptsize T}} + C_d \right)^{-1} \\
\end{bmatrix}
\begin{bmatrix} x\sbr{uc} \\ d\sbr{uc} \end{bmatrix}
\\
& =
A \begin{bmatrix} x\sbr{uc} \\ d\sbr{uc} \end{bmatrix} .
\end{split}
\]
The covariance of the proposed states $(x_\ast, d_\ast)$ is
\[
\begin{split}
C\sbr{post}
& =
A \begin{bmatrix} C_x & 0 \\ 0 & C_d \end{bmatrix} A^{\text{\scriptsize T}}
\\
& =
\begin{bmatrix}
C_{x'} &
C_{x'} G^{\text{\scriptsize T}} \\
G C_{x'} &
G C_{x'} G^{\text{\scriptsize T}} + \rho^2 C_d A_D^{-1} C_d
\end{bmatrix}
\end{split}
\]
where
\[ C_{x'} = \left( C_x^{-1} + G^{\text{\scriptsize T}} C_d^{-1} G \right)^{-1}
\quad \text{and} \quad
A_D = G C_x G^{\text{\scriptsize T}} + C_d . \]
Note that for $\rho = 0$, the covariance of proposed states is the same as the a posteriori covariance for the joint model-data space \cite{tarantola:87}.
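For the linear-Gaussian case, the covariance blocks above can be verified directly from the stationarity conditions Eqs.~\ref{eq:m2} and \ref{eq:d}, without reference to the explicit transition matrix. The sketch below uses small random matrices, and $A_D$ is taken to be $G C_x G^{\text{\scriptsize T}} + C_d$.

```python
import numpy as np

# Verification of the proposal covariance blocks for the linear case:
# C_post = [[C_x', C_x' G^T], [G C_x', G C_x' G^T + rho^2 C_d A_D^{-1} C_d]].
rng = np.random.default_rng(2)
nx, nd, rho = 3, 2, 0.5
G = rng.normal(size=(nd, nx))
Lx = rng.normal(size=(nx, nx))
Ld = rng.normal(size=(nd, nd))
C_x = Lx @ Lx.T + nx*np.eye(nx)     # random SPD covariances
C_d = Ld @ Ld.T + nd*np.eye(nd)

Cxi, Cdi = np.linalg.inv(C_x), np.linalg.inv(C_d)
Cxp = np.linalg.inv(Cxi + G.T @ Cdi @ G)   # C_{x'}
A_D = G @ C_x @ G.T + C_d

# Linear maps (x_uc, d_uc) -> (x_*, d_*) from Eqs. (m2) and (d)
Mx = np.hstack([Cxp @ Cxi, Cxp @ G.T @ Cdi])
Md = (1 - rho)*G @ Mx
Md[:, nx:] += rho*np.eye(nd)

P = np.zeros((nx + nd, nx + nd))
P[:nx, :nx], P[nx:, nx:] = C_x, C_d        # prior covariance of (x_uc, d_uc)

err_xx = np.max(np.abs(Mx @ P @ Mx.T - Cxp))
err_xd = np.max(np.abs(Mx @ P @ Md.T - Cxp @ G.T))
err_dd = np.max(np.abs(Md @ P @ Md.T
                       - (G @ Cxp @ G.T
                          + rho**2*C_d @ np.linalg.inv(A_D) @ C_d)))
print(err_xx, err_xd, err_dd)  # all ~0
```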
The joint density given by Eq.~\ref{eq:q_s_joint} can be used as the proposal density in an independence MH sampler for the joint posterior distribution (Eq.~\ref{eq:post_md}), with the objective of sampling from the posterior marginal distribution for model variables (Eq.~\ref{eq:post_m}). Algorithm~\ref{alg:RML_new} describes the use of minimization to provide efficient candidate states.
\begin{algorithm}
\caption{Augmented state RML for model-data variables}
\label{alg:RML_new}
\begin{algorithmic}[1]
\State Generate initial state $(x_0, d_0)$
\For{$i \le i\sbr{max}$ }
\Procedure{Generate candidate state}{}
\State Generate $x_{uc} \sim N[\mu , C_x]$ and $d_{uc} \sim N[d\sbr{obs} , C_d]$
\State Compute
\State $(x_\ast, d_\ast) = \argmin_{x,d} \Bigl[ \frac{1}{2} (x - x\sbr{uc})^{\text{\scriptsize T}}
C_x^{-1} (x - x\sbr{uc}) \Bigr.$
\State $ \qquad \Bigl. \mbox{} +\frac{1}{2 \rho} (g(x) - d)^{\text{\scriptsize T}} C_d^{-1} (g(x) - d)
+\frac{1}{2 (1-\rho)} (d - d\sbr{uc})^{\text{\scriptsize T}} C_d^{-1} (d - d\sbr{uc})
\Bigr]$
\EndProcedure
\State Compute proposal density $q(x_\ast,d_\ast)$ using Eq.~\ref{eq:q_s_joint}.
\State Compute $\alpha(x_{i}, d_i,x_\ast,d_\ast) = \min \left( 1, \frac{\pi(x_\ast, d_\ast) q(x_{i},d_i) }{\pi(x_i,d_i) q(x_\ast,d_\ast)} \right)$
\State Generate $u$ from $U(0,1)$
\If {$u \le \alpha(x_{i}, d_i,x_\ast,d_\ast)$}
\State $x_{i+1} \gets x_\ast$ and $d_{i+1} \gets d_\ast$
\Else
\State $x_{i+1} \gets x_i$ and $d_{i+1} \gets d_i$
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
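The following is a minimal sketch of Algorithm~\ref{alg:RML_new} for a hypothetical one-dimensional problem; the choice $g(x)=x^2$ and all parameter values are illustrative, not taken from the examples below.

```python
import numpy as np
from scipy.optimize import minimize

# Minimal sketch of the augmented-state RML sampler (hypothetical 1-D case).
mu, C_x, d_obs, C_d = 0.0, 1.0, 2.0, 0.25
gamma, rho = 0.1, 0.5
g, g1, g2 = (lambda x: x**2), (lambda x: 2.0*x), (lambda x: 2.0)

def log_pi(x, d):
    # joint target, Eq. (post_md)
    return (-0.5*(x - mu)**2/C_x
            - 0.5*(g(x) - d)**2/(gamma*C_d)
            - 0.5*(d - d_obs)**2/((1 - gamma)*C_d))

def log_q(x_s, d_s):
    # proposal density, Eq. (q_s_joint), via the inverse map and its Jacobian
    x_uc = x_s + C_x*g1(x_s)/(rho*C_d)*(g(x_s) - d_s)
    d_uc = d_s/rho - (1 - rho)/rho*g(x_s)
    J = np.array([[1 + C_x/(rho*C_d)*(g1(x_s)**2 + g2(x_s)*(g(x_s) - d_s)),
                   -C_x*g1(x_s)/(rho*C_d)],
                  [-(1 - rho)/rho*g1(x_s), 1/rho]])
    return (-0.5*(x_uc - mu)**2/C_x - 0.5*(d_uc - d_obs)**2/C_d
            + np.log(abs(np.linalg.det(J))))

rng = np.random.default_rng(3)
x, d = mu, d_obs                        # initial state
samples, accepted = [], 0
for _ in range(500):
    x_uc = rng.normal(mu, np.sqrt(C_x))
    d_uc = rng.normal(d_obs, np.sqrt(C_d))
    obj = lambda v: (0.5*(v[0] - x_uc)**2/C_x
                     + 0.5*(g(v[0]) - v[1])**2/(rho*C_d)
                     + 0.5*(v[1] - d_uc)**2/((1 - rho)*C_d))
    x_s, d_s = minimize(obj, x0=[x_uc, d_uc]).x
    log_alpha = (log_pi(x_s, d_s) + log_q(x, d)
                 - log_pi(x, d) - log_q(x_s, d_s))
    if np.log(rng.uniform()) < log_alpha:
        x, d, accepted = x_s, d_s, accepted + 1
    samples.append(x)
print(accepted/500)  # acceptance rate
```

Each iteration requires one minimization, so the cost per proposal is dominated by evaluations of $g$; the MH test itself only needs the inverse map and a $2\times2$ determinant.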
\subsection{Selection of $\rho$ and $\gamma$}
The efficiency of the independence Metropolis sampler depends on the proposal distribution $q$ (Eq.~\ref{eq:q_s_joint}) being close to the
target density $\pi$ (Eq.~\ref{eq:post_md}) so that the ratio $\pi(x_\ast, d_\ast)/q(x_\ast,d_\ast)$ is as close to constant as possible \cite{tierney:94}. For Gauss-linear problems, the two distributions are identical if one chooses $\rho = 0$ and $\gamma = 0$, but in this case the distributions are both degenerate and defining a useful inverse transform is not straightforward. As $\gamma$ increases, however, the correlation between $x$ and $d$ decreases. It appears that efficient mixing is obtained when $\gamma$ is chosen greater than 0 but much less than 1, and $\rho \approx 0.5$. This provides a proposal density that is ``wider'' than the target density and a joint posterior density that is not degenerate.
\section{Examples} \label{sec:examples}
It is relatively difficult to find good multimodal inverse problems in the literature that are characterized by a Gaussian prior and Gaussian noise in observations, and are small enough to allow exhaustive analysis.
Most of the multimodal examples use Gaussian mixtures \cite{tjelmeland:01,feroz:08,gramacy:10}, although Tjelmeland and Hegstad \cite{tjelmeland:01} include an example that is quite similar to one we present here.
This section gives three examples which display the characteristics of the RML method for proposing transitions in a Metropolis-Hastings algorithm. In each case, it is possible to compare the distribution of samples with the true posterior probability density for a small inverse problem. The first example consists of a single model variable and a single data variable so that the joint augmented distribution can be visualized easily. The posterior distribution on the model variable is bimodal but the probability in the region between modes is relatively large. In the second example, the model space dimension is two and the probability mass is located in three isolated regions. Two of the regions are nearly Gaussian, but the third region has a complex shape that makes sampling difficult.
\subsection{Example 1: Bimodal}
In this example, the posterior distribution for the model variable is bimodal. A region of small but significant probability density spans the region between the peaks. This problem was originally used to test the sampling properties of the RML algorithm \cite{oliver:96e}, where it was shown that correct sampling could be attained if the marginal distribution of transition proposals (Eq.~\ref{eq:marginal_old_RML}) is used in the MH test.
Computation of the marginal distribution was relatively difficult, however, even for this single-variable problem because the integration had to be limited to the region of the joint model-data space in which the Jacobian determinant was positive.
Here we apply the augmented variable RML method (Algorithm \ref{alg:RML_new}) on the joint model-data space, without the need to compute the marginal distribution of proposals. The parameter $\gamma$, which determines the relative contribution of modelization error vs observation error in the posterior, is set at a small nonzero value (0.01) to prevent degeneracy of the posterior distribution.
\begin{figure}[htbp!]
\centering
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{Figures/pPost_bimodal_gamma01.pdf}
\caption{Posterior joint pdf} \label{fig:bimodal_compare_a}
\end{subfigure}%
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{Figures/qij_bimodal_beta_65.pdf}
\caption{RML proposal density.} \label{fig:bimodal_compare_b}
\end{subfigure}%
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{Figures/superpose_bimodal_pdfs.pdf}
\caption{Comparison.} \label{fig:bimodal_compare_c}
\end{subfigure}%
\caption{Comparison of RML proposal density with the target density for Example 1 (bimodal) sampling problem. Model variable (horizontal axis), data variable (vertical).}
\label{fig:bimodal_compare_q_vs_pi}
\end{figure}
We consider
the problem of
sampling from the following univariate distribution
\begin{equation}
\pi (x) = a \exp \Bigl[ -\frac{(x-\mu)^{2}}{2 \sigma_{x}^{2}}
-\frac{(g(x)-d\sbr{obs})^{2}}{2 \sigma_{d}^{2}}\Bigr],
\label{eq:pi}
\end{equation}
where $\mu=1.9$, $d\sbr{obs}=0.8$, $\sigma_{x}^{2}= 0.1$,
$\sigma_{d}^{2}=0.01$, $g(x) = 1 - 9 (x - 2 \pi/3)^2 /2$, and $a \approx 4.567$. The true target distribution is shown in Fig.~\ref{fig:bimodal_results_a} as a solid curve.
The first term in Eq.~\eqref{eq:pi} is the prior density for $x$. The second term is nonlinear and represents the likelihood in the Bayesian inverse problem. Because of the nonlinearity of the observation operator $g(\cdot)$, the posterior is bimodal.
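The bimodality is easy to verify numerically from the parameter values above (a sketch; the grid range and resolution are arbitrary choices):

```python
import numpy as np

mu, d_obs = 1.9, 0.8
sig_x2, sig_d2 = 0.1, 0.01

def g(x):
    # nonlinear observation operator of Example 1
    return 1.0 - 9.0 * (x - 2.0 * np.pi / 3.0) ** 2 / 2.0

def log_pi(x):
    # unnormalized log posterior: Gaussian prior term plus likelihood term
    return (-(x - mu) ** 2 / (2.0 * sig_x2)
            - (g(x) - d_obs) ** 2 / (2.0 * sig_d2))

# locate interior local maxima of log_pi on a fine grid
xs = np.linspace(0.5, 3.5, 4001)
v = log_pi(xs)
modes = xs[1:-1][(v[1:-1] > v[:-2]) & (v[1:-1] > v[2:])]
```

The grid search finds the two well-separated modes of the posterior, with the narrow likelihood pinning each mode close to a root of $g(x) = d\sbr{obs}$.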
An optimal value of $\rho$ was determined by trial and error, but in fact, the MH acceptance rate is not highly sensitive to the value of $\rho$. In the interval $0.5 \le \rho \le 0.8$ the acceptance rate for RML proposals is between 62\% and 64\%, with a maximum of almost 64\% at $\rho = 0.65$, which was the value we used.
For the choice $\rho = 0.65$ and $\gamma = 0.01$, the joint proposal density for RML (Fig.~\ref{fig:bimodal_compare_b}) is similar to the posterior joint density of $x$ and $d$ (Fig.~\ref{fig:bimodal_compare_a}). Differences can more easily be seen in Fig.~\ref{fig:bimodal_compare_c}, where the two pdfs are superposed. The main deficiency in the proposal density is a reduced probability of proposals in the region between the two peaks.
Figure \ref{fig:bimodal_results_a} shows the marginal posterior distribution of samples obtained using RML (Algorithm \ref{alg:RML_new}) compared with the target distribution (Eq.~\ref{eq:pi}) for a chain of length 40,000. Figure \ref{fig:bimodal_results_b} shows that the mixing of the chain is very good, as should be expected for an independence sampler with a high acceptance rate.
\begin{figure}[htbp!]
\centering
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{Figures/histogram_bimodal_gamma01_beta_65.pdf}
\caption{Distribution of samples of model variable.} \label{fig:bimodal_results_a}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{Figures/mcmc_bimodal_gamma01_beta_65.pdf}
\caption{Markov chain for model variable.} \label{fig:bimodal_results_b}
\end{subfigure}%
\caption{Metropolized RML sampling for Example 1 (bimodal pdf).}
\label{fig:bimodal_results}
\end{figure}
\subsection{Example 2: Multimodal}
In this example the number of modes is larger and one of the regions of high probability is toroidal, making sampling by random walk difficult (Fig.~\ref{fig:Example2_true_pdf}). The prior distribution for the two model variables is independent standard normal. The observation is related to the model variables as
\[ g(x) = \sum_{i=1}^{4} \exp \left[ -(x-\omega_i)^{\text{\scriptsize T}} (x-\omega_i) / (2 \epsilon) \right] \]
for $\omega_1 = (0.62, -0.09)$, $\omega_2 = (0.17, -0.04)$, $\omega_3 = (-0.76, 0.16)$, and $\omega_4 = (-0.89, 0.78)$ and $\epsilon = 0.05$. The observed value of $g$ is $d\sbr{obs} = 1.1$ with variance of observation error equal 0.05.
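In code, the forward model and the resulting unnormalized log posterior read as follows (a sketch; the exponent is read as divided by $2\epsilon$):

```python
import numpy as np

omegas = np.array([[0.62, -0.09], [0.17, -0.04], [-0.76, 0.16], [-0.89, 0.78]])
eps, d_obs, sig_d2 = 0.05, 1.1, 0.05

def g(x):
    # sum of four narrow Gaussian bumps centered at the omega_i
    r2 = np.sum((omegas - x) ** 2, axis=1)
    return float(np.sum(np.exp(-r2 / (2.0 * eps))))

def log_post(x):
    # independent standard-normal prior plus the single observation d_obs
    return -0.5 * float(np.dot(x, x)) - (g(x) - d_obs) ** 2 / (2.0 * sig_d2)
```

For example, $g(\omega_1) \approx 1.13$ because of the overlap with the nearby bump at $\omega_2$, while $g$ is essentially zero far from all centers; the level set $g(x) \approx d\sbr{obs}$ therefore traces out the complicated high-probability regions.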
Algorithm~\ref{alg:RML_new} was used with parameters $\rho=0.35$ and $\gamma = 0.01$. Figure~\ref{fig:multimodal_m-map_pdf} shows the projection of the mapping from the prior model space to the posterior model space for the first 4000 independent proposals. Only candidate transitions for which the proposal was accepted are shown in the figure.
The acceptance rate was 11\% in the independence Metropolis sampling step. As in the previous example, the acceptance rate was nearly independent of $\rho$, varying only from 11.25\% at $\rho=0.35$ to 11.17\% at $\rho=0.60$.
\begin{figure}[htbp!]
\centering
\begin{subfigure}[t]{0.43\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{Figures/multimodal_1_True_pdf.pdf}
\caption{True posterior pdf.} \label{fig:Example2_true_pdf}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{Figures/multimodal_m-map.pdf}
\caption{Mapping from prior to posterior by minimization.} \label{fig:multimodal_m-map_pdf}
\end{subfigure}%
\caption{Multimodal posterior distribution (Example 2).}
\label{fig:Example2_initial}
\end{figure}
In high-dimensional model spaces with large numbers of observations, evaluation of the Jacobian determinant of the proposal transformation will be difficult. The common approach in groundwater hydrology and in petroleum reservoir inverse problems is to accept all proposed transitions, ignoring the MH test. Figure~\ref{fig:multimodal_compare} shows the consequences of two levels of approximation compared with accurate computation of the Jacobian determinant. First, note that when the accurate Jacobian is computed, sampling of the joint posterior distribution for model variables is visually good, including in the toroidal region (Fig.~\ref{fig:multimodal_compare_a}). The acceptance rate in this case is still high for independence sampling, but periods in which the chain remains stuck in the same state for many iterations can be seen in Fig.~\ref{fig:multimodal_compare_d}. If one neglects the second derivative of the residual function (a Gauss-Newton type of approximation), the acceptance rate for independence sampling jumps to 55\% and the lengthy stuck periods vanish (Fig.~\ref{fig:multimodal_compare_e}). Unfortunately, the sampling of the toroidal region is clearly not as good as when the correct Jacobian was computed (Fig.~\ref{fig:multimodal_compare_b}).
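For low-dimensional checks such as this example, the Jacobian determinant of the proposal map can be computed by central finite differences (a generic sketch; \texttt{phi} stands for the minimization-based map from the prior draw to the proposal, which is problem-specific):

```python
import numpy as np

def jacobian_det(phi, z, h=1e-6):
    """Central-difference Jacobian determinant of a map phi: R^n -> R^n at z."""
    z = np.asarray(z, dtype=float)
    n = z.size
    J = np.empty((n, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        # column j of the Jacobian: d phi / d z_j
        J[:, j] = (phi(z + e) - phi(z - e)) / (2.0 * h)
    return np.linalg.det(J)
```

Each column requires two evaluations of the map, so the cost grows linearly with the dimension, one reason the exact determinant becomes impractical in high dimensions.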
\begin{figure}[htbp!]
\centering
\begin{subfigure}[t]{0.31\textwidth}
\centering
\begin{overpic}[width=0.95\textwidth]{Figures/multimodal_1_Full_RML_histogram.png}
\put(60,50){\footnotesize{Accept 11\%}}
\end{overpic}
\caption{RML with MH acceptance test using full Jacobian.} \label{fig:multimodal_compare_a}
\end{subfigure}%
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\begin{overpic}[width=0.95\textwidth]{Figures/multimodal_1_Approx_RML_histogram.png}
\put(60,50){\footnotesize{Accept 55\%}}
\end{overpic}
\caption{RML with MH acceptance test using approximate Jacobian.} \label{fig:multimodal_compare_b}
\end{subfigure}
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\begin{overpic}[width=0.95\textwidth]{Figures/multimodal_1_NoTest_RML_histogram.png}
\put(60,50){\footnotesize{Accept 100\%}}
\end{overpic}
\caption{RML without MH acceptance test.} \label{fig:multimodal_compare_c}
\end{subfigure}
\\
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{Figures/multimodal_1_Full_RML_mchain.png}
\caption{Evolution of $x_1$ for Algorithm~\ref{alg:RML_new}.} \label{fig:multimodal_compare_d}
\end{subfigure}%
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{Figures/multimodal_1_Approx_RML_mchain.png}
\caption{Evolution of $x_1$ with approximate Jacobian.} \label{fig:multimodal_compare_e}
\end{subfigure}
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{Figures/multimodal_1_NoTest_RML_mchain.png}
\caption{Evolution of $x_1$ for RML without Metropolization.} \label{fig:multimodal_compare_f}
\end{subfigure}
\caption{Top row: True pdf shown by solid contours. Samples of model variables from three variations of RML shown as blue dots. Bottom row: Markov chain of states of $x_1$ for methods corresponding to top row.}
\label{fig:multimodal_compare}
\end{figure}
Finally, if one neglects the MH test and simply accepts all proposed transitions, the samples can be seen to come primarily from the regions of high probability, but the distribution is not correct (Figs.~\ref{fig:multimodal_compare_c} and~\ref{fig:multimodal_compare_f}). This is especially clear in the toroidal region, but can also be seen in the isolated peaks.
\subsection{Example 3: Nongaussian prior}
In this example, we look at a problem for which the prior distribution is not Gaussian, but for which a transformation to a Gaussian variable can be introduced, after which the previous methodology can be applied. The prior distribution for the model variable is exponential with mean and standard deviation equal to 1. A single noisy measurement of $g(x) = x$ is made, with Gaussian observation error $\epsilon \sim N[0,0.36]$. The observed value is $d\sbr{obs}= 1$. The prior and posterior distributions for the model variable are shown in Fig.~\ref{fig:exponential}.
\begin{figure}[htbp!]
\centering
\begin{overpic}[width=0.45\textwidth]{Figures/exponential_prior_prior.pdf}
\put(60,50){\footnotesize{Prior}}
\end{overpic}
\begin{overpic}[width=0.45\textwidth]{Figures/exponential_prior_posterior.pdf}
\put(60,50){\footnotesize{Posterior}}
\end{overpic}
\caption{Prior and posterior model variable distributions for Example 3.} \label{fig:exponential}
\end{figure}
Since the prior distribution in this example is not Gaussian, we define a new Gaussian variable $z$ related to $x$ as $z = F_z^{-1} \left[ F_x [x] \right]$, where $F_x$ and $F_z$ are the cdfs for $x$ and $z$ respectively. After transformation of the model variable, the joint prior distribution for model and data, the joint posterior distribution, and the joint proposal distribution are shown in Fig.~\ref{fig:exp_joint}. The joint proposal density that results from minimization with $\rho = 0.25$ is similar to the joint posterior density with $\gamma = 0.01$.
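For the Exp(1) prior this anamorphosis and its inverse are one-liners (a sketch using the Python standard library; \texttt{NormalDist} supplies the standard normal cdf and quantile function):

```python
import math
from statistics import NormalDist

_N = NormalDist()  # standard normal

def to_gaussian(x):
    # z = F_z^{-1}(F_x(x)), with F_x(x) = 1 - exp(-x) the Exp(1) cdf
    return _N.inv_cdf(1.0 - math.exp(-x))

def to_exponential(z):
    # inverse anamorphosis, applied to each retained Markov-chain sample
    return -math.log(1.0 - _N.cdf(z))
```

The two maps are used in opposite directions: `to_gaussian` to set up the Gaussian-prior problem, and `to_exponential` on each chain sample to return to the original model space.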
\begin{figure}[htbp!]
\centering
\begin{overpic}[width=0.31\textwidth]{Figures/exponential_priorZD.pdf}
\put(10,85){\footnotesize{Prior density}}
\end{overpic}
\begin{overpic}[width=0.31\textwidth]{Figures/exponential_postZD.pdf}
\put(10,85){\footnotesize{Posterior density}}
\end{overpic}
\begin{overpic}[width=0.31\textwidth]{Figures/exponential_qijZD.pdf}
\put(10,85){\footnotesize{Proposal density}}
\end{overpic}
\caption{Joint prior, posterior, and proposal distributions for Gaussian model variable and data variable. Model variable on horizontal axis.} \label{fig:exp_joint}
\end{figure}
An inverse transform must be applied to each sample from the Markov chain to obtain samples in the original model space. Fig.~\ref{fig:exp_summary_a} shows that the distribution of Monte Carlo samples is indistinguishable from the target posterior distribution. When this proposal density is used in the RML independence Metropolis sampler (Algorithm \ref{alg:RML_new}) the acceptance rate is 74\%.
Because the proposals are independent and the acceptance rate is high, the mixing in the chain is very good. Figure~\ref{fig:exp_summary_b} shows the first 2000 elements of the chain, with no evidence of a burn-in period or of nonstationary behavior.
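Mixing of this kind can be quantified by the integrated autocorrelation time of the chain, which is close to 1 for a well-mixing independence sampler (a generic estimator sketch; truncating the sum at the first negative autocorrelation estimate is one common convention):

```python
import numpy as np

def integrated_autocorr_time(chain, max_lag=200):
    """Estimate tau = 1 + 2*sum_k rho_k, truncated at the first negative rho_k."""
    x = np.asarray(chain, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / x.size
    tau = 1.0
    for k in range(1, min(max_lag, x.size // 2)):
        rho = np.dot(x[:-k], x[k:]) / ((x.size - k) * var)
        if rho < 0.0:
            break
        tau += 2.0 * rho
    return tau
```

The effective number of independent samples is then roughly the chain length divided by $\tau$.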
Recall that the candidates are first generated from the joint prior, then a minimization is performed to place the proposals in regions of high probability. Fig.~\ref{fig:exp_summary_c} shows the mapping of the first 200 accepted candidates in the Markov chain from the joint prior to the proposal distribution. The MH acceptance step has already been applied for this figure, so the roughly 26\% of rejected candidates are not shown.
\begin{figure}[htbp!]
\centering
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{Figures/exponential_prior_histogram.pdf}
\caption{Distribution of samples from Algorithm 2 (histogram) compared with posterior distribution (solid curve).} \label{fig:exp_summary_a}
\end{subfigure}%
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{Figures/exponential_prior_mcmc.pdf}
\caption{First 2000 elements of the Markov chain for the model variable.} \label{fig:exp_summary_b}
\end{subfigure}%
~
\begin{subfigure}[t]{0.31\textwidth}
\centering
\includegraphics[width=0.95\textwidth]{Figures/exponential_prior_mapping.pdf}
\caption{Mapping of variables from the prior to the posterior.} \label{fig:exp_summary_c}
\end{subfigure}%
\caption{Summary plots for Example 3 in the original variable with an exponential prior distribution.} \label{fig:exp_summary}
\end{figure}
\section{Discussion}
In this paper, we proposed an augmented variable independence Metropolis sampler that uses minimization to place proposals in regions of high probability. Because the acceptance rate is high and candidates are independent, the efficiency of the chain in generating independent samples from the posterior is also quite high. The computation required for a single proposal is considerably higher than the computation required for a typical MH algorithm, but because an independence sampler is used, it is possible to parallelize the algorithm. In all example problems, the mixing was rapid and the acceptance rate was high, even when the posterior density was very complex, as in Example 2. The biggest limitation of the method will be the computation of the Jacobian determinant.
For multimodal posterior pdfs the proposal density $q(x_\ast,d_\ast)$ is not strictly positive everywhere that $\pi(x_\ast,d_\ast) > 0$, so the Markov chain will not converge to $\pi(\cdot, \cdot)$ on the joint space. The marginal proposal density $q(x_\ast)$ is, however, greater than 0 for all $x_\ast$ such that $\pi(x_\ast) > 0$.
Unlike typical applications of Metropolis random walk samplers, which often get stuck for long periods in a single mode of a multimodal distribution, the RML Metropolis sampler is more likely to remain in the regions between modes as the proposal density tends to undersample from those regions.
For cases in which the prior is not Gaussian, it is sometimes possible to use anamorphosis to transform the variables so that the prior is Gaussian, in which case the algorithm can be used for sampling. This is straightforward for single-variable problems, but finding a transformation to a multivariate Gaussian in high dimensions seems impractical.
\section{Acknowledgements}
Primary support has been provided by the CIPR/IRIS cooperative research project ``4D Seismic History Matching'' which is funded by industry partners Eni, Petrobras, and Total, as well as the Research Council of Norway (PETROMAKS).
\section{Introduction}
\label{intro}
Neutron stars are formed by the material of the inner core of massive stars, which undergo a gravitational collapse. The enormous gravitational attraction compresses this matter to very high densities such that neutron stars contain more than a solar mass within a diameter of about 25~km. In fact, neutron stars are the places in the universe where the highest densities can be found in stable equilibrium. Comparable densities can only be obtained in heavy-ion collisions for a very short period of time, e.g.~\cite{Friman2011}.
For this reason neutron stars are very promising objects to search for deconfined quark matter, which for low temperatures is expected to occur at some density beyond nuclear saturation. Whether or not neutron stars feature a quark core is currently unclear. It is also not known how strongly the transition from ordinary baryonic matter to deconfined quark matter affects the stellar structure. If the phase transition is connected with a considerable density jump (latent heat), the mass-radius relation of compact stars may feature a pronounced kink as a consequence of the sudden softening of the equation of state (EoS). If the transition to deconfined quark matter proceeds in a more continuous way, the impact on the stellar structure may be rather minor and in fact hard to detect \cite{Alford2005}.
Gravitational waves (GWs) provide a very promising tool to detect traces of quark matter if it exists in neutron stars (e.g.~\cite{Orsaria2019}), in which case these compact objects are often called hybrid stars. In particular, the merging of neutron stars and the associated GW signal may reveal the presence of quark matter because the collision is a highly dynamical event, which is strongly influenced by the stellar structure and the EoS of high-density matter.
In this contribution we discuss an unambiguous signature of a phase transition to deconfined quark matter in the GW signal of neutron star mergers~\cite{Bauswein2019}. We emphasize the importance of providing evidence that a specific signature is generated only by the presence of quark matter. To this end it is critical to show that all viable hadronic models do not produce such characteristics. We also discuss how the presence or absence of such a signature of deconfined quark matter can constrain the onset density of the quark-hadron phase transition~\cite{Bauswein2019,Blacker2020}.
As a completely new finding, we discuss the merging of two hybrid stars, i.e. systems which contain quark matter already before merging. We find that such hybrid mergers produce a signature similar to that of systems which undergo a phase transition to quark matter only during the collision. This is particularly important in a scenario where the onset of quark deconfinement occurs already at very low densities, e.g.~\cite{Blaschke2020}.
In Sect.~\ref{dynamics} we present the dynamics of neutron star mergers and highlight how the EoSs determines the outcome of the merger and the GW signal produced before and after the merging.
Sect.~\ref{Postmerger} discusses the influence of a strong first-order phase transition on the postmerger GW signal. In subsection \ref{identify} we focus on how the GW signal can reveal the occurrence of a strong first-order phase transition in the merger remnant. In this context we highlight the outcome and GW signature of hybrid mergers (subsection~\ref{hybrid}). We summarize and conclude in Sect.~\ref{Summ}.
\section{Dynamics of neutron star mergers}\label{dynamics}
There are different features of the GW signal which contain information about the properties of high-density matter. The phase before the merging of the binary components is called inspiral referring to the trajectories of the stars, which orbit around each other while the orbital separation shrinks. The GW signal of the inspiral is dominantly shaped by the orbital motion \cite{Faber2012}. The stellar structure affects the orbital dynamics such that less compact stars lead to an accelerated inspiral compared to a system of point particles of the same mass. This effect is described by the tidal deformability $\Lambda=\frac{2}{3}k_2\left(\frac{c^2 R}{G M}\right)^5$ with the stellar mass $M$ and radius $R$. $k_2$ is the tidal Love number \cite{Hinderer2008} ($c$ and $G$ are the speed of light and the gravitational constant). The tidal deformability is the EoS parameter that can be extracted from the GW signal \cite{Abbott2017}, see~\cite{Chatziioannou2020} for a review.
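The scaling of $\Lambda$ can be made explicit numerically (a sketch in SI units; the values $k_2 = 0.09$, $R = 12$~km, and $M = 1.4\,M_\odot$ are illustrative choices, not taken from the text):

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m s^-1
M_sun = 1.989e30   # solar mass, kg

def tidal_deformability(k2, radius_m, mass_kg):
    # Lambda = (2/3) k2 (c^2 R / (G M))^5: the inverse compactness to the 5th power
    return (2.0 / 3.0) * k2 * (c**2 * radius_m / (G * mass_kg)) ** 5
```

Because the inverse compactness enters to the fifth power, a radius change of only 1~km at fixed mass shifts $\Lambda$ by several tens of percent, which is why $\Lambda$ tracks the stellar radius so tightly.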
\begin{figure}
\begin{center}
\resizebox{0.75\columnwidth}{!}{%
\includegraphics{tidalquark2.pdf} }
\caption{Tidal deformability $\Lambda$ of an isolated neutron star as a function of mass $M$ for the purely hadronic DD2F EoS (black curve) and the hybrid DD2F-SF-1 EoS (green curve).}
\label{fig:lambda}
\end{center}
\end{figure}
As apparent from its definition, $\Lambda$ scales tightly with the stellar radius. The occurrence of quark matter beyond some threshold mass leads to a kink in the $\Lambda-M$ relation similar as in the mass-radius relation. An example is provided in Fig.~\ref{fig:lambda}, which shows $\Lambda(M)$ for a purely hadronic EoS (black curve) and for a hybrid model (green curve). The density regime below the onset density of the phase transition is described by the same hadronic model. Hence, the two curves coincide for masses below the threshold mass which marks the occurrence of deconfinement. For these models quark matter appears at a mass of about 1.57~$M_\odot$, where a characteristic kink is visible.
The hadronic regime of the models shown here is described by the DD2F EoS~\cite{Typel2010,Alvarez-Castillo2016} and the hybrid EoS is based on the string-flip model of \cite{Fischer2018,Bastian2018}. Additional details on these two EoS models and their parameters can be found in the supplemental material of \cite{Bauswein2019} and the references therein.
Detecting such a kink in $\Lambda(M)$ would indicate the presence of a phase transition. However, Fig.~\ref{fig:lambda} shows that it may be very difficult to identify a phase transition based entirely on the GW inspiral signal. It would require the detection of different binary mergers with slightly different masses above and below the kink. Finite-size effects on the inspiral GW signal become smaller at higher masses because the binary components are more compact and thus behave more similarly to point particles. Thus, the extraction of the tidal deformability for high-mass mergers is challenging. Moreover, measurements of $\Lambda$ will contain a systematic and a statistical error, both of which may be too large to actually identify the kink, which would require a measurement uncertainty of at most 5\% (see, however, \cite{Chen2019,Chatziioannou2019} for ideas to infer the presence of a phase transition from combining many detections).
Apart from the inspiral phase, the EoS also affects the outcome of a binary merger. The collision of the binary components forms a massive, rotating remnant, whose stability is determined by the properties of high-density matter \cite{Bauswein2017a}. The merged object may be (temporarily) stable even if its total mass exceeds the maximum mass of non-rotating neutron stars because of the strong centrifugal support by the rapid rotation. In fact, for most EoS models the formation of a rotating neutron star remnant is expected within the mass range of typical neutron star binaries around 2.7~$M_\odot$. However, for total binary masses beyond some threshold binary mass the remnant will undergo a prompt collapse to a black hole~\cite{Bauswein2013,Bauswein2020}.
For systems without direct black hole formation the GW emission of the rotating neutron star remnant provides another opportunity to learn about the properties of high-density matter \cite{Bauswein2012}. The collision itself leads to strong oscillations of the merger remnant, which produce GWs in the kHz range. An example is given in Fig.~\ref{fig:spectrum}, which shows the GW spectrum of a 1.35-1.35~$M_\odot$ merger for the two different EoS models used for Fig.~\ref{fig:lambda}. One can recognize different features in the spectrum, i.e. different frequency peaks, which correspond to different oscillation modes of the remnant and are very characteristic of the EoS \cite{Bauswein2019a}. As such signals are a result of a dynamical evolution, it requires detailed relativistic hydrodynamical simulations in three dimensions to follow the merging process and to compute the corresponding GW emission (see e.g.~\cite{Bauswein2019b,Blacker2020} for snapshots from simulations).
\begin{figure}
\begin{center}
\resizebox{0.75\columnwidth}{!}{%
\includegraphics{spectrum.pdf} }
\caption{Gravitational wave spectrum of the cross polarization at a distance of 20~Mpc along the polar axis for the purely hadronic DD2F model (black curve) and the hybrid DD2F-SF-1 EoS (green curve). Figure taken from \cite{Bauswein2019}.}
\label{fig:spectrum}
\end{center}
\end{figure}
These simulations also reveal that the merging results in a strong density increase. See e.g. \cite{Bauswein2019,Blacker2020} for a figure showing the time evolution of the maximum density. The density increase also implies that the GW signal of the postmerger stage carries information from the high-density regime of the EoS. While the inspiral signal only contains information about the EoS regime of the progenitor stars, the consideration of the postmerger GW signal is a natural choice to search for the impact of a phase transition to deconfined quark matter which may occur at higher densities \cite{Most2019,Bauswein2019,Weih2020,Bauswein2020,Blacker2020}. Generally, the impact of the EoS on the postmerger phase can be understood as follows. The EoS determines the stellar structure of the remnant, which affects the frequencies of the different oscillation modes. EoSs which are soft and thus result in compact stars and compact merger remnants, will produce GW emission at higher frequencies. Indeed, it is known that the dominant oscillation frequency scales with the size of the merger remnant and with the size of non-rotating neutron stars \cite{Bauswein2012a}.
\section{Postmerger gravitational wave emission and first-order phase transitions}\label{Postmerger}
Because the GW signal of a merger probes two different density regimes (before and after the merger), it can reveal a strong phase transition occurring in the merger remnant \cite{Bauswein2019}.
The spectra in Fig.~\ref{fig:spectrum} show the postmerger GW emission from a merger described by a hadronic EoS (black) and from an event where a phase transition takes place in the merger remnant (green). Note that both EoSs are identical at densities below the phase transition. One can clearly see that the hybrid EoS with a phase transition to deconfined quark matter leads to higher postmerger frequencies. This is understandable because the softening of the EoS leads to a more compact remnant which enhances the postmerger GW frequencies.
The dominant postmerger oscillation frequency $f_\mathrm{peak}$ of the hybrid model is shifted by about 450~Hz with respect to the hadronic model. Note that $f_\mathrm{peak}$ is caused by the fundamental quadrupole fluid mode and is a robust feature of neutron star merger simulations~\cite{Stergioulas2011}. Typically, it lies between 2~kHz and 4~kHz, depending on the total mass of the system and the EoS \cite{Bauswein2012,Bauswein2012a,Hotokezaka2013,Takami2014,Bernuzzi2015,Bauswein2015}. Since $f_\mathrm{peak}$ is expected to be measurable in the near future with enhanced GW detectors \cite{Clark2016,Chatziioannou2017,Torres-Rivas2018,Easter2019}, it is a quantity that can indicate the presence of a strong phase transition in merger remnants.
\begin{figure}
\begin{center}
\resizebox{0.75\columnwidth}{!}{%
\includegraphics{fpeak-lam135_135135withDD2SF.pdf} }
\caption{Dominant postmerger GW frequency $f_\mathrm{peak}$ as a function of the tidal deformability $\Lambda$ for 1.35--1.35~$M_{\odot}$ mergers with different microphysical EoSs. Black crosses refer to results with purely hadronic EoSs while green plus signs depict results from hybrid DD2F-SF models. For these hybrid models the phase transition takes place during the merger, hence the inspiraling stars are still purely hadronic. Red circles display results from hybrid DD2-SF models. These hybrid EoSs have very low onset densities such that the inspiraling neutron stars are hybrid stars (see Fig.~\ref{fig:MR} for the mass-radius relations). The solid curve is a least squares fit with a second-order polynomial to the data excluding results from hybrid EoSs. The gray shaded area illustrates the largest deviation of the data of purely hadronic models from the fit. Results falling within this area are consistent with the assumption of a purely hadronic EoS, while outliers are interpreted as evidence for a strong phase transition. The results from hadronic and DD2F-SF models are taken from \cite{Blacker2020}.}
\label{fig:flam}
\end{center}
\end{figure}
However, a large value of $f_\mathrm{peak}$ by itself may not necessarily be indicative of a phase transition. For instance, for this binary mass configuration, $f_\mathrm{peak}$ values around and above 3.5~kHz can also be found for very soft hadronic EoSs~\cite{Bauswein2012}.
\subsection{Identifying a strong first-order phase transition}\label{identify}
An unambiguous signature of a strong phase transition can be obtained by comparing the tidal deformability $\Lambda$ with $f_\mathrm{peak}$~\cite{Bauswein2019}. Such a comparison is shown in Fig.~\ref{fig:flam}. It compiles $f_\mathrm{peak}$ as a function of $\Lambda$ for simulations of 1.35--1.35~$M_\odot$ mergers with many different EoSs. See \cite{Blacker2020} for a list of hadronic and hybrid EoS models used in this figure; two additional hybrid models with an early onset of quark deconfinement are included here for the first time and further discussed below. All EoS used for Fig.~\ref{fig:flam} are compatible with current observational constraints by GW170817 and by the maximum-mass limit set by pulsar observations \cite{Abbott2017,Bauswein2017,Antoniadis2013,Cromartie2019}.
In Fig.~\ref{fig:flam}, $\Lambda$ refers to the tidal deformability of the individual inspiraling star. Note that for equal-mass binaries $\Lambda$ coincides with the effective tidal deformability $\tilde{\Lambda}=\frac{16}{13}\frac{(M_1+12 M_2)M_1^4\Lambda_{1}+(M_2+12 M_1)M_2^{4}\Lambda_{2}}{(M_1+M_2)^{5}}$ of the binary system, which is the parameter that is actually measured by GW observations (see~\cite{Chatziioannou2020}). Here, $M_i$ and $\Lambda_i$ refer to the mass and the tidal deformability of the individual stars, respectively. For the sake of clarity we omit a distinction in the following and refer to~\cite{Blacker2020} for further discussions. Black crosses show results from simulations with purely hadronic EoS models. The solid black line is a least squares fit to the data from those hadronic models with a second-order polynomial. The gray shaded area visualizes the maximum deviation of this data from the fit (the fit parameters can be found in \cite{Blacker2020}). It is apparent that for these models $f_\mathrm{peak}$ scales tightly with $\Lambda$ and that this relation is well described by the displayed fit. Note that three of the hadronic EoSs contain a phase transition to hyperonic matter. However, they still follow the $f_\mathrm{peak}$--$\Lambda$ relation, which indicates that this transition does not strongly influence the remnant's structure. See \cite{Blacker2020} for more details.
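That $\tilde{\Lambda}$ reduces to $\Lambda$ for equal masses follows directly from the definition and is easy to check (a direct transcription of the formula above; the numbers in the test are illustrative):

```python
def lambda_tilde(m1, m2, lam1, lam2):
    # effective (mass-weighted) tidal deformability of the binary
    num = (m1 + 12.0 * m2) * m1**4 * lam1 + (m2 + 12.0 * m1) * m2**4 * lam2
    return (16.0 / 13.0) * num / (m1 + m2) ** 5
```

For $M_1 = M_2$ and $\Lambda_1 = \Lambda_2 = \Lambda$ the prefactors combine to $(16 \cdot 26)/(13 \cdot 32) = 1$, so $\tilde{\Lambda} = \Lambda$ exactly.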
The green plus signs display simulation results with different hybrid EoSs. One can see that the postmerger frequencies clearly deviate from the tight $f_\mathrm{peak}$--$\Lambda$ relation valid for purely hadronic EoSs. The postmerger frequencies of hybrid models are significantly shifted towards higher frequencies. In these simulations the phase transition occurs during merging. This implies that $f_\mathrm{peak}$ is affected by the occurrence of deconfined quark matter while $\Lambda$ is not. Note that all of these hybrid models are based on the same underlying hadronic reference EoS model (at densities below the onset density of quark deconfinement), which is why they all have the same value of the tidal deformability $\Lambda$.
If $f_\mathrm{peak}$ exceeds the empirical $f_\mathrm{peak}$--$\Lambda$ relation by more than 200~Hz (more than the maximum deviation of a hadronic model from the fit in Fig.~\ref{fig:flam}), this is a clear, unambiguous signature of a strong first-order phase transition, since all hadronic models behave differently. Also note that, as stated before, a large $f_\mathrm{peak}$ value by itself is not indicative of a phase transition since other hadronic models can lead to $f_\mathrm{peak}$ values comparable to those from hybrid EoSs. Only the comparison of $f_\mathrm{peak}$ and $\Lambda$ with the empirical $f_\mathrm{peak}$--$\Lambda$ relation can reveal the phase transition.
Importantly, such a signature can also be observed for other binary mass configurations, including asymmetric mergers. See~\cite{Blacker2020} for similar relations with other total binary masses and mass ratios different from unity.
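The detection criterion described above can be sketched schematically in Python. The quadratic fit coefficients below are purely hypothetical placeholders (the actual fit parameters are published in \cite{Blacker2020}); only the logic of the 200~Hz threshold is illustrated:

```python
# Schematic sketch of the criterion in the text. The coefficients are
# HYPOTHETICAL placeholders, NOT the published fit parameters.
HYPOTHETICAL_FIT = (4.5, -1.5e-3, 2.0e-7)  # a, b, c of f = a + b*L + c*L^2 [kHz]

def fpeak_fit(lam, coeffs=HYPOTHETICAL_FIT):
    """Second-order polynomial fit f_peak(Lambda) in kHz (placeholder)."""
    a, b, c = coeffs
    return a + b * lam + c * lam**2

def indicates_phase_transition(fpeak_khz, lam, threshold_khz=0.2):
    """Flag a strong first-order phase transition when the measured
    f_peak exceeds the hadronic f_peak--Lambda relation by > 200 Hz."""
    return fpeak_khz - fpeak_fit(lam) > threshold_khz
```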
\subsection{Hybrid star mergers}\label{hybrid}
Figure~\ref{fig:flam} also shows simulation results from two other hybrid EoSs (red circles). These hybrid models are based on a different underlying hadronic model DD2 \cite{Typel2010,Hempel2010}, but the same quark model as for the DD2F-SF EoSs is employed~\cite{Bastian2018,Fischer2018}. In particular, these EoSs feature a very early onset of quark deconfinement such that the phase transition to deconfined quark matter occurs already in relatively light stars below 1~$M_\odot$.
\begin{figure}
\begin{center}
\resizebox{0.75\columnwidth}{!}{%
\includegraphics{QuarkToV.pdf} }
\caption{Mass-radius relations of cold, non-rotating neutron stars with hybrid DD2-SF-A/B models (colored curves) together with the purely hadronic model DD2. A phase transition to deconfined quark matter leads to a kink in the relations and typically to more compact neutron stars. One can see that $M_\mathrm{onset}$, i.e. the minimal neutron star mass with deconfined quark matter in the core, is below 1~M$_\odot$ for both models. Below $M_\mathrm{onset}$ the two curves coincide with the DD2 mass-radius relation.}
\label{fig:MR}
\end{center}
\end{figure}
Fig.~\ref{fig:MR} shows the mass-radius relations of these two models together with the underlying hadronic model DD2. As one can see, the hybrid EoSs have relatively low onset densities of the phase transition. The lightest resulting neutron stars with deconfined quark matter in their cores have masses of M$_\mathrm{onset}$ = 0.96~$M_{\odot}$ and M$_\mathrm{onset}$ = 0.82~$M_{\odot}$, respectively.
There is likely a lower mass limit of neutron stars because during their formation a certain threshold mass should be exceeded to trigger the gravitational collapse. Hence, for such an EoS all neutron stars are expected to be in fact hybrid stars containing a quark matter core. This obviously implies that binary mergers are then in fact hybrid star mergers. As apparent from Fig.~\ref{fig:MR} and the previous discussion, in this case the inspiral GW signal is also affected by the presence of quark matter. Specifically, this implies that the tidal deformability carries information about the phase transition. This raises the question of whether a comparison between $\Lambda$ and $f_\mathrm{peak}$ still indicates the presence of quark deconfinement within such a scenario. Recall that for the other hybrid models discussed above, a sudden softening of the EoS by the phase transition during merging is responsible for the increase of the postmerger frequency. This effect may not be expected if quark matter is already present before merging.
However, as one can see from Fig.~\ref{fig:flam} the red circles still deviate from the empirical $f_\mathrm{peak}$--$\Lambda$ relation for purely hadronic EoSs by more than 200~Hz. This means that the identification of a strong phase transition is still possible even for the merger of two hybrid stars. Since temperatures rise during merging, we speculate that temperature effects also play a crucial role here in increasing the postmerger frequency. On the one hand, thermal pressure support in quark matter may be reduced in the sense that an effective thermal ideal-gas index is smaller than for ordinary nuclear matter, which leads to an increase of $f_\mathrm{peak}$ (see~\cite{Bauswein2010}). On the other hand, finite temperatures may shift the phase boundaries to lower densities implying that the quark matter core grows significantly and leads to an additional compactification of the remnant. As these simulations represent only a very first explorative study of this scenario, future work should consider an even larger set of hybrid EoSs of this type to understand their detailed impact on the GW signal.
Before concluding we emphasize that the relations discussed here are also useful to place a constraint on the onset density of quark deconfinement. This is based on the observation that measurable GW features like $f_\mathrm{peak}$ also inform about the densities which are reached in the postmerger remnant (see Fig.~\ref{fig:frho}). Hence, in a first step the aforementioned signature can be employed to determine whether or not quark deconfinement took place in a merger. Then, the relation between the maximum density reached in the merger remnant and the dominant oscillation frequency can be employed to place an upper or lower bound on the onset density of quark deconfinement. We refer to~\cite{Blacker2020} for a concrete outline of this procedure and an extensive discussion of the involved subtleties.
\begin{figure}
\begin{center}
\resizebox{0.75\columnwidth}{!}{%
\includegraphics{rhomax-fpeak_135135ForProc.pdf} }
\caption{Maximum rest-mass density $\rho^\mathrm{max}_\mathrm{max}$ during the first 5 milliseconds after merging as a function of the dominant postmerger GW frequency $f_\mathrm{peak}$ for 1.35--1.35~$M_{\odot}$ mergers with different microphysical EoSs. Black crosses refer to results with purely hadronic EoSs while green plus signs depict results from hybrid DD2F-SF models. The solid curve is a least squares fit to the data excluding results from hybrid DD2F-SF models. Figure taken from \cite{Blacker2020}.}
\label{fig:frho}
\end{center}
\end{figure}
\section{Summary and discussion}\label{Summ}
We have discussed an unambiguous and measurable GW signature to identify a strong phase transition to deconfined quark matter occurring in neutron star mergers. We show that a comparison between the tidal deformability and the dominant postmerger frequency can provide strong evidence for the presence of a phase transition. The tidal deformability $\Lambda$ characterizes the impact of the EoS during the pre-merger inspiral phase and can be inferred from the corresponding GW signal. The postmerger frequency $f_\mathrm{peak}$ yields information about the EoS regime probed in the massive merger remnant, i.e. it is characteristic of the high-density regime of the EoS. This GW feature will also be measurable with sufficient precision in the future. A significant increase of $f_\mathrm{peak}$ with respect to an empirical relation between $\Lambda$ and $f_\mathrm{peak}$, which holds for purely hadronic EoSs, is indicative of a phase transition.
A strong first-order phase transition after the merger suddenly softens the EoS. This leads to a strong compactification of the remnant, which cannot result from any hadronic model since those do not admit such a strong softening of the EoS. A more compact remnant implies a higher oscillation frequency and thus a higher GW frequency. Loosely speaking, the inspiral does not yet know about the later occurrence of quark matter and the sudden softening of the EoS at higher densities. If a phase transition takes place during merging as the densities increase, the postmerger frequency is then much higher than expected based on the tidal deformability probing the EoS during the inspiral.
Along these lines we have extended our previous studies here by considering hybrid mergers, where the increase of the postmerger frequency is at least not obvious based on these arguments. Nevertheless, we find that hybrid mergers, i.e. systems where quark matter is already present during the inspiral, also lead to a characteristic increase of the postmerger GW frequency. Hence, the comparison between $\Lambda$ and $f_\mathrm{peak}$ indicates the onset of quark deconfinement also within such a scenario. This demonstrates the extraordinary scientific value of GW instruments with dedicated capabilities to detect postmerger frequencies in the range of a few kHz~\cite{Hild2011,Martynov2019}.
We suspect that for hybrid mergers mostly temperature effects are responsible for the frequency increase of the postmerger phase. However, we remark that hybrid mergers should be investigated within a more extensive, systematic study to corroborate this signature also for other hybrid EoS models. Note that this concerns mostly relatively extreme models with a very early onset with $M_\mathrm{onset}$ below $\sim 1.1~M_\odot$. For higher $M_\mathrm{onset}$ we expect that there will be merger configurations leading to a very robust signature resulting from a ``hadronic'' inspiral and an ``unexpectedly'' compact remnant with quark core and associated GW frequency increase as discussed in~\cite{Bauswein2019}. Phase transitions with low onset density and with $M_\mathrm{onset}$ somewhat larger than $\sim 1.1~M_\odot$ may in particular be detectable by the very different inspiral behavior of binaries with stellar masses above and below $M_\mathrm{onset}$~\cite{Chen2019,Chatziioannou2019}.
We also mention that the mass ejection of mergers undergoing a phase transition may not be too different from purely hadronic mergers~\cite{Bauswein2019b} indicating that the electromagnetic counterpart may not differ strongly~\cite{Metzger2019}. However, the detection of electromagnetic counterparts may yield information about the collapse behavior and thus indirectly about the occurrence of a phase transition~\cite{Bauswein2020}.
\begin{acknowledgement}
We thank N.-U. Bastian for providing EoS tables. We thank T. Fischer and D. Blaschke for helpful discussions. This work was funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 279384907 - SFB 1245 and - Project-ID 138713538 - SFB 881 (``The Milky Way System'', subproject A10). AB acknowledges support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No. 759253.
\end{acknowledgement}
\input{main.bbl}
\end{document}
\section{Introduction}
The discovery of three dimensional black holes by Ba$\mathrm{\tilde{n}}$%
ados, Teitelboim and Zanelli (BTZ black holes) \cite{Banados:1992wn} has
been known as one of the greatest achievements in the study of gravity. The
BTZ black hole provides us a framework to understand gravitational
interactions in low dimensional spacetimes \cite{Witten:2007kt}. It also
accommodates a simpler setting to explore many of the mysteries of black
hole statistical mechanics in higher dimensions \cite{Carlip}. In three
spacetime dimensions, general relativity becomes a topological field theory,
whose dynamics can be largely described holographically by a two dimensional
conformal field theory at the boundary of spacetime \cite{Carlip}. Thus the
BTZ black hole is a natural environment to realize the idea of AdS/CFT
correspondence. It was found that quasinormal modes determining the
relaxation time of rotating BTZ black hole perturbations are exactly in
agreement with the location of the poles of the retarded correlation
function of the corresponding perturbations in the dual conformal field
theory \cite{Birmingham}. This serves as a quantitative test of the AdS/CFT
correspondence, see \cite{Wang:2005vs,Konoplya:2011qq} for a review and
references therein on this topic.
Considerable progress has been made in the study of BTZ black holes where
gravity is minimally coupled to matter fields \cite{Carlip95,Carlip}.
When the 2+1 gravity is coupled to electromagnetism, we have the charged BTZ
When the 2+1 gravity is coupled to electromagnetism, we have the charged BTZ
black hole solution \cite{Banados:1992wn}. But compared with the neutral
holes, the charged BTZ black hole has not been thoroughly investigated. One
of the possible reasons is that the charged BTZ black hole is not a
spacetime of constant curvature \cite{BTZ94}. This makes the analysis in
terms of identifications in AdS space not applicable for charged BTZ black
holes \cite{BTZ94}, which becomes an obstacle to understanding the geometrical
properties there. Another reason is that there is a logarithmic function in
the metric expression of the charged BTZ black hole, which makes the
analytic investigation difficult.
The motivation of this work is to study the charged BTZ black holes
carefully and present clearly the physical properties once concealed by
mathematical difficulty. We will first concentrate on the charged BTZ black
hole solutions obtained from the standard Einstein-Maxwell equations in 2+1
spacetime dimensions with a negative cosmological constant. Then we will
generalize our discussions to charged BTZ black holes obtained when the
nonlinear Born-Infeld (BI) electromagnetism is brought in \cite%
{Mansoori:2015sit,Hendi:2015wxa}. The Born-Infeld theory (BI theory) was
first introduced to solve the infinite self-energy problem by imposing a maximum
strength of the electromagnetic field \cite{Born:1934gh}. The generalization
of the Maxwell field to the nonlinear electrodynamics will lead to new
solutions and introduce new phenomena to the system under consideration \cite%
{Born:1934gh}. It is of great interest to investigate how the highly
nonlinear corrections to the gauge matter fields influence the bulk black
hole spacetime structure and its dynamical and thermodynamical properties in
$2+1$ gravity.
In this paper we will first go over charged BTZ black hole solutions with
standard Maxwell field and nonlinear BI field. We will discuss the spacetime
structures, and disclose rich spacetime properties of the black hole with
the presence of the charge parameter. The strength of the charge parameter
determines whether the phase transition can happen or not. We will further
use the Landau-Lifshitz theory for thermodynamic fluctuations to discuss the
thermodynamical phase transitions in the charged BTZ black holes. We will
show that some second moments diverge in the extreme limit, indicating that
a thermodynamical phase transition occurs.
It was argued that at the phase transition point, when the nonextreme black
hole becomes extreme, the Hawking temperature is zero, which indicates that
for the extreme black hole there is only superradiance but no Hawking
radiation, in sharp contrast to the nonextreme black
holes. Different radiation properties between extreme and nonextreme black
holes were used as an indication of the occurrence of the second order phase
transition \cite{Pavon,Pavon:1991kh,Cai:1993aa}.
However, there are some other discussions on Hawking radiation when Hawking
temperature is zero \cite{Chen:2012zn,Ong:2014nha}. For sufficiently low
black hole temperature, Hiscock and Weems modeled the evaporation of
a 4-dimensional asymptotically flat charged black hole by $\dfrac{dM}{dt}%
=-\alpha a T^{4}\sigma+\dfrac{Q}{r_{h}}\dfrac{dQ}{dt}$. They emphasized that
the emission of charged particles can be modeled separately from the thermal
Hawking flux of neutral particles, but they are all part of Hawking
radiation. This is because the emission of charged particles is thermodynamically
related to a chemical potential associated with the electromagnetic field of
the black hole. Even in the limit $T=0$, the mass loss occurs from $\dfrac{dQ%
}{dt}$ term alone due to pair production by the Schwinger effect. The
presence of particle production at zero temperature can also be read from
Hawking's original formula $\langle N_{j\omega lp} \rangle=\dfrac{%
\Gamma_{j\omega lp}}{\exp((\omega-e\Phi)/T)\pm 1}$ of \cite{Hawking:1974sw}
for the number of particle emission. See also Gibbons \cite{Gibbons:1975kk}
for discussions regarding emission from a charged black hole.
These subtleties regarding particle emission at zero Hawking temperature
make it difficult to classify the nonextreme and extreme phases of black
holes through their different radiation properties. This raises the
question of whether there are other phenomena that indicate the sharp
difference when the phase transition occurs between the nonextreme and
extreme black holes. This serves as another motivation for the present
paper. We will focus on the dynamical properties
of the charged BTZ black holes by studying the quasinormal modes of scalar
perturbation. We find that QNMs can serve as another probe for the phase
transition between the nonextreme and extreme black holes. The results tell
us that the extreme charged BTZ black hole is easily destroyed if we add
more perturbations to the system, while the nonlinearity of the
electromagnetic field can protect the black hole spacetime partially and
make the perturbation outside the black hole decay faster.
\section{ Charged BTZ black hole solutions}
In this section, we first review the derivation of charged BTZ black hole
solutions in the presence of linear Maxwell (LM) \cite{Banados:1992wn} and
nonlinear Born-Infeld (BI) \cite{Mansoori:2015sit} electrodynamics. Then, we
discuss the spacetime properties of these charged BTZ black holes.
Let us begin with the action of Einstein gravity in the presence of gauge
field,%
\begin{equation}
S_{grav}=S_{EH}+S_{gauge}+S_{GH}, \label{Action}
\end{equation}%
where $S_{EH}$, $S_{gauge}$ and $S_{GH}$ are the Einstein-Hilbert, gauge
field and Gibbons-Hawking actions defined as
\begin{eqnarray}
S_{EH} &=&-\frac{1}{16\pi }\int_{M}d^{3}x\sqrt{-g}\left( R+\frac{2}{l^{2}}%
\right) , \\
S_{gauge} &=&-\frac{1}{16\pi }\int_{M}d^{3}x\sqrt{-g}L(F), \\
S_{GH} &=&-\frac{1}{8\pi }\int_{\partial M}d^{2}x\sqrt{-h}K,
\end{eqnarray}%
in which $R$ is the Ricci scalar for the bulk manifold $M$, $l$ is the AdS
radius and $L(F)$ is the Lagrangian of electrodynamic field $F_{\mu \nu }$
where $F=F_{\mu \nu }F^{\mu \nu }$, $F_{\mu \nu }=\partial _{\lbrack \mu
}A_{\nu ]}$ and $A_{\nu }$ is the electrodynamic gauge potential. $h$ is the
determinant of the $2$-dimensional metric on the boundary of manifold $M$ ($%
\partial M$) and $K$ is the trace of the extrinsic curvature of the
boundary. Lagrangians of electrodynamic field for linear Maxwell and
nonlinear BI cases are%
\begin{equation}
L(F)=\left\{
\begin{array}{ll}
-F & \text{LM} \\
4\beta ^{2}\left( 1-\sqrt{1+\frac{F}{2\beta ^{2}}}\right) & \text{BI}%
\end{array}%
\right. ,
\end{equation}%
where $\beta $ is the parameter of nonlinearity. Nonlinear BI
electrodynamics reduces to the linear Maxwell case when $\beta\rightarrow
\infty $. Varying the action (\ref{Action}) with respect to the metric $%
g_{\mu \nu }$ and the gauge potential $A_{\mu }$, we obtain%
\begin{eqnarray}
R_{\mu \nu }-\frac{1}{2}g_{\mu \nu }\left( R+\frac{2}{l^{2}}\right) &=&\frac{%
1}{2}g_{\mu \nu }L(F)-2F_{\mu \sigma }F_{\nu }^{\sigma }\frac{dL(F)}{dF},
\label{EiE} \\
\partial _{\mu }\left( \sqrt{-g}\frac{dL(F)}{dF}F^{\mu \nu }\right) &=&0.
\label{ElE}
\end{eqnarray}%
Substituting the gauge potential $A_{\mu }=\varphi (r)\delta _{\mu }^{0}$
into Eq. (\ref{ElE}), we obtain the nonvanishing component of the
electrodynamic field tensor
\begin{equation}
F_{tr}=-F_{rt}=\frac{q}{r}\times \left\{
\begin{array}{ll}
1 & \text{LM} \\
\Gamma ^{-1} & \text{BI}%
\end{array}%
\right. ,
\end{equation}%
where $q$ is a constant related to the total charge of the black hole and $%
\Gamma =\sqrt{1+q^{2}/\left( r^{2}\beta ^{2}\right) }$. Eq. (\ref{EiE})
admits the charged BTZ black hole solution as%
\begin{equation}
ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+r^{2}d\theta ^{2},
\end{equation}%
in which
\begin{equation}
f(r)=\frac{r^{2}}{l^{2}}-m+\left\{
\begin{array}{ll}
-2q^{2}\ln {\frac{r}{l}} & \text{LM} \\
2r^{2}\beta ^{2}\left( 1-\Gamma \right) +q^{2}\left[ 1-2\ln \left( r\frac{%
\left( 1+\Gamma \right) }{2l}\right) \right] \ & \text{BI}%
\end{array}%
\right. , \label{metric}
\end{equation}%
where $m$ is a constant which is proportional to the total mass of the black
hole and can be obtained by using the fact that metric function vanishes at
the event horizon $r_{+}$%
\begin{equation}
m=\left\{
\begin{array}{ll}
\frac{r_{+}^{2}}{l^{2}}-2q^{2}\ln \left( \frac{r_{+}}{l}\right) & \text{LM}
\\
\frac{r_{+}^{2}}{l^{2}}+2r_{+}^{2}\beta ^{2}\left( 1-\Gamma _{+}\right)
+q^{2}\left[ 1-2\ln \left( r_{+}\frac{\left( 1+\Gamma _{+}\right) }{2l}%
\right) \right] & \text{BI}%
\end{array}%
\right. , \label{m}
\end{equation}%
where $\Gamma _{+}=\sqrt{1+q^{2}/\left( r_{+}^{2}\beta ^{2}\right) }$.
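As a consistency check (illustrative code of our own, not from the original work), the two branches of the metric function can be evaluated numerically; for large $\beta$ the Born-Infeld branch reduces to the Maxwell branch:

```python
import math

# Illustrative sketch: the two branches of Eq. (metric).
def f_LM(r, m, q, l=1.0):
    """Linear Maxwell branch of the BTZ metric function."""
    return r**2 / l**2 - m - 2.0 * q**2 * math.log(r / l)

def f_BI(r, m, q, beta, l=1.0):
    """Born-Infeld branch of the BTZ metric function."""
    gamma = math.sqrt(1.0 + q**2 / (r**2 * beta**2))
    return (r**2 / l**2 - m
            + 2.0 * r**2 * beta**2 * (1.0 - gamma)
            + q**2 * (1.0 - 2.0 * math.log(r * (1.0 + gamma) / (2.0 * l))))
```

For $\beta \rightarrow \infty$ one has $2r^{2}\beta^{2}(1-\Gamma)\rightarrow -q^{2}$ and $(1+\Gamma)/2\rightarrow 1$, so `f_BI` approaches `f_LM`, in line with the stated Maxwell limit.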
In the following, we will explore the spacetime properties of linearly and
nonlinearly charged BTZ black holes.
\subsection{Linearly charged BTZ black holes}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{q.eps}
\caption{{}The behavior of the left hand side of Eq.(\protect\ref{eq:q})
versus $q$. There is a maximum at $q_{\mathrm{ext}}=1$. }
\label{fig:q}
\end{figure}
The metric function for the linearly charged BTZ black hole reads%
\begin{equation}
f(r)=\frac{r^{2}}{l^{2}}-m-2q^{2}\ln {\frac{r}{l}}.
\end{equation}%
In \cite{Hendi:2015wxa}, it was argued that for fixed parameters $l=1$ and $%
q=1$, there is a naked singularity when $m$ is sufficiently small, while when
the mass parameter is at a critical value, there is an extreme BTZ black hole.
When the mass is above the critical value, the charged BTZ black hole can
have two horizons enveloping the central singularity, namely the inner
Cauchy horizon $r_{-}$ and the outer event horizon $r_{+}$.
What happens if we instead treat $q$ as a variable? The answer was not
provided in \cite{Hendi:2015wxa}. In order to study the extreme charged BTZ
black hole, we first set the metric function $f(r)$ to zero, which
determines the location of the horizon. Furthermore we require the derivative
of the metric function $f^{\prime }(r)$ to vanish at the horizon which
ensures the extremal condition to be satisfied. From these conditions,
\begin{equation}
f(r_{+\mathrm{ext}})=\frac{r_{+\mathrm{ext}}^{2}}{l^{2}}-m_{\mathrm{ext}%
}-2q_{\mathrm{ext}}^{2}\ln {\frac{r_{+\mathrm{ext}}}{l}}=0,\text{ \ and \ }%
f^{\prime }(r_{+\mathrm{ext}})=\frac{2r_{+\mathrm{ext}}}{l^{2}}-\frac{2q_{%
\mathrm{ext}}^{2}}{r_{+\mathrm{ext}}}=0,
\end{equation}
we can have an equation relating $q_{\mathrm{ext}}$ and $m_{\mathrm{ext}}$
\begin{equation}
q_{\mathrm{ext}}^{2}-2q_{\mathrm{ext}}^{2}\ln {q}_{\mathrm{ext}}=m_{\mathrm{%
ext}}. \label{eq:q}
\end{equation}%
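The relation above can be verified numerically (a minimal sketch with our own function name): the extremal horizon sits at $r_{+}=ql$, and the resulting $m_{\mathrm{ext}}(q)$ attains its maximum value of one at $q=1$:

```python
import math

def extremal_mass(q):
    """m_ext(q) = q^2 - 2 q^2 ln q, Eq. (eq:q)."""
    return q**2 - 2.0 * q**2 * math.log(q)

# dm_ext/dq = -4 q ln q vanishes at q = 1, where m_ext = 1. Hence no
# extreme linearly charged BTZ black hole exists for m > 1, while for
# 0 < m < 1 the equation m_ext(q) = m has two roots q_1 < 1 < q_2.
```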
The behavior of the left-hand side of Eq. (\ref{eq:q}) is shown in Fig. \ref%
{fig:q}, which has a peak equal to one at $q=1$. When $m=1$, there is
only one solution of Eq. (\ref{eq:q}), which is at $q=1$ and the charged BTZ
black hole is extreme. When $0<m<1$, there will be two solutions for $q_{%
\mathrm{ext}}$ from Eq. (\ref{eq:q}). The black hole can be extreme for
these two charge values when $m<1$. However when $m>1$, there is no solution
of (\ref{eq:q}), which indicates that the extreme black hole condition
cannot be respected so that the charged BTZ black hole is always nonextreme
when $m>1$.
To have a clearer picture, we plot the metric function $f(r)$ in Fig. \ref%
{fig:MF} for different values of $m$. For $m<1$ (we choose $m=0.5$), we have
two values $q_{1}$ and $q_{2}$ to accommodate the extreme black hole. We can
see from Fig. \ref{fig2a} that when $q<q_{1}$ or $q>q_2$, the black hole is
nonextreme with two horizons. But in the range $q_{1}<q<q_{2}$, there is no
root of the metric function so that the black hole does not exist. When $m=1$%
, we see that there is just one value for the critical charge, namely $q=1$,
to accommodate the extreme charged BTZ black holes. Below or above this
critical charge, the black hole is always nonextreme, which is shown in Fig. %
\ref{fig2b}. For $m>1$, it is clear from Fig. \ref{fig2c} that there are
always two horizons enveloping the central singularity and the black hole is
nonextreme.
\begin{figure}[t]
\centering%
\subfigure[~$m=0.5$]{
\label{fig2a}\includegraphics[width=.3\textwidth]{MF05.eps} }
\subfigure[~$m=1$]{
\label{fig2b}\includegraphics[width=.3\textwidth]{MF1.eps} }
\subfigure[~$m=2$]{
\label{fig2c}\includegraphics[width=.3\textwidth]{MF2.eps}}
\caption{{}The behaviors of $f(r)$ for different values of $q$ when $m=0.5$,
$1$ and $2$.}
\label{fig:MF}
\end{figure}
\begin{figure}[t]
\centering%
\subfigure[~$R$ vs $q$]{
\label{fig3a}\includegraphics[width=.45\textwidth]{R-q.eps} }
\subfigure[~$T$ vs $q$]{
\label{fig3b}\includegraphics[width=.45\textwidth]{T-q.eps} }
\caption{{} The behaviors of distance $R$ between inner and outer horizons
(left) and the temperature $T$ (right) of charged BTZ black hole.}
\label{fig:RT-q}
\end{figure}
We further define the coordinate difference $R=r_{+}-r_{-}$ between the
inner and outer horizons to show its dependence on the parameter $q$ for
different chosen mass values in Fig. \ref{fig3a}. When the black hole
geometrical mass is smaller than one ($m<1$), there will be two horizons
protecting the central singularity when the black hole charge is small.
These two horizons approach each other with the increase of the black hole
charge and the nonextreme black hole becomes extreme at $q=q_{1}$. When the
charge parameter is between $q_{1}$ and $q_{2}$, there is no root for the
metric function $f(r)$ and the black hole does not exist. At $q=q_{2}$, for
the chosen $m$ value smaller than one, a degenerate horizon appears to
envelop the singularity again. For $q>q_2$, the difference between the event
and Cauchy horizons grows as the black hole becomes more charged. When $m$
approaches one from below, we see from Fig. \ref{fig3a}
that $q_1$ and $q_2$ approach each other. They finally merge when $m=1$ and
only one extreme charged BTZ black hole exists at $q=1$. Below $q=1$, with
the increase of the electric charge, the two horizons approach each other
and finally degenerate at $q=1$. Above $q=1$, the horizons separate again
as $q$ grows and the black hole moves away from extremality. In
the case when $m>1$, the two horizons will never degenerate so that the
black hole is always nonextreme. There exists a turning point of the mass
value $m_{r}$. In the range $1<m<m_{r}$, the two horizons can come closer to
each other and then separate away with the increase of $q$, while when $%
m>m_{r}$, the distance between the two horizons of the black hole will
increase monotonically.
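The horizon structure described above can be reproduced with a simple root search (an illustrative sketch; the bracket values and function names are ours):

```python
import math

def f_LM(r, m, q, l=1.0):
    """Linearly charged BTZ metric function (repeated so this sketch is
    self-contained)."""
    return r**2 / l**2 - m - 2.0 * q**2 * math.log(r / l)

def bisect_root(func, a, b, tol=1e-12):
    """Plain bisection; func(a) and func(b) must have opposite signs."""
    fa = func(a)
    for _ in range(200):
        mid = 0.5 * (a + b)
        fm = func(mid)
        if abs(fm) < tol or (b - a) < tol:
            return mid
        if fa * fm < 0.0:
            b = mid
        else:
            a, fa = mid, fm
    return 0.5 * (a + b)

def horizon_distance(m, q, l=1.0):
    """R = r_+ - r_-. f has its minimum at r = q*l, so a horizon pair
    exists only if f(q*l) < 0; the outer bracket 50*l is ad hoc."""
    r_min = q * l
    if f_LM(r_min, m, q, l) >= 0.0:
        return None  # extreme case or no horizons at all
    r_minus = bisect_root(lambda r: f_LM(r, m, q, l), 1e-6 * l, r_min)
    r_plus = bisect_root(lambda r: f_LM(r, m, q, l), r_min, 50.0 * l)
    return r_plus - r_minus
```

For example, $m=2$, $q=1$ gives two distinct horizons, while $m=0.5$, $q=1$ lies in the forbidden window $q_{1}<q<q_{2}$ and returns no horizons, consistent with Fig. \ref{fig2a}.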
We can calculate the charged BTZ black hole temperature%
\begin{equation}
T=\frac{f^{\prime }\left( r_{+}\right) }{4\pi }=\frac{1}{2\pi }\left( \frac{%
r_{+}}{l^{2}}-\frac{q^{2}}{r_{+}}\right)
\end{equation}%
which is shown in Fig. \ref{fig3b}. The black hole temperature confirms the
property we disclosed in the spacetime structure. When the black hole mass
is small ($m<1$), the black hole temperature decreases to zero with the
increase of the black hole charge from $0$ to $q_{1}$. When the charge is
big enough ($q\geq q_{2}$), the black hole can exist and the temperature
will rise from zero. When $m=1$, the temperature decreases to zero at $q=1$
and then rises with the increase of charge $q$. For $m>1$, we see from Fig. %
\ref{fig3b} that there exists a turning point of the mass $m_{t}$. When $%
1<m<m_{t}$, with the increase of the electric charge, the temperature
first decreases and then increases, but the minimum of the temperature is
always above zero. When $m>m_{t}$, the temperature increases monotonically
with the increase of the electric charge.
We can clearly see that the qualitative behavior of $T$ is closely related
to $R$.
\subsection{Nonlinearly charged BTZ black holes}
The metric function of nonlinearly BI charged BTZ black hole is
\begin{equation}
f(r)=\frac{r^{2}}{l^{2}}-m+2r^{2}\beta ^{2}\left( 1-\Gamma \right) +q^{2}%
\left[ 1-2\ln \left( r\frac{\left( 1+\Gamma \right) }{2l}\right) \right] .
\end{equation}%
When $\beta\rightarrow \infty $, the results reduce to those presented in the
above subsection. When $\beta $ becomes smaller, the nonlinearity becomes
stronger.
We repeat the discussion above to examine the spacetime structure for the
nonlinearly charged BTZ black hole. The location of the horizon and the
extreme condition are described by
\begin{equation}
f(r_{+})=\frac{r_{+}^{2}}{l^{2}}-m_{\mathrm{ext}}+2r_{+}^{2}\beta ^{2}\left(
1-\Gamma _{+}\right) +q_{\mathrm{ext}}^{2}\left[ 1-2\ln \left( r_{+}\frac{%
\left( 1+\Gamma _{+}\right) }{2l}\right) \right] =0,
\end{equation}%
and%
\begin{equation}
f^{\prime }(r_{+})=\frac{2r_{+}^{2}\beta ^{2}\left( 1+\Gamma _{+}\right)
+2q_{\mathrm{ext}}^{2}\left( 1-2l^{2}\beta ^{2}\Gamma _{+}\right) }{%
l^{2}\beta ^{2}r_{+}\left( 1+\Gamma _{+}\right) \Gamma _{+}}=0,
\end{equation}%
where we can find the relation between $q_{\mathrm{ext}}$ and $m_{\mathrm{ext%
}}$ in the form
\begin{equation}
q_{\mathrm{ext}}^{2}-2q_{\mathrm{ext}}^{2}\ln \left( \frac{q_{\mathrm{ext}}%
\sqrt{1+4l^{2}\beta ^{2}}}{2l\beta }\right) =m_{\mathrm{ext}}. \label{eq:q2}
\end{equation}
From Fig. \ref{fig:qNL}, we see that there always exists a maximum value for
the left hand side of Eq. (\ref{eq:q2}). This maximum value $4l^{2}\beta
^{2}/\left( 1+4l^{2}\beta ^{2}\right) $ happens at $q_{\mathrm{ext}}=2l\beta
/\sqrt{1+4l^{2}\beta ^{2}}$. When $m=4l^{2}\beta ^{2}/\left( 1+4l^{2}\beta
^{2}\right) $, there is only one solution $q_{1}$ which can make the black
hole extreme. For $m<4l^{2}\beta ^{2}/\left( 1+4l^{2}\beta ^{2}\right) $,
there are two solutions $q_{1}$ and $q_{2}$ which satisfy the extreme black
hole condition. Interestingly, $q_{1}$ and $q_{2}$ can become closer when
the nonlinearity is increased in the electromagnetic field. When $%
m>4l^{2}\beta ^{2}/\left( 1+4l^{2}\beta ^{2}\right) $, the extreme condition
cannot be satisfied so that the black hole is always nonextreme.
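A quick numerical check of the stated maximum (illustrative code of our own; we take the square-root factor inside the logarithm of Eq. (\ref{eq:q2}), as required for consistency with the Maxwell limit):

```python
import math

def extremal_mass_BI(q, beta, l=1.0):
    """m_ext(q) for the Born-Infeld case, square root inside the log."""
    arg = q * math.sqrt(1.0 + 4.0 * l**2 * beta**2) / (2.0 * l * beta)
    return q**2 - 2.0 * q**2 * math.log(arg)

# The maximum 4 l^2 beta^2 / (1 + 4 l^2 beta^2) sits at
# q = 2 l beta / sqrt(1 + 4 l^2 beta^2); for beta -> infinity both
# expressions reduce to the Maxwell values m_ext = 1 at q = 1.
```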
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{qNL.eps}
\caption{{}The behaviors of the left hand side of Eq. (\protect\ref{eq:q2})
versus $q$ for $\protect\beta =0.2$, $0.5$, $1$, $4$, and infinity,
respectively, from the bottom curve to the top one, where we take $l=1$. }
\label{fig:qNL}
\end{figure}
\begin{figure}[t]
\centering%
\subfigure[~$m=0.25$]{
\label{fig4a}\includegraphics[width=.3\textwidth]{R-q025NL.eps} }
\subfigure[~$m=1$]{
\label{fig4b}\includegraphics[width=.3\textwidth]{R-q1NL.eps} }
\subfigure[~$m=2$]{
\label{fig4c}\includegraphics[width=.3\textwidth]{R-q2NL.eps} }
\caption{{}The distance $R$ between inner and outer horizons for $m=0.25$, $%
1 $ and $2$ for nonlinearly charged BTZ black holes.}
\label{fig:R-qNL}
\end{figure}
To display the property more clearly, we again plot in Fig. \ref{fig:R-qNL}
the behavior of the distance between the inner and the outer horizons $R$
with respect to $q$ for $m=0.25$, $1 $ and $2$, respectively. For $m<1$ ($%
m=0.25$), we find that with the decrease of $\beta$, the two extreme values
of the charge $q$ get closer to each other, become one and then disappear.
This means that when the nonlinearity in the electromagnetic field is strong
enough, there is no extreme charged BTZ black hole. For $m=1$, the extreme
black hole only exists when the gravity is coupled to the standard Maxwell
field. For gravity coupled to the BI field, when the nonlinearity is not
strong enough, the two horizons of the black hole can approach each other
first and then separate with the increase of the electric charge. But they
cannot merge. When the electromagnetic field is strongly nonlinear, the
difference between two horizons always becomes bigger with the increase of
the black hole charge. For $m>1$, the influence of the nonlinearity is
similar to that described for $m=1$ case.
The behavior of Hawking temperature $T$ for the nonlinearly charged BTZ
black holes is shown in Fig. \ref{fig:T-qNL}. It shows qualitatively
similar behavior to that we discussed for $R$. Compared with the linearly charged
BTZ black hole, the nonlinearity in the electromagnetic field makes the
Hawking temperature higher and it is more difficult to reach zero
temperature in the nonlinearly charged BTZ black holes.
\begin{figure}[t]
\centering%
\subfigure[~$m=0.25$]{
\label{fig5a}\includegraphics[width=.3\textwidth]{T-q025NL.eps} }
\subfigure[~$m=1$]{
\label{fig5b}\includegraphics[width=.3\textwidth]{T-q1NL.eps} }
\subfigure[~$m=2$]{
\label{fig5c}\includegraphics[width=.3\textwidth]{T-q2NL.eps} }
\caption{{}The behavior of Hawking temperature $T$ for $m=0.25$, $1$ and $2$
for nonlinearly charged black holes.}
\label{fig:T-qNL}
\end{figure}
\section{Thermodynamical phase transitions}
Thermodynamics of the BTZ black holes has been discussed in \cite{BTZ94}.
More references can be found in the reviews \cite{Carlip95, Carlip}. Most
studies focused on BTZ black holes with gravity minimally coupled to matter
fields. For the charged BTZ black holes, the first law of thermodynamics was
constructed in \cite{alexis}, the entropy was examined in
\cite{Cadoni,Myung,Wang2}, the mass bound and thermodynamical behavior were
investigated in \cite{Cadoni2010}, and the phase transition was studied in
\cite{Wang}. Recently the thermodynamical stability of the charged BTZ black
hole was investigated in \cite{Hendi:2015wxa} by examining the heat
capacity, a method first proposed by Davies
\cite{Davies:1978mf,Davies:1978zz,Davies:1989ey}. This analysis was extended
to the nonlinearly charged BTZ black holes in \cite{Hendi2}.
Here we are going to examine the thermodynamical phase transition in the
charged BTZ black holes more carefully. It was argued in \cite{Pavon} that
the heat capacity cannot be a genuine indicator of the phase transition,
because there is no sharp change in physical properties at the transition
point identified from the heat capacity. Besides, the comparison of the free
energies of two competing configurations is often used in the context of
first order phase transitions, such as the Hawking-Page phase transition,
which describes the competition between a pure anti-de Sitter spacetime and
an AdS black hole spacetime. However, the phase transition we study here is
between nonextreme and extreme black holes, which is of second order. We
therefore turn to the Landau-Lifshitz theory of thermodynamic fluctuations.
Employing this theory, the fluctuations in the rates of change of mass,
angular momentum and other relevant quantities of different black holes were
examined, and it was found that some second moments of the fluctuations of
relevant quantities diverge when the black holes become extreme, marking the
occurrence of the second order phase transition. Below we will further use
the Landau-Lifshitz theory to study the phase transition in the charged BTZ
black holes.
According to Landau-Lifshitz theory \cite{LL1}, in a fluctuation-dissipative
process, the flux $\dot{X}_{i}$ (the dot denotes a temporal derivative) of a
given thermodynamic quantity $X_{i}$ is expressed as%
\begin{equation}
\dot{X}_{i}=-\sum_{j}\Gamma _{ij}\chi _{j},
\end{equation}%
where $\Gamma _{ij}$ and $\chi _{i}$ are the phenomenological transport
coefficients and thermodynamic force conjugate to the flux $\dot{X}_{i}$,
respectively. Also, the entropy production rate is given by%
\begin{equation}
\dot{S}=\sum_{i}\pm \chi _{i}\dot{X}_{i}.
\end{equation}%
The second moments in the fluctuations of the fluxes read%
\begin{equation}
\left\langle \delta \dot{X}_{i}\delta \dot{X}_{j}\right\rangle =\left(
\Gamma _{ij}+\Gamma _{ji}\right) \delta _{ij}, \label{smom}
\end{equation}%
where the angular brackets denote the mean value with respect to the steady
state and the fluctuations $\delta \dot{X}_{i}$ are the spontaneous
deviations from the steady state value $\left\langle \dot{X}%
_{i}\right\rangle $ (we set $k_{B}=1$). The Kronecker delta $\delta _{ij}$
in Eq. (\ref{smom}) guarantees that correlations vanish when the two fluxes
are independent. We can then obtain the rate of entropy production as%
\begin{eqnarray}
\dot{S}\left( M,Q\right) &=&\left( \frac{\partial S}{\partial M}\right) _{Q}%
\dot{M}+\left( \frac{\partial S}{\partial Q}\right) _{M}\dot{Q} \notag \\
&=&\frac{\dot{M}}{T}-\frac{\dot{Q}}{T}\left( \frac{\partial M}{\partial Q}%
\right) _{S} \notag \\
&=&\frac{\dot{M}}{T}-\frac{U\dot{Q}}{T},
\end{eqnarray}%
where we have used%
\begin{equation}
T=\left( \frac{\partial M}{\partial S}\right) _{Q},\text{ \ \ \ \ }U=\left(
\frac{\partial M}{\partial Q}\right) _{S}, \label{TU}
\end{equation}%
and%
\begin{equation}
\left( \frac{\partial M}{\partial S}\right) _{Q}\left( \frac{\partial S}{%
\partial Q}\right) _{M}\left( \frac{\partial Q}{\partial M}\right) _{S}=-1,
\end{equation}%
in which $S$, $M$ and $Q$ are entropy, total mass and total charge of the
black hole defined as \cite{Hendi:2015wxa}%
\begin{equation}
S=\frac{\pi r_{+}}{2},\text{ \ \ }Q=\frac{q}{2}\text{ \ and \ }M=\frac{m}{8},
\label{SQM}
\end{equation}%
and $U$ is the electric potential energy. Using Eqs. (\ref{m}), (\ref{TU})
and (\ref{SQM}), one can calculate the temperature and the electric
potential energy as%
\begin{equation*}
T=\left\{
\begin{array}{ll}
\frac{S}{\pi ^{2}l^{2}}-\frac{Q^{2}}{S} & \text{LM} \\
\frac{S}{\pi ^{2}l^{2}}-\frac{Q^{2}}{S}+\frac{2S\beta ^{2}}{\pi ^{2}}\left(
1-\Gamma _{S}\right) +\frac{Q^{2}}{S\Gamma _{S}}+\frac{\pi ^{2}Q^{4}}{\beta
^{2}S^{3}\Gamma _{S}\left( 1+\Gamma _{S}\right) } & \text{BI}%
\end{array}%
\right. ,
\end{equation*}%
and%
\begin{equation*}
U=\left\{
\begin{array}{ll}
-2Q\ln \left( \frac{2S}{\pi l}\right) & \text{LM} \\
-2Q\ln \left( \frac{S}{\pi l}\left( 1+\Gamma _{S}\right) \right) +Q\left(
1-\Gamma _{S}^{-1}\right) -\frac{\pi ^{2}Q^{3}}{\beta ^{2}S^{2}\Gamma
_{S}\left( 1+\Gamma _{S}\right) } & \text{BI}%
\end{array}%
\right. ,
\end{equation*}%
where $\Gamma _{S}=\sqrt{1+Q^{2}\pi ^{2}/\left( S^{2}\beta ^{2}\right) }$.
It is worthwhile to mention that for the extreme black hole case where $%
f^{\prime }(r_{+})=0$, the Hawking temperature vanishes since $T=f^{\prime
}(r_{+})/4\pi $.
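As a quick consistency check on the LM expressions above, note that $dM=T\,dS+U\,dQ$ requires the Maxwell relation $\left( \partial T/\partial Q\right) _{S}=\left( \partial U/\partial S\right) _{Q}$; for the LM case both sides equal $-2Q/S$, and $T=0$ picks out the extremal locus $S=\pi lQ$ (i.e. $r_{+}=lq$). The following sketch is our own illustrative code (not from the source), with $l=1$:

```python
import math

l = 1.0  # AdS radius, set to 1 for illustration

def T_LM(S, Q):
    # Hawking temperature of the linearly (Maxwell) charged BTZ black hole
    return S / (math.pi**2 * l**2) - Q**2 / S

def U_LM(S, Q):
    # electric potential energy in the LM case
    return -2.0 * Q * math.log(2.0 * S / (math.pi * l))

def d_dQ(f, S, Q, h=1e-6):
    # central finite difference at fixed S
    return (f(S, Q + h) - f(S, Q - h)) / (2.0 * h)

def d_dS(f, S, Q, h=1e-6):
    # central finite difference at fixed Q
    return (f(S + h, Q) - f(S - h, Q)) / (2.0 * h)

S, Q = 2.0, 0.3
lhs = d_dQ(T_LM, S, Q)  # (dT/dQ)_S
rhs = d_dS(U_LM, S, Q)  # (dU/dS)_Q
# both sides equal -2Q/S, and T vanishes on the extremal locus S = pi*l*Q
```

The same finite-difference check can be applied to the BI expressions, with $\Gamma _{S}$ included.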
The rate of the mass loss is \cite{His,Cardoso:2005cd}%
\begin{equation}
\frac{dM}{dt}=-b\alpha \sigma T^{3}+U\frac{dQ}{dt}. \label{Md}
\end{equation}%
The first term on the right-hand side of (\ref{Md}) describes the thermal
mass loss due to Hawking radiation. Note that the power of the temperature
is dimension-dependent. This term is just the Stefan-Boltzmann law, in which
$b$ denotes the radiation constant, $\alpha $ depends on the number of
species of massless particles, and $\sigma $ is the geometrical-optics
cross-section. The second term on the right-hand side of Eq. (\ref{Md})
accounts for the mass loss due to charged particles; in fact, it is the same
term that appears in the first law of black hole mechanics, namely $UdQ$.
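For a fixed charge ($\dot{Q}=0$) the first law gives $\dot{S}=\dot{M}/T=-b\alpha \sigma T^{2}$, so the LM black hole evaporates towards the extremal entropy $S_{\mathrm{ext}}=\pi lQ$, where the Stefan-Boltzmann flux shuts off. Below is a forward-Euler sketch of this evolution (illustrative only: we set $l$ and the combination $b\alpha \sigma $ to 1, values the source does not fix):

```python
import math

l, Q = 1.0, 0.3  # AdS radius and fixed charge (illustrative values)
k = 1.0          # stands in for b*alpha*sigma in the mass-loss law

def T_LM(S):
    # LM Hawking temperature at fixed charge Q
    return S / (math.pi**2 * l**2) - Q**2 / S

S = 2.0                     # initial entropy, above the extremal value
S_ext = math.pi * l * Q     # extremal entropy, where T = 0
dt = 0.01
for _ in range(20000):
    S += -k * T_LM(S)**2 * dt  # dS/dt = (dM/dt)/T = -k T^2 at fixed Q
# S decreases monotonically but never crosses S_ext, where T -> 0
```

The evaporation slows quadratically near extremality, so the extremal configuration is approached only asymptotically.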
From the relations described and calculated above, we can obtain the
correlation functions, or second moments, of the corresponding
thermodynamical quantities as%
\begin{equation}
\left\langle \delta \dot{M}\delta \dot{M}\right\rangle =-2T\dot{M},\text{ \
\ \ \ }\left\langle \delta \dot{Q}\delta \dot{Q}\right\rangle =-\frac{2T\dot{%
Q}}{U},\text{ \ \ \ \ }\left\langle \delta \dot{M}\delta \dot{Q}%
\right\rangle =U\left\langle \delta \dot{Q}\delta \dot{Q}\right\rangle ,
\end{equation}%
\begin{equation}
\left\langle \delta \dot{S}\delta \dot{M}\right\rangle =\frac{1}{T}%
\left\langle \delta \dot{M}\delta \dot{M}\right\rangle -\frac{U}{T}%
\left\langle \delta \dot{M}\delta \dot{Q}\right\rangle =\frac{1}{T}%
\left\langle \delta \dot{M}\delta \dot{M}\right\rangle -\frac{U^{2}}{T}%
\left\langle \delta \dot{Q}\delta \dot{Q}\right\rangle =-2\dot{M}+2U\dot{Q},
\end{equation}%
\begin{equation}
\left\langle \delta \dot{S}\delta \dot{Q}\right\rangle =\frac{1}{T}%
\left\langle \delta \dot{M}\delta \dot{Q}\right\rangle -\frac{U}{T}%
\left\langle \delta \dot{Q}\delta \dot{Q}\right\rangle =\frac{U}{T}%
\left\langle \delta \dot{Q}\delta \dot{Q}\right\rangle -\frac{U}{T}%
\left\langle \delta \dot{Q}\delta \dot{Q}\right\rangle =0,
\end{equation}%
\begin{eqnarray}
\left\langle \delta \dot{S}\delta \dot{S}\right\rangle &=&\frac{1}{T^{2}}%
\left\langle \delta \dot{M}\delta \dot{M}\right\rangle +\frac{U^{2}}{T^{2}}%
\left\langle \delta \dot{Q}\delta \dot{Q}\right\rangle -\frac{2U}{T^{2}}%
\left\langle \delta \dot{M}\delta \dot{Q}\right\rangle =\frac{1}{T^{2}}%
\left\langle \delta \dot{M}\delta \dot{M}\right\rangle -\frac{U^{2}}{T^{2}}%
\left\langle \delta \dot{Q}\delta \dot{Q}\right\rangle \notag \\
&=&-\frac{2\dot{M}}{T}+\frac{2U\dot{Q}}{T}=\frac{\left\langle \delta \dot{S}%
\delta \dot{M}\right\rangle }{T},
\end{eqnarray}%
\begin{equation}
\left\langle \delta \dot{T}\delta \dot{T}\right\rangle
=M_{SS}^{2}\left\langle \delta \dot{S}\delta \dot{S}\right\rangle
+M_{SQ}^{2}\left\langle \delta \dot{Q}\delta \dot{Q}\right\rangle
+2M_{SS}M_{SQ}\left\langle \delta \dot{S}\delta \dot{Q}\right\rangle
=M_{SS}^{2}\left\langle \delta \dot{S}\delta \dot{S}\right\rangle
+M_{SQ}^{2}\left\langle \delta \dot{Q}\delta \dot{Q}\right\rangle ,
\end{equation}%
\begin{equation}
\left\langle \delta \dot{S}\delta \dot{T}\right\rangle =\frac{M_{SS}}{T}%
\left\langle \delta \dot{S}\delta \dot{M}\right\rangle +\frac{M_{SQ}}{T}%
\left\langle \delta \dot{M}\delta \dot{Q}\right\rangle -\frac{UM_{SS}}{T}%
\left\langle \delta \dot{S}\delta \dot{Q}\right\rangle -\frac{UM_{SQ}}{T}%
\left\langle \delta \dot{Q}\delta \dot{Q}\right\rangle =M_{SS}\left\langle
\delta \dot{S}\delta \dot{S}\right\rangle .
\end{equation}%
Note that, for calculating $\left\langle \delta \dot{T}\delta \dot{T}%
\right\rangle $ and $\left\langle \delta \dot{S}\delta \dot{T}\right\rangle $%
, we have used%
\begin{equation}
\dot{T}(S,Q)=\left( \frac{\partial T}{\partial S}\right) _{Q}\dot{S}+\left(
\frac{\partial T}{\partial Q}\right) _{S}\dot{Q}=M_{SS}\dot{S}+M_{SQ}\dot{Q},
\end{equation}%
in which $M_{XY}=\partial ^{2}M/\partial X\partial Y$. Since $T$ vanishes
in the extreme black hole case, the second moments $\left\langle \delta \dot{S}%
\delta \dot{S}\right\rangle $, $\left\langle \delta \dot{T}\delta \dot{T}%
\right\rangle $ and $\left\langle \delta \dot{S}\delta \dot{T}\right\rangle $
diverge there. The divergence of these second moments means that the
fluctuations become enormous, which breaks down the rigorous meaning of the
thermodynamical quantities. This is precisely the characteristic of a phase
transition point. This property holds for the nonlinearly charged BTZ black
hole as well.
\section{The dynamical perturbations}
It is an interesting question whether the phase transition can be examined
through physical phenomena. The argument of Hiscock and Weems tells us that
the Hawking radiation is not exactly zero when the black hole temperature
vanishes \cite{Wang}, which undermines attempts to signal the second order
black hole phase transition through different radiation properties. In this
section we explore the properties of dynamical perturbations in the charged
BTZ black hole background. We will examine the behavior of the quasinormal
modes (QNMs) of scalar perturbations and argue that the dynamical
perturbation behavior can serve as a new probe of the phase transition
between the extreme and nonextreme black holes.
The QNM frequencies of a black hole do not depend on the details of the
perturbing field outside the black hole; they depend only on the properties
of the background spacetime. Thus QNMs can reflect the dynamical properties
of black holes. It was argued in \cite{Koutsoumbas:2006xj} that the QNMs of
electromagnetic perturbations can give evidence of a second-order phase
transition of a topological black hole to a hairy configuration. Further
support for QNMs as probes of phase transitions was provided in \cite%
{Shen:2007xk,Wang3}. In a recent paper, it was reported that the signature
of the Van der Waals-like small-large charged AdS black hole phase
transition can also be observed in the QNMs \cite{Wang4}. It is of interest
to generalize these studies to the dynamical perturbations of the charged
BTZ black holes and to examine whether the QNMs of the perturbation can
again be a physical indicator of the thermodynamical phase transition.
In calculating the QNMs, we have tried three different numerical methods
\cite{Konoplya:2011qq}, namely the Horowitz-Hubeny method, the shooting
method and the asymptotic iteration method (AIM). We found the AIM to be the
most precise and efficient, especially when the black hole approaches
extremality. In the following we first introduce the AIM and then present
the numerical results.
The AIM was first used to solve eigenvalue problems of second order
homogeneous linear differential equations \cite{Ciftci:2005xn}, and was
later applied to black hole QNMs. Consider a second order
homogeneous linear differential equation
\begin{equation}
\chi ^{\prime \prime }=\lambda _{0}(x)\chi ^{\prime }+s_{0}(x)\chi ,
\label{eq:standardform}
\end{equation}%
where $\lambda _{0}(x)$ and $s_{0}(x)$ are smooth functions in some interval
$[a,b]$ and taking the derivative of Eq.~(\ref{eq:standardform}) with
respect to $x$, we obtain the following equation
\begin{equation}
\chi ^{\prime \prime \prime }=\lambda _{1}(x)\chi ^{\prime }+s_{1}(x)\chi ,
\end{equation}%
where
\begin{equation}
\lambda _{1}(x)=\lambda _{0}^{\prime }(x)+s_{0}(x)+\lambda _{0}^{2}(x),\text{
\ and \ }s_{1}=s_{0}^{\prime }(x)+s_{0}(x)\lambda _{0}(x).
\end{equation}%
Repeating this step iteratively, we can obtain the $(n+2)$-th derivatives
\begin{equation}
\chi ^{(n+2)}=\lambda _{n}(x)\chi ^{\prime }+s_{n}(x)\chi ,
\end{equation}%
where
\begin{equation}
\lambda _{n}(x)=\lambda _{n-1}^{\prime }(x)+s_{n-1}(x)+\lambda
_{0}(x)\lambda _{n-1}(x),\text{ \ and \ }s_{n}(x)=s_{n-1}^{\prime
}(x)+s_{0}(x)\lambda _{n-1}(x). \label{eq:AIM_iteration}
\end{equation}%
For sufficiently large $n$, the asymptotic condition of the method is
imposed~as \cite{Cho:2012}
\begin{equation}
\frac{s_{n}(x)}{\lambda _{n}(x)}=\frac{s_{n-1}(x)}{\lambda _{n-1}(x)}.
\label{asym}
\end{equation}%
The QNMs can be derived from the above ``quantization condition". However,
the derivatives of $\lambda _{n}(x)$ and $s_{n}(x)$ in each iteration can
slow down the numerical implementation of the AIM considerably and also lead
to precision problems. These drawbacks were overcome in the improved version
of AIM \cite{Cho:2010}. $\lambda _{n}(x) $ and $s_{n}(x)$ can be expanded in
Taylor series around the point $\xi $ at which the AIM is performed,%
\begin{equation}
\lambda _{n}(x)=\sum_{i=0}^{\infty }c_{n}^{i}(x-\xi )^{i},\text{ \ and \ }%
s_{n}(x)=\sum_{i=0}^{\infty }d_{n}^{i}(x-\xi )^{i}. \label{eq:AIM_expansion}
\end{equation}%
Here $c_{n}^{i}$ and $d_{n}^{i}$ are the $i$th Taylor coefficients of $%
\lambda _{n}(x)$ and $s_{n}(x)$, respectively. Substituting these
expansions into Eq.~(\ref{eq:AIM_iteration}), we obtain a set of recursion
relations for the coefficients as
\begin{equation}
c_{n}^{i}=(i+1)c_{n-1}^{i+1}+d_{n-1}^{i}+%
\sum_{k=0}^{i}c_{0}^{k}c_{n-1}^{i-k},\text{ \ and \ }%
d_{n}^{i}=(i+1)d_{n-1}^{i+1}+\sum_{k=0}^{i}d_{0}^{k}c_{n-1}^{i-k}.
\label{eq:AIM_exp_iteration}
\end{equation}%
Consequently, the ``quantization condition" can be expressed as%
\begin{equation}
d_{n}^{0}c_{n-1}^{0}-d_{n-1}^{0}c_{n}^{0}=0, \label{eq:quantization}
\end{equation}%
which no longer requires the derivative operator. Both the accuracy and
speed of the AIM computation are greatly improved by this change.
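To make the improved iteration concrete, the following sketch (our own illustrative code, not from the source) applies the coefficient recursion (\ref{eq:AIM_exp_iteration}) and the quantization condition (\ref{eq:quantization}) to the textbook test case $\chi ^{\prime \prime }=2x\chi ^{\prime }+(1-E)\chi $ (the harmonic oscillator in AIM form, whose exact eigenvalues are $E=2n+1$), expanded about the point $\xi $:

```python
def aim_delta(E, xi=0.0, n_iter=8, i_max=12):
    """Quantization residual d_n^0 c_{n-1}^0 - d_{n-1}^0 c_n^0 for
    lambda_0 = 2x, s_0 = 1 - E, Taylor-expanded about x = xi."""
    c_prev = [2.0 * xi, 2.0] + [0.0] * (i_max - 1)  # c_0^i
    d_prev = [1.0 - E] + [0.0] * i_max              # d_0^i
    c0, d0 = list(c_prev), list(d_prev)
    delta = 0.0
    for _ in range(n_iter):
        c_new, d_new = [0.0] * (i_max + 1), [0.0] * (i_max + 1)
        for i in range(i_max):
            # convolution terms of the improved-AIM recursion
            conv_c = sum(c0[k] * c_prev[i - k] for k in range(i + 1))
            conv_d = sum(d0[k] * c_prev[i - k] for k in range(i + 1))
            c_new[i] = (i + 1) * c_prev[i + 1] + d_prev[i] + conv_c
            d_new[i] = (i + 1) * d_prev[i + 1] + conv_d
        delta = d_new[0] * c_prev[0] - d_prev[0] * c_new[0]
        c_prev, d_prev = c_new, d_new
    return delta

def aim_eigenvalue(lo, hi, tol=1e-10):
    """Bisection on the quantization condition aim_delta(E) = 0."""
    f_lo = aim_delta(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = aim_delta(mid)
        if f_lo * f_mid <= 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)
```

Bisecting on $E\in \lbrack 0.5,1.5]$ recovers the ground-state eigenvalue $E=1$; for the black hole problem the same machinery is fed the $\lambda _{0}$, $s_{0}$ given below and the root search is carried out in the complex frequency $\omega $.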
Now we apply this method to charged BTZ black holes. The massless
Klein-Gordon equation is
\begin{equation}
\nabla ^{\nu }\nabla _{\nu }\Psi =0. \label{ChSc}
\end{equation}%
We consider a solution of (\ref{ChSc}) of the form%
\begin{equation}
\Psi =e^{-i\omega t}R(r)Y(\theta),
\end{equation}%
which allows us to decompose the differential equation (\ref{ChSc}) into
two parts%
\begin{equation}
\frac{\partial ^{2}Y(\theta)}{\partial \theta^{2}}=0,
\end{equation}%
\begin{equation}
R^{\prime \prime }(r)+\left[ \frac{f^{\prime }(r)}{f(r)}+\frac{1}{r}\right]
R^{\prime }(r)+\frac{\omega ^{2}R(r)}{f(r)^{2}}=0. \label{Rad}
\end{equation}%
where we set the separation constant to zero. Making the definitions%
\begin{equation}
R(r)=\frac{\psi (r)}{\sqrt{r}},
\end{equation}%
and%
\begin{equation}
dr_{\ast }=\frac{dr}{f(r)},
\end{equation}%
where $r_{\ast }$ is the tortoise coordinate, we can rewrite (\ref{Rad}) in
the Schr\"{o}dinger form%
\begin{equation}
\frac{\partial ^{2}\psi }{\partial r_{\ast }^{2}}+\left( \omega
^{2}-V(r)\right) \psi =0, \label{eq:radial_eq}
\end{equation}%
where%
\begin{equation}
V(r)=\frac{f(r)f^{\prime }(r)}{2r}-\frac{f(r)^{2}}{4r^{2}}.
\end{equation}%
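For orientation, the shape of $V(r)$ in the linearly charged case can be inspected numerically. The sketch below is our own illustrative code and assumes the standard linearly charged BTZ lapse $f(r)=-m+r^{2}/l^{2}-2q^{2}\ln (r/l)$, which reproduces the temperature $T=f^{\prime }(r_{+})/4\pi $ quoted above; $V$ vanishes at the horizon (where $f=0$) and grows like $3r^{2}/(4l^{4})$ at large $r$:

```python
import math

m, q, l = 2.0, 0.5, 1.0  # sample parameters (illustrative)

def f(r):
    # assumed LM lapse: f(r) = -m + r^2/l^2 - 2 q^2 ln(r/l)
    return -m + r**2 / l**2 - 2.0 * q**2 * math.log(r / l)

def fp(r):
    # f'(r)
    return 2.0 * r / l**2 - 2.0 * q**2 / r

def V(r):
    # effective potential of the radial Schrodinger-form equation
    return f(r) * fp(r) / (2.0 * r) - f(r)**2 / (4.0 * r**2)

# locate the outer horizon f(r_+) = 0 by bisection on [1, 2]
lo, hi = 1.0, 2.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
rp = 0.5 * (lo + hi)
# V(rp) = 0 since f(rp) = 0, and V > 0 outside the horizon
```

Since $V\rightarrow \infty $ at the AdS boundary, the perturbation obeys a Dirichlet condition at infinity and is purely ingoing at the horizon, which is the boundary-value problem the AIM solves.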
We take the coordinate transformation $\xi =1-r_{+}/r$. At infinity, $%
r\rightarrow \infty $ gives $\xi \rightarrow 1$, and at the horizon, $%
r\rightarrow r_{+}$ gives $\xi \rightarrow 0$; therefore the expansion point
$\xi $ can be chosen between $0$ and $1$. Applying $\xi =1-r_{+}/r$, Eq. (%
\ref{eq:radial_eq}) becomes%
\begin{equation}
\frac{\partial ^{2}\psi }{\partial \xi ^{2}}=\lambda _{0}(\xi )\frac{%
\partial \psi }{\partial \xi }+s_{0}(\xi )\psi . \label{AIM}
\end{equation}%
In the remainder of this section we express $\lambda _{0}$ and $s_{0}$ for
the linearly and nonlinearly charged cases separately and apply the AIM to
calculate the QNMs in each case.
\subsection{Linearly charged case}
In the linearly charged case we have%
\begin{eqnarray}
\lambda _{0}(\xi ) &=&\frac{1}{1-\xi }\left[ 3+\frac{2ik\omega \left( 1-\xi
\right) }{\xi }\right] +\frac{2l^{2}\left( 1-\xi \right) ^{2}\left(
m-q^{2}+2q^{2}\ln {\frac{r_{+}}{l\left( 1-\xi \right) }}\right) }{%
l^{2}m\left( 1-\xi \right) ^{2}-r_{+}^{2}+2l^{2}q^{2}\left( 1-\xi \right)
^{2}\ln {\frac{r_{+}}{l\left( 1-\xi \right) }}}, \label{eq:lambda0} \\
s_{0}(\xi ) &=&-\frac{1}{4\left( 1-\xi \right) ^{4}}\left[ 3\left( 1-\xi
\right) ^{2}-\frac{12ik\omega \left( 1-\xi \right) ^{3}}{\xi }-\frac{%
4k\omega \left( k\omega -i\right) \left( 1-\xi \right) ^{4}}{\xi ^{2}}\right]
\notag \\
&&-\frac{1}{4\left( 1-\xi \right) ^{4}}\left[ \frac{4l^{2}\left( 1-\xi
\right) ^{4}\left( 2ik\omega +\xi \left( 3-2ik\omega \right) \right) \left(
m-q^{2}+2q^{2}\ln {\frac{r_{+}}{l\left( 1-\xi \right) }}\right) }{\xi \left(
l^{2}m\left( 1-\xi \right) ^{2}-r_{+}^{2}+2l^{2}q^{2}\left( 1-\xi \right)
^{2}\ln {\frac{r_{+}}{l\left( 1-\xi \right) }}\right) }\right] \notag \\
&&-\frac{r_{+}^{2}}{\left( 1-\xi \right) ^{4}}\left[ \frac{\left( \omega
^{2}-\frac{1}{4l^{4}r_{+}^{2}\left( 1-\xi \right) ^{2}}\right) \left(
r_{+}^{2}-l^{2}m\left( 1-\xi \right) ^{2}-2l^{2}q^{2}\left( 1-\xi \right)
^{2}\ln {\frac{r_{+}}{l\left( 1-\xi \right) }}\right) }{\left( m-\frac{%
r_{+}^{2}}{l^{2}\left( 1-\xi \right) ^{2}}+2q^{2}\ln {\frac{r_{+}}{l\left(
1-\xi \right) }}\right) ^{2}}\right] \notag \\
&&\times \left( 3r_{+}^{2}+l^{2}\left( m-4q^{2}\right) \left( 1-\xi \right)
^{2}+2l^{2}q^{2}\left( 1-\xi \right) ^{2}\ln {\frac{r_{+}}{l\left( 1-\xi
\right) }}\right) , \label{eq:s0}
\end{eqnarray}%
where $k=\left[ f^{\prime }\left( r_{+}\right) \right] ^{-1}$ and $\omega $
is the quasinormal frequency. Substituting~(\ref{eq:lambda0}) and (\ref%
{eq:s0}) into (\ref{AIM}), we can obtain the QNMs of charged BTZ black holes
in the presence of the Maxwell electrodynamic field by using (\ref%
{eq:AIM_exp_iteration}) and (\ref{eq:quantization}).
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{Modes-q-AIM.eps}
\caption{{}The behavior of imaginary parts of QNMs ($\protect\omega _{I}$)
with respect to $q$. }
\label{fig:Modes-q-AIM}
\end{figure}
Calculations show that the real part of the QNMs ($\omega _{R}$) is always
zero, so in the charged BTZ black hole there is no oscillation of the
dynamical perturbation. This is consistent with the result found for the
neutral BTZ black hole \cite{Cardoso}. The dynamical perturbation only
decays, with the relaxation time scale measured by the imaginary part of the
QNMs ($\omega _{I}$), whose dependence on the model parameters is shown in
Fig. \ref{fig:Modes-q-AIM}. For the nonextreme black hole in the allowed
parameter range discussed above, the dynamical perturbation dies out, which
ensures the stability of the black hole spacetime. But when the black hole
approaches the extreme case, $\omega _{I}$ becomes less negative and the
decay of the perturbation becomes slower. In the extreme case, $\omega _{I}$
becomes zero, indicating that for the extreme black hole the perturbation
never dies out, which makes the black hole spacetime unstable. For the cases
$m<1$, $q\geq q_{2}$ and $m=1$, $q\geq 1$, we observe that with the increase
of $q$ the perturbation decays faster, which makes the black hole even more
stable. When $m>1$, the black hole can never become extreme and $\omega _{I}$
is always negative.
It is interesting that the QNM behavior exactly reflects the spacetime
properties discussed above. In particular, we find that when the black hole
phase transition happens, the dynamical perturbation exhibits a drastically
different property: the nonextreme black hole is always stable, with the
dynamical perturbation decaying away, while the perturbation on the extreme
black hole background persists, making the extreme black hole only
quasi-stable. This serves as another character marking the phase transition
and indicates the different properties of the different black hole phases.
\subsection{Nonlinearly charged case}
In the nonlinearly BI charged case, we have
\begin{eqnarray}
\lambda _{0}(\xi ) &=&\frac{3\xi +2ik\omega -2ik\omega \xi }{\xi -\xi ^{2}}
\notag \\
&&+\frac{2\left( 1-\xi \right) \left( m-q^{2}+2q^{2}B(\xi )\right) }{\left(
m-q^{2}\right) \left( 1-2\xi +\xi ^{2}\right) -r_{+}^{2}\left( 2\beta
^{2}-1\right) +2r_{+}^{2}\beta ^{2}A(\xi )+2q^{2}\left( 1-\xi \right)
^{2}B(\xi )}, \label{eq:lambda0NL} \\
s_{0}(\xi ) &=&-\frac{1}{\left( 1-\xi \right) ^{2}}\left[ 1-\frac{3ik\omega
\left( 1-\xi \right) }{\xi }-\frac{k\omega \left( 1-\xi \right) ^{2}\left(
k\omega -i\right) }{\xi ^{2}}\right] \notag \\
&&-\frac{\left( 2ik\omega +\xi \left( 3-2ik\omega \right) \right) \left(
m-q^{2}+2q^{2}B(\xi )\right) }{\xi \left[ \left( m-q^{2}\right) \left(
1-2\xi +\xi ^{2}\right) -r_{+}^{2}\left( 2\beta ^{2}-1\right)
+2r_{+}^{2}\beta ^{2}A(\xi )+2q^{2}\left( 1-\xi \right) ^{2}B(\xi )\right] }
\notag \\
&&-\frac{\omega ^{2}}{r_{+}^{2}\left( -1+2\beta ^{2}\left( -1+A(\xi )\right)
+\frac{m\left( 1-\xi \right) ^{2}}{r_{+}^{2}}+\frac{q^{2}\left( 1-\xi
\right) ^{2}}{r_{+}^{2}}\left( -1+2B(\xi )\right) \right) ^{2}} \notag \\
&&+\frac{r_{+}^{2}\left( -1-2\beta ^{2}+2\beta ^{2}A(\xi )\right) }{\left(
1-\xi \right) ^{2}\left[ \left( m-q^{2}\right) \left( 1-2\xi +\xi
^{2}\right) -r_{+}^{2}\left( 2\beta ^{2}-1\right) +2r_{+}^{2}\beta ^{2}A(\xi
)+2q^{2}\left( 1-\xi \right) ^{2}B(\xi )\right] }, \notag \\
&& \label{eq:s0NL}
\end{eqnarray}%
where
\begin{equation}
A(\xi )=\sqrt{1+\frac{q^{2}\left( 1-\xi \right) ^{2}}{r_{+}^{2}\beta ^{2}}},%
\text{ \ and \ }B(\xi )=\ln {\frac{r_{+}\left( 1+A(\xi )\right) }{2\left(
1-\xi \right) }}.
\end{equation}%
Substituting~(\ref{eq:lambda0NL}) and (\ref{eq:s0NL}) into (\ref{AIM}),
we can calculate the QNMs of the BI charged BTZ black holes by using (\ref%
{eq:AIM_exp_iteration}) and (\ref{eq:quantization}).
Again the real part of the QNMs ($\omega _{R}$) vanishes, indicating that
there is no oscillation of the perturbation. The decay time scale of the
perturbation is measured by the imaginary part $\omega _{I}$, which is
shown in Fig. \ref{fig:Modes-q-AIM025NL}. The behavior is similar to that
of the linearly charged case.
\begin{figure}[t]
\centering%
\subfigure[~$m=0.25$]{
\label{fig8a}\includegraphics[width=.3\textwidth]{Modes-q-AIM025NL.eps} }
\subfigure[~$m=1$]{
\label{fig8b}\includegraphics[width=.3\textwidth]{Modes-q-AIM1NL.eps} }
\subfigure[~$m=2$]{
\label{fig8c}\includegraphics[width=.3\textwidth]{Modes-q-AIM2NL.eps} }
\caption{{}The behavior of the imaginary parts of the QNMs ($\protect\omega
_{I}$) with respect to $q$ for the nonlinearly charged case.}
\label{fig:Modes-q-AIM025NL}
\end{figure}
For the nonextreme black hole, the spacetime is always stable, since the
dynamical perturbation eventually dies out. When the nonextreme black hole
approaches extremality with the increase of the electric charge, the
perturbation dies out more slowly, since $\omega _{I}$ becomes less
negative. For the extreme black hole, $\omega _{I}$ is zero, which indicates
that on the extreme black hole background the perturbation persists and does
not die out. Compared with the linearly charged BTZ black hole case, we
observe that the nonlinearity introduced here helps to protect the stability
of the black hole: as the nonlinearity increases, $\omega _{I}$ becomes more
negative for the same parameters $m$ and $q$. This agrees with the
observation in the four-dimensional BI AdS black hole \cite{Wang5}. The QNM
behavior reflects the spacetime properties discussed above. Interestingly,
we again find that the dynamical perturbation can be a signature to probe
the phase transition in the system: the two phases, the nonextreme and
extreme black holes, have different dynamical behaviors. This supports the
phase transition results obtained in the previous section.
\section{Summary and conclusions}
In this paper, we have carefully examined the spacetime properties of BTZ
black holes coupled to Maxwell and Born-Infeld fields. We found new, rich
spacetime structures of the black hole when the model parameters are varied.
These special spacetime properties determine the conditions of the phase
transition. For a charged BTZ black hole with mass in the small-value regime
$0<m<1$, the black hole can evolve from nonextreme to extreme when the
charge increases from zero to some $q_{1}$. When the charge increases
further, there is no black hole solution. But when the charge parameter
reaches some $q_{2}$, an extreme black hole appears, and this extreme black
hole becomes nonextreme with the further increase of the charge parameter.
The difference between $q_{1}$ and $q_{2}$ becomes smaller with the increase
of $m$, and $q_{1}$, $q_{2}$ degenerate into a single value when $m=1$. But
when the mass parameter is bigger than 1, the black hole is always
nonextreme no matter how much we increase the charge parameter. Generalizing
the discussion to the nonlinearly charged BTZ black holes, we find that the
qualitative behaviors of the black hole structures persist when the BTZ
black hole is coupled to the nonlinear Born-Infeld electromagnetic field.
Furthermore, we have studied the thermodynamical phase transition in the
charged BTZ black hole background. Instead of the heat capacity, we have
employed the Landau-Lifshitz theory and examined the second moments of the
relevant quantities of the system. We have found that some second moments
diverge when the extreme limit of the black hole is reached, which indicates
the breakdown of the rigorous meaning of thermodynamical quantities. This
marks the occurrence of the thermodynamical phase transition, where some
physical properties change.
To disclose more signatures of the phase transition, we have calculated the
QNMs for the charged BTZ black holes. The dynamical perturbation does not
oscillate in the charged BTZ black hole. When the charged BTZ black hole is
nonextreme, the perturbation only decays, so that the perturbation outside
the nonextreme black hole finally dies out and the nonextreme charged BTZ
black hole is always stable. When the nonextreme black hole evolves towards
the extreme one, the decay becomes slower. But when the extreme black hole
is reached, the dynamical perturbation no longer decays; it persists. This
is a dangerous signal, telling us that the extreme charged BTZ black hole is
easily destroyed if more perturbations are added to the system. Considering
the nonlinearity of the electromagnetic field, we have observed that the
nonlinearity can partially save the black hole and make the perturbation
outside the black hole decay faster. The different properties of the QNMs
for the extreme and nonextreme charged BTZ black holes are interesting: they
can serve as another probe of the phase transition between the nonextreme
and extreme black holes.
We have also discussed the relation between the horizons and found that
there exists a turning value of the mass parameter, $m_{r}$. When $m>1$, the
black hole is always nonextreme, but in the range $1<m<m_{r}$ the two
horizons can first approach each other and then separate with the increase
of the charge $q$, while for $m>m_{r}$ the difference between the two
horizons grows monotonically as $q$ increases. At first sight, this property
looks similar to the temperature behavior, since for $m>1$ the black hole
temperature also has a turning point at $m_{t}$. But we found that $m_{r}$
and $m_{t}$ do not coincide. The reason can be understood as follows: the
temperature is just an apparent thermodynamical quantity and cannot reflect
the deep physics of the spacetime structure. In the QNM behavior, we found
that when the black hole horizons get closer, the perturbation around the
black hole decays more and more slowly, while when the difference between
the horizons becomes bigger, the black hole becomes more nonextreme and the
perturbation decays faster. The value $m_{r}$ is really the boundary
parameter distinguishing the speed of the decay of perturbations around
black holes. However, this behavior change in the QNMs does not show up at
$m_{t}$. This tells us that the black hole temperature is not a good index
of the dynamical properties of the spacetime.
The zero temperature of the extreme black hole is a thermodynamical
indication of the occurrence of a second order phase transition between
nonextreme and extreme black holes \cite{Pavon}. But this is not a direct
reason for the behavior of the QNMs. The vanishing decay of the dynamical
perturbation is an intrinsic result of the phase transition happening
between nonextreme and extreme black holes; it is not caused directly by the
zero temperature of the black hole.
\begin{acknowledgments}
This work was supported in part by NNSF of China. MKZ would like to thank
Shanghai Jiao Tong University for the warm hospitality during his visit.
This work has been supported financially by Research Institute for Astronomy
\& Astrophysics of Maragha (RIAAM). We thank Yen Chin Ong for useful
discussions.
\end{acknowledgments}
\section{Introduction}
The emergence of cloud computing has turned computing resources into a new utility, like electricity and gas, and reshaped the way IT resources are used by customers. Since its emergence, cloud computing has rapidly developed into one of the backbones of the modern economy. Cloud consumers, including governments, research institutes and industry enterprises, have all embraced and benefited significantly from cloud computing. Cloud computing also enables new businesses to be established within a short time, facilitates the global expansion of enterprises, accelerates the progress of scientific research, and promotes the creation of various models and applications. Cloud service providers are also offering a variety of cloud services for customers with on-demand access to resources based on the pay-as-you-go model \cite{Buyya2018Manifesto}\cite{Kaur2015} \cite{GILL2019Trans}.
The infrastructure of cloud computing is the cloud data center. Currently, various cloud providers, including Amazon, Google and Microsoft, operate large-scale cloud data centers to fulfill the resource and service demands of customers. To ensure availability and reliability, cloud data centers are required to run 24/7. Nowadays, the majority of data centers spread over 300-4500 square meters and contain hundreds to thousands of physical machines. A typical data center can consume up to 25,000 kWh per day. It is estimated that US data centers can consume 140 million kWh by 2020, which is equivalent to 50 coal-based power plants. It is also claimed that their carbon footprint will reach 2-3\% of global emissions \cite{Lavallee2014b}. To relieve this huge energy consumption and carbon emission, energy efficient techniques are required in cloud data centers \cite{Gill2019}\cite{Ghosh2018Adaptive}\cite{Soualah2017Energy}.
Virtualization technology is an important enabler of green cloud data centers, supporting energy efficiency via Virtual Machine (VM) consolidation, where a VM is the software implementation of a computer running an operating system and applications \cite{Kaur2015}. VM consolidation refers to the process by which VMs are reallocated from one physical machine to another without affecting the execution of users' requests. It has been identified as one of the dominant energy efficient solutions for reducing the energy consumption of cloud data centers \cite{Gill2019}\cite{Xu2017a}. As VMs are packed onto fewer physical machines via consolidation, idle physical machines can be turned off or switched to a low-power mode \cite{XuBrownoutSurvey}.
VM consolidation has proven to be an effective approach to reducing data center energy consumption, and various energy efficient algorithms based on it have been proposed. These algorithms aim to optimize the placement of VMs to reduce energy consumption while satisfying other constraints, e.g. Service Level Agreement (SLA) violations. Due to uncontrolled network traffic, it is hard to conduct experiments in a large-scale environment and reproduce results. Therefore, running experiments with a validated simulation toolkit is an acceptable and reasonable alternative: a simulation toolkit can easily construct large-scale environments and generate reproducible results \cite{TIAN2015Open}.
Among existing cloud data center simulation toolkits, CloudSim \cite{Calheiros2011a} is the most widely used. CloudSim models both the system and the behavior of cloud data centers, including physical machines, virtual machines, workloads and resource scheduling policies. Its resource provisioning model is generic, so it can be extended with ease and limited effort. These attractive features have drawn users from hundreds of universities and research institutes, and several extended simulators complementing CloudSim have been developed, e.g. CloudAnalyst \cite{wickremasinghe2010cloudanalyst} and NetworkCloudSim \cite{garg2011networkcloudsim}.
The motivation of this article is to analyze the problem of rising energy consumption in depth through a comparison of state-of-the-art energy efficient algorithms in cloud data centers. This article is particularly motivated by the following facts:
\begin{itemize}
\item The need and demand for understanding existing VM-based energy efficient algorithms for cloud data centers.
\item The proposed algorithms were evaluated in different scenarios and configurations, so their advantages and disadvantages have not been carefully examined.
\item The requirement to select the most suitable algorithm based on different priorities.
\end{itemize}
In this paper, we select various well-known VM consolidation-based energy efficient algorithms and implement experiments using CloudSim toolkit.
The evaluated algorithms are picked from state-of-the-art energy efficient algorithms that have shown good performance in energy efficiency.
The main \textbf{contributions} of this work are as follows:
\begin{itemize}
\item Offering a cross-sectional view of the investigated VM consolidation-based energy efficient algorithms, which present outstanding performance in the cloud computing area.
{\color{black}\item Presenting a unified simulation-based analysis framework based on CloudSim that allows evaluation and comparison of energy-efficient VM consolidation algorithms in a unified and unbiased way.}
\item Discussing the advantages and disadvantages of the investigated algorithms to suggest suitable algorithms for different scenarios.
\end{itemize}
The rest of the paper is organized as follows: Section 2 provides an overview of VM consolidation-based energy efficient algorithms for cloud computing. The investigated algorithms are introduced in Section 3. Section 4 discusses the modelling of the investigated algorithms, and Section 5 summarizes the metrics they adopt. Section 6 presents the performance comparison of the investigated algorithms. Finally, conclusions and future research trends are given.
\section{Related Work}
\subsection{Energy Efficient Algorithms in Clouds}
A few articles have surveyed or classified energy efficient algorithms based on VM consolidation for cloud data centers. Mansouri et al. \cite{Mansouri2017} presented a survey on resource management in cloud environments and discussed VM consolidation-based energy efficient algorithms at the cloud management system level. Kaur et al. \cite{Kaur2015} presented a comprehensive survey of energy efficient scheduling approaches in clouds and compared several consolidation-based techniques, including VM consolidation-based approaches. Orgerie et al. \cite{Orgerie2014} surveyed techniques for improving the energy efficiency of large-scale distributed systems, including VM migration algorithms for clouds. Gill et al. \cite{Gill2019} proposed a taxonomy for sustainable cloud computing and introduced a taxonomy of VM consolidation-based algorithms without focusing on energy efficiency. Mann et al. \cite{mann2015allocation} introduced a survey on VM allocation in cloud data centers from the perspectives of problem modelling and optimization algorithms. Ahmad et al. \cite{ahmad2015survey} conducted a survey on VM migration and server consolidation frameworks for cloud data centers, highlighting the commonalities and differences of the investigated VM migration algorithms.
However, these surveys and taxonomies focused on high-level comparison of VM consolidation-based energy efficient algorithms without evaluating their performance in experimental environments. Our work advances them by evaluating state-of-the-art algorithms not only from the modelling perspective but also through experimental comparison. It also identifies the merits and demerits of the investigated algorithms to provide suggestions for research in related areas.
\subsection{VM consolidation-based Energy Efficient Algorithms}
Beloglazov et al. \cite{beloglazov2012energy} proposed an energy-aware allocation algorithm for data center resources focused on VM scheduling, named Modified Best Fit Decreasing (MBFD). Their objective is to reduce the energy consumption of data centers while ensuring SLA. Mastroianni et al. \cite{mastroianni2013probabilistic} proposed an energy efficient scheduling policy based on probabilistic procedures. Li et al. \cite{li2017holistic} presented a holistic VM scheduling algorithm capable of minimizing total data center energy consumption, including computing and cooling energy. Ranjbari et al. \cite{ranjbari2018learning} introduced an algorithm based on learning automata for energy and SLA efficient consolidation of VMs in cloud data centers. Farahnakian et al. \cite{ACW2015} proposed a dynamic VM consolidation approach based on Ant Colony Optimization to reduce the energy consumption of data centers.
In the performance evaluation of these algorithms, MBFD has typically served as the baseline; however, the algorithms have not been compared and evaluated together, so it is hard to assess their relative performance. In this paper, we compare these VM consolidation-based energy efficient algorithms in depth and evaluate them under the same configurations.
\section{Overview of the investigated algorithms}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.95\linewidth]{VMmigration}
\caption[VarPerOptCom]{Energy efficient scheduling based on VM consolidation}
\label{fig:vmconsolidation}
\end{figure}
Fig. \ref{fig:vmconsolidation} shows the high-level architecture of VM consolidation-based energy efficient algorithms in cloud data centers. This architecture is generic, and all of the investigated algorithms conform to it. The architecture has four main entities:
\textit{1. Users:} Cloud users submit their requests to the cloud, and the requests are processed by cloud services.
\textit{2. Energy efficient scheduler:} It contains the following components to support energy efficient scheduling and acts as the interface between users and the resources provisioned by service providers:
\textit{(a) VM manager:} This component tracks the resource usage of VMs and makes decisions on VM behavior, including when and where to consolidate VMs. To achieve this objective, it requires energy and SLA information from the Energy Monitor and the SLA Monitor.
\textit{(b) Energy Monitor:} It monitors the energy usage caused by VMs and physical machines, and provides information for the decisions of the VM manager.
\textit{(c) SLA Monitor:} It monitors how SLA is affected by operations in the system. It can also represent performance constraints when reducing energy consumption.
\textit{(d) Workloads Analyzer:} This component interprets users' requests and service requirements before processing them. The workloads can be distributed to different VMs based on resource characteristics.
\textit{3. Virtual machines:} The workloads are deployed and executed on VMs. The VMs are managed according to the incoming workloads in two phases: initial placement and VM migration. VMs are first allocated to physical machines by the initial placement algorithm. According to the workloads, the placement can then be optimized via the VM consolidation algorithm, so that unused machines can be temporarily turned off or switched to a low-power mode.
\textit{4. Physical machines:} The infrastructure offers physical machines that provision virtualized resources to meet users' requests.
Based on the following criteria, we carefully selected 5 state-of-the-art VM consolidation-based energy efficient algorithms for comparison and evaluation:
\begin{itemize}
\item To make the comparison persuasive, the algorithms must have been published in prominent journals or conferences and be representative of a category of algorithms.
\item To ensure reproducible evaluation results, the algorithms must have been implemented in CloudSim or be easy to evaluate in CloudSim.
\item To make the algorithms comparable, they must have been evaluated against the same baseline.
\end{itemize}
In the following subsections, we provide an overview of the investigated algorithms. Table \ref{tab:comparison} compares them based on different parameters, such as application type, operational environment, objective function, power model, power components, scheduling mechanism, type of workloads, and the identified merits and demerits of each algorithm.
\begin{table*}[!ht]
\caption{High-level Comparison of Investigated Algorithms}
\label{tab:comparison}
\resizebox{\textwidth}{!}{%
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
\textbf{Algorithm} & \textbf{\begin{tabular}[c]{@{}c@{}}Application\\ Type\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Operational\\ Environment\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Objective\\ Function\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Power\\ Model\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Power\\ Component\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Scheduling\\ Mechanism\end{tabular}} & \textbf{Workloads} & \textbf{Merits} & \textbf{Demerits} \\ \hline
MBFD\cite{beloglazov2012energy} & \begin{tabular}[c]{@{}c@{}}Dynamic\\ workloads \\ (web service)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distributed\\ and\\ Heterogeneous\end{tabular} & \begin{tabular}[c]{@{}c@{}}To optimize\\ energy\\ consumption\end{tabular} & Linear & \begin{tabular}[c]{@{}c@{}}CPU and\\ memory\end{tabular} & \begin{tabular}[c]{@{}c@{}}Dynamic\\ Consolidation\\ (Proactive)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Heterogeneous\\ workloads\end{tabular} & \begin{tabular}[c]{@{}c@{}}Reduced energy\\ consumption and\\ SLA violation rate\end{tabular} & \begin{tabular}[c]{@{}c@{}}There is a need\\ of holistic resource\\ management\end{tabular} \\ \hline
EcoCloud \cite{mastroianni2013probabilistic}& \begin{tabular}[c]{@{}c@{}}Dynamic\\ workloads\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distributed \\ and homogeneous\end{tabular} & \begin{tabular}[c]{@{}c@{}}To improve consolidation\\ of VMs\end{tabular} & Linear & \begin{tabular}[c]{@{}c@{}}Memory and\\ CPU\end{tabular} & \begin{tabular}[c]{@{}c@{}}Bernoulli-based \\ scheduling\\
(Proactive)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Planetlab workloads\\ traces\end{tabular} & \begin{tabular}[c]{@{}c@{}}Evaluated the \\ effect of power\\ consumption on SLA\\ violations\end{tabular} & \begin{tabular}[c]{@{}c@{}}To investigate the impact\\ of number of VM migrations on\\ SLA violations\end{tabular} \\ \hline
GRANITE \cite{li2017holistic}& Heterogeneous & Distributed & \begin{tabular}[c]{@{}c@{}}To investigate the \\ temperature distribution\\ of airflow and server CPU\end{tabular} & Linear & \begin{tabular}[c]{@{}c@{}}Cooling, \\ storage, memory,\\ CPU and network\end{tabular} & \begin{tabular}[c]{@{}c@{}}2D-Computational Fluid\\ Dynamics (CFD) modelling-\\ based scheduling \\ (Proactive)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Google datacenter\\ trace logs\end{tabular} & \begin{tabular}[c]{@{}c@{}}Minimizing total\\ datacenter energy\\ consumption (cooling\\ and computing)\end{tabular} & \begin{tabular}[c]{@{}c@{}}To improve accuracy, \\ 2-dimensional CFD model can be\\ enhanced to 3-dimensional \\ CFD model\end{tabular} \\ \hline
LOAD \cite{ranjbari2018learning}& \begin{tabular}[c]{@{}c@{}}Dynamic \\ workloads\end{tabular} & Homogeneous & \begin{tabular}[c]{@{}c@{}}To optimize energy\\ consumption, number of \\ VM migrations and SLA\\ violations\end{tabular} & Linear & CPU & \begin{tabular}[c]{@{}c@{}}Learning automata \\ based scheduling \\ (proactive)\end{tabular} & \begin{tabular}[c]{@{}c@{}}CPU\\ utilization\end{tabular} & \begin{tabular}[c]{@{}c@{}}Improved CPU\\ utilization and reduced\\ SLA violations and\\ energy consumption\end{tabular} & \begin{tabular}[c]{@{}c@{}}Under-utilization of \\ resource is not \\ considered\end{tabular} \\ \hline
ACS \cite{ACW2015} & \begin{tabular}[c]{@{}c@{}}CPU and\\ memory-intensive\\ workloads\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distributed\\ and\\ Heterogeneous\end{tabular} & \begin{tabular}[c]{@{}c@{}}To investigate the\\ relationship between\\ energy consumption, VM\\ migrations and QoS\end{tabular} & Linear & \begin{tabular}[c]{@{}c@{}}CPU and\\ memory\end{tabular} & \begin{tabular}[c]{@{}c@{}}Ant colony\\ optimization\\ (Reactive)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Heterogeneous\\ workloads\end{tabular} & \begin{tabular}[c]{@{}c@{}}Reduced energy and\\ VM migrations\end{tabular} & \begin{tabular}[c]{@{}c@{}}The impact of VM migration \\ on network bandwidth can be \\ investigated to reduce power\\ consumption further\end{tabular} \\ \hline
\end{tabular}%
}
\end{table*}
\subsection{MBFD}
Modified Best Fit Decreasing (MBFD) \cite{beloglazov2012energy} aims to reduce the energy consumption of data centers while ensuring SLA. It treats the VM initial placement phase as a bin packing problem: MBFD assigns each VM to the host that produces the minimum increase in energy consumption. In the VM consolidation phase, the algorithm optimizes the VM allocation via consolidation for better energy efficiency, and the target host is selected in the same way as in the initial placement. The algorithm is motivated by the observation that, due to the uncertainty of task lifecycles, some VMs are likely to host over-provisioned applications while others run with low resource utilization. Such unbalanced workloads in cloud data centers result in a severe waste of resources and performance degradation. The VM consolidation in this work is a proactive process and can be applied to heterogeneous workloads.
MBFD has been used as the baseline for many VM consolidation-based energy efficient algorithms, and various recent algorithms have been proposed to improve on it. The advantage of this algorithm is that it considers the trade-off between energy reduction and SLA violations and is easy to implement. Its limitation is that it does not consider holistic resource management, which has been addressed by more recent research.
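The placement step described above can be sketched as follows. This is a simplified Python sketch based on the description of MBFD, not the authors' implementation; the host and VM attributes and the idle-power fraction are illustrative assumptions.

```python
def power(host, util):
    """Linear power model: P(u) = k*Pmax + (1-k)*Pmax*u."""
    k = 0.7  # assumed idle-power fraction, not taken from the paper
    return k * host["p_max"] + (1 - k) * host["p_max"] * util

def mbfd_place(vms, hosts):
    """Modified Best Fit Decreasing sketch: sort VMs by CPU demand
    (decreasing) and assign each to the feasible host with the least
    increase in power consumption."""
    placement = {}
    for vm in sorted(vms, key=lambda v: v["cpu"], reverse=True):
        best, best_delta = None, float("inf")
        for h in hosts:
            if h["used"] + vm["cpu"] > h["capacity"]:
                continue  # host cannot accommodate the VM
            before = power(h, h["used"] / h["capacity"])
            after = power(h, (h["used"] + vm["cpu"]) / h["capacity"])
            if after - before < best_delta:
                best, best_delta = h, after - before
        if best is not None:
            best["used"] += vm["cpu"]
            placement[vm["id"]] = best["id"]
    return placement
```

With the linear model, the power increase is proportional to $P_{max}$, so MBFD naturally favors the most power-efficient hosts.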
\subsection{EcoCloud}
EcoCloud \cite{mastroianni2013probabilistic} is an energy efficient scheduling policy based on probabilistic procedures, with two phases: VM assignment and VM migration. In the VM assignment phase, unlike MBFD introduced in the previous section, the VM manager sends an invitation to a subset of all active hosts to obtain a list of hosts willing to accept the incoming VM. After receiving the invitation, each host performs a Bernoulli trial, which computes the value of an overall assignment function to decide whether or not to accept the VM. The assignment function considers the resource utilization and utilization threshold, and calculates the probability of accepting the incoming VM. If the trial succeeds, the host sends a message back to the manager, which collects all messages and selects one available host for the incoming VM. The second phase of EcoCloud is VM migration, which uses two Bernoulli-trial-based functions to handle the over-utilized and under-utilized situations separately. If the trial in the under-utilized situation succeeds, a randomly selected VM is migrated. In the over-utilized situation, the host migrates the VM whose removal brings resource utilization back below the over-utilization threshold. The target host to accept the VM is decided by another assignment function with a slight modification of one parameter.
EcoCloud is representative of algorithms aiming to reduce energy while minimizing VM migrations. Its main advantage is that the approach can reduce downtime duration and transmission bandwidth. Furthermore, hosts make migration decisions by themselves, which relieves the pressure on the VM manager. The limitation of this work is that it investigates the effect of power consumption on SLAs, while the effects of VM migrations on SLAs are not evaluated.
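The Bernoulli-trial mechanism can be illustrated with the sketch below. The exact shape of EcoCloud's assignment function differs in the original paper; the probability function here is a simplified stand-in that only reproduces the qualitative behavior (acceptance probability grows with utilization, then drops to zero above the over-utilization threshold).

```python
import random

def assignment_probability(utilization, threshold=0.9, shape=3):
    """Illustrative assignment function (not the paper's exact formula):
    the probability of accepting a VM grows with current utilization,
    favoring consolidation, and is zero above the threshold."""
    if utilization >= threshold:
        return 0.0
    return (utilization / threshold) ** shape

def bernoulli_accept(utilization, rng=random.random):
    """A host performs a Bernoulli trial and replies to the VM manager
    only if the trial succeeds."""
    return rng() < assignment_probability(utilization)
```

Because each host decides independently, the VM manager only needs to pick one host from those that replied, which is what keeps the scheme decentralized.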
\subsection{GRANITE}
GRANITE \cite{li2017holistic} is a holistic virtual machine scheduling algorithm capable of minimizing total data center energy consumption, including computing energy and cooling energy. The work considers Computer Room Air Conditioners (CRACs) as the only cooling devices and constructs models for them. Based on these server and cooling models, GRANITE uses a greedy algorithm for VM initial placement and dynamic migration, assuming that users' resource demand can be predicted. In the initial placement stage, the greedy algorithm selects, for each VM, the host with the least increase in total energy after placement; the CRAC is adjusted if the CPU temperature rises above a threshold. In the dynamic VM consolidation stage, the algorithm aims to balance the workloads and the cooling energy consumption. GRANITE defines a dynamic temperature threshold and checks host status: if a host's temperature exceeds the threshold, a set of VMs is migrated to other hosts. The target host for migration is selected with the same greedy rule as in the initial placement stage.
GRANITE represents algorithms that consider holistic energy management in cloud data centers. The core idea is similar to the MBFD algorithm, but it additionally models cooling power, which leads to more accurate and holistic scheduling results. The advantage of this work is that it combines server status and data center thermal control to achieve fine-grained energy efficient scheduling.
{\color{black}However, the model accuracy can be further improved by using a 3-Dimensional computational fluid dynamics model rather than 2-Dimensional one.}
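The greedy selection in GRANITE can be sketched as below. The cooling-energy term here is a deliberate simplification: cooling energy is approximated via a Coefficient of Performance (COP), whereas the paper uses a much more detailed CFD-based CRAC model; the temperature threshold and host attributes are also illustrative.

```python
def total_energy_increment(host, vm_cpu, cop=3.0):
    """Increment of computing plus cooling energy if the VM is placed on
    this host. Cooling is approximated as computing energy divided by a
    COP -- a common simplification, not the paper's CFD-based model."""
    delta_util = vm_cpu / host["capacity"]
    delta_computing = 0.3 * host["p_max"] * delta_util  # dynamic part of a linear model
    delta_cooling = delta_computing / cop
    return delta_computing + delta_cooling

def granite_select_host(hosts, vm_cpu, temp_threshold=70.0):
    """Greedy choice: among hosts with spare capacity and CPU temperature
    below the threshold, pick the one with the least total energy increase."""
    feasible = [h for h in hosts
                if h["used"] + vm_cpu <= h["capacity"]
                and h["temperature"] < temp_threshold]
    if not feasible:
        return None
    return min(feasible, key=lambda h: total_energy_increment(h, vm_cpu))
```

The temperature filter is what distinguishes this greedy rule from plain MBFD: a power-efficient but hot host is excluded until the CRAC brings it back below the threshold.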
\subsection{LOAD}
LOAD \cite{ranjbari2018learning} is an algorithm based on learning automata for energy and SLA efficient consolidation of VMs in cloud data centers. It considers the resources demanded by users to predict overloaded hosts; by preventing overloads and shutting down idle hosts, the algorithm aims to reduce the energy consumption of data centers. The learning automata-based overload detection enhances VM consolidation by predicting the CPU usage of hosts from resource usage history. Each VM is equipped with one automaton with 3 actions: increasing CPU utilization, reducing CPU utilization, and no change in CPU utilization. Initially, the 3 actions have equal probability. Afterwards, the reward and penalty of the learning automaton are updated based on the environment: the automaton selects one of the three actions in each iteration according to the probabilities, and in the next iteration, if the automaton's decision was right, the action is rewarded, otherwise it is penalized. The learning automaton is applied to estimate the usage of VMs on the host. If the estimation shows that a host may become overloaded, its VMs are migrated, and other VMs are prevented from being migrated to it. The migration destination is chosen by the Best Fit Decreasing (BFD) algorithm \cite{Yue1991}. The simulation results show that leveraging learning-based prediction can reduce the energy consumption of data centers.
LOAD is a typical algorithm applying learning techniques to optimize VM consolidation. It advances existing work by considering dynamic prediction of resource usage, but its limitation is that it predicts overloaded situations only and does not handle under-utilized ones.
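The reward/penalty update used by a learning automaton can be sketched with the generic linear reward-penalty ($L_{R-P}$) scheme below. The learning-rate parameters are illustrative, not those of the paper.

```python
def update_probabilities(probs, chosen, rewarded, a=0.1, b=0.1):
    """Linear reward-penalty (L_RP) update for a 3-action automaton
    (increase / decrease / no change of CPU utilization). If the chosen
    action's prediction was right it is rewarded, otherwise penalized.
    Probabilities always stay normalized."""
    n = len(probs)
    new = list(probs)
    if rewarded:
        for i in range(n):
            if i == chosen:
                new[i] = probs[i] + a * (1 - probs[i])  # boost the chosen action
            else:
                new[i] = (1 - a) * probs[i]             # shrink the rest
    else:
        for i in range(n):
            if i == chosen:
                new[i] = (1 - b) * probs[i]             # shrink the chosen action
            else:
                new[i] = b / (n - 1) + (1 - b) * probs[i]
    return new
```

Starting from the uniform distribution, repeated rewards concentrate probability mass on the action that best predicts the host's CPU trend, which is what drives the overload prediction in LOAD.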
\subsection{ACS}
ACS \cite{ACW2015} is an online meta-heuristic algorithm based on ant colony optimization (ACO) and VM consolidation that searches for a near-optimal solution. Its objective is to balance energy consumption, the number of VM migrations and QoS with respect to performance. The authors formulate energy efficient VM consolidation as a multi-objective optimization problem in order to optimize multiple metrics simultaneously. To apply ACO, the necessary entities, including pheromone updating rules and probabilistic decision rules, are defined: the more pheromone a solution accumulates, the higher the probability that a VM is placed on the corresponding host. ACS has updating rules for both local and global pheromone, applied in each iteration. The local pheromone is updated by an ant whenever it performs a movement; after all ants construct their solutions locally, the global pheromone update is applied to the migration process, keeping only the non-dominated solutions. The process is repeated until the maximum number of iterations is reached.
ACS represents a set of meta-heuristic algorithms proposed to balance multiple objectives. Simulation results show that it can reduce energy consumption and VM migrations while guaranteeing QoS. Its performance could be further improved by investigating the impact of VM migration on the network.
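The pheromone-driven decision rule can be sketched with the standard ACO formulation below. The heuristic desirability $\eta$ and the parameters $\alpha$, $\beta$, $\rho$, $\tau_0$ are generic ACO quantities, not the specific values used in ACS.

```python
def selection_probabilities(pheromone, heuristic, alpha=1.0, beta=2.0):
    """Standard ACO decision rule: the probability of placing a VM on
    host j is proportional to tau_j^alpha * eta_j^beta, where tau_j is
    the pheromone trail and eta_j the heuristic desirability of host j."""
    weights = [(t ** alpha) * (h ** beta) for t, h in zip(pheromone, heuristic)]
    total = sum(weights)
    return [w / total for w in weights]

def local_pheromone_update(tau, rho=0.1, tau0=0.5):
    """Local update applied when an ant performs a movement: partially
    evaporate the trail toward the initial level tau0, encouraging other
    ants to explore different placements."""
    return (1 - rho) * tau + rho * tau0
```

The global update (not sketched here) then reinforces only the trails belonging to the non-dominated solutions, which is how the multi-objective trade-off is carried across iterations.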
In summary, the investigated algorithms all follow the two-phase energy efficient scheduling process shown in Fig. \ref{fig:vmconsolidation}, applying different techniques to optimize the placement of VMs. Most of them focus on optimizing the VM consolidation phase, except EcoCloud, which spends more effort on the initial placement through its probabilistic approach. Among the algorithms focusing on consolidation optimization, MBFD applies heuristic algorithms based on bin-packing modelling, while EcoCloud uses a probabilistic approach to find the host with the highest probability of accepting a migrated VM. GRANITE considers energy and performance together by modelling cooling energy consumption and VM performance degradation, and ACS applies a meta-heuristic algorithm to find optimized consolidation solutions.
\section{Architecture and Modelling}
\begin{figure}[!h]
\centering \includegraphics[width=0.75\linewidth]{./graph/comparisonSchedulingPhase}
\caption[VarPerOptCom]{The focused scheduling phase comparison of investigated algorithms}
\label{fig:focusedSchedulingPhase}
\end{figure}
This section discusses the investigated algorithms from the architecture and modelling perspectives, and the algorithm complexity is also discussed.
\subsection{MBFD}
\textbf{Architecture:} The four main components of the green cloud architecture are: broker, green service allocator, Virtual Machines (VMs) and Physical Machines (PMs). The broker enables users to submit workloads and their QoS requirements from any geographical location. The PM-based hardware infrastructure creates the virtualized resources (VMs), which are consolidated to fulfill the demand of workloads dynamically using DVFS. The green service allocator incorporates the energy manager and VM manager to allocate virtual resources to user workloads at runtime based on their requirements.
\textbf{Model:} Equation (\ref{eq:MBFP}) defines the power model for this research work.
\begin{equation}
\label{eq:MBFP}
P(u)=k \times P_{max} + (1-k)\times P_{max}\times u
\end{equation}
where $P_{max}$ is the maximum power consumed when the server is fully utilized; $k$ is the fraction of power consumed by an idle server; and $u$ is the CPU utilization. The energy consumption $E$ of a PM is defined in Equation (\ref{eq:MBFD1}), where the CPU utilization $u(t)$ is a function of time.
\begin{equation}
\label{eq:MBFD1}
E=\int_{t_1}^{t_2}P(u(t))\,dt
\end{equation}
where $t_1$ is the start time and $t_2$ the end time of task $T$.
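Equations (\ref{eq:MBFP}) and (\ref{eq:MBFD1}) can be evaluated numerically, as sketched below; the trapezoidal integration step and the parameter values are implementation assumptions, not part of the original model.

```python
def power(u, p_max=135.0, k=0.7):
    """P(u) = k*Pmax + (1-k)*Pmax*u, with k the idle-power fraction."""
    return k * p_max + (1 - k) * p_max * u

def energy(utilization_trace, dt=300.0):
    """E = integral of P(u(t)) dt, approximated by the trapezoidal rule
    over utilization samples taken every dt seconds. Returns
    watt-seconds (joules)."""
    total = 0.0
    for u0, u1 in zip(utilization_trace, utilization_trace[1:]):
        total += 0.5 * (power(u0) + power(u1)) * dt
    return total
```

A 300-second sampling interval is used here because it matches the 5-minute monitoring granularity commonly configured in CloudSim-style experiments; any interval works.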
\subsection{EcoCloud}
\textbf{Architecture:} The architecture of EcoCloud contains two main probabilistic procedures: 1) VM assignment and 2) VM migration. VM allocation is performed based on the availability of CPU and RAM on the different servers of the data center. EcoCloud allocates a VM to each newly submitted application and sends an invitation to the participating servers to find the best one on which to place this VM.
\textbf{Model:} Equation (\ref{eq:ecocloud}) defines the linear power consumption model used in this work:
\begin{equation}
\label{eq:ecocloud}
P(u) = P_{idle} + (P_{max}-P_{idle})\times u
\end{equation}
where $P_{max}$ is the maximum power consumed when the server is fully utilized; $P_{idle}$ is the power consumed by an idle server; and $u$ is the CPU utilization.
\subsection{GRANITE}
\textbf{Architecture:} It comprises three sub-components: the workload manager, the scheduling manager and the cooling manager. The workload manager handles the workloads submitted by users and processes them for scheduling based on their requirements. The scheduling manager schedules resources for the execution of workloads while maximizing data center performance and minimizing energy consumption. The cooling manager maintains the temperature of the data center and saves cooling energy by performing VM placement and dynamic migration in an efficient manner.
\textbf{Model:} Equation (\ref{eq:granite}) defines the total energy consumption of the data center as the sum of computing energy and cooling energy.
\begin{equation}
\label{eq:granite}
Energy = Energy_{computing} + Energy_{cooling}
\end{equation}
\subsection{LOAD}
\textbf{Architecture:} The system architecture contains four sub-components: the user portal (to submit workloads), the global manager (an intermediary between the user portal and the local managers), the local manager (which manages the PMs and VMs and is controlled by a single global manager) and the VM manager (which manages the virtual resources).
\textbf{Model:} Equation (\ref{eq:MBFP}) is used to calculate the power consumption in this work, representing a linear model of power versus CPU utilization.
\subsection{ACS}
\textbf{Architecture:} In the architecture of ACS, two types of agents are included: local and global agents. The local agent is deployed in a host to solve the host status detection sub-problem by monitoring the host resource utilization. The global agent is responsible for supervising and optimizing the VM placement by taking advantage of the ACO-based algorithm.
\textbf{Model:} ACS utilizes the same linear power model as MBFD, given in Equation (\ref{eq:MBFP}). \\
We note that the investigated algorithms focus on different phases of VM consolidation, as shown in Fig. \ref{fig:focusedSchedulingPhase}. As introduced above, the VM consolidation process can be divided into two main phases: initial VM placement and dynamic VM migration; the latter includes overload and under-utilization detection, VM selection and VM allocation.
MBFD and EcoCloud spend more effort on the VM placement phase, covering both the initial VM placement and the VM allocation step of dynamic VM migration. LOAD focuses on overload detection by predicting utilization via a learning automaton. GRANITE takes the CPU temperature into consideration and optimizes the overload detection policy. ACS improves all the phases of dynamic VM consolidation: it utilizes LiRCUP \cite{farahnakian2013lircup} to forecast server overloads and adopts the Ant Colony System to find near-optimal solutions for VM selection and allocation.
In summary, from the architecture perspective, the investigated algorithms all adopt a layered architecture with three main layers. The bottom layer is the resource provisioner, which supplies physical or virtual resources. The middle layer is responsible for energy efficient scheduling: it handles VM management and provides the scheduling algorithms. At the top layer, the users' requests and optimization goals are configured.
From the modelling perspective, all algorithms adopt a linear power model. As for power components, all algorithms include the CPU energy consumption; MBFD, EcoCloud and ACS also consider memory, and GRANITE uses a more comprehensive model including storage, network and cooling.
\textbf{Algorithm complexity analysis:} MBFD, GRANITE and LOAD are heuristic algorithms with complexity $O(M\times N)$, where $N$ is the number of hosts and $M$ is the number of VMs. The complexity of EcoCloud is proportional to the number of hosts that may accept VMs, i.e. $O(N)$. Being a meta-heuristic with iterations, ACS has complexity $O(M\times N \times A\times I)$, where $A$ is the number of ants that concurrently build their migration plans and $I$ is the number of iterations.
\section{Metrics}
For the objective of energy efficiency, energy consumption is the major metric to be evaluated. However, the algorithms also make trade-offs between energy consumption and other metrics, such as SLA violations. In this section, we discuss the metrics adopted by the investigated energy efficient algorithms. Note that the algorithms share some metrics while using additional individual ones; here we cover the metrics applied in these algorithms and identify the differences between them. Table \ref{tab:metrics} summarizes the algorithms and their corresponding metrics.
\textit{Metrics for energy efficiency}
\textbf{Total energy consumption:} It is the total energy consumption consumed by physical machines in the data centers. It is derived from the power model in Equation (\ref{eq:MBFD1}).
\textbf{Number of active servers:} It represents how many servers are running as active during the observation time. The value should be minimized, and thus more idle serves can be switched into low-power mode.
\textit{Metrics for SLA}
\textbf{SLA violation percentage (SLAV) \cite{beloglazov2012energy}}: The percentage of SLA violation events relative to the total number of processed time frames. An SLA violation is identified when a given VM cannot get the amount of MIPS it requested.
\textbf{VM migration times}: The number of migrations triggered by the algorithm during the VM scheduling process.
\textbf{Average SLA violation}: It is the average CPU performance that has not been allocated to an application when requested, resulting in performance degradation.
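As a concrete (hypothetical) reading of the two SLA metrics above, given per-timeframe requested and allocated MIPS for the VMs, SLAV counts the violating frames while the average SLA violation measures their depth:

```python
def sla_metrics(frames):
    """frames: list of (requested_mips, allocated_mips) tuples, one per
    VM time frame. Returns (SLAV percentage, average SLA violation as a
    fraction of the requested performance). Field names are illustrative."""
    violations = [(req - alloc) / req for req, alloc in frames if alloc < req]
    slav = 100.0 * len(violations) / len(frames)  # % of frames violating SLA
    avg_violation = sum(violations) / len(violations) if violations else 0.0
    return slav, avg_violation
```

For example, if two of four frames under-allocate by 10\% and 20\%, SLAV is 50\% and the average SLA violation is 0.15.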
\begin{table}[]
\caption{Metrics adopted in compared algorithms}
\label{tab:metrics}
\resizebox{0.48\textwidth}{!}{%
\begin{tabular}{|c|c|c|}
\hline
Metrics & Optimization Objective & Adopted Algorithms \\ \hline
Total Energy Consumption & Minimization & All \\ \hline
SLA violation percentage & Minimization & All \\ \hline
VM migrations & Minimization & BMDP, LOAD, EcoCloud, ACS \\ \hline
Average SLA violation & Minimization & BMDP, LOAD, ACS \\ \hline
Number of active hosts & Minimization & EcoCloud, GRANITE\\ \hline
\end{tabular}%
}
\end{table}
In summary, we can notice that several metrics have been adopted for evaluation by more than one algorithm, including total energy, SLA violation percentage, VM migrations, average SLA violation and number of active hosts. To make our evaluations comparable from the metrics perspective, we evaluate these metrics in the performance evaluation section.
\begin{table*}[]
\centering
\caption{Power consumption at different CPU utilization levels {\color{black}in Watts}}
\label{tab:power_model}
\begin{tabular}{|l|ccccccccccc|}
\hline
\textbf{Server} & 0\% & 10\% & 20\% & 30\% & 40\% & 50\% & 60\% & 70\% & 80\% & 90\% & 100\% \\
\hline
\textbf{HP ProLiant G4} & 86 & 89.4 & 92.6 & 96 & 99.5 & 102 & 106 & 108 & 112 & 114 & 117 \\
\textbf{HP ProLiant G5} & 93.7 & 97 & 101 & 105 & 110 & 116 & 121 & 125 & 129 & 133 & 135 \\ \hline
\end{tabular}
\end{table*}
\section{Performance Evaluations}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{./graph/random/varyingThreshold/Energy_consumption.pdf}
\caption{Energy consumption}
\label{fig:energyConsumptionVaryingThreshold}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{./graph/random/varyingThreshold/Number_of_VM_migrations.pdf}
\caption{VM migrations}
\label{fig:VMmigrationVaryingThreshold}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.99\linewidth]{./graph/random/varyingThreshold/Number_of_active_hosts.pdf}
\caption{Number of active hosts}
\label{fig:numberActiveHostVaryingThreshold}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.99\linewidth]{./graph/random/varyingThreshold/SLA_violation_percentage.pdf}
\caption{SLAV}
\label{fig:SLAVVaryingThreshold}
\end{subfigure}
\caption[VarPerOptCom]{Performance comparison of algorithms under {\color{black}Synthetic} workloads (ratio of hosts and VMs number is \color{black}{50:50})}
\label{fig:random1.0}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{./graph/random/box_plot/Energy_consumption_box_plot.pdf}
\caption{Energy consumption}
\label{fig:boxplotEnergyConsumption}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{./graph/random/box_plot/Number_of_VM_migrations_box_plot.pdf}
\caption{VM migrations}
\label{fig:boxplotVMmigration}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.99\linewidth]{./graph/random/box_plot/Number_of_active_hosts_box_plot.pdf}
\caption{Number of active hosts}
\label{fig:boxplotActiveHost}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.99\linewidth]{./graph/random/box_plot/SLA_violation_percentage_box_plot.pdf}
\caption{SLAV}
\label{fig:boxplotSLAV}
\end{subfigure}
\caption[VarPerOptCom]{Performance comparison of algorithms under {\color{black}Synthetic} workloads with lower utilization threshold as 0.5 (ratio of hosts and VMs number is \color{black}{50:50})}
\label{fig:boxplotRandom1.0}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{./graph/random/ratio/Energy_consumption.pdf}
\caption{Energy consumption}
\label{fig:energyConsumptionVaryingThresholdRandomRatio}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{./graph/random/ratio/Number_of_VM_migrations.pdf}
\caption{VM migrations}
\label{fig:VMmigrationVaryingThresholdRandomRatio}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.99\linewidth]{./graph/random/ratio/Number_of_active_hosts.pdf}
\caption{Number of active hosts}
\label{fig:numberActiveHostVaryingThresholdRandomRatio}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.99\linewidth]{./graph/random/ratio/SLA_violation_percentage.pdf}
\caption{SLAV}
\label{fig:SLAVVaryingThresholdRandomRatio}
\end{subfigure}
\caption[VarPerOptCom]{Performance comparison of algorithms under {\color{black}Synthetic} workloads ({\color{black}{setting hosts number = 50 and}} varying ratios of hosts and VMs number are 1:1, 1:1.25, 1:1.5, and 1:1.75)}
\label{fig:randomRatio}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{./graph/planetlab/varyingThreshold/Energy_consumption.pdf}
\caption{Energy consumption}
\label{fig:energyConsumptionVaryingThresholdPlanet}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{./graph/planetlab/varyingThreshold/Number_of_VM_migrations.pdf}
\caption{VM migrations}
\label{fig:VMmigrationVaryingThresholdPlanet}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.99\linewidth]{./graph/planetlab/varyingThreshold/Number_of_active_hosts.pdf}
\caption{Number of active hosts}
\label{fig:numberActiveHostVaryingThresholdPlanet}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.99\linewidth]{./graph/planetlab/varyingThreshold/SLA_violation_percentage.pdf}
\caption{SLAV}
\label{fig:SLAVVaryingThresholdPlanet}
\end{subfigure}
\caption[VarPerOptCom]{Performance comparison of algorithms under PlanetLab workloads}
\label{fig:PerformanceVaryingThresholdPlanetlab}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{./graph/planetlab/box_plot/Energy_consumption_box_plot.pdf}
\caption{Energy consumption}
\label{fig:boxplotEnergyConsumptionPlanet}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{./graph/planetlab/box_plot/Number_of_VM_migrations_box_plot.pdf}
\caption{VM migrations}
\label{fig:boxplotVMmigrationPlanet}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.99\linewidth]{./graph/planetlab/box_plot/Number_of_active_hosts_box_plot.pdf}
\caption{Number of active hosts}
\label{fig:boxplotActiveHostPlanet}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.99\linewidth]{./graph/planetlab/box_plot/SLA_violation_percentage_box_plot.pdf}
\caption{SLAV}
\label{fig:boxplotSLAVPlanet}
\end{subfigure}
\caption[VarPerOptCom]{Performance comparison of algorithms under PlanetLab workloads with lower utilization threshold as 0.5}
\label{fig:PerformanceVaryingThreshold_boxplot_Planetlab}
\end{figure*}
\begin{figure*}[t]
\centering
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{./graph/planetlab/ratio/Energy_consumption.pdf}
\caption{Energy consumption}
\label{fig:energyConsumptionVaryingThresholdPlanetRatio}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{./graph/planetlab/ratio/Number_of_VM_migrations.pdf}
\caption{VM migrations}
\label{fig:VMmigrationVaryingThresholdPlanetRatio}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.99\linewidth]{./graph/planetlab/ratio/Number_of_active_hosts.pdf}
\caption{Number of active hosts}
\label{fig:numberActiveHostVaryingThresholdPlanetRatio}
\end{subfigure}
\begin{subfigure}{0.24\linewidth}
\includegraphics[width=0.99\linewidth]{./graph/planetlab/ratio/SLA_violation_percentage.pdf}
\caption{SLAV}
\label{fig:SLAVVaryingThresholdPlanetRatio}
\end{subfigure}
\caption[VarPerOptCom]{Performance comparison of algorithms under PlanetLab workloads (varied number of servers with 800, 900, 1000 and 1100)}
\label{fig:planetlabRatio}
\end{figure*}
In this section, we compare the performance of the investigated algorithms by conducting experiments for the five well-known algorithms on different performance metrics and two traces. We also include one algorithm provided in CloudSim as a baseline, the Interquartile Range (IQR) \cite{Calheiros2011a}, which manages a dynamic threshold for overload detection.
\subsection{Experiments Settings}
For host capacity, each host has two CPU cores with millions of instructions per second (MIPS) of 1880 or 2660, 4 GB of RAM and 1 TB of storage. We use the power model derived from HP ProLiant ML110 G4 or HP ProLiant ML110 G5, which has been used in \cite{beloglazov2012energy} and \cite{ACW2015}. According to the model, the energy consumption of the host with different utilization is shown in Table \ref{tab:power_model}. For VM configurations, we consider four types of VMs with MIPS of 500, 1000, 1500 and 2500, and the VMs number of each type is randomly generated. The detailed specifications of hosts and VMs are shown in Table \ref{tab:hsotVMcapa}.
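Simulators in the CloudSim family typically turn the measured values of Table \ref{tab:power_model} into a continuous power model by linear interpolation between the 10\% utilization steps, and then integrate over time to obtain energy. A sketch using the HP ProLiant G4 row (helper names are ours):

```python
G4 = [86, 89.4, 92.6, 96, 99.5, 102, 106, 108, 112, 114, 117]  # Watts at 0%..100%

def power_watts(util, table=G4):
    """Linearly interpolate the power draw for a CPU utilization in [0, 1]."""
    if util <= 0:
        return table[0]
    if util >= 1:
        return table[-1]
    idx = int(util * 10)    # lower 10%-step of the table
    frac = util * 10 - idx  # position between the two adjacent steps
    return table[idx] + frac * (table[idx + 1] - table[idx])

def energy_kwh(utils, step_seconds=300):
    """Total energy for a series of 5-minute utilization samples (J -> kWh)."""
    joules = sum(power_watts(u) * step_seconds for u in utils)
    return joules / 3.6e6
```

For instance, one hour (twelve 5-minute samples) at full load on the G4 table yields $117\,\mathrm{W} \times 3600\,\mathrm{s} = 0.117$ kWh.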
\begin{table}
\centering
\caption{Host / VM Types and Capacity}
\resizebox{0.48\textwidth}{!}{%
\label{tab:hsotVMcapa}
\begin{tabular}{|l|l|l|l|l|l|}
\hline Name & CPU (MIPS) & Cores & Memory & Bandwidth & Storage \\
\hline
\hline Host Type 1 & 1.86 GHz & 2 & 4 GB & 1 Gbit/s & 1 TB \\
\hline Host Type 2 & 2.66 GHz & 2 & 4 GB & 1 Gbit/s & 1 TB \\
\hline
\hline VM Type 1 & 2.5 GHz & 1 & 870 MB & 100 Mbit/s & 1 GB \\
\hline VM Type 2 & 2.0 GHz & 1 & 1740 MB & 100 Mbit/s & 1 GB \\
\hline VM Type 3 & 1.0 GHz & 1 & 1740 MB & 100 Mbit/s & 1 GB \\
\hline VM Type 4 & 0.5 GHz & 1 & 613 MB & 100 Mbit/s & 1 GB \\
\hline
\end{tabular}
}
\end{table}
We first carry out several experiments under {\color{black}synthetic} workloads, and then, to simulate a real cloud data center, we utilize the real workload data of the CoMon project provided by PlanetLab \cite{Park2006}. The workload includes CPU utilization data of thousands of VMs allocated to servers located in more than 500 places around the world. The data is collected every five minutes over 10 days, representing the workload of a real cloud environment.
We select four metrics to evaluate the performance of these algorithms: energy consumption, SLA violation percentage (SLAV), VM migration times and the number of active hosts. We choose these metrics as they have been widely adopted and are used by more than one algorithm, as discussed in Section V. Due to the page limitation, we evaluate SLAV to represent the SLA violations instead of the average SLA violation.
\subsection{Implementation Details}
The configurations in EcoCloud include a shape parameter $p$, and $\alpha$ and $\beta$ in its probability function. We set $p = 3$, $\alpha = \beta = 0.25$ in our experiments, {\color{black}following the configurations of the original paper.} For LOAD, the reward and penalty parameters for updating the learning automaton, $a$ and $b$, are both configured to 0.1. We utilize the parameters from the original paper of ACS; however, we do not have a training data set to estimate the initial pheromone level. Therefore, we set $M$ to the total number of selected VMs, and $P$ is configured as the number of under-utilized servers. The parameters of MBFD and GRANITE are configured as in the original papers.
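To illustrate the role of EcoCloud's shape parameter $p$, the following is a generic Bernoulli-trial acceptance function of the kind the algorithm uses: the probability of a host accepting a VM grows with utilization, peaks below the threshold $T$, and drops to zero at $T$. This is an illustrative stand-in, not EcoCloud's exact formula:

```python
def accept_probability(util, p=3, T=0.9):
    """Illustrative Bernoulli-trial acceptance probability: proportional to
    util**p * (T - util), normalized so its maximum (at util = p*T/(p+1))
    equals 1. Higher p pushes acceptance toward well-utilized hosts."""
    if util >= T:
        return 0.0  # host at or above threshold never accepts
    m = (p * T / (p + 1)) ** p * (T / (p + 1))  # max of util**p * (T - util)
    return util ** p * (T - util) / m
```

With $p = 3$ and $T = 0.9$, acceptance peaks at a utilization of $0.675$, so consolidation naturally concentrates VMs on moderately loaded hosts while sparing nearly full ones.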
\subsection{{\color{black}Synthetic} Workloads}
To show the performance under synthetic workloads, the first part of the evaluation focuses on experiments under {\color{black}synthetic} workloads, which have also been used to evaluate MBFD and ACS.
We first generate the same number of hosts and VMs and vary the lower utilization threshold, which defines when a host is under-utilized, from 0.1 to 0.5 in increments of 0.1. The higher utilization threshold is set 0.4 above the lower utilization threshold. {\color{black}We follow the configuration of the utilization threshold interval in \cite{beloglazov2012energy}}. Under each configuration, we randomly generate workloads to run the experiments and repeat them 10 times. Both the number of hosts and the number of VMs are set to 50.
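The two static thresholds drive all consolidation decisions in these experiments; a minimal sketch of the detector (names are ours):

```python
def classify_host(util, lower=0.5, upper=0.9):
    """Static-threshold host state used to trigger consolidation decisions:
    under-utilized hosts are emptied and switched off, over-utilized hosts
    offload VMs, and hosts in between are left alone."""
    if util < lower:
        return "under-utilized"
    if util > upper:
        return "over-utilized"
    return "normal"
```

In the experiments, `upper` is always `lower + 0.4`, matching the threshold interval described above.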
Fig. \ref{fig:random1.0} shows the average results of the experiments under {\color{black}synthetic} workloads.
Based on the results, higher values of the lower utilization threshold enable all the algorithms to achieve better energy efficiency and {\color{black}fewer VM migrations}, at the cost of more SLAV.
To be more specific, ACS performs best on energy consumption, reducing power consumption by 21.1\% compared with MBFD. EcoCloud requires fewer than 600 migrations, much fewer than the other algorithms. ACS achieves the best energy efficiency because it keeps the lowest number of active hosts. For the SLA violation comparison, the algorithms perform better when the lower threshold is set to 0.1.
As setting the lower utilization threshold to 0.5 achieves the best performance for all the algorithms, we fix the lower utilization threshold at 0.5 and repeat the experiments 10 times to show the variance of the results{\color{black}, which are shown in Fig. \ref{fig:boxplotRandom1.0}.} We can notice that ACS achieves the best energy consumption with 31.2 kWh, and EcoCloud reduces the VM migrations to 420.5 on average.
To investigate the impacts of different numbers of hosts and VMs, we also fix the lower utilization threshold at 0.5, but configure the ratio of the number of servers to the number of VMs to be 1:1, 1:1.25, 1:1.5 and 1:1.75 respectively. For each ratio, the experiments are repeated 10 times, and the results are shown in Fig. \ref{fig:randomRatio}. We can observe that energy consumption increases significantly when there are more VMs. ACS is the most energy efficient algorithm and consumes 31.2 kWh and 31.5 kWh when the ratio is 1:1 and 1:1.75 respectively, about 21.2-34.4\% less than MBFD. EcoCloud achieves the best results in VM migrations. Although the investigated algorithms save more energy than IQR, they incur more SLAV.
\subsection{PlanetLab Workloads}
To show the algorithm performance under realistic traces, we also conduct experiments under PlanetLab workloads.
We vary the lower CPU utilization threshold from 0.1 to 0.5, and the interval between the lower threshold and the higher threshold is fixed at 0.4. The number of hosts is configured as 800 and the number of VMs is retrieved from the PlanetLab traces. The average results over 10 runs of experiments are shown in Fig. \ref{fig:PerformanceVaryingThresholdPlanetlab}, where each experiment uses a one-day PlanetLab trace.
Fig. \ref{fig:energyConsumptionVaryingThresholdPlanet} shows the energy consumption comparison, in which the power consumption {\color{black}decreases} as the lower utilization threshold increases. ACS consumes the least energy among the compared algorithms, with 148.7 kWh when the lower utilization threshold is 0.4. EcoCloud consumes more energy than MBFD because it keeps more servers active.
The numbers of VM migrations are compared in Fig. \ref{fig:VMmigrationVaryingThresholdPlanet}. EcoCloud achieves an apparent reduction in this metric and needs only 1251 migrations on average when the lower threshold is set to 0.5. LOAD also improves this metric, triggering 12.4\% fewer migrations than MBFD. GRANITE obtains better results as the lower utilization threshold increases.
The number of active hosts comparison is shown in Fig. \ref{fig:numberActiveHostVaryingThresholdPlanet}, and ACS can shut down the maximum number of hosts.
Fig. \ref{fig:SLAVVaryingThresholdPlanet} shows the SLA violation percentage comparison, and the SLA violation percentage increases as the lower utilization threshold increases. As the figure shows, LOAD and ACS perform worse on this metric compared with MBFD and EcoCloud. EcoCloud maintains a low SLA violation percentage and thus ensures the quality of services.
We set the lower utilization threshold to 0.5 and the higher utilization threshold to 0.9, and repeat the experiments 10 times to show the variance of the performance results, as shown in Fig. \ref{fig:PerformanceVaryingThreshold_boxplot_Planetlab}.
In Fig. {\ref{fig:boxplotEnergyConsumptionPlanet}}, we can notice that ACS and LOAD perform better in energy consumption than the other baselines. The average energy consumption of ACS is 125.1 kWh, while MBFD, EcoCloud, and GRANITE consume more than 180 kWh.
Fig. {\ref{fig:boxplotActiveHostPlanet}} shows the average results for the number of active hosts, where ACS achieves the best result with 48 on average.
Fig. {\ref{fig:boxplotVMmigrationPlanet}} demonstrates the comparison of VM migrations; EcoCloud focuses on optimizing this metric and reduces the number of VM migrations to fewer than 2000. Compared with MBFD and GRANITE, LOAD also reduces the VM migrations.
The SLA violation percentage comparison is presented in Fig. {\ref{fig:boxplotSLAVPlanet}}; EcoCloud ensures SLA best with $0.01\times 10^{-4}$. Although ACS saves more energy, it performs worst in reducing SLA violations with $0.71\times 10^{-4}$.
To evaluate the performance with different numbers of hosts, we also vary the number of hosts from 800 to 1100 in our experiments, as shown in Fig. \ref{fig:planetlabRatio}. As the number of hosts increases, ACS always consumes the minimum power, and EcoCloud triggers the fewest VM migrations. ACS keeps only 48 active hosts and thus leads to the maximum SLAV of $0.72\times 10^{-4}$.
In conclusion, under both workloads ACS achieves the best energy efficiency in most cases, as it also keeps the least number of active hosts. EcoCloud reduces VM migrations more than the other algorithms and triggers the fewest SLA violations under PlanetLab workloads. Meta-heuristic algorithms, e.g. ACS, achieve better energy performance than heuristic algorithms because they search a larger solution space. As the pioneer of VM consolidation-based energy efficient algorithms for cloud data centers, MBFD has been widely accepted as it is easy to implement and efficient. Although more recent algorithms outperform MBFD, its main idea has been adopted by, for instance, GRANITE and LOAD, where holistic energy and SLA violations are optimized respectively. GRANITE is suggested for cases that aim to optimize more energy-consuming components than the hosts alone. EcoCloud suits scenarios where VM migrations should be minimized, e.g. when the network is the bottleneck of the whole system, as it reduces VM migrations significantly. LOAD performs well when future resource usage can be accurately predicted, so it suits systems with adequate historical resource usage data or resource usage with a typical pattern, such as Wikipedia traces.
\section{Conclusions and Future Work}
In this paper, we present an investigation of five state-of-the-art energy efficient algorithms based on VM consolidation for cloud data centers. We discuss and compare the algorithms from multiple perspectives, including core ideas, architectures, modelling and algorithm complexity. We also implement these algorithms in CloudSim and configure the parameters as reported in the original papers. The results of experiments under both {\color{black}synthetic} and real traces show that these algorithms can efficiently reduce energy consumption while balancing the trade-offs between energy and other metrics, such as VM migrations and SLA violations.
Based on our investigation, some future research directions are identified as follows:
\begin{itemize}
\item Utilization threshold settings have an important impact on energy consumption; thus, self-adaptive threshold configuration approaches can be further investigated.
\item Most VM consolidation-based energy efficient algorithms consider the CPU the bottleneck resource and the key factor in their power models. Resource usage and energy consumption related to other resources, e.g. the network, can be further considered.
\item Meta-heuristic algorithms have shown better performance in improving energy consumption than traditional heuristic algorithms; however, their time cost should be reduced, which can be achieved by reducing the solution space.
\item Evaluations can be conducted under recently published workloads, such as Google traces and Alibaba traces.
\end{itemize}
\section*{Acknowledgment}
This work is supported by the SIAT Innovation Program for Excellent Young Researchers (No. 55Y05504).
\bibliographystyle{IEEEtran}
\section{Introduction}
\vspace{-0.05in}
\input{sections/010introduction}
\vspace{-0.1in}
\section{Related Work}
\vspace{-0.05in}
\input{sections/020relwork}
\vspace{-0.1in}
\section{Theoretical Analysis}
\vspace{-0.05in}
\input{sections/030theory}
\vspace{-0.1in}
\section{Experimental Analysis}
\vspace{-0.05in}
\input{sections/040experiments}
\vspace{-0.1in}
\section{Conclusion}
\vspace{-0.05in}
\input{sections/050conclusion}
\pagebreak
\vspace{-0.1in}
\subsection{Types of Recourses}
Before proceeding further, we adopt the classification taxonomy proposed by \cite{pred-multiplicity}, which broadly classifies recourse generation techniques into two types: those that promote greater sparsity (\textbf{sparse counterfactuals} \cite{wachter2018a, laugel2017inverse, looveren2019interpretable, FACE, Tolomei_2017}), and those that promote greater support from the given data distribution (\textbf{data support counterfactuals} \cite{Ustun_2019, Pawelczyk_2020, laugel2018defining, joshi2019realistic, MACE, karimi2020algorithmic1, karimi2020algorithmic2}). For a given counterfactual explanation $\mathcal{S'}$ with $\mathcal{B}(\mathcal{S'})=+1$ and an arbitrary cost function $d(\mathcal{S}, \mathcal{S'})$ (usually set to $d(\mathcal{S}, \mathcal{S'}) = ||\mathcal{S}-\mathcal{S'}||_p$, $p \in \{1, 2\}$), these correspond to counterfactual explanations defined as $\arg\min_{\mathcal{S'}} {d(\mathcal{S'}, \mathcal{S})}$ and $\arg\min_{\mathcal{S'}: P_{data}(\mathcal{S'})>0} {d(\mathcal{S'}, \mathcal{S})}$ respectively. It is further shown in \cite{pred-multiplicity} that there necessarily exists a tradeoff between these two notions of cost, and that there is an upper bound on the cost under predictive multiplicity. However, none of these works explicitly focus on analyzing whether the recourses generated by state-of-the-art algorithms are robust to distribution shifts.
}
\subsection{The Cost vs Invalidation Trade-off}
\vspace{-0.07in}
To show that there is a trade-off between cost and invalidation percentage, we need to determine that recourses with lower costs are at high risk of invalidation, and vice versa. Our analysis considers sparse counterfactual style recourses \cite{Pawelczyk_2020}, taking cost to be the common Euclidean notion of distance. Our notation is illustrated in figure \ref{fig:thm} below, defining $x$ as the data point, and $x'_1$ and $x'_2$ as two potential recourses. We consider the model $\mathcal{M}_2$ to be a perturbation of model $\mathcal{M}_1$ by some arbitrary magnitude $\delta_m$. Distances (measured according to the generic cost metric $d$) are measured along the vectors $x \rightarrow x'_1$ and $x \rightarrow x'_2$: $q_1$ and $q_2$ from $x$ to $\mathcal{M}_1$, and $l_1$ and $l_2$ from $\mathcal{M}_1$ to $x'_1$ and $x'_2$ respectively. We similarly have $q'_1$ and $q'_2$ from $x$ to $\mathcal{M}_2$, and $l'_1$ and $l'_2$ from $\mathcal{M}_2$ to $x'_1$ and $x'_2$ respectively. The cost function with the L2 norm, $d(x,x') = ||x-x'||_2$, is very common for sparse counterfactuals \cite{wachter2018a}. Lastly, we denote the probability of invalidation for an arbitrary recourse $x'$ under model $\mathcal{M}_2$ as $Q_{x'} = \frac{1 - \mathcal{M}_2(x')}{2}$. Note that by the definition of recourse, $\mathcal{M}_1(x')=+1$, while $\mathcal{M}_2(x')=+1$ for valid recourses and $\mathcal{M}_2(x')=-1$ for invalidated ones.
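The notation can be made concrete with a small numeric example (the linear models and all values below are ours, chosen purely for illustration): a cheap recourse that barely crosses $\mathcal{M}_1$'s boundary is invalidated by a shifted $\mathcal{M}_2$, while a costlier one survives.

```python
import numpy as np

def cost(x, x_prime):
    # Sparse-counterfactual cost: Euclidean distance d(x, x') = ||x - x'||_2
    return float(np.linalg.norm(np.asarray(x) - np.asarray(x_prime)))

def predict(w, b, x):
    # Linear classifier returning labels in {-1, +1}
    return 1 if np.dot(w, x) + b >= 0 else -1

# M1 and a perturbed M2 whose boundary is shifted by delta_m along the normal
w, b1, delta_m = np.array([1.0, 1.0]), -2.0, 0.5
b2 = b1 - delta_m              # M2's boundary moves into M1's positive region

x = np.array([0.0, 0.0])           # negatively classified point seeking recourse
x_cheap = np.array([1.05, 1.05])   # recourse just across M1's boundary
x_far = np.array([1.6, 1.6])       # costlier recourse, deeper in the + region

assert predict(w, b1, x_cheap) == predict(w, b1, x_far) == 1  # valid under M1
invalidated = [predict(w, b2, xp) == -1 for xp in (x_cheap, x_far)]
```

Here `invalidated` comes out as `[True, False]`: the cheaper recourse `x_cheap` is flipped by the shifted model, the costlier `x_far` is not.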
\begin{figure}[ht]
\vspace{-0.1in}
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{sections/images/theorem.png}}
\caption{Abstract diagrammatic representation of $\mathcal{M}_1$ and $\mathcal{M}_2$, with a data point $\mathcal{S}$ represented by feature vector $x$, and two potential feature vectors $x'_1$ and $x'_2$ denoting possible recourses $\mathcal{S'}$.}
\label{fig:thm}
\end{center}
\vspace{-0.1in}
\end{figure}
\begin{theorem}
\label{thrm1}
If we have recourses $x'_1$ and $x'_2$ for a data point $x$ and model $\mathcal{M}_1$, such that $d(x,x'_1) \leq d(x,x'_2)$, then the respective expected probabilities of invalidation under model $\mathcal{M}_2$, $Q_{\mathbb{E}\left[x'_1\right]}$ and $Q_{\mathbb{E}\left[x'_2\right]}$, follow $Q_{\mathbb{E}\left[x'_1\right]} \geq Q_{\mathbb{E}\left[x'_2\right]}$.
\end{theorem}
\vspace{-0.1in}
\begin{hproof}
We know from our construction that $d(x,x'_1)=q_1+l_1$ and $d(x,x'_2)=q_2+l_2$, and also that $l'_1=l_1 \pm \delta_m$ and $l'_2=l_2 \pm \delta_m$. We assume that the small random perturbation $\delta_m$ between $\mathcal{M}_1$ and $\mathcal{M}_2$ has expected value $\mathbb{E}[\delta_m] = 0$. Further, we also assume that both $q_1$ and $q_2$ are random variables drawn from the same unknown distribution, with some arbitrary expected value $\bar{q} = \mathbb{E}\left[q_1 \right] = \mathbb{E}\left[q_2 \right]$.
\vspace{-0.2in}
\begin{align}
d(x,x'_1) &\leq d(x,x'_2) \\
q_1 + l_1 &\leq q_2 + l_2 \\
\mathbb{E} \left[ q_1 + l_1 \right] &\leq \mathbb{E} \left[ q_2 + l_2 \right] \\
\mathbb{E} \left[ q_1 \right] + \mathbb{E} \left[ l_1 \right] &\leq \mathbb{E} \left[ q_2 \right] + \mathbb{E} \left[ l_2 \right] \\
\bar{q} + \mathbb{E} \left[l_1 \right] &\leq \bar{q} + \mathbb{E} \left[l_2 \right] \\
\mathbb{E} \left[l_1 \right] &\leq \mathbb{E} \left[l_2 \right] \\
\mathbb{E} \left[l_1 \right] \pm 0 &\leq \mathbb{E} \left[l_2 \right] \pm 0 \\
\mathbb{E} \left[l_1 \right] \pm \mathbb{E} \left[\delta_m \right] &\leq \mathbb{E} \left[l_2 \right] \pm \mathbb{E} \left[\delta_m \right] \\
\mathbb{E} \left[l_1 \pm \delta_m \right] &\leq \mathbb{E} \left[l_2 \pm \delta_m \right] \\
\mathbb{E} \left[l'_1 \right] &\leq \mathbb{E} \left[l'_2 \right]
\end{align}
\vspace{-0.2in}
To capture the notion that models are less confident in their predictions on points close to their decision boundaries, we use the bijective relationship between $x'$ and $l'$ to construct a function $g(l') = P\left[ \mathcal{M}_2(x') = +1 \right] = 1 - Q_{x'}$. By construction, $g$ is a monotonically increasing function with $g(-\infty) = 0$, $g(-\delta_m) \leq 0.5$, $g(\delta_m) \geq 0.5$, and $g(\infty) = 1$. Applying this monotonically increasing function, we can continue our derivation as follows:
\begin{align*}
g(\mathbb{E}\left[l'_1\right]) &\leq g(\mathbb{E}\left[l'_2\right]) \\
1 - Q_{\mathbb{E}\left[x'_1\right]} &\leq 1 - Q_{\mathbb{E}\left[x'_2\right]} \\
Q_{\mathbb{E}\left[x'_1\right]} &\geq Q_{\mathbb{E}\left[x'_2\right]}
\end{align*}
which gives us the intended result.
\end{hproof}
\begin{proposition}
The above theorem implies there is a tradeoff between recourse costs with respect to model $\mathcal{M}_1$ and recourse invalidation percentages with respect to the updated model $\mathcal{M}_2$.
\end{proposition}
\vspace{-0.15in}
\begin{hproof}
The proof of Theorem \ref{thrm1} above shows two things: \textbf{(1)} from the L.H.S.\ we see that cheaper costs $d(x,x'_1)$ imply higher invalidation chances $Q_{\mathbb{E}\left[ x'_1 \right]}$, and \textbf{(2)} from the R.H.S.\ we see that more expensive costs $d(x,x'_2)$ imply lower invalidation chances $Q_{\mathbb{E}\left[ x'_2 \right]}$. Taking contrapositives of these first two statements gives us the third and fourth:
\vspace{-0.07in}
\begin{enumerate}
\item cheaper costs $\implies$ higher invalidation
\vspace{-0.1in}
\item more expensive costs $\implies$ lower invalidation
\vspace{-0.1in}
\item lower invalidation $\implies$ more expensive costs
\vspace{-0.1in}
\item higher invalidation $\implies$ cheaper costs
\end{enumerate}
\vspace{-0.07in}
Taken together, these statements complete our proof and show that an increase or decrease in recourse cost must necessarily be accompanied by a decrease or increase in the probability of invalidation respectively, and vice versa. Therefore, we can say that \textit{there exists a tradeoff between recourse costs and invalidation probabilities.}
\end{hproof}
These results establish that recourses with lower costs are more likely to be invalidated by the updated model $\mathcal{M}_2$. This is a critical result that exposes a fundamental flaw in the design of state-of-the-art recourse finding algorithms, since their objective formulations explicitly minimize these costs. By doing so, these algorithms essentially generate recourses that are more likely to be invalidated when the model is updated.
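The monotone function $g$ from the proof can also be simulated directly: sampling zero-mean boundary shifts $\delta_m$ and counting how often a recourse at margin $l$ past $\mathcal{M}_1$'s boundary lands on the wrong side of $\mathcal{M}_2$ yields an invalidation rate that decreases in $l$ (the Gaussian shift model is an illustrative assumption, not part of the theorem):

```python
import random

def invalidation_rate(l, sigma=1.0, trials=20000, seed=0):
    """Fraction of zero-mean Gaussian boundary shifts delta_m that exceed the
    margin l, i.e. how often a recourse at normal distance l past M1's
    boundary is invalidated by the shifted model M2."""
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, sigma) > l for _ in range(trials)) / trials

# Costlier recourses sit at larger margins and are invalidated less often.
rates = [invalidation_rate(l) for l in (0.1, 0.5, 1.0, 2.0)]
assert all(a >= b for a, b in zip(rates, rates[1:]))
```

Because the same seed is reused, each margin is evaluated on the identical sample of shifts, so the monotone decrease is exact rather than statistical.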
\vspace{-0.07in}
\subsection{Lower Bounds on Recourse Invalidation Probabilities}
\vspace{-0.07in}
In order to find general lower bounds on the probability of invalidation for recourses $\mathcal{CF}_1$ under $\mathcal{M}_2$, we first cast the counterfactual search procedure as a Markov Decision Process, and then use this observation to hypothesize about the distribution of the recourses $\mathcal{CF}_1$. We then use these distributions to derive the lower bounds on the invalidation probability.
\begin{proposition}
The search process employed by state-of-the-art counterfactual explanation based recourse generation algorithms \cite{wachter2018a} is a Markov Decision Process.
\label{mdp}
\end{proposition}
\vspace{-0.15in}
\begin{hproof}
We know that the search for counterfactual explanations consists of solving the optimization problem given by $\arg \min_{\mathcal{S'}} {d(\mathcal{S'}, \mathcal{S})}$, where sparse counterfactuals are \emph{unrestricted}, and data support counterfactuals are \emph{restricted} to the set $\{\mathcal{S'}: P_{data}(\mathcal{S'})>0\}$. The search procedure moves through the path $\mathcal{S} \rightarrow \cdots \mathcal{S"} \cdots \rightarrow \mathcal{S'}$, looping through different possible values of $\mathcal{S"}$ until $\mathcal{M}_1(\mathcal{S"})=+1$ and the cost $d(\mathcal{S},\mathcal{S"})$ is minimized, at which point the recourse $\mathcal{S'} = \mathcal{S"}$ is returned. For sparse counterfactuals, each step in this search depends only on the previous iterate, and not on the entire search path so far, that is: $P(\mathcal{S"}_{t+1} | \mathcal{S"}_t, \mathcal{S"}_{t-1}, \ldots, \mathcal{S}) = P(\mathcal{S"}_{t+1} | \mathcal{S"}_t)$. Thus, by satisfying this condition, the recourse generation technique is a Markov Decision Process.
\end{hproof}
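A minimal sketch of such a search for a linear model makes the Markov property explicit: each candidate depends only on its predecessor (the fixed step size and stopping rule are ours, simplifying the gradient-based search of \cite{wachter2018a}):

```python
import numpy as np

def sparse_counterfactual(x, w, b, step=0.05, max_iter=1000):
    """Sketch of the sparse-counterfactual search for a linear model: walk
    along the classifier normal (the steepest direction toward the +1
    region) until w.x + b flips positive. Each iterate depends only on the
    previous one -- the Markov property used in the proposition above."""
    x_t = np.asarray(x, dtype=float)
    direction = w / np.linalg.norm(w)   # normal to the decision boundary
    for _ in range(max_iter):
        if np.dot(w, x_t) + b >= 0:     # M1(x_t) = +1: recourse found
            return x_t
        x_t = x_t + step * direction    # next state depends on x_t only
    return None                         # no recourse within the budget
```

Because every update reads only `x_t`, the trajectory is a realization of a memoryless process, which is what licenses the exponential/geometric distributions in the lemma that follows.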
\begin{lemma}
If the model $\mathcal{M}_1$ is linear, then the above proposition implies that the distribution of recourses $\mathcal{CF}_1$ is either exponential or geometric, along the normal to the classifier hyperplane, for continuous or ordinal data respectively.
\label{lm:distbn}
\end{lemma}
\vspace{-0.15in}
\begin{hproof}
By Proposition \ref{mdp} above, we know that the recourse search process satisfies the Markov property, and thus the distribution of the recourses must have the memoryless property. Further, since the classifier is linear, we know that an \emph{unrestricted} recourse search, such as that of sparse counterfactual based recourses, will proceed exactly along the normal to the classifier hyperplane, in the direction of increasingly positive $\mathcal{M}_1$ classification probabilities. Lastly, we assume that the probability of the counterfactual explanation search ending at any given iteration $t$ with $\mathcal{S'} = \mathcal{S"}_t$ is a constant $\rho$. Thus, for continuous data, the distribution of recourses is exponential with $\lambda = \rho$, and for ordinal data the distribution is geometric with parameter $p = \rho$.
\end{hproof}
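The stopping argument can be checked with a small Monte Carlo sketch (ours, with an assumed stopping probability $\rho$): simulating one independent Bernoulli($\rho$) stopping decision per iteration, the stopping time comes out memoryless, i.e. geometric, whose continuum analogue is the exponential law with $\lambda = \rho$:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n, cap = 0.2, 100_000, 120      # assumed stopping probability; n searches

# Simulate the search directly: an independent Bernoulli(rho) stopping decision
# at every iteration, without assuming the resulting distribution.
stops = rng.random((n, cap)) < rho
steps = stops.argmax(axis=1) + 1     # iteration at which each search terminates

# Memorylessness: P(T > s + t | T > s) should equal P(T > t).
s, t = 3, 4
p_cond = np.mean(steps[steps > s] > s + t)
p_marg = np.mean(steps > t)
```

The empirical mean is close to $1/\rho$ and the conditional tail matches the marginal tail, as the geometric law requires; shrinking the iteration step to an infinitesimal yields the exponential distribution.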
\vspace{-0.1in}
\begin{figure}[ht]
\vspace{-0.1in}
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{sections/images/new-theorem.png}}
\vspace{-0.1in}
\caption{Parallel model perturbation of arbitrary magnitude $\delta_m$ between linear models $\mathcal{M}_1$ and $\mathcal{M}_2$. The range of the data is represented by manifold $\Psi$.}
\label{fig:thm2}
\end{center}
\vspace{-0.1in}
\end{figure}
\begin{theorem}
For a given linear model $\mathcal{M}_1$ with recourses $\mathcal{CF}_1$, and a parallel linear model $\mathcal{M}_2$ with known constant perturbation $\delta_m$, we can find the exact invalidation probabilities as a function of the distribution of the recourses $\mathcal{CF}_1$.
\label{thrm3}
\end{theorem}
\vspace{-0.15in}
\begin{hproof}
Let the recourses be distributed according to some unknown arbitrary distribution with density $f(l)$, where $l \in (0, \infty)$ is the normal distance between $\mathcal{M}_1$ and $\mathcal{S'}$. Then, as is clear from the invalid region between the models $\mathcal{M}_1$ and $\mathcal{M}_2$ illustrated in figure \ref{fig:thm2}, the probability of invalidation of the recourses $\mathcal{S'}$ is $Q_{\mathcal{S'}} = \frac{1}{\Psi} \int_{\Psi} \left[ \int_0^{\delta_m} f(l) \,dl \right] \, d\psi $. Here $\psi$ is an arbitrary element of the decision boundary, within the data manifold $\Psi$.
We can now combine this result with known distributions from Lemma \ref{lm:distbn} to get:
\begin{align}
Q_{\mathcal{S'}} &= \frac{1}{\Psi} \int_{\Psi} \left[ \int_0^{\delta_m} f(l) \, dl \right] \, d\psi \\
&= \frac{1}{\Psi} \int_{\Psi} \left[ \int_0^{\delta_m} \rho e^{-\rho l} \, dl \right] \, d\psi \\
&= \frac{1}{\Psi} \int_{\Psi} \left[ 1 - e^{-\rho \delta_m} \right] \, d\psi \\
&= \frac{1}{\Psi} \int_{\Psi} \, d\psi \cdot \left[ 1 - e^{-\rho \delta_m} \right] \\
&= 1 - e^{-\rho \delta_m}
\end{align}
Similarly, for ordinal data this would be $Q_{\mathcal{S'}} = \frac{1}{\Psi} \int_{\Psi} \left[\sum_{l=1}^{\delta_m} (1-\rho)^{l-1}\cdot\rho \right] \, d\psi = 1 - (1-\rho)^{\delta_m}$. Thus, we have characterised the exact invalidation probabilities for parallel linear models $\mathcal{M}_1$ and $\mathcal{M}_2$, with known distance $\delta_m$.
\end{hproof}
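The continuous-data closed form can be spot-checked by Monte Carlo (a sketch under the same assumptions: exponential recourse distances with rate $\rho$, and a parallel shift of magnitude $\delta_m$; the numerical values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
rho, delta_m, n = 1.5, 0.4, 500_000          # illustrative rate and shift magnitude

# Normal distances of the recourses from M1 (exponential, by the lemma above).
l = rng.exponential(1.0 / rho, size=n)

# A recourse is invalidated iff it lies in the slab 0 <= l < delta_m swept out
# between the parallel decision boundaries of M1 and M2.
q_mc = np.mean(l < delta_m)
q_closed = 1.0 - np.exp(-rho * delta_m)
```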
We now turn to the case of classifiers with non-linear decision boundaries.
\begin{theorem}
For a given nonlinear model $\mathcal{M}_1$ with recourses $\mathcal{CF}_1$, and a parallel nonlinear model $\mathcal{M}_2$ with known constant perturbation $\delta_m$, the lower bound on the invalidation probabilities is achieved exactly when both models $\mathcal{M}_1$ and $\mathcal{M}_2$ are linear.
\end{theorem}
\vspace{-0.15in}
\begin{hproof}
We consider a piecewise linear approximation of the nonlinear models, with an arbitrary degree of precision. At each point in the classifier decision boundary, we consider the piecewise linear approximation to make an angle $\theta$ with a hypothetical hyperplane in the data manifold $\Psi$. We then proceed identically as in the proof for theorem \ref{thrm3} above, with $Q_{\mathcal{S'}} = \frac{1}{\Psi \cos \theta} \int_{\Psi} \left[ \int_0^{\delta_m} f(l) \,dl \right] \, d\psi $, where the extra $\cos \theta$ term reflects that the models are locally inclined at an angle $\theta$ from the hyperplane $\psi$.
It is easy to see that $Q_{\mathcal{S'}}$ is minimized when $\theta = 0$ everywhere, since $1/\cos \theta \geq 1$ with equality only at $\theta = 0$; equivalently, $\frac{dQ_{\mathcal{S'}}}{d\theta} \propto \tan \theta$ vanishes only at $\theta = 0$. If each element of the piecewise linear approximation makes a constant angle of $0$ with the models, then the models themselves must be linear. Thus, the lower bound on invalidation probability for non-linear models must exactly be the invalidation probability for linear models, given the same data (manifold).
\end{hproof}
\vspace{-0.1in}
Combining the last two results, for any arbitrary model $\mathcal{M}_1$, we know that another model $\mathcal{M}_2$ perturbed in parallel with magnitude $\delta_m$ would cause invalidation with the following lower bounds: $Q_{\mathcal{S'}} \geq 1 - e^{-\rho \delta_m}$ for continuous data, and $Q_{\mathcal{S'}} \geq 1 - (1-\rho)^{\delta_m}$ for ordinal data.
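For convenience, both bounds can be wrapped in a small helper (the function name and signature are ours, for illustration):

```python
import math

def invalidation_lower_bound(rho, delta_m, ordinal=False):
    """Lower bound on the invalidation probability Q_{S'} for a parallel model
    perturbation of magnitude delta_m, given the stopping parameter rho.
    ordinal=False: continuous features (exponential recourse distances);
    ordinal=True:  ordinal features (geometric recourse distances)."""
    if ordinal:
        return 1.0 - (1.0 - rho) ** delta_m
    return 1.0 - math.exp(-rho * delta_m)
```

Both bounds vanish at $\delta_m = 0$ and increase monotonically towards 1, so even a modest parallel shift invalidates a non-trivial fraction of recourses.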
\subsection{Temporal Shifts}
\vspace{-0.07in}
\label{sec:temporal}
To examine the effects of temporal data distribution shifts on recourse validity, we perform experiments in the domain of criminal justice. We have access to two datasets, collected in 1978 ($\mathcal{D}_1$) and 1980 ($\mathcal{D}_2$) respectively \cite{lakkaraju2016interpretable}. These datasets contain demographic features such as race, sex, age, time-served, and employment, and a target attribute corresponding to bail decisions. We reserve 10\% of both $\mathcal{D}_1$ and $\mathcal{D}_2$ for validation, to perform sanity checks on the accuracies of our models $\mathcal{M}_1$ and $\mathcal{M}_2$. We then follow the experimental setup described in figure \ref{fig:setup}, for various model types: Logistic Regression (\textbf{LR}), Random Forests (\textbf{RF}), Gradient Boosted Trees (\textbf{XGB}), Linear Support Vector Machines (\textbf{SVM}), a small Neural Network with hidden layers = [10, 10, 5] (\textbf{DNN (s)}), and a larger Neural Network with hidden layers = [20, 10, 10, 10, 5] (\textbf{DNN (l)}). The models are all treated as binary classification black-boxes used to predict whether or not a person would be granted bail. For those people that are denied bail, i.e., $\mathcal{M}_1(\mathcal{D}_1) = -1$, we provide counterfactual explanation based recourses $\mathcal{CF}_1$ and test whether these individuals are granted bail by the updated prediction model $\mathcal{M}_2$. Our results are summarised in table \ref{tab:temporal} below.
\begin{table}[!htp]\centering
\vspace{-0.1in}
\scriptsize
\begin{tabular}{lc|rrrrr}\toprule
\textbf{Algorithm} &\textbf{Model} &\textbf{M1 acc} &\textbf{M2 acc} &\textbf{CF1 Size} &\textbf{Invalidation \%} \\\midrule \midrule
\multirow{6}{*}{\textbf{AR}} &LR &94 &95.4 &5592 &96.6 \\
&RF &99 &99.5 &4435 &0.05 \\
&XGB &100 &99.7 &1459 &0 \\
&SVM &81 &78.9 &6108 &3.05 \\
&DNN (s) &99 &99.4 &1521 &19.26 \\
&DNN (l) &99 &99.6 &1817 &0 \\
\midrule
\multirow{6}{*}{\textbf{CFE}} &LR &94 &95.4 &5601 &98.29 \\
&RF &99 &99.5 &5196 &0.71 \\
&XGB &100 &99.7 &5187 &0.46 \\
&SVM &81 &78.9 &6108 &100 \\
&DNN (s) &99 &99.4 &4955 &91.38 \\
&DNN (l) &99 &99.6 &763 &0.13 \\
\bottomrule
\end{tabular}
\vspace{-0.1in}
\caption{\textbf{Temporal Distribution Shifts}: Recourse invalidation in $\mathcal{CF}_1$ by model $\mathcal{M}_2$, due to temporal data drift. $\mathcal{D}_1$ had 8395 total points, with 5203 in class $-1$, and $\mathcal{D}_2$ had 8595 total points, with 5430 in class $-1$. The model accuracies are reported as 10-fold cross validation accuracy percentages, and $\mathcal{CF}_1$ Size represents the number of points for which recourses exist.}
\label{tab:temporal}
\vspace{-0.25in}
\end{table}
\vspace{-0.07in}
\subsection{Geospatial Shifts}
\vspace{-0.07in}
Next, we replicate our process above to study the impact of geospatial data distribution shifts on recourse validity. We consider the problem of predicting grades, modelled as a binary classification between pass ($+1$) and fail ($-1$), using input predictors such as grade, nationality, holidays-taken, and class-participation. Our dataset consists of schools spread out across Jordan and Kuwait. We presume that the decision maker initially trained a model using data $\mathcal{D}_1$ collected in Jordan. When this model is deployed, students at risk of failing are provided with recourses to improve their predicted grades. Eventually, we postulate that the decision maker trains a new model $\mathcal{M}_2$ on more relevant data $\mathcal{D}_2$ collected in Kuwait. We would ideally want the grade improvement recommendation recourses already made to continue to remain valid for every student. We use the same settings for models and recourse generation methods as before, and our results are presented below in table \ref{tab:geospatial}. Note that for some models, AR was not able to generate any valid recourses ($\mathcal{CF}_1$ has size $0$), and consequently the invalidation percentage is reported as \textit{NAN}.
\begin{table}[!htp]\centering
\vspace{-0.05in}
\scriptsize
\begin{tabular}{lc|rrrr}\toprule
\textbf{Algorithm} &\textbf{Model} &\textbf{M1 acc} &\textbf{M2 acc} &\textbf{CF1 Size} &\textbf{Invalidation \%} \\ \midrule \midrule
\multirow{6}{*}{\textbf{AR}} &LR &88 &93 &47 &76.6 \\
&RF &89 &92 &0 &\textit{NAN} \\
&XGB &85 &93 &0 &\textit{NAN} \\
&SVM &80 &91 &40 &90 \\
&DNN (s) &83 &87 &0 &\textit{NAN} \\
&DNN (l) &82 &93 &0 &\textit{NAN} \\
\midrule
\multirow{6}{*}{\textbf{CFE}} &LR &88 &93 &47 &65.96 \\
&RF &89 &92 &17 &76.47 \\
&XGB &85 &93 &14 &57.14 \\
&SVM &80 &91 &54 &100 \\
&DNN (s) &83 &87 &2 &50 \\
&DNN (l) &82 &93 &33 &30.3 \\
\bottomrule
\end{tabular}
\vspace{-0.07in}
\caption{\textbf{Geospatial Distribution Shifts}: Recourse invalidation in $\mathcal{CF}_1$ by model $\mathcal{M}_2$, due to geospatial data drift. $\mathcal{D}_1$ had 129 total points, with 46 in class $-1$, and $\mathcal{D}_2$ had 122 total points, with 27 in class $-1$. The model accuracies are reported as 10-fold cross validation accuracy percentages, and $\mathcal{CF}_1$ Size represents the number of points for which recourses exist.}
\label{tab:geospatial}
\vspace{-0.15in}
\end{table}
\vspace{-0.07in}
\subsection{Dataset Correction Related Shifts}
\vspace{-0.07in}
\label{sec:correction}
Data collection in the real world often suffers from selection bias. Such biases typically take some time to identify, and correcting them can be necessary to make models fairer and more interpretable, even if there is no change in accuracy. In addition to selection bias, real world datasets are often messy and require significant preprocessing before being used to build machine learning models. One such dataset is the German credit dataset \cite{UCI}, commonly used to build and evaluate credit worthiness models. Here we wish to predict whether or not a given customer should be considered credit worthy using features such as their credit history, loan amount, employment history, and age as predictors. The first release (and English translation) of this dataset, however, contains multiple errors. A corrected dataset also exists, which can be thought of as the same data with minor changes to revert previous mistakes \cite{ngcredit}. Thus, to model data correction based distribution shifts, we train an initial binary classification model $\mathcal{M}_1$ on the first release of the German credit data, $\mathcal{D}_1$, and we consider $\mathcal{M}_2$ to be a model trained on the recently corrected dataset $\mathcal{D}_2$. The rest of our setup is unchanged from before, and results from this experiment can be seen in table \ref{tab:correction}. Again, here we see models for which no recourses could be found, and we report the invalidation percentage for these cases as \textit{NAN}.
\vspace{-0.07in}
\subsection{Causal Recourse Generation}
\vspace{-0.07in}
There have been a multitude of recent works showing the inadequacies of non-causal recourses \cite{karimi2020algorithmic1, Barocas_2020, venkat}. Even though not specific to distribution shifts, these works posit that causal recourse generation is more robust than sparse counterfactual explanation based recourse generation. In general, it is hard to generate causal recourses because of lack of access to a generative causal model of the data. For the German credit dataset described in section \ref{sec:correction} however, we are able to directly adopt the exact causal data distribution from \cite{karimi2020algorithmic2}, replicating their simulation of generating causal recourses on the German credit dataset.
Our experiments show that even with causally generated recourses, distribution shifts can invalidate recourses. These results are included in table \ref{tab:correction}, with a recourse generation algorithm labelled \textit{Causal}, in addition to the AR and CFE algorithm results.
\begin{table}[!htp]\centering
\vspace{-0.02in}
\scriptsize
\begin{tabular}{lc|rrrrr}\toprule
\textbf{Algorithm} &\textbf{Model} &\textbf{M1 acc} &\textbf{M2 acc} &\textbf{CF1 Size} &\textbf{Invalidation \%} \\ \midrule \midrule
\multirow{6}{*}{\textbf{AR}} &LR &71 &75 &154 &7.79 \\
&RF &73 &73 &20 &35 \\
&XGB &74 &75 &25 &8 \\
&SVM &63 &69 &1 &100 \\
&DNN (s) &68 &69 &0 &\textit{NAN} \\
&DNN (l) &66 &67 &1 &0 \\
\midrule
\multirow{6}{*}{\textbf{CFE}} &LR &71 &75 &154 &3.9 \\
&RF &73 &73 &258 &36.82 \\
&XGB &74 &75 &253 &23.72 \\
&SVM &63 &69 &1 &0 \\
&DNN (s) &68 &69 &0 &\textit{NAN} \\
&DNN (l) &66 &67 &57 &0 \\
\midrule
\multirow{6}{*}{\textbf{Causal}} &LR &69 &71 &61 &0 \\
&RF &65 &64 &256 &96.09 \\
&XGB &64 &68 &8 &12.5 \\
&DNN (s) &69 &70 &0 &\textit{NAN} \\
&DNN (l) &69 &70 &0 &\textit{NAN} \\
\bottomrule
\end{tabular}
\vspace{-0.07in}
\caption{\textbf{Data Correction Distribution Shifts}: Recourse invalidation in $\mathcal{CF}_1$ by model $\mathcal{M}_2$, due to data drift caused by correcting data reporting errors. $\mathcal{D}_1$ had 900 total points, with 275 in class $-1$, and $\mathcal{D}_2$ had 900 total points, with 271 in class $-1$. The model accuracies are reported as 10-fold cross validation accuracy percentages, and $\mathcal{CF}_1$ Size represents the number of points for which recourses exist.}
\label{tab:correction}
\vspace{-0.13in}
\end{table}
\subsection{Sensitivity Analysis}
\vspace{-0.07in}
We have demonstrated empirically that distribution shifts cause recourse invalidation. In this section we wish to demonstrate that increasing the magnitude of a distribution shift increases the recourse invalidation percentage. To do this we construct synthetic datasets and precisely control the type of distribution shift. We start with a fixed distribution $\mathcal{D}_1$, which has two independent predictors $X0$ and $X1$, both drawn from a standard normal distribution, with the binary target attribute defined linearly as $Y = (X0 + X1 \geq 0)$. We then train a logistic regression model $\mathcal{M}_1$ and generate recourses $\mathcal{CF}_1$ from either AR or CFE. Finally, we construct a shifted distribution $\mathcal{D}_2$ according to some shift parameter $\alpha$, train a logistic regression model $\mathcal{M}_2$ on it, and analyse the relation between the recourse invalidation percentage and $\alpha$.
\begin{figure}[ht]
\vspace{-0.1in}
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{sections/images/sens-b-target.png}}
\vspace{-0.1in}
\caption{[AR] Recourse Invalidation with Drift in Target}
\label{fig:sens1}
\end{center}
\vspace{-0.1in}
\end{figure}
\begin{figure}[ht]
\vspace{-0.1in}
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{sections/images/sens-w-target.png}}
\vspace{-0.1in}
\caption{[CFE] Recourse Invalidation with Drift in Target}
\end{center}
\vspace{-0.3in}
\end{figure}
\begin{figure}[ht]
\vspace{-0.1in}
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{sections/images/sens-b-pred.png}}
\vspace{-0.1in}
\caption{[AR] Recourse Invalidation with Drift in Predictors}
\end{center}
\vspace{-0.1in}
\end{figure}
\begin{figure}[ht]
\vspace{-0.1in}
\begin{center}
\centerline{\includegraphics[width=\columnwidth]{sections/images/sens-w-pred.png}}
\vspace{-0.1in}
\caption{[CFE] Recourse Invalidation with Drift in Predictors}
\label{fig:sens4}
\end{center}
\vspace{-0.3in}
\end{figure}
We consider the following two scenarios, with shift parameter $\alpha$ defining the shift between $\mathcal{D}_2$ and $\mathcal{D}_1$.
\vspace{-0.07in}
\begin{enumerate}
\item Shifting target: where the predictor distribution ($X0$ and $X1$) stays constant, but the $\mathcal{D}_2$ target attribute is defined as $Y = (X0 + \alpha X1 \geq 0)$, for a \% shift of $\alpha \in (-60\%, 60\%)$.
\vspace{-0.07in}
\item Shifting predictors: where the definition of the target stays constant ($Y = (X0 + X1 \geq 0)$), but we shift the mean of the predictor distribution in $\mathcal{D}_2$ from $(X0, X1) = (0, 0)$ to $(X0, X1) = (\alpha, \alpha)$.
\end{enumerate}
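This synthetic setup can be sketched end-to-end with NumPy (our own minimal stand-in: a least-squares linear classifier replaces logistic regression, and minimal-$L_2$ counterfactuals replace AR/CFE; sample sizes and the recourse margin are assumptions of the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, alpha_target=1.0, mean_shift=0.0):
    """Predictors X0, X1 ~ N(mean_shift, 1); label Y = (X0 + alpha_target*X1 >= 0)."""
    X = rng.normal(mean_shift, 1.0, size=(n, 2))
    y = np.where(X[:, 0] + alpha_target * X[:, 1] >= 0.0, 1.0, -1.0)
    return X, y

def fit_linear(X, y):
    """Least-squares linear classifier (a simple stand-in for logistic regression)."""
    A = np.hstack([X, np.ones((len(X), 1))])
    coef = np.linalg.lstsq(A, y, rcond=None)[0]
    return coef[:2], coef[2]

def sparse_recourse(X, w, b, eps=0.05):
    """Minimal-L2 counterfactuals: project denied points just past the boundary."""
    score = X @ w + b
    neg = X[score < 0.0]
    t = -(neg @ w + b) / (w @ w)               # exact projection onto w.x + b = 0
    return neg + (t + eps / np.linalg.norm(w))[:, None] * w

X1, y1 = make_data(5000)
w1, b1 = fit_linear(X1, y1)
cf1 = sparse_recourse(X1, w1, b1)              # recourses CF1 under M1

def invalidation_pct(alpha_target=1.0, mean_shift=0.0):
    """Train M2 on shifted data D2 and measure the share of CF1 it rejects."""
    X2, y2 = make_data(5000, alpha_target, mean_shift)
    w2, b2 = fit_linear(X2, y2)
    return 100.0 * np.mean(cf1 @ w2 + b2 < 0.0)
```

Here `invalidation_pct(alpha, 0.0)` realises the shifting-target scenario and `invalidation_pct(1.0, alpha)` the shifting-predictor scenario.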
\vspace{-0.17in}
As figures \ref{fig:sens1} through \ref{fig:sens4} demonstrate, recourse invalidation increases with increasing distribution shift, but this is not universal. There may be some distribution shifts that the generated recourses are robust to, and others that result in up to 100\% of generated recourses getting invalidated. In particular, when $\alpha$ is negative while shifting predictors (with this particular target distribution), the generated recourses are robust. In all other cases there is significant invalidation that increases with increasing $\alpha$. It is also noteworthy that even with the data support based counterfactual recourse algorithm AR, we do not uniformly see less invalidation, as postulated by \cite{Ustun_2019}. Sometimes the recourses generated are indeed more robust, but they also often fail miserably with up to 100\% invalidation. This would suggest that data support counterfactual based recourses are actually less predictable than sparse counterfactual based recourses, while still being vulnerable to invalidation, and could thus potentially be more risky for real world decision makers.
The $k_T$ factorization
formalism \cite{CCH,CE,LRS,BS,LS,HS} has been
applied to next-to-leading-order (NLO) analysis of several
space-like form factors, such as the pion-photon transition form factor
\cite{Nandi:2007qx,Li:2009pr}, the pion electromagnetic (EM) form
factor \cite{Li:2010nn}, and the $B\to\pi$ transition form factors
\cite{Li:2012nk}. The calculations are nontrivial, because partons
off-shell by $k_T^2$ are considered in both QCD quark diagrams and
effective diagrams for meson wave functions. The gauge invariance of
hard kernels, derived from the difference of the above two sets of
diagrams, needs to be verified. The regularization of the light-cone
singularity in the effective diagrams generates double logarithms,
which should be summed to all orders. It has been found that the NLO
corrections, after the above treatments, are negligible in the pion
transition form factor, but reach 30\% in the latter two cases.
In this Letter we shall extend the NLO $k_T$ factorization formalism
to the time-like pion transition and EM form factors.
One of the widely adopted theoretical frameworks for two-body
hadronic $B$ meson decays is the perturbative QCD (PQCD) approach
\cite{KLS} based on the $k_T$ factorization.
It has been shown that factorizable contributions to these decays can be
computed in PQCD without the ambiguity from the end-point
singularity. These computations indicated that sizable strong phases
are produced from penguin annihilation amplitudes, with which the
direct CP asymmetry in the $B^\pm\to K^\pm \pi^\mp$ decays was
successfully predicted. It is then a concern whether PQCD
predictions for strong phases are stable against radiative
corrections. The factorizable penguin annihilation amplitudes
involve time-like scalar form factors. Before completing NLO
calculations for two-body hadronic $B$ meson decays, it is possible
to acquire an answer to the above concern by studying the time-like
pion EM form factor. Besides, the PQCD formalism for three-body $B$ meson
decays \cite{CL03} has demanded the introduction of two-meson wave
functions \cite{MP}, whose parametrization also involves time-like
form factors associated with various currents. If PQCD results
for complex time-like form factors are reliable, a theoretical
framework for three-body $B$ meson decays can be constructed.
NLO corrections to time-like form factors are derived easily from
those to space-like ones by suitable analytic continuation from
$-Q^2$ to $Q^2$, with $Q^2$ denoting the momentum transfer squared.
We shall present the $k_T$ factorization formulas for the time-like
pion transition and EM form factors up to NLO at leading twist.
Following the prescription proposed in \cite{LS,KLS}, both the
renormalization and factorization scales are set to the virtuality
of internal particles. With this scale choice, it will be
demonstrated that the NLO corrections to the time-like pion
transition and EM form factors are under control at leading twist.
It implies that PQCD predictions for strong phases of factorizable
annihilation amplitudes in two-body hadronic $B$ meson decays may be
stable against radiative corrections. Moreover, we observe the
increase of the strong phases of the above form factors with $Q^2$,
consistent with the tendency indicated by experimental data. It will
be explained that this behavior is attributed to the inclusion of
the parton transverse momenta $k_T$. This consistency supports the
$k_T$ factorization
as a potential framework for studying
complex time-like form factors and three-body $B$ meson decays.
\begin{figure}[t]
\begin{center}
\includegraphics[height=5.5cm]{TF.eps}
\end{center}
\caption{LO quark diagrams for time-like and space-like pion-photon
transition form factors with $\otimes$ representing the virtual photon vertex.
The virtuality of the internal quark is labeled explicitly.}
\label{fig:TFLO}
\end{figure}
\section{Pion-Photon Transition Form Factor}\label{PionTF}
In this section we present the leading-twist NLO factorization
formula for the time-like pion-photon transition form factor. The
leading-order (LO) QCD quark diagram describing $\gamma^*(q) \to
\pi(P_1)~\gamma(P_2)$ is displayed in Fig.~\ref{fig:TFLO}(a), where
the momentum $P_1$ of the pion and the momentum $P_2$ of the
outgoing on-shell photon are chosen as
\begin{align}
P_1 =(P_1^+,0,{\bf 0}_T) ,
\;\;\;
P_2 = (0,P_2^-,{\bf 0}_T) ,
\;\;\;
P_1^+ = P_2^- = Q/\sqrt{2} ,
\label{TFcor}
\end{align}
with $Q^2 = q^2 = (P_1 + P_2)^2 > 0$ being the invariant mass
squared of the virtual photon $\gamma^*$. Figure~\ref{fig:TFLO}(a)
leads to the LO hard kernel
\begin{align}
H^{(\text{LO})}_{\pi\gamma}(x,Q^2,k_T) =
-i\frac{N_c}{\sqrt{2N_c}}\frac{\operatorname{Tr}
[~\feyn \epsilon (\feyn P_2 + \feyn k ) \gamma_\mu \gamma^5 \feyn P_1]}
{(P_2+k)^2 + i\varepsilon}
= -i\sqrt{\frac{N_c}{2}}\frac{\operatorname{Tr}
[~\feyn \epsilon \feyn P_2 \gamma_\mu \feyn P_1\gamma^5]}
{k_T^2-xQ^2-i\varepsilon} ,
\label{TFH0}
\end{align}
where $N_c = 3$ is the number of colors, $\epsilon$ is the
polarization vector of the outgoing photon, $k = (xP_1^+, 0, {\bf
k}_T)$ is the momentum carried by the valence quark,
$\gamma^5\feyn P_1/\sqrt{2N_c}$ is the leading-twist
spin projector of the pion, and the subscript $\mu$ associated with
the virtual photon vertex is implicit on the left-hand side. In the previous works
on the space-like transition form factor \cite{Jakob:1994hd,Li:2000hh,
Nagashima:2002ia,Nandi:2007qx}, the internal quark remains
off-shell by $(P_2-k)^2 = - (xQ^2+k_T^2)<0 $ as indicated in
Fig.~\ref{fig:TFLO}(b). For the time-like case, the internal quark
may go on mass shell, and an imaginary part is generated in the hard
kernel according to the principal-value prescription
\begin{align}
\frac{1}{k_T^2-xQ^2-i\varepsilon} = \mathrm{P} \frac{1}{k_T^2-xQ^2} +i\pi
\delta(k_T^2-xQ^2) . \label{eq:Pri}
\end{align}
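The $\delta$-function piece of this prescription can be checked numerically (our own standalone sketch, with an illustrative regulator): writing $A = k_T^2 - xQ^2$, the imaginary part of $1/(A - i\varepsilon)$ is $\varepsilon/(A^2 + \varepsilon^2)$, whose integral over $A$ tends to $\pi$ as $\varepsilon \to 0^+$:

```python
import numpy as np

eps = 0.1                                       # small regulator epsilon
A = np.linspace(-1000.0, 1000.0, 2_000_001)     # A = k_T^2 - x*Q^2, fine grid
dA = A[1] - A[0]

imag_part = np.imag(1.0 / (A - 1j * eps))       # equals eps / (A**2 + eps**2)
integral = imag_part.sum() * dA                 # Riemann sum -> pi as eps -> 0+
```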
Fourier transforming Eq.~(\ref{TFH0}) into the impact-parameter $b$
space, we derive the LO pion transition form factor
\begin{align}
F^{(\text{LO})}_{\pi\gamma}(Q^2) = i\pi\frac{\sqrt{2}f_\pi}{6} \int_0^1 dx
\int_0^\infty b db\, \phi_\pi(x)\exp[-S(x,b,Q,\mu)]\,
H^{(1)}_0\left( \sqrt{x}Qb \right) , \label{TFI0}
\end{align}
with the pion decay constant $f_\pi$,
the renormalization and factorization scale $\mu$,
the Hankel function of the first kind $H_0^{(1)}$,
and the twist-2 pion distribution amplitude (DA) $\phi_\pi$.
Here we shall not consider the potential intrinsic $k_T$
dependence of the pion wave function \cite{Jakob},
because its inclusion would introduce additional model dependence,
which is not the focus of this work. For example, the intrinsic
$k_T$ dependence has been parameterized into the different Gaussian
and power forms in \cite{Brodsky80}.
The Sudakov factor $e^{-S}$ sums the double logarithm
$\alpha_s\ln^2 k_T$ to all orders, and takes the same expression for
both the space-like and time-like form factors \cite{Raha:2010kz},
since it is part of the universal meson wave function. For its
explicit expression, refer to \cite{Li:2001ay,LS,Li:1994zm}. Note
that Eq.~(\ref{TFI0}) can be obtained from the LO space-like pion
transition form factor in \cite{Nagashima:2002ia} by substituting $i
\pi H^{(1)}_0/2$ for the Bessel function $K_0$, as a consequence of
the analytic continuation $q^2 = -Q^2 \to (Q^2+i\varepsilon)$ in the
hard kernel.
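This substitution is the standard connection formula $K_0(-i\rho) = \frac{i\pi}{2} H_0^{(1)}(\rho)$ for real $\rho > 0$; assuming SciPy is available, it can be checked numerically by continuing $K_0$ to the negative imaginary axis:

```python
import numpy as np
from scipy.special import hankel1, kv

rho = np.linspace(0.5, 10.0, 50)        # real, positive arguments
lhs = kv(0, -1j * rho)                  # K_0 continued to the negative imaginary axis
rhs = 0.5j * np.pi * hankel1(0, rho)    # i*pi/2 * H_0^(1)(rho)
```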
As stated in the Introduction, the NLO hard kernel is derived by
taking the difference of the $O(\alpha_s)$ quark diagrams and the
$O(\alpha_s)$ effective diagrams for meson wave functions. The
ultraviolet divergences in loops are absorbed into the renormalized
strong coupling constant $\alpha_s(\mu)$, and the infrared
divergences are subtracted by the nonperturbative meson wave
functions. The above derivation has been demonstrated explicitly in
\cite{Nandi:2007qx} for the space-like pion transition form factor.
We repeat a similar calculation for the time-like pion transition
factor, and derive the NLO hard kernel\footnote{
Compared to \cite{Nandi:2007qx},
three effective diagrams for the
self-energy corrections to the Wilson lines have been
included in Eq.~(\ref{TFH1}).}
\begin{align}
H^{(\text{NLO})}_{\pi\gamma}(x,Q^2,k_T,\mu)
=
h_{\pi\gamma}(x,Q^2,k_T,\mu)
H^{(\text{LO})}_{\pi\gamma}(x,Q^2,k_T)
,\label{TFH1}
\end{align}
with the NLO correction function
\begin{align}
h_{\pi\gamma}(x,Q^2,k_T,\mu) = \frac{\alpha_s(\mu)C_F}{4\pi}\bigg\{
& -3\ln\frac{\mu^2}{Q^2}-\ln^2\frac{|k_T^2-xQ^2|}{Q^2}
+2\left[1-i\pi-i\pi\Theta\left(k_T^2-xQ^2\right)\right]\ln\frac{|k_T^2-xQ^2|}{Q^2}
\nonumber\\& -2\ln x
+\left(4\pi^2-i\pi\right)\Theta\left(k_T^2-xQ^2\right)-3-i5\pi
\bigg\}, \label{TFNLO}
\end{align}
$C_F$ being the color factor. The imaginary pieces proportional to
the step function $\Theta$ are generated from the $O(\alpha_s)$
quark diagrams. For the evaluation of the $O(\alpha_s)$ effective
diagrams, we have chosen the direction $n^\mu$ of the Wilson lines
the same as in \cite{Nandi:2007qx}
in order to respect the universality of the meson wave function.
Equation~(\ref{TFH1}) can also be achieved by substituting
$(Q^2+i\varepsilon)$ for the virtuality of the external photon, and
$(xQ^2-k_T^2+i\varepsilon)$ for the internal quark in
\cite{Nandi:2007qx}, and then employing the relations
$\ln(-Q^2-i\varepsilon) = \ln Q^2 -i\pi$ and
$\ln(-xQ^2+k_T^2-i\varepsilon) = \ln\left|xQ^2-k_T^2\right|- i\pi
\Theta(xQ^2-k_T^2)$.
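Both branch relations follow from evaluating the logarithm just below the negative real axis, as a quick NumPy check confirms (the sample values of $Q^2$, $x$, and $k_T^2$ are ours, chosen so that $\Theta(xQ^2-k_T^2)=1$):

```python
import numpy as np

Q2, x, kT2, eps = 4.0, 0.3, 0.5, 1e-12

# ln(-Q^2 - i*eps) = ln(Q^2) - i*pi
lhs1 = np.log(-Q2 - 1j * eps)
rhs1 = np.log(Q2) - 1j * np.pi

# ln(-x*Q^2 + kT^2 - i*eps) = ln|x*Q^2 - kT^2| - i*pi*Theta(x*Q^2 - kT^2)
arg = -x * Q2 + kT2 - 1j * eps          # here x*Q^2 = 1.2 > kT^2 = 0.5, so Theta = 1
lhs2 = np.log(arg)
rhs2 = np.log(abs(x * Q2 - kT2)) - 1j * np.pi
```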
Fourier transforming Eq.~(\ref{TFH1}) to the $b$ space, we arrive at
the NLO $k_T$ factorization formula for the time-like pion
transition factor
\begin{align}
F_{\pi\gamma}^{\text{(NLO)}}(Q^2) =
i\pi & \frac{\sqrt{2}f_\pi}{6} \int_0^1 dx
\int_0^\infty b db\, \phi_\pi(x)
\exp[-S(x,b,Q,\mu)]
\nonumber\\
&\times
\frac{\alpha_s(\mu)C_F}{4\pi}
\left[
~\widetilde{h}_{\pi\gamma}(x,Q^2,k_T,\mu)
H^{(1)}_0\left( \sqrt{x}Qb \right)
+ H^{(1)\prime \prime}_0\left( \sqrt{x}Qb \right)
\right],
\label{TFI1}
\end{align}
with
\begin{align}
\widetilde{h}_{\pi\gamma}(x,Q^2,k_T,\mu)
=
&
-3\ln\frac{\mu^2}{Q^2} - {1 \over 4}\ln^2 \frac{4x}{Q^2 b^2}
+ (1+\gamma_E-i{3\pi\over 2}) \ln \frac{4x}{Q^2 b^2} -2\ln x
\nonumber\\&
+{17 \pi^2 \over 12}+\pi-3-2 \gamma_E-\gamma_E^2 -i(4-3\gamma_E)\pi,
\label{TFh1}
\end{align}
$\gamma_E$ being the Euler constant. The function
\begin{align}
H^{(1)\prime \prime}_0\left( \rho \right)
\equiv \left[\frac{\partial^2}{\partial\alpha^2}H^{(1)}_\alpha(\rho )\right]_{\alpha=0} ,
\end{align}
where $\alpha$ denotes the order parameter of the Hankel function,
comes from the Fourier transformation of
$\ln^2(-xQ^2+k_T^2-i\varepsilon)$ in Eq.~(\ref{TFNLO}). For a small
argument $\rho = \sqrt{x}Qb \to 0$, its magnitude behaves as
$|H^{(1)\prime \prime}_0(\rho)| \sim
(1/3)\ln^2\rho~|H^{(1)}_0(\rho)|$, which essentially represents a
double-logarithmic correction. The perturbative
expansion could be improved by summing the double logarithm
$\alpha_s \ln^2 [x/(Q^2b^2)]$ in Eq.~(\ref{TFh1}), which arises from the
Fourier transformation of the term $\alpha_s\ln^2(|k_T^2-xQ^2|/Q^2)$ in
Eq.~(\ref{TFNLO}). Strictly speaking, it differs from the threshold
resummation of $\alpha_s\ln^2 x$ performed in \cite{Li:2001ay}, and
deserves a separate study. Besides, there is no end-point
enhancement involved in the present calculation, so we shall not
perform the resummation here for simplicity. Equations~(\ref{TFI0}) and
(\ref{TFI1}) will be investigated numerically in Sec.~\ref{Numeric}.
\begin{figure}[t]
\begin{center}
\includegraphics[height=5.5cm]{EM.eps}
\end{center}
\caption{LO quark diagrams for time-like and space-like pion electromagnetic form factors.}
\label{fig:EMLO}
\end{figure}
\section{Pion Electromagnetic form factor}\label{PionEM}
We then derive the NLO, i.e., $O(\alpha_s^2)$ contribution to the
time-like pion EM form factor at leading twist. An LO quark diagram
for the corresponding scattering $\gamma^*(q) \to
\pi^+(P_1)~\pi^-(P_2)$ is depicted in Fig.~\ref{fig:EMLO}(a). We
choose light-cone coordinates, such that the momenta $P_1$ and $P_2$ are
parameterized the same as in Eq.~(\ref{TFcor}) with $Q^2 =q^2=
(P_1+P_2)^2>0$. The valence quark carries the momentum
$k_1=(x_1P_1^+,0,{\bf k}_{1T})$ and the valence anti-quark carries
$k_2=(0,x_2P_2^-,{\bf k}_{2T})$. The LO hard kernel reads
\begin{align}
H^{(\text{LO})}_{\text{II}}(x_1,k_{1T},x_2,k_{2T},Q^2)
= i4 \pi\alpha_s C_F\frac{x_1 \operatorname{Tr}
\left[\feyn P_2\feyn P_1\gamma_\mu\feyn P_1\right]}{
(x_1Q^2-{\bf k}_{1T}^2+i\varepsilon)
(x_1 x_2Q^2-|{\bf k}_{1T}+{\bf k}_{2T}|^2+i\varepsilon)} ,
\label{EMH02}
\end{align}
where the denominators $(x_1Q^2-{\bf k}_{1T}^2)$ and $(x_1
x_2Q^2-|{\bf k}_{1T}+{\bf k}_{2T}|^2)$ are the virtualities of the
internal quark and gluon, respectively. The subscript $\text{II}$
denotes that the $k_T$-dependent terms in both the internal quark
and gluon propagators are retained. When one of the internal
particle propagators goes on mass shell, an imaginary part is
produced according to the principal-value prescription in
Eq.~(\ref{eq:Pri}).
Fourier transforming Eq.~(\ref{EMH02}) from the transverse-momentum
space $({\bf k}_{1T}, \,{\bf k}_{2T})$ to the impact-parameter space
$({\bf b}_1, \,{\bf b}_2)$, we obtain a double-$b$ convolution for
the LO time-like pion EM form factor \cite{Chen:2009sd}
\begin{align}
F_{\text{EM}}^{(\text{LO})}(Q^2)
&= \frac{\pi^3 f_\pi^2 C_F}{2N_c} Q^2
\int_0^1dx_1 dx_2 \int_0^\infty db_1 db_2 b_1 b_2
\, \alpha_s(\mu) x_1
\phi_\pi(x_1) \phi_\pi(x_2) \exp[-S_{\text{II}}(x_1, b_1, x_2, b_2,
Q, \mu)] \nonumber\\
&\times H_0^{(1)}(\sqrt{x_1 x_2}Qb_2)
\left[H_0^{(1)}(\sqrt{x_1}Qb_1)J_0(\sqrt{x_1}Qb_2)\Theta(b_1-b_2)
+H_0^{(1)}(\sqrt{x_1}Qb_2)J_0(\sqrt{x_1}Qb_1)\Theta(b_2-b_1)
\right] ,
\label{EMI2}
\end{align}
with the Bessel function of the first kind $J_0$, and the Sudakov
exponent $S_{\text{II}}(x_1, b_1, x_2, b_2, Q,\mu)=S(x_1, b_1,
Q,\mu)+S(x_2, b_2, Q,\mu)$. The above expression can also be
obtained via analytical continuation of the space-like form factor
in Fig.~\ref{fig:EMLO}(b) to the time-like region.
The NLO hard kernel for the space-like pion EM form factor has been
computed as the difference between the one-loop QCD quark diagrams
and effective diagrams in \cite{Li:2010nn}. To simplify the
calculation, the hierarchy $x_1Q^2, x_2Q^2\gg x_1x_2Q^2,k_{T}^2$ has
been postulated, since the $k_T$ factorization applies to processes
dominated by small-$x$ contributions. Ignoring the transverse
momenta of the internal quarks, the LO hard kernel in
Eq.~(\ref{EMH02}) reduces to
\begin{align}
H^{(\text{LO})}_{\text{I}}
(x_1,k_{1T},x_2,k_{2T},Q^2) =
i4\pi \alpha_s C_F\frac{\operatorname{Tr}\left[
\feyn P_2\feyn P_1\gamma_\mu\feyn P_1\right]}
{ Q^2(x_1 x_2Q^2-|{\bf k}_{1T}+{\bf k}_{2T}|^2+i\varepsilon)} .
\label{EMH01}
\end{align}
The Fourier transformation of the above expression leads to a
single-$b$ convolution \cite{Gousset:1994yh}
\begin{align}
F_{\text{I}}^{(\text{LO})}(Q^2) = i\frac{\pi^2 f_\pi^2 C_F}{N_c}
\int_0^1dx_1 dx_2 \int_0^\infty db b
\,\alpha_s(\mu) \phi_\pi(x_1) \phi_\pi(x_2)
\exp[-S_{\text{I}} (x_1, x_2, b, Q, \mu)]
H_0^{(1)}(\sqrt{x_1 x_2}Qb),
\label{EMI1}
\end{align}
with the simplified Sudakov exponent $S_{\text{I}} (x_1, x_2, b, Q,
\mu) \equiv S_{\text{II}} (x_1, b, x_2, b, Q, \mu) $. Comparing the
outcomes from Eqs.~(\ref{EMI2}) and (\ref{EMI1}), we can justify the
proposed hierarchical relation, and tell which particle propagator,
the internal quark or the internal gluon, provides the major source
of the strong phase.
Substituting $(Q^2+i\varepsilon)$ for the virtuality of the external
photon, and $(x_1 x_2Q^2-|{\bf k}_{1T}+{\bf
k}_{2T}|^2+i\varepsilon)$ for the internal gluon in
\cite{Li:2010nn}, we have the NLO hard kernel for the time-like pion
EM form factor
\begin{align}
H^{(\text{NLO})}_{\text{EM}}(x_1,k_{1T},x_2,k_{2T},Q^2,\mu) =
h_{\text{EM}}(x_1, x_2, \delta_{12},Q,\mu)
H^{(\text{LO})}_{\text{I}}(x_1,k_{1T},x_2,k_{2T},Q^2) ,
\label{EMH1}
\end{align}
with the NLO correction function
\begin{align}
h_{\text{EM}}(x_1, x_2, \delta_{12},Q,\mu) =
{ \alpha_s(\mu) C_F \over 4 \pi } \bigg[
&
-{3 \over 4}\ln { \mu^2 \over Q^2}
- {17 \over 4}\ln^2 x_1 + {27 \over 8} \ln x_1 \ln x_2
- {13 \over 8} \ln x_1 + {31 \over 16} \ln x_2
\nonumber\\
&
-\ln^2 \delta_{12}
+\left({17 \over 4} \ln x_1 + {23 \over 8} +i2\pi \right) \ln \delta_{12}
+{ \pi^2 \over 12} + {1 \over 2} \ln 2 + {53 \over 4}-i{3\pi\over 4}
\bigg]
,
\label{h_1}
\end{align}
and the notation
\begin{align}
\ln{\delta_{12}} \equiv
\ln{\frac{\left| {|{\bf k}_{1T}+{\bf k}_{2T}|}^2 - x_1 x_2 Q^2 \right|}{Q^2}}
+i\pi\Theta\left({|{\bf k}_{1T}+{\bf k}_{2T}|}^2 - x_1 x_2 Q^2 \right) .
\label{DEL12}
\end{align}
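As noted above, Eq.~(\ref{DEL12}) is just the principal-branch logarithm of the analytically continued gluon virtuality divided by $Q^2$. A short numerical sketch (numpy assumed, kinematics arbitrary) makes this explicit:

```python
import numpy as np

def ln_delta12(kT_sum_sq, x1, x2, Q2):
    """ln(delta_12): real log plus i*pi times the step function."""
    diff = kT_sum_sq - x1 * x2 * Q2
    theta = 1.0 if diff > 0 else 0.0
    return np.log(abs(diff) / Q2) + 1j * np.pi * theta

# Check against the principal-branch log of the continued gluon
# virtuality (x1*x2*Q^2 - |k1T + k2T|^2 + i*eps)/Q^2:
eps = 1e-12
x1, x2, Q2 = 0.3, 0.4, 4.0   # example kinematics, arbitrary
for kT_sum_sq in (0.05, 0.8):
    ref = np.log((x1 * x2 * Q2 - kT_sum_sq + 1j * eps) / Q2)
    assert np.isclose(ln_delta12(kT_sum_sq, x1, x2, Q2), ref)
```

The imaginary part $i\pi$ switches on exactly when the gluon can go on mass shell, $|{\bf k}_{1T}+{\bf k}_{2T}|^2 > x_1x_2Q^2$.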
Fourier transforming Eq.~(\ref{EMH1}), we derive the $k_T$
factorization formula for the NLO contribution at leading twist
\begin{align}
F^{(\text{NLO})}_{\text{EM}}(Q^2) = {} &
i\frac{\pi f_\pi^2 C_F^2}{4N_c} \int_0^1dx_1 dx_2 \int_0^\infty db b
\, \alpha_s^2(\mu) \phi_\pi(x_1) \phi_\pi(x_2)
\exp[-S_{\text{I}} (x_1, x_2, b, Q, \mu)]
\nonumber \\
&\times \left[
\,\widetilde{h}_{\text{EM}}(x_1, x_2, b, Q, \mu) \, H_0^{(1)}(\sqrt{x_1 x_2}Qb)
+H^{(1)\prime \prime}_0\left(\sqrt{x_1 x_2}Qb\right)
\right] ,
\label{EM1I1}
\end{align}
with the function
\begin{align}
\widetilde{h}_{\text{EM}}(x_1, x_2, b, Q, \mu) = ~&
-{3 \over 4}\ln { \mu^2 \over Q^2}
-{1\over 4}\ln^2 \frac{4 x_1 x_2}{Q^2 b^2}
+\left(
{17\over 8}\ln x_1 +{23\over 16}+\gamma_E+i{\pi \over 2}
\right)
\ln \frac{4 x_1 x_2}{Q^2 b^2}
\nonumber\\
&
- {17 \over 4}\ln^2 x_1 + {27 \over 8} \ln x_1 \ln x_2
- \left( {13 \over 8} +{17\gamma_E \over 4} -i{17 \pi\over 8} \right)\ln x_1
+ {31 \over 16} \ln x_2
\nonumber\\
&
- {\pi^2 \over 2} + (1-2\gamma_E)\pi
+{1 \over 2}\ln 2+ {53 \over 4}
- {23 \over 8}\gamma_E - \gamma_E^2
+ i\left({171\over 16}+\gamma_E\right)\pi
.
\label{th_1}
\end{align}
The perturbative expansion could be improved by organizing the double
logarithm $\alpha_s\ln^2 x_1$ in Eq.~(\ref{h_1}) into the threshold
resummation factor $S_t( x_1, Q^2 )$ \cite{Li:2001ay}. This double
logarithm, the same as analyzed in \cite{Li:2001ay}, appears in the loop
correction to the virtual photon vertex under
the hierarchical relation $x_1Q^2\gg k_{T}^2$ \cite{Li:2010nn}.
Because there is no end-point enhancement involved
at leading twist, we shall not perform the threshold resummation here.
However, the end-point enhancement exists in the two-parton twist-3 contribution,
for which $S_t$ will play a crucial role, and be implemented in
Sec.~\ref{Numeric}. We shall investigate the NLO effect at leading
twist in the time-like pion EM form factor based on
Eqs.~(\ref{EMI2}) and (\ref{EM1I1}).
\section{Numerical Analysis}\label{Numeric}
The numerical analysis is performed in this section, for which we
adopt the standard two-loop QCD running coupling constant
$\alpha_s(\mu)$ with the QCD scale $\Lambda_{\rm QCD}=0.2$ GeV, the
pion decay constant $f_\pi = 0.131$ GeV, the nonasymptotic
two-parton twist-2 pion DA
\begin{align}
\phi_\pi(x) = 6 x (1-x) \left[1+a_2 C_2^{3/2}(1-2x)\right] ,
\label{NAD}
\end{align}
with the Gegenbauer coefficient $a_2 = 0.2$ being fixed by lattice
QCD \cite{Braun:2006dg}, and the Gegenbauer polynomial
$C_2^{3/2}(u)=(3/2)(5u^2 -1)$.
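Since the Gegenbauer term integrates to zero against the asymptotic profile, the DA in Eq.~(\ref{NAD}) stays normalized to unity for any $a_2$. A quick check (assuming numpy/scipy):

```python
from scipy.integrate import quad

a2 = 0.2  # Gegenbauer coefficient fixed by lattice QCD

def C2_32(u):
    """Gegenbauer polynomial C_2^{3/2}(u) = (3/2)(5u^2 - 1)."""
    return 1.5 * (5.0 * u**2 - 1.0)

def phi_pi(x):
    """Nonasymptotic two-parton twist-2 pion DA, Eq. (NAD)."""
    return 6.0 * x * (1.0 - x) * (1.0 + a2 * C2_32(1.0 - 2.0 * x))

# The Gegenbauer term integrates to zero, so the DA is normalized to 1.
norm, _ = quad(phi_pi, 0.0, 1.0)
assert abs(norm - 1.0) < 1e-8
```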
We compute the LO and NLO contributions to the time-like pion-photon
transition form factor at leading twist via Eqs.~(\ref{TFI0}) and
(\ref{TFI1}), with the renormalization and factorization scale $\mu$
being set to the virtuality of the internal quark $\mu =
\max(\sqrt{x}Q, 1/b)$. The behavior of $Q^2 F_{\pi\gamma}(Q^2)$ for
$Q^2<20$ GeV\textsuperscript{2} displayed in Fig.~\ref{fig:TFNAD}
reflects the oscillatory nature of the LO hard kernel in the $b$
space. The LO time-like pion transition form factor exhibits an
asymptotic magnitude, $Q^2 \left|F_{\pi\gamma}(Q^2)\right| \approx
0.225$ GeV at large $Q^2$. Recall that an asymptotic scaling is
known as $Q^2 F_{\pi\gamma}(Q^2) \to \sqrt{2} f_\pi = 0.185$ GeV for
the space-like pion transition form factor at large $-q^2= Q^2$
\cite{BL}. The larger asymptotic value for the former is expected in
the $k_T$ factorization, because the internal quark may go on mass
shell for a time-like momentum transfer $q$, but it is always
off-shell for a space-like $q$. The ratio between the asymptotic
values of the above two transition form factors is roughly $1.22$,
comparable to the data $1.14$ from the $\eta - \gamma$ transition
form factors for $Q^2 >100$ GeV\textsuperscript{2}
\cite{Aubert:2006cy}. Note that the time-like and space-like
transition form factors would have equal magnitudes in the LO
collinear factorization without including the parton transverse
momentum $k_T$. The NLO contribution to the time-like pion
transition form factor is also displayed in Fig.~\ref{fig:TFNAD},
which decreases with $Q^2$ as expected in PQCD. Compared to the LO
result, the NLO correction to the magnitude is about $30\%$ at $Q^2
= 30$ GeV\textsuperscript{2}, and less than $20\%$ for $Q^2 > 50$
GeV\textsuperscript{2}.
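The asymptotic numbers quoted above can be verified by simple arithmetic (the time-like value $0.225$ GeV is read off the large-$Q^2$ plateau of Fig.~\ref{fig:TFNAD}):

```python
import math

f_pi = 0.131                        # pion decay constant (GeV)
asym_space = math.sqrt(2.0) * f_pi  # space-like scaling sqrt(2) f_pi
asym_time = 0.225                   # time-like asymptotic magnitude (GeV)

assert abs(asym_space - 0.185) < 1e-3        # sqrt(2) f_pi = 0.185 GeV
assert abs(asym_time / 0.185 - 1.22) < 0.01  # ratio roughly 1.22
```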
For the phase, the LO result rises with $Q^2$, and approaches an
asymptotic value close to $180^\circ$ as shown in
Fig.~\ref{fig:TFNAD}. It is obvious that the variation with $Q^2$ is
also attributed to the inclusion of the parton transverse momentum
$k_T$. If $k_T$ in Eq.~(\ref{TFH0}) were dropped, the LO hard kernel
would reduce to the traditional expression in the collinear
factorization, which always leads to a real $F_{\pi\gamma}$. A
quantitative understanding can be attained via Eq.~(\ref{eq:Pri}):
the contributions from the two terms in Eq.~(\ref{eq:Pri}) are
comparable at low $Q^2$, such that the time-like pion transition
form factor acquires a nontrivial phase. At high $Q^2> 20$
GeV\textsuperscript{2}, the phase is dominated by the first term in
Eq.~(\ref{eq:Pri}), since it is unlikely to have a large parton
$k_T^2=xQ^2$ demanded by the second term. That is, the tiny
deviation (less than $5^\circ$) of the asymptotic phase from
$180^\circ$ is caused by the power-suppressed $k_T^2/Q^2$ effect.
The NLO correction to the phase is about $30^\circ$ at $Q^2 = 30$
GeV\textsuperscript{2}, and less than $20^\circ$ for $Q^2 > 50$
GeV\textsuperscript{2}. The above investigation implies that
higher-order corrections to the complex time-like transition form
factors are under control in the $k_T$ factorization.
As stated before, the perturbative expansion could be improved by
resumming the double logarithm $\alpha_s \ln^2 x$ in
Eq.~(\ref{TFh1}).
\begin{figure}[t]
\begin{center}
\includegraphics[height=5.5cm]{TF_Abs.eps}
\includegraphics[height=5.5cm]{TF_Arg.eps}
\end{center}
\caption{
Magnitude and phase of the
time-like pion-photon transition form factor at LO (dashed) and
up to NLO (solid).
The NLO correction is marked in gray.}
\label{fig:TFNAD}
\end{figure}
For the analysis of the time-like pion EM form factor, we first
identify the major source of the strong phase by comparing the
results from Eqs.~(\ref{EMI2}) and (\ref{EMI1}) in
Fig.~\ref{fig:EMtw2}. The renormalization and factorization scale
$\mu$ is set to $\mu = \max\left( \sqrt{x_1}Q, 1/b_1, 1/b_2 \right)$
\cite{LS,Li:2010nn}, associated with the virtuality of the internal
particles. The curve from Eq.~(\ref{EMI2}) implies that the
magnitude of the time-like pion EM form factor has an asymptotic
behavior $Q^2 |F_{\text{EM}}(Q^2)| \to 0.14$ GeV\textsuperscript{2}
as $Q \to \infty$. Similar to the case of the transition form
factor, this asymptotic value is larger than that of the space-like
pion EM form factor \cite{Li:2010nn}, because of the inclusion of
the parton transverse momenta $k_T$. The inclusion of $k_T$ also
leads to the variation of the phase with $Q^2$, which rises from
the first quadrant, and then approaches an asymptotic value close to
$165^\circ$. For $Q^2>15$ GeV\textsuperscript{2}, the difference
between using the single-$b$ and double-$b$ convolutions is
insignificant in both magnitude and phase, verifying the
hierarchical relation $x_1Q^2, x_2Q^2\gg x_1x_2Q^2,k_{T}^2$, and
identifying the internal gluon propagator as the major source of the strong phase.
We then investigate the NLO effect in the time-like pion EM form
factor based on Eqs.~(\ref{EMI2}) and (\ref{EM1I1}), which is also
shown in Fig.~\ref{fig:EMtw2}. For $Q^2 > 10$
GeV\textsuperscript{2}, the observed NLO correction is roughly
$25\%$ for the magnitude, and less than $10^\circ$ for the phase.
That is, the perturbative evaluation of the time-like pion EM form
factor is stable against radiative corrections at leading twist.
Finally, we include another subleading effect, the LO
two-parton twist-3 contribution \cite{Nagashima:2002iw}, for completeness.
Following the same derivation of the twist-2 contribution in Sec.~\ref{PionEM},
the $k_T$ factorization formula for the LO two-parton twist-3 contribution
to the time-like pion EM form factor was obtained in \cite{Chen:2009sd}, where
an explicit double-$b$ convolution expression similar to Eq.~(\ref{EMI2})
can be found. The hard kernel in the impact-parameter space is identical
to the one in Eq.~(\ref{EMI2}).
We employ the asymptotic two-parton twist-3 DAs,
\begin{align}
\phi_\pi^P(x) = 1 ,
\;\;\;\;\;\;
\phi_\pi^T(x) = 1-2x ,
\label{Phi3}
\end{align}
with the associated chiral scale $\mu_\pi=1.3$ GeV. The threshold
resummation factor $S_t(x, Q)$ with a shape parameter $c = 0.4$ is
included, since the important double logarithm $\alpha_s \ln^2 x$ at
small $x$ needs to be summed \cite{Li:2001ay}. The numerical
outcomes for the time-like pion EM form factor are presented in
Fig.~\ref{fig:EMtw3}, where the available experimental data
\cite{Whalley2003aaa,Proto1973} are displayed for comparison. It is
known that the pion EM form factor is dominated by the two-parton
twist-3 contribution, instead of by the twist-2 one at currently
accessible energies, because of the end-point enhancement developed
by the above DAs \cite{Cao:1997st,Huang:2004su,Chen:2009sd}.
This enhancement is understood easily as follows:
the virtual quark and gluon propagators behave like $1/x_1$ and $1/(x_1 x_2)$,
respectively, as indicated in
Eq.~(\ref{EMH02}). The twist-2 pion DA vanishes like $\phi_\pi(x) \sim O(x)$,
but the twist-3 pion DAs remain constant, $\phi^{P,T}_\pi(x) \sim O(1)$,
at small $x$, which then enhance the end-point contribution
dramatically. This enhancement was also observed in perturbative evaluation
of the $B\to\pi$ transition form factors \cite{KLS01}, and confirmed by the
light-cone sum-rule analysis \cite{KP98}.
The relative phase between the twist-2 and two-parton twist-3 pieces is
about $70^\circ$ as indicated by Figs.~\ref{fig:EMtw2} and
\ref{fig:EMtw3}, so the magnitude of the form factor is hardly
affected by the former. However, the twist-2 contribution does have
a sizable effect on the phase as illustrated in
Fig.~\ref{fig:EMtw3}.
The predictions for the magnitude of the time-like pion EM form
factor from the $k_T$ factorization can accommodate the data
\cite{Whalley2003aaa} for $Q^2>4$ GeV\textsuperscript{2}, an
observation consistent with that from the LO analysis
\cite{Chen:2009sd}. We point out that the measured magnitude of the
time-like pion EM form factor is larger than the space-like one
\cite{Chen:2009sd}, and simultaneous accommodation of both data is
possible in the $k_T$ factorization, but not in the collinear
factorization. Though the perturbative calculations may not be
justified for small $Q^2 < 4$ GeV\textsuperscript{2}, it is
interesting to see the coincidence between the increases of the
phase with $Q^2$ from the $k_T$ factorization and from the data for
$Q^2<1.3$ GeV\textsuperscript{2}. In a Breit-Wigner picture, the
observed phase increase could be attributed to a resonant $\rho$
meson propagator \cite{Proto1973}. It happens that the parton
transverse momentum $k_T$ plays the role of the $\rho$ meson mass,
such that the two curves in Fig.~\ref{fig:EMtw3} exhibit a similar
tendency, and begin to merge for $Q^2 > 1$ GeV\textsuperscript{2}.
Again, this coincidence cannot be achieved in the collinear
factorization, which does not generate a significant phase shift.
The consistency between the present analysis and the data supports
the $k_T$ factorization formalism
as an appropriate framework for
studying complex time-like form factors. It has been understood that
the complex penguin annihilation contribution is essential for
explaining direct CP asymmetries in two-body hadronic $B$ meson
decays \cite{KLS}. This contribution involves time-like scalar form
factors, which can be calculated in the same $k_T$ factorization
formalism. It has been observed that the phase of the $S$-wave
component in $\pi\pi$ scattering shows a similar $Q^2$ dependence to
that of the $P$-wave \cite{Proto1973}. Therefore, the PQCD
predictions for the above direct CP asymmetries are expected to be
reliable. The formalism for three-body hadronic $B$ meson decays
\cite{CL03} has required the introduction of two-meson wave
functions, whose parametrization also involves time-like form
factors of various currents. Encouraged by the present work, we are
confident that these complex time-like form factors can be computed
directly in the PQCD approach.
\begin{figure}[t]
\begin{center}
\includegraphics[height=5.5cm]{EM_Abs_tw2.eps}
\includegraphics[height=5.5cm]{EM_Arg_tw2.eps}
\end{center}
\caption{Magnitude and phase of the time-like pion EM form factor at leading twist.
Contributions from LO with the single-$b$ convolution (dotted),
LO (dashed), and LO+NLO (solid) are shown.
}
\label{fig:EMtw2}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[height=5.5cm]{EM_Abs_tw3.eps}
\includegraphics[height=5.5cm]{EM_Arg_tw3.eps}
\end{center}
\caption{
Contributions to the time-like pion EM form factor from two-parton twist-3 LO (dotted),
two-parton twist-3 LO plus twist-2 LO (dashed),
and two-parton twist-3 LO plus twist-2 up to NLO (solid).}
\label{fig:EMtw3}
\end{figure}
\section{Conclusions}\label{Concl}
In this Letter we have calculated the time-like pion-photon
transition and EM form factors up to NLO in the $k_T$ factorization
formalism. The corresponding NLO hard kernels were derived by
analytically continuing the space-like ones to the time-like region
of the momentum transfer squared $Q^2$. We have identified the
$k_T$-dependent internal gluon propagator as the major source for
the strong phase of the time-like pion EM form factor, which
increases with $Q^2$, and approaches an asymptotic value
\cite{Ananthanarayan:2012tn}. The magnitudes of the time-like form
factors are larger than those of the space-like ones. It has been
realized that the above features are attributed to the inclusion of
the parton transverse momenta, and consistent with the tendency
implied by the data. It was observed that the NLO corrections in
magnitude (phase) change the LO leading-twist results by roughly
$30\%$ ($30^\circ$) for the pion transition form factor, and $25\%$
($10^\circ$) for the pion EM form factor for $Q^2 > 30$ GeV\textsuperscript{2}. The
stability against radiative corrections justifies the $k_T$
factorization formalism for both time-like form factors at leading
twist. Therefore, the predictions for strong phases of annihilation
contributions to two-body hadronic $B$ meson decays in the PQCD
approach may be reliable. The framework presented here will have
other applications, for example, to the construction of the
two-meson wave functions for three-body $B$ meson decays.
\vskip 0.3cm We thank B. Ananthanarayan and I. Caprini for useful discussions.
The work was supported in part by the National Science
Council of R.O.C. under Grant No. NSC-98-2112-M-001-015-MY3, and by
the National Center for Theoretical Sciences of R.O.C.
\section{Introduction}
Contextuality admits many approaches, each built for a specific purpose or strategy to exploit its characteristics, and some were developed long before they were identified as contextuality. All of them start from a codification of physical systems in some mathematical structure that cannot be represented by another compatible structure, called classical. There are already examples of topological representations \cite{Abramsky_2011,Abramsky_2012,abramsky_et_al:LIPIcs:2015:5416,okay2017topological,2020Okay,Sidiney_2021} and algebraic representations \cite{BirkhoffvonNeumann,PhysicsPhysiqueFizika.1.195,Kochen1975,Gleason}. Some evolved into a geometrical representation due to the relation between inequalities and convex sets \cite{Cabello_2014,amaral2017geometrical}, and others look for a measure-theoretic foundation \cite{Dzhafarov2015ContextualitybyDefaultAB}. Other notions \cite{1994SORKIN,Spekkens2008,Schmid2021} are known to be related to standard contextuality, and they are more or less explored in the literature.
Non-classicality has an amazing number of applications, and more are presented each day. Contextuality is proving to be the primary fuel for this revolution, as it is defined as the inability to be classical, at least relative to a fixed notion of classicality. It is known that contextuality is the origin of quantum behavior \cite{doring2020contextuality}, and it is the generalization of the famous notion of nonlocality \cite{Abramsky_2011}. It is necessary for any computational advantage over classical computers \cite{shahandeh2021quantum}, and it is the "magic" needed for some kinds of quantum computers \cite{Howard2014}. But the applications are not only technological. Knowing contextuality in a more general formulation is essential to understand why and how we live in a quantum reality, or whether we need to look for more general theories. Such a fundamental exploration aims to set the framework in which future theories and technologies will be designed.
In this work, I explore the geometrical/topological origins of the generalized contextuality of generalized probability theories. The idea is to re-think the operational equivalences as loops with discrete parts in the tangent space of the usually piecewise linear manifold given by the elements of a process (for example, an extension to the set of effects), and non-contextuality as the preservation of valuation maps along these loops, therefore without the presence of non-trivial phases. Usually, generalized probability theories are constructed with a finite set of extremal effects, and contextual demonstrations use them. But there are theories where the set of such effects is infinite and, like quantum theory, even continuous. This implies the possibility of different kinds of operational equivalences. The idea is to construct a framework where all the operational equivalences are explored on an equal footing, thus entering the realm of differential geometry for the continuum case and, more generally, discrete differential geometry. With this framework, I present two ways to interpret contextuality that follow from holonomy or monodromy, respectively, linked to realistic and anti-realistic views of the theory, and which, respectively, imply a geometrical or topological cause. They are equivalent and give dual interpretations of the fact that we have lost classicality. I use them to explore the relationship with different notions of non-classicality, such as the contextual fraction, interference in the sense of generalized measure theory, the necessity of signed measures as the valuation maps, and the impossibility of embedding in a classical mathematical structure. The topological view gives a generalization of the famous Voroby'ev theorem, and the geometrical view gives a relationship between the transition maps and disturbance.
This work is divided as follows. In section \ref{2} there is a quick presentation of generalized probability theories (\ref{2.1}), the notion of ontic representation of it and the valuation functions for each piece of a usual process (\ref{2.2}), the conditions of non-contextuality for each of these pieces (\ref{2.3}), and finally a quick presentation of discrete differential geometry, the framework I will use to codify the processes (\ref{2.4}). In section \ref{3}, I will define contextuality as the identification of a phase of the valuation functions, and investigate some generalizations provided by it. In the geometrical view, I define the notion of contextual curvature for a generalized probability theory (\ref{3.1}), and in the topological view, I define the relationship between contextuality and non-trivial topology (\ref{3.2}). Also, I show how one can translate them as duals (\ref{3.3}). The tour of already known non-classical phenomena and their relations with contextuality follows in section \ref{4}, first with the identification of contextual fraction (\ref{4.1}), interference and generalized measure theory (\ref{4.2}), with the special example of quantum theory and its dependence on Planck's constant, and second, with the necessity of signed measures and the impossibility of embedding the process in a classical mathematical object (\ref{4.3}). I also comment on the Voroby'ev theorem and generalize it with the topological view (\ref{4.4}). The zeroth order process, given by disturbance, is finally explained in a generalized framework for generalized probability theories, adding a new term to the valuation decomposition (\ref{4.5}). Finally, I conclude in section \ref{5}, where I also talk a bit about future possibilities.
\section{GPTs and DDG}
\label{2}
The objective of operational probability theories is to give an operational description of physical theories. We will work with standard probabilities and with only one system, so it is unnecessary to define how to glue systems together. This restricts them to generalized probability theories, or GPTs.
\subsection{Quick presentation of GPTs}
\label{2.1}
A GPT can be described as a category with morphisms defining operations, with a trivial object. The operations from the trivial object to any other object are called states, and the operations from any object other than the trivial one to it are called effects. The automorphisms of the trivial object codify the scalars, usually a semi-ring or semi-field, like the Boolean and the probabilistic ones. The other morphisms are the representations of transformations, which can also be codified by a function from the set of states to itself or from the set of effects to itself. There are more details to this construction; for a more formal treatment, see Refs. \cite{Janotta2014,Muller2021,Selby2021}.
The sets of states, transformations, and effects $\{\mathcal{P},\mathcal{T},\mathcal{E}\}$ give us the probabilities of a process in the system that they represent by a function
\begin{equation}
p:\{\mathcal{P},\mathcal{T},\mathcal{E}\}\to[0,1]::(P,T,E)\mapsto P(E|T,P)
\end{equation}
interpreted as the probability of getting the outcome $E$ when starting with a state $P$ and doing a transformation $T$. The important point is that we can compose transformations such that $p(E|T,P)$ can be understood as a path from the trivial object to itself, passing through the operations that define first $P$, then the transformations that define $T$, and finally the operation of $E$, closing the loop in the category of operations. Usually we can use the bracket notation to write $p(E|T,P)=\bra{E}T\ket{P}$, in analogy with quantum theory and linear algebra.
An important imposition is the preservation of mixtures of operations by $p$, usually used as an argument for linearity. With it, we can represent states and effects in a vector space, and transformations as linear maps acting on them such that they preserve the sets of states and effects. This justifies the bracket notation by the identification of $p$ with an inner product in this space.
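The linearity argument can be illustrated with a toy finite-dimensional representation (hypothetical vectors and matrices, not normalized to any physical state space): the probability of a mixture equals the mixture of the probabilities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical representation: states and effects as vectors,
# a transformation as a matrix, and p(E|T,P) = <E|T|P>.
P1, P2 = rng.random(4), rng.random(4)
E, T = rng.random(4), rng.random((4, 4))

def p(E, T, P):
    return E @ T @ P

# Preservation of mixtures: p is linear in the state.
lam = 0.3
mix = lam * P1 + (1 - lam) * P2
assert np.isclose(p(E, T, mix), lam * p(E, T, P1) + (1 - lam) * p(E, T, P2))
```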
\subsection{Ontic representation}
\label{2.2}
An ontic representation of a model is the inclusion of its vector space representation into a classical probability theory. Such a theory defines a simplex as a set of states and its dual as the set of effects. We can always do that, and the probability of the model will be given by the chain rule
\begin{equation}
P(E_{r}|T_{t},P_{s})=\sum_{\lambda,\lambda'}\xi(E_{r}|\lambda')\Gamma(\lambda',T_{t},\lambda)\mu(\lambda|P_{s})
\end{equation}
with the valuation functions $\xi$, $\Gamma$, and $\mu$, and the set of ontic variables denoted by $\Lambda$. The main idea is to extend the original model by refining the variables involved, thereby allowing an explanation as a sub-model of a classical theory. But one needs to analyze whether this sub-model violates any classical behavior.
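A minimal sketch of the chain rule for a finite ontic space (toy, hand-picked distributions; $\Gamma$ is column-stochastic over $\lambda'$):

```python
import numpy as np

# Toy ontic representation with three ontic states (assumed sizes):
mu = np.array([0.2, 0.5, 0.3])        # mu(lambda | P_s), a distribution
Gamma = np.array([[0.9, 0.1, 0.0],    # Gamma(lambda', T_t, lambda):
                  [0.0, 0.8, 0.2],    # each column sums to 1, i.e. a
                  [0.1, 0.1, 0.8]])   # stochastic map on Lambda
xi = np.array([0.7, 0.2, 0.1])        # xi(E_r | lambda'), values in [0,1]

# Chain rule: P(E|T,P) = sum_{lam,lam'} xi(E|lam') Gamma(lam',T,lam) mu(lam|P)
prob = xi @ Gamma @ mu
assert 0.0 <= prob <= 1.0
```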
First, note that such a representation is independent of measurements, since the function $\xi$ has the outcomes as its domain, in the form of an effect algebra. Thus, it is restricted to non-disturbing models\footnote{Non-disturbance is defined in the intersection of contexts: if the valuation of an intersection (when one defines it) $\xi(C\cap C')$ is independent of $C$ and $C'$ for all pairs of contexts, then $\mathcal{M}$ is said to be non-disturbing. It is related to parameter-independence \cite{Brandenburger_2008,barbosa2019continuousvariable}.}, or in other words, for measurements $m$ and $n$ it holds that
\begin{equation}
\xi(E_{r}|\lambda,m)=\xi(E_{r}|\lambda,n)=\xi(E_{r}|\lambda).
\end{equation}
At this level of generality, outcome-determinism\footnote{Outcome-determinism is defined in the valuation: the outcomes defined on the contexts where the distribution is defined can be explained in a deterministic way. It is equivalent to restricting our interest to ideal measurements, which one can easily criticize when thinking in empirical applications \cite{Spekkens2014}.} is not required, unless one wants to use factorizability as a condition for non-contextuality, as in the sheaf approach \cite{Wester_2018}\footnote{Outcome-determinism implies that $\xi::\mathcal{E}\times\Lambda\to\{0,1\}$, thus codifying the determinism of this valuation.}. Since outcomes are fundamental in this framework, which deals with the non-classicality of all the steps in a process, it is natural that this is the strongest framework in which to construct new generalized models. But it is still limited by non-disturbance.
\subsection{Generalized Contextuality}
\label{2.3}
The notion of generalized contextuality \cite{Spekkens2005}, dealing with preparations, transformations, and unsharp measurements (the latter equivalent to effect algebras), is a way to inquire about the classicality of a system through operational equivalences. Operational equivalences on the states, the effects, and the transformations can be written as linear conditions \cite{selby2021contextuality}
\begin{equation}
\label{basestates}
\sum_{s}a_{s}^{(\alpha)}P_{s}=0,
\end{equation}
\begin{equation}
\label{baseeffects}
\sum_{r}b_{r}^{(\beta)}E_{r}=0,
\end{equation}
\begin{equation}
\label{basetrasformations}
\sum_{t}c_{t}^{(\tau)}T_{t}=0,
\end{equation}
indexed by $\alpha$, $\beta$ and $\tau$.
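As a concrete illustration (a toy example of mine, not taken from the references), the four qubit effects given by the $Z$- and $X$-basis projectors satisfy the operational equivalence $E_{0}+E_{1}-E_{+}-E_{-}=0$, since both pairs sum to the identity. The coefficient vector $(1,1,-1,-1)$ is then one of the $b_{r}^{(\beta)}$ above, and the equivalence can be checked numerically:

```python
import numpy as np

# Hypothetical qubit effects (not from the paper): projectors onto the
# Z and X eigenbases.  E0 + E1 = I = E+ + E-, so the coefficients
# b = (1, 1, -1, -1) give an operational equivalence  sum_r b_r E_r = 0.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
ketp = (ket0 + ket1) / np.sqrt(2)
ketm = (ket0 - ket1) / np.sqrt(2)

effects = [np.outer(v, v.conj()) for v in (ket0, ket1, ketp, ketm)]
b = [1, 1, -1, -1]

# The discrete "loop" sum_r b_r |E_r> closes: it is the zero operator.
loop = sum(br * Er for br, Er in zip(b, effects))
assert np.allclose(loop, 0)
```

Any noncontextual valuation $\xi(\cdot|\lambda)$ must assign the same linear combination the value zero for every $\lambda$.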
A theory is non-contextual if the operational equivalences are preserved in the probabilities given by the valuation maps
\begin{equation}
\label{NCstates}
\sum_{s}a_{s}^{(\alpha)}\mu(\lambda|P_{s})=0,
\end{equation}
\begin{equation}
\label{NCeffects}
\sum_{r}b_{r}^{(\beta)}\xi(E_{r}|\lambda')=0,
\end{equation}
\begin{equation}
\label{NCtransformations}
\sum_{t}c_{t}^{(\tau)}\Gamma(\lambda',T_{t},\lambda)=0,
\end{equation}
for all $\lambda$ and $\lambda'$\footnote{This approach is more refined than other frameworks because of its operational interpretation, which allows the exploration of non-classicality beyond measurements and effects. Outcome-determinism is not imposed, but when it is, this approach is equivalent to the sheaf one \cite{Staton2015}.}.
The ontic space $\Lambda$ defines a simplex theory, thus representing a classical description. The conditions state that the original theory, with its states, effects, and transformations, can be embedded in $\Lambda$, with its probabilities codified as a coarse-graining, without violating classical probability theory in the sense of the Kolmogorov axioms \cite{schmid2020structure,Schmid2021}. This is equivalent to the functions $\xi$, $\Gamma$ and $\mu$ not requiring negative values \cite{Spekkens2008}.
A property of valuations is their linearity within the convex set of objects in their domain. Let $A,B\in\mathcal{E}$, with $A+B\in\mathcal{E}$; then
\begin{equation}
p(A+B)=p(A)+p(B)
\end{equation}
if and only if
\begin{equation}
\xi(A+B|\lambda')=\xi(A|\lambda')+\xi(B|\lambda'),
\end{equation}
which follows from the definition of ontic representation. Another way to see this is to note that if $A+B\in\mathcal{E}$, then $\{A,B,\mathds{1}-(A+B)\}$ is a valid measurement, thus
\begin{equation}
\begin{split}
1=&\xi(\mathds{1}|\lambda')\\
=&\xi(\mathds{1}-(A+B)|\lambda')+\xi((A+B)|\lambda')\\
=&\xi(\mathds{1}-(A+B)|\lambda')+\xi(A|\lambda')+\xi(B|\lambda')
\end{split}
\end{equation}
and linearity follows. But there is no guarantee that such linearity holds outside $\mathcal{E}$.
\begin{example}
The Wigner representation of quantum mechanics is an ontic representation of quantum theory, and it is linear for mixed states. But as explained in Ref. \cite{doi:10.1119/1.2957889}, $W_{\psi}=W_{\alpha}+W_{\beta}$ with $\psi=\psi_{\alpha}+\psi_{\beta}$ generally does not hold. On the other hand, such linearity does hold for classical theories, as follows from the Kolmogorov axioms for probabilities.
\end{example}
\subsection{Quick presentation of DDG}
\label{2.4}
Note that in a GPT the states and effects are vectors in $\mathbb{R}^{n}$; thus an operational equivalence codifies a discrete loop in that space, and the functions $\xi$, $\Gamma$ and $\mu$ must deal with it. The non-contextuality conditions can be thought of as requiring that the discrete parallel transport of the probability functions presents no phase around a closed loop. Contextuality, as the violation of such conditions, is the discrete phase in each set. To formalize this, we will use discrete differential geometry, or DDG \cite{Crane:2013:DGP,grady2010discrete}.
The formalism of DDG comes from the need for discrete methods to approximate smooth manifolds, as in computer graphics and geometry processing. A smooth manifold can always be triangulated, and by dropping the usual differential structure, DDG allows the study of more general topological manifolds called piecewise linear manifolds.
We start with a piecewise linear manifold $\mathcal{M}=\bigcup_{n}\mathcal{C}_{n}$, formed by sets of $n$-simplices $\mathcal{C}_{n}$. An $n$-simplex is treated as an $n$-dimensional ``quantum of space'', and the topology follows from that of the simplicial complex. For the approximation to be valid, each simplex is thought of as the tangent space of a point in a hypothetical smooth manifold. To do calculus on this simplicial complex, we introduce the formalism of discrete differential forms, which can informally be thought of as ways to measure the ``size'' of the simplices. Discrete differential forms are defined as linear duals of the simplices, and we denote by $\mathcal{C}^{n}$ the set of $n$-forms. If $\omega\in\mathcal{C}^{n}$, then we have
\begin{align}
\begin{split}
\omega:\mathcal{C}_{n} &\to \mathbb{R}\\
\ket{S} &\mapsto \left<\omega |S\right>=\int_{S}\omega.
\end{split}
\end{align}
The first operator in DDG is the boundary $\partial:\mathcal{C}_{n}\to\mathcal{C}_{n-1}$, defined as usual through the orientation of the simplicial complex
\begin{equation}
\partial \{a_{1}a_{2}...a_{n}\}=\{a_{2}...a_{n}\}-\{a_{1}a_{3}...a_{n}\}+...\pm\{a_{1}a_{2}...a_{n-1}\}.
\end{equation}
As an example, a tetrahedron $\{abcd\}$ has boundary
\begin{equation}
\partial \{abcd\}=\{bcd\}-\{acd\}+\{abd\}-\{abc\}
\end{equation}
with alternating signs. The second operator is the coboundary $d:\mathcal{C}^{n}\to\mathcal{C}^{n+1}$. It is defined as the unique linear map that satisfies the generalized Stokes theorem for DDG
\begin{equation}
\int_{\partial S}\omega=\bra{\omega}\ket{\partial S}=\bra{d\omega}\ket{S}=\int_{S}d\omega,
\end{equation}
where the bracket notation will be used to denote the action of an $n$-form on an $m$-dimensional region, $n\geq m$, both in the discrete and in the continuum case. Note that the integral gives an $(n-m)$-form, as expected, and $0$-forms are identified with scalars.
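Both operators can be sketched concretely. In the following toy computation (a single filled triangle of my own choosing, not an example from the references), the boundary operators are integer matrices, the coboundary is the transpose with respect to the pairing, and both $\partial\partial=0$ and the discrete Stokes theorem hold by construction:

```python
import numpy as np

# Toy complex: one filled triangle with vertices a,b,c, edges ab,bc,ac,
# and face abc.  Boundary matrices in the bases (a,b,c), (ab,bc,ac), (abc):
#   d1{ab} = b - a,  d1{bc} = c - b,  d1{ac} = c - a
#   d2{abc} = {bc} - {ac} + {ab}
d1 = np.array([[-1,  0, -1],
               [ 1, -1,  0],
               [ 0,  1,  1]])          # boundary C_1 -> C_0
d2 = np.array([[1], [1], [-1]])        # boundary C_2 -> C_1

# The boundary of a boundary is empty.
assert np.allclose(d1 @ d2, 0)

# The coboundary is the transpose, so the discrete Stokes theorem
# <d w | S> = <w | boundary(S)> holds for any 1-form w and face S.
w = np.array([0.3, -1.2, 2.0])         # arbitrary 1-form on the edges
S = np.array([1.0])                    # the single face
assert np.isclose((d2.T @ w) @ S, w @ (d2 @ S))
```

The transposition of the boundary matrix is exactly the discrete counterpart of integration by parts.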
The homology of the manifold $\mathcal{M}$ follows simplicial homology \cite{Hatcher:478079}, and explores the topological structure of $\mathcal{M}$ through its simplicial complex structure and the boundary operator. Since the boundary of a boundary is empty, we get $\partial\partial=0$ in the chain complex
\begin{equation}
\begin{tikzcd}[row sep=tiny]
0 & \arrow{l}{\partial_0} \mathcal{C}_{0}(\mathcal{M}) & \arrow{l}{\partial_1} \mathcal{C}_{1}(\mathcal{M}) & \arrow{l}{\partial_2} \mathcal{C}_{2}(\mathcal{M}) & \arrow{l}{\partial_3} \dots
\end{tikzcd}
\end{equation}
where we can define the kernel of $\partial_{n}$, denoted by $Z_{n}(\mathcal{M})$, as the $n$-cycles, the image of $\partial_{(n+1)}$, denoted by $B_{n}(\mathcal{M})$, as the $n$-boundaries, and the algebraic structure $H_{n}(\mathcal{M})=Z_{n}(\mathcal{M})/B_{n}(\mathcal{M})$ as the $n$-homology. Informally, it explores the shape of $\mathcal{M}$ by directly studying the ``quanta of space'', or equivalently the tangent space of $\mathcal{M}$, which is locally isomorphic to it. Non-trivial $n$-homology implies an $n$-dimensional topological failure in the simplicial complex.
On the other hand, cohomology deals with the duals of the simplices, the discrete differential forms, and the coboundary operator. By the property $dd=0$, which follows from the definition of the coboundary via the generalized Stokes theorem, we have the cochain complex
\begin{equation}
\begin{tikzcd}[row sep=tiny]
0 \arrow{r} & \mathcal{C}^{0}(\mathcal{M}) \arrow{r}{d_{0}} & \mathcal{C}^{1}(\mathcal{M}) \arrow{r}{d_{1}} & \mathcal{C}^{2}(\mathcal{M}) \arrow{r}{d_{2}} & \dots
\end{tikzcd}
\end{equation}
where we can define the kernel of $d_{n}$, denoted by $Z^{n}(\mathcal{M})$, as the $n$-cocycles or closed $n$-forms, the image of $d_{(n-1)}$, denoted by $B^{n}(\mathcal{M})$, as the $n$-coboundaries or exact $n$-forms, and the algebraic structure $H^{n}(\mathcal{M})=Z^{n}(\mathcal{M})/B^{n}(\mathcal{M})$ as the de Rham $n$-cohomology. It is the study of what we integrate on $\mathcal{M}$ and of how that interacts with its shape. In other words, cohomology studies the failure of solutions of equations of the form $d\omega=\sigma$, which live in the cotangent space.
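A minimal sketch of non-trivial cohomology (again a toy example of my own): take the hollow triangle, a cycle of three vertices and three edges with no face. With no $2$-simplices, every $1$-form is closed, and the exact ones are the image of the coboundary on $0$-forms, so $\dim H^{1}=1$:

```python
import numpy as np

# Hollow triangle: vertices a,b,c and edges ab,bc,ac, but NO face.
# Edge -> vertex boundary matrix in the bases (a,b,c) and (ab,bc,ac).
boundary_1 = np.array([[-1,  0, -1],
                       [ 1, -1,  0],
                       [ 0,  1,  1]])

d0 = boundary_1.T                       # coboundary on 0-forms

dim_closed = 3                          # no faces: every 1-form is closed
dim_exact = np.linalg.matrix_rank(d0)   # exact 1-forms = image of d0
dim_H1 = dim_closed - dim_exact
assert dim_H1 == 1   # one closed-but-not-exact class: the hole is detected
```

The single non-trivial class is exactly the kind of obstruction the topological view of contextuality exploits below.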
\section{Differential Geometry of Contextuality}
\label{3}
Effects, states, and transformations live, by construction, in a real vector space, which is isomorphic to its own tangent space. They form convex subsets under convex combination. This naturally gives rise to piecewise linear manifolds embedded in real vector spaces. Equations \ref{basestates}, \ref{baseeffects} and \ref{basetrasformations} simply say that an operational equivalence is a closed discrete loop in the tangent space,
\begin{equation}
\sum_{s}a_{s}^{(\alpha)}\ket{P_{s}}=\gamma^{(\alpha)},
\end{equation}
\begin{equation}
\sum_{r}b_{r}^{(\beta)}\ket{E_{r}}=\gamma^{(\beta)},
\end{equation}
\begin{equation}
\sum_{t}c_{t}^{(\tau)}\ket{T_{t}}=\gamma^{(\tau)}.
\end{equation}
Thus, operational equivalences and closed loops generated by elements of each subset codify the same information.
The non-contextual conditions presented in equations \ref{NCstates}, \ref{NCeffects} and \ref{NCtransformations} are defined by probabilistic functions $\xi$, $\Gamma$ and $\mu$. They are indexed by ontic variables. Rewriting them, we define their representations as differential forms
\begin{equation}
\label{NCstates2}
\phi^{(\alpha)}=\sum_{s}a_{s}^{(\alpha)}\mu_{\lambda}(P_{s})=\bra{\mu_{\lambda}}\left(\sum_{s}a_{s}^{(\alpha)}\ket{P_{s}}\right)=0,
\end{equation}
\begin{equation}
\label{NCeffects2}
\phi^{(\beta)}=\sum_{r}b_{r}^{(\beta)}\xi_{\lambda'}(E_{r})=\bra{\xi_{\lambda'}}\left(\sum_{r}b_{r}^{(\beta)}\ket{E_{r}}\right)=0,
\end{equation}
\begin{equation}
\label{NCtransformations2}
\phi^{(\tau)}=\sum_{t}c_{t}^{(\tau)}\Gamma_{\lambda'\lambda}(T_{t})=\bra{\Gamma_{\lambda'\lambda}}\left(\sum_{t}c_{t}^{(\tau)}\ket{T_{t}}\right)=0,
\end{equation}
for all $\lambda$ and $\lambda'$. The non-contextuality conditions become just the requirement that the valuation of the $1$-forms on each space, given by the ontic representations $\xi$, $\Gamma$ and $\mu$, preserves the flat behavior of the vector spaces involved. In other words, we can understand these functions as potential vector fields on our discrete space, and ask for the preservation of convex combinations in the sense that they present no phase when evaluated on a loop.
An important point to note is the linearity of the forms. As the map that defines the vectors is $E\mapsto\ket{E}$, a vector $\sum_{r}c_{r}\ket{E_{r}}$ is different from $\ket{\sum_{r}c_{r}E_{r}}$, since the latter can lie outside $\mathcal{E}$; the coefficients can even be negative, so the operation itself leaves $\mathcal{E}$. This is necessary to deal with non-contextuality: the objective is to classically complete the theory, embedding it into a classical one, where $\sum_{r}c_{r}\ket{E_{r}}=\ket{\sum_{r}c_{r}E_{r}}$.
\subsection{Geometrical View}
\label{3.1}
Let's keep our model in a flat space\footnote{This supposition is the trivial extension of the convex set to the whole vector space, without any topological failure.}. All loops are then boundaries, $\gamma=\partial S$, and the non-contextuality conditions can be rewritten as
\begin{equation}
\bra{\mu_{\lambda}}\ket{\partial S_{\alpha}}=0,
\end{equation}
\begin{equation}
\bra{\xi_{\lambda'}}\ket{\partial S_{\beta}}=0,
\end{equation}
\begin{equation}
\bra{\Gamma_{\lambda'\lambda}}\ket{\partial S_{\tau}}=0,
\end{equation}
in the language of differential forms. Here we can use the Stokes theorem to define the coboundary operator and get
\begin{equation}
\bra{d \mu_{\lambda}}\ket{S_{\alpha}}=0,
\end{equation}
\begin{equation}
\bra{d \xi_{\lambda'}}\ket{S_{\beta}}=0,
\end{equation}
\begin{equation}
\bra{d \Gamma_{\lambda'\lambda}}\ket{S_{\tau}}=0.
\end{equation}
Again, this is possible because we are in a $\mathds{R}^n$, with states, effects and transformations represented by vectors in its tangent space, making sense of $S_{\alpha}$, $S_{\beta}$ and $S_{\tau}$.
In these conditions, every closed differential form is exact\footnote{I will drop the bra-ket notation for forms when there is no doubt that they exist.}: if $\bra{d \xi_{\lambda'}}\ket{S}=0$ for all regions $S$, then $d\xi_{\lambda'}=0$, which means it is closed and thus exact, $\xi_{\lambda'}=dc_{\lambda'}$ with $c_{\lambda'}$ a function. The failure of the non-contextuality conditions implies that $\xi_{\lambda'}=dc_{\lambda'}+\omega_{\lambda'}$, where $d\omega_{\lambda'}\neq 0$, and by
\begin{equation}
\begin{split}
\bra{d \xi_{\lambda'}}\ket{S_{\beta}}&=\bra{ddc_{\lambda'}}\ket{S_{\beta}}+\bra{d\omega_{\lambda'}}\ket{S_{\beta}}\\
&=\bra{d\omega_{\lambda'}}\ket{S_{\beta}}
\end{split}
\end{equation}
we see that $\omega_{\lambda'}$ is the connection that generates contextual behavior, with $F_{\lambda'}=d\omega_{\lambda'}$ the curvature $2$-form. The same holds for states and transformations.
\begin{theorem}
Non-contextuality for measurements (transformations; states) is equivalent to a null contextual curvature $0=F_{\lambda'}=d\xi_{\lambda'}$ (respectively $0=F_{\lambda'\lambda}=d\Gamma_{\lambda'\lambda}$; $0=F_{\lambda}=d\mu_{\lambda}$) for all hidden variables that index it.
\end{theorem}
Geometrically, we can see each valuation and set of objects as defining a fiber bundle, with $\mathbb{R}$ seen as a commutative group. As an $\mathbb{R}$-bundle it is isomorphic to the trivial bundle $\mathbb{R}^{n}\times\mathbb{R}$, with the restriction $\mathcal{E}\times [0,1]$ well defined (and analogously for $\mathcal{T}$ and $\mathcal{P}$). Therefore, there is no topological failure; it is a geometrical question. It is analogous to electromagnetism, with an electromagnetic tensor $F$ that can be written through holonomic loops \cite{Sarita2016,weatherall2016categories}. The geometrical view identifies contextuality with the non-trivial holonomy of the contextual connection represented by $\omega$.
\subsection{Topological View}
\label{3.2}
Let us now refuse the use of a curvature to explain contextuality. Then $F=0$, and contextuality is not a correction to the valuation due to a hidden object.
\begin{theorem}
If $F=0$, then contextuality is equivalent to monodromy.
\end{theorem}
\begin{proof}
Contextuality implies a correction to the valuation, since by construction the form $dc$ satisfies the non-contextuality conditions. Thus the valuation must be $dc+\omega$ with $\phi=\bra{\omega}\ket{\gamma}$. But as $F=d\omega=0$, the $1$-form must be closed but not exact to show any non-trivial phase $\phi$, which happens when the loop $\gamma\neq\partial S$. In other words, the valuation cannot be defined on $S$, implying that the representation in $\mathbb{R}^{n}$ does not preserve the topology induced by the set of objects $\mathcal{E}$, $\mathcal{P}$ or $\mathcal{T}$. Therefore, $\omega$ captures topological failures through the monodromy\footnote{There is nothing wrong with monodromy, since we cannot access these topological failures.} $\phi$.
\end{proof}
As an example, without curvature we still need to define a correction $\xi_{\lambda'}=dc_{\lambda'}+\omega_{\lambda'}$, with $F=d\omega_{\lambda'}=0$. But now closed forms need not be exact, which means $\omega_{\lambda'}$ can represent a topological failure, by the Poincaré lemma \cite{Hatcher:478079}. Specifically, it represents a non-trivial element of the first cohomology group $[\omega_{\lambda'}]\in H^1$ defined on the set of objects. The phase in the contextuality conditions then follows from the monodromy of a loop $\gamma$ around a singularity in the topological view, whereas in the geometrical view there is a region $S$ whose boundary gives the loop, $\partial S=\gamma$.
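A toy illustration of such monodromy (my own example, not from the references): on a three-edge cycle with no interior face, a $1$-form is automatically closed, yet it can fail to be exact, and it then assigns a non-zero phase to the loop around the hole:

```python
import numpy as np

# Three vertices a,b,c joined into a cycle by edges ab,bc,ac; no face.
# Coboundary C^0 -> C^1 in the bases (a,b,c) and (ab,bc,ac):
#   (d0 c)_ab = c_b - c_a,  (d0 c)_bc = c_c - c_b,  (d0 c)_ac = c_c - c_a
d0 = np.array([[-1, 1, 0],
               [ 0,-1, 1],
               [-1, 0, 1]])

w = np.array([1.0, 1.0, -1.0])           # a 1-form on the edges
loop = np.array([1.0, 1.0, -1.0])        # the cycle {ab} + {bc} - {ac}

phase = w @ loop                         # monodromy <w|gamma>
assert np.isclose(phase, 3.0)

# No potential c with d0 c = w exists: w is closed but not exact.
rank = np.linalg.matrix_rank(d0)
rank_aug = np.linalg.matrix_rank(np.column_stack([d0, w]))
assert rank_aug == rank + 1              # the system d0 c = w is inconsistent
```

Since there is no face, $d\omega=0$ trivially, yet the phase around the cycle is non-zero: curvature-free monodromy, exactly the mechanism of the topological view.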
In the topological view, even with the fiber $\mathbb{R}$ and with the restriction to $\mathcal{E}$ well defined (and analogously for $\mathcal{T}$ and $\mathcal{P}$), the fiber bundle is not trivial. The base space is not topologically trivial, and neither is the fiber bundle. This is what the valuation detects.
The topological view allows us to generalize results from the standard contextuality framework to the generalized one \cite{Sidiney_2021}.
\begin{theorem}
The $\mathbb{R}$-fiber bundle described by a theory is trivial in the topological view and so non-contextual if and only if any local section admits an extension to a global section.
\end{theorem}
This result follows from the equivalence between the extendability of local sections and triviality. Thus, the ontic representation is non-contextual for a given valuation if and only if the fiber bundle presents no phase, which is equivalent to every local section extending to a global one for any ontic variable $\lambda$ (or the pair $\lambda$ and $\lambda'$ for transformations). Therefore, the fiber bundle is trivial.
\subsection{Topology vs. Geometry}
\label{3.3}
We can codify what is going on with a diagram (here I will use $\mathcal{E}$, but the same can be said about the other two):
\begin{equation}
\label{diagram}
\begin{tikzcd}
\mathcal{E} \arrow{rr}{\xi_{\lambda'}} \arrow[hookrightarrow]{dr}{i} & & \left[0,1\right] \\
& \mathcal{S} \arrow{ur} &
\end{tikzcd}.
\end{equation}
The three elements, the system $\mathcal{E}$, the classical representation $\mathcal{S}$, and the target for valuation $[0,1]$, are all fixed, as is the system valuation map $\xi_{\lambda'}$. As contextuality is the failure of one of the maps to preserve the data of the system, either the inclusion or the valuation of the representation fails.
The first case, failure of the inclusion, is the usual interpretation of contextuality. This is the topological view, since it is the inclusion into a simpler system that causes the problem. Here the non-commutativity of the diagram is fundamental, and it cannot be understood other than as a break with our notion of reality, as in the non-realistic interpretations of quantum theory. A justification for this view is that a loop cannot always be written as the boundary of a region, since an inner region is not supposed to exist in the first place, at least not for every loop. Thus, the curvature cannot be defined. To avoid this problem, the extreme is to suppose $F=0$ everywhere, which one can interpret as the non-existence of a hidden object changing the classical behavior\footnote{Note that one can argue that both causes coexist.}. It is an intrinsic description, whose contextuality depends on the set of objects itself. Therefore, we have failures in reality itself, which characterizes the topological description as an anti-realistic point of view.
The second case, failure of the valuation, imposes the embedding (which usually exists). It is the structure of the representation that causes problems with the valuation, so the problem is an inappropriate rule used to evaluate the representation. All the properties of the original system can be captured once one uses a modified valuation. It is not just hidden variables; one needs new rules to extract knowledge. This is what happens in realistic interpretations of quantum theory, and this is the point of the geometrical view. The trivial fiber bundle is imposed, but a curvature in the connection creates the phases through holonomy, following the Ambrose–Singer theorem. The geometrical point of view therefore modifies the valuation function by a generator of non-classicality. It can be thought of as a curvature of the valuation on a set of classical objects: a modification of our classical laws by a hidden nature.
I want to point out again that both notions are equivalent; it is just a matter of how a deeper phenomenon, contextuality, is represented. As long as there is no way to differentiate between the representations by showing that one of them is not faithful to some level of reality yet to be explored, it makes little difference which is actually going on.
\section{Applications}
\label{4}
\subsection{Relation with Contextual Fraction}
\label{4.1}
Let's restrict the theory to a measurement scenario with a fixed state that is omitted. An ontic representation will have this form
\begin{equation}
p(E)=\int_{\Lambda}\mu(\lambda)\xi(E|\lambda)
\end{equation}
with $\mu$ a measure on the set of ontic variables $\Lambda$. Let's also suppose that the theory satisfies the conditions for applying the contextual fraction \cite{Abramsky_2017,barbosa2019continuousvariable}\footnote{Outcome-determinism holds.} and, for simplicity, that each measurement has a finite number of outcomes.
When the contextual fraction is defined, we can write the probability as a decomposition
\begin{equation}
p(E)=(\text{NCF})p_{NC}(E)+(\text{CF})p_{SC}(E),
\end{equation}
with the non-contextual fraction NCF and the contextual fraction CF satisfying $\text{NCF}+\text{CF}=1$, where $p_{NC}$ is the probability of a non-contextual model and $p_{SC}$ that of a strongly contextual one. It must satisfy the sum property of probability
\begin{equation}
\begin{split}
1&=\sum_{r}p(E_{r})\\
&=(\text{NCF})\sum_{r}p_{NC}(E_{r})+(\text{CF})\sum_{r}p_{SC}(E_{r})\\
&=\text{NCF}+\text{CF}.
\end{split}
\end{equation}
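A small numerical sanity check of this decomposition (the numbers below are mine, purely illustrative):

```python
import numpy as np

# Toy two-outcome behaviour written as a convex mixture of a
# noncontextual part and a strongly contextual part.
p_NC = np.array([0.5, 0.5])   # hypothetical noncontextual distribution
p_SC = np.array([0.9, 0.1])   # hypothetical strongly contextual distribution
NCF, CF = 0.7, 0.3            # fractions, summing to one

p = NCF * p_NC + CF * p_SC

# Both pieces are normalized, so the mixture is too, and the
# normalization of p splits exactly into NCF + CF = 1.
assert np.isclose(p.sum(), NCF * p_NC.sum() + CF * p_SC.sum())
assert np.isclose(p.sum(), 1.0)
```

The same bookkeeping is what identifies NCF and CF with the exact and non-exact parts of the valuation below.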
We can also do the decomposition of the valuation $\xi$
\begin{equation}
p(E)=\int_{\Lambda}\mu(\lambda)\bra{dc_{\lambda}}\ket{E}+\int_{\Lambda}\mu(\lambda)\bra{\omega_{\lambda}}\ket{E},
\end{equation}
that also satisfies the sum property of probability
\begin{equation}
1=\sum_{r}p(E_{r})=\sum_{r}\int_{\Lambda}\mu(\lambda)\bra{dc_{\lambda}}\ket{E_{r}}+\sum_{r}\int_{\Lambda}\mu(\lambda)\bra{\omega_{\lambda}}\ket{E_{r}}.
\end{equation}
Identifying the contextual and non-contextual parts, we get the fractions
\begin{equation}
\text{NCF}=\sum_{r}\int_{\Lambda}\mu(\lambda)\bra{dc_{\lambda}}\ket{E_{r}},
\end{equation}
\begin{equation}
\text{CF}=\sum_{r}\int_{\Lambda}\mu(\lambda)\bra{\omega_{\lambda}}\ket{E_{r}},
\end{equation}
and for the models
\begin{equation}
p_{NC}(E)=\frac{\int_{\Lambda}\mu(\lambda)\bra{dc_{\lambda}}\ket{E}}{\sum_{r}\int_{\Lambda}\mu(\lambda)\bra{dc_{\lambda}}\ket{E_{r}}},
\end{equation}
\begin{equation}
p_{SC}(E)=\frac{\int_{\Lambda}\mu(\lambda)\bra{\omega_{\lambda}}\ket{E}}{\sum_{r}\int_{\Lambda}\mu(\lambda)\bra{\omega_{\lambda}}\ket{E_{r}}}.
\end{equation}
In particular, the maximal violation of a generalized Bell inequality of the theory is given by $\text{CF}=\sum_{r}\int_{\Lambda}\mu(\lambda)\bra{\omega_{\lambda}}\ket{E_{r}}$, explicitly dependent on the contextual correction of the valuation.
\subsection{Interference and Quantum Theory}
\label{4.2}
Interference is a natural property of quantum theory when described by wave functions. Feynman once said that such phenomena are fundamental to making quantum theory what it is. Sorkin \cite{1994SORKIN,sorkin1995quantum} introduced a generalized notion of interference as a correction to the standard measure theory based on the Kolmogorov axioms, modifying the disjointness rule to
\begin{equation}
\begin{split}
p(A\sqcup B)=&p(A)+p(B)+I_{2}(A,B)\\
p(A\sqcup B\sqcup C)=&p(A)+p(B)+p(C)\\&-I_{2}(A,B)-I_{2}(B,C)-I_{2}(A,C)\\&+I_{3}(A,B,C)
\end{split}
\end{equation}
and so on. The important point here is that this formalism can be used to capture contextuality \cite{2006Craig,2008Dowker,2014Dowker}.
\begin{example}
Quantum theory presents only the correction $I_{2}$ non-trivial, which can be seen clearly in the path integral approach. For sharp effects $e$, $e'$, and a pure state $\rho$, we can write
\begin{equation}
\begin{split}
|\bra{e\vee e'}\ket{\rho}|^2&=\bra{e\vee e'}\ket{\rho}\bra{\rho}\ket{e\vee e'}\\
&=|\bra{e}\ket{\rho}|^2+|\bra{e'}\ket{\rho}|^2+I_{2}^{\rho}(e,e')
\end{split}
\end{equation}
with
\begin{equation}
I_{2}^{\rho}(e,e')=\bra{e}\ket{\rho}\bra{\rho}\ket{e'}+\bra{e'}\ket{\rho}\bra{\rho}\ket{e}
\end{equation}
the symmetric interference function. For sharp effects, only $I_{2}$ is non-trivial, which follows from Specker's Principle \cite{speckervideo,cabello2012speckers}. For unsharp effects, higher-order interference appears, but it is non-fundamental, since it can be rewritten in terms of the intersections ($\wedge$) and $I_{2}$.
\end{example}
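The algebra of this example can be checked with concrete amplitudes (the numbers are mine): treating $\ket{e\vee e'}$ as the amplitude sum $\ket{e}+\ket{e'}$, as in the path-integral picture, and writing $a=\bra{e}\ket{\rho}$, $b=\bra{e'}\ket{\rho}$, the interference term is $I_{2}=a\bar{b}+b\bar{a}=2\,\mathrm{Re}(a\bar{b})$:

```python
import numpy as np

# Arbitrary complex amplitudes a = <e|rho>, b = <e'|rho>.
a = 0.6 * np.exp(1j * 0.4)
b = 0.3 * np.exp(-1j * 1.1)

# The symmetric interference function of the example.
I2 = a * np.conj(b) + b * np.conj(a)

# |a + b|^2 = |a|^2 + |b|^2 + I2, and I2 is real by construction.
assert np.isclose(abs(a + b)**2, abs(a)**2 + abs(b)**2 + I2.real)
assert np.isclose(I2.imag, 0.0)
```

This is just the two-slit computation: the cross terms of the squared amplitude are exactly the $I_{2}$ correction to the Kolmogorov disjointness rule.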
Any correction to the valuation follows from the connection $\omega$. In the topological view, the cause is that the loop given by the effects $E$, $E'$ and $E\vee E'$ does not lie in a Boolean sub-algebra. One can see this by noting that $dc_{\lambda'}$ satisfies the Kolmogorov axioms, therefore
\begin{equation}
\begin{split}
p(E\vee E')=&\int d\mu \bra{\xi}\ket{E\vee E'}\\
=&\int d\mu \left(\bra{dc}\ket{E\vee E'}+\bra{\omega}\ket{E\vee E'}\right)\\
=&\int d\mu \left(\bra{dc}\ket{E}+\bra{dc}\ket{E'}+\bra{\omega}\ket{E\vee E'}\right)\\
=&\int d\mu\bra{\xi}\ket{E}+\int d\mu\bra{\xi}\ket{E'}\\
&+\int d\mu\left(\bra{\omega}\ket{E\vee E'}-\bra{\omega}\ket{E}-\bra{\omega}\ket{E'}\right)\\
=&p(E)+p(E')+\\
&\int d\mu\left(\bra{\omega}\ket{E\vee E'}-\bra{\omega}\ket{E}-\bra{\omega}\ket{E'}\right).
\end{split}
\end{equation}
So for disjoint effects,
\begin{equation}
I_{2}(E,E')=\int d\mu \left(\bra{\omega}\ket{E\vee E'}-\bra{\omega}\ket{E}-\bra{\omega}\ket{E'}\right)
\end{equation}
showing that the failure of $\omega$ to satisfy the Kolmogorov disjointness axiom is the cause of interference. Note that what is measured in the geometrical view is the failure of the parallelogram law. The same construction can be made for higher-order interference.
As shown in Ref. \cite{anastopoulos2002quantum}, in quantum theory the decoherence functional \cite{PhysRevD.46.1580} is determined by the geometric phases. But only its real part has physical reality, due to Hermiticity, a property that follows from strong positivity \cite{https://doi.org/10.25560/70797}. This real part is the interference, the main object of quantum measurement theory \cite{1994SORKIN,sorkin1995quantum,surya2008quantum,Gudder2009}. Therefore, the relationship between interference and geometric phases runs deep in quantum theory, and it can be used to detect non-classicality \cite{Asadian2015}.
\begin{example}
Non-commutativity is where the contextual behavior of effects hides in sharp quantum theory. This follows from the ability to define any effect by unitary transformations applied to a fixed effect. As incompatibility of sharp effects is necessary for contextuality, and is equivalent to non-commutativity, the non-exact part $\omega$ of the valuation depends on the non-trivial commutator. For two non-commuting unitaries $U$ and $U'$, the structure constant depends on $\hbar$ and defines a loop $U^{-1}U'^{-1}UU'$. Non-commutativity implies a geometric phase that can generate an interference correction, thus given by the non-exact term. The limit $\hbar\to 0$ cancels the non-classical behavior, which means we can write $\omega=\hbar\Tilde{\omega}$ to make its dependence on $\hbar$ explicit,
\begin{equation}
\xi_{\lambda'}=dc_{\lambda'}+\hbar\Tilde{\omega}_{\lambda'}.
\end{equation}
This also holds for states and transformations (that can show non-classicality, as the Aharonov–Bohm effect \cite{2010Popescu}), by using unitary transformations on them, and generalizing known results for Wigner function representation of quantum theory \cite{kocia2017,kocia2017again,kocia2018}.
\end{example}
\subsection{Signed Measures and Embedding}
\label{4.3}
The relationship between contextuality and negativity has already been explored in Ref. \cite{Spekkens2008}; embedding and contextuality in Refs. \cite{shahandeh2021,Schmid2021}; and Ref. \cite{schmid2020structure} unifies contextuality, embedding and negativity.
The violation of the third Kolmogorov axiom, which is investigated by the generalized measure theory with the inclusion of interference terms, leads to the necessity of signed measures. As shown in Ref. \cite{Spekkens2008}, the violation of the non-contextuality conditions, thus the existence of phases in the valuation maps, is equivalent to this necessity. This result gives a different notion of what the curvature means: it codifies the negative part of the valuation. It also explains why we cannot access negative probabilities, since they can be seen as a topological failure of the theory.
Related to this is the embedding of the model into a simplex theory, one whose states form a simplex, which is the definition of a classical theory. In GPTs, such an embedding is a set of linear maps, one for each set of objects, such that the valuation is preserved. A way to see that a theory is contextual is to impose the embedding and show that the sets of objects of the model are not embedded in the respective sets of objects of the simplex theory, thus violating the valuation map. Following diagram \ref{diagram}, there are two ways to see that a contextual GPT cannot be embedded in a classical theory because of contextuality. Notice that a classical theory is defined by the commutation of the diagram: it is its own natural hidden-variable theory. As a result,
\begin{itemize}
\item in the geometrical view, it cannot have a non-trivial curvature to correct the valuation, and any theory that has such a curvature cannot be represented by a classical theory;
\item in the topological view, a classical theory shows no topological failure, so monodromy is impossible, and a theory with monodromy cannot be represented by a classical theory.
\end{itemize}
Again, all these notions are just different ways to explain to our classical eyes what contextuality is. They are just different representations of this phenomenon.
\subsection{Necessary condition for contextuality}
\label{4.4}
For measurement contextuality satisfying outcome-determinism and non-disturbance, the measurements can be represented by a compatibility hypergraph, where measurements are vertices and contexts of mutually compatible measurements are the hyperedges connecting them\footnote{See Ref. \cite{Sidiney_2021} and references therein for the construction of these scenarios.}. The next example reviews a known necessary condition for contextuality.
\begin{example}[Voroby'ev result]
Voroby'ev's theorem \cite{Vorobyev_1962}, used in the sheaf approach, says that an empirical model can be contextual only if its hypergraph of compatibility $\left(\mathbf{M},\mathcal{C}\right)$ is not acyclic, where acyclic means it can be reduced to the empty set by the operations:
\begin{itemize}
\item if $m\in C$ belongs to only one hyperedge, then delete $m$;
\item if $C \subsetneq C'$, with $C, C' \in\mathcal{C}$, then delete $C$.
\end{itemize}
One can explore, as I did in Ref. \cite{Sidiney_2021}, the relationship between the topology of this hypergraph and contextual behavior.
We can rethink what these operations mean for the logical structure of effect algebras. The key missing information is what a context is: a $\sigma$-algebra of local events. With this in mind, it is easy to rewrite the acyclicity conditions:
\begin{itemize}
\item if a sub-algebra isn't in a context, we can simplify the $\sigma$-algebra by ignoring it, just coarse-graining the algebra of local events;
\item all the information about a proper sub-algebra is already in the bigger one, and we can ignore it.
\end{itemize}
Any relations with topology will appear at the level of events, not of contexts.
\end{example}
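The two reduction operations above can be sketched as a small routine (my own implementation of a Graham-style reduction; the context sets are toy examples, not scenarios from the references):

```python
def is_acyclic(contexts):
    """Reduce a compatibility cover by the two Voroby'ev operations:
    delete a measurement lying in only one context, and delete a
    context contained in another.  Returns True when the cover
    reduces to the empty set."""
    cs = [set(c) for c in contexts]
    changed = True
    while changed:
        changed = False
        # operation 1: delete a measurement that lies in a single context
        for c in cs:
            for m in list(c):
                if sum(m in other for other in cs) == 1:
                    c.discard(m)
                    changed = True
        # operation 2: delete a context contained in another context
        for c in list(cs):
            if any(c <= other for other in cs if other is not c):
                cs.remove(c)
                changed = True
        cs = [c for c in cs if c]      # drop emptied contexts
    return not cs

# A path of overlapping contexts reduces away (acyclic, hence
# noncontextual); the 3-cycle of contexts does not reduce.
assert is_acyclic([{'a', 'b'}, {'b', 'c'}])
assert not is_acyclic([{'a', 'b'}, {'b', 'c'}, {'c', 'a'}])
```

The irreducible 3-cycle is exactly a loop passing through different Boolean algebras, the structure needed for a non-trivial contextual phase.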
Measurement contextuality follows from a loop $\gamma$ in the effect algebra. As any loop in a Boolean algebra has a trivial contextual connection ($\omega=0$), only loops passing through different Boolean algebras can show any contextuality. Voroby'ev's result just identifies the fact that, without such loops, no contextual behavior appears.
To identify the unavoidable contextual behavior of a GPT, we must exhaust all possible ontic representations. In other words, all possible $1$-forms indexed by the ontic variables must present a cohomological obstruction in at least one of the functions. For now, this gives us no direct condition to test a model through its accessible probabilities; we need to say explicitly which ontic representation is being used. This can be used to generalize Voroby'ev's theorem.
\begin{theorem}[Generalized Voroby'ev]
An operational model is non-contextual if its first de Rham cohomology group is trivial when we impose $F=0$.
\end{theorem}
\begin{proof}
Once $F=0$ is imposed, we must use the topological view, so contextuality can only appear as a topological failure. If the cohomology group is trivial, then there are no differential forms capturing such failures, so any ontic representation satisfies the non-contextuality conditions.
\end{proof}
If the theory shows no topological failure detected by cohomology, then its dual curvature must be trivial. Acyclicity is an example of such an absence: it imposes the stronger condition that not even the possibility of forming such a loop exists. Thus, the Voroby'ev theorem is a particular case of the one presented here. Note, however, that a non-trivial $H^{1}$ does not characterize contextuality, since there are more differential forms than valuations.
An important point is that $H^{1}$ is defined by the effects, not by the measurements. Thus the intuition that the first homology group of the compatibility hypergraph must be non-trivial to show contextuality is false, even though in the topological view a failure $\phi=\bra{\xi_{\lambda'}}\ket{\gamma}\neq 0$ is also detected by $H_{1}$. For discussion and counterexamples, see Ref. \cite{Sidiney_2021}.
\subsection{Disturbance}
\label{4.5}
Disturbing models for measurement contextuality are not explored in most frameworks\footnote{A well-known exception is the Contextuality by Default approach \cite{Dzhafarov2015ContextualitybyDefaultAB}}, although handling them is necessary for experimental applications. One way to deal with them is to modify the scenario \cite{Amaral_2019}, in the spirit of William James's advice about contradiction: when you happen upon a contradiction, make a distinction. The idea is to make explicit in the scenario the contradiction that disturbance represents, by adding new maximal contexts for the disturbing intersections. I have already considered this in the fibration approach \cite{Sidiney_2021}, relating it to transition functions.
Non-disturbance is exactly the triviality of the transition maps on intersections of $\sigma$-algebras. The geometrical view, where the contextual phases follow from holonomy, has a direct way to deal with disturbance, since the holonomy can be encoded in an element of a commutative group, the group of automorphisms of $\mathbb{R}$. One can write it as \cite{Holonomyvideo}
\begin{equation}
Hol(\partial S)=\exp\left(\bra{\omega}\ket{\partial S}\right)
\end{equation}
then $\bra{\omega}\ket{\partial S}$ is an element of the Lie algebra of the Lie group of transformations. For discrete cases, one can embed the group in a Lie group and the same follows. As the group is commutative, one only needs to track each chart, here each $\sigma$-algebra,
\begin{equation}
Hol(\partial S)=\exp\left[\sum_{r}\bra{\omega}\left(b_{r}\ket{E_{r}}\right)\right].
\end{equation}
Each vector can lie in a different chart: even when the horizontal change given by the vectors is trivial, the vertical change along the fibers may not be, and the latter is given by the transition maps $t_{r,r'}$, with the indices $r$ labeling the charts. The holonomy transformation will be given by
\begin{equation}
Hol(\partial S)=\prod_{r}\exp\left[\bra{\omega}\left(b_{r}\ket{E_{r}}\right)\right]\prod_{r}t_{r,r-1},
\end{equation}
where the transition maps can be put in algebraic form, $t_{r,r-1}=\exp(\eta_{r,r-1})$, to rewrite the holonomy exponent as
\begin{equation}
\sum_{r}\bra{\omega}\left(b_{r}\ket{E_{r}}\right)+\sum_{r}\eta_{r,r-1}.
\end{equation}
We can thus define $\eta_{r,r-1}=\bra{\eta}\left(b_{r}\ket{E_{r}}\right)$, and rewrite the valuation function as
\begin{equation}
\xi=dc+\omega+\eta,
\end{equation}
where the disturbance appears on the same footing as the contextuality. This can explain the fact that disturbance consumes contextual behavior, as already noted for nonlocality \cite{Abramsky_2014,Blasiak_2021}.
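As a purely numerical illustration of the factorised holonomy above — the connection and transition values below are invented, not taken from any concrete model — the commutativity of the structure group lets the product of exponentials collapse into a single exponential of the summed terms:

```python
import math

# Toy loop visiting four charts (sigma-algebras), r = 0..3; all values invented.
omega = [0.10, -0.05, 0.20, 0.05]  # connection terms <omega|(b_r |E_r>)
eta = [0.00, 0.02, 0.00, -0.01]    # transition-map terms eta_{r,r-1} (disturbance)

# Commutative structure group: the chart-by-chart product of exponentials
# equals the exponential of the summed connection and transition terms.
hol_product = math.prod(math.exp(w) for w in omega) * math.prod(math.exp(h) for h in eta)
hol_factored = math.exp(sum(omega) + sum(eta))
assert math.isclose(hol_product, hol_factored)
```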
A measure of disturbance follows naturally from the relation between contextual fraction and contextual connection. The disturbance fraction DF will be
\begin{equation}
\text{DF}=\sum_{r}\int_{\Lambda}d\mu(\lambda)\bra{\eta_{\lambda}}\ket{E_{r}}
\end{equation}
and an induced maximally disturbing model is given by
\begin{equation}
p_{D}(E)=\frac{\int_{\Lambda}d\mu(\lambda)\bra{\eta_{\lambda}}\ket{E}}{\sum_{r}\int_{\Lambda}d\mu(\lambda)\bra{\eta_{\lambda}}\ket{E_{r}}}
\end{equation}
with $p=(\text{NCF})p_{NC}+(\text{CF})p_{SC}+(\text{DF})p_{D}$.
In this framework, the identification of transition maps as the same objects as the connection allows the extension of the original scenario, as proposed in the fibration approach in Ref. \cite{Sidiney_2021}, without the failures of imposing an unnatural group representation. We just need to define $r,r-1$ as a new chart index of an effect $E_{r,r-1}$, extending the covering with disturbing intersections, with $\eta_{r,r-1}=\bra{\eta}\ket{E_{r,r-1}}$. We can redefine the loop in the extended base, which is equivalent to defining the effective $\omega$ of a non-disturbing model.
In the topological view, the extension is better seen in the light of Voroby'ev. In an inverse process of Graham's decomposition, the intersection with disturbance in the valuation can be refined to a new $\sigma$-algebra with two copies of the original one and a valuation given by the trivial coupling of the disturbing ones. Again, the base is redefined with new charts, and the loop modified, and contextuality and disturbance are on the same footing, now with non-disturbance.
\section{Conclusion}
\label{5}
In this work, I looked for a different description of contextuality for GPTs by imposing a representation as a differential geometry problem.
I identified operational equivalences as loops in the vector space where the objects are represented, and contextuality as the non-preservation of such loops by the valuation functions. This identification allows the use of discrete differential geometry to deal with contextuality, and naturally extends to non-discrete cases. There are two different ways in this approach to understand contextual behavior: (1) the classical flat structure is imposed, which implies the existence of a correction to the valuation map; and (2) the standard valuation is imposed, thus forbidding a correction, so contextual behavior must have a fundamental cause: the non-triviality of the topology of the set of objects itself. Both notions are equivalent, and were used to explore other concepts of non-classicality: contextual fraction, interference, signed measures, and non-embeddability. I also used them to rewrite the Voroby'ev theorem in a topological and generalized version, and to propose a way to deal with disturbance in GPTs.
We can thus say that contextuality is topological \cite{Mansfield2020contextualityis}, but this statement can be interpreted in different ways. The interpretations are tied to a choice of mathematical representation, which in turn can be related to notions of realism. To mention some of them \cite{realismvideo}:
\begin{itemize}
\item Fixed Realism: one model, and the real is what is true in it;
\item Covariant Realism: equivalent models, and the real is how things change when one changes the model;
\item Local Realism: nonequivalent models, and the real is how to handle disagreement.
\end{itemize}
The first two examples are truly realistic views: what is important is reality itself and how we see it, sometimes through relational group transformations. In fact, we are more naturally prepared to deal with fixed realism. The third is an anti-realistic view\footnote{Anti-realism here means classical anti-realism. An interpretation can be realistic by defining the elements of reality and still be seen as anti-realistic when its contextual behavior is explained by a fundamentally non-classical construction.}, where the most important thing is the failure of our notion of reality. Contextuality as it is presented today looks like the third case. The topological view explains that non-trivial cohomology is a signal of disagreement, in this case contextual behavior; it puts us at the level of local realism. However, accepting that it imposes the existence of new hidden features allows one to return to realism, at least in the covariant form \cite{rovelli2021}: the geometrical view forces covariant realism, allowing the existence of globally dependent objects, such as the 2-form $F$. It is globally dependent because it codifies what, in the topological view, is a global property: the topology itself.
Some open questions for future exploration follow.
\begin{itemize}
\item It would be interesting in the future to explore the relationship between the curvature of the state space, the effects and transformations that naturally appear in quantum theory, and the holonomic formalism described here.
\item The classical limit would be interesting in this approach, to show mechanisms that erase the contextual connection.
\item A categorical formalization of these ideas would also be interesting to codify the diagram presented as a general concept.
\item The exploration of examples and the explicit construction of their connections and curvatures, or of their topological failures, remains to be done.
\item The identification of transition maps as representations of disturbance naturally leads us to ask about higher holonomies and their possible relationship with the contextuality of higher-level processes.
\end{itemize}
The paradox that contextuality seems to pose is not about the possible answers, but about how we unjustifiably assume the questions we want to ask, and then fail to understand the lack of answers we get. It is another lesson in humility that nature gives us: we cannot ask the questions we want, for we do not have the power to force it into an interrogation; we only receive the answers we are ready to receive, and perhaps only what it allows us to access. We can only interpret the answers we hoped for with those it gives us, and imagine what we think is the reality in which we live. Our confusion over quantum theory stems more from our arrogance than from our wisdom.
\begin{acknowledgments}
The author thanks the MathFoundQ – UNICAMP – Mathematical Foundations of Quantum Theory, a research group of the Instituto de Matemática, Estatística e Computação Científica, in special Prof. Dr. Marcelo Terra Cunha, for the conversations in the preparation of this manuscript.
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
\end{acknowledgments}
\nocite{*}
\raggedright
\section{Introduction}\label{sec:intro}
Although direct manifestations of new physics are searched for at the high energy frontier, new phenomena can also be looked for at low energies via precision measurements. The new effects might be observed in rare decays of positronium (see e.g. \cite{wsproceedings}).
Positronium (Ps), the positron-electron bound state,
is the lightest known atom, which at the
current level of experimental and theoretical precision is bound and self-annihilates
through the electromagnetic interaction \cite{adkins}.
This feature has made positronium an ideal system for
testing the accuracy of Quantum Electrodynamics (QED) calculations
for bound states, in particular for the triplet ($1^3S_1$)
state of $Ps$ (orthopositronium, \ensuremath{\mathrm{o\text{-}Ps}} ) \cite{karshenboim2004}.
Due to the odd-parity under
C-transformation \ensuremath{\mathrm{o\text{-}Ps}}\ decays
predominantly into three photons with a lifetime in vacuum of $\tau_{\ensuremath{\mathrm{o\text{-}Ps}} }=142.05$ ns \cite{adkins}-\cite{powder2003}.
The singlet ($1^1S_0$) state (parapositronium, \ensuremath{\mathrm{p\text{-}Ps}} ) decays predominantly into two photons with a lifetime in vacuum of $\tau_{\ensuremath{\mathrm{p\text{-}Ps}} }=125$ ps \cite{czarnecki,ppsdecayrate}.
The longer lifetime of \ensuremath{\mathrm{o\text{-}Ps}}\ (due to the phase-space and additional
$\alpha$ suppression factors) gives an enhancement factor $\simeq 10^3$
in the sensitivity to an admixture of potential
new interactions not accommodated in the Standard Model (SM) \cite{matveev}.
This paper focuses on a new search for \ensuremath{o-Ps \to \mathrm{invisible}}\ decays. By invisible we mean photonless decays, i.e. decays which are not accompanied by energy deposition in a hermetic calorimeter.
In the SM the decay into a
neutrino-antineutrino pair has a branching ratio of $6.6\times 10^{-18}$ \cite{karshenboim1999}. Evidence for invisible decays in the region $\simeq 10^{-7}$ would, therefore, unambiguously signal the presence of new physics. New models that are relevant to the \ensuremath{o-Ps \to \mathrm{invisible}}\ decay mode
predict the existence either of i) extra-dimensions \cite{tinyakov,extradim2}, or ii) fractionally charged particles \cite{holdom,david}, or iii) a new light vector gauge boson \cite{extradim}, or
iv) mirror particles, which could be candidates for dark matter \cite{ly}-\cite{okun}.
The first experiment to search for invisible decay channels of o-Ps was performed by Atoyan et al.~\cite{atoyan}. Their result on \ensuremath{Br(\invdecay)}\ $<5.3\times 10^{-4}$ ($90\%$ C.L.) excluded this channel as a possible explanation of the \ensuremath{\mathrm{o\text{-}Ps}}\ lifetime anomaly (for a recent review see e.g. \cite{dsi}). This search was repeated by Mitsui et al. who found a branching ratio \ensuremath{Br(\invdecay)}\ $< 2.8 \times 10^{-6}$ ($90\%$ C.L.) \cite{mits}. Furthermore, they could place a limit on the existence of milli-charged particles and on the photon mirror-photon mixing. This result was corrected in \cite{gninenko} by taking into account the suppression factor for the mixing due to the presence of matter.
The rest of the paper is organized as follows. The details of the experimental setup are reported in Section \ref{sec:experiment}. The expected background is presented in Section \ref{bkg}. In Sections \ref{sec:datareduction} and \ref{sec:results} the data analysis and the results are described. The interpretation and the conclusion are reported in Sections \ref{sec:interpretation} and \ref{sec:conclusion}.
\section{Experimental technique and setup}\label{sec:experiment}
The experimental signature of \ensuremath{o-Ps \to \mathrm{invisible}}\ decay is the apparent disappearance of the energy $2m_e$ expected in ordinary decays in a hermetic calorimeter surrounding the \ensuremath{\mathrm{o\text{-}Ps}}\ formation target. The readout trigger for the calorimeter is provided by tagging the stopping of a positron in the target with high efficiency.
For the design of the experimental setup aiming at a sensitivity \ensuremath{Br(\invdecay)}\ $\simeq 10^{-8}$, the following criteria were considered:
\renewcommand{\labelenumi}{(\alph{enumi})}
\begin{enumerate}
\item The probability not to detect all direct $e^+e^-$ annihilation photons was suppressed to $\le 10^{-9}$ using a thick hermetic crystal calorimeter with good energy resolution, minimizing the dead material. The probability to lose all photons in 3$\gamma$ decays is consequently even smaller.
\item The region around the target was designed with as little dead material as possible in order to reduce photon losses.
\item By the appropriate choice of a porous target material and its dimensions, a high fraction of \ensuremath{\mathrm{o\text{-}Ps}}\ was produced, resulting in high statistics and in a suppression of the background from the direct $e^+e^-$ annihilation and \ensuremath{\mathrm{p\text{-}Ps}}\ decays.
\item The trigger rate and the DAQ speed were maximized for statistics.
\item An efficient positron tagging system was designed to provide a clean trigger for positronium formation. A method was developed to suppress the background from the electrons emitted in the EC process (shake-off electrons).
\item An efficient identification of the 1.27~MeV photon emitted by the $^{22}$Na radioactive source was achieved, combined with a method to veto charged particles entering the trigger counter; thus, the backgrounds related to them could be reduced.
\end{enumerate}
The schematic illustration of the detector setup is shown in Figure \ref{detector} (the detailed description of the experimental technique and setup can be found in Refs. \cite{pol,thesis}).
\begin{figure}[h!]
\hspace{.0cm}\includegraphics[width=.6\textwidth]{calofront.eps}
\hspace{1.cm}\includegraphics[width=.4\textwidth]{caloside.eps}
\caption{\em Schematic illustration of the experimental setup: a) front view, b) top view.}
\label{detector}
\end{figure}
Positrons are produced from a $^{22}$Na source
with an activity of $\simeq 30$ kBq. The $^{22}$Na has a half life of 2.6 years and has a $Q$-value for the nuclear transition to $^{21}$Ne of $Q=2.842$~MeV. This is the maximum energy available for the particles involved in one of the three possible decay modes of $^{22}$Na:
\begin{enumerate}
\item Decay mode A (Br $\simeq 90.6\%$): the $\beta ^+$ decay with end-point energy 0.546~MeV. The positron is always followed by the prompt emission of a 1.27~MeV photon ($\tau \simeq$ 3.7 ps) from the $^{21}$Ne$^*$ de-excitation to the ground state.
\item Decay mode B (Br $\simeq 9.44\%$): the Electron Capture process (EC), where an orbital electron is captured by the nucleus, and only a 1.27~MeV photon and a neutrino are emitted from the source. In some rare cases (with a probability $\simeq 6 \times 10^{-3}$), an orbital electron is ejected, due to the sudden change in the nucleus charge (shake-off) \cite{shakeoff}.
\item Decay mode C (Br $\simeq 0.056\%$): there is no photon emission because the transition goes directly to the ground state. The end point energy of the positron is 1.83~MeV.
\end{enumerate}
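A quick back-of-envelope check of the rates implied by the quoted activity and branching ratios (the input numbers are taken from the text; the per-mode rates are my own arithmetic):

```python
activity = 30e3  # source activity in decays/s (~30 kBq, as quoted)
branching = {
    "A: beta+ with 1.27 MeV gamma": 0.906,
    "B: electron capture (EC)":     0.0944,
    "C: beta+ to ground state":     0.00056,
}
rates = {mode: activity * br for mode, br in branching.items()}  # decays/s

# Shake-off electrons accompany an EC decay with probability ~6e-3,
# so they occur at only a few tens of Hz.
shakeoff_rate = rates["B: electron capture (EC)"] * 6e-3
```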
Therefore, in most \ensuremath{\mathrm{o\text{-}Ps}}\ (\ensuremath{\mathrm{p\text{-}Ps}}) decays we expect 3(2) photons with a summed energy equal to $2m_e$ and one photon with an energy of 1.27~MeV (see Figure~\ref{opsregion}).
\begin{figure}[h!]
\hspace{.0cm}\includegraphics[width=\textwidth]{3decay.eps}
\vspace*{.0cm}
\caption{\em Schematic illustration of the positron tagging system and the \ensuremath{\mathrm{o\text{-}Ps}}\ formation target of the setup.}
\label{opsregion}
\end{figure}
Photons from the direct $e^+e^-$ annihilation in flight or from the positronium decays were detected in a hermetic, segmented BGO calorimeter (the ECAL). Two endcap counters called TBGO and FBGO (see Figures~\ref{detector}-\ref{opsregion}) surrounded the target on each side. At the analysis level the 1.27~MeV photon (``the triggering photon'') was required to be identified in the TBGO counter. The calorimeter was instrumented with charge and time readout.
The activity of the source was chosen to balance the trigger rate against the inefficiency of signal detection (mostly due to pileup events). The source was prepared by evaporating drops of a $^{22}$Na solution directly on a 100 $\mu$m thick and $2\times 8$~mm$^2$ wide plastic scintillator (see Figure~\ref{fiber}) fabricated by squeezing a 500 $\mu$m diameter scintillating fiber (Bicron BF-12). In this way, no dead material was introduced for a source holder. The S-shape of the fiber (see Figure~\ref{detector}) was selected to avoid background from back-to-back 511~keV annihilation photons.
\begin{figure}[h!]
\hspace{.0cm}\includegraphics[width=\textwidth]{fiber_squeezed.eps}
\vspace*{.0cm}
\caption{\em Schematic view of the scintillating fiber with the $^{22}$Na source on the squeezed part in the center.}
\label{fiber}
\end{figure}
The scintillating fiber was read out at both ends by two photomultipliers
(Philips XP2020) located outside the detector (see Figure~\ref{detector}).
The coincidence of the two PMT signals was used to
tag the passage of a positron through the fiber and acted as a start signal for the data readout system. The use of two PMTs in coincidence, instead of a single one \cite{mits}, lowered the ratio between fake and real positron triggers to $<1.9\times 10^{-10}$.
Opposite to the source, a $4\times 8\times 8$~mm$^3$ SiO$_2$ aerogel piece (type SP30, purchased from Matsushita Electric Works) was placed in contact with the squeezed fiber (see Figures~\ref{opsregion} and \ref{fbgofiber}).
Positrons stopping in the aerogel
target may form positronium (the formation probability is $45\%$ \cite{kalimoto}) which can migrate into the aerogel pores. The collisions with the walls of the pores did not appreciably quench the \ensuremath{\mathrm{o\text{-}Ps}}: when the aerogel was flushed with nitrogen, a fit to the distribution of the time difference between the start from the fiber and the stop from the calorimeter yielded $\tau_{\ensuremath{\mathrm{o\text{-}Ps}}}=129.1\pm1.8$ ns, which is very close to the lifetime in vacuum $\tau_{\ensuremath{\mathrm{o\text{-}Ps}}}\simeq 142.05\pm0.02$ ns \cite{PDG}.
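The small reduction of the fitted lifetime with respect to the vacuum value can be converted into an effective pick-off annihilation rate; the following back-of-envelope estimate uses only the two lifetimes quoted above:

```python
tau_meas = 129.1   # ns, fitted o-Ps lifetime in the N2-flushed aerogel
tau_vac = 142.05   # ns, o-Ps vacuum lifetime

# Extra annihilation rate induced by collisions with the pore walls,
# assuming the two rates simply add: 1/tau_meas = 1/tau_vac + lam_pickoff.
lam_pickoff = 1 / tau_meas - 1 / tau_vac   # in 1/ns
tau_pickoff = 1 / lam_pickoff              # ~1.4 microseconds, a 10x slower process
```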
The ECAL was composed of 100 BGO crystals that
surrounded the target region providing a nearly isotropic sphere of radius 200-220~mm (see Figure~\ref{detector}).
Each crystal had a hexagonal cross-section of
61~mm diameter and a length of 200~mm. The crystals of the innermost ring were wrapped in a 2 $\mu$m thick aluminized Mylar foil to minimize the photon energy absorption. The other crystals were wrapped with Teflon foils of 750 $\mu$m thickness. The crystals in the barrel and the FBGO were read out with ETL 9954 photomultiplier tubes.
The energy deposited in the fiber is an important parameter in order to reject the background from the electrons emitted in the EC process (shake-off electrons, see Section \ref{bkg}).
The mean number of photoelectrons detected in each XP2020 for a positron crossing the fiber was measured to be about 1.2, thus, a cut on the energy deposited in the fiber using these signals would not be meaningful. For this reason the FBGO was also used to measure the energy deposited in the fiber by the positron: the light emitted by the fiber could traverse the transparent aerogel and enter the FBGO through an aperture in the wrapping on its front face. This light was then guided by the FBGO to the PMT attached on the back face of FBGO (as illustrated in Figure~\ref{fbgofiber}). This method provided a mean number of photoelectrons equal to 13$\pm1$ for a positron traversing the fiber \cite{thesis}.
\begin{figure}[h!]
\hspace{.0cm}\includegraphics[width=\textwidth]{fbgofiber.eps}
\vspace*{.0cm}
\caption{\em Schematic illustration of the method to readout the energy deposition in the fiber.}
\label{fbgofiber}
\end{figure}
The TBGO endcap was used in the off-line analysis to identify the 1.27~MeV photon. The energy resolution of the TBGO was an essential parameter to reduce backgrounds related to the misidentification of the 1.27~MeV photon (see also Section \ref{bkg}). To provide a better energy resolution, this crystal was coupled to an ETL 9964 PMT with a more uniform light collection and a larger quantum efficiency than the ETL 9954. For the same reason this crystal was of a better quality than the others, and efforts were dedicated to maximizing the light collection by keeping the amount of dead material introduced by the crystal wrapping as small as possible. The best results were achieved with the crystal wrapped in 3M radiant mirror film (64 $\mu$m thickness): the resolution at 662~keV was measured to be about $15\%$ (FWHM). To select the triggering photon, an energy window $[1275\pm 67]$~keV was used in the analysis.

In addition, to veto charged particles (positrons and electrons) entering the TBGO, a 1~mm thick plastic scintillator (Bicron BC-400) was optically coupled to the TBGO front face (for more details see \cite{sauter}), i.e. the same PMT was used to detect the light signals from the plastic scintillator and the TBGO crystal (see Section \ref{bkg} for a discussion of backgrounds associated with charged particles entering the TBGO). The signals from the plastic scintillator and the BGO could be distinguished because of their different decay times ($\tau$ = 2.7~ns for the plastic scintillator and $\tau \simeq$ 300~ns for the BGO). For this purpose, split signals from the PMT were fed into two ADCs with different integration gates (called short gate and long gate).
A VME system interfaced to a PC was used for data
acquisition (the DAQ rate was about 1800 events/s). For every trigger, five CAEN 32 channels QDC v792 modules recorded the charge of the crystal signals while a CAEN TDC v775 recorded the time information. A trigger gate length of 2.9 $\mu$s was chosen to keep the probability of \ensuremath{\mathrm{o\text{-}Ps}}\ to decay after this time to $\lesssim 10^{-9}$.
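The choice of the gate length can be checked directly from the exponential decay law, using the vacuum lifetime quoted earlier:

```python
import math

tau_ops = 142.05  # ns, o-Ps vacuum lifetime
gate = 2900.0     # ns, trigger gate length (2.9 microseconds)

# Probability that an o-Ps atom survives beyond the gate: exp(-t/tau).
p_decay_after_gate = math.exp(-gate / tau_ops)  # of order 1e-9
```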
A temperature stabilized light-tight box containing the calorimeter was built in order to keep the temperature of the BGO crystals in the range of $\pm 0.5~^0$C. Water, whose temperature was controlled by two thermostats, circulated through copper tubes welded on two copper plates inside the box. The experimental hall was air conditioned to keep temperature variations within $\pm 1^0$C.
The high-voltage dividers of all PMTs were placed outside the box in order to avoid energy dissipation close to the crystals.
The BGO crystals were equipped with LEDs that could be pulsed periodically to monitor the response. Additionally, the gains of the PMTs were also monitored to check their stability.
The detector was calibrated and monitored internally
using the 511~keV annihilation photon
and the 1.27~MeV photons emitted by the $^{22}$Na source.
Variations of the energy scale during the run period were within $\lesssim 1\%$ and corrected on the basis of an internal calibration procedure \cite{thesis}.
\section{Background estimation and dedicated engineering run}\label{bkg}
In order to reach the required sensitivity, the background must be reduced and controlled at the level of $10^{-8}$.
To understand the different background sources and to cross-check the simulation, we performed an engineering run with a simplified version of our detector. During two months of data taking, the stability of the detector and its components was investigated. The comparison between the background expected based on Monte Carlo (MC) simulations and the data of the engineering run is summarized in Table~\ref{Bkgtbl}.
The ECAL thickness of 200~mm provides a probability of $<10^{-9}$ for two 511~keV photons to escape detection (see background 1 in Table~\ref{Bkgtbl}). For three photons decays, this probability is consequently even smaller.
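The bulk-attenuation part of this estimate is easy to reproduce. The attenuation length of 511~keV photons in BGO used below (about 1~cm) is an assumed textbook value, not a number from this work, and in a real detector the quoted limit is dominated by gaps and dead material rather than by bulk absorption:

```python
import math

att_len = 1.05  # cm, ASSUMED attenuation length of 511 keV photons in BGO
depth = 20.0    # cm, ECAL crystal length (200 mm)

p_escape_one = math.exp(-depth / att_len)  # one photon traversing undetected
p_escape_two = p_escape_one ** 2           # both annihilation photons escaping
```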
If one (or more) annihilation photon (e.g. backscattered from the target) overlaps with the 1.27~MeV in the TBGO, it can fall in the trigger energy window $[1275\pm 67]$~keV because of the finite energy resolution. This introduces a background if the remaining annihilation photon gets absorbed in the dead material or escapes detection. The separation between the upper bound of the trigger energy window and the sum of a triggering and a 511 keV photon was, thanks to the good energy resolution of the TBGO, 7$\sigma$ of the 1786 keV peak (1275+511 keV). Thus, the level of this background is $<5\times10^{-9}$ (background 2 in Table~\ref{Bkgtbl}).
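The quoted separation can be cross-checked with a simple resolution extrapolation; the $\sigma \propto \sqrt{E}$ scaling assumed below is my own (a purely stochastic term), so the result is only indicative:

```python
import math

fwhm_662 = 0.15 * 662         # keV, measured TBGO resolution at 662 keV (15% FWHM)
sigma_662 = fwhm_662 / 2.355  # Gaussian FWHM -> sigma

# Assume stochastic scaling sigma ~ sqrt(E) to extrapolate to the 1786 keV peak.
sigma_1786 = sigma_662 * math.sqrt(1786 / 662)

# Distance from the 1275+511 keV peak to the upper edge of the trigger window.
separation_sigma = (1786 - (1275 + 67)) / sigma_1786  # roughly 6-7 sigma
```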
In order to suppress the other sources of background related to the misidentification of the 1.27 MeV photon (backgrounds 3, 4 and 5 listed in Table~\ref{Bkgtbl}), one had to veto charged particles entering the TBGO.
Two processes are responsible for generating such triggers.
One is associated with decay mode B (EC) when the 1.27 MeV photon interacts in the fiber, faking a positron signal. If the scattered photon and the Compton electron reach the TBGO, the sum of the energy of the two particles can be misidentified as the triggering photon without any energy in the rest of the calorimeter.
Another background can occur in decay mode A if the 1.27 MeV photon is not detected: a trigger can be produced by a positron that multiple scatters (MS) in the fiber and deposits enough energy to trigger the experiment. If the positron reaches the TBGO with a kinetic energy of about 200-300 keV and the two 511 keV annihilation photons are completely absorbed in the TBGO, an energy close to 1.27 MeV will be reconstructed. This will appear as an invisible decay since no energy is expected in the rest of the detector.
If the 1.27 MeV photon is not present (decay mode C), a trigger can similarly be produced by the 1.83 MeV positron. To veto these backgrounds, the charged-particle veto of the TBGO described in the previous section was used, such that backgrounds 3, 4 and 5 had a probability of $<10^{-8}$ in the final setup.
The EC photon may accidentally coincide with a trigger from the fiber generated either by PMT noise or by other particles emitted from possible unstable isotopes formed during the target activation. This background could be reduced by selecting two XP2020s with very low noise ($<30$ counts/s) and requiring a coincidence between them. In addition, a radioactive source with a controlled high purity was chosen. From the data of the engineering run, this background was estimated to be $<1.9\times 10^{-10}$ (background 6 in Table~\ref{Bkgtbl}).
During decay mode B (EC) the fiber signal can be generated by shake-off electrons (background 7 in Table~\ref{Bkgtbl}). Since the probability of an electron ejection drops steeply with its emission kinetic energy (by more than 4 orders of magnitude over the first 100 keV), a cut on the energy deposited in the fiber by the triggering particles was used to suppress this background.
The engineering run allowed us to test these backgrounds, as shown in Table~\ref{Bkgtbl}. The expected fraction of zero-energy events is 10$\%$ smaller than what was measured. This difference can be explained by the contribution of shake-off electrons, which was not included in the simulation. Indeed, in the engineering run there was no information on the energy deposited in the fiber, so no cut on the energy of the particles passing through the fiber could be applied.
The last column of the table lists the expectations for the final setup. The total background is estimated to be at the level $10^{-8}$.
The threshold on the energy deposited in the fiber was set taking into account the uncertainty on the number of photoelectrons and the subtraction method, to make sure that no electron below 100 keV could trigger the fiber; based on simulations, it was chosen to be 140 keV.
\begin{table}[!h]
{\begin{tabular}{|c|c|c|c|c|} \hline
\multicolumn{2}{|c|}{BACKGROUND} & \multicolumn{2}{c}{ENGINEERING RUN} \vline& \multicolumn{1}{c}{FINAL SETUP}\vline\\
\multicolumn{2}{|c|}{SOURCE} & expected & measured & expected \\
\hline
\hline
1)&Hermiticity\hphantom{00} & & & \\
&Dead Material\hphantom{00} & $<10^{-9}$ &$<10^{-9}$ & $<10^{-9}$ \\
&Resolution\hphantom{00} & & & \\
\hline
2)&Absorption in trigger & & & \\
&Energy window\hphantom{00} & $1.3 \times 10^{-6}$ &$1.5 \times 10^{-6}$ & $<5\times 10^{-9}$\\
\hline
3)&MS positron & & & \\
& with $\mathrm E_{max}$=546keV\hphantom{00} & $2.1 \times 10^{-6}$ & &$<10^{-8}$ \\
\hline
4)&MS positron & & & \\
& with $\mathrm E_{max}$=1.83MeV\hphantom{00} & $1.4 \times 10^{-7}$ & &$<10^{-8}$ \\
\hline
5)&Compton EC photon\hphantom{0} & $1.3\times 10^{-6}$ & &$<10^{-8}$\\
\hline
6)&Accidental noise \hphantom{0} & & & \\
&and EC photon\hphantom{0} & $3.2\times 10^{-11}$ &$<1.9\times 10^{-10}$ &$1.9\times 10^{-10}$ \\
\hline
7)&Shake--off electrons & & & \\
&in EC process\hphantom{0} & $10^{-6}$-$10^{-7}$ & &$10^{-8}$ \\
\hline
\hline
\multicolumn{2}{|c|}{Total}& $4.8\times10^{-6}$ & $5.6\times10^{-6}$ & $10^{-8}$ \\
\hline
\end{tabular}}
\caption{\em Comparison between expected and measured background level for the different background sources in the engineering run and expected background level for the final setup (see text for details).}
\label{Bkgtbl}
\end{table}
\section{Data analysis}\label{sec:datareduction}
For the analysis we used a data
sample of $1.39 \times 10^{10}$ recorded fiber triggers collected over a four-month data-taking period.
For each event
the following variables were used and cuts were applied to suppress background:
\renewcommand{\labelenumi}{(\arabic{enumi})}
\begin{enumerate}
\item $ \Delta T_{\textrm{short}}$ is the time from the trigger start to the end marker of the dual timer unit that generates the short gate (measuring the light from the plastic scintillator coupled to the TBGO).
\item $T _{\textrm{long}}$ is the pedestal of one of the QDC channels integrated using the long gate as a start trigger.
\item $ \Delta T _{\textrm{XP}}$ is the time difference between the two XPs reading the fiber.
\item $ \Delta T _{\textrm{TBGO}}$ is the time difference between the XPs coincidence and the TBGO.
\item $ E_{\textrm{TBGOc}}$ is the energy deposited in the TBGO with a gate delayed by 15~ns to measure the energy deposited in the TBGO crystal without the contribution of the plastic scintillator.
\item $ E_{\textrm{TBGO}}$ is the energy deposited in the TBGO with the full gate (long gate).
\item $ E_{\textrm{FBGO}}$ is the energy deposition in the fiber measured with the FBGO.
\end{enumerate}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=.45\textwidth]{tsgate_cut2.eps}
\includegraphics[width=.45\textwidth]{elgate_cut2.eps}
\includegraphics[width=.45\textwidth]{txp_cut2.eps}
\includegraphics[width=.45\textwidth]{tbgo_cut2.eps}
\includegraphics[width=.45\textwidth]{scatterplot_cut2.eps}
\includegraphics[width=.45\textwidth]{nfbgo_ng_cal_cut2.eps}
\caption{\em The cuts applied to the variables. The numbers on the plots correspond to the variables defined in the text. Only the events in the colored regions are retained.}
\label{cuts}
\end{center}
\end{figure}
The distributions of these variables for a reduced data sample of $10^{6}$ triggers are shown in Figure \ref{cuts}. The selection cuts used are listed in Table \ref{tab:cuts2005}. The cuts were selected and tuned with the help of a dedicated run with statistics corresponding to about $5\%$ of the data.
These variables can be grouped in three categories, depending on their function: (a) the first two variables check the stability of the electronics and the duration of the gate widths; the selection has been tuned experimentally by examining the obtained spectra. (b) Variables 3) and 4) suppress accidental triggers faking positrons in the fiber; the cuts have been chosen to minimize the accidentals while maximizing the signal statistics.
(c) The variables 5) to 7) are the cuts that reduce triggers from backgrounds that mimic the appearance of a positron in the formation cavity region.
Furthermore, the upper limit of the energy window for the 1.27 MeV photon is very sensitive to the background from the ``absorption'' of one 511 keV $\gamma$ in the trigger energy window. Therefore, the long gate integration, which has a better resolution, is used to define this selection, while the short gate is used to reject events with some energy deposited in the scintillator. The $\pm 1\sigma$ cut was selected to bring the fraction of good triggers to the required level.
All the selection cuts, except 7), have been defined in terms of the sigma of the signal determined with a Gaussian fit to the data sample with 10$^6$ events. Table~\ref{tab:cuts2005} summarizes the values used and the evolution of the total number of events passing the cuts.
The lower cut for the energy deposited in the fiber has been chosen to reduce the probability of shake-off electrons to trigger the fiber to a level $<10^{-8}$ as discussed in Section \ref{bkg}.
The measured fraction of the \ensuremath{\mathrm{o\text{-}Ps}}\ produced in the aerogel is reduced by $20\%$ when the threshold on the energy deposited in the fiber is applied. This was expected, since the positrons that deposit the most energy are the ones stopping in the fiber.
\begin{table}[!h]
\begin{center}
{\begin{tabular}{|l|c|c|c|}
\hline
Variable & \multicolumn{2}{|c|}{Selection cut} & Fraction of events \\
name & $\#$ $\sigma$'s & value of 1$\sigma$ & remaining after cuts \\
\hline
1) $\Delta T_{short}$ & $\pm 4\sigma$ & 1.03 ns & 99.3$\%$\\
2) $ T_{long}$ & $\pm 4\sigma$ & 0.8 ADC counts & 98.9$\%$\\
3) $\Delta T_{XP}$ & $\pm 1\sigma$ & 1.87 ns & 75.5$\%$ \\
4) $\Delta T_{TBGO}$ & $\pm 1\sigma$ & 3.71 ns & 27.2$\%$\\
5) $E_{TBGO}$ & $\pm 1\sigma$ & 74 keV & 3.1$\%$\\
6) $E_{TBGOc}$& $\pm 1\sigma$ & 67 keV & 2.7$\%$\\
7) $E_{FBGO}$ & \multicolumn{2}{|c|}{140 keV $<E_{FBGO}<$ 400 keV} & 1.1$\%$\\ \hline
\end{tabular}}
\caption{\em Definition of cuts and the remaining fraction of events after the cut is applied.}
\label{tab:cuts2005}
\end{center}
\end{table}
\section{Results}\label{sec:results}
After imposing the above requirements, a final sample of 1.41$\times 10^{8}$ events was obtained.
For these events the energies of all the 100 BGO crystals, except the TBGO, were summed. Figure~\ref{EcalSum} shows the spectrum of the total energy ($E_{\textrm{tot}}$) deposited in the ECAL. The peak at 1022 keV corresponds to the positronium mass ($M_{Ps}\simeq 2m_e$). The inset shows that no event is observed in the zero energy region.
To define the upper energy cut on $E_{\textrm{tot}}$, below which an event is considered as photonless (invisible), a dedicated run of $10^7$ triggers was performed, triggering the experiment only on the 1.27 MeV photon with no requirement on the fiber \cite{atoyan}. The zero-energy peak contained about $(10.4\pm 1.2)\%$ of the events passing the selection cuts defined in Section \ref{sec:datareduction}. This value was corrected by a factor 0.8, determined using the MC simulations, to take into account the different detection efficiencies of the 1.27 MeV photon in the EC process and in the transition with the positron. The measured value was consistent with the expected fraction of electron capture events (decay mode B). The cut $E_{\textrm{tot}}<80$ keV, corresponding to the region containing 99$\%$ of the events in the EC peak at zero energy, was used to define the photonless events.
To determine the signal inefficiency, mostly due to pileup, $\simeq 10^6$ events have been collected using a random trigger formed by delaying the fiber coincidence by 16 $\mu$s (half of the mean time interval between two events). For the 80 keV threshold defined above, this gave an inefficiency of $(11.6 \pm0.5)\%$ that was consistent with the prediction of the simulation. This inefficiency was measured at the beginning of the data taking and, conservatively, was not corrected for the reduction of the source intensity during that period.
For the FBGO, the broadening of the pedestal due to the energy deposited in the fiber had to be considered. Events in which the energy of the two 511 keV annihilation photons was localized in two oppositely located crystals of the barrel were used to build the FBGO pedestal. This resulted in a correction of 1$\%$ to the efficiency of zero-signal detection.
Finally, the signal efficiency was estimated to be $\epsilon \simeq (87.4 \pm0.5)\%$.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=.8\textwidth]{sumECAL_paper.eps}
\caption{\em Spectrum of the sum of the total energy in the ECAL. The inset shows the magnified view of the low-energy region in logarithmic scale.}
\label{EcalSum}
\end{center}
\end{figure}
The mean fraction of \ensuremath{\mathrm{o\text{-}Ps}}\ in the data sample could be evaluated
from the decay time curve by fitting the observed distribution to the
function $A \cdot e^{(-t/\tau_{\ensuremath{\mathrm{o\text{-}Ps}}})} + B$ ($B$ is the accidental background)
starting from the time $t=100$ ns when \ensuremath{\mathrm{o\text{-}Ps}}\ was completely
thermalized in the target \cite{thesis}. After taking into account the estimated difference in efficiency for 2- and 3-gamma detection and the pick-off effect (measured from the lifetime spectra with the same method as described in Ref.\cite{rubbia}), the fraction of \ensuremath{\mathrm{o\text{-}Ps}}\ in the data sample was $(4.5\pm 0.2)\%$ \cite{thesis}.
Since no zero-energy events were observed in the signal region,
the upper limit for the branching ratio \cite{PDG} is:
\begin{equation}\label{eq:BR_opsinv}
Br(o-Ps \rightarrow invisible) \leq 2.3/ ( N_{o-Ps} \cdot \epsilon) = 4.2\times10^{-7}~(90\%~\textrm{C.L.})
\end{equation}
where $N_{\ensuremath{\mathrm{o\text{-}Ps}}}=(6.31\pm0.28) \times 10^6$ is the number of \ensuremath{\mathrm{o\text{-}Ps}}\ in the selected sample.
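The limit can be cross-checked numerically with the values quoted above; the short sketch below uses only those numbers (2.3 is the standard Poisson 90\% C.L. upper limit on the mean for zero observed events).

```python
# Cross-check of the 90% C.L. upper limit on Br(o-Ps -> invisible).
# With zero observed events and negligible expected background, the
# Poisson 90% C.L. upper limit on the signal count is 2.3 events.
N_UP_90CL = 2.3          # Poisson upper limit for 0 observed events
N_oPs = 6.31e6           # number of o-Ps in the selected sample
eff = 0.874              # signal efficiency (87.4%)

br_limit = N_UP_90CL / (N_oPs * eff)
print(f"Br(o-Ps -> invisible) <= {br_limit:.2e} (90% C.L.)")
```

The result, $4.17\times10^{-7}$, rounds to the quoted $4.2\times10^{-7}$.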
Figure~\ref{EcalSum} also shows an exponential extrapolation of the background into the zero-signal region.
The integral from 0 to 80 keV of the fitted function gives an estimate of the background contribution in this region: $N_{\textrm{bkg}}=0.34\pm 0.04$ expected events, where
the error was evaluated from the uncertainty of the
extrapolation procedure itself.
This experiment can also be used to obtain upper limits on $Br(\ensuremath{\mathrm{p\text{-}Ps}} \to invisible)$ and on $Br(e^+e^-\to invisible)$.
For this purpose the different probabilities of positrons stopping in the fiber and in the aerogel were calculated with the help of simulations.
Several papers reviewed in Ref.\cite{mits} conclude that the probability to form positronium (75$\%$ o-Ps and 25$\%$ p-Ps) per stopped positron is about 0.45 in the aerogel and between 0.2 and 0.4 in the fiber. In the aerogel, a fraction of 0.05 to 0.1 of the p-Ps atoms undergoes pick-off annihilation within about 2 ns before escaping out of the silica grains into the pores. For o-Ps this probability ranges from 0.28 to 0.45. In a plastic scintillator (the fiber), the corresponding pick-off probabilities are 0.99 to 1 for \ensuremath{\mathrm{o\text{-}Ps}}\ and 0.05 to 0.1 for p-Ps \cite{mits}. From the simulations, the fraction of positrons stopping in the fiber is 0.43 and in the aerogel 0.25; therefore, the smallest possible fraction of p-Ps decays (including pick-off) is $5.5\%$. Thus, an upper bound for the invisible decay of p-Ps can be calculated:
\begin{equation}
Br(p-Ps\to invisible) \leq 2.3/(N_{p-Ps} \cdot \epsilon) = 4.3 \times 10^{-7}~(90\%~\textrm{C.L.})
\end{equation}
where $N_{p-Ps}=0.055\times 1.41\times 10^{8}\simeq 6.14\times 10^6$ was used and the positrons that do not stop in the fiber or in the aerogel are assumed to annihilate directly.\\
The number of $e^+e^-$ decays can be calculated by subtracting the p-Ps and o-Ps decays from the total number of events; one obtains $N_{e^+e^-}\simeq 1.29\times 10^8$ and an upper limit for the branching ratio of:
\begin{equation}
Br(e^+e^-\to invisible) \leq 2.3/( N_{e^+e^-} \cdot \epsilon) = 2.1 \times 10^{-8}~(90\%~\textrm{C.L.})
\end{equation}
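The same zero-event counting argument, applied with the sample sizes quoted above, reproduces both limits; the sketch below also checks that the quoted $N_{e^+e^-}$ is consistent with the total sample minus the two positronium samples (the limits in the text are quoted rounded up).

```python
# Upper limits for p-Ps and direct e+e- annihilation, using the same
# zero-event Poisson limit of 2.3 events and the quoted sample sizes.
N_UP_90CL = 2.3
eff = 0.874              # signal efficiency
N_total = 1.41e8         # events passing all selection cuts
N_oPs = 6.31e6           # o-Ps sample
N_pPs = 6.14e6           # p-Ps sample (value quoted in the text)

N_ee = N_total - N_oPs - N_pPs      # remaining direct annihilations
br_pPs = N_UP_90CL / (N_pPs * eff)
br_ee = N_UP_90CL / (N_ee * eff)
print(f"N_ee ~ {N_ee:.2e}")          # ~1.29e+08
print(f"Br(p-Ps -> invisible) <= {br_pPs:.1e}")
print(f"Br(e+e- -> invisible) <= {br_ee:.1e}")
```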
\section{Interpretation}\label{sec:interpretation}
No event consistent with an invisible decay was found in the large sample of events.
Using Eq. (32) of Ref.\cite{extradim}, the bound for particles with a fraction $Q_x$ of the electron charge can be plotted as a function of their mass $m_X$ for $m_X<m_e$, as shown in Fig.~\ref{milli}a. Thus, the region of the charge-mass parameter space, which was not excluded directly by the SLAC results \cite{prinz} and the previous search for \ensuremath{o-Ps \to \mathrm{invisible}}\ \cite{mits}, is covered by this experiment (see Fig.~\ref{milli}b).
\begin{figure}[!h]
\begin{center}
\includegraphics[width=.45\textwidth,height=.4\textwidth]{milli.eps}
\includegraphics[width=.45\textwidth]{milli_JHEP2000.eps}
\caption{\em a) Mass--charge parameter space for the \ensuremath{\mathrm{o\text{-}Ps}}\ decay into milli-charged particles, excluded with this experiment, b) Comparison of our results (the dashed region on the plot) with other experimental (SLAC \cite{prinz} and previous \ensuremath{o-Ps \to \mathrm{invisible}}\ \cite{mits}) and astrophysical bounds (the plot was taken from \cite{david}).}
\label{milli}
\end{center}
\end{figure}
The strength of the photon mirror-photon mixing $\epsilon$ can be extracted from the limit on the \ensuremath{Br(\invdecay)}\ with \cite{gninenko}:
\begin{equation}\label{eq:mixing_mirror_cavity}
\epsilon= \frac{1}{2\pi f}\sqrt{\frac{Br(\ensuremath{o-Ps \to \mathrm{invisible}})\,\Gamma_\mathrm{SM}\,\Gamma_\mathrm{coll}}{2\left(1-Br(\ensuremath{o-Ps \to \mathrm{invisible}})\right)}}
\end{equation}
by substituting a conservative value of $\Gamma_\mathrm{coll}=5\times 10^{4}~\mathrm{s}^{-1}$ for the collision rate of the \ensuremath{\mathrm{o\text{-}Ps}}\ against the walls of the aerogel pores \cite{foot}. $\Gamma_\mathrm{SM}$ is the decay rate of \ensuremath{\mathrm{o\text{-}Ps}}\ in vacuum and $f = 8.7\times 10^4$ MHz is the contribution to the
ortho-para splitting from the one-photon annihilation diagram
involving \ensuremath{\mathrm{o\text{-}Ps}}\ \cite{glashow}. Using our result, one can estimate the mixing strength to be $\epsilon \leq 1.55\times 10^{-7}~(90\%~\textrm{C.L.})$. This is close to the BBN limit of $\epsilon < 3\times 10^{-8}$ \cite{cg} but does not cover the whole region of interest suggested by the DAMA and CRESST results \cite{foot1} and motivated by GUT predictions \cite{berezhiani} and by string theory \cite{abel} ($\epsilon > 10^{-9}$).
\section{Conclusion}\label{sec:conclusion}
In this paper the results of a new search for an invisible decay of \ensuremath{\mathrm{o\text{-}Ps}}\ were reported. Since no event was found in the energy window [0,80] keV, an upper limit for the branching ratio was set:
\begin{equation}
\ensuremath{Br(\invdecay)} \leq 4.2\times 10^{-7}~(90\%~\textrm{C.L.})
\end{equation}
improving the best existing bound \cite{mits} by a factor of 7.
Analyzed in the context of theoretical models, the negative result provides an upper limit on the photon mirror-photon mixing strength $\epsilon \leq 1.55\times 10^{-7}~(90\%~\textrm{C.L.})$ and sets an upper limit $Q_x \leq 3.4 \times 10^{-5}$ (for $m_X \leq m_e$) on the electron-charge fraction of milli-charged particles. Furthermore, upper limits on the branching ratios $Br(p-Ps\to invisible)\leq 4.3 \times 10^{-7}~(90\%~\textrm{C.L.})$ and $Br(e^+e^-\to invisible)\leq 2.1 \times 10^{-8}~(90\%~\textrm{C.L.})$ could be set.
{ \bf Acknowledgements}\\
We thank the Paul Scherrer Institute (Villigen, Switzerland) for lending us the BGO crystals. We gratefully acknowledge the help of
B. Eichler and J. Neuhausen for the $^{22}$Na source preparation. We wish to thank Z. Berezhiani, R. Eichler, S. Karshenboim, N.V. Krasnikov, V. Matveev and V.A. Rubakov for useful discussions. We are grateful to N.A. Golubev, L. Knecht, G. Natterer, J. P. Peigneux and M. Sauter for their essential help. We wish to thank M. Haguenauer for providing us with samples of scintillating fibers.
This work was supported by the Swiss National Science Foundation and by the French Ministry of Foreign Affairs through an ECONET program. We thank CERN for its hospitality.
\section{Introduction}
Soft robotics utilizes softness and smart design to realize robots that are inherently safe for human-robot interaction and compliant to unforeseen disturbances. Soft fluidic robots are among the most popular types due to their flexibility, wide range of applications, and (relative) simplicity of actuation. They have been used in a broad range of applications, from grippers\cite{mosadegh2014pneumatic} to autonomous systems\cite{wehner2016integrated}.
Soft fluidic robotics work by transferring fluidic power (flow and pressure) from one place to another. This power can be used for actuation\cite{mosadegh2014pneumatic}, sensing\cite{cheng2018soft}, and control\cite{wehner2016integrated,mahon2019soft}. Soft fluidic actuators (SFAs) consist of mechanically compliant chambers that can be deformed using fluidic power. The deformation of these chambers is defined by the stiffness gradient of the structure. This gradient allows for motions such as translation, bending, and twisting \cite{connolly2015mechanical}.
Researchers employ a broad range of methods to realize this stiffness gradient, including exploiting material properties and/or geometry. One approach is to add a second material, such as fibers\cite{connolly2015mechanical}, textile\cite{mosadegh2014pneumatic}, and/or paper\cite{martinez2012elastomeric}, in a dedicated geometry. Other approaches use a single material and exploit geometrical features, such as origami\cite{paez2016design} and kirigami\cite{dias2017kirigami}. A prominent example of exploiting internal geometry is porous structures, such as foam.
An advantage of porous structures is that they are flexible and inherently allow for fluidic transport. In addition, porous structures are compressible. This feature has been exploited to realize vacuum-driven actuators. Examples thereof include: a continuum soft robot\cite{robertson2017new}, high-force sensorized foam actuators\cite{murali2021sensorized}, and bending actuators\cite{yamada2019laminated}. Lastly, positive pressure actuation is also feasible with porous structures for both actuation\cite{mac2015poroelastic} and control\cite{futran2018leveraging}.
Existing methods for fabricating porous structures span a broad spectrum of approaches. A popular approach is to use commercial foams combined with semi-automatic/manual processing, such as laser-cutting, gluing, and coating with silicone or laminating (for airtightness)\cite{robertson2017new,murali2021sensorized,yamada2020actuatable}. Another method is to use casting with a sacrificial scaffold/material such as salt \cite{mac2015poroelastic,futran2018leveraging}, sugar\cite{bilent2019influence} or Polyvinyl Alcohol (PVA)\cite{bilent2020fabrication} that can be removed afterwards.
To realize the multifunctional nature of soft robotic systems (including SFAs based on porous structures), additive manufacturing (AM) is considered a promising fabrication method. Literature has already demonstrated that AM can realize (sensorized) SFAs\cite{morrow2016directly,yap2016high,walker2019zero,khondoker2019direct,georgopoulou2021sensorized}. However, the fabrication of porous structures for soft robots by AM methods is less developed.
Recently, a custom ink\cite{yirmibesoglu2021multi} for realizing local porosity was developed by adding a porogen. By regulating the ratio of ammonium bicarbonate (the porogen) and silicone rubber the porosity can be changed locally. This approach requires modification of the material itself.
Besides using a chemical reaction, a porous structure can also be realized by the fabrication of miniature patterns during the AM process. Liquid rope coiling (LRC) is an interesting candidate for the fabrication of these patterns. LRC is the coiling of a viscous fluid, such as honey, due to buckling when falling from an elevated height\cite{ribe2004coiling}.
LRC has already been exploited to fabricate porous structures\cite{lipton20163d,brun2017molten,tian2017silicone,zou2020spiderweb,emery2021applied}. Results in the literature indicate that LRC can realize stiffness changes of more than one order of magnitude\cite{lipton20163d,emery2021applied}. In addition, it has been shown to be feasible to print different patterns by proper selection of process parameters\cite{tian2017silicone,brun2017molten,emery2021applied}. Such an approach has been applied to AM processes that extrude molten materials which solidify in the process, such as glass\cite{brun2017molten} and thermoplastics\cite{zou2020spiderweb,emery2021applied}.
Although these initial results indicate that porous structures can be realized with LRC and AM, their application to soft robotics has, to the knowledge of the authors, not yet been explored. In addition, we explore how to use LRC to create multiple levels of porosity in a single structure to realize mechanical programming with porosity.
Within this work, a new printing approach that exploits LRC for the fabrication of porous structures using hyperelastic materials is proposed. This new approach is referred to as the Infill Foam (InFoam) approach. We demonstrate how LRC's coiling behavior can be incorporated into the fused deposition modeling (FDM) process to fabricate porous hyperelastic structures. To use hyperelastic materials within the FDM process a screw extruder (see Fig. \ref{fig:lrc_intro}(a)) was used\cite{saari2015additive,khondoker2019direct}.
Subsequently, the InFoam method is used for mechanical programming of stiffness, density, and energy dissipation. Then, the extension to graded porosity is demonstrated as a tool for programming both the deformation and the behavior of soft bending vacuum actuators.
\section{Materials and Methods}
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\textwidth]{fig_1.png}
\caption{(a) Screw extruder, (b) liquid rope coiling setup with important parameters, (c) coil geometry, (d) grading approach, (e) cubes with different porosity under the same load, and (f) porosity gradients for soft vacuum actuators.}
\label{fig:lrc_intro}
\end{figure*}
\subsection{From Coiling to Printing}
Liquid rope coiling is a phenomenon that can be observed when extruding a viscous liquid above the printing surface (see Fig. \ref{fig:lrc_intro}(b)). This situation also occurs in extrusion-based AM methods such as FDM. When there is a gap between the nozzle and the printing surface, the deposited material acts as a flexible (liquid) rope that buckles, which leads to a pattern of coils on the surface (see Fig. \ref{fig:lrc_intro}(b)). A wide variety of shapes can be achieved through LRC, such as a meander and a figure of eight\cite{chiu2006fall,yuk2018new}. Within this work, the focus is purely on the circular coil pattern. The repetitive nature of the circular coiling pattern makes it an interesting tool for 3D printing.
Literature has shown that the shape of the deposited coils (called the coiling pattern) depends on three parameters, namely: (I) the viscosity of the extruded material, (II) the height ($H$) of the nozzle above the platform, and (III) the ratio of extrusion speed ($V_e$) to printhead speed ($V_F$)\cite{chiu2006fall,ribe2012liquid}. All three parameters can be set by the user in the FDM process. The viscosity can (primarily) be set via the printing temperature, and the height by moving the nozzle, whereas the ratio of extrusion and printhead speed is set by the extrusion multiplier ($\alpha$). The variable $\alpha$ is the screw rotation (in radians) per millimeter moved by the nozzle. In essence, $\alpha$ scales the distance traveled by the printhead to compute the required screw rotation, which defines the ratio of speeds.
These parameters can be set to control the coiling pattern and to plan the desired shape. However, modeling the coiling pattern is not straightforward due to changes in material properties (such as viscosity) caused by the solidification of the thermoplastic during deposition. Within this work, an experimental method, similar to \cite{brun2017molten}, is used to characterize the coiling pattern.
Similar to conventional printing, the InFoam method needs the dimensions (width and height) of the coiling pattern for path planning. The coiling pattern consists of coils that stack on top of each other in line with the movement of the printhead (see Fig. \ref{fig:lrc_intro}(b)). The geometry of this coiling pattern can be used to compute the width and height. As observed in Fig. \ref{fig:lrc_intro}(c), the width $W$ of the coiling pattern is equal to the diameter of the coil, which is equal to
\begin{equation}
W = 2R_c+d
\label{eq:Rc}
\end{equation}
Within this equation, $R_c$ is the coil radius and $d$ the nozzle diameter.
To compute the coil height $h_c$, a second parameter of the coiling pattern needs to be quantified, namely the $N$-value. The $N$-value represents the coil density within a row of coils, which is defined by the number of coils within one (outer) coil diameter ($W$). This value can be computed after deposition by measuring the distance between two coils $\Delta x$ and using the relation:
\begin{equation}
N=\sqrt{\frac{W^2}{\Delta x^2-d^2}}
\label{eq:N1}
\end{equation}
The $N$-value can also be computed from the coiling behavior by examining the ratio of the extruded length to the linear distance of one outer coil diameter ($W$). This modelling approach provides a relation between the (user-set) parameters $\alpha$ and $H$ (which controls the value of $R_c$) and the $N$-value. After rewriting, the coil density ($N$-value) can be expressed as (see derivation in the Supplementary Information)
\begin{equation}
N = \frac{G\alpha W}{2 \pi R_c}
\label{eq:N}
\end{equation}
The variable $G$ (mm/rad) represents the length of material extruded per radian turned by the screw. The exact value of $G$ is defined by the material and the geometry of the screw and barrel\cite{crawford2020plastics}. Within this work, this value is acquired by fitting experimental data. Thus, according to Equation \ref{eq:N}, while moving the linear distance $W$ the screw rotates $\alpha W$ radians, leading to an extruded length of $G\alpha W$. This length is divided by the circumference of the coil ($2\pi R_c$) to obtain the coil density. The coil density can thus be adjusted by changing the extrusion multiplier ($\alpha$) under the assumption of a constant $R_c$. Adjustment of the extrusion multiplier for the desired density at a different $R_c$ requires scaling by the factor $W/(2\pi R_c)$. The scaled extrusion multiplier $\alpha W/(2 \pi R_c)$ is referred to as the coil density multiplier.
The coil density and radius can be used to compute the coil height $h_c$. The coil can be modelled as a rectangle with rounded corners (see Fig. \ref{fig:lrc_intro}(c)) at an angle of $\theta=\arctan\left(\frac{Nd}{W}\right)$. The height is then equal to
\begin{equation}
h_c = W \sin\left(\theta\right)+\left(1-\sin\left(\theta\right)\right)d
\label{eq:hc}
\end{equation}
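These relations can be collected into a small path-planning helper. The sketch below is illustrative only: the values of $G$, $R_c$, $d$, and $\alpha$ are placeholder numbers, not calibrated machine constants.

```python
import math

def coil_geometry(R_c, d, alpha, G):
    """Width, coil density, and height of a coiling pattern.

    R_c   : coil radius (mm), obtained from the height characterization
    d     : nozzle diameter (mm)
    alpha : extrusion multiplier (rad of screw rotation per mm of travel)
    G     : extruded length per radian of screw rotation (mm/rad)
    """
    W = 2 * R_c + d                           # pattern width
    N = G * alpha * W / (2 * math.pi * R_c)   # coils per width W
    theta = math.atan(N * d / W)              # stacking angle of the coils
    h_c = W * math.sin(theta) + (1 - math.sin(theta)) * d  # pattern height
    return W, N, h_c

# Placeholder numbers for illustration only:
W, N, h_c = coil_geometry(R_c=2.0, d=0.4, alpha=8.5, G=1.0)
print(f"W = {W:.2f} mm, N = {N:.2f}, h_c = {h_c:.2f} mm")
```

With these example inputs the helper returns a width of 4.4 mm, a coil density close to 3, and a layer height of roughly 1.4 mm.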
The preceding equations can be used for path planning and characterization of the coiling pattern. Specifically, Equations \ref{eq:Rc} and \ref{eq:hc} can be used to compute the width and height of the coiling pattern, whereas Equations \ref{eq:N1} and \ref{eq:N} can be used to measure and set the desired coil density ($N$-value), respectively. To use these equations, the value of $R_c$ needs to be characterized to relate it to $H$.
The geometry of the deposited coiling patterns will introduce porosity. This porosity can be used for mechanical programming of a structure\cite{gibson1982mechanics}. The possibility of grading the porosity allows for localized mechanical programming. Combining LRC and the normal plotting of the 3D printer enables such functionality by using the deposited coils as a scaffold for the normal plotting. A solution for printing non-porous zones is to move the nozzle down after extruding a layer of coils (i.e. the scaffold) and then locally plot additional material (see Fig. \ref{fig:lrc_intro}(d)). This simple approach enables porosity grading and ensures proper force transfer by welding layers of coils together.
\subsection{Mechanical Programming with Porosity}
The InFoam method can produce (graded) coiling patterns with different coiling radii and $N$-values by adjusting the extrusion multiplier ($\alpha$) and/or the height ($H$). The realized coiling patterns can be used to print porous structures. The porosity will have a major influence on the resulting mechanical properties\cite{gibson1982mechanics}. The repetitive nature of the coiling pattern was exploited to estimate the porosity $\phi$ (in percent), which is equal to (see Supplementary Information for the derivation)
\begin{equation}
\phi = 100\left(1-\frac{G \alpha \pi d^2}{ 4Wh_c}\right)
\label{eq:Gman}
\end{equation}
This equation estimates the porosity as one minus the ratio of the extruded volume (the numerator of the fraction) to the spanned volume (the denominator). The spanned volume is based on a rectangular bar of coil height $h_c$ and outer coil diameter $W$; the length cancels out (see Supplementary Information). The equation shows that the user can change the porosity nonlinearly through both $\alpha$ and $H$. The latter is implicit, as changing $H$ changes the coil radius $R_c$, which is related to $W$ and $h_c$.
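A minimal sketch of Equation \ref{eq:Gman} follows; the input values are placeholders chosen only to illustrate the high porosities reachable with coiling, not calibrated printer settings.

```python
import math

def porosity(alpha, d, W, h_c, G):
    """Porosity (in %) of a coiling pattern:
    extruded volume per unit travel = G*alpha * pi*d^2/4,
    spanned volume per unit travel  = W * h_c."""
    return 100.0 * (1.0 - G * alpha * math.pi * d**2 / (4.0 * W * h_c))

# Illustrative placeholder values (not calibrated machine constants):
phi = porosity(alpha=8.5, d=0.4, W=4.4, h_c=1.44, G=1.0)
print(f"porosity = {phi:.1f} %")
```

For these example numbers the estimate lands above 80\%, in the range reported for the high-porosity patterns later in the paper.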
The porosity can be used to program a wide variety of mechanical properties, including the elastic modulus and density. The elastic modulus decreases quadratically with the porosity\cite{gibson1982mechanics}. The large change in stiffness is visualized by the samples shown in Fig. \ref{fig:lrc_intro}(e), wherein the same load led to deformations ranging from minor ($\phi=46$\%) to large ($\phi=89$\%).
In general, the effect of porosity on mechanical properties can be captured in a power-law relation through the phenomenological model\cite{gibson1982mechanics}:
\begin{equation}
\frac{p_p}{p_s} = C\left(1-\phi/100\right)^n
\end{equation}
Here, $p_p$ and $p_s$ represent the mechanical property of interest of the porous structure and of the solid, respectively, whereas $C$ and $n$ are two fitting parameters. The relation between density and porosity for air (and other low-density fluids) is an example of such a relation (with $n=1$ and $C=1$)
\begin{equation}
\frac{\rho_p}{\rho_b}= 1-\phi/100
\label{eq:dens}
\end{equation}
The variables $\rho_p$ and $\rho_b$ represent the densities of the porous structure and the bulk material, respectively. This equation approximates the porosity well, as low-density fluids (such as air) have densities that are multiple orders of magnitude below that of the solid material.
\subsection{Experimental Setup \& Methodology}
\subsubsection{3D Printing Setup}
For printing with the InFoam method, a modified Creality Ender 5 Plus (Shenzhen Creality 3D Technology Co., Ltd., China) was used (see Figure S1 in the Supplementary Information). The printer was modified with a screw extruder (see schematic in Fig. \ref{fig:lrc_intro}(a)) to allow for the use of thermoplastic elastomers. To magnify the torque, a 22:63 gear ratio between the stepper motor and the screw was added; however, the extrusion multiplier ($\alpha$) always refers to the screw's rotation. G1657 styrene-ethylene-butylene-styrene (SEBS) pellets (Kraton Corporation, USA), with a Shore hardness of 47A and a bulk density of 900 kg/m$^3$, were used as the printing material. In addition, all experiments were conducted with a printing temperature of 230$^\circ$C and a nozzle diameter of $d=0.4$ mm. Lastly, the G-code for printing with the InFoam method was generated using a custom script in MATLAB (The Mathworks, Inc., USA). Printing with the InFoam method is shown in Supplementary Movie 1 in the Supplementary Information.
\subsubsection{Characterization of Coiling Pattern}
Using the InFoam method for printing complex structures requires characterization of the coiling pattern (i.e. the coil radius ($R_c$) and density ($N$-value)). The LRC effect indicates that the coil radius and the $N$-value are controlled by the viscosity, the extrusion multiplier ($\alpha$), and the height ($H$). Within this experiment, $H$ and $\alpha$ were varied at a fixed temperature; however, due to the shear-thinning behavior of the thermoplastic, the extrusion speed will also affect the viscosity (and therefore the coil radius).
For this characterization, several lines were printed. Afterwards, the printed lines were scanned with a USB camera, and the acquired images were analyzed using ImageJ\cite{schindelin2012fiji}. To relate pixel distances to real-world distances, dots on the printbed were used as a reference. Specifically, the width ($W$) and linear distance ($\Delta x$) were measured. These measurements were used to determine $R_c$ and the $N$-value using Equations \ref{eq:Rc} and \ref{eq:N1}, respectively.
\subsubsection{Mechanical Programming of Soft Cubes}
To investigate the effect of the coil radius ($R_c$) and density ($N$-value) on the porosity, stiffness, and damping, a set of 25x25x25 mm$^3$ cubes was printed. The cubes were printed (in triplicate) for different heights ($H$) and $N$-values. The corresponding extrusion multiplier ($\alpha$) and $R_c$ were derived from the results of the coiling characterization and used for process planning (i.e. Equations \ref{eq:Rc}, \ref{eq:N}, and \ref{eq:hc}). The cubes were divided into two groups of six (sharing the cube with $[H,N]=[6,3]$), with either a change in $N$-value$=[2.2,3,6.3,8.5,10.75,12.75]$ or $H=[2,4,6,8,10,15]$. All cubes were printed with the coils in the same direction for each layer.
After printing the cubes, the mass and height were measured using a scale and a caliper, respectively. The density was obtained by dividing the mass by the cube's volume, and the porosity (in percent) was then computed using Equation \ref{eq:dens}.
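This porosity computation can be sketched in Python. Equation \ref{eq:dens} is not reproduced in this excerpt, so the standard relation $\phi = (1 - \rho_{measured}/\rho_{bulk})\cdot 100$ is assumed here, with $\rho_{bulk} = 900$ kg/m$^3$ for the SEBS pellets stated in the Methods:

```python
def porosity_percent(mass_g, side_mm, rho_bulk=900.0):
    """Porosity (in %) of a printed cube from its mass and edge length.

    Assumes phi = (1 - rho_measured / rho_bulk) * 100, with
    rho_bulk = 900 kg/m^3 (SEBS bulk density from the Methods).
    """
    volume_m3 = (side_mm * 1e-3) ** 3            # cube volume in m^3
    rho_measured = (mass_g * 1e-3) / volume_m3   # measured density in kg/m^3
    return (1.0 - rho_measured / rho_bulk) * 100.0
```

For a 25 mm cube, a fully dense SEBS cube would weigh about 14.06 g; a cube of half that mass would come out at 50\% porosity.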
Subsequently, the stiffness and damping were investigated using a universal tensile tester (Instron 3343, Instron, USA). The testing was done in four phases. Firstly, the cubes were pre-loaded by compressing to 5 mm at a speed of 5 mm/min. Secondly, it returned to its original position at 60 mm/min. These two load steps were performed to have a similar pre-load for all cubes. In the third phase, the cubes were compressed for 21 mm at 10 mm/min. This loading step was slow to have a primarily elastic response. Lastly, the tensile tester returned to its original position with a speed of 60 mm/min. The speed was higher in this step to increase the viscous damping component. These loading steps were performed for all three axes (each in triplicate).
The stiffness was approximated by computing the segment modulus at different levels of strain. A quadratic curve was fitted at specific levels of strain using the twelve preceding points. The result of this fitting was used to compute the segment modulus.
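The segment-modulus estimate above can be sketched as follows. The interpretation that the fitted quadratic's derivative at the target strain is "the result of this fitting" is an assumption of this sketch:

```python
import numpy as np

def segment_modulus(strain, stress, target_strain, n_points=12):
    """Approximate the segment modulus at `target_strain`.

    Fits a quadratic to the `n_points` preceding (strain, stress)
    samples and returns the fitted curve's slope at `target_strain`
    (an assumed interpretation of the paper's procedure).
    """
    idx = int(np.searchsorted(strain, target_strain))
    lo = max(0, idx - n_points)
    c = np.polyfit(strain[lo:idx], stress[lo:idx], 2)  # c[0]x^2 + c[1]x + c[2]
    return 2.0 * c[0] * target_strain + c[1]           # d(stress)/d(strain)
```

On synthetic data with $\sigma = 3\varepsilon^2$, the slope recovered at $\varepsilon = 0.5$ is $3.0$, as expected.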
In addition, the dissipated energy by hysteresis was investigated to evaluate the damping. The dissipated energy was computed by taking the ratio of the stress-strain curve's inner shape (i.e. between the forward and backward motion) and the overall curve.
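A sketch of the hysteresis computation; the exact normalization ("the overall curve") is assumed here to be the area under the loading branch:

```python
import numpy as np

def _trap(y, x):
    """Composite trapezoidal integral of y over x."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def dissipated_energy_percent(strain_load, stress_load,
                              strain_unload, stress_unload):
    """Dissipated energy (in %) from a hysteresis loop.

    Loop area (loading energy minus recovered energy) divided by the
    loading energy; the normalization choice is an assumption.
    """
    e_load = _trap(np.asarray(stress_load), np.asarray(strain_load))
    e_unload = abs(_trap(np.asarray(stress_unload), np.asarray(strain_unload)))
    return (e_load - e_unload) / e_load * 100.0
```

For a linear loading branch and a quadratic unloading branch on $[0,1]$, the dissipated fraction is $(1/2 - 1/3)/(1/2) \approx 33.3\%$.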
\subsubsection{Graded Porosity for Soft Fluidic Actuators}
By moving from a single level of porosity to a graded porosity, the deformation can be programmed. To demonstrate this grading capability, a set of soft vacuum actuators with different porosity gradients were printed. These included the contraction pattern of Fig. \ref{fig:lrc_intro}(f), which used high (80+\%) and low porosity discs stacked on top of each other to realize contraction. The low porosity discs had a low porosity ring (<20\%) on the outside and high porosity inside to improve contraction but preserve airflow (the exceptions are the top and bottom parts, which are completely low porosity). In addition, bending actuators were printed (see Fig. \ref{fig:lrc_intro}(f)) with and without spacers (with a porosity of 83.8\% and spacers with 15\%). A zero porosity layer was added as a stiff substrate, such that when a vacuum is applied the structure would collapse in a bending motion. Similarly, a twisting actuator was printed with the spacers at a 45-degree angle to make a twisting motion the preferred deformation. Lastly, a screw pattern (see Fig. \ref{fig:lrc_intro}(f); high porosity of 81\% and spacers of 15\%) had two zero-porosity sections perpendicular to each other. By spacing these two sections in succession, actuation would lead to a screw-like motion. The outer dimensions were set the same as the bending actuators. After printing, all printed actuators were put in a 0.4 mm thick SEBS (heat-sealed) sleeve and were actuated using a vacuum.
The bending actuators with spacers were further investigated to characterize the effect of porosity on the output force and stroke. To this end, four soft bending actuators were printed with different levels of porosity ($\phi=[77.6,79.1,83.8,85.2]$). The nominal dimensions of these actuators were 15 mm wide, 75 mm long and 8 mm high. These actuators used a zero porosity strip as a (relatively) stiff substrate (1 mm thick). Subsequently, the actuators were put in a heat-sealed SEBS sleeve (0.4 mm thick). Both the sleeves and actuators were fabricated in triplicate.
The setup used for characterization of the output force and stroke evaluation is shown in Fig. \ref{fig:ba1}. The swing acts as a grasping object with a bearing to reduce friction. An LSB200 loadcell (Futek, USA) was integrated into the swing to measure the output force, which was connected to an Arduino Uno (Arduino AG, Italy) through an HX711 loadcell amplifier (Avia Semiconductor, China). The loadcell was tared right before starting a measurement to offset the mass of the actuator. For actuation, a vacuum source was operated using a manually triggered valve, and the vacuum pressure was set using a flow valve. The pressure was read by the Arduino Uno using an MXP5011DP pressure sensor (NXP Semiconductors, The Netherlands). The Arduino and a USB webcam (to record the actuator's deformation) were connected through USB to MATLAB. This experiment was performed in triplicate for each actuator for 30 seconds.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{fig_2.png}
\caption{Actuator characterization setup.}
\label{fig:ba1}
\end{figure}
For analysis of the data, a one-term Maxwell viscoelastic model\cite{Macosko} was fitted to the force data, defined as:
\begin{equation}
F(t) = (K_0 + K_1 \exp(-t/\tau_1))F_{max}
\end{equation}
where $t$ is the time in seconds after the maximum force $F_{max}$, $K_0$ and $K_1$ are fitting parameters, and $\tau_1$ is the relaxation time. This equation was used to predict the steady-state force ($F_{ss}=K_0F_{max}$) and the settling time. The settling time was defined as the time needed to be within 5\% of the final value (i.e. $0.05K_0 = K_1\exp(-t_s/\tau_1)$).
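Given fitted parameters, the steady-state force and settling time follow in closed form (solving $0.05K_0 = K_1\exp(-t_s/\tau_1)$ for $t_s$); a minimal sketch with illustrative variable names:

```python
import numpy as np

def maxwell_force(t, F_max, K0, K1, tau1):
    """F(t) = (K0 + K1*exp(-t/tau1)) * F_max, t measured from the peak."""
    return (K0 + K1 * np.exp(-t / tau1)) * F_max

def steady_state_force(F_max, K0):
    """F_ss = K0 * F_max."""
    return K0 * F_max

def settling_time(K0, K1, tau1):
    """Solve 0.05*K0 = K1*exp(-t_s/tau1) for t_s."""
    return tau1 * np.log(K1 / (0.05 * K0))
```

For example, with $K_0=0.8$, $K_1=0.2$, $\tau_1=2$ s and $F_{max}=10$: $F_{ss}=8$ and $t_s = 2\ln 5 \approx 3.22$ s, at which point the force is exactly $5\%$ of $K_0 F_{max}$ above $F_{ss}$.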
In addition, the bending actuators were characterized in terms of stroke using the same setup. For these experiments (repeated thrice), the swing was removed and the actuator pressurized. Afterwards, the captured images were used to compute the stroke. Specifically, MATLAB was used to fit a circle along the deformed structure using three points to compute the maximum curvature.
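The curvature from three points on the deformed actuator can be computed in closed form via the circumscribed-circle radius $R = abc/(4A)$; a sketch of this image-analysis step (the point picking itself is not shown):

```python
import numpy as np

def curvature_from_three_points(p1, p2, p3):
    """Curvature (1/R) of the circle through three 2D points.

    Uses R = a*b*c / (4*area) for the circumscribed circle, where
    a, b, c are the side lengths of the triangle p1-p2-p3.
    """
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    u, v = p2 - p1, p3 - p1
    area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])
    if area == 0.0:
        return 0.0  # collinear points: straight (uncurved) segment
    return (4.0 * area) / (a * b * c)
```

Three points on a unit circle give curvature 1; doubling the radius halves the curvature.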
\section{Results and Discussion}
\subsection{Characterization of Coiling Pattern}
The effect of the height ($H$) on the coil radius ($R_c$) is shown in Fig. \ref{fig:RC}(a). Values of $R_c$ are plotted for multiple values of the screw rotational speed ($\alpha V_f$ with $V_f$ the printhead speed) at the same height. It can be observed that $R_c$ is linearly related to the height ($H$) in the range of 2.5 mm to 15 mm, with almost one order of magnitude of increase. In contrast, the increase of $R_c$ seems to stall between 15 and 20 mm. This stalling implies that increasing $H$ will no longer increase $R_c$ (and could even decrease it)\cite{ribe2012liquid}. This behavior makes further increasing the height uninteresting for the InFoam method, as it does not yield larger coils. In addition, it further cools down the thermoplastic, which can lead to bonding issues, making it even less appealing.
\begin{figure*}[h]
\centering
\includegraphics[width=1.0\textwidth]{fig_3.png}
\caption{(a,b) Coil radius versus the height $H$ (a) and (b) rotational speed of screw ($\alpha V_F$), and (c) The coil density ($N$-value) versus coil density multiplier $(\alpha W(2\pi R_c))$ used to estimate $G$ (see Equation \ref{eq:N}).}
\label{fig:RC}
\end{figure*}
Besides $H$, the literature indicates that $R_c$ decreases when the viscosity decreases\cite{ribe2012liquid}. Due to the shear-thinning behavior of the thermoplastic, a decrease in viscosity (for constant temperature) should therefore coincide with an increased extrusion speed. The extrusion speed is equivalent (under the assumption of linearity) to an increased rotational speed of the screw $\alpha V_F$, which is plotted against $R_c$ in Fig. \ref{fig:RC}(b). It can be observed that there is a nonlinear and downwards trend with increasing extrusion speed. This behavior is in line with a power-law shear-thinning relationship. To compare the shear-thinning behavior, the viscosity of the SEBS material was first measured using a Rheocompass MC501 (Anton-Paar GmbH, Germany) with the data shown in Fig. S2 of the Supplementary Information. The data were then fitted to a power-law normalized by $R_c$ (using the exponent of the viscosity ($n-1=-0.09$), average coil radius $\hat{R}_c$ per $H$, and rotational speed of the screw $\alpha V_F$):
\begin{equation}
R_c = a (\alpha V_F)^{n-1} \hat{R}_c
\end{equation}
Fitting the value of $a$ for all heights and averaging leads to the curve seen in Fig. \ref{fig:RC}(b). The fits had errors of less than 10\%, implying a correlation between the change in coiling radius $R_c$ and the extrusion speed, which can explain the spread of coiling radii. This extrusion-speed-induced change provides an additional opportunity to fine-tune $R_c$ at a fixed $H$. However, increasing the extrusion speed will also affect other properties such as the diameter (by increasing die swell).
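The fit above reduces to a one-parameter least-squares problem. A sketch, assuming the per-height average coil radius $\hat{R}_c$ and the rheometry exponent $n-1=-0.09$ are given (variable names are illustrative):

```python
import numpy as np

def predict_Rc(alpha_VF, a, R_c_hat, n_minus_1=-0.09):
    """R_c = a * (alpha*V_F)**(n-1) * R_c_hat."""
    return a * np.asarray(alpha_VF) ** n_minus_1 * R_c_hat

def fit_prefactor(alpha_VF, R_c, R_c_hat, n_minus_1=-0.09):
    """Closed-form 1-D least squares for `a` in the power law above.

    The exponent n-1 = -0.09 is taken from the rheometry measurement;
    only the prefactor `a` is fitted.
    """
    x = (np.asarray(alpha_VF) ** n_minus_1) * R_c_hat
    return float(np.dot(x, np.asarray(R_c)) / np.dot(x, x))
```

On synthetic data generated with a known prefactor, the fit recovers it exactly.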
The $N$-value was plotted against the coil density multiplier $\alpha W/(2 \pi R_c)$ (see Equation \ref{eq:N}) in Fig. \ref{fig:RC}(c) (using the mean value of $R_c$ per $H$). A linear relation for the $N$-value can indeed be observed, which is confirmed by fitting a line with $G=0.17$ mm/mrad (Equation \ref{eq:N}). The accuracy of this fit implies that the proposed relationship is valid. In addition, when extruding in the air the value of $G$ was found to be in the range of 0.154-0.165 mm/mrad, which is in the same range as the fitted value (see Supplementary Information). Lastly, it can be observed that low heights $H$ do not achieve high $N$-values. This discrepancy is due to the coils stacking up too much. When this occurs the coiling pattern is replaced with an accumulation of material\cite{yuk2018new}. This accumulation phase is more chaotic and not used by the InFoam method.
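The stated linear relation can be used directly for process planning; a minimal sketch, assuming the units of $\alpha W$ are consistent with the fitted $G$ (screw rotation per unit travel):

```python
import numpy as np

def n_value(alpha, W, R_c, G=0.17):
    """Predicted coil density: N = G * (alpha * W) / (2*pi*R_c).

    G = 0.17 mm/mrad is the slope fitted in the text; consistency of
    units between alpha*W and G is assumed.
    """
    return G * alpha * W / (2.0 * np.pi * R_c)
```

For instance, a coil density multiplier $\alpha W/(2\pi R_c)$ of 10 predicts $N = 1.7$.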
Lastly, the coil height ($h_c$) estimation (Equation \ref{eq:hc}) was validated using the coiling patterns to be on average within 4\% (see Supplementary Information).
\subsection{Mechanical Programming of Soft Cubes}
The measured and computed porosity of the soft cubes versus the height ($H$) and the coil density ($N$-value) are shown in Fig. \ref{fig:poroest2602}(a). It can be observed that the porosity ranged from 45 to 89\%, with similar levels of porosity attained by changing $H$ and the $N$-value. Increases in the $N$-value and $H$ lead to a linear decrease and a nonlinear increase of the porosity, respectively. The predicted porosity of Equation \ref{eq:Gman} (with the estimate of $G$ from the previous section) correlated quite well. Even though the accuracy for the $N$-value is lower than for the coil radius ($R_c$) estimate, the mean absolute error was less than 4\% for both ($H$: 2.6\%, $N$-value: 3.9\%). This graph demonstrates that the $N$-value can be used to tune the level of porosity independently of $H$. This change in porosity is advantageous, as it means the InFoam method can adjust the $N$-value by setting the extrusion multiplier ($\alpha$) accordingly (see Equation \ref{eq:N}). By doing so, the InFoam method can use both large and small coiling radii to increase accuracy (smaller) or printing speed (larger), while still achieving the desired porosity.
\begin{figure*}[h]
\centering
\includegraphics[width=1.0\textwidth]{fig_4.png}
\caption{(a) Porosity versus height ($H$) and coil density ($N$-value), (b,c) stress-strain curve at 46\% and 89\% porosity, (d,e) segment modulus versus porosity for different axes at maximum strain (d) and different levels of strain (e), and (f) the energy dissipation versus porosity.}
\label{fig:poroest2602}
\end{figure*}
The results of the compression tests on the cubes were analyzed with the axes defined as: in line with the coils $x$, perpendicular to the coils but in-plane $y$, and layer-wise $z$. The stress-strain behavior of the soft cubes is shown in Fig. \ref{fig:poroest2602}(b) and (c) for the $x$- and $z$-axis (the $y$-axis is similar but not shown for clarity) for both low (45.8\%, (b)) and high (89\%, (c)) porosity, respectively. Their general shape is similar, with a clear hysteretic behavior but with much lower magnitudes for the higher porosity cube. It can also be noted that both have a linear region and a nonlinear region. The nonlinearity occurs due to the densification of the porous structure (i.e. when all air has been pushed out). This densification limits the effective deformation one can practically achieve\cite{mac2018compliant}, and is reached at an earlier stage for the higher porosity than for the lower porosity. The stress in $x$ and $z$ seems linear up to 0.2 and 0.5 strain for the high and low porosity cubes, respectively. The compression of three foam cubes with different porosity is shown in Supplementary Movie 2 in the Supplementary Information.
The segment modulus at maximum strain (0.84) is plotted in Fig. \ref{fig:poroest2602}(d) for all three axes. The raw data is shown for the $x$ and $z$ axis (see Fig. S3 of the Supplementary Information for the $y$). Error bars were not added for clarity but the standard errors were on average within 13\%. It can be observed that the trend in both $N$-value and $H$ are similar. This observation implies that the magnitude of the porosity is important and not the geometry of the underlying coiling pattern (i.e. the specific $R_c$ and $N$-value combination). In addition, it can be observed that all three axes have a power-law relation with the porosity, as expected. The fitted line shows that a (nearly) quadratic function describes all three directions reasonably well, which is in line with the decay of the elastic modulus seen in normal foams\cite{gibson1982mechanics}. In addition, different moduli can be observed for the three axes, which indicates that the layout of the coiling pattern will induce some anisotropy. A possible method to reduce the anisotropy between $x$ and $y$ is to change the coil trajectory by 90 degrees between layers. This approach leads to similar stress in the $x$ and $y$-direction (see Fig. S4 in the Supplementary Information).
The segment modulus was also evaluated at multiple levels of strain for the $x$-axis (see Fig. \ref{fig:poroest2602}(e)). It can be observed that before densification (low strain) the moduli are more similar. After densification, the slope at which the modulus changes increases significantly (from 11.25 to 19.09). Interestingly, the modulus decreases faster but still with a magnitude very similar to normal foams (i.e. around quadratically)\cite{gibson1982mechanics}. This change allows for mechanical programming to have parts that behave very differently under the same load. To compare the magnitudes to the bulk material, the modulus was approximated using the Shore hardness (47A) and the equation\cite{Dow} $E=0.486\exp{(0.0345\cdot 47)}=2.46$ MPa. Even at a strain of 0.4, the moduli of the 45.8\% and 89\% porosity cubes were 3.26 and 246 times smaller than this bulk value, respectively, indicating that the InFoam method can reduce the modulus by over two orders of magnitude. This value exceeds the 66-fold reduction already demonstrated in the literature when printing with LRC\cite{lipton20163d}.
Lastly, the dissipated energy (in percentage) versus porosity is plotted in Fig. \ref{fig:poroest2602}(f) with the standard deviation. It can be observed that the $x$-direction (with 31-40\%) dissipates the most energy. A power-law relationship with the porosity can be observed for the $x$ and $y$ directions, but the dissipation is nearly constant in $z$. We expect this is due to the buckling of coils in both $x$ and $y$, whereas the coils are compressed/bent in the $z$-direction. In addition, the effects of $H$ and the $N$-value are similar (just as for the stiffness), providing more evidence that not the coil geometry but the porosity is the important quantity.
\subsection{Graded Porosity for Soft Fluidic Actuators}
The printed structures with porosity gradients are shown in Fig. \ref{fig:porofluid2} and Supplementary Movie 3 (see Supplementary Information). It can be seen that a porosity gradient can produce the screw motion, which is shown without a sleeve (Fig. \ref{fig:porofluid2}(a)) and actuated (\ref{fig:porofluid2}(b)). In addition, the twisting (\ref{fig:porofluid2}(c)) and contraction (\ref{fig:porofluid2}(d)) are shown in both actuated and unactuated form. Lastly, Fig. \ref{fig:porofluid2}(e) and (f) show that adding a porosity gradient with spacers (\ref{fig:porofluid2}(f)) significantly increases the bending compared to the gradient-free case (\ref{fig:porofluid2}(e)). In general, the screw, twisting, and two bending actuators all have similar external geometry, but their different porosity gradients allow for a wide variety of deformation patterns. These examples show that the InFoam method can change the deformation behavior significantly. This grading capability during AM enables designers to design the deformation behavior purely by the porosity gradient.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{fig_5.png}
\caption{Deformation pattern for different porosity patterns.}
\label{fig:porofluid2}
\end{figure}
\begin{figure*}[!b]
\centering
\includegraphics[width=1.0\textwidth]{fig_6.png}
\caption{(a) Maximum and steady state force, (b) settling time, and (c) curvature.}
\label{fig:ba2}
\end{figure*}
The maximum and steady-state force of the bending actuators (of Fig. \ref{fig:porofluid2}(f)) are plotted in Fig. \ref{fig:ba2}(a) (and in Supplementary Movie 4 of the Supplementary Information). For clarity of the image, the error bars are omitted, but these were lower than 10\% for most samples (the data is provided with error bars in Fig. S5 in the Supplementary Information). It can be observed that an increase in pressure leads to an increase in both the steady-state and the maximum force. The maximum forces ranged from 21.1 to 67.2 grams, which is 3.8 to 10.2 times the actuators' own weight (5.5 and 6.6 grams, respectively). In addition, a decrease in force for higher porosity actuators is observed, which is expected to be due to their lower stiffness: the lower stiffness allows the actuator to deform such that the output force is reduced. The steady-state value of the force increases less with increasing pressure for lower porosity samples. This trend is expected to be linked to densification, which decreases the achievable deformation of lower porosity samples. Fitting a curve for the maximum and the steady-state force gives exponents that are similar to each other. However, there is a discrepancy between the maximum and steady-state values in terms of their exponent, which is expected to be due to the viscous component of the thermoplastic, which impacts the maximum force only. Interestingly, the power-law fits for both the maximum and steady-state force correlate well with the data, which indicates that porosity could be an interesting metric to predict the mechanical performance of an SFA.
The settling time, i.e. when the force is within 5\% of the steady-state value, is plotted in Fig. \ref{fig:ba2}(b). Although there is a significant spread, the trend does indicate that the settling time decreases with increasing porosity. This decrease is expected to be due to the combination of the decreased density and the increased energy dissipation associated with higher porosity, as seen in the soft cubes. Fits based on the average settling time at 20 and 75 kPa follow a power-law behavior with the porosity, further supporting that the porosity could be an interesting metric to predict the mechanical performance.
Lastly, the maximum curvature versus porosity for different pressures is shown in Fig. \ref{fig:ba2}(c). A nonlinear relation can be observed between the curvature and the porosity. This behavior is expected, as the higher porosity actuators deform more under the same load. Similar to the force metrics discussed previously, these curves can be described with a power-law relation. Fitting the data shows that the curvature increases with the porosity with an exponent around the cubic root of the porosity, again indicating that porosity could be a useful tool for programming deformation behavior. In addition, a nonlinear relation between the curvature and pressure can be observed. Independent of the porosity, the actuators seem to plateau at 60 and 75 kPa. This plateau is expected to be due to the densification of the porous structure, which means that exponentially more pressure would be required to increase the curvature. The higher densification strain of the higher porosity structures allows them to deform more before reaching this point.
\section{Conclusion}
Soft fluidic actuators are a popular choice in soft robotics due to their versatility, accessibility, and (relative) simplicity. To further expand their capabilities, fabrication methods for structures that simultaneously allow for the transfer of fluidic power and have a stiffness gradient are required. Our InFoam method, as proposed in this work, can provide this capability by allowing for the direct printing of porous structures. The printed porous structures allow for fluid transport and mechanical flexibility. In addition, the use of a hyperelastic thermoplastic further reduces the stiffness of these structures.
The InFoam method is capable of mechanical programming by a single level or gradient of porosity (by combining it with normal plotting). To achieve this capability it only requires the characterization of the coil radius (with respect to the height) and the length of extruded material per unit of movement ($G$) at the printing temperature. Subsequently, the InFoam method can use the developed equations to adjust the porosity of a structure by setting the height and/or extrusion speed while taking the coiling pattern's width and height into account.
With a single level of porosity, the InFoam method can program the density, stiffness, and energy dissipation. Large changes could be realized such as an 89\% lower density and scaling the modulus by a factor of 246. In addition, the energy dissipation in $x$ and $y$ could be programmed without affecting the $z$-direction. All of these properties were shown to correlate well through a power-law relationship with the porosity.
By printing a porosity gradient, the InFoam method allowed for a wide range of motion (including twisting, contraction, and bending). In addition, the grading was shown to be capable of programming the behavior of soft bending actuators. Specifically, it was shown that the porosity could be used to program both the output force (magnitude and settling time) and stroke.
Thus, the InFoam method can print graded porous structures that are mechanically programmed by porosity. The InFoam method allows this capability by solely adjusting the height of the nozzle and the ratio of extrusion and movement speed. An interesting observation of the printed porous structures is that the mechanical behavior of both the cubes and bending actuators could be correlated through a power-law relationship with the porosity. In addition, they showed similar trends in their changes in behavior. Further investigation of this correlation could enable tuning of the mechanical performance of soft (porous) actuators using a simple compression experiment.
In addition, the InFoam method's stiffness grading can be further investigated. A structure with high porosity has low stiffness and allows fluid transport, which can be interesting for applications such as scaffolds for more complex SFAs and support. Whereas graded porosity can be a tool for soft fluidic actuators/sensors with complex deformation patterns.
\bibliographystyle{IEEEtran}
Disentangling aims to factorize an entity, like a feature vector, into several interpretable components, so that
the behavior of a learning model can be better understood.
In recent years, many approaches have been proposed towards
tackling disentangling in deep neural networks
and have achieved promising results.
Most prior efforts, however,
have been focused on the disentanglement of
convolutional neural network~(CNN)
especially the auto-encoder architecture,
where disentangling takes place
during the stage of latent feature generation.
{For example, VAE~\citep{kingma2013autovae}
restrains the distribution of the latent features
to Gaussian and generates disentangled
representation; $\beta$-VAE~\citep{higgins2017betaVAE} further improves the disentangling by introducing $\beta$ to balance the
independence constraints and reconstruction accuracy.}
Despite the many prior efforts in CNN disentangling,
there are few endeavors
toward disentangling in the irregular structural domain,
where graph convolutional network~(GCN) models are applied.
Meanwhile, the inherent differences between grid-like data and structural data preclude applying CNN-based disentangling methods to GCN ones.
The works of~\citep{ma2019disentangled,liu2019independence},
as pioneering attempts,
focus on the node-level
neighbour partition
and ignore the latent multi-relations among nodes.
We introduce in this paper a novel GCN, that aims to explicitly conduct graph-level disentangling, based on which convolutional features are aggregated. Our approach, termed as \emph{factorizable graph convolutional network}~(FactorGCN),
takes as input a \emph{simple graph},
and decomposes it into several
factor graphs, each of which corresponds
to a disentangled and interpretable relation space,
as shown in Fig.~\ref{fig:factorlayer}.
Each such graph then undergoes a GCN, tailored to
aggregate features only from one disentangled latent space,
followed by a merging operation that concatenates all derived
features from disentangled spaces,
so as to produce the final block-wise interpretable features.
These steps constitute one layer of the proposed FactorGCN.
As the output graph with updated features shares the identical
topology with the input, nothing prevents us
from stacking a number of
layers to disentangle the input data at different levels,
yielding a hierarchical disentanglement
with various numbers of factor
graph at different levels.
FactorGCN, therefore,
potentially finds application in a
wide spectrum of scenarios.
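The disentangle-aggregate-merge layer described above can be sketched in NumPy. This is an illustrative toy version, not the paper's exact parameterization: the assumption here is that each edge is scored for $K$ factor graphs by a softmax over a learned function of its two endpoint features, after which a plain GCN-style aggregation runs on each factor graph and the results are concatenated:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def factor_layer(A, X, W_score, W_agg):
    """One FactorGCN-style layer (illustrative sketch).

    A:       (n, n) adjacency of the input simple graph
    W_score: (2d, K) scores each edge for K factor graphs (assumed form)
    W_agg:   list of K (d, h) per-factor aggregation weights
    Returns (n, K*h) block-wise features, one block per factor graph.
    """
    n = X.shape[0]
    K = W_score.shape[1]
    # Disentangling: score every node pair for each of the K factors,
    # then mask by the input adjacency to obtain K factor graphs.
    pair = np.concatenate([X[:, None, :].repeat(n, 1),
                           X[None, :, :].repeat(n, 0)], axis=-1)  # (n, n, 2d)
    factor_w = softmax(pair @ W_score, axis=-1) * A[..., None]    # (n, n, K)
    # Aggregation + merging: a GCN step per factor graph, then concat.
    outs = [np.tanh(factor_w[..., k] @ X @ W_agg[k]) for k in range(K)]
    return np.concatenate(outs, axis=-1)
```

Stacking several such layers, as the text suggests, would yield hierarchical disentanglement with a different number of factor graphs per level.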
In many real-world graphs,
multiple heterogeneous relations
between nodes are mixed and
collapsed to one single edge.
In the case of social networks,
two people may be \emph{friends}, \emph{colleagues},
and \emph{living in the same city} simultaneously,
but linked via one single edge that omits such
interconnections;
in the co-purchasing scenario~\citep{co-purchase},
products are bought together for
different reasons like \emph{promotion},
and \emph{functional complementary},
but are often ignored in the graph construction.
FactorGCN would, in these cases,
deliver a disentangled and interpretable
solution towards explaining the underlying
rationale, and provide discriminant learned features
for the target task.
\iffalse
multiple relations may exist between two person,
like friend, colleague, neighbor, and family.
However, these relations may mixed up when constructing the graph
by only considering whether them are connected.
Another example is the co-purchase graph~\citep{co-purchase},
where products can be purchased together with
different reasons, like promotion, advertisement,
and functional complementary.
These reasons cannot be detected when constructing
the graph by consider whether they are bought
together frequently.
\fi
\iffalse
Many methods have been proposed to solve the problem of disentanglement.
Most of these methods are based on the architecture of auto-encoder,
where the disentanglement is happened during the generation of the
latent features. For example,
$\beta-$VAE~\citep{higgins2017betaVAE} improves the disentanglement
performance of the original VAE by introducing a weight in the
distribution constrain item. Using a $\bata$ more than one will
enforce the auto-encoder to learn a more efficient latent representation
and disentangle in an unsupervised manner.
Although there are many pioneers focus on the disentanglement of CNN models,
there are few works that discover the disentangling in the structural domain,
where GCN models are applied.
The inherent differences between grid-like data and structural data make
the disentanglement on GCN difficult.
One of the challenges is that the meaning of the disentangled features
of a GCN model are hard to explain. In the field of image or audio,
different dimensions of the disentangled features can represent a special aspect
of the input data, like size of object or tone of the audio. However, in structural
domain, it is hard to say what is one dimension of the node's feature represents.
The exist methods~\citep{ma2019disentangled,liu2019independence} all focus on grouping
the neighbours of the node and fail to capture the latent multi-relations among entities.
In this paper, we propose factorized graph convolutional networks~(FactorGCN), a general
framework that disentangles the structural input into several factor graphs.
Each of these factor graphs represents a relations across all the entities and generates
the features of entities independently. Fig.~\ref{fig:factorlayer} shows an example
one layer in the FactorGCN model.
\fi
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{Figs/pipeline.pdf}
\caption{Illustration of one layer in the proposed FactorGCN.
It contains three steps:
\emph{Disentangling}, \emph{Aggregation}, and \emph{Merging}.
In the disentangling step, the input graph is decomposed into several factor graphs, each of which represents a latent relation among nodes. In the aggregation step, GCNs are applied separately to the derived factor graphs
and produce the latent features.
In the merging step, features from all latent graphs
are concatenated to form the final features,
which are block-wise interpretable.}
\label{fig:factorlayer}
\end{figure}
Specifically, the contributions of FactorGCN are summarized as follows.
\begin{itemize}
\item {\bf Graph-level Disentangling}.
FactorGCN conducts disentangling and produces block-wise
interpretable node features by analyzing the whole graph
all at once, during which process the global-level topological semantics,
such as the higher-order relations between edges and nodes,
is explicitly accounted for. The disentangled factor graphs
reveal latent-relation specific interconnections between
the entities of interests, and yield interpretable features
that benefit the downstream tasks.
This scheme therefore contrasts to the prior approaches of~\citep{ma2019disentangled,liu2019independence},
where the {disentanglement takes place only within a local neighborhood, without accounting for global contexts}.
\item {\bf Multi-relation Disentangling}.
Unlike prior methods that decode only
a single attribute for a neighboring node,
FactorGCN enables multi-relation
disentangling, meaning that
{the center node may aggregate information
from a neighbour under multiple types of relations}.
This mechanism is crucial since real-world
data may contain various relations among the
same pair of entities.
In the case of a social network graph, for example,
FactorGCN would produce disentangled results
allowing for two users to be both \emph{friends}
and \emph{living in the same city}; such
multi-relation disentangling is not supported by prior
GCN methods.
\item {\bf Quantitative Evaluation Metric}.
Existing quantitative evaluation methods~\citep{eastwood2018framework,burgess2018understanding}
in the grid domain rely on
generative models, like auto-encoder~\citep{kim2018disentangling}
or GAN~\citep{chen2016infogan}.
Yet in the irregular domain,
unfortunately, state-of-the-art graph generative models
are only applicable for generating small graphs or
larger ones without features.
{Moreover, these models comprise a sequential generation step,
making it infeasible to be integrated into the graph disentangling frameworks.}
To this end, we propose a graph edit-distance based metric,
which bypasses the generation step
and estimates the similarity between the factor graphs and the ground truth.
\end{itemize}
We conducted experiments on six datasets
in various domains,
and demonstrate that the proposed FactorGCN
yields state-of-the-art performances
for both disentanglement and
downstream tasks.
This indicates that,
even putting aside its disentangling capability,
FactorGCN may well serve as a general GCN framework.
Specifically, on the ZINC dataset~\citep{jin2018junctionZINC},
FactorGCN outperforms other methods by a large margin,
and, without {the bond information of the edges},
FactorGCN achieves a performance on par with the state-of-the-art
method that explicitly {utilizes}
edge-type information.
\section{Related Work}
\textbf{Disentangled representation learning}.
Learning disentangled representations has recently
emerged as a significant task towards
interpretable AI~\citep{yang2020ECCV,Song_2020_CVPR}.
Unlike earlier attempts that rely on
handcrafted disentangled representations
or variables~\citep{WangECCV14,WangTPAMI16},
most of the recent works
in disentangled representation learning are based on the architecture
of auto-encoder~\citep{higgins2017betaVAE,feng2018dual,bouchacourt2018multi,burgess2018understanding,wang2017tag,kim2018disentangling}
or generative
model~\citep{chen2016infogan,zhao2017learning,siddharth2017learning}.
One mainstream auto-encoder approach is to constrain
the latent feature generated from the encoder to make it independent
in each dimension. For example, VAE~\citep{kingma2013autovae}
constrains the distribution of the latent features to Gaussian;
$\beta$-VAE~\citep{higgins2017betaVAE}
enlarges the weight of the KL divergence term to
balance the independence constraints and reconstruction accuracy;
\citep{schmidhuber1992learning} disentangles the latent features by
ensuring that each block of latent features cannot be predicted
from the rest;
DSD~\citep{feng2018dual} swaps some of the latent features
twice to achieve semi-supervised disentanglement.
For the generative model, extra information is introduced during the
generation. For example, InfoGAN~\citep{chen2016infogan} adds the class code to
the model and maximizes the mutual information between the
generated data and the class code.
\textbf{Graph convolutional network}.
Graph convolutional network~(GCN) has shown its potential in the
non-grid domain~\citep{xu2018powerful,Qiu2020ECCV,li2018combinatorial,yang2020distilling,monti2017geometricMoNet,ijcai_spagan}, achieving promising results on various types of
structural data, like citation graph~\citep{velickovic2018graphgat},
social graph~\citep{kipf2017semi},
and relational graph~\citep{schlichtkrull2018modeling}.
Besides designing GCN to better extract information from
non-grid data, there are also a couple of works that explore the
disentangled GCNs~\citep{ma2019learning,liu2019independence}.
DisenGCN~\citep{ma2019disentangled} adopts
a neighbourhood routing mechanism to divide the neighbours of a node
into several mutually exclusive parts.
IPGDN~\citep{liu2019independence} improves DisenGCN
by making the different parts of the embedded feature
independent. Despite the promising results of these prior works, several
problems remain:
the disentanglement is
carried out at the node level, without considering the information of
the whole graph,
and there are no quantitative metrics to evaluate
the performance of disentanglement.
\section{Method}
In this section, we give a detailed description
of the architecture of FactorGCN, whose basic component
is the disentangle layer, as shown in Fig.~\ref{fig:factorlayer}.
\subsection{{Disentangling Step}}
The goal of this step is to factorize the input graph into several factor graphs.
To this end, we treat the edges equally across the whole graph.
The mechanism we adopt to
generate these factorized coefficients is
similar to that of the graph attention network~\citep{velickovic2018graphgat}.
We denote the input of the disentangle layer as
$\mathbf{h} = \{h_0, h_1, ..., h_n\}, h_i \in \mathcal{R}^F$
and
$\mathbf{e} = \{e_0, e_1, ..., e_m\}, e_k = (h_i, h_j)$.
$\mathbf{h}$ denotes the set of nodes with feature of $F$ dimension, and
$\mathbf{e}$ denotes the set of edges.
The input nodes are first transformed to a new space by multiplying the
node features with a linear transformation matrix
$\mathbf{W} \in \mathcal{R}^{F^\prime \times F}$.
This is a standard operation in most GCN models, which increases the capacity of the model.
The transformed features are then used to
generate the factor coefficients as follows
\begin{equation}
E_{ije} = 1 / \left(1 + e^{-\Psi_e (h^\prime_{i}, h^\prime_{j}) } \right); h^\prime=\mathbf{W} h,
\label{eq:1}
\end{equation}
where $\Psi_{e}$ is the function that takes the features of
node $i$ and node $j$ as input and computes the attention score of the edge
for factor graph $e$,
and takes the form of a one-layer MLP
in our implementation;
$E_{ije}$ can then be obtained by normalizing the attention score
to $[0, 1]$, representing the coefficient of the edge from node $i$ to node $j$
in the factor graph $e$;
{$h^\prime$ is the transformed node feature, shared
across all functions $\Psi_{*}$.} Different from most previous
{forms of attention-based GCNs} that normalize
the attention coefficients among all the neighbours of nodes,
our proposed model generates these coefficients directly
{as the factor graph}.
Once all the coefficients are computed,
a factor graph $e$ can be represented by its own $E_e$,
which will be used for the next aggregation step.
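As a concrete illustration, the coefficient generation of Eq.~\ref{eq:1} can be sketched in NumPy as follows. This is a minimal sketch, not the released implementation: the one-layer MLPs $\Psi_e$ are stood in for by caller-supplied scoring functions, and all names are illustrative.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def factor_coefficients(h, edges, W, psi_list):
    """Compute one coefficient matrix E_e per factor graph (Eq. 1).

    h        : (n, F) node features
    edges    : list of (i, j) index pairs of the input graph
    W        : (F', F) shared linear transformation
    psi_list : one scoring function per factor graph; each maps the
               concatenated pair features (2F',) to a scalar
    """
    h_prime = h @ W.T                       # shared transformed features
    n = h.shape[0]
    E = np.zeros((len(psi_list), n, n))     # one n x n matrix per factor
    for e, psi in enumerate(psi_list):
        for (i, j) in edges:
            # the sigmoid squashes the raw score into [0, 1]; there is no
            # softmax over neighbours, unlike standard attention
            E[e, i, j] = sigmoid(psi(np.concatenate([h_prime[i], h_prime[j]])))
    return E, h_prime
```

Note that the coefficients are produced independently per edge and per factor, so an edge can be strongly active in several factor graphs at once, which is what enables multi-relation disentangling.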
However, without any other
constraint, some of the generated factor graphs may contain similar
structures, degrading the disentanglement performance and
the capacity of the model. We therefore introduce an additional
head in the disentangle layer, aiming to avoid the
degradation of the generated factor graphs.
The motivation behind the additional head is that a well-disentangled
factor graph should contain enough information to
be distinguished from the rest based solely on its
structure.
{Obtaining the solution that all the disentangled
factor graphs differ from each other to the
maximal degree, unfortunately, is not trivial.}
We thus approximate the solution by
giving unique labels to the factor graphs
and optimizing the factor graphs
as a graph classification problem.
Our additional head will serve as a {discriminator, shown in Eq.~\ref{eq:2}}, to distinguish which label a given graph has:
\begin{small}
\begin{equation}
G_e = {\rm Softmax}\Big( f \big({\rm Readout}(\mathcal{A}(\mathbf{E}_{e}, \mathbf{h^\prime}) ) \big) \Big).
\label{eq:2}
\end{equation}
\end{small}
The discriminator contains
a three-layer graph auto-encoder $\mathcal{A}$, which takes the transformed feature
$\mathbf{h^\prime}$ and the generated attention coefficients of factor graph $\mathbf{E}_e$
as inputs, and generates the new node features.
These features are then readout to generate
the representation of the whole factor graph.
Next, the feature vectors will be sent to
a classifier with one fully connected layer.
Note that all the factor graphs share the
same {node features}, making sure that the
information discovered by the discriminator only
comes from the difference among the structure of
the factor graphs.
More details about the discriminator architecture
can be found in the supplementary materials.
The loss used to train the discriminator
is taken as follows:
\begin{small}
\begin{equation}
\mathcal{L}_{d} = - \frac{1}{N} \sum_i^N \left( \sum_{c=1}^{N_e} \mathbbm{1}_{e=c} \log(G_i^e[c]) \right),
\label{eq:disloss}
\end{equation}
\end{small}\noindent
where $N$ is the number of training samples,
set to be the number of input graphs
multiplied by the number of factor graphs;
$N_e$ is the number of factor
graphs; $G_i^e$ is the predicted distribution
of sample $i$ and $G_i^e[c]$ represents the
probability that the generated factor graph has label $c$;
$\mathbbm{1}_{e=c}$ is an indicator function that equals one
when the class label $c$ coincides with the ground-truth factor index $e$.
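The discriminator loss above reduces to a standard cross-entropy in which each generated factor graph must be classified into its own fixed label. A minimal NumPy sketch, with tensor shapes assumed for illustration:

```python
import numpy as np

def discriminator_loss(logits, labels):
    """Cross-entropy over factor-graph labels.

    logits : (N, N_e) raw classifier scores; N = input graphs x factors,
             one row per generated factor graph
    labels : (N,) ground-truth factor index of each sample
    """
    z = logits - logits.max(axis=1, keepdims=True)     # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    # the indicator picks out the probability of the true factor label
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))
```

A discriminator that cannot tell the factor graphs apart yields a loss near $\log N_e$, while well-separated factor structures drive the loss toward zero.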
\subsection{{Aggregation Step}}
As the factor graphs derived from the disentangling step
are optimized to be as diverse as possible,
in the aggregation step we use the generated factor graphs
to aggregate information in different structural spaces.
This step is similar to that of most GCN models, where
the new node feature is generated by taking the weighted sum of its
neighbours. Our aggregation mechanism is based on
the simplest one, used in GCN~\citep{kipf2017semi}.
The only difference is that the aggregation will take place independently
for each of the factor graphs.
The aggregation process is formulated as
\begin{small}
\begin{equation}
h^{(l+1)_e}_i = \sigma \Big( \sum_{j\in \mathcal{N}_i} \frac{E_{ije}}{c_{ij}} h^{(l)}_j \mathbf{W}^{(l)} \Big), \quad
c_{ij} = \left( |\mathcal{N}_i||\mathcal{N}_j| \right)^{1/2},
\label{eq:agg}
\end{equation}
\end{small}\noindent
where $h^{(l+1)_e}_i$ represents the new feature for node
$i$ in $l+1$ layer aggregated
{from} the factor graph $e$; $\mathcal{N}_i$ represents all
the neighbours of node $i$ in the input graph;
$E_{ije}$ is the coefficient of the edge from node $i$ to
node $j$ in the factor graph $e$; $c_{ij}$ is the
normalization term that is computed according to
the degree of node $i$ and node $j$;
$\mathbf{W}^{(l)}$ is a linear transformation matrix,
which is the same as the matrix used in the disentangling step.
Note that although we use all the neighbours of a node
in the input graph to aggregate information,
{some of them are making no contribution if the corresponding
coefficient in the factor graph is zero.}
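The aggregation rule of Eq.~\ref{eq:agg} can be sketched for a single factor graph as follows; this is an illustrative sketch in which ReLU stands in for the nonlinearity $\sigma$, and the neighbour sets are read off the input adjacency matrix:

```python
import numpy as np

def aggregate(h, A, E_e, W):
    """One aggregation step on a single factor graph.

    h   : (n, F)  node features at layer l
    A   : (n, n)  0/1 adjacency of the *input* graph (defines N_i)
    E_e : (n, n)  coefficients of factor graph e
    W   : (F, Fp) linear transformation shared with the disentangling step
    """
    deg = A.sum(axis=1)                    # |N_i|, taken from the input graph
    c = np.sqrt(np.outer(deg, deg))        # c_ij = (|N_i| |N_j|)^(1/2)
    weights = np.where(A > 0, E_e / np.maximum(c, 1e-12), 0.0)
    # ReLU stands in for sigma; neighbours whose factor coefficient is
    # zero contribute nothing to the aggregated feature
    return np.maximum(weights @ (h @ W), 0.0)
```

Running this once per factor graph produces $N_e$ feature matrices, each aggregated in its own disentangled structural space.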
\subsection{{Merging Step}}
Once the aggregation step is complete,
different factor graphs will lead to
different features of nodes.
We merge these features generated from
different factor graphs by applying
\begin{small}
\begin{equation}
h^{(l+1)}_i = ||^{N_e}_{e=1} h^{(l+1)_e}_i,
\label{eq:merge}
\end{equation}
\end{small}\noindent
where $h^{(l+1)}_i$ is the output feature of node $i$; $N_e$
is the number of factor graphs; $||$ represents the
concatenation operation.
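The merging step of Eq.~\ref{eq:merge} is a plain channel-wise concatenation; a one-line sketch:

```python
import numpy as np

def merge(factor_feats):
    """Concatenate per-factor node features along the channel axis.

    factor_feats : list of (n, Fp) arrays, one per factor graph
    """
    return np.concatenate(factor_feats, axis=1)
```

Because each block of the concatenated output comes from exactly one factor graph, the resulting node features are block-wise interpretable.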
\subsection{Architecture}
We have discussed above the design of one disentangle layer,
which contains three steps. The FactorGCN model
used in the experimental section stacks
several such disentangle layers, increasing
the expressive power. Moreover, by setting different
numbers of factor graphs in different layers,
the proposed model can disentangle the input data
in a hierarchical manner.
The total loss to train the FactorGCN model is $\mathcal{L} = \mathcal{L}_{t} + \lambda \mathcal{L}_{d}$. $\mathcal{L}_{t}$ is the loss of
the original task, taken to be
a binary cross-entropy loss for the multi-label
classification task, a cross-entropy loss for the
multi-class classification task, or an L1 loss for the regression task.
$\mathcal{L}_{d}$ is the loss
of the discriminator introduced above. $\lambda$ is the
weight that balances the two losses.
\section{Experiments}
In this section,
we show the effectiveness of the proposed FactorGCN,
and provide discussions
on its various components as well as
the sensitivity with respect to the
key hyper-parameters.
More results can be found in the supplementary materials.
\subsection{Experimental setups}
{\textbf{Datasets}. }
Here, we use six datasets to evaluate the
effectiveness of the proposed method.
The first one is a synthetic dataset
that contains a fixed number of predefined graphs
as factor graphs. The second one is the ZINC dataset~\citep{dwivedi2020benchmarkinggnn}
built from molecular graphs.
The third one is Pattern dataset~\citep{dwivedi2020benchmarkinggnn},
which is a large scale dataset for node classification task.
The other three are widely used graph classification datasets,
including social networks~(COLLAB, IMDB-B)
and a bioinformatics graph~(MUTAG)~\citep{yanardag2015deepgin}.
To generate the synthetic dataset that contains $N_e$ factor graphs,
we first generate $N_e$ predefined graphs,
which are the well-known graphs like Tur\'an graph, house-x graph,
and balanced-tree graph.
We then choose half of them and pad each with isolated nodes so
that the number of nodes is 15.
The padded graphs are merged together as one training sample.
The label of a synthetic sample is a binary vector of dimension $N_e$:
the entries corresponding to
the graph types that the sample is generated from are set to one, and
the rest are set to zero.
More information about the datasets can be found
in the supplemental materials.
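The construction above can be sketched as follows. This is a simplified stand-in: we assume the padded graphs are merged by taking the union of their edges over a shared set of 15 node indices, and the adjacency matrices below are illustrative substitutes for the predefined Tur\'an, house-x, and balanced-tree graphs.

```python
import numpy as np

def pad(A, n=15):
    """Embed a k-node adjacency matrix into n nodes; the extra
    nodes remain isolated, as in the dataset construction."""
    out = np.zeros((n, n), dtype=int)
    k = A.shape[0]
    out[:k, :k] = A
    return out

def make_sample(predefined, rng, n=15):
    """Choose half of the N_e predefined graphs, pad each to n nodes,
    and merge them (edge union, an assumption) into one sample.
    The label is a binary vector marking the chosen graph types."""
    N_e = len(predefined)
    chosen = rng.choice(N_e, size=N_e // 2, replace=False)
    label = np.zeros(N_e, dtype=int)
    label[chosen] = 1
    merged = np.zeros((n, n), dtype=int)
    for idx in chosen:
        merged |= pad(predefined[idx], n)
    return merged, label
```

The merged adjacency matrix hides which predefined graphs produced it, which is exactly what the disentangle layers are asked to recover.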
\textbf{Baselines}. We adopt several methods,
including state-of-the-art ones, as the baselines.
Among all,
MLP is the simplest one, which contains multiple fully connected layers.
Although this method is simple, it can in fact perform well compared
with other methods that consider the structural information.
We use MLP to check whether the other compared methods benefit
from using the structural information as well.
GCN aggregates the information in the graph according to
the laplacian matrix of the graph,
which can be seen as a fixed weighted sum on
the neighbours of a node.
GAT~\citep{velickovic2018graphgat} extends the idea of GCN by
introducing the attention mechanism.
The aggregation weights are computed dynamically according to
all the neighbours. For the ZINC dataset, we also add MoNet~\citep{monti2017geometricMoNet}
and GatedGCN$_E$~\citep{dwivedi2020benchmarkinggnn}
as baselines. The former is the state-of-the-art method that does not
use the edge-type information, while the latter is the state-of-the-art
one that uses additional edge information.
A random baseline is also added to provide the result of random guessing for reference. For the other three graph datasets, we add
non DL-based methods~(WL subtree, PATCHYSAN, AWL) and
DL-based methods~(GCN, GraphSage~\citep{hamilton2017inductive}, GIN)
as baselines. DisenGCN~\citep{ma2019disentangled} and
IPDGN~\citep{liu2019independence}
are also added.
\textbf{Hyper-parameters}.
For the synthetic dataset, Adam optimizer is used with a
learning rate of 0.005,
the number of training epochs is set to 80,
the weight decay is set to 5e-5.
Each row of the adjacency matrix of the generated synthetic
graph is used as the feature of the corresponding node.
The negative slope of LeakyReLU for GAT model is set to 0.2,
which is the same as the original setting.
The number of hidden layers for all models is set to two.
The dimension of the hidden feature is set to 32 when
the number of factor graphs is no more than four
and 64 otherwise.
The weight for the loss of discriminator in FactorGCN is set to 0.5.
For the molecular dataset, the dimension of the hidden feature is set to 144
for all methods and the number of layers is set to four.
Adam optimizer is used with a learning rate of 0.002.
No weight decay is used.
$\lambda$ of FactorGCN is set to 0.2. All the methods
are trained for 500 epochs. The test results are obtained using the model with
the best performance on validation set. For the other three datasets,
three layers FactorGCN is used.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{Figs/vis_samples.pdf}
\vspace{-1.3em}
\caption{Examples of the disentangled factor graphs on the synthetic dataset.
The isolated nodes are eliminated for a better visualization.}
\label{fig:synthetic_vis}
\vspace{-1.2em}
\end{figure}
\subsection{Qualitative Evaluation}
We first provide the qualitative evaluations of disentanglement performance,
including the visualization of the disentangled factor graphs and
the correlation analysis of the latent features.
\textbf{Visualization of disentangled factor graphs}.
To give an intuitive understanding of the
disentanglement, we provide in Fig.~\ref{fig:synthetic_vis}
some examples of the generated factor graphs.
We remove the isolated nodes and visualize
the best-matched factor graphs with ground truths.
More results and analyses can be found in the supplemental
materials.
\begin{figure}[ht]
\centering
\includegraphics[width=0.93\linewidth]{Figs/corr_six.pdf}
\caption{Feature correlation analysis. The hidden features are obtained from
the test split using the pre-trained models on the synthetic dataset.
It can be seen that the features generated
from FactorGCN present a more block-wise
correlation pattern, indicating that
the latent features have indeed been disentangled.
We also show the classification performance in brackets.}
\label{fig:synthetic_corr}
\end{figure}
\textbf{Correlation of disentangled features}.
Fig.~\ref{fig:synthetic_corr} shows the correlation
analysis of the latent features
obtained from several pre-trained models on the synthetic dataset.
It can be seen that although the GCN and MLP models achieve
high performance on the downstream task,
their latent features remain highly entangled.
GAT gives more independent latent features,
but its performance on the
original task is degraded. FactorGCN is able to extract highly independent
latent features while achieving a better performance on the downstream task.
\subsection{Quantitative Evaluation}
The quantitative evaluation focuses on two parts,
the performance of the downstream tasks
and that of the disentanglement.
\textbf{Evaluation protocol}.
For the downstream tasks,
we adopt the corresponding metrics to evaluate,
i.e., Micro-F1 for the multi-label classification task,
mean absolute error~(MAE) for the regression task.
We design two new metrics to evaluate the disentanglement
performance on the graph data.
The first one is graph edit distance on edge~(GED$_{E}$).
This metric is inspired by the traditional graph edit distance~(GED).
Since the input graph already provides the information
about the order of nodes, the disentanglement of
the input data, in reality,
only involves the changing of edges.
Therefore, we restrict the GED by only
allowing the addition and removal of edges, and
thus obtain the
GED$_{E}$ score via a Hungarian match
between the generated factor graphs
and the ground truths.
Specifically, for each pair of a
generated factor graph and a
ground-truth graph, we first convert the continuous values in the
factor graph to binary ones, choosing the threshold so that
the two graphs have the same number of edges.
Then, GED$_{E}$s can be computed for every such combination.
Finally, Hungarian match is adopted to obtain the best bipartite matching results as the GED$_{E}$ score.
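The GED$_E$ computation can be sketched as follows. This is an illustrative sketch: for the small numbers of factor graphs considered here, the optimal bipartite matching is found by direct enumeration rather than the Hungarian algorithm proper, and the function names are assumptions.

```python
import numpy as np
from itertools import permutations

def binarize(F, n_edges):
    """Threshold a continuous factor graph so that it keeps (roughly)
    n_edges entries -- the edge count of its candidate ground truth.
    Ties at the threshold may keep a few extra entries."""
    if n_edges <= 0:
        return np.zeros_like(F, dtype=int)
    thresh = np.sort(F.ravel())[::-1][n_edges - 1]
    return (F >= thresh).astype(int)

def ged_e(factors, truths):
    """Edge-only graph edit distance plus best bipartite matching.
    The pairwise cost is the number of edges added plus removed;
    the optimal assignment is found by enumerating permutations."""
    cost = np.zeros((len(factors), len(truths)))
    for a, F in enumerate(factors):
        for b, G in enumerate(truths):
            B = binarize(F, int(G.sum()))
            cost[a, b] = np.abs(B - G).sum()
    best = min(permutations(range(len(truths))),
               key=lambda p: sum(cost[a, p[a]] for a in range(len(factors))))
    return sum(cost[a, best[a]] for a in range(len(factors))), best
```

The returned assignment records which factor graph was matched to which ground truth, which is reused below when computing the consistency score.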
Besides the GED$_{E}$ score, we also
care about the consistency of the generated factor graph.
In other words, the best-matched pairs between the generated factor graphs and the ground truths, optimally, should be
identical across all samples.
We therefore introduce a second metric, named the consistency score~(C-Score), related to GED$_{E}$.
C-Score is computed as the
average percentage of the
most frequently matched factor graphs.
The C-Score will be one
if each ground-truth graph is always matched
to the same factor graph.
A more detailed description of evaluation
protocol can be found in the supplemental materials.
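The C-Score can likewise be sketched: for each ground-truth graph we count how often every factor-graph slot was matched to it across samples, and average the dominant fraction. The input layout below is an assumption for illustration.

```python
import numpy as np

def c_score(match_history):
    """Consistency score over a set of samples.

    match_history : (num_samples, num_truths) int array; entry [s, t]
                    is the factor-graph index matched to ground-truth
                    graph t in sample s (e.g. from the GED_E matching).
    """
    scores = []
    for t in range(match_history.shape[1]):
        counts = np.bincount(match_history[:, t])
        # fraction of samples agreeing with the most frequent match
        scores.append(counts.max() / match_history.shape[0])
    return float(np.mean(scores))
```

A score of one means every ground-truth graph is always recovered in the same factor slot; random assignments drive the score toward $1/N_e$.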
\begin{table}
\caption{Performance on synthetic dataset. The four methods are evaluated in terms of
the classification and the disentanglement performance. Classification performance
is evaluated by Micro-F1 and disentanglement performance is measured by GED$_E$ and C-Score. For each method,
we run the experiments five times and report the mean and std. Random method generates four factor graphs.
GAT\_W/Dis represents the GAT model with
the additional discriminator proposed in this paper.
\label{tab:synthetic}
\centering
\scalebox{0.71}{
\begin{tabular}{lccccccc}
\toprule
& MLP & GCN & GAT & GAT\_W/Dis & DisenGCN & FactorGCN~(Ours) & Random\\
\midrule
Micro-F1 $\uparrow$ & 0.940 $\pm$ 0.002 & 0.947 $\pm$ 0.003 & 0.923 $\pm$ 0.009 & 0.928 $\pm$ 0.009 & 0.904$\pm$0.007 & \textbf{0.995 $\pm$ 0.004} & 0.250 $\pm$ 0.002 \\
GED$_{E}$ $\downarrow$ & - & - & 12.59 $\pm$ 3.00 & 12.35 $\pm$ 3.86
& \textbf{10.54$\pm$4.35} & \textbf{10.59 $\pm$ 4.37} & 32.09 $\pm$ 4.85 \\
C-Score $\uparrow$ & - & - & 0.288 $\pm$ 0.064 & 0.274 $\pm$ 0.065
& 0.367$\pm$0.026 & \textbf{0.532 $\pm$ 0.044} & 0.315 $\pm$ 0.002 \\
\bottomrule
\end{tabular}
}
\end{table}
\textbf{Evaluation on the synthetic dataset}. We first evaluate the disentanglement
performance on a synthetic dataset. The results are shown in Tab.~\ref{tab:synthetic}.
Although MLP and GCN achieve good classification performances, they are not capable of disentanglement.
GAT disentangles the input by using multi-head attention,
but the performance of the original task is degraded.
Our proposed method,
on the other hand, achieves a much better performance in terms of both disentanglement and
the original task.
We also evaluate the compared methods on the synthetic dataset with various numbers of
factor graphs, shown in Tab.~\ref{tab:synthetic_various}.
As the number of
latent factor graphs increases, the performance gain of FactorGCN becomes larger.
However, when the number of factor graphs becomes too large,
the task will be more challenging,
yielding lower performance gains.
\begin{table}
\caption{Classification performance on synthetic graphs with different numbers of factor graphs.
We change the total number of factor graphs and generate five synthetic datasets.
When the number of factor graphs increases, the performance gain of FactorGCN becomes larger.
However, as the number of factor graphs becomes too large, disentanglement
will be more challenging, yielding {lower performance gains}.}
\label{tab:synthetic_various}
\centering
\scalebox{0.92}{
\begin{tabular}{cccccc}
\toprule
\multirow{2}{*}{Method} & \multicolumn{5}{c}{Number of factor graphs} \\
\cmidrule(r){2-6}
& 2 & 3 & 4 & 5 & 6 \\
\midrule
MLP & 1.000 $\pm$ 0.000 & 0.985 $\pm$ 0.002 & 0.940 $\pm$ 0.002 & 0.866 $\pm$ 0.001 & 0.809 $\pm$ 0.002 \\
GCN & 1.000 $\pm$ 0.000 & 0.984 $\pm$ 0.000 & 0.947 $\pm$ 0.003 & 0.844 $\pm$ 0.002 & 0.765 $\pm$ 0.001 \\
GAT & 1.000 $\pm$ 0.000 & 0.975 $\pm$ 0.002 & 0.923 $\pm$ 0.009 & 0.845 $\pm$ 0.006 & 0.791 $\pm$ 0.006 \\
FactorGCN & 1.000 $\pm$ 0.000 & \textbf{1.000 $\pm$ 0.000} & \textbf{0.995 $\pm$ 0.004} & \textbf{0.893 $\pm$ 0.021} & \textbf{0.813 $\pm$ 0.049} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}
\caption{Performance on the ZINC dataset. FactorGCN outperforms the compared methods
by a large margin, with the
capability of disentanglement.
Note that our proposed method even achieves a performance similar to that of
GatedGCN$_E$, the state-of-the-art method on the ZINC dataset that explicitly uses additional
edge information. 
\label{tab:zinc}
\centering
\scalebox{0.70}{
\begin{tabular}{c|cccccc|c}
\toprule
& MLP & GCN & GAT & MoNet & DisenGCN & FactorGCN~(Ours) & GatedGCN$_E$\\
\midrule
MAE $\downarrow$ & 0.667 $\pm$ 0.002 & 0.503 $\pm$ 0.005 & 0.479 $\pm$ 0.010 & 0.407 $\pm$ 0.007
& 0.538$\pm$0.005 & \textbf{0.366 $\pm$ 0.014} & \textit{0.363 $\pm$ 0.009}\\
GED$_{E}$ $\downarrow$ & - & - & 15.46 $\pm$ 6.06 & -
& 14.14$\pm$6.19 & \textbf{12.72 $\pm$ 5.34} &- \\
C-Score $\uparrow$ & - & - & 0.309 $\pm$ 0.013 & -
& 0.342$\pm$0.034 & \textbf{0.441 $\pm$ 0.012} &- \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}
\caption{Accuracy~(\%) on three graph classification datasets.
FactorGCN performs on par with or better
than the state-of-the-art
GCN models. We highlight the best DL-based methods and non DL-based methods
separately. FactorGCN uses the same hyper-parameters for all datasets.}
\label{tab:other}
\centering
\scalebox{0.85}{
\begin{tabular}{cccc|cccc}
\toprule
& WL subtree & PATCHYSAN & AWL & GCN & GraphSage & GIN & FactorGCN \\
\midrule
IMDB-B & 73.8 $\pm$ 3.9 & 71.0 $\pm$ 2.2 & \textbf{74.5 $\pm$ 5.9}& 74.0 $\pm$ 3.4 &72.3 $\pm$ 5.3 & \textbf{75.1 $\pm$ 5.1} & \textbf{75.3 $\pm$ 2.7} \\
COLLAB & \textbf{78.9 $\pm$ 1.9} & 72.6 $\pm$ 2.2 & 73.9 $\pm$ 1.9 & 79.0 $\pm$ 1.8 & 63.9 $\pm$ 7.7 & 80.2 $\pm$ 1.9 & \textbf{81.2 $\pm$ 1.4} \\
MUTAG & 90.4 $\pm$ 5.7 & \textbf{92.6 $\pm$ 4.2} & 87.9 $\pm$ 9.8 & 85.6 $\pm$ 5.8 & 77.7 $\pm$ 1.5 & \textbf{89.4 $\pm$ 5.6} & \textbf{89.9 $\pm$ 6.5}\\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[ht]
\caption{Accuracy~(\%) on the Pattern dataset
for node-classification task.
FactorGCN achieves the best performance,
showing its ability to serve as a general GCN framework.}
\label{tab:pattern}
\centering
\scalebox{0.86}{
\begin{tabular}{ccccccc}
\toprule
GCN & GatedGCN & GIN & MoNet & DisenGCN & IPDGN & FactorGCN \\
\midrule
63.88 $\pm$ 0.07 & 84.48 $\pm$ 0.12 & 85.59 $\pm$ 0.01
& 85.48 $\pm$ 0.04 & 75.01 $\pm$ 0.15 & 78.70 $\pm$ 0.11 & \textbf{86.57 $\pm$ 0.02} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\linewidth]{Figs/sens_1x2.pdf}
\caption{The influence of the balanced weight $\lambda$ and
the number of factor graphs.}
\label{fig:sens}
\end{figure}
\textbf{Evaluation on the ZINC dataset}.
For this dataset, the edge-type information is
hidden during the training process
and serves as the ground truth to evaluate the performance of
disentanglement. Tab.~\ref{tab:zinc} shows the results. The proposed method
achieves the best performance on both the disentanglement
and the downstream task.
We also show the state-of-the-art method GatedGCN$_E$
on this dataset on the right side of Tab.~\ref{tab:zinc}, which utilizes
the type information of edges during the training process.
Our proposed method, without any additional edge information,
achieves truly promising results that are comparable
to those of GatedGCN$_E$, which {needs the bond information
of the edges during training.}
\textbf{Evaluation on more datasets}.
To provide a thorough understanding of the proposed method,
we also carry out evaluations on three widely
used graph classification datasets and one node classification dataset
to see the performances of FactorGCN as a general GCN framework.
The same 10-fold evaluation protocol
as~\citep{xu2018powerful} is adopted.
Since there are no ground truth factor graphs,
we only report the accuracy, shown in
Tab.~\ref{tab:other} and Tab.~\ref{tab:pattern}.
Our method consistently achieves top-tier performance,
showing the potential of the FactorGCN
as a general GCN framework, even putting aside its
disentangling capability.
More details about the evaluation protocol,
the setup of our method, and the statistic information about these datasets
can be found in the supplemental materials.
\subsection{Ablation and sensitivity analysis}
We show in Fig.~\ref{fig:sens} the ablation study and sensitivity analysis
of the proposed method.
When varying $\lambda$, the number of factor graphs
is set to eight;
when varying the number of factor graphs,
$\lambda$ is set to 0.2.
As can be seen from the left figure,
the performance of both the disentanglement and the
downstream task will degrade without the discriminator.
The right figure shows the relations between the performance
and the number of factor graphs we used in FactorGCN.
Setting the number of factor graphs
to be slightly larger than that of the ground truth,
in practice, leads to a better performance.
\section{Conclusion}
We propose a novel GCN framework, termed as FactorGCN,
which achieves graph convolution through
graph-level disentangling. Given an input graph, FactorGCN
decomposes it into several interpretable factor graphs, each of which denotes an underlying type of interconnection
between entities, and then carries out topology-aware convolutions on each such factor graph to produce the final node features. The node features, derived under the explicit disentangling, are therefore block-wise explainable
and beneficial to the downstream tasks.
Specifically, FactorGCN enables multi-relation
disentangling, allowing information propagation
between two nodes to take place
in disjoint spaces.
We also introduce two new metrics to
measure the graph disentanglement
performance quantitatively.
FactorGCN outperforms other methods
on both the disentanglement and the downstream tasks, indicating
that the proposed method is ready to serve as a general GCN framework
with the capability of graph-level disentanglement.
\section*{Acknowledgments}
This work is supported by the startup funding of
Stevens Institute of Technology.
\section*{Broader Impact}
In this work we introduce a GCN framework,
termed as FactorGCN,
that explicitly accounts for disentanglement.
FactorGCN is applicable to various scenarios,
both technical and social.
For conventional graph-related
tasks, like node classification of the social network
and graph classification of the molecular graph,
our proposed method can serve as a general GCN
framework.
For disentangling tasks, our method
generates factor graphs that reveal
the latent relations among entities,
and facilitate
further decision-making processes such as
recommendation.
Furthermore, given sufficient data,
FactorGCN can be used as a tool to
analyze social issues, such as discovering
the reasons for the quick spread of
an epidemic disease in some areas.
Like all learning-based methods,
FactorGCN is not free of errors.
If the produced disentangled factor graphs
are incorrect, for example,
the subsequent inference and prediction results
will be downgraded, possibly yielding
undesirable bias.
\newpage
\small
\bibliographystyle{unsrtnat}
\section{Introduction}
Disentangling aims to factorize an entity, like a feature vector, into several interpretable components, so that
the behavior of a learning model can be better understood.
In recent years, many approaches have been proposed towards
tackling disentangling in deep neural networks
and have achieved promising results.
Most prior efforts, however,
have focused on the disentanglement of
convolutional neural networks~(CNNs),
especially the auto-encoder architecture,
where disentangling takes place
during the stage of latent feature generation.
{For example, VAE~\citep{kingma2013autovae}
restrains the distribution of the latent features
to Gaussian and generates disentangled
representation; $\beta$-VAE~\citep{higgins2017betaVAE} further improves the disentangling by introducing $\beta$ to balance the
independence constraints and reconstruction accuracy.}
Despite the many prior efforts in CNN disentangling,
there are few endeavors
toward disentangling in the irregular structural domain,
where graph convolutional network~(GCN) models are applied.
Meanwhile, the inherent differences between grid-like data and structural data preclude applying CNN-based disentangling methods to GCN ones.
The works of~\citep{ma2019disentangled,liu2019independence},
as pioneering attempts,
focus on the node-level
neighbour partition
and ignore the latent multi-relations among nodes.
We introduce in this paper a novel GCN, that aims to explicitly conduct graph-level disentangling, based on which convolutional features are aggregated. Our approach, termed as \emph{factorizable graph convolutional network}~(FactorGCN),
takes as input a \emph{simple graph},
and decomposes it into several
factor graphs, each of which corresponds
to a disentangled and interpretable relation space,
as shown in Fig.~\ref{fig:factorlayer}.
Each such graph then undergoes a GCN, tailored to
aggregate features only from one disentangled latent space,
followed by a merging operation that concatenates all derived
features from disentangled spaces,
so as to produce the final block-wise interpretable features.
These steps constitute one layer of the proposed FactorGCN.
As the output graph with updated features share the identical
topology as input, nothing prevents us
from stacking a number of
layers to disentangle the input data at different levels,
yielding a hierarchical disentanglement
with various numbers of factor
graphs at different levels.
FactorGCN, therefore,
potentially finds application in a
wide spectrum of scenarios.
In many real-world graphs,
multiple heterogeneous relations
between nodes are mixed and
collapsed to one single edge.
In the case of social networks,
two people may be \emph{friends}, \emph{colleagues},
and \emph{living in the same city} simultaneously,
but linked via one single edge that omits such
interconnections;
in the co-purchasing scenario~\citep{co-purchase},
products are bought together for
different reasons like \emph{promotion}
and \emph{functional complementarity},
distinctions that are often ignored in the graph construction.
FactorGCN would, in these cases,
deliver a disentangled and interpretable
solution towards explaining the underlying
rationale, and provide discriminant learned features
for the target task.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{Figs/pipeline.pdf}
\caption{Illustration of one layer in the proposed FactorGCN.
It contains three steps:
\emph{Disentangling}, \emph{Aggregation}, and \emph{Merging}.
In the disentangling step, the input graph is decomposed into several factor graphs, each of which represents a latent relation among nodes. In the aggregation step, GCNs are applied separately to the derived factor graphs
and produce the latent features.
In the merging step, features from all latent graphs
are concatenated to form the final features,
which are block-wise interpretable.}
\label{fig:factorlayer}
\end{figure}
Specifically, the contributions of FactorGCN are summarized as follows.
\begin{itemize}
\item {\bf Graph-level Disentangling}.
FactorGCN conducts disentangling and produces block-wise
interpretable node features by analyzing the whole graph
all at once, during which process the global-level topological semantics,
such as the higher-order relations between edges and nodes,
is explicitly accounted for. The disentangled factor graphs
reveal latent-relation specific interconnections between
the entities of interests, and yield interpretable features
that benefit the downstream tasks.
This scheme therefore contrasts to the prior approaches of~\citep{ma2019disentangled,liu2019independence},
where the {disentanglement takes place only within a local neighborhood, without accounting for global contexts}.
\item {\bf Multi-relation Disentangling}.
Unlike prior methods that decode only
a single attribute for a neighboring node,
FactorGCN enables multi-relation
disentangling, meaning that
{the center node may aggregate information
from a neighbour under multiple types of relations}.
This mechanism is crucial since real-world
data may contain various relations among the
same pair of entities.
In the case of a social network graph, for example,
FactorGCN would produce disentangled results
allowing for two users to be both \emph{friends}
and \emph{living in the same city}; such
multi-relation disentangling is not supported by prior
GCN methods.
\item {\bf Quantitative Evaluation Metric}.
Existing quantitative evaluation methods~\citep{eastwood2018framework,burgess2018understanding}
in the grid domain rely on
generative models, like auto-encoder~\citep{kim2018disentangling}
or GAN~\citep{chen2016infogan}.
Yet in the irregular domain,
unfortunately, state-of-the-art graph generative models
are only applicable for generating small graphs or
larger ones without features.
{Moreover, these models comprise a sequential generation step,
making them infeasible to integrate into graph disentangling frameworks.}
To this end, we propose a graph edit-distance based metric,
which bypasses the generation step
and estimates the similarity between the factor graphs and the ground truth.
\end{itemize}
We conducted experiments on five datasets
in various domains,
and demonstrate that the proposed FactorGCN
yields state-of-the-art performances
for both disentanglement and
downstream tasks.
This indicates that,
even putting aside its disentangling capability,
FactorGCN may well serve as a general GCN framework.
Specifically, on the ZINC dataset~\citep{jin2018junctionZINC},
FactorGCN outperforms other methods by a large margin,
and, without {the bond information of the edges},
FactorGCN achieves a performance on par with the state-of-the-art
method that explicitly {utilizes}
edge-type information.
\section{Related Work}
\textbf{Disentangled representation learning}.
Learning disentangled representations has recently
emerged as a significant task towards
interpretable AI~\citep{yang2020ECCV,Song_2020_CVPR}.
Unlike earlier attempts that rely on
handcrafted disentangled representations
or variables~\citep{WangECCV14,WangTPAMI16},
most of the recent works
in disentangled representation learning are based on the architecture
of auto-encoder~\citep{higgins2017betaVAE,feng2018dual,bouchacourt2018multi,burgess2018understanding,wang2017tag,kim2018disentangling}
or generative
model~\citep{chen2016infogan,zhao2017learning,siddharth2017learning}.
One mainstream auto-encoder approach is to constrain
the latent feature generated from the encoder to make it independent
in each dimension. For example, VAE~\citep{kingma2013autovae}
constrains the distribution of the latent features to Gaussian;
$\beta$-VAE~\citep{higgins2017betaVAE}
enlarges the weight of the KL divergence term to
balance the independence constraints and reconstruction accuracy;
\citep{schmidhuber1992learning} disentangles the latent features by
ensuring that each block of latent features cannot be predicted
from the rest;
DSD~\citep{feng2018dual} swaps some of the latent features
twice to achieve semi-supervised disentanglement.
For the generative model, extra information is introduced during the
generation. For example, InfoGAN~\citep{chen2016infogan} adds the class code to
the model and maximizes the mutual information between the
generated data and the class code.
\textbf{Graph convolutional network}.
Graph convolutional network~(GCN) has shown its potential in the
non-grid domain~\citep{xu2018powerful,Qiu2020ECCV,li2018combinatorial,yang2020distilling,monti2017geometricMoNet,ijcai_spagan}, achieving promising results on various types of
structural data, like citation graph~\citep{velickovic2018graphgat},
social graph~\citep{kipf2017semi},
and relational graph~\citep{schlichtkrull2018modeling}.
Besides designing GCN to better extract information from
non-grid data, there are also a couple of works that explore the
disentangled GCNs~\citep{ma2019learning,liu2019independence}.
DisenGCN~\citep{ma2019disentangled} adopts
a neighbourhood routing mechanism to divide the neighbours of a node
into several mutually exclusive parts.
IPGDN~\citep{liu2019independence} improves DisenGCN
by making the different parts of the embedded feature
independent. Despite the progress made by these works,
several problems remain:
the disentanglement is at the node level,
which does not account for the information of the whole graph,
and there are no quantitative metrics to evaluate
the disentanglement performance.
\section{Method}
In this section, we give a detailed description
of the architecture of FactorGCN, whose basic component
is the disentangle layer, as shown in Fig.~\ref{fig:factorlayer}.
\subsection{{Disentangling Step}}
The goal of this step is to factorize the input graph into several factor graphs.
To this end, we treat the edges equally across the whole graph.
The mechanism we adopt to
generate these factorized coefficients is
similar to that of graph attention network~\citep{velickovic2018graphgat}.
We denote the input of the disentangle layer as
$\mathbf{h} = \{h_1, h_2, \dots, h_n\}, h_i \in \mathcal{R}^F$
and
$\mathbf{e} = \{e_1, e_2, \dots, e_m\}, e_k = (h_i, h_j)$.
$\mathbf{h}$ denotes the set of nodes with feature of $F$ dimension, and
$\mathbf{e}$ denotes the set of edges.
The input nodes are transformed to a new space, done by multiplying the
features of nodes with a linear transformation matrix
$\mathbf{W} \in \mathcal{R}^{F^\prime \times F}$.
This is a standard operation in most GCN models, which increases the capacity of the model.
The transformed features are then used to
generate the factor coefficients as follows
\begin{equation}
E_{ije} = 1 / \left(1 + e^{-\Psi_e (h^\prime_{i}, h^\prime_{j}) } \right); h^\prime=\mathbf{W} h,
\label{eq:1}
\end{equation}
where $\Psi_{e}$ is the function that takes the features of
node $i$ and node $j$ as input and computes the attention score of the edge
for factor graph $e$,
and takes the form of a one-layer MLP
in our implementation;
$E_{ije}$ then can be obtained by normalizing the attention score
to $[0, 1]$, representing the coefficient of edge from node $i$ to node $j$
in the factor graph $e$;
{$h^\prime$ is the transformed node feature, shared
across all functions $\Psi_{*}$.} Different from most previous
{forms of attention-based GCNs} that normalize
the attention coefficients among all the neighbours of nodes,
our proposed model generates these coefficients directly
{as the factor graph}.
Once all the coefficients are computed,
a factor graph $e$ can be represented by its own $E_e$,
which will be used for the next aggregation step.
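As an illustration, the coefficient computation of Eq.~\ref{eq:1} can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation: in particular, feeding $\Psi_e$ the concatenation of the two transformed node features, and the specific parameter shapes, are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def factor_coefficients(h, edges, W, psi):
    """Sketch of Eq. (1) for a single factor graph e.

    h:     (n, F) input node features
    edges: list of (i, j) node-index pairs
    W:     (F', F) shared linear transform
    psi:   (2F',) weights of a one-layer MLP Psi_e; feeding it the
           concatenation [h'_i, h'_j] is an assumption of this sketch
    """
    h_prime = h @ W.T                          # h' = W h
    E = {}
    for (i, j) in edges:
        score = psi @ np.concatenate([h_prime[i], h_prime[j]])
        E[(i, j)] = sigmoid(score)             # normalized to (0, 1)
    return E

# toy run: 3 nodes with 4-dim features, 2 edges, one factor graph
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 4))
W = rng.normal(size=(2, 4))
psi = rng.normal(size=4)
E = factor_coefficients(h, [(0, 1), (1, 2)], W, psi)
```

In a full layer this function would be evaluated once per factor graph, each with its own $\Psi_e$ parameters but the shared transform $\mathbf{W}$.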
However, without any other
constraint, some of the generated factor graphs may contain a similar
structure, degrading the disentanglement performance and
capacity of the model. We therefore introduce an additional
head in the disentangle layer, aiming to avoid the
degradation of the generated factor graphs.
The motivation behind the additional head is that a well-disentangled
factor graph should contain enough information to
be distinguished from the rest, based only on its
structure.
{Obtaining a solution in which all the disentangled
factor graphs differ from each other to the
maximal degree, unfortunately, is not trivial.}
We thus approximate the solution by
giving unique labels to the factor graphs
and optimizing the factor graphs
as a graph classification problem.
Our additional head will serve as a {discriminator, shown in Eq.~\ref{eq:2}}, to distinguish which label a given graph has:
\begin{small}
\begin{equation}
G_e = {\rm Softmax}\Big( f \big({\rm Readout}(\mathcal{A}(\mathbf{E}_{e}, \mathbf{h^\prime}) ) \big) \Big).
\label{eq:2}
\end{equation}
\end{small}
The discriminator contains
a three-layer graph auto-encoder $\mathcal{A}$, which takes the transformed feature
$\mathbf{h^\prime}$ and the generated attention coefficients of factor graph $\mathbf{E}_e$
as inputs, and generates the new node features.
These features are then readout to generate
the representation of the whole factor graph.
Next, the feature vectors will be sent to
a classifier with one fully connected layer.
Note that all the factor graphs share the
same {node features}, making sure that the
information discovered by the discriminator only
comes from the difference among the structure of
the factor graphs.
More details about the discriminator architecture
can be found in the supplementary materials.
The loss used to train the discriminator
is taken as follows:
\begin{small}
\begin{equation}
\mathcal{L}_{d} = - \frac{1}{N} \sum_i^N \left( \sum_{c=1}^{N_e} \mathbbm{1}_{e=c} \log(G_i^e[c]) \right),
\label{eq:disloss}
\end{equation}
\end{small}\noindent
where $N$ is the number of training samples,
set to be the number of input graphs
multiplied by the number of factor graphs;
$N_e$ is the number of factor
graphs; $G_i^e$ is the distribution
of sample $i$ and $G_i^e[c]$ represents the
probability that the generated factor graph has label $c$.
$\mathbbm{1}_{e=c}$ is an indicator function that equals one
when the label $c$ matches the unique label $e$ assigned to the factor graph.
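A minimal sketch of the discriminator loss in Eq.~\ref{eq:disloss}, assuming `G` stacks the softmax outputs of Eq.~\ref{eq:2} row-wise and `labels` holds the unique label assigned to each generated factor graph:

```python
import numpy as np

def discriminator_loss(G, labels):
    """Sketch of the loss of Eq. (5): average cross-entropy over the
    generated factor graphs.

    G:      (N, N_e) softmax outputs of the discriminator (Eq. 2)
    labels: (N,) unique label assigned to each generated factor graph
    """
    N = G.shape[0]
    # pick out the probability assigned to the correct label per sample
    return float(-np.mean(np.log(G[np.arange(N), labels])))
```

For a discriminator that outputs a uniform distribution over $N_e$ classes, this loss reduces to $\log N_e$, the expected value before any structure has been learned.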
\subsection{{Aggregation Step}}
As the factor graphs derived from the disentangling step
are optimized to be as diverse as possible,
in the aggregation step, we will use the generated factor graphs
to aggregate information in different structural spaces.
This step is similar to most GCN models, where
the new node feature is generated by taking the weighted sum of its
neighbors. Our aggregation mechanism is based on
the simplest one, which is used in GCN~\citep{kipf2017semi}.
The only difference is that the aggregation will take place independently
for each of the factor graphs.
The aggregation process is formulated as
\begin{small}
\begin{equation}
h^{(l+1)_e}_i = \sigma\Big(\sum_{j\in \mathcal{N}_i} \frac{E_{ije}}{c_{ij}} h^{(l)}_j \mathbf{W}^{(l)} \Big), \quad
c_{ij} = \left( |\mathcal{N}_i||\mathcal{N}_j| \right)^{1/2},
\label{eq:agg}
\end{equation}
\end{small}\noindent
where $h^{(l+1)_e}_i$ represents the new feature for node
$i$ in layer $l+1$, aggregated
{from} the factor graph $e$; $\mathcal{N}_i$ represents all
the neighbours of node $i$ in the input graph;
$E_{ije}$ is the coefficient of the edge from node $i$ to
node $j$ in the factor graph $e$; $c_{ij}$ is the
normalization term that is computed according to
the degree of node $i$ and node $j$;
$\mathbf{W}^{(l)}$ is a linear transformation matrix,
which is the same as the matrix used in the disentangling step.
Note that although we use all the neighbours of a node
in the input graph to aggregate information,
{some of them make no contribution if the corresponding
coefficient in the factor graph is zero.}
\subsection{{Merging Step}}
Once the aggregation step is complete,
different factor graphs will lead to
different features of nodes.
We merge these features generated from
different factor graphs by applying
\begin{small}
\begin{equation}
h^{(l+1)}_i = ||^{N_e}_{e=1} h^{(l+1)_e}_i,
\label{eq:merge}
\end{equation}
\end{small}\noindent
where $h^{(l+1)}_i$ is the output feature of node $i$; $N_e$
is the number of factor graphs; $||$ represents the
concatenation operation.
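The aggregation and merging steps of Eqs.~\ref{eq:agg} and \ref{eq:merge} can be sketched together as below. Taking $\sigma$ to be ReLU and representing each factor graph as a sparse dictionary of edge coefficients are assumptions of this sketch, not the authors' exact implementation.

```python
import numpy as np

def aggregate_and_merge(h, neighbours, factors, W):
    """Sketch of Eqs. (3)-(4): per-factor aggregation with symmetric
    degree normalization, followed by concatenation.

    h:          (n, F) node features
    neighbours: dict node -> list of neighbours in the input graph
    factors:    list of dicts mapping (i, j) -> coefficient E_ije
    W:          (F', F) transform, shared with the disentangling step
    (Taking sigma to be ReLU is an assumption of this sketch.)
    """
    n = h.shape[0]
    deg = {i: max(len(neighbours[i]), 1) for i in range(n)}
    per_factor = []
    for E in factors:
        out = np.zeros((n, W.shape[0]))
        for i in range(n):
            for j in neighbours[i]:
                c_ij = (deg[i] * deg[j]) ** 0.5      # normalization term
                out[i] += E.get((i, j), 0.0) / c_ij * (W @ h[j])
        per_factor.append(np.maximum(out, 0.0))      # sigma = ReLU
    return np.concatenate(per_factor, axis=1)        # block-wise features

# toy run: a 3-node path graph and two factor graphs (one empty)
h = np.ones((3, 2))
W = np.ones((1, 2))
neighbours = {0: [1], 1: [0, 2], 2: [1]}
full = {(i, j): 1.0 for i in neighbours for j in neighbours[i]}
feats = aggregate_and_merge(h, neighbours, [full, {}], W)
```

Note that the empty second factor graph yields an all-zero feature block, illustrating how a zero coefficient removes a neighbour's contribution for that factor.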
\subsection{Architecture}
We discuss above the design of one disentangle layer,
which contains three steps. The FactorGCN model
we used in the experimental section contains
several such disentangle layers, increasing
its expressive power. Moreover, by setting different
number of factor graphs in different layers,
the proposed model can disentangle the input data
in a hierarchical manner.
The total loss to train the FactorGCN model is $\mathcal{L} = \mathcal{L}_{t} + \lambda \mathcal{L}_{d} $. $\mathcal{L}_{t}$ is the loss of
the original task, which is taken to be
a binary cross entropy loss for multi-label
classification task, cross entropy loss for
multi-class classification task, or L1 loss for regression task.
$\mathcal{L}_{d}$ is the loss
of the discriminator we mentioned above. $\lambda$ is the
weight to balance these two losses.
\section{Experiments}
In this section,
we show the effectiveness of the proposed FactorGCN,
and provide discussions
on its various components as well as
the sensitivity with respect to the
key hyper-parameters.
More results can be found in the supplementary materials.
\subsection{Experimental setups}
{\textbf{Datasets}. }
Here, we use six datasets to evaluate the
effectiveness of the proposed method.
The first one is a synthetic dataset
that contains a fixed number of predefined graphs
as factor graphs. The second one is the ZINC dataset~\citep{dwivedi2020benchmarkinggnn}
built from molecular graphs.
The third one is Pattern dataset~\citep{dwivedi2020benchmarkinggnn},
which is a large scale dataset for node classification task.
The other three are widely used graph classification datasets,
including social networks~(COLLAB, IMDB-B)
and a bioinformatics graph~(MUTAG)~\citep{yanardag2015deepgin}.
To generate the synthetic dataset that contains $N_e$ factor graphs,
we first generate $N_e$ predefined graphs,
which are the well-known graphs like Tur\'an graph, house-x graph,
and balanced-tree graph.
We then choose half of them and pad them with isolated nodes to
bring the number of nodes to 15.
The padded graphs will be merged together as a training sample.
The label of the synthetic data is a binary vector, with the dimension $N_e$.
Half of the labels are set to one according to
the types of graphs that the sample is generated from, and
the rest are set to zero.
More information about the datasets can be found
in the supplemental materials.
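The construction above can be sketched as follows. The toy predefined graphs (a triangle and a 4-cycle instead of the Tur\'an, house-x, and balanced-tree graphs) and merging by taking the union of the padded adjacency matrices are illustrative assumptions.

```python
import numpy as np

def make_sample(predefined, chosen, n_nodes=15):
    """Build one synthetic sample: pad each chosen predefined graph with
    isolated nodes up to n_nodes, merge the padded graphs, and emit an
    N_e-dim binary label. (Merging by taking the union of the padded
    adjacency matrices is an assumption of this sketch.)"""
    N_e = len(predefined)
    merged = np.zeros((n_nodes, n_nodes), dtype=int)
    label = np.zeros(N_e, dtype=int)
    for k in chosen:
        A = predefined[k]
        padded = np.zeros((n_nodes, n_nodes), dtype=int)
        padded[:A.shape[0], :A.shape[0]] = A
        merged |= padded           # union of edges
        label[k] = 1
    return merged, label

# two toy "predefined graphs": a triangle and a 4-cycle
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
c4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]])
sample, y = make_sample([tri, c4], chosen=[0])
```

Each row of `sample` can then serve directly as the node feature, matching the setup described in the hyper-parameter paragraph below.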
\textbf{Baselines}. We adopt several methods,
including state-of-the-art ones, as the baselines.
Among all,
MLP is the simplest one, which contains multiple fully connected layers.
Although this method is simple, it can in fact perform well when compared
with other methods that consider the structural information.
We use MLP to check whether the other compared methods benefit
from using the structural information as well.
GCN aggregates the information in the graph according to
the Laplacian matrix of the graph,
which can be seen as a fixed weighted sum on
the neighbours of a node.
GAT~\citep{velickovic2018graphgat} extends the idea of GCN by
introducing the attention mechanism.
The aggregation weights are computed dynamically according to
all the neighbours. For the ZINC dataset, we also add MoNet~\citep{monti2017geometricMoNet}
and GatedGCN$_E$~\citep{dwivedi2020benchmarkinggnn}
as baselines. The former one is the state-of-the-art method that does not
use the type information of edges while the latter one is the state-of-the-art
one that uses additional edge information.
A random method is also added to provide the result of random guessing for reference. For the other three graph datasets, we add
non-DL-based methods~(WL subtree, PATCHYSAN, AWL) and
DL-based methods~(GCN, GraphSage~\citep{hamilton2017inductive}, GIN)
as baselines. DisenGCN~\citep{ma2019disentangled} and
IPDGN~\citep{liu2019independence}
are also added.
\textbf{Hyper-parameters}.
For the synthetic dataset, the Adam optimizer is used with a
learning rate of 0.005,
the number of training epochs is set to 80,
and the weight decay is set to 5e-5.
Each row of the adjacency matrix of the generated synthetic
graph is used as the node feature.
The negative slope of LeakyReLU for GAT model is set to 0.2,
which is the same as the original setting.
The number of hidden layers for all models is set to two.
The dimension of the hidden feature is set to 32 when
the number of factor graphs is no more than four
and 64 otherwise.
The weight for the loss of discriminator in FactorGCN is set to 0.5.
For the molecular dataset, the dimension of the hidden feature is set to 144
for all methods and the number of layers is set to four.
Adam optimizer is used with a learning rate of 0.002.
No weight decay is used.
$\lambda$ of FactorGCN is set to 0.2. All the methods
are trained for 500 epochs. The test results are obtained using the model with
the best performance on the validation set. For the other three datasets,
a three-layer FactorGCN is used.
\begin{figure}
\centering
\includegraphics[width=0.95\linewidth]{Figs/vis_samples.pdf}
\vspace{-1.3em}
\caption{Examples of the disentangled factor graphs on the synthetic dataset.
The isolated nodes are eliminated for a better visualization.}
\label{fig:synthetic_vis}
\vspace{-1.2em}
\end{figure}
\subsection{Qualitative Evaluation}
We first provide the qualitative evaluations of disentanglement performance,
including the visualization of the disentangled factor graphs and
the correlation analysis of the latent features.
\textbf{Visualization of disentangled factor graphs}.
To give an intuitive understanding of the
disentanglement, we provide in Fig.~\ref{fig:synthetic_vis}
some examples of the generated factor graphs.
We remove the isolated nodes and visualize
the best-matched factor graphs with ground truths.
More results and analyses can be found in the supplemental
materials.
\begin{figure}[ht]
\centering
\includegraphics[width=0.93\linewidth]{Figs/corr_six.pdf}
\caption{Feature correlation analysis. The hidden features are obtained from
the test split using the pre-trained models on the synthetic dataset.
It can be seen that the features generated
from FactorGCN present a more block-wise
correlation pattern, indicating that
the latent features have indeed been disentangled.
We also show the classification performance in brackets.}
\label{fig:synthetic_corr}
\end{figure}
\textbf{Correlation of disentangled features}.
Fig.~\ref{fig:synthetic_corr} shows the correlation
analysis of the latent features
obtained from several pre-trained models on the synthetic dataset.
It can be seen that although the GCN and MLP models achieve
high performance on the downstream task,
their latent features remain highly entangled.
GAT produces {more} independent latent features,
but its performance on the
original task is degraded. FactorGCN is able to extract highly independent
latent features while achieving better performance on the downstream task.
\subsection{Quantitative Evaluation}
The quantitative evaluation focuses on two parts,
the performance of the downstream tasks
and that of the disentanglement.
\textbf{Evaluation protocol}.
For the downstream tasks,
we adopt the corresponding metrics,
i.e., Micro-F1 for the multi-label classification task
and mean absolute error~(MAE) for the regression task.
We design two new metrics to evaluate the disentanglement
performance on the graph data.
The first one is graph edit distance on edge~(GED$_{E}$).
This metric is inspired by the traditional graph edit distance~(GED).
Since the input graph already provides the information
about the order of nodes, the disentanglement of
the input data, in reality,
only involves the changing of edges.
Therefore, we restrict the GED by only
allowing adding and removing the edges, and
thus obtain a score of
GED$_{E}$ by Hungarian match
between the generated factor graphs
and the ground truth.
Specifically, for each pair of a
generated factor graph and a
ground-truth graph, we first binarize the continuous values in the
factor graph, setting the threshold so that
the two graphs have the same number of edges.
Then, GED$_{E}$ can be computed for every such combination.
Finally, the Hungarian match is adopted to obtain the best bipartite matching result as the GED$_{E}$ score.
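A sketch of this protocol is given below. For the small numbers of factor graphs considered here, an exhaustive search over permutations stands in for the Hungarian algorithm and yields the same optimal matching; the tie-breaking behaviour of the thresholding is an assumption of this sketch.

```python
import numpy as np
from itertools import permutations

def ged_e(factor, truth):
    """Edge-only edit distance: binarize the continuous factor graph so
    it has as many edges as the ground truth, then count the edge
    additions/removals needed to match."""
    k = int(truth.sum())
    flat = np.sort(factor.flatten())[::-1]
    thr = flat[k - 1] if k > 0 else np.inf
    binary = (factor >= thr).astype(int)
    return int(np.abs(binary - truth).sum())

def matched_score(factors, truths):
    """Best bipartite matching of factor graphs to ground truths;
    exhaustive search over permutations stands in for the Hungarian
    algorithm (equivalent for small N_e)."""
    D = [[ged_e(f, t) for t in truths] for f in factors]
    return min(sum(D[i][p[i]] for i in range(len(factors)))
               for p in permutations(range(len(truths))))
```

Recording which permutation attains the minimum for each sample is what the C-Score below aggregates across the test set.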
Besides the GED$_{E}$ score, we also
care about the consistency of the generated factor graph.
In other words, the best-matched pairs between the generated factor graphs and the ground truths, optimally, should be
identical across all samples.
We therefore introduce a second metric, named consistency score~(C-Score), related to GED$_{E}$.
C-Score is computed as the
average percentage of the
most frequently matched factor graphs.
The C-Score equals one
if each ground-truth graph is always matched
to the same factor graph.
A more detailed description of evaluation
protocol can be found in the supplemental materials.
\begin{table}
\caption{Performance on synthetic dataset. The four methods are evaluated in terms of
the classification and the disentanglement performance. Classification performance
is evaluated by Micro-F1 and disentanglement performance is measured by GED$_E$ and C-Score. For each method,
we run the experiments five times and report the mean and std. Random method generates four factor graphs.
GAT\_W/Dis represents GAT model with
the additional discriminator proposed in this paper.}
\label{tab:synthetic}
\centering
\scalebox{0.71}{
\begin{tabular}{lccccccc}
\toprule
& MLP & GCN & GAT & GAT\_W/Dis & DisenGCN & FactorGCN~(Ours) & Random\\
\midrule
Micro-F1 $\uparrow$ & 0.940 $\pm$ 0.002 & 0.947 $\pm$ 0.003 & 0.923 $\pm$ 0.009 & 0.928 $\pm$ 0.009 & 0.904$\pm$0.007 & \textbf{0.995 $\pm$ 0.004} & 0.250 $\pm$ 0.002 \\
GED$_{E}$ $\downarrow$ & - & - & 12.59 $\pm$ 3.00 & 12.35 $\pm$ 3.86
& \textbf{10.54$\pm$4.35} & \textbf{10.59 $\pm$ 4.37} & 32.09 $\pm$ 4.85 \\
C-Score $\uparrow$ & - & - & 0.288 $\pm$ 0.064 & 0.274 $\pm$ 0.065
& 0.367$\pm$0.026 & \textbf{0.532 $\pm$ 0.044} & 0.315 $\pm$ 0.002 \\
\bottomrule
\end{tabular}
}
\end{table}
\textbf{Evaluation on the synthetic dataset}. We first evaluate the disentanglement
performance on a synthetic dataset. The results are shown in Tab.~\ref{tab:synthetic}.
Although MLP and GCN achieve good classification performances, they are not capable of disentanglement.
GAT disentangles the input by using multi-head attention,
but the performance of the original task is degraded.
Our proposed method,
on the other hand, achieves a much better performance in terms of both disentanglement and
the original task.
We also evaluate the compared methods on the synthetic dataset with various numbers of
factor graphs, shown in Tab.~\ref{tab:synthetic_various}.
As the number of
latent factor graphs increases, the performance gain of FactorGCN becomes larger.
However, when the number of factor graphs becomes too large,
the task will be more challenging,
yielding lower performance gains.
\begin{table}
\caption{Classification performance on synthetic graphs with different numbers of factor graphs.
We change the total number of factor graphs and generate five synthetic datasets.
When the number of factor graphs increases, the performance gain of FactorGCN becomes larger.
However, as the number of factor graphs becomes too large, disentanglement
will be more challenging, yielding {lower performance gains}.}
\label{tab:synthetic_various}
\centering
\scalebox{0.92}{
\begin{tabular}{cccccc}
\toprule
\multirow{2}{*}{Method} & \multicolumn{5}{c}{Number of factor graphs} \\
\cmidrule(r){2-6}
& 2 & 3 & 4 & 5 & 6 \\
\midrule
MLP & 1.000 $\pm$ 0.000 & 0.985 $\pm$ 0.002 & 0.940 $\pm$ 0.002 & 0.866 $\pm$ 0.001 & 0.809 $\pm$ 0.002 \\
GCN & 1.000 $\pm$ 0.000 & 0.984 $\pm$ 0.000 & 0.947 $\pm$ 0.003 & 0.844 $\pm$ 0.002 & 0.765 $\pm$ 0.001 \\
GAT & 1.000 $\pm$ 0.000 & 0.975 $\pm$ 0.002 & 0.923 $\pm$ 0.009 & 0.845 $\pm$ 0.006 & 0.791 $\pm$ 0.006 \\
FactorGCN & 1.000 $\pm$ 0.000 & \textbf{1.000 $\pm$ 0.000} & \textbf{0.995 $\pm$ 0.004} & \textbf{0.893 $\pm$ 0.021} & \textbf{0.813 $\pm$ 0.049} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}
\caption{Performance on the ZINC dataset. FactorGCN outperforms the compared methods
by a large margin, with the
capability of disentanglement.
Note that our proposed method even achieves a similar performance as
GatedGCN$_E$, the state-of-the-art method on ZINC dataset that explicitly uses additional
edge information. }
\label{tab:zinc}
\centering
\scalebox{0.70}{
\begin{tabular}{c|cccccc|c}
\toprule
& MLP & GCN & GAT & MoNet & DisenGCN & FactorGCN~(Ours) & GatedGCN$_E$\\
\midrule
MAE $\downarrow$ & 0.667 $\pm$ 0.002 & 0.503 $\pm$ 0.005 & 0.479 $\pm$ 0.010 & 0.407 $\pm$ 0.007
& 0.538$\pm$0.005 & \textbf{0.366 $\pm$ 0.014} & \textit{0.363 $\pm$ 0.009}\\
GED$_{E}$ $\downarrow$ & - & - & 15.46 $\pm$ 6.06 & -
& 14.14$\pm$6.19 & \textbf{12.72 $\pm$ 5.34} &- \\
C-Score $\uparrow$ & - & - & 0.309 $\pm$ 0.013 & -
& 0.342$\pm$0.034 & \textbf{0.441 $\pm$ 0.012} &- \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}
\caption{Accuracy~(\%) on three graph classification datasets.
FactorGCN performances on par with or better
than the state-of-the-art
GCN models. We highlight the best DL-based methods and non DL-based methods
separately. FactorGCN uses the same hyper-parameters for all datasets.}
\label{tab:other}
\centering
\scalebox{0.85}{
\begin{tabular}{cccc|cccc}
\toprule
& WL subtree & PATCHYSAN & AWL & GCN & GraphSage & GIN & FactorGCN \\
\midrule
IMDB-B & 73.8 $\pm$ 3.9 & 71.0 $\pm$ 2.2 & \textbf{74.5 $\pm$ 5.9}& 74.0 $\pm$ 3.4 &72.3 $\pm$ 5.3 & \textbf{75.1 $\pm$ 5.1} & \textbf{75.3 $\pm$ 2.7} \\
COLLAB & \textbf{78.9 $\pm$ 1.9} & 72.6 $\pm$ 2.2 & 73.9 $\pm$ 1.9 & 79.0 $\pm$ 1.8 & 63.9 $\pm$ 7.7 & 80.2 $\pm$ 1.9 & \textbf{81.2 $\pm$ 1.4} \\
MUTAG & 90.4 $\pm$ 5.7 & \textbf{92.6 $\pm$ 4.2} & 87.9 $\pm$ 9.8 & 85.6 $\pm$ 5.8 & 77.7 $\pm$ 1.5 & \textbf{89.4 $\pm$ 5.6} & \textbf{89.9 $\pm$ 6.5}\\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[ht]
\caption{Accuracy~(\%) on the Pattern dataset
for node-classification task.
FactorGCN achieves the best performance,
showing its ability to serve as a general GCN framework.}
\label{tab:pattern}
\centering
\scalebox{0.86}{
\begin{tabular}{ccccccc}
\toprule
GCN & GatedGCN & GIN & MoNet & DisenGCN & IPDGN & FactorGCN \\
\midrule
63.88 $\pm$ 0.07 & 84.48 $\pm$ 0.12 & 85.59 $\pm$ 0.01
& 85.48 $\pm$ 0.04 & 75.01 $\pm$ 0.15 & 78.70 $\pm$ 0.11 & \textbf{86.57 $\pm$ 0.02} \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=0.99\linewidth]{Figs/sens_1x2.pdf}
\caption{The influence of the balanced weight $\lambda$ and
the number of factor graphs.}
\label{fig:sens}
\end{figure}
\textbf{Evaluation on the ZINC dataset}.
For this dataset, the type information of edges is
hidden during the training process,
and serves as the ground truth to evaluate the
disentanglement performance. Tab.~\ref{tab:zinc} shows the results. The proposed method
achieves the best performance on both the disentanglement
and the downstream task.
We also show the state-of-the-art method GatedGCN$_E$
on this dataset on the right side of Tab.~\ref{tab:zinc}, which utilizes
the type information of edges during the training process.
Our proposed method, without any additional edge information,
achieves results comparable to those of GatedGCN$_E$,
which needs the bond information of edges during training.
\textbf{Evaluation on more datasets}.
To provide a thorough understanding of the proposed method,
we also carry out evaluations on three widely
used graph classification datasets and one node classification dataset
to see the performances of FactorGCN as a general GCN framework.
The same 10-fold evaluation protocol
as~\citep{xu2018powerful} is adopted.
Since there are no ground truth factor graphs,
we only report the accuracy, shown in
Tab.~\ref{tab:other} and Tab.~\ref{tab:pattern}.
Our method achieves consistently the best performance,
showing the potential of the FactorGCN
as a general GCN framework, even putting aside its
disentangling capability.
More details about the evaluation protocol,
the setup of our method, and the statistic information about these datasets
can be found in the supplemental materials.
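For concreteness, the 10-fold protocol can be sketched in plain Python. This is only an illustrative sketch under our own assumptions (a simple shuffled split and mean $\pm$ standard-deviation reporting); it mirrors the protocol in spirit and is not the exact evaluation code.

```python
import random
import statistics

def ten_fold_splits(n_samples, seed=0):
    """Partition sample indices into 10 disjoint, near-equal test folds."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    return [idx[k::10] for k in range(10)]

def report(accuracies):
    """Mean +/- sample standard deviation, the format quoted in the tables."""
    return statistics.mean(accuracies), statistics.stdev(accuracies)

folds = ten_fold_splits(1000)
# every sample lands in exactly one test fold
assert sorted(i for fold in folds for i in fold) == list(range(1000))
```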
\subsection{Ablation and sensitivity analysis}
We show in Fig.~\ref{fig:sens} the ablation study and sensitivity analysis
of the proposed method.
When varying $\lambda$, the number of factors
is set to be eight;
when varying the number of factors,
$\lambda$ is set to be 0.2.
As can be seen from the left figure,
the performance of both the disentanglement and the
downstream task will degrade without the discriminator.
The right figure shows the relations between the performance
and the number of factor graphs we used in FactorGCN.
Setting the number of factor graphs
to be slightly larger than that of the ground truth,
in practice, leads to a better performance.
\section{Conclusion}
We propose a novel GCN framework, termed as FactorGCN,
which achieves graph convolution through
graph-level disentangling. Given an input graph, FactorGCN
decomposes it into several interpretable factor graphs, each of which denotes underlying interconnections
between entities, and then carries out topology-aware convolutions on each such factor graph to produce the final node features. The node features, derived under the explicit disentangling, are therefore block-wise explainable
and beneficial to the downstream tasks.
Specifically, FactorGCN enables multi-relation
disentangling, allowing information propagation
between two nodes to take place
in disjoint spaces.
We also introduce two new metrics to
measure the graph disentanglement
performance quantitatively.
FactorGCN outperforms other methods
on both the disentanglement and the downstream tasks, indicating
the proposed method is ready to serve as a general GCN framework
with the capability of graph-level disentanglement.
\section*{Acknowledgments}
This work is supported by the startup funding of
Stevens Institute of Technology.
\section*{Broader Impact}
In this work we introduce a GCN framework,
termed as FactorGCN,
that explicitly accounts for disentanglement.
FactorGCN is applicable to various scenarios,
both technical and social.
For conventional graph-related
tasks, like node classification of the social network
and graph classification of the molecular graph,
our proposed method can serve as a general GCN
framework.
For disentangling tasks, our method
generates factor graphs that reveal
the latent relations among entities,
and facilitate the
further decision making process like
recommendation.
Furthermore, given sufficient data,
FactorGCN can be used as a tool to
analyze social issues, such as discovering
the reasons for the rapid spread of
an epidemic disease in some areas.
Like all learning-based methods,
FactorGCN is not free of errors.
If the produced disentangled factor graphs
are incorrect, for example,
the subsequent inference and prediction results
will be downgraded, possibly yielding
undesirable bias.
\newpage
\small
\bibliographystyle{unsrtnat}
\section{Introduction}
Non-equilibrium systems are ubiquitous in nature. With no comprehensive
framework for such systems in general, the understanding of non-equilibrium
statistical mechanics is recognized as one of the major challenges \cite{CMMP10}. To make progress toward finding such a framework, it is reasonable
to study simplified models, in order to gain some insight into this type
of complex systems. One such model is the totally asymmetric simple exclusion
process (TASEP). On the one hand, this model is simple enough to be amenable
to analytic methods, so that many exact results are known. At the same time,
it is applicable to a wide range of biological and
physical systems, e.g. protein production \cite{MacDonald68, MacDonald69,
Shaw03,Chou03}, traffic flow \cite{Chowdhury00, Popkov01}, and surface growth \cite{Kardar86, Wolf90}.
The simplest version of the TASEP consists of a one-dimensional lattice with
particles moving unidirectionally from one site to the next. Particles may
move only if the adjacent site is empty. Two types of boundary conditions are
typically studied, periodic and open. With periodic boundary conditions, the
stationary distribution is trivial \cite{Spitzer70}, though its dynamics
differ from that of ordinary diffusion \cite{DeMasi85, Kutner85, Dhar87,
Majumdar91, Gwa92, Derrida93b, Kim95, Golinelli05}. For open
boundary conditions, three distinct phases emerge that depend on the entry
and exit rates \cite{Krug91}: a low density (LD) phase with the lattice
less than half filled, a high density (HD) phase with more than half of the
lattice filled, and a maximal current (MC) phase where the current of
particles through the lattice is a maximum. If the entry and exit rates are
the same, then a shock forms between a LD and HD region that performs a
random walk on the lattice. Because of the presence of a shock, it is often
referred to as the shock phase (SP). The exact solution of the steady-state
distribution is non-trivial and was found only two decades ago \cite{Derrida92, Derrida93, Schutz93}. Not surprisingly, its dynamics is more
complex \cite{Pierobon05, Dudzinski00, Nagy02, Takesue03, deGier06, Gupta07}. For a recent review on these aspects of the TASEP, as well as its applications to other processes of biological transport, see \cite{Chou11}.
The TASEP with open boundary conditions has been used to study the
production of proteins during translation in a cell \cite{MacDonald68,
MacDonald69, Shaw03}. In this process, ribosomes attach at one end of the
messenger RNA (mRNA) strand and move unidirectionally to the other end. At
the other end, the ribosome detaches from the strand and can be used again
by either the same mRNA or another one. To build more realistic models for
protein synthesis, modifications to the simplest version of the TASEP have
been introduced, such as having large particles \cite{Chou03,Dong07},
inhomogeneous hopping rates \cite{Chou04,Dong07b}, and ribosome
\textquotedblleft recycling\textquotedblright\ \cite{Chou03b, Adams08,
Cook09}. In this paper, we expand on our previous work on competition
between multiple TASEPs \cite{Cook09b}, modeling the simultaneous translation
of multiple genes in a cell with a limited number of ribosomes. Unlike
earlier studies, we consider another important aspect of synthesis of
proteins in a cell, i.e., the presence of various regulatory mechanisms
which control the rates of ribosome binding to different proteins. Thus, we
study TASEP's with {\em different} entry and exit rates. Though we are not
aware of any similar mechanism for termination, we consider different exit
rates also, simply as part of a systematic investigation. With such a large
parameter space to explore, we restrict ourselves to only two TASEPs here,
in search for novel and (possibly) universal properties that could be
applicable for mRNA competition in a real cell.
This paper is organized as follows: In the next section, we define our model. In section \ref{Section3}, we present our simulation results. We give some theoretical considerations in section \ref{Section4}. Finally, we give a summary and outlook in section \ref{Section5}.
\section{Model specifications}
In our previous study \cite{Cook09b}, we model the competition between mRNAs
by coupling two or more open TASEPs to a finite pool of $N_{p}$ particles
and let the entry rates depend on this $N_{p}$. Particles exiting each
TASEP join this pool and are \textquotedblleft recycled\textquotedblright\
for entry into any of the other TASEPs. Thus, the total number of particles
$N_{tot}$ is conserved. While on any lattice, the particles move
uni-directionally from one side to the other as in the ordinary TASEP. All
internal hopping rates are set to unity.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{model.eps}
\caption{Our current model of connecting two TASEPs to a finite pool of particles. The large arrows indicate how particles enter and leave the pool.}
\label{model}
\end{center}
\end{figure}
Our current model (shown in figure \ref{model}) differs from \cite{Cook09b}: Here, we relax the constraint that the {\em intrinsic }(i.e.,
limiting) entry rates of the TASEPs are identical. Thus, we define $\alpha_{1,2}$ as the intrinsic rates for our two-TASEP system, applicable when the
supply of particles is very large. For simplicity, let us assume the
crossover function ($f$) to be the same, so that the {\em effective }entry
rates are given by
\begin{eqnarray}
\alpha _{eff,1}& =\alpha _{1}f(N_{p}) \label{a-eff1} \\
\alpha _{eff,2}& =\alpha _{2}f(N_{p}) \label{a-eff2}
\end{eqnarray}
As in \cite{Adams08, Cook09, Cook09b}, we will use
\begin{equation}
f(N_{p})=\tanh \left( \frac{N_{p}}{N^{\ast }}\right) \label{f-def}
\end{equation}
(where $N^{\ast }$ is a scaling parameter), so that $f\left( 0\right) =0$
and $f\rightarrow 1$ as $N_{p}\rightarrow \infty $. Clearly, it is
reasonable to use the labels \textquotedblleft faster\textquotedblright
/\textquotedblleft slower\textquotedblright\ TASEP for the one with
larger/smaller $\alpha $. We also consider different exit
rates $\beta _{1,2}$, even though we are not aware of biological systems
which exhibit such differences.
In our Monte Carlo simulations, we first consider the case of two TASEPs of
lengths $L_{1}$ and $L_{2}$ connected to a single pool of particles. To
represent the pool, we have a \textquotedblleft virtual\textquotedblright\
site, with unlimited occupation (so that we have $L_{1}+L_{2}+1$ sites in
total). Since this site is connected to both TASEPs, there are actually
$L_{1}+L_{2}+2$ \textquotedblleft bonds\textquotedblright\ connecting the
sites. The simulations are performed as follows. In an update attempt, we
randomly choose one bond and update the contents of the sites according to
the usual rules: A hole-particle pair within a TASEP is left unchanged,
while a particle-hole pair is always changed to a hole-particle pair. If a
pool-TASEP bond is chosen and the entry site is empty, then a particle is
moved into it with probability $\alpha _{eff,1}$ or $\alpha _{eff,2}$.
Finally, for the TASEP-pool bond, a particle in the last site is moved into
the pool with probability $\beta _{1,2}$. One Monte Carlo step (MCS) is
defined as $L_{1}+L_{2}+2$ attempts.
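The update scheme above can be summarized in a short random-sequential Monte Carlo sketch. The bond indexing and the small lattice sizes below are our own illustrative choices, not the production code used for the figures.

```python
import random
from math import tanh

def run(L1, L2, a1, b1, a2, b2, Ntot, Nstar, mcs, seed=1):
    """Random-sequential update of two TASEPs coupled to one pool.

    One MCS = L1 + L2 + 2 bond-update attempts; each TASEP has an
    entry bond, L - 1 internal bonds, and an exit bond."""
    rng = random.Random(seed)
    lattices = [[0] * L1, [0] * L2]
    alpha, beta = (a1, a2), (b1, b2)
    Np = Ntot                          # all particles start in the pool
    bonds = L1 + L2 + 2
    for _ in range(mcs * bonds):
        t = rng.randrange(bonds)
        k, pos = (0, t) if t <= L1 else (1, t - L1 - 1)
        lat = lattices[k]
        if pos == 0:                   # pool -> entry site
            if lat[0] == 0 and Np > 0 and \
               rng.random() < alpha[k] * tanh(Np / Nstar):
                lat[0] = 1
                Np -= 1
        elif pos == len(lat):          # exit site -> pool
            if lat[-1] == 1 and rng.random() < beta[k]:
                lat[-1] = 0
                Np += 1
        else:                          # internal hop: pos-1 -> pos
            if lat[pos - 1] == 1 and lat[pos] == 0:
                lat[pos - 1] = 0
                lat[pos] = 1
    return lattices, Np
```

Measuring the current then amounts to counting, e.g., exit moves per MCS; particle conservation, $N_1+N_2+N_p=N_{tot}$, provides a basic sanity check on the implementation.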
Starting with $N_{tot}$ particles in the pool (none on the TASEPs), we
allow the system to reach steady-state, which typically takes 100k MCS. For
the next 1M MCS, we record the density profile ($\rho \left( x\right) $) for
each TASEP at every 100 MCS. From these, we compute the overall densities
($\rho $), for a total of 10k data points. We also measure the average
currents ($J$), by measuring (for example) the total number of particles
which exit each TASEP over the run and dividing that by $10^{6}$. As in the
earlier study, we are interested in how these quantities are affected by
varying $N_{tot}$. The profiles obviously contain much more detailed
information. Thus, in this first stage, we will mostly report the behavior
of the four functions $\rho _{1,2}\left( N_{tot}\right) $ and $J_{1,2}\left(
N_{tot}\right) $.
Our model has a total of eight parameters: $L_{1}$, $L_{2}$, $\alpha _{1}$,
$\alpha _{2}$, $\beta _{1}$, $\beta _{2}$, $N_{tot}$, and $N^{\ast }$. To reduce the
number of parameters, we fix $N^{\ast }=1000$. $N^{\ast }$ controls the
strength of the feedback effect for both TASEPs; however, we are focusing on
the effects of having different entry and exit rates, so we will not explore
the effects of $N^{\ast }$ in this study. Since we have different $\alpha$'s and $\beta$'s, each TASEP can be in a different phase (LD, HD, MC, or
SP) when the pool size becomes large. Thus, 16 different combinations are
possible. From our experience \cite{Adams08,Cook09b}, the most interesting
phenomena occur in the combination HD-HD, the results of which will be
presented next.
\section{Simulation Results}
\label{Section3}
\subsection{HD-HD}
From the earlier study \cite{Adams08}, the overall density of a constrained
HD-TASEP displays three regimes, as $N_{tot}$ is increased: an LD dominated
one, a ``crossover regime'', and one controlled by HD. Respectively, these
are characterized by $\alpha _{eff}\left( N_p\right) <\beta $, $\alpha
_{eff}\left( N_p\right) =\beta $, and $\alpha _{eff}\left( N_p\right) >\beta
$. In the crossover regime, $N_p$ remains fixed, while all changes in
$N_{tot}$ are absorbed by the lattice. Thus, $\rho $ increases {\em linearly},
from the LD value of $\beta $ to the HD value of $1-\beta $. The threshold
values of $N_{tot}$ are given by $\alpha _{eff}\left( N_{tot}-\beta L\right)
=\beta $ and $\alpha _{eff}\left( N_{tot}-\left( 1-\beta \right) L\right)
=\beta $. These characteristics are again present when two TASEPs compete
for the pool. The novel features here are the following. If the two TASEPs
make their crossovers at entirely different points, then all changes in
$N_{tot}$ are absorbed by whichever is in the crossover regime, so that
activity in both the pool and its competitor is completely interrupted. In
Figure \ref{HD-HD-density-all}, we illustrate this phenomenon with the case
of $L_1=1000$, $L_2=1000$, $\alpha _1=0.8$, $\beta _1=0.2$, $\alpha _2=0.6$,
and $\beta _2=0.4$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{density-1L1000-2L1000-Ns1000-1A80-1B20-2A60-2B40.eps}
\caption{Two TASEPs of equal lengths and different rates with $\alpha$'s and $\beta$'s in the HD phase. The open circles and squares are the results from the domain wall theory presented in section \ref{DW}.}
\label{HD-HD-density-all}
\end{center}
\end{figure}
Note first that the two TASEPs fill at different rates
at low $N_{tot}$. This difference is a simple consequence of $\alpha
_{eff,1}\simeq \alpha _1N_p/N^{*}>\alpha _{eff,2}\simeq \alpha _2N_p/N^{*}$.
Next, from $N_{tot}\thicksim 600$ to $\thicksim 1200$, the faster TASEP
makes its crossover while the numbers in the pool and the slower TASEP
remain constant. Thereafter, the slower TASEP continues in its LD regime
and, lastly, makes its crossover in, approximately, the interval $\left[
2000,2200\right] $. We emphasize that, in the respective crossover regimes,
$\rho _1\in \left[ \beta _1,1-\beta _1\right] $ and $\rho _2\in \left[ \beta
_2,1-\beta _2\right] $.
To understand this effect, we examine the density profile. Even after the
faster TASEP reaches the HD state, its entry rate continues to increase.
This increase causes the decay of the tail near the entrance to change
as $N_{tot}$ increases, similar to changing $\alpha$ (with fixed $\beta$) in the unconstrained, ordinary TASEP \cite{Derrida92, Derrida93, Schutz93}. As the slower TASEP moves through a crossover
regime, the tail in the profile of the faster TASEP does not change. During
each crossover from LD to HD, the average number of particles in the pool
remains constant. Since each $\alpha _{eff}$ depends on $N_{p}$, the $\alpha
_{eff}$'s also remain constant as $N_{tot}$ increases. The extra particles
from the increase in $N_{tot}$ are added to the TASEP crossing the phase
boundary between the LD and HD phases, resulting in the formation of a
localized shock. A similar phenomenon is found in a single constrained TASEP
\cite{Adams08} and multiple TASEPs with the same $\alpha $ and $\beta $ \cite{Cook09}.
For both TASEPs to be in the crossover regime simultaneously, each $\alpha_{eff}$ must reach $\beta$ at the same $N_{tot}$ value. This condition is achieved when $\alpha_1/\beta_1=\alpha_2/\beta_2$. The slower TASEP's overall density increases linearly with $N_{tot}$ in the crossover regime, but the faster TASEP's density does not. Two examples are shown in figure \ref{HD-HD-crossover-density} for $L_1=L_2=1000$, \subref{HD-HD-crossover-density-small} $\alpha_1=0.8$, $\beta_1=0.2$, $\alpha_2=0.6$, $\beta_2=0.15$ and \subref{HD-HD-crossover-density-large} $\alpha_1=1.0$, $\beta_1=0.4$, $\alpha_2=0.5$, $\beta_2=0.2$.
\begin{figure}[htb]
\begin{center}
\subfigure[]{\includegraphics[width=0.4\textwidth]{density-1L1000-2L1000-Ns1000-1A80-1B20-2A60-2B15.eps}\label{HD-HD-crossover-density-small}}
\subfigure[]{\includegraphics[width=0.4\textwidth]{density-1L1000-2L1000-Ns1000-1A100-1B40-2A50-2B20.eps}\label{HD-HD-crossover-density-large}}
\caption{Overall densities and currents of two TASEPs of equal lengths entering the crossover regime simultaneously with \subref{HD-HD-crossover-density-small} $\alpha_1/\alpha_2=1.33$ and \subref{HD-HD-crossover-density-large} $\alpha_1/\alpha_2=2$.}
\label{HD-HD-crossover-density}
\end{center}
\end{figure}
The ratio of $\alpha_1/\alpha_2$ controls the formation of a plateau region for the faster TASEP with a density of $\rho=0.5$. In our previous study \cite{Cook09b}, similar plateau regions form when the lengths of the TASEPs differed.
Another way to visualize the difference of having both TASEPs in the crossover regime is by looking at the probability $P(N_1, N_2)$ to find $N_1$ particles on the first TASEP and $N_2$ particles on the second. When $\alpha_1/\beta_1\ne\alpha_2/\beta_2$ and in the crossover regime, the distribution is sharply peaked about the average $N_1$ and $N_2$; otherwise, it is spread across a range of particle occupation pairs whose sum is constant. These cases are shown in figure \ref{HD-HD-dist} for (a) $L_1=L_2=1000$, $\alpha_1=0.8$, $\beta_1=0.2$, $\alpha_2=0.6$, and $\beta_2=0.15$ with $N_{tot}=1250$ and $L_1=L_2=1000$, $\alpha_1=0.8$, $\beta_1=0.2$, $\alpha_2=0.6$, and $\beta_2=0.40$ with (b) $N_{tot}=900$ and (c) $N_{tot}=2100$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{dist-1L1000-2L1000-compare.eps}
\caption{Distributions of particle occupation for two TASEPs when (a) both TASEPs are in the crossover regime, (b) one TASEP in the crossover regime and the other in a LD state, and (c) one TASEP in the crossover regime and the other in a HD state.}
\label{HD-HD-dist}
\end{center}
\end{figure}
The increase of the spread of the distribution when both TASEPs are in the crossover regime comes from the additional degree of freedom that the second TASEP provides in keeping the average number of particles in the pool constant \cite{Cook09b}. It is important to note that the ranges of $N$ values are the same for both TASEPs and are governed by the exit rate of the faster TASEP, $N_{1,2}\in[\beta L, (1-\beta)L]$.
To further investigate this crossover regime, we turn to the density profile. Here, we find that the confinement of the shock between the LD and HD regions is controlled by the ratio of $\alpha_1/\alpha_2$. Figure \ref{HD-HD-crossover-profile} shows the density profiles for the same set of parameters shown in figure \ref{HD-HD-crossover-density} at \subref{HD-HD-crossover-profile-small} $N_{tot}=1250$ and \subref{HD-HD-crossover-profile-large} $N_{tot}=1400$.
\begin{figure}[htb]
\begin{center}
\subfigure[]{\includegraphics[width=0.4\textwidth]{profile-1L1000-2L1000-N1250-Ns1000-1A80-1B20-2A60-2B15.eps}\label{HD-HD-crossover-profile-small}}
\subfigure[]{\includegraphics[width=0.4\textwidth]{profile-1L1000-2L1000-N1400-Ns1000-1A100-1B40-2A50-2B20.eps}\label{HD-HD-crossover-profile-large}}
\caption{Density profiles of two TASEPs of equal lengths entering the crossover regime simultaneously for \subref{HD-HD-crossover-profile-small} $\alpha_1/\alpha_2=1.33$ with $N_{tot}=1250$ and \subref{HD-HD-crossover-profile-large} $\alpha_1/\alpha_2=2$ with $N_{tot}=1400$.}
\label{HD-HD-crossover-profile}
\end{center}
\end{figure}
For the simplest TASEP in the SP, the shock performs a random walk over the entire lattice, which results in a linear density profile \cite{Derrida92, Derrida93, Schutz93}. The linearly increasing regions in the profiles in figure \ref{HD-HD-crossover-profile} indicate the allowed portions of the lattice on which each shock performs a random walk. The flat regions (of LD or HD) are areas in which the shock does not travel.
Since the number of particles in the pool remains relatively constant in the crossover regime, excess particles are free to choose which lattice to occupy. Due to the constraint of $\alpha_1/\beta_1=\alpha_2/\beta_2$, the $\alpha_1/\alpha_2$ ratio correlates with the difference between the two shock heights (i.e., the difference between the densities of the LD and HD regions). The faster TASEP will always have a smaller shock height, which limits the range of the number of particles it can hold, $[\beta L, (1-\beta)L]$. The same particle limit applies to the slower TASEP, as shown in figure \ref{HD-HD-dist}. But due to its larger shock height, the shock is now confined to a smaller portion of the lattice than the faster TASEP's shock in order to have the same range of $N$ values (thereby keeping the pool size relatively constant). By decreasing the shock height in the faster TASEP, the range of particles it can hold decreases. Thus, the shock becomes confined over a smaller region on the slower TASEP, as seen in figure \ref{HD-HD-crossover-profile}. This effect was not seen in our previous study \cite{Cook09b}, since it is a result of having different entry and exit rates.
When both TASEPs enter the crossover regime at the same $N_{tot}$ and the lengths are unequal, we see a trend in the overall density similar to the case of equal rates in \cite{Cook09b}, shown in figure \ref{HD-HD-diffL-density} for $L_1=1000$, $L_2=100$, $\alpha_1=0.8$, $\beta_1=0.2$, $\alpha_2=0.6$, and $\beta_2=0.15$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{density-1L1000-2L100-Ns1000-1A80-1B20-2A60-2B15.eps}
\caption{Two TASEPs of unequal lengths with different $\alpha$'s and $\beta$'s entering the crossover regime at the same $N_{tot}$ value.}
\label{HD-HD-diffL-density}
\end{center}
\end{figure}
The smaller TASEP has a density of 0.5 for most of the crossover regime, quickly rising to this value from the LD state and from this value to the HD state. The density profile for this TASEP is linear, indicating a delocalized shock. Figure \ref{HD-HD-diffL-profile} shows this delocalization for $L_1=1000$, $L_2=100$, $\alpha_1=0.8$, $\beta_1=0.2$, $\alpha_2=0.6$, $\beta_2=0.15$, and $N_{tot}=800$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.5\textwidth]{profile-1L1000-2L100-N800-Ns1000-1A80-1B20-2A60-2B15.eps}
\caption{Density profiles for two TASEPs with different lengths.}
\label{HD-HD-diffL-profile}
\end{center}
\end{figure}
The larger TASEP has a localized shock during the crossover regime as seen in the density profile in figure \ref{HD-HD-diffL-profile}. Even when the rates are reversed, the smaller TASEP has a delocalized shock. We can conclude that, as long as the size of the smaller TASEP is less than the intrinsic width of the shock localization, the smaller TASEP will have a delocalized shock.
\subsection{HD-SP, HD-LD, and HD-MC}
The combination of having $\alpha$ and $\beta$ on one TASEP in a HD phase with $\alpha$ and $\beta$ on the other TASEP in another phase produces an effect on the density and current similar to having different ratios of $\alpha/\beta$ for each TASEP. Initially, both TASEPs are in the LD state when $N_{tot}$ is small. As we increase the number of particles in the system, the HD TASEP begins to crossover from the LD state to a HD one, while the other TASEP's density and current remain constant during this regime. After the HD TASEP enters its HD state, the other TASEP's density and current continue to increase until it reaches its final state. Examples of this effect are shown in figures \ref{HD-SP-density-equal}, \ref{HD-LD-density-equal}, and \ref{HD-MC-density-equal} for $L_1=L_2=1000$, $\alpha_1=0.7$, $\beta_1=0.3$ and $\alpha_2=\beta_2=0.3$, $\alpha_2=1-\beta_2=0.3$, $\alpha_2=\beta_2=0.7$, respectively.
\begin{figure}[htb]
\begin{center}
\subfigure[]{\includegraphics[width=0.4\textwidth]{density-1L1000-2L1000-Ns1000-1A70-1B30-2A30-2B30.eps}\label{HD-SP-density-equal}}
\subfigure[]{\includegraphics[width=0.4\textwidth]{density-1L1000-2L1000-Ns1000-1A30-1B70-2A70-2B30.eps}\label{HD-LD-density-equal}}
\subfigure[]{\includegraphics[width=0.4\textwidth]{density-1L1000-2L1000-Ns1000-1A70-1B30-2A70-2B70.eps}\label{HD-MC-density-equal}}
\caption{Two TASEPs with one in the HD state and the other in the \subref{HD-SP-density-equal} SP, \subref{HD-LD-density-equal} LD, \subref{HD-MC-density-equal} MC state.}
\label{HD-other}
\end{center}
\end{figure}
We see no new phenomena when we have different lengths for the TASEPs. Further, as figure \ref{HD-other} shows, an appropriately generalized domain wall (GDW) theory, which is presented in section \ref{DW}, is quite adequate in predicting the behavior of the overall density (and therefore the current as well).
Finally, we present the data for the HD-MC combination (figure \ref{HD-MC-density-equal}). The various regimes here are easy to understand qualitatively. Since $\alpha_1=\alpha_2$, the initial rise of the densities is the same. Thereafter, if there were no competition, the density of the second
TASEP would rise smoothly until $\alpha_{2,eff}$ reaches 0.5. But this rise
is interrupted by the first TASEP traversing the crossover regime, i.e., in the second region. In
the third region, it continues its increase and, in the last section, it remains in the MC phase. As the first TASEP has essentially dropped out of the competition, there is no ``kink'' in the transition between these regimes, of course (as better displayed by the current around $N_{tot}\sim2000$).
\subsection{Other phase combinations}
When the $\alpha$'s and $\beta$'s are such that neither TASEP will enter the HD phase, we find the density and current behaving in a manner similar to a single, constrained TASEP \cite{Adams08} as the pool size is increased. While the total number of particles needed to saturate the system is larger than the number needed for a single TASEP, we find no new features emerging in the overall density and current as a function of $N_{tot}$, even for different lengths. Some typical results are shown in the figures found in \ref{appendix}.
\section{Theoretical considerations}\label{Section4}
While we presented some qualitative analysis in the previous section, we now supplement those results with a more quantitative analysis for the various phase regimes.
\subsection{LD state}
Regardless of the entry and exit rates, both TASEPs are in a LD state when $N_{tot}$ is small when compared to the smallest lattice length. From the ordinary TASEP \cite{Derrida93, Schutz01, Evans07}, we know that the overall density is given by $\rho=\alpha$; and for a single TASEP with finite resources \cite{Adams08} it is equal to the average effective entry rate, $\rho=\bar{\alpha}_{eff}$. Extending these results for two TASEPs with unequal entry and exit rates, we have
\begin{eqnarray}
\rho_1&=\bar{\alpha}_{eff,1}=\alpha_1 f(N_{tot}-\rho_1 L_1-\rho_2 L_2)\\
\rho_2&=\bar{\alpha}_{eff,2}=\alpha_2 f(N_{tot}-\rho_1 L_1-\rho_2 L_2)
\end{eqnarray}
The $\alpha_{eff}$'s depend on both $\rho_1$ and $\rho_2$; therefore, a self-consistent solution is found using these two equations. For the $f$ chosen in this paper, the solution is found numerically. Once one of the TASEPs has left the LD state, we must modify our equations.
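For the $\tanh$ crossover function of equation \eref{f-def}, the self-consistent solution is easy to obtain numerically: since $\rho_2=(\alpha_2/\alpha_1)\rho_1$, the pair of equations collapses to a single monotone root-finding problem. A sketch (the parameter values in the test are illustrative):

```python
from math import tanh

def ld_densities(a1, a2, L1, L2, Ntot, Nstar, tol=1e-12):
    """Self-consistent LD densities of two TASEPs sharing one pool.

    Uses rho_2 = (a2/a1) * rho_1 to reduce the pair of equations to
    one equation, solved by bisection."""
    Leff = L1 + (a2 / a1) * L2

    def g(r1):                     # increasing in r1, so bisection is safe
        return r1 - a1 * tanh((Ntot - r1 * Leff) / Nstar)

    lo, hi = 0.0, a1               # g(0) < 0 <= g(a1) for Ntot > 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    r1 = 0.5 * (lo + hi)
    return r1, (a2 / a1) * r1
```

Bisection is preferred over naive fixed-point iteration here, since the iteration map can have slope larger than unity for long lattices.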
\subsection{MC state}
If one of the TASEPs enters the MC state, an increase in $\alpha_{eff}$ no longer has an effect on the density or current. The transition occurs when its $\alpha_{eff}=1/2$. As with the ordinary TASEP \cite{Derrida92, Derrida93, Schutz93}, the density $\rho=1/2$ and the current $J=1/4$. The $N_{tot}$ at which this TASEP reaches its final density (assuming $\rho_1$ is entering the MC state) is
\begin{equation}
N_{tot}=f^{-1}\left(\frac{1}{2\alpha_1}\right)+\frac{L_1}{2}+\rho_2 L_2\label{MC-Ntot}
\end{equation}
where $\rho_2$ is the density of the second TASEP. This density depends on its state,
\begin{equation}
\rho_2=\left\{
\begin{array}{cc}
\frac{\alpha_2}{2\alpha_1} & \rm{LD}\\
\frac{1}{2} & \rm{MC}\\
1-\beta_2 & \rm{HD}
\end{array}
\right. \label{MC-rho2}
\end{equation}
For the parameters shown in figures A\ref{MC-SP-density-equal} and A\ref{LD-MC-density-equal} (where the other TASEP is in the LD state), the MC state is reached at $N_{tot}\sim 1600$. In figure \ref{HD-MC-density-equal}, the MC state is reached at $N_{tot}\sim 2100$ with the other TASEP in the HD state. Both TASEPs are approaching the MC state as $N_{tot}$ increases in figure A\ref{MC-MC-density-equal}, where the first one reaches its final density at $N_{tot}\sim 1600$ and the second one at $N_{tot}\sim 2200$. These values agree with the values predicted by equations \eref{MC-Ntot} and \eref{MC-rho2}.
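Since $f^{-1}(x)=N^{\ast}\,\mathrm{artanh}(x)$ for the $f$ of equation \eref{f-def}, equation \eref{MC-Ntot} is straightforward to evaluate; the sketch below uses the HD-MC parameters of figure \ref{HD-MC-density-equal} purely as an illustration.

```python
from math import atanh

def ntot_mc(alpha_mc, L_mc, rho_other, L_other, Nstar=1000):
    """N_tot at which the TASEP with entry rate alpha_mc reaches the MC
    state (alpha_eff = 1/2); f^{-1}(x) = N* artanh(x) for f = tanh."""
    return Nstar * atanh(0.5 / alpha_mc) + L_mc / 2 + rho_other * L_other

# HD-MC case: the MC-bound TASEP has alpha = 0.7; the other TASEP is in
# its HD state with rho = 1 - beta = 0.7, and L1 = L2 = 1000.
print(round(ntot_mc(0.7, 1000, 0.7, 1000)))   # -> 2096, i.e. N_tot ~ 2100
```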
\subsection{HD crossover}
The HD TASEP enters the crossover regime as $N_{tot}$ increases when $\bar{\alpha}_{eff}=\beta=\rho$ and leaves when $\rho=1-\beta$ \cite{Adams08}. Taking the HD TASEP to be $\rho_1$, the beginning $N_{tot,1}$ value of the crossover is
\begin{equation}
N_{tot,1}=f^{-1}\left(\frac{\beta_1}{\alpha_1}\right)+\beta_1 L_1+\rho_2 L_2 \label{HD-Ntot1}
\end{equation}
where the value of $\rho_2$ depends on the state of the second TASEP. Two possibilities exist: the second TASEP is in either the LD state or HD state. Then $\rho_2$ is given by
\begin{equation}
\rho_2=\left\{
\begin{array}{cc}
\alpha_2\frac{\beta_1}{\alpha_1} & \rm{LD}\\
1-\beta_2 & \rm{HD}
\end{array}
\right. \label{HD-rho2}
\end{equation}
For the parameters shown in figures \ref{HD-SP-density-equal} and \ref{HD-LD-density-equal} with the second TASEP in the LD state, equations \eref{HD-Ntot1} and \eref{HD-rho2} give a value of $N_{tot,1}\simeq 887$, which agrees with the simulation results. Using the parameters for figure \ref{HD-MC-density-equal}, we obtain $N_{tot,1}\simeq 1058$, which also agrees with the data shown. We have similar agreement for figure \ref{HD-HD-density-all} with $N_{tot,1}\simeq 705$ and $N_{tot,1}\simeq 1905$ for each TASEP.
When leaving the crossover regime, the $N_{tot}$ value is given by
\begin{equation}
N_{tot,2}=f^{-1}\left(\frac{\beta_1}{\alpha_1}\right)+(1-\beta_1) L_1+\rho_2 L_2\label{HD-Ntot2}
\end{equation}
where $\rho_2$ is given above. Equations \eref{HD-Ntot2} and \eref{HD-rho2} give $N_{tot,2}\simeq 1286$ and $N_{tot,2}\simeq 1458$ for the parameters shown in figures \ref{HD-SP-density-equal} and \ref{HD-MC-density-equal}, respectively. For the parameters shown in figure \ref{HD-HD-density-all}, equations \eref{HD-Ntot2} and \eref{HD-rho2} result in $N_{tot,2}\simeq 1205$ and $N_{tot,2}\simeq 2204$ for each TASEP. All these values agree with the simulation results.
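The thresholds of equations \eref{HD-Ntot1} and \eref{HD-Ntot2} can be evaluated the same way; as an illustrative sketch, the HD-SP parameters ($\alpha_1=0.7$, $\beta_1=0.3$, with the second TASEP in LD) give back the quoted values up to rounding.

```python
from math import atanh

def hd_crossover(alpha1, beta1, rho2, L1, L2, Nstar=1000):
    """N_tot values at which the HD TASEP enters and leaves the
    crossover regime, eqs. (HD-Ntot1) and (HD-Ntot2)."""
    Np = Nstar * atanh(beta1 / alpha1)   # pool size pinned in the regime
    enter = Np + beta1 * L1 + rho2 * L2
    leave = Np + (1 - beta1) * L1 + rho2 * L2
    return enter, leave

# second TASEP in LD: rho_2 = alpha_2 * beta_1 / alpha_1
rho2 = 0.3 * 0.3 / 0.7
n1, n2 = hd_crossover(0.7, 0.3, rho2, 1000, 1000)
print(round(n1), round(n2))              # -> 887 1287
```

The quoted $N_{tot,1}\simeq 887$ and $N_{tot,2}\simeq 1286$ correspond to these values within a fraction of a particle.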
Determining the density of each TASEP during the crossover regime is simple when only one of the TASEPs is in this regime. The density of the one in the crossover regime rises linearly with $N_{tot}$, similar to a single constrained TASEP \cite{Adams08}, while the other density remains constant. Taking $\rho_1$ to be crossing over, we have
\begin{equation}
\rho_1 L_1=N_{tot}-f^{-1}\left(\frac{\beta_1}{\alpha_1}\right)-\rho_2 L_2
\end{equation}
where $\rho_2$ is in either the LD or HD state as before. However, this simple approach does not work if both TASEPs enter the crossover regime at the same time. To understand how the density varies with $N_{tot}$ in that situation and with the SP case, we turn to a domain wall approach.
\subsection{Domain wall theory}\label{DW}
The phenomenological domain wall theory has been successfully applied to the unconstrained TASEP \cite{Kolomeisky98, Belitsky02, Santen02} as well as ones with finite resources \cite{Cook09, Cook09b} to understand the steady-state results. The theory assumes the presence of a sharp domain wall, or shock, separating a low density region near the entrance of a TASEP and a high density region near the exit. The shock's movement on the lattice depends on the currents of the particles (holes) entering from the entrance (exit) and the wall height \cite{Santen02}.
For an ordinary TASEP with no feedback mechanism, the domain wall moves to the left and to the right with fixed rates that depend on $\alpha$ and $\beta$ \cite{Santen02}. We generalize this result to include the feedback effect of $\alpha_{eff}$; thus, the hopping rates become site dependent. While we cannot use the generalized domain wall (GDW) theory when either TASEP is in the MC state, we apply it to all other cases here. Due to the connection between the shock positions $k_1$, $k_2$, and $N_p$,
\begin{equation}
N_p=N_{tot}-(1-\beta_1)(L_1-k_1)-\alpha_1 f(N_p) k_1-(1-\beta_2)(L_2-k_2)-\alpha_2 f(N_p) k_2
\end{equation}
the function $f(N_p)$ can be rewritten as $f(k_1,k_2)$, which is then determined by solving this equation self-consistently. As the values of $k_1$ and $k_2$ (and consequently $f$) change, the current of incoming particles $\alpha_{eff}(1-\alpha_{eff})$ and the domain wall height $1-\beta-\alpha_{eff}$ of each TASEP also change due to their dependence on $f$. This variation of $f$ leads to domain wall hopping rates that are site dependent \cite{Cook09, Cook09b}.
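The self-consistent determination of $f$ at fixed wall positions can be made concrete with a short numerical sketch. The saturating pool function $f(N_p)=\tanh(N_p/N^*)$ used below is an illustrative assumption (the equation above leaves $f$ generic), and the damped fixed-point iteration is one simple way to solve it:

```python
import math

def solve_f(k1, k2, N_tot, L1, L2, a1, b1, a2, b2, N_star, tol=1e-12):
    """Solve the self-consistent equation relating N_p to the wall
    positions (k1, k2).  The pool function f(N_p) = tanh(N_p / N_star)
    is an illustrative assumption; the text leaves f generic."""
    Np = 0.5 * N_tot                       # initial guess for the pool
    for _ in range(100000):
        f = math.tanh(Np / N_star)
        rhs = (N_tot - (1 - b1) * (L1 - k1) - a1 * f * k1
                     - (1 - b2) * (L2 - k2) - a2 * f * k2)
        Np_new = 0.5 * (Np + rhs)          # damping stabilizes the iteration
        if abs(Np_new - Np) < tol:
            return math.tanh(Np_new / N_star)
        Np = Np_new
    raise RuntimeError("fixed-point iteration did not converge")
```

Once $f(k_1,k_2)$ is known, the site-dependent hopping rates of the generalized domain wall follow directly.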
With two TASEPs connected to a single finite pool of particles, the probability $P$ of finding a set of domain wall positions $\{k_1,k_2\}$ at steady-state is given by
\begin{eqnarray}
\nonumber\fl 0=D^+_{k_1-1,k_2}P(k_1-1,k_2)+D^-_{k_1+1,k_2}P(k_1+1,k_2)+E^+_{k_1,k_2-1}P(k_1,k_2-1)\\
+E^-_{k_1,k_2+1}P(k_1,k_2+1)-\left(D^+_{k_1,k_2}+D^-_{k_1,k_2}+E^+_{k_1,k_2}+E^-_{k_1,k_2}\right)P(k_1,k_2)
\end{eqnarray}
where
\begin{eqnarray}
D^-_{k_1,k_2}&=\frac{\alpha_1 f(k_1,k_2)(1-\alpha_1 f(k_1,k_2))}{1-\beta_1-\alpha_1 f(k_1,k_2)}\\
D^+_{k_1,k_2}&=\frac{\beta_1(1-\beta_1)}{1-\beta_1-\alpha_1 f(k_1,k_2)}\\
E^-_{k_1,k_2}&=\frac{\alpha_2 f(k_1,k_2)(1-\alpha_2 f(k_1,k_2))}{1-\beta_2-\alpha_2 f(k_1,k_2)}\\
E^+_{k_1,k_2}&=\frac{\beta_2(1-\beta_2)}{1-\beta_2-\alpha_2 f(k_1,k_2)}
\end{eqnarray}
along with appropriate reflecting boundary conditions. We lose the detailed balance that was previously exploited to find an analytical solution \cite{Cook09b}. While it is possible to find $P(k_1,k_2)$ analytically, it is not very practical: this system of $(L_1+1)(L_2+1)$ equations (of which $(L_1+1)(L_2+1)-1$ are linearly independent) is time-consuming to solve even numerically, e.g.\ when finding the eigenvector corresponding to the zero eigenvalue. Instead, we build the probability distribution through Monte Carlo simulations of a random walker on a two-dimensional lattice with the hopping rates $D^+$, $D^-$, $E^+$, and $E^-$. These simulations give us the $P(k_1,k_2)$ needed to calculate the density profile and overall density. The profile for each TASEP is given by \cite{Cook09b}
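As a concrete illustration of this procedure, the following sketch implements the random walker with a simple continuous-time (Gillespie-type) scheme and reflecting boundaries. The interface is ours: `f_of` stands in for the self-consistent $f(k_1,k_2)$ (a constant is used in the test), and we build $E^+$ from $\beta_2(1-\beta_2)$ by symmetry with $D^+$:

```python
import random

def gdw_rates(k1, k2, f, a1, b1, a2, b2):
    """Hopping rates of the two-dimensional domain-wall walker at (k1, k2),
    following the D and E expressions above; f = f(k1, k2).
    E^+ is built from beta_2 by symmetry with D^+."""
    Dm = a1 * f * (1 - a1 * f) / (1 - b1 - a1 * f)
    Dp = b1 * (1 - b1) / (1 - b1 - a1 * f)
    Em = a2 * f * (1 - a2 * f) / (1 - b2 - a2 * f)
    Ep = b2 * (1 - b2) / (1 - b2 - a2 * f)
    return Dp, Dm, Ep, Em

def gdw_walk(L1, L2, f_of, a1, b1, a2, b2, steps=20000, seed=1):
    """Continuous-time (Gillespie) random walk with reflecting boundaries;
    returns the time-weighted histogram estimate of P(k1, k2)."""
    rng = random.Random(seed)
    k1, k2 = L1 // 2, L2 // 2
    P = [[0.0] * (L2 + 1) for _ in range(L1 + 1)]
    t_tot = 0.0
    for _ in range(steps):
        Dp, Dm, Ep, Em = gdw_rates(k1, k2, f_of(k1, k2), a1, b1, a2, b2)
        moves = [(+1, 0, Dp if k1 < L1 else 0.0),   # reflecting walls:
                 (-1, 0, Dm if k1 > 0 else 0.0),    # moves off the lattice
                 (0, +1, Ep if k2 < L2 else 0.0),   # are forbidden
                 (0, -1, Em if k2 > 0 else 0.0)]
        R = sum(w for _, _, w in moves)
        dt = rng.expovariate(R)                     # waiting time at (k1, k2)
        P[k1][k2] += dt
        t_tot += dt
        r = rng.uniform(0.0, R)
        for d1, d2, w in moves:
            if r < w:
                k1, k2 = k1 + d1, k2 + d2
                break
            r -= w
    return [[x / t_tot for x in row] for row in P]
```

The rates require a positive wall height $1-\beta-\alpha_{eff}$ on each lattice, consistent with the regimes in which the GDW theory applies.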
\begin{eqnarray}
\rho_1(x)&=\sum_{k_2=0}^{L_2}\left[\sum_{k_1=0}^x (1-\beta_1)P(k_1,k_2)+\sum_{k_1=x+1}^{L_1} \alpha_1f(k_1,k_2)P(k_1,k_2)\right]\\
\rho_2(x)&=\sum_{k_1=0}^{L_1}\left[\sum_{k_2=0}^x (1-\beta_2)P(k_1,k_2)+\sum_{k_2=x+1}^{L_2} \alpha_2f(k_1,k_2)P(k_1,k_2)\right]
\end{eqnarray}
The overall density is given by $\rho_i=\frac{1}{L_i}\sum_{x=1}^{L_i}\rho_i(x)$. The GDW theory results agree with the simulation results as shown in figures \ref{HD-HD-density-all}, \ref{HD-HD-crossover-density}, \ref{HD-HD-diffL-density}, \ref{HD-SP-density-equal}, \ref{HD-LD-density-equal}, A\ref{LD-LD-density-equal}, A\ref{LD-SP-density-equal}, A\ref{SP-SP-density-equal} for the overall density, and figures \ref{HD-HD-crossover-profile}, \ref{HD-HD-diffL-profile} for the density profile.
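The evaluation of the first profile from the sampled distribution $P(k_1,k_2)$ can be sketched as follows (the second profile is obtained analogously with $\alpha_2$, $\beta_2$ and the roles of $k_1$ and $k_2$ exchanged; `f_of` is again a stand-in for $f(k_1,k_2)$):

```python
def profile_1(P, x, a1, b1, f_of):
    """rho_1(x) from the domain-wall distribution P[k1][k2]: if the wall
    sits at or left of x (k1 <= x), site x lies in the high-density region
    with density 1 - beta_1; otherwise it carries the entrance density
    alpha_1 * f."""
    L1, L2 = len(P) - 1, len(P[0]) - 1
    rho = 0.0
    for k2 in range(L2 + 1):
        for k1 in range(L1 + 1):
            if k1 <= x:
                rho += (1 - b1) * P[k1][k2]
            else:
                rho += a1 * f_of(k1, k2) * P[k1][k2]
    return rho
```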
The domain wall picture helps explain the difference between the results in figures \ref{HD-HD-diffL-density} and \ref{HD-HD-crossover-density-large}. In figure \ref{HD-HD-diffL-density}, the delocalization of the shock over a range of $N_{tot}$ is due to the domain wall reflecting at the boundaries of the smaller TASEP. In figure \ref{HD-HD-crossover-density-large}, the difference in hopping rates allows the shock in the faster TASEP (larger rates) to move about the entire lattice more easily than the one in the slower TASEP (smaller rates). The shock in the slower TASEP is less likely to move away from its average position, leading to shock localization. Also, the domain wall height, which appears in the denominator of the hopping rates, plays a significant role. If the wall height is too large, then the difference between the rates for each TASEP decreases. The smaller difference allows the shock to wander over a large portion of the slower TASEP, as seen in figure \ref{HD-HD-crossover-profile-small}. Thus, shock localization can be induced by either different lengths or different rates.
Finally, for the HD-MC case shown in figure \ref{HD-MC-density-equal}, the GDW theory provides no theoretical prediction, as the second TASEP ends in a state with no domain walls (MC). Since the general aspects of this competition are qualitatively understood, designing a more sophisticated, quantitative theory seems unnecessary.
\section{Summary and Outlook}\label{Section5}
In this paper, we explored how competition for particles between two TASEPs affects the overall density, density profile, and current. Through simulation results and theoretical considerations, we have shown that new effects arise from having different entry and exit rates on the TASEPs. One of these effects is the localization of a shock on the lattice due to the difference in entry and exit rates. The appropriately generalized domain wall theory captured the shock localization phenomenon and reproduced the overall density and density profiles. However, more work still needs to be done if we want to make a connection to the translation process in a cell.
While our study has focused on only two TASEPs, more should be added: recalling our motivation of protein synthesis, many mRNAs compete for the same pool of ribosomes. The parameter space to explore increases with each additional TASEP, which could lead to new phenomena. Similarly, the dimension of the random walk set forth in the GDW theory increases with each TASEP that is added. A systematic study of multiple TASEPs would therefore be useful.
Beyond multiple TASEPs, other additions to the model should be made in order to better model the translation process during protein synthesis \cite{MacDonald68,MacDonald69,Shaw03,Chou03,Zia11}. First, the ribosome does not move to the next codon at the same rate for all codons, and the rate may depend on the concentration of amino acid transfer-RNAs (aa-tRNA) in the cell \cite{Dong96}. Thus, TASEPs with inhomogeneous, mRNA-sequence dependent, hopping rates must be taken into account \cite{Shaw03,Zia11}. Now that these rates depend on the aa-tRNA concentrations, it is reasonable to consider the competition for finite aa-tRNA resources. Notably, such an ambitious undertaking has been carried out recently \cite{Brackley10,Brackley11}, although the behavior in a real cell, with thousands of copies of thousands of different genes, will remain difficult for simulation studies in the conceivable future. Second, ribosomes cover more than one codon, typically 12 \cite{cell}. Therefore, the size of the particles should be larger as well \cite{MacDonald68,MacDonald69,Shaw03,Chou03,Dong07}. By combining these individual elements, we hope to gain a better understanding of the translation process during protein synthesis, as well as non-equilibrium systems in general.
After completing this work, we became aware of a similar study by P.\ Greulich \emph{et al.}\ \cite{Greulich12}. The main differences between our efforts are the following. 1) We explore the density profile in our Monte Carlo simulations and theoretical approaches. 2) We distinguish between systems in the SP with localized shocks and those with delocalized ones. 3) We explain our results from a domain wall perspective for both the overall density and density profile, instead of using a mean-field approach as in \cite{Greulich12}.
\section*{Acknowledgments}
We would like to thank Jiajia Dong and Beate Schmittmann for insightful discussions, Irina Mazilu and Tom Williams for a critical reading of the manuscript, and Martin Evans for calling our attention to ref. \cite{Greulich12}. This work was funded in part by the U.S. National Science Foundation through Grant No.\ DMR-1005417, and Washington and Lee University through the Lenfest Grant.
\newpage
Linear dielectric response is the underlying property that renders materials interesting for optoelectronic applications including solar cells, transistors, and displays, since excitations of electrons control fundamental mechanisms of optical absorption and emission\cite{RevModPhys.77.1173,PHILLIPS196655,PhysRevLett.63.1719,rohlfing2000electron}.
Most state-of-the-art devices rely on traditional inorganic semiconductors that are well-studied from both experimental and theoretical perspectives \cite{roundhill1999optoelectronic}.
Apart from these, systems such as organic crystals have also been reported to have great potential e.g.\ as solar cells, sensors, transistors, and others \cite{liu1999optoelectronic,zimmerman2011mechanism,wang2018possibility,bredas2012conjugated,tang1986two}.
Singlet-triplet fission, for instance, can provide a novel mechanism that may enable design of more efficient, flexible solar cells \cite{wang2018possibility,zimmerman2011mechanism}.
It is thus important to accurately model optical and excitonic properties for these materials, to make reliable predictions for potential applications and device design.
Predictive first-principles simulations based on density functional theory (DFT) \cite{PhysRev.140.A1133, PhysRev.136.B864} and Fermi's golden rule have proven to be important in understanding optical absorption of many semiconductor materials \cite{rohlfing2000electron,PhysRevLett.63.1719,PhysRevB.45.11749,PhysRevB.25.6310}.
However, neglecting the electron-hole Coulomb interaction that dominates excited electronic states renders traditional DFT-based techniques insufficient for describing optical absorption.
The independent-particle picture fails to provide accurate optical spectra and, in particular, does not provide information about excitonic effects that are critically important for applications, including organic solar cells \cite{dadsetani2015ab,Liu_2020,wang2016effect}.
To accurately model these, the \emph{screened} Coulomb interaction of excited electron-hole pairs needs to be considered and the Bethe-Salpeter equation (BSE) approach within many-body perturbation theory is commonly used \cite{rohlfing2000electron,bechstedt2016many}.
The solution of the BSE is a Green's function technique that allows excitonic effects to be included in the first-principles description, leading to an accurate and commonly used theoretical-spectroscopy route to describe optical excitations.
It has proven successful in many studies that predict optical and excitonic properties of bulk semiconductors \cite{onida2002electronic,schleife2009optical,schleife2012ab,kang2019pushing,PhysRevB.92.045209}.
Within the BSE approach, the accurate description of dielectric screening is an important aspect of the underlying physics that is key to accurately simulating optical spectra.
While the screening of the electron-hole interaction is spatially inhomogeneous and dynamical in principle, especially the dynamical effects are not well explored in practice.
This is because the theoretical description of dynamical screening is challenging, the computational cost is high, and such effects are believed to be small in many traditional bulk semiconductors.
Hence, most of the BSE implementations currently used adopt a static, frequency-independent approximation of dielectric screening \cite{PhysRevB.92.045209,DESLIPPE20121269,Sangalli_2019,Vorwerk_2019,BLUM20092175,Giantomassi2011}.
This approximation neglects the rearrangement of the electrons upon forming electron-hole pairs, i.e.\ the dynamical evolution of the screening \cite{bechstedt2016many}.
Whether \emph{electronic} screening dynamics can be neglected, however, is related to the relative ratio of the plasma frequency and the exciton binding energy of a material \cite{bechstedt2016many,rohlfing2000electron}.
In particular, electronic dynamical effects cannot be neglected when exciton-binding energies are comparable to the plasma frequency.
Examples of large exciton binding energies on the order of few hundreds of meV to a few eV include low-dimensional materials \cite{qiu2016screening,gao2016dynamical,zhu2015exciton,ugeda2014giant} and organic crystals \cite{hummer2005oligoacene,dadsetani2015ab,Liu_2020,wang2016effect}.
Consequently, there are indeed computational studies of organic crystals and doped 2D materials\cite{leng2015excitons,gao2016dynamical} that report large corrections on the order of a few hundred meV for the exciton binding energy due to electronic screening dynamics.
However, different approximations for incorporating dynamical screening effects, such as treating the dynamical screening as a first-order perturbation\cite{rohlfing2000electron} or using an effective static screening function\cite{gao2016dynamical}, have not previously been systematically compared to each other or to an exact solution of the dynamical BSE.
In this work, we discuss different approximations for incorporating dynamical screening into the solution of the BSE.
The challenges are at least two-fold:
Dynamical screening of the electron-hole interaction complicates the many-body perturbation theory framework, since the resulting BSE depends on two frequencies, preventing a closed-form equation for a single-frequency dependent polarization function \cite{bechstedt2016many,rohlfing2000electron,blase2018bethe}.
While this can be overcome using the Shindo approximation \cite{shindo1970effective,bechstedt2016many},
the resulting BSE eigenvalue problem parametrically depends on the frequency and requires sampling of many frequency points, significantly increasing the computational cost.
Rohlfing and Louie proposed a perturbative treatment and used it to examine dynamical screening in molecular SiH$_4$\cite{rohlfing2000electron}.
This approach is also used to examine dynamical effects in biological organic materials such as photoactive yellow protein and dicyanovinyl-substituted oligothiophenes \cite{ma2009excited,baumeier2012excited,ma2009modeling}.
While it provides an efficient way to incorporate dynamical screening for a small number of excitonic states e.g.\ around the absorption edge, it is not directly applicable for simulations of optical spectra, where a large number of excitonic states across a certain energy range is required.
In addition, this approach approximates the true excitonic wave function by the static one, which is only valid when dynamical effects are small.
In this work we follow Refs.\ \onlinecite{bechstedt2016many, hybertsen1986electron} in using the Shindo approximation and a plasmon pole model for the analytical integration of the frequency-dependent dielectric function
to obtain an expression for a single frequency-dependent dynamical BSE.
We then implement and compare different approximations to numerically solve the dynamical BSE, including a static model with effective screening \cite{gao2016dynamical}, the above-mentioned first-order perturbation approach \cite{rohlfing2000electron}, and exact diagonalization of the Hamiltonian.
By solving the dynamical BSE directly on a frequency grid, we were able to examine not only the effect of dynamical screening on exciton-binding energies, but also on optical spectra.
Our results show that while approximate treatments provide reasonable estimates of the magnitude of spectral shifts due to screening dynamics close to the absorption onset, small qualitative differences remain compared to the exact solution for excitonic states higher in energy.
In addition, we show that an \emph{effective} static screening, derived within the dynamical screening framework \cite{gao2016dynamical,PhysRevLett.127.067401}, requires only the lowest exciton-binding energy as input and still provides a good description of spectra.
It provides a computationally tractable alternative e.g.\ for studying complex or large numbers of materials.
In this work we use crystalline naphthalene as an example.
For this material, previous theoretical calculations report exciton binding energies of 0.9 eV, underestimating experimental measurements of 1.0\,--\,1.5 eV\cite{hummer2005oligoacene}.
Since this exciton binding energy is about 10\,\% of the plasma frequency, dynamical electronic screening can become important \cite{rohlfing2000electron, bechstedt2016many, gao2016dynamical,ma2009excited}.
Our work provides a quantitative understanding of the importance of dynamical electronic screening and provides guidance for appropriate regimes of using different approximations when studying optical and excitonic properties of more complicated materials.
\section{\label{sec:theory}Theoretical approach}
The theoretical description of excitonic effects in this work starts from the Bethe-Salpeter equation (BSE) for the macroscopic ($M$) optical polarization function $P^{M}$.
It follows from Hedin's equations for interacting electrons \cite{Hedin:1965} and describes the probability amplitude of the process of annihilating an electron at ($\mathbf{r}'_2,t'_2$) after creating one at ($\mathbf{r}'_1,t'_1$), together with annihilating a hole at ${\left(\mathbf{r_2}, t_2\right)}$ after creating one at ${\left(\mathbf{r_1}, t_1\right)}$.
In reciprocal and frequency space the full BSE reads\cite{bechstedt2016many}
\begin{widetext}
\begin{equation}
\label{eq:fullbse}
\begin{split}
P^M(\lambda_1\lambda'_1,\lambda_2\lambda'_2,z_nz_m)=&-i\hbar G_{\lambda_1}(z_n)G_{\lambda'_1}(z_n-z_m)\times\{\delta_{\lambda_1\lambda'_2}\delta_{\lambda'_1\lambda_2}\\
&+\frac{1}{-i\hbar\beta}\sum_{n'}\sum_{\lambda_3\lambda_4}
\left[-W^{\lambda_1\lambda_3}_{\lambda'_1\lambda_4}(z_n-z_{n'})+2\bar{v}^{\lambda_1\lambda'_1}_{\lambda_3\lambda_4}\right]\times P^M(\lambda_3\lambda_4,\lambda_2\lambda'_2,z_{n'}z_m).
\end{split}
\end{equation}
\end{widetext}
Here, $\beta=1/(k_\mathrm{B} T)$ where $k_\mathrm{B}$ is the Boltzmann constant and $T$ is temperature.
$z_n$ and $z_{n'}$ are Fermionic Matsubara frequencies, corresponding to the Fourier components of the time difference between $t_1$ and $t'_1$, $z_m$ is the Bosonic Matsubara frequency, corresponding to the Fourier component of the time difference between $t_1$ and $t_2$.
$\lambda$ are indices for all single-particle electronic states.
$G_{\lambda}(z)=1/(\hbar z-E_{\lambda})$ are single-particle Green's functions\cite{PhysRevB.29.5718, rohlfing2000electron} with $E_{\lambda}$ being the energy of the single-particle electron and hole state $\lambda$.
$W$ and $\bar{v}$ are the screened and the short-range bare Coulomb interaction of electrons and holes, respectively.
It can be seen that the polarization function in Eq.\ \eqref{eq:fullbse} depends on \emph{two} frequency arguments, $z_n$ and $z_m$.
To describe optical excitation due to absorption of a single photon, one needs to obtain the polarization function that depends on only \emph{one} frequency.
In principle, this can be obtained by summing Eq.\ \eqref{eq:fullbse} over $n$, \cite{bechstedt2016many}
\begin{equation}
\label{eq:polarization}
P^M(\lambda_1\lambda'_1,\lambda_2\lambda'_2,z_m)=\frac{1}{-i\hbar\beta}\sum_nP^M(\lambda_1\lambda'_1,\lambda_2\lambda'_2,z_nz_m).
\end{equation}
In practice, evaluating Eq.\ \eqref{eq:polarization} is difficult for two reasons:
First, to obtain each polarization function on the right-hand side, a complicated matrix problem needs to be solved that involves the $n'$-sum over the frequency-dependent screened Coulomb interaction in Eq.\ \eqref{eq:fullbse}.
Second, this procedure needs to be done many times for different $z_n$ in order to evaluate the sum in Eq.\ \eqref{eq:polarization}.
\subsection{Static Bethe-Salpeter equation}
The standard approach to avoiding these difficulties is to neglect the frequency dependence of the screening, i.e.\ assuming
\begin{equation}
\label{eq:static}
W(z_n-z_{n'})\equiv W(0),
\end{equation}
where $W(0)$ is the zero-frequency, static limit.
It can be shown that with this approximation one can insert Eq.\ \eqref{eq:fullbse} into Eq.\ \eqref{eq:polarization} and obtain a problem that only involves the polarization function that depends on a single frequency argument\cite{bechstedt2016many,PhysRevB.29.5718,rohlfing2000electron}
\begin{widetext}
\begin{equation}
\label{eq:bse_static}
P^M(\lambda_1\lambda'_1,\lambda_2\lambda'_2,z_m)=\frac{f(\lambda_1)-f(\lambda_1')}{E_{\lambda_1}-E_{\lambda_1'}-\hbar z_m}\times\{\delta_{\lambda_1\lambda'_2}\delta_{\lambda'_1\lambda_2}+\sum_{\lambda_3\lambda_4}
\left[-W^{\lambda_1\lambda_3}_{\lambda'_1\lambda_4}+2\bar{v}^{\lambda_1\lambda'_1}_{\lambda_3\lambda_4}\right]\times P^M(\lambda_3\lambda_4,\lambda_2\lambda'_2,z_m).
\end{equation}
\end{widetext}
The Green's functions in Eq.\ \eqref{eq:fullbse} result in the term $\frac{f(\lambda_1)-f(\lambda_1')}{E_{\lambda_1}-E_{\lambda_1'}-\hbar z_m}$, where $f(\lambda)$ is the occupation factor of state $\lambda$.
The crucial difference to Eq.\ \eqref{eq:fullbse} is that Eq.\ \eqref{eq:bse_static} contains only one frequency argument $z_m$ and the complicated sum over $n$ in Eq.\ \eqref{eq:polarization} is avoided.
Subsequently, Eq.\ \eqref{eq:bse_static} is transformed into a generalized eigenvalue problem\cite{fuchs2008efficient}.
From now on, we consider translational invariance, fully occupied valence states, and entirely empty conduction states, as is the case in semiconductor crystals at low temperature.
This turns $\lambda \rightarrow c\mathbf{k}, v\mathbf{k}$, and the standard BSE Hamiltonian is obtained\cite{rohlfing2000electron,bechstedt2016many,fuchs2008efficient} as
\begin{equation}
\label{eq:bse}
\begin{split}
\hat{H}_{vc\mathbf{k},v'c'\mathbf{k'}}=&(E_{c\mathbf{k}}-E_{v\mathbf{k}})\delta_{vv'}\delta_{cc'}\delta_{\mathbf{kk'}}\\&+2\bar{v}^{v'c'\mathbf{k}'}_{vc\mathbf{k}}-W^{v'c'\mathbf{k}'}_{vc\mathbf{k}},
\end{split}
\end{equation}
where $E_{c\mathbf{k}}$ and $E_{v\mathbf{k}}$ are the energies of the electronic state at point $\mathbf{k}$ in reciprocal space, and $c$ and $v$ represent conduction and valence band index, respectively.
The term $\bar{v}^{v'c'\mathbf{k}'}_{vc\mathbf{k}}$ describes the bare Coulomb interaction, which is a short-range exchange term, and $W^{v'c'\mathbf{k}'}_{vc\mathbf{k}}$ describes the screened electron-hole Coulomb interaction that in the static approximation is computed using the inverse $\mathbf{q}$-dependent dielectric matrix $\varepsilon^{-1}(\mathbf{q}, \omega=0)$.
Solving the eigenvalue problem for the Hamiltonian in Eq.\ \eqref{eq:bse} provides pair resonance energies $E_\Lambda$ and eigenfunctions $\mathbf{A}_\Lambda$ for excitonic states indexed by $\Lambda$.
These are used to compute the dielectric function that can be compared to experiment, and to analyze exciton binding energies.
\vspace{0.3 cm}
\subsection{\label{sec:dynbse}Dynamical Bethe-Salpeter equation}
To preserve the frequency dependence of $W$, an alternative way of obtaining a single-frequency dependent polarization function is through Shindo's approximation\cite{shindo1970effective}.
Instead of Eq.\ \eqref{eq:static}, this approximation expresses the two-frequency dependent polarization function $P^{M}(z_nz_m)$ in Eq.\ \eqref{eq:fullbse} directly in terms of the Green's function of non-interacting electrons and holes and the one-frequency dependent polarization function $P^{M}(z_m)$, see Eq. (S1) in the supplemental information\cite{supplement}.
This approximation leads to an expression for the single-frequency dependent polarization function that takes a very similar form as Eq.\ \eqref{eq:bse_static}, with the frequency-independent screened Coulomb interaction $W$ replaced by an effective, frequency-dependent $\tilde{W}(z_m)$ \cite{bechstedt2016many} (see Eq. (S3) in the supplemental information\cite{supplement}).
Considering only real frequencies involved in optical excitations ($z_m\rightarrow\omega$) allows the transformation into an eigenvalue problem for the frequency-dependent BSE Hamiltonian\cite{bechstedt2016many}
\begin{equation}
\label{eq:hamiltonian_dyn}
\begin{split}
\tilde{H}_{vc\mathbf{k},v'c'\mathbf{k'}}(\omega)=&(E_{c\mathbf{k}}-E_{v\mathbf{k}})\delta_{vv'}\delta_{cc'}\delta_{\mathbf{kk'}}\\&+2\bar{v}^{v'c'\mathbf{k}'}_{vc\mathbf{k}}-\tilde{W}^{v'c'\mathbf{k}'}_{vc\mathbf{k}}(\omega).
\end{split}
\end{equation}
Compared to Eq.\ \eqref{eq:bse}, the frequency dependence of Eq.\ \eqref{eq:hamiltonian_dyn} comes from the effective, frequency-dependent screened Coulomb interaction.
The effective frequency-dependent $\tilde{W}(\omega)$ takes the form\cite{rohlfing2000electron,bechstedt2016many}
\begin{widetext}
\begin{equation}
\label{equ:w}
\begin{split}
\tilde{W}^{v'c'\mathbf{k}}_{vc\mathbf{k}}(\omega)=&\frac{1}{V}
\sum_{\mathbf{GG'}}v\left(\sqrt{|\mathbf{q}+\mathbf{G}||\mathbf{q}+\mathbf{G'}|}\right)B_{cc',\mathbf{kk}'}(\mathbf{q}+\mathbf{G})B_{vv',\mathbf{kk}'}^{*}(\mathbf{q}+\mathbf{G'}) \times\\
&\times \Big\{\delta_{\mathbf{GG'}}+\int_0^{\infty}\frac{d\hbar \omega '}{\pi}\text{Im}\varepsilon^{-1}(\mathbf{q}+\mathbf{G},\mathbf{q}+\mathbf{G'},\omega ')\times \\
&\times\left[\frac{1}{\hbar\omega '+E_{c\mathbf{k}}-E_{v'\mathbf{k}'}-\hbar \omega}+\frac{1}{\hbar\omega '+E_{c'\mathbf{k}'}-E_{v\mathbf{k}}-\hbar \omega}\right]\Big\}\delta_{\mathbf{q},\mathbf{k-k'}},
\end{split}
\end{equation}
\end{widetext}
where $v$ is the Coulomb potential in reciprocal space.
The terms $B_{cc',\mathbf{kk'}}$ and $B_{vv',\mathbf{kk'}}$ are the Bloch integrals that account for the coupling between single particle Bloch wave functions\cite{bechstedt2016many}.
The frequency integral in the second term inside the curly brackets results from Shindo's approximation\cite{bechstedt2016many}, since the two-frequency dependence is replaced by a sum over single-frequency dependent polarization functions\cite{bechstedt2016many,shindo1970effective}.
We refer to the supplemental information Eq. (S3) for the single frequency dependent polarization function using Shindo's approximation\cite{supplement} and Ref.\ \onlinecite{bechstedt2016many} for the derivation of Eq.\ \eqref{equ:w}, as well as for complete details on how to obtain the eigenvalue problem from the BSE, which is identical in the static and dynamic case.
The form of the frequency dependent screened Coulomb potential Eq.\ \eqref{equ:w} has also been derived in multiple other references \cite{shindo1970effective,rohlfing2000electron,bechstedt1980theory}.
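Given a single-pole plasmon-pole representation of the loss function, the $\omega'$ integral in Eq.\ \eqref{equ:w} can be carried out analytically. The sketch below evaluates the resulting curly-bracket factor for one diagonal pole; the parametrization $\mathrm{Im}\,\varepsilon^{-1}(\omega')=-(\pi/2)(\Omega^2/\tilde\omega)\,\delta(\hbar\omega'-\hbar\tilde\omega)$, with pole strength `wp2` $=\Omega^2$ and position `wt` $=\tilde\omega$, is an illustrative assumption (it reproduces $\varepsilon^{-1}(0)=1-\Omega^2/\tilde\omega^2$ in the static limit):

```python
def ppm_dynamical_factor(w, dE1, dE2, wp2, wt):
    """Curly-bracket factor of the dynamical W for a single diagonal
    plasmon pole: the delta function collapses the frequency integral.
    dE1, dE2 are the two transition-energy differences and w the photon
    energy, all in the same energy units (hbar = 1)."""
    return 1.0 - 0.5 * wp2 / wt * (1.0 / (wt + dE1 - w)
                                   + 1.0 / (wt + dE2 - w))
```

In the limit $\hbar\omega\to 0$ and vanishing transition-energy differences, the factor reduces to the static inverse dielectric constant of the pole model, as expected.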
Shindo's approximation is argued to be a first-order approximation with respect to the dynamical nature of the screened potential\cite{bechstedt2016many,scharf2019dynamical,bornath1999two}.
Studying its validity quantitatively is very hard and has not been accomplished so far.
In this work, we analyze dynamical screening effects within the framework of Shindo's approximation, and do not consider any effects beyond.
With Eqs.\ \eqref{eq:hamiltonian_dyn} and \eqref{equ:w}, two aspects need to be addressed to solve the dynamical problem.
First, an eigenvalue problem needs to be solved similar to the static case, however, now with a frequency-dependent BSE Hamiltonian.
Second, one needs to evaluate the frequency-dependent screened Coulomb interaction, Eq.\ \eqref{equ:w}.
In the following, we discuss practical ways to address the first aspect in Sec.\ \ref{sec:dyn_eig}, and the second aspect in Sec.\ \ref{sec:dynscreen}.
\subsection{\label{sec:dyn_eig}Dynamical eigenvalue problem}
A dynamical eigenvalue problem needs to be solved for the frequency-dependent BSE Hamiltonian Eq.\ \eqref{eq:hamiltonian_dyn},
\begin{equation}
\label{eq:bsefreq}
\tilde{H}(\omega)\mathbf{A}_{\Lambda}(\omega)=E_{\Lambda}(\omega)\mathbf{A}_{\Lambda}(\omega),
\end{equation}
to obtain the frequency-dependent excitonic eigenvalues and eigenfunctions.
Different from the static case, where this set of solutions directly provides excitation energies, in the case of dynamical screening, one needs to find the solution of \cite{bechstedt2016many}
\begin{equation}
\label{eq:solution}
E_\Lambda(\omega)=\hbar\omega.
\end{equation}
Physically, this represents the condition where the energy of excitonic state $\Lambda$ equals the energy of the absorbed photon and it amounts to identifying the state $\Lambda$ that was computed using the corresponding photon frequency.
In the following, we discuss three different approaches to accomplish this:
Exact diagonalization of the dynamical Hamiltonian, a perturbative treatment of the problem \cite{rohlfing2000electron}, and an \emph{effective} static screening approximation\cite{gao2016dynamical,PhysRevLett.127.067401}.
In the exact diagonalization approach, the excitation energy $E_\Lambda(\omega)$ can be obtained by sampling the frequency $\omega$ on a grid and solving one eigenvalue problem at each frequency point.
Subsequently, Eq.\ \eqref{eq:solution} can be solved via interpolation of this data or using the nearest data point on the frequency grid that minimizes $E_{\Lambda}(\omega)-\hbar\omega$.
Compared to the static BSE, this increases the complexity by at least a factor of $N$, where $N$ is the number of frequency sampling points.
We note that this computational cost can be somewhat mitigated using efficient solvers of eigenvalue problems, such as the ChASE library \cite{chase22}, that we recently interfaced with our BSE code \cite{chase21}, demonstrating speedups on the order of a factor of five in solving the static BSE.
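A minimal sketch of the frequency-grid procedure reads as follows; `H_of_omega` is a caller-supplied model of the Hermitian Hamiltonian $\tilde H(\omega)$ (in practice the BSE Hamiltonian of Eq.\ \eqref{eq:hamiltonian_dyn}), with $\hbar$ absorbed into $\omega$:

```python
import numpy as np

def solve_dynamical_bse(H_of_omega, omega_grid, n_states):
    """Frequency-grid solution of E_Lambda(omega) = hbar*omega:
    diagonalize H(omega) at every grid point and locate the crossing of
    each excitonic branch with the photon line by linear interpolation
    between neighboring grid points."""
    branches = np.array([np.linalg.eigvalsh(H_of_omega(w))[:n_states]
                         for w in omega_grid])      # eigvalsh sorts ascending
    roots = []
    for s in range(n_states):
        g = branches[:, s] - omega_grid             # E_s(omega) - hbar*omega
        for i in range(len(omega_grid) - 1):
            if g[i] == 0.0 or g[i] * g[i + 1] < 0:  # sign change: a crossing
                w0, w1 = omega_grid[i], omega_grid[i + 1]
                roots.append(w0 - g[i] * (w1 - w0) / (g[i + 1] - g[i]))
                break
    return roots
```

The cost is one diagonalization per grid point, which is the factor-of-$N$ overhead discussed above.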
The perturbative approach to solving Eq.\ \eqref{eq:solution} was proposed by Rohlfing and Louie \cite{rohlfing2000electron}.
It treats the dynamical effect of the screened Coulomb potential as a first-order perturbation to the solutions of the static BSE.
The solutions $E^{\text{sta}}_{\Lambda}$ of the static eigenvalue problem for each excitonic state $\Lambda$ are used as the input frequency $\hbar\omega$ in Eq.\ \eqref{equ:w} to evaluate the dynamical screening function $\tilde{W}(\omega)$.
Next, the difference between the resulting approximated dynamical screening potential and the static screening potential $\tilde{W}(E^\mathrm{sta}_\Lambda)-W^{\text{sta}}$ is treated as a first-order perturbation, so that the solution for each state $\Lambda$ becomes
\begin{equation}
\label{eq:perturb}
E^{\text{dyn}}_{\Lambda}\approx E^\mathrm{sta}_{\Lambda}+\langle A_\Lambda|\tilde{W}(E^\mathrm{sta}_\Lambda)-W^\mathrm{sta}|A_{\Lambda}\rangle,
\end{equation}
where $|A_\Lambda\rangle$ are the eigenfunctions of the static BSE Hamiltonian Eq.\ \eqref{eq:bse}.
Validity of the perturbative treatment requires two conditions:
First, that $E^{\text{sta}}_\Lambda$ is reasonably close to the true solution such that evaluating $\tilde{W}(\omega)$ at $\hbar\omega=E^{\text{sta}}_\Lambda$ is close to the true dynamical screening function for each state $\Lambda$, and second, that the difference $\tilde{W}(E^\mathrm{sta}_\Lambda)-W^\mathrm{sta}$ is small, so that dynamical effects can be considered as a first-order perturbation.
Ref.\ \onlinecite{rohlfing2000electron} recommends iterating this procedure several times and reports quick convergence; indeed, we verified that the solution converges within two to three steps.
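Schematically, the iteration can be written as follows; the array interface and the toy model `W_dyn(E)`, which returns the matrix $\tilde W$ evaluated at $\hbar\omega=E$, are ours, and the code implements Eq.\ \eqref{eq:perturb} as written:

```python
import numpy as np

def perturbative_energies(H_sta, W_sta, W_dyn, n_iter=3):
    """First-order dynamical correction: diagonalize the static BSE
    Hamiltonian once, then shift each eigenvalue by the expectation value
    of W_dyn(E) - W_sta in the unchanged static eigenstate, updating the
    frequency argument for a few iterations."""
    E_sta, A = np.linalg.eigh(H_sta)
    E = E_sta.copy()
    for _ in range(n_iter):
        E = np.array([E_sta[L] + A[:, L] @ (W_dyn(E[L]) - W_sta) @ A[:, L]
                      for L in range(len(E))])
    return E_sta, E
```

Only the energies are updated; the excitonic wave functions remain those of the static problem.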
Instead of solving the entire problem on a frequency grid, this approach focuses on a few specific excitonic states and only updates the energy of those states based on the static solutions, leaving the excitonic wave functions unchanged.
This provides a fast route to solving the dynamical problem especially when only a few excitonic states, e.g.\ the lowest ones, are of interest.
However, its validity needs to be examined, in particular for systems where dynamical effects are significant, since in this case, the solutions obtained through the static approximation can differ significantly from the dynamic ones.
As can be seen in Eq.\ \eqref{eq:perturb}, the perturbative approach requires evaluating the screened interaction $\tilde{W}(E^\mathrm{sta}_\Lambda)$ for each state $\Lambda$ of interest.
This is not practical for simulations of spectra, where a large number of eigenstates $N_\Lambda$ is required.
In this work, we instead group the $E^\mathrm{sta}_\Lambda$ into energy intervals of 0.3 eV which allows us to reduce the number of times we need to evaluate the screening matrix $\tilde{W}(E^{\mathrm{sta}}_{\Lambda})$ from $N_{\Lambda}$ to the number of chosen frequency intervals.
For an eigenstate $\Lambda$ with eigenvalue $E^{\text{sta}}_{\Lambda}$ in a given interval, the dynamical screening function is approximated by its value at the lower end of the interval.
In Sec.\ \ref{sec:res} of this work, the number of screening potential evaluations needed to compute dynamical corrections to the energies and spectra is reduced from 10$^4$ to about 30.
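This grouping can be sketched as follows; the snippet is a schematic stand-in for the actual implementation, and the eigenvalues are hypothetical.

```python
import numpy as np

# Schematic sketch (not the production code): assign each static BSE
# eigenvalue to the lower edge of a 0.3 eV interval, so that the
# screening matrix W(E) is evaluated once per occupied interval
# rather than once per eigenstate.
def interval_edges(e_sta, width=0.3):
    """Lower interval edge assigned to each eigenvalue (in eV)."""
    return np.round(np.floor(np.asarray(e_sta) / width) * width, 9)

e_sta = np.array([3.05, 3.12, 3.31, 3.58, 4.02])  # hypothetical energies (eV)
edges = interval_edges(e_sta)
n_eval = np.unique(edges).size   # 3 screening evaluations instead of 5
```

In practice the reduction is far larger, since many eigenvalues cluster within each interval.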
In addition, we note that in practice even for static screening the full diagonalization is usually avoided, using e.g.\ a time propagation approach \cite{PhysRevLett.88.016402,PhysRevB.67.085307}.
This would be feasible in the context of this work only by using the effective static approach discussed in the following.
In this third approach, an effective static screening is adopted to obtain an approximate solution of Eq.\ \eqref{eq:solution}.
This bypasses the frequency-dependent eigenvalue problem entirely, but instead focuses on approximating Eq.\ \eqref{equ:w} by replacing the energy difference terms, $E_{c'\mathbf{k}'}-E_{v\mathbf{k}}-\hbar \omega$ and $E_{c\mathbf{k}}-E_{v'\mathbf{k}'}-\hbar \omega$ by an effective, constant exciton-binding energy, that is independent of the energies of the electronic states and $\hbar\omega$.
This reduces the dynamical screening problem to an effectively static problem since the two terms in the brackets of Eq.\ \eqref{equ:w} reduce to one single value, that can be chosen as the binding energy of the lowest excitonic state
\begin{equation}\label{eq:eb}
E_b=E_g-E^\mathrm{sta}_{\Lambda=0},
\end{equation}
where $E_g$ is the band gap without considering excitonic effects.
As a result, this approach replaces the dynamical screening function with an effective static screening function that takes the exciton binding energy of the material explicitly into account, and Eq.\ \eqref{equ:w} simplifies to\cite{gao2016dynamical}
\begin{widetext}
\begin{equation}
\label{equ:weff}
\begin{split}
\tilde{W}^{\text{eff}}_{cc',vv',\mathbf{kk}'}=\frac{1}{V}
&\sum_{\mathbf{GG'}}v\left(\sqrt{|\mathbf{q}+\mathbf{G}||\mathbf{q}+\mathbf{G'}|}\right)B_{cc',\mathbf{kk}'}(\mathbf{q}+\mathbf{G})B_{vv',\mathbf{kk}'}^{*}(\mathbf{q}+\mathbf{G'}) \\
&\times \left\{\delta_{\mathbf{GG'}}+\int_0^{\infty}\frac{d\hbar \omega '}{\pi}\text{Im}\varepsilon^{-1}(\mathbf{q}+\mathbf{G},\mathbf{q}+\mathbf{G'},\omega ')\frac{2}{\hbar\omega '+E_b}\right\}\delta_{\mathbf{q},\mathbf{k-k'}}.
\end{split}
\end{equation}
\end{widetext}
The resulting Eq.\ \eqref{equ:weff} no longer contains any frequency dependence, since $\omega'$ has been integrated out explicitly.
This approach is the cheapest among the three, as it is a modified version of the static approximation and it has been used to study effects of free-carrier screening\cite{gao2016dynamical} and the screening of lattice polarizability \cite{PhysRevLett.127.067401}.
In addition, among the three approaches we introduced, the effective static screening does not require the excitonic wavefunction, allowing us to take advantage of the time propagation approach\cite{PhysRevLett.88.016402,PhysRevB.67.085307} to avoid the diagonalization of the eigenvalue problem.
In Ref.\ \onlinecite{gao2016dynamical}, the authors argue that the process can be repeated several times to converge the solution.
However, we note that it needs to be tested whether the converged values will match the true solution of the frequency-dependent BSE.
\subsection{\label{sec:dynscreen}The dynamical screening function}
In order to proceed with solving the dynamical eigenvalue problem, Eq.\ \eqref{eq:bsefreq}, one needs to compute the \emph{frequency-dependent} screened Coulomb interaction, Eq.\ \eqref{equ:w}.
The major challenge lies in the frequency integral with respect to $\omega'$.
While it can be evaluated numerically, e.g.\ within the random-phase approximation \cite{ren2012random,onida2002electronic}, this comes with high computational cost, since one $\omega'$ integral needs to be evaluated explicitly for each $\omega$ and each combination of $cv\mathbf{k}$ and $c'v'\mathbf{k'}$, see Eq.\ \eqref{equ:w}.
The integral can be carried out explicitly if an analytical model function is assumed for the $\omega'$ dependence of the inverse dielectric matrix.
In this work, we pursue that route and use the generalized plasmon-pole approximation (PPA) from Hybertsen and Louie \cite{hybertsen1986electron,rohlfing2000electron,botti2013strong} to carry out the frequency integral.
This model expresses the frequency-dependent inverse dielectric matrix as a pole function of the form
\begin{widetext}
\begin{equation}
\label{eq:imppa}
\text{Im}\,\varepsilon^{-1}(\mathbf{q+G},\mathbf{q+G'},\omega)=A(\mathbf{q+G},\mathbf{q+G'})\times \{ \delta[\omega-\tilde{\omega}(\mathbf{q+G},\mathbf{q+G'})]-\delta[\omega+\tilde{\omega}(\mathbf{q+G},\mathbf{q+G'})] \}
\end{equation}
\begin{equation}
\label{eq:reppa}
\text{Re}\,\varepsilon^{-1}(\mathbf{q+G},\mathbf{q+G'},\omega)=1+\frac{\Omega^2(\mathbf{q+G},\mathbf{q+G'})}{\omega^2-\tilde{\omega}^2(\mathbf{q+G},\mathbf{q+G'})}.
\end{equation}
\end{widetext}
The three parameters $A(\mathbf{q+G,q+G'})$, $\tilde{\omega}(\mathbf{q+G,q+G'})$, and $\Omega(\mathbf{q+G,q+G'})$
are given by three additional constraints, i.e.\ the Kramers-Kronig relation, the $f$-sum rule, and the \emph{static} inverse dielectric matrix $\varepsilon^{-1}(\mathbf{q+G},\mathbf{q+G'},\omega=0)$\cite{hybertsen1986electron}.
To describe the wave-vector dependence of the \emph{static} inverse dielectric matrix we adopt the model from Bechstedt \emph{et al.}\cite{bechstedt1992efficient}, which considers only diagonal terms $\mathbf{G}=\mathbf{G'}$.
It interpolates between free-electron gas behavior at large $\mathbf{q}$ and Thomas-Fermi like behavior at small $\mathbf{q}$ \cite{bechstedt1992efficient,fuchs2008efficient,Roedl:2008}.
The approximation of neglecting local-field effects in the screening ($\mathbf{G=G'}$) is reasonable in typical semiconductors \cite{bechstedt1992efficient,schleife2009optical,schleife2011electronic}.
Whether it poses problems for studying dynamical screening effects, e.g.\ when excitons become more localized, remains worthwhile to explore in the future.
With the above constraints the three parameters of the plasmon-pole model follow as (see the supplemental information section B for details of the derivation\cite{supplement})
\begin{eqnarray}
\label{equ:omegalf}
\Omega(\mathbf{q+G})&=&\omega_p,\\
\label{equ:tomegalf}
\tilde{\omega}(\mathbf{q+G})&=&\omega_p[1-\varepsilon^{-1}(\mathbf{q+G},\omega=0)]^{-1/2},\\
\label{equ:alf}
A(\mathbf{q+G})&=&-\frac{\pi}{2}\omega_p[1-\varepsilon^{-1}(\mathbf{q+G},\omega=0)]^{1/2}.
\end{eqnarray}
Carrying out the frequency integral in Eq.\ \eqref{equ:w} then provides the dynamical screening potential
\begin{widetext}
\begin{equation}
\label{eq:wbloch}
\begin{split}
\tilde{W}_{cc',vv',\mathbf{kk}'}(\omega)=&\frac{1}{V} \sum_{\mathbf{G}}v(|\mathbf{q}+\mathbf{G}|)B_{cc',\mathbf{kk}'}(\mathbf{q+G})B_{vv',\mathbf{kk}'}^{*}(\mathbf{q+G}) \\
&\times \Bigg\{1-\frac{\hbar\omega_p}{2}\left[1-\varepsilon^{-1}(\mathbf{q+G},\omega=0)\right]^{1/2}\\
&\times \Bigg[\frac{1}{\hbar\omega_p(1-\varepsilon^{-1}(\mathbf{q+G},\omega=0))^{-1/2}+E_{c\mathbf{k}}-E_{v'\mathbf{k}'}-\hbar \omega}\\&+\frac{1}{\hbar\omega_p(1-\varepsilon^{-1}(\mathbf{q+G},\omega=0))^{-1/2}+E_{c'\mathbf{k}'}-E_{v\mathbf{k}}-\hbar \omega}\Bigg]\Bigg\}\delta_{\mathbf{q},\mathbf{k-k'}}.
\end{split}
\end{equation}
\end{widetext}
In the effective static screening approach the energy differences in the denominator are replaced by the exciton binding energy, see Eq.\ \eqref{eq:eb}, resulting in
\begin{widetext}
\begin{equation}
\label{eq:wblochstatic}
\begin{split}
\tilde{W}^{\text{eff}}_{cc',vv',\mathbf{kk}'}=&\frac{1}{V} \sum_{\mathbf{G}}v(|\mathbf{q}+\mathbf{G}|)B_{cc',\mathbf{kk}'}(\mathbf{q+G})B_{vv',\mathbf{kk}'}^{*}(\mathbf{q+G}) \\
&\times \Bigg\{1-\frac{\hbar\omega_p}{2}\left[1-\varepsilon^{-1}(\mathbf{q+G},\omega=0)\right]^{1/2}\\
&\times \Bigg[\frac{2}{\hbar\omega_p(1-\varepsilon^{-1}(\mathbf{q+G},\omega=0))^{-1/2}+E_b}\Bigg]\Bigg\}\delta_{\mathbf{q},\mathbf{k-k'}}.
\end{split}
\end{equation}
\end{widetext}
The denominators in Eqs.\ \eqref{eq:wbloch} and \eqref{eq:wblochstatic} significantly determine the nature of screening through the interplay between the plasma frequency $\omega_p$ as a characteristic frequency, and exciton binding, either expressed as the two energy differences $E_{c'\mathbf{k}'}-E_{v\mathbf{k}}-\hbar \omega$ and $E_{c\mathbf{k}}-E_{v'\mathbf{k}'}-\hbar \omega$ in Eq.\ \eqref{eq:wbloch} or $E_b$ in Eq.\ \eqref{eq:wblochstatic}.
The static limit corresponds to negligible exciton binding compared to the plasma frequency and can be obtained from Eqs.\ \eqref{eq:wbloch} and \eqref{eq:wblochstatic} by dropping these energy differences or $E_b$, respectively.
In this case, all terms in the curly brackets reduce to $\varepsilon^{-1}(\mathbf{q+G},\omega=0)$.
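This reduction can be verified numerically; the short sketch below (illustrative, single-$\mathbf{G}$ term, with the experimental dielectric constant and the plasma frequency used in this work) evaluates the curly bracket of Eq.\ \eqref{eq:wblochstatic} and confirms that it equals $\varepsilon^{-1}$ for $E_b\rightarrow 0$, while a finite $E_b$ yields a larger value, i.e.\ effectively weaker screening.

```python
import math

# Illustrative single-G evaluation of the curly bracket of the effective
# static screening expression; eps_inv is the static inverse dielectric
# function and hw_p = hbar*omega_p, both in eV where applicable.
def bracket(eps_inv, hw_p, e_b):
    a = math.sqrt(1.0 - eps_inv)        # (1 - eps^{-1})^{1/2}
    pole = hw_p / a                     # hbar*omega_p * (1 - eps^{-1})^{-1/2}
    return 1.0 - 0.5 * hw_p * a * 2.0 / (pole + e_b)

eps_inv, hw_p = 1.0 / 2.35, 17.9        # experimental eps_inf, computed omega_p
static = bracket(eps_inv, hw_p, 0.0)    # static limit: recovers eps_inv
dynamic = bracket(eps_inv, hw_p, 1.0)   # E_b = 1 eV: larger bracket value
```

The larger bracket value for finite $E_b$ corresponds to a screened interaction closer to the bare Coulomb potential, i.e.\ stronger excitonic effects.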
In bulk semiconductors, plasma frequencies are usually several eV to several tens of eV and exciton-binding energies are several tens to a few hundreds of meV, i.e.\ at least one order of magnitude smaller, illustrating the validity of the static approximation.
In many low-dimensional or organic semiconductors, however, the exciton binding energies are relatively large and can be on the order of 1 eV\cite{das2018electronic,PhysRevB.53.15909,hummer2005oligoacene}, rendering the validity of the static approximation questionable.
Furthermore, this illustrates, e.g.\ for the lowest bound excitonic state, that including electronic screening dynamics effectively reduces screening compared to the static approximation, leading to stronger excitonic effects.
This is because the denominator of Eq.\ \eqref{eq:wbloch} is larger than when $E_b$ is dropped in the static case.
Hence, dynamical screening is effectively weaker and in the static approximation screening is always overestimated.
Physically, this can be interpreted as an initially incomplete screening in the dynamic case, compared to an instantly formed screening in the static approximation \cite{bechstedt2016many}.
\section{\label{sec:comp}Computational methods}
In this work we compare the three different approaches to describe screening dynamics, i.e.\ exact diagonalization, perturbative treatment, and the effective static screening approach, for optical spectra and exciton binding energies of crystalline naphthalene.
This material is an organic crystal for which large exciton binding energies of 1.0\,--\,1.5 eV were reported from experiment \cite{hummer2005oligoacene}.
We implemented the three different approaches using the plasmon-pole approximation (PPA), into the BSE implementation discussed in Refs.\ \onlinecite{Roedl:2008,fuchs2008efficient}, based on the Vienna \emph{Ab-Initio} Simulation Package \cite{Gajdos:2006,Kresse:1999,kresse96} (VASP).
For naphthalene, we first performed density functional theory \cite{PhysRev.140.A1133,PhysRev.136.B864} (DFT) simulations using the generalized-gradient approximation (GGA) by Perdew, Burke, and Ernzerhof (PBE) to describe exchange and correlation \cite{PhysRevLett.77.3865} and the projector-augmented wave (PAW) scheme \cite{blo94} to model the electron-ion interaction.
Kohn-Sham states were expanded into plane waves up to a cutoff energy of 400 eV.
We used lattice constants that were reported from experiment \cite{capelli2006molecular} and relaxed atomic positions until all Hellmann-Feynman forces were smaller than 5 meV/\r{A}, using the DFT-D2 method of Grimme\cite{grimme2006semiempirical} to capture Van der Waals corrections.
For these relaxations, the Brillouin zone (BZ) was sampled using $3\times5\times3$ $\Gamma$-centered
$\mathbf{k}$ points.
We verified that the total energy of the unit cell is converged to better than 1 meV/atom with these parameters.
For the BSE simulations, we computed the DFT-PBE electronic structure for the relaxed atomic geometries described above and tested convergence with respect to BZ sampling and BSE cutoff energy, i.e.\ the energy up to which non-interacting electron-hole pairs are included in the BSE Hamiltonian.
We did these tests using static screening and all details can be found in the supplemental information\cite{supplement} (see Figs. S4, S5, and S6).
We find that, contrary to materials with dispersive valence and conduction bands such as MgO and ZnO \cite{fuchs2008efficient,zhang2018nonequilibrium}, the valence and conduction band edges of naphthalene are flat (see Fig. S1), and the exciton binding energy of naphthalene converges quickly with BZ sampling (see Fig. S5).
We obtain the value of the exciton binding energy by extrapolating to infinitely dense sampling as discussed in Ref.\ \onlinecite{fuchs2008efficient}.
Balancing computational cost and accuracy of the BSE calculations of optical spectra, we adopted a $5\times7\times5$ $\mathbf{k}$-point grid centered at the $A$ point of the BZ, to capture the lowest-energy transitions near that point.
We use a BSE cutoff energy of 14 eV to compute spectra with static screening and compare these to literature results in Fig.\ \ref{fig:comp_gga}.
Due to the larger computational cost of dynamical screening, we reduce the BSE cutoff energy to 9 eV for investigating dynamical effects, and focus on the spectra between the onset at 3 eV and up to 5.5 eV.
Based on the convergence tests above, we anticipate that the choice of the energy cutoff and $\mathbf{k}$-point sampling results in deviations of around 0.2 eV compared to the converged values; however, we show in the SI Sec.\ G that these errors induce constant shifts of the predicted exciton binding energy, and we do not expect them to affect our analysis of dynamical screening effects.
In all spectra calculations, a Lorentzian life-time broadening of 0.1 eV is used.
In the static model dielectric function, a high-frequency dielectric constant of 2.35 is used, chosen based on the experimental value \cite{wohlfarth2008static,suthan2010growth}.
\section{\label{sec:res}Results and discussion}
\begin{figure}
\includegraphics[width=0.45\textwidth]{./2A_unit_cell.pdf}
\caption{\label{fig:crystal}(Color online.)
Monoclinic naphthalene (C$_{10}$H$_8$) viewed from the crystalline $b$ direction (left) and the crystalline $c$ direction (right).
Black spheres represent carbon atoms and white spheres represent hydrogen atoms.
The unit cell consists of two naphthyl double rings in conjugated orientation.
The crystal structure is obtained from Ref.\ \onlinecite{capelli2006molecular} and we subsequently fully relaxed all atomic positions.
}
\end{figure}
We compute the optical spectra and exciton binding energies of the organic crystal naphthalene using static screening in the BSE and the three different approaches to dynamical electronic screening discussed above.
The unit cell of naphthalene is shown in Fig.\ \ref{fig:crystal} and consists of two units of double carbon rings with conjugated orientation.
This system is an ideal test bed to systematically study dynamical effects in the description of electronic screening, since it exhibits exciton binding energies on the order of 1 eV \cite{hummer2005oligoacene,hummer2005electronic,pope1999electronic}, which is an order of magnitude larger than exciton binding energies of typical bulk inorganic semiconductors of several 10 meV \cite{yu1996fundamentals,roessler1967electronic,whited1973exciton}.
Using the independent-particle approach in VASP and the integral from the $f$-sum rule without considering the electron-hole interaction, we compute a plasma frequency of 17.9 eV, which is similar to bulk inorganic materials.
The closer the exciton binding energy is to the plasma frequency, the more important are dynamical effects for electronic screening, and this is what we expect for naphthalene in this work.
\subsection{\label{sec:gga_sta}Independent quasi-particle approximation and static BSE}
\begin{figure}
\includegraphics[width=0.45\textwidth]{./2a_sta_dft.pdf}
\caption{\label{fig:comp_gga}(Color online.)
Imaginary part of the $\varepsilon_{yy} \parallel b$ component of the dielectric tensor, calculated without electron-hole interaction (upper panel) and from solving the BSE with statically screened electron-hole interaction (lower panel).
We used the high-frequency dielectric constant from experiment\cite{wohlfarth2008static,suthan2010growth}, $\varepsilon_{\infty}$=2.35, and a scissor shift of $\Delta$=1.55 eV.
The shape of our spectra agrees very well with data by Puschnig \emph{et al.}\cite{puschnig2013excited} and Hummer \emph{et al.}\cite{hummer2005oligoacene}
}
\end{figure}
We first compute the optical spectrum of naphthalene using the independent-quasiparticle approximation within the GGA+$\Delta$ approach as well as the static BSE, see Fig.\ \ref{fig:comp_gga}.
In this work, we focus on optical spectra and exciton binding energy for the $y$-polarization, i.e.\ the $\varepsilon_{yy}$ component of the dielectric tensor parallel to the crystalline $b$ direction, since for this direction the lowest-energy excitonic eigenstates were reported \cite{hummer2005electronic,hummer2005oligoacene}.
Our calculated value for the GGA band gap, $E_\mathrm{g}^\mathrm{GGA}$=3.12 eV, agrees well with an earlier result of 3.10 eV \cite{hummer2005electronic}.
We use a scissor shift of $\Delta$=1.55 eV so that the first bright peak is at the same position at 4.8 eV as reported from quasiparticle calculations \cite{hummer2005oligoacene}.
Details of the band structure can be found in Fig. S1 of the supplemental information\cite{supplement}.
The upper panel of Fig.\ \ref{fig:comp_gga} shows good agreement between our GGA+$\Delta$ result and a spectrum from the literature \cite{hummer2005oligoacene,puschnig2013excited}.
Some differences are observed for the peak around 8 eV, whose height is slightly larger in our calculation.
We also verified that our results agree very well with a GGA+$\Delta$ spectrum using a broadening of 0.05 eV\cite{hummer2005electronic}, if we adopt the same broadening (see supplemental information\cite{supplement} Sec. D).
When including the electron-hole interaction by solving a BSE with static screening, we find strong excitonic effects in naphthalene and indeed report an exciton-binding energy of 1.06 eV.
The difference between the first main peak with (BSE) and without (GGA+$\Delta$) excitonic effects, see upper and lower panel of Fig.\ \ref{fig:comp_gga}, is used to obtain the value of the exciton binding energy, similar as in Ref.\ \onlinecite{hummer2005oligoacene}.
Comparing our static BSE spectrum with earlier results in the literature in the lower panel of Fig.\ \ref{fig:comp_gga} shows reasonable agreement also of the overall spectral shape \cite{hummer2005oligoacene,puschnig2013excited}.
We note that Puschnig \emph{et al.}\ used a broadening of 0.2 eV\cite{puschnig2013excited}, possibly explaining some of the deviations with respect to our data.
In addition, our predicted exciton binding energy is slightly larger than the 0.9 eV reported in Ref.\ \onlinecite{hummer2005oligoacene}, and correspondingly, the onset of our spectrum in Fig.\ \ref{fig:comp_gga} appears at slightly lower energy.
We attribute this to the different approaches of describing the wave-vector dependence of the screening in the static BSE.
While our work uses the Bechstedt model and the experimental high-frequency dielectric constant of 2.35\cite{wohlfarth2008static,suthan2010growth}, Ref.\ \onlinecite{puschnig2013excited} uses the RPA and the corresponding dielectric constant of 3.8 computed within the independent particle approximation.
\subsection{\label{sec:dyn}Dynamical electronic screening}
\begin{figure}
\includegraphics[width=\columnwidth]{./solutions.pdf}
\caption{\label{fig:illu}(Color online.)
Frequency dependent exciton eigenvalues $E_\Lambda(\omega)$ for excitonic states $\Lambda$, obtained using exact diagonalization of the Hamiltonian in Eq.\ \eqref{eq:bsefreq}.
We show various randomly selected states to cover a range of exciton energies.
We find the solution of $E_\Lambda(\omega)=\hbar\omega$ (black dashed line) via a nearest neighbor approximation (see text).
The solution using the standard static approximation for the lowest excitonic state $(\Lambda=1)$ is marked with the horizontal red-dashed line.
}
\end{figure}
We now compare the spectra we computed from solutions of the BSE that account for electronic screening dynamics via the three different approaches discussed in Sec.\ \ref{sec:theory}.
First, we compute excitonic eigenvalues $E_\Lambda$ via direct diagonalization by sampling a frequency grid to solve Eq.\ \eqref{eq:solution}.
Figure \ref{fig:illu} shows this sampling of the frequency range of interest with a spacing of 0.3 eV and our computed solutions for the excitonic eigenvalues $E_\Lambda$.
This figure shows that there is no complicated dependence on frequency, illustrating that a simple interpolation scheme is appropriate and our frequency spacing of 0.3 eV is sufficient.
We further verified this by calculating spectra using different samplings, and the results can be found in Fig. S3 of the supplemental information\cite{supplement}.
Here we use a nearest neighbor approximation, i.e.\ for each state, the solution that is the closest to the $E_\Lambda(\omega)=\hbar\omega$ line is adopted as the solution of the dynamical problem for that state.
This allows us to also compute optical spectra, which requires excitonic wave functions that would be more challenging to obtain in an interpolation scheme.
We estimate from Fig.\ \ref{fig:illu} that the nearest neighbor approximation does not cause an error in the solution of more than 0.01 eV.
Nevertheless, Fig.\ \ref{fig:illu} illustrates, e.g.\ for the $\Lambda=1$ state, that even an exciton binding energy of 1.49 eV, small compared to the plasma frequency of 17.9 eV, is noticeably affected by the frequency dependence due to electronic screening dynamics.
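The nearest-neighbor selection can be sketched as follows; the frequency grid and the linear model for $E_\Lambda(\omega)$ are hypothetical and only mimic the weak, smooth frequency dependence seen in Fig.\ \ref{fig:illu}.

```python
import numpy as np

# For one excitonic state, pick the grid frequency closest to satisfying
# the self-consistency condition E(omega) = hbar*omega (nearest neighbor).
def nearest_neighbor_solution(omega_grid, e_of_omega):
    i = int(np.argmin(np.abs(np.asarray(e_of_omega) - np.asarray(omega_grid))))
    return omega_grid[i], e_of_omega[i]

omega = np.arange(2.7, 4.3, 0.3)        # hypothetical 0.3 eV sampling (eV)
e_lam = 3.60 - 0.02 * (omega - 2.7)     # weak, smooth frequency dependence
w_sol, e_sol = nearest_neighbor_solution(omega, e_lam)
```

Because the eigenvalues vary slowly with $\omega$, the grid point nearest the crossing deviates from the true solution by far less than the grid spacing.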
\begin{figure}
\includegraphics[width=0.48\textwidth]{./dyn_sta_w_effective.pdf}
\caption{\label{fig:spec}(Color online.)
Comparison of $\varepsilon_\text{yy}$ from the static BSE (black solid lines) with the different approaches to include dynamical screening.
Red shows the exact diagonalization and blue results from the perturbative approach.
Effective static screening using $E_b=1.14$ eV (orange) and $E_b=1$ eV (green) is also shown.
All curves use a scissor shift of 1.55 eV.
The positions of the first major peak are highlighted in the figure with vertical dashed lines.
We also note that an excitonic state that is dark in all other cases becomes visible when adopting the perturbative approach.
}
\end{figure}
In the upper two panels of Fig.\ \ref{fig:spec} we compare the optical spectra from the exact diagonalization of the dynamical problem and the perturbative approach to our static BSE result.
This comparison shows that the perturbative treatment works well for the lowest-energy major peak predicted by the static approximation at 3.78 eV, as it predicts a 0.12 eV redshift from the peak with static screening, compared to a redshift of 0.14 eV when the exact diagonalization approach is used.
Figure \ref{fig:spec} also shows a redshift of all spectra that include screening dynamics relative to the static case, which confirms the expected effective reduction of screening when dynamics is included, as discussed in Sec.\ \ref{sec:dynscreen}.
Further comparison shows that the perturbative treatment results in a magnified excitonic shoulder at a photon energy of 3.20 eV, see Fig.\ \ref{fig:spec}.
We find a corresponding excitonic eigenvalue using all three approximations to dynamical screening, however, its oscillator strength is much smaller in the case of exact diagonalization and for the effective static treatment of screening.
This suggests that screening dynamics can change the character of an excitonic state from optically weak to dark; in such a case, the perturbative approach struggles, as it uses the unchanged excitonic eigenfunctions of the static simulation.
In addition, at higher energies beyond about 4.2 eV, we see that the perturbative treatment strongly resembles the static result, while the exact diagonalization approach yields a redshift.
This renders the perturbative approach more questionable for accurate spectral analysis.
Finally, we investigate the effective static screening approximation, see Eq.\ \eqref{eq:wblochstatic}.
In Fig.\ \ref{fig:spec} we show results for effective static screening using the exciton-binding energy calculated from the static screening approximation (1.0 eV) and that from exact diagonalization (1.14 eV), leading to dynamical screening corrections of 0.10 eV and 0.11 eV, respectively.
The lower two panels of Fig.\ \ref{fig:spec} show that both cases underestimate the correction due to dynamical screening compared to the exact diagonalization.
The position of the first peak is about 0.03\,--\,0.04 eV higher in energy, i.e.\ closer to the static approach, corresponding to a 28.5\,\% and 21.4\,\% change of the dynamical correction to the exciton binding energy when compared to exact diagonalization.
Overall, however, effective static screening describes spectra better than the perturbative approach especially at higher energies, as can be seen for instance near energies around 4.5 eV.
We note that Hummer \emph{et al.}\ \cite{hummer2005oligoacene} report a minor underestimation of the exciton-binding energy compared to experiment.
Their theoretical value is 0.9 eV, while experimental results are reported as 1.0\,--\,1.5 eV \cite{hummer2005oligoacene}.
We find that the reduction of the screening due to dynamical effects provides exactly the additional redshift to the spectra, putting the predicted exciton-binding energy within the range of the experimental values.
This is additional evidence that electronic dynamical screening effects need to be taken into consideration for accurate modeling of these systems where large exciton-binding energies are observed.
\subsection{\label{sec:omegap}Influence of the plasmon-pole model}
\begin{figure}
\includegraphics[width=0.98\columnwidth]{./omegap_reduced.pdf}
\caption{\label{fig:plsm}(Color online.)
Imaginary part of the $\varepsilon_{yy} \parallel b$ component of the dielectric tensor from
static BSE (black solid line) and independent quasiparticle approximation (GGA+scissor, black dashed line) are compared to exact diagonalization results computed using different values of the plasmon frequency (red shaded).
We show that increasing $\omega_p$ from 16 eV to 19 eV changes the prediction of the correction to the exciton binding energy due to dynamical screening effects from 0.16 eV to 0.14 eV.
}
\end{figure}
As discussed in Sec.\ \ref{sec:dynscreen}, in this work a plasmon-pole model is adopted to derive Eq.\ \eqref{eq:wbloch}, which requires the plasma frequency $\omega_p$ as an input to calculate the screened Coulomb potential.
We compute $\omega_p$=17.9 eV within the independent-particle approximation, i.e.\ without considering the electron-hole interaction, by integrating the imaginary part of the dielectric function using the VASP code and averaging over the Cartesian directions.
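For illustration, the $f$-sum-rule estimate $\omega_p^2=(2/\pi)\int_0^\infty \omega\,\mathrm{Im}\,\varepsilon(\omega)\,d\omega$ can be sketched on a model spectrum; the single-Gaussian $\mathrm{Im}\,\varepsilon$ below is a hypothetical stand-in for the computed dielectric function, not VASP output.

```python
import numpy as np

# f-sum-rule estimate of the plasma frequency from a sampled Im eps(omega):
# omega_p^2 = (2/pi) * integral of omega * Im eps(omega) d omega.
def plasma_frequency(omega, im_eps):
    d = omega[1] - omega[0]
    return np.sqrt(2.0 / np.pi * np.sum(omega * im_eps) * d)

w = np.arange(0.0, 40.0, 0.01)          # frequency grid in eV
w0, wp, sigma = 18.0, 17.9, 0.5         # hypothetical single-oscillator model
peak = np.exp(-0.5 * ((w - w0) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
im_eps = 0.5 * np.pi * wp**2 / w0 * peak
wp_est = plasma_frequency(w, im_eps)    # recovers approximately 17.9 eV
```

For a single narrow oscillator the sum rule recovers the input $\omega_p$ almost exactly; for a realistic spectrum it aggregates the oscillator strength of all transitions.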
It has been found previously that the enforcement of the $f$-sum rule in the Hybertsen-Louie plasmon-pole model can overestimate the energy of the pole of the dielectric function, e.g.\ by about 10\,\% for elemental carbon \cite{larson2013role}.
Hence, in the following we examine the influence of the plasmon-pole frequency on our results by varying $\omega_p$ in a $\sim$10\,\% range around the calculated value, between 16 and 19 eV, and monitoring the resulting spectra and exciton-binding energy.
From the spectra shown in Fig.\ \ref{fig:plsm} we see that with increasing plasma frequency from 16 eV to 19 eV, the first major peak is slightly blue shifted because the predicted red shift due to dynamical screening corrections decreases from 0.16 eV to 0.14 eV, i.e.\ by about 15\,\%.
This reduction of the dynamical correction is expected, since an increasingly large $\omega_p$ corresponds more closely to the static screening case, as discussed in Sec.\ \ref{sec:dynscreen}, reducing dynamical corrections and increasing the strength of dielectric screening.
However, the influence is not substantial around the plasma frequency of interest in this work.
Overall, we found that a potential overestimation of the pole energy due to the choice of the plasmon-pole model does not qualitatively affect the significance of dynamical effects.
Finally, we note that for naphthalene, the exciton binding energy is only 5\% of the plasma frequency.
However, the ratio between exciton binding energy and characteristic frequency that determines screening dynamics is much larger in other important applications of the BSE technique.
For example, in the case of phonon screening in polar materials, the characteristic frequency scale to compare to is that of phonon frequencies, instead of the plasma frequency \cite{PhysRevLett.127.067401}.
In this case, the exciton-binding energy is of the same order as the characteristic frequency, and screening dynamics is expected to be important for an accurate description of optical properties.
This remains a subject of future study.
\section{\label{sec:conclusion}Conclusion}
We examined the effects of dynamical electronic screening of the electron-hole interaction when solving the Bethe-Salpeter equation of the optical polarization function for a naphthalene organic crystal.
By adopting the Shindo approximation and a plasmon-pole model, the BSE can be written as a frequency-dependent eigenvalue problem.
We compared three different ways of addressing this dynamical problem, i.e., exact diagonalization of the frequency-dependent BSE Hamiltonian, perturbative treatment of the static eigenstates, and an effective static approximation.
The exact-diagonalization approach requires solving the BSE Hamiltonian at numerous frequencies to obtain the eigenenergies and eigenstates.
The perturbative approach requires diagonalization of the BSE Hamiltonian in the static approximation in order to obtain the excitonic wavefunctions.
Meanwhile, the effective static screening approach bears the same cost as the standard approach of solving the BSE within the static approximation.
We show that for naphthalene, all three methods induce a $\sim$15\,\% correction of the exciton binding energy, predicted to be around 1 eV by the standard static approximation.
While the exact diagonalization constitutes a reference case in this work, it comes at high computational cost that renders this approach unfeasible for computing spectra.
The perturbative treatment is a decent alternative that does not require full solution of multiple BSE Hamiltonians while providing good qualitative estimates of binding energies of the lowest excitonic states.
Finally, we show that for spectra, the effective static screening approach is well suited and numerically efficient, possibly allowing application to complex materials.
The results for naphthalene are in good agreement with experiments.
We also note that these insights will have implications when lattice screening is considered, since then the characteristic frequency of phonons is close to the exciton-binding energy, likely exacerbating the importance of screening dynamics.
\begin{acknowledgements}
We thank Dr. Felipe H. da Jornada, Dr. Steven G. Louie, and Dr. Emmanouil Kioupakis for fruitful discussions.
This material is based upon work supported by the National Science Foundation under Grant No.\ DMR-1555153.
This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois.
Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications.
This work made use of the Illinois Campus Cluster, a computing resource that is operated by the Illinois Campus Cluster Program (ICCP) in conjunction with the National Center for Supercomputing Applications (NCSA) and which is supported by funds from the University of Illinois at Urbana-Champaign.
\end{acknowledgements}
\section{Introduction}
\label{intro}
The discrepancy between the volume and the number of integer points in
$r\Omega-x$, the dilation by a factor $r$ and translation by a vector $x$ of a
bounded domain $\Omega$ in $\mathbb{R}^{d}$, is
\[
\mathcal{D}\left(\Omega,r,x\right) =%
{\displaystyle\sum_{k\in\mathbb{Z}^{d}}}
\chi_{r\Omega-x}(k)-r^{d}\left\vert \Omega\right\vert .
\]
Here $\chi_{r\Omega-x}(y)$ denotes the characteristic function of $r\Omega-x$
and $\left\vert \Omega\right\vert $ the measure of $\Omega$.
A classical
problem is to estimate the size of $\mathcal{D}\left( \Omega,r,x\right) $, as
$r\rightarrow+\infty$. For a survey see e.g. \cite{IKKN}, \cite{Kratzel} or \cite{Travaglini}.
By a classical result of D. G. Kendall, the $L^{2}$ norm with respect to the translation variable $x$ of the discrepancy $\mathcal{D}\left( \Omega,
r,x\right) $ of an oval $\Omega$ is of the order of $r^{\left( d-1\right) /2}$. See \cite{Kendall} and the discussion below. For this reason we shall call
$r^{-\left( d-1\right) /2}\mathcal{D}\left( \Omega,r,x\right) $ the
normalized discrepancy. Our main result below is an estimate of
the fractal dimension of the set of values of the dilation variable $r$ where this normalized discrepancy may be large.
Throughout the paper, we shall assume that $\Omega$ is a convex body in $\mathbb R^d$ with smooth boundary with strictly positive Gaussian curvature
such that the origin belongs to the interior of $\Omega$.
We will also assume that $\mu$ is a positive Borel measure with compact support contained in
$\{0\leq r <+\infty\}$ and with Fourier transform $|\widehat\mu(\xi)|\le C(1+|\xi|)^{-\beta}$ for some $\beta\ge0$.
We recall that the Fourier transform of $\mu $ is defined by
\begin{align*}
& \widehat{\mu}\left( \xi\right) =%
{\displaystyle\int_{\mathbb{R}}}
\exp\left( -2\pi i\xi r\right) d\mu\left( r\right),
\end{align*}
and that the Fourier dimension of a measure is the supremum of all $\delta$
such that there exists $C$ such that $\left\vert \widehat{\mu}\left(
\xi\right) \right\vert \leq C\left\vert \xi\right\vert ^{-\delta/2}$.
See \cite[Section 4.4]{falconer} and \cite[Section 12.17]{PM}.
Also, for any $p\ge1$ and for any $R\ge 2$ we define
\[
I(d,\Omega,\mu,p,R)=\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| r^{-(d-1)/2}\mathcal D(\Omega,r,x) | ^{p}dxd\mu(r-R)\right\}^{1/p},
\]
where the translated measure $d\mu\left(
r-R\right) $ is defined by
\[%
{\displaystyle\int_{\mathbb{R}}}
f\left( r\right) d\mu\left( r-R\right) =%
{\displaystyle\int_{\mathbb{R}}}
f\left( r+R\right) d\mu\left( r\right) .
\]
\begin{theorem} \label{thm_d=2} Let $d=2$.
\noindent If $0\leq\beta<2/5$ then there exists a constant $C$ such that for every $R\ge 2$,
\begin{align*}
I(2,\Omega,\mu,p,R)
\leq
\begin{cases}
C & \text{if }p<4+2\beta,\\
C\log^{1/p}\left( R\right) & \text{if }p=4+2\beta.
\end{cases}
\end{align*}
If $\beta=2/5$ then there exists a constant $C$ such that for every $R\ge 2$,
\begin{align*}
I(2,\Omega,\mu,p,R)
\leq
\begin{cases}
C & \text{if }p<4+2\beta,\\
C\log^{1/p+1/12}\left( R\right) & \text{if }p=4+2\beta.
\end{cases}
\end{align*}
If $2/5<\beta<1/2$ then there exists a constant $C$ such that for every $R\ge 2$,
\begin{align*}
I(2,\Omega,\mu,p,R)
\leq
\begin{cases}
C & \text{if }p<4+10\beta/(3+5\beta),\\
C\log^{1/p}\left( R\right) & \text{if }p=4+10\beta/(3+5\beta).
\end{cases}
\end{align*}
If $\beta=1/2$ then there exists a constant $C$ such that for every $R\ge 2$,
\begin{align*}
I(2,\Omega,\mu,p,R)
\leq
\begin{cases}
C & \text{if }p<4+10/11,\\
C\log^{1/p+1/9}\left( R\right) & \text{if }p=4+10/11.
\end{cases}
\end{align*}
If $\beta>1/2$ then there exists a constant $C$ such that for every $R\ge 2$,
\begin{align*}
I(2,\Omega,\mu,p,R)
\leq
\begin{cases}
C & \text{if }p<4+10/11,\\
C\log^{1/p}\left( R\right) & \text{if }p=4+10/11.
\end{cases}
\end{align*}
\end{theorem}
\begin{theorem}
\label{thm_d>2}
Let $d\ge3$.
\noindent If $0\leq\beta<1$ then there exists a constant $C$ such that for every $R\ge 2$,
\begin{align*}
I(d,\Omega,\mu,p,R)
\leq
\begin{cases}
C & \text{if }p<2(d-\beta)/(d-\beta-1),\\
C\log^{1/p}\left( R\right) & \text{if }p=2(d-\beta)/(d-\beta-1).
\end{cases}
\end{align*}
If $\beta=1$ then there exists a constant $C$ such that for every $R\ge 2$,
\begin{align*}
I(d,\Omega,\mu,p,R)
\leq
\begin{cases}
C & \text{if }p<2(d-1)/(d-2),\\
C\log^{3/4}\left( R\right) & \text{if }p=2(d-1)/(d-2)\text{ and } d=3,\\
C\log^{1/2}\left( R\right) & \text{if }p=2(d-1)/(d-2)\text{ and } d>3.
\end{cases}
\end{align*}
If $\beta>1$ then there exists a constant $C$ such that for every $R\ge 2$,
\begin{align*}
I(d,\Omega,\mu,p,R)
\leq
\begin{cases}
C & \text{if }p<2(d-1)/(d-2),\\
C\log^{1/2}\left( R\right) & \text{if }p=2(d-1)/(d-2)\text{ and } d=3,\\
C\log^{1/p}\left( R\right) & \text{if }p=2(d-1)/(d-2)\text{ and } d>3.
\end{cases}
\end{align*}
\end{theorem}
The case $d=2$ can be improved in the range $\beta>2/5$ when $\Omega$ is an ellipse $E$. More precisely
we have the following result.
\begin{theorem} \label{thm_d=2_ellipse} Let $E$ be an ellipse in the plane.
\noindent If $0\leq\beta<1$ then there exists a constant $C$ such that for every $R\ge 2$,
\begin{align*}
I(2,E,\mu,p,R)
\leq
\begin{cases}
C & \text{if }p<4+2\beta,\\
C\log^{1/p}\left( R\right) & \text{if }\beta \neq 2/5 \text{ and } p=4+2\beta,\\
C\log^{1/p+1/12}\left( R\right) & \text{if }\beta = 2/5 \text{ and } p=4+2\beta.
\end{cases}
\end{align*}
\noindent If $\beta=1$ then there exists a constant $C$ such that for every $R\ge 2$,
\begin{align*}
I(2,E,\mu,p,R)
\leq
\begin{cases}
C & \text{if }p<6,\\
C\log^{5/6}\left( R\right) & \text{if }p=6.
\end{cases}
\end{align*}
If $\beta>1$ then there exists a constant $C$ such that for every $R\ge 2$,
\begin{align*}
I(2,E,\mu,p,R)
\leq
\begin{cases}
C & \text{if }p<6,\\
C\log^{2/3}\left( R\right) & \text{if }p=6.
\end{cases}
\end{align*}
\end{theorem}
When the measure $\mu$ is the Dirac delta $\delta_0$ centered at $0$, then
\[
I(d,\Omega,\delta_0,p,R)=
\left\{ \int_{\mathbb{T}^{d}}| R^{-(d-1)/2}\mathcal D(\Omega,R,x) | ^{p}dx\right\}^{1/p}.
\]
In this case, $|\widehat\delta_0(\xi)|=1$ so that $\beta=0$, and the above Theorems \ref{thm_d=2} and
\ref{thm_d>2} can be restated as
\begin{corollary}
\[
\left\{ \int_{\mathbb{T}^{d}}| R^{-(d-1)/2}\mathcal D(\Omega,R,x) | ^{p}dx\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<2d/(d-1),\\
C\log^{1/p}\left( R\right) & \text{if }p=2d/(d-1).
\end{cases}
\]
\end{corollary}
This recovers recent results of M. Huxley \cite{Huxley3} for the case $d=2$
and L. Brandolini, L. Colzani, G. Gigante, G. Travaglini \cite{BCGT} for the general dimension $d$.
If $\mu $ is the uniformly distributed measure in the interval $\left\{
0<r<1\right\} $, then
\[
I(d,\Omega,\mu,p,R)=
\left\{ \int_{R}^{R+1} \int_{\mathbb{T}^{d}}| r^{-(d-1)/2}\mathcal D(\Omega,r,x) | ^{p}dxdr\right\}^{1/p}.
\]
The Fourier transform of the uniformly distributed measure in $\left\{
0<r<1\right\} $ has decay $\beta=1$,
\[
\widehat{\mu}\left( \xi\right) =%
{\displaystyle\int_{0}^{1}}
\exp\left( -2\pi i\xi r\right) dr=\exp\left( -\pi i\xi\right)
\dfrac{\sin\left( \pi\xi\right) }{\pi\xi}.
\]
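This closed form, and the resulting bound $\left\vert \widehat{\mu}\left( \xi\right) \right\vert \leq\left( \pi\left\vert \xi\right\vert \right) ^{-1}$, can also be checked numerically; the following sketch (purely illustrative) compares the formula with a midpoint-rule discretization of the defining integral.

```python
import cmath
import math

def mu_hat(xi):
    """Closed form exp(-pi i xi) sin(pi xi) / (pi xi) of the Fourier
    transform of the uniform measure on (0, 1); mu_hat(0) = 1."""
    if xi == 0:
        return complex(1.0, 0.0)
    return cmath.exp(-1j * math.pi * xi) * math.sin(math.pi * xi) / (math.pi * xi)

def mu_hat_numeric(xi, n=20000):
    """Midpoint rule for the integral of exp(-2 pi i xi r) over 0 < r < 1."""
    h = 1.0 / n
    return h * sum(cmath.exp(-2j * math.pi * xi * (k + 0.5) * h) for k in range(n))
```

Since $\left\vert \widehat{\mu}\left( \xi\right) \right\vert =\left\vert \sin\left( \pi\xi\right) \right\vert /\left( \pi\left\vert \xi\right\vert \right) $, the decay exponent is $\beta=1$, as stated above.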
On the other hand, if $\psi\left( r\right) $\ is a non negative smooth
function with integral one and support in $0\leq r\leq1$, one can consider a
smoothed average
\[
\left\{ \int_{\mathbb R} \int_{\mathbb{T}^{d}}\vert r^{-\left(
d-1\right) /2}\mathcal{D}\left( \Omega,r,x\right) \vert ^{p}%
dx\psi\left( r-R \right) dr\right\}
^{1/p}.
\]
This smoothed average is equivalent to the uniform average over $\left\{
R<r<R+1\right\} $, but the decay of the Fourier transform $\widehat{\psi
} $ is faster than any power $\beta$. Hence for the
uniformly distributed measure in $\left\{ 0<r<1\right\} $ Theorems \ref{thm_d=2} and
\ref{thm_d>2}
apply with the indices corresponding to $\beta>1$, and one obtains the following
\begin{corollary}
\begin{align*}
& \left\{ \int_{R}^{R+1} \int_{\mathbb{T}^{d}}| r^{-(d-1)/2}\mathcal D(\Omega,r,x) | ^{p}dxdr\right\}^{1/p}\\
& \leq\left\{
\begin{array}
[c]{ll}%
C & \text{if }d=2\text{ and }p<4+10/11\text{,}\\
C\log^{1/p}\left( R\right) & \text{if }d=2\text{ and }p=4+10/11\text{,}\\
C & \text{if }d\geq3\text{ and }p<2(d-1)/(d-2)\text{,}\\
C\log^{1/2}\left( R\right) & \text{if }d=3\text{ and }p=2(d-1)/(d-2)\text{,}\\
C\log^{1/p }\left( R\right) &
\text{if }d>3\text{ and }p=2( d-1) /( d-2)\text{.}%
\end{array}
\right.
\end{align*}
\end{corollary}
As mentioned before, when $d=2$ and $\Omega=E$ (an ellipse) this result can be improved. By Theorem
\ref{thm_d=2_ellipse},
\begin{corollary}
\[
\left\{ \int_{R}^{R+1} \int_{\mathbb{T}^{d}}| r^{-(d-1)/2}\mathcal D(E,r,x) | ^{p}dxdr\right\}^{1/p} \leq\left\{
\begin{array}
[c]{ll}%
C & \text{if }d=2\text{ and }p<6\text{,}\\
C\log^{2/3}\left( R\right) & \text{if }d=2\text{ and }p=6\text{.}\\%
\end{array}
\right.
\]
\end{corollary}
Observe that the range of indices for
which the $L^{p}$ norm remains uniformly bounded with this choice of $\mu$
is larger than the range of indices in \cite{BCGT} and \cite{Huxley3} (those that we obtain
when $\mu=\delta_0$).
As an intermediate case between the two preceding examples, one can consider a measure $d\mu\left( r\right)=r^{-\alpha}\chi_{\left\{ 0<r<1\right\}
}\left( r\right) dr$, with $0<\alpha<1$. In this case $\left\vert \widehat{\mu
}\left( \xi\right) \right\vert \leq C\left( 1+\left\vert \xi\right\vert
\right) ^{\alpha-1}$, that is $\beta=1-\alpha$.
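Indeed, this decay can be verified by splitting the defining integral at the point $1/\xi$ and integrating by parts on the remaining piece; for $\xi\geq1$,
\begin{align*}
\left\vert \widehat{\mu}\left( \xi\right) \right\vert  & \leq\int_{0}%
^{1/\xi}r^{-\alpha}dr+\left\vert \left[  \frac{r^{-\alpha}\exp\left( -2\pi
i\xi r\right) }{-2\pi i\xi}\right]  _{1/\xi}^{1}-\frac{\alpha}{2\pi i\xi}%
\int_{1/\xi}^{1}r^{-\alpha-1}\exp\left( -2\pi i\xi r\right) dr\right\vert \\
& \leq\frac{\xi^{\alpha-1}}{1-\alpha}+\frac{1+\xi^{\alpha}}{2\pi\xi}%
+\frac{\xi^{\alpha}-1}{2\pi\xi}\leq C\xi^{\alpha-1}.
\end{align*}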
As a more sophisticated intermediate example, recall that a compactly supported probability measure is a Salem measure if its Fourier dimension $\gamma
=\sup\left\{ \delta:\ \left\vert \widehat{\mu}\left( \zeta\right)
\right\vert \leq C\left( 1+\left\vert \zeta\right\vert \right) ^{-\delta
/2}\ \right\} $ is equal to the Hausdorff dimension of the support. Such
measures exist for every dimension $0<\gamma<1$. See \cite[Section 12.17]{PM}.
The above theorems assert that the discrepancy cannot be too large in mean on the
supports of translations of these measures.
The techniques used to prove the above theorems also apply to the estimates of
mixed $L^{p}(L^2)$ norms of the discrepancy:
\[
\left\{ \int_{\mathbb{T}^{d}}\left(
{\displaystyle\int_{\mathbb{R}}}
\left\vert r^{-\left( d-1\right) /2}\mathcal{D}\left( \Omega,r,x\right)
\right\vert ^{2}d\mu\left( r-R\right) \right) ^{p/2}dx\right\}
^{1/p}.
\]
This has been done by the authors in \cite{CGG}. Here it suffices to remark that the set of $p$'s that give bounded mixed $L^p(L^2)$ norms is larger than the set of $p$'s that give bounded pure
$L^p$ norms. See also \cite{Gariboldi}.
We would like to thank Luca Brandolini and Giancarlo Travaglini for several discussions
on this subject during the early stages of the preparation of this paper.
\section{Preliminary lemmas}
The proofs of the theorems will be split into a number of lemmas, some of them well known.
The starting point is the observation of D. G. Kendall that the discrepancy
$\mathcal{D}\left( \Omega,r,x\right) $ is a periodic function of the
translation, and it has a Fourier expansion with coefficients that are a
sampling of the Fourier transform of $\Omega$,
\[
\widehat{\chi}_{\Omega}\left( \xi\right) =%
{\displaystyle\int_{\Omega}}
\exp\left( -2\pi i\xi x\right) dx.
\]
\begin{lemma}
\label{Fourier} The number of integer points in $r\Omega-x$, the translation by a
vector $x\in\mathbb{R}^{d}$\ and dilation by a factor $r>0$\ of a domain
$\Omega$\ in the $d$ dimensional Euclidean space, is a periodic function of the
translation with Fourier expansion
\[%
{\displaystyle\sum_{k\in\mathbb{Z}^{d}}}
\chi_{r\Omega-x}(k)=%
{\displaystyle\sum_{n\in\mathbb{Z}^{d}}}
r^{d}\widehat{\chi}_{\Omega}\left( rn\right) \exp(2\pi inx).
\]
In particular,
\[
\mathcal{D}\left( \Omega,r,x\right) =%
{\displaystyle\sum_{n\in\mathbb{Z}^{d}\setminus\left\{ 0\right\} }}
r^{d}\widehat{\chi}_{\Omega}\left( rn\right) \exp(2\pi inx).
\]
\end{lemma}
\begin{proof}
This is a particular case of the Poisson summation formula.
\end{proof}
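Explicitly, writing $f\left( y\right) =\chi_{r\Omega-x}\left( y\right)
=\chi_{\Omega}\left( \left( y+x\right) /r\right) $, the change of variables
$y=ru-x$ gives
\[
\widehat{f}\left( \xi\right) =%
{\displaystyle\int_{\mathbb{R}^{d}}}
\chi_{\Omega}\left( \frac{y+x}{r}\right) \exp\left( -2\pi i\xi\cdot
y\right) dy=r^{d}\exp\left( 2\pi i\xi\cdot x\right) \widehat{\chi}_{\Omega
}\left( r\xi\right) ,
\]
and the Poisson summation formula $\sum_{k\in\mathbb{Z}^{d}}f\left( k\right)
=\sum_{n\in\mathbb{Z}^{d}}\widehat{f}\left( n\right) $ yields the stated
expansion.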
\begin{remark}\label{r1}
We emphasize that the Fourier expansion of the discrepancy converges at least
in $L^{2}\left( \mathbb{T}^{d}\right) $, but we are not claiming that it
converges pointwise. Indeed, the discrepancy is discontinuous, hence the
associated Fourier expansion does not converge absolutely or uniformly. To
overcome this problem, one can introduce a mollified discrepancy. If the
domain $\Omega$\ is convex and contains the origin, then there exists
$\varepsilon>0$\ such that if $\varphi\left( x\right) $\ is a non negative
smooth radial function with support in $\left\{ \left\vert x\right\vert
\leq\varepsilon\right\} $\ and with integral 1, and if $0<\delta\leq1$\ and
$r\geq1$, then
\[
\delta^{-d}\varphi(\delta^{-1}\cdot)\ast\chi_{(r-\delta)\Omega}(x)
\leq\chi_{r\Omega}(x)
\leq
\delta^{-d}\varphi(\delta^{-1}\cdot)\ast\chi_{(r+\delta)\Omega}(x)
\]
and therefore
\begin{gather*}
\left\vert \Omega\right\vert \left( \left( r-\delta\right) ^{d}%
-r^{d}\right) +\left( r-\delta\right) ^{d}%
{\displaystyle\sum_{n\in\mathbb{Z}^{d}\setminus\left\{ 0\right\} }}
\widehat{\varphi}\left( \delta n\right) \widehat{\chi}_{\Omega}\left(
\left( r-\delta\right) n\right) \exp\left( 2\pi inx\right) \\
\leq%
\mathcal D\left(\Omega,r,x\right)\\
\leq\left\vert \Omega\right\vert \left( \left( r+\delta\right) ^{d}%
-r^{d}\right) +\left( r+\delta\right) ^{d}%
{\displaystyle\sum_{n\in\mathbb{Z}^{d}\setminus\left\{ 0\right\} }}
\widehat{\varphi}\left( \delta n\right) \widehat{\chi}_{\Omega}\left(
\left( r+\delta\right) n\right) \exp\left( 2\pi inx\right) .
\end{gather*}
One has $\left\vert \left( r+\delta\right) ^{d}-r^{d}\right\vert \leq
Cr^{d-1}\delta$, and one can work with the mollified discrepancy
defined by
\[
\mathcal D_{\delta}(\Omega,r,x)=
r ^{d}%
{\displaystyle\sum_{n\in\mathbb{Z}^{d}\setminus\left\{ 0\right\} }}
\widehat{\varphi}\left( \delta n\right) \widehat{\chi}_{\Omega}\left(
rn\right) \exp\left( 2\pi inx\right) .
\]
Observe that since $\left\vert \widehat{\varphi
}\left( \zeta\right) \right\vert \leq C\left( 1+\left\vert \zeta\right\vert
\right) ^{-\lambda}$ for every $\lambda>0$, the mollified Fourier
expansion converges absolutely and uniformly.
\end{remark}
Let us recall that the support function of a convex set $\Omega\subset \mathbb R^d$ is defined by
$g(x)=\sup_{y\in\Omega}\left\{ x\cdot y\right\} $.
When $\Omega$ is strictly convex with smooth boundary, the point $y$ that realizes the supremum in this definition
is the point of $\partial \Omega$ where the outer normal
is parallel to $x$ (see Lemma \ref{Support} (2) and Figure \ref{F1}).
For example, if $\Omega$ is the unit ball centered at the origin, then $g(x)=|x|$.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.8]{support_figure_2.eps}
\end{center}
\caption{The value of $g(x)$ when $|x|=1$. }\label{F1}
\end{figure}
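For an ellipse with semi-axes $a$ and $b$ the supremum defining $g$ can be computed in closed form, $g\left( \xi\right) =\sqrt{a^{2}\xi_{1}^{2}+b^{2}\xi_{2}^{2}}$. The following numerical check (an illustration only, not used in the sequel) compares this closed form with a direct maximization of $\xi\cdot y$ over the boundary.

```python
import math

def support_ellipse(xi, a, b):
    """Support function of the ellipse x1^2/a^2 + x2^2/b^2 <= 1:
    g(xi) = sqrt((a*xi1)^2 + (b*xi2)^2)."""
    return math.hypot(a * xi[0], b * xi[1])

def support_numeric(xi, a, b, n=100000):
    """Brute-force sup of xi . y over the boundary (a cos t, b sin t)."""
    return max(
        xi[0] * a * math.cos(2.0 * math.pi * k / n)
        + xi[1] * b * math.sin(2.0 * math.pi * k / n)
        for k in range(n)
    )
```

For $a=b=1$ this reduces to $g(x)=\left\vert x\right\vert $, the example of the unit ball given above.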
\begin{lemma}
\label{Fourier Asymptotic copy(1)} Assume that $\Omega$\ is a convex body in
$\mathbb{R}^{d}$\ with smooth boundary with strictly positive Gaussian
curvature. Let $g\left( x\right) =\sup_{y\in\Omega
}\left\{ x\cdot y\right\} $ be the support function of $\Omega$. Then, there exist functions $\left\{
a_{j}\left( \xi\right) \right\} _{j=0}^{+\infty}$ and $\left\{
b_{j}\left( \xi\right) \right\} _{j=0}^{+\infty}$ homogeneous of degree $0$
and smooth in $\mathbb{R}^{d}\setminus\left\{ 0\right\} $ such that the Fourier
transform of the characteristic function of $\Omega$ for $\left\vert
\xi\right\vert \rightarrow+\infty$ has the asymptotic expansion
\begin{align*}
& \widehat{\chi}_{\Omega}\left( \xi\right) =%
{\displaystyle\int_{\Omega}}
\exp\left( -2\pi i\xi\cdot x\right) dx\\
& =\exp\left( -2\pi ig\left( \xi\right) \right) \left\vert \xi\right\vert
^{-\left( d+1\right) /2}%
{\displaystyle\sum_{j=0}^{h}}
a_{j}\left( \xi\right) \left\vert \xi\right\vert ^{-j}\\
&+\exp\left( 2\pi
ig\left( -\xi\right) \right) \left\vert \xi\right\vert ^{-\left(
d+1\right) /2}%
{\displaystyle\sum_{j=0}^{h}}
b_{j}\left( \xi\right) \left\vert \xi\right\vert ^{-j}
+\mathcal{O}\left( \left\vert \xi\right\vert ^{-\left( d+2h+3\right)
/2}\right) .
\end{align*}
The functions $a_{j}\left( \xi\right) $ and $b_{j}\left( \xi\right) $
depend on a finite number of derivatives of a parametrization of the boundary
of $\Omega$ at the points with outward unit normal $\pm\xi/\left\vert
\xi\right\vert $. In particular, $a_{0}\left( \xi\right) $ and $b_{0}\left(
\xi\right) $ are, up to some absolute constants, equal to $K\left( \pm
\xi\right) ^{-1/2}$, with $K\left( \pm\xi\right) $\ the Gaussian curvature
of $\partial\Omega$\ at the points with outward unit normal $\pm\xi/\left\vert
\xi\right\vert $.
\end{lemma}
\begin{proof}
This is a classical result. See e.g. \cite{GGV},
\cite{Herz_decad}, \cite{Hlawka}, \cite{Stein}. Here, as an explicit example, we just recall that the
Fourier transform of a ball $\left\{ x\in\mathbb{R}^{d}:\left\vert
x\right\vert \leq R\right\} $\ can be expressed in terms of a Bessel
function, and Bessel functions have simple asymptotic expansions in terms of
trigonometric functions,
\begin{align*}
& \widehat{\chi}_{\left\{ \left\vert x\right\vert \leq R\right\} }\left(
\xi\right) =R^{d}\widehat{\chi}_{\left\{ \left\vert x\right\vert
\leq1\right\} }\left( R\xi\right) =R^{d}\left\vert R\xi\right\vert
^{-d/2}J_{d/2}\left( 2\pi\left\vert R\xi\right\vert \right) \\
& =\pi^{-1}R^{(d-1)/2}\left\vert \xi\right\vert ^{-\left( d+1\right)
/2}\cos\left( 2\pi R\left\vert \xi\right\vert -\left( d+1\right)
\pi/4\right) \\
& -2^{-4}\pi^{-2}\left( d^{2}-1\right) R^{(d-3)/2}\left\vert \xi\right\vert
^{-\left( d+3\right) /2}\sin\left( 2\pi R\left\vert \xi\right\vert -\left(
d+1\right) \pi/4\right) +...\\
&+O(R^{(d-2h-3)/2}|\xi|^{-(d+2h+3)/2}).
\end{align*}
See \cite{SW}. More generally, also the Fourier transform of an ellipsoid, that is an affine
image of a ball, can be expressed in terms of Bessel functions.
\end{proof}
By the above lemma, the normalized discrepancy has a Fourier expansion of the form
\begin{align*}
&\sum_{j=0}^h r^{-j}%
{\displaystyle\sum_{n\in\mathbb{Z}^{d}\setminus\left\{ 0\right\} }}
a_{j}\left( n\right) \left\vert n\right\vert ^{-(d+1)/2-j}\exp\left( -2\pi
ig\left( n\right) r\right) \exp\left( 2\pi inx\right) \\
& +\sum_{j=0}^hr^{-j}%
{\displaystyle\sum_{n\in\mathbb{Z}^{d}\setminus\left\{ 0\right\} }}
b_{j}\left( n\right) \left\vert n\right\vert ^{-(d+1)/2-j}\exp\left( 2\pi
ig\left( -n\right) r\right) \exp\left( 2\pi inx\right)+\ldots
\end{align*}
By replacing $(d+1)/2$ with a complex variable $z$ one obtains an analytic function of this complex
variable with values in $L^p$ spaces. This will allow us to estimate the norm of this discrepancy via complex interpolation.
\begin{lemma}
\label{Asymptotic Discrepancy} Assume that $\Omega$\ is a convex body in
$\mathbb{R}^{d}$\ with smooth boundary with strictly positive Gaussian
curvature. Let $z$ be a complex parameter, and for every $h=0,1,2,...$ and
$r\geq1$, with the notation of the previous lemmas, define the function
$\Phi\left( \delta, z,r,x\right) $ via the Fourier expansions
\begin{align*}
& \Phi\left( \delta,z,r,x\right) \\
&=\sum_{j=0}^h r^{-j}%
{\displaystyle\sum_{n\in\mathbb{Z}^{d}\setminus\left\{ 0\right\} }}
\widehat\varphi(\delta n)a_{j}\left( n\right) \left\vert n\right\vert ^{-z-j}\exp\left( -2\pi
ig\left( n\right) r\right) \exp\left( 2\pi inx\right) \\
& +\sum_{j=0}^hr^{-j}%
{\displaystyle\sum_{n\in\mathbb{Z}^{d}\setminus\left\{ 0\right\} }}
\widehat\varphi(\delta n)b_{j}\left( n\right) \left\vert n\right\vert ^{-z-j}\exp\left( 2\pi
ig\left( -n\right) r\right) \exp\left( 2\pi inx\right) .
\end{align*}
The convergence of the above series is absolute and uniform. With $z=(d+1)/2$, define
\[
\mathcal{R}_{h}\left( \delta, r,x\right) =r^{-\left( d-1\right) /2}\mathcal{D}_\delta%
\left( \Omega,r,x\right) -%
\Phi\left( \delta, \left( d+1\right) /2,r,x\right) .
\]
Then, if $h>\left( d-3\right) /2$ there exists $C$ such that for every $\delta>0$ and $r\geq1$,
\[
\left\vert \mathcal{R}_{h}\left( \delta, r,x\right) \right\vert \leq Cr^{-h-1}.
\]
\end{lemma}
\begin{proof}
This is a consequence of the previous lemmas.
\end{proof}
\begin{lemma} \label{lm_6_integral_ellipsoid}
Let $N$ be a positive integer, and $\mu$ a positive measure with compact support in the positive real axis and with Fourier transform satisfying $|\widehat\mu(\xi)|\leq (1+|\xi|)^{-\beta}$ for some $\beta\geq0$.
Then for every $\lambda>0$ and for every $z\in\mathbb C$ there exists $C>0$ such that for every $\delta>0$ and for every $1<R<+\infty$,
\begin{align*}
& \int_{\mathbb{R}} \int_{\mathbb{T}^{d}} |\Phi(\delta,z,r,x)|^{2N}dxd\mu(r-R)\leq C\int_{\mathbb{R}^{d}} (1+\delta|k|)^{-\lambda}\\
& \times\int\limits_{\substack{m_1,\ldots,m_{N}\in\mathbb{R}^{d}\\ |m_1|, \ldots,|m_{N}| >1\\ m_1+\dots+m_{N}=k }} (1+\delta|m_1|)^{-\lambda}\dots(1+\delta|m_N|)^{-\lambda}|m_1|^{-\mathrm{Re}(z)}\dots|m_{N}|^{-\mathrm{Re}(z)}\\
& \times\int\limits_{\substack{n_1,\ldots,n_{N}\in\mathbb{R}^{d}\\ |n_1|, \ldots,|n_{N}| >1\\ n_1+\dots+n_{N}=k }}(1+\delta|n_1|)^{-\lambda}\dots(1+\delta|n_N|)^{-\lambda} |n_1|^{-\mathrm{Re}(z)}\dots|n_{N}|^{-\mathrm{Re}(z)} \\
&\times \left(1+| g(m_1)+\dots+g(m_{N})-g(n_1)-\dots-g(n_{N})|\right)^{-\beta}\\
&\times d\sigma(n_1,\dots,n_{N}) d\sigma(m_1,\dots,m_{N}) dk.
\end{align*}
The inner integrals are with respect to the surface measure on the $(N-1)d$ dimensional variety of $N$ points with sum $k$. In other words, they are essentially Lebesgue integrals with respect to the first $N-1$ variables, replacing the last variables $ m_N$ and $n_N$ respectively with $k-m_1-\dots-m_{N-1}$
and $k-n_1-\dots-n_{N-1}$.
The above final expression is a decreasing function of $\mathrm{Re}(z)$.
\end{lemma}
\begin{proof} The function $\Phi(\delta,z,r,x)$ is a sum of terms of the form
\[
\Theta_j\left( \delta,z,r,x\right) =r^{-j}%
{\displaystyle\sum_{n\in\mathbb{Z}^{d}\setminus\left\{ 0\right\} }}
\widehat\varphi(\delta n)c(n)\left\vert n\right\vert ^{-z-j}\exp\left( \mp2\pi ig\left( \pm n\right)
r\right) \exp\left( 2\pi inx\right) .
\]
The expression $\exp\left( \mp2\pi ig\left( \pm n\right)
r\right)$ is to be interpreted either as $\exp\left( -2\pi ig\left( n\right)
r\right)$ or as $\exp\left( +2\pi ig\left( - n\right)
r\right)$.
Assume, say, that the exponential is $\exp(-2\pi ig\left( n\right)r)$; the other case is analogous.
It follows that for every positive integer $N$,
\begin{align*}
& \left(\Theta_j(\delta,z,r,x)\right)^{N}\\
=& r^{-jN}\sum_{k\in\mathbb{Z}^{d}} \sum_{\substack{n_1,\dots n_N\neq 0\\ n_1+\dots+n_N=k}} \widehat\varphi(\delta n_1)\dots\widehat\varphi(\delta n_N)c(n_1)\dots c(n_N) |n_1|^{-z-j}\dots |n_N|^{-z-j}\\
& \times e^{-2\pi i\left(g(n_1)+\dots+g(n_N)\right)r}e^{2\pi ikx}.
\end{align*}
For a proof, just observe that since $\widehat{\varphi}(\xi)$ has a fast decay at infinity, all series involved are absolutely convergent, and one can freely expand the $N$-th power and rearrange the terms.
Then, by Parseval's identity,
\begin{align*}
& \int_{\mathbb{T}^{d}} |\Theta_j(\delta,z,r,x)|^{2N}dx\\
& = r^{-2jN}\sum_{k\in\mathbb{Z}^{d}} \Bigg| \sum_{\substack{n_1,\dots n_N\neq 0\\ n_1+\dots+n_N=k}} \widehat\varphi(\delta n_1)\dots\widehat\varphi(\delta n_N)c(n_1)\dots c(n_N) |n_1|^{-z-j}\dots |n_N|^{-z-j}\\
& \times e^{-2\pi i\left(g(n_1)+\dots+g(n_N)\right)r}\Bigg|^{2}.
\end{align*}
Expanding the square and integrating in the variable $r$, one obtains
\begin{align*}
& \int_{\mathbb{R}} \int_{\mathbb{T}^{d}} |\Theta_j(\delta,z,r,x)|^{2N}dxd\mu(r-R)\\
&=\sum_{k\in\mathbb{Z}^{d}} \sum_{\substack{m_1,\dots m_N\neq 0\\ m_1+\dots+m_N=k}} \widehat\varphi(\delta m_1)\dots\widehat\varphi(\delta m_N)c(m_1)\dots c(m_N) |m_1|^{-z-j}\dots |m_N|^{-z-j}\\
&\times \sum_{\substack{n_1,\dots n_N\neq 0\\ n_1+\dots+n_N=k}} \widehat\varphi(\delta n_1)\dots\widehat\varphi(\delta n_N)\overline{c(n_1)}\dots \overline{c(n_N)} |n_1|^{-\overline z-j}\dots |n_N|^{-\overline z-j}\\
& \times \int_{\mathbb{R}} e^{-2\pi i\left(g(m_1)+\dots+g(m_N)-g(n_1)-\dots-g(n_N)\right)r} r^{-2jN}d\mu(r-R).
\end{align*}
The last integral is the Fourier transform of the measure $r^{-2jN}d\mu(r-R)$ and
it is easy to see that if $R>1$, so that the singularity of $r^{-2jN}$ is far from the support of $d\mu(r-R)$, it satisfies the same estimates as the Fourier transform of $d\mu(r)$. Indeed
\begin{align*}
&\int_{\mathbb R} e^{-2\pi i\xi r}r^{-2jN}d\mu(r-R)=e^{-2\pi i\xi R}\int_{\mathbb R} e^{-2\pi i\xi r}(r+R)^{-2jN}d\mu(r)\\
&=e^{-2\pi i\xi R}\int_{\mathbb R}\widehat{d\mu}(\xi-s) q(s)ds,
\end{align*}
where $q(s)$ is the Fourier transform of a smooth extension of the function $(r+R)^{-2jN}$ outside the support of $d\mu$.
Hence,
\begin{align*}
&\left|\int_{\mathbb{R}} e^{-2\pi i\left(g(m_1)+\dots+g(m_N)-g(n_1)-\dots-g(n_N)\right)r} r^{-2jN}d\mu(r-R)\right|\\
&\leq
C\left(1+\left|g(m_1)+\dots+g(m_N)-g(n_1)-\dots-g(n_N)\right|\right)^{-\beta}.
\end{align*}
The function $\varphi(x)$ is smooth, so that $|\widehat{\varphi}(\xi)|\leq C\left(1+|\xi|\right)^{-\lambda}$ for every $\lambda>0$.
Hence for every $j$ the above quantity is dominated up to a constant by
\begin{align*}
&\sum_{k\in\mathbb{Z}^{d}} \sum_{\substack{m_1,\dots m_N\neq 0\\ m_1+\dots+m_N=k}}
\left(1+|\delta m_1|\right)^{-\lambda}\dots (1+|\delta m_N|)^{-\lambda}
|m_1|^{-\mathrm{Re}(z)}\dots |m_N|^{-\mathrm{Re}(z)}\\
&\times \sum_{\substack{n_1,\dots n_N\neq 0\\ n_1+\dots+n_N=k}}
(1+|\delta n_1|)^{-\lambda}\dots(1+|\delta n_N|)^{-\lambda}
|n_1|^{-\mathrm{Re}(z)}\dots |n_N|^{-\mathrm{Re}(z)}\\
& \times\left(1+\left|g(m_1)+\dots+g(m_N)-g(n_1)-\dots-g(n_N)\right|\right)^{-\beta} .
\end{align*}
In this formula there is no cutoff in the variable $k$. In order to obtain a cutoff in $k$, observe that, if $m_1+\dots+m_N=k$, then
\begin{align*}
& (1+\delta|m_1|)^{-\sigma}\dots(1+\delta |m_{N}|)^{-\sigma} \\
=& \left(1+\delta(|m_1|+\dots+ |m_N|) + \delta^{2}(|m_1| |m_2|+\dots)+\dots\right)^{-\sigma}\\
&\leq \left(1+\delta(|m_1|+\dots+ |m_N|)\right)^{-\sigma} \leq \left(1+\delta |m_1+\dots+m_N| \right)^{-\sigma} = \left(1+\delta |k|\right)^{-\sigma}.
\end{align*}
In particular, some of the cutoff functions $(1+\delta|m_1|)^{-\sigma}\dots (1+\delta |m_N|)^{-\sigma}$ can be replaced with $(1+\delta |k|)^{-\sigma}$.
Finally, in the above formulas one can replace the sums with integrals. Indeed, there exist positive constants $A$ and $B$ such that for every integer point $m\neq0$ and every $x\in Q$, the cube centered at the
origin with sides parallel to the axes and of length one,
\[
A |m| \leq |m+x| \leq B |m|.
\]
This implies that the function $|m+x|^{-\mathrm{Re}(z)}$ is slowly varying in the cube $Q$.
Moreover, also the function
$
\left(1+| g(m+x)+\dots|\right)^{-\beta}
$
is slowly varying.
Hence, one can replace a sum over $m$ with an integral over the union of cubes $m+Q$,
\begin{align*}
&\sum_{k\in\mathbb{Z}^{d}} \left(1+|\delta k|\right)^{-\lambda}\\
&\times\sum_{\substack{m_1,\dots m_N\neq 0\\ m_1+\dots+m_N=k}}
\left(1+|\delta m_1|\right)^{-\lambda}\dots (1+|\delta m_N|)^{-\lambda}
|m_1|^{-\mathrm{Re}(z)}\dots |m_N|^{-\mathrm{Re}(z)}\\
&\times \sum_{\substack{n_1,\dots n_N\neq 0\\ n_1+\dots+n_N=k}}
(1+|\delta n_1|)^{-\lambda}\dots(1+|\delta n_N|)^{-\lambda}
|n_1|^{-\mathrm{Re}(z)}\dots |n_N|^{-\mathrm{Re}(z)}\\
& \times\left(1+\left|g(m_1)+\dots+g(m_N)-g(n_1)-\dots-g(n_N)\right|\right)^{-\beta}
\\
&\leq C\int_{\mathbb{R}^{d}} (1+\delta|k|)^{-\lambda}\\
&\times\int\limits_{\substack{m_1,\ldots,m_{N}\in\mathbb{R}^{d}\\ |m_1|, \ldots,|m_{N}| >1/2\\ m_1+\dots+m_{N}=k }} (1+\delta|m_1|)^{-\lambda}\dots(1+\delta|m_N|)^{-\lambda}|m_1|^{-\mathrm{Re}(z)}\dots|m_{N}|^{-\mathrm{Re}(z)}\\
& \times\int\limits_{\substack{n_1,\ldots,n_{N}\in\mathbb{R}^{d}\\ |n_1|, \ldots,|n_{N}| >1/2\\ n_1+\dots+n_{N}=k }}(1+\delta|n_1|)^{-\lambda}\dots(1+\delta|n_N|)^{-\lambda} |n_1|^{-\mathrm{Re}(z)}\dots|n_{N}|^{-\mathrm{Re}(z)} \\
&\times \left(1+| g(m_1)+\dots+g(m_{N})-g(n_1)-\dots-g(n_{N})|\right)^{-\beta}\\
&\times d\sigma(n_1,\dots,n_{N}) d\sigma(m_1,\dots,m_{N}) dk.
\end{align*}
Finally, with a change of variables one can transform the domain of integration $\left\{|x|>1/2\right\}$ into $\left\{|y|>1\right\}$, and the claim follows immediately.
Indeed, if $|x|>1$ then $|x|^{-\mathrm{Re}(z)}$ decreases as $\mathrm{Re}(z)$ increases.
\end{proof}
\section{The case of a generic convex set with non vanishing curvature}
This section contains the proof of Theorems \ref{thm_d=2} and \ref{thm_d>2}.
\begin{lemma}
\label{Support} Let $g\left( x\right) =\sup_{y\in\Omega}\left\{ x\cdot
y\right\} $ be the support function of a convex set $\Omega$ which contains the
origin, and with a smooth boundary with strictly positive Gaussian curvature.
(1) The support function is convex, homogeneous of degree one, positive and
smooth away from the origin, and it is equivalent to the Euclidean norm, that
is there exist $0<A<B$ such that for every $x$,
\[
A\left\vert x\right\vert \leq g\left( x\right) \leq B\left\vert x\right\vert
.
\]
(2) For every $x\in\mathbb R^d\setminus\{0\}$ there exists a unique point $z(x)\in\partial\Omega$
such that $x\cdot z(x)=g(x)$. In particular, $z(x)$ is the unique point in $\partial\Omega$ with outer normal parallel to $x$.
Furthermore $z(x)$ is homogeneous of degree $0$ in the variable $x$.
(3) There exist two positive constants $c$ and $C$ such that for every $\vartheta,\omega$ with $\left|\vartheta\right|=\left|\omega\right|=1$
\[
c\left(1-\omega\cdot\vartheta\right)\le z(\vartheta)\cdot\vartheta-z\left(\omega\right)\cdot\vartheta\le C\left(1-\omega\cdot\vartheta\right).
\]
\end{lemma}
\begin{proof}
(1) is trivial and (2) follows from the smoothness and positive curvature of the boundary of $\Omega$.
The estimate in (3) can be written as
\[
c\left(\vartheta-\omega\right)\cdot\vartheta\le (z(\vartheta)-z\left(\omega\right))\cdot\vartheta\le C\left(\vartheta-\omega\right)\cdot\vartheta.
\]
A compactness argument implies that there exist two radii $c\leq C$ such that
at every point of $\partial\Omega$, $\Omega$ contains the ball with radius $c$
tangent to $\partial\Omega$ and is contained in the ball with radius $C$ tangent to $\partial\Omega$, and (3) follows.
\end{proof}
\begin{lemma}
In the hypotheses of the above lemma, assume that $\beta\geq
0$, $\delta,\gamma>0$ and $Y\geq1$. Set
\begin{equation*}
\tau=\tau(\gamma,\beta) =\min\left\{ 1,\gamma,\beta\right\} \qquad
\sigma =\sigma(\gamma,\beta) =\left\{
\begin{array}
[c]{ll}%
1 & \text{if }\beta=1\text{ and }\gamma\geq1,\\
1 & \text{if }\beta=\gamma\leq1,\\
0 & \text{else.}%
\end{array}
\right.
\end{equation*}
Then, for every $\vartheta$ with
$\left\vert \vartheta\right\vert =1$ and every $k\neq0$, if we call $\Delta=g\left( \vartheta\right) -\nabla g\left( k\right) \cdot\vartheta
$,
\begin{equation*}
\int_{0}^{\delta Y}\rho^{\gamma-1}\left( 1+\left\vert g\left(
\rho\vartheta\right) +g\left( k-\rho\vartheta\right) -Y\right\vert \right)
^{-\beta}d\rho
\leq CY^{\gamma}\left( 1+Y\Delta\right) ^{-\tau}\log^{\sigma}\left(
2+Y\Delta\right).
\end{equation*}
\end{lemma}
\begin{remark}
Here and in the rest of the paper we will use the Kronecker delta notation
\[
\delta_{x,y}=\begin{cases}
1& \text{ if } x=y,\\
0 & \text{ if } x\neq y.
\end{cases}
\]
\end{remark}
\begin{proof}
The case $\beta=0$ is trivial. When $\beta>0$, then $\left( 1+Y\Delta\right) ^{-\tau}\log^{\sigma}\left(
2+Y\Delta\right)$ is the gain with respect to the trivial estimate
\begin{equation*}
\int_{0}^{\delta Y}\rho^{\gamma-1}d\rho
= C Y^{\gamma}.
\end{equation*}
Fix $\vartheta$ and $k$ and define%
\begin{align*}
F\left( \rho\right) & =g\left( \rho\vartheta\right) +g\left(
k-\rho\vartheta\right) ,\\
F^{\prime}\left( \rho\right) & =g\left( \vartheta\right) -\nabla
g\left( k-\rho\vartheta\right) \cdot\vartheta,\\
F^{\prime\prime}\left( \rho\right) & =\vartheta^{t}\cdot\nabla^{2}g\left(
k-\rho\vartheta\right) \cdot\vartheta.
\end{align*}
The matrix $\nabla^{2}g\left( x\right) $ is positive semidefinite, with a $0$
eigenvalue in the radial direction given by $x$ (indeed $\nabla^{2}g\left(
x\right) \cdot x=0$, since $\nabla g$ is homogeneous of degree $0$), and
strictly positive eigenvalues in the other directions. See \cite{Schneider}.
Let $\lambda>0$ be the minimum of all the $d-1$ positive eigenvalues of
$\nabla^{2}g\left( x\right) $ over the sphere $\left\{ \left\vert x\right\vert
=1\right\} $. Splitting $\vartheta=\vartheta_{0}+\vartheta_{1}$ with
$\vartheta_{0}$ parallel to $k-\rho\vartheta$ and
$\vartheta_{1}$ orthogonal to $k-\rho\vartheta$, and
since $\nabla^{2}g\left( x\right) $ is homogeneous of degree $-1$, we
therefore have
\begin{align*}
& \vartheta^{t}\cdot\nabla^{2}g\left( k-\rho\vartheta\right) \cdot
\vartheta\\
& =\vartheta_{1}^{t}\cdot\nabla^{2}g\left( k-\rho\vartheta\right)
\cdot\vartheta_{1}\\
& =\frac{1}{\left\vert k-\rho\vartheta\right\vert }\vartheta_{1}^{t}%
\cdot\nabla^{2}g\left( \frac{k-\rho\vartheta}{\left\vert k-\rho
\vartheta\right\vert }\right) \cdot\vartheta_{1}\\
& \geq\frac{\lambda\left\vert \vartheta_{1}\right\vert ^{2}}{\left\vert
k-\rho\vartheta\right\vert }.
\end{align*}
In particular, $F^{\prime\prime}\left( \rho\right) \geq0$, so that
$F^{\prime}\left( \rho\right) $ is increasing and if $\rho\geq0$,
\[
F^{\prime}\left( \rho\right) \geq F^{\prime}\left( 0\right) =g\left(
\vartheta\right) -\nabla g\left( k\right) \cdot\vartheta.
\]
If $z\left( x\right) $ is the unique point in the boundary of $\Omega$ such
that $x\cdot z\left( x\right) =g\left( x\right) $, as described in Lemma \ref{Support}, then
\[
\nabla g\left( x\right) =\nabla\left( z\left( x\right) \cdot x\right)
=\nabla z\left( x\right) \cdot x+z\left( x\right) =z\left( x\right) .
\]
The last equality follows from Euler's formula, since $z\left( x\right) $ is
homogeneous of degree $0$. Hence,%
\[
F^{\prime}\left( 0\right) =g\left( \vartheta\right) -\nabla g\left(
k\right) \cdot\vartheta=g\left( \vartheta\right) -z\left( k\right)
\cdot\vartheta\geq0.
\]
This follows by the definition of $g\left( \vartheta\right) $ as the maximum
of $y\cdot\vartheta$ for $y\in\Omega$.
Thus, $F\left( \rho\right) $ is an increasing function. If $Y\geq F\left(
0\right) $ define $r=F^{-1}\left( Y\right) $, and if $Y<F\left( 0\right)
$ define $r=0$. Observe that for $r>0$ one has $Y=F\left( r\right) \geq
g\left( r\vartheta\right) \geq Cr$. Thus, in any case, $0\leq r\leq CY$.
Then
\[
\left\vert F\left( \rho\right) -Y\right\vert \geq F^{\prime}\left(
0\right) \left\vert \rho-r\right\vert =\left( g\left( \vartheta\right)
-\nabla g\left( k\right) \cdot\vartheta\right) \left\vert \rho-r\right\vert
=\Delta\left\vert \rho-r\right\vert .
\]
Hence,%
\begin{align*}
& \int_{0}^{\delta Y}\rho^{\gamma-1}\left( 1+\left\vert g\left(
\rho\vartheta\right) +g\left( k-\rho\vartheta\right) -Y\right\vert \right)
^{-\beta}d\rho\\
& \leq C\int_{0}^{\delta Y}\rho^{\gamma-1}\left( 1+\Delta\left\vert
\rho-r\right\vert \right) ^{-\beta}d\rho\\
& =C\Delta^{-\gamma}\int_{0}^{\delta\Delta Y}t^{\gamma-1}\left( 1+\left\vert
t-\Delta r\right\vert \right) ^{-\beta}dt.
\end{align*}
Call $\delta\Delta Y=T$ and $\Delta r=S$. Then we need to show that%
\[
\int_{0}^{T}t^{\gamma-1}\left( 1+\left\vert t-S\right\vert \right) ^{-\beta
}dt \le CT^{\gamma-\tau}\log^{\sigma}\left(T\right).
\]
In any case we have%
\[
\int_{0}^{T}t^{\gamma-1}\left( 1+\left\vert t-S\right\vert \right) ^{-\beta
}dt\leq\int_{0}^{T}t^{\gamma-1}dt\leq CT^{\gamma}.
\]
In particular, the desired estimate follows when $T\le 4$.
If $T\geq4$ and $0\leq S\leq2$ then
\begin{align*}
&\int_{0}^{T}t^{\gamma-1}\left( 1+\left\vert t-S\right\vert \right) ^{-\beta
}dt\\
&\leq C\left( \int_{0}^{1}t^{\gamma-1}dt+\int_{1}^{T}t^{\gamma-\beta
-1}dt\right)\\
& \leq CT^{\gamma-\min\{\gamma,\beta\}}\log^{\delta_{\gamma,\beta}}(T).
\end{align*}
If $T\geq4$ and $2\leq S\leq T/2$ then%
\begin{align*}
& \int_{0}^{T}t^{\gamma-1}\left( 1+\left\vert t-S\right\vert \right)
^{-\beta}dt\\
& \leq\left( \int_{0}^{S/2}+\int_{S/2}^{2S}+\int_{2S}^{T}\right)
t^{\gamma-1}\left( 1+\left\vert t-S\right\vert \right) ^{-\beta}dt\\
& \leq CS^{\gamma-\beta}+
CS^{\gamma-\min\{1,\beta\}}\log^{\delta_{1,\beta}}(S)
+CT^{\gamma-\min\{\gamma,\beta\}}\log^{\delta_{\gamma,\beta}}(T) \\
&
\leq
CT^{\gamma-\tau}\log^{\sigma}\left(T\right).
\end{align*}
If $T\geq4$ and $T/2\leq S\leq2T$, then%
\begin{align*}
& \int_{0}^{T}t^{\gamma-1}\left( 1+\left\vert t-S\right\vert \right)
^{-\beta}dt\\
& \leq CT^{-\beta}\int_{0}^{S/2}t^{\gamma-1}dt+CT^{\gamma-1}\int_{S/2}%
^{T}\left( 1+\left\vert t-S\right\vert \right) ^{-\beta}dt\\
& \leq CT^{\gamma-\beta}+
CT^{\gamma-\min\{1,\beta\}}\log^{\delta_{1,\beta}}(T) \\
& \leq CT^{\gamma-\min\{1,\beta\}}\log^{\delta_{1,\beta}}(T).
\end{align*}
If $T\geq4$ and $S\geq2T$, then%
\[
\int_{0}^{T}t^{\gamma-1}\left( 1+\left\vert t-S\right\vert \right) ^{-\beta
}dt\leq\int_{0}^{T}t^{\gamma-1}\left( 1+\left\vert t-T\right\vert \right)
^{-\beta}dt\leq
C
T^{\gamma-\min\{1,\beta\}}\log^{\delta_{1,\beta}}(T).
\]
\end{proof}
\begin{lemma}\label{lm_mu}
In the hypotheses of the previous lemma and with
\[
\varsigma=\varsigma(d,\gamma,\beta)=\left\{
\begin{array}
[c]{ll}%
1 & \text{if }\tau=\left( d-1\right) /2\\
0 & \text{else,}%
\end{array}
\right.
\]
one has
\begin{align*}
& \int_{\left\vert \vartheta\right\vert =1}\int_{0}^{\delta Y}\rho^{\gamma
-1}\left( 1+\left\vert g\left( \rho\vartheta\right) +g\left(
k-\rho\vartheta\right) -Y\right\vert \right) ^{-\beta}d\rho d\vartheta\\
& \leq CY^{\gamma-\min\{ \tau,\left( d-1\right) /2\} }%
\log^{\sigma+\varsigma}\left( 2+Y\right).
\end{align*}
\end{lemma}
\begin{proof}
Fix $k$. By the previous lemma and with the notation $\omega=k/\left\vert
k\right\vert $ and $\Delta\left( \vartheta,\omega\right) =g\left(
\vartheta\right) -\nabla g\left( \omega\right) \cdot\vartheta$, we have to
estimate
\begin{align*}
& \int_{\left\vert \vartheta\right\vert =1}\int_{0}^{\delta Y}\rho^{\gamma
-1}\left( 1+\left\vert g\left( \rho\vartheta\right) +g\left(
k-\rho\vartheta\right) -Y\right\vert \right) ^{-\beta}d\rho d\vartheta\\
& \leq C Y^{\gamma}\int_{ \left\vert \vartheta\right\vert =1}\left( 1+Y\Delta\left( \vartheta,\omega\right) \right) ^{-\tau}%
\log^{\sigma}\left( 2+Y\Delta\left( \vartheta,\omega\right) \right)
d\vartheta.
\end{align*}
By Lemma \ref{Support}, since $\Omega$ has everywhere positive curvature,
\[
c\left( 1-\vartheta\cdot\omega\right) \leq\Delta\left( \vartheta
,\omega\right) \leq C\left( 1-\vartheta\cdot\omega\right) .
\]%
Therefore,
\begin{align*}
& Y^{\gamma}\int_{ \left\vert \vartheta\right\vert =1
}\left( 1+Y\Delta\left( \vartheta,\omega\right) \right) ^{-\tau}%
\log^{\sigma}\left( 2+Y\Delta\left( \vartheta,\omega\right) \right)
d\vartheta\\
& \leq CY^{\gamma}\int_{ \left\vert \vartheta\right\vert =1
}\left( 1+Y\left( 1-\vartheta\cdot\omega\right) \right) ^{-\tau}%
\log^{\sigma}\left( 2+Y\left( 1-\vartheta\cdot\omega\right) \right)
d\vartheta\\
& \leq CY^{\gamma}\int_{0}^{\pi}\left( 1+Y\left( 1-\cos\varphi\right)
\right) ^{-\tau}\log^{\sigma}\left( 2+Y\left( 1-\cos\varphi\right)
\right) \sin^{d-2}\varphi d\varphi\\
& \leq CY^{\gamma}\int_{0}^{\pi}\left( 1+Y\varphi^{2}\right) ^{-\tau}%
\log^{\sigma}\left( 2+Y\varphi^{2}\right) \varphi^{d-2}d\varphi\\
& \leq CY^{\gamma}\int_{0}^{Y^{-1/2}}\varphi^{d-2}d\varphi+CY^{\gamma-\tau
}\int_{Y^{-1/2}}^{\pi}\varphi^{d-2-2\tau}\log^{\sigma}\left( 2+Y\varphi
^{2}\right) d\varphi\\
& \leq CY^{\gamma-\left( d-1\right) /2}+
CY^{\gamma-\min\{\tau,(d-1)/2\}}\log^{\sigma+\varsigma}\left( 2+Y\right)\\
& \leq CY^{\gamma-\min\{ \tau,\left( d-1\right) /2\} }%
\log^{\sigma+\varsigma}\left( 2+Y\right).
\end{align*}
\end{proof}
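\begin{remark}
The exponent $\left( d-1\right) /2$ in the above lemma reflects the size of a
spherical cap: for $0<\varepsilon<1$ the cap $\left\{ \left\vert \vartheta
\right\vert =1,\ 1-\vartheta\cdot\omega\leq\varepsilon\right\} $ has measure of
the order of $\varepsilon^{\left( d-1\right) /2}$, so that the portion of the
sphere where $Y\Delta\left( \vartheta,\omega\right) \leq1$ has measure of the
order of $Y^{-\left( d-1\right) /2}$.
\end{remark}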
\begin{lemma}
\label{integral}
Let $g\left( x\right) $ be the
support function of $\Omega$, and let $d\geq2$, $d/2<\alpha<d$, $\beta\geq0$, $\zeta=\zeta(d,\alpha,\beta)=\min\left\{ 1,\beta,d-\alpha,\left(
d-1\right) /2\right\}$. Finally, define $\eta=\eta(d,\alpha,\beta)$ as follows.
If $d=2$ define
\[
\eta=\left\{
\begin{array}
[c]{ll}%
2 & \text{if }\beta=1/2\text{ and }\alpha=3/2\text{,}\\
1 & \text{if }\beta=1/2\text{ and }1<\alpha<3/2\text{,}\\
1 & \text{if }0<\beta<1/2\text{ and }\alpha=2-\beta\text{,}\\
1 & \text{if }\beta>1/2\text{ and }\alpha=3/2\text{,}\\
0 & \text{else.}%
\end{array}
\right.
\]
If $d=3$ define%
\[
\eta=\left\{
\begin{array}
[c]{ll}%
2 & \text{if }\beta=1\text{ and }3/2<\alpha\leq2\text{,}\\
1 & \text{if }\beta>1\text{ and }3/2<\alpha\leq2\text{,}\\
1 & \text{if }0<\beta<1\text{ and }\alpha=3-\beta\text{,}\\
0 & \text{else.}%
\end{array}
\right.
\]
If $d\geq4$ define%
\[
\eta=\left\{
\begin{array}
[c]{ll}%
1 & \text{if }\beta=1\text{ and }d/2<\alpha\leq d-1,\\
1 & \text{if }0<\beta<1\text{ and }\alpha=d-\beta,\\
0 & \text{else.}%
\end{array}
\right.
\]
Then there exists $C$\ such that for every
$k\in\mathbb{R}^{d}\setminus\left\{ 0\right\} $ and for every $-\infty<Y<+\infty$,
\begin{align*}
&
{\displaystyle\int_{\mathbb{R}^{d}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \right)
^{-\beta}dx\\
& \leq C\left\vert k\right\vert ^{d-2\alpha}\left( 1+\left\vert k\right\vert
+\left\vert Y\right\vert \right) ^{-\zeta }\log^{\eta}\left( 2+\left\vert k\right\vert
+\left\vert Y\right\vert \right) .
\end{align*}
\end{lemma}
\begin{figure}[h]
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{zeta_d=2.eps}
\caption{The case $d=2$.}
\end{subfigure}\qquad
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{zeta_d_2.eps}
\caption{The case $d>2$.}
\end{subfigure}
\caption{The value of $\zeta$ as a function of $\beta$ and $\alpha$.}
\end{figure}
\begin{proof}
Let us explain the numerology behind the lemma. If there is no cutoff $\left(
1+\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \right)
^{-\beta}$, then the change of variables $x=\left\vert k\right\vert z$ and
$k=\left\vert k\right\vert \omega$ gives
\[%
{\displaystyle\int_{\mathbb{R}^{d}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha
}dx=\left\vert k\right\vert ^{d-2\alpha}%
{\displaystyle\int_{\mathbb{R}^{d}}}
\left\vert z\right\vert ^{-\alpha}\left\vert \omega-z\right\vert ^{-\alpha
}dz=C\left\vert k\right\vert ^{d-2\alpha}.
\]
On the other hand, the cutoff $\left( 1+\left\vert g\left( x\right)
+g\left( k-x\right) -Y\right\vert \right) ^{-\beta}$ gives an extra decay.
In particular, the integral with the cutoff $\left( 1+\left\vert g\left(
x\right) +g\left( k-x\right) -Y\right\vert \right) ^{-\beta}$ with $\beta$
large is essentially over the $d-1$ dimensional set $\left\{ g\left( x\right) +g\left(
k-x\right) =Y\right\} $, that is, the cutoff reduces the space dimension by
$1$. Hence, at least when $\beta$ is large, the integral with the
cutoff can be seen as the convolution in $\mathbb{R}^{d-1}$ of two homogeneous
functions of degree $-\alpha$, and this suggests the decay $\left\vert
k\right\vert ^{d-1-2\alpha}$. Hence, since for $\beta=0$ the decay is $\left\vert
k\right\vert ^{d-2\alpha}$ and for $\beta>1$ the decay is $\left\vert
k\right\vert ^{d-1-2\alpha}$, interpolation suggests that for $0<\beta<1$ the
decay is $\left\vert k\right\vert ^{d-\beta-2\alpha}$. This is only a rough
heuristic: the parameter $Y$ also enters into play, and the details of the proof
are more delicate.
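As a simple illustration, when $\Omega$ is the unit ball one has $g\left(
x\right) =\left\vert x\right\vert $, and the level set
\[
\left\{ x:g\left( x\right) +g\left( k-x\right) =Y\right\} =\left\{
x:\left\vert x\right\vert +\left\vert k-x\right\vert =Y\right\}
\]
is an ellipsoid with foci $0$ and $k$; when $\beta$ is large, the cutoff
essentially restricts the integration to a unit neighborhood of this $d-1$
dimensional surface.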
For every $Y$ and $k$ one has
\begin{align*}
&
{\displaystyle\int_{\mathbb{R}^{d}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \right)
^{-\beta}dx\\
& \leq%
{\displaystyle\int_{\mathbb{R}^{d}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha
}dx=C\left\vert k\right\vert ^{d-2\alpha}.
\end{align*}
Assume $\left\vert k\right\vert +\left\vert Y\right\vert \geq1.$ Since
$c\left\vert x\right\vert \leq g\left( x\right) \leq C\left\vert
x\right\vert $, one has
\[
c\left( \left\vert k\right\vert +\left\vert x\right\vert \right) \leq
g\left( x\right) +g\left( k-x\right) \leq C\left( \left\vert k\right\vert
+\left\vert x\right\vert \right) .
\]
Hence, if $-\infty<Y\leq\varepsilon\left\vert k\right\vert $ for a small
enough $\varepsilon>0$, one also has
\[
\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \geq
C\left( \left\vert k\right\vert +\left\vert Y\right\vert \right) .
\]
Hence, in the case $-\infty<Y\leq\varepsilon\left\vert k\right\vert $,
\begin{align*}
&
{\displaystyle\int_{\mathbb{R}^{d}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \right)
^{-\beta}dx\\
& \leq C\left( \left\vert k\right\vert +\left\vert Y\right\vert \right)
^{-\beta}%
{\displaystyle\int_{\mathbb{R}^{d}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}dx\leq
C\left\vert k\right\vert ^{d-2\alpha}\left( \left\vert k\right\vert
+\left\vert Y\right\vert \right) ^{-\beta}.
\end{align*}
Assume now $\left\vert k\right\vert +\left\vert Y\right\vert \geq1$ and
$Y\geq\varepsilon\left\vert k\right\vert $.
Let us split the integral into the three sets
$\left\{ \left\vert x\right\vert +\left\vert k\right\vert \leq\varepsilon
Y\right\} $, $\left\{ \varepsilon Y\leq\left\vert x\right\vert +\left\vert
k\right\vert \leq\delta Y\right\} $ and $\left\{ \delta Y\leq\left\vert
x\right\vert +\left\vert k\right\vert <+\infty\right\} $, with $\varepsilon$
small and $\delta$ large. In $\left\{ \left\vert x\right\vert +\left\vert
k\right\vert \leq\varepsilon Y\right\} $ one has
\[
\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \geq CY.
\]
Hence%
\begin{align*}
&
{\displaystyle\int_{ \left\vert x\right\vert +\left\vert k\right\vert
\leq\varepsilon Y }}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \right)
^{-\beta}dx\\
& \leq CY^{-\beta}%
{\displaystyle\int_{\mathbb{R}^{d}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}dx\leq
CY^{-\beta}\left\vert k\right\vert ^{d-2\alpha}\leq C\left\vert k\right\vert
^{d-2\alpha}\left( \left\vert k\right\vert +\left\vert Y\right\vert \right)
^{-\beta}.
\end{align*}
In $\left\{ \delta Y\leq\left\vert x\right\vert +\left\vert k\right\vert
<+\infty\right\} $ one has
\[
\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \geq
C\left( \left\vert k\right\vert +\left\vert x\right\vert \right) .
\]
Hence%
\begin{align*}
&
{\displaystyle\int_{ \delta Y\leq\left\vert x\right\vert +\left\vert
k\right\vert <+\infty }}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \right)
^{-\beta}dx\\
& \leq C%
{\displaystyle\int_{ \delta Y\leq\left\vert x\right\vert +\left\vert
k\right\vert <+\infty }}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
\left\vert k\right\vert +\left\vert x\right\vert \right) ^{-\beta}dx\\
& \leq CY^{-\beta}%
{\displaystyle\int_{\mathbb{R}^{d}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}dx\\
& \leq C\left\vert k\right\vert ^{d-2\alpha}\left( \left\vert k\right\vert
+\left\vert Y\right\vert \right) ^{-\beta}.
\end{align*}
It remains to estimate the integral over the spherical shell
\[
\left\{ \varepsilon Y\leq\left\vert x\right\vert +\left\vert k\right\vert
\leq\delta Y\right\} \subseteq\left\{
\begin{array}
[c]{ll}%
\left\{ \left\vert x\right\vert \leq\delta Y\right\} & \text{if }%
Y\leq4\left\vert k\right\vert /\varepsilon,\\
\left\{ \varepsilon Y/2\leq\left\vert x\right\vert \leq\delta Y\right\} &
\text{if }Y\geq4\left\vert k\right\vert /\varepsilon.
\end{array}
\right.
\]
Recall that $Y\geq\varepsilon\left\vert k\right\vert $. Hence if
$\varepsilon\left\vert k\right\vert \leq Y\leq4\left\vert k\right\vert
/\varepsilon$, then%
\begin{align*}
&
{\displaystyle\int_{ \left\vert x\right\vert \leq\delta Y }}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \right)
^{-\beta}dx\\
& \leq%
{\displaystyle\int_{\left\{ \left\vert x-k/2\right\vert \leq\delta
Y+\left\vert k\right\vert /2\right\} \cap\left\{ \left\vert x\right\vert
\leq\left\vert k-x\right\vert \right\} }}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \right)
^{-\beta}dx\\
& +%
{\displaystyle\int_{\left\{ \left\vert x-k/2\right\vert \leq\delta
Y+\left\vert k\right\vert /2\right\} \cap\left\{ \left\vert x\right\vert
\geq\left\vert k-x\right\vert \right\} }}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \right)
^{-\beta}dx\\
& \leq C\left\vert k\right\vert ^{-\alpha}%
{\displaystyle\int_{ \left\vert x\right\vert \leq C\delta Y}}
\left\vert x\right\vert ^{-\alpha}\left( 1+\left\vert g\left( x\right)
+g\left( k-x\right) -Y\right\vert \right) ^{-\beta}dx.
\end{align*}
Similarly, if $Y\geq4\left\vert k\right\vert /\varepsilon$, then%
\begin{align*}
&
{\displaystyle\int_{ \varepsilon Y/2\leq\left\vert x\right\vert
\leq\delta Y }}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \right)
^{-\beta}dx\\
& \leq C%
{\displaystyle\int_{ \varepsilon Y/2\leq\left\vert x\right\vert
\leq\delta Y }}
\left\vert x\right\vert ^{-2\alpha}\left( 1+\left\vert g\left( x\right)
+g\left( k-x\right) -Y\right\vert \right) ^{-\beta}dx.
\end{align*}
In polar coordinates $x=\rho\vartheta$ with $\rho\geq0$ and $\left\vert
\vartheta\right\vert =1$ the first integral takes the form%
\begin{align*}
& \left\vert k\right\vert ^{-\alpha}%
{\displaystyle\int_{ \left\vert x\right\vert \leq C\delta Y}}
\left\vert x\right\vert ^{-\alpha}\left( 1+\left\vert g\left( x\right)
+g\left( k-x\right) -Y\right\vert \right) ^{-\beta}dx\\
& =\left\vert k\right\vert ^{-\alpha}\int_{ \left\vert \vartheta
\right\vert =1 }\int_{0}^{C\delta Y}\rho^{d-\alpha-1}\left(
1+\left\vert g\left( \rho\vartheta\right) +g\left( k-\rho\vartheta\right)
-Y\right\vert \right) ^{-\beta}d\rho d\vartheta.
\end{align*}
Similarly, the second integral takes the form%
\begin{align*}
&
{\displaystyle\int_{ \varepsilon Y/2\leq\left\vert x\right\vert
\leq\delta Y }}
\left\vert x\right\vert ^{-2\alpha}\left( 1+\left\vert g\left( x\right)
+g\left( k-x\right) -Y\right\vert \right) ^{-\beta}dx\\
& =\int_{ \left\vert \vartheta\right\vert =1 }%
\int_{\varepsilon Y/2}^{\delta Y}\rho^{d-2\alpha-1}\left( 1+\left\vert
g\left( \rho\vartheta\right) +g\left( k-\rho\vartheta\right) -Y\right\vert
\right) ^{-\beta}d\rho d\vartheta.
\end{align*}
By Lemma \ref{lm_mu}, the first integral can be bounded by%
\begin{align*}
& \left\vert k\right\vert ^{-\alpha}\int_{ \left\vert \vartheta
\right\vert =1 }\int_{0}^{C\delta Y}\rho^{d-\alpha-1}\left(
1+\left\vert g\left( \rho\vartheta\right) +g\left( k-\rho\vartheta\right)
-Y\right\vert \right) ^{-\beta}d\rho d\vartheta\\
& \leq C\left\vert k\right\vert ^{-\alpha}Y^{d-\alpha-\min\{
1,\beta,d-\alpha,\left( d-1\right) /2\} }\log^{\sigma_{1}+\varsigma_{1}%
}\left( 2+Y\right) \\
& \leq C\left\vert k\right\vert ^{d-2\alpha}Y^{-\min\{ 1,\beta
,d-\alpha,\left( d-1\right) /2\} }\log^{\sigma_{1}+\varsigma_{1}}\left(
2+Y\right) ,
\end{align*}
where
\begin{align*}
\varsigma_{1} & =\left\{
\begin{array}
[c]{ll}%
1 & \text{if }\min\left\{ 1,\beta,d-\alpha\right\} =\left( d-1\right) /2\\
0 & \text{else.}%
\end{array}
\right. \\
\sigma_{1} & =\left\{
\begin{array}
[c]{ll}%
1 & \text{if }\beta=1\text{ and }d-\alpha\geq1,\\
1 & \text{if }\beta=d-\alpha\leq1,\\
0 & \text{else.}%
\end{array}
\right.
\end{align*}
Again by Lemma \ref{lm_mu}, the second integral can be bounded by%
\begin{align*}
& \int_{ \left\vert \vartheta\right\vert =1 }\int%
_{\varepsilon Y/2}^{\delta Y}\rho^{d-2\alpha-1}\left( 1+\left\vert g\left(
\rho\vartheta\right) +g\left( k-\rho\vartheta\right) -Y\right\vert \right)
^{-\beta}d\rho d\vartheta\\
& \leq CY^{d-2\alpha-1}\int_{ \left\vert \vartheta\right\vert
=1 }\int_{0}^{\delta Y}\left( 1+\left\vert g\left( \rho
\vartheta\right) +g\left( k-\rho\vartheta\right) -Y\right\vert \right)
^{-\beta}d\rho d\vartheta\\
& \leq CY^{d-2\alpha-\min\{ 1,\beta,\left( d-1\right) /2\} }%
\log^{\sigma_{2}+\varsigma_{2}}\left( 2+Y\right) \\
& \leq C\left\vert k\right\vert ^{d-2\alpha}Y^{-\min\{ 1,\beta,\left(
d-1\right) /2\} }\log^{\sigma_{2}+\varsigma_{2}}\left( 2+Y\right)
\end{align*}
where%
\begin{align*}
\varsigma_{2} & =\left\{
\begin{array}
[c]{ll}%
1 & \text{if }\min\left\{ 1,\beta\right\} =\left( d-1\right) /2\\
0 & \text{else.}%
\end{array}
\right. \\
\sigma_{2} & =\left\{
\begin{array}
[c]{ll}%
1 & \text{if }\beta=1,\\
0 & \text{else.}%
\end{array}
\right.
\end{align*}
It is easy to show that when $d\geq3$ then $\varsigma_{1}+\sigma_{1}\leq\eta$, and
that when $d\geq3$ and $\alpha\leq d-1$, then $\varsigma_{2}+\sigma_{2}\leq\eta$.
When $d\geq3$ and $\alpha>d-1$ one has $\zeta\leq d-\alpha<\min\left\{
1,\left( d-1\right) /2\right\} $, and the bound for the second integral is at
least as good as the one required.
This shows the lemma in the case $d\geq3$. The case $d=2$ is more delicate.
First observe that the integral
\[%
{\displaystyle\int_{\mathbb{R}^{d}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \right)
^{-\beta}dx
\]
is a decreasing function of the variable $\beta$. It follows that for
$1/2<\beta<1$ and $\alpha=2-\beta$, one can take some $\tilde{\beta}\in\left(
1/2,\beta\right) $ and obtain%
\begin{align*}
&
{\displaystyle\int_{\mathbb{R}^{2}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \right)
^{-\beta}dx\\
& \leq%
{\displaystyle\int_{\mathbb{R}^{2}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert g\left( x\right) +g\left( k-x\right) -Y\right\vert \right)
^{-\tilde{\beta}}dx\\
& \leq C\left\vert k\right\vert ^{2-2\alpha}Y^{-\min\{ 1,\tilde{\beta
},2-\alpha,1/2\} }\leq C\left\vert k\right\vert ^{2-2\alpha}%
Y^{-\min\{ 1,\beta,2-\alpha,1/2\} }\\
& \leq C\left\vert k\right\vert ^{2-2\alpha}Y^{-1/2}.
\end{align*}
This shows that for $d=2$ and $1/2<\beta<1$ and $\alpha=2-\beta$ one can
indeed take $\eta=0.$ A similar argument shows that one can take $\eta=0$
also when $\beta=1$ and $1<\alpha<3/2$.
In the remaining cases, it is easy to show that $\varsigma
_{1}+\sigma_{1}\leq\eta$, and that when $\alpha\leq3/2$, then $\varsigma_{2}%
+\sigma_{2}\leq\eta$.
\end{proof}
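\begin{remark}
As a consistency check, when $\beta=0$ one has $\zeta=0$ and $\eta=0$, and the
lemma reduces to the estimate
\[%
{\displaystyle\int_{\mathbb{R}^{d}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha
}dx=C\left\vert k\right\vert ^{d-2\alpha}
\]
recalled at the beginning of the proof.
\end{remark}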
\begin{lemma}\label{lm_p=2}
Let $z_2=d/2$. If $\mathrm{Re}\left( z\right) \geq z_2$, there exists $C>0$ such that for every $R\ge1$ and $0<\delta<1/2$,
\[
\int_{\mathbb{R}} \int_{\mathbb{T}^{d}}|\Phi\left( \delta,z,r,x\right) | ^{2}dxd\mu(r-R)\leq
\begin{cases}
C & \text{if }\mathrm{Re}\left( z\right) >z_2,\\
C\log\left( 1/\delta\right) & \text{if }\mathrm{Re}\left( z\right)=z_2.
\end{cases}
\]
\end{lemma}
\begin{proof}
By the Plancherel formula applied to $\Phi\left( \delta,z,r,x\right) $
as a function of the variable $x$,
\begin{align*}
& \int_{\mathbb{R}}\int_{\mathbb{T}^{d}}| \Phi\left(\delta,z,r,x\right) | ^{2}dxd\mu(r-R)\\
& \leq C\sum_{j=0}^h\sum_{m\in\mathbb{Z}^{d},\,m\neq0}| \widehat{\varphi}\left( \delta m\right) |^{2}| m| ^{-2\mathrm{Re}\left( z\right)-2j }\int_{\mathbb{R}} r^{-2j}d\mu(r-R).
\end{align*}
Observe that if $R>1$ then the singularity of $r^{-j}$ is outside the support of $d\mu(r-R)$, and recall that
$\widehat\varphi(r)$ has fast decay at infinity. Hence for every $\lambda>0$,
\[
\int_{\mathbb{R}}\int_{\mathbb{T}^{d}}| \Phi\left(\delta,z,r,x\right) | ^{2}dxd\mu(r-R) \leq C\sum_{m\in\mathbb{Z}^{d},\,m\neq0}\left( 1+\delta|m| \right) ^{-\lambda}| m| ^{-2\mathrm{Re}\left( z\right) }
\]
and the lemma follows.
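Indeed, comparing the sum with an integral,
\[
\sum_{m\in\mathbb{Z}^{d},\,m\neq0}\left( 1+\delta\left\vert m\right\vert
\right) ^{-\lambda}\left\vert m\right\vert ^{-2\mathrm{Re}\left( z\right)
}\leq C\int_{1}^{+\infty}\left( 1+\delta t\right) ^{-\lambda}t^{d-1-2\mathrm{Re}%
\left( z\right) }dt,
\]
and the last integral is bounded when $\mathrm{Re}\left( z\right) >d/2$, and of
the order of $\log\left( 1/\delta\right) $ when $\mathrm{Re}\left( z\right)
=d/2$.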
\end{proof}
The following lemma is an estimate of the $L^p $ norms of the function $\Phi\left( \delta,z,r,x\right) $ when $p=4$ and the space dimension $d\geq2$.
In dimension $d=2$ there is a second relevant exponent $p=6$, and this will be considered later.
\begin{figure}[h]
\begin{subfigure}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{kappa_convesso_d=2.eps}
\caption{The case $d=2$ with $z_4$ (bottom) and $z_6$ (top).}
\end{subfigure}\qquad
\begin{subfigure}[t]{0.47 \textwidth}
\includegraphics[width=\textwidth]{kappa_convesso_d_2_normale.eps}
\caption{The case $d>2$ with $z_2$ (bottom) and $z_4$ (top).}
\end{subfigure}
\caption{The minimal values of $\alpha=\mathrm{Re}(z)$ as a function of $\beta$.}\label{F3}
\end{figure}
\begin{lemma}
\label{lm_p=4}
Let $z\in\mathbb C$.
Define $\nu=\min \{1,(d-1)/2\}$ and let $\eta=\eta(d,\mathrm{Re}(z),\beta)$ be defined as in Lemma \ref{integral}.
Let $z_4=\max\{(3d-\beta)/4,\,(3d-\nu)/4\}$. If $\mathrm{Re}(z) \geq z_4$, then there exists $C>0$ such that for every $R\geq 1$ and $0<\delta<1/2$,
\begin{align*}
\int_{\mathbb{R}} \int_{\mathbb{T}^{d}}|\Phi\left( \delta,z,r,x\right) | ^{4}dxd\mu(r-R)
\leq
\begin{cases}
C & \text{if }\mathrm{Re}\left( z\right) >z_4,\\
C\log^{\eta+1}\left( 1/\delta\right) & \text{if }\mathrm{Re}\left( z\right)=z_4.\\
\end{cases}
\end{align*}
\end{lemma}
\begin{proof}
Call $\alpha=\mathrm{Re}\left( z\right)$.
By the above Lemma \ref{lm_6_integral_ellipsoid} with $N=2$, it suffices to estimate
\begin{align*}
& \underset{\mathbb{R}^{d}}\int\left( 1+\delta| k| \right)^{-\lambda}\underset{| m| ,| k-m|>1}\int| m| ^{-\alpha}| k-m| ^{-\alpha}\\
& \times\underset{{| n| ,| k-n|>1}}\int| n| ^{-\alpha}| k-n| ^{-\alpha}\left( 1+| g( m) +g( k-m)-g(n) -g( k-n) | \right)^{-\beta}dn dm dk.
\end{align*}
Notice that we have dropped all the cutoff functions in the variables $m$, $k-m$, $n$, $k-n$.
The integral over the set $\left\{ | k| \leq2\right\} $ is bounded by
\begin{align*}
& \int_{ | k| \leq2 }\int_{\substack{| m| ,| k-m| >1}}|m| ^{-\alpha}| k-m| ^{-\alpha}\int_{\substack{| n| ,| k-n| >1}}|n| ^{-\alpha}| k-n| ^{-\alpha}dn dm dk\\
& \leq \int_{| k| \leq2} {dk} \int_{\substack{|m| >1}}| m| ^{-2\alpha}dm \int_{\substack{|n| >1}}| n| ^{-2\alpha}dn
\leq C.
\end{align*}
Let us now consider the integral over the set $\left\{ |k| \geq2\right\} $,
\begin{align*}
& \underset{|k|\geq 2}\int\left( 1+\delta| k| \right)^{-\lambda}\underset{{| m| ,| k-m|>1}}\int| m| ^{-\alpha}| k-m| ^{-\alpha}\\
& \times\underset{{| n| ,| k-n|>1}}\int| n| ^{-\alpha}| k-n| ^{-\alpha}\left( 1+| g( m) +g( k-m)-g(n) -g( k-n) | \right)^{-\beta}dn dm dk.
\end{align*}
By Lemma \ref{integral}, the inner integral
\[
\int_{\substack{| n| ,| k-n|>1}}| n| ^{-\alpha}| k-n| ^{-\alpha}\left( 1+| g( m) +g( k-m)-g(n) -g( k-n) | \right)^{-\beta}dn
\]
is bounded by
\begin{align*}
&C| k| ^{d-2\alpha}(1+|k|+g(m)+g(k-m))^{-\zeta}\log^{\eta}(2+|k|+g(m)+g(k-m))\\
&\le C| k| ^{d-2\alpha}(1+|k|+|m|)^{-\zeta}\log^{\eta}(2+|k|+|m|).
\end{align*}
Thus, the integral over the set $\left\{ |k|\geq2\right\} $ is bounded by
\begin{align*}
& \underset{{ | k| >2} }\int\left(1+\delta| k| \right) ^{-\lambda}| k|^{d-2\alpha}\underset{\mathbb{R}^{d}}\int| m| ^{-\alpha}| k-m| ^{-\alpha}\left( 1+| k|+| m| \right) ^{-\zeta}\log^{\eta}(|k|+|m|)dmdk\\
& \leq C\int_{\substack{ | k| >2} }\left(1+\delta| k| \right) ^{-\lambda}| k|^{d-2\alpha-\zeta}
\log^{\eta}(|k|)
\int_{|m|\leq 2| k| }| m| ^{-\alpha}| k-m| ^{-\alpha}dmdk\\
& + C\int_{ | k| >2 }\left(1+\delta| k| \right) ^{-\lambda}| k|
^{d-2\alpha}\int_{ |m|\geq 2| k| }| m|^{-2\alpha-\zeta} \log^{\eta}(|m|)dmdk\\
&\leq C\int_{ | k| >2 }\left(1+\delta| k| \right) ^{-\lambda}| k|
^{2d-4\alpha-\zeta} \log^{\eta}(|k|)dk\\
&\leq
\begin{cases}
C & \text{if }\alpha>\left( 3d-\zeta\right) /4,\\
C\log^{\eta+1}\left( 2/\delta\right) & \text{if }\alpha=\left( 3d-\zeta\right)/4.
\end{cases}
\end{align*}
The result now follows after the observation that $\alpha>(3d-\zeta)/4$ if and only if $\alpha>z_4$
and $\alpha=(3d-\zeta)/4$ if and only if $\alpha=z_4$.
\end{proof}
In the following lemma the space dimension is $d=2$.
\begin{lemma} \label{lm_p=6}
Let $d=2$ and let $z_6=\max\{(10-\beta)/6,8/5\}$.
If $\mathrm{Re}(z) \geq z_6$, then there exists $C>0$ such that for every $R\geq 1$ and $0<\delta<1/2$,
\begin{align*}
\int_{\mathbb{R}} \int_{\mathbb{T}^{2}}|\Phi\left( \delta,z,r,x\right) | ^{6}dxd\mu(r-R)
\leq
\begin{cases}
C & \text{if }\mathrm{Re}\left( z\right) >z_6,\\
C\log\left( 1/\delta\right) & \text{if }\mathrm{Re}\left( z\right)=z_6 \text { and }
\beta\neq 2/5,\\
C\log^2\left( 1/\delta\right) & \text{if }\mathrm{Re}\left( z\right)=z_6 \text { and }
\beta= 2/5.\\
\end{cases}
\end{align*}
\end{lemma}
\proof
Call $\alpha=\mathrm{Re}\left( z\right)$. By Lemma \ref{lm_6_integral_ellipsoid} with $N=3$, it suffices to estimate
\begin{align*}
& \int_{\mathbb{R}^{2}} \left( 1+\delta| k| \right)^{-\lambda} \underset{|m_1| ,|m_2|,| k-m_1-m_2| >1}{\int\int}|m_1|^{-\alpha}|m_2|^{-\alpha}|k-m_1-m_2|^{-\alpha}\\
& \times \underset{|n_1| ,|n_2|,| k-n_1-n_2| >1}{\int\int}|n_1|^{-\alpha}|n_2|^{-\alpha}|k-n_1-n_2|^{-\alpha}\\
& \times\left( 1+| g(m_1) +g(m_2)+g(k-m_1-m_2) -g(n_1)-g(n_2)-g(k-n_1-n_2) |\right) ^{-\beta}\\
& \times dn_1 dn_2 dm_1 dm_2 dk.
\end{align*}
Split $\mathbb{R}^{2}$ as $\left\{|k| \leq2\right\} \cup\left\{ | k|
\geq2\right\} $. The integral over the disc $\left\{ | k| \leq2\right\} $ is bounded by
\begin{align*}
& \int_{ | k| \leq2 }\left(\int_{|m_1|>1}| m_1| ^{-\alpha}\int_{\mathbb{R}^{2}}| m_2| ^{-\alpha}| k-m_1-m_2| ^{-\alpha}dm_2dm_1\right) ^{2}dk\\
& =C\int_{ | k| \leq2 }\left(\int_{|m_1|>1}| m_1| ^{-\alpha}| k-m_1| ^{2-2\alpha}dm_1\right) ^{2}dk\\
& \leq C\int_{ | k| \leq1/2 }\left(\int_{|m_1|>1}| m_1| ^{2-3\alpha}dm_1\right) ^{2}dk
+C\int_{1/2\leq | k| \leq2 }|k| ^{8-6\alpha}dk\leq C,
\end{align*}
since $\alpha\ge z_6>4/3$.
Consider now the case $\left\{ | k| \geq2\right\} $. Apply Lemma \ref{integral} to the integral with respect to $n_2$ with $k$ replaced with $k-n_1$ and
$Y$ replaced with $g(m_1)+g(m_2)+g(k-m_1-m_2)-g(n_1)$,
\begin{align*}
& \underset{|n_1| ,|n_2|,| k-n_1-n_2| >1}{\int\int}|n_1|^{-\alpha}|n_2|^{-\alpha}|k-n_1-n_2|^{-\alpha}\\
&\times \left( 1+| g(m_1) +g(m_2)+g(k-m_1-m_2) -g(n_1)-g(n_2)-g(k-n_1-n_2) |\right) ^{-\beta}\\
&\times dn_2dn_1 \\
& \leq C\int_{\mathbb R^2}|n_1|^{-\alpha}|k-n_1|^{2-2\alpha} \\
& \times\left( 1+ |k-n_1|+ |g(m_1) +g(m_2)+g(k-m_1-m_2) -g(n_1) |\right) ^{-\zeta}\\
& \times \log^\eta(2+ |k-n_1|+ |g(m_1) +g(m_2)+g(k-m_1-m_2) -g(n_1)|)dn_1\\
& \leq C\int_{\mathbb R^2}|n_1|^{-\alpha}|k-n_1|^{2-2\alpha}
\left(1+|k-n_1| \right)^{-\zeta}
\log^\eta(2+|k-n_1|)
dn_1\\
& \leq C|k|^{4-3\alpha-\zeta}\log^\eta(|k|).
\end{align*}
Here $\eta=\eta(2,\alpha,\beta)$ as defined in Lemma \ref{integral}.
Moreover,
\begin{align*}
&\int_{\mathbb{R}^{2}}| m_1| ^{-\alpha}\int_{\mathbb{R}^{2}}| m_2| ^{-\alpha}| k-m_1-m_2| ^{-\alpha}dm_2dm_1\\
&=C\int_{\mathbb{R}^{2}}| m_1| ^{-\alpha}|k-m_1| ^{2-2\alpha}dm_1=C| k| ^{4-3\alpha}.
\end{align*}
Finally, the integral over $\left\{ | k| \geq2\right\} $ gives
$$\int_{\substack{ | k| \geq2} }(1+\delta|k|)^{-\lambda}|k| ^{8-6\alpha-\zeta}\log^\eta(|k|)dk\leq
\begin{cases}
C & \text{ if }\alpha>(10-\zeta)/6,\\
C\log^{\eta+1}(1/\delta) & \text{ if }\alpha=(10-\zeta)/6.
\end{cases}$$
The result now follows after the observation that $\alpha>(10-\zeta)/6$ if and only if $\alpha>z_6$
and $\alpha=(10-\zeta)/6$ if and only if $\alpha=z_6$.
\endproof
\begin{lemma} \label{lm_Interpolation}
The notation is as in the previous lemmas.
\begin{itemize}
\item[(1)] Let $d=3$. If $\beta<1$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq 2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<3+\beta/(2-\beta),\\
C\log^{1/p}\left( 1/\delta\right) & \text{if }p=3+\beta/(2-\beta).
\end{cases}
\end{align*}
If $\beta=1$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq 2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<4,\\
C\log^{3/4}\left( 1/\delta\right) & \text{if }p=4.
\end{cases}
\end{align*}
If $\beta>1$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq 2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<4,\\
C\log^{1/2}\left( 1/\delta\right) & \text{if }p=4.
\end{cases}
\end{align*}
\item[(2)] Let $d\ge4$. If $\beta<1$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq (d+1)/2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
& \left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}\\
& \leq
\begin{cases}
C & \text{if }p<2(d-\beta)/(d-\beta-1),\\
C\log^{1/p}\left( 1/\delta\right) & \text{if }p=2(d-\beta)/(d-\beta-1).
\end{cases}
\end{align*}
If $\beta=1$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq (d+1)/2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
&\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}\\
&\leq
\begin{cases}
C & \text{if }p<2(d-1)/(d-2),\\
C\log^{1/2}\left( 1/\delta\right) & \text{if }p=2(d-1)/(d-2).
\end{cases}
\end{align*}
If $\beta>1$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq (d+1)/2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
&\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}\\
&\leq
\begin{cases}
C & \text{if }p<2(d-1)/(d-2),\\
C\log^{1/p}\left( 1/\delta\right) & \text{if }p=2(d-1)/(d-2).
\end{cases}
\end{align*}
\end{itemize}
\end{lemma}
\begin{proof}
It is enough to prove the result for $z=(d+1)/2$.
The Lemma follows
via complex interpolation. For the definition of the complex interpolation method and the complex interpolation of $L^p $ spaces, see for example \cite[Chapters 4 and 5]{BL}. Here we recall the relevant result: Let $\mathbb{X}$ be a measure space, $1\leq a<b\leq+\infty$, $-\infty<A<B<+\infty$, and let $\Phi\left( z\right) $ be a function with values in $L^{a}\left( \mathbb{X}\right) +L^{b}\left( \mathbb{X}\right) $, continuous and bounded on the closed strip $\left\{ A\leq\mathrm{Re}\left( z\right) \leq B\right\} $ and analytic on the open strip $\left\{A<\mathrm{Re}\left( z\right) <B\right\} $. Assume that there exist constants $H$ and $K$ such that for every $-\infty<t<+\infty$,
$$
\begin{cases}
\left\Vert \Phi\left( A+it\right) \right\Vert _{L^{a}\left( \mathbb{X}\right) }\leq H,\\
\left\Vert \Phi\left( B+it\right) \right\Vert _{L^{b}\left( \mathbb{X}\right) }\leq K.
\end{cases}
$$
If $1/p=\left( 1-\vartheta\right) /a+\vartheta/b$, with $0<\vartheta<1$, then
$$\left\Vert \Phi\left( \left( 1-\vartheta\right) A+\vartheta B\right)\right\Vert _{L^{p}\left( \mathbb{X}\right) }\leq H^{1-\vartheta}K^{\vartheta}.$$
In our case, the analytic function is $\Phi\left( \delta,z,r,x\right) $, the measure space is $\mathbb{R\times T}^{d}$ with measure $d\mu(r-R)dx$, $a=2$, $b=4$, $A=z_2+\varepsilon$, $B=z_4+\varepsilon$, with $\varepsilon\geq0$. The norms $H$ and $K$ are given in Lemma \ref{lm_p=2} and Lemma \ref{lm_p=4}.
Set
$$\frac{d+1}{2}=\left( 1-\vartheta\right) A+\vartheta B.$$
This gives
$$\vartheta=\frac{d+1-2A}{2B-2A},$$
and
$$p=\frac{2(d-\min\{\beta,1\}) }{d-1-\min\{\beta,1\}+2\varepsilon}.$$
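Note that at the endpoint $\varepsilon=0$ this is consistent with the exponents in the statement: for $d=3$ and $\beta<1$,
$$p=\frac{2\left( 3-\beta\right) }{2-\beta}=\frac{6-2\beta}{2-\beta}=3+\frac{\beta}{2-\beta},$$
while for $\beta\geq1$ one obtains $p=2\left( 3-1\right) /\left( 3-1-1\right) =4$, and for $d\geq4$ the formula directly gives the thresholds in the statement.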
When $\varepsilon>0$ and $p<{2(d-\min\{\beta,1\}) }/{(d-1-\min\{\beta,1\})} $,
$$\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| \Phi\left( \delta,\left( d+1\right) /2,r,x\right) |^{p}dxd\mu(r-R)\right\} ^{1/p}\leq C.$$
When $\varepsilon=0$ and $p={2(d-\min\{\beta,1\}) }/{(d-1-\min\{\beta,1\})} $,
$$\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| \Phi\left( \delta,\left( d+1\right) /2,r,x\right) |^{p}dxd\mu(r-R)\right\} ^{1/p}\leq C\log^{1/p+\eta/(2d-2\min\{\beta,1\})}\left( 1/\delta\right) ,$$
where $\eta=\eta(d,z_4,\beta)$ as in Lemma \ref{integral}.
\end{proof}
\begin{lemma} \label{lm_Interpolation_d=2}
The notation is as in the previous lemmas and let $d=2$.
\noindent If $0\leq\beta<2/5$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq 3/2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<4+2\beta,\\
C\log^{1/p}\left( 1/\delta\right) & \text{if }p=4+2\beta.
\end{cases}
\end{align*}
If $\beta=2/5$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq 3/2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<4+2\beta,\\
C\log^{1/p+1/12}\left( 1/\delta\right) & \text{if }p=4+2\beta.
\end{cases}
\end{align*}
If $2/5<\beta<1/2$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq 3/2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<4+10\beta/(3+5\beta),\\
C\log^{1/p}\left( 1/\delta\right) & \text{if }p=4+10\beta/(3+5\beta).
\end{cases}
\end{align*}
If $\beta=1/2$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq 3/2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<4+10/11,\\
C\log^{1/p+1/9}\left( 1/\delta\right) & \text{if }p=4+10/11.
\end{cases}
\end{align*}
If $\beta>1/2$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq 3/2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<4+10/11,\\
C\log^{1/p}\left( 1/\delta\right) & \text{if }p=4+10/11.
\end{cases}
\end{align*}
\end{lemma}
\begin{proof}
Again, it is enough to prove the result for $z=3/2$. The case $\beta=0$ is contained in Lemma \ref{lm_p=4}. If $\beta>0$
the proof follows by complex interpolation with $a=4$, $b=6$, $A=z_4+\varepsilon$, $B=z_6+\varepsilon$, with $\varepsilon\geq0$. The norms $H$ and $K$ are given in Lemma \ref{lm_p=4} and Lemma \ref{lm_p=6}.
Set
$$\frac{3}{2}=\left( 1-\vartheta\right) A+\vartheta B.$$
This gives
$$\vartheta=\frac{3-2A}{2B-2A},$$
and
$${p}=\frac{24(z_6-z_4)}{6z_6-4z_4-3+2\varepsilon}.$$
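At the endpoint $\varepsilon=0$ this matches the exponents in the statement, with the values $z_4=\max\{(6-\beta)/4,11/8\}$ and $z_6=\max\{(10-\beta)/6,8/5\}$ from Lemmas \ref{lm_p=4} and \ref{lm_p=6}. For instance, if $0\leq\beta<2/5$ then $z_4=(6-\beta)/4$, $z_6=(10-\beta)/6$ and
$$p=\frac{24\left( (10-\beta)/6-(6-\beta)/4\right) }{(10-\beta)-(6-\beta)-3}=\frac{2\left( 2+\beta\right) }{1}=4+2\beta,$$
while if $\beta\geq1/2$ then $z_4=11/8$, $z_6=8/5$ and $p=24\cdot(9/40)/(11/10)=54/11=4+10/11$.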
When $\varepsilon>0$ and $p<{24(z_6-z_4)}/{(6z_6-4z_4-3)} $,
$$\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,3/2,r,x\right) |^{p}dxd\mu(r-R)\right\} ^{1/p}\leq C.$$
When $\varepsilon=0$ and $p={24(z_6-z_4)}/{(6z_6-4z_4-3)} $,
$$\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,3/2,r,x\right) |^{p}dxd\mu(r-R)\right\} ^{1/p}\leq C\log^{1/p+\eta(1-\vartheta)/4+(\omega-1)\vartheta/6}\left( 1/\delta\right) ,$$
where $\eta=\eta(2,z_4,\beta)$ as in Lemma \ref{integral} and $\omega$ is the exponent of the logarithm in Lemma \ref{lm_p=6}, that is
$\omega=1$ if $\beta\neq2/5$ and $\omega=2$ if $\beta=2/5$.
\end{proof}
\proof (of Theorems \ref{thm_d=2} and \ref{thm_d>2}) By Remark \ref{r1}, one has
\begin{align*}
& \left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| r^{-{(d-1)}/{2}}\mathcal{D}\left( \Omega,r,x\right) |^{p}dxd\mu(r-R)\right\} ^{1/p}\\
\le& \left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| r^{-{(d-1)}/{2}}\mathcal{D}_{\delta}\left( \Omega,r\pm\delta,x\right) |^{p}dxd\mu(r-R)\right\} ^{1/p}\\
&+C\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| r^{(d-1)/2}\delta |^{p}dxd\mu(r-R)\right\} ^{1/p}.
\end{align*}
If $\delta=R^{-\left( d-1\right) /2}$ the last term is bounded. On the other hand, by Lemma \ref{Asymptotic Discrepancy}
\begin{align*}
& \left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| r^{-{(d-1)}/{2}}\mathcal{D}_{\delta}\left( \Omega,r\pm\delta,x\right) |^{p}dxd\mu(r-R)\right\} ^{1/p}\\
\le& \left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| \Phi(\delta,(d+1)/2,r\pm\delta,x) |^{p}dxd\mu(r-R)\right\} ^{1/p}\\
& + \left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}|\mathcal{R}_h(\delta,r\pm\delta,x) |^{p}dxd\mu(r-R)\right\} ^{1/p}.
\end{align*}
If $h\geq\left( d-3\right) /2$ then the last term is bounded, while the first term can be written as
\[
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{d}}| \Phi(\delta,(d+1)/2,r,x) |^{p}dxd\mu(r-(R\pm\delta))\right\} ^{1/p}.
\]
The theorem now follows from the two previous lemmas, with $R$ replaced by $R\pm\delta$.
\endproof
\section{The case of the ellipse}
Here we assume $d=2$ and $\Omega=E=\{x\in\mathbb R^2:|M^{-1}x|\le 1\}$, where $M$ is a nonsingular
$2\times 2$ matrix. In this case the support function is $g(x)=|M^Tx|$.
By the change of variables $M^Tx=y$ applied to all the variables $n_1,\ldots,n_N,\,m_1,\ldots,m_N$, Lemma \ref{lm_6_integral_ellipsoid} can be restated as
\begin{align*}
& \int_{\mathbb{R}} \int_{\mathbb{T}^{2}} |\Phi(\delta,z,r,x)|^{2N}dxd\mu(r-R) \\
&\leq C\int_{\mathbb{R}^{2}} (1+\delta|k|)^{-\lambda}\\
&\times\int\limits_{\substack{m_1,\ldots,m_{N}\in\mathbb{R}^{2}\\ |m_1|, \ldots,|m_{N}| >1\\ m_1+\dots+m_{N}=k }} (1+\delta|m_1|)^{-\lambda}\dots(1+\delta|m_N|)^{-\lambda}|m_1|^{-\mathrm{Re}(z)}\dots|m_{N}|^{-\mathrm{Re}(z)}\\
&\times \int\limits_{\substack{n_1,\ldots,n_{N}\in\mathbb{R}^{2}\\ |n_1|, \ldots,|n_{N}| >1\\ n_1+\dots+n_{N}=k }}(1+\delta|n_1|)^{-\lambda}\dots(1+\delta|n_N|)^{-\lambda} |n_1|^{-\mathrm{Re}(z)}\dots|n_{N}|^{-\mathrm{Re}(z)} \\
&\times \left(1+| |m_1|+\dots+|m_{N}|-|n_1|-\dots-|n_{N}||\right)^{-\beta}\\
&\times d\sigma(n_1,\dots,n_{N}) d\sigma(m_1,\dots,m_{N}) dk.
\end{align*}
\begin{lemma}\label{lm_crucial}
\begin{itemize}
\item[(1)] If $\beta\ge0$ and $0<2-\alpha<\beta$, there exists $C$ such that for every $-\infty<X<+\infty$,
$$\int_{0}^{+\infty}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-\alpha}d\tau\leq
C(1+|X|)^{2-\alpha-\min\{\beta,1\}}\log^{\delta_{1,\beta}}(2+|X|).
$$
\item[(2)] If $\alpha<2$, then for every $\beta$ there exists $C$ such that for every $-\infty<X<+\infty$,
$$\int_{0}^{1}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-\alpha}(-\log\left(\tau\right)) d\tau\leq C\left( 1+| X| \right) ^{-\beta}.$$
\item[(3)] If $0\leq\beta<1$ and $2-\alpha\ge\beta$, then there exists $C$ such that for every $-\infty< X<+\infty$ and $2\leq T<+\infty$,
$$\int_{0}^{T}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-\alpha}d\tau\leq CT^{2-\alpha-\beta}\log^{\delta_{2-\alpha,\beta}}\left( T\right) .$$
\item[(4)] If $0\leq\beta<1$ and $\alpha>1$, there exists $C$ such that for every $-\infty< X<+\infty$ and $2\leq T<+\infty$,
$$\int_{T}^{+\infty}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-2\alpha}d\tau\leq CT^{2-2\alpha-\beta}.$$
\end{itemize}
\end{lemma}
\proof (1) If $X\leq0$, then
\begin{align*}
&\int_{0}^{+\infty}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-\alpha}d\tau\leq\left( 1+| X| \right) ^{-\beta}\int_{0}^{1+| X| }\tau^{1-\alpha}d\tau+\int_{1+| X| }^{+\infty}\tau^{1-\alpha-\beta}d\tau\\
&\leq C\left( 1+| X| \right)^{2-\alpha-\beta}.
\end{align*}
If $0\leq X\leq1$, then
$$\int_{0}^{+\infty}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-\alpha}d\tau\leq\int_{0}^{2}\tau^{1-\alpha}d\tau+\int_{2}^{+\infty}\tau^{1-\alpha-\beta}d\tau\leq C.$$
If $X\geq1$, then
\begin{align*}
&\int_{0}^{+\infty}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-\alpha}d\tau\\
& \leq CX^{-\beta}\int_{0}^{X/2}\tau^{1-\alpha}d\tau+C X^{1-\alpha}\int_{X/2}^{2X}\left( 1+| X-\tau| \right) ^{-\beta}d\tau+C\int_{2X}^{+\infty}\tau^{1-\alpha-\beta}d\tau\\
& \leq
\begin{cases}
CX^{2-\alpha-\beta} & \text{if }0\leq\beta<1,\\
CX^{1-\alpha}\log\left( 1+X\right) & \text{if }\beta=1,\\
CX^{1-\alpha} & \text{if }\beta>1.
\end{cases}
\end{align*}
(2) It suffices to observe that there exists $C$ such that for every $X$,
$$\max_{0\leq\tau\leq1}\left\{ \left( 1+| X-\tau| \right)^{-\beta}\right\} \leq C\left( 1+| X| \right) ^{-\beta}.$$
(3) Let $X\ge0$. If $T\leq X/2$, then
$$\int_{0}^{T}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-\alpha}d\tau
\leq C X^{-\beta} \int_{0}^{T}\tau^{1-\alpha}d\tau\leq C X^{-\beta}{T}^{2-\alpha}
.$$
If $X/2\leq T\leq2X$, then
\begin{align*}
&\int_{0}^{T}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-\alpha}d\tau\\
&\leq CX^{-\beta}\int_{0}^{X/2}\tau^{1-\alpha}d\tau+CX^{1-\alpha}\int_{X/2}^{2X}\left( 1+| X-\tau| \right) ^{-\beta}d\tau\\
&\leq CX^{2-\alpha-\beta}.
\end{align*}
If $T\geq2X$, then
\begin{align*}
&\int_{0}^{T}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-\alpha}d\tau\\
& \leq CX^{-\beta}\int_{0}^{X/2}\tau^{1-\alpha}d\tau+CX^{1-\alpha}\int_{X/2}^{2X}\left( 1+| X-\tau| \right) ^{-\beta}d\tau+C\int_{2X}^{T}\tau^{1-\alpha-\beta}d\tau\\
& \leq CT^{2-\alpha-\beta}\log^{\delta_{2-\alpha,\beta}}\left( T\right) .
\end{align*}
If $X<0$ then simply observe that
\[
\int_{0}^{T}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-\alpha}d\tau
\leq
\int_{0}^{T}\left( 1+| |X|-\tau| \right) ^{-\beta}\tau^{1-\alpha}d\tau.
\]
(4) As before, it suffices to show the result for $X\ge0$. If $T\leq X/2$, then
\begin{align*}
&\int_{T}^{+\infty}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-2\alpha}d\tau\\
& \leq CX^{-\beta}\int_{T}^{X/2}\tau^{1-2\alpha}d\tau+CX^{1-2\alpha}\int_{X/2}^{2X}\left( 1+| X-\tau| \right) ^{-\beta}d\tau+C\int_{2X}^{+\infty}\tau^{1-2\alpha-\beta}d\tau\\
& \leq CX^{-\beta}T^{2-2\alpha}+CX^{2-2\alpha-\beta}\leq CT^{2-2\alpha-\beta}.
\end{align*}
If $X/2\leq T\leq2X$, then
\begin{align*}
&\int_{T}^{+\infty}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-2\alpha}d\tau\\
& \leq T^{1-2\alpha}\int_{X/2}^{2X}\left( 1+| X-\tau| \right) ^{-\beta}d\tau+C\int_{2X}^{+\infty}\tau^{1-2\alpha-\beta}d\tau\leq CT^{2-2\alpha-\beta}.
\end{align*}
If $T\geq2X$, then
$$\int_{T}^{+\infty}\left( 1+| X-\tau| \right) ^{-\beta}\tau^{1-2\alpha}d\tau\leq C\int_{T}^{+\infty}\tau^{1-2\alpha-\beta}d\tau\leq CT^{2-2\alpha-\beta}.$$
\endproof
\begin{lemma}
\label{LM_Integrale}
(1) Let $3/2\leq\alpha<2$ and $\beta>2-\alpha$.
Then there exists $C$\ such that for every
$k\in\mathbb{R}^{2} $ with $|k|\ge2$ and for every $-\infty<Y<+\infty$,
\begin{align*}
&
{\int_{\mathbb{R}^{2}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert Y-| x| -| k-x| \right\vert \right)
^{-\beta}dx\\
& \leq C
|k|^{-\alpha}\log^{\delta_{3/2,\alpha}}(|k|)(1+|Y-|k||)^{2-\alpha-\min\{1,\beta\}}\log^{\delta_{1,\beta}}(2+|Y-|k||).
\end{align*}
(2) Let $3/2<\alpha<2$ and $0\leq\beta\leq2-\alpha$.
Then there exists $C$\ such that for every
$k\in\mathbb{R}^{2} $ with $|k|\ge2$ and for every $-\infty<Y<+\infty$,
\begin{align*}
{\int_{\mathbb{R}^{2}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert Y-| x| -| k-x| \right\vert \right)
^{-\beta}dx \leq C
|k|^{2-2\alpha-\beta}\log^{\delta_{2-\alpha,\beta}}(|k|).
\end{align*}
(3) Let $\alpha=3/2$ and $\beta=1/2$.
Then there exists $C$\ such that for every
$k\in\mathbb{R}^{2} $ with $|k|\ge2$ and for every $-\infty<Y<+\infty$,
\begin{align*}
{\int_{\mathbb{R}^{2}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert Y-| x| -| k-x| \right\vert \right)
^{-\beta}dx \leq C
|k|^{-\alpha}\log^2(|k|).
\end{align*}
(4) Let $3/4<\alpha<3/2$ and $\beta\geq0$.
Then there exists $C$\ such that for every
$k\in\mathbb{R}^{2} $ with $|k|\ge2$ and for every $-\infty<Y<+\infty$,
\begin{align*}
&{\int_{\mathbb{R}^{2}}}
\left\vert x\right\vert ^{-\alpha}\left\vert k-x\right\vert ^{-\alpha}\left(
1+\left\vert Y-| x| -| k-x| \right\vert \right)
^{-\beta}dx \\
\leq&
\begin{cases}
C|k|^{2-2\alpha-\beta} & \text { if } 0\leq \beta \leq 1/2,\\
C|k|^{-2\alpha+3/2}(1+|Y-|k||)^{1/2-\min\{1,\beta\}}\log^{\delta_{1,\beta}}(2+|Y-|k||) & \text { if } 1/2< \beta.\\
\end{cases}
\end{align*}
(5) Let $\alpha>1$. Then for every $k\in \mathbb R^2$
\[
\int_{|x|,|x-k|>1}|x|^{-\alpha}|x-k|^{-\alpha}dx\le\pi(\alpha-1)^{-1}.
\]
\end{lemma}
\proof The symmetry between $0$ and $k$ gives
\begin{align*}
& \int_{\mathbb{R}^{2}} | x| ^{-\alpha}| k-x| ^{-\alpha}\left(1+\left| Y-| x| -| k-x| \right|\right) ^{-\beta}dx\\
&= 2\int_{\left\{ | x| +|k-x| \leq3| k| ,\ | x|\leq| k-x| \right\} } | x| ^{-\alpha}| k-x| ^{-\alpha}\left(1+\left| Y-| x| -| k-x| \right|\right) ^{-\beta}dx\\
&+ 2\int_{ | x| +|k-x| \geq3| k| ,\ | x|\leq| k-x| }| x| ^{-\alpha}| k-x| ^{-\alpha}\left(1+\left| Y-| x| -| k-x| \right|\right) ^{-\beta}dx\\
&\leq C| k| ^{-\alpha} \int_{ | x| +|k-x| \leq3| k| }| x| ^{-\alpha}\left( 1+\left| Y-|x| -| k-x| \right| \right) ^{-\beta}dx\\
&+ C\int_{ | x| +|k-x| \geq3| k| }| x| ^{-2\alpha}\left( 1+\left| Y-|x| -| k-x| \right| \right) ^{-\beta}dx.
\end{align*}
We estimate here the first integral; the second one can be treated similarly.
The integral is invariant under rotations of $k$, so that one can assume $k=\left( | k| ,0\right) $.
Write in polar coordinates $x=\left( \rho\cos\left( \vartheta\right) ,\rho\sin\left(\vartheta\right) \right) $, with $0\leq\rho<+\infty$, $0\leq\vartheta\leq2\pi$.
In these polar coordinates the ellipse $\left\{| x| +| k-x| =\tau\right\} $ has equation
$$\rho=\frac{\tau^{2}-| k| ^{2}}{2\left( \tau-|k| \cos\left( \vartheta\right) \right) }.$$
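Indeed, on this ellipse $| x| +| k-x| =\tau$ with $| x| =\rho$, so that squaring $| k-x| =\tau-\rho$ gives
$$\left( \tau-\rho\right) ^{2}=\rho^{2}-2\rho| k| \cos\left( \vartheta\right) +| k| ^{2},$$
and solving for $\rho$ yields the stated equation.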
In the variables $\left( \tau,\vartheta\right) $, $|k| \leq\tau<+\infty$, $0\leq\vartheta\leq2\pi$, one has
$$\frac{d\rho}{d\tau} =\frac{\tau^{2}-2| k| \tau\cos\left( \vartheta\right) +| k| ^{2}}{2\left(\tau-| k| \cos\left( \vartheta\right) \right) ^{2}},$$
and
\begin{align*}
dx =\rho d\rho d\vartheta = \frac{\tau^{2}-| k| ^{2}}{2\left(\tau-| k| \cos\left( \vartheta\right) \right)} \frac{\tau^{2}-2| k| \tau\cos\left( \vartheta\right) +| k| ^{2}}{2\left( \tau-| k|\cos\left( \vartheta\right) \right) ^{2}} d\tau d\vartheta.
\end{align*}
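The expression for $d\rho/d\tau$ is a direct quotient rule computation: the numerator equals
$$4\tau\left( \tau-| k| \cos\left( \vartheta\right) \right) -2\left( \tau^{2}-| k| ^{2}\right) =2\left( \tau^{2}-2| k| \tau\cos\left( \vartheta\right) +| k| ^{2}\right) ,$$
and division by $4\left( \tau-| k| \cos\left( \vartheta\right) \right) ^{2}$ gives the stated formula.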
Hence,
\begin{align*}
& \int_{ | x| +|k-x| \leq3| k| }\left( 1+\left| Y-| x| -| k-x|\right| \right) ^{-\beta}| x| ^{-\alpha}dx\\
&= \int_{| k| }^{3| k| }\int_{0}^{2\pi}\left( 1+| Y-\tau| \right) ^{-\beta}\left( \frac{\tau^{2}-| k| ^{2}}{2\left( \tau-| k|\cos\left( \vartheta\right) \right) }\right) ^{1-\alpha}
\frac{\tau^{2}-2| k| \tau\cos\left( \vartheta\right) +| k| ^{2}}{2\left( \tau-| k|\cos\left( \vartheta\right) \right) ^{2}} d\tau d\vartheta \\
&= 2^{\alpha-2}\int_{| k| }^{3| k| }\left( 1+| Y-\tau| \right) ^{-\beta}\left( \tau-|k| \right) ^{1-\alpha}\left( 1+\left( | k|/\tau\right) \right) ^{1-\alpha}\\
& \times\int_{0}^{2\pi}\left( 1-2\left( | k| /\tau\right) \cos\left(\vartheta\right) +\left( | k| /\tau\right) ^{2}\right)\left( 1-\left( | k| /\tau\right) \cos\left(\vartheta\right) \right) ^{\alpha-3}d\vartheta d\tau.
\end{align*}
The term $1+\left( | k| /\tau\right) $ in the last double integral is bounded between 1 and 2. Hence,
\begin{align*}
&\int_{ | x| +|k-x| \leq3| k| }\left( 1+\left| Y-| x| -| k-x|\right| \right) ^{-\beta}| x| ^{-\alpha}dx\\
& \leq C\int_{0}^{2| k| }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}E\left( | k| /\left( |k| +\tau\right) ,\alpha\right) d\tau,
\end{align*}
where
\[
E(t,\alpha)=2 \int_{0}^{\pi}\left( 1-2t\cos\left( \vartheta\right) +t^{2}\right)\left( 1-t\cos\left( \vartheta\right) \right) ^{\alpha-3} d\vartheta.
\]
When $0<t<1/2$, $E(t,\alpha)\leq C$.
When $1/2\leq t<1,$ the integral over $\pi/2\leq\vartheta\leq\pi$ is bounded independently of $t$, and when $0\leq\vartheta\leq\pi/2$ one has $1-\vartheta^{2}/2\leq\cos\left( \vartheta\right) \leq1-4\vartheta^{2}/\pi^{2}$.
Hence one ends up with the integral
\begin{align*}
&\int_{0}^{\pi/2}\left( 1-2t\left( 1-\vartheta^{2}/2\right) +t^{2}\right) \left( 1-t\left( 1-4\vartheta^{2}/\pi^{2}\right)\right) ^{\alpha-3}d\vartheta\\
& =\int_{0}^{\pi/2}\left( \left( 1-t\right) ^{2}+t\vartheta^{2}\right)\left( 1-t+4t\vartheta^{2}/\pi^{2}\right) ^{\alpha-3}d\vartheta\\
& \leq C\left( 1-t\right) ^{\alpha-1}\int_{0}^{1-t}d\vartheta+C\left( 1-t\right) ^{\alpha-3}\int_{1-t}^{\sqrt{1-t}}\vartheta^{2}d\vartheta+C\int_{\sqrt{1-t}}^{\pi/2}\vartheta^{2\alpha-4}d\vartheta\\
& \leq
\begin{cases}
C(1-t)^{\alpha-3/2} & \text{if }\alpha<3/2,\\
-C\log\left( 1-t\right) & \text{if }\alpha=3/2,\\
C & \text{if }\alpha>3/2.
\end{cases}
\end{align*}
Hence
\begin{align*}
&| k| ^{-\alpha} \int_{ | x| +|k-x| \leq3| k| }| x| ^{-\alpha}\left( 1+\left| Y-|x| -| k-x| \right| \right) ^{-\beta}dx\\
\leq & C| k| ^{-\alpha-\min\{0,\alpha-3/2\}}\\
&\times \int_{0}^{2| k| }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha+\min\{0,\alpha-3/2\}}\log^{\delta_{\alpha,3/2}}((|k|+\tau)/\tau)d\tau.
\end{align*}
Similarly,
\begin{align*}
& \int_{ | x| +|k-x| \geq3| k| }| x| ^{-2\alpha}\left( 1+\left| Y-|x| -| k-x| \right| \right) ^{-\beta}dx\\
\leq & C\int_{2| k| }^{+\infty}\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-2\alpha}d\tau.
\end{align*}
Assume $3/2<\alpha<2$ and $\alpha>2-\beta$. Then applying Lemma \ref{lm_crucial} (1), we obtain
\begin{align*}
&
| k| ^{-\alpha}\int_{0}^{2| k| }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}d\tau
+ \int_{2| k| }^{+\infty}\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-2\alpha}d\tau\\
&\leq | k| ^{-\alpha}\int_{0}^{+\infty }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}d\tau\\
&\leq C |k|^{-\alpha}(1+|Y-|k||)^{2-\alpha-\min\{1,\beta\}}\log^{\delta_{1,\beta}}(2+|Y-|k||).
\end{align*}
Assume now $\alpha=3/2$ and $\beta>1/2$. Then
\begin{align*}
&
| k| ^{-\alpha}\int_{0}^{2| k| }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}\log\left((|k|+\tau)/\tau\right)d\tau\\
&+ \int_{2| k| }^{+\infty}\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-2\alpha}d\tau\\
&\leq | k| ^{-\alpha}\int_{0}^{+\infty }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}\log\left((|k|+\tau)/\tau\right)d\tau\\
&\leq | k| ^{-\alpha}\log\left(1+|k|\right)\int_{0}^{+\infty }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}d\tau\\
&+| k| ^{-\alpha}\int_{0}^{+\infty }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}\log\left(1+1/\tau\right)d\tau.
\end{align*}
Applying Lemma \ref{lm_crucial} (1), we obtain
\begin{align*}
&| k| ^{-\alpha}\log\left(|k|\right)\int_{0}^{+\infty }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}d\tau\\
&\leq C| k| ^{-\alpha}\log\left(|k|\right)(1+|Y-|k||)^{2-\alpha-\min\{\beta,1\}}\log^{\delta_{\beta,1}}(2+|Y-|k||).
\end{align*}
Applying Lemma \ref{lm_crucial} (1) and (2), we obtain
\begin{align*}
&| k| ^{-\alpha}\int_{0}^{+\infty }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}\log\left(1+1/\tau\right)d\tau\\
&\leq | k| ^{-\alpha}\int_{0}^{+\infty }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}d\tau\\
& +| k| ^{-\alpha}\int_{0}^{1 }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}(-\log\left(\tau\right))d\tau\\
&\leq C| k| ^{-\alpha}(1+|Y-|k||)^{2-\alpha-\min\{\beta,1\}}\log^{\delta_{\beta,1}}(2+|Y-|k||).
\end{align*}
If $3/2<\alpha<2$ and $\alpha\leq 2-\beta$, then applying Lemma \ref{lm_crucial} (3), we obtain
\begin{align*}
| k| ^{-\alpha}\int_{0}^{2| k| }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}d\tau\leq C | k| ^{2-2\alpha-\beta}\log^{\delta_{2-\alpha,\beta}}(|k|),
\end{align*}
and, by Lemma \ref{lm_crucial} (4),
\begin{align*}
\int_{2| k| }^{+\infty}\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-2\alpha}d\tau\leq C | k| ^{2-2\alpha-\beta}.
\end{align*}
Assume $\alpha=3/2$ and $\beta=1/2$. Then, by Lemma \ref{lm_crucial} (2), (3) and (4),
\begin{align*}
&
| k| ^{-\alpha}\int_{0}^{2| k| }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}\log\left((|k|+\tau)/\tau\right)d\tau\\
&+ \int_{2| k| }^{+\infty}\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-2\alpha}d\tau\\
&\leq C| k| ^{-\alpha}\log(|k|)\int_{0}^{2| k| }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}d\tau\\
&+C| k| ^{-\alpha}\int_{0}^{1 }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-\alpha}(-\log\left(\tau\right))d\tau\\
&+ \int_{2| k| }^{+\infty}\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-2\alpha}d\tau\\
&\leq C|k|^{-\alpha}\log^2(|k|).
\end{align*}
Assume $3/4<\alpha< 3/2$ and $\beta>1/2$. Then applying Lemma \ref{lm_crucial} (1), we obtain
\begin{align*}
&
| k| ^{-2\alpha+3/2}\int_{0}^{2| k| }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{-1/2}d\tau\\
&+ \int_{2| k| }^{+\infty}\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-2\alpha}d\tau\\
&\leq | k| ^{-2\alpha+3/2}\int_{0}^{+\infty }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{-1/2}d\tau\\
&\leq C |k|^{-2\alpha+3/2}(1+|Y-|k||)^{1/2-\min\{1,\beta\}}\log^{\delta_{1,\beta}}(2+|Y-|k||).
\end{align*}
If $3/4<\alpha<3/2$ and $0\leq\beta\leq 1/2$, then applying Lemma \ref{lm_crucial} (3), we obtain
\begin{align*}
| k| ^{-2\alpha+3/2}\int_{0}^{2| k| }\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{-1/2}d\tau\leq C | k| ^{2-2\alpha-\beta},
\end{align*}
and, by Lemma \ref{lm_crucial} (4),
\begin{align*}
\int_{2| k| }^{+\infty}\left( 1+\left| Y-| k| -\tau\right| \right)^{-\beta}\tau^{1-2\alpha}d\tau\leq C | k| ^{2-2\alpha-\beta}.
\end{align*}
Finally, (5) follows by a simple rearrangement inequality (see \cite[Theorem 3.4]{LL}),
\[
\int_{|x|,|x-k|>1}|x|^{-\alpha}|x-k|^{-\alpha}dx
\le \int_{|x|>1}|x|^{-2\alpha}dx=\pi(\alpha-1)^{-1}.
\]
\endproof
The next two lemmas are the counterpart of Lemmas \ref{lm_p=4} and \ref{lm_p=6} in the case of the ellipse.
The minimal value $z_4$ of $\mathrm{Re}(z)$ for which the $L^4$ norm of $\Phi(\delta,z,\cdot,\cdot)$
is proved to be bounded is lowered from $\max\{(6-\beta)/4,11/8\}$ to $\max\{(6-\beta)/4,5/4\}$. Similarly, $z_6$
is lowered from $\max\{(10-\beta)/6,8/5\}$ to $\max\{(10-\beta)/6,3/2\}$.
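In particular, the gain is effective precisely for $\beta>1/2$ in the case of $z_4$, since $(6-\beta)/4\geq11/8$ if and only if $\beta\leq1/2$, and precisely for $\beta>2/5$ in the case of $z_6$, since $(10-\beta)/6\geq8/5$ if and only if $\beta\leq2/5$.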
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.6]{kappa_ellisse_d=2.eps}
\end{center}
\caption{The minimal values of $\alpha=\mathrm{Re}(z)$ for the ellipse in dimension $2$. The values $z_4$ (bottom) and $z_6$ (top). Compare with Figure \ref{F3}{\footnotesize(A)}.}
\end{figure}
\begin{lemma} \label{lm_p=4_disc}
Let $d=2$ and $\beta\ge 0$. Set $z_4=\max\{(6-\beta)/4,5/4\}$.
If $\mathrm{Re}(z) \geq z_4$, then there exists $C>0$ such that for every $R\geq 1$ and $0<\delta<1/2$,
\begin{align*}
\int_{\mathbb{R}} \int_{\mathbb{T}^{2}}|\Phi\left( \delta,z,r,x\right) | ^{4}dxd\mu(r-R)
\leq
\begin{cases}
C & \text{if }\mathrm{Re}\left( z\right) >z_4,\\
C\log\left( 1/\delta\right) & \text{if }\mathrm{Re}\left( z\right)=z_4 \text{ and }
\beta\neq 1,\\
C\log^2\left( 1/\delta\right) & \text{if }\mathrm{Re}\left( z\right)=z_4 \text { and }
\beta= 1.
\end{cases}
\end{align*}
\end{lemma}
\proof
Call $\alpha=\mathrm{Re}\left( z\right)$. By Lemma \ref{lm_6_integral_ellipsoid} with $N=2$, it suffices to estimate
\begin{align*}
& \int_{\mathbb{R}^{2}} \left( 1+\delta| k| \right)^{-\lambda} \int_{|m| ,| k-m| >1}\left( 1+\delta| m| \right)^{-\lambda}|m|^{-\alpha}|k-m|^{-\alpha}\\
& \times \int_{|n|,| k-n| >1}\left( 1+\delta| n| \right)^{-\lambda}|n|^{-\alpha}|k-n|^{-\alpha}\\
&\times\left( 1+| |m| +|k-m| -|n|-|k-n| |\right) ^{-\beta}dndm dk.
\end{align*}
Split $\mathbb{R}^{2}$ as $\left\{|k| \leq2\right\} \cup\left\{ | k|
\geq2\right\} $. By Lemma \ref{LM_Integrale} (5), the integral over the disc $\left\{ | k| \leq2\right\} $ is bounded by
\[
\int_{ | k| \leq2 }\left(\int_{|m|,|k-m|>1}| m| ^{-\alpha}| k-m| ^{-\alpha}dm\right) ^{2}dk
\leq C.
\]
Consider now the case $\left\{ | k| \geq2\right\} $. Assume $\beta>1/2$. Apply Lemma \ref{LM_Integrale} (4) to the integral with respect to $n$, with $Y$ replaced with $|m|+|k-m|$:
\begin{align*}
& \int_{|n|,| k-n| >1}|n|^{-\alpha}|k-n|^{-\alpha} \left( 1+| |m| +|k-m| -|n|-|k-n| |\right) ^{-\beta}dn\\
& \leq C|k|^{3/2-2\alpha}(1+||m|+|k-m|-|k||)^{1/2-\min\{1,\beta\}}\\
&\times\log^{\delta_{1,\beta}}(2+||m|+|k-m|-|k||).
\end{align*}
Thus we obtain the integral
\begin{align*}
&|k|^{3/2-2\alpha} \int_{|m|,| k-m| >1}|m|^{-\alpha}|k-m|^{-\alpha} (1+||m|+|k-m|-|k||)^{1/2-\min\{1,\beta\}}\\
&\times\log^{\delta_{1,\beta}}(2+||m|+|k-m|-|k||)dm.
\end{align*}
If $\beta=1$ and $\alpha>5/4$, the logarithm can be removed as long as one replaces $\alpha$ with a slightly smaller value greater than $5/4$. Therefore, if $(\alpha,\beta)\neq(5/4,1)$, then we only need to estimate
\begin{align*}
&|k|^{3/2-2\alpha} \int_{|m|,| k-m| >1}|m|^{-\alpha}|k-m|^{-\alpha} (1+||m|+|k-m|-|k||)^{1/2-\min\{1,\beta\}}dm\\
&\leq C|k|^{4-4\alpha-\min\{1,\beta\}},
\end{align*}
where we have applied Lemma \ref{LM_Integrale} (4).
Finally, the integral over $\left\{ | k| \geq2\right\} $ gives
\begin{align*}
&\int_{\substack{ | k| \geq2} }(1+\delta|k|)^{-\lambda}|k| ^{4-4\alpha-\min\{1,\beta\}}dk\\
&\leq
\begin{cases}
C & \text{ if } \alpha>z_4,\\
C\log(1/\delta) & \text{ if } \alpha=z_4 \text{ and } \beta\neq1.
\end{cases}
\end{align*}
Assume $0\leq\beta\leq1/2$. Apply Lemma \ref{LM_Integrale} (4) to the integral with respect to $n$, with $Y$ replaced with $|m|+|k-m|$:
\begin{align*}
\int_{|n|,| k-n| >1}|n|^{-\alpha}|k-n|^{-\alpha} \left( 1+| |m| +|k-m| -|n|-|k-n| |\right) ^{-\beta}dn \leq C|k|^{2-2\alpha-\beta}.
\end{align*}
Thus we obtain the integral
\begin{align*}
|k|^{2-2\alpha-\beta} \int_{|m|,| k-m| >1}|m|^{-\alpha}|k-m|^{-\alpha}dm
\leq C|k|^{4-4\alpha-\beta}.
\end{align*}
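Here the last inequality is the standard estimate
$$\int_{\mathbb{R}^{2}}| m| ^{-\alpha}| k-m| ^{-\alpha}dm=C| k| ^{2-2\alpha},$$
which follows by the change of variables $m=| k| u$ and is finite for $1<\alpha<2$, the relevant range here.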
Finally, the integral over $\left\{ | k| \geq2\right\} $ gives
\begin{align*}
\int_{\substack{ | k| \geq2} }(1+\delta|k|)^{-\lambda}|k| ^{4-4\alpha-\beta}dk
\leq
\begin{cases}
C & \text{ if } \alpha>z_4,\\
C\log(1/\delta) & \text{ if } \alpha=z_4.
\end{cases}
\end{align*}
It remains to study the case $\alpha=5/4$ and $\beta=1$. By Lemma \ref{LM_Integrale} (4),
\begin{align*}
& \int_{\mathbb{R}^{2}} \left( 1+\delta| k| \right)^{-\lambda} \int_{|m| ,| k-m| >1}\left( 1+\delta| m| \right)^{-\lambda}|m|^{-\alpha}|k-m|^{-\alpha}\\
& \times \int_{|n|,| k-n| >1}|n|^{-\alpha}|k-n|^{-\alpha} \left( 1+| |m| +|k-m| -|n|-|k-n| |\right) ^{-\beta}dndm dk\\
&\leq
C\int_{\mathbb{R}^{2}} \left( 1+\delta| k| \right)^{-\lambda} |k|^{-1}\int_{|m| ,| k-m| >1}\left( 1+\delta| m| \right)^{-\lambda}|m|^{-\alpha}|k-m|^{-\alpha}\\
& \times(1+||m|+|k-m|-|k||)^{-1/2}\log(2+||m|+|k-m|-|k||)dmdk.
\end{align*}
Since there is a $C>0$ such that for $0<\delta<1/2$
\[
(1+\delta|m|)^{-\lambda}\log(2+2|m|)\leq C\log(1/\delta),
\]
and $2+||m|+|k-m|-|k||\leq 2+2|m|$, we obtain
\begin{align*}
&\int_{\mathbb{R}^{2}} \left( 1+\delta| k| \right)^{-\lambda} |k|^{-1}\int_{|m| ,| k-m| >1}\left( 1+\delta| m| \right)^{-\lambda}|m|^{-\alpha}|k-m|^{-\alpha}\\
& \times(1+||m|+|k-m|-|k||)^{-1/2}\log(2+||m|+|k-m|-|k||)dmdk\\
& \leq C\log(1/\delta)\int_{\mathbb{R}^{2}} \left( 1+\delta| k| \right)^{-\lambda} |k|^{-1}\int_{|m| ,| k-m| >1}|m|^{-\alpha}|k-m|^{-\alpha}\\
& \times(1+||m|+|k-m|-|k||)^{-1/2}dmdk\\
& \leq C\log(1/\delta)\int_{\mathbb{R}^{2}} \left( 1+\delta| k| \right)^{-\lambda} |k|^{-2}dk\leq C\log^2(1/\delta),
\end{align*}
by Lemma \ref{LM_Integrale} (4).
\endproof
\begin{lemma} \label{lm_p=6_disc}
Let $d=2$ and $\beta\ge 0$. Set $z_6=\max\{(10-\beta)/6,3/2\}$.
If $\mathrm{Re}(z) \geq z_6$, then there exists $C>0$ such that for every $R\geq 1$ and $0<\delta<1/2$,
\begin{align*}
\int_{\mathbb{R}} \int_{\mathbb{T}^{2}}|\Phi\left( \delta,z,r,x\right) | ^{6}dxd\mu(r-R)
\leq
\begin{cases}
C & \text{if }\mathrm{Re}\left( z\right) >z_6,\\
C\log\left( 1/\delta\right) & \text{if }\mathrm{Re}\left( z\right)=z_6 \text{ and }
0\leq\beta<2/5,\\
C\log^2\left( 1/\delta\right) & \text{if }\mathrm{Re}\left( z\right)=z_6 \text { and }
\beta= 2/5,\\
C\log\left( 1/\delta\right) & \text{if }\mathrm{Re}\left( z\right)=z_6 \text{ and }
2/5<\beta<1,\\
C\log^5\left( 1/\delta\right) & \text{if }\mathrm{Re}\left( z\right)=z_6 \text{ and }
\beta=1,\\
C\log^4\left( 1/\delta\right) & \text{if }\mathrm{Re}\left( z\right)=z_6 \text{ and }
\beta>1.
\end{cases}
\end{align*}
\end{lemma}
\proof
Call $\alpha=\mathrm{Re}\left( z\right)$. By Lemma \ref{lm_6_integral_ellipsoid} with $N=3$, it suffices to estimate
\begin{align*}
& \int_{\mathbb{R}^{2}} \left( 1+\delta| k| \right)^{-3\lambda} \\
&\times\iint\limits_{\substack{|m_1| ,|m_2|,\\|k-m_1-m_2| >1}}\left( 1+\delta|m_1| \right)^{-\lambda}\left( 1+\delta| m_2| \right)^{-\lambda}|m_1|^{-\alpha}|m_2|^{-\alpha}|k-m_1-m_2|^{-\alpha}\\
& \times \iint\limits_{\substack{|n_1| ,|n_2|,\\|k-n_1-n_2| >1}}(1+\delta|n_1|)^{-2\lambda}|n_1|^{-\alpha}|n_2|^{-\alpha}|k-n_1-n_2|^{-\alpha}\\
& \times\left( 1+| |m_1| +|m_2|+|k-m_1-m_2| -|n_1|-|n_2|-|k-n_1-n_2| |\right) ^{-\beta}\\
& \times dn_2 dn_1 dm_2 dm_1 dk.
\end{align*}
Split $\mathbb{R}^{2}$ as $\left\{|k| \leq2\right\} \cup\left\{ | k|
\geq2\right\} $. The integral over the disc $\left\{ | k| \leq2\right\} $ is bounded by
\begin{align*}
& \int_{ | k| \leq2 }\left(\int_{|m_1|>1}| m_1| ^{-\alpha}\int_{\mathbb{R}^{2}}| m_2| ^{-\alpha}| k-m_1-m_2| ^{-\alpha}dm_2dm_1\right) ^{2}dk\\
& =C\int_{ | k| \leq2 }\left(\int_{|m_1|>1}| m_1| ^{-\alpha}| k-m_1| ^{2-2\alpha}dm_1\right) ^{2}dk\\
& =C\int_{ | k| \leq1/2 }\left(\int_{|m_1|>1}| m_1| ^{2-3\alpha}dm_1\right) ^{2}dk
+C\int_{ 1/2\leq | k| \leq2 }|k| ^{8-6\alpha}dk\leq C,
\end{align*}
since $\alpha\ge z_6>4/3$.
Consider now the case $\left\{ | k| \geq2\right\} $. Assume $\alpha>2-\beta$ and $\alpha>3/2$. When $|k-n_1|\geq 2$, apply Lemma \ref{LM_Integrale} (1)
to the integral with respect to $n_2$ with $k$ replaced with $k-n_1$ and
$Y$ replaced with $|m_1|+|m_2|+|k-m_1-m_2|-|n_1|$, and when $|k-n_1|\leq2$ apply Lemma \ref{LM_Integrale} (5) to the same integral,
\begin{align*}
& \underset{|n_1| ,|n_2|,| k-n_1-n_2| >1}{\int\int}|n_1|^{-\alpha}|n_2|^{-\alpha}|k-n_1-n_2|^{-\alpha}\\
&\times \left( 1+| |m_1| +|m_2|+|k-m_1-m_2| -|n_1|-|n_2|-|k-n_1-n_2| |\right) ^{-\beta}
dn_2 dn_1\\
& \leq C\int_{\mathbb R^2}|n_1|^{-\alpha}|k-n_1|^{-\alpha}\\
& \times\left( 1+ ||m_1| +|m_2|+|k-m_1-m_2| -|n_1|-|k-n_1| |\right) ^{2-\alpha-\min\{1,\beta\}}\\
&\times \log^{\delta_{\beta,1}}(2+ ||m_1| +|m_2|+|k-m_1-m_2| -|n_1|-|k-n_1| |)
dn_1\\
&+C\int_{|k-n_1|<2}|n_1|^{-\alpha}dn_1.
\end{align*}
The last integral is bounded by a constant times $|k|^{-\alpha}$.
If $\beta=1$, the logarithm can be removed as long as one replaces $\alpha$ with a slightly smaller value greater than $3/2$. Thus we only need to estimate
\begin{align*}
& \int_{\mathbb R^2}|n_1|^{-\alpha}|k-n_1|^{-\alpha}\\
& \times\left( 1+ ||m_1| +|m_2|+|k-m_1-m_2| -|n_1|-|k-n_1| |\right) ^{2-\alpha-\min\{1,\beta\}}
dn_1\\
& \leq C|k|^{-\alpha}|k|^{\max\{0,4-2\alpha-\min\{1,\beta\}\}}\log^{\delta_{\alpha,2-\beta/2}}(|k|),
\end{align*}
where we have applied Lemma \ref{LM_Integrale} (1) and (2).
Moreover,
\begin{align*}
&\int_{\mathbb{R}^{2}}| m_1| ^{-\alpha}\int_{\mathbb{R}^{2}}| m_2| ^{-\alpha}| k-m_1-m_2| ^{-\alpha}dm_2dm_1\\
&=C\int_{\mathbb{R}^{2}}| m_1| ^{-\alpha}|k-m_1| ^{2-2\alpha}dm_1=C| k| ^{4-3\alpha}.
\end{align*}
Finally, the integral over $\left\{ | k| \geq2\right\} $ gives
\begin{align*}
&\int_{\substack{ | k| \geq2} }(1+\delta|k|)^{-3\lambda}|k| ^{4-4\alpha+\max\{0,4-2\alpha-\min\{1,\beta\}\}}\log^{\delta_{\alpha,2-\beta/2}}(|k|)dk\\
&\leq
\begin{cases}
C & \text{ if } \alpha>(10-\beta)/6,\\
C\log(1/\delta) & \text{ if } \alpha=(10-\beta)/6 \text{ and } 2/5<\beta<1.
\end{cases}
\end{align*}
Assume now $3/2<\alpha\leq2-\beta$ so that $0\leq\beta<1/2$. Apply Lemma \ref{LM_Integrale} (2) and (5) to the integral with respect to $n_2$ with $k$ replaced with $k-n_1$,
\begin{align*}
& \underset{|n_1| ,|n_2|,| k-n_1-n_2| >1}{\int\int}|n_1|^{-\alpha}|n_2|^{-\alpha}|k-n_1-n_2|^{-\alpha}\\
&\times \left( 1+| |m_1| +|m_2|+|k-m_1-m_2| -|n_1|-|n_2|-|k-n_1-n_2| |\right) ^{-\beta}
dn_2 dn_1\\
& \leq C\underset{\mathbb R^2}{\int}|n_1|^{-\alpha}|k-n_1|^{2-2\alpha-\beta} \log^{\delta_{2-\alpha,\beta}}(2+|k-n_1|)\,dn_1\\
& \leq C |k|^{4-3\alpha-\beta}\log^{\delta_{2-\alpha,\beta}}(|k|).
\end{align*}
Moreover,
\begin{align*}
&\int_{\mathbb{R}^{2}}| m_1| ^{-\alpha}\int_{\mathbb{R}^{2}}| m_2| ^{-\alpha}| k-m_1-m_2| ^{-\alpha}dm_2dm_1\\
&=C\int_{\mathbb{R}^{2}}| m_1| ^{-\alpha}|k-m_1| ^{2-2\alpha}dm_1=C| k| ^{4-3\alpha}.
\end{align*}
Finally, the integral over $\left\{ | k| \geq2\right\} $ gives
\begin{align*}
&\int_{\substack{ | k| \geq2} }(1+\delta|k|)^{-3\lambda}|k| ^{8-6\alpha-\beta}\log^{\delta_{\alpha,2-\beta}}(|k|)dk\\
&\leq
\begin{cases}
C & \text{ if } \alpha>(10-\beta)/6,\\
C\log(1/\delta) & \text{ if } \alpha=(10-\beta)/6 \text{ and } 0\leq\beta<2/5,\\
C\log^2(1/\delta) & \text{ if } \alpha=(10-\beta)/6 \text{ and } \beta=2/5.
\end{cases}
\end{align*}
It remains to study the case $\alpha=3/2$ and $\beta\ge1$.
Apply Lemma \ref{LM_Integrale} (1) and (5) to the integral with respect to $n_2$ with $k$ replaced with $k-n_1$,
\begin{align*}
& \left( 1+\delta| k| \right)^{-3\lambda}
\left( 1+\delta|m_1| \right)^{-\lambda}\left( 1+\delta| m_2| \right)^{-\lambda}|m_1|^{-\alpha}|m_2|^{-\alpha}|k-m_1-m_2|^{-\alpha}\\
& \times \underset{|n_1| ,|n_2|,| k-n_1-n_2| >1}{\int\int}(1+\delta|n_1|)^{-2\lambda}|n_1|^{-\alpha}|n_2|^{-\alpha}|k-n_1-n_2|^{-\alpha}\\
& \times\left( 1+| |m_1| +|m_2|+|k-m_1-m_2| -|n_1|-|n_2|-|k-n_1-n_2| |\right) ^{-\beta} dn_2 dn_1 \\
& \leq C(1+\delta|k|)^{-3\lambda}
\left( 1+\delta|m_1| \right)^{-\lambda}\left( 1+\delta| m_2| \right)^{-\lambda}|m_1|^{-\alpha}|m_2|^{-\alpha}|k-m_1-m_2|^{-\alpha}\\
&\times\int_{\mathbb R^2}(1+\delta|n_1|)^{-2\lambda}|n_1|^{-\alpha}|k-n_1|^{-\alpha}\log(2+|k-n_1|)\\
& \times\left( 1+ ||m_1| +|m_2|+|k-m_1-m_2| -|n_1|-|k-n_1| |\right) ^{-1/2}\\
&\times \log^{\delta_{\beta,1}}(2+ ||m_1| +|m_2|+|k-m_1-m_2| -|n_1|-|k-n_1| |)
dn_1 \\
&+ C(1+\delta|k|)^{-3\lambda}
|m_1|^{-\alpha}|m_2|^{-\alpha}|k-m_1-m_2|^{-\alpha}
\int_{|n_1-k|<2}|n_1|^{-\alpha}
dn_1.
\end{align*}
Since there is a $C>0$ such that for $0<\delta<1/2$
\begin{align*}
&(1+\delta|k|)^{-\lambda}(1+\delta|m_1|)^{-\lambda}(1+\delta|m_2|)^{-\lambda} (1+\delta|n_1|)^{-\lambda}\\&\times\log(2+|k|+|m_1|+|m_2|+|n_1|)\\
&\leq C\log(1/\delta),
\end{align*}
we obtain
\begin{align*}
&C\log^{1+\delta_{1,\beta}}(1/\delta) (1+\delta|k|)^{-\lambda}
|m_1|^{-\alpha}|m_2|^{-\alpha}|k-m_1-m_2|^{-\alpha}\\
&\times\int_{\mathbb R^2}|n_1|^{-\alpha}|k-n_1|^{-\alpha}\\
&\times\left( 1+ ||m_1| +|m_2|+|k-m_1-m_2| -|n_1|-|k-n_1| |\right) ^{-1/2}
dn_1 \\
&+ C(1+\delta|k|)^{-3\lambda}
|m_1|^{-\alpha}|m_2|^{-\alpha}|k-m_1-m_2|^{-\alpha}\int_{|n_1-k|<2}|n_1|^{-\alpha}
dn_1.
\end{align*}
By Lemma \ref{LM_Integrale} (3), the above integrals are bounded by
\begin{align*}
&C\log^{1+\delta_{1,\beta}}(1/\delta) (1+\delta|k|)^{-\lambda}|k|^{-\alpha}\log^2{|k|}
|m_1|^{-\alpha}|m_2|^{-\alpha}|k-m_1-m_2|^{-\alpha}.
\end{align*}
Moreover,
\begin{align*}
&\int_{\mathbb{R}^{2}}| m_1| ^{-\alpha}\int_{\mathbb{R}^{2}}| m_2| ^{-\alpha}| k-m_1-m_2| ^{-\alpha}dm_2dm_1\\
&=C\int_{\mathbb{R}^{2}}| m_1| ^{-\alpha}|k-m_1| ^{2-2\alpha}dm_1=C| k| ^{4-3\alpha}.
\end{align*}
Finally, the integral over $\left\{ | k| \geq2\right\} $ gives
\begin{align*}
\log^{1+\delta_{1,\beta}}(1/\delta)\int_{\substack{ | k| \geq2} }(1+\delta|k|)^{-\lambda}|k| ^{4-4\alpha}\log^2{|k|}dk
\leq
C\log^{4+\delta_{1,\beta}}(1/\delta).
\end{align*}
\endproof
\begin{lemma} \label{lm_Interpolation_sphere_d=2}
The notation is as in the previous lemmas and let $d=2$.
\noindent If $0\leq\beta<2/5$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq 3/2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<4+2\beta,\\
C\log^{1/p}\left( 1/\delta\right) & \text{if }p=4+2\beta.
\end{cases}
\end{align*}
If $\beta=2/5$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq 3/2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<4+2\beta,\\
C\log^{1/p+1/12}\left( 1/\delta\right) & \text{if }p=4+2\beta.
\end{cases}
\end{align*}
If $2/5<\beta<1$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq 3/2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<4+2\beta,\\
C\log^{1/p}\left( 1/\delta\right) & \text{if }p=4+2\beta.
\end{cases}
\end{align*}
If $\beta=1$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq 3/2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<6,\\
C\log^{1/p+2/3}\left( 1/\delta\right) & \text{if }p=6.
\end{cases}
\end{align*}
If $\beta>1$ then there exists a constant $C$ such that for every $\mathrm{Re}\left( z\right) \geq 3/2$, for every $R\ge 1$ and $0<\delta<1/2$,
\begin{align*}
\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,z,r,x\right) | ^{p}dxd\mu(r-R)\right\}^{1/p}
\leq
\begin{cases}
C & \text{if }p<6,\\
C\log^{1/p+1/2}\left( 1/\delta\right) & \text{if }p=6.
\end{cases}
\end{align*}
\end{lemma}
\begin{proof}
Again, it is enough to prove the result for $z=3/2$. The case $\beta=0$ is contained in Lemma \ref{lm_p=4_disc}. If $\beta>0$
the proof follows by complex interpolation with $a=4$, $b=6$, $A=z_4+\varepsilon$, $B=z_6+\varepsilon$, with $\varepsilon\geq0$. The norms $H$ and $K$ are given in Lemma \ref{lm_p=4_disc} and Lemma \ref{lm_p=6_disc}.
Set
$$\frac{3}{2}=\left( 1-\vartheta\right) A+\vartheta B.$$
This gives
$$
\vartheta=
\begin{cases}
\frac{3(\beta-4\varepsilon)}{2+\beta} & \text{ if } \beta<1,\\
1-4\varepsilon & \text{ if } \beta\geq1,
\end{cases}
$$
and
$$\frac{1}{p}=\frac{\left( 1-\vartheta\right) }{a}+\frac{\vartheta}{b}=
\begin{cases}
\frac{1+2\varepsilon}{2(2+\beta)} & \text{ if } \beta<1,\\
\frac{1+2\varepsilon}{6} & \text{ if } \beta\geq1.
\end{cases}
$$
When $\varepsilon>0$ and $p<\min\{6,4+2\beta\} $,
$$\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,3/2,r,x\right) |^{p}dxd\mu(r-R)\right\} ^{1/p}\leq C.$$
When $\varepsilon=0$ and $p=\min\{6,4+2\beta\} $,
$$\left\{ \int_{\mathbb{R}} \int_{\mathbb{T}^{2}}| \Phi\left( \delta,3/2,r,x\right) |^{p}dxd\mu(r-R)\right\} ^{1/p}\leq C\log^{1/p+\eta(1-\vartheta)/4+\omega\vartheta/6}\left( 1/\delta\right) ,$$
where $\omega=4$ if $\beta=1$, $\omega=3$ if $\beta>1$, $\omega=1$ if $\beta=2/5$ and $\omega=0$ in the other cases, while
$\eta=1$ if $\beta=1$ and $\eta=0$ in the other cases.
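As a consistency check, the logarithmic exponents in the statement follow directly from these formulas. At $\varepsilon=0$ and $\beta=2/5$,
\begin{align*}
\vartheta=\frac{3\beta}{2+\beta}=\frac{1}{2},
\qquad
\frac{1}{p}+\eta\,\frac{1-\vartheta}{4}+\omega\,\frac{\vartheta}{6}
=\frac{1}{p}+0+1\cdot\frac{1}{12}
=\frac{1}{p}+\frac{1}{12},
\end{align*}
while at $\beta=1$ one has $\vartheta=1$, $\eta=1$, $\omega=4$, so that the exponent is $\frac{1}{p}+\frac{1-\vartheta}{4}+\frac{4}{6}=\frac{1}{p}+\frac{2}{3}$, in agreement with the cases displayed in the statement.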
\end{proof}
The proof of Theorem \ref{thm_d=2_ellipse} can now be concluded as in the case of the general convex set with smooth boundary with strictly positive curvature.
\bibliographystyle{amsplain}
\section{Introduction \label{sec:formalism}}
Transverse Momentum Dependent (TMD) Parton Distributions (PDFs) and Fragmentation functions (FFs) are fundamental ingredients for the study of the inner structure of matter, as they encode how fundamental constituents bind into hadrons and shed light on the hadronization mechanism that, thanks to the confinement properties of QCD, leads to the formation of hadronic states.
Their pivotal role in the investigation of the 3D structure of nucleons has motivated a huge effort in terms of experimental facilities as well as theoretical and phenomenological studies.
Unpolarized TMD PDFs are relatively well known objects, as their extraction can rely on combined analysis of different processes, like SIDIS and Drell-Yan scattering~\cite{Anselmino:2013vqa,Signori:2013mda,Bacchetta:2017gcc, Bacchetta:2019sam, Echevarria:2016scs}, for which dedicated TMD factorization theorems have been devised~\cite{Collins:2011zzd,Aybat:2011zv,Echevarria:2012js,Idilbi:2004vb}.
On the contrary TMD FFs, their final state counterparts, are rather less known.
In fact, the study of unpolarized TMD FFs is currently based on the phenomenological analysis of the sole SIDIS, as data for $e^+e^-$ annihilations into two hadrons, the ideal framework for their determination, are not yet available.
To be precise, data on $e^+e^- \to h_1 h_2 X$ processes are only available for polarized TMD FFs, like the pion and kaon Collins function, or for the $\Lambda$ polarizing fragmentation function, for which several phenomenological studies have been performed; for example, some recent analyses can be found in Refs.~\cite{Anselmino:2013vqa,Anselmino:2015sxa,Anselmino:2015fty,DAlesio:2020wjq}.
Moreover, extractions relying on SIDIS cross sections are inevitably affected by the strong
correlation between the TMD PDF and the TMD FF which appear convoluted in the measured cross section.
This issue could be circumvented by exploiting processes which involve only one TMD FF. In this regard, the thrust distribution of $e^+e^- \to h\,X$, sensitive to the transverse momentum of the detected hadron with respect to the thrust axis, as recently measured by the BELLE collaboration~\cite{Seidl:2019jei}, is a very promising candidate, as it represents a process in which the TMD effects are traced back to a single hadron observed in the final state.
We note that some phenomenological analyses have been performed on $e^+e^-\to h X$ data~\cite{Boglione:2017jlh,Soleymaninia:2019jqo,Modarres:2021ffg}, where some subsets of TASSO~\cite{Braunschweig:1990yd}, PLUTO~\cite{PLUTO:1983pce}, MARKII~\cite{Petersen:1987bq}, AMY~\cite{Bhattacharjee:1990iq}, CELLO~\cite{CELLO:1982fzq} data and the more recent BELLE~\cite{Seidl:2019jei} measurements have been considered.
These studies ignored or only partially addressed issues related to universality and factorization properties of $e^+e^-$ annihilations in a single hadron.
From a theory perspective,
in fact, the study of the $e^+e^- \to hX$ process has been very challenging, as
standard TMD factorization techniques~\cite{Collins:2011zzd,Aybat:2011zv,Echevarria:2011epo,Echevarria:2012pw} do not apply.
As discussed in Refs.~\cite{Makris:2020ltr, Boglione:2021wov}, the $2$-jet final state topology of the above process can occur in three different kinematic configurations or ``Regions'', denoted Region $1$, $2$ and $3$ in Refs.~\cite{Boglione:2021wov,Makris:2020ltr}, each corresponding to a different factorization theorem.
These kinematic regions can be defined in terms of the size of the transverse momentum $P_T$ of the hadron observed inside the jet cone.
If the hadron is detected very close to the thrust axis, the structure of the resulting factorization theorem is very similar to the standard TMD factorization, as in this case the soft radiation significantly affects the transverse momentum of the detected hadron. This configuration corresponds to Region 1 and it has recently been investigated for pion~\cite{Kang:2020yqw} and $\Lambda$~\cite{Gamberg:2021iat,DAlesio:2020wjq} production
neglecting the thrust dependence, which is integrated out.
On the other hand, if the hadron is detected very close to the jet boundary, its transverse momentum is large enough to affect directly the measured value of thrust.
This configuration corresponds to Region 3.
The associated factorization theorem involves a Generalized Fragmenting Jet Function (gFJF) rather than a TMD FF, and its treatment goes beyond the realm of TMD physics.
While Regions $1$ and $3$ are rather extreme configurations of the $e^+e^- \to h\,X$ phase space, the ``bulk'' of events will belong to Region $2$,
associated with the detection of hadrons with intermediate values of transverse momenta, neither extremely close to the thrust axis nor too close to the external jet boundaries. Unlike the two kinematic configurations above, the proper theoretical treatment of Region $2$ is still somewhat controversial,
as the two main available approaches on this subject, Ref.~\cite{Makris:2020ltr} and Ref.~\cite{Boglione:2021wov}, do not find total agreement on the final form of the corresponding factorization theorem. In this paper we will follow the factorization scheme devised in Refs.~\cite{Boglione:2020cwn,Boglione:2020auc,Boglione:2021wov}, which offers some clear advantages for the practical implementation of a phenomenological analysis, leaving aside any discussion on the discrepancies between the two formalisms.
These have been addressed in Section $5$ of Ref.~\cite{Boglione:2021wov} and will be discussed extensively in a forthcoming paper \cite{Boglione-Simonelli:2022}.
In Region $2$, soft radiation does not contribute actively to the generation of TMD effects.
This is what makes the standard TMD factorization crucially different from
the factorization mechanism of Region $2$, which shows features of both collinear and TMD factorization. The corresponding cross section can indeed be written as a convolution of a TMD FF with a ``partonic cross section'', encoding the details of thrust dependence. There are, however, two relevant issues that must be carefully taken into account. First of all, the TMD FF appearing in the $e^+e^- \to hX$ factorized cross section of Region $2$ does not coincide with the usual TMD FF appearing in SIDIS cross sections. However, as we will discuss in more detail below, differences between these two TMD definitions are well under control and their universality properties are not undermined~\cite{Boglione:2020cwn}.
Hence, a phenomenological analysis of the thrust distribution of $e^+e^-\to h\,X$ would give access to the genuinely non-perturbative behavior of a TMD FF,
free from any soft radiation effects.
The second issue arises from the proper treatment of the rapidity divergences. Due to the very peculiar interplay between soft and collinear contributions, in Region 2 some of the rapidity divergences are naturally regulated by the thrust, $T$, but those associated with terms which are strictly TMD parts of the cross section need an extra artificial regulator, which is a rapidity cut-off in the Collins factorization formalism~\cite{Collins:2011zzd}. This induces a redundancy, which generates an additional relation between the regulator, the transverse momentum and thrust. Such a relation inevitably spoils the picture in which the cross section factorizes into the convolution of a partonic cross section (encoding the whole $T$ dependence) with a TMD FF (which encapsulates the whole $P_T$ dependence), as both these quantities turn out to depend on the rapidity cut-off. Hence, while the former becomes sensitive to the transverse momentum of the detected hadron, $P_T$, the latter acquires a dependence on thrust, $T$. Moreover, the thrust resummation itself is intertwined with the transverse momentum dependence, making the treatment of the large $T$ behavior highly non-trivial.
A proper phenomenological analysis of Region $2$ must rely on a factorized cross section where the regularization of rapidity divergences is properly taken into account. As usual, all the difficulties encountered in the theoretical treatment get magnified in the phenomenological applications.
In this paper we will adopt some approximations,
in order to simplify the structure of the factorization theorem without altering its main architecture. In particular, for single pion production from $e^+e^-$ annihilation, we refer to the cross section presented in Ref.~\cite{Boglione:2020auc}:
\begin{widetext}
\begin{align}
\label{eq:cross-sect}
& \frac{d \sigma}{dz_h \, dT \, d^2 \vec{P}_T} =
-\sigma_B N_C
\frac{\alpha_S}{4 \pi} C_F
\frac{3 + 8 \log{\tau}}{\tau}
e^{
-\frac{\alpha_S}{4 \pi}\, 3 C_F
\log^2{\tau}
}
\sum_f \, e_f^2 \,
D_{1,\,\pi^{\pm}/f}(z_h,\,{P_T}/{z_h};\,Q,\,\tau\,Q^2) \,,
\end{align}
\end{widetext}
where $z_h$ is the fractional energy of the detected pion, $\tau = 1-T$ and $\sigma_B = {4\pi\alpha^2}/{3Q^2}$ is the Born cross section.
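For orientation, the thrust-dependent prefactor multiplying the flavor sum in Eq.~\eqref{eq:cross-sect} is easy to evaluate numerically. The following sketch (Python; the values of $\alpha_S$ and $Q$ are illustrative choices of ours, not taken from the text) makes its sign structure explicit:

```python
import math

# Illustrative constants (assumptions of this sketch, not fixed by the text):
ALPHA_EM = 1.0 / 137.036   # fine-structure constant
ALPHA_S  = 0.2             # typical strong coupling at Q ~ 10 GeV
N_C, C_F = 3.0, 4.0 / 3.0  # QCD color factors

def sigma_born(Q):
    """Born cross section sigma_B = 4 pi alpha^2 / (3 Q^2)."""
    return 4.0 * math.pi * ALPHA_EM**2 / (3.0 * Q**2)

def thrust_prefactor(tau, Q, alpha_s=ALPHA_S):
    """Factor multiplying sum_f e_f^2 D_1 in the cross section, tau = 1 - T."""
    log_tau = math.log(tau)
    return (-sigma_born(Q) * N_C * alpha_s / (4.0 * math.pi) * C_F
            * (3.0 + 8.0 * log_tau) / tau
            * math.exp(-alpha_s / (4.0 * math.pi) * 3.0 * C_F * log_tau**2))
```

Note that for $\tau < e^{-3/8} \simeq 0.69$ the combination $3+8\log\tau$ is negative, so the prefactor is positive despite the explicit minus sign.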
The unpolarized TMD FF, $D_{1,\,\pi^{\pm}/f}$, is defined in impact parameter space, in terms of the transverse distance $\vec{b}_T$, the Fourier conjugate of $\vec{q}_T \equiv {\vec{P}_T}/{z_h}$. At next-to-leading logarithmic (NLL) accuracy, and at the scales $\mu=Q$ and $\zeta=\tau Q^2$ as in Eq.~\eqref{eq:cross-sect}, it reads~\cite{Boglione:2021wov}:
\begin{widetext}
\begin{align}
&\widetilde{D}_{1,\,h/f}
(z_h,\,b_{\text{T}};\,Q,\,\tau\,Q^2)
=
\notag \\
&\quad
\frac{1}{z_h^2}\bigg(
d_{h/f}(z_h,\,\mu_{b_\star}) +
\frac{\alpha_S(\mu_{b_\star})}{4\pi}
\int_{z_h}^1 \, \frac{dz}{z}\,
\left[
d_{h/f}({z_h}/{z},\,\mu_{b_\star})
\,z^2 \,
\mathcal{C}_{q / q}^{[1]}(z,b_*;\mu_{b_*},\mu_{b_*}^2) +
d_{h/g}({z_h}/{z},\,\mu_{b_\star})
\,z^2 \,
\mathcal{C}_{g / q}^{[1]}(z,b_*;\mu_{b_*},\mu_{b_*}^2)
\right]
\bigg)
\notag \\
&\quad \times
\mbox{exp}
\left\{
\log{\frac{Q}{\mu_{b_\star}}}\,g_1 (\lambda) + g_2(\lambda)
+
\frac{1}{4} \,
\log{\tau} \,
\left[
g_2^{K}(\lambda) +
\frac{1}{\log{\frac{Q}{\mu_{b_\star}}}}\,g_3^{K}(\lambda)
\right]
\right\}
\notag \\
&\quad \times
M_{\text D}(z,b_{\text{T}})
\mbox{ exp} \left\{
-\frac{1}{4} \, g_{\text K}(b_{\text{T}}) \,
\log{\left(\frac{Q^2}{M_H^2}\,\tau\right)}
\right \},
\label{e.tmd_NLL}
\end{align}
\end{widetext}
where the transition from small to large $b_{\text{T}}$ has been treated through the $b_*$-prescription by defining
\begin{align}
\label{eq:bstar}
b_\star \left(b_{\text{T}}\right) = \frac{b_{\text{T}}}{\sqrt{1 + (b_{\text{T}}/b_{\text{max}})^2}},\quad \mu_{b_*}=\frac{2e^{-\gamma_E}}{b_*}
\end{align}
as is usual in the CSS formalism~\cite{Collins:1984kg,Collins:1989gx,Collins:2011zzd}.
Moreover, in order to ensure that integrating the above TMD FF renders the usual collinear FFs (indicated by lower-case $d$ in \eref{tmd_NLL}), we introduce in the $b_{\star}$-prescription a minimum value of $b_{\text{T}}$,
$b_{\text{min}}$, as in Ref.~\cite{Collins:2011zzd}, and replace Eq.~\eqref{eq:bstar} with $b_{\star}\left(\sqrt{b_{\text{T}}^2 + b_{\text{min}}^2}\right)$.
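As a minimal numerical sketch (Python; function names are ours), the $b_\star$-prescription with the $b_{\text{min}}$ modification reads:

```python
import math

GAMMA_E = 0.5772156649015329  # Euler-Mascheroni constant

def b_star(b_T, b_max=1.0, b_min=0.0):
    """b*-prescription above, with the optional b_min modification:
    the argument b_T is replaced by sqrt(b_T^2 + b_min^2)."""
    b_eff = math.sqrt(b_T**2 + b_min**2)
    return b_eff / math.sqrt(1.0 + (b_eff / b_max)**2)

def mu_b_star(b_T, b_max=1.0, b_min=0.0):
    """Scale mu_{b*} = 2 e^{-gamma_E} / b*."""
    return 2.0 * math.exp(-GAMMA_E) / b_star(b_T, b_max, b_min)
```

Since $b_\star$ saturates at $b_{\text{max}}$, the scale $\mu_{b_\star}$ never drops below $2e^{-\gamma_E}/b_{\text{max}} \approx 1.12$ GeV for $b_{\text{max}}=1.0\,\text{GeV}^{-1}$, the choice adopted later in the analysis.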
The first line of \eref{tmd_NLL} embeds the unpolarized TMD FF at short distances and fixed scales $\mu = \mu_{b_\star} \equiv 2e^{-\gamma_E}/b_{\star}$ and $\zeta = \mu_{b_\star}^2$. It is a standard result to express this contribution as an operator product expansion in which the operator basis is given by the collinear FFs and the Wilson coefficients are fully predicted by perturbative QCD.
The detailed expressions of the 1-loop Wilson coefficients are given in Appendix~\ref{app:tmd}.
The second line of \eref{tmd_NLL} describes the perturbative part of the evolution from $\mu = \mu_{b_\star}$ to $\mu = Q$ and from $\zeta = \mu_{b_\star}^2$ to $\zeta = \tau Q^2$. The functions $g_i$, $i=1,2$, and $g^K_j$, $j=2,3$, are required to reach NLL accuracy. They depend on the variable $\lambda = 2\,\beta_0 \,
a_S(Q) \, \log{\frac{Q}{\mu_{b_\star}}}$. For convenience they are reported in Appendix~\ref{app:tmd}.
Finally, the last line of \eref{tmd_NLL} embeds the non-perturbative content of the unpolarized TMD FF, which is encoded in two non-perturbative functions, that must be extracted from experimental data.
The first is the model function $M_{\text D}$, which is the fingerprint of $D_{1,\,\pi^{\pm}/f}$ as it embeds the
genuine large-distance behavior of the TMD. The second is the function $g_{\text K}$, describing the long-distance behavior of the Collins-Soper kernel, accounting for soft recoiling effects.
Notice that a factor $z_h$ is usually included~\cite{Collins:2011zzd} in the logarithm of $g_{\text K}$, which is not present in \eref{tmd_NLL}. This simply corresponds to a different choice for the reference scale of evolution. We choose not to include it in order to have a $g_{\text K}$-factor completely unrelated to the $z_h$ dependence in $b_{\text{T}}$-space.
With respect to the usual definition of TMDs~\cite{Collins:2011zzd,Aybat:2011zv}, or ``square root definition'' as labeled in Ref.~\cite{Boglione:2020cwn}, these two non-perturbative functions are related by the following equations:
\begin{subequations}
\label{eq:tmddef_comparison}
\begin{align}
M_{\text D}^{\text{sqrt}}(z,b_{\text{T}}) &= M_{\text D}(z,b_{\text{T}})\,\sqrt{M_{\text{S}}(b_{\text{T}})},
\label{e.sqrtMD}\\
g_{\text K}^{\text{sqrt}}(b_{\text{T}}) &= \frac{1}{2} g_{\text K}(b_{\text{T}}),
\label{e.sqrtgK}
\end{align}
\end{subequations}
where $M_{\text{S}}$ is the soft model introduced in Ref.~\cite{Boglione:2020cwn}, describing the non-perturbative content of the soft factor appearing in standard TMD factorization theorems. Notice that while $M_{\text D}$ is different in the two definitions, $g_{\text K}$ is basically the same, apart from a constant factor. Hence, for the extraction of $g_{\text K}$ from Region $2$ of $e^+e^- \to h\,X$ we can test the parametrization already used in past phenomenological extractions, based on standard TMD factorization.
On the side of the TMD model, the comparison of the novel $M_{\text D}$ extracted from Region $2$ of $e^+e^- \to h\,X$ with its ``square root'' counterpart
will shed light on the soft model $M_{\text{S}}(b_{\text{T}})$, the remaining unknown required to perform global phenomenological analyses.
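As an illustration, the dictionary of Eqs.~\eqref{e.sqrtMD}--\eqref{e.sqrtgK} is straightforward to implement once parametric forms are chosen. In the Python sketch below, the Gaussian shapes in $b_{\text{T}}$ are placeholders of our own, not parametrizations advocated in the text:

```python
import math

def M_D(z, b_T, m2=0.2):
    """Toy TMD model: Gaussian in b_T (placeholder parametrization)."""
    return math.exp(-0.25 * m2 * b_T**2)

def M_S(b_T, ms2=0.1):
    """Toy soft model: Gaussian in b_T (placeholder parametrization)."""
    return math.exp(-0.25 * ms2 * b_T**2)

def g_K(b_T, gk=0.3):
    """Toy Collins-Soper kernel model (placeholder parametrization)."""
    return gk * b_T**2

# Translation to the "square root" definition, as in the relations above:
def M_D_sqrt(z, b_T):
    return M_D(z, b_T) * math.sqrt(M_S(b_T))

def g_K_sqrt(b_T):
    return 0.5 * g_K(b_T)
```

Whatever the chosen shapes, the translation layer is the same: only $M_{\text D}$ picks up the soft model, while $g_{\text K}$ is rescaled by a constant factor.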
The cross section in Eq.~\eqref{eq:cross-sect} can be obtained in two different ways. In Ref.~\cite{Boglione:2020auc}
it is achieved by adopting a topology cut-off $\lambda$ that forces the cross section to describe a $2$-jet final state in the limit $\lambda \to 0$. This introduces an additional, artificial constraint which simplifies the computation of the transverse momentum dependent contributions by limiting the values of the transverse momentum to be smaller than the topology cut-off. Moreover, it allows one to set an explicit relation linking the thrust, $T$, to the rapidity cut-off $\zeta$, namely
$\zeta = \tau Q^2$.
Finally, an approximate
resummation of $\lambda$ produces the exponential suppressing factor of Eq.~\eqref{eq:cross-sect}, which replaces the effect of a proper thrust resummation~\cite{Boglione:2020auc}. Alternatively, Eq.~\eqref{eq:cross-sect} can be obtained from the correct factorization theorem of Region $2$ devised in Ref.~\cite{Boglione:2021wov} by making two rather strong approximations. First, the whole transverse momentum dependence encoded \emph{outside} the TMD FF is integrated out up to the typical thrust-collinear scale $\sim \sqrt{\tau} Q$. This allows one to recover the naive picture of a partonic cross section convoluted with a TMD FF. Then, the TMD is equipped with a rapidity cut-off, set to the minimal allowed rapidity for particles belonging to the same jet of the detected hadron, corresponding to $\zeta = \tau Q^2$.
In this way, the underlying correlation between thrust and transverse momentum (due to the peculiar role of the rapidity regulator in Region $2$) is
strongly simplified.
Nevertheless, Eq.~\eqref{eq:cross-sect} embodies the
essence of Region $2$,
as the definition of the TMD FF is not affected by non-perturbative soft effects. Moreover,
it represents the first attempt to account for the interplay between thrust and rapidity regulator.
In this paper, we present the first
extraction of this universal TMD FF from $e^+e^- \to h\,X$ data by the BELLE collaboration~\cite{Seidl:2019jei}, belonging to Region $2$, within the specific framework of Refs.~\cite{Boglione:2020auc,Boglione:2021wov}.
\section{Phenomenology}
In order to use Eq.~\eqref{eq:cross-sect}, complemented by the definition of unpolarized TMD FF in \eref{tmd_NLL},
one must choose
parametric forms for $M_{\text D}$ and $g_{\text K}$, which describe the non-perturbative behavior of the TMD. Such choices are generally affected by the kinematical region of the data under consideration.
This poses a big challenge since the error estimation of factorization theorems in QCD does not allow for sharp boundaries to be drawn. For instance, the small-$q_{\text T}$ cross section in Eq.~\eqref{eq:cross-sect}, with its associated error of $\order{q_{\text T}^2/Q^2}$, does not imply that the formalism should describe the data up to $q_{\text T} \sim Q$, but rather that in this region issues in describing the data are to be expected. With no further indication of how far one can extend the description into the larger $q_{\text T}$ region, one is left with model-dependent phenomenological results as the only indication of the validity of the formalism. An algorithm to delineate the contours of $e^+e^- \to hX$ kinematic regions where specific factorization regimes can be applied was developed in Ref.~\cite{Boglione:2021wov}, which we will refer to in our analysis.
Another delicate point is the choice of collinear fragmentation functions. While one expects part of the $z$-dependence of theory lines to come from the behavior of the collinear FFs, there is no restriction regarding a possible $z$-dependence in the function $M_{\text D}$. Again, how appropriate a given set is depends on the parametric form of the model. In the following sections we systematically explain our choices.
For our study we will use a simple minimization procedure of the $\chi^2$ given
by
\begin{align}
\label{e.chi2}
\chi^2=\sum_{j=1}^{n}\frac{(T_j(\{p\})-E_j)^2}{\sigma_j^2}\,,
\end{align}
with $\{E_j\}$ the set of the $n$ data points under consideration and
where the corresponding theory computations $\{T_j\}$ depend on a set $\{p\}$ of $m$ parameters. The uncertainties $\sigma_j$ are treated as independent uncorrelated errors, i.e. different sources of errors provided by the BELLE data set are added in quadrature. Future refinements of our work can be achieved by modifying the definition in \eref{chi2} in order to account for the correlations in the systematic uncertainties. This, however, requires more detailed information about the different sources of such types of errors, which is not available. For now, we proceed by minimizing \eref{chi2} as done in previous related analyses \cite{Bertone:2017tyb,Soleymaninia:2019jqo,Kang:2020yqw,DAlesio:2021dcx,Gamberg:2021iat}.
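In practice, \eref{chi2} and the quadrature combination of uncorrelated errors translate into a few lines of code. The Python sketch below is only schematic (function names are ours):

```python
def chi2(theory, data, errors):
    """chi^2 as in the text: sum over points of (T_j - E_j)^2 / sigma_j^2."""
    return sum((t - e)**2 / s**2 for t, e, s in zip(theory, data, errors))

def combine_errors(*error_sources):
    """Independent uncorrelated errors (e.g. statistical and systematic)
    are added in quadrature, point by point."""
    return [sum(s**2 for s in point)**0.5 for point in zip(*error_sources)]
```

Accounting for correlated systematics would instead require replacing the diagonal sum by a quadratic form with the full covariance matrix, which is not possible with the information currently available.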
In order to test goodness-of-fit, we use the $\chi^2$ per degree of freedom, given by $\chi^2_{\text{d.o.f.}}=\chi^2/(n-m)$, which should be close to unity for a model to be considered appropriate. We will estimate the statistical errors of our analysis by determining $2\sigma$-confidence regions based on a straightforward application of the Neyman-Pearson Lemma and Wilks' theorem. Concretely, provided a minimal set of parameters $\{p_0\}$ with $\chi^2_0$, we consider parameter configurations $\{p_i\}$ with $\chi^2_i$ given by
\begin{align}
\chi^2_i<\chi^2_0+\Delta\chi^2\,,
\end{align}
where $\Delta\chi^2$ is \emph{not} an arbitrary tolerance but rather depends on the confidence level and the number of parameters varied. For $c$-$\sigma$ confidence level one has
\begin{align}
\text{erf}\left(\frac{c}{\sqrt{2}}\right)=\int_{0}^{\Delta\chi^2} dx\,\, X^2_{D}(x)\,,
\end{align}
with $X^2_D$ a chi-squared distribution with $D$ degrees of freedom, equal to the number of parameters varied.\footnote{This equation gives $\Delta\chi^2=1$ for $1\sigma$ c.l. when varying only one parameter. We consider $2\sigma$ and mostly vary all parameters at once, so $\Delta\chi^2$ values will be larger than unity.}
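The tolerance $\Delta\chi^2$ follows by numerically inverting the $\chi^2_D$ distribution function. In the Python sketch below, the implementation details (the series for the regularized lower incomplete gamma function and the bisection bracket) are our own choices; it reproduces the value quoted in the footnote:

```python
import math

def chi2_cdf(x, D, tol=1e-12):
    """CDF of a chi-squared distribution with D degrees of freedom,
    via the standard series for the regularized lower incomplete
    gamma function P(D/2, x/2)."""
    if x <= 0.0:
        return 0.0
    a, z = 0.5 * D, 0.5 * x
    term = 1.0 / a
    total = term
    n = 0
    while abs(term) > tol * abs(total):
        n += 1
        term *= z / (a + n)
        total += term
    return total * math.exp(-z + a * math.log(z) - math.lgamma(a))

def delta_chi2(c, D):
    """Delta chi^2 for a c-sigma confidence region when D parameters
    are varied: solves erf(c/sqrt(2)) = CDF_{chi2_D}(Delta chi^2)."""
    target = math.erf(c / math.sqrt(2.0))
    lo, hi = 0.0, 200.0  # bracket adequate for moderate c and D
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if chi2_cdf(mid, D) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Consistently with the footnote, `delta_chi2(1, 1)` returns $1$ up to numerical accuracy, while $2\sigma$ regions with several parameters varied give tolerances well above unity.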
\subsection{TMD FF z-dependence and choice of collinear FFs}
\label{s.zdep}
Similarly to the usual CSS formalism for two-hadron production, the impact parameter space in \eref{tmd_NLL} is constrained at small $b_{\text T}$ by a small-distance OPE, hence the appearance of the convolution of collinear FFs with matching coefficients $\cal C$, which we denote by $d \otimes {\cal C}$. This factor provides an important constraint on the $z_h$-dependence of the TMDs. As discussed before, the transition from short to large distances of the TMD is regulated by the $b_{\star}$-prescription, for which a maximum value or ``freezing point'' must be set, below which one expects perturbation theory to apply. This maximum distance, $b_{\text{max}}$ in Eq.~\eqref{eq:bstar}, corresponds to a minimum perturbative scale $\mu_{\text{min}}=2e^{-\gamma_E}/b_{\text{max}}$.
For our studies we choose $b_{\text{max}}=1.0\, \text{GeV}^{-1}$, which ensures that perturbative quantities are never evaluated below a scale of $1.12\,\text{GeV}$. This seems a sensible choice, since perturbation theory is known to work well for collinear observables down to a scale of around $1.0\, \text{GeV}$.
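The numbers above follow directly from the $b_{\star}$-prescription; a minimal sketch (in Python, with the standard CSS form of $b_{\star}$ assumed to match Eq.~\eqref{eq:bstar}, and with names of our own choosing) reads:

```python
import math

GAMMA_E = 0.5772156649015329  # Euler-Mascheroni constant
B_MAX = 1.0                   # GeV^-1, the choice made in the text

def b_star(b_T, b_max=B_MAX):
    # Standard CSS b*-prescription: b* ~ b_T at small b_T, saturates at b_max.
    return b_T / math.sqrt(1.0 + (b_T / b_max) ** 2)

def mu_b(b_T, b_max=B_MAX):
    # Perturbative scale associated with b*; bounded below by mu_min.
    return 2.0 * math.exp(-GAMMA_E) / b_star(b_T, b_max)

mu_min = 2.0 * math.exp(-GAMMA_E) / B_MAX  # ~1.12 GeV, as quoted above
```

Since $b_{\star}$ saturates at $b_{\text{max}}$, the scale $\mu_b$ never falls below $\mu_{\text{min}}$, no matter how large $b_{\text T}$ becomes.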
With this choice, we turn to the question of choosing a set of collinear FFs. We will compare the NNFF~\cite{Bertone:2017tyb} and the JAM20~\cite{Moffat:2021dji} next-to-leading order (NLO) sets
\footnote{Note that we use a recent update of the JAM20 pion FFs, obtained from https://github.com/QCDHUB/JAM20SIDIS.}.
These are modern analyses that represent the state of the art in collinear FF extractions and are readily available through LHAPDF~\cite{Buckley:2014ana}. As can be seen in \fref{conv1}, the computation of $d \otimes {\cal C}$ may render significantly different results for each collinear FF set. One may therefore suspect that the extraction of the TMD is sensitive to the choice of collinear functions. It is, however, not obvious that either of the collinear sets is to be preferred over the other. It is entirely possible that, by adjusting the values of the model parameters for, say, $M_{\text D}$, a similar description of the data could be achieved with the two collinear FF sets. Ultimately, the question of which set is more appropriate depends on the choices made in the model.
\begin{figure}
\centering
\includegraphics[scale=1]{plot_conv1_new.pdf}
\caption{
\label{f.conv1}
Convolution of the collinear fragmentation function and matching coefficients $d \otimes {\cal C}$ for the NNFF~\cite{Bertone:2017tyb} and JAM20~\cite{Moffat:2021dji} sets. Here $z_h$ is fixed at $z_h=0.425$, but significant differences can also be observed at other values of $z_h$.}
\end{figure}
In order to choose a set, we perform preliminary fits at fixed $T=0.875$ and look for the one that better describes the data, in terms of the minimal $\chi^2_{\text {dof}}$. For now we consider only the kinematical ranges $0.375~<~z_h~<~0.725$ and $q_{\text T}/Q<0.20$, which include enough data points to constrain the tests. At this stage we only attempt to parametrize $M_{\text D}$, and set the exponential factor containing $g_{\text K}$, in the last line of \eref{tmd_NLL}, equal to unity.
Notice that, according to Ref.~\cite{Boglione:2021wov}, data corresponding to $z_h$ bins with $z_h \le 0.375$ would be dominated by Region 1, which requires a different factorization theorem. For this reason we do not consider them here.
\begin{table}
\caption{Models in impact parameter space used for preliminary tests in this section. The first two entries correspond to $z_h$-independent models for $M_{\text D}$. Models labeled as ``BK'' are proportional to a modified Bessel function of the second kind and correspond to a power law in momentum space. Entries three and four are $z_h$-dependent models for $M_{\text D}$, obtained by modifying the mass parameter of the BK model, as indicated. The last entry introduces a $z_h$-dependence into the BK model through a multiplicative factor with Gaussian behavior in $b_{\text T}$.}
\label{t.models1}
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
ID&$M_{\text D}$-model&parameters\\
\hline
\hline
\multicolumn{3}{|c|}{$z_h$-independent models}\\\hline
\multirow{3}{*}{1)Exp-p}&\multirow{3}{*}{$e^{-(M_{\text 0} b_{\text T})^p}$}&\multirow{3}{*}{$M_{\text 0},\,p$}\\
& & \\
& & \\
\hline
\multirow{3}{*}{2)BK}&\multirow{3}{*}{$\cfrac{2^{2-p} (b_{\text T} M_{\text 0})^{p-1} }{\Gamma (p-1)}K_{p-1}(b_{\text T} M_{\text 0})$}&\multirow{3}{*}{$M_{\text 0},\,p$}\\
& & \\
& & \\
\hline
\multicolumn{3}{|c|}{$z_h$-dependent models}\\\hline
\multirow{3}{*}{3)BK-1}&\multirow{3}{*}{$ M_{\text 0}\to M_1\left(1-\eta_1 \log (z_h)\right)$}&\multirow{3}{*}{$M_1,\,\eta_1,\,p$}\\
& & \\
& & \\
\hline
\multirow{3}{*}{4)BK-2}&\multirow{3}{*}{$ M_{\text 0}\to M_2\left(1+\cfrac{\eta_2}{z_h^2}\right)$}&\multirow{3}{*}{$M_2,\,\eta_2,\,p$}\\
& & \\
& & \\
\hline
\multirow{3}{*}{5)BK-g}&\multirow{3}{*}{$e^{(M_gb_{\text T})^2 \log(z_h)} \times $ BK}&\multirow{3}{*}{$M_g,\,M_{\text 0},\,p$}\\
& & \\
& & \\
\hline
\end{tabular}
\end{center}
\end{table}
In a first attempt to test the collinear functions, one may consider models for $M_{\text D}$ with no explicit $z_h$-dependence, and perform fits for fixed values of $z_h$. The choice of models is summarized in the top two entries of \tref{models1}:
model 1 is inspired by a Gaussian-like $b_{\text{T}}$ behavior, while model 2, proportional to a modified Bessel function of the second kind, corresponds to a power law in momentum space and is the same functional form considered for $M_{\text D}$ in Ref.~\cite{Boglione:2020auc}.
As can be seen in \fref{chi21}, these models result in rather high values of $\chi^2_{\text{dof}}$, giving a bad description of the data. Nonetheless, it is noteworthy that $\chi^2_{\text{dof}}$ tends to be larger for the JAM20 set. Both models seem to work for $q_{\text T}/Q<0.1$ but deteriorate quickly for $0.1<q_{\text T}/Q<0.2$. In the following subsections we will set our final $q_{\text T}$-cut to the intermediate value $q_{\text T}/Q<0.15$. For now we leave this aside and continue to address the $z_h$-dependence.
\begin{figure}
\centering
\includegraphics[scale=0.9]{plot_nnffvsjam20_chi2_1_new20.pdf}
\caption{
\label{f.chi21}
Minimal $\chi^2_{\text{dof}}$ for fits at fixed $T=0.875$ and individual $z_h$-bins in the range $0.375~<~z_h~<~0.725$, for $M_{\text D}$ models with no $z_h$-dependence. Here $q_{\text T}/Q<0.20$. Dashed and solid lines correspond respectively to the first and second entries in \tref{models1}. For each model we have two parameters and a total of nine individual fits, one per $z_h$-bin.
Note that, even with such large values of $\chi^2_{\text{dof}}$, the mild relative differences between using JAM20 and NNFF suggest that either set could describe the data with the same quality.}
\end{figure}
Recall that so far we have performed only independent fits at fixed $T=0.875$, separately for each bin inside the range $0.375~<~z_h~<~0.725$. A useful exercise is to plot the values of the resulting minimal parameters in terms of $z_h$, as is done in \fref{para1} for the BK model. There, it is clear that if one expects to fit all bins in $z_h$ simultaneously (still at fixed $T=0.875$), some $z_h$-dependence will be needed in the parametric form for $M_{\text D}$. We remark that an important result of the factorization scheme is that $g_{\text K}$ must be independent of $z_h$.
%
Another interesting aspect of \fref{para1} is that a stronger $z_h$-dependence is observed for the mass parameter $M$ than for the dimensionless parameter $p$. We find that improving the trend of the theory lines in the variable $z_h$ is more readily done by introducing a $z_h$-dependence in the dimensionful parameters. We have observed this in several of the cases we tested, although here we only show a few of them. More generally, one could expect strong correlations among all the parameters in $M_{\text D}(b_{\text{T}})$. For instance, a closer inspection of the example in \fref{para1} shows that the two parameters shaping the $b_{\text{T}}$ profile of $M_{\text D}$, $M_0$ and $p$, display a similar trend as functions of $z_h$. We will come back to this in the next sections.
\begin{figure}[t]
\centering
\includegraphics[scale=0.9]{plot_nnff_para_1_new.pdf}
\caption{
\label{f.para1}
Minimal parameter values for fits at fixed $T=0.875$ and individual $z_h$-bins in the range $0.375~<~z_h~<~0.725$, for the $M_{\text D}$ model in the second entry of \tref{models1} ($z_h$-independent BK model). Here $q_{\text T}/Q<0.15$. Results correspond to the solid lines in \fref{chi21}. In this case, where we fit $z_h$-bins separately, the incompatibility of $M$ and $p$ for different $z_h$ suggests that a $z_h$-dependence is needed if the model is to describe the data in a simultaneous fit of the $0.375~<~z_h~<~0.725$ range. It is interesting to note that the dimensionful parameter $M$ exhibits the stronger dependence on $z_h$.
}
\end{figure}
We attempt three different $z_h$-dependent models for $M_{\text D}$, as indicated in the last three entries of \tref{models1}. The first two are modifications of the BK model, where we modify the mass parameter as $M\to M(z_h)$, adding in each case one more parameter to introduce, respectively, a logarithmic and an inverse-power term. The last one is the BK model multiplied by $z_h^{(M_g b_{\text T})^2}$, so that the $z_h$-dependence is controlled by this additional multiplicative function and determined by the mass parameter $M_g$. \\
Results for these three models can be seen on the left panel of \fref{chi22}.
Despite the large values of $\chi^2_{\text{dof}}$ for the first two models, we find a considerable improvement with respect to the $z_h$-independent BK model. The third model indeed works much better, which is partly due to its $z_h$-dependence but also to the Gaussian behavior introduced by the factor $z_h^{(M_g b_{\text T})^2}$. The Gaussian behavior of this model improves the description at the large end of the selected range of $q_{\text T}$, giving much lower values of $\chi^2_{\text{dof}}$. For this last model, the last entry in \tref{models1}, we perform two more fixed-$T$ fits, for $T=0.750$ and $T=0.825$, resulting in $\chi^2$ values roughly three times smaller than those corresponding to models BK-1 and BK-2. Results are shown on the right panel of \fref{chi22}.
One should be careful in interpreting these results. First, while it may seem that the last model should be the obvious choice to extract the unpolarized TMD FF, the other two $z_h$-dependent models we have considered here are able to describe the data well up to $q_{\text T}/Q<0.1$, as we will show in the following subsections. This is a delicate point, since one does not know a priori up to which maximum value of $q_{\text T}/Q$ the errors $\order{(q_{\text T}/Q)^2}$ of \eref{tmd_NLL} are small enough for the formalism to still be valid. For instance, if the cut on $q_{\text T}/Q$ were made more restrictive, say $q_{\text T}/Q < 0.1$,
the clear advantage of the Gaussian $z_h$-dependent model, describing the data in the region $0.1<q_{\text T}/Q<0.2$, would become less significant.
We close our preliminary discussion of the $z_h$-dependence by stating the main conclusions of this subsection.
First, a stronger $z_h$-dependence is observed in mass parameters than in dimensionless parameters.
This is an observation that applies to several models we tested, of which we provide one concrete example in \fref{para1}.
In the specific case of \fref{para1}, we also find that the $z_h$ dependence may induce strong correlations between the model parameters $M_0$ and $p$.
Second, in all the preceding discussions, and despite inadequacies in some of the models considered, $\chi^2_{\text{dof}}$ values tend to be smaller with NNFF,
so this will be our choice for our main analysis; we will, however, not yet settle on a specific model for $M_{\text D}$. Based on our preliminary studies of this section, we expect that using JAM20 would give larger values of $\chi^2_{\text{dof}}$, although not by much.
\begin{figure}
\centering
\includegraphics[scale=0.9]{plot_nnffvsjam20_chi2_2.pdf}
\caption{
\label{f.chi22}
Minimal $\chi^2_{\text{dof}}$ for fits in the kinematic range $0.375~<~z_h~<~0.725$ ($z_h$-bins are fitted simultaneously), for the $z_h$-dependent models for $M_{\text D}$ in the last three entries of \tref{models1}. Left panel: comparison of the results obtained with NNFF~\cite{Bertone:2017tyb} and JAM20~\cite{Moffat:2021dji}, for fixed $T=0.875$. Right panel: fixed-$T$ fits for $T=\{0.750,0.825,0.875\}$, using the BK model with a Gaussian $z_h$-dependent term (last entry in \tref{models1}). Similarly to the results presented in \fref{chi21}, the NNFF set consistently produces smaller values of $\chi^2_{\text{dof}}$.
}
\end{figure}
\subsection{ Behavior of the unpolarized TMD FF in the large-$b_{\text T}$ limit. }\label{s.largebT}
In this subsection we will address the behavior of the unpolarized TMD FF in impact parameter space. Specifically, we look at possible parametric forms for $M_{\text D}$ in \eref{tmd_NLL}, paying special attention to the large-$b_{\text T}$ limit. For the purposes of our discussion we identify two different possible meanings of ``large-$b_{\text T}$'' behavior:
\begin{enumerate}
\item asymptotically large-$b_{\text T}$
\item maximum $b_{\text T}$ accessible through data.
\end{enumerate}
The first one corresponds to the formal limit $b_{\text T}\to\infty$, in which one may write asymptotic expansions for a known parametric form. For instance, the BK model discussed in the previous subsection has an asymptotic limit
\begin{align}
\frac{2^{2-p} (b_{\text T} M_{\text 0})^{p-1} }{\Gamma (p-1)}&K_{p-1}(b_{\text T} M_{\text 0})\nonumber\\
&\to
\sqrt{\pi }\frac{ 2^{\frac{3}{2}-p} (b_{\text T} M_{\text 0})^{p-\frac{3}{2}}}{\Gamma (p-1)}e^{-b_{\text T} M_{\text 0}}\,,
\label{e.BK}
\end{align}
characterized by an exponentially decaying behavior as $b_{\text T}\to\infty$. The second one, instead, refers to the largest region in $b_{\text T}$ that is accessible phenomenologically, i.e., the largest distances at which the data can constrain the model, which can be better determined after carrying out a data analysis.
The largest $b_{\text{T}}$ accessible phenomenologically corresponds to the case of measurements at values of $Q$ small enough that nonperturbative effects are maximized, but large enough that TMD factorization still holds. Even at scales of, say, $Q=2\,\text{GeV}$, it is possible that the asymptotic behaviour of the TMDs cannot be resolved completely. At BELLE kinematics, where $Q\approx10\,\text{GeV}$, it is unlikely that one can find strong constraints for the asymptotic behaviour of TMDs.
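To make the two regimes concrete, a small numerical cross-check of the BK model and of its asymptotic form in \eref{BK} can be sketched as follows (in Python; the Bessel function is evaluated here from its integral representation using only the standard library, and the parameter values are purely illustrative):

```python
import math

def bessel_k(nu, x, n=4000, t_max=25.0):
    # K_nu(x) from the integral representation
    # K_nu(x) = int_0^infty exp(-x cosh t) cosh(nu t) dt  (trapezoid rule).
    dt = t_max / n
    total = 0.0
    for i in range(n + 1):
        t = i * dt
        w = 0.5 if i in (0, n) else 1.0
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return total * dt

def bk_model(b_T, M0, p):
    # BK model of the table above; normalized so that M_D -> 1 as b_T -> 0.
    x = b_T * M0
    return 2.0 ** (2 - p) * x ** (p - 1) / math.gamma(p - 1) * bessel_k(p - 1, x)

def bk_asymptotic(b_T, M0, p):
    # Large-b_T limit: sqrt(pi) 2^(3/2-p) x^(p-3/2) exp(-x) / Gamma(p-1).
    x = b_T * M0
    return (math.sqrt(math.pi) * 2.0 ** (1.5 - p) * x ** (p - 1.5)
            / math.gamma(p - 1) * math.exp(-x))
```

At small $b_{\text T}$ the model tends to unity, while at $b_{\text T} M_{\text 0}\sim 10$ the exponentially decaying asymptotic form is already accurate to a few percent, illustrating how quickly the formal $b_{\text T}\to\infty$ regime sets in for this functional form.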
\begin{table}
\caption{Models for $M_{\text D}$ in impact parameter space. Both cases shown are obtained by multiplying model BK of \tref{models1}, which corresponds to a power law in momentum space, by an additional function of $b_{\text T}$ and $z_h$. }
\label{t.models2}
\begin{center}
\begin{tabular}{|c|l|c|}
\hline
\multicolumn{3}{|c|}{\multirow{2}{*}{}}\\
\multicolumn{3}{|l|}{\multirow{2}{*}{$\quad M_{\text D}=\cfrac{2^{2-p} (b_{\text T} M_{\text 0})^{p-1} }{\Gamma (p-1)}K_{p-1}(b_{\text T} M_{\text 0})\,\,\times\,\,F(b_{\text T},z_h)$}}\\
\multicolumn{3}{|c|}{\multirow{2}{*}{}}\\
\multicolumn{3}{|c|}{\multirow{2}{*}{}}\\
\multicolumn{3}{|l|}{\multirow{2}{*}{$\quad M_z=-M_{\text 1}\log(z_h)$}}\\
\multicolumn{3}{|c|}{\multirow{2}{*}{}}\\
\multicolumn{3}{|c|}{\multirow{2}{*}{}}\\
\hline
ID&$\qquad\qquad F$-model&parameters\\
\hline
\hline
\multirow{3}{*}{ I }&\multirow{3}{*}{$F=\left(\cfrac{1+\log\left(1+(b_{\text T} M_z)^2\right)}{1+(b_{\text T} M_z)^2}\right)^q$}&\multirow{3}{*}{$M_{\text 0},M_{\text 1},p,\,q=8$}\\
& & \\
& & \\
\hline
\multirow{3}{*}{ IG }&\multirow{3}{*}{$F=\exp\left((M_gb_{\text T})^2 \log(z_h)\right)$}&\multirow{3}{*}{$M_{\text 0},\,M_g,\,p$}\\
& & \\
& & \\
\hline
\end{tabular}
\end{center}
\end{table}
This would mean that fitting BELLE data may be possible with parametric forms of distinct asymptotic behaviour.
However, when considering data at smaller energy scales, for which the maximum $b_{\text T}$ accessible is likely larger than that at BELLE energies, one may find inconsistencies in a global fit if the asymptotic behaviour in $b_{\text T}$ is not chosen appropriately. Theoretical constraints are important in light of the issues encountered in lower-energy phenomenology, see for example Refs.~\cite{Boglione:2014oea,Collins:2016hqq,Gonzalez-Hernandez:2018ipj,Wang:2019bvb,Boglione:2016bph,Boglione:2019nwk,Boglione:2022gpv}. To account for them, we follow some of the considerations made in Ref.~\cite{Collins:2014jpa}. Thus, for this work we will look for a parametric $M_{\text D}$ that decays exponentially in $b_{\text T}$ space, but that is able to describe BELLE data at least as well as model 5 in \tref{models1}, which, in the preliminary cases considered so far, seems to be suitable. A possible candidate is shown in \tref{models2}, where for convenience we have explicitly rewritten model 5 of \tref{models1}. Both models in \tref{models2} correspond to a power-like behaviour in momentum space, characterized in $b_{\text T}$ space by the modified Bessel function of the second kind, times an extra factor which we denote by $F$. To make the comparison between exponential and Gaussian asymptotic behaviour more transparent, in this preliminary study we consider only the models in \tref{models2}. Note that even in the case $F=1$ one recovers an exponentially decaying behaviour asymptotically, from the Bessel function alone, as seen in \eref{BK}. We will consider this case later, as it requires a detailed explanation of possible final parametric forms, which account for the strong correlations of the parameters in $M_{\text D}$ related to the $z_h$ dependence, as noted in the previous subsection.
For now, we will compare how well the models in \tref{models2} may describe the data.
Our aim is to provide a practical example in which two models that describe the data reasonably well are not necessarily constrained in the asymptotically large-$b_{\text T}$ limit. The question of what is an appropriate parametric form for the $P_{\text T}$ behaviour of $M_{\text D}$ cannot be decoupled from the choices made to model its $z_h$ dependence. Thus we proceed as follows.
First, we perform three fits at the fixed values $T=\{0.750,0.825,0.875\}$,
where in each case we include BELLE data in the region $q_{\text T}/Q<0.20$ and $0.375<z_h<0.725$. To accommodate the $z_h$ dependence we choose a logarithmic behaviour for the function $F$, as shown in \tref{models2}. Since we are not fitting the three $T$ bins simultaneously, we will not be able to also fit $g_{\text K}$, which correlates with thrust, so for now we set $g_{\text K}=0$. Then we will look at a single case, one value of $T$ and $z_h$, where the $P_{\text T}$ dependence is described well by both models, and examine the results in $P_{\text T}$ and $b_{\text T}$ space.
\begin{table}
\caption{
Minimal $\chi^2_{\text{d.o.f.}}$ resulting by fitting the two para\-metric forms for $M_{\text D}$ in \tref{models2}. In each case we perform three independent fits, one for each value $T=\{0.750,0.825,0.875\}$, in the ranges $q_{\text T}/Q<0.2$ and $0.375<z_h<0.725$. As far as the description of the data is con\-cerned all three cases seem to be acceptable, see explanation in the text. }
\label{t.chi21}
\begin{center}
\begin{tabular}{c c c c c}
\hline
\multicolumn{5}{c}{$\chi^2_{\text{d.o.f.}}$ (fixed-$T$ fits)}\\
\hline
&&&&\\
$M_{\text D}$ model &$\,\,\,T=$&0.750&0.825&0.875\\
\hline
&&&&\\
\hline
&&&&\\
I & &1.2&0.38&1.02\\
&&&&\\
IG & &1.46&0.47&1.51\\
&&&&\\
\hline
\end{tabular}
\end{center}
\end{table}
The results of the fixed-$T$ fits are shown in \tref{chi21}. The smaller values of $\chi^2_{\text{d.o.f.}}$ obtained with model I are related to the choice $q=8$, which allows for a good description of the $z_h$ bins considered. Note that modifying the $z_h$ behaviour in model IG could improve its best-fit $\chi^2_{\text{d.o.f.}}$ as well. At this stage we consider both models as candidates to parametrize $M_{\text D}$, since our main interest is to discuss the $P_{\text T}$ dependence.
\begin{figure}
\centering
\includegraphics[scale=0.9]{modelsvsqt.pdf}
\caption{
\label{f.mdvsqt1}
Best-fit lines for both models in \tref{models2}, obtained by fitting BELLE data for the kinematics $T=0.825$, $0.375<z_h<0.725$ and $P_{\text T}/(z_h Q)<0.2$. Note that both lines follow essentially the same profile in the region of the data shown.
}
\end{figure}
Now we look at the case $z_h=0.525$ and $T=0.825$, for which both models describe the data reasonably well. In fact, as seen in \fref{mdvsqt1}, the models of \tref{models2} have the same profile and almost lie on top of each other. The corresponding lines in $b_{\text T}$ space are shown in \fref{mdvsbt1}, where it can be seen that for values $b_{\text T}>4$ GeV$^{-1}$ the cross sections calculated using models I and IG deviate from each other. This is of course due to the differences in the asymptotic behaviour of the models. This example simply illustrates that the asymptotic behaviour of the TMD FF is not necessarily constrained by BELLE data beyond some large value of $b_{\text T}$. However, the reason to prefer an asymptotic behaviour like that of model I comes from the necessity to fit data at lower energies in the future, for which the large-$b_{\text T}$ Gaussian fall-off may not be appropriate.
From here on out we will focus on models for $M_{\text D}$ that decay exponentially in the asymptotically large $b_{\text T}$ limit. More precisely
\begin{align}
\label{e.modelasyconst}
\qquad\log(M_{\text D})\underset{b_{\text T}\to\infty}{\sim} -\,C \,b_{\text T} + o(b_{\text T})\,,
\end{align}
with $C$ a positive mass parameter and where we have used the little-$o$ symbol to indicate sub-linear terms in $b_{\text T}$.
Furthermore, we will explore two different approaches, leading to two classes of models.
The first one is model I in \tref{models2}, which corresponds to the function in \eref{BK} times the $z_h$-dependent function $F$. The second one is similar to model I but sets $F=1$ and models the $z_h$ dependence through both the mass parameter $M_{\text 0}$ and the power $p$ of the Bessel function in \eref{BK}.
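As a quick consistency check (ours, not part of the original derivation), taking the logarithm of the asymptotic form in \eref{BK} shows that the BK building block indeed satisfies \eref{modelasyconst}:

```latex
\begin{align*}
\log(M_{\text D})
\underset{b_{\text T}\to\infty}{\sim}
-\,M_{\text 0}\, b_{\text T}
+\left(p-\tfrac{3}{2}\right)\log(b_{\text T} M_{\text 0})
+\log\frac{\sqrt{\pi}\,2^{\frac{3}{2}-p}}{\Gamma(p-1)}\,,
\end{align*}
```

so that $C=M_{\text 0}$, with all remaining terms being $o(b_{\text T})$. The extra factor $F$ of model I decays only power-like in $b_{\text T}$, hence it contributes at most logarithmically to $\log(M_{\text D})$ and leaves the linear term untouched.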
\begin{figure}[t]
\centering
\includegraphics[scale=0.9]{modelsvsbt.pdf}
\caption{
\label{f.mdvsbt1}
Best-fit lines for both models in \tref{models2} in $b_{\text T}$ space, obtained by fitting BELLE data for the kinematics $T=0.825$, $0.375<z_h<0.725$ and $q_{\text T}/Q<0.2$. Lines correspond to those in \fref{mdvsqt1}. The deviation of the two theory lines for $b_{\text T}>4\,\text{GeV}^{-1}$ indicates the lack of sensitivity to the asymptotic behaviour of the models in this particular example.
}
\end{figure}
Before performing our extraction, however, we need to
set a parametric form for $g_{\text K}$.
\subsection{Behavior of $g_{\text K}$ in the large-$b_{\text T}$ limit. }
The usual definition of the TMD FF in the CSS formalism differs from that introduced in Ref.~\cite{Boglione:2020cwn} by a non-perturbative function $M_{\text{S}}(b_{\text{T}})$, as explained in Section~\ref{sec:formalism} and given in \eref{sqrtMD}. $M_{\text{S}}(b_{\text{T}})$ is associated with soft-gluon effects and originates from the fact that in the latter definition the TMDs are purely collinear objects, while in the CSS definition soft radiation contributions are included in the TMD definition itself.
This means that the non-perturbative function $M_{\text D}(b_{\text{T}})$ introduced in Eq.~\eqref{eq:cross-sect}, and discussed in Section~\ref{sec:formalism}, cannot be used directly in $e^+e^-$ two-hadron production or SIDIS processes, see \eref{sqrtMD}. Note, however, that the non-perturbative function $g_{\text K}$ has been defined to be the same as in the usual CSS formalism, up to a trivial factor of $2$, see \eref{sqrtgK}. Thus, it characterizes the large-distance behavior of the Collins-Soper kernel as defined in Ref.~\cite{Collins:2011zzd}. This is perhaps one of the most useful aspects of the formalism of Refs.~\cite{Boglione:2020auc,Boglione:2020cwn,Boglione:2021vug,Boglione:2021wov} in the context of global fits, since it allows for comparisons of the extracted $g_{\text K}$ with other recent work (see for example Refs.~\cite{Bacchetta:2017gcc,Bacchetta:2019sam,Scimemi:2017etj,Scimemi:2019cmh}). In order to choose a suitable parametrization for $g_{\text K}$, we use the following observation as a guiding principle.
In general, one may write the TMD FF in $b_{\text T}$ space as
\begin{align}
\tilde{D}(b_{\text T},\zeta)=&\tilde{D}(b_{\text T},\zeta_0) \exp\left\{-\frac{g_{\text K}}{4} \log\left(\frac{\zeta}{\zeta_0}\right)\right\}\big(...\big)\,,
\end{align}
where only the dependence on $b_{\text T}$ and $\zeta$ has been written explicitly, and the ellipsis indicates other terms containing perturbatively calculable quantities. Using the hypothesis in \eref{modelasyconst}, one has in the large-$b_{\text T}$ limit
\begin{align}
\log\left(\tilde{D}(b_{\text T},\zeta)\right)\overset{b_{\text T}\to\infty}{=}&-C b_{\text T} -\frac{g_{\text K}^{\text{large} \, b_{\text T}}}{4} \log\left(\frac{\zeta}{\zeta_0}\right)+ o(b_{\text T})\,.
\end{align}
We then note that
\begin{align}
\label{e.gkasy}
g_{\text K}\overset{b_{\text T}\to\infty}{=}o(b_{\text T})\implies\log(\tilde{D}(b_{\text T},\zeta))=O(b_{\text T})\,,
\end{align}
independently of $\zeta$ and $\zeta_0$. This seems like a reasonable condition, since $\zeta_0$ is a somewhat arbitrary reference scale: for instance, it could be chosen depending on the kinematics of a particular phenomenological analysis. In this analysis we will only consider the hypothesis that asymptotically $g_{\text K}=o(b_{\text T})$. As a counter-example, with the same ansatz for the asymptotic behavior of $\tilde{D}(b_{\text T},\zeta)$, \eref{modelasyconst}, choosing the large-$b_{\text T}$ behavior of $g_{\text K}$ to be quadratic would implicitly assign a special role to the reference scale $\zeta_0$: in this case, for $\zeta=\zeta_0$ one has $\log(\tilde{D}(b_{\text T},\zeta))=O(b_{\text T})$, while for $\zeta\neq\zeta_0$, $\log(\tilde{D}(b_{\text T},\zeta))=O(b_{\text T}^2)$. Note that one could
set $g_{\text K}$ to be $O(b_{\text T})$ instead of $o(b_{\text T})$ and still have \eref{gkasy} be valid. However, this allows $\tilde{D}(b_{\text T},\zeta)$ to be divergent in the limit $b_{\text T}\to\infty$ for sufficiently small $\zeta/\zeta_0$ (see also the discussion in Ref.~\cite{Collins:2014jpa}). Note that a sub-linear $b_{\text T}$ behaviour for $g_{\text K}$ has already been suggested by several authors, see for instance Eq.~(79) in \cite{Collins:2014jpa}, Eq.~(40) in \cite{Aidala:2014hva} and Eq.~(24) in \cite{Vladimirov:2020umg}.
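The marginal linear case can be made explicit. Writing $g_{\text K}\overset{b_{\text T}\to\infty}{\sim}\lambda\, b_{\text T}$, with $\lambda$ a positive mass parameter introduced only for this illustration, and using \eref{modelasyconst}, one finds

```latex
\begin{align*}
\log\big(\tilde{D}(b_{\text T},\zeta)\big)
\overset{b_{\text T}\to\infty}{=}
-\left[\,C+\frac{\lambda}{4}\log\left(\frac{\zeta}{\zeta_0}\right)\right] b_{\text T}
+o(b_{\text T})\,,
\end{align*}
```

which still obeys \eref{gkasy} but grows without bound whenever $\log(\zeta/\zeta_0)<-4C/\lambda$, i.e. for sufficiently small $\zeta/\zeta_0$, making the divergence explicit.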
Our analysis will be conducted by adopting the following functional forms for the large $b_{\text T}$ behaviour of $g_{\text K}$
\begin{align}
&
g_{\text K}\overset{b_{\text T}\to\infty}{\sim}
\log(M_{\text K} b_{\text T})\,,
\label{e.gk-largeb-jogh}\\
&
g_{\text K}\overset{b_{\text T}\to\infty}{\sim}
(M_{\text K} b_{\text T})^{(1-2p_{\text K})},\qquad 0<p_{\text K}<1/2\,,\label{e.gk-largeb-vlad}
\end{align}
where the first expression is similar to that considered in Ref.~\cite{Aidala:2014hva} (but with an undetermined power $p_{\text K}$), while the second expression corresponds to the model calculation presented in Ref.~\cite{Vladimirov:2020umg} for the CS kernel as $b_{\text T}\to\infty$. We have also considered a constant asymptotic form, as suggested in Ref.~\cite{Collins:2016hqq} but, limited to the data sample we are presently fitting, we obtain consistently larger $\chi^2$ values compared to those obtained using a sub-linear asymptotic behaviour for $g_{\text K}$.
We stress that our main purpose is to test whether or not $g_{\text K}=o(b_{\text T})$ as $b_{\text T}\to\infty$ is a suitable asymptotic dependence for the non perturbative behavior of the Collins-Soper kernel. In this sense, \eref{gk-largeb-jogh} and \eref{gk-largeb-vlad} should be seen only as a proxy for such hypothesis. Consideration of two models for $g_{\text K}$ will allow us to get a ``measure'' of the correlations between $M_{\text D}$ and $g_{\text K}$ and of the theoretical uncertainties introduced by model choices.
\subsection{Behavior of $g_{\text K}$ in the small-$b_{\text T}$ limit.\label{s.small-bt-gk}}
There is a general consensus that the behavior of $g_{\text K}$ in the small-$b_{\text T}$ limit should be power-like, see for example Refs.~\cite{Collins:2014jpa, Aidala:2014hva, Collins:2017oxh, Vladimirov:2020umg,Scimemi:2019cmh,Bacchetta:2019sam}. Phenomenological studies have often assumed
\begin{align}
g_{\text K}\overset{b_{\text T}\to0}{\sim} c_1 b_{\text T}^2 \,.
\label{e.gksmall}
\end{align}
For instance, Ref.~\cite{Bacchetta:2019sam} uses
\begin{align}
g_{\text K} = c_1 b_{\text T}^2 + c_2 b_{\text T}^4\,,
\end{align}
where a strong suppression at small $b_{\text T}$ was necessary to reach a satisfactory description of Drell-Yan data at extremely large energies, which required high accuracy in the perturbative and logarithmic expansion.
For this analysis, where the perturbative expansion only extends to NLL,
we start by testing two different models for $g_{\text K}$
which ensure a $b_{\text T}^2$ behaviour at small $b_{\text T}$, while respecting the asymptotic trends discussed above. More specifically, we look at the following functional forms:
\begin{align}
\label{e.gksmall21}
& c \log\left(1+\left(M_{\text K} b_{\text T}\right)^2\right)\,,\\
\label{e.gksmall22}
& a \, b_{\text T}^{p_{\text K}} \, \Big(1 - e^{-(b/a)\, b_{\text T}^{(2-p_{\text K})}}\Big)\,.
\end{align}
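The quoted $b_{\text T}^2$ limits of the two functional forms above can be verified numerically; a minimal sketch (in Python, with purely illustrative parameter values, and reading the exponent of the second form as $-(b/a)\,b_{\text T}^{2-p_{\text K}}$):

```python
import math

def gk_log_model(b_T, c=0.5, M_K=1.0):
    # First form: c*log(1 + (M_K b_T)^2)  ->  c*(M_K b_T)^2 as b_T -> 0.
    return c * math.log(1.0 + (M_K * b_T) ** 2)

def gk_power_model(b_T, a=0.8, b=0.5, p_K=0.7):
    # Second form: a*b_T^p_K * (1 - exp(-(b/a)*b_T^(2-p_K))) -> b*b_T^2
    # as b_T -> 0, independently of a and p_K.
    return a * b_T ** p_K * (1.0 - math.exp(-(b / a) * b_T ** (2.0 - p_K)))
```

Expanding the exponential in the second form shows why the powers of $a$ and $p_{\text K}$ cancel at small $b_{\text T}$, leaving the coefficient $b$ of the quadratic term.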
Both models show some drawbacks. First of all, the parameter space is not well constrained. Moreover, the larger values of $\chi^2$ point to the inadequacy of the power $2$ for $b_{\text T}$.
In fact, in our preliminary tests we find that our fit is rather sensitive to the modulation of $g_{\text K}$ in the large-$b_{\text T}$ region.
Remarkably, it shows a strong preference for a sub-linear power or a logarithmic rise of $g_{\text K}$,
while definitely ruling out a
$b_{\text T}^2$ or $b_{\text T}^4$
behaviour at large $b_{\text T}$. Indeed, it is likely that increased perturbative accuracy could accommodate the behaviour of \eref{gksmall} at small $b_{\text T}$.
We therefore relax the constraint that $g_{\text K}$ should go to zero quadratically in the small $b_{\text T}$ limit, by simply requiring it to go to zero as some generic power $p_{\text K}>0$. This will also allow us to reduce the number of free parameters for our final analysis. Thus, we will focus on the following parametrizations
\begin{align}
\label{e.gk-jogh}
g_{\text K} &= \log\left(1+(M_{\text K} b_{\text T})^{p_{\text K}}\right)\,,\\
\label{e.gk-vlad}
g_{\text K} &= (M_{\text K} b_{\text T})^{(1-2p_{\text K})},\qquad 0<p_{\text K}<1/2\,.
\end{align}
The functional forms in \eref{gk-jogh} and \eref{gk-vlad}, labelled $A$ and $B$ respectively, are summarized in \tref{models3}. They optimize the quality of the fit while keeping the number of free parameters under control.
\bigskip
\subsection{Final Models and Data Kinematics.}
\label{s.finalmodelsandkin}
For our main analysis we focus on the following kinematics
\begin{align}\label{e.kin1}
0.375\leq z_h\leq 0.725\,,\qquad 0.750\leq T\leq 0.875\,,
\end{align}
corresponding to Region 2 (see Ref.~\cite{Boglione:2021wov}).
Furthermore, as the TMD formalism of Refs.~\cite{Boglione:2020auc,Boglione:2021wov} applies to the region in which $q_{\text T}=P_{\text T}/z_h\ll Q$, we adopt the cut
\begin{align}\label{e.kin2}
q_{\text T}/Q\leq 0.15\,,
\end{align}
which gives us some confidence that the appropriate collinear-TMD factorization theorem is applied,
and perform a standard $\chi^2$ minimization procedure for each one of the models summarized in \tref{models3}.
More restrictive cuts make it difficult to find an optimal solution, while less stringent ones result in large values of $\chi^2$.
As mentioned before, for our analysis we consider two different models for each $M_{\text D}$ and $g_{\text K}$, in order to provide a reliable estimation of the uncertainties affecting the extraction of the TMD FF. For $g_{\text K}$ we consider the functional forms in \eref{gk-jogh} and \eref{gk-vlad}, which we call model A and model B respectively. For $M_{\text D}$, our starting point is the Fourier transform of a power law in momentum space, taking into account that a $z_h$-dependence is necessary for a successful description of the BELLE cross sections \cite{Seidl:2019jei}. These two models, labelled I and II, differ only in the treatment of the $z_h$ dependence. In total we have four different cases we will use, which we label as models IA, IB, IIA, and IIB.
\subsubsection{Models IA and IB}
\label{s.models1}
Model I for $M_{\text D}$ was already introduced in \sref{largebT} (see \tref{models2}) and,
as summarized in \tref{models3}, it concentrates the full $z_h$ dependence of $M_{\text D}$ within the extra $F(b_{\text T},z_h)$ factor, which is controlled by the mass-parameter $M_z = -M_1 \log (z_h)$, while the Bessel function and other factors corresponding to the power law in momentum space only depend on $b_{\text T}$.
Thus, models IA and IB have initially six parameters each. In both cases, we find that when trying to fit all of the parameters simultaneously, some are poorly constrained and/or show very strong correlations. This may indicate some ``redundancy'', i.e.\ the existence of non-independent parameters, which can be an issue when attempting to provide a transparent statistical interpretation of the results. We find that we have to fix a total of three parameters, two for $M_{\text D}$ and one for $g_{\text K}$, in order to avoid such a situation.
We choose to fix the dimensionless powers $p$, $q$ and $p_{\text K}$, so that the best-fit values of the remaining parameters may be interpreted as ``typical masses'' of the observables.
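For illustration, the model-I shape of $M_{\text D}$ can be sketched numerically (the mass values below are indicative, not the fitted ones). Using the small-argument limit $x^{\nu}K_{\nu}(x)\to 2^{\nu-1}\Gamma(\nu)$, one can check that the parametrization is normalized so that $M_{\text D}\to 1$ as $b_{\text T}\to 0$:

```python
import numpy as np
from scipy.special import gamma, kv

# Sketch of the model-I parametrization of M_D: a BK-type Fourier
# transform of a power law times the extra factor F(bT, zh).
# M0, M1 are illustrative values; p = 1.51 and q = 8 are the fixed ones.
def MD_modelI(bT, zh, M0=0.3, M1=0.52, p=1.51, q=8):
    x = bT * M0
    bk = 2.0 ** (2.0 - p) * x ** (p - 1.0) * kv(p - 1.0, x) / gamma(p - 1.0)
    Mz = -M1 * np.log(zh)
    y2 = (bT * Mz) ** 2
    F = ((1.0 + np.log1p(y2)) / (1.0 + y2)) ** q
    return bk * F
```

With $p=1.51$, the non-analytic small-$b_{\text T}$ term of the Bessel function scales as $b_{\text T}^{2(p-1)}$, so the derivative of $M_{\text D}$ vanishes (slowly) at $b_{\text T}=0$, as stated above.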
\begin{figure*}
\centering
\subfloat[]{\includegraphics[scale=1]{powerk_Mk_IA84_T2-4_z6-13_rho15.pdf}}
\subfloat[]{\includegraphics[scale=1]{M_Mk_IA84_T2-4_z6-13_rho15.pdf}}
\caption{
Preliminary study of parameter space using model I for $M_{\text D}$ and A for $g_{\text K}$, see \tref{models3}, with fixed $p=1.51$ and $q=8$. The circles represent parameter configurations in a region where a minimum is found. The empty circles display the value of $\chi^2$ both by color (as in the palette) and size (larger circles for smaller values of $\chi^2$), for configurations with $\chi^2_i<\chi^2_0+\Delta\chi^2$, with $\Delta\chi^2=9.72$. This value of $\Delta\chi^2$ corresponds to a $2\sigma$ confidence level for varying 4 parameters simultaneously, in situations where the $\chi^2$ as a function of the parameters can be approximated as an ellipsoid around the minimum. In this case, however, such an approximation is not valid, hindering an interpretation in terms of confidence levels. Strong correlations like those shown likely indicate some ``redundancy'' in parameter space. (a) Correlation between $M_{\text K}$ and $p_{\text K}$, where the green circle indicates the minimal configuration. (b) Correlation between $M_{\text K}$ and $M_{\text 0}$.
\label{f.corr1}
}
\end{figure*}
First, we set $p=1.51$, so that the derivative of the Bessel function in model I vanishes at $b_{\text T}=0$; this prevents $M_{\text D}$ from being sharply peaked at $b_{\text T}=0$. After setting the value for $p$, we find that the minimum\footnote{More precisely a ``lower bound'', not the minimum $\chi^2$ in the mathematical sense.} value of $\chi^2$ one can obtain (for both models IA and IB) corresponds to $q\approx8$, so we fix $q=8$. Finally, given these choices for $p$ and $q$, we perform a fit to obtain the optimal value of the power parameter $p_{\text K}$ for each of models IA and IB. We show the results of this last step in \fref{corr1} for model IA (fixed $p=1.51$, $q=8$ and varying $p_{\text K}$), in order to illustrate the need to fix some of the parameters. There, the circles display parameter configurations $i$ with $\chi^2_i$ values that deviate from the minimum $\chi^2_0$ by no more than a ``tolerance''\footnote{This value corresponds to a 2$\sigma$ confidence level for varying 4 parameters, but we do not attempt to make such an interpretation in this particular case.} $\Delta \chi^2=9.72$; the green circle represents the minimal configuration.
While in this case it is possible to find a minimum varying $M_{\text 0}$, $M_{\text 1}$, $M_{\text K}$ and $p_{\text K}$ simultaneously,
very strong correlations appear and the parameter configurations deviate significantly from ellipsoidal shapes, as shown in \fref{corr1}. This makes it difficult to draw regions in parameter space as is usually done, by considering configurations for which $\chi^2_i<\chi^2_0+\Delta\chi^2$, \emph{and} to interpret them in terms of confidence levels, i.e.\ statistical errors of our analysis.
As we will see in the next section, by varying only the three mass parameters $M_{\text 0}$, $M_{\text 1}$ and $M_{\text K}$, parameter space displays elliptical profiles for all correlations, allowing for a sounder statistical interpretation.
It is interesting to note that strong correlations also appear between the $M_{\text D}$ and $g_{\text K}$ parameters, as seen in the right panel of \fref{corr1}.
The information regarding the values of $p$, $q$ and $p_{\text K}$ is summarized in \tref{models3}.
We remark that these choices
still allow for enough flexibility in our models.
Note that while we could have treated $p_{\text K}$ as a nuisance parameter, for our purposes it is enough to fix it to a reasonable value, since we are mostly interested in addressing the compatibility of the asymptotic behaviour of \eref{modelasyconst}, \eref{gk-largeb-jogh} and \eref{gk-largeb-vlad} with BELLE data; for this, it suffices to consider reasonable profile functions. A possible concern regards the estimation of statistical errors, which may be affected by fixing parameters. However, we
remark that considering different models helps us in giving an estimate of some of the theoretical uncertainties of our extraction. All of our choices for models IA and IB are summarized in \tref{finalchi21}.
\subsubsection{Models IIA and IIB}
Model II stems from different considerations, namely, we do not introduce the extra factor $F$ but rather assign a $z_h$ dependence to the mass and power parameters of the Bessel function themselves, $M$ and $p$.
This offers a nice physical interpretation, especially if we recall that this $b_{\text T}$-distribution originates as the Fourier transform of a power law, resembling a propagator, of the form
$[M(z)^2 + q_{\text T}^2]^{-p(z)}$ in $q_{\text T}$-conjugate space.
In this sense, the mass $M(z)$ can be regarded as an \emph{effective mass}, that modifies the mass of the detected hadron $M_h$ in a $z_h$-dependent way. The power $p(z)$ can be re-written as $p(z) = 2 + \gamma_P(z)$, where the whole $z_h$-dependence has been encoded into an
\emph{anomalous dimension} $\gamma_P$.
As for model I, the strong correlations between $p(z)$ and $M(z)$ make it impossible to extract them simultaneously in a converging fit; therefore, further constraints are required to proceed with our analysis.
For model II we constrain the $z_h$ behavior of $M_{\text D}$ by analytically requiring that the theory lines appropriately reproduce some basic features of the measured cross section, namely the peak height and the width of the $P_T$ distributions, at each single measured value of the kinematic variable $z_h$.
In particular, the width of the measured cross section reaches its maximum at intermediate values of $z_h$ (around $z_h \sim 0.6$, as obtained in Ref.~\cite{Seidl:2019jei}) for all thrust bins belonging to the $2$-jet region. This property can be used as a constraint for the model with the help of a proper change of variables, which trades $p$ and $M$ for the width $W$ and the peak height $P$:
\begin{align}
\label{eq:PandW}
&p = \frac{1}{2} \left(\frac{3}{1-R}-1\right);
&\quad M = \frac{W}{z} \, \sqrt{\frac{3}{1-R}}\,,
\end{align}
where $W \geq 0$ and $R$ is the ratio ${P}/{P_{\text{\small max}}}$ between the peak height and its maximum possible value ($0 < R < 1$).
The advantage of this operation is that $R$ and $W$ can be regarded as variables associated to the full cross section and not only to the TMD model.
However, being a mere change of variables, this does not by itself solve any correlation issues, which are simply moved from ($p$, $M$) to ($R$, $W$).
In particular, observation shows that $R$ and $W$ are anticorrelated in their $z_h$-dependence: where one shows a maximum the other has a minimum, and vice versa.
Therefore, we set:
\begin{align}
\label{eq:PandW_1ansatz}
&R = f(z_h,z_0);
&\quad W = \frac{M_\pi}{f(z_h,z_0)^2},
\end{align}
where $M_\pi = 0.14$ GeV is the mass of charged pions and $f$ has to be a positive-definite function, never larger than $1$ and with a minimum at $z_h=z_0$. This is where the information associated to the experimental observation comes into play,
helping to select an appropriate $z_h$-dependence for the TMD model. In fact,
the function $f$ has its minimum
at exactly the point where the width $W$ has its maximum.
One of the simplest functional forms which fulfills such requirements is
\begin{align}
\label{eq:f_def}
f(z,z_0) = 1 - (1-z)^\beta,
\quad\text{with }\beta = \frac{1-z_0}{z_0}.
\end{align}
This is what we adopt for Model II.
The expressions for $M_z$ and $p_z$ in terms of $f(z)$
are summarized in \tref{models3}.
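The algebra of this change of variables can be verified directly: substituting $R=f$ into the expressions above gives $p = \tfrac{1}{2}(3/(1-f)-1) = (2+f)/(2(1-f))$, which coincides with the table expression $p_z = 1 + \tfrac{3}{2}\,f/(1-f)$. A short numerical sketch (the $z_0$ value is illustrative):

```python
import math

# Sketch checking that the (p, M) <-> (R, W) change of variables,
# combined with the ansatz R = f, W = M_pi / f^2, reproduces the
# model-II expressions quoted in the models table.
M_pi = 0.14  # GeV, charged-pion mass

def f(z, z0=0.574):
    beta = (1.0 - z0) / z0
    return 1.0 - (1.0 - z) ** beta

def p_from_RW(z, z0=0.574):
    # p = (1/2) (3/(1-R) - 1) with R = f(z)
    R = f(z, z0)
    return 0.5 * (3.0 / (1.0 - R) - 1.0)

def p_table(z, z0=0.574):
    # table form: p_z = 1 + (3/2) f/(1-f)
    fz = f(z, z0)
    return 1.0 + 1.5 * fz / (1.0 - fz)

def M_from_RW(z, z0=0.574):
    # M = (W/z) sqrt(3/(1-R)) with W = M_pi / f^2
    R = f(z, z0)
    W = M_pi / f(z, z0) ** 2
    return (W / z) * math.sqrt(3.0 / (1.0 - R))
```

Both routes give the same $p(z)$, which makes the change of variables a pure re-labelling of the model parameters, as stated in the text.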
\begin{table}
\caption{Models for $M_{\text D}$ and $g_{\text K}$ in impact parameter space for our main analysis. $M_{\text D}$ is obtained by multiplying the BK model, which corresponds to a power law in momentum space, with an additional function of $b_{\text T}$ and $z_h$.
}
\label{t.models3}
\begin{center}
\begin{tabular}{|c|l|c|}
\hline
\multicolumn{3}{|c|}{\multirow{2}{*}{}}\\
\multicolumn{3}{|l|}{\multirow{2}{*}{$\qquad M_{\text D}=\cfrac{2^{2-p} (b_{\text T} M_{\text 0})^{p-1} }{\Gamma (p-1)}K_{p-1}(b_{\text T} M_{\text 0})\,\,\times\,\,F(b_{\text T},z_h)$}}\\
\multicolumn{3}{|c|}{\multirow{2}{*}{}}\\
\multicolumn{3}{|c|}{\multirow{2}{*}{}}\\
\hline
\hline
ID&$\qquad\qquad M_{\text D}$ model¶meters\\
\hline
\multirow{3}{*}{ I }&\multirow{3}{*}{
$F=\left(\cfrac{1+\log\left(1+(b_{\text T} M_z)^2\right)}{1+(b_{\text T} M_z)^2}\right)^q$}&\multirow{2}{*}{$M_{\text 0},\,M_{\text 1}$
}\\
& & \\
& & $p=1.51,\,\,q=8$ \\
\multirow{2}{*}{ }&\multirow{2}{*}{$\,M_z=-M_{\text 1}\log(z_h)$}&
\\
& & \\
\hline
\multirow{2}{*}{ II }&\multirow{2}{*}{$F=1$}& \\
& &$z_0$ \\
\multirow{3}{*}{ }&\multirow{3}{*}{$\,
M_z = M_h \, \cfrac{1}{z \, f(z)^2} \, \sqrt{\cfrac{3}{1-f(z)}}
$ } &
\\
& & \\
& & \\
\multirow{3}{*}{ }&\multirow{3}{*}{$\,p_z = 1 + \cfrac{3}{2}\;\cfrac{f(z)}{1-f(z)}$} &
\\
& & \\
& & \\
\multirow{3}{*}{ }&\multirow{3}{*}{$f(z) = 1 - (1-z)^\beta$, ~ $\beta = \frac{1-z_0}{z_0}$ }
&
\\
& & \\
& & \\
\hline
\hline
\multicolumn{3}{|c|}{\multirow{1}{*}{$g_{\text K}$ model}}\\
\hline
\multirow{3}{*}{A}&\multirow{3}{*}{$\,\,\,g_{\text K}=\log\left(1+(b_{\text T} M_{\text K})^{p_{\text K}}\right)$}&\multirow{3}{*}{$M_{\text K},\,\,\,p_{\text K}$}\\
& & \\
& & \\
\hline
\multirow{3}{*}{B}&\multirow{3}{*}{$\,\,\,g_{\text K}=(M_{\text K} b_{\text T})^{(1-2p_{\text K})}$}&\multirow{3}{*}{$M_{\text K},\,\,\,p_{\text K}$}\\
& & \\
& & \\
\hline
\end{tabular}
\end{center}
\end{table}
Following the indication of these preliminary tests, we will focus on the study of the large $b_{\text T}$ (i.e. small $P_T$) behaviour of the fitted cross sections, leaving the exploration of the small $b_{\text T}$ region to further analyses.
By large $b_{\text T}$, here we mean ``the largest $b_{\text T}$ experimentally accessible'', as the asymptotic behaviour may not be so relevant for this data set, as discussed in \sref{largebT}.
For our main analysis with model II, we will adopt the functional forms of \eref{gk-jogh} and \eref{gk-vlad}, both characterized by two free parameters, $M_{\text K}$ and $p_{\text K}$. This gives two new models, which we label ``IIA'' and ``IIB'' (see \tref{models3}).
We thus minimize $\chi^2$ with respect to the free parameters ($z_0$, $M_{\text K}$, $p_{\text K}$) for models IIA and IIB.
In these two cases, as for model I, we will estimate statistical errors by determining the 2$\sigma$ confidence region in parameter space. Note that, while the parameter space shown in the next section for model II is distorted with respect to elliptical shapes, we have checked that rescaling the parameters allows us to correct for this. Nonetheless, we present results in terms of ($z_0$, $M_{\text K}$, $p_{\text K}$), since they are closely related to features of the data.
Following the above considerations, the main results of our analysis will be presented in the next subsection for all of our models.
\subsection{Phenomenological Results.\label{s.pheno-res} }
With our final choices, we perform fits for each of the considered models,
labeled IA, IB, IIA, IIB, where ``I'' and ``II'' indicate the choice of parametrization for $M_{\text D}$ while ``A'' and ``B'' indicate the model chosen for $g_{\text K}$, according to the notation introduced in \tref{models3}. In each case we perform a $\chi^2$-minimization procedure using MINUIT \cite{James:1975dr}, fitting a total of 3 parameters in each model.
We estimate parameter errors by considering 2$\sigma$ confidence regions.
In other words, for each model we consider configurations in parameter space around the minimal one, varying all parameters simultaneously and
accepting those for which $\chi^2_i<\chi^2_0+\Delta\chi^2$, with $\Delta\chi^2=8.02$; this value of $\Delta\chi^2$ is consistent with varying three parameters simultaneously. Final results for models IA and IB are reported in \tref{finalchi21}. For models IIA and IIB, results are displayed in \tref{cut1_andrea}.
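The tolerances used here ($\Delta\chi^2=8.02$ for three free parameters) and in the preliminary scan of \fref{corr1} ($\Delta\chi^2=9.72$ for four) are simply the two-sided 2$\sigma$ ($\simeq 95.45\%$) quantiles of a $\chi^2$ distribution with as many degrees of freedom as simultaneously varied parameters. A minimal sketch, assuming SciPy is available:

```python
from scipy.stats import chi2

# Two-sided 2-sigma coverage probability, erf(2/sqrt(2))
coverage = 0.954500

# Delta-chi^2 tolerances for simultaneously varying 3 and 4 parameters
dchi2_3par = chi2.ppf(coverage, df=3)  # ~ 8.02 (final fits)
dchi2_4par = chi2.ppf(coverage, df=4)  # ~ 9.72 (preliminary scan)
```

This confirms that the two quoted tolerances are consistent with one another and with the standard ellipsoidal confidence-region construction.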
From a superficial look at \tref{finalchi21}, one may conclude that the quality of model IB is higher, given its smaller value of $\chi^2_{\text{d.o.f.}}$.
However, we note that model IB has the disadvantage that the ellipsoidal approximation extends down to negative values of $M_{\text 0}$, which must be excluded. This is reflected by the asymmetric errors in $M_{\text 0}$ and $M_{\text K}$ in the third column of \tref{finalchi21}.
\begin{table}[t]
\caption{
\label{t.finalchi21}
Minimal $\chi^2_{\text{d.o.f.}}$ obtained by fitting models IA and IB,
according to \tref{models3}. In each case we perform fits in the kinematical
region of \eref{kin1} and \eref{kin2}.
In both cases IA and IB, all dimensionless parameters are fixed, as indicated in the table by a star; the fixed values are chosen as explained in \sref{finalmodelsandkin}. }
\begin{center}
\begin{tabular}{c c c }
\hline
\multicolumn{3}{c}{$q_{\text T}/Q<0.15$ ($\text{pts}=168$)}\\
\hline
&IA&IB \\
\hline
\rows{$\chi^2_{\text{d.o.f.}}$}
&\rows{$1.25$}
&\rows{$1.19$}
\\
& & \\
\rows{$M_{\text 0}(\rm GeV)$}
&\rows{$0.300^{+0.075}_{-0.062}$}
&\rows{$0.003^{+0.089}_{-0.003}$}
\\
& & \\
\rows{$M_{\text 1}(\rm GeV)$}
&\rows{$0.522^{+0.037}_{-0.041}$}
&\rows{$0.520^{+0.027}_{-0.040}$}
\\
& & \\
\rows{$p^{*}$}
&\rows{$1.51$}
&\rows{$1.51$}
\\
& & \\
\rows{$q^{*}$}
&\rows{$8$}
&\rows{$8$}
\\
& & \\
\rows{$M_{\text K}(\rm GeV)$}
&\rows{$1.305^{+0.139}_{-0.146}$}
&\rows{$0.904^{+0.037}_{-0.086}$}
\\
& & \\
\rows{$p_{\text K}^{*}$}
&\rows{$0.609$}
&\rows{$0.229$}
\\
& & \\
\hline
\end{tabular}
\end{center}
\end{table}
Fits performed with model II have slightly higher $\chi^2$ values, as shown in \tref{cut1_andrea}. This is probably due to the fact that this model, being more tightly constrained, with only one free parameter controlling the $z_h$ behaviour of $M_{\text D}$, has limited flexibility compared to model I.
Nonetheless, no clear differences between the models can be observed when comparing to data.
We thus consider both models I and II as equally acceptable to describe the general profile of our functions $M_{\text D}$ and $g_{\text K}$.
We choose model IA to display the agreement of our predicted cross sections with the BELLE data in \fref{fitdata}, noting that the corresponding comparisons for models IB, IIA, IIB would be very similar. \fref{fitdata} shows two types of error bands. Darker colored bands represent the statistical uncertainty of the fit. The lighter colored bands are an estimate of the error induced by the collinear fragmentation functions used in the analysis. They are produced by refitting the model function for each of the replicas provided by the NNFF NLO extraction of Ref.~\cite{Bertone:2017tyb}.
For this estimate, only about $65\%$ of the NNFF replicas allowed for a convergent fit. A more detailed study of such errors is a necessity in studies of this type, which need constraints from independent analyses. For now, we consider our estimate as a useful tool to understand the effect of the choice of collinear FFs in a TMD extraction. In fact, it is
useful to observe in \fref{fitdata} that the errors from the collinear functions are consistently larger than the statistical errors. Arguably, the former render a more realistic picture of the precision at which TMDs can be extracted from data.
It is clear from \fref{fitdata} that the quality of the description of data deteriorates at smaller values of $T$. This is not surprising since the formalism employed \cite{Boglione:2020cwn,Boglione:2020auc,Boglione:2021wov} is expected to fail at smaller values of thrust, where the topology of the $e^+e^- \to hX$ events starts deviating from a 2-jet like configuration.
%
\begin{table}[t]
\caption{Minimal $\chi^2_{\text{d.o.f.}}$ obtained by fitting models IIA and IIB,
according to \tref{models3}. In each case we perform fits in the kinematical
region of \eref{kin1} and \eref{kin2}. There are no nuisance parameters in model II.}
\label{t.cut1_andrea}
\begin{center}
\begin{tabular}{c c c }
\hline
\multicolumn{3}{c}{$q_{\text T}/Q<0.15$ ($\text{pts}=168$)}\\
\hline
&IIA&IIB \\
\hline
\rows{$\chi^2_{\text{d.o.f.}}$}
&\rows{$1.35$}
&\rows{$1.33$}
\\
& & \\
\rows{$z_0$}
&\rows{$0.574^{+0.039}_{-0.041}$}
&\rows{$0.556^{+0.047}_{-0.051}$}
\\
& & \\
\rows{$M_{\text K}(\rm GeV)$}
&\rows{$1.633^{+0.103}_{-0.105}$}
&\rows{$0.687^{+0.114}_{-0.171}$}
\\
& & \\
\rows{$p_{\text K}$}
&\rows{$0.588^{+0.127}_{-0.141}$}
&\rows{$0.293^{+0.047}_{-0.038}$}
\\
& & \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure*}
\centering
\includegraphics[scale=1]{plot_IA8315.pdf}
\caption{
Results of fitting model IA from \tref{models3}, in the kinematical region of \eref{kin1} and \eref{kin2}. Darker shaded bands represent the statistical uncertainty of the fit at 2$\sigma$ confidence level, and correspond to the parameter configurations of \fref{corr3}. The lighter shaded bands are an estimate of the error induced by the collinear fragmentation functions used in the analysis, and are produced by refitting the model function for each of the replicas provided by the NNFFs NLO extraction of \cite{Bertone:2017tyb}. For a better visualization of results, central lines are not included, but they generally lie in the middle of the thin, darker statistical error bands. Models IB, IIA, IIB give analogous results. We do not show them in the plot as they would be indistinguishable.
\label{f.fitdata}
}
\end{figure*}
Further developments in the theoretical treatment of the interplay between the rapidity divergence regularization and the thrust dependence
will likely improve the quality of the extraction by allowing the possible inclusion of more data points while achieving an improved agreement to data~\cite{Boglione:2021wov}.
We leave this for future work~\cite{Boglione-Simonelli:2022}.
Interesting results are found about $g_{\text K} (b_T)$. We focus on the study of the large $b_{\text T}$ (i.e. small $P_T$) behaviour of the fitted cross sections, leaving to further analyses the exploration of the small $b_{\text T}$ region, on which we are unable to draw definite conclusions, as explained in \sref{small-bt-gk}.
Our fit is rather sensitive to the modulation of $g_{\text K}$ in the large $b_{\text T}$ region. Remarkably, it shows a strong preference for a sub-linear power or logarithmic rise of $g_{\text K}$, while definitely ruling out a
$b_{\text T}^2$ or $b_{\text T}^4$ behaviour at large $b_{\text T}$.
We stress that
by large $b_{\text T}$ we mean ``the largest $b_{\text T}$ experimentally accessible'', as the asymptotic behaviour may not be so relevant for this data set, as discussed in \sref{largebT}.
It is important to understand the strength of correlations between $M_{\text D}$ and $g_{\text K}$ and the impact of model choices in the extraction of profile functions. Although these two points are not necessarily unrelated, we discuss them separately in what follows.
\begin{figure*}
\centering
\subfloat[]{\includegraphics[scale=1]{M_Mk_IA83_T2-4_z6-13_rho015.pdf}}
\subfloat[]{\includegraphics[scale=1]{g2z_Mk_IA83_T2-4_z6-13_rho015.pdf}}
\caption{
\label{f.corr3}
2$\sigma$ confidence regions centered around the minimum configuration, shown in green, for the fit of model IA of \tref{models3} in the kinematical region of \eref{kin1} and \eref{kin2}.
}
\end{figure*}
Firstly, regarding correlations between $M_{\text D}$ and $g_{\text K}$ for a given model, in an ideal scenario one would expect them to be mild, which would provide some level of confidence when comparing results to other analyses or data sets. This situation is however not guaranteed. We find that in fact $M_{\text D}$ and $g_{\text K}$ are
correlated, as shown in \fref{corr3}, where correlations between $M_{\text K}$ and the mass parameters of $M_{\text D}$, $M_{\text 0}$ and $M_{\text 1}$ are displayed for model IA, and in \fref{corr4} where analogous scatter plots are presented for model IIB, for the correlation of $z_0$ with $M_{\text K}$ and $p_{\text K}$. We obtain analogous results for model IB, with the added feature that confidence regions in parameter space appear as ellipses truncated in the region $M_{\text 0}<0$.
For models of type II, the correlation between $M_{\text D}$ and $g_{\text K}$ appears to be stronger than in the parametrizations of type I, so much so that a slight residual deformation from the ellipsoidal form is still visible in \fref{corr4}, even though the constraints intrinsically built into model II drastically limit the number of its free parameters.
We checked that a transformation of the parameters $M_{\text K}$ and $p_{\text K}$ renders scatter plots
with an approximately elliptical shape.
It is noteworthy
that the regions corresponding to the 2$\sigma$ confidence level have well defined contours, allowing for a reliable determination of the error affecting the extracted parameters.
Secondly, we find that the profile of the extracted functions strongly depends on model choices.
Note that the full TMD in momentum space, shown in \fref{tmdu}, shows differences beyond the statistical error bands. Discrepancies are more visible when considering separately the results obtained for the extractions of $M_{\text D}$ and $g_{\text K}$, as seen in \fref{mdktilde}, where the
profile functions
differ beyond the statistical error bands. As such, these discrepancies should be considered as a kind of theoretical error.
While this is only a rough estimate of one kind of theoretical uncertainty, it makes the case that statistical uncertainties alone are generally not enough to assess the quality of an extraction. Even though this is especially the case in studies like the present one, where only one process is considered, it is a matter of concern even for global fits.
\begin{figure*}
\centering
\subfloat[]{\includegraphics[scale=1]{Mk_z0_andreamodelB.pdf}}
\subfloat[]{\includegraphics[scale=1]{powerk_z0_andreamodelB.pdf}}
\caption{
\label{f.corr4}
2$\sigma$ confidence regions centered around the minimum configuration, shown in green, for the fit of model IIB of \tref{models3} in the kinematic region of \eref{kin1} and \eref{kin2}. Here the presence of some correlation among the free parameters controlling the behavior of $M_{\text D}$ and $g_{\text K}$ is signalled by a slight deformation from the expected ellipsoidal shapes.
}
\end{figure*}
\begin{figure}[ht]
\centering
\includegraphics{TMDkT_uq8.pdf}
\caption{
Extractions of the unpolarized TMD FF, \eref{tmd_NLL}, from one-hadron production BELLE data of \cite{Seidl:2019jei}, using models IA,IB,IIA,IIB of \tref{models3}, in the kinematic region of \eref{kin1} and \eref{kin2}. The TMD FF for the $u \to \pi^+ + \pi^-$ channel is shown in momentum space.
\label{f.tmdu}
}
\end{figure}
\begin{figure*}
\centering
\includegraphics[]{MDq8.pdf}
\includegraphics[]{gKq8.pdf}
\caption{
Extractions of $M_{\text D}$ and $g_{\text K}$ in \eref{tmd_NLL} from $e^+e^- \to hX$ BELLE data \cite{Seidl:2019jei}, in the kinematic region of \eref{kin1} and \eref{kin2}. In all cases, 2$\sigma$ statistical error bands are shown. For model IA they correspond to the region of parameter space of \fref{corr3} while for model IIB to \fref{corr4}. Left: $M_{\text D}$ according to model IA,IB,IIA,IIB of \tref{models3}. Right: Corresponding results for $g_{\text K}$.
\label{f.mdktilde}
}
\end{figure*}
We now compare our results against other recent TMD-analyses.
Since the relevant TMD FF in our studies is different from that of the usual CSS, SCET and related treatments (see \eref{sqrtMD}), we can only compare our results for the CS kernel which, up to trivial constant factors, is the same in each scheme.
In \fref{vlad} we plot the CS kernel~\cite{Collins:2011zzd,Collins:2017oxh} computed to NLL-accuracy
\begin{align}
\tilde{K}(b_{\text T};\mu)&= \frac{1}{2}\Bigg[
g_1^{\text{K}} (\lambda) + \frac{1}{L_b^\star} \, g_2^{\text{K}}(\lambda)\Bigg]
- \frac{1}{2} g_{\text K}(b_{\text T}),
\end{align}
where the functions $g_1^{\text{K}}$ and $g_2^{\text{K}}$, which depend only on the combination $\lambda = 2\,\beta_0 \, a_S(\mu) \, L_b^\star$, with $L_b^\star = \log{\left( {\mu}/{\mu_{b_\star}}\right)}$, are reported in Appendix~\ref{app:tmd}.
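As a bookkeeping sketch (not the code used in the analysis), the kernel above is the sum of the perturbative NLL piece and the non-perturbative $-g_{\text K}/2$ term. Since the explicit forms of $g_1^{\text K}$ and $g_2^{\text K}$ are relegated to the appendix, they are injected here as callables; the model-A parameters are illustrative:

```python
import math

def gK_modelA(bT, MK=1.3, pK=0.6):
    # model-A non-perturbative function (illustrative parameter values)
    return math.log(1.0 + (MK * bT) ** pK)

def K_tilde(bT, lam, Lb_star, g1K, g2K):
    # NLL CS kernel: perturbative piece plus -g_K/2, with
    # lam = 2 beta0 aS(mu) Lb_star and Lb_star = log(mu / mu_{b*});
    # g1K and g2K are passed in as callables (their forms are in the appendix)
    pert = 0.5 * (g1K(lam) + g2K(lam) / Lb_star)
    return pert - 0.5 * gK_modelA(bT)
```

Switching off the perturbative piece leaves only the non-perturbative contribution, which is what drives the large-$b_{\text T}$ behaviour of the kernel.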
Our extraction of the CS kernel for all our models is compared to the results obtained in the analyses of PV19~\cite{Bacchetta:2019sam} and SV19~\cite{Scimemi:2019cmh}\footnote{Note that for the CS kernel, PV19 follows the conventions of Ref.~\cite{Collins:2011zzd}, the SV19 results must be multiplied by a factor of $-2$ and ours should be divided by a factor 2.}.
For clarity, we do not show central lines but only error bands in each case.
\fref{vlad} shows a good agreement between our extraction of the CS kernel and the SV19 analysis in the region just above $b_{\text T} \sim 2$ GeV$^{-1}$. Note that these two extractions are based on different factorization schemes and exploit different data sets.
The large $b_{\text T}$ behaviour of our extraction is clearly different from the PV19 result, which adopts a $b_{\text T}^4$ asymptotic behaviour in order to describe Drell-Yan production data from different experiments over a very wide kinematic range, up to extremely high energies.
Instead, in the small $b_{\text T}$ region, where the perturbative part of the CS kernel is expected to dominate and all bands should coincide, our extraction of the CS kernel differs from both the PV19 and SV19 results.
This is mostly due to two factors. First, the behaviour of our models for $g_{\text K}$ at small distances, which approach zero only as $b_{\text T}^p$ with $0<p<1$, significantly more slowly than the $b_{\text T}^2$ behaviour of the PV19 and SV19 parametrizations; in fact, the effects of our extracted $g_{\text K}$ are still significant at relatively small values of $b_{\text T}$. Second, the approximations of \eref{tmd_NLL} are likely not optimal to describe the small $b_{\text T}$ behaviour of the TMD FF. Future improvements in the perturbative accuracy and a better treatment of the thrust dependence could resolve these discrepancies with respect to the results of the PV19 and SV19 analyses.
\begin{figure}
\includegraphics{Ktildeq8.pdf}
\caption{ Extractions of the CS kernel obtained in this analysis with models IA, IB, IIA, IIB are compared to the PV19~\cite{Bacchetta:2019sam} and SV19~\cite{Scimemi:2019cmh} extractions. For clarity, central lines are not shown.
While there is a good agreement between the linear and sub-linear large $b_{\text T}$ behaviour
of this extraction and Ref.~\cite{Scimemi:2019cmh}, the result of Ref.~\cite{Bacchetta:2019sam} shows an evident deviation at large $b_{\text T}$, where $g_{\text K}$ goes like $b_{\text T} ^4$. Discrepancies at small $b_{\text T}$ are due to the higher pQCD accuracy of the PV19 and SV19 analyses. We also note that our models are essentially different at small $b_{\text T}$ compared to those used in Refs.~\cite{Bacchetta:2019sam,Scimemi:2019cmh}, as explained in the text.
\label{f.vlad}
}
\end{figure}
Recently, several lattice QCD calculations of the CS kernel have been performed by different groups and reported in Refs.~\cite{Shanahan:2020zxr,latticeParton:2020uhz,Schlemmer:2021aij, Li:2021wvl,Shanahan:2021tst,LPC:2022ibr}; it is therefore interesting to compare our extraction to some of these computations. We do this in \fref{latticeCS}, where for clarity we compare error bands of all our models
with the most recent calculation of each lattice QCD collaboration,
Refs.~\cite{Schlemmer:2021aij, Li:2021wvl,Shanahan:2021tst,LPC:2022ibr}.
The logarithmic and sub-linear power large-$b_{\text T}$ behaviour assumed for our extractions seems to be well supported by lattice QCD estimates of the CS kernel. We note that while our results are in better agreement with the SWZ21~\cite{Shanahan:2021tst} and LPC22~\cite{LPC:2022ibr} calculations, the general trend of our extractions is also consistent with the ETMC/PKU~\cite{Li:2021wvl} and SVZES~\cite{Schlemmer:2021aij} results, characterized by a slow variation of the CS kernel at large $b_{\text T}$.
Once again we underline that in our analysis little can be said about the small $b_{\text T}$ behaviour of the CS kernel; we have thus focused our attention on the large $b_{\text T}$ regime, where the BELLE experimental data offer good coverage.
\begin{figure}
\includegraphics{ktilde_lattice_final.pdf}
\caption{ The CS-kernel obtained in this analysis by adopting models IA, IB, IIA, IIB are compared to the CS kernel computed in lattice QCD in Refs.~\cite{Schlemmer:2021aij, Li:2021wvl,Shanahan:2021tst,LPC:2022ibr}, at $\mu = 2$ GeV. For clarity, central lines for our extractions are not shown and we display only the most recent lattice calculation for each group.
The logarithmic and sub-linear power large-$b_{\text T}$ behaviour assumed for our extraction seems to be well supported by lattice QCD estimates of the CS kernel.
\label{f.latticeCS}
}
\end{figure}
\section{Conclusions}
We performed an analysis of recent BELLE data for one-hadron production in $e^+e^-$ annihilation \cite{Seidl:2019jei} and extracted the TMD FF following the newly developed formalism of Refs.~\cite{Boglione:2020cwn,Boglione:2020auc,Boglione:2021wov}. In this framework, the short distance behavior of the TMD FF is constrained by collinear FFs, as in the standard CSS and SCET formalisms, while
the long distance behaviour requires the parametrization and determination, via comparison to data, of two functions, $M_{\text D}$ and $g_{\text K}$. We introduced constraints for these functions in the asymptotically large region of $b_{\text T}$, consistently with previous theoretical results from Refs.~\cite{Schweitzer:2012hh,Collins:2014jpa,Aidala:2014hva,Vladimirov:2020umg}.
Our analysis is based on a maximum-likelihood procedure, carried out by $\chi^2$-minimization. Statistical errors are estimated by a standard determination of confidence regions at 2$\sigma$ level.
Upon testing how different choices of available collinear FFs perform when comparing to data, we found that
both the JAM20~\cite{Moffat:2021dji} and NNFF~\cite{Bertone:2017tyb} sets, although showing non-negligible differences (at least in some specific regions of $z_h$ and $b_{\text{T}}$), are consistent with the $P_{\text T}$-dependent BELLE cross sections within our approach.
For our extraction, constraints on both $M_{\text D}$ and $g_{\text K}$ in the asymptotically large $b_{\text T}$ region were imposed. For $M_{\text D}$, we considered models characterized by an exponential asymptotic $b_{\text T}$ decay, according to previous theoretical results from Refs.~\cite{Schweitzer:2012hh,Collins:2014jpa}, and argued that, for consistency with the universality of the large distance behaviour of TMDs, the CS kernel should grow more weakly than a linear function of $b_{\text T}$ in the asymptotic limit. We considered two models for $g_{\text K}$ satisfying that condition, which follow a sub-linear power and a logarithmic behaviour, as suggested in
Refs.~\cite{Vladimirov:2020umg} and
\cite{Aidala:2014hva}, respectively, in this limit. We showed that, in the considered kinematic region, all aforementioned constraints imposed in the very large $b_{\text T}$ region are consistent with the data.
We remark, however, that the asymptotic behavior of different models plays a role in extending results to smaller scales, and that the slow evolution characteristic of the region of a few GeV can be accommodated by the type of models we tested in this work (see detailed discussion in Ref.~\cite{Collins:2014jpa}).
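To make the distinction among the asymptotic behaviours concrete, the sketch below contrasts the two sub-linear classes of $g_{\text K}$ with a quadratic form; the functional forms and parameter values are our own illustrative stand-ins, not the fitted parametrizations of the analysis:

```python
import math

# Illustrative large-b_T parametrizations of g_K; functional forms and
# parameter values are our own stand-ins, not fitted results.
def gk_log(bT, g2=0.1, b0=0.5):
    """Logarithmic growth at large b_T."""
    return g2 * math.log(1.0 + (bT / b0) ** 2)

def gk_pow(bT, g2=0.1, b0=0.5, a=0.5):
    """Sub-linear power growth at large b_T (exponent a < 1)."""
    return g2 * (bT / b0) ** a

def gk_quad(bT, g2=0.1):
    """The b_T^2 form disfavoured by the data at large b_T."""
    return g2 * bT ** 2

# Both constrained models keep the CS kernel growing more weakly than a
# linear function of b_T, i.e. g_K(b_T)/b_T decreases; the quadratic does not.
sublinear = all(f(50.0) / 50.0 < f(5.0) / 5.0 for f in (gk_log, gk_pow))
assert sublinear and gk_quad(50.0) / 50.0 > gk_quad(5.0) / 5.0
```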
A remarkable result of this analysis is the insight it provides into the influence of the profile
of $g_{\text K}$ in the region of intermediate $b_{\text T}$ values, which we expect to be accessible at BELLE kinematics. Compared to previous studies~\cite{Bacchetta:2019sam, Scimemi:2019cmh}, which gave indications on the preferred behaviour of $g_{\text K}$ at small $b_{\text T}$,
our analysis based on the BELLE data, which correspond to a relatively moderate scale $Q = 10.6$ GeV, shows a significant sensitivity to larger values of $b_{\text T}$. We find clear signals that a $b_{\text T}^2$ or $b_{\text T}^4$ functional form is inappropriate to describe the long distance behaviour of the CS kernel. In fact, the analyzed
data show a definite preference for a logarithmic or sub-linear modulation at large-$b_{\text T}$, in line with the studies of Refs.~\cite{Collins:2017oxh,Vladimirov:2020umg} based on more general formal considerations.
The large $b_{\text T}$ behaviour of our models, supplemented with constraints from BELLE data, seems to be well supported by the lattice determinations of the CS kernel from quasi TMD wave functions~\cite{Schlemmer:2021aij, Li:2021wvl,Shanahan:2021tst,LPC:2022ibr}, which evidence the slow variation of the kernel in this region of $b_{\text T}$. Remarkably, our extractions are in very good agreement with the calculations of Refs.~\cite{Shanahan:2021tst,LPC:2022ibr} where an NLO matching is applied. This is a very important cross-check, as lattice QCD calculations are based on totally different and independent methodologies.
On the other hand, little can be inferred from this analysis about the small-$b_{\text T}$ behaviour of the CS kernel and of $g_{\text K}$. This might be at least partially due to the relatively low energy of the BELLE experiment, but this is an issue which deserves more extensive studies, including higher accuracy in the perturbative expansion. A more rigorous formal treatment will be presented in Ref.~\cite{Boglione-Simonelli:2022}.
A very important theoretical consideration regards the transition between short and long distance behaviour, which should be carefully treated when embedding models into a TMD FF definition like that of \eref{tmd_NLL}, where the small $b_{\text T}$ behaviour is, in principle, constrained by collinear factorization. In general, such constraints are not guaranteed unless models are optimally embedded, especially at small and moderate scales. Recently, this and related issues have been comprehensively addressed in Ref.~\cite{Gonzalez-Hernandez:2022ifv} where, based on theoretical considerations, a practical recipe for phenomenology was provided that allows a more reliable combination of models of nonperturbative behaviour into the CSS formalism. These considerations will likely help to resolve some of the issues we found at small $b_{\text T}$ in our analysis. We plan to pursue these techniques in future work.
Another relevant aspect concerns the estimation of the errors affecting the phenomenological extraction of TMDs from experimental data.
It is important to stress that while statistical errors do provide insight into the precision with which TMDs can be extracted, theoretical errors also play an important role and can significantly affect accuracy. We addressed two sources of such errors and provided rough estimates of their size. First, we considered the effect that the statistical errors of the collinear functions have on the extraction of the unpolarized TMD FF, by refitting our model with each one of the sets provided by the NNFF collaboration. Second, the use of two different models for $g_{\text K}$ allowed us to assess how the extracted profile functions depend on model choices, as seen in \fref{mdktilde}. In both cases, our estimates are meant to provide examples of how important it is to perform error estimation beyond statistical uncertainties. More work is needed in order to address these issues with a more robust approach.
A possible future improvement of our analysis regards the treatment of experimental errors. For this work, we added in quadrature all errors provided by the BELLE collaboration, which may be a matter of concern, especially regarding correlated systematic errors, since these should be treated on a different footing. This can be done, for instance, by introducing nuisance parameters in the $\chi^2$ statistic, in the form of a shift to the theoretical estimates. This, however, likely requires more detailed information about the different sources of correlated systematic uncertainties. In our case, attempting to employ such a methodology resulted in large values of the minimal $\chi^2$, while yielding almost identical profile functions.
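The nuisance-parameter treatment mentioned above can be sketched as follows, for a single fully correlated systematic shift profiled out analytically; the numerical inputs are placeholders, not BELLE data:

```python
def chi2_with_nuisance(data, theory, stat_err, corr_shift):
    """chi^2 with one correlated systematic treated via a nuisance
    parameter lambda, which shifts theory by lambda*corr_shift and is
    penalized by a unit Gaussian prior; lambda is profiled analytically."""
    r = [d - t for d, t in zip(data, theory)]
    num = sum(b * ri / s**2 for b, ri, s in zip(corr_shift, r, stat_err))
    den = 1.0 + sum(b**2 / s**2 for b, s in zip(corr_shift, stat_err))
    lam = num / den  # minimizer of chi^2(lambda)
    chi2 = sum(((ri - lam * b) / s) ** 2
               for ri, b, s in zip(r, corr_shift, stat_err))
    return chi2 + lam**2

# With vanishing correlated shifts this reduces to the usual chi^2:
plain = chi2_with_nuisance([1.0, 2.0], [0.0, 0.0], [1.0, 1.0], [0.0, 0.0])  # = 5
```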
Although our analysis was carried out on a rather limited subset of the BELLE data, we consider this work an essential first step.
We stress that, to the best of our knowledge, this is the only phenomenological analysis where the thrust dependence of the cross section is explicitly taken into account and well described over three different bins. Other studies~\cite{Kang:2017glf,DAlesio:2020wjq} resort to a combination of the thrust bins, resulting in a cross section which is some sort of average over thrust, or simply integrate over it.
Extending our results to a wider range of thrust and $z_h$ bins requires further formal developments on identifying and extending the optimal kinematic region where the TMD formalism developed for region 2 in $e^+e^-\to h X$ can be successfully applied~\cite{Boglione:2021wov,Boglione:2021vug}. Moreover, the connection between the regularization of the rapidity divergences and the thrust dependence must be set on a more solid formal ground, as it crucially affects the correlation among $T$, $P_T$ and $z_h$.
This will likely improve the quality of the extraction, possibly allowing more data points to be included while achieving an even better agreement with data~\cite{Boglione-Simonelli:2022}.
\section*{Acknowledgements}
We thank Ted Rogers for useful discussions regarding formal aspects of the CS kernel.
We are grateful to Alexey Vladimirov for making available to us the results of the SV19 extraction and the SVZES lattice calculation of the CS kernel,
and to Andrea Signori for providing results of the PV19 fit.
We thank Xu Feng, Michael Wagman and Qi-An Zhang for interesting discussions on their lattice QCD determination of the Collins-Soper kernel and for providing us with relevant recent results on the subject.\\ This project has received funding from the European Union’s Horizon 2020
research and innovation programme under grant agreement No 824093.
\section{Introduction}
Let $R := \mathbb{Z}_p C_p$ denote the group ring of the cyclic group of
order $p$ over the localisation of $\mathbb{Z} $ at the prime $p$.
The present paper considers free $R$-lattices
$L\cong R^a$.
The main observation in this situation is
Theorem \ref{eltdiv}:
Given two free $R$-modules $M$ and $L$
with
$pM \subseteq L \subseteq M $
then there is an $R$-basis $(g_1,\ldots , g_a)$ of $M$ and $0\leq t\leq a$
such that $(g_1,\ldots , g_t, p g_{t+1},\ldots , p g_a )$ is an
$R$-basis of $L$.
So these lattices do admit a compatible basis.
Applying this observation to Hermitian $R$-lattices shows
that free elementary Hermitian $R$-lattices admit an
invariant splitting (see Theorem \ref{pmodautpfree})
as the orthogonal sum of a free unimodular lattice and a free $p$-modular
lattice.
The results of this note have
been used in the thesis \cite{Eisenbarth} to
study extremal lattices admitting an automorphism of
order $p$ in the case that $p$ divides the level
of the lattice.
\section{Existence of compatible bases}
For a prime $p$ we denote by
$$\mathbb{Z}_{p} := \{ \frac{a}{b} \in \mathbb{Q} \mid p \mbox{ does not divide } b \} $$
the localisation of $\mathbb{Z} $ at the prime $p$.
The following arguments also apply accordingly
to the completion of this discrete valuation ring.
Let $R:=\mathbb{Z} _{p} C_p$ denote the
group ring of the cyclic group $C_p = \langle \sigma \rangle $
of order $p$.
Then $e_1:=\frac{1}{p} (1+\sigma + \ldots + \sigma ^{p-1}) \in \mathbb{Q} C_p $
and $e_{\zeta }:=1-e_1$ are the primitive idempotents in the
group algebra $\mathbb{Q} C_p$ with $\mathbb{Q} C_p = \mathbb{Q} C_p e_1 \oplus \mathbb{Q} C_p e_{\zeta }
\cong \mathbb{Q} \oplus \mathbb{Q} [\zeta _p]$, where $\zeta _p$ is a primitive $p$-th
root of unity. The ring $T:=\mathbb{Z}_p[\zeta_p]$ is a discrete valuation
ring in the $p$-th cyclotomic field $\mathbb{Q}[\zeta _p]$ with
prime element $\pi := (1-\zeta _p)$ and hence
$$Re_1 \oplus R e_{\zeta } \cong \mathbb{Z} _{p} \oplus \mathbb{Z} _{p} [\zeta _p] =: S \oplus T $$
is the unique maximal $\mathbb{Z} _p$-order in $\mathbb{Q} C_p$.
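The idempotent decomposition above is elementary to verify by computer algebra, representing $\sigma $ by $x$ in $\mathbb{Q}[x]/(x^p-1)$; the following check (with SymPy, for $p=5$) is our own sanity check, not part of the paper:

```python
from sympy import symbols, Rational, expand, rem

x = symbols('x')
p = 5  # any odd prime works here

e1 = Rational(1, p) * sum(x**i for i in range(p))  # (1 + sigma + ... + sigma^{p-1})/p
ez = 1 - e1

def mod(f):
    """Reduce modulo x^p - 1, i.e. compute in QQ[C_p]."""
    return expand(rem(expand(f), x**p - 1, x))

assert mod(e1 * e1 - e1) == 0   # e_1 is idempotent
assert mod(ez * ez - ez) == 0   # so is e_zeta = 1 - e_1
assert mod(e1 * ez) == 0        # and the two are orthogonal
```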
\begin{remark} \label{structR}
With the notation above $T/(\pi ) \cong \mathbb{Z}_p / (p) \cong \mathbb{F}_p $ and
via this natural ring epimorphism
$$ R = \{ (x,y) \in \mathbb{Z} _{p} \oplus \mathbb{Z} _{p} [\zeta _p] \mid
x + p \mathbb{Z}_p = y + \pi \mathbb{Z} _{p} [\zeta _p] \} .$$
$R$ is generated as $\mathbb{Z}_p$-algebra by
$1=(1,1)$ and $1-\sigma = (0,\pi )$.
Moreover $Re_1 \cap R = p R e_1 = p S$ and
$Re_{\zeta } \cap R = \pi R e_{\zeta } = \pi T$ and
the radical $J(R) := pS \oplus \pi T$ of $R$
is the unique maximal ideal of the
local ring $R$.
\end{remark}
By \cite{Reiner} the indecomposable $R$-lattices are the free $R$-module $R$,
the trivial $R$-lattice $\mathbb{Z}_p = Re_1=:S$ and
the lattice $\mathbb{Z}_{p} [\zeta _p]=Re_{\zeta} =:T$
in the rational irreducible faithful representation of $C_p$.
The theorem by Krull-Remak-Schmidt-Azumaya \cite[Chapter 1, Section 11]{FeitBook} ensures that
any finitely generated
$R$-lattice $L$ is a direct sum of indecomposable $R$-lattices
$$L \cong R^a \oplus T^b \oplus S^c .$$
In this note we focus on the case of free $R$-lattices.
Though $R$ is not
a principal ideal domain,
for certain sublattices of free $R$-lattices
there do exist compatible bases:
\begin{theorem} \label{eltdiv}
Let $M\cong R^a$ be a free $R$-lattice of rank $a$.
Assume that $L$ is a free $R$-lattice with $pM \subseteq L \subseteq M$.
Then there is an $R$-basis $(g_1,\ldots , g_a)$ of $M=Rg_1\oplus \ldots \oplus Rg_a$
and $0\leq t\leq a$ such that
$$L = Rg_1\oplus \ldots \oplus Rg_t \oplus pRg_{t+1} \oplus \ldots \oplus p R g_a .$$
\end{theorem}
\begin{proof}
Let $\tilde{S} := M e_1 $ and $\tilde{T} := M e_{\zeta }$.
Now $M \cong R^a$ is a free $R$-lattice, so, as in Remark \ref{structR},
$M $ is a sublattice of
$\tilde{S}\oplus \tilde{T}$ of index $p^a$,
$\tilde{S} \cap M = p \tilde{S} $, and
$\tilde{T} \cap M = \pi \tilde{T} $.
The Jacobson radical is
$J(M) = J(R) M = p\tilde{S} \oplus \pi \tilde{T} $
and of index $p^a$ in $M$.
We proceed by induction on $a$. \\
If $a=1$, then $M=R$, $\tilde{S} \cong S$, $\tilde{T} \cong T$.
As $M/pM \cong \mathbb{F}_pC_p \cong \mathbb{F}_p[x]/(x-1)^p $ is a chain ring,
the $R$-sublattices of $M$ that contain $pM$ form a chain:
$$M \supset
p \tilde{S} \oplus \pi \tilde{T}
\supset
p \tilde{S} \oplus \pi^2 \tilde{T}
\supset \ldots \supset
p \tilde{S} \oplus \pi^{p-2} \tilde{T} \supset p \tilde{S} \oplus p \tilde{T}
\supset pM .$$
The only free $R$-lattices among these are $M$ and $pM$. \\
Now assume that $a>1$.
If $ L \not\subseteq J(M)$ then we may choose $g_1 \in L\setminus J(M)$.
As $g_1\not\in J(M)$ the $R$-submodule $Rg_1$ of $M$ is a free submodule of
both modules $L$ and $M$, so $M=Rg_1 \oplus M'$, $L=Rg_1 \oplus L'$ where
$M'$ and $L'=L\cap M'$ are free $R$-lattices of rank $a-1$ satisfying the
assumption of the theorem and the theorem follows by induction.
So we may assume that
\begin{equation}\label{LJM}
L\subseteq J(M) = p\tilde{S} \oplus \pi \tilde{T} .
\end{equation}
The element $e_1\in \mathbb{Q} C_p$ is a central idempotent in
$\textnormal{End}_R(J(M))$ projecting onto $p\tilde{S}= J(M) e_1$.
The assumption
that $pM \subseteq L \subseteq J(M)$ implies that
$$p\tilde{S} = pM e_1 \subseteq L e_1 \subseteq J(M) e_1 = p\tilde{S} . $$
So $L e_1 = pM e_1 =p\tilde{S}$.
To show that $L=pM$ we first show that $Le_{\zeta } = pM e_{\zeta }$. \\
As $pM\subseteq L$ we clearly have that $pMe_{\zeta} \subseteq L e_{\zeta} $. \\
To see the opposite inclusion put $K := L\cap L e_{\zeta} $
to be the kernel of the projection $e_1: L \to Le_1$.
As $L$ is free, we get as in Remark \ref{structR}
that $K = \pi Le_{\zeta } $.
Let $k $ be maximal such that $ K \subseteq \pi^{k} \tilde{T}$.
Then $k\geq 2$ because $Le_{\zeta } \subseteq \pi \tilde{T}$
(see equation \eqref{LJM}).
\\
Assume that $k \leq p-1$.
There is $\ell \in L$ such that $y=\ell e_{\zeta} \not\in \pi^{k} \tilde{T} $.
As $pMe_1 = Le_1$, there is $m \in M $ such that $pm e_1= \ell e_1$.
Now $pM\subseteq L$ so $pm \in L$ and
$\ell-pm \in K= Ke_{\zeta }$.
\\
We compute $\ell -pm = (\ell -pm ) e_{\zeta } = y - pm e_{\zeta }$.
\\
As $pMe_{\zeta } = p \tilde{T} = \pi^{p-1}\tilde{T}$ and
$y\not\in \pi^{k}\tilde{T}$ the assumption that $k\leq p-1$
shows that $\ell -pm \not \in \pi^k \tilde{T}$,
which contradicts the definition of $k $.
\\
Therefore $k \geq p$ and $L e_{\zeta } \subseteq pM e_{\zeta } $.
\\
Now $pM$ and $L$ both have index $p^a$ in
$pMe_1 \oplus pM e_{\zeta } = Le_1 \oplus Le_{\zeta }$
(again by Remark \ref{structR} as $L$ and $M$ are free).
So the assumption $pM \subseteq L$ implies that $pM = L$.
\end{proof}
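The chain-ring property of $\mathbb{F}_pC_p$ used in the $a=1$ step rests on the identity $x^p-1=(x-1)^p$ in characteristic $p$, which can be checked directly (an independent verification, outside the proof):

```python
from sympy import symbols, Poly, GF

x = symbols('x')
for p in (3, 5, 7):
    # In characteristic p, x^p - 1 = (x - 1)^p, so
    # F_p C_p = F_p[x]/(x^p - 1) = F_p[x]/((x-1)^p) is a chain ring:
    # its ideals are exactly ((x-1)^k), 0 <= k <= p.
    assert Poly(x**p - 1, x, domain=GF(p)) == Poly((x - 1)**p, x, domain=GF(p))
```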
\begin{remark}
Let $M\cong T^b\oplus S^c$ and let $L$ be a sublattice of $M$
again isomorphic to $T^b \oplus S^c$.
Then $M=Me_{\zeta } \oplus Me_1$ and
$L=Le_{\zeta } \oplus Le_1$.
By the main theorem for
modules over principal ideal domains there is a
$T$-basis $(x_1,\ldots , x_b)$ of $Me_{\zeta} $
and an $\mathbb{Z}_p$-basis $(y_1,\ldots, y_c) $ of $Me_1$,
as well as $0\leq n_1\leq \ldots \leq n_b$,
$0\leq m_1\leq \ldots \leq m_c$, such that
$L = \bigoplus _{i=1}^b \pi^{n_i} T x_i \oplus
\bigoplus _{i=1}^c p^{m_i} \mathbb{Z}_p y_i .$
\end{remark}
\begin{example}
For general modules $M$, however, Theorem \ref{eltdiv} has no
appropriate analogue.
To see this consider $M\cong R\oplus S $ and choose a pseudo-basis
$(x,y)$ of $M$ such that $x$ generates a free direct summand
and $y$ its complement isomorphic to $S$.
Let $L$ be the $R$-sublattice generated by $p x e_1$ and
$x(1-\sigma ) + y$. As $x(1-\sigma )+y$ generates a free
$R$-sublattice of $M$ and $R(pxe_1)\cong S$ we have
$L \cong S \oplus R $.
For $p>2$ we compute that $pM\subseteq L \subseteq M$. Then the
fact that $|M/L|=p^2$ implies that
these two modules do not admit a compatible pseudo-basis.
\end{example}
\section{Lattices in rational quadratic spaces} \label{autp}
From now on we consider $\mathbb{Z}_{p} $-lattices
$L$ in a non-degenerate rational bilinear space $(V,B)$.
The {\em dual lattice} of $L$ is
$$L^{\#} := \{ x\in V \mid B(x,\ell ) \in \mathbb{Z}_{p} \mbox{ for all } \ell \in L \}.$$
The lattice $L$ is called {\em integral}, if
$L \subseteq L^{\#} $ and {\em elementary},
if $$pL^{\#} \subseteq L \subseteq L^{\#} .$$
Following O'Meara \cite[Section 82 G]{OMeara} we call a
lattice $L$ {\em unimodular} if $L=L^{\#}$ and
{\em $p^j$-modular} if $p^jL^{\#} = L$.
We now assume that $\sigma $ is an automorphism of
$L$ of order $p$, so
$\sigma $ is an orthogonal mapping of $(V,B)$ with $L \sigma = L$.
Then also the dual lattice
$L^{\#} $ is a $\sigma $-invariant lattice in $V$.
As the dual basis of a lattice basis of $L$ is a lattice basis of
$L^{\# }$,
the bilinear form $B$ yields an identification between
$L^{\#}$ and the lattice $\textnormal{Hom} _{\mathbb{Z}_p}(L,\mathbb{Z}_p)$
of $\mathbb{Z}_p$-valued linear forms on $L$.
The $\sigma $-invariance of $B$ shows that this is an isomorphism of
$\mathbb{Z}_p[\sigma ]$-modules.
\begin{remark}
As a $\mathbb{Z}_{p}[\sigma ]$-module we have
$L^{\#} \cong \textnormal{Hom} _{\mathbb{Z}_{p}} (L,\mathbb{Z}_{p}) $.
\end{remark}
As all indecomposable $\mathbb{Z}_{p}[\sigma ]$-lattices are
isomorphic to their homomorphism lattices, we obtain
\begin{proposition}\label{dualtype} (see \cite[Lemma 5.6]{PreprintNebe})
If $L\cong R^a\oplus T^b \oplus S^c $ as $\mathbb{Z}_{p}[\sigma ]$-lattice
then also $L^{\#} \cong R^a\oplus T^b \oplus S^c $.
\end{proposition}
The group ring $R$ comes with a natural involution $\overline{\phantom{x}} $,
the unique $\mathbb{Z}_p$-linear map $\overline{\phantom{x}}:R \to R$
with $\overline{\sigma ^i} = \sigma ^{-i} $ for all $0\leq i\leq p-1$.
This involution
is the restriction of the involution on
the maximal order $S\oplus T$ that is trivial
on $S$ and the complex conjugation on $T$.
\begin{remark}\label{dual}
The $\mathbb{Z}_p$-lattice $R$ is unimodular with respect to
the symmetric bilinear form
$$R\times R \to \mathbb{Z}_p , (x,y) \mapsto \frac{1}{p} \textnormal{Tr} _{reg} (x\overline{y} ) $$
where $\textnormal{Tr} _{reg} : \mathbb{Q} C_p \to \mathbb{Q} $ denotes the regular trace of the
$p$-dimensional $\mathbb{Q} $-algebra $\mathbb{Q} C_p$.
We thus obtain a bijection between the
set of $\sigma $-invariant $\mathbb{Z}_p$-valued bilinear forms on
the $R$-lattice $L$ and the $R$-valued Hermitian forms on $L$:
If $h:L\times L \to R $ is such a Hermitian form, then
$B=\frac{1}{p} \textnormal{Tr} _{reg} \circ h $ is a bilinear $\sigma $-invariant
form on $L$. As $R=R^{\#}$ these forms yield the
same notion of duality.
In particular the dual lattice $L^{\# }$ of a
free lattice $L = \oplus _{i=1}^a R g_i $ is again
free $L^{\#} = \oplus _{i=1}^a R g^*_i $ with the Hermitian dual basis
$(g^*_1,\ldots , g^*_a)$
as a lattice basis, giving a constructive argument for
Proposition \ref{dualtype} for free lattices.
\end{remark}
\section{Free elementary lattices}
\label{pautpmod}
In this section we assume that $L$ is an
elementary lattice and $\sigma$
an automorphism of $L$ of prime order $p$.
Recall that $R$ is the commutative ring
$R:= \mathbb{Z}_{p} [\sigma ] $, so $L$ is an $R$-module.
\begin{theorem} \label{pmodautpfree}
Let $p$ be a prime and let
$L$ be an elementary lattice with an automorphism
$\sigma $ such that
$L\cong R^a$ is a free $R$-module. Then also $L^{\#} \cong R^a$ and
there is an $R$-basis $(g_1,\ldots , g_a)$
of $L^{\#}$ and $0\leq t \leq a$ such that
$(g_1,\ldots , g_t, p g_{t+1},\ldots , p g_a ) $
is an $R$-basis of $L $.
In particular $L$ is the orthogonal sum of the
unimodular free $R$-lattice $L_0:=R g_1\oplus \ldots \oplus R g_t$
and a $p$-modular
free $R$-lattice $L_1 := L_0^{\perp }$.
\end{theorem}
\begin{proof}
Under the assumption both lattices
$L$ and $M:=L^{\#}$
are free $R$-modules satisfying $pM\subseteq L\subseteq M$.
So by Theorem \ref{eltdiv} there is a basis
$(g_1,\ldots , g_a)$ of $M$ such that
$(g_1,\ldots g_t, pg_{t+1}, \ldots , pg_a)$ is a basis of $L$.
Clearly $L$ is an
integral lattice and $L_0:=Rg_1\oplus \ldots \oplus Rg_t$ is a unimodular
sublattice of $L$. By \cite[Satz 1.6]{Kneser} unimodular free sublattices
split as orthogonal summands, so
$L=L_0 \perp L_1$ with $L_1^{\#} = \frac{1}{p} L_1$, i.e.
$L_1$ is $p$-modular.
\end{proof}
Note that the assumption that the lattice is elementary is necessary, as
the following example shows.
\begin{example}\label{dim2}
Let $L=R g_1 \oplus R g_2 $ be a free lattice of rank 2 with
$R$-valued Hermitian form defined by the Gram matrix
$$\left(\begin{array}{cc} (p,0) & (0,\pi ) \\
(0,\overline{\pi}) & (p,0) \end{array} \right) .$$
Here we identify $R$ as a subring of $S\oplus T$, so
$(p,0) = p e_1 = 1+\sigma + \ldots + \sigma ^{p-1} $ and
$(0,\pi) = (0,(1-\zeta_p)) = 1-\sigma \in R $.
Then $L$ is orthogonally indecomposable, because $Le_{\zeta }$
is an orthogonally indecomposable $T$-lattice, but $L$
is not modular.
Note that the base change matrix between $(g_1,g_2)$ and
the dual basis, an $R$-basis of $L^{\#}$, is
the inverse of the Gram matrix above, so
$$\left(\begin{array}{cc} (p^{-1},0) & (0,-\overline{\pi }^{-1} ) \\
(0,-{\pi}^{-1}) & (p^{-1},0) \end{array} \right) .$$
As $(1,0) = e_1 \not\in R$ this shows that $pL^{\#} \not\subseteq L$,
so $L$ is not an elementary lattice.
\end{example}
\section{Acknowledgments}
We thank Nigel Cooper, Davide Dreon, Marius Gächter, Martin Holthaus, Gregor Jotzu, Daniel Malz, Stephan Roschinski, and Oded Zilberberg for inspiring discussions and comments on the manuscript.
We would like to thank Alexander Frank for his contributions to the electronic setup of the experiment.
K.V.~is supported by the ETH Fellowship programme.
This work was partly funded by the SNF (project nos. 169320 and 182650), NCCR-QSIT, QUIC (Swiss State Secretary for Education, Research and Innovation contract no. 15.0019), and ERC advanced grant TransQ (project no. 742579).
\section{Author Information}
The authors declare no competing financial interests. Correspondence and requests for materials should be addressed to K.V.
\section*{Introduction}
Fix an odd prime $p$. Let $A_{p^n}$ be the alternating group on $p^n$ letters. Denote by $\Sigma_{p^n,p}$ a Sylow $p$-subgroup of $A_{p^n}$ and $E^n$ an elementary abelian $p$-group of rank $n$. Then we have the restriction homomorphisms
\begin{align*}&\mbox{Res}(E^n,\Sigma_{p^n,p}): H^*(B\Sigma_{p^n,p}) \longrightarrow H^*(BE^n),\\
&\mbox{Res}(E^n, A_{p^n}): H^*(BA_{p^n}) \longrightarrow H^*(BE^n),
\end{align*}
induced by the regular permutation representation $E^n \subset \Sigma_{p^n,p} \subset A_{p^n}$ of $E^n$ (see M\`ui
\cite{mui2}). Here and throughout the paper, we assume that the coefficients are taken in the prime field $\mathbb Z/p$. Using modular invariant theory of linear groups, M\`ui proved in \cite{mui1,mui2}
that
\begin{align*}&\mbox{ImRes}(E^n,\Sigma_{p^n,p}) = E(U_1,\ldots,U_n) \otimes P(V_1,\ldots,V_n),\\
&\mbox{ImRes}(E^n, A_{p^n})= E(\tilde M_{n,0},\ldots , \tilde M_{n,n-1})\otimes P(\tilde L_n,Q_{n,1},\ldots, Q_{n,n-1}),
\end{align*}
Here and in what follows, $E(., \ldots, . )$ and $P(., \ldots , . )$ are the exterior and polynomial algebras over $\mathbb Z/p$ generated by the variables indicated. $\tilde L_n,\, Q_{n,s}$ are the Dickson invariants of dimensions $p^n-1,\, 2(p^n - p^s)$, and $\tilde M_{n,s},\, U_k,\, V_k$ are the M\`ui invariants of dimensions
$p^n-2p^s,\, p^{k-1},\, 2p^{k-1}$ respectively (see Section \ref{s2}).
Let $\mathcal A$ be the mod $p$ Steenrod algebra and let $\tau_s,\, \xi_i$ be the Milnor elements of dimensions $2p^s - 1,\, 2p^i - 2$ respectively in the dual algebra $\mathcal A_*$ of $\mathcal A$. In \cite{mil}, Milnor showed that, as an algebra
$$\mathcal A_* = E(\tau_0,\tau_1,\ldots)\otimes P(\xi_1,\xi_2,\ldots).$$
Then $\mathcal A_*$ has a basis consisting of all monomials $\tau_S\xi^R = \tau_{s_1}\ldots\tau_{s_k}\xi_1^{r_1}\ldots\xi_m^{r_m}$, with $S = (s_1,\ldots,s_k),\, 0 \leqslant s_1 < \ldots < s_k$, $R = (r_1, \ldots, r_m),\, r_i \geqslant 0$. Let $St^{S,R}\in \mathcal A$ denote the dual of $\tau_S\xi^R$ with respect to that basis. Then $\mathcal A$ has a basis consisting of all operations $St^{S,R}$.
For $S = \emptyset,\, R = (r),\, St^{\emptyset,(r)}$ is nothing but the Steenrod operation $P^r$.
Since $H^*(BG),\ G = E^n,\, \Sigma_{p^n,p}$ or $A_{p^n}$, is an $\mathcal A$-module (see \cite[Chap. VI]{ste}) and the restriction homomorphisms are $\mathcal A$-linear, their images are $\mathcal A$-submodules of $H^*(BE^n)$.
The purpose of the paper is to study the module structures of $\mbox{ImRes}(E^n,\Sigma_{p^n,p})$ and $\mbox{ImRes}(E^n, A_{p^n})$ over the Steenrod algebra $\mathcal A$. More precisely, we prove a duality relation
between $St^{S,R}(\tilde M_{n,s}^\delta Q_{n,s}^{1-\delta})$ and $St^{S',R'}(U_{k+1}^\delta V_{k+1}^{1-\delta})$ for $\delta = 0, 1,\, \ell(R) = k$ and $\ell(R') = n$.
Here by the length of a sequence $T = (t_1,\ldots ,t_q)$ we mean the number $\ell(T) = q$. Using this relation we explicitly compute the action of the Steenrod operations $P^r$ on $U_{k+1},\, V_{k+1},\, M_{n,s}$ and $Q_{n,s}$.
The analogous results for $p = 2$ have been announced in \cite{sum}.
The action of $P^r$ on $V_{k+1}$ and $Q_{n,s}$ has been partially studied by Campbell \cite{cam}, Madsen \cite{mad}, Madsen-Milgram \cite{mmi}, Smith-Switzer \cite{ssw}, and Wilkerson \cite{wil}. Eventually, this action was completely determined by Hung-Minh \cite{hum}, and by Hai-Hung \cite{hhu} and Hung \cite{hun} for the case of the coefficient ring $\mathbb Z/2$.
The paper contains 3 sections. After recalling some needed information on the invariant theory, the Steenrod homomorphism $d_n^*P_n$ and the operations $St^{S,R}$ in Section \ref{s2}, we prove the duality theorem and its corollaries in Section \ref{s3}. Finally, Section \ref{s4} is an application of the duality theorem to determine the action of the Steenrod operations on the Dickson and M\`ui invariants.
\section*{Acknowledgement}
The author expresses his warmest thanks to Professor Hu\`ynh M\`ui for generous help and inspiring guidance. He also thanks Professor Nguy\~\ecircumflex n H.V. H\uhorn ng for helpful suggestions which lead him to this paper.
\section{Preliminaries}\label{s2}
As is well-known $H^*(BE^n) = E(x_1,\ldots,x_n) \otimes P(y_1,\ldots,y_n)$ where $\dim x_i = 1,\,
y_i = \beta x_i$ with $\beta$ the Bockstein homomorphism. Following Dickson \cite{dic} and M\`ui \cite{mui1}, we
define
\begin{align*}
&[e_1, \ldots,e_k] = \det (y_i^{p^{e_j}}),\\
&[1;e_{2}, \ldots, e_k] =
\begin{vmatrix} x_1&\cdots &x_k\\
y_1^{p^{e_{2}}}&\cdots &y_k^{p^{e_{2}}}\\
\vdots&\cdots &\vdots\\
y_1^{p^{e_k}} & \cdots & y_k^{p^{e_k}}
\end{vmatrix} .
\end{align*}
for every sequence of non-negative integers $(e_1 , \ldots , e_k),\, 1 \leqslant k \leqslant n$. We set
\begin{align*}
&L_{k,s} = [0, \ldots, \hat s,\ldots,k],\, L_k = L_{k,k} = [0,\ldots,k-1],\, L_0 = 1,\\
&M_{k,s} = [1;0, \ldots , \hat s,\ldots, k-1],\, 0\leqslant s < k \leqslant n.
\end{align*}
Then $\tilde L_n,\, Q_{n,s},\, \tilde M_{n,s},\, U_k,\, V_k$ are defined by
\begin{align*}
&\tilde L_{n} = L_n^h,\, h = (p-1)/2,\, Q_{n,s} = L_{n,s}/L_n,\, 0\leqslant s \leqslant n,\\
&\tilde M_{k,s} = M_{n,s}L_n^{h-1},\, U_k = M_{k,k-1}L_{k-1}^{h-1},\, V_k = L_k/L_{k-1}, \, 1 \leqslant k \leqslant n.
\end{align*}
Note that $Q_{n,0} = \tilde L_n^2$, $Q_{n,n} = 1$ for any $n > 0$.
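For small cases, the divisibility implicit in the definition $Q_{n,s}=L_{n,s}/L_n$ and the relation $Q_{n,0}=\tilde L_n^2$ can be verified directly. Here is such a check for $n=2$, $p=3$ (our own verification with SymPy, not part of the paper):

```python
from sympy import symbols, Poly, GF

y1, y2 = symbols('y1 y2')
p = 3
dom = GF(p)

L2  = Poly(y1*y2**p - y2*y1**p, y1, y2, domain=dom)                   # [0,1]
L21 = Poly(y1*y2**(p**2) - y2*y1**(p**2), y1, y2, domain=dom)         # [0,2]
L20 = Poly(y1**p*y2**(p**2) - y2**p*y1**(p**2), y1, y2, domain=dom)   # [1,2]

q, r = L21.div(L2)
assert r.is_zero                     # Q_{2,1} = L_{2,1}/L_2 is a polynomial
assert q.total_degree() == p**2 - p  # consistent with dim Q_{2,1} = 2(p^2 - p)
assert L20 == L2**p                  # hence Q_{2,0} = L_2^{p-1} = (tilde L_2)^2
```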
Let X be a topological space. Then we have the Steenrod power map
$$P_n: H^q(X)\, \longrightarrow H^{p^nq}
(EA_{p^n}\underset{A_{p^n}}\times X^{p^n}), $$
which sends $u$ to $1 \otimes u^{p^n}$ at the cochain level (see \cite[Chap. VII]{ste}). We also have the diagonal homomorphism
$$d_n^* : H^*(EA_{p^n}\underset{A_{p^n}}\times X^{p^n}) \longrightarrow \, H^*(BE^n )\otimes H^*(X) $$
induced by the diagonal map of $X$, the inclusion $E^n \subset A_{p^n}$ and the K\"unneth formula.
$d_n^*P_n$ has the following fundamental properties.
\begin{prop}[M\`ui \cite{mui1,mui2}]\label{md11}\
\medskip
{\rm (i)} $d_n^*P_n$ is a natural monomorphism preserving
cup product up to a sign; more precisely
$$d^*_nP_n(uv) = (-1)^{nhqr}d^*_nP_nud^*_nP_nv\ , $$
where $q = \dim u,\, r = \dim v,\, h = (p-1)/2.$
\medskip
{\rm (ii)} $d^*_nP_n = d^*_{n-s}P_{n-s}d^*_sP_s\ ,\ 0 \leqslant s \leqslant n. $
\medskip
{\rm (iii)} For $H^*(E^1) = E(x)\otimes P(y)$, we have
\begin{align*}
&d_n^*P_nx = (-h!)^nU_{n+1} = (h!)^n\Big(\tilde L_nx + \sum_{s=0}^{n-1}(-1)^{s+1}\tilde M_{n,s}y^{p^s}\Big),\\
&d_n^*P_ny = V_{n+1} = (-1)^n\sum_{s=0}^n(-1)^sQ_{n,s}y^{p^s}.
\end{align*}
where $U_{n+1} = U_{n+1}(x_1, \ldots,x_n,x,y_1,\ldots,y_n,y)$, $V_{n+1} = V_{n+1}(y_1,\ldots,y_n,y)$.
\end{prop}
The following is a description of $d_n^*P_n$ in terms of modular invariants and cohomology operations.
\begin{thm}[{M\`ui \cite[1.3]{mui2}}]\label{dl12} Let $ z \in H^q(X)$,\, $\mu(q)=(h!)^q(-1)^{hq(q-1)/2}$. We then have
$$d_n^*P_nz\, =\, \mu (q)^n \sum_{S,R}
(-1)^{r(S,R)}\tilde M_{n,s_1} \ldots\tilde M_{n,s_k}\tilde L_n^{r_0}Q_{n,
1}^{r_1}\ldots Q_{n,n-1}^{r_{n-1}}\ \otimes\ St^{S,R}z\ .$$
Here the sum runs over all $(S,R)$ with $S = (s_1, \ldots , s_k)$, $0\leqslant s_1 < \ldots < s_k$, $ R = (r_1,\ldots, r_n),\, r_i \geqslant 0$, $r_0 = q - k - 2(r_1 +\ldots\ +r_n)\geqslant 0,\ r(S,R) = k + s_1 + \ldots\ + s_k + r_1 + 2r_2 +\ \ldots\ + nr_n\ .$
\end{thm}
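A useful consistency check of this formula is dimension bookkeeping: with $\dim\tilde M_{n,s}=p^n-2p^s$, $\dim\tilde L_n=p^n-1$, $\dim Q_{n,i}=2(p^n-p^i)$ and $\dim St^{S,R} = \sum_i(2p^{s_i}-1)+\sum_i 2r_i(p^i-1)$, every summand on the right hand side must have total dimension $p^nq$. A short script verifying this (our own check):

```python
def degree_identity(p, n, q, S, R):
    """Check that one summand of Mui's formula for d_n^* P_n z, dim z = q,
    has total dimension p^n * q.  Returns None if r_0 < 0 (term absent)."""
    k = len(S)
    r0 = q - k - 2 * sum(R)
    if r0 < 0:
        return None
    d  = sum(p**n - 2 * p**s for s in S)                         # M~_{n,s_i}
    d += r0 * (p**n - 1)                                         # L~_n^{r_0}
    d += sum(2 * r * (p**n - p**i) for i, r in enumerate(R, 1))  # Q_{n,i}^{r_i}
    d += sum(2 * p**s - 1 for s in S)                            # tau part of St^{S,R}
    d += sum(2 * r * (p**i - 1) for i, r in enumerate(R, 1))     # xi part of St^{S,R}
    d += q                                                       # dim z
    return d == p**n * q

assert degree_identity(3, 2, 5, [0], [1, 0])
assert degree_identity(5, 3, 9, [0, 2], [1, 0, 1])
```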
\section{The duality theorem}\label{s3}
Let $\tilde m_{m,s},\, \tilde \ell_m,\, q_{m,s},\, m = n$ or $k$,
(resp. $u_{k+1},\, v_{k+1}$) be the dual of
$\tilde M_{m,s},\, \tilde L_m,\, Q_{m,s}$ (resp. $U_{k+1},\, V_{k+1}$) in
$$ E(\tilde M_{m,0},\ldots,\tilde M_{m,m-1}) \otimes
P(\tilde L_m,Q_{m,1}\ldots, Q_{m,m-1})$$
(resp. $E(U_{k+1})\otimes P(V_{k+1})$) with respect to the basis consisting of all monomials
\begin{align*}\tilde M_S\tilde Q^H &= \tilde M_{m,s_1} \ldots \tilde M_{m,s_k} \tilde L_m^{h_0}Q_{m,1}^{h_1}\ldots Q_{m,m-1}^{h_{m-1}},
\end{align*}
with $S = (s_1,\ldots, s_k),\, 0 \leqslant s_1 < \ldots < s_k, \, H = (h_0,\ldots,h_{m-1}),\, h_i \geqslant 0 ,$ (resp. $ U_{k+1}^eV_{k+1}^j;\,
e = 0, 1,\ j \geqslant 0$). Let $\Gamma(\tilde\ell_m,q_{m,1},\ldots,
q_{m,m-1})$ (resp. $\Gamma(v_{k+1})$) be the divided polynomial algebra with divided power $\gamma_i,\, i \geqslant 0$ generated by $\tilde\ell_m,\ q_{m, 1}, \ldots$, $
q_{m, m-1}$\ (resp. $v_{k+1}$). We set
$$\tilde m_S\tilde q_H = \tilde m_{m,s_1} \ldots \tilde m_{m, s_k}\gamma_{h_0}(\tilde \ell_m)\gamma_{h_1}(q_{m,1})\ldots
\gamma_{h_{m-1}}(q_{m, m-1}).
$$
For $q \geqslant 0$ and $R = (r_1,\ldots,r_m)$, set
$$ R_q^* = (q - 2(r_1 + \ldots + r_m),r_1,\ldots, r_{m-1}).
$$
Let $V$ be a vector space over $\mathbb Z/p$ and $V^*$ be its dual. Denote by
$$\langle.,.\rangle : V \otimes V^* \longrightarrow \mathbb Z/p$$
the dual pairing.
The main result of the section is
\begin{thm}\label{dl21} Suppose given $e, \delta = 0, 1,\, j \geqslant 0,\, (S,R)$, and $(S',R')$ with $\ell(R) = k, \, \ell(R') = n,\, \ell(S) = t \leqslant k,\, \ell(S') = t' \leqslant n$. Set
$ \sigma = r(S,R) + r(S',R') + s + \delta + (t + [-2p^s])t' + nhk\delta ,$ with $-\delta \leqslant s \leqslant n - \delta$. Then we have
\begin{multline*}\langle \tilde m_S\tilde q_{R^*_{(2-\delta)p^n - e - 2j - t}}\otimes u_{k+1}^e\gamma_j(v_{k+1}), St^{S',R'}\big(U_{k+1}^\delta V_{k+1}^{1-\delta}\big)\rangle \\
= \begin{cases} (-1)^\sigma\langle \tilde m_{S'}\tilde q_{{R'}^*_{(2-\delta)p^k
- t'}}, St^{S,R}\big(\tilde M_{n,s}^\delta Q_{n,s}^{1-\delta}\big)\rangle,
&e + 2j = -[-2p^s],\\ 0 , &\text{otherwise.}
\end{cases} \end{multline*}
Here, by convention, $\tilde M_{n,-1} = \tilde L_n.$
\end{thm}
\begin{proof} We prove the theorem for $\delta = 1$. For $\delta = 0$, it is similarly proved. We set
\begin{align*} U &= U_{n + k +1}(x_1,\ldots,x_k,x'_1,\ldots,x'_n,x,y_1,\ldots,
y_k,y'_1,\ldots,y'_n,y),\\
U' &= U_{n + k +1}(x'_1,\ldots,x'_n,x_1,\ldots,x_k,x,y'_1,\ldots,
y'_n,y_1,\ldots,y_k,y). \end{align*}
It is easy to verify that
\begin{align*} U \ =\ (-1)^{nkh}U'\ .\tag a\end{align*}
Computing directly from Proposition \ref{md11} gives
\begin{align*} U &= (-h!)^{-k}d^*_kP_kU_{n+1}(x'_1,\ldots,x'_n,x,y'_1,\ldots,
y'_n,y)\tag b\\
&= (-h!)^{-k}(-1)^nd_k^*P_k\big(\sum_{s = -1}^{n - 1}
(-1)^{s+1}\tilde M_{n,s}y^{p^s}\big)\\
&= (-1)^n \sum_{s=-1}^{n-1} (-h!)^{-(s+1)k/(|s|
+ 1)}(-1)^{s+1}(d_k^*P_k\tilde M_{n,s})V_{k+1}^{p^s}\ .
\end{align*}
Here by convention $y^{1/p} = x$, and $V_{k+1}^{1/p} = U_{k+1}$.
We observe that $\dim\tilde M_{n,s} = p^n + [-2p^s]$. According to Theorem \ref{dl12} we have
\begin{align*} d^*_kP_k\tilde M_{n,s} = \mu(p^n +
[-2p^s])^k\sum_{S,R} (-1)^{r(S,R)}\tilde M_S\tilde Q^{R^*_{p^n+[-2p^s]-t}}St^{S,R}\tilde M_{n,s} .\tag c\end{align*}
A simple computation shows that
\begin{align*} (-h!)^{ -(s+1)/(|s| + 1)}\mu(p^n +\ [-2p^s]) = (-1)^{nh} .
\tag d\end{align*}
Combining (b),(c) and (d) we get
$$ U = \sum_{s=-1}^{n-1}\Big(\sum_{S,R} (-1)^{n(kh+1)+r(S,R)+s+1}\tilde M_S
\tilde Q^{R^*_{p^n + [-2p^s]-t}} St^{S,R}\tilde M_{n,
s}\Big)V_{k+1}^{p^s} .$$
From this, it follows that
\begin{align*} &(-1)^{r(S,R)+n(hk+1)+s+1}\langle \tilde m_S\tilde q_{R^*_{p^n - e - 2j - t}} \otimes \tilde m_{S'}\tilde q_{{R'}^*_{p^k-t'}}\otimes u_{k+1}^e\gamma_j(v_{k+1}), U \rangle\tag e \\
&\qquad = \begin{cases} (-1)^{tt'}\langle \tilde m_{S'}\tilde q_{{R'}^*_{p^k -
t'}}, St^{S,R}(\tilde M_{n,s})\rangle, &e + 2j = -[-2p^s],\\
0\ , &\text{otherwise.}\end{cases}\end{align*}
On the other hand, from Proposition \ref{md11} and Theorem \ref{dl12} we have
\begin{align*} U' &= (-h!)^{-n}d_n^*P_nU_{k+1}(x_1,\ldots,x_k,x,y_1,\ldots,y_k,
y)\\
&=(-h!)^{-n}\mu(p^k)^n \sum_{S',R'} (-1)^{r(S',R')}
\tilde M_{S'}\tilde Q^{{R'}^*_{p^k-t'}}St^{S',R'}U_{k+1}.
\end{align*}
From this and the fact that $(-h!)^{-1}\mu(p^k) = (-1)^{hk}$, we get
\begin{align*} &(-1)^{r(S',R') + n(hk + 1)}\langle \tilde m_S\tilde q_{R^*_{p^n
- e - 2j - t}} \otimes \tilde m_{S'}\tilde q_{{R'}^*_{p^k-t'}}
\otimes u_{k+1}^e\gamma_j(v_{k+1}), U' \rangle\tag f \\
&\quad = (-1)^{t'e}\langle \tilde m_S\tilde q_{R^*_{p^n - e - 2j
- t}} \otimes u_{k+1}^e\gamma_j(v_{k+1}), St^{S', R'}U_{k+1}
\rangle . \end{align*}
Comparing (e) with (f) and using (a), we obtain the theorem for $\delta = 1$.
\end{proof}
The basis $\{\tilde M_{S'}\tilde Q^{H'}\}$ of $E(\tilde M_{n,0},\ldots , \tilde M_{n,n-1})\otimes P(\tilde L_n,Q_{n,1}, \ldots,Q_{n,n-1})$ is dual
to the basis $\{\tilde m_{S'}\tilde q^{H'}\}$ of $E(\tilde m_{n,0},\ldots , \tilde m_{n,n-1})\otimes \Gamma(\tilde \ell_n,q_{n,1}, \ldots,q_{n,n-1})$. Hence, we easily
obtain from Theorem \ref{dl21}
\begin{corl}\label{hq22} Set
$$ C_{S',R'} = \langle \tilde m_S\tilde q_{R^*_{(2-\delta)p^n + [-2p^s] - t}}\otimes \gamma_{p^s}(v_{k+1}), St^{S',R'}\big(U_{k+1}^\delta V_{k+1}^{1-\delta}\big)\rangle. $$
We have
$$ St^{S,R}\big(\tilde M_{n,s}^\delta Q_{n,s}^{1-\delta}\big) =
\sum_{S' ,R'} (-1)^\sigma C_{S',R'}\tilde
M_{S'}\tilde Q^{{R'}^*_{(2-\delta)p^k-t'}}.$$
Here, by convention, $\gamma_{1/p}(v_{k+1}) = u_{k+1}.$
\end{corl}
By an analogous argument we obtain
\begin{corl}\label{hq23} Set
$ C_{s,S,R} = \langle \tilde m_{S'}\tilde q_{{R'}^*_{(2-\delta)p^k - t'}},
St^{S,R}\big(\tilde M_{n,s}^\delta Q_{n,s}^{1-\delta}\big)\rangle. $
We have
$$ St^{S',R'}\big(U_{k+1}^\delta V_{k+1}^{1-\delta}\big) =
\sum_{s=-\delta}^{n-\delta}\Big( \sum_{S,R} (-1)^\sigma C_{s,S,R}
\tilde M_S \tilde Q^{R^*_{(2-\delta)p^n+[-2p^s]-t}}\Big)V_{k+1}^{p^s}\ .$$
Here, by convention, $V_{k+1}^{1/p} = U_{k+1}$ .
\end{corl}
\section{Applications}\label{s4}
Fix a non-negative integer $r$. Let $\alpha_i = \alpha_i(r)$ denote the $i$-th coefficient in the $p$-adic expansion of $r$. That means
$$ r = \alpha_0p^0 + \alpha_1p^1 +\ldots$$
with $0 \leqslant \alpha_i < p,\, i \geqslant 0$. Set $\alpha_i = 0$ for $i < 0$.
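For concreteness, the coefficients $\alpha_i(r)$ are just the base-$p$ digits of $r$, obtained by repeated division; the following Python sketch (the helper name `padic_digits` is ours, not the paper's) makes the convention explicit.

```python
def padic_digits(r, p, n):
    """Return [alpha_0, ..., alpha_{n-1}], the first n coefficients of the
    p-adic expansion r = alpha_0 + alpha_1*p + alpha_2*p^2 + ..."""
    digits = []
    for _ in range(n):
        digits.append(r % p)  # the current lowest digit
        r //= p               # shift the expansion down by one power of p
    return digits

# Example: 29 = 2 + 0*3 + 0*9 + 1*27, so in base p = 3 the digits are [2, 0, 0, 1].
print(padic_digits(29, 3, 4))   # [2, 0, 0, 1]
```

The convention $\alpha_i = 0$ for $i < 0$ (and for $i$ beyond the last nonzero digit) is what the function's zero-padding reproduces.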
The aim of the section is to prove the following four theorems:
\begin{thm}\label{dl31} Set
$c = \frac{(h-1)!}{(h-\alpha_{k-1})!\prod_{0\leqslant i<k}
(\alpha_i-\alpha_{i-1})!}, \ t_i = \alpha_i-\alpha_{i-1},\ 0 \leqslant i < k$. We have
\begin{align*} P^rU_{k + 1} &= \begin{cases}\displaystyle{
c\Big( hU_{k+1} + \sum_{u=0}^{k-1}t_uV_{k+1}\tilde M_{k,u}Q_{k,u}^{-1}\Big)
\prod_{i=0}^{k-1}Q_{k,i}^{t_i}},\ &2r < p^k ,\, t_i
\geqslant 0,\ i < k ,\\
0\ , &\text{otherwise.}\end{cases}
\end{align*}
\end{thm}
\begin{thm}\label{dl32} Set $c = \frac{(h-\alpha_s)(h-1)!}{(h-\alpha_{n-1})!
(\alpha_s+1-\alpha_{s-1})!\prod_{s\not= i<n} (\alpha_i-\alpha_{i-1})!},
t_i = \alpha_i-\alpha_{i-1},\ -1 \leqslant i \ne s,\
t_s = \alpha_s+1-\alpha_{s-1}$,
with $-1 \leqslant s \leqslant n - 1$. We have
\begin{align*} P^r\tilde M_{n,s} &=
\begin{cases} \displaystyle{c\sum_{u=-1}^st_u\tilde M_{n,u}
Q_{n,u}^{t_u-1}\prod_{u\not= i<n} Q_{n,i}^{t_i}} ,
&2r\leqslant p^n+[-2p^s], \alpha_i \geqslant \alpha_{i-1},\\
&s \not= i < n,\ \alpha_s+1\geqslant \alpha_{s-1}\ ,\\
0\ ,&\text{otherwise.}\end{cases}
\end{align*}
\end{thm}
The following two theorems were first proved in \cite{hum} by another method.
\begin{thm}[H\uhorn ng-Minh \cite{hum}]\label{dl33}
\begin{align*}P^rV_{k+1} = \begin{cases} V_{k+1}^p, &r = p^k,\\
\frac{(-1)^{\alpha_{k-1}}\alpha_{k-1}!}{\prod_{\scriptstyle 0
\leqslant i<k}(\alpha _i-\alpha _{i-1})!}\displaystyle{V_{k+1}\prod_{i=0}^{k-1}
Q_{k,i}^{\alpha _i-\alpha _{i-1}}},&
r < p^k , \ \alpha _i \geqslant \alpha _{i-1},\ i <k,\\
0\ , &\text{otherwise.}\end{cases}
\end{align*}
\end{thm}
\begin{thm}[H\uhorn ng-Minh \cite{hum}]\label{dl34} Set
$c = \frac{(-1)^{\alpha _{n-1}}\alpha_{n-1}!(\alpha
_s+1)}{(\alpha _s+1-\alpha _{s-1})!\prod_{s\ne i <n} (\alpha _i-\alpha _{i-1})!}.$ Then
\begin{align*}P^rQ_{n,s} = \begin{cases} Q_{n,s}^p, &r = p^n-p^s,\\
\displaystyle{cQ_{n,s} \prod_{0 \leqslant i <n}
Q_{n,i}^{\alpha_i-\alpha_{i-1}}}, &r < p^n-p^s, \alpha_i\geqslant \alpha
_{i-1},\\
& s\ne i <n,\ \alpha_s+1\geqslant \alpha_{s-1},\\
0\ , &\text{otherwise.}\end{cases}
\end{align*}
\end{thm}
To prove these theorems we need
\begin{nota}\label{kh35} Let $R = (r_1, \ldots , r_n)$ be a sequence of arbitrary integers and $b \geqslant 0$.
Set $|R| = \sum_{i=1}^n(p^i-1)r_i$ and denote by $\binom bR$ the coefficient of $y_1^{r_1}\ldots y_n^{r_n}$ in $(1+y_1+\ldots+y_n)^b$. That means
$$\binom bR = \begin{cases} \dfrac{b!}{(b-r_1-\ldots-r_n)!r_1!\ldots r_n!},
&r_1 + \ldots + r_n \leqslant b,\\
0, &r_1 + \ldots + r_n > b.
\end{cases}$$
\end{nota}
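In other words, $\binom bR$ is the multinomial coefficient $b!/\big((b-r_1-\cdots-r_n)!\,r_1!\cdots r_n!\big)$, vanishing whenever $r_1+\cdots+r_n > b$ or some $r_i < 0$. A small Python sketch (our own helper, not part of the paper) computes it exactly:

```python
from math import factorial

def binom_R(b, R):
    """Coefficient of y_1^{r_1}...y_n^{r_n} in (1 + y_1 + ... + y_n)^b."""
    s = sum(R)
    if s > b or any(r < 0 for r in R):
        return 0
    # b! / (b - s)! then divide off each r_i!; every intermediate
    # quotient is an integer, so // stays exact.
    result = factorial(b) // factorial(b - s)
    for r in R:
        result //= factorial(r)
    return result

print(binom_R(4, (1, 1)))   # 4!/(2!1!1!) = 12
```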
The proofs of Theorems \ref{dl31} and \ref{dl33} are based on the duality theorem and the following
\begin{lem}\label{bd36} Let $b$ be a non-negative integer and $\varepsilon = 0,\, 1$. We then have
\begin{align*}
St^{S,R}(x^\varepsilon y^b) = \begin{cases}
\displaystyle{\binom bR x^\varepsilon y^{b + |R|} ,} &S = \emptyset ,\\
\displaystyle{\varepsilon \binom bR y^{b + |R| + p^u}} , &S = (u), \ u \geqslant 0 ,\\
0\ , &\text{otherwise.} \end{cases}
\end{align*}
Here $x$ and $y$ are the generators of $H^*(B\mathbb Z/p) = E(x)\otimes P(y)$.
\end{lem}
\begin{proof} A direct computation using Proposition \ref{md11} shows that
\begin{align*} d_m^*P_m(x^\varepsilon y^b) &= (-1)^{mb}(h!)^{m
\varepsilon}\Big(\tilde L_m^\varepsilon
x^\varepsilon + \varepsilon \sum_{u=0}^{m-1}
(-1)^{u+1}\tilde M_{m,u}y^{p^u}\Big)\\
&\hskip 3 cm \times\Big(\sum_{R=(r_1,\ldots,
r_m)}(-1)^{r(\emptyset ,R)}\binom bR
\tilde Q^{R^*_{2b}}y^{b + |R|}\Big)\\
&= \mu(2b+\varepsilon)^m\Big(\sum_{R=(r_1,\ldots,r_m)}(-1)^{r(\emptyset ,R)}
\binom bR\tilde Q^{R^*_{2b+\varepsilon}}x^\varepsilon y^{b + |R|}\\
&\hskip 1 cm + \varepsilon\sum_{u=0}^{m-1}\sum_{R=(r_1,\ldots,r_m)}
(-1)^{r((u),R)}\binom bR\tilde M_{m,u}
\tilde Q^{R^*_{2b}}y^{b+|R| + p^u}\Big).
\end{align*}
The lemma now follows from Theorem \ref{dl12}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{dl31}] Since $\dim U_{k+1} = p^k$, it is clear that $P^rU_{k+1} = 0$ for $2r > p^k$.
Suppose $r \leqslant (p^k-1)/2$. Applying Corollary \ref{hq23} with $\delta = n = 1$ and using Lemma \ref{bd36} we obtain
\begin{align*} P^rU_{k+1} &= \sum_{R = (r_1,\ldots,
r_k)}(-1)^{r(\emptyset,R)+r +hk}
\langle\tilde q_{(r)^*_{p^k}},St^{\emptyset,R}\tilde L_1\rangle
U_{k+1}\tilde Q^{R^*_{p-1}}\\
&\quad + \sum_{u=0}^{k-1}\sum_R(-1)^{r((u),R)+r+kh+1}
\langle\tilde q_{(r)^*_{p^k}},St^{(u),R}\tilde M_{1,0}
\rangle V_{k+1}\tilde M_{k,u}\tilde Q^{R^*_{p-3}}\ .
\end{align*}
Set $\bar r_i = \alpha_i-\alpha_{i-1},\ 0 \leqslant i < k,\ \bar r_k
= h - \alpha_{k-1},\ \bar R_0 = (\bar r_1,\ldots,\bar r_k),
\bar R_u = (\bar r_1,\ldots,\bar r_u-1,\ldots,\bar r_k),\ 1\leqslant u \leqslant k.$ Computing directly from Lemma \ref{bd36} with $\varepsilon = 0,\ b = h$ or $\varepsilon = 1,\ b = h - 1$ gives
\begin{align*} \langle\tilde q_{(r)^*_{p^k}}, St^{\emptyset,
R}\tilde L_1\rangle &= \begin{cases} \dfrac{h!}{\bar r_0!\ldots\bar r_k!},\quad &R
= \bar R_0 \\
0\ , &\text{otherwise.}\end{cases}\\
\langle\tilde q_{(r)^*_{p^k}},St^{(u),R}\tilde M_{1,0}\rangle &=
\begin{cases} \dfrac{(h-1)!\bar r_u}{\bar r_0!\ldots\bar r_k!},\quad &R = \bar R_u\\
0\ , &\text{otherwise.}\end{cases}
\end{align*}
A simple computation shows that
$$r( \emptyset,\bar R_0) + r = r((u),\bar R_u) + r + 1 = hk\
(\text{mod}\ 2).$$
Hence, the theorem is proved.
\end{proof}
\begin{proof}[Proof of Theorem \ref{dl33}] Since $\dim V_{k+1} = 2p^k$, we have only to prove the theorem for $r < p^k$. Note that $Q_{1,1} = 1$. Hence
$$ St^{S,R}Q_{1,1} = \begin{cases} 1\ , \quad &S = \emptyset,\, R = (0,\ldots,0),\\
0,\ & \mbox{otherwise}.
\end{cases}$$
So, $\langle \tilde q_{(r)^*_{2p^k}}, St^{S,R}Q_{1,1}\rangle = 0$ for any $S,\, R$. Remember that $Q_{1,0} = y^{p-1}$. So, applying Corollary \ref{hq23} with $\delta = 0,\, n = 1$ and using Lemma \ref{bd36} with $\varepsilon = 0,\, b = p - 1$, we get
\begin{align*} P^rV_{k+1} = \sum_{R} (-1)^{r(\emptyset ,R) + r}\langle \tilde q_{(r)^*_{2p^k}}, St^{\emptyset,R}Q_{1,0}\rangle \tilde Q^{R^*_{2(p-1)}}V_{k+1}.\tag a
\end{align*}
From Lemma \ref{bd36}, it follows that
\begin{align*} &\langle \tilde q_{(r)^*_{2p^k}},St^{\emptyset,R}Q_{1,0}\rangle\tag b\\
&\qquad = \begin{cases} \displaystyle{\binom{p-1}R} , &R = (\alpha_1 - \alpha_0,\ldots,
\alpha_{k-1}-\alpha_{k-2},p-1-\alpha_{k-1}),\\
0\ , &\text{otherwise.}\end{cases}
\end{align*}
Suppose that $R = (\alpha_1 - \alpha_0,\ldots,\alpha_{k-1}-\alpha_{k-2},p-1-\alpha_{k-1}).$ Then we can easily observe that
\begin{align*} &r(\emptyset ,R) + r = 0\ (\text{mod}\ 2),\tag c\\
&\binom{p-1}R = \frac{(-1)^{\alpha_{k-1}}\alpha_{k-1}!}
{\prod_{0\leqslant i < k}(\alpha_i-\alpha_{i-1})!},\\
&R^*_{2(p-1)} = (2\alpha_0,\alpha_1-\alpha_0,\ldots,
\alpha_{k-1}-\alpha_{k-2}).
\end{align*}
Theorem \ref{dl33} now follows from (a),(b) and (c).
\end{proof}
Following Corollary \ref{hq22}, to determine $P^r\tilde M_{n,s}$ and $P^rQ_{n,s}$ we need to compute the action of $St^{S,R}$ on $U_2$ and $V_2$.
\begin{prop}\label{md37} Suppose given $R = (r_1,\ldots,r_n)$, and $0\leqslant u < n$. Set $w_{s} = r_{s+1} + \ldots + r_n$, for $ s \geqslant 0$. Then we have
$$ St^{S,R}U_2 = \begin{cases} \displaystyle{\binom hR\big(\tilde L_1^{\vert
R\vert /h}U_2 + \overset{n-1}{\underset{s=0}\sum}
h^{-1}w_{s}\tilde M_{1,0}\tilde L_1^{(\vert R\vert -
p^{s+1}+1)/h}V_2^{p^s}\big),\quad S = \emptyset},\\
\displaystyle{\binom hR\overset{n-1}{\underset{s=u}\sum}
h^{-1} w_{s}\tilde L_1^{(\vert R\vert - p^{s+1}+p^u+h)/h}V_2^{p^s},
\hskip2.6cm S = (u)},\\
0\ ,\hskip 6cm \text{otherwise.}\end{cases}$$
Here, $|R|$ and $\binom hR$ are defined in Notation \ref{kh35}.
\end{prop}
The proposition will be proved by using Theorem \ref{dl12} and the following
\begin{lem}\label{bd38} Let $u, v$ be non-negative integers with $u \leqslant v$. We have
\medskip
{\rm i)} $[u,v] = \sum_{s=u}^{v-1}V_1^{p^v-p^{s+1}+p^u}V_2^{p^s}$,
{\rm ii)} $[1;v] = V_1^{p^v-h}U_2 + M_{1,0}\sum_{s=0}^{v-1}V_1^{p^v-p^{s+1}}V_2^{p^s}. $
\medskip
Here $[u,v]$ and $[1;v]$ are defined in Section \ref{s2}.
\end{lem}
The proof is straightforward.
\begin{proof}[Proof of Proposition \ref{md37}] Recall that $ M_{2,1} =
x_1y_2 - x_2y_1$. From Proposition \ref{md11} we directly obtain
$$d_n^*P_nM_{2,1} = (-h!)^n \Big(\sum_{v=0}^n(-1)^v\tilde L_nQ_{n,v}[1;v] +
\sum_{\scriptstyle 0\leqslant u < n\atop\scriptstyle 0\leqslant v\leqslant n}
(-1)^{u+v+1}\tilde M_{n,u}Q_{n,v}[u,v]\Big).$$
Since $L_1 = y_1$ and $2(h - 1) = p - 3$, using Proposition \ref{md11}(iii) with $y = y_1$ and Notation \ref{kh35} we get
$$d_n^*P_nL_1^{h-1} = (-1)^{n(h-1)}\sum_{R'}(-1)^{r(\emptyset,R')}
\binom{h-1}{R'}\tilde Q^{{R'}^*_{p-3}}y_1^{\vert R'\vert +h-1}.$$
We have $U_2 = M_{2,1}L_1^{h-1},\ \dim U_2 = p$ and $\mu(p) = (-1)^hh!$. So, from the above equalities and Proposition \ref{md11} it follows that
\begin{align*} d_n^*P_n U_2 &= \mu(p)^n\Big(\sum_R(-1)^{r(\emptyset,R)}\tilde
Q^{R^*_p}\binom hR \sum_{v=0}^nh^{-1}r_vy_1^{\vert R\vert + h - p^v}[1 ;
v]\\ &+\sum_{u=0}^{ n-1} \sum_R(-1)^{r((u),R)}\tilde M_{n,u}\tilde
Q^{R^*_{p-1}}\binom hR \sum_{v=u}^nh^{-1}r_vy_1^{\vert R\vert + h - p^v}[u; v]\Big). \end{align*}
Then by Theorem \ref{dl12} we have
$$ St^{S,R}U_2 = \begin{cases}\displaystyle{ h^{-1}\binom hR\sum_{v=0}^nr_vy_1^{\vert
R\vert + h - p^v}[1 ; v]},\ &S = \emptyset,\\
\displaystyle{h^{-1} \binom hR\sum_{v=u}^nr_vy_1^{\vert R\vert + h - p^v}[u ; v]},\ &S =
(u),\ u < n,\\
0\ , &\text{otherwise.}\end{cases}$$
Now the proposition follows from Lemma \ref{bd38}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{dl32}] For simplicity, we assume that $0 \leqslant s < n$. Applying Corollary \ref{hq22} with $\delta = k = 1$ and using Proposition \ref{md37} we get
\begin{align*}P^r\tilde M_{n,s} = \sum_{\scriptstyle 0\leqslant u\leqslant s\atop\scriptstyle R = (r_1,\ldots,r_n)}
(-1)^{r((u),R)+r+s+1+nh}C_{(u),R}\tilde M_{n,u}\tilde Q^{R^*_{p-1}}.
\end{align*}
Here $C_{(u),R} = \langle \tilde q_{(r)^*_{p^n-2p^s}}\otimes \gamma_{p^s}(v_2) , St^{(u),R}U_2\rangle .$
If $2r > p^n - 2p^s - 1$, then $P^r\tilde M_{n,s} = 0$ since $\dim \tilde M_{n,s} = p^n - 2p^s$. Suppose $2r \leqslant p^n - 2p^s - 1$. Set $t_i = \alpha_i-\alpha_{i-1}, $ with $ 0 \leqslant i \ne s,\, n, \ t_s = \alpha_s + 1 - \alpha_{s-1},\ t_n = h - \alpha_{n-1},\ \bar R_0 = (t_1,\ldots,t_n),\ \bar R_u = (t_1,\ldots,t_u - 1,\ldots,
t_n),\ 1 \leqslant u \leqslant n.$ From Proposition \ref{md37} we have
$$ C_{(u),R} = \begin{cases} ct_u\ ,\ &R = \bar R_u,\\
0\ ,&\text{otherwise.}\end{cases}$$
It is easy to verify that
$$ r((u),\bar R_u) + r + s + 1 = nh\ (\text{mod}\ 2).$$
Theorem \ref{dl32} is now proved by combining the above equalities.
\end{proof}
Now we return to the proof of Theorem \ref{dl34}. It is proved by the same argument as given in the proof of Theorem \ref{dl32}. We only compute $St^{S,R}V_2$.
\begin{prop} For $R = (r_1,\ldots, r_n)$, $r_0 = p - r_1 - \ldots - r_n $, we have
$$St^RV_2 = \begin{cases} V_2^{p^s}, \hskip 2cm r_s = p,\ r_i = 0,\ i \ne \ s,\\
\displaystyle{\sum_{s=0}^{n-1}
\frac{(p-1)!(r_{s+1}+\ldots+r_n)}{r_0!\ldots r_n!}
V_1^{\vert R\vert + p-p^{s+1}}V_2^{p^s}},\\
\hskip 2.8cm 0\leqslant r_i < p,\ 0\le
i\leqslant n, \\
0\ , \hskip 2.3cm\text{otherwise.}\end{cases}$$
\end{prop}
\begin{proof}
Recall that $V_2 = y_2^p - y_2y_1^{p-1}$. Applying Proposition \ref{md11} and Lemma \ref{bd36} with $y = y_1$ or $y = y_2$ we get
\begin{align*} d_n^*P_nV_2 &= \sum_{s=0}^n (-1)^{n + s}
Q_{n,s}^py_2^{p^{s+1}}\tag a\\
&\quad - (-1)^n\sum_{u=0}^n\sum_{R'}(-1)^{u+r(\emptyset ,R')}
\binom{p-1}{R'}Q_{n,u} \tilde Q^{{R'}^*_{2(p-1)}}y_1^{\vert R'\vert + p - 1}y_2^{p^u}\\
&= (-1)^n\sum_{s=0}^n (-1)^s Q_{n,s}^p\big(y_2^{p^{s+1}}
- y_2^{p^s}y_1^{(p-1)p^s}\big)\\
&\quad - (-1)^n\sum_{u=0}^n\sum_R(-1)^{r(\emptyset ,R)}\binom{p-1}{R_u}
\tilde Q^{R^*_{2p}}y_1^{\vert R\vert + p - p^u}y_2^{p^u}.
\end{align*}
Here the last sum runs over all $ R = (r_1,\ldots, r_n)$ with $ 0 \leqslant r_i < p, \ 0 \leqslant i \leqslant n, \ R_0 = R, \ R_u = (r_1,
\ldots, r_u-1,\ldots,r_n),\ 1 \leqslant u \leqslant n$.
Let $v$ be the greatest index such that $r_v > 0$. A simple computation shows
\begin{align*} y_1^{|R| + p - p^u}y_2^{p^u}= - y_1^{|R| + p - p^u - p^v}[u , v] + y_1^{|R| + p - p^v}y_2^{p^v}.\tag b
\end{align*}
Combining (a), (b), Lemma \ref{bd38} and the fact that $\sum_{u=0}^n\binom{p-1}{R_u} = 0$ we obtain
\begin{multline*} d_n^*P_nV_2 =
\mu(2p)^n\Big(\sum_{s=0}^n(-1)^sQ_{n,s}^pV_2^{p^s} \\ +
\sum_R(-1)^{r(\emptyset ,R)}\tilde Q^{R^*_{2p}}\sum_{s=0}^n\sum_{u=s+1}^n\binom{p-1}{R_u}
V_1^{\vert R\vert+p-p^{s+1}}V_2^{p^s}\Big).
\end{multline*}
The proposition now follows from this equality and Theorem \ref{dl12}.
\end{proof}
\bigskip
\section{Introduction}
\hspace{1cm}
Since the advent of the modern scientific method,
knowledge of the
physical world has advanced through the interplay of experimental
observation and theoretical analysis. Correspondingly, the terms
experimental physics and theoretical physics have been used to
characterize these two basic methodologies of scientific research.
The introduction of computers, however, has led to the
establishment of a third fundamental methodology and the term
computational physics has been coined to denote all those
investigations where computers are used in an intrinsic manner
to unravel properties of complex systems.
The methods of computational physics do not offer a replacement
for either experimental investigation or theoretical analysis.
Indeed, although computer modeling shares many of the features
of actual experiment, clearly the study of computer simulated
phenomena can never replace the observation of the actual
world. In a more subtle way, computational physics cannot
displace theoretical physics either. The
insights into the phenomena provided by the theoretical analysis
constitute the necessary platform over which a computational
investigation can be launched and are ultimately required to
interpret the latter's results. Thus theoretical physics and
computational physics complement each other and, at their best,
proceed hand in hand.
Because of the large computer power required by some numerical
investigations, it is frequently thought that computational physics is
accessible only to a limited constituency of researchers fortunate
enough to have access to powerful supercomputers. This belief is
wrong, on two accounts. First, not all computational
investigations require very large resources. In many instances,
ingenuity more than powerful hardware is the key to a successful
utilization of the computer. Second, the amazingly rapid pace of
technological development in the field of computers is making these
tools ever more accessible on a world wide scale. For these reasons,
the
International Center for Theoretical Physics has
been offering colleges on computational physics on a rather
regular basis since the first conference and college it held in
1986.
In particle physics some of the most successful computational
applications have taken place in the area of lattice gauge theories,
where computer simulations of quantum fluctuations have been
used to evaluate several non-perturbative quantities of primary
importance. In particular, computational investigations of QCD
have provided evidence of quark confinement as well as the
means of calculating hadronic observables such as meson
and baryon masses, some weak matrix elements, the deconfining
temperature and the value of $\alpha_S$. However, the span of
computational particle physics investigations is by no means
limited to lattice QCD, and in these lectures we will illustrate
another challenging application of numerical
techniques to particle phenomena where computational and
theoretical methods complement each other. We will see, indeed,
that the theoretical analysis of the phenomenon provides the basis
for the computational investigation, which in turn can produce
information that goes well beyond the reach of purely analytic
techniques.
The problem we will study is the possible occurrence of baryon
number violation in high energy electroweak processes. In
perturbation
theory, baryon number is strictly conserved. But, as has been
well known
since the pioneering work of 't Hooft\cite{thooft76},
the axial vector
anomaly implies that baryon number is not conserved in processes
which change the topology of the gauge fields. Baryon number
violating amplitudes are non-perturbative, and viable methods
of calculation are scarce. Basically there are two ways of getting
at non-perturbative information in quantum field theory. One can
use either semi-classical techniques or direct lattice simulations
of the quantum fluctuations. Unfortunately, theories with small
coupling constants are not suited for the latter, so the electroweak
sector of the standard model lies beyond the reach of direct lattice
calculations. This means that semiclassical methods presently
offer the only way to study baryon number violating electroweak
processes.
Electroweak baryon number violation is associated with topology
change of the gauge fields. Classically, gauge field configurations
with different topology (i.e differing by a topologically non-trivial
gauge transformation) are separated by an energy barrier. The
(unstable) solution of the classical equations of motion which lies
at the top of the energy barrier is called the sphaleron\cite{KM}.
At energies lower than the sphaleron energy, topology changing
transitions, and hence baryon number violation, can only occur
via quantum mechanical tunneling. Under certain circumstances,
semiclassical methods can be used to approximate these tunneling
rates. The relevant solution of the Euclidean equations of motion
which describe such tunneling is known as the instanton\cite{bpst}.
All transitions that change topology involve fields of order
$1/g$ and actions $ S = S_0/g^2 $, where $S_0$ is a coupling
constant independent action reexpressed in terms of rescaled fields
$ g A_{\mu}(x) $. As a consequence, all tunneling amplitudes contain
a barrier penetration factor $\exp(-S_0/g^2)$:
\begin{equation}
A \sim \exp(-S_0/g^2),
\label{eq1}
\end{equation}
where $S_0$ is a numerical factor of order one.
The appearance of the square of the electroweak coupling
constant $g$
in the denominator of the exponent in Eq.~\ref{eq1} has three
important implications. First it tells us that the phenomenon
is non-perturbative, in the sense that $A$ has an essential singularity
at $g=0$ and thus cannot be expressed in terms of a perturbative
expansion in powers of $g$. Second, the fact that the actual numerical
value of $g$ is very small indicates that non-perturbative semiclassical
methods, based on saddle point expansions of path integrals around
solutions of the classical equations of motion, are likely to
produce reliable results (to better understand this point, remember
that in units in which $\hbar$ is not set equal to $1$, Eq.~\ref{eq1}
takes the form $A \sim \exp[-S_0/(\hbar g^2)]$ so that small $g$ and
small $\hbar$ are equivalent). Third, the small value of $g$ also
makes these processes apparently irrelevant because the associated transition
rates, proportional to $|A|^2$, turn out
to be abysmally small.
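As a rough order-of-magnitude illustration (the numerical inputs here are our own back-of-the-envelope assumptions, not a detailed calculation), take the instanton value $S_0 = 8\pi^2$ for the amplitude, so that the rate carries $\exp(-16\pi^2/g^2)$, and use $g^2 \approx 0.42$ for the electroweak coupling:

```python
import math

# Rough inputs (our assumptions): rescaled instanton action S0 = 8*pi^2,
# electroweak coupling g^2 ~ 0.42.
S0 = 8 * math.pi**2
g2 = 0.42

# Amplitude suppression exp(-S0/g^2); the rate is its square.
log10_amplitude = -S0 / g2 / math.log(10)
log10_rate = 2 * log10_amplitude

print(round(log10_rate))   # roughly -163: the rate is ~10^{-163}
```

A rate of order $10^{-163}$ is what "abysmally small" means in practice: no conceivable collider statistics could ever observe it unless the suppression is compensated.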
This state of affairs changed a few years ago when Ringwald
\cite{ring} and later Espinosa\cite{esp}
noticed that a summation
of the semiclassical amplitudes over final states gives rise
to factors which increase very rapidly with increasing energy.
This might lead to a compensation of the suppression factor
in Eq.~\ref{eq1} for energies approaching the energy of the
barrier, i.e. the sphaleron energy $E_{sph}$. Intuitively, one
might expect the tunneling suppression factor to become much
less severe as the energy approaches the energy of the barrier.
In particular one might expect it to disappear altogether for
$E>E_{sph}$, i.e. in the region where the topology changing
processes are classically allowed. Investigations
have indeed confirmed that this is precisely what happens in
high temperature electroweak processes\cite{highT}. As the
temperature approaches $E_{sph}$ the barrier-penetration
suppression factor becomes progressively less pronounced,
and electroweak baryon number violation becomes
unsuppressed altogether for temperatures comparable
to the sphaleron energy. The situation is, however, much
less clear for high energy collisions. Phase space
considerations are more
subtle: the mere fact that one has enough energy to pass
over the barrier does not guarantee that one does. The problem
is that in high energy collisions the initial state is an exclusive
two particle state, which is difficult to incorporate in a
semiclassical treatment of the transition amplitude.
A possible remedy to this situation has recently been proposed
by Rubakov, Son and Tinyakov\cite{rst}, who suggested
that one considers inclusive initial coherent states, but
constrained so that energy and particle number take fixed
average values
\begin{equation}
E = {\epsilon \over g^2},
\label{eq2}
\end{equation}
\begin{equation}
N = {\nu \over g^2}.
\label{eq3}
\end{equation}
In the limit $g \to 0$, with $\epsilon$ and $\nu$ held fixed, the
path integrals giving the transition amplitudes are then dominated
by a saddle point configuration which solves the classical
equations of motion. This permits a semiclassical calculation
of the transition rates. Information on the high energy
collision processes can then be obtained from the limit $\nu \to
0$. While this limit does not strictly reproduce the exclusive
two-particle
initial state, under some reasonable assumptions of continuity
it can be argued that the corresponding transition rates will be
equally suppressed or unsuppressed.
In these lectures we will not reproduce the derivation of the
saddle point equations, which would form the topic of
an extended set of lectures in itself. (Indeed, Rubakov
presented such lectures at the 1992 ICTP Summer School
in High Energy Physics and Cosmology, and the conversations
that one of us held with him then stimulated the investigation
we describe here.)
Rather, we will start from these equations, which we will of
course recapitulate, and describe the computational techniques
used to solve them and the progress we
have made in this direction. For the actual derivation of the
equations the reader should consult Ref. \cite{rst}.
In the next section we illustrate the general properties of topology
changing evolution of the classical fields. For simplicity we
first consider the 2-dimensional Abelian Higgs model. Then we
examine the 4-dimensional $SU(2)$ Higgs model, but restricted
to the spherical {\it Ansatz} to obtain a computationally
tractable system. In Section 3 we investigate the properties of
topology changing processes above the sphaleron barrier,
i.e. in the classically allowed energy domain (see also
Ref. \cite{rs}). And in Section
4 we finally describe the equations introduced by Rubakov,
Son and Tinyakov\cite{rst} and the computational methods
required to solve them.
\section{Topology changing field evolution}
\hspace{1cm}
The 1+1 dimensional Abelian Higgs system is defined
in terms of a complex matter field $\phi(x)$ and an
Abelian gauge potential $A_{\mu}(x)$ with action
\begin{equation}
S = \int d^2x ~ \left\{- {1 \over 4} F_{\mu \nu} F^{\mu \nu} +
(D_{\mu} \phi)^* D^{\mu} \phi - \lambda (|\phi|^2 -1 )^2
\right\} \ ,
\label{eq4}
\end{equation}
where the indices run over 0 and 1,
$F_{\mu \nu} = \partial_{\mu} A_{\nu} - \partial_{\nu} A_{\mu}$,
$D_{\mu}\phi = \partial_{\mu}\phi - \imath A_{\mu} \phi$ and
many inessential constants have been eliminated by a suitable
choice of units.
The most important feature of this system is that the vacuum,
i.e. the configuration of minimum energy, occurs for non-vanishing $\phi$;
indeed, with our special choice of units, for $|\phi| = 1$.
Since this does not specify the phase of $\phi$, there is not
a unique vacuum state, but rather multiple vacua. Still,
because of gauge invariance one must be careful in regard
to the physical significance of the phase of $\phi$. A local
variation of the phase of $\phi$ can always be undone by
a suitable gauge transformation. And since gauge
equivalent configurations must be considered physically
indistinguishable, local variations of the phase of the matter
field do not lead to different vacua. However, variations
of the phase of $\phi$ by multiples of $2 \pi$ (as the coordinate
$x^1$ spans the entire spatial axis) cannot be undone by a
local gauge transformation, and thus define topologically
distinct vacuum states. These vacua differ by the global
topological properties of the field configuration. The condition
$|\phi|=1$ restricts the values of the matter field to the unit
circle (in the complex plane). If we demand that $\phi$ takes
fixed identical values as $x^1 \to \pm \infty$ (a condition we
later relax), then the number of times $\phi$ winds
around the unit circle as $x^1$ spans the entire real axis is a
topological invariant (the winding number) and characterizes
different topologically inequivalent vacuum states.
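The winding number is easy to extract numerically from a discretized configuration: accumulate the phase difference between neighboring lattice points (each taken in $(-\pi,\pi]$) and divide the total by $2\pi$. A minimal sketch, with our own function name and assuming the field is nonzero and takes equal values at the two spatial ends:

```python
import cmath, math

def winding_number(phi):
    """Winding number of a discretized field phi: a sequence of nonzero
    complex values with phi[0] == phi[-1] (fixed boundary values)."""
    total = 0.0
    for a, b in zip(phi, phi[1:]):
        total += cmath.phase(b / a)   # phase step, taken in (-pi, pi]
    return round(total / (2 * math.pi))

# A configuration winding once around the unit circle:
N = 200
loop = [cmath.exp(2j * math.pi * k / N) for k in range(N + 1)]
print(winding_number(loop))   # 1
```

Because each local phase step is taken modulo $2\pi$, local gauge wobbling of the phase cancels out and only the net number of turns survives, mirroring the topological invariance discussed above.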
Figures 1a-c illustrate three contours traced in the complex
plane by the field variable $\phi(x^1)$ as the coordinate $x^1$
spans the entire space axis. Inequivalent vacuum configurations
with winding numbers 0 and 1 respectively are depicted in
Figs.~1a and 1c. In the contour of Fig.~1a the phase of
$\phi$ stays fixed at zero as $x^1$ ranges between $-\infty$
and $+\infty$, whereas it goes once around the unit circle
in Fig.~1c. Thus the corresponding vacuum configurations
have winding number $0$ and $1$. The detailed variation
of the phase is immaterial since it can always be changed
locally by a gauge transformation. Thus, in Fig.~1a for
example, as $x^1$ varies from $-\infty$ to $+\infty$ the field
does not have to stay fixed, but could wander continuously
on the unit circle provided the net change in phase is zero.
However, the configuration of Fig.~1a cannot be continuously
deformed to that of Fig.~1c without leaving the manifold of zero
energy configurations. Therefore, in an evolution between
neighboring
vacua the field configuration must pass over an energy barrier,
as illustrated in Fig.~1b which singles out the configuration
for which $\phi$ vanishes at a point, rendering its phase
there undefined.
Figures 1d-f add the additional perspective
of spatial dependence for the field $\phi(x^1)$. Figures~1a-c
can be viewed as projections onto the complex plane orthogonal
to the $x^1$ axis of the curves in Figs.~1d-f.
\begin{figure}
\centerline{
\epsfxsize=120mm
\epsfbox[72 318 540 470]{fig1abc.ps}
}
\centerline{
\epsfxsize=135mm
\epsfbox[72 300 540 450]{fig1def.ps}
}
\caption{\tenrm
Example of two inequivalent vacuum configurations (a, c)
and a field configuration at the top of the energy barrier
separating them (b). Figures a, b and c trace the field
$\phi$ in the
complex plane as the spatial coordinate spans the entire
axis. A three dimensional perspective has been added
in figures d, e and f to illustrate the detailed dependence
of $\phi$ on the spatial coordinate.
}
\end{figure}
\vskip4mm
The condition that $\phi$ should take the same value at
$x^1 = \pm \infty$ can be relaxed. Sometimes it is convenient
to use the time independent gauge freedom to make
$\phi(\infty)$ and $\phi(-\infty)$ differ (while keeping both fixed
in time). Thus, the configurations of Figs.~1a-c can be gauge
transformed into the configurations shown in Figs.~2a-c.
In Fig.~2a the phase of $\phi$ changes by $-\pi$ as $x^1$
goes from $-\infty$ to $+\infty$, whereas in Fig.~2c it rotates
by $\pi$. As in Fig.~1, the two vacuum
configurations differ by a phase of $2\pi$, i.e. by a unit change
of winding number. In the intermediate configuration (Fig.~2b)
the field takes only imaginary values. In this gauge the
configuration which minimizes the energy on top of the barrier
(i.e. the sphaleron configuration) takes a very simple form
\begin{equation}
\phi(x^1)=\imath {\rm th}[\sqrt{\lambda} (x^1-c)], \quad A_{\mu}=0.
\label{eq5}
\end{equation}
A possible parameterization for the entire evolution illustrated
in Fig.~2 can be conveniently written
\begin{equation}
\phi(x^1)=\imath {1-\exp [ \imath \tau -2 \sqrt{\lambda} (x^1-c)] \over
1+\exp [\imath \tau -2 \sqrt{\lambda} (x^1-c)] },
\label{eq6}
\end{equation}
\begin{equation}
A_0=0, \quad
A_1={4 \tau \sqrt{\lambda} \over \pi {\rm ch}[2 \sqrt{\lambda} (x^1-c)]}.
\label{eq7}
\end{equation}
As the reader can easily verify, for $\tau=-\pi/2$ and
$\tau=\pi/2$ the field $\phi$ reduces to a number of
unit modulus precisely spanning the contours of Fig.~2a
and Fig.~2c respectively (as $x^1$ ranges from $-\infty$
to $+\infty$). The corresponding values of $A_1$ are
chosen to make the gauge covariant derivative of $\phi$
vanish, thus ensuring vacuum. We should point out, however,
that Eqs.~\ref{eq6},\ref{eq7} do not represent the solution of
any special equations of motion (Euclidean or Minkowski).
They are merely a compact parameterization of interpolating
configurations, in terms of two variables $c$ and $\tau$, which
might be useful in studying sphaleron transitions based on the
method of collective coordinates.
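The properties claimed above are easy to verify numerically. In the sketch below ($\lambda=0.1$ and $c=0$ are arbitrary illustrative values, not taken from the text) $\tau=\pm\pi/2$ yields a field of unit modulus everywhere, while $\tau=0$ reproduces the sphaleron configuration:

```python
import numpy as np

lam, c = 0.1, 0.0                 # illustrative values of lambda and c

def phi(x, tau):
    # the interpolating configuration of Eq. (eq6)
    z = np.exp(1j * tau - 2.0 * np.sqrt(lam) * (x - c))
    return 1j * (1.0 - z) / (1.0 + z)

x = np.linspace(-10.0, 10.0, 401)

# tau = +/- pi/2: phi has unit modulus everywhere (the two vacua of Fig. 2)
dev_vac = max(np.max(np.abs(np.abs(phi(x, s * np.pi / 2)) - 1.0))
              for s in (+1, -1))

# tau = 0: the parameterization reduces to the sphaleron, phi = i tanh
dev_sph = np.max(np.abs(phi(x, 0.0) - 1j * np.tanh(np.sqrt(lam) * x)))

print(dev_vac, dev_sph)           # both at machine-precision level
```

Both deviations vanish to machine precision, confirming that the two endpoints of the $\tau$ interval are vacua and that $\tau=0$ sits on top of the barrier.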
\begin{figure}
\centerline{
\epsfxsize=120mm
\epsfbox[72 318 540 470]{fig2abc.ps}
}
\caption{\tenrm
A different gauge equivalent representation of
the configurations illustrated in Fig.~1.
}
\end{figure}
If one couples chiral fermions to the gauge field in the
2-dimensional Abelian Higgs model the fermionic current
has an anomaly which leads to fermion number violation
in the topology changing processes described above.
Thus this model would appear a very convenient system
for a simplified study of baryon number violation in high
energy processes. However, as we will discuss in
the next section, a crucial component of the computational
investigation is the ability to numerically identify the
normal mode amplitudes of the fields in the asymptotic
regime. No matter how non-linear the system may be at
any given point in its evolution, typically the energy will
eventually disperse and bring the system to a regime where
the fields undergo small oscillations about a vacuum
configuration. This dispersion is expected to occur in any
field theoretical system, unless prevented by conservation
laws such as those underlying soliton phenomena.
Now, while the 2-dimensional Abelian Higgs model does
not possess soliton solutions, we have observed computationally
that the decay of the sphaleron in this system nevertheless
gives origin to persistent, localized, large oscillations with
an extremely small damping rate (this observation was also
made by Arnold and McLerran in Ref. \cite{am88}). These
oscillations,
illustrated in Fig.~3, make the system quite unwieldy for
a computational investigation of baryon number violation
based on semiclassical techniques. Thus we eventually
turned our attention to the more realistic 4-dimensional
$SU(2)$ Higgs system, which constitutes the most relevant
component of the full electroweak theory. Because of the
larger dimensionality of space one would expect the energy
to disperse much more readily in this system, an expectation
borne out by results of Hellmund and Kripfganz\cite{hk91},
who observed the onset of a linear regime following the
sphaleron's decay.
\begin{figure}
\centerline{
\epsfxsize=95mm
\epsfbox[27 305 581 746]{2dsph.fig.eps}
}
\caption{\tenrm
Sphaleron decay in the 2-dimensional Abelian
Higgs model: evolution of the $\phi$ field. The values
of the phase of the complex field are coded by shades
of gray, and the modulus of the field by the height of
the surface. The sphaleron decays rather quickly, but
leaves behind a quasi-stable oscillating remnant.
}
\end{figure}
The 3+1 dimensional $SU(2)$ Higgs system is defined
in terms of a complex doublet $\Phi(x)$ and the gauge
potential $A_\mu(x)$ with action
\begin{equation}
S = \int d^4x ~ \left\{- {1 \over 4} F_{\mu \nu} F^{\mu \nu} +
(D_{\mu} \Phi)^* D^{\mu} \Phi - \lambda (|\Phi|^2 -1 )^2
\right\} \ ,
\label{fourAction}
\end{equation}
where the indices run over $0\cdots 3$ and where
\begin{eqnarray}
F_{\mu\nu} &=& \partial_\mu A_\nu - \partial_\nu A_\mu
- \imath [A_\mu,A_\nu] \\
D_\mu \Phi &=& (\partial_\mu - \imath A_\mu) \Phi
\end{eqnarray}
with $A_\mu = A^a_\mu\sigma^a/2$. We use the standard
metric $\eta_{\mu\nu}={\rm diag}(1,-1,-1,-1)$, and have eliminated
many inessential constants by a suitable choice of units. We
focus on the spherically symmetric configurations of Ratra and
Yaffe\cite{ry88}, which reduce to an effective 2-dimensional
theory. This lower dimensional theory has the full topological
structure of the 4-dimensional system, while having the virtue
of being computationally manageable.
The spherical {\it Ansatz} is given by expressing the gauge
and Higgs fields in terms of six real functions $a_0\, ,\,a_1\, ,
\, \alpha\, , \, \beta\, , \, \mu\ {\rm and}\ \nu\ {\rm of}\ r\
{\rm and}\ t$:
\vfill\break
\begin{eqnarray}
A_0({\bf x},t) &=& \frac{1}{2 } \, a_0(r,t)
\mbox{\boldmath$\sigma$}\cdot {\bf\hat x}
\nonumber\\
A_i({\bf x},t) &=& \frac{1}{2 } \, \big[a_1(r,t)
\mbox{\boldmath$\sigma$}\cdot {\bf\hat x}
\hat x^i+\frac{\alpha(r,t)}{r}(\sigma^i-\mbox{\boldmath$\sigma$}
\cdot {\bf\hat x}\hat x^i)
+\frac{1+\beta(r,t)}{r}\epsilon^{ijk}\hat x^j\sigma^k\big]
\nonumber\\
\Phi({\bf x},t) &=& [ \mu(r,t) + i \nu(r,t)\mbox{\boldmath$\sigma$}
\cdot {\bf\hat x} ]
\xi \ ,
\label{SphAn}
\end{eqnarray}
where ${\bf \hat x}$ is the unit three-vector in the radial direction
and $\xi$ is an arbitrary two-component complex unit vector.
Note that configurations in the spherical {\it Ansatz} remain in
the spherical {\it Ansatz} under gauge transformations of the
form
\begin{eqnarray}
\label{sphgt}
A_\mu &&\to A_\mu + \imath U^\dagger \partial_\mu U
~~~~~~ \mu=0\cdots3 \\
\Phi \, &&\to U \Phi \ ,
\end{eqnarray}
where the gauge function is given by
\begin{equation}
\label{Usph}
U=\exp[\imath\Omega(r,t)\mbox{\boldmath$\sigma$}
\cdot {\bf\hat x}/2] \ .
\end{equation}
Inserting Eqs. \ref{SphAn} directly into Eq. \ref{fourAction},
one obtains an effective 2-dimensional theory with action
\begin{eqnarray}
S = 4\pi \int dt\int^\infty_0dr &&\bigg[-\frac{1}{4}
r^2f^{\mu\nu}f_{\mu\nu}+(D^\mu \chi)^* D_\mu \chi
+ r^2 D^\mu\phi^* D_\mu\phi
\nonumber\\
&& -\frac{1}{2 r^2}\left( ~ |\chi |^2-1\right)^2
-\frac{1}{2}(|\chi|^2+1)|\phi|^2 - {\rm Re}(i \chi^* \phi^2)
\nonumber \\
&& -\lambda \, r^2 \, \left(|\phi|^2 - 1\right)^2 ~
\bigg] \ ,
\label{effAction}
\end{eqnarray}
where the indices now run from $0$ to $1$ and in contrast
to Ref. \cite{ry88} are raised and lowered with
$\eta_{\mu\nu}={\rm diag}(1,-1)$, and where
\begin{eqnarray}
f_{\mu\nu}&=& \partial_\mu a_\nu-\partial_\nu a_\mu\
\label{defConva} \\
\chi &=&\alpha+\imath \beta
\label{defConvb} \\
\phi &=& \mu+\imath \nu\
\label{defConvc} \\
D_\mu \chi &=& (\partial_\mu- \imath \, a_\mu)\chi
\label{defConvd} \\
D_\mu \phi&=& (\partial_\mu - \frac{\imath}{2} \, a_\mu)\phi\ .
\label{defConve}
\end{eqnarray}
The reduced action, Eq. \ref{effAction}, is invariant under
the gauge transformation
\begin{eqnarray}
a_\mu &&\to a_\mu + \partial_\mu \Omega
\label{amueq}\\
\chi \, &&\to e^{\imath \Omega} \chi \ ,
\label{chieq}\\
\phi \, &&\to e^{\imath \Omega/2} \phi
\label{phieq}
\end{eqnarray}
which corresponds to the residual $U(1)$ gauge invariance
of Eqs. \ref{sphgt}-\ref{Usph}.
{}From Eqs. \ref{effAction}-\ref{chieq}, we see that the
spherical {\it Ansatz} effectively
yields a system very similar to the Abelian Higgs model
considered above. In this reduced system the variables
$a_0(r,t)$ and $a_1(r,t)$ play the role of the gauge field,
whereas the variables $\chi(r,t)$, which parameterizes
the residual components of the 4-dimensional gauge field,
and $\phi(r,t)$, which parameterizes the 4-dimensional Higgs
field, both behave as 2-dimensional Higgs fields. Of course,
the presence of metric factors (powers of $r$) in the
action Eq.~\ref{effAction} is a reminder that we are really
dealing with a 4-dimensional system.
Regularity of the 4-dimensional field configuration for $r=0$
requires
\begin{eqnarray}
\label{atzero}
&&\chi(r=0)=-\imath \\
\nonumber
&& {\rm Im}\phi(r=0)=0 \ .
\end{eqnarray}
Although one could modify these boundary conditions
by a singular gauge transformation, this would introduce
unnecessary complications. So we will impose
Eqs.~\ref{atzero}, in addition to some other boundary
conditions that also follow from the regularity of the
4-dimensional configuration.
It is also worth noting here that non-singular
gauge transformations satisfy the condition that the gauge
function $\Omega(r,t)$ vanishes at $r=0$ (or is a multiple
of $2\pi$).
We shall work in the $a_0=0$ (or $A_0=0$) gauge throughout. In
the underlying 4-dimensional theory, if one compactifies 3-space
to $S^3$ by identifying the points at infinity, it is well known that
the vacua correspond to the topologically inequivalent
ways that $S^3$ can be mapped into $SU(2)\sim S^3$\cite{JR}.
These maps are characterized by the third homotopy group of
$SU(2)$, and a vacuum can be labeled by an integer called the
homotopy index or winding number. The effective 2-dimensional
theory inherits a corresponding vacuum structure. From
Eq.~\ref{effAction} it is apparent that the vacuum states are
characterized by $|\chi|=|\phi|=1$, with the additional constraint
that $\imath \chi^* \phi^2 =-1$ (as well as $D_1 \chi =D_1 \phi =0$).
A convenient zero-winding vacuum is given by
$\chi_{vac}=-\imath$, $\phi_{vac}=1$. Nontrivial vacua can be
obtained from this vacuum via gauge transformations in which
the gauge function $\Omega \to 2\pi n$ (for non-zero integers
$n$) as $r\to\infty$. Note that the compactification of 3-space
ensures that at infinity $\Omega$ is a multiple of $2\pi$. Since
$\Omega$ has been locked down to zero at
$r=0$, the winding of such a gauge transformed configuration
is just the integer $n$. For example, a typical winding-number
one vacuum obtained from the previous trivial vacuum is given
by $\chi_{vac}=-\imath \exp[\imath \theta(r)]$, $\phi_{vac}=
\exp[\imath \theta(r)/2]$ and $a_{1 \, vac}=\partial_r \theta(r)$,
where $\theta(r)$ varies from $0$ to $2\pi$ as $r$ changes from
$0$ to $\infty$. By taking advantage of the freedom of performing
a time independent gauge transformation, however, one can
also choose a gauge where $\chi(r)$ and $\phi(r)$ tend to values
different from $-\imath$ and $1$ as $r\to\infty$ (the condition
$\imath \chi^*\phi^2 \to -1$ must be preserved). Indeed, it will
be convenient to choose one such gauge to parameterize the
sphaleron.
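In this gauge (with $\Omega=0$ at $r=0$) the winding number can be read off directly as the phase of $\chi$ accumulated along the $r$-axis, in units of $2\pi$. A minimal sketch, using an assumed profile $\theta(r)=2\pi\tanh r$ purely for illustration:

```python
import numpy as np

def winding(chi):
    # accumulated phase of chi along the r-axis, in units of 2*pi;
    # np.angle of successive ratios keeps each increment on (-pi, pi]
    return np.sum(np.angle(chi[1:] / chi[:-1])) / (2.0 * np.pi)

r = np.linspace(0.0, 50.0, 2001)
theta = 2.0 * np.pi * np.tanh(r)   # assumed profile: 0 -> 2*pi as r grows

chi_triv = -1j * np.ones_like(r)           # trivial vacuum, chi = -i
chi_one = -1j * np.exp(1j * theta)         # gauge image with Omega -> 2*pi
print(winding(chi_triv), winding(chi_one)) # 0 and (essentially) 1
```

The phase increments must stay below $\pi$ per grid step for the branch-safe sum to be meaningful, which the resolution above guarantees.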
In making a topological transition between two inequivalent
vacua, one must leave the manifold of vacuum configurations
and pass over an energy barrier. Along such a trajectory there
will be a configuration of maximum energy. Of all these maximal
energy configurations, the sphaleron has the lowest
energy and represents a saddle point along the energy
ridge separating inequivalent vacua\cite{KM}. The sphaleron
can be expressed in the spherical {\it Ansatz}, and it is
convenient to choose a gauge in which $a_\mu=0$ and
\begin{eqnarray}
\label{sphSphal}
&&\chi_{sph}(r)=\imath [2 f(r)-1] \\
\nonumber
&& \phi_{sph}(r)=\imath h(r) \ ,
\end{eqnarray}
where $f$ and $h$ vary between $0$ and $1$ as $r$
changes from $0$ to $\infty$ and are chosen to minimize
the energy functional.
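The minimizing profiles $f$ and $h$ can be found by relaxation. Substituting Eq.~\ref{sphSphal} (with $a_\mu=0$) into the static part of Eq.~\ref{effAction} gives, up to the overall factor $4\pi$, the energy functional $\int_0^\infty dr\,[\,4f'^2+r^2h'^2+8f^2(1-f)^2/r^2+2(1-f)^2h^2+\lambda r^2(h^2-1)^2\,]$. The sketch below performs a simple preconditioned gradient descent on a discretization of this functional; the grid, step size and $\tanh$ starting profiles are arbitrary illustrative choices, not taken from the text:

```python
import numpy as np

lam = 0.1
N, dr = 400, 0.05
r = np.arange(N + 1) * dr

def energy(f, h):
    # discretized E[f,h]; gradient terms live on links, potential on sites
    df, dh = np.diff(f) / dr, np.diff(h) / dr
    rmid = 0.5 * (r[1:] + r[:-1])
    grad = np.sum(4.0 * df**2 + rmid**2 * dh**2) * dr
    rs = r[1:]                     # skip r=0, where f(0)=h(0)=0
    pot = (8.0 * f[1:]**2 * (1.0 - f[1:])**2 / rs**2
           + 2.0 * (1.0 - f[1:])**2 * h[1:]**2
           + lam * rs**2 * (h[1:]**2 - 1.0)**2)
    return grad + np.sum(pot) * dr

def grad_fh(f, h):
    # discretized Euler-Lagrange variations of E[f,h] (interior sites only)
    gf, gh = np.zeros_like(f), np.zeros_like(h)
    rs = r[1:-1]
    lap_f = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dr**2
    rmid = 0.5 * (r[1:] + r[:-1])
    flux = rmid**2 * np.diff(h) / dr          # r^2 h' on the links
    gf[1:-1] = (-8.0 * lap_f
                + 16.0 * f[1:-1] * (1 - f[1:-1]) * (1 - 2 * f[1:-1]) / rs**2
                - 4.0 * (1 - f[1:-1]) * h[1:-1]**2)
    gh[1:-1] = (-2.0 * (flux[1:] - flux[:-1]) / dr
                + 4.0 * (1 - f[1:-1])**2 * h[1:-1]
                + 4.0 * lam * rs**2 * (h[1:-1]**2 - 1.0) * h[1:-1])
    return gf, gh

f, h = np.tanh(r), np.tanh(r)      # obeys f(0)=h(0)=0 and f,h -> 1
e0 = energy(f, h)
step = 5e-5
for _ in range(8000):
    gf, gh = grad_fh(f, h)
    f -= step * gf
    h -= step * gh / np.maximum(r**2, dr**2)  # precondition the r^2 stiffness
e1 = energy(f, h)
print(e0, "->", e1)                # the energy relaxes toward the sphaleron
```

This is only a sketch of the relaxation idea; a production computation would iterate to convergence and monitor the residual of the variational equations.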
\begin{figure}
\centerline{
\epsfxsize=120mm
\epsfbox[72 318 540 470]{fig3abc.ps}
}
\caption{\tenrm
The behavior of the $\phi$ field for a typical topological
transition. The $\chi$ field has a behavior similar to the
one in Fig. 2.
}
\end{figure}
This choice of gauge for the sphaleron is slightly
peculiar in the following sense. Finite energy
configurations, like Eq. \ref{sphSphal}, asymptote
to pure gauge at spatial infinity. Typically a gauge
is chosen so that the appropriate gauge function
is unity at spatial infinity, and then space can be
compactified to the 3-sphere. But consider the spherical
gauge function Eq. \ref{Usph} with $\Omega(r)$ independent
of time and varying from $0$ to $\pi$ as $r$ goes from
$0$ to $\infty$. From Eqs. \ref{chieq}-\ref{phieq} we
see that the
corresponding $\chi,\phi \to \imath$ for large $r$,
which is the same as the sphaleron boundary conditions
at spatial infinity. Note, however, that $U\mid_{r\to\infty}
=\imath
\mbox{\boldmath$\sigma$}\cdot {\bf\hat x}$
is direction dependent, so space cannot be
compactified. An arbitrary element of $SU(2)$ can be
parameterized by ${\rm b}_0 {\bf 1} + \imath \mbox{\boldmath$\sigma$}\cdot
{\bf b}$ where
${\bf 1}$ is the two by two unit matrix and
${\rm b}_0^2+{\bf b}^2=1$. Hence $SU(2)\sim S^3$,
and defining the north and south poles by
$\pm {\bf 1}$, we see that $\imath
\mbox{\boldmath$\sigma$}\cdot {\bf b}$
with ${\bf b}^2=1$ parameterizes the equatorial
sphere. Thus, the sphere at infinity is
mapped onto the equatorial sphere of $SU(2)$.
In this gauge, a topology
changing transition proceeding over the sphaleron
corresponds to a transition where the fields wind
over the lower hemisphere of $SU(2)$ before the
transition and over the upper hemisphere after the
transition, with a net change in winding number still
equal to one. In this gauge, the behavior of the $\chi$
field in a
topological transition will be very similar to the behavior
of the Higgs field in the 2-dimensional model, already
illustrated in Fig.~2. The behavior of the $\phi$ field
is illustrated in Fig. 4. We could of course work in a
compactified gauge where a topological transition
would occur between a field with no winding and a field
with unit winding, as in Fig.~1, but the sphaleron would
look more complicated. The advantage of Eq. \ref{sphSphal}
from a computational perspective is that perturbations about
the sphaleron can be more easily parameterized.
In the sphaleron configuration the $\chi$ field has a zero at
some non-zero value of $r$, whereas the $\phi$ field has a zero
for $r=0$ corresponding to the vanishing at the origin of the
actual 4-dimensional Higgs field. The zero of $\chi$ is
reminiscent of the zero which characterizes the sphaleron
of the Abelian Higgs model. However, as shown in
Ref. ~\cite{fggrs},
it is the zero of the true Higgs field (i.e. the zero
of $\phi$) which carries a deeper significance and should be
associated with the actual occurrence of the topological
transition. Nevertheless, since the phase changes of $\chi$ are
more dramatic than those of $\phi$, for purposes of
illustration it is often more convenient to plot $\chi$.
Figure~5 illustrates a typical topological transition. The
configuration starts as a small excitation about the
trivial vacuum defined above, passes over the sphaleron
and then emerges as a configuration that undergoes a
$2\pi$ phase change.
\begin{figure}
\centerline{
\epsfxsize=80mm
\epsfbox[27 240 581 746]{sph9.2_3.fig.eps}
}
\caption{\tenrm
Behavior of the $\chi$ field in a topology changing transition.
The various shades of gray code the phase of the complex
field. The field starts as an excitation about the trivial vacuum,
passes over the sphaleron and then emerges as an excitation
about the vacuum of unit winding. Note the persistent strip of
$2\pi$ phase change after the wave bounces off the origin.
}
\end{figure}
\section{Numerical study of classically allowed processes}
\hspace{1cm}
For energies larger than the sphaleron energy $E_{sph}$, i.e. for
$\epsilon > \epsilon_{sph}= g^2 E_{sph}$, classical evolution
which changes the topology of the fields becomes possible.
Solutions to the classical equations of motion provide the dominant
contributions in a weak coupling expansion of the path integrals
describing transition processes between coherent states. The
existence of topology-changing solutions indicates that the
corresponding processes, which because of the change of topology
violate baryon number, are not suppressed. However it would be
premature to conclude that baryon number violation can occur with
non-negligible amplitude in high energy collisions. Indeed, the number
of particles in the initial state of such processes is small and, in terms
of the rescaled fields used in the description of the classical evolution,
this converts into a rescaled particle number $\nu = g^2 N$ which
tends to zero as $g \to 0$. Thus, in order to establish evidence for
baryon number non-conservation in high energy collisions, one
must show that topology changing classical evolution can occur
with arbitrarily small $\nu$ in the initial state.
The primary impediment for rapid baryon number violation seems
to be a phase space mismatch between initial states of low multiplicity
and final states of many particles. The authors of Ref. \cite{gr} look at
simplified models and observe that, classically, it is difficult to transfer
energy from a small number of hard modes to a large number of soft
modes. However, Ref. \cite{wdymt} finds that for pure Yang-Mills
theory in 2-dimensions the momenta can be dramatically redistributed,
but unfortunately the initial particle number seems to be rather
large in their domain of applicability. It is the purpose of our
investigation to shed light on the situation in 4-dimensions when the
Higgs field is added.
Given any classical evolution, because of the dispersion of the
energy, the fields will asymptotically approach vacuum values.
Thus, for times $t < -T_i$ and $t > T_f$ and sufficiently large
$T_i, \, T_f$, the equations of motion will reduce to linearized
equations describing the small oscillations of the system about
a vacuum configuration. In this linear regime, the evolution of
the fields will be given by a superposition of independent harmonic
oscillators (the normal modes). In terms of the frequencies $\omega_n$
and amplitudes $a_n$ of these oscillators the (rescaled) energy and
particle number are given by
\begin{equation}
\epsilon = \sum_n \omega_n a_n^* a_n
\label{eq31}
\end{equation}
and
\begin{equation}
\nu = \sum_n a_n^* a_n .
\label{eq32}
\end{equation}
Thus for any classical evolution the energy $\epsilon$ and the
particle numbers $\nu_i$ and $\nu_f$ of the asymptotic initial and
final states are well defined. (The energy is of course conserved
and well defined even in the non-linear regime.) In addition,
because of the fact that the fields approach vacuum values for
$t \to \pm \infty$, the winding numbers of initial and final
configuration are also well defined. Because of the sphaleron
barrier, the energy $\epsilon$ of all the classical solutions with
a net change of winding number is bounded below by the sphaleron
energy $\epsilon_{sph}$. The problem one would like to solve, then,
is whether the initial particle number $\nu_i$ of these solutions can
be arbitrarily small, or more generally, one would like to map the
region spanned by all possible values of $\epsilon$ and $\nu_i$ for
the topology changing classical evolution. The highly non-linear
nature of the equations of motion makes an analytic solution
unlikely, even if one is willing to make the crudest approximations.
The problem is however amenable to solution by computational
techniques. In this section we will illustrate the strategy we have
followed to formulate it on the computer and the progress we have
been able to make towards its solution.
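The bookkeeping of Eqs.~\ref{eq31}-\ref{eq32} can be made concrete for a generic discretized linear system. The toy example below (a free lattice field with Dirichlet boundary conditions and illustrative parameters, not the normal modes of the actual model) extracts the amplitudes $a_n$ by projecting $(q,p)$ onto the eigenvectors of the quadratic form, and checks that $\epsilon$ reproduces the Hamiltonian while $\nu$ is conserved by the linear evolution:

```python
import numpy as np

# free lattice field: H = p.p/2 + q.K q / 2, K tridiagonal
N, dr, m2 = 64, 0.1, 1.0
K = ((2.0 / dr**2 + m2) * np.eye(N)
     - (1.0 / dr**2) * (np.eye(N, k=1) + np.eye(N, k=-1)))
w2, V = np.linalg.eigh(K)          # normal modes: K = V diag(w^2) V^T
w = np.sqrt(w2)

rng = np.random.default_rng(0)
q, p = rng.normal(size=N), rng.normal(size=N)

def amplitudes(q, p):
    Q, P = V.T @ q, V.T @ p        # normal-mode coordinates
    return (w * Q + 1j * P) / np.sqrt(2.0 * w)

a = amplitudes(q, p)
eps = np.sum(w * np.abs(a)**2)     # energy as a sum over modes
nu = np.sum(np.abs(a)**2)          # particle number
H = 0.5 * p @ p + 0.5 * q @ (K @ q)
print(abs(eps - H))                # epsilon reproduces the Hamiltonian

# under the linear evolution each mode rotates, a_n -> a_n e^{-i w_n t},
# so both epsilon and nu are conserved
t = 3.7
Q, P = V.T @ q, V.T @ p
Qt = Q * np.cos(w * t) + P * np.sin(w * t) / w
Pt = -w * Q * np.sin(w * t) + P * np.cos(w * t)
a_t = (w * Qt + 1j * Pt) / np.sqrt(2.0 * w)
print(np.max(np.abs(np.abs(a_t) - np.abs(a))))
```

In the actual computation the eigenvectors come from the linearized equations of the reduced theory rather than from this toy matrix, but the projection and the conservation checks work in the same way.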
The fundamental computational ingredient consists in the
implementation of a numerical solution of the equations of
motion. We start from the Hamiltonian formulation of the
equations of motion for the continuum system in the $a_0=0$
gauge. In such a formulation the variables
\begin{equation}
a_1(r), \quad \chi(r), \quad \phi(r)
\label{eq33}
\end{equation}
form a set of canonical coordinates, conjugate to the momenta
\begin{eqnarray}
E(r)= &&r^2 \partial_0 a_1, \nonumber \\
\pi_{\chi}(r)=&&\partial_0 \chi, \nonumber \\
\pi_{\phi}(r)=&& r^2\partial_0 \phi \ .
\label{eq34}
\end{eqnarray}
The evolution of these variables is generated by the Hamiltonian
\begin{eqnarray}
H = \int^\infty_0dr \, && \bigg[\frac{E^2}{2r^2} +|\pi_{\chi}|^2
+{|\pi_{\phi}|^2 \over r^2} + |D_r \chi|^2 + r^2 |D_r\phi|^2
\nonumber\\
&& +\frac{1}{2 r^2}\left( ~ |\chi |^2-1\right)^2
+\frac{1}{2}(|\chi|^2+1)|\phi|^2 + {\rm Re}(i \chi^* \phi^2)
\nonumber \\
&& +\lambda \, r^2 \, \left(|\phi|^2 - 1\right)^2 ~
\bigg] \ .
\label{eq35}
\end{eqnarray}
Gauss' law
\begin{equation}
\partial_r E =\imath(\pi_{\chi}^* \chi - \chi^* \pi_{\chi})
+\imath(\pi_{\phi}^* \phi - \phi^* \pi_{\phi})
\label{eq36}
\end{equation}
expresses the residual invariance of the system under time
independent local gauge transformations and is imposed as
a condition on the initial state. It is then automatically conserved
by the equations of motion.
To solve the equations of motion numerically the system must be
discretized. It is convenient to use the formalism of lattice gauge
theories. The $r$-axis is subdivided into $N$ equal subintervals
of length $\Delta r$ with finite length $L= N \, \Delta r$. Thus,
the lattice sites have spatial coordinates $r_i=i\Delta r$ with
$i=0 \cdots N$, and the
midpoints between lattice sites have coordinates
$r_{i+1/2}=(i+1/2)\Delta r$ with $i=0 \cdots N-1$. The fields
$\chi$, $\phi$, $\pi_\chi$ and $\pi_\phi$ are then represented
by discrete variables defined over the end points of the intervals,
$\chi_i=\chi(r_i)$, $\phi_i=\phi(r_i)$, etc.; whereas the fields
$a_1$ and $E$ are defined over the intervals themselves
by $a_{1 i}\equiv a_i=a_1(r_{i+1/2})$ and $E_i=E(r_{i+1/2})$
(for notational
simplicity we have dropped the spatial subscript on the
discretized gauge field). The covariant derivatives are then
replaced by covariant finite differences, e.g.
\begin{equation}
D_r \chi \to { \exp[- \imath a_i\, \Delta r] \,
\chi_{i+1}- \chi_i \over \ \Delta r}
{}~~~~~~~ i=0 \cdots N-1\ ,
\label{eq37}
\end{equation}
and like the gauge fields they are to be thought of as living
on the links between lattice sites.
The rest of the discretization is straightforward. One obtains
a discretized Hamiltonian $H_D$ expressed in terms of a finite set of
variables, which still possesses exact local gauge invariance
under the transformations of Eqs.~\ref{amueq}-\ref{chieq}
provided that $a_1 \to a_1 + \partial_r \Omega$ is replaced
with the finite difference formula
\begin{equation}
a_i \to a_i +\frac{\Omega_{i+1} - \Omega_i} {\Delta r}
~~~~~~~ i=0 \cdots N-1 \ ,
\end{equation}
where $\Omega_i=\Omega(r_i)$. From $H_D$ one can
easily obtain the canonical evolution equations for the
discretized variables. Gauss' law, which now takes a
discretized form, must be imposed on the initial state and
is then preserved (exactly) by the time evolution because
of the gauge invariance of the discretized system. In practice
we have used values of $N$ equal to 256, 512 and 1024
and values of $\Delta r$ equal to 0.2, 0.1 and 0.05,
respectively, to study the properties of the
system (with $\lambda=0.1$). We found these parameters
to be adequate for obtaining, on the one hand, a reasonable
approximation to the continuum system and, on the other,
a cut-off on $r$ sufficiently large to allow for an effective
linearization of the equations of motion before the waves
hit the boundary. The restriction to uniform spacing of the
subintervals on the $r$-axis is not fundamental and we have
also implemented a discretization where $\Delta r$ increases
as one moves out on the $r$-axis. In this manner one can
effectively make the system larger and delay the effects of
the impact of the waves
with the boundary without worsening the spatial resolution near
$r=0$, where most of the non-linear dynamics takes place. We
have found however that the advantages one gains hardly warrant
the additional complications introduced by the non-uniform spacing.
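A virtue of the covariant finite difference of Eq.~\ref{eq37} is that the gauge invariance of the discretized Hamiltonian is exact, not merely accurate to some order in $\Delta r$. A short numerical check with random fields (sizes, spacing and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, dr = 64, 0.1
chi = rng.normal(size=N + 1) + 1j * rng.normal(size=N + 1)  # site field
a = rng.normal(size=N)                                      # link field

def cov_diff(chi, a):
    # covariant finite difference of Eq. (eq37), defined on the links
    return (np.exp(-1j * a * dr) * chi[1:] - chi[:-1]) / dr

# lattice gauge transformation with an arbitrary gauge function Omega_i
Omega = rng.normal(size=N + 1)
chi_g = np.exp(1j * Omega) * chi
a_g = a + np.diff(Omega) / dr

# the covariant difference only picks up the phase exp(i Omega_i),
# so |D_r chi|^2 is invariant exactly
d, d_g = cov_diff(chi, a), cov_diff(chi_g, a_g)
print(np.max(np.abs(np.abs(d_g)**2 - np.abs(d)**2)))   # machine precision
```

The same cancellation is what makes the discretized Gauss law exactly conserved by the evolution.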
For the numerical integration of the time evolution we have used the
leap-frog algorithm. Since this algorithm, or the equivalent velocity
Verlet algorithm, constitutes one of the fundamental techniques for
the integration of ordinary differential equations of the Hamiltonian
type and as such is textbook material, we will not discuss it in any
great detail. Essentially, given conjugate canonical variables
$q_i$ and $p_i$ which obey equations
\begin{eqnarray}
{dq_i \over dt} = g_i(p), \nonumber\\
{dp_i \over dt} = f_i(q),
\label{eq38}
\end{eqnarray}
one evolves the values of $q$ and $p$ from some initial $t$ to
$t+\Delta t$ as follows. In a first step $p_i$ is evolved to the
mid-point of the time interval by
\begin{eqnarray}
p_i \to p_i'=&& p_i+f_i(q) {\Delta t \over 2}, \nonumber \\
q_i \to q_i'=&& q_i \ .
\label{eq39}
\end{eqnarray}
(Although $q_i$ is left unchanged, it is convenient to consider the
step formally as a transformation of the entire set of canonical
variables.) In a second step one evolves the coordinates from
their initial value $q_i=q_i'$ to their value at the end of the interval
\begin{eqnarray}
p_i' \to p_i''=&& p_i',\nonumber \\
q_i' \to q_i''=&& q_i' +g_i(p') \Delta t \ .
\label{eq40}
\end{eqnarray}
Finally the momenta are evolved from their value at the midpoint
to the final value
\begin{eqnarray}
p_i'' \to p_i'''=&& p_i''+f_i(q'') {\Delta t \over 2}, \nonumber \\
q_i'' \to q_i'''=&& q_i'' \ .
\label{eq41}
\end{eqnarray}
One can easily verify that these equations reproduce the correct
continuum evolution from $t$ to $t+\Delta t$ up to errors of order
$(\Delta t)^3$. Moreover, the algorithm has the very nice property
that all three steps above constitute canonical transformations
and that it is reversible (in the sense that starting from $q_i'''$,
$-p_i'''$, up to numerical errors one would end up exactly with
$q_i$, $-p_i$). Another very nice feature of the algorithm is
that, although the evolution of the variables is affected by
errors of order $(\Delta t)^3$, the energy of a harmonic oscillator,
and therefore also of any system which can
be decomposed into a linear superposition of harmonic oscillators,
is conserved exactly (always up to numerical errors).
In a sequence of several iterations of the algorithm, after the
momenta have been evolved by the initial $\Delta t/2$,
the first and third step can be combined into a single step whereby
the momenta are evolved from the midpoint of one interval to the
midpoint of the next one, ``hopping over'' the coordinates, which
are evolved from endpoint to endpoint. This motivates the name
assigned to the algorithm.
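As a minimal stand-alone illustration of these properties, the sketch below applies the leap-frog algorithm to a single harmonic oscillator (not one of the lattice fields) and checks reversibility and the absence of secular energy drift:

```python
import numpy as np

def leapfrog(q, p, force, dt, nsteps):
    # kick-drift-kick; over many steps the two half-kicks of adjacent
    # steps merge and the momenta "hop over" the coordinates
    p = p + 0.5 * dt * force(q)
    for _ in range(nsteps - 1):
        q = q + dt * p
        p = p + dt * force(q)
    q = q + dt * p
    p = p + 0.5 * dt * force(q)
    return q, p

omega, dt, n = 1.0, 0.01, 10000
force = lambda q: -omega**2 * q
energy = lambda q, p: 0.5 * p**2 + 0.5 * omega**2 * q**2

q0, p0 = 1.0, 0.0
q1, p1 = leapfrog(q0, p0, force, dt, n)
drift = abs(energy(q1, p1) - energy(q0, p0))   # bounded, O(dt^2) oscillation

# reversibility: evolving back from (q1, -p1) returns to the start
q2, p2 = leapfrog(q1, -p1, force, dt, n)
print(drift, abs(q2 - q0), abs(p2 + p0))
```

The energy error stays bounded for arbitrarily long runs (there is no secular drift), and the backward run recovers the initial data up to floating-point roundoff.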
With a good grasp on the numerical solution of the equations of
motion, we can now turn to the second crucial component of the
computation, namely the identification of the particle number in
the initial state. One could easily parameterize an initial
configuration of the system consisting of incoming waves in the
linear regime. However, it would be extremely difficult to adjust
the parameters so as to insure that a change of winding number
occurs in the course of the subsequent evolution. For this reason
it is much better to parameterize the initial configuration of the
system at the moment when a change of topology occurs. Thus our
strategy consists in implementing a time-reversed solution of the
equations of motion, where the initial configuration is the
configuration of the system at the moment when it passes over the
sphaleron barrier and the asymptotic configuration for large $t$ will
be interpreted as a time-reversed incoming state.
Topology changing transitions within the spherical ansatz are
characterized by the vanishing of
$\phi$ at $r=0$ and the vanishing of $\chi$ at nonzero $r$.
For a sequence of configurations that pass directly through the
sphaleron these two zeros occur at the same time. However,
this is not the most general case and the zeros of $\phi$ and
$\chi$ need not occur simultaneously \cite{gi}. So we have
parameterized initial configurations in terms of coefficients
$c_n$ of some suitable expansion of the fields and their conjugate
momenta, constrained only by the fact that the field $\chi$ must
have a zero at some finite $r$, as in the sphaleron configuration.
Furthermore, we can use gauge invariance to make this field pure
imaginary. The field $\phi$ is only restricted to obey the boundary
conditions and does not necessarily vanish at the origin (although
it will vanish at some point in its evolution). \footnote{Of course we
could equally well arrange $\phi$ to be pure imaginary and to
vanish at the origin, with no restriction on $\chi$ other than its
boundary conditions.}
In practice, computational considerations will limit
the number of parameters we will be able to use.
We will then evolve the system until the dispersion of the energy
brings it to the linear domain. At this point, we can calculate
the amplitudes of the normal modes of oscillation and the particle
number of the system in its asymptotic state. It is clear that
every set of coefficients $c_n$ will determine one definite value
for the energy $\epsilon$ and particle number $\nu$ of the asymptotic state.
In calculating the particle number, one
should use only the lower lying modes since higher modes
probe wave lengths of order the lattice spacing. For our
lattice parameters, we found that considering the first 50 to
100 modes is reasonable.
We should point out that an arbitrary initial configuration is
not necessarily guaranteed to change topology. However, by
evolving the configuration both forward and backward in time we
can easily verify whether topology changes, and initial
configurations that do not change the topology can be rejected.
By reversing the time evolution, then, we will have defined
an initial asymptotic configuration with energy $\epsilon$ and
particle number $\nu$, which in the course of its evolution
undergoes a change of topology. By varying the values of the
parameters $c_n$ we will be able to study the properties of such
field evolution and, in particular, explore the domain of permissible
values for $\epsilon$ and $\nu$. It should be obvious at this
point that the determination of the normal mode amplitudes is
another crucial ingredient of the computation.
The normal modes of oscillation can be calculated starting from
the linearized equations of motion. These equations can be obtained
from an expansion of the Hamiltonian up to second order in the
deviation of fields from a vacuum configuration. For conciseness
of presentation we will not reproduce here their explicit form, but
will limit ourselves to a discussion of the general properties of the
normal modes. The normal modes can be obtained by assuming
an oscillatory evolution for the fields of the type
\begin{eqnarray}
\delta \chi(r,t) = && \chi^{(n)}(r) \sin[\omega_n t]
\nonumber \\
\pi_{\chi}(r,t) = && \pi_{\chi}^{(n)}(r) \cos[\omega_n t] \ .
\label{eq42}
\end{eqnarray}
In this equation we have made reference only to one pair of
conjugate fields, but the equation should be complemented with a similar
assumption for the pairs $a_1$ and $E$,
$\delta \phi$ and $\pi_\phi$,
which are a priori all coupled together with $\delta \chi$ and
$\pi_\chi$
in the equations of evolution. We have used the symbol $\delta$
to denote the deviations of the fields $\chi$ and $\phi$ from
their vacuum values $\chi= -\imath$ and $\phi=1$. All of the other
fields vanish in the trivial vacuum configuration.
For the discretized system, $r$ should be replaced by an index $i$.
Substituting the {\it Ansatz} of Eqs.~\ref{eq42} into the equations of motion
one obtains a set of eigenvalue equations. Their solution determines
the possible values of $\omega_n$ as well as the corresponding
eigenfunctions (or more properly eigenvectors in the discretized case)
$\delta \chi^{(n)} (r)$, $\pi_{\chi}^{(n)} (r)$ etc.
We have determined eigenvalues and eigenfunctions both numerically,
for the discretized system, and analytically, for the continuum
system. One finds that the normal modes of oscillations are naturally
grouped together into four sets of modes:
i) a set of modes where only the imaginary part of the fields $\delta \chi$
and $\pi_{\chi}$ is non-vanishing. These correspond to an oscillation of the
modulus of the $\chi$ field.
ii) a set of modes where only the real part of the fields $\delta \phi$
and $\pi_{\phi}$ is non-vanishing. These correspond to an oscillation of the
modulus of the $\phi$ field.
iii) and iv) two sets of modes where the real part of $\delta \chi$,
$\pi_{\chi}$, the imaginary part of $\delta \phi$, $\pi_{\phi}$ as well
as $a_1$ and $E$ are coupled together and non-vanishing. These
correspond to oscillations of the phases of $\chi$ and $\phi$
(in coherence or opposition of phase), accompanied by corresponding
oscillations of the gauge fields.
Given the expressions for the eigenfunctions it is possible
to extract the amplitudes of the normal modes of oscillation
$a_n(t)$ by taking suitable convolutions of the eigenfunctions
with the fields and momenta. This procedure exploits various
properties of orthogonality which the eigenfunctions satisfy.
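To illustrate the procedure, the sketch below diagonalizes the linearized force matrix of a toy one-dimensional field of mass $m$ on a grid with Dirichlet boundaries and extracts mode amplitudes by projection onto the orthonormal eigenvectors; the grid and mass parameter are purely illustrative, not those of the actual model.

```python
import numpy as np

# Substituting the oscillatory ansatz into the linearized equations of
# motion gives  K x_n = omega_n^2 x_n,  with K the symmetric matrix of
# the discretized linear force.  Toy model: massive 1D field, Dirichlet ends.
N, L, m = 256, 51.2, 1.0                     # illustrative parameters
h = L / (N + 1)                              # grid spacing
K = (np.diag(np.full(N, 2.0 / h**2 + m**2))
     + np.diag(np.full(N - 1, -1.0 / h**2), 1)
     + np.diag(np.full(N - 1, -1.0 / h**2), -1))

evals, evecs = np.linalg.eigh(K)             # evecs[:, n] is the n-th mode
omega = np.sqrt(evals)                       # normal-mode frequencies omega_n

def amplitudes(field, momentum):
    """Extract mode amplitudes by convolution with the eigenvectors,
    exploiting their orthonormality (cf. the procedure in the text)."""
    a_sin = evecs.T @ field                  # coefficients of sin(omega_n t)
    a_cos = (evecs.T @ momentum) / omega     # coefficients of cos(omega_n t)
    return a_sin, a_cos
```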
One subtle point, however, involves the need to fix the gauge.
The normal modes are obtained on the basis of an expansion
into small oscillations around the trivial vacuum configuration
with constant fields (one could of course expand around any
fixed vacuum configuration, but the formulae are much simpler
if one expands around constant fields). However there is no
guarantee that the evolution of the fields will lead to an
asymptotic regime of small fluctuations precisely around
such a vacuum configuration. Indeed, in general this will
not happen and the actual configuration will typically differ
from the one used to derive the normal modes by a large
gauge transformation. The remedy is easy enough. One
can perform a gauge transformation to a fixed gauge which
differs at most by small fluctuations from the one where the
expansion has been performed. In our computations we have
used the unitary gauge defined by ${\rm Arg}\chi(r) = -\pi/2$,
transforming the fields to this gauge at some definite time
$t_0$, at which point the appropriate amplitudes may be
extracted. At subsequent times the fields might no
longer be small perturbations about the trivial vacuum,
so every time the amplitudes are calculated we first transform to
the above gauge. An alternative approach, which we have
also implemented, consists in deriving the linearized equations
of motion for a complete set of gauge invariant quantities
\cite{gi} (we have used $|\chi| -1$, $|\phi| -1$, the electric
field $E$, the difference of phases ${\rm Arg}\chi -
2{\rm Arg}\phi$ and the time derivatives of all these quantities).
Following a procedure similar to the one outlined above
(cf. Eq.~\ref{eq42}) one can derive the eigenvalues and
eigenfunctions for the small oscillations of these quantities
(the eigenvalues are identical, of course, to those obtained
using gauge variant quantities) and extract the amplitudes
through suitable convolutions with the evolving fields. We
have verified that the two procedures produce identical
results.
\begin{figure}
\centerline{
\epsfxsize=80mm
\epsfbox[27 240 581 746]{4dsph.fig.eps}
}
\caption{\tenrm
Sphaleron decay in the four dimensional $SU(2)$ Higgs model:
evolution of the $\chi$ field. The values of the phase of the
complex field are coded by different shades of gray. When
the evolution reaches the linear regime, a gauge transformation,
indicated by the sudden change of shading, is performed to
extract the normal mode amplitudes.}
\end{figure}
We can now turn our attention to the figures that illustrate
our results for a system with coupling constant $\lambda = 0.1$.
Figure 6 illustrates the behavior of the field $\chi$ following
the decay of the sphaleron after a slight initial perturbation.
We have found it very convenient and informative to use color
to code the phase of the complex fields. Unfortunately the illustrations in
these pages cannot be reproduced in color
and we have tried to render the variation of the phase with
a gray scale. From Fig.~6 it is clear that the energy, which
is concentrated in the neighborhood of $r=0$ in the sphaleron
configuration, disperses and gives rise to a pattern of outgoing
waves. The sudden variation of tonality at some point in the time
evolution indicates the change of phase induced by the gauge
transformation to the unitary gauge. In Fig.~7 we display the
behavior of the particle number in the four normal modes of
oscillations as function of time. It is apparent that, after an
initial transition period where the particle number is not well
defined, the quantities settle to values which are reasonably
constant in time. We take this as evidence that the system has
reached an asymptotic regime where one can meaningfully
define a conserved particle number. Finally, the evolution of
the system can be time reversed, as we have discussed above,
and the (time-reversed) final configuration can be considered
as an asymptotic initial configuration with definite energy and
particle number. The previously shown Fig.~5 illustrates the
evolution obtained taking one such asymptotic initial
configuration. The incoming waves are seen to merge in the
neighborhood of the origin, where a change of topology takes
place. The fact that the winding number of the field configuration
has changed is indicated by the strip of rapidly varying tonality
which persists in the neighborhood of the origin and codes the
variation of the phase of $\chi$. With color, this strip would
appear as a vivid rainbow, left over as a marker of the change
of topology of the evolving fields.
\begin{figure}
\centerline{
\epsfxsize=100mm
\epsfbox{partnum.eps}
}
\caption{\tenrm
Sphaleron decay in the four dimensional $SU(2)$ Higgs model:
behavior of the particle number in the four normal modes of
oscillation of the linearized system as function of time. The sum
in Eq. 27 extends over the first 50 modes for each of the four
mode types.
}
\end{figure}
The configuration which sits on top of the sphaleron barrier
can be parameterized by expanding into suitable components
a complete set of independent fields. We have chosen these
fields to be the perturbations $ \Delta \chi$ and $\Delta \phi$
(not necessarily small) of the $\chi$, $\phi$ fields of the sphaleron,
the field $a_1$ and the momenta $\pi_{\chi}$, $\pi_{\phi}$. We
avoid a redundant gauge degree of freedom by taking $\Delta
\chi$ purely imaginary (like the $\chi$ field of the sphaleron
itself). The final field needed to specify the initial configuration,
i.e. the electric field $E$, can then be derived from Gauss law.
We have parameterized these fields in terms of Bessel functions,
chosen in such a way to respect appropriate boundary conditions
at $r=0$ and $r=L$. The coefficients $c_n$ of the expansion can
now be varied and, in this way, one can explore the region in the
$\epsilon$-$\nu$ plane spanned by all of the topology changing
classical solutions.
Of course, the space of topology changing configurations is
infinite and a random exploration of such space would not lead
to very useful results. We must keep in mind that the interesting
question is whether there is a lower bound in $\nu$ or, more
generally, what is the lower boundary of the region spanned
by all topology changing solutions in the $\epsilon$-$\nu$
plane. This question can be investigated by methods of
stochastic sampling. One can perform random small steps in
the space of all configurations by varying the parameters $c_n$
stochastically. After each change the new configuration is
evolved until one can extract the particle number in the linear
regime. Then the variations of energy and particle number
$\Delta \epsilon$ and $\Delta \nu$ induced by the change
$\Delta c_n$ become well defined. A standard Metropolis
Monte Carlo sampling technique consists in accepting or
rejecting the change according to the value of the quantity
$\Delta F=\beta \Delta \epsilon + \mu \Delta \nu$, where $\beta$
and $\mu$ are parameters that weight what region of the
$\epsilon$-$\nu$ plane is explored. To be more precise, the
change is accepted with conditional probability $ P = {\rm Min}
[1, \exp (- \Delta F)] $, which has the effect of producing
configurations distributed according to a measure
proportional to $\exp (-\beta \epsilon -\mu \nu)$. The
parameters $\beta$ and $\mu$ play the role of inverse
temperature and chemical potential. By choosing these
parameters appropriately one can drive the sampling towards
low values of energy and particle number and thus explore
the interesting region in the $\epsilon$-$\nu$ plane. We have
begun implementing this procedure and Fig.~8 illustrates our
first results. It is interesting to note that the decay of a
(slightly perturbed) sphaleron gives rise to a particle number
$\nu \approx 1.9 (4\pi)$. For $g=0.6$ this corresponds to
$N\approx 66$ physical particles. From Fig.~8 one can
see that our sampling procedure has produced configurations
with comparable energy and much smaller particle number.
Of course the ultraviolet cutoff induced by the lattice discretization
puts a lower limit on the ratio $\nu/\epsilon$, which occurs when
only the highest mode is excited. We have used lattice parameters
$\Delta r = 0.2$, $N=256$ with $L=51.2$ and have considered
only the first 50 modes for each of the four types of normal modes.
Hence the minimum value of $\nu/\epsilon$ is of order
$1/\omega_{\rm max} \sim L/(n_{\rm max}\pi) \sim 0.3$.
Given our lattice resolution, we have
saturated the lowest bound in particle number that we are
sensitive to. This may be an indication that there is no lower
bound on $\nu$, but our calculation is still at a very preliminary
stage and much more work will be needed to establish reliable
results.
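The accept/reject step described above can be sketched as follows; here the function computing $\epsilon$ and $\nu$ from the coefficients $c_n$ is a stand-in (simple quadratic and linear forms) for the actual evolution-plus-projection procedure, so the sketch only illustrates the sampling logic, not the field dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(c):
    """Stand-in for the real procedure: evolve the configuration defined
    by the coefficients c and extract (epsilon, nu) in the linear regime.
    Replaced here by simple quadratic/linear forms for illustration."""
    epsilon = float(np.sum(c**2))
    nu = float(np.sum(np.abs(c)))
    return epsilon, nu

def metropolis_step(c, beta, mu, step=0.05):
    """One step with acceptance probability P = Min[1, exp(-dF)],
    where dF = beta * d(epsilon) + mu * d(nu)."""
    eps0, nu0 = measure(c)
    trial = c + step * rng.standard_normal(c.shape)
    eps1, nu1 = measure(trial)
    dF = beta * (eps1 - eps0) + mu * (nu1 - nu0)
    if dF <= 0.0 or rng.random() < np.exp(-dF):
        return trial
    return c

# drive the sampling towards low energy and particle number
beta, mu = 5.0, 5.0
c = rng.standard_normal(20)
F0 = beta * measure(c)[0] + mu * measure(c)[1]
for _ in range(2000):
    c = metropolis_step(c, beta, mu)
F1 = beta * measure(c)[0] + mu * measure(c)[1]
```

Choosing larger $\beta$ and $\mu$ concentrates the sampled configurations at lower energy and particle number, exactly as described in the text.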
\begin{figure}
\centerline{
\epsfxsize=100mm
\epsfbox{mc.2.eps}
}
\caption{\tenrm
Monte Carlo results with lattice parameters $\Delta r = 0.2$,
$N=256$, $L=51.2$ and Higgs coupling $\lambda=0.1$.
}
\end{figure}
\section{High energy baryon number violating processes below
the sphaleron barrier}
\hspace{1cm}
In the work of Ref.~\cite{rst}
Rubakov, Son and Tinyakov relate the probability
of a topology changing process from
a coherent state
to the action of a complexified classical evolution along
a special contour in the complex time plane.
As shown in Fig.~9, this contour consists of a
semiaxis parallel to the real axis
$\alpha = (-\infty +\imath T \longrightarrow \imath T)$,
followed by a segment
$\beta=(\imath T \longrightarrow 0$) of length $T$ along the imaginary axis,
followed by the real positive semiaxis
$\gamma = (0 \longrightarrow +\infty)$.
The word ``complexified classical evolution'' refers to the fact that
the equations of motion must be analytically continued to complex fields.
This has to be handled with some care for the case in which fields,
such as $\chi(r)$ and $\phi(r)$, are already complex to begin with.
In this case $\chi(r)$ and its complex conjugate, which we will denote
here by $\bar \chi(r)$, must be considered as formally independent
variables, which can be analytically continued separately, and do not
necessarily satisfy the ``reality condition'' $\bar\chi(r)=\chi(r)^*$.
The same applies to $\phi(r)$ and $\bar\phi(r)$.
\begin{figure}
\centerline{
\epsfxsize=100mm
\epsfbox{contour.eps}
}
\caption{\tenrm
Complex time contour.
}
\end{figure}
The boundary conditions are of special importance. The solution
must be real, in the sense specified above,
along $\gamma$, where the asymptotic fields
for $t \to +\infty$ represent the final
state. The condition on the particle
number in the initial state translates instead into the requirement
that for very early times in the evolution (i.e.~asymptotically
for $t \to -\infty + \imath T$ along $\alpha$)
the fields must reduce to a superposition of normal
modes of oscillation with amplitudes satisfying the equation
\begin{equation}
\bar a(- k) = e^{\theta} a^*( k)
\label{eq43}
\end{equation}
where $k$ is an index characterizing the radial momentum of the waves,
and $\theta$ plays the role of a chemical potential conjugate
to the particle number in the initial state. It is clear from
Eq.~\ref{eq43} that for $\theta \ne 0$ the fields cannot be real
along $\alpha$.
Computationally, it is convenient to think again of a time reversed evolution
by which, starting from an initial configuration at $t=0$ the fields
undergo a Euclidean evolution along the imaginary time axis
to $t=\imath T$ (i.e. following the oriented segment $-\beta$)
and then a Minkowski evolution (along $-\alpha$) from $\imath T$
to $-\infty + \imath T$.
At $t=0$, for a spatial axis discretized with $N$ sites,
we have as free variables $N$ real values of the field and $N$ purely
imaginary values of the conjugate momentum for each independent
canonically conjugate field-momentum pair. The condition that
the normal mode amplitudes
satisfy Eq.~\ref{eq43} amounts to $N$ complex equations, i.e.~$2N$ real
equations, again for each canonically conjugate field-momentum pair.
Thus, in principle,
one could evolve the fields from an initial {\it Ansatz} at $t=0$ and
adjust the initial variables so that Eq.~\ref{eq43} is satisfied.
In practice, since the evolution equations along the imaginary time
axis are elliptic, one cannot perform a forward integration. Rather,
one must resort to some relaxation procedure or other
global algorithm, by which one solves the evolution
equations as a set of simultaneous non-linear equations
for all points of a space-time grid. The situation is further
complicated by the fact that, with complexified fields,
one cannot just minimize a Euclidean action integral.
We have developed a ``second order'' formulation, by
which we minimize a constraint functional obtained from
the modulus squared of the functions that must vanish
at all grid points (an earlier part of this study was done in
collaboration with Timothy Vaughan). In this case also
we have used the formalism of lattice gauge theory to
obtain a gauge invariant discretization. We tested our
procedure in the context of the $2D$ Abelian Higgs model
(one space, one time dimensions), where we found that it
did reproduce the expected Euclidean solutions, including
solutions with multiple bounces of the fields between two
different topological sectors.
Another crucial component of the calculation consists in
solving the evolution equations along the semiaxis $-\alpha$,
from $\imath T$ to $-\infty + \imath T$ and extracting the normal
mode amplitudes $a(k), \bar a(k)$. Here we feel that we have the
formalism and algorithms already in place, although we need to
check that the integration remains stable with complexified fields.
In conclusion, the study of topology changing processes below
the sphaleron barrier presents some additional challenges, the
most notable being the need to integrate elliptic, rather than
hyperbolic, equations over part of the complex time path. The number
of degrees of freedom and the complexity of the calculation,
however, are not substantially different from those which
characterize the numerical investigations of classically allowed
processes, where, as we have shown above, the available
computational tools are well adequate to produce interesting
and accurate results. This warrants the expectation that even
the classically forbidden processes will be amenable to a
successful computational analysis.
{\bf Acknowledgments}
This research was supported
in part under DOE grant DE-FG02-91ER40676 and NSF grant
ASC-940031. We wish to thank V.~Rubakov for very interesting
conversations which stimulated the investigation described here,
A.~Cohen, K.~Rajagopal and P.~Tinyakov for valuable discussions,
and T.~Vaughan for participating in an early stage of this work.
\section{Introduction}\label{introduction}
The initial mass function (IMF) of stars deeply influences subsequent galaxy evolution and star formation history.
Stellar feedback from massive stars alters the physical state and structure of the interstellar medium \citep[e.g.][]{McKee1977}.
Supernovae (SNe) provide the interstellar medium with heavy elements and foster chemical evolution of galaxies.
On the other hand, low-mass stars account for most of the stellar mass of aged galaxies due to their longevity.
Even primordial low-mass stars, if formed, can still survive as main sequence stars in the Galaxy \citep[e.g.][]{Machida2008,Clark2011}.
The mass of stars likely depends on the metallicity of the forming environments.
At the solar metallicity, the IMF has the peak around $0.4~M_{\odot}$, which means the most stars are low-mass objects \citep{Kroupa2001,Chabrier2003}.
On the other hand, the IMF of primordial stars was theoretically predicted with recent radiative-hydrodynamics simulations in the cosmological context \citep{Hirano2014,Hirano2015,Susa2014,Stacy2016,Hosokawa2016}.
Those numerical simulations suggest that the primordial stars were formed in the mass range $\sim 10~M_{\odot}$ to a few $100~M_{\odot}$, with the typical mass being much larger than that of the Sun.
This transition in the characteristic stellar mass from massive to low-mass objects can be ascribed to the change in fragmentation mass of star-forming
clumps with accumulation of metals \citep{Omukai2000, Bromm2001}.
In the literature, this transitional metallicity is often called the critical metallicity $Z_{\rm crit}$.
Fragmentation of clumps is induced by the dust cooling \citep{Schneider2002} as well as metal-line cooling \citep{Bromm2001}.
With gas-phase metals, clumps fragment into massive dense cores no smaller than a few $10~M_{\odot}$ even with $Z \ga 10^{-3.5} ~Z_{\odot}$ \citep{Bromm2003,Santoro2006}.
On the other hand, when the dust cooling is taken into account, the fragmentation mass becomes as small as $0.1-1~M_{\odot}$ for $Z \ga 10^{-6} - 10^{-5}~Z_{\odot}$ \citep{Omukai2005,Schneider2003}, provided that the dust depletion factor, defined by
\begin{eqnarray}
f_{\rm dep} \equiv \frac{M_{\rm dust}}{M_{\rm dust} + M_{\rm metal}}, \label{1002.1}
\end{eqnarray}
where $M_{\rm metal}$ is the metal mass in the gas phase and $M_{\rm dust}$ is that locked into dust grains, is the same as in the Galaxy.
By converting this metallicity into dust-to-gas ratio $\mathcal D$, the critical dust-to-gas ratio is $\mathcal D_{\rm crit} = \left[2.6-6.3 \right]\times 10^{-9}$ \citep{Schneider2012}.
Therefore, the dust cooling is supposed to produce low-mass stars of $\lesssim 1~M_{\odot}$ even at such low metallicity as $10^{-6} - 10^{-5}~Z_{\odot}$.
Recent survey observations successfully discovered several metal-poor stars with metallicity lower than $\sim 10^{-4}~Z_{\odot}$, but these ultra-metal poor stars are very rare in the Galaxy \citep[e.g.][]{Frebel2015}.
\citet{Salvadori2007} indicated that the threshold of metallicity for the formation of low-mass stars should be $\sim 10^{-4}Z_{\odot}$ in order to reproduce the metallicity distribution of observed metal-poor stars in the Galactic halo.
This value does not coincide with the theoretical expectation for the critical metallicity for dust-induced fragmentation, $Z_{\rm crit} = 10^{-6} -10^{-5}Z_{\odot}$.
This discrepancy can be alleviated by reducing the dust depletion factor in low-metallicity environments.
Namely, if the dust depletion factor is smaller than in the solar neighborhood,
$\mathcal D$ may still fall short of $\mathcal D_{\rm crit}$
even for the gas with nominal metallicity higher than $Z_{\rm crit}$.
One way to reduce $f_{\rm dep}$ is slower growth of dust mass in low-metallicity interstellar medium \citep[e.g.,][]{Asano2013}.
Recently \citet{Remy-Ruyer2014} suggested that the observed lower dust depletion factor in the low-metallicity galaxies in the local universe could be explained by such a mechanism.
Another possible mechanism, which we focus on in this paper, is the dust evacuation from star-forming regions caused by radiation feedback from massive stars.
In this case, dust grains are pushed out due to the radiation force and can be decoupled from the gas component.
In fact, the smaller amount of dust grains in observed H{\sc ii} regions around massive stars than in the H{\sc i} gas is likely to be caused by such a mechanism \citep{Draine2011,Akimkin2015, Akimkin2017,Ishiki2018}.
Also, the radiation force has been claimed to be able to expel dust grains even from galactic haloes \citep{Chiao1972,Ferra1991}.
Similarly, we speculate that the dust evacuation also occurs in low-metallicity star-forming clouds in the early universe, thereby allowing massive star formation to continue even with certain metal enrichment.
In this paper, we study the condition for the formation of dust-free star-forming clouds as a result of dust evacuation by the radiation force.
We have found that the dust evacuation successfully occurs in clouds with high column density $N_{\rm H} \simeq 10^{24} - 10^{26}~{\rm cm^{-2}}$ and very low metallicity $Z \la 10^{-2}~Z_{\odot}$.
In terms of galaxy properties, this occurs more easily for higher halo mass and formation redshift, e.g., halos of $\sim 10^{9}~M_{\odot}$ ($\sim 10^{7}~M_{\odot}$) at $z \sim 3$ ($z \sim 9$, respectively), as long as very low-metallicity gas is available.
We organize the rest of the paper as follows.
In Section \ref{dust_evac1}, we first consider the dust evacuation from a homogeneous spherical star-forming cloud.
We then estimate the conditions for the dust evacuation from a galactic disk in Section \ref{dustevac2}.
In Section \ref{suppression}, we investigate the possible effects that inhibit the dust evacuation.
Finally, we summarize our results and give discussions in Section \ref{matome}.
The effect of the Coulomb drag force on the terminal velocity of grains is described in Appendix \ref{apd1}.
\section{Dust Grain Evacuation from a Star-Forming Cloud}\label{dust_evac1}
\subsection{Formation efficiency and radiation emissivity of star clusters}
\label{eps_and_strformationrate}
We first consider the simplest case of spherical and uniform density star-forming clouds.
For a cloud with the mass $M_{\rm cl}$, radius $R_{\rm cl}$ and
star formation efficiency (SFE) $\epsilon_{*}$,
the total stellar mass $M_{*}$ formed in the cloud is given by
\begin{eqnarray}
M_{*} = \epsilon_{*} M_{\rm cl} = 10^{5}~ M_{\odot} \left( \frac{\epsilon_{*}}{0.1} \right) \left( \frac{M_{\rm cl}}{10^6 ~M_{\odot}} \right). \label{0723.1}
\end{eqnarray}
The SFE $\epsilon_{*}$ is likely to depend on the properties of star-forming clouds \citep{Katz1992}. Here we estimate the growth rate of the total stellar mass based on the local free-fall time $t_{\rm ff}$ as
\begin{eqnarray}
\frac{d M_{*}}{d t} = c_{*} \frac{M_{\rm cl} - M_{*}}{t_{\rm ff}}, \label{0723.2}
\end{eqnarray}
where $c_{*}$ is a parameter characterizing the star formation rate (SFR), and $t_{\rm ff}$ is defined as
\begin{eqnarray}
t_{\rm ff} = \sqrt{\frac{3 \pi}{32 G m_{\rm H} n_{\rm H}}}, \label{0907.1}
\end{eqnarray}
where the number density of gas $n_{\rm H}$ is given as
\begin{eqnarray}
n_{\rm H} = \frac{M_{\rm cl}}{\frac{4}{3} \pi R_{\rm cl}^{3} m_{\rm H}} = 9.7 \times 10^{3} {\rm cm^{-3}} \left( \frac{R_{\rm cl}}{10~ {\rm pc}} \right)^{-3} \left( \frac{M_{\rm cl}}{10^{6}~M_{\odot}} \right). \label{0729.1}
\end{eqnarray}
We estimate the total stellar mass $M_{*}$ by integrating Equation \eqref{0723.2} with the condition $M_{*} = 0$ at $t=0$:
\begin{eqnarray}
M_{*} = M_{\rm cl} \left[ 1 - \exp \left(- c_{*} \frac{t}{t_{\rm ff}} \right) \right]. \label{0723.3}
\end{eqnarray}
Massive stars are expected to end their lives as SNe.
Their feedback likely destroys the star-forming clouds and stops further star formation.
In addition, newly formed dust grains will be supplied in those events.
Therefore, we use the SFE at the lifetime of OB stars, $t = t_{\rm OB}$, in estimating the condition for the dust evacuation:
\begin{eqnarray}
\epsilon_{*} = \left[ 1 - \exp \left(- c_{*} \frac{t_{\rm OB}}{t_{\rm ff}} \right) \right]. \label{0723.4}
\end{eqnarray}
The relation among $\epsilon_{*}$, $M_{\rm cl}$, $R_{\rm cl}$ and $c_{*}$ is shown in Figure \ref{fig0806.1}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./c_epsilon.pdf}
\end{center}
\caption{The SFE $\epsilon_{*}$ as a function of the cloud mass $M_{\rm cl}$, the cloud radius $R_{\rm cl}$ and $c_{*}$ (Eq. \ref{0723.4}).
We used $t_{\rm OB} = 2.5 \times 10^{6} {\rm yr}$.}
\label{fig0806.1}
\end{figure}
For $M_{\rm cl} = 10^{6}~M_{\odot}$, $R_{\rm cl} = 10~{\rm pc}$,
the SFE becomes $\epsilon_{*} = 0.38$ ($4.7 \times 10^{-2}$) for $c_{*} = 0.1$ ($0.01$, respectively).
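The quoted efficiencies follow directly from Eqs.~\ref{0907.1}, \ref{0729.1} and \ref{0723.4}; a short numerical sketch in CGS units:

```python
import numpy as np

# CGS constants
G = 6.674e-8          # cm^3 g^-1 s^-2
m_H = 1.673e-24       # g
M_sun = 1.989e33      # g
pc = 3.086e18         # cm
yr = 3.156e7          # s

def sfe(M_cl, R_cl, c_star, t_OB=2.5e6 * yr):
    """Star formation efficiency at t = t_OB, Eq. (0723.4)."""
    n_H = M_cl / (4.0 / 3.0 * np.pi * R_cl**3 * m_H)      # Eq. (0729.1)
    t_ff = np.sqrt(3.0 * np.pi / (32.0 * G * m_H * n_H))  # Eq. (0907.1)
    return 1.0 - np.exp(-c_star * t_OB / t_ff)

eps_01 = sfe(1e6 * M_sun, 10 * pc, 0.1)    # ~0.38
eps_001 = sfe(1e6 * M_sun, 10 * pc, 0.01)  # ~4.7e-2
```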
We use the IMF $\, \psi (m_{*}) \,$ of \citet{Larson1998}:
\begin{eqnarray}
\psi (m_{*})= \frac{dN}{d \log m_{*}} \propto \left(1 + \frac{ m_{*} }{ m_{\rm ch}} \right)^{-1.35}, \label{3.1.1}
\end{eqnarray}
where $m_{\rm ch}$ is the characteristic stellar mass formed.
In this paper, we consider the case of $m_{\rm ch} = 10~M_{\odot}$ as the fiducial one,
as claimed by \citet{Komiya2007} based on the statistics of carbon-enhanced metal-poor (CEMP) stars in the Galaxy.
We also study the cases with low stellar mass $m_{\rm ch} = 1~M_{\odot}$ as in the solar neighborhood \citep[e.g.,][]{Kroupa2001}, and
very massive stars $m_{\rm ch} = 50~M_{\odot}$ as in the primordial star formation \citep{Hosokawa2011}.
We take the mass range from $0.1~M_{\odot}$ to $300~M_{\odot}$ in all the cases.
Next, we estimate the total luminosity $L_{\rm tot}$ and the total ionizing photon emissivity $S_{\rm tot}$ of a star cluster.
We use the stellar isochrone calculated by \citet{Chen2015} with metallicity $Z =10^{-2}~Z_{\odot}$ and at $t = 10^{6}~{\rm yr}$,
roughly corresponding to half the average cloud lifetime.
Figure \ref{zu3.1} illustrates the luminosity $L_{*}(m_{*})$, the effective temperature $T_{\rm eff}(m_{*})$,
and the ionizing photon emissivity $S_{*}$ as functions of the stellar mass.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./M_model.pdf}
\end{center}
\caption{The luminosity $L_{*}$ (top), the effective temperature $T_{\rm eff}$ (middle) and ionizing photon emissivity $S_{*}$ (bottom) of stars at metallicity $10^{-2}~Z_{\odot}$ and $t = 10^{6}~{\rm yr}$ as a function of their mass \citep{Chen2015}. \label{zu3.1}}
\end{figure}
The luminosity and the ionizing photon emissivity per
unit stellar mass are calculated by taking the average with the weight of the IMF (Eq. \ref{3.1.1}).
They are presented in Table \ref{tabal0729.1}.
\begin{table}
\caption{The luminosity and the ionizing photon emissivity per unit stellar mass for different characteristic stellar mass $m_{\rm ch}$.
The numbers are in unit of $( L_{\odot} ~ M_{\odot}^{-1} )$ and $( s^{-1}~M_{\odot}^{-1} )$.}
\label{tabal0729.1}
\centering
\begin{tabular}{ccc}
\hline \hline
$m_{\rm ch} \, [\, M_{\odot} \,]$ & $\langle L_{*} / m_{*} \rangle $ & $\langle S_{*} / m_{*} \rangle$ \\
\hline
$1 $ & $2.9 \times 10^{3}$ & $2.2 \times 10^{47} $ \\
$10 $ & $6.7 \times 10^{3}$ & $5.0 \times 10^{47} $ \\
$50 $ & $1.1 \times 10^{4}$ & $8.3 \times 10^{47} $ \\
\hline
\end{tabular}
\end{table}
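The IMF-weighted averages in this table are obtained by integrating the isochrone quantities against Eq.~\ref{3.1.1}. The sketch below shows the averaging machinery with a toy mass-luminosity relation $L_{*} \propto m_{*}^{3}$, which is purely illustrative and stands in for the tabulated data of \citet{Chen2015}; it reproduces the qualitative trend that $\langle L_{*}/m_{*} \rangle$ grows with $m_{\rm ch}$.

```python
import numpy as np

def larson_imf(m, m_ch):
    """Larson (1998) IMF, dN/dlog m, Eq. (3.1.1), unnormalized."""
    return (1.0 + m / m_ch) ** -1.35

def imf_average(q, m_ch, m_min=0.1, m_max=300.0, n=4000):
    """<q/m> = int q(m) psi dlog m / int m psi dlog m
    on a uniform log-mass grid (the dlog m factor cancels)."""
    logm = np.linspace(np.log(m_min), np.log(m_max), n)
    m = np.exp(logm)
    w = larson_imf(m, m_ch)
    return np.sum(q(m) * w) / np.sum(m * w)

toy_lum = lambda m: m**3            # toy mass-luminosity relation (illustrative)
avg1 = imf_average(toy_lum, 1.0)
avg10 = imf_average(toy_lum, 10.0)  # larger m_ch -> larger <L/m>
```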
In the fiducial case with $m_{\rm ch} = 10~M_{\odot}$, the total luminosity and ionization emissivity are
\begin{eqnarray}
&& L _{\rm tot} = 6.7 \times 10^{8} \, L _{\odot} \, \left( \frac{\epsilon_{*}}{0.1} \right) \left( \frac{M_{\rm cl}} {10^{6}M_{\odot}} \right), \label{3.1.5}
\end{eqnarray}
and
\begin{eqnarray}
&& S_{\rm tot} = 5.0 \times 10^{52} \, {\rm s^{-1}} \, \left( \frac{\epsilon_{*}}{0.1} \right) \left( \frac{M_{\rm cl}} {10^{6}M_{\odot}} \right). \label{3.1.6}
\end{eqnarray}
\subsection{Dust evacuation time}\label{sec.det}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./q_reemit.pdf}
\end{center}
\caption{
The absorption efficiency factor $Q$ to geometric area as a function of dust size.
Blue and red lines show the values for the stellar radiation, approximated as a black body with the effective temperature of $2 \times 10^{4}~\rm K$, and for the diffuse thermal emission of dust with the temperature of $100~\rm K$, respectively.
The solid and dashed lines represent graphite and silicate dust grains, respectively.
We calculate the efficiency factor $Q$ from Mie theory with the dielectric function of \citet{Draine1984} and \citet{Draine2003}. \label{zu0807.6}
}
\end{figure}
The motion of dust grains is governed by three forces:
radiation force $F_{\rm rad}$, gravity $F_{\rm grav}$ and drag force $F_{\rm drag}$.
We estimate the dust terminal velocity $v_{\rm d}$ as follows.
Radiation force $F_{\rm rad}$ exerted on a dust grain is
\begin{eqnarray}
F_{\rm rad} = \frac{\pi a^{2} Q L_{\rm tot}}{4 \pi r^{2} c}, \label{0915.1}
\end{eqnarray}
where $a$ is the radius of the dust grain and $Q$ is the absorption efficiency factor relative to the geometrical cross section.
The efficiency factor $Q$ depends on the size of dust grains and the frequency of radiation.
When the cloud is optically thin, radiation field inside the cloud is dominated by the direct light
from massive stars, which can be approximated by the black-body spectrum of $\sim 2 \times 10^{4} ~{\rm K}$.
On the other hand, in the optically thick case the stellar light is absorbed and re-emitted by dust grains,
and the radiation is dominated by the diffuse light, which is roughly a black body of $\sim 100 ~{\rm K}$ \citep[e.g.,][]{Fukushima2017}.
Figure \ref{zu0807.6} shows
the size dependence of the efficiency factor $Q$ of graphite and silicate grains both for the direct and diffuse light.
Note that $Q \simeq 1$ ($\simeq 10^{-2}$) for the direct (diffuse, respectively) light for grains with $a \simeq 0.1~\mu {\rm m}$.
The ratio of radiation force to the gravity on a dust grain is
\begin{align}
\frac{F_{\rm rad}}{F_{\rm grav}} &= \frac{\pi a^2 Q L_{\rm tot}}{4 \pi c G M_{\rm cl} m_{\rm d}} \nonumber \\
&= 1.3 \times 10^{3} \left( \frac{Q}{1} \right)\left( \frac{\rho_{\rm d}}{3 {\rm \hspace{1mm} g \hspace{1mm} cm^{-3}} } \right)^{-1} \left( \frac{a}{0.1\mu {\rm m}} \right)^{-1} \left( \frac{\epsilon_{*}}{0.1} \right). \label{2.1.4}
\end{align}
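Eq.~\ref{2.1.4} follows from dividing Eq.~\ref{0915.1} by the gravitational force $G M_{\rm cl} m_{\rm d}/r^{2}$ on a grain of mass $m_{\rm d} = \frac{4}{3}\pi a^{3} \rho_{\rm d}$ (the $r$ dependence cancels); a quick numerical check in CGS units:

```python
import numpy as np

# CGS constants
c = 2.998e10; G = 6.674e-8; M_sun = 1.989e33; L_sun = 3.828e33

Q = 1.0
a = 0.1e-4                  # 0.1 micron in cm
rho_d = 3.0                 # grain material density, g cm^-3
M_cl = 1e6 * M_sun
L_tot = 6.7e8 * L_sun       # Eq. (3.1.5) for eps_* = 0.1

m_d = 4.0 / 3.0 * np.pi * a**3 * rho_d
ratio = np.pi * a**2 * Q * L_tot / (4.0 * np.pi * c * G * M_cl * m_d)
# ratio ~ 1.3e3: gravity is negligible compared to the radiation force
```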
That is, the gravity is negligible compared to the radiation force.
The terminal velocity of dust $v_{\rm d}$ is determined by the balance between radiation and drag forces alone.
We consider the collisional gas drag $F_{\rm collision}$ and the Coulomb drag $F_{\rm Coulomb}$ as the
drag force on dust grains, $F_{\rm drag}=F_{\rm collision}+F_{\rm Coulomb}$ (see Appendix \ref{apd1} for the expression).
The Coulomb drag and thus the relative velocity of dust grains sensitively depend on the ionization degree $x_{\rm e}$ in the cloud (for the estimation of the terminal velocity, see Appendix \ref{apd1}).
Figure \ref{zu0907.1} shows the terminal velocity of a dust grain with $0.1~\mu {\rm m}$
as a function of ionization degree for the cloud with $M_{\rm cl} = 10^{6}~M_{\odot}$, $R_{\rm cl} = 10~{\rm pc}$ and $\epsilon_{*} = 0.1$.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./vd_01.pdf}
\end{center}
\caption{
The terminal velocity of dust grains in a star-forming cloud $M_{\rm cl} = 10^{6}~ M_{\odot}$, $R_{\rm cl} = 10 ~ {\rm pc}$ and $\epsilon_{*} = 0.1$.
The solid and dashed lines show the velocities estimated with and without Coulomb drag force, respectively.
The blue and red lines are for the graphite and silicate grains, respectively.
\label{zu0907.1}}
\end{figure}
The solid lines show the terminal velocities calculated by considering both the collisional and Coulomb drag forces, while
the dashed lines are those with the collisional drag force alone.
For $x_{\rm e} \ga 10^{-5}$, the Coulomb drag dominates the collisional drag, and the velocity of dust grains suddenly falls below $1 ~{\rm km \, s^{-1}}$.
In contrast, the Coulomb drag does not work for $x_{\rm e} \la 10^{-5}$, and the terminal velocity is $\sim 10 ~ {\rm km \, s^{-1}}$, set by the collisional drag.
In our model, free electrons are supplied by the ionization either of heavy elements by far ultraviolet (FUV) light
or of hydrogen by cosmic rays.
The typical number density $n(\rm M^{+})$ of heavy elements ionized by FUV photons with $h \nu < 13.6~{\rm eV}$ in H{\sc i} regions is estimated by \citet{Draine2011a} as
\begin{eqnarray}
x_{\rm M} = n({\rm M^{+}}) / n_{\rm H} \simeq 1.1 \times 10^{-4} \label{0709.13}
\end{eqnarray}
for the solar metallicity gas.
Therefore, in the low-metallicity case of $Z \la 10^{-1}Z_{\odot}$, we expect that $x_{\rm M} \la 10^{-5}$ and
thus the Coulomb drag can be neglected.
The hydrogen ionization fraction by cosmic rays is calculated by the balance between the ionization rate of cosmic rays $\xi_{\rm CR}$ and the recombination rate of ionized hydrogen,
\begin{eqnarray}
n_{\rm H} \xi_{\rm CR}
= \alpha_{\rm B} n_{\rm H}^{2} x_{\rm e}^{2}, \label{0802.1}
\end{eqnarray}
where $\alpha_{\rm B}$ is the case-B recombination coefficient.
Thus, $x_{\rm e}$ is given as,
\begin{eqnarray}
x_{\rm e} &=& \left( \frac{\xi_{\rm CR}}{\alpha_{\rm B} n_{\rm H}} \right)^{1/2} \nonumber \\
& = & 3.8 \times 10^{-5} \left(\frac{\xi_{\rm CR}}{10^{-16} {\rm s^{-1}} } \right)^{1/2} \left( \frac{R_{\rm cl}}{10 ~ \rm{pc}} \right)^{3/2} \left( \frac{M_{\rm cl}}{10^{6} ~M_{\odot}} \right)^{-1/2} , \nonumber \\ \label{0802.2}
\end{eqnarray}
where we have used $\alpha_{\rm B} = 7 \times 10^{-12} ~{\rm cm^{3} \, s^{-1} }$ at $ T = 100 ~ {\rm K}$ \citep{Hummer1987}.
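The numerical coefficient in Equation \eqref{0802.2} can be checked with a short script; the sketch below assumes a uniform spherical cloud for the mean density and standard cgs constants:

```python
import math

# Fiducial cloud parameters (M_cl = 1e6 Msun, R_cl = 10 pc), cgs constants
pc, Msun, m_H = 3.086e18, 1.989e33, 1.673e-24
M_cl, R_cl = 1e6 * Msun, 10.0 * pc

# Mean hydrogen number density of a uniform spherical cloud
n_H = 3.0 * M_cl / (4.0 * math.pi * R_cl**3 * m_H)   # ~1e4 cm^-3

# Ionization balance n_H * xi_CR = alpha_B * n_H^2 * x_e^2  (Eq. 0802.1)
xi_CR = 1e-16          # s^-1, cosmic-ray ionization rate
alpha_B = 7e-12        # cm^3 s^-1, case-B coefficient at T = 100 K
x_e = math.sqrt(xi_CR / (alpha_B * n_H))
print(f"n_H = {n_H:.2e} cm^-3, x_e = {x_e:.2e}")
```

The result reproduces the coefficient $3.8 \times 10^{-5}$ of Equation \eqref{0802.2}.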
Ionized hydrogen also recombines via the charge exchange reactions with heavy elements (e.g. CO),
and the ionization fraction becomes smaller than $10^{-6}$ for number densities higher than $\sim 10^{3} ~{\rm cm^{-3}}$ \citep{McKee1989}.
Thus, Equation \eqref{0802.2} gives an upper limit on the ionization degree, which is likely to be smaller
than the critical value above which the Coulomb drag becomes important.
Therefore, we ignore the Coulomb drag below and estimate the dust terminal velocity from the balance between the collisional drag and radiation forces.
Here, we consider only cosmic rays as the ionization source.
In section \ref{sec4.1}, we discuss the effects of other sources, such as X-rays or mixing of ionized and neutral gases.
For supersonic relative velocity between the gas and dust, the collisional drag force can be written as
$F_{\rm collision}= \pi a^2v_{\rm d}^2 n_{\rm H} m_{\rm H}$ \citep{DS1979}.
The terminal velocity $v_{\rm d}$ is then given as
\begin{eqnarray}
v_{\rm d} &=& \sqrt{\frac{L_{\rm tot} Q}{4 \pi R_{\rm cl}^2 c n_{\rm H} m_{\rm H}}} \nonumber \\
&=& 6.7 ~ {\rm km \hspace{1mm} s^{-1}} \left( \frac{Q}{1} \right)^{1/2} \left( \frac{\epsilon_{*}}{0.1} \right)^{1/2}
\left( \frac{R_{\rm cl}}{10 ~ {\rm pc}} \right)^{1/2}. \label{1.6.3}
\end{eqnarray}
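Equation \eqref{1.6.3} can be verified numerically. The light-to-mass ratio $\Psi \approx 6.8 \times 10^{3}~L_{\odot}/M_{\odot}$ used below is not quoted explicitly in the text; it is inferred from the ratio of Equations \eqref{0816.6} and \eqref{0816.5} for $m_{\rm ch} = 10~M_{\odot}$, and should be treated as an assumption standing in for Table~\ref{tabal0729.1}:

```python
import math

pc, Msun, Lsun = 3.086e18, 1.989e33, 3.839e33
m_H, c = 1.673e-24, 2.998e10                      # cgs

M_cl, R_cl, eps, Q = 1e6 * Msun, 10.0 * pc, 0.1, 1.0

# Assumed light-to-mass ratio (Lsun/Msun), back-inferred from
# Eqs. (0816.6)/(0816.5): 1.5e6 / 2.2e2 ~ 6.8e3 for m_ch = 10 Msun
Psi = 1.5e6 / 2.2e2
L_tot = Psi * eps * (M_cl / Msun) * Lsun          # total stellar luminosity

rho = 3.0 * M_cl / (4.0 * math.pi * R_cl**3)      # mean gas density
v_d = math.sqrt(L_tot * Q / (4.0 * math.pi * R_cl**2 * c * rho))  # Eq. (1.6.3)
print(f"v_d = {v_d / 1e5:.1f} km/s")
```

This recovers the quoted $6.7~{\rm km \, s^{-1}}$; note that the $M_{\rm cl}$ dependence cancels, as in Equation \eqref{1.6.3}.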
Dust evacuation time $t_{\rm evac}$ from a star-forming cloud is estimated as
\begin{eqnarray}
t_{\rm evac} &=& \frac{R_{\rm cl}}{v_{\rm d}} \nonumber \\
&=& \sqrt{\frac{4 \pi c n_{\rm H} m_{\rm H}}{L_{\rm tot}Q} } R_{\rm cl}^2 \nonumber \\
&=& 1.5 \times 10^{6} ~ {\rm yr} \left( \frac{Q}{1} \right)^{-1/2} \left( \frac{\epsilon_{*}}{0.1} \right)^{-1/2} \left( \frac{R_{\rm cl}}{10~ {\rm pc}} \right)^{1/2}. \label{1.6.4}
\end{eqnarray}
Note that $t_{\rm evac}$ increases with the cloud radius.
The drag time $t_{\rm drag}$ required for transporting momentum from a grain to the gas is given as
\begin{align}
t_{\rm drag} & = \frac{m_{\rm d} v_{\rm d}}{F_{\rm drag}} \nonumber \\
& = 1.2 \times 10^{2} ~{\rm yr} \left( \frac{Q}{1} \right)^{-1/2} \left( \frac{\rho_{\rm d}}{3 \,{\rm g \, cm^{-3}}} \right) \nonumber \\
&\hspace{1cm} \times \left( \frac{a}{0.1 \mu {\rm m}} \right) \left( \frac{\epsilon_{*}}{0.1} \right)^{-1/2} \left( \frac{R_{\rm cl}}{10 ~{\rm pc}} \right)^{5/2} \left( \frac{M_{\rm cl}}{10^{6}M_{\odot}} \right)^{-1}, \label{0807.1}
\end{align}
which is much shorter than the evacuation time $t_{\rm evac}$ (Eq. \ref{1.6.4}).
Therefore, our use of the terminal velocity as the dust velocity is justified.
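The estimates in Equations \eqref{1.6.4} and \eqref{0807.1} can be reproduced directly; the sketch below takes the terminal velocity $v_{\rm d} = 6.7~{\rm km \, s^{-1}}$ from Equation \eqref{1.6.3} as given:

```python
import math

pc, yr, Msun, m_H = 3.086e18, 3.156e7, 1.989e33, 1.673e-24
R_cl, M_cl = 10.0 * pc, 1e6 * Msun
v_d = 6.7e5                  # cm/s, terminal velocity from Eq. (1.6.3)

t_evac = R_cl / v_d          # Eq. (1.6.4)

# Drag time: m_d v_d / F_drag with m_d = (4/3) pi a^3 rho_d and
# F_drag = pi a^2 v_d^2 n_H m_H  ->  t_drag = (4/3) a rho_d / (v_d rho_gas)
rho_gas = 3.0 * M_cl / (4.0 * math.pi * R_cl**3)
a, rho_d = 1e-5, 3.0         # grain radius 0.1 micron, bulk density 3 g/cm^3
t_drag = (4.0 / 3.0) * a * rho_d / (v_d * rho_gas)

print(f"t_evac = {t_evac/yr:.2e} yr, t_drag = {t_drag/yr:.2e} yr")
```

Both coefficients ($1.5 \times 10^{6}~{\rm yr}$ and $1.2 \times 10^{2}~{\rm yr}$) are recovered, confirming $t_{\rm drag} \ll t_{\rm evac}$.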
In estimating $t_{\rm evac}$, we have used the efficiency factor $Q=1$ corresponding
to the optically thin case.
If the cloud is optically thick, on the other hand, the radiation force on dust grains is imparted mostly
by the diffuse light and the efficiency factor drops to $Q=10^{-2}$.
With such a low efficiency factor, the dust evacuation time (Eq. \ref{1.6.4}) becomes as long as
$t_{\rm evac} = 1.5 \times 10^{7}~{\rm yr}$, exceeding the lifetime of massive stars $t_{\rm OB}$.
Thus, optically-thick clouds would be destroyed or replenished with newly formed dust grains
by the SN feedback before the dust grains are evacuated.
Therefore, the clouds must be optically thin initially to become dust-free by the dust evacuation.
Using the dust opacity $\kappa = 350~ {\rm cm^{2} \hspace{1mm} g^{-1}} \left( Z / Z_{\odot} \right)$
at the wavelength $\lambda_{\rm max} = 0.254~\mu {\rm m}$,
which is the peak of the black-body spectrum of $T = 2 \times 10^{4}~{\rm K}$,
we estimate the optical depth of the cloud as
\begin{eqnarray}
\tau &=& \rho \kappa R_{\rm cl} \nonumber \\
&=& 175 \left( \frac{\kappa}{350 \, {\rm cm^{2} g^{-1}}} \right) \left( \frac{R_{\rm cl}}{10 ~ \rm{pc}} \right)^{-2} \left( \frac{M_{\rm cl}}{10^{6}M_{\odot}} \right) \left( \frac{Z}{Z_{\odot}} \right). \label{2.1.5}
\end{eqnarray}
The condition for the cloud to be optically thin thus corresponds to metallicities
\begin{eqnarray}
Z < 5.7 \times 10^{-3} Z_{\odot} \left( \frac{\kappa}{350 \, {\rm cm^{2} g^{-1}}} \right)^{-1} \left( \frac{R_{\rm cl}}{10 ~ \rm{pc}} \right)^{2} \left( \frac{M_{\rm cl}}{10^{6}M_{\odot}} \right)^{-1} .\label{0726.1}
\end{eqnarray}
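A quick check of Equations \eqref{2.1.5} and \eqref{0726.1} for the fiducial cloud, again assuming a uniform sphere for the mean density:

```python
import math

pc, Msun, m_H = 3.086e18, 1.989e33, 1.673e-24
M_cl, R_cl = 1e6 * Msun, 10.0 * pc
kappa_sun = 350.0            # cm^2 g^-1 at Z = Z_sun

rho = 3.0 * M_cl / (4.0 * math.pi * R_cl**3)
tau_sun = rho * kappa_sun * R_cl       # Eq. (2.1.5) at Z = Z_sun
Z_crit = 1.0 / tau_sun                 # tau = 1 condition, in units of Z_sun
print(f"tau(Z_sun) = {tau_sun:.0f}, Z_crit = {Z_crit:.2e} Z_sun")
```

This reproduces $\tau = 175$ and the limit $Z < 5.7 \times 10^{-3}~Z_{\odot}$.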
In the estimation above, we do not consider the dust size dependence
of the efficiency factor and always use the value $Q=1$.
As seen in Figure \ref{zu0807.6}, this holds only for grains larger than $\sim 0.01~\mu {\rm m}$.
For grains smaller than $0.01~\mu {\rm m}$, the efficiency factor becomes $Q \lesssim 0.3$,
which makes the evacuation time longer due to the weaker radiation force.
In fact, substituting $Q = 0.3$ into Equation \eqref{1.6.4}, the dust evacuation time becomes $t_{\rm evac} = 2.7 \times 10^{6}~{\rm yr}$, slightly exceeding $t_{\rm OB}$.
In this case, small grains with size $a<0.01 ~ \mu {\rm m}$ would remain in the cloud.
In Section \ref{sec4.2}, we additionally discuss cases in which small dust grains are dominant.
The dependence of timescales on the grain size is discussed in Appendix \ref{apd1}.
\subsection{Cloud lifetime}\label{cloud_dest}
For star formation from the dust-free gas, the dust evacuation time must be shorter than the destruction time of the cloud.
The lifetime of the star-forming cloud $t_{\rm cl}$ is estimated as the shorter of the timescale on which type-II SNe occur, $t_{\rm OB}$,
and that for the ionizing front to reach the cloud boundary, $t_{\rm HII}$:
\begin{eqnarray}
t_{\rm cl} = {\rm min} (t_{\rm OB}, t_{\rm HII}). \label{0907.2}
\end{eqnarray}
The timescale of SNe is determined by the lifetime of massive stars,
and we use $t_{\rm OB} = 2.5\times 10^{6}~\rm{yr}$, which is the lifetime for $m_{*} = 120 M_{\odot}$ \citep{Schaerer2002}, in this work.
The expansion time of an H{\sc ii} region $t_{\rm HII}$ can be calculated as follows.
The ionizing front reaches the Str\"omgren radius
\begin{eqnarray}
R_{\rm S0} &=& \left( \frac{3 {S_{\rm tot}}}{4 \pi n_{\rm H}^2 \alpha_{\rm B}} \right)^{1/3} \nonumber \\
&=& 2.5 {\rm pc} \left( \frac{\epsilon_{*}}{0.1} \right)^{1/3} \left( \frac{R_{\rm cl}}{10 ~ \rm{pc}} \right)^{2} \left( \frac{M_{\rm cl}}{10^6 M_{\odot}} \right)^{-1/3}, \label{1.6.10}
\end{eqnarray}
almost instantly, within a few hundred years.
Here, we have used Equation \eqref{3.1.6} for the number density of gas and the case-B recombination rate $\alpha_{\rm B} = 2.6 \times 10^{-13} ~{\rm cm^{3} \, s^{-1}}$ \citep[at $T=10^{4}~\rm K$, ][]{Osterbrock2006}.
Thereafter, the pressure imbalance drives the further expansion of the H{\sc ii} region.
The radius of the H{\sc ii} region in this phase is given by \citep{Spitzer1978}
\begin{eqnarray}
R_{\rm HII} = R_{\rm S0} \left[1 + \frac{7}{4} \frac{c_{\rm HII} t}{R_{\rm S0}} \right]^{4/7}. \label{1.6.11}
\end{eqnarray}
By equating $R_{\rm HII}$ with the cloud radius $R_{\rm cl}$, we obtain
the time for H{\sc ii} region to reach the cloud boundary:
\begin{eqnarray}
t_{\rm HII} &=& \frac{4}{7} \frac{R_{\rm S0}}{c_{\rm HII}} \left[ \left( \frac{R_{\rm cl}}{R_{\rm S0}} \right)^{7/4} - 1 \right] \nonumber \\
&\simeq& 1.4 \times 10^{6} {\rm yr} \left(\frac{\epsilon_{*}}{0.1}\right)^{-1/4} \left(\frac{R_{\rm cl}}{10 ~{\rm pc}}\right)^{1/4} \left(\frac{M_{\rm cl}}{10^{6}M_{\odot}}\right)^{1/4}. \hspace{5mm} \label{1.6.12}
\end{eqnarray}
In the numerical estimate above, we have used the temperature in the H{\sc ii} region $T =10^{4}~{\rm K}$ and
the sound speed $c_{\rm HII} = 11.4~ {\rm km \, s^{-1}}$.
Note that the expansion time of H{\sc ii} region has weak dependence on the cloud radius compared with the dust evacuation time (Eq. \ref{1.6.4}).
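Evaluating Equation \eqref{1.6.12} numerically with $R_{\rm S0} = 2.5~{\rm pc}$ and $c_{\rm HII} = 11.4~{\rm km \, s^{-1}}$ shows that the quoted coefficient $1.4 \times 10^{6}~{\rm yr}$ corresponds to the $R_{\rm cl} \gg R_{\rm S0}$ limit (dropping the $-1$ term); the full expression gives a value about 10 per cent smaller:

```python
pc, yr = 3.086e18, 3.156e7
R_S0, R_cl = 2.5 * pc, 10.0 * pc        # Eq. (1.6.10) and the fiducial radius
c_HII = 11.4e5                          # cm/s, sound speed in the HII region

# Eq. (1.6.12): full expression and the R_cl >> R_S0 approximation
t_full = (4.0 / 7.0) * (R_S0 / c_HII) * ((R_cl / R_S0) ** 1.75 - 1.0)
t_approx = (4.0 / 7.0) * (R_S0 / c_HII) * (R_cl / R_S0) ** 1.75
print(f"t_HII = {t_full/yr:.2e} yr (full), {t_approx/yr:.2e} yr (approx)")
```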
We now examine whether the radiation force on the dust destroys the star-forming cloud itself.
Since the ratio of the radiation force to gravity at $R_{\rm cl}$ is
\begin{eqnarray}
\frac{F_{\rm rad}}{F_{\rm grav}} &=& \frac{\kappa_{\rm d} L_{*}}{4 \pi c G M_{\rm cl}} \nonumber \\
&=& 1.8 \times 10^{-2} \left( \frac{\kappa}{350 ~ {\rm cm^{2} g^{-1}}} \right) \left( \frac{\epsilon_{*}}{0.1} \right) \left( \frac{Z}{10^{-3}Z_{\odot}} \right), \label{2.1.3}
\end{eqnarray}
the radiation force is smaller than gravity on the gas component in optically-thin clouds.
Such clouds can avoid being disrupted by the radiation force.
\subsection{The condition for dust-free cloud formation} \label{sec.dustless}
We here obtain the condition for the formation of dust-free star-forming clouds.
In order for stars to form from the dust-free gas, the dust evacuation time must be shorter
than the cloud lifetime $t_{\rm cl}$, and some gas must still be available at this moment
for subsequent star formation.
We consider the cases with the star-formation rate parameter $c_{*} = 0.01 - 1$,
which is related to the SFE $\epsilon_{*}$ by Equation
(\ref{0723.4}).
Figure \ref{zu0807.3} shows
the condition for the formation of a dust-free star-forming cloud in the case with the characteristic stellar mass
$m_{\rm ch} = 10~M_{\odot}$.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./Mz2_10.pdf}
\end{center}
\caption{
The conditions for the formation of dust-free star-forming clouds in the case of $m_{\rm ch} = 10M_{\odot}$.
The black, red, and blue solid lines denote the condition of $t_{\rm evac} < t_{\rm cl}$
in the cases with $c_{*} = 1$, $0.1$ and $0.01$, respectively.
Above the dashed line, the SFE becomes $\epsilon_{*} > 0.9$.
The blue and red shaded regions indicate where the dust evacuation occurs and more than 10 per cent of the initial gas remains at this epoch.
In the case with $c_{*} = 1$, there is no such area.
The black dotted lines show the boundaries where the optical depth becomes $\tau=1$ at each metallicity; clouds are optically thin below these lines.
The $\tau = 1$ relation between the column density $N_{\rm H}$ and metallicity $Z$ is $N_{\rm H} = 3 \times 10^{25} ~ {\rm cm^{-2}} \left( Z / 10^{-4} Z_{\odot} \right)^{-1}$.
\label{zu0807.3}}
\end{figure}
The color-shaded areas show the parameter ranges of clouds that become dust-free for different values of $c_{*}$.
The black, red and blue lines correspond to the cases with $c_{*}= 1, 0.1$, and $0.01$, respectively.
The condition of the optical depth $\tau = 1$ is also shown by the thin dotted lines for indicated metallicities,
and the clouds are optically thin below them.
For smaller $c_{*}$, the dust evacuation time $t_{\rm evac}$ becomes longer from Equation \eqref{1.6.4}
and the clouds must be more compact for the dust evacuation.
The SFE $\epsilon_{\ast}$ is larger than $0.9$
above the dashed lines in Figure \ref{zu0807.3}.
In these regions, most of the gas has been consumed by star formation by the time of the dust evacuation.
As a result, star formation from the dust-free gas is realized only in the color-shaded areas between
the solid and dashed lines.
In the case with $c_{*} = 1$,
there is no shaded area, which means that star formation from the dust-free gas does not occur in any clouds below $10^{8}~M_{\odot}$.
In the case with $c_{*} = 0.1$, this condition roughly corresponds to the column density $ 10^{24} ~{\rm cm^{-2}} \la N_{\rm H} \la 10^{25}~{\rm cm^{-2}} $ and the metallicity $Z \la 10^{-2} ~Z_{\odot}$.
Also, in the case with $c_{*} = 0.01$, the condition becomes $ 3 \times 10^{24} ~{\rm cm^{-2}} \la N_{\rm H} \la 10^{26}~{\rm cm^{-2}} $ and $Z \la 10^{-3} ~Z_{\odot}$.
Namely, in compact and low-metallicity clouds, the dust evacuation is likely to occur.
Comparing these values with the column densities of local giant molecular clouds (GMCs), $N_{\rm H} \simeq 10^{22}~{\rm cm^{-2}}$ \citep{Solomon1987}, and of OB star-forming clumps, $N_{\rm H} \simeq 10^{24}~{\rm cm^{-2}}$ \citep{Plume1997}, we find that the dust evacuation does not occur in typical GMCs, while the condition is satisfied in local massive star-forming clumps if the metallicity is low enough.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./Mz2_1.pdf}
\end{center}
\caption{
Same as Figure \ref{zu0807.3}, but for the case of lower characteristic stellar mass $m_{\rm ch} = 1 M_{\odot}$.
In this case, the dust-free star formation condition is satisfied also for $c_{*} = 1$, in a very narrow strip with masses $\ga 10^{7}~M_{\odot}$.
\label{zu0807.4}}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./Mz2_50.pdf}
\end{center}
\caption{Same as Figure \ref{zu0807.3}, but for the case of higher characteristic stellar mass
$m_{\rm ch} = 50 M_{\odot}$. \label{zu0807.5}}
\end{figure}
The cases with different characteristic stellar masses
$m_{\rm ch} = 1$ and $50 M_{\odot}$ are shown in
Figures \ref{zu0807.4} and \ref{zu0807.5}, respectively.
For higher $m_{\rm ch}$,
the luminosity per unit mass becomes higher (Table~\ref{tabal0729.1}),
which facilitates the dust evacuation and thus results in wider shaded regions, and vice versa.
The condition on $(M_{\rm cl}, R_{\rm cl})$ for the dust evacuation, however, does not depend much on the value of $m_{\rm ch}$.
As a rule of thumb, for star formation from the dust-evacuated gas to occur, the clouds should be massive or compact, with $N_{\rm H} \simeq 10^{24} - 10^{26}~{\rm cm^{-2}}$, and of low metallicity, $Z \la 10^{-2}~Z_{\odot}$.
In addition, the SFR parameter must be somewhat smaller than unity.
\section{Dust evacuation from galactic disks}\label{dustevac2}
We have studied the dust evacuation from spherical star-forming clouds.
In this section, by applying the model developed in Section \ref{dust_evac1},
we study the dust evacuation from disk-like structures likely to form in low-metallicity galaxies
in the early universe.
\subsection{Galactic disk model}
We consider a halo with mass $M_{\rm h}$ virializing at redshift $z_{\rm vir}$ and a rotationally-supported disk within it \citep{Mo1998,Oh2002}.
The virial radius is given by
\begin{eqnarray}
R_{\rm vir} = 2.2 {\rm kpc} \left( \frac{1 + z_{\rm vir}}{10} \right)^{-1} \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{1/3}. \label{0814.3}
\end{eqnarray}
Using the spin parameter $\lambda = J | E | ^{1/2} / G M_{\rm h}^{5/2}$, where $E$ and $J$ are the total energy and the total angular momentum of the halo,
the disk radius is estimated as
\begin{eqnarray}
R_{\rm d} = \frac{\lambda}{\sqrt{2}} R_{\rm vir} = 77 {\rm pc} \left( \frac{\lambda}{0.05} \right) \left( \frac{1+z_{\rm vir}}{10} \right)^{-1} \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{1/3}. \label{0816.1}
\end{eqnarray}
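A one-line check of Equation \eqref{0816.1} for the fiducial halo ($R_{\rm vir} = 2.2~{\rm kpc}$, $\lambda = 0.05$):

```python
pc = 3.086e18                        # cm
lam = 0.05                           # spin parameter
R_vir = 2.2e3 * pc                   # Eq. (0814.3) fiducial value
R_d = (lam / 2.0 ** 0.5) * R_vir     # Eq. (0816.1)
print(f"R_d = {R_d / pc:.0f} pc")
```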
The number density of the gas at radius $r$ and vertical height $z$ is
\begin{eqnarray}
n(r,z) = n_{0} \exp \left( - \frac{2 r}{R_{\rm d}} \right) {\rm sech} ^{2} \left( \frac{z}{\sqrt{2} z_{0}} \right), \label{0814.1}
\end{eqnarray}
where $z_{0}$ is the vertical scale height,
\begin{eqnarray}
z_{0} = \frac{c_{\rm s}}{\sqrt{4 \pi G \mu m_{\rm H} n_{0}}} e^{r/R_{\rm d}}. \label{0814.2}
\end{eqnarray}
We set the mean molecular weight $\mu = 1$.
Note that the disk mass can be written as
\begin{eqnarray}
M_{\rm d} = \int dz \int 2 \pi r dr \mu m_{\rm H} n(r, z) = f_{\rm d} (\Omega_{\rm b}/ \Omega_{\rm m}) M_{\rm h},
\end{eqnarray}
where $f_{\rm d}$ is the fraction of the baryonic mass incorporated into the disk.
Substituting Equations \eqref{0814.3}, \eqref{0816.1}, \eqref{0814.1}, and \eqref{0814.2} into the above,
we obtain number density at the center of the disk
\begin{align}
n_{0} = 2.0 \times 10^{4} {\rm cm^{-3}} \left( \frac{f_{\rm d}}{0.5} \right)^{2} & \left( \frac{\lambda}{0.05} \right)^{-4} \left( \frac{1+z_{\rm vir}}{10} \right)^{4} \nonumber \\
& \times \left( \frac{T}{8000~ {\rm K}} \right)^{-1} \left( \frac{M_{\rm h}}{10^{9}~M_{\odot}} \right)^{2/3}. \label{0816.2}
\end{align}
From Equation \eqref{0814.2}, the scale height of the disk is
\begin{align}
z_{0} = 1.6 \, {\rm pc} \, \left( \frac{f_{\rm d}}{0.5} \right)^{-1} & \left( \frac{\lambda}{0.05} \right)^{2} \left( \frac{1+z_{\rm vir}}{10} \right)^{-2} \nonumber \\
& \times \left( \frac{T}{8000 {\rm K}} \right) \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{-1/3} \exp \left( \frac{r}{R_{\rm d}} \right), \label{0816.3}
\end{align}
and the disk surface density is
\begin{align}
\Sigma_{\rm gas} (r) &= \int dz \mu m_{\rm H} n (r, z) \nonumber \\
&= 0.46 \, {\rm g \, cm^{-2}} \, \left( \frac{f_{\rm d}}{0.5} \right) \left( \frac{\lambda}{0.05} \right)^{-2} \left( \frac{1+z_{\rm vir}}{10} \right)^{2} \nonumber \\
& \hspace{2.5cm} \times \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{1/3} \exp \left( - \frac{r}{R_{\rm d}} \right) . \label{0816.4}
\end{align}
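The disk structure above can be checked for self-consistency: with $n_{0}$ from Equation \eqref{0816.2}, the scale height and surface density should together reproduce $M_{\rm d} = f_{\rm d} (\Omega_{\rm b}/\Omega_{\rm m}) M_{\rm h}$. The sketch below assumes a Planck-like baryon fraction $\Omega_{\rm b}/\Omega_{\rm m} \simeq 0.156$:

```python
import math

pc, Msun = 3.086e18, 1.989e33
m_H, G, k_B = 1.673e-24, 6.674e-8, 1.381e-16
f_d, T, mu = 0.5, 8000.0, 1.0
M_h = 1e9 * Msun
fb = 0.156                 # Omega_b / Omega_m, Planck-like value (assumed)
R_d = 77.0 * pc            # Eq. (0816.1)
n0 = 2.0e4                 # cm^-3, Eq. (0816.2)

c_s = math.sqrt(k_B * T / (mu * m_H))
z0 = c_s / math.sqrt(4.0 * math.pi * G * mu * m_H * n0)  # Eq. (0814.2), r = 0
Sigma0 = mu * m_H * n0 * 2.0 * math.sqrt(2.0) * z0       # Eq. (0816.4), r = 0

# Integrating n(r,z) over the disk gives M_d = 2*pi*R_d^2 * Sigma0:
# the sech^2 profile integrates to 2*sqrt(2)*z0 and the exponential
# radial profile to 2*pi*R_d^2.
M_d = 2.0 * math.pi * R_d**2 * Sigma0
print(f"z0 = {z0/pc:.2f} pc, Sigma0 = {Sigma0:.2f} g/cm^2, "
      f"M_d/(f_d fb M_h) = {M_d / (f_d * fb * M_h):.2f}")
```

The quoted $z_{0} = 1.6~{\rm pc}$ and $\Sigma_{\rm gas}(0) = 0.46~{\rm g \, cm^{-2}}$ are recovered, and the integrated disk mass matches $f_{\rm d} (\Omega_{\rm b}/\Omega_{\rm m}) M_{\rm h}$ to within a few per cent.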
As in Section \ref{eps_and_strformationrate}, we estimate the SFE $\epsilon_{*}$ by Equation \eqref{0723.4}.
Since, according to Equation \eqref{0814.1}, the gas density does not decrease significantly up to the scale height $z_{0}$,
we use the density on the equatorial plane ($z=0$) in calculating $\epsilon_{*}$ in the disk.
The radial distribution of SFE is presented in Figure \ref{zu.0908.1}
for three cases of $c_{*} = 1, 0.1$, and $0.01$.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./eps_spin.pdf}
\end{center}
\caption{ The radial distribution of SFE in a galactic disk with
$M_{\rm h} = 10^{9} M_{\odot}$, $ 1 + z_{\rm vir} = 10$, $T_{\rm g} = 8000{\rm K}$, $f_{\rm d} = 0.5$ and $\lambda = 0.05$
for three values of the star-formation rate parameter $c_{*} = 1, 0.1$, and $0.01$ from top to bottom. \label{zu.0908.1}}
\end{figure}
The SFE is almost constant inside the disk radius $R_{\rm d}$ and decreases rapidly outside.
In the case with $c_{*} = 1$, the SFE is as high as $\sim 1$ in the disk.
The stellar surface density is thus given by
\begin{align}
\Sigma_{*} &= \epsilon_{*} \Sigma_{\rm gas} & \nonumber \\
&= 2.2 \times 10^{2} \, M_{\odot} {\rm pc^{-2}} \, \left( \frac{f_{\rm d}}{0.5} \right) \left( \frac{\lambda}{0.05} \right)^{-2} \nonumber \\
& \hspace{1cm} \times \left( \frac{1+z_{\rm vir}}{10} \right)^{2} \left( \frac{\epsilon_{*}}{0.1} \right) \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{1/3} \exp \left( - \frac{r}{R_{\rm d}} \right). \label{0816.5}
\end{align}
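The coefficient in Equation \eqref{0816.5} is just a unit conversion of $\epsilon_{*} \Sigma_{\rm gas}$:

```python
pc, Msun = 3.086e18, 1.989e33
eps = 0.1
Sigma_gas0 = 0.46                             # g/cm^2, Eq. (0816.4) at r = 0
Sigma_star = eps * Sigma_gas0                 # g/cm^2
Sigma_star_pc2 = Sigma_star * pc**2 / Msun    # convert to Msun/pc^2
print(f"Sigma_* = {Sigma_star_pc2:.0f} Msun/pc^2")
```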
The luminosity and the ionizing photon emissivity per unit area, $\mathcal L$ and $\mathcal S$,
can be calculated from the surface density of stars $\Sigma_{*}$ and the luminosity/emissivity per unit stellar mass given in Table~\ref{tabal0729.1}.
For example, in the case of $m_{\rm ch} = 10M_{\odot}$, they are
\begin{align}
\mathcal L = 1.5 \times 10^{6} \, L_{\odot} \, {\rm pc^{-2}} \, & \left( \frac{f_{\rm d}}{0.5} \right) \left( \frac{\lambda}{0.05} \right)^{-2} \left( \frac{1+z_{\rm vir}}{10} \right)^{2} \nonumber \\
& \times \left( \frac{\epsilon_{*}}{0.1} \right) \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{1/3} \exp \left( - \frac{r}{R_{\rm d}} \right), \label{0816.6}
\end{align}
and
\begin{align}
\mathcal S = 1.1 \times 10^{50} \, {\rm s^{-1}} \, {\rm pc^{-2}} \, & \left( \frac{f_{\rm d}}{0.5} \right) \left( \frac{\lambda}{0.05} \right)^{-2} \left( \frac{1+z_{\rm vir}}{10} \right)^{2} \nonumber \\
& \times \left( \frac{\epsilon_{*}}{0.1} \right) \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{1/3} \exp \left( - \frac{r}{R_{\rm d}} \right). \label{0816.7}
\end{align}
\subsection{Dust evacuation time from galactic disks}
We assume that stars are distributed on the mid-plane of galactic disks.
As in Section \ref{sec.det}, we derive the condition for dust evacuation by calculating the evacuation time $t_{\rm evac}$
at the scale height $z = z_{0}$ (Eq. \ref{0814.2}).
We use cylindrical coordinates $(r, \theta, z)$, where the $z$-axis passes through the center of the galaxy and is perpendicular to the disk.
Owing to the axisymmetry, no $\theta$-dependence appears.
Using the SFE $\epsilon_{*}$ (Eq. \ref{0723.4}), the luminosity and the ionizing photon emissivity
per unit area in the disk are given as (Eqs. \ref{0816.6} and \ref{0816.7}),
\begin{eqnarray}
\mathcal L = \mathcal L _{0} \left( 1- \exp \left[ - c_{*} \frac{t_{\rm OB}}{t_{\rm ff}} \exp \left( -r/R_{\rm d} \right) \right] \right) \exp \left( - r / R_{\rm d} \right), \label{0908.1}
\end{eqnarray}
and
\begin{eqnarray}
\mathcal S = \mathcal S _{0} \left( 1- \exp \left[ - c_{*} \frac{t_{\rm OB}}{t_{\rm ff}} \exp \left( -r/R_{\rm d} \right) \right] \right) \exp \left( - r / R_{\rm d} \right), \label{0908.2}
\end{eqnarray}
where
\begin{align}
\mathcal L _{0} = 1.5 \times 10^{7} & \, L_{\odot} \, {\rm pc^{-2}} \, \left( \frac{f_{\rm d}}{0.5} \right) \left( \frac{\lambda}{0.05} \right)^{-2} \left( \frac{1+z_{\rm vir}}{10} \right)^{2} \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{1/3}, \label{0908.3}
\end{align}
and
\begin{align}
\mathcal S_{0} = 1.1 \times 10^{51} & \, {\rm s^{-1}} \, {\rm pc^{-2}} \, \left( \frac{f_{\rm d}}{0.5} \right) \left( \frac{\lambda}{0.05} \right)^{-2} \left( \frac{1+z_{\rm vir}}{10} \right)^{2} \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{1/3}. \label{0908.3.2}
\end{align}
The $r$- and $z$-components of the radiation force on a dust grain with radius $a$ located at $(r_{0},z_{0})$ are
\begin{align}
F_{{\rm rad}, r} &= \frac{\pi a^{2} Q}{4 \pi c} \int ^{\infty}_{0} dr \int ^{2 \pi}_{0} r d \theta \frac{- (r \cos \theta - r_{0})}{\left[ r^{2} + r_{0}^{2} - 2 r r_{0} \cos \theta + z_{0}^{2} \right]^{3/2}} \mathcal L \nonumber \\
& = \frac{\pi a^{2} Q \mathcal L_{0}}{4 \pi c} \phi_{r}, \label{0816.10}
\end{align}
and
\begin{align}
F_{{\rm rad}, z} &= \frac{\pi a^{2} Q}{4 \pi c} \int ^{\infty}_{0} dr \int ^{2 \pi}_{0} r d \theta \frac{z_{0}}{\left[ r^{2} + r_{0}^{2} - 2 r r_{0} \cos \theta + z_{0}^{2} \right]^{3/2}} \mathcal L \nonumber \\
& = \frac{\pi a^{2} Q \mathcal L_{0}}{4 \pi c} \phi_{z}, \label{0908.3.3}
\end{align}
respectively.
With the definitions $\mathcal R = r / R_{\rm d}$, $\mathcal R_{0} = r_{0} / R_{\rm d}$, $\mathcal Z_{0} = z_{0} / R_{\rm d}$ and
$C = c_{*} t_{\rm OB} / t_{\rm ff}$,
\begin{align}
\phi_{r} = \int ^{\infty}_{0} d \mathcal R \int ^{2 \pi} _{0} d \theta & \frac{(\mathcal R_{0} - \mathcal R \cos \theta) \mathcal R}{\left[ (\mathcal R^{2} + \mathcal R_{0}^{2} - 2 \mathcal R \mathcal R_{0} \cos \theta) + \mathcal Z_{0}^{2} \right]^{3/2}} \nonumber \\
& \times \left( 1 - \exp \left[ - C \exp \left( - \mathcal R \right) \right] \right) \exp \left( - \mathcal R \right), \label{0908.4}
\end{align}
and
\begin{align}
\phi_{z} = \int ^{\infty}_{0} d \mathcal R \int ^{2 \pi} _{0} d \theta & \frac{\mathcal R \mathcal Z_{0}}{\left[ (\mathcal R^{2} + \mathcal R_{0}^{2} - 2 \mathcal R \mathcal R_{0} \cos \theta) + \mathcal Z_{0}^{2} \right]^{3/2}} \nonumber \\
& \times \left( 1 - \exp \left[ - C \exp \left( - \mathcal R \right) \right] \right) \exp \left( - \mathcal R \right). \label{0908.5}
\end{align}
The radial dependence of $\phi_{r}$ and $\phi_{z}$ is presented in Figure \ref{zu0816.1}.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./psi2.pdf}
\caption{ The radial dependence of $\phi_{r}$ (blue) and $\phi_{z}$ (red) in the case with
$\mathcal Z_{0} = 2.0 \times 10^{-2} \exp \left( \mathcal R_{0} \right)$. The different lines show the cases
with $c_{*} = 1$, $0.1$, and $0.01$ from top to bottom. \label{zu0816.1}}
\end{center}
\end{figure}
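The integral $\phi_{z}$ can be evaluated numerically. A useful sanity check is the limit $\mathcal Z_{0} \to 0$ at the disk center ($\mathcal R_{0} = 0$), where the kernel $\mathcal R \mathcal Z_{0} / (\mathcal R^{2} + \mathcal Z_{0}^{2})^{3/2}$ integrates to unity and concentrates at $\mathcal R \sim \mathcal Z_{0}$, so that $\phi_{z} \to 2\pi \left( 1 - e^{-C} \right)$. The sketch below (pure Python, midpoint rule on a logarithmic grid) verifies this limit:

```python
import math

def phi_z(Z0, C, N=20000):
    """Evaluate Eq. (0908.5) at the disk center (R0 = 0), where the theta
    integral reduces to a factor 2*pi; midpoint rule on a log grid in R."""
    a, b = math.log(1e-7), math.log(50.0)
    h = (b - a) / N
    s = 0.0
    for i in range(N):
        R = math.exp(a + (i + 0.5) * h)
        f = (R * Z0 / (R * R + Z0 * Z0) ** 1.5
             * (1.0 - math.exp(-C * math.exp(-R))) * math.exp(-R))
        s += f * R * h                    # dR = R dx on the log grid
    return 2.0 * math.pi * s

# Compare with the analytic Z0 -> 0 limit, 2*pi*(1 - exp(-C))
print(phi_z(1e-3, 1.0), 2.0 * math.pi * (1.0 - math.exp(-1.0)))
```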
The dust terminal velocity is estimated by the balance between the
radiation force $F_{\rm rad} = \sqrt{\left(F_{\rm rad, r} \right)^{2} + \left(F_{\rm rad, z} \right)^{2}}$ and the collisional drag force
$F_{\rm drag} = \pi a^{2} v_{\rm d}^{2} n_{\rm H} m_{\rm H}$.
Here, we consider the dust evacuation in the vertical direction.
The vertical component of terminal velocity $v_{{\rm d}, z}$ is
\begin{eqnarray}
v_{{\rm d}, z} = \sqrt{\frac{\mathcal L_{0} Q}{4 \pi c n_{\rm H} m_{\rm H}}} \phi_{\rm z} \left( \phi_{r}^{2} + \phi_{z}^{2} \right)^{-1/4}, \label{0816.13}
\end{eqnarray}
whose value at the center of the disk is
\begin{align}
v_{{\rm d}, z} (r=0) & = 1.4 \times 10^{6} {\rm cm \, s^{-1}} \left( \frac{f_{\rm d}}{0.5} \right)^{-1/2} \left( \frac{\lambda}{0.05} \right) \left( \frac{1+z_{\rm vir}}{10} \right)^{-1} \nonumber \\
&\times \left( \frac{\phi_{z}}{2.7} \right)^{1/2} \left( \frac{T_{\rm gas}}{8000~{\rm K}} \right)^{1/2} \left( \frac{Q}{1} \right)^{1/2} \left( \frac{M_{\rm h}}{10^{9}~M_{\odot}} \right)^{-1/6}. \label{0816.14}
\end{align}
Figure \ref{zu0818.1} shows the radial dependence of the dust terminal velocity $v_{\rm d, z}$ (Eq. \ref{0816.13}) at a disk scale height $(z=z_{0})$.
The terminal velocity is almost constant inside the disk radius $R_{\rm d}$,
and increases outward as the drag force becomes small with decreasing gas density.
The dust evacuation time $t_{\rm evac}$ is given by
\begin{align}
t_{\rm evac} &= \frac{z_{0}}{v_{{\rm d},z}} \nonumber \\
& = 1.1 \times 10^{5} \, {\rm yr} \, \left( \frac{f_{\rm d}}{0.5} \right)^{-1/2} \left( \frac{\lambda}{0.05} \right) \left( \frac{1+z_{\rm vir}}{10} \right)^{-1} \left( \frac{\phi _{\rm z}}{2.7} \right)^{-1/2} \nonumber \\
& \hspace{1.5cm} \times \left( \frac{T_{\rm gas}}{8000 {\rm K}} \right)^{1/2} \left( \frac{Q}{1} \right)^{-1/2} \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{-1/6}. \label{0816.15}
\end{align}
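The coefficient in Equation \eqref{0816.15} follows directly from the fiducial scale height and terminal velocity:

```python
pc, yr = 3.086e18, 3.156e7
z0 = 1.6 * pc              # disk scale height at r = 0, Eq. (0816.3)
v_dz = 1.4e6               # cm/s, Eq. (0816.14)
t_evac = z0 / v_dz         # Eq. (0816.15)
print(f"t_evac = {t_evac / yr:.2e} yr")
```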
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./vdz2.pdf}
\caption{ The vertical component of the dust velocity $v_{{\rm d},z}$ at the scale height $z_{0}$ as a function of radius $r$ in the case with $f_{\rm d} = 0.5$, $\lambda = 0.05$, $1+z_{\rm vir} = 10$, $T_{\rm gas} = 8000~{\rm K}$, $Q = 1$, $\epsilon_{*} = 0.1$ and $M_{\rm h} = 10^{9}M_{\odot}$.
The different lines show the case with $c_{*} = 1$, $0.1$ and $0.01$, from the top to the bottom.
\label{zu0818.1}}
\end{center}
\end{figure}
In Figure \ref{zu0818.2}, we show the radial dependence of dust evacuation time $t_{\rm evac}$.
It is almost constant inside the disk radius $R_{\rm d}$ as expected from the behavior of $v_{{\rm d},z}$ and increases rapidly at $r>R_{\rm d}$ due to the increasing scale height (Eq. \ref{0816.3}).
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./tevac2.pdf}
\caption{ Dust evacuation time $t_{\rm evac}$, expansion time of H{\sc ii} regions $t_{\rm HII}$ and lifetime of OB stars $t_{\rm OB}$
at the disk scale height $z_{0}$ in the case with $f_{\rm d} = 0.5$, $\lambda = 0.05$, $1+z_{\rm vir} = 10$, $T= 8000{\rm K}$, $Q = 1$, $\epsilon_{*} = 0.1$ and $M_{\rm h} = 10^{9}M_{\odot}$.
The cases with $c_{*} = 1$, $0.1$, and $0.01$ are shown as indicated for $t_{\rm evac}$ and $t_{\rm HII}$ \label{zu0818.2}.}
\end{center}
\end{figure}
\subsection{Vertical expansion of the H{\sc ii} region in the disk}\label{dust_less_gala}
We seek the condition for star formation from dust-free galactic disks by
comparing the lifetime of OB stars, $t_{\rm OB}$, and the expansion time of the H{\sc ii} region, $t_{\rm HII}$, with the dust evacuation time, $t_{\rm evac}$,
as in Section \ref{cloud_dest} and \ref{sec.dustless}.
We define the Str\"omgren height $z_{\rm S0}$ where the number of
ionizing photons emitted from $z=0$ equals the integrated recombination rate of hydrogen atoms:
\begin{eqnarray}
\int ^{z_{\rm S0}}_{-z_{\rm S0}} dz n_{\rm H}^{2} \alpha_{\rm B} = \mathcal S. \label{0818.1}
\end{eqnarray}
The ionizing-photon emissivity per unit area $\mathcal S$ is given by Equation \eqref{0908.2}.
Neglecting the density variation with height and substituting Equations \eqref{0814.1}, \eqref{0816.2} and \eqref{0908.2} into \eqref{0818.1},
we obtain
\begin{align}
z_{\rm S0} = 1.9 \times & 10^{-2} \, {\rm pc} \left( \frac{f_{\rm d}}{0.5} \right)^{-3} \left( \frac{\lambda}{0.05} \right)^{6} \left( \frac{1+z_{\rm vir}}{10} \right)^{-6} \nonumber \\
& \times \left( \frac{T}{8000{\rm K}} \right)^{2} \left( \frac{\epsilon_{*}}{0.1} \right) \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{-1} \exp \left( \frac{3 r}{R_{\rm d}} \right) .\label{0818.2}
\end{align}
As in Equation \eqref{1.6.12}, the time for the H{\sc ii} region to expand up to $z=z_{\rm 0}$ is estimated as
\begin{eqnarray}
t_{\rm HII} = \frac{4}{7} \frac{z_{\rm S0}}{c_{\rm HII}} \left[ \left( \frac{z_{0}}{z_{\rm S0}} \right)^{7/4} -1 \right]. \label{0818.3}
\end{eqnarray}
In particular, at the center of the disk,
\begin{align}
t_{\rm HII} (r = 0) = 2.2 \times & 10^{6} \, {\rm yr} \left( \frac{f_{\rm d}}{0.5} \right)^{1/2} \left( \frac{\lambda}{0.05} \right)^{-1} \left( \frac{1+z_{\rm vir}}{10} \right) \nonumber \\
& \times \left( \frac{T}{8000{\rm K}} \right)^{1/4}\left( \frac{\epsilon_{*}}{0.1} \right)^{-3/4} \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{1/6} .
\label{0818.4}
\end{align}
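Equations \eqref{0818.2} and \eqref{0818.4} can be checked at the disk center using $\mathcal S$ from Equation \eqref{0816.7}:

```python
import math

pc, yr = 3.086e18, 3.156e7
S = 1.1e50 / pc**2          # ionizing photons s^-1 cm^-2, Eq. (0816.7), r = 0
n0 = 2.0e4                  # cm^-3, central density (Eq. 0816.2)
alpha_B = 2.6e-13           # cm^3 s^-1 at T = 1e4 K
z_S0 = S / (2.0 * n0**2 * alpha_B)      # Eq. (0818.1) with constant density

z0 = 1.6 * pc               # disk scale height at r = 0
c_HII = 11.4e5              # cm/s
t_HII = (4.0 / 7.0) * (z_S0 / c_HII) * ((z0 / z_S0) ** 1.75 - 1.0)  # Eq. (0818.3)
print(f"z_S0 = {z_S0 / pc:.3f} pc, t_HII = {t_HII / yr:.2e} yr")
```

Both quoted coefficients ($z_{\rm S0} \simeq 1.9 \times 10^{-2}~{\rm pc}$ and $t_{\rm HII} \simeq 2.2 \times 10^{6}~{\rm yr}$) are recovered to within $\sim 5$ per cent.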
The expansion time of the H{\sc ii} region $t_{\rm HII}$ is shown in Figure \ref{zu0818.2} as a function of the radius.
At $r < R_{\rm d}$, $t_{\rm HII}$ is longer than $t_{\rm evac}$ because
the initial Str\"omgren height of the H{\sc ii} region $z_{\rm S0}$ (Eq. \ref{0818.2}) is much smaller than the disk scale height $z_{0}$.
As the radius exceeds $R_{\rm d}$, the Str\"omgren height $z_{\rm S0}$ rapidly increases (Eq. \ref{0818.2}),
resulting in $t_{\rm HII}$ shorter than $t_{\rm evac}$.
\subsection{Condition for dust evacuation from galactic disks}
For the dust evacuation (Eq. \ref{0907.2}), the evacuation time $t_{\rm evac}$ must be shorter than the lifetime of the cloud, $t_{\rm cl} = {\rm min} (t_{\rm OB}, t_{\rm HII})$.
For example, in Figure \ref{zu0818.2} for the halo shown ($M_{\rm h} = 10^{9}~M_{\odot}$, $1+z_{\rm vir} = 10$, $T_{\rm gas} = 8000~{\rm K}$, $\lambda = 0.05$ and $f_{\rm d} = 0.5$), the dust evacuation occurs inside the galactic disk $r < R_{\rm d}$ and the dust-free gas is formed in all the three cases.
Additionally, there is a condition on the metallicity of the gas.
As explained in Section \ref{sec.det}, in optically thick disks the direct light from stars is absorbed and re-emitted
as thermal emission of dust grains,
and the radiation force on dust grains becomes about two orders of magnitude smaller.
Therefore, the optical depth to the direct light along the vertical direction must be less than unity for the dust evacuation:
\begin{align}
\tau &= \int ^{\infty}_{-\infty} \rho \kappa dz \nonumber \\
&= 1.6 \times 10^{2} \left( \frac{Z}{Z_{\odot}} \right) \left( \frac{f_{\rm d}}{0.5} \right) \left( \frac{\lambda}{0.05} \right)^{-2} \left( \frac{1+z_{\rm vir}}{10} \right)^{2} \nonumber \\
& \hspace{1.5cm} \times \left( \frac{\kappa}{350 {\rm cm^{2} g^{-1}}} \right) \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{1/3} \exp \left( \frac{-2r}{R_{\rm d}} \right) < 1. \label{0818.5}
\end{align}
This requires low-metallicity environments with
\begin{align}
Z < 6.1 \times 10^{-3}Z_{\odot} \left( \frac{f_{\rm d}}{0.5} \right)^{-1} & \left( \frac{\lambda}{0.05} \right)^{2} \left( \frac{1+z_{\rm vir}}{10} \right)^{-2} \nonumber \\
& \times \left( \frac{\kappa}{350 {\rm cm^{2} g^{-1}}} \right)^{-1} \left( \frac{M_{\rm h}}{10^{9}M_{\odot}} \right)^{-1/3}. \label{0818.6}
\end{align}
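The coefficients in Equations \eqref{0818.5} and \eqref{0818.6} follow from the central surface density (Eq. \ref{0816.4}):

```python
Sigma_gas0 = 0.46            # g/cm^2, disk surface density at r = 0 (Eq. 0816.4)
kappa_sun = 350.0            # cm^2 g^-1 at Z = Z_sun
tau_sun = Sigma_gas0 * kappa_sun     # Eq. (0818.5) at Z = Z_sun, r = 0
Z_max = 1.0 / tau_sun                # tau < 1 requires Z < Z_max (in Z_sun)
print(f"tau(Z_sun) = {tau_sun:.0f}, Z_max = {Z_max:.2e} Z_sun")
```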
Next we derive the constraint on the halo mass $M_{\rm h}$ and redshift $1+z_{\rm vir}$
under the condition of the spin parameter $\lambda = 0.05$ and
the baryon mass fraction taken into the disks $f_{\rm d} = 0.5$.
Figure \ref{zu0908_2} shows the range of halo parameters for the dust evacuation from the disks.
Note that since the timescales $t_{\rm evac}$ and $t_{\rm cl}$ hardly change inside the disk ($r < R_{\rm d}$),
it suffices to consider the condition $t_{\rm evac}<t_{\rm cl}$ at the center ($r=0$).
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./Mvd.pdf}
\caption{ Conditions for the dust evacuation from galactic disks.
The thick solid lines correspond to the halo mass $M_{\rm h}$ satisfying the condition $t_{\rm evac} = t_{\rm cl}$.
The thick dashed lines show the galaxies where the SFE becomes $\epsilon_{*} = 0.9$.
Black, red and blue colors correspond to the cases with $c_{*} = 1$, $0.1$ and $0.01$, respectively.
The color-shaded areas between them represent the regions that satisfy both the conditions $t_{\rm evac} < t_{\rm cl}$ and $\epsilon_{*} < 0.9$.
Namely, those conditions are satisfied in the upper side of the blue-solid line (both in the blue and red shaded regions) for $c_{*} = 0.1$, and between the red-solid and red-dashed lines (in the red-shaded region) for $c_{*}=0.01$, respectively.
In the case with $c_{*} = 1$, there is no such region.
The galactic disks become optically thin below the thin dotted lines for the indicated metallicity.
The yellow-solid lines show the halo mass and redshift when $1$, $2$ and $3~\sigma$ fluctuations are virialized
\citep{Barkana2001,Eisenstein1998,Eisenstein1999,Planck2016}.
\label{zu0908_2}}
\end{center}
\end{figure}
In Figure \ref{zu0908_2}, black, red and blue colors correspond to the cases with the SFR parameter $c_{*} = 1, 0.1$, and $0.01$, respectively.
The condition $t_{\rm evac}<t_{\rm cl}$ is satisfied above the solid line, but
above the dashed line the SFE is already $>0.9$ at $t_{\rm evac}$.
Thus, the color-shaded regions between the two lines represent the parameter space where the dust evacuation occurs
while gas still remains at that time, resulting in star formation from the dust-free gas.
Also, the thin-dotted lines show contours of the maximum metallicity (Eq. \ref{0818.6}) that allows the dust evacuation
for a given parameter set ($M_{\rm h}, z_{\rm vir}$).
Additionally, the yellow-solid lines show the relation between the halo mass and the formation redshift for the $1$, $2$ and $3 \sigma$ cosmological density fluctuations.
Here, we use the cosmological parameters of \citet{Planck2016} and \citet{Eisenstein1998,Eisenstein1999}.
For higher halo mass and formation redshift, the dust evacuation time $t_{\rm evac}$ is shorter (Eq. \ref{0816.15}) and the H{\sc ii} region expansion time $t_{\rm HII}$ is longer (Eq. \ref{0818.4}).
Thus the minimum halo mass for the dust evacuation increases with decreasing redshift.
As an example, for a halo with mass $M_{\rm h} = 10^{9}M_{\odot}$,
the condition for dust evacuation is satisfied if formed at $z_{\rm vir} \gtrsim 3$
with metallicity $Z \lesssim 10^{-1} - 10^{-2}~Z_{\odot}$
with little dependence on $c_{\ast}$.
For lower halo masses, the formation redshift must be higher,
e.g., $ z_{\rm vir} \gtrsim 9$ with $M_{\rm h}=10^{7}M_{\odot}$.
As shown in Figure \ref{zu0908_2}, the lowest halo mass for the dust evacuation is equivalent to the $2 \sigma$ ($1 \sigma$) fluctuation at $z_{\rm vir} \simeq 15$ ($\simeq 5$, respectively).
With larger values of $c_{*}$, the H{\sc ii} region expansion time, $t_{\rm HII}$, becomes shorter.
In addition, since the galactic disk must be optically thin,
the dust evacuation only occurs at metallicity lower than $\sim 10^{-2}Z_{\sun}$,
depending somewhat on the halo mass and formation redshift.
\section{Caveats} \label{suppression}
In Sections \ref{dust_evac1} and \ref{dustevac2},
we have modelled the dust evacuation under the assumption that the ionization degree
is lower than $\sim 10^{-5}$ so that the Coulomb drag is inefficient.
The cosmic rays (CRs) or X-rays from nearby star-forming galaxies, however, can boost
the ionization degree in the interstellar medium.
In addition, although we assumed the efficiency factor of dust photo-absorption $Q=1$
in calculating the evacuation time, it becomes smaller for grains with size $ \la 0.01 {\rm \mu m}$
and the radiation force on dust becomes weaker.
Also, we have not considered magnetic fields, which prevent the motion of charged grains perpendicular to the field lines.
In the following, we discuss those effects on the dust evacuation.
\subsection{Possible enhancement of ionization degree} \label{sec4.1}
As discussed in Section \ref{sec.det}, the Coulomb drag suppresses the motion of dust grains and makes
the evacuation time longer than the lifetime of massive stars for ionization degree $x_{\rm e} > 10^{-5}$.
An increase in the ionization degree can be caused by cosmic rays or X-rays
from nearby galaxies, or by mixing with ionized gas from H{\sc ii} regions or stellar winds.
In the following, we estimate their impact on the ionization degree.
\subsubsection{Cosmic rays}
We here follow \citet{Stacy2007} in estimating the ionization degree due to CRs,
which are assumed to be produced in SN remnants in a different, nearby galaxy.
Since we are considering the dust evacuation before the first SN explodes,
we assume the CRs and X-rays are coming from other nearby galaxies.
The ionization rate $\xi_{\rm CR}$ is related with the energy density of CRs $U_{\rm CR}$ as:
\begin{eqnarray}
\xi_{\rm CR} = 1.4 \times 10^{-17} {\rm s^{-1}} \left( \frac{U_{\rm CR}}{10^{-15} \, {\rm erg \, cm^{-3}}} \right), \label{0121.1}
\end{eqnarray}
where the energy distribution is assumed to be the same power law as in the Milky Way with the slope $-2.65$ \citep{Blumer2009,Draine2011a},
ranging from $10^{6} \, {\rm eV}$ to $10^{15} \, {\rm eV}$.
Assuming that a fraction $0.1$ of the SN energy $10^{51}~{\rm erg}$ goes into CR acceleration,
we adopt the total CR energy generated in a SN remnant $E_{\rm CR} = 10^{50} \, {\rm erg}$.
Using the mass fraction of massive stars $f_{\rm OB}=0.74$
and their average mass $\bar m_{\rm OB}=29M_{\sun}$ for the IMF of Equation \eqref{3.1.1},
the SN rate is
\begin{align}
\dot N_{\rm SN} &= \frac{f_{\rm OB} {\rm SFR}}{\bar m_{\rm OB}} \nonumber \\
&= 2.6 \times 10^{-2} \, {\rm yr^{-1}} \left( \frac{{\rm SFR}}{1 \, M_{\odot} \, {\rm yr^{-1}}} \right). \label{0122.2}
\end{align}
At the distance $d$ from the CR source galaxy,
the CR energy density is given by
\begin{align}
U_{\rm CR} &= \frac{\dot N_{\rm SN } E_{\rm CR}}{4 \pi d^2 v_{\rm CR}} \nonumber \\
&= 3.4 \times 10^{-15} \, {\rm erg \, cm^{-3}} \left( \frac{d}{10 \, {\rm kpc}} \right)^{-2} \left( \frac{{\rm SFR}}{1 \, M_{\odot} {\rm yr^{-1}}} \right), \label{0122.3}
\end{align}
where $v_{\rm CR} = 8.8 \times 10^{-2} c$ is the average velocity of CRs \citep{Stacy2007}.
By substituting Equation \eqref{0122.3} into Equation \eqref{0121.1},
we obtain the CR ionization rate:
\begin{eqnarray}
\xi_{\rm CR} = 4.8 \times 10^{-17} \, {\rm s^{-1}} \, \left( \frac{d}{10 \, {\rm kpc}} \right)^{-2} \left( \frac{{\rm SFR}}{1 \, M_{\odot} {\rm yr^{-1}}} \right). \label{0122.4}
\end{eqnarray}
Using this rate in Equation (\ref{0802.2}),
the ionization degree is given by
\begin{align}
x_{\rm e} = & 3.5 \times 10^{-5} \left( \frac{R_{\rm cl}}{10 ~ {\rm pc}} \right)^{3/2} \nonumber \\
& \times \left( \frac{M_{\rm cl}}{10^{6} ~ M_{\odot}} \right)^{-1/2}
\left( \frac{d}{10 ~ {\rm kpc}} \right)^{-1} \left( \frac{{\rm SFR}}{1 ~ M_{\odot} {\rm yr^{-1}}} \right)^{1/2}. \label{0122.5}
\end{align}
This means that if a source galaxy with ${\rm SFR} \gtrsim 1~M_{\odot}\,{\rm yr^{-1}}$ lies within $\sim 40~{\rm kpc}$, the CRs raise the ionization degree above the threshold value $10^{-5}$ for the Coulomb drag, resulting in the suppression of the dust evacuation.
In the above discussion, we assumed that CRs stream away freely from the source galaxy.
Magnetic fields, if present with sufficient strength, confine CRs in the source galaxy.
Given that SN remnants are inside the galaxy, the CR intensity is enhanced not only by their proximity but also by this magnetic effect.
In this case, the ionization degree becomes much higher compared with Equation \eqref{0122.5}, and the dust evacuation is strongly inhibited.
In contrast, if the CR sources are outside the galaxy under consideration, the ionization rate would be reduced because the magnetic field protects the galaxy from the CR incidence.
\subsubsection{X-rays}
Massive binary stars can evolve into X-ray binaries.
We evaluate the ionization by such X-ray sources based on the model of \citet{Glover2003}.
We again assume the sources are in a nearby galaxy at distance $d$ from the cloud under consideration.
The X-ray luminosity is related to the star formation rate as
\begin{eqnarray}
L_{\rm X} = 6.0 \times 10^{38} ~ {\rm erg \, s^{-1}} \left( \frac{{\rm SFR}}{1~M_{\odot} {\rm yr^{-1}}} \right). \label{0122.6}
\end{eqnarray}
For the power-law spectrum with the slope $- 1.5$,
the X-ray intensity is given by
\begin{align}
J_{\rm X} = 2.3 \times 10^{-25} & ~ {\rm erg \, s^{-1} \, cm^{-2} \, sr^{-1} \, Hz^{-1}} \nonumber \\
& \times \left(\frac{\nu}{\nu_{0}} \right)^{-1.5} \left( \frac{d}{10 \, {\rm kpc}} \right)^{-2} \left( \frac{{\rm SFR}}{1 ~ M_{\odot} {\rm yr^{-1}}} \right), \label{0122.7}
\end{align}
where $h \nu_{\rm 0} = 1~{\rm eV}$.
The primary ionization rate of X-rays is thus given by
\begin{align}
\xi_{\rm X,p} &= \int \frac{4 \pi J_{\rm X}(\nu)}{h \nu} \sigma_{\rm H} (\nu) d \nu \nonumber \\
&= 6.8 \times 10^{-23} ~ {\rm s^{-1}} \left( \frac{d}{10~{\rm kpc}} \right)^{-2} \left( \frac{{\rm SFR}}{1~M_{\odot}{\rm yr^{-1}}} \right), \label{0122.8}
\end{align}
where $\sigma_{\rm H}(\nu)$ is the cross section of hydrogen and we have set the energy range from $2~{\rm keV}$ to $10 ~{\rm keV}$.
Secondary ionization is dominant in the case of X-ray ionization.
Using the secondary ionization rate $\phi^{\rm H}$ by \citet{Wolfire1995}, we obtain the X-ray ionization rate as
\begin{align}
\xi_{\rm X} &= \xi_{\rm X,p} (1+\phi^{\rm H}) \nonumber \\
&= 8.8 \times 10^{-21} ~{\rm s^{-1}} \left( \frac{d}{10~{\rm kpc}} \right)^{-2} \left( \frac{\rm SFR}{1~M_{\odot}{\rm yr^{-1}}} \right). \label{0122.9}
\end{align}
Comparing with the CR ionization (Eq. \ref{0122.4}),
the X-ray ionization rate is more than three orders of magnitude lower and
has little impact on the dust evacuation.
\subsubsection{Mixing with ionized medium}
Near the boundary with ionized gas, such as H{\sc ii} regions or ionized stellar winds,
mixing of the neutral and ionized media may occur owing, e.g., to hydrodynamical instabilities.
Although this temporarily raises the ionization degree in the neutral gas,
the enhancement decays on the recombination timescale, which
can be estimated for the ionization degree $x_{\rm e} = 10^{-5}$ as
\begin{align}
t_{\rm rec} &= \left( \alpha_{\rm B} n_{\rm H} x_{\rm e} \right)^{-1} \nonumber \\
&= 4.5 \times 10^{4} ~{\rm yr} \left( \frac{n_{\rm H}}{10^{4} ~{\rm cm^{-3}}} \right)^{-1} \left( \frac{x_{\rm e}}{10^{-5}}\right)^{-1}. \label{0122.10}
\end{align}
Since the recombination time is much shorter than the dust evacuation time
$t_{\rm evac}$ (Eq. \ref{1.6.4}, \ref{0816.15}), mixing with the ionized gas does not affect the dust evacuation
unless the ionized gas is constantly supplied by some mechanism in a timescale shorter than $\sim t_{\rm rec}$.
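The numerical value in Equation \ref{0122.10} can be reproduced assuming a standard case-B fit, $\alpha_{\rm B}(T) \simeq 2.59\times10^{-13}\,(T/10^{4}~{\rm K})^{-0.7}~{\rm cm^{3}\,s^{-1}}$, evaluated at $T \simeq 100$~K appropriate for cold neutral gas (the fit and the temperature are our assumptions, not stated in the text):

```python
# Case-B recombination time (Eq. 0122.10) for cold neutral gas.
# alpha_B(T) ~ 2.59e-13 (T/1e4 K)^{-0.7} cm^3/s is a standard fit (assumed);
# the quoted 4.5e4 yr corresponds to T of order 100 K.
YR = 3.156e7                                 # seconds per year
alpha_B = 2.59e-13 * (100.0 / 1e4) ** -0.7   # cm^3/s at T = 100 K
n_H, x_e = 1e4, 1e-5                         # cm^-3, ionization degree
t_rec = 1.0 / (alpha_B * n_H * x_e) / YR     # yr
print(f"t_rec ~ {t_rec:.1e} yr")             # ~5e4 yr, cf. 4.5e4 yr quoted
```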
\subsection{Small dust grains} \label{sec4.2}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{./small_grain.pdf}
\caption{Same as Figure \ref{zu0807.3}, but for the case of the smaller efficiency factor of $Q = 0.3$. \label{zu0123.1}}
\end{center}
\end{figure}
In previous sections, we assumed that the dust absorption efficiency factor $Q=1$, corresponding to grains
larger than $\sim 0.01~\mu{\rm m}$.
If small grains are dominant, however, the efficiency factor decreases and the dust evacuation time becomes longer as $Q^{-1/2}$
(Eqs. \ref{1.6.4} and \ref{0816.15}).
Dust grains in the early Universe (at $z>6$) are mainly produced in SN events because intermediate-mass stars ($<8~M_{\odot}$) take more than $1~{\rm Gyr}$ to become AGB stars which produce a large quantity of dust grains \citep{Morgan2003,Dwek2007}.
Theoretical calculations indicate that dust grains in the early Universe
have a flat, ``top-heavy'' size distribution with few small grains $a< 0.01~\mu{\rm m}$ \citep{Todini2001,Schneider2012},
unlike the so-called MRN distribution \citep{Mathis1977} in the
local interstellar medium, which is a ``bottom-heavy'' power law with slope $\alpha = -3.5$ in the range $a \sim 0.01 - 1.0~\mu{\rm m}$.
In addition, small grains $\la 0.05~\mu{\rm m}$ are subsequently destroyed by the passage of the SN reverse shock
in their formation site \citep{Nozawa2007,Bianchi2007}.
Since the efficiency factor $Q \sim 1$ (Figure \ref{zu0807.6}) for grains larger than $0.01~\mu{\rm m}$,
our assumption $Q=1$ would be reasonable for dust in the early universe.
Considering uncertainties in the dust size distribution, however,
we also calculate the case of small grains with $a=5\times10^{-3}~\mu{\rm m}$, which have $Q \simeq 0.3$.
Figure \ref{zu0123.1} presents the condition for formation of dust-free star-forming clouds.
Owing to the smaller efficiency factor, the parameter range for the dust-free cloud formation becomes narrower
than in the fiducial case (Figure \ref{zu0807.3}).
Only massive compact clouds satisfy the dust evacuation condition
if the typical dust size is smaller than $\sim 0.01 \mu{\rm m}$.
Further studies are needed about the size distribution of grains in the early Universe.
\subsection{Magnetic field} \label{sec4.3}
Dust grains acquire positive electric charge through the photoelectric effect, so that
their motion is influenced by the presence of a magnetic field.
In fact, the gyro-radius of a grain
\begin{align}
r_{\rm gyro} &= \frac{m_{\rm d} v_{\rm d} c}{q B} \nonumber \\
&= 9.4 \times 10^{-7} ~{\rm pc} \left( \frac{q}{600~e} \right)^{-1} \left( \frac{B}{100~\mu{\rm G}} \right)^{-1} \nonumber \\
& \hspace{2cm} \times \left( \frac{\rho_{\rm d}}{3~{\rm g \, cm^{-3}}} \right) \left( \frac{a_{\rm d}}{0.1~\mu{\rm m}} \right)^{3} \left( \frac{v_{\rm d}}{7 ~ {\rm km \, s^{-1}}} \right), \label{0124.1}
\end{align}
is much smaller than the star-forming regions
if the magnetic field is as strong as in local molecular clouds $\sim 100 ~\mu{\rm G}$ \citep{Crutcher1991}.
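A direct evaluation of the gyro-radius (a sketch in Gaussian units; with the full grain mass $(4\pi/3)a_{\rm d}^{3}\rho_{\rm d}$ the coefficient comes out a factor of a few above the value quoted in Eq.~\ref{0124.1}, but the conclusion $r_{\rm gyro} \ll 1~{\rm pc}$ is unaffected):

```python
import math

# Gyro-radius r = m_d * v_d * c / (q * B) in Gaussian units, for the
# fiducial grain of Eq. (0124.1).  All values below match the text.
PC, ESU, C = 3.086e18, 4.803e-10, 2.998e10   # cm/pc, esu, cm/s
a, rho_d = 1e-5, 3.0                         # grain radius (0.1 um) in cm, g/cm^3
m_d = (4.0 / 3.0) * math.pi * rho_d * a**3   # grain mass in g
q, B, v_d = 600 * ESU, 1e-4, 7e5             # charge, B = 100 uG, 7 km/s
r_gyro = m_d * v_d * C / (q * B) / PC        # pc
print(f"r_gyro ~ {r_gyro:.1e} pc")           # ~3e-6 pc: vastly below pc scales
```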
Below, we evaluate the dust drift velocity taking the effect of magnetic field into account \citep{Draine2011}.
For simplicity, we assume that the magnetic field is parallel to the $z$-axis, $\bm{B} = (0,0,B)$,
and the radiation force direction is at angle $\theta$ to the field orientation, $\bm{F}_{\rm rad}=(F_{\rm rad} \sin \theta,0,F_{\rm rad} \cos \theta)$.
Dust grains have steady time-averaged velocity $\bar{\bm{v}}$ with rotational motion around the magnetic field,
with $x$, $y$, and $z$-components, respectively,
\begin{eqnarray}
\bar{v}_{\rm x} &=& \frac{1}{1 + ( \omega t_{\rm drag} )^{2}} \frac{F_{\rm rad} t_{\rm drag}}{m_{\rm d}} \sin \theta, \label{0131.1} \\
\bar{v}_{\rm y} &=& - \frac{(\omega t_{\rm drag})}{1 + ( \omega t_{\rm drag} )^{2}} \frac{F_{\rm rad} t_{\rm drag}}{m_{\rm d}} \sin \theta,\label{0131.2} \\
\bar{v}_{\rm z} &=& \frac{F_{\rm rad} t_{\rm drag}}{m_{\rm d}} \cos \theta. \label{0131.3}
\end{eqnarray}
The drift velocity becomes
\begin{eqnarray}
v_{\rm d} = \sqrt{\frac{1 + \cos ^{2} \theta (\omega t_{\rm drag})^{2}}{1 + (\omega t_{\rm drag})^{2}}} \frac{F_{\rm rad} t_{\rm drag}}{m_{\rm d}}, \label{0131.4}
\end{eqnarray}
where $\omega=qB/m_{\rm d}c$ is the gyro-frequency and $t_{\rm drag}$ is the drag time (Eq. \ref{0807.1}).
From Equation \eqref{0131.4}, we can see that the magnetic field greatly hinders the grain motion
when
\begin{align}
\omega t_{\rm drag} &= 0.26 \left( \frac{q}{600~e} \right) \left( \frac{B}{0.1~\mu{\rm G}} \right) \left( \frac{a_{\rm d}}{0.1~\mu{\rm m}} \right)^{-2} \nonumber \\
& \hspace{2cm} \times \left( \frac{v_{\rm d}}{7 ~ {\rm km \, s^{-1}}} \right)^{-1} \left( \frac{n_{\rm H}}{10^{4}~{\rm cm^{-3}}} \right)^{-1} \label{0124.2}
\end{align}
is much larger than unity.
In particular, the dust velocity is reduced by a factor of $1/\sqrt{1 + (\omega t_{\rm drag})^{2}}$
from the value without a magnetic field when the radiation force is perpendicular to the field.
Magnetic fields in first galaxies are usually expected to be much weaker than in the Milky Way \citep{Koh2016}.
As seen in Equation \eqref{0124.2}, as long as the field is weaker than $\sim 0.1 ~\mu {\rm G}$, $\omega t_{\rm drag} \leq 1$ and
the dust evacuation is not suppressed.
Magnetic fields, however, can be potentially amplified by turbulent small-scale dynamo to equipartition, which corresponds to $\sim 100 ~ {\mu G}$ \citep{Schober2012}.
In this case, the dust drift in the vertical direction is strongly suppressed but dust grains can still drift in the direction along magnetic fields.
Thus, the suppression effect by magnetic fields would remain at the factor of a few level.
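The angular dependence in Equation \ref{0131.4} can be made concrete with a short numerical sketch, taking $\omega t_{\rm drag} = 0.26$ at $B = 0.1~\mu{\rm G}$ (Eq.~\ref{0124.2}) and scaling linearly to the equipartition value $B \sim 100~\mu{\rm G}$:

```python
import math

# Drift-speed reduction factor from Eq. (0131.4):
#   f = sqrt((1 + cos^2(theta) * x^2) / (1 + x^2)),  x = omega * t_drag.
def reduction(x, theta):
    return math.sqrt((1 + (math.cos(theta) * x) ** 2) / (1 + x ** 2))

x_weak = 0.26                  # B = 0.1 uG (Eq. 0124.2)
x_equi = 0.26 * (100 / 0.1)    # B = 100 uG (equipartition) -> x = 260
print(reduction(x_weak, math.pi / 2))   # ~0.97: evacuation barely affected
print(reduction(x_equi, math.pi / 2))   # ~4e-3: perpendicular drift quenched
print(reduction(x_equi, 0.0))           # 1.0: drift along B is unaffected
```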
For further discussion on this problem, detailed numerical calculation is awaited.
\section{Summary and discussion}\label{matome}
We have investigated whether the dust grains are evacuated from star-forming regions
by stellar radiation feedback so that star formation from the dust-free gas ensues
in some environments in the early universe.
We have obtained the condition for the dust evacuation by comparing the dust evacuation time, $t_{\rm evac}$, with the smaller of
the H{\sc ii} region expansion time, $t_{\rm HII}$, and the OB star lifetime $t_{\rm OB}$.
Our findings are summarized as follows:
\begin{itemize}
\item[(1)]
As star-forming clouds become compact, the evacuation time decreases.
Therefore, the dust evacuation can occur for compact star-forming clouds whose column density is $N_{\rm H} \simeq 10^{24} - 10^{26}~{\rm cm^{-2}}$, corresponding to radii of $1 - 10~{\rm pc}$ for the mass $M_{\rm cl} = 10^{6}M_{\odot}$.
The radiation force on dust grains is reduced significantly if the clouds are optically thick to the radiation from stars.
This imposes the condition that metallicity should be less than $\sim 10^{-2}~Z_{\odot}$.
\item[(2)]
The dust evacuation from galactic disks occurs more easily in more massive halos and at earlier formation redshift.
For example, halos of $\sim 10^{9}~M_{\odot}$ ($\sim 10^{7}~M_{\odot}$) formed at $z \sim 3$ ($z \sim 9$, respectively) can induce the dust evacuation.
We find that $t_{\rm evac}$ and $t_{\rm HII}$ are almost constant inside the galactic disks.
Therefore the dust evacuation occurs in the entire galactic disks once the condition for the dust evacuation is satisfied at the center of disks.
\end{itemize}
We expect that the dust-to-gas mass ratio is reduced remarkably or even becomes zero due to the dust evacuation.
On the other hand, the dust depletion factor $f_{\rm dep}$ is $\sim 0.5$ in the solar neighborhood \citep{Pollack1994}, and it is theoretically suggested to be smaller in very low-metallicity environments \citep{Asano2013,Remy-Ruyer2014}.
Thus, even if the dust is totally evacuated, the left-over gas metallicity is not significantly lower than the original (gas plus dust) metallicity.
In the dust evacuated gas, fragmentation would be suppressed because the dust cooling and ${\rm H_{2}}$ formation on grain surfaces are quenched.
Therefore, more massive stars tend to form in such regions.
It is theoretically expected that star-forming clumps can fragment to dense cores of sub-solar mass when the metallicity is higher than the critical value $Z_{\rm crit} = 10^{-6} - 10^{-5}~Z_{\odot}$ \citep{Omukai2005, Schneider2012}.
However, the observed metallicity distribution of metal-poor stars is better reproduced with the critical metallicity at $Z_{\rm crit} = 10^{-4}~Z_{\odot}$ \citep{Salvadori2007}, much higher than the theoretical expectation.
Reduction of depletion factor $f_{\rm dep}$ in low-metallicity environments may explain this difference in the theoretical and empirical critical metallicities $Z_{\rm crit}$.
This may be caused by slow dust growth in low-metallicity environments \citep{Asano2013,Remy-Ruyer2014}.
We expect that the dust evacuation also contributes to reducing the depletion factor.
At the metallicity $Z \sim Z_{\rm crit}$, the dust-to-gas ratio $\mathcal D$ is easily reduced by the dust evacuation below the critical value $\mathcal D_{\rm crit}$.
In order for $\mathcal D$ to exceed $\mathcal D_{\rm crit}$, the metallicity needs to be higher than the theoretical critical metallicity in the dust-evacuated gas.
Therefore, the dust evacuation boosts the critical metallicity $Z_{\rm crit}$ compared with the theoretical prediction for $f_{\rm dep}$ similar to the Galactic value.
As the dust size decreases, the radiation force on dust grains becomes weaker due to the lower absorption efficiency.
Therefore, small dust grains with the size less than $\sim 0.01\mu {\rm m}$
remain within star-forming clouds despite the stellar radiation feedback.
\citet{Nozawa2007} and \citet{Schneider2012} indicated that small grains of $\lesssim 0.01 \rm \mu m$
can be destroyed by the reverse shock in SN remnants.
Cosmic rays (CRs) and magnetic fields can suppress the dust evacuation.
CRs raise the ionization degree higher than $\sim 10^{-5}$, if the source galaxy with SFR $\gtrsim 1 ~M_{\odot}{\rm yr^{-1}}$ is within $40~{\rm kpc}$ or SN remnants are inside the same galaxy.
In this case, the dust evacuation is significantly suppressed due to the Coulomb drag.
Also, the magnetic fields can be stronger than $0.1 ~\mu {\rm G}$ via turbulent small-scale dynamo amplification \citep{Schober2012}, and suppress the dust drift in the vertical direction.
A large fraction of observed metal-poor stars are CEMP stars \citep{Frebel2015,Aoki2006,Aoki2013,Yoon2016}.
For the formation of CEMP stars, several scenarios have been proposed: accretion of interstellar gas containing metals but devoid of dust onto metal-poor stars \citep{Johnson2015}, mass transfer from a companion star in a binary system \citep{Komiya2007}, and metal enrichment by faint SNe of primordial stars \citep{Nomoto2013}.
The dust evacuation can also explain the observed CEMP star composition by altering the chemical composition of the gas from the original value.
The degree of depletion into grains sensitively depends on the metal species:
for example, iron is more depleted into grains than carbon, according to the observations of interstellar medium \citep[e.g.,][]{Jenkins2009,Johnson2015}.
Therefore, the dust evacuation can enhance the carbon abundance relative to iron.
\section*{Acknowledgements}
The authors wish to express their cordial thanks to Profs Takahiro
Tanaka and Takashi Hosokawa for their continual interest and advice.
We also thank Daisuke Nakauchi, Kazu Sugimura and Ryoki Matsukoba for fruitful discussions.
We would like to thank the anonymous reviewer for constructive comments and advice, especially
regarding Section 4.
This work is supported in part by MEXT/JSPS KAKENHI grants (KO:25287040, 17H01102, HY:17H04827).
\section{Introduction}
Let $L_1,\ldots,L_4\in\mathbb{Z}[x_1,x_2]$ be binary linear forms,
and let $\mathcal{R}\subset \mathbb{R}^2$ be any bounded region.
This paper is motivated by the question of determining conditions on
$L_1,\ldots,L_4$ and $\mathcal{R}$ under which it is possible to establish
an asymptotic formula for the sum
$$
S(X):=\sum_{{\bf x}=(x_1,x_2)\in \mathbb{Z}^2\cap X\mathcal{R}}r(L_1({\bf x}))r(L_2({\bf x}))r(L_3({\bf x})) r(L_4({\bf x})),
$$
as $X\rightarrow \infty$, where $X\mathcal{R}:= \{X{\bf x}: {\bf x}\in \mathcal{R}\}.$
The problem of determining an upper bound for $S(X)$ is substantially
easier. In fact the main result in the authors' recent investigation
\cite{nair}
into the average order of arithmetic functions over the values of binary forms can
easily be used to show that
$
S(X)\ll X^2,
$
provided that no two of $L_1,\ldots,L_4$ are proportional.
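For intuition, the sum $S(X)$ can be evaluated by brute force in small cases (the forms and region below are illustrative choices of ours, not assumptions of the paper; $r(n)$ denotes the number of representations of $n$ as a sum of two squares):

```python
# Brute-force sketch of S(X) for the illustrative data
#   L1 = x1, L2 = x2, L3 = x1 + x2, L4 = x1 + 2*x2,  R = (0,1)^2.
from math import isqrt

def r(n):
    """Number of (a, b) in Z^2 with a^2 + b^2 = n."""
    if n <= 0:
        return 0
    m = isqrt(n)
    return sum(1 for a in range(-m, m + 1) for b in range(-m, m + 1)
               if a * a + b * b == n)

def S(X):
    return sum(r(x1) * r(x2) * r(x1 + x2) * r(x1 + 2 * x2)
               for x1 in range(1, X) for x2 in range(1, X))

print([r(n) for n in (1, 2, 3, 5, 25)])   # [4, 4, 0, 8, 12]
print(S(40) / S(20))                       # roughly 4, reflecting S(X) ~ X^2
```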
In trying to establish an asymptotic formula for $S(X)$ there is no
real loss of generality in restricting one's attention to
the corresponding sum in which one of the variables $x_1,x_2$ is
odd. For $j \in \{*,0,1\}$, let us write $S_{j}(X)$ for the
corresponding
sum in which $x_1$ is odd and $x_2 \equiv j \bmod{2}$, where the case
$j=*$ means that no $2$-adic restriction is placed on $x_2$.
Our point of departure is recent work of Heath-Brown \cite{h-b03},
which establishes an asymptotic formula for $S_*(X)$ when
$L_1,\ldots,L_4$ and $\mathcal{R}$ satisfy the following normalisation
hypothesis:
\begin{enumerate}
\item[(i)]
$\mathcal{R}$ is an open, bounded and convex region, with a piecewise continuously
differentiable boundary,
\item[(ii)]
no two of $L_1,\ldots,L_4$ are proportional,
\item[(iii)]
$L_i({\bf x})>0$ for all ${\bf x}
\in \mathcal{R}$,
\item[(iv)]
we have $L_i({\bf x})\equiv x_1 \bmod{4}$.
\end{enumerate}
Here, as throughout our work, the index $i$ denotes a generic
element of the set $\{1,2,3,4\}$.
We will henceforth say that $L_1,\ldots,L_4, \mathcal{R}$ ``satisfy \textsf{NH}$_0$'' if these
four conditions hold.
The first three conditions are all quite natural, and do not impose any
serious constraint on $L_1,\ldots,L_4, \mathcal{R}$. The fourth condition is
more problematic however, especially when it comes to applying the
result in other contexts.
We will return to this issue shortly. For the moment we
concern ourselves with presenting a refinement of Heath-Brown's
result. It will be necessary to begin by introducing some more
notation.
For given $L_1,\ldots,L_4,\mathcal{R}$ we
will write
\begin{equation}
\label{eq:Linf}
L_\infty=L_\infty(L_1,\ldots,L_4):=\max_{1\leq i\leq 4}\|L_i\|,
\end{equation}
where $\|L_i\|$ denotes the maximum
modulus of the coefficients of $L_i$, and
\begin{equation}
\label{eq:rinf}
r_\infty=r_\infty(\mathcal{R}):=\sup_{{\bf x}\in\mathcal{R}}\max\{|x_1|,|x_2|\}.
\end{equation}
Furthermore, let
\begin{equation}
\label{eq:r'}
r'=r'(L_1,\ldots,L_4,\mathcal{R}):=\sup_{{\bf x}\in\mathcal{R}}\max_{1\leq i \leq
4}|L_i(\mathbf{x})|.
\end{equation}
Define the real number
\begin{equation}\label{defeta}
\eta:=1-\frac{1+\log\log 2}{\log 2},
\end{equation}
with numerical value $0.08607\ldots$, and let $\chi$ be the
non-principal character modulo $4$ defined multiplicatively by
$$
\chi(p):=\left\{
\begin{array}{ll}
+1, & \mbox{if $p\equiv 1\bmod{4}$},\\
-1, & \mbox{if $p\equiv 3\bmod{4}$},\\
0, & \mbox{if $p=2$}.
\end{array}
\right.
$$
We are now ready to reveal our first result.
\begin{thm}\label{main0}
Assume that $L_1,\ldots,L_4,\mathcal{R}$ satisfy \textsf{NH}$_0$, and
let $\varepsilon>0$. Suppose that $r'X^{1-\varepsilon}\geq 1$.
Then we have
$$
S_{*}(X)
= 4\pi^4 \meas(\mathcal{R})X^2 \prod_{p>2}\sigma_p^* +
O\Big(\frac{L_\infty^{ \varepsilon}r_\infty r'X^2}{(\log X)^{ \eta-\varepsilon}}\Big),
$$
where
\begin{equation}
\label{eq:sig*}
\sigma_p^*:=\Big(1-\frac{\chi(p)}{p}\Big)^4
\sum_{a,b,c,d=0}^{\infty}\chi(p)^{a+b+c+d}\rho_*(p^a,p^b,p^c,p^d)^{-1},
\end{equation}
and
\begin{equation}
\label{eq:rho*}
\rho_*(\mathbf{h}):=\det\{{\bf x}\in\mathbb{Z}^2: h_i\mid L_i({\bf x})\}
\end{equation}
as a sublattice of $\mathbb{Z}^2$.
Moreover, the product $\prod \sigma_p^*$ is absolutely convergent.
\end{thm}
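The local densities $\rho_*(\mathbf{h})$ of \eqref{eq:rho*} can be computed in small cases by counting residues: the lattice $\{{\bf x}\in\mathbb{Z}^2: h_i\mid L_i({\bf x})\}$ contains $M\mathbb{Z}^2$ for $M=[h_1,\ldots,h_4]$, so its determinant equals $M^2$ divided by the number of solutions modulo $M$. A short sketch (the forms below are illustrative choices, not taken from the paper):

```python
# rho_*(h) as a lattice index: det = M^2 / #{x mod M : h_i | L_i(x)},
# where M = lcm(h_1, ..., h_4).  Forms are illustrative only.
from math import lcm

def rho(h, forms):
    M = lcm(*h)
    N = sum(1 for x1 in range(M) for x2 in range(M)
            if all(L(x1, x2) % hi == 0 for hi, L in zip(h, forms)))
    return M * M // N   # N divides M^2 for a sublattice of Z^2

forms = [lambda a, b: a, lambda a, b: b,
         lambda a, b: a + b, lambda a, b: a + 2 * b]
print(rho((1, 1, 1, 1), forms))   # 1: no condition, the lattice is Z^2
print(rho((5, 1, 1, 1), forms))   # 5: a single congruence 5 | x1
print(rho((5, 5, 1, 1), forms))   # 25: independent congruences on x1, x2
```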
The implied constant in this estimate is allowed to depend upon the
choice of $\varepsilon$, a convention that we will adopt
for all of the implied constants in this paper.
It would be straightforward to replace the term $(\log X)^\varepsilon$ by
$(\log\log X)^A$ in the error term, for some explicit value of $A$.
For the purposes of comparison, we note that \cite[Theorem 1]{h-b03}
consists of an asymptotic formula for $S_*(X)$ with error
$$
O_{L_1\ldots,L_4,\mathcal{R}}\Big( \frac{X^2(\log\log
X)^{15/4}}{(\log X)^{\eta/2}}\Big).
$$
Here there is an unspecified dependence on $L_1,\ldots,L_4,\mathcal{R}$,
and $\eta$ is given by \eqref{defeta}.
Thus Theorem \ref{main0} is stronger than \cite[Theorem 1]{h-b03}
in two essential aspects. Firstly, we have been able to obtain complete
uniformity in $L_1,\ldots,L_4,\mathcal{R}$ in the error term, and
secondly, our exponent of $\log X$ is almost twice the size.
Our next result extends Theorem \ref{main0} to
points running over vectors belonging to suitable sublattices of
$\mathbb{Z}^2$. The advantages of such a generalisation will be made clear
shortly.
For any $\mathbf{D}=(D_1,\ldots,D_4) \in \mathbb{N}^4$, we let
\begin{equation}
\label{eq:lattice}
\mathsf{\Gamma}_{\mathbf{D}}=\mathsf{\Gamma}(\mathbf{D};L_1,\ldots,L_4):= \{{\bf x}\in\mathbb{Z}^2: D_i\mid L_i({\bf x})\}.
\end{equation}
Then $\mathsf{\Gamma}_{\mathbf{D}}\subseteq \mathbb{Z}^2$ is an integer lattice of
rank $2$. Next, let $\mathbf{d}=(d_1,\ldots,d_4)\in\mathbb{N}^4$ and assume that $d_i\mid D_i$. In particular it follows that
$
\mathsf{\Gamma}_{\mathbf{D}}\subseteq \mathsf{\Gamma}_{\mathbf{d}}.
$
Throughout this paper we will focus our attention on $(\mathbf{d},\mathbf{D})
\in \mathcal{D}$, where
\begin{equation}
\label{eq:D}
\mathcal{D}:= \big\{
(\mathbf{d}, \mathbf{D})\in \mathbb{N}^8: 2\nmid d_iD_i, ~d_i\mid D_i
\big\}.
\end{equation}
For $j \in \{*,0,1\}$ the goal is to establish
an asymptotic formula for
\begin{equation}
\label{eq:Sj}
S_{j}(X;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}}):=\sum_{\tolt{{\bf x}\in \mathsf{\Gamma}_{\mathbf{D}}\cap
X\mathcal{R}}{2\nmid x_1}{x_2\equiv j \bmod{2}}}
r\Big(\frac{L_1({\bf x})}{d_1}\Big)
r\Big(\frac{L_2({\bf x})}{d_2}\Big)r\Big(\frac{L_3({\bf x})}{d_3}\Big)
r\Big(\frac{L_4({\bf x})}{d_4}\Big).
\end{equation}
It is clear that
$S_{j}(X)=S_j(X;(1,1,1,1),\mathbb{Z}^2)$ for each $j \in \{*,0,1\}$, in the
above notation.
For given $\mathbf{d}\in \mathbb{N}^4$ with odd components, let us
say that $L_1,\ldots,L_4,\mathcal{R}$ ``satisfy \textsf{NH}$_0(\mathbf{d})$''
if they satisfy the conditions in \textsf{NH}$_0$, but with (iv) replaced
by
\begin{enumerate}
\item[(iv)$_{\mathbf{d}}$]
we have $L_i({\bf x})\equiv d_i x_1 \bmod{4}$.
\end{enumerate}
When $d_i\equiv 1 \bmod 4$ for each $i$, it is clear that
(iv)$_\mathbf{d}$ coincides with (iv).
Let $[a,b]$ denote the least common multiple of any two positive
integers $a,b$.
The results that we obtain involve the quantity
\begin{equation}
\label{eq:rho0}
\rho_0(\mathbf{h})
:=\frac{\det
\mathsf{\Gamma}\big(([D_1,d_1h_1],\ldots,[D_4,d_4h_4]);L_1,\ldots,L_4\big)}{\det \mathsf{\Gamma}(\mathbf{D};L_1,\ldots,L_4)},
\end{equation}
which we will occasionally denote by $\rho_0(\mathbf{h};\mathbf{D};L_1,\ldots,L_4)$.
Specifically we have local factors
\begin{equation}
\label{eq:sig}
\sigma_p:=\Big(1-\frac{\chi(p)}{p}\Big)^4
\sum_{a,b,c,d=0}^{\infty}\chi(p)^{a+b+c+d}\rho_0(p^a,p^b,p^c,p^d)^{-1},
\end{equation}
defined for any prime $p>2$. In view of \eqref{eq:sig*} and \eqref{eq:rho*}, we note that
$\rho_0(\mathbf{h})=\rho_*(\mathbf{h})$ and $\sigma_p=\sigma_p^*$ when $D_i=1$,
since then $\mathsf{\Gamma}_{\mathbf{D}}=\mathbb{Z}^2$.
Bearing all this notation in mind, we have the following result.
\begin{thm}\label{main1}
Let $(\mathbf{d},\mathbf{D})
\in \mathcal{D}$ and assume that $L_1,\ldots,L_4,\mathcal{R}$ satisfy \textsf{NH}$_0(\mathbf{d})$.
Let $\varepsilon>0$ and suppose that $r'X^{1-\varepsilon}\geq 1$.
Let $j \in \{*,0,1\}$. Then we have
$$
S_{j}(X;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}})
= \frac{\delta_j\pi^4 \meas(\mathcal{R})}{\det \mathsf{\Gamma}_{\mathbf{D}}}X^2 \prod_{p>2}\sigma_p +
O\Big(\frac{D^\varepsilon L_\infty^{ \varepsilon}r_\infty r'X^2}{(\log X)^{ \eta-\varepsilon}}\Big),
$$
where $D:=D_1D_2D_3D_4$ and
\begin{equation}
\label{eq:dj}
\delta_j:=\left\{
\begin{array}{ll}
2, & \mbox{if $j=0,1$},\\
4, & \mbox{if $j=*$},
\end{array}
\right.
\end{equation}
and $L_\infty, r_\infty, r'$
are given by \eqref{eq:Linf},
\eqref{eq:rinf} and \eqref{eq:r'}, respectively.
Moreover, the product $\prod \sigma_p$ is absolutely convergent.
\end{thm}
Taking $d_i=D_i=1$ and $j=*$ in the statement of Theorem~\ref{main1},
so that in particular $\mathsf{\Gamma}_{\mathbf{D}}=\mathbb{Z}^2$, we retrieve Theorem \ref{main0}.
In fact Theorem \ref{main1} is a rather routine deduction from
Theorem~\ref{main0}. This will be carried out in \S \ref{lattices}.
We now return to the normalisation conditions (i)--(iv)$_\mathbf{d}$ that form the
basis of Theorem \ref{main1}. As indicated above, one of the main
motivations behind writing this paper has been to weaken these conditions
somewhat. In fact we will be able to replace condition (iv)$_{\mathbf{d}}$ by either of
\begin{enumerate}
\item[(iv$'$)$_{\mathbf{d}}$]
the coefficients of $L_3,L_4$ are all non-zero and
there exist integers $k_1,k_2\geq 0$ such that
$$
2^{-k_1}L_1({\bf x})\equiv d_1x_1 \pmod{4}, \quad 2^{-k_2}L_2({\bf x})\equiv d_2x_1 \pmod{4},
$$
\end{enumerate}
or
\begin{enumerate}
\item[(iv$''$)$_{\mathbf{d}}$]
the coefficients of $L_3,L_4$ are all non-zero and
there exist integers $k_1,k_2\geq 0$ such that
$$
2^{-k_1}L_1({\bf x})\equiv d_1x_1 \pmod{4}, \quad 2^{-k_2}L_2({\bf x})\equiv x_2 \pmod{4}.
$$
\end{enumerate}
Accordingly, we will say that $L_1,\ldots,L_4, \mathcal{R}$ ``satisfy
\textsf{NH}$_1(\mathbf{d})$'' if they satisfy conditions (i)--(iii) and (iv$'$)$_{\mathbf{d}}$,
and we will say that $L_1,\ldots,L_4, \mathcal{R}$ ``satisfy
\textsf{NH}$_2(\mathbf{d})$'' if
together with (i)--(iii), they satisfy condition~(iv$''$)$_{\mathbf{d}}$.
The condition that none of the coefficients of $L_3,L_4$ are zero is
equivalent to the statement that neither $L_3$ nor $L_4$ is
proportional to $x_1$ or $x_2$. Condition (ii) ensures that no two of $L_1,\ldots,L_4$
are proportional, and so if $L_3$ or $L_4$ is proportional to one of
$x_1$ or $x_2$, then there are at least two forms among $L_1,\ldots,L_4$
that are not proportional to $x_1$ or $x_2$. After a possible relabelling,
therefore, one may always assume that the coefficients
of $L_3,L_4$ are non-zero.
The asymptotic formula that we obtain under these new
hypotheses is more complicated than Theorem \ref{main1}, and
intimately depends on the coefficients of $L_3, L_4$.
Suppose that
\begin{equation}\label{L3L4}
L_3({\bf x})=a_3x_1+b_3x_2,\quad L_4({\bf x})=a_4x_1+b_4x_2,
\end{equation}
and write
$$
\mathbf{A}=\Big( \begin{array}{cc}
a_3&b_3\\
a_4 &b_4
\end{array}
\Big),
$$
for the associated matrix. In particular for $L_1,\ldots,L_4$
satisfying any of the normalisation conditions above, we may assume
that $\mathbf{A}$ is an integer valued matrix with non-zero determinant and
non-zero entries.
Let $(j,k)\in \{*,0,1\}\times\{0,1,2\}$.
We proceed to introduce a quantity
$\delta_{j,k}(\mathbf{A},\mathbf{d})\in \mathbb{R}$, which
will correspond to the $2$-adic density of vectors ${\bf x}\in \mathbb{Z}^2$ with
$x_1\equiv 1 \bmod 4$
and $x_2 \equiv j \bmod{2}$, for which the
corresponding summand in \eqref{eq:Sj} is non-zero for $L_1,\ldots,L_4,\mathcal{R}$
satisfying \textsf{NH}$_k(\mathbf{d})$.
Let
\begin{equation}
\label{eq:Ar}
E_n:=\{x\in \mathbb{Z}/2^n\mathbb{Z} : ~\exists ~\nu\in\mathbb{Z}_{\geq 0}, ~
2^{-\nu}x\equiv 1 \bmod{4}\},
\end{equation}
for any $n \in \mathbb{N}$. Then we may set
\begin{equation}\label{eq:sig2}
\delta_{j,k}(\mathbf{A},\mathbf{d}):=\lim_{n\to\infty} \frac{1}{2^{2n-4}} \#\left\{
{\bf x}\in(\mathbb{Z}/2^n\mathbb{Z})^2 :
\begin{array}{l}
x_1\equiv 1 \bmod 4\\
x_2\equiv j \bmod{2}\\
L_i({\bf x})\in d_iE_n
\end{array}\right\}.
\end{equation}
This limit plainly always exists and is contained in the interval $[0,4]$, with the convention that the condition $x_2\equiv j\bmod{2}$ is vacuous when $j=*$.
It will ease notation if we simply write $\delta_{j,k}(\mathbf{A})$ for
$\delta_{j,k}(\mathbf{A},\mathbf{d})$ in all that follows. We will calculate
this quantity explicitly in \S \ref{2-adic}. We are
now ready to state our main result.
\begin{thm}\label{main2}
Let $(\mathbf{d},\mathbf{D}) \in \mathcal{D}$ and assume that $L_1,\ldots,L_4,\mathcal{R}$
satisfy \textsf{NH}$_k(\mathbf{d})$
for $k\in \{0,1,2\}$.
Let $\varepsilon>0$ and suppose that $r'X^{1-\varepsilon}\geq 1$.
Let $j \in \{*,0,1\}$. Then we have
$$
S_j(X;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}})
=cX^2 + O\Big(\frac{D^\varepsilon L_\infty^{ \varepsilon}r_\infty r'X^2}{(\log X)^{ \eta-\varepsilon}}\Big),
$$
where
$$
c=
\delta_{j,k}(\mathbf{A})\frac{\pi^4 \meas(\mathcal{R})}{\det \mathsf{\Gamma}_{\mathbf{D}}} \prod_{p>2}\sigma_p.
$$
\end{thm}
It is rather trivial to check that $\delta_{j,0}(\mathbf{A})=\delta_j$,
in the notation of \eqref{eq:dj}.
Hence the statement of Theorem~\ref{main2} reduces to Theorem
\ref{main1} when $k=0$. The proof of Theorem \ref{main2} for $k=1,2$ uses Theorem~\ref{main1} as a
crucial ingredient, but it will be significantly more complicated than
the corresponding deduction of Theorem~\ref{main1} from Theorem
\ref{main0}. This will be carried out in \S \ref{s:t2}.
The underlying idea is to find appropriate linear transformations
that take the relevant linear forms into forms that satisfy the
normalisation conditions (i)--(iv)$_{\mathbf{d}}$, thereby
bringing the problem in line for an application of Theorem~\ref{main1}.
In practice the choice of transformation depends closely upon the
coefficients of $L_3, L_4$, and a careful case-by-case analysis is necessary
to deal with all eventualities.
While interesting in its own right, the study of sums like
\eqref{eq:Sj} is intimately related to problems involving the
distribution of integer and rational points on algebraic varieties.
In fact estimating $S_j(X;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}})$ boils
down to counting integer points on the affine variety
\begin{equation}
\label{eq:torsor}
L_i(x_1,x_2)=d_i(s_i^2+t_i^2), \quad (1\leq i\leq 4),
\end{equation}
in $\mathbb{A}^{10}$, with $x_1,x_2$ restricted in some way.
Viewed in this light it might be expected that the constant $c$ in Theorem
\ref{main2} admits an interpretation as a product of local
densities. Our next goal is to show that this is indeed the case.
Let $\boldsymbol{\lambda}=(\lambda_1,\ldots, \lambda_4)\in\mathbb{Z}_{\geq 0}^4$
and let $\boldsymbol{\mu}=(\mu_1,\ldots, \mu_4)\in\mathbb{Z}_{\geq 0}^4$.
Given any prime $p>2$, let
$$
N_{\boldsymbol{\lambda},\boldsymbol{\mu}}(p^n):=\#\Big\{({\bf x},\mathbf{s},\mathbf{t})\in
(\mathbb{Z}/p^n\mathbb{Z})^{10}: \begin{array}{l}
L_i(x_1,x_2)\equiv p^{\lambda_i}(s_i^2+t_i^2) \bmod{p^n}\\
p^{\mu_i} \mid L_i(x_1,x_2)
\end{array}
\Big\},
$$
and define
\begin{equation}
\label{eq:def-sigp}
\omega_{\boldsymbol{\lambda},\boldsymbol{\mu}}(p):=\lim_{n\rightarrow \infty}p^{-6n-\lambda_1-\cdots-\lambda_4}N_{\boldsymbol{\lambda},\boldsymbol{\mu}}(p^n).
\end{equation}
This corresponds to the $p$-adic density on a variety of the
form \eqref{eq:torsor}, in which the point ${\bf x}$ is restricted to lie on a
certain sublattice of $\mathbb{Z}^2$.
Turning to the case $p=2$, let
$$
N_{j,k,\mathbf{d}}(2^n):=\#\Big\{({\bf x},\mathbf{s},\mathbf{t})\in
(\mathbb{Z}/2^n\mathbb{Z})^{10}: \begin{array}{l}
L_i(x_1,x_2)\equiv d_i(s_i^2+t_i^2) \bmod{2^n}\\
x_1 \equiv 1 \bmod{4}, ~x_{2}\equiv j \bmod{2}
\end{array}
\Big\},
$$
for any $(j,k) \in \{*,0,1\}\times\{0,1,2\}$ and any $\mathbf{d}\in\mathbb{N}^4$ such that $2\nmid
d_1\cdots d_4$. Here the subscript $k$ indicates that
$L_1,\ldots,L_4,\mathcal{R}$ are assumed to satisfy \textsf{NH}$_k(\mathbf{d})$.
The corresponding $2$-adic density is given by
\begin{equation}
\label{eq:eq:def-dig2}
\omega_{j,k,\mathbf{d}}(2):=\lim_{n\rightarrow \infty}2^{-6n}N_{j,k,\mathbf{d}}(2^n).
\end{equation}
Finally, we let $\omega_{\mathcal{R}}(\infty)$ denote the archimedean density of
solutions to the system of equations
\eqref{eq:torsor}, for which $({\bf x},\mathbf{s},\mathbf{t})\in\mathcal{R}\times
\mathbb{R}^{8}$.
We will establish the
following result in \S \ref{s:local}.
\begin{thm}\label{main3}
We have
$$
c=\omega_{\mathcal{R}}(\infty)
\omega_{j,k,\mathbf{d}}(2)\prod_{p>2}\omega_{\boldsymbol{\lambda},\boldsymbol{\mu}}(p),
$$
in the statement of Theorem \ref{main2}, with
\begin{align*}
\boldsymbol{\lambda}=\big(\nu_p(d_{1}),\ldots,\nu_p(d_{4})\big),
\quad \boldsymbol{\mu}=\big(\nu_p(D_{1}),\ldots,\nu_p(D_{4})\big).
\end{align*}
\end{thm}
It turns out that the system of equations in \eqref{eq:torsor} plays
the role of descent varieties for the pair of equations
$$
L_1(x_1,x_2)L_2(x_1,x_2)=x_3^2+x_4^2, \quad
L_3(x_1,x_2)L_4(x_1,x_2)=x_5^2+x_6^2,
$$
for binary linear forms $L_1,\ldots,L_4$ defined over $\mathbb{Z}$.
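The descent step alluded to here rests on the classical composition identity for sums of two squares,
$$
(s^2+t^2)(u^2+v^2)=(su+tv)^2+(sv-tu)^2,
$$
which shows, loosely speaking, how solutions of the system \eqref{eq:torsor} give rise to solutions of this pair of equations, at least when $d_1d_2$ and $d_3d_4$ are themselves sums of two squares.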
This defines a geometrically integral threefold $V\subset \mathbb{P}^5$, and it is
natural to try and estimate the number $N(X)$ of rational points
on $V$ with height at most $X$, as $X\rightarrow \infty.$ In fact there is a
very precise conjecture due to Manin \cite{fmt} which relates the
growth of $N(X)$ to the intrinsic geometry of $V$. It is easily
checked that $V$ is a singular
variety with finite singular locus consisting of double points. If $\widetilde{V}$ denotes the minimal
desingularisation of $V$, then the Picard group of $\widetilde{V}$ has
rank $1$. Moreover, $K_{\widetilde{V}}+2H$ is effective,
where $K_{\widetilde{V}}$ is a canonical divisor and $H$ is a hyperplane
section. Thus Manin's conjecture predicts the asymptotic behaviour
$
N(X)=c_V
X^2(1+o(1)),
$
as $X\rightarrow \infty$, for a suitable constant $c_V\geq 0$.
Building on his
investigation \cite[Theorem 1]{h-b03} into the sum $S_*(X)$ defined above, Heath-Brown provides
considerable evidence for this conjecture when
$L_1,\ldots,L_4, \mathcal{R}$ satisfy a certain normalisation hypothesis, which he
labels \textbf{NC2}. This coincides with the conditions (i)--(iii) in
\textsf{NH}$_0$, but with (iv) replaced by the condition that
$$
L_1({\bf x})\equiv L_2({\bf x}) \equiv \nu x_1 \pmod{4},
\quad
L_3({\bf x})\equiv L_4({\bf x}) \equiv \nu' x_1 \pmod{4},
$$
for appropriate $\nu,\nu' =\pm 1.$
The outcome of Heath-Brown's investigation is \cite[Theorem
2]{h-b03}. Under \textbf{NC2} this establishes the existence of a constant $c\geq 0$ and a function
$E(X)=o(X^2)$, such that
\begin{equation}
\label{eq:bell}
\sum_{\colt{{\bf x} \in \mathbb{Z}^2\cap X \mathcal{R}}{x_1\equiv 1\bmod{2}}}
r(L_1({\bf x})L_2({\bf x}))r(L_3({\bf x})L_4({\bf x}))=cX^2+O(E(X)).
\end{equation}
The explicit value of $c$ is rather complicated to state and will not
be given here. One of the features of Heath-Brown's proof is that it does not
easily lead to an explicit error function $E(X)$.
An examination of the proof reveals that this can be traced back to an
argument involving dominated convergence in the proof of \cite[Lemma
6.1]{h-b03}, thereby allowing Heath-Brown to employ \cite[Theorem 1]{h-b03}, which
is not uniform in any of the relevant
parameters. Rather than using \cite[Theorem~1]{h-b03} to estimate the sums
$S(d,d')$ that occur in his analysis, however, it is possible to employ our
Theorem \ref{main1}. The advantage in doing so is that the
corresponding error term is completely uniform in
the parameters $d,d'$, thus circumventing the need for the
argument involving dominated convergence.
Rather than labouring the details, we will content ourselves with merely recording the outcome of this
observation here.
\begin{cor}
One has $E(X)=X^2(\log X)^{-\eta/3+\varepsilon}$ in \eqref{eq:bell}, for any $\varepsilon>0$.
\end{cor}
In addition to the threefold $V\subset \mathbb{P}^5$ defined above, it turns out that the estimates in this
paper can play an important role in analysing the arithmetic of other rational varieties.
Indeed, one of the motivating factors behind writing this paper has
been to prepare the way for a verification of the Manin conjecture for
certain surfaces of the shape
$$
x_1x_2=x_3^2, \quad x_3(ax_1+bx_2+cx_3) =x_4^2+x_5^2,
$$
in forthcoming joint work with Emmanuel Peyre.
These equations define singular del Pezzo surfaces of degree $4$ in
$\mathbb{P}^4$, of the type first considered by Iskovskikh. These are arguably the
most interesting examples of singular quartic del Pezzo surfaces since
they are the only ones for which weak approximation may fail.
On solving the first equation in integers, and substituting into the
second equation, one is led to consider the family of equations
$$
h^2y_1y_2(ay_1^2+by_2^2+cy_1y_2) =s^2+t^2,
$$
for $h$ running over a suitable range. Studying the distribution of
integer solutions to this system of equations therefore
amounts to estimating sums of the shape
$$
\sum_{y_1,y_2}r(h^2y_1y_2(ay_1^2+by_2^2+cy_1y_2)),
$$
uniformly in $h$. By choosing $a,b,c$ such that $c^2-4ab$ is a square,
one can show that this sum is related to sums of the sort
\eqref{eq:Sj}, but for which Heath-Brown's original
normalisation conditions in \textsf{NH}$_0$ are no longer met.
Thus we have found it desirable to generalise the work of \cite{h-b03} to the extent enjoyed in the
present paper.
As a final remark we note that at the expense of extra work further
generalisations of our main results are possible. For example it would
not be difficult to extend the work to deal with analogues of \eqref{eq:Sj} in which
$r$ is replaced by an $r_\Delta$-function that counts
representations as norms of elements belonging to an arbitrary imaginary
quadratic field of discriminant $\Delta$.
\begin{notat}
Throughout our work $\mathbb{N}$ will denote the set of positive
integers. Moreover, we will follow
common practice and allow the arbitrarily small parameter $\varepsilon>0$ to
take different values at different parts of the argument.
All order constants will be allowed to depend on $\varepsilon$.
\end{notat}
\begin{ack}
The authors are grateful to G\'erald Tenenbaum for
discussions that have led to the overall improvement in the error term of
Theorem \ref{main0}, and to Emmanuel Peyre for discussions relating to
the interpretation of the constant in Theorem \ref{main3}.
Part of this work was undertaken while the
second author was visiting the first author at the {\em Universit\'e
de Paris VII}, the hospitality and financial support
of which is gratefully acknowledged.
\end{ack}
\section{Interpretation of the constant}\label{s:local}
Our task in this section is to establish Theorem \ref{main3}.
We begin with some preliminary facts.
Let $A\in\mathbb{Z}$ and let $\alpha\in \mathbb{Z}_{\geq 0}$. For any prime power $p^n$, we write
\begin{equation}
\label{eq:sum2}
S_\alpha(A;p^n):=\#\{(x, y) \in(\mathbb{Z}/p^n\mathbb{Z})^2: p^{\alpha}(x^2+y^2) \equiv A \bmod{p^n}\}.
\end{equation}
If $\alpha\leq n$ then it is not hard to see that
\begin{equation}
\label{eq:al-0}
S_\alpha(A;p^n)=p^{2\alpha} S_0(A/p^\alpha;p^{n-\alpha}),
\end{equation}
when $\alpha\leq \nu_p(A)$ and $S_\alpha(A;p^n)=0$ otherwise. In the case
$\alpha=0$ we have
\begin{equation}
\label{eq:s0-1}
S_0(A;p^n)=\left\{\begin{array}{ll}
p^n+np^n(1-1/p), &\mbox{if $\nu_p(A)\geq n$},\\
(1+\nu_p(A))p^{n}(1-1/p),
& \mbox{if $\nu_p(A)<n$},
\end{array}
\right.
\end{equation}
when $p\equiv 1\bmod{4}$. This formula has been employed by Heath-Brown
\cite[\S 8]{h-b03} in a similar context. When $p\equiv 3 \bmod{4}$, he
notes that
\begin{equation}
\label{eq:s0-3}
S_0(A;p^n)=\left\{\begin{array}{ll}
p^{2[n/2]}, &\mbox{if $\nu_p(A)\geq n$},\\
p^{n}(1+1/p), & \mbox{if $\nu_p(A)<n$ and $2\mid \nu_p(A)$},\\
0, & \mbox{if $\nu_p(A)<n$ and $2\nmid \nu_p(A)$}.
\end{array}
\right.
\end{equation}
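As a quick check on these formulae, take $n=1$ and $A=1$. For $p=5$ the first gives $S_0(1;5)=(1+0)\cdot 5\cdot(1-\tfrac{1}{5})=4$, and for $p=3$ the second gives $S_0(1;3)=3\cdot(1+\tfrac{1}{3})=4$; in either case the solutions of $x^2+y^2\equiv 1\bmod{p}$ are precisely $(x,y)\equiv (0,\pm 1),(\pm 1,0)$. Similarly $S_0(3;3^2)=0$, in accordance with the fact that $x^2+y^2\equiv 3\bmod{9}$ would force $3\mid x$ and $3\mid y$, whence $9\mid x^2+y^2$.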
Finally, when $p=2$ and $n\geq 2$, we have
\begin{equation}
\label{eq:s0-2}
S_0(A;2^n)=\left\{\begin{array}{ll}
2^{n+1}, & \mbox{if $2^{-\nu_2(A)}A\equiv 1\bmod{4}$,}\\
0, & \mbox{otherwise.}
\end{array}
\right.
\end{equation}
Note that Heath-Brown states this formula only for odd $A$ that are
congruent to $1$ modulo $4$, but the general case is easily checked.
Indeed, if $\nu=\nu_2(A)$, then one notes that
$2\mid \hcf(x,y)$ in the definition of $S_0(A;2^n)$ if $\nu\geq 2$,
and $2\nmid xy$ if $\nu=1$. In the
former case one therefore has $S_0(A;2^n)=4S_0(A/4;2^{n-2})$, and in
the latter case one finds that $S_0(A;2^n)=2^{n+1}$.
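By way of example, take $A=2$ and $n=3$, so that $2^{-\nu_2(A)}A=1\equiv 1\bmod{4}$ and \eqref{eq:s0-2} predicts $S_0(2;2^3)=2^4=16$. Indeed any solution of $x^2+y^2\equiv 2\bmod{8}$ has $2\nmid xy$, and every odd $x$ satisfies $x^2\equiv 1\bmod{8}$, so that all $16$ pairs of odd residues $(x,y)$ modulo $8$ are solutions.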
Let $L_1,\ldots,L_4\in\mathbb{Z}[x_1,x_2]$ be arbitrary linear forms, and
recall the definition \eqref{eq:rho*} of the determinant $\rho_*(\mathbf{h})$.
It follows from the multiplicativity of $\rho_*$ that
$$
\frac{1}{\det \mathsf{\Gamma}_{\mathbf{D}}}
\prod_{p>2}\sigma_p =\prod_{p>2} c_p
$$
in the statement of Theorem \ref{main2}, with
$$
c_p =\Big(1-\frac{\chi(p)}{p}\Big)^4
\sum_{n_i\geq 0}\frac{\chi(p)^{n_1+\cdots+n_4}}{\rho_*(p^{\max\{\nu_p(D_1),
\nu_p(d_1)+n_1\}},\ldots, p^{\max\{\nu_p(D_4), \nu_p(d_4)+n_4\}})}.
$$
We claim that
\begin{equation}
\label{eq:claim-un}
c_p=\omega_{\boldsymbol{\lambda},\boldsymbol{\mu}}(p),
\end{equation}
for each $p>2$, where $\omega_{\boldsymbol{\lambda},\boldsymbol{\mu}}(p)$ is given by
\eqref{eq:def-sigp} and the values of $\boldsymbol{\lambda},\boldsymbol{\mu}$ are as in the
statement of Theorem \ref{main3}. The proof of this claim will be in
two steps: the case $p\equiv 1\bmod{4}$ and the
case $p\equiv 3\bmod 4$.
\begin{lem}\label{lem:p=1}
Let $p\equiv 1\bmod{4}$ be a prime. Then \eqref{eq:claim-un} holds.
\end{lem}
\begin{proof}
Let $A\in\mathbb{Z}$, and let $p\equiv 1\bmod{4}$ be a prime.
On combining \eqref{eq:s0-1} with \eqref{eq:al-0} it follows that
$$
S_\alpha(A;p^n)=(1+\nu_p(A)-\alpha)p^{n+\alpha}(1-1/p),
$$
provided that $\alpha\leq \nu_p(A)<n$.
Our plan will be to fix $p$-adic valuations $\nu_i$ of
$L_i({\bf x})$, and to then use this formula to count the resulting
number of $\mathbf{s},\mathbf{t} \in (\mathbb{Z}/p^n\mathbb{Z})^{4}$ in $N_{\boldsymbol{\lambda},\boldsymbol{\mu}}(p^n)$.
Note that we must have
$$
\nu_i\geq M_i:=\max\{\lambda_i, \mu_i\}.
$$
It follows that
\begin{align*}
N_{\boldsymbol{\lambda},\boldsymbol{\mu}}(p^n)
=&p^{4n+\lambda_1+\cdots+\lambda_4}
\Big(1-\frac{1}{p}\Big)^4\sum_{\nu_i\geq M_i}M_{\boldsymbol{\nu}}(p^n)\prod_{1\leq
i\leq 4}(1+\nu_i-\lambda_i)\\
&\quad+O(n^4p^{5n}),
\end{align*}
where $M_{\boldsymbol{\nu}}(p^n)$ counts the number of ${\bf x} \bmod{p^n}$ such that
$p^{\mu_i}\mid L_i({\bf x})$ and $\nu_p(L_i({\bf x}))=\nu_i$.
But then
\begin{align*}
M_{\boldsymbol{\nu}}(p^n)
&=\sum_{\mathbf{e}\in \{0,1\}^4} (-1)^{e_1+\cdots +e_4} \#\big\{{\bf x}
\bmod{p^n}: p^{\max\{\nu_i+e_i, \mu_i\}}\mid L_i({\bf x})\big\} \\
&=\sum_{\mathbf{e}\in \{0,1\}^4} (-1)^{e_1+\cdots +e_4} \#\big\{{\bf x}
\bmod{p^n}: p^{\nu_i+e_i}\mid L_i({\bf x})\big\} \\
&=p^{2n}\sum_{\mathbf{e}\in \{0,1\}^4} \frac{(-1)^{e_1+\cdots +e_4}}{
\rho_*(p^{\nu_1+e_1}, \ldots, p^{\nu_4+e_4})}.
\end{align*}
Making the change of variables $n_i=\nu_i+e_i-\lambda_i$, and noting that
$\nu_i+e_i\geq M_i+e_i\geq M_i$, we therefore deduce that
\begin{align*}
\omega_{\boldsymbol{\lambda},\boldsymbol{\mu}}(p)
=&\Big(1-\frac{1}{p}\Big)^4\sum_{n_i\geq M_i-\lambda_i }
\rho_*(p^{\lambda_1+n_1}, \ldots, p^{\lambda_4+n_4})^{-1}\\
&\quad \times
\sum_{0\leq e_i\leq \min\{1,\lambda_i+n_i-M_i\}} (-1)^{e_1+\cdots +e_4}
\prod_{1\leq i\leq 4}(1+n_i-e_i).
\end{align*}
Now it is clear that
$$
\sum_{0\leq e\leq \min\{1,\lambda+n-M\}}
\hspace{-0.2cm}
(-1)^{e}
(1+n-e)=
\left\{
\begin{array}{ll}
1,& \mbox{if $\lambda+n-M\geq 1$},\\
1+M-\lambda,& \mbox{if $\lambda+n-M=0$}.
\end{array}
\right.
$$
Since $1+M-\lambda=\#\big(\mathbb{Z}\cap [0,M-\lambda]\big)$ counts the integers $0\leq n\leq M-\lambda$ for which $\max\{M,\lambda+n\}=M$, it follows that
\begin{align*}
\omega_{\boldsymbol{\lambda},\boldsymbol{\mu}}(p)
&=\Big(1-\frac{1}{p}\Big)^4\sum_{n_i\geq 0}
\rho_*(p^{\max\{M_1,\lambda_1+n_1\}}, \ldots,
p^{\max\{M_4,\lambda_4+n_4\}})^{-1}\\
&=\Big(1-\frac{1}{p}\Big)^4\sum_{n_i\geq 0}
\rho_*(p^{\max\{\mu_1,\lambda_1+n_1\}}, \ldots, p^{\max\{\mu_4,\lambda_4+n_4\}})^{-1}.
\end{align*}
This completes the proof of the lemma.
\end{proof}
\begin{lem}
Let $p\equiv 3\bmod{4}$ be a prime. Then \eqref{eq:claim-un} holds.
\end{lem}
\begin{proof}
Let $\alpha\in\mathbb{Z}_{\geq 0}$ and $A\in\mathbb{Z}$, and recall the definition \eqref{eq:sum2} of
$S_\alpha(A;p^n)$. Combining \eqref{eq:s0-3} with \eqref{eq:al-0}, and arguing precisely as
in the proof of Lemma~\ref{lem:p=1}, we conclude that
\begin{align*}
N_{\boldsymbol{\lambda},\boldsymbol{\mu}}(p^n)
=&p^{6n+\lambda_1+\cdots+\lambda_4}\Big(1+\frac{1}{p}\Big)^4\sum_{\colt{\nu_i\geq M_i}{2\mid
\nu_i-\lambda_i}}
\sum_{\mathbf{e}\in \{0,1\}^4} \frac{(-1)^{e_1+\cdots +e_4}}{
\rho_*(p^{\nu_1+e_1}, \ldots, p^{\nu_4+e_4})}\\
&\quad +O(n^4p^{5n}).
\end{align*}
Making the change of variables $n_i=\nu_i+e_i-\lambda_i$, it follows that
\begin{align*}
\omega_{\boldsymbol{\lambda},\boldsymbol{\mu}}(p)
=&\Big(1+\frac{1}{p}\Big)^4\sum_{n_i\geq M_i-\lambda_i }
\rho_*(p^{\lambda_1+n_1}, \ldots, p^{\lambda_4+n_4})^{-1}\\
&\quad \times
\sum_{\colt{0\leq e_i\leq \min\{1,\lambda_i+n_i-M_i\}}{e_i\equiv
n_i \bmod{2}}} (-1)^{e_1+\cdots +e_4}.
\end{align*}
This time we find that the summand can be expressed in terms of
$$
\sum_{\colt{0\leq e\leq \min\{1,\lambda+n-M\}}{e\equiv
n \bmod{2}}}
\hspace{-0.2cm}
(-1)^{e}
=
\left\{
\begin{array}{ll}
(-1)^{n},& \mbox{if $\lambda+n-M\geq 1$},\\
1,& \mbox{if $\lambda+n-M=0$ and $2\mid M-\lambda$},\\
0,& \mbox{if $\lambda+n-M=0$ and $2\nmid M-\lambda$}.
\end{array}
\right.
$$
Since $\sum_{0\leq n\leq M-\lambda}(-1)^{n}$ is equal to $1$ if
$M-\lambda$ is even, and $0$ otherwise, we conclude that
\begin{align*}
\omega_{\boldsymbol{\lambda},\boldsymbol{\mu}}(p)
&=\Big(1+\frac{1}{p}\Big)^4\sum_{n_i\geq 0}
\frac{(-1)^{n_1+\cdots+n_4}}{\rho_*(p^{\max\{\mu_1,\lambda_1+n_1\}}, \ldots,
p^{\max\{\mu_4,\lambda_4+n_4\}})}.
\end{align*}
This completes the proof of the lemma.
\end{proof}
We now turn to the $2$-adic density, for which we claim that
\begin{equation}
\label{eq:claim-deux}
\delta_{j,k}(\mathbf{A})=\omega_{j,k,\mathbf{d}}(2),
\end{equation}
where $\delta_{j,k}(\mathbf{A})$ is given by \eqref{eq:sig2} and
$\omega_{j,k,\mathbf{d}}(2)$ is given by \eqref{eq:eq:def-dig2}.
On recalling the definition \eqref{eq:Ar} of $E_n$,
it follows from \eqref{eq:s0-2} that
\begin{align*}
N_{j,k,\mathbf{d}}(2^n)
=&2^{4n+4}
\#\left\{
{\bf x}\in (\mathbb{Z}/2^n\mathbb{Z})^2 : \begin{array}{l}
L_i({\bf x})\in d_iE_n\\
x_1 \equiv 1 \bmod{4}, ~x_{2}\equiv j \bmod{2}
\end{array}
\right\}.
\end{align*}
But then
\begin{align*}
\omega_{j,k,\mathbf{d}}(2)
&=
\lim_{n\to\infty} \frac{1}{2^{2n-4}}
\#\left\{
{\bf x}\in (\mathbb{Z}/2^n\mathbb{Z})^2 : \begin{array}{l}
L_i({\bf x})\in d_iE_n\\
x_1 \equiv 1 \bmod{4}, ~x_{2}\equiv j \bmod{2}
\end{array}
\right\},
\end{align*}
which is just $\delta_{j,k}(\mathbf{A})$. This completes the proof of \eqref{eq:claim-deux}.
Finally we turn to the archimedean density $\omega_{\mathcal{R}}(\infty)$
of points on the variety \eqref{eq:torsor} for which ${\bf x}\in\mathcal{R}$.
We claim that
\begin{equation}
\label{eq:claim-trois}
\omega_{\mathcal{R}}(\infty)
=\pi^4 \meas(\mathcal{R}).
\end{equation}
Our assumptions on $L_1,\ldots,L_4,\mathcal{R}$ imply that
$L_i({\bf x})>0$ for all ${\bf x}\in\mathcal{R}$.
To begin with, it is clear that
$$
\omega_{\mathcal{R}}(\infty)=2^{8}\omega_{\mathcal{R}}^+(\infty),
$$
where $\omega_{\mathcal{R}}^+(\infty)$ is defined as for
$\omega_{\mathcal{R}}(\infty)$, but with the additional constraint that $s_i,t_i>0$.
We will calculate $\omega_{\mathcal{R}}^+(\infty)$
by parametrising the points via the $t_i$, using the Leray form.
In this setting the Leray form is given by
$$
(2^4t_1t_2t_3t_4)^{-1}\d s_1\cdots \d s_4\d x_1\d x_2.
$$
On making the
substitution $t_i=\sqrt{d_i^{-1} L_i({\bf x})-s_i^2}$, and noting that
$$
\int_0^{\sqrt{A}} \frac{\d s}{\sqrt{A-s^2}}=\frac{\pi}{2},
$$
we therefore conclude that
\begin{align*}
\omega_{\mathcal{R}}(\infty)
&=2^{4}
\int_{{\bf x}\in \mathcal{R}} \Big(\prod_{1\leq i\leq
4}\int_0^{\sqrt{d_i^{-1} L_i({\bf x})}}\frac{\d s}{\sqrt{d_i^{-1} L_i({\bf x})-s^2}}\Big)\d x_1\d x_2\\
&=\pi^4 \meas(\mathcal{R}),
\end{align*}
as required for \eqref{eq:claim-trois}.
Bringing together \eqref{eq:claim-un}, \eqref{eq:claim-deux} and
\eqref{eq:claim-trois}, we easily deduce the statement of Theorem \ref{main3}.
\section{The $2$-adic densities}\label{2-adic}
In this section we explicitly calculate the value of the $2$-adic
densities $\delta_{j,k}(\mathbf{A})=\delta_{j,k}(\mathbf{A},\mathbf{d})$
in \eqref{eq:sig2}. In effect this will simplify the process of
deducing Theorem \ref{main2}. Let $L_1,\ldots,L_4\in \mathbb{Z}[x_1,x_2]$ be
arbitrary linear forms that satisfy any of the normalisation
conditions from the introduction, with $L_3, L_4$ given by
\eqref{L3L4}. In particular, it is clear that
there exist integers $k_3,k_4\geq 0$ such that
\begin{equation}
\label{eq:L34}
2^{-k_3}L_3({\bf x})=2^{\mu_3}a_3'x_1+2^{\nu_3}b_3'x_2, \quad
2^{-k_4}L_4({\bf x})=2^{\mu_4}a_4'x_1+2^{\nu_4}b_4'x_2,
\end{equation}
for integers $a_i',b_i'$ such that
\begin{equation}
\label{eq:aibi}
a_3'a_4'b_3'b_4'(a_3'b_4'-a_4'b_3')\neq 0, \quad 2\nmid a_3'a_4'b_3'b_4',
\end{equation}
and integers $\mu_i,\nu_i\geq 0$ such that
\begin{equation}
\label{eq:munu}
\mu_3\nu_3=\mu_4\nu_4=0.
\end{equation}
We are now ready to proceed with the calculation
of $\delta_{j,k}(\mathbf{A})$, whose value will depend intimately on $j,k$, $\mathbf{d}$
and the values of the coefficients in \eqref{eq:L34}.
The calculations in this section are routine and so we will be
brief. In fact we will meet these calculations again in \S \ref{s:t2}
under a slightly different guise.
Recall the definition \eqref{eq:Ar} of $E_n$ for any $n \in \mathbb{N}$, and
the definition \eqref{eq:sig2} of $\delta_{j,k}(\mathbf{A})$,
for $L_1,\ldots,L_4,\mathcal{R}$ satisfying \textsf{NH}$_k(\mathbf{d})$.
When $k=0$, it easily follows from our normalisation conditions that
$L_i({\bf x}) \in d_i E_n$
for any integer vector ${\bf x}$ such that $x_1 \equiv 1 \bmod{4}$. Hence
\begin{equation}
\label{eq:C}
\delta_{j,0}(\mathbf{A}) = \delta_j,
\end{equation}
in the notation of \eqref{eq:dj}.
Let us now suppose that $j=k=1$. Then clearly
\begin{equation}\label{eq:sig2'}
\delta_{1,1}(\mathbf{A})=\lim_{n\to\infty} \frac{1}{2^{2n-4}} \#\Big\{
{\bf x}\in(\mathbb{Z}/2^n\mathbb{Z})^2 :
\begin{array}{l}
x_1\equiv 1 \bmod 4, ~2\nmid x_2 \\
d_3L_3({\bf x}), d_4L_4({\bf x}) \in E_n
\end{array}\Big\}.
\end{equation}
It follows from \eqref{eq:munu} that at most two of
$\mu_3,\mu_4,\nu_3,\nu_4$ can be non-zero. An easy calculation shows that
\begin{equation}
\label{delta1}
\delta_{1,1}(\mathbf{A})=\left\{
\begin{array}{ll}
1, & \mbox{if $b_3'd_3-2^{\mu_3}\equiv b_4'd_4-2^{\mu_4} \bmod 4 $,}\\
0 , & \mbox{otherwise},
\end{array}
\right.
\end{equation}
when $\nu_3=\nu_4=0$ and $\mu_3,\mu_4\geq 1$.
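To see this, note that $b_i'x_2$ is odd for $i=3,4$ in this case, so that $\nu_2(L_i({\bf x}))=k_i$, and the condition on $L_i({\bf x})$ in \eqref{eq:sig2'} is equivalent to the congruence
$$
2^{\mu_i}a_i'x_1+b_i'x_2\equiv d_i \bmod{4}.
$$
Since $x_1\equiv 1\bmod{4}$ and $2^{\mu_i}a_i'\equiv 2^{\mu_i}\bmod{4}$ when $\mu_i\geq 1$, this constrains $x_2$ to the single residue class $b_i'(d_i-2^{\mu_i})\bmod{4}$, for $i=3$ and $i=4$. The two classes coincide precisely when the congruence in \eqref{delta1} holds, in which case ${\bf x}$ is confined to one residue class modulo $4$ in each coordinate, which has density $1$ in the normalisation of \eqref{eq:sig2'}.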
Similarly, we deduce that
$$
\delta_{1,1}(\mathbf{A})=\left\{
\begin{array}{ll}
2, & \mbox{if $a_i' \equiv d_i-2^{\nu_i} \bmod 4$ for $i=3,4$},\\
0 , & \mbox{otherwise},
\end{array}
\right.
$$
when $\mu_3=\mu_4=0$ and $\nu_3,\nu_4\geq 1$.
Let $j_1,j_2$ denote distinct elements from the set $\{3,4\}$. Then it
follows from \eqref{eq:sig2'} that
\begin{equation}
\label{delta3}
\delta_{1,1}(\mathbf{A})=\left\{
\begin{array}{ll}
1, & \mbox{if $ a_{j_1}' \equiv d_{j_1}-2^{\nu_{j_1}} \bmod 4 $},\\
0 , & \mbox{otherwise},
\end{array}
\right.
\end{equation}
when $\mu_{j_1}=\nu_{j_2}=0$ and $\mu_{j_2}, \nu_{j_1}\geq 1$.
Still with the notation $\{j_1,j_2\}=\{3,4\}$, a simple calculation
reveals that
\begin{equation}
\label{delta4}
\delta_{1,1}(\mathbf{A})=\left\{
\begin{array}{ll}
1, & \mbox{if $a_{j_2}'\equiv d_{j_2}-2^{\nu_{j_2}} \bmod 4$},\\
0, & \mbox{otherwise},
\end{array}
\right.
\end{equation}
when $\mu_3=\mu_4=\nu_{j_1}=0$ and $\nu_{j_2}\geq 1$.
In performing this calculation one computes the
contribution to the right hand side of \eqref{eq:sig2'} for fixed
values of $n$ and fixed $2$-adic valuation $\xi$ of
$a_3'x_1+b_3'x_2$, before summing over all possible values of $\xi\geq
1$. In a similar fashion, one finds
\begin{equation}
\label{delta5}
\delta_{1,1}(\mathbf{A})=1/2,
\end{equation}
when $\nu_3=\nu_4=\mu_{j_1}=0$ and $\mu_{j_2}\geq 1$.
It remains to handle the case in which all the $\mu_j,\nu_j$ are
zero. For this we set
\begin{equation}
\label{eq:v}
v:=\nu_2(a'_3b'_4-a'_4b'_3),
\end{equation}
which must be a positive integer, since $a_j',b_j'$ are all odd.
Thus we have
\begin{equation}
\label{delta6}
\delta_{1,1}(\mathbf{A})=\left\{
\begin{array}{ll}
1/2, & \mbox{if $v=1$},\\
1-3/2^{v}, & \mbox{if $v\geq 2$ and $b_3'd_3 \equiv b_4'd_4 \bmod 4$},\\
3/2^{v}, & \mbox{if $v\geq 2$ and $b_3'd_3 \equiv -b_4'd_4 \bmod 4$},
\end{array}
\right.
\end{equation}
when $\mu_3=\mu_4=\nu_3=\nu_4=0$.
When $j\neq 1$ and $k\neq 0$, we will find it convenient to
phrase our formulae for $\delta_{j,k}(\mathbf{A})$ in terms of $\delta_{1,k}(\mathbf{A})$.
We claim that
\begin{equation}
\label{eq:1811.1}
\delta_{0,k}(\mathbf{A})=\sum_{\xi=1}^\infty \frac{ \delta_{1,k}(\mathbf{A}\mathbf{M}_\xi) }{2^{\xi}},\quad
\delta_{*,k}(\mathbf{A})=\sum_{\xi=0}^\infty \frac{ \delta_{1,k}(\mathbf{A}\mathbf{M}_\xi)
}{2^{\xi}}
\end{equation}
when $k=1$ or $2$, where
\begin{equation}\label{defMxi}
\mathbf{M}_\xi:=\Big(
\begin{array}{cc}
1&0\\
0 &2^\xi
\end{array}
\Big).
\end{equation}
Here the formula for $\delta_{0,k}(\mathbf{A})$ is not hard to establish, and follows on
extracting the $2$-adic valuation of $x_2$ in \eqref{eq:sig2}. The
formula for $\delta_{*,k}(\mathbf{A})$ follows on noting that $\delta_{*,k}(\mathbf{A})=
\delta_{0,k}(\mathbf{A})+\delta_{1,k}(\mathbf{A})$. Finally, we express $\delta_{1,2}(\mathbf{A})$
in terms of $\delta_{*,1}(\mathbf{A})$ via the transformation
\begin{equation}
\label{defM}
\mathbf{M}_{c,d_2}:=\Big(
\begin{array}{cc}
1&0\\
\kappa+4c &4
\end{array}
\Big),
\end{equation}
where $\kappa=\pm 1$ denotes the residue modulo $4$ of $d_2$, and
$c\in \{0,1,2\}$ is any parameter we care to choose.
It is not hard to see that
\begin{equation}
\label{eq:1811.2}
\delta_{1,2}(\mathbf{A})=\frac{\delta_{*,1}(\mathbf{A}\mathbf{M}_{c,d_2})}{4},
\end{equation}
using the fact that $x_1\equiv 1 \bmod{4}$ and $ x_2 \equiv d_2 \bmod{4}$.
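In slightly more detail, when $L_1,\ldots,L_4,\mathcal{R}$ satisfy \textsf{NH}$_2(\mathbf{d})$ the condition on $L_2({\bf x})$ forces $x_2\equiv d_2\equiv \kappa\bmod{4}$, whenever $x_1\equiv 1\bmod{4}$ and $2\nmid x_2$. The substitution
$$
x_2=(\kappa+4c)x_1+4y_2
$$
parametrises this residue class, transforms $L_3,L_4$ according to $\mathbf{A}\mapsto \mathbf{A}\mathbf{M}_{c,d_2}$, and leaves $y_2$ subject to no parity constraint, with the transformed forms satisfying \textsf{NH}$_1(\mathbf{d})$ for a suitable choice of $c$; the factor $\frac{1}{4}$ reflects the restriction of $x_2$ to a single residue class modulo $4$.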
\section{Proof of Theorem \ref{main0}}
Our proof follows that given by Heath-Brown for \cite[Theorem
1]{h-b03}, but with extra care taken to keep track of the error
term's dependence on $L_1, \ldots, L_4$ and $\mathcal{R}$.
Our improvement in the exponent of $\log X$ will emerge through a
modification of the final stages of the argument.
Let $X\mathcal{R}_4:=\{{\bf x}\in\mathbb{Z}^2\cap X\mathcal{R}: x_1\equiv 1 \bmod 4\}$, and for
given $\mathbf{d}\in \mathbb{N}^4$ let $\mathcal{R}(\mathbf{d})\subseteq \mathcal{R}$ denote a convex
region depending on $\mathbf{d}$. We write $X\mathcal{R}_4(\mathbf{d})$ for the set
$ \{{\bf x}\in\mathbb{Z}^2\cap X\mathcal{R}(\mathbf{d}): x_1\equiv 1 \bmod 4\}$. The first step of the argument involves
modifying the ``level of distribution'' result that is employed by Heath-Brown \cite[Lemma 2.1]{h-b03}.
\begin{lem}\label{lem21}
Let $X\geq1$ and $Q_1,Q_2,Q_3,Q_4\ge 2$. Write
$Q=\max_i Q_{i}$ and $V=Q_1 Q_2 Q_3 Q_4.$
Then there is an absolute constant $A>0$ such that
\begin{align*}
\sum_{\tolt{\mathbf{d}\in\mathbb{N}^4}{d_i\le Q_i}{2\nmid d_i}}
\left|\#\big(\mathsf{\Gamma}_{\mathbf{d}}\cap X\mathcal{R}_4(\mathbf{d}) \big)
-\frac{\meas(\mathcal{R}(\mathbf{d}))X^2}{4\det\mathsf{\Gamma}_{\mathbf{d}}}\right|\\
\ll L_\infty^{\varepsilon}r_\infty X( V^{1/2}(\log Q)^{A}+ Q )+V.
\end{align*}
\end{lem}
\begin{proof}
We appeal to work of Daniel
\cite[Lemma 3.2]{daniel}. This gives
\begin{equation}\label{eqdaniel}
\left|\#\big(\mathsf{\Gamma}_{\mathbf{d}}\cap X\mathcal{R}_4(\mathbf{d}) \big)
-\frac{\meas(\mathcal{R}(\mathbf{d}))X^2}{4\det\mathsf{\Gamma}_{\mathbf{d}}}\right|
\ll r_\infty\frac{X}{|\mathbf{v}|}+1,
\end{equation}
for some vector $\mathbf{v} \in \mathsf{\Gamma}_{\mathbf{d}}$ with coprime coordinates,
such that
$$
|\mathbf{v}|\ll (\det \mathsf{\Gamma}_{\mathbf{d}})^{1/2}\leq (d_1d_2d_3d_4)^{1/2}\leq
V^{1/2}.
$$
The contribution from the second term in \eqref{eqdaniel} is clearly
$O(V)$. To complete the proof of the lemma it will suffice to show
that
\begin{equation}
\label{eq:2211.1}
\sum_{\colt{\mathbf{d}\in\mathbb{N}^4}{d_i\le Q_i}}\frac{1}{|\mathbf{v}|}\ll L_\infty^{\varepsilon} (
V^{1/2}(\log Q)^{A}+ Q ),
\end{equation}
for some absolute constant $A>0$.
Let $\sigma_1$ denote the contribution from the case in which
$L_1(\mathbf{v})\cdots L_4(\mathbf{v})\neq 0$, and let $\sigma_2$ denote the
remaining contribution. We then have
$$
\sigma_1\leq \sum_{\colt{|\mathbf{v}|\ll
V^{1/2}}{L_i(\mathbf{v})\neq 0}}\frac{1}{|\mathbf{v}|}\sum_{\tolt{\mathbf{d}\in\mathbb{N}^4}{d_i\le Q_i}{d_i\mid L_i(
\mathbf{v})}}1 \ll
L_\infty^{\varepsilon}\tau(F(\mathbf{v})),
$$
where $\tau$ is the divisor function and $F$ is a primitive
binary form that is proportional to $L_1\cdots L_4.$
A simple application of \cite[Corollary 1]{nair} now reveals that
there exists a constant $A>0$ such that
$$
\sum_{|\mathbf{v}|\leq x }\tau(F(\mathbf{v}))\ll L_\infty^\varepsilon x^2(\log
x)^A.
$$
We therefore obtain the estimate $
\sigma_1\ll L_\infty^{\varepsilon} V^{1/2}(\log Q)^{A},
$
on carrying out a dyadic summation for the range of $\mathbf{v}$, which is
satisfactory for \eqref{eq:2211.1}.
Turning to a bound for $\sigma_2$, we suppose that $i_0\in
\{1,2,3,4\}$ is an index for which
$L_{i_0}(\mathbf{v})=a_{i_0}v_1+b_{i_0}v_2=0$. Since
$\hcf(v_1,v_2)=1$, we have $v_1\mid b_{i_0}$ and $v_2\mid a_{i_0}$. If
$j\neq {i_0}$, then $L_j(\mathbf{v})\neq 0$, because $L_{i_0}$ and $L_j$
are not proportional. Moreover, we have $|L_j(\mathbf{v})|\leq 2
L_\infty^2$ and the number of possible values of $L_j(\mathbf{v})$ is
bounded by $O( L_\infty^\varepsilon)$. Since $d_j\mid L_j(\mathbf{v})$,
the number of available $d_j$ is $O( L_\infty^{\varepsilon})$, whereas
the number of $d_{i_0}$ is bounded by $Q_{i_0}\leq Q$. Thus it follows
that
$
\sigma_2\ll L_\infty^{\varepsilon} Q,
$
which therefore completes the proof of \eqref{eq:2211.1}.
\end{proof}
Recall the definition \eqref{eq:r'} of $r'=r'(L_1,\ldots,L_4,\mathcal{R})$. It
will be convenient to set
$$
X':=r'X
$$
in what follows, and to assume that
$r'X^{1-\varepsilon}\geq 1$. In particular this ensures that $\log X' \gg \log
X$.
Our next task is to establish a uniform version of
\cite[Lemma 3.1]{h-b03}. The reader is recommended to consult \cite{h-b03} for full
details of the ensuing argument, since we will only stress those parts
where modification is needed.
When $0<m\leq X'$ and $m\equiv 1\bmod{4}$, we may write
\begin{align*}
r(m)=4\sum_{\colt{d\mid m}{d\leq {X'}^{1/2}}}\chi (d)+
4\sum_{\colt{e\mid m}{m>e {X'}^{1/2}}}\chi (e)=4A_+(m)+4A_-(m),
\end{align*}
say,
as in \cite{h-b03}.
This will be employed with $m=L_i(\mathbf{x})$ for $1\leq i\leq 3$. The conditions
$L_i(\mathbf{x})\equiv x_1 \bmod{4}$ and $x_1\equiv 1 \bmod{4}$ yield
$m\equiv 1 \bmod{4}$. In a similar fashion, we may write
$$
r(m) =4B_+(m)+4C(m)+4B_-(m),
$$
under the same hypotheses on $m$, with
$$
B_+(m):=\sum_{\colt{d\mid m}{d\leq Y}}\chi (d),\quad C(m):=
\sum_{\colt{d\mid
m}{Y<d\leq
X'/Y}}\!\!\!\!\!\chi (d),\quad
B_-(m):=\sum_{\colt{e\mid m}{m>e X'/Y}}\!\!\!\!\chi (e).
$$
Here $1\leq Y\leq {X'}^{1/2}$ is a parameter to be chosen in due
course. This
formula will be used with $m=L_4(\mathbf{x})$. The variable $e$ in $A_-(L_i({\bf x}))$ and $B_-(L_4({\bf x}))$
will satisfy $e\leq {X'}^{1/2}$ and $e\leq Y$, respectively.
On writing
$$
S_{\pm,\pm,\pm,\pm}
:=\sum_{\mathbf{x}\in
X\mathcal{R}_4}A_\pm(L_1(\mathbf{x}))A_\pm(L_2(\mathbf{x}))A_\pm(L_3(\mathbf{x}))
B_\pm(L_4(\mathbf{x})),
$$
we obtain
$$S_*(X)=4S_0+4^4\sum S_{\pm,\pm,\pm,\pm},$$
which is the analogue of \cite[Eq. (3.4)]{h-b03}.
Let us consider the sum $S_{+,+,-,-}$, the other $15$ sums being handled
similarly. Write $Q_1=Q_2=Q_3={X'}^{1/2}$ and $Q_4=Y$. Then
$$
S_{+,+,-,-}= \sum_{\colt{\mathbf{d}\in\mathbb{N}^4}{d_i\le Q_i}}\chi
(d_1d_2d_3d_4)\#\bigl(\mathsf{\Gamma}_{\mathbf{d}}\cap X\mathcal{R}_4(\mathbf{d})
\bigr),
$$
where $\mathcal{R}(\mathbf{d}):=\{ {\bf x}\in \mathcal{R}: L_3({\bf x})>d_3{X'}^{1/2},
~L_4({\bf x})>d_4X'/Y\}$.
An application of Lemma \ref{lem21} therefore implies that
\begin{equation}
\label{eq:2311.1}
S_{+,+,-,-}= \sum_{\colt{\mathbf{d}\in\mathbb{N}^4}{d_i\le Q_i}}\chi
(d_1d_2d_3d_4)
\frac{ \meas(\mathcal{R}(\mathbf{d}))X^2}{4\det\mathsf{\Gamma}_{\mathbf{d}}} +O(T),
\end{equation}
with
$$
T:=L_\infty^\varepsilon r_\infty X
{X'}^{3/4}Y^{1/2}(\log X')^{A} + {X'}^{3/2}Y,
$$
and $A\geq 2$.
Choosing $Y={X'}^{1/2}/(\log X')^{2A+2}$, we obtain
$$
T\ll \frac{L_\infty^\varepsilon r_\infty r'X^2}{\log X'}+
\frac{{r'}^2X^2}{(\log X')^{2A+2}}.
$$
We claim that it is possible to take
\begin{equation}
\label{eq:2311.2}
T\ll \frac{L_\infty^\varepsilon r_\infty r'X^2}{\log X}
\end{equation}
in \eqref{eq:2311.1}. When $r'\leq r_\infty(\log X')^{2A+1}$ this is
trivial, since the assumption $r'X^{1-\varepsilon}\geq 1$ yields
$\log X' \gg \log X$.
Suppose now that $r'>r_\infty(\log X')^{2A+1}\gg r_\infty (\log
X)^{2A+1}$.
Then on returning to the original
definition of $S_{\pm,\pm,\pm,\pm}$, it follows from
an easy application of \cite[Corollary 1]{nair} that
\begin{align*}
S_{+,+,-,-}
\ll \sum_{\mathbf{x}\in
X\mathcal{R}_4}\tau\big(L_1(\mathbf{x})L_2(\mathbf{x})L_3(\mathbf{x})L_4(\mathbf{x})\big)
&\ll L_\infty^\varepsilon r_\infty^2 X^2 (\log X)^4\\
&\ll L_\infty^\varepsilon r_\infty r' X^2 (\log X)^{3-2A}.
\end{align*}
Thus we may certainly take \eqref{eq:2311.2} in
\eqref{eq:2311.1} in this case too.
Although we will omit the details here, it is easy to modify the
argument of \cite{h-b03} to deduce that the main term in
\eqref{eq:2311.1} is
$$
\frac{\pi^4 \meas(\mathcal{R})X^2}{4^5} \prod_{p>2}\sigma_p^*
+O\big(L_\infty^\varepsilon r_\infty r' X^{79/40+\varepsilon}\big),
$$
and similarly for all the $S_{\pm,\pm,\pm,\pm}$.
Bringing all of this together we have therefore established the
following result.
\begin{lem}\label{lem31}
Assume that $r'X^{1-\varepsilon}\geq 1$. Then we have
$$
S_*(X)=4\pi^4 \meas(\mathcal{R})X^2 \prod_{p>2}\sigma_p^*+4S_0+
O\Big(\frac{L_\infty^\varepsilon r_\infty r'X^2}{\log X}\Big),
$$
where
$$
S_0:=\sum_{\mathbf{x}\in
X\mathcal{R}_4}r(L_1(\mathbf{x}))r(L_2(\mathbf{x}))r(L_3(\mathbf{x}))
C(L_4(\mathbf{x})).
$$
\end{lem}
To conclude our treatment of $S_*(X)$ we must estimate
$S_0$. Let
$$
\mathcal{B}:=\{ m\in\mathbb{Z}: \exists d\mid m, Y<d\leq X'/Y\}\cap \{m\in \mathbb{Z}:\exists
\mathbf{x}\in X\mathcal{R}_4, L_4(\mathbf{x})=m\}.
$$
Then as in \cite{h-b03}, we write
\begin{equation}
\label{eq:2311.3}
S_0\ll \sum_{m\in \mathcal{B}} S_0(m)|C(m)|,
\end{equation}
where
$$
S_0(m):=\sum_{\mathbf{x}\in
\mathcal{A}(m)}r(L_1(\mathbf{x}))r(L_2(\mathbf{x}))r(L_3(\mathbf{x}))
$$
and $\mathcal{A}(m):=\{ \mathbf{x}\in X\mathcal{R}_4: L_4(\mathbf{x})=m\}. $ We proceed
to establish the following estimate.
\begin{lem}\label{l:2311.4}
There exists an absolute constant $c_0>0$ such that
$$
S_0(m)\ll L_\infty^\varepsilon r_\infty X(\log\log X')^{c_0}.
$$
\end{lem}
\begin{proof}
We begin by recalling the notation used in \cite{h-b03}, with only
very minor modifications. Suppose that $L_i({\bf x})=a_ix_1+b_ix_2$ with
$a_i\equiv 1 \bmod{4}$ and $b_i\equiv 0 \bmod{4}$. Then we have
$x_1=(m-b_4x_2)/a_4$ and
$$
L_i({\bf x})=\frac{A_im+B_in}{a_4}=L_i'(m,n),
$$
with $A_i=a_i,$ $n=x_2$ and $B_i=a_4b_i-a_ib_4$. It is crucial to
observe that $B_1B_2B_3\neq 0$, since none of $L_1,L_2,L_3$ is
proportional to $L_4$. We will use the inequality
$r(L_i'(m,n))\leq r(a_4(A_im+B_in))$. Note that
$$
a_4(A_im+B_in)=a_4\hcf(A_im,B_i)(A_i'(m)+B_i'n)
$$
with $B_i':=B_i /\hcf(A_im,B_i)$ and $A'_i(m)=A_im/\hcf(A_im,B_i)$. In
particular these coefficients are coprime.
Write
$$
H= a_4^3B_1B_2B_3 \prod_{1\leq i\neq j\leq 3}|a_ib_j-a_jb_i|,
$$
and introduce the multiplicative function $r_1$, given by
$$
r_1(p^\nu)=\left\{
\begin{array}{ll}
\nu+1, & \mbox{if $p\mid H$},\\
r(p^\nu) , & \mbox{otherwise}.
\end{array}
\right.
$$
Then we have
\begin{align*}
r(L_1(\mathbf{x}))r(L_2(\mathbf{x}))r(L_3(\mathbf{x}))
&\leq r(a_4^3)r(B_1B_2B_3)\prod_{i=1}^3r_1(A_i'(m)+B_i'n)
\\&\ll L_\infty^\varepsilon r_1\big(G_m(n)\big),
\end{align*}
where $G_m(X):=\prod_{i=1}^3 (A_i'(m)+B_i'X)$ is a primitive cubic
polynomial with coefficients bounded in size by $O(L_\infty^6)$.
Bringing all of this together we have so far shown that
$$
S_0(m)\ll L_\infty^\varepsilon\sum_{n\leq r_\infty X}r_1(G_m(n)).
$$
It now follows from \cite[Theorem 2]{nair} that there exists an
absolute constant $c_0>0$ such that
$$
S_0(m)
\ll L_\infty^\varepsilon r_\infty X(\log\log m)^{c_0}
\ll
L_\infty^\varepsilon r_\infty X(\log\log X')^{c_0},
$$
since visibly $S_0(m)=0$ unless $m \leq r'X=X'$. This completes the
proof of the lemma.
\end{proof}
It remains to consider the sum $\sum_{m\in \mathcal{B}}|C(m)|$ in
\eqref{eq:2311.3}. It is precisely at this point that our
argument diverges from the proof of Heath-Brown.
Define the function
\begin{equation}
\label{eq:Q}
Q(\lambda):=\lambda\log\lambda-\lambda+1.
\end{equation}
Then we have
$$
\max_{\lambda\in (1,2)}\min\{ Q(\lambda) ,2Q(\lambda
/2)\} =Q(1/\log 2)=2Q(1/(2\log 2))=\eta,
$$
where $\eta$ is given by \eqref{defeta}.
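This evaluation can be checked directly from \eqref{eq:Q}:

```latex
% Direct verification of the crossing point:
\[
2Q(\lambda/2)-Q(\lambda)
=\bigl(\lambda\log(\lambda/2)-\lambda+2\bigr)
 -\bigl(\lambda\log\lambda-\lambda+1\bigr)
=1-\lambda\log 2,
\]
% which vanishes precisely at \lambda=1/\log 2. Since Q(\lambda) is
% increasing on (1,2) and 2Q(\lambda/2) is decreasing there, the
% minimum of the two branches is maximised at this crossing point, and
\[
\eta=Q(1/\log 2)=1-\frac{1+\log\log 2}{\log 2}=0.08607\ldots.
\]
```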
With this in mind, we have the following result.
\begin{lem}\label{l:2311.5}
We have
$$
\sum_{m\in \mathcal{B}}|C(m)|\ll \frac{r'X(\log\log X')^{9/4}}{(\log X')^{\eta}}.
$$
\end{lem}
In view of the fact that $|C(m)| \geq 1$ whenever
$C(m)\neq 0$, we deduce from \cite[part (ii) of Theorem 21]{HT} that
one cannot hope to do much better than this estimate: up to
multiplication by powers of $\log\log X'$, it gives the true order of
magnitude of the cardinality of $\mathcal{B}$.
\begin{proof}[Proof of Lemma \ref{l:2311.5}]
Define the sum
$$
\sigma(X';v):=\sum_{1\le m\le X'}|C(m)|^2v^{\Omega(m)},
$$
for any real number $v\in [0,1]$, where $\Omega(m)$ denotes the
total number of prime factors of $m$.
A crucial ingredient in the proof of Lemma \ref{l:2311.5} will be the estimate
\begin{equation}\label{majsigma}
\sigma(X';v) \ll X'(\log\log X')^3 (\log Y)^{2v-2}.
\end{equation}
This coincides with the estimate obtained by Heath-Brown in \cite[\S
5]{h-b03} when $v=1$. To establish \eqref{majsigma} we begin by expanding
$|C(m)|^2$ and drawing out the highest common factor of the variables
involved. This gives
\begin{align*}
|C(m)|^2=\sum_{h \mid m} \chi(h^2)
\sum_{\colt{k_1\mid m/h}{Y<hk_1\leq
X'/Y}}\chi(k_1)
\sum_{\tolt{k_2\mid m/hk_1}{Y<hk_2\leq
X'/Y}{\hcf(k_1,k_2)=1}}\chi(k_2).
\end{align*}
Once substituted into $\sigma(X';v)$, let us write $\sigma_1$
for the overall contribution from $h \leq Y$ and $\sigma_2$ for
the contribution from the remaining $h$. Note that
we must have $Y<h \leq X'/Y$ in $\sigma_2$, since $h \leq hk_1\leq X'/Y$. Write $Z:=X'/Y$. Then we have
\begin{align*}
\sigma_1
&=
\sum_{h\leq Y}\chi(h^2)v^{\Omega( h)}\sum_{Y/h< k_1 \leq Z/h}
\chi(k_1) v^{\Omega(k_1)}\sum_{n< Z/k_1} v^{\Omega(n)}
\sum_{k_2}\chi(k_2)v^{\Omega(k_2 )},
\end{align*}
where the final summation is over integers $k_2$ such that $\hcf(k_1,k_2)=1$
and $Y/h<k_2\leq \min \{Z/h, X'/hk_1n\}$.
Here the inequality $n< Z/k_1$ follows from the two inequalities
$n\leq X'/hk_1k_2$ and $hk_2>Y$. We will need the basic estimates
\begin{equation}
\label{eq:112.2}
\sum_{n\leq x} v^{\Omega(n)}\ll x(\log 2x)^{v-1},
\end{equation}
and
\begin{equation}
\label{eq:112.1}
\sum_{\colt{k_2\leq x}{\hcf(k_1,k_2)=1}}\chi(k_2)v^{\Omega(k_2 )}
\ll \tau(k_1)x\exp\{-3\sqrt{\log 2x}\},
\end{equation}
for any $v\in [0,1]$.
When $k_1=1$ the latter bound follows from the fact that the
corresponding
Dirichlet series can be continued holomorphically into a zero-free region
for $L(s,\chi)$. The general case then follows from an application of
M\"obius inversion.
For fixed values of $h$ and $k_1$,
\eqref{eq:112.1} and \eqref{eq:112.2} imply that the
overall contribution to $\sigma_1$ from $n \leq X'/Zk_1$ is
\begin{align*}
&\ll \frac{\tau(k_1)Z}{h} \exp\{-3\sqrt{\log 2Y/h}\}\sum_{n \leq X'/Zk_1} v^{\Omega(n)} \\
&\ll \frac{\tau(k_1)X'}{hk_1}(\log (2\max\{1,hY^2/X'\}))^{v-1}\exp\{-3\sqrt{\log 2Y/h}\}.
\end{align*}
Here we have used the fact that $ X'/Zk_1 \geq hX'/Z^2=hY^2/X'$, since
$k_1\leq Z/h$. Next, on breaking the interval into dyadic intervals we deduce from \eqref{eq:112.2} that
\begin{align*}
\sum_{Y/k_1<n\leq Z/k_1} \frac{v^{\Omega(n)}}{n}
&\ll \log (X'/Y^2) \max_{H>hY/Z} \sum_{H<n\leq 2H} \frac{v^{\Omega(n)}}{n}\\
&\ll \log (X'/Y^2)(\log (2\max\{1,hY^2/X'\}))^{v-1},
\end{align*}
for $v\in [0,1]$. For fixed values of $h$ and $k_1$, it therefore follows from
\eqref{eq:112.1} that the
contribution from $n > X'/Zk_1$ is
\begin{align*}
&\ll \frac{\tau(k_1)X'}{hk_1} \exp\{-3\sqrt{\log 2Y/h}\}\sum_{Y/k_1<n \leq Z/k_1} \frac{v^{\Omega(n)}}{n} \\
&\ll \frac{\tau(k_1)X'}{hk_1}\log (X'/Y^2)
(\log (2\max\{1,hY^2/X'\}))^{v-1}\exp\{-3\sqrt{\log 2Y/h}\}.
\end{align*}
Combining these estimates with partial summation,
we therefore deduce that
\begin{align*}
\sigma_1 &\ll X'(\log\log X') \sum_{h\leq Y}\Big(\frac{ v^{\Omega( h)}}{h} (\log
(Z/h))^2 (\log (2\max\{1,hY^2/X'\}))^{v-1}\\
&\qquad\qquad\qquad\qquad\qquad\qquad\times \exp\{-3\sqrt{\log 2Y/h}\}\Big)\\
&\ll X'(\log\log X')^3(\log Y)^{2v-2},
\end{align*}
which is satisfactory for \eqref{majsigma}.
To bound $\sigma_2$, we estimate the sum over $k_2$ trivially by $\min
\{Z/h, X'/(hk_1n)\}$. Arguing as above, it follows that
\begin{align*}
\sigma_2
&\ll X'\log (X'/Y^2)\sum_{Y<h\leq Z}\frac{ v^{\Omega( h)}}{h}\sum_{ k_1 \leq Z/h }
\frac{(\log Y)^{v-1} }{ k_1}\\&\ll X'(\log\log X')^3(\log Y)^{2v-2}.
\end{align*}
This therefore completes the proof of \eqref{majsigma}.
The rest of the argument is inspired by the proof of \cite[Theorem 21(ii)]{HT}.
Let $E:=\{\mbox{$p$ prime}: 2<p\leq Y\}$, and introduce the quantities
$$
\Omega(m,E):=\sum_{\colt{p^\nu\parallel m}{p\in E}}\nu,\qquad
E(x):=\sum_{\colt{p \leq x}{p\in E}}\frac{1}{p},
$$
for any $m\in \mathbb{N}$ and any $x>0$.
We will make use of the well-known bound (cf. \cite[Exercise 04]{HT})
\begin{equation}
\label{eq:classic}
\#\{ m\leq x: \Omega(m,E)\geq \lambda E(x)\}
\ll \frac{x}{(\log
x)^{Q(\lambda)}(\log\log x)^{1/2}},
\end{equation}
where $Q$ is given by \eqref{eq:Q}, and which is valid for any $\lambda\in [1,2]$.
We observe that
\begin{equation}
\label{eq:2411.1}
\sum_{m\in \mathcal{B}}|C(m)| \leq \sum_{1 \leq m\leq X'}
\Big|\sum_{\colt{d\mid m}{Y<d\leq Z}}\chi(d)\Big| ,
\end{equation}
where
$$
Y=\frac{{X'}^{1/2}}{(\log X')^{2A+2}}, \quad
Z=\frac{X'}{Y}={X'}^{1/2}(\log X')^{2A+2}.
$$
We will break the sum over $m$ into three parts.
Let $\mathcal{B}_1$ denote the set of positive integers $m \leq X'$
such that
$$
\Omega(m,E)\leq E(X')/\log 2,
$$
let $\mathcal{B}_2$ denote the corresponding set for which
$$
E(X')/\log 2< \Omega(m,E) \leq 2E(X'),
$$
and let $\mathcal{B}_3$ denote the remaining set of positive integers $m
\leq X'$.
We will write $S_j=\sum_{m \in \mathcal{B}_j}
|\sum_{d}\chi(d)|$, for $1\leq j\leq 3$, with the conditions on $d$ as in \eqref{eq:2411.1}.
We then have
\begin{align*}
S_1&\leq \sum_{m\in \mathcal{B}_1}\sum_{\colt{d\mid m}{Y<d\leq Z}}1
=\sum_{ h+k\leq E(X')/\log 2}\sum_{\colt{Y<d\leq Z}{\Omega(d,E)=h}}\sum_{\colt{n\leq
X'/d}{\Omega(n,E)=k}}1.
\end{align*}
Since $E(X'/d)=E(X')$ for $d\leq Z$, an application of \cite[Theorem 08]{HT} yields
$$
\sum_{\colt{n\leq
X'/d}{\Omega(n,E)=k}}1\ll \frac{X'}{d}\exp\{-E(X')\}\frac{E(X')^k}{k!},
$$
uniformly for $k\leq (3-\varepsilon)E(X')$. Hence a
repeated application of \cite[Theorem 08]{HT} reveals that
$$
\sum_{\colt{Y<d\leq Z}{\Omega(d,E)=h}}\sum_{\colt{n\leq
X'/d}{\Omega(n,E)=k}}1\ll
X' \log (Z/Y) \exp\{-2E(X')\}\frac{E(X')^h}{h!}\frac{E(X')^k}{k!},
$$
uniformly for $h,k\leq (3-\varepsilon)E(X')$. It is clear that
$\log (Z/Y)\ll \log\log X'$
and
\begin{equation}
\label{eq:0412.1}
E(X')=E(Y)=\log\log Y+O(1) = \log\log X'+O(1).
\end{equation}
Moreover, the binomial theorem implies that
$$
\ell!\sum_{h+k=\ell}\frac{1}{h!k!}=\sum_{0\leq h\leq \ell}\frac{\ell!}{h!(\ell-h)!}=2^\ell,
$$
for fixed $\ell$. We therefore deduce
from \cite[Theorem 09]{HT} that
\begin{align*}
S_1&\ll X' \log\log X' \sum_{ \ell\leq E(X')/\log 2}
\exp\{-2E(X')\}\frac{(2E(X'))^{\ell}}{\ell!}\\ &\ll X'(\log\log X')^{1/2}\exp\{
-2Q(1/(2\log 2))E(X')\}\\
&\ll X'(\log\log X')^{1/2} (\log X')^{-\eta},
\end{align*}
which is satisfactory for the lemma.
We now turn to $S_2$. Let $S_2(\ell)$ denote the overall contribution to $S_2$ from $m$ such
that $\Omega(m,E)=\ell$. There are clearly
$O(\log\log X')$ possible values for $\ell$. Write $\ell=\lambda E(X')$, for
some $\lambda \in (1/\log 2,2]$.
Then on combining the Cauchy--Schwarz inequality with \eqref{majsigma}
and \eqref{eq:classic}, we obtain
\begin{align*}
S_2(\ell)^2&\ll \frac{X' }{(\log X')^{Q(\lambda)}(\log\log X')^{1/2} }
\Big((\lambda/2)^{-\lambda E(X')}\sigma (X';\lambda/2)\Big)\\
&\ll \frac{{X'}^2 (\log\log X')^{5/2} }{
(\log X')^{Q(\lambda) +\lambda(\log (\lambda/2)-1)+2} },
\end{align*}
since $E(X')=\log\log X'+O(1)$ by \eqref{eq:0412.1}.
Hence, on summing over the $O(\log\log X')$ possible values of $\ell$,
it follows that
$$
S_2=\sum_{\ell\ll \log\log X'} S_2(\ell) \ll \frac{{X'} (\log\log X')^{9/4} }{
(\log X')^{Q(\lambda)/2 +\lambda(\log (\lambda/2)-1)/2+1} },
$$
for some $\lambda\in (1/\log 2,2]$.
This is satisfactory for the statement of the lemma, since
$$
Q(\lambda)/2 +\lambda(\log (\lambda/2)-1)/2+1\geq Q(1/\log 2),
$$
for $\lambda\geq 1/\log 2$.
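This last inequality can be verified by rewriting the exponent in terms of \eqref{eq:Q}:

```latex
\[
\frac{Q(\lambda)}{2}+\frac{\lambda(\log(\lambda/2)-1)}{2}+1
=\frac{Q(\lambda)+2Q(\lambda/2)}{2},
\]
% since \lambda\log(\lambda/2)-\lambda+2=2Q(\lambda/2). The right-hand
% side has derivative \tfrac{1}{2}\log(\lambda^2/2), which is positive
% for \lambda>\sqrt{2}; as 1/\log 2>\sqrt{2}, the exponent is
% increasing on [1/\log 2,2]. At \lambda=1/\log 2 it equals
% \tfrac{1}{2}\bigl(Q(1/\log 2)+2Q(1/(2\log 2))\bigr)=Q(1/\log 2),
% using the equality of the two branches recorded above.
```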
It remains to deal with the sum $S_3$, which corresponds to a summation over
positive integers $m\leq X'$ for which
$\Omega(m,E)> 2E(X').$
For this we will combine the Cauchy--Schwarz
inequality with \eqref{majsigma} for $v=1$ and the bound
\eqref{eq:classic}, to deduce that
$$
S_3\ll \Big(\frac{X' \sigma(X';1)}{(\log X')^{Q(2)} (\log\log X')^{1/2}}\Big)^{1/2}\ll
\frac{X' (\log\log X')^{5/4}}{(\log X')^{Q(2)/2} }.
$$
This too is satisfactory for the statement of the lemma, since
$Q(2)/2>\eta$, and so completes its proof.
\end{proof}
Combining Lemmas \ref{l:2311.4} and \ref{l:2311.5} in
\eqref{eq:2311.3}, we may now conclude that there exists an absolute
constant $c_1>0$ such that
$$
S_0\ll \frac{L_\infty^\varepsilon r_\infty r' X^2 (\log\log
X')^{c_1}}{(\log X')^{\eta}} \ll
\frac{L_\infty^\varepsilon r_\infty r' X^2 }{(\log X')^{\eta-\varepsilon}}
\ll
\frac{L_\infty^\varepsilon r_\infty r' X^2 }{(\log X)^{\eta-\varepsilon}},
$$
since we have assumed that $r'X^{1-\varepsilon}\geq 1$ in the statement of
Theorem \ref{main0}. Once inserted into Lemma \ref{lem31}, this
therefore completes the proof of the theorem.
\section{Linear transformations}
Our proof of Theorems \ref{main1} and \ref{main2} will involve first
establishing the relevant estimate for a specific choice of $j \in
\{*,0,1\}$. The corresponding estimate for the remaining values of $j$
will be obtained via simple changes of variables.
Thus it will be important to consider the effect of
linear transformations on the sums \eqref{eq:Sj}, and that is the
purpose of the present section.
We begin by recording a preliminary result from group theory. For any
group $G$ and any subgroup $H \subseteq G$, write $[G:H]$
for the index of $H$ in $G$.
\begin{lem}\label{lem:group}
Let $A, B $ be subgroups of finite index in a group $G$,
such that $[G:A]$ and $[G:B]$ are coprime. Then we have
$$
[G:A\cap B]=[G:A][G:B].
$$
\end{lem}
\begin{proof}
For any $x,y\in G$ we
claim that either $xA\cap yB$ is empty, or else it is a left coset
of $A\cap B$ in $G$. Indeed, supposing that
$xA\cap yB$ is non-empty, we let $c\in
xA\cap yB$. Note that $xA=cA$ and
$yB=cB$. But then it follows that
$$
xA\cap yB=cA\cap cB=c(A\cap B)
$$
as required. Thus the total number of left
cosets of $A\cap B$ in $G$ satisfies
$$
[G:A\cap B]\leq [G:A][G:B].
$$
However, by Lagrange's theorem we have $[G:A\cap B]=[G:A][A:A\cap B]$,
whence $[G:A]$ divides $[G:A\cap B]$. Similarly,
$[G:B]$ divides $[G:A\cap B]$. Thus it follows that
$$
[G:A][G:B]\leq [G:A\cap B],
$$
since $\hcf([G:A],[G:B])=1$.
Once coupled with our upper bound for $[G:A\cap B]$, this completes
the proof of the lemma.
\end{proof}
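A minimal example illustrates both the statement and the necessity of the coprimality hypothesis:

```latex
% Take G=\mathbb{Z}, A=2\mathbb{Z} and B=3\mathbb{Z}: the indices
% [G:A]=2 and [G:B]=3 are coprime, A\cap B=6\mathbb{Z}, and indeed
% [G:A\cap B]=6=[G:A][G:B].
% Coprimality cannot be dropped: with A=2\mathbb{Z} and B=4\mathbb{Z}
% we have A\cap B=4\mathbb{Z}, so [G:A\cap B]=4\neq 8=[G:A][G:B].
```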
It will be useful to have a convenient way of referring
back to the statements of our main results.
Let us say that ``Hypothesis-$(j,k)$'' holds if
$S_j(X;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}})$ satisfies the asymptotic formula
described in Theorem~\ref{main2} for all
$L_1,\ldots,L_4,\mathcal{R}$ that satisfy \textsf{NH}$_k(\mathbf{d})$.
Thus Hypothesis-$(j,k)$ amounts to the established existence of an asymptotic
formula
$$
S_j(X;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}})=\delta_{j,k}(\mathbf{A}) C_0X^2 +
O\Big(\frac{D^\varepsilon L_\infty^{ \varepsilon}r_\infty r'X^2}{(\log X)^{ \eta-\varepsilon}}\Big),
$$
for $r'X^{1-\varepsilon}\geq 1$, under the assumption that
\textsf{NH}$_k(\mathbf{d})$ holds. Here
\begin{equation}
\label{eq:constant}
C_0=C_{0}(L_1,\ldots,L_4;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}},\mathcal{R}):=
\frac{\pi^4 \meas(\mathcal{R})}{\det \mathsf{\Gamma}_{\mathbf{D}} } \prod_{p>2}\sigma_p,
\end{equation}
and $\sigma_p$ is given by \eqref{eq:rho0} and \eqref{eq:sig}.
Let $L_1,\ldots,L_4\in\mathbb{Z}[x_1,x_2]$ be binary linear forms, and let
$\mathcal{R}\subset \mathbb{R}^2$. Let $(\mathbf{d}, \mathbf{D})\in \mathcal{D}$, where $\mathcal{D}$
is given by \eqref{eq:D}, and set
\begin{equation}
\label{eq:X}
\mathcal{X}:=\mathsf{\Gamma}_{\mathbf{D}}\cap X\mathcal{R}.
\end{equation}
Then for a given matrix
$\mathbf{M}\in \mathrm{GL}_2(\mathbb{Z})$,
we define the sum
$$
S_{\mathbf{M}}:=\sum_{\colt{\mathbf{y}\in \mathbb{Z}^2, ~\mathbf{M}\mathbf{y}\in \mathcal{X}}{2\nmid y_1, ~y_2\equiv
j\bmod{2}}}
r\Big(\frac{L_1(\mathbf{My})}{d_1}\Big)
r\Big(\frac{L_2(\mathbf{My})}{d_2}\Big)r\Big(\frac{L_3(\mathbf{My})}{d_3}\Big)
r\Big(\frac{L_4(\mathbf{My})}{d_4}\Big).
$$
Here, as throughout this paper, we let
$\mathrm{GL}_2(\mathbb{Z})$ denote the set of $2\times 2$ integer-valued
matrices with non-zero determinant.
Note that $S_{\mathbf{M}}$ depends on $X, \mathbf{d}, \mathbf{D}, L_1,\ldots,L_4$
and $j$, in addition to $\mathbf{M}$. In particular we have
$S_{\mathbf{M}}=S_{j}(X;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}})$, when
$\mathbf{M}$ is the identity matrix.
In general let us write $\|\mathbf{M}\|$ to denote the maximum modulus of the
coefficients of $\mathbf{M}$. Bearing all this notation in mind, the following elementary result will
prove useful.
\begin{lem}\label{lem:linear}
Let $(j,k) \in \{*,0,1\}\times \{0,1,2\}$ and suppose
Hypothesis-$(j,k)$ holds.
Let $\mathbf{M}\in \mathrm{GL}_2(\mathbb{Z})$ be such that $\det \mathbf{M}=2^m$ for some $m
\in \mathbb{Z}_{\geq 0}$,
and define
$M_i(\mathbf{y}):=L_i(\mathbf{My})$. Let $\varepsilon>0$ and suppose that
$r'(L_1,\ldots,L_4,\mathcal{R})X^{1-\varepsilon}\geq 1$.
Assume that $M_1,\ldots,M_4,\mathcal{R}$ satisfy \textsf{NH}$_k(\mathbf{d})$.
Then we have
$$
S_{\mathbf{M}}=\frac{\delta_{j,k}(\mathbf{A}\mathbf{M}) C_0}{\det \mathbf{M}} X^2 +
O\Big(\frac{D^\varepsilon L_\infty^{ \varepsilon} \|\mathbf{M}\|^{ \varepsilon}r_\infty(\mathcal{R}_\mathbf{M}) r'
X^2}{(\log X)^{ \eta-\varepsilon}}\Big),
$$
where $D=D_1 \cdots D_4$,
$L_\infty=L_\infty(L_1,\ldots,L_4)$, $r'=r'(L_1,\ldots,L_4,\mathcal{R})$,
and
\begin{equation}
\label{eq:2311.6}
\mathcal{R}_{\mathbf{M}}:=\{\mathbf{M}^{-1}\mathbf{z}: \mathbf{z}\in \mathcal{R}\}.
\end{equation}
\end{lem}
It is important to note that the definition of $\sigma_p$ that appears
in \eqref{eq:constant} is precisely as in
\eqref{eq:sig}. Thus it involves lattices that depend
on $L_1,\ldots,L_4$, rather than $M_1,\ldots, M_4$. The net outcome
of Lemma \ref{lem:linear} is that
for linear transformations that preserve the relevant normalisation
conditions and have determinant $2^m$ for some $m \geq 0$, the main term of the corresponding asymptotic formula
should be multiplied by
$
\delta_{j,k}(\mathbf{A}\mathbf{M})(\delta_{j,k}(\mathbf{A})\det \mathbf{M})^{-1}.
$
\begin{proof}[Proof of Lemma \ref{lem:linear}]
Recall the definition \eqref{eq:X} of $\mathcal{X}$, and the notation
introduced in \eqref{eq:lattice}.
We begin by noting
that $\mathbf{M}\mathbf{y} \in \mathcal{X}$ if and only if
$\mathbf{y} \in \mathsf{\Lambda}_{\mathbf{M}}\cap \mathcal{R}_{\mathbf{M}}$, where
$$
\mathsf{\Lambda}_{\mathbf{M}}:=\{\mathbf{y}\in \mathbb{Z}^2: D_i \mid L_i(\mathbf{My})\} =
\mathsf{\Gamma}(\mathbf{D};M_1,\ldots,M_4),
$$
and $\mathcal{R}_{\mathbf{M}}$ is given by \eqref{eq:2311.6}.
Moreover, $M_1,\ldots,M_4, \mathcal{R}_{\mathbf{M}}$ will satisfy
\textsf{NH}$_k(\mathbf{d})$ if $M_1,\ldots,M_4, \mathcal{R}$ do.
We claim that
\begin{equation}
\label{eq:claim}
\det \mathsf{\Lambda}_\mathbf{M}=\det \mathsf{\Gamma}(\mathbf{D};M_1,\ldots,M_4)=\det \mathsf{\Gamma}(\mathbf{D};L_1,
\ldots,L_4),
\end{equation}
for any matrix $\mathbf{M}\in \mathrm{GL}_2(\mathbb{Z})$ such that
$\hcf(\det\mathbf{M}, D)=1$. In particular, since $\mathbf{M}$ has
determinant $2^m$ for some $m\in \mathbb{Z}_{\geq 0}$, this holds for any
$\mathbf{D}\in\mathbb{N}^4$ such that $2\nmid D$. Assume \eqref{eq:claim} to
be true for the moment, and note that
$$
\meas(\mathcal{R}_{\mathbf{M}})=\frac{\meas(\mathcal{R})}{\det\mathbf{M}},\qquad
r'(M_1,\ldots,M_4, \mathcal{R}_\mathbf{M})=r'( L_1,\ldots,L_4,\mathcal{R})=r',
$$
in the notation of \eqref{eq:r'}. Recalling the definitions in
\eqref{eq:Linf} and \eqref{eq:rinf},
we therefore deduce from Hypothesis-$(j,k)$ that
\begin{align*}
S_{\mathbf{M}}
=&\frac{\delta_{j,k}(\mathbf{A}\mathbf{M})
\pi^4 \meas(\mathcal{R}_{\mathbf{M}})}{\det
\mathsf{\Gamma}(\mathbf{D};M_1,\ldots,M_4)}X^2 \prod_{p>2}\sigma_p'\\ &\quad +
O\Big(D^\varepsilon L_\infty(M_1,\ldots,M_4)^{ \varepsilon}r_\infty(\mathcal{R}_\mathbf{M}) r'
\frac{X^2}{(\log X)^{\eta-\varepsilon}}\Big)\\
=&\frac{\delta_{j,k}(\mathbf{A}\mathbf{M})
\pi^4 \meas(\mathcal{R})}{(\det \mathbf{M})\det \mathsf{\Gamma}(\mathbf{D};L_1,\ldots,L_4)}X^2 \prod_{p>2}\sigma_p' \\
&\quad +
O\Big( D^\varepsilon L_\infty(M_1,\ldots,M_4)^{ \varepsilon}r_\infty(\mathcal{R}_\mathbf{M}) r'
\frac{X^2}{(\log X)^{\eta-\varepsilon}}\Big),
\end{align*}
where
$$
\sigma_p'=
\Big(1-\frac{\chi(p)}{p}\Big)^4
\sum_{a,b,c,d=0}^{\infty}\chi(p)^{a+b+c+d}\rho_0(p^a,p^b,
p^c,p^d;\mathbf{D};M_1,\ldots,M_4)^{-1}.
$$
On noting that $L_\infty(M_1,\ldots,M_4) \leq
L_\infty(L_1,\ldots,L_4)\|\mathbf{M}\|$, we see that the error term in this estimate for $S_\mathbf{M}$ is
as claimed in the statement of the lemma.
Moreover, \eqref{eq:rho0} and \eqref{eq:claim} give
\begin{align*}
\rho_0(\mathbf{h};\mathbf{D};M_1,\ldots,M_4)
&=\frac{\det
\mathsf{\Gamma}\big(([D_1,d_1h_1],\ldots,[D_4,d_4h_4]);M_1,\ldots,M_4\big)}{\det
\mathsf{\Gamma}(\mathbf{D};M_1,\ldots,M_4)} \\ &=\rho_0(\mathbf{h};\mathbf{D};L_1,\ldots,L_4),
\end{align*}
for any $\mathbf{h}\in \mathbb{N}^4$ such that $2\nmid h_1\cdots h_4$.
Hence $\sigma_p'= \sigma_p$.
In order to complete the proof of Lemma \ref{lem:linear} it remains to
establish \eqref{eq:claim}. For any matrix $\mathbf{N}\in
\mathrm{GL}_2(\mathbb{Z})$ and any lattice $\mathsf{\Lambda}\subseteq \mathbb{Z}^2$, it is easily checked
that
$$\det (\mathbf{N}\mathsf{\Lambda}) = \det \mathbf{N} \det \mathsf{\Lambda},
$$
where $\mathbf{N}\mathsf{\Lambda}:=\{\mathbf{N}{\bf x}: {\bf x} \in \mathsf{\Lambda}\}$. It therefore follows
that
$$
\det \mathsf{\Lambda}_{\mathbf{M}}= \frac{\det (\mathbf{M}\mathsf{\Lambda}_\mathbf{M})}{\det \mathbf{M}}.
$$
Note that $\mathbf{M}\mathsf{\Lambda}_{\mathbf{M}}=\mathsf{M}\cap
\mathsf{\Gamma}(\mathbf{D};L_1,\ldots,L_4)$,
where $\mathsf{M}=\{\mathbf{M}\mathbf{y}: \mathbf{y}\in\mathbb{Z}^2\}$. In particular we have $\det
\mathsf{M}=\det \mathbf{M}$. To establish \eqref{eq:claim}, it therefore suffices to
show that
$$
\det (\mathsf{L}\cap
\mathsf{\Gamma}(\mathbf{D};L_1,\ldots,L_4))=(\det \mathsf{L}) (\det \mathsf{\Gamma}(\mathbf{D};L_1,\ldots,L_4))
$$
for any lattice $\mathsf{L} \subseteq \mathbb{Z}^2$ such that
$\hcf(\det\mathsf{L}, D_1D_2D_3D_4)=1$.
But this follows immediately from Lemma \ref{lem:group}, since the
determinant of a sublattice of $\mathbb{Z}^2$ is equal to its index
in $\mathbb{Z}^2$.
\end{proof}
\section{Proof of Theorem \ref{main1}}\label{lattices}
We are now ready to establish the statement of Theorem
\ref{main1}. The proof will be in two stages: first we will establish the result for
$j=*$, and then we will proceed to handle the cases $j\in \{0,1\}$.
Our proof of the estimate for $j=*$ is actually a straightforward
generalisation of an argument already present in Heath-Brown's work
\cite[\S 7]{h-b03}, but we will include full details here for
the sake of completeness.
Assume that $(\mathbf{d},\mathbf{D})\in \mathcal{D}$, where
$\mathcal{D}$ is given by \eqref{eq:D}. In particular it follows that
there exists ${\bf x}\in\mathsf{\Gamma}_{\mathbf{D}}$ such that $x_1\equiv 1 \bmod{4}$,
where $\mathsf{\Gamma}_{\mathbf{D}}$ is given by \eqref{eq:lattice}.
Indeed, the vector ${\bf x}=D_1^2D_2^2D_3^2D_4^2(1,1)$ is clearly
satisfactory.
In estimating $S_{*}(X;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}})$, our goal is to replace the summation
over lattice points ${\bf x} \in \mathsf{\Gamma}_{\mathbf{D}}$ by a summation over all
integer points restricted to a certain region.
Given any basis $\mathbf{e}_1, \mathbf{e}_2$ for $\mathsf{\Gamma}_{\mathbf{D}}$, let
$M_i(\v)$ be the linear form obtained from
$d_i^{-1}L_i({\bf x})$ via the change of variables ${\bf x}\mapsto v_1\mathbf{e}_1+v_2\mathbf{e}_2$.
We claim that there is a choice of basis such that
\begin{equation}\label{20-trans'}
M_i(\v)\eqm{v_1}{4},
\end{equation}
for each $i$, and also
\begin{equation}
\label{eq:3011.3}
\|\mathbf{M}\|\ll \det \mathsf{\Gamma}_{\mathbf{D}},
\end{equation}
where $\mathbf{M}$ denotes the matrix formed from the basis vectors
$\mathbf{e}_1,\mathbf{e}_2$. To check the claim we let $\mathbf{e}_1, \mathbf{e}_2$
be a minimal basis for $\mathsf{\Gamma}_{\mathbf{D}}$.
Thus we may assume that
\begin{equation}
\label{eq:3011.2}
|\mathbf{e}_1||\mathbf{e}_2| \ll \det \mathsf{\Gamma}_{\mathbf{D}}.
\end{equation}
Now there must exist integers $w_1,w_2$ such that
$
w_1 e_{11}+ w_2 e_{21} \equiv 1 \bmod{4},
$
since we have seen that there exists ${\bf x}\in\mathsf{\Gamma}_{\mathbf{D}}$ such that $x_1\equiv 1 \bmod{4}$.
In particular we may assume without loss of generality that $e_{11}$ is
odd, and after multiplying $\mathbf{e}_1$ by $\pm 1$, we may as well
assume that $e_{11}\equiv 1 \bmod{4}$. Next, on replacing $\mathbf{e}_2$ by
$\mathbf{e}_2-k\mathbf{e}_1$ for a suitable integer $k\in \{0,1,2,3\}$, we may further assume
that $4\mid e_{21}$. In view of \eqref{eq:3011.2}, this basis certainly satisfies \eqref{eq:3011.3}.
Moreover, the normalisation conditions on $L_1,\ldots,L_4$ imply that
$$
d_iM_i(\v)= L_i(v_1\mathbf{e}_1+v_2 \mathbf{e}_2)
\equiv d_i(v_1e_{11}+v_2 e_{21}) \equiv d_iv_1 \pmod{4},
$$
which therefore establishes \eqref{20-trans'} since each $d_i$ is odd.
Note that we must sum only over odd values of $v_1$, since we
have been summing over odd $x_1$ in
$S_{*}(X;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}})$.
On recalling the definition \eqref{eq:2311.6} of $\mathcal{R}_{\mathbf{M}}$,
we may therefore deduce that
\begin{align*}
S_{*}(X;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}})
&=\sum_{\colt{\v \in \mathbb{Z}^2\cap X\mathcal{R}_{\mathbf{M}}}{2\nmid v_1}}
r\big(M_1(\v)\big)\cdots r\big(M_4(\v)\big).
\end{align*}
Note that \eqref{20-trans'} holds by construction, and also $M_i(\v)>0$
for every $\v$ in the summations.
We are therefore in a position to apply Theorem \ref{main0} to
estimate this quantity.
In view of \eqref{eq:3011.3} and the fact that $\det \mathsf{\Gamma}_{\mathbf{D}}\mid
D=D_1\cdots D_4$, we may deduce that
$$
L_\infty(M_1,\ldots,M_4)\leq \|\mathbf{M}\| L_\infty(L_1,\ldots,L_4) \ll
D L_\infty,
$$
where $L_\infty=L_\infty(L_1,\ldots,L_4)$, as
usual. Next we deduce from \eqref{eq:3011.3} that
$$
r_\infty(\mathcal{R}_{\mathbf{M}})\leq \frac{\|\mathbf{M}\|}{|\det \mathbf{M}|} r_\infty(\mathcal{R}) \ll
r_\infty(\mathcal{R})=r_\infty,
$$
since $|\det \mathbf{M}|=\det \mathsf{\Gamma}_{\mathbf{D}}$, and furthermore
$$
r'(M_1,\ldots,M_4,\mathcal{R}_{\mathbf{M}})=r'(L_1,\ldots,L_4,\mathcal{R})=r'.
$$
Moreover, it is clear that $\meas(\mathcal{R}_{\mathbf{M}})=\meas(\mathcal{R})/|\det
\mathbf{M}|$. It therefore follows
from Theorem \ref{main0} that
$$
S_{*}(X;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}})
= \frac{4\pi^4 \meas(\mathcal{R})}{\det \mathsf{\Gamma}_{\mathbf{D}}}X^2 \prod_{p>2}\sigma_p^* +
O\Big(\frac{D^\varepsilon L_\infty^{ \varepsilon}r_\infty r'X^2}{(\log X)^{ \eta-\varepsilon}}\Big),
$$
where $\sigma_p^*$ is given by \eqref{eq:sig*}, but with $\rho_*(\mathbf{h})=
\det\mathsf{\Gamma}(\mathbf{h};M_1,\ldots,M_4)$. To calculate this quantity we note
that it is just the index of
$$
\mathsf{\Lambda}_1=\{{\bf x}=v_1\mathbf{e}_1+v_2\mathbf{e}_2: \v\in\mathbb{Z}^2, h_i\mid M_i(\v)\}
$$
in
$
\mathsf{\Lambda}_2=\{{\bf x}=v_1\mathbf{e}_1+v_2\mathbf{e}_2: \v\in\mathbb{Z}^2\},
$
whence
\begin{align*}
\rho_*(\mathbf{h})=[\mathsf{\Lambda}_2:\mathsf{\Lambda}_1]=
\frac{\det\mathsf{\Lambda}_1}{\det \mathsf{\Lambda}_2}
&=\frac{\det \{{\bf x}\in\mathsf{\Gamma}(\mathbf{D};L_1,\ldots,L_4): d_ih_i \mid
L_i({\bf x})\}}{\det \mathsf{\Gamma}(\mathbf{D};L_1,\ldots,L_4)}\\
&=\rho_0(\mathbf{h};\mathbf{D};L_1,\ldots,L_4),
\end{align*}
in the notation of \eqref{eq:rho0}. This therefore establishes the
estimate in Theorem~\ref{main1} when $j=*$.
In order to complete the proof of Theorem \ref{main1} it remains
to handle the cases $j=0,1$. For this we carry out the change of
variables ${\bf x}=\mathbf{M}\mathbf{y}$, with
$$
\mathbf{M}=\Big(
\begin{array}{cc}
1&0\\
j &2
\end{array}
\Big).
$$
This has the effect of transforming the sum into one over integers
$\mathbf{y}$ such that $y_1$ is odd, without any restriction on $y_2$.
Moreover, it is clear that
$L_i(\mathbf{M}\mathbf{y})=L_i(y_1,jy_1+2y_2)\equiv d_i y_1 \bmod{4}$,
so that together with $\mathcal{R}$, the new linear forms satisfy
\textsf{NH}$_0(\mathbf{d})$.
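The validity of the substitution ${\bf x}=\mathbf{M}\mathbf{y}$ can be confirmed mechanically: for $j\in\{0,1\}$ it maps the vectors with $y_1$ odd bijectively onto those ${\bf x}$ with $x_1$ odd and $x_2\equiv jx_1\bmod 2$. A brute-force enumeration over a small box (ranges purely illustrative):

```python
cases = 0
for j in (0, 1):
    # forward: x = M y with y1 odd lands in the expected residue classes
    for y1 in range(-9, 10, 2):                 # odd y1
        for y2 in range(-9, 10):
            x1, x2 = y1, j*y1 + 2*y2
            assert x1 % 2 == 1 and x2 % 2 == (j*x1) % 2
    # backward: every admissible x is the image of an integral y
    for x1 in range(-9, 10, 2):                 # odd x1
        for x2 in range(-9, 10):
            if x2 % 2 == (j*x1) % 2:
                assert (x2 - j*x1) % 2 == 0     # y2 = (x2 - j*x1)/2 is integral
                cases += 1
assert cases > 0
```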
Since we have already seen that Hypothesis-$(*,0)$ holds, we may
deduce from Lemma~\ref{lem:linear} that
$$
S_{j}(X;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}})=
\frac{\delta_{*,0}(\mathbf{A}\mathbf{M})C_0 }{2}X^2
+
O\Big(
\frac{ D^\varepsilon L_\infty^{ \varepsilon} r_\infty r'X^2}{(\log
X)^{\eta-\varepsilon}}\Big),
$$
for $j=0,1$, where $C_0$ is given by \eqref{eq:constant}.
The statement of Theorem \ref{main1} follows
since $\delta_{*,0}(\mathbf{A}\mathbf{M})=\delta_*=4$, by \eqref{eq:C}.
\section{Proof of Theorem \ref{main2}}\label{s:t2}
We are now ready to establish Theorem \ref{main2}.
Let $(j,k)\in
\{*,1,2\}\times \{1,2\}$ and let $(\mathbf{d},\mathbf{D})\in \mathcal{D}$.
It will ease notation if we write
$S_{j,k}(X)$ to denote the sum $S_{j}(X;\mathbf{d},\mathsf{\Gamma}_{\mathbf{D}})$, when
$L_1,\ldots,L_4,\mathcal{R}$ are assumed to satisfy \textsf{NH}$_k(\mathbf{d})$.
Furthermore, let us write
\begin{equation}
\label{eq:sab}
\mathcal{S}_{\alpha}:=\{\mathbf{y}\in \mathbb{Z}^2: ~y_1 \equiv 1 \bmod{4}, ~y_2
\equiv \alpha \bmod{2}\},
\end{equation}
for $\alpha \in \{*,0,1 \}$.
We begin by showing how an estimate for $k=1$ can be used to
deduce a corresponding estimate for the case $k=2$.
Suppose that $k=2$ and $j=1$. We may clearly assume that the summation in
$S_{1,2}(X)$ is only over values of $x_1\equiv 1
\bmod{4}$ and $x_2 \equiv d_2
\bmod{4}$, since the summand vanishes unless
$$
d_1x_1 \equiv 2^{-k_1}L_1({\bf x}) \equiv d_1 \pmod{4},\quad
x_2 \equiv 2^{-k_2}L_2({\bf x}) \equiv d_2 \pmod{4}.
$$
Write $\kappa=\pm 1$ for the residue modulo $4$ of $d_2$, and
choose an integer $c$ such that
$$
a_j+ b_j(\kappa+4c) \neq 0,
$$
for $j=3,4$, where $a_j,b_j$ are as in \eqref{L3L4}.
This is plainly always possible with $c\in \{0,1,2\}$.
We will carry out the transformation ${\bf x}=\mathbf{M}_{c,d_2}\mathbf{y}$, with $\mathbf{M}_{c,d_2}$ given by \eqref{defM}.
Such a transformation is valid if and only if there exists an integer
$y_2$ such that
$x_2-(\kappa+4c)x_1=4y_2$ where $\kappa \equiv d_2 \bmod 4$. Thus the
transformation is certainly valid for
$x_1\equiv 1 \bmod{4}$ and $x_2 \equiv d_2 \bmod{4}$, bringing
the linear forms into new forms
$M_i(\mathbf{y})=L_i(\mathbf{M}_{c,d_2}\mathbf{y})$, say. It is not hard to see that
$M_1,\ldots,M_4,\mathcal{R}$ will satisfy \textsf{NH}$_1(\mathbf{d})$.
There is now no $2$-adic restriction on $y_2$,
so that the summation is over $\mathbf{y}\in \SS_{*}$, in the notation of
\eqref{eq:sab}. We clearly have $r_\infty(\mathcal{R}_{\mathbf{M}_{c,d_2}})\ll r_\infty(\mathcal{R} ).$
By combining Lemma \ref{lem:linear} with the assumption that Hypothesis-$(*,1)$
holds, we therefore obtain
$$
S_{1,2}(X)
=\frac{\delta_{*,1}(\mathbf{A}\mathbf{M}_{c,d_2})C_0}{4}X^2
+O\Big(
\frac{ D^\varepsilon L_\infty^{ \varepsilon} r_\infty r'X^2}{(\log
X)^{\eta-\varepsilon}}\Big),
$$
where $C_0$ is given by \eqref{eq:constant}. This is clearly
satisfactory for the statement of Theorem \ref{main2}, since \eqref{eq:1811.2} yields
$\delta_{1,2}(\mathbf{A})=\delta_{*,1}(\mathbf{A}\mathbf{M}_{c,d_2})/4$.
To handle $S_{0,2}(X)$ we will need to extract $2$-adic powers from
the variable $x_2$. Accordingly, we write $x_1=y_1$ and $x_2=2^\xi
y_2$,
for $\xi\geq 1$ and $y_2\equiv 1 \bmod{2}$. This corresponds to the transformation
${\bf x}=\mathbf{M}_\xi\mathbf{y}$ with $\mathbf{M}_\xi$ given by \eqref{defMxi}.
The resulting linear forms $M_i(\mathbf{y})=L_i(\mathbf{M}_\xi\mathbf{y})$ will continue to satisfy
\textsf{NH}$_2(\mathbf{d})$,
and the summation will be over $\mathbf{y} \in \SS_1$. Moreover, the
restriction ${\bf x} \in X\mathcal{R}$ in the definition of $S_{0,2}(X)$ forces the upper bound
$\xi \leq \log(r_\infty X)$. It turns out that this is too crude for our purposes and we
must work a little harder to control the contribution from large values of $\xi$.
Recall the definitions \eqref{eq:Linf}, \eqref{eq:rinf} of $L_\infty$
and $r_\infty$. We will show that
\begin{equation}\label{queue}
\sum_{\colt{\mathbf{y} \in \mathbb{Z}^2}{\mathbf{M}_\xi \mathbf{y} \in \mathcal{X}}}
r\Big(\frac{L_1(\mathbf{M}_\xi\mathbf{y} )}{d_1}\Big)
r\Big(\frac{L_4(\mathbf{M}_\xi\mathbf{y})}{d_4}\Big)
\ll (D2^{\xi}L_\infty)^{\varepsilon}
\Big(r_\infty^2
\frac{ X^2}{2^{\xi }}
+ r_\infty^{1+\varepsilon} X^{1+\varepsilon }\Big).
\end{equation}
Define the multiplicative function $r_1$ via
$$
r_1(p^\nu)=\left\{
\begin{array}{ll}
1+\nu, &\mbox{if $p\mid d_1d_2d_3d_4$,}\\
r(p^\nu), &\mbox{if $p\nmid d_1d_2d_3d_4$,}
\end{array}
\right.
$$
for any prime power $p^\nu$.
Then we have
$$
r\Big(\frac{L_1(\mathbf{M}_\xi\mathbf{y} )}{d_1}\Big)
\cdots
r\Big(\frac{L_4(\mathbf{M}_\xi\mathbf{y})}{d_4}\Big)
\leq r_1(F(\mathbf{y})),
$$
where $F(\mathbf{y})=L_1(\mathbf{M}_\xi\mathbf{y} ) \cdots L_4(\mathbf{M}_\xi\mathbf{y})$.
The maximum modulus of the coefficients of this binary form is
$O(L_\infty^4 2^{4\xi}).$ Hence \eqref{queue} follows easily on taking $X_1=r_\infty X$ and
$X_2=2^{-\xi}r_\infty X$ in \cite[Corollary 1]{nair}.
Note that it would not be sufficient to work instead with the trivial upper bound
$O(L_\infty^\varepsilon r_\infty^{2+\varepsilon }2^{-\xi}X^{2+\varepsilon})$.
To complete our estimate for $S_{0,2}(X)$ we will combine Lemma \ref{lem:linear} with Hypothesis-$(1,2)$
to handle the contribution from $\xi\leq \xi_1$,
and we will use \eqref{queue} to handle the contribution from $\xi_1
<\xi \leq \log (r_\infty X)$, for a value of $\xi_1$ to be determined.
We claim that
\begin{equation}\label{eq:2411.2}
r_\infty\leq 2 L_\infty r'.
\end{equation}
To see this, suppose that $\mathbf{z}\in \mathcal{R}$ is such that
$r_\infty=|z_1|$, say. Then it follows that
$$
r_\infty \leq |a_3b_4-a_4b_3||z_1| = |b_4L_3(\mathbf{z})-b_3L_4(\mathbf{z})| \leq
2L_{\infty} r',
$$
in the notation of \eqref{L3L4}. Write
$$
E_1=\frac{ 2^{ \varepsilon \xi} X^2}{(\log
X)^{\eta-\varepsilon}}, \quad
E_2= L_\infty 2^{-\xi+\varepsilon \xi}X^2+
{r'}^{\varepsilon}2^{\varepsilon\xi}X^{1+\varepsilon },
$$
and choose $\xi_1\in\mathbb{N}$ such that
$2^{\xi_1-1}<L_\infty (\log X)^{\eta}\leq 2^{\xi_1}$.
Next we note that
$$
C_0\ll D^\varepsilon\frac{r_\infty^2}{\det \mathsf{\Gamma}_{\mathbf{D}}}\ll
D^\varepsilon L_\infty r_\infty r',
$$
in \eqref{eq:constant}. Hence we deduce from \eqref{eq:1811.1} and \eqref{eq:2411.2} that
\begin{align*}
S_{0,2}(X)
=&\sum_{\xi =1}^{\xi_1} \frac{\delta_{1,2}(\mathbf{A}\mathbf{M}_\xi) C_0 }{2^{\xi}}X^2
+
O\Big(D^\varepsilon L_\infty^{ \varepsilon} r_\infty r'
\big(\sum_{\xi=1}^{\xi_1}
E_1 +
\hspace{-0.2cm}
\sum_{\xi=\xi_1+1}^{\log(r_\infty X)}
E_2 \big)\Big)\\
=&\sum_{\xi=1}^{\infty} \frac{\delta_{1,2}(\mathbf{A}\mathbf{M}_\xi) C_0 }{2^{\xi}}X^2 +
O\Big(\frac{D^\varepsilon L_\infty^{\varepsilon} r_\infty r' X^2}{(\log
X)^{\eta-\varepsilon}}\Big)\\
=&\delta_{0,2}(\mathbf{A}) C_0 X^2+
O\Big(\frac{D^\varepsilon L_\infty^{\varepsilon} r_\infty r' X^2}{(\log
X)^{\eta-\varepsilon}}\Big).
\end{align*}
This completes the treatment of $S_{0,2}(X)$.
The estimate for $S_{*,2}(X)=S_{0,2}(X)+S_{1,2}(X)$
is now an immediate consequence of our
estimates for $S_{0,2}(X)$ and $S_{1,2}(X)$.
Indeed we plainly have
\begin{align*}
\delta_{*,2}(\mathbf{A})=\delta_{0,2}(\mathbf{A})+\delta_{1,2}(\mathbf{A})
=\sum_{\xi=0}^{\infty} \frac{\delta_{1,2}(\mathbf{A}\mathbf{M}_\xi) }{2^{\xi}}.
\end{align*}
The argument that we have presented here makes crucial use of our
previous work \cite{nair} to control the contribution from large
values of $\xi$ that feature in the change of variables. This
basic technique will recur at several points in the proof of Theorem
\ref{main2}. Rather than repeating the exact same details
each time, however, we will merely refer the reader back to \eqref{queue} in
order to draw attention to this basic chain of reasoning.
Let $j\in \{*,0,1\}$.
It remains to estimate $S_{j,1}(X)$. In fact it will suffice to deal only with the case
$j=1$. Indeed, the remaining cases are handled just as above,
leading to \eqref{eq:1811.1} in the case $k=1$.
Assume that $L_1,\ldots,L_4,\mathcal{R}$ satisfy \textsf{NH}$_1(\mathbf{d})$.
We have
$$
S_{1,1}(X)=\sum_{{\bf x}\in \SS_1\cap \mathcal{X}}
r\Big(\frac{L_1({\bf x})}{d_1}\Big)
r\Big(\frac{L_2({\bf x})}{d_2}\Big)r\Big(\frac{L_3({\bf x})}{d_3}\Big)
r\Big(\frac{L_4({\bf x})}{d_4}\Big),
$$
where $\SS_1$ is given by \eqref{eq:sab} and $\mathcal{X}=\mathsf{\Gamma}_{\mathbf{D}}\cap X\mathcal{R}$.
Let us write $S(X)=S_{1,1}(X)$ for short. Our aim is to find
a linear change of variables
${\bf x}=\mathbf{M}\mathbf{y},$ for some $\mathbf{M}\in \mathrm{GL}_2(\mathbb{Z})$,
taking the linear forms $L_i$ into forms
$M_i(\mathbf{y})=L_i(\mathbf{M}\mathbf{y})$ such that
\begin{equation}
\label{eq:necc}
2^{-\ell_i}M_i(\mathbf{y})\equiv d_i y_1 \pmod{4},
\end{equation}
for certain $\ell_i\in \mathbb{Z}_{\geq 0}$.
On setting $M_i'=2^{-\ell_i}M_i$, so that $M_1',\ldots,M_4'$
satisfy \textsf{NH}$_0(\mathbf{d})$,
we will then be in a position to apply Lemma
\ref{lem:linear} under the assumption that Hypothesis-$(j,0)$ holds for
$j\in \{*,0,1\}$. Indeed, we have already seen that Theorem
\ref{main1} holds in the previous section.
Let ${\bf x} \in \SS_1\cap \mathcal{X}$, so that $x_1 \equiv 1 \bmod{4}$ and $2\nmid
x_2$. Recall the assumption that \eqref{eq:L34} holds for appropriate
$k_j,a_j',b_j',\mu_j,\nu_j$. At certain points of the argument we will find it convenient to extract
$2$-adic factors from the terms $2^{-k_j}L_j({\bf x})$. Let us write
\begin{equation}
\label{eq:extract2}
{\xi_j}=\nu_2\big(2^{-k_j}L_j({\bf x})\big),
\end{equation}
for $j=3,4$. This will allow certain linear transformations to take
place, and it turns
out that the matrices needed to bring $L_i$ in line with
\eqref{eq:necc} will all take the shape
\begin{equation}
\label{eq:base-matrix}
\mathbf{M}=\Big(
\begin{array}{cc}
1&0\\
A &2^{\xi+2}
\end{array}
\Big),
\end{equation}
for appropriate non-negative integers $A\in [0,2^{\xi+2})$
and $\xi$. Here $\xi$ will be a simple function of $\xi_3$ and $\xi_4$.
Assuming that we are now in a position to combine
Lemma \ref{lem:linear} with Hypothesis-$(j,0)$,
we will then obtain a contribution
\begin{equation}\label{eq:e-game}
\begin{split}
&=\frac{ \delta_{j,0}(\mathbf{A}\mathbf{M}) C_0 }{2^{\xi+2}} X^2
+O\Big(\frac{D^\varepsilon L_\infty^{ \varepsilon} r_\infty r' 2^{\xi\varepsilon}X^2}{(\log
X)^{\eta- \varepsilon}}\Big)\\
&=\frac{ \delta_{j}C_0 }{2^{\xi+2}} X^2
+O\Big(\frac{D^\varepsilon L_\infty^{ \varepsilon} r_\infty r'2^{\xi\varepsilon} X^2}{(\log
X)^{\eta- \varepsilon}}\Big),
\end{split}
\end{equation}
since \eqref{eq:C} implies that $\delta_{j,0}(\mathbf{B})=\delta_j$, and furthermore,
$$
r_\infty(\mathcal{R}_\mathbf{M}) \leq
\frac{\|\mathbf{M}\|}{\det \mathbf{M}}
r_\infty(\mathcal{R}) = r_\infty(\mathcal{R})=r_\infty.
$$
Finally, we will need to sum this quantity
over all available $\xi_3,\xi_4$. It is here that we must return to
\eqref{queue} and repeat the sort of argument used there to handle
the large values of $\xi_3$ and $\xi_4$.
Under any transformation ${\bf x}=\mathbf{M}\mathbf{y}$, with $\mathbf{M}$ taking the shape
\eqref{eq:base-matrix}, it follows from condition (iv$'$)$_{\mathbf{d}}$ in the introduction that
$$
2^{-k_j}L_j(\mathbf{M}\mathbf{y})\equiv d_jy_1 \pmod{4}
$$
for $j=1,2$. As long as our transformations have this general shape
therefore, we will be able to focus our attention on the effect that
the transformation has on the linear forms $L_3,L_4.$
Unfortunately, bringing these forms into the required shape is not entirely
straightforward, and the permissible choice of $\mathbf{M}$ depends
intimately upon the values of $a_j',b_j',\mu_j,\nu_j$ in \eqref{eq:L34}.
We may assume that these constants satisfy
\eqref{eq:aibi} and \eqref{eq:munu}, and we proceed to consider a
number of distinct subcases separately.
\subsection{The case $\max\{\mu_3,\nu_3\}\geq 1$ and $\max\{\mu_{4},\nu_4\}\geq 1$}
This case is equivalent to the case in which precisely two of the
exponents $\mu_3,\mu_4,\nu_3,\nu_4$ are non-zero, which in turn is
equivalent to the statement that $\mu_j+\nu_j\geq 1$ for $j=3,4$, since $\mu_3\nu_3=\mu_4\nu_4=0$.
In particular it follows that $2^{-k_j}L_j({\bf x})$ is odd for any odd values of $x_1,x_2$.
Recall that the
summation is over $x_1\equiv 1 \bmod{4}$
and $x_2$ odd in $S(X)$.
Let us write $g$ for the number of values of $\gamma \in \{-1,1\}$
such that
\begin{equation}
\label{eq:1611.1}
2^{-k_j}L_j(1,\gamma) =2^{\mu_j}a_j' + 2^{\nu_j}b_j'\gamma \equiv d_j \pmod{4}
\end{equation}
for $j=3$ and $4$. Our aim is to show that
\begin{equation}\label{eq:d11}
\delta_{1,1}(\mathbf{A})=g,
\end{equation}
which we claim is satisfactory for \eqref{delta1}--\eqref{delta3}.
To see this, we suppose first that $\nu_3,\nu_4\geq 1$. Then it is clear that
$g=2$ if $a_j'$ is congruent to $ d_j-2^{\nu_j}$ modulo $4$ for $j=3,4$, and $g=0$
otherwise. When $\mu_3,\mu_4\geq1$, we have $g=1$ if
$b_3'd_3-2^{\mu_3}\equiv
b_4'd_4-2^{\mu_4} \bmod 4$, and $g=0$ otherwise.
When $\mu_4,\nu_3\geq 1$ we have $g=1$ when
$a_3'\equiv d_3-2^{\nu_3} \bmod{4}$, the value of $\gamma$ being given by
the residue of $b_4'd_4-2^{\mu_4}$ modulo $4$, and $g=0$ otherwise.
Finally, the case $\mu_3,\nu_4\geq 1$ is symmetric.
It remains to establish \eqref{eq:d11}.
We may clearly proceed under the assumption that $g\geq 1$.
Let us write $S(X)=\sum_{\gamma} S(X;\gamma)$, where $S(X;\gamma)$ is the overall
contribution to $S(X)$ from vectors such that $x_2 \equiv \gamma
\bmod{4}$,
and the summation is over the $g$ values of $\gamma$ for which \eqref{eq:1611.1} holds.
We will carry out the transformation
$$
\mathbf{M}=\Big(
\begin{array}{cc}
1&0\\
\gamma &4
\end{array}
\Big).
$$
This transformation is valid if and only if there exists an integer
$y_2$ such that $x_2=\gamma y_1+4y_2$, for each ${\bf x}$ in $S(X)$. This is
clearly true for $x_1=y_1 \equiv 1\bmod{4}$ and $x_2 \equiv \gamma \bmod{4}$.
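As with the earlier substitutions, this claim can be checked by direct enumeration: ${\bf x}=\mathbf{M}\mathbf{y}$ with $y_1\equiv 1\bmod 4$ ranges exactly over the vectors with $x_1\equiv 1\bmod 4$ and $x_2\equiv\gamma\bmod 4$. A small illustrative check:

```python
hits = 0
for gamma in (-1, 1):
    # forward: y1 = 1 mod 4 maps to x1 = 1 mod 4 and x2 = gamma mod 4
    for y1 in range(-19, 20, 4):                    # y1 = 1 mod 4
        for y2 in range(-5, 6):
            x1, x2 = y1, gamma*y1 + 4*y2
            assert x1 % 4 == 1 and x2 % 4 == gamma % 4
    # backward: each admissible x determines an integral y2
    for x1 in range(-19, 20, 4):                    # x1 = 1 mod 4
        for x2 in range(-20, 20):
            if x2 % 4 == gamma % 4:
                assert (x2 - gamma*x1) % 4 == 0     # y2 = (x2 - gamma*x1)/4
                hits += 1
assert hits > 0
```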
Next we observe that \eqref{eq:necc} holds for the
new linear forms $M_i(\mathbf{y})=L_i(\mathbf{M}\mathbf{y})$, since \eqref{eq:1611.1} holds
for $j=3,4$. The summation over $\mathbf{y}$ is now over $\mathbf{y}\in
\mathcal{S}_{*}$, since as usual the condition $y_1 \equiv 1 \bmod{4}$
is
automatic for odd values of $y_1$ such that $r(M_1(\mathbf{y})/d_1)\neq 0$.
In line with \eqref{eq:e-game}, we therefore deduce from Lemma
\ref{lem:linear} and Hypothesis-$(*,0)$ that
\begin{align*}
S(X;\gamma)
&=
\frac{ \delta_{*}C_0 }{4} X^2 +O\Big(\frac{D^\varepsilon L_\infty^{\varepsilon} r_\infty r' X^2}{(\log
X)^{\eta- \varepsilon}}\Big)
=C_0 X^2 +O\Big(\frac{D^\varepsilon L_\infty^{ \varepsilon} r_\infty r' X^2}{(\log
X)^{\eta- \varepsilon}}\Big),
\end{align*}
when $\gamma$ is admissible. We complete the proof of \eqref{eq:d11} by summing over
the $g$ admissible choices for $\gamma$.
\subsection{The case $\mu_3=\mu_4=0$ and $\max\{\nu_3,\nu_4\}\geq 1>
\min\{\nu_3,\nu_4\}=0$}
For reasons of symmetry we may restrict ourselves to the case
$\nu_3\geq 1$ and $\nu_4=0$. For ${\bf x} \in \SS_1\cap \mathcal{X}$ the term
$2^{-k_3}L_3({\bf x})$ is odd, whereas $2^{-k_4}L_4({\bf x})$ is always
even. We note that $r(L_3({\bf x})/d_3)$ is non-zero if and only if
$a_3'\equiv d_3-2^{\nu_3} \bmod 4$. We must show that \eqref{delta4}
holds with $(j_1,j_2)=(4,3)$.
Let us write $\xi_4=\nu_2(2^{-k_4}L_4({\bf x}))$, as in
\eqref{eq:extract2}. Then necessarily $\xi_4\geq1$, since
${\bf x}\in \SS_1$. We now see that in order for $r(2^{-k_4-\xi_4}L_4({\bf x})/d_4)$
to be non-zero, it is necessary and sufficient that
\begin{equation}
\label{eq:1711.1}
x_2\equiv(d_42^{\xi_4}-a'_4x_1)\overline{b_4'}
\equiv(d_42^{\xi_4}-a'_4)\overline{b_4'}x_1 \pmod {2^{{\xi_4}+2}},
\end{equation}
where $\overline{b_4'}$ is the multiplicative inverse of $b_4'$ modulo $2^{{\xi_4}+2}$.
Here we have used the fact that $x_1\equiv 1 \bmod{4}$ in the
summation over ${\bf x}$. For each ${\xi_4}\geq 1 $
we make the transformation
\begin{equation}
\label{eq:0112.1}
\mathbf{M}=\Big(
\begin{array}{cc}
1&0\\
A &2^{{\xi_4}+2}
\end{array}
\Big),
\end{equation}
where $A\in [0,2^{{\xi_4}+2}) $ denotes the residue of
$(d_42^{\xi_4}-a'_4)\overline{b_4'}$ modulo $2^{{\xi_4}+2}$. This
brings $L_3, L_4$ into a satisfactory shape for
\textsf{NH}$_0(\mathbf{d})$, by which we mean that
$2^{-k_3}L_3(\mathbf{M}\mathbf{y}) \equiv d_3y_1 \bmod{4}$ and
$2^{-k_4-\xi_4}L_4(\mathbf{M}\mathbf{y}) \equiv d_4y_1 \bmod{4}$.
Moreover, the summation is now over
$\mathbf{y}\in \SS_*$.
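The congruence \eqref{eq:1711.1} can itself be verified by brute force: modulo $2^{\xi_4+2}$ it holds precisely when $a_4'x_1+b_4'x_2$ has $2$-adic valuation exactly $\xi_4$ with quotient congruent to $d_4$ modulo $4$, the factor $x_1$ being insertable since $x_1\equiv 1\bmod 4$. An exhaustive check over small ranges (all parameter ranges illustrative; here $2^{-k_4}L_4$ is modelled as $a_4'x_1+b_4'x_2$ with odd coefficients):

```python
def v2(n):
    # 2-adic valuation of a nonzero integer
    v = 0
    while n % 2 == 0:
        n //= 2
        v += 1
    return v

checked = 0
for xi4 in (1, 2, 3):
    m = 2**(xi4 + 2)
    for a4 in (1, 3, 5, 7):                        # odd coefficient a4'
        for b4 in (1, 3, 5, 7):                    # odd coefficient b4'
            binv = pow(b4, -1, m)                  # inverse of b4' mod 2^(xi4+2)
            for d4 in (1, 3):                      # odd residue d4 mod 4
                for x1 in range(1, 2*m, 4):        # x1 = 1 mod 4
                    for x2 in range(1, 2*m, 2):    # odd x2
                        val = a4*x1 + b4*x2
                        cong = x2 % m == ((d4*2**xi4 - a4)*binv*x1) % m
                        ok = v2(val) == xi4 and (val >> xi4) % 4 == d4 % 4
                        assert cong == ok
                        checked += 1
assert checked > 0
```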
In line with \eqref{eq:e-game}, and using the estimate \eqref{queue} to
handle large values of $\xi_4$, we therefore deduce from Lemma
\ref{lem:linear} and Hypothesis-$(*,0)$ that
\begin{align*}
S(X)&=
\sum_{\xi_4=1}^\infty \frac{ \delta_{*}C_0 }{2^{\xi_4+2}} X^2
+O\Big(\frac{D^\varepsilon L_\infty^{ \varepsilon} r_\infty r' X^2}{(\log
X)^{\eta- \varepsilon}}\Big)
=C_0 X^2
+O\Big(\frac{D^\varepsilon L_\infty^{ \varepsilon} r_\infty r' X^2}{(\log
X)^{\eta- \varepsilon}}\Big).
\end{align*}
Thus $\delta_{1,1}(\mathbf{A})=1$ when $a_3'\equiv d_3 -2^{\nu_3}\bmod 4$,
as claimed in \eqref{delta4}.
\subsection{The case $\nu_3=\nu_4=0$ and $\max\{\mu_3,\mu_4\}\geq 1>
\min\{\mu_3,\mu_4\}=0$}
The treatment of this case runs parallel to the previous section. For
reasons of symmetry we may restrict ourselves to the case
$\mu_3\geq 1$ and $\mu_4=0$. For ${\bf x} \in \SS_1\cap \mathcal{X}$ the term
$2^{-k_3}L_3({\bf x})$ is odd, whereas $2^{-k_4}L_4({\bf x})$ is always
even. We now observe that $r(L_3({\bf x})/d_3)$ is non-zero if and only if
$x_2\equiv b_3'd_3-2^{\mu_3} \bmod 4$. Our task is to show that
\eqref{delta5} holds.
Let us write $\xi_4=\nu_2(2^{-k_4}L_4({\bf x}))\geq 1$. Arguing as above we see that in order for
$r(2^{-k_4-\xi_4}L_4({\bf x})/d_4)$
to be non-zero, it is necessary and sufficient that \eqref{eq:1711.1}
holds. In particular we must take care to sum only over
those $\xi_4$ for which
\begin{equation}
\label{eq:1611.2}
a_4' +b_3'b_4'd_3\equiv 2^{\mu_3}+2^{\xi_4} \pmod{4}.
\end{equation}
For each such ${\xi_4}$
we make the transformation \eqref{eq:0112.1} as above, which
again brings $L_3,L_4$ into a satisfactory shape for
\textsf{NH}$_0(\mathbf{d})$,
and the summation is over $\mathbf{y}\in \SS_*$.
We may now deduce from Lemma
\ref{lem:linear} and Hypothesis-$(*,0)$, together with the argument
involving \eqref{queue}, that
$$
S(X)=
\sum_{\xi_4} \frac{ \delta_{*}C_0 }{2^{\xi_4+2}} X^2
+O\Big(\frac{D^\varepsilon L_\infty^{ \varepsilon} r_\infty r' X^2}{(\log
X)^{\eta- \varepsilon}}\Big),
$$
where the sum is over $\xi_4\geq 1$ such that \eqref{eq:1611.2}
holds. If $a_4' +b_3'b_4'd_3-2^{\mu_3} \equiv 2 \bmod{4}$,
then we must restrict attention to the single value $\xi_4=1$, which gives
$\delta_{1,1}(\mathbf{A})=1/2$. If however $a_4' +b_3'b_4'd_3-2^{\mu_3} \equiv 0
\bmod{4}$,
then we must restrict attention to $\xi_4\geq 2$, giving
$\delta_{1,1}(\mathbf{A})=\sum_{\xi_4=2}^\infty 2^{-\xi_4}=1/2.$
This therefore confirms \eqref{delta5}.
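Both subcases produce the value $1/2$ through geometric series: with $\delta_*=4$, each admissible $\xi_4$ contributes $4/2^{\xi_4+2}=2^{-\xi_4}$, and the residue of $a_4'+b_3'b_4'd_3-2^{\mu_3}$ modulo $4$ determines whether the admissible set is $\{1\}$ or $\{2,3,\ldots\}$. A check with exact rationals over all residue classes (a sketch; the truncation constant is arbitrary):

```python
from fractions import Fraction

half = Fraction(1, 2)
checked = 0
for a4 in (1, 3):
    for b3 in (1, 3):
        for b4 in (1, 3):
            for d3 in (1, 3):
                for p in (2, 0):   # 2^mu3 mod 4: 2 if mu3 = 1, 0 if mu3 >= 2
                    # admissible xi4 satisfy a4' + b3'b4'd3 - 2^mu3 = 2^xi4 mod 4,
                    # and 2^xi4 mod 4 is 2 for xi4 = 1, 0 for xi4 >= 2
                    r = (a4 + b3*b4*d3 - p) % 4
                    assert r in (0, 2)          # difference of two even numbers... even
                    if r == 2:
                        s = Fraction(1, 2)      # only xi4 = 1 is admissible
                    else:
                        # truncated sum over 2 <= xi4 < 60, plus the exact value
                        # 2^(-59) of the omitted infinite tail
                        s = sum(Fraction(1, 2**x) for x in range(2, 60)) \
                            + Fraction(1, 2**59)
                    assert s == half
                    checked += 1
assert checked == 32
```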
\subsection{The case $\mu_3=\nu_3=\mu_4=\nu_4=0$}
We reason in an analogous manner to the previous sections.
Our valuation of $\delta_{1,1}(\mathbf{A})$ will depend on the $2$-adic
valuation $v$ of $a'_3b'_4-a'_4b'_3$, as defined in \eqref{eq:v}.
Our aim is to show that \eqref{delta6} holds.
Let ${\bf x} \in \SS_1\cap \mathcal{X}$, and introduce parameters $\xi_3,\xi_4\geq
1$ such that \eqref{eq:extract2} holds for $j=3,4$.
Let us deal with the case $\xi_4\geq \xi_3$.
The system
$$
a'_3x_1+b_3'x_2\equiv 0 \pmod{2^{\xi_3}},\quad a'_4x_1+b_4'x_2\equiv 0
\pmod{2^{\xi_4}}
$$
is equivalent to
$$
(a'_3b'_4-a'_4b'_3)x_1\equiv 0 \pmod{2^{\xi_3}},\quad a'_4x_1+b_4'x_2 \equiv 0
\pmod{2^{\xi_4}}.
$$
Let us write $a'_3b'_4-a'_4b'_3=2^{v}c_{34}$, with $c_{34}$ odd. We
clearly have $\xi_3\leq v$. Moreover, the term $r( 2^{-k_4-\xi_4}L_4({\bf x})/d_4)$ is
non-zero if and only if \eqref{eq:1711.1}
holds. Assuming this to be the case, we must therefore have
$$
a'_3x_1+b_3'x_2\equiv
\big(a_3'+b_3'\overline{b_4'}( d_42^{\xi_4}-a_4')\big)x_1
\equiv
\overline{b_4'}c_{34}2^v+ b_3'\overline{b_4'} d_4 2^{\xi_4}
\pmod{2^{\xi_3+2}}.
$$
Provided that
\begin{equation}
\label{eq:1711.2}
\overline{b_4'}c_{34}2^v+ b_3'\overline{b_4'} d_4 2^{\xi_4}
\equiv 2^{\xi_3} d_3\pmod{2^{\xi_3+2}},
\end{equation}
therefore, it follows that we may again carry out the transformation
\eqref{eq:0112.1} to bring $L_3,L_4$ into a satisfactory shape for
\textsf{NH}$_0(\mathbf{d})$. The summation is now over $\mathbf{y}\in \SS_*$.
We easily deduce from Lemma \ref{lem:linear} and Hypothesis-$(*,0)$
that there is the contribution
$$
\frac{ \delta_{*}C_0 }{2^{\xi_4+2}} X^2
+ O\Big(\frac{D^\varepsilon L_\infty^{ \varepsilon} r_\infty r' 2^{\varepsilon
\xi_4} X^2}{(\log X)^{\eta- \varepsilon}}\Big),
$$
for fixed $1\leq \xi_3\leq \xi_4$ such that \eqref{eq:1711.2}
holds.
Using an estimate of the type \eqref{queue}, it is an easy
matter to deduce that the overall contribution to the error in summing
over the available $\xi_3,\xi_4$ is
$ O\big( {D^\varepsilon L_\infty^{ \varepsilon} r_\infty r' X^2}{(\log
X)^{-\eta +\varepsilon}}\big)$.
Moreover, we deduce that
$$
\delta_{1,1}(\mathbf{A})=
\sum_{\xi_3=\xi_4} \frac{1}{2^{\xi_4}}+2
\sum_{\xi_3<\xi_4} \frac{1}{2^{\xi_4}},
$$
for a summation over $\xi_3,\xi_4\geq 1$ such that \eqref{eq:1711.2} holds. To evaluate
this quantity we consider a number of subcases, beginning
with the contribution from $\xi_3=\xi_4$. Then we must have
$1\leq \xi_3\leq v-1$ and $b_3'\overline{b_4'}d_4+2^{v-\xi_3}\equiv d_3 \bmod
4$. Let us write $W_1$ for the set of all such positive integers $\xi_3$.
Then we obtain the overall contribution
\begin{equation}
\label{eq:1711.3}
\sum_{\xi\in W_1}
\frac{1}{2^{\xi}}=
\left\{
\begin{array}{ll}
0, &\mbox{if $v=1$},\\
1-1/2^{v-2}, &\mbox{if $v\geq 2$ and $b_3'd_3\equiv b_4'd_4 \bmod{4}$},\\
1/2^{v-1}, &\mbox{if $v\geq 2$ and $b_3'd_3\equiv -b_4'd_4 \bmod{4}$}
\end{array}
\right.
\end{equation}
Turning to the contribution from $\xi_3<\xi_4$, it follows from
\eqref{eq:1711.2} that $\xi_3=v$ and
$ \overline{b_4'}c_{34} + 2^{\xi_4-v}\equiv d_3 \bmod 4$.
Write $W_2$ for the set of all such vectors $(\xi_3,\xi_4)\in \mathbb{N}^2$.
Then a little thought reveals that we obtain a contribution
$$
2\sum_{(\xi_3,\xi_4)\in W_2}
\frac{1}{2^{\xi_4}}=\frac{1}{2^{v}}
$$
from this case. Combining this with \eqref{eq:1711.3}, we therefore
conclude the proof of \eqref{delta6}.
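Both closed forms can be confirmed numerically: running over all residue classes of $b_3',b_4',c_{34},d_3,d_4$ modulo $4$ and small values of $v$, the diagonal sum reproduces \eqref{eq:1711.3} and the off-diagonal sum always equals $2^{-v}$. An illustrative verification with exact rationals:

```python
from fractions import Fraction

def inv4(b):
    # an odd integer is its own inverse modulo 4
    return b % 4

def tail(v):
    # exact value of the geometric series sum over x >= v of 2^(-x)
    return Fraction(1, 2**(v - 1))

verified = 0
# diagonal terms xi3 = xi4: compare with the case formula of (1711.3)
for v in range(1, 8):
    for b3 in (1, 3):
        for b4 in (1, 3):
            for d3 in (1, 3):
                for d4 in (1, 3):
                    s = sum(Fraction(1, 2**x) for x in range(1, v)
                            if (b3*inv4(b4)*d4 + 2**(v - x)) % 4 == d3)
                    if v == 1:
                        expect = Fraction(0)
                    elif (b3*d3 - b4*d4) % 4 == 0:
                        expect = 1 - Fraction(1, 2**(v - 2))
                    else:
                        expect = Fraction(1, 2**(v - 1))
                    assert s == expect
                    verified += 1
# off-diagonal terms xi3 = v < xi4: the contribution is always 2^(-v)
for v in range(1, 8):
    for b4 in (1, 3):
        for c34 in (1, 3):
            for d3 in (1, 3):
                t = (d3 - inv4(b4)*c34) % 4
                assert t in (0, 2)              # difference of two odd numbers
                # t = 2: only xi4 = v+1 is admissible; t = 0: all xi4 >= v+2
                # are, and their geometric tail also sums to 2^(-(v+1))
                s = 2*(Fraction(1, 2**(v + 1)) if t == 2 else tail(v + 2))
                assert s == Fraction(1, 2**v)
                verified += 1
assert verified == 168
```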
The study of collective excitations is one of the main areas of interest
for the experimental and theoretical research activity in trapped Bose-condensed
gases (for a review of experimental and theoretical investigations see
respectively \cite{SS} and \cite{DGPS99}). At low temperatures, the frequencies of
the low-energy collective oscillations of the condensate have been measured with
great accuracy \cite{JILA96,MIT99}, and found in very good agreement with the predictions
of the mean-field Gross-Pitaevskii theory \cite{ERBDC96,S96}. In a series of experiments
carried out at JILA \cite{JILA97} and MIT \cite{MIT98} the excitations of a trapped Bose gas
have also been explored as a function of temperature. The main features are: on the one hand
oscillations of both the condensate and the thermal cloud are visible and, on the other hand,
the oscillations are increasingly damped as temperature is raised and temperature-dependent
frequency shifts are also observed.
A theoretical description which correctly accounts for these phenomena has not yet been fully
developed.
At finite temperature the dynamics of Bose-condensed systems is complicated. Depending on the
temperature, density and frequency of the excitations one is probing different regimes (for an
exhaustive discussion see the books \cite{NP90} and \cite{G93}). If the frequency $\omega$ is
much smaller than the inverse of the typical collision time $\tau_c$: $\omega\tau_c\ll 1$, the
excitations are collective collisional modes, which are described by the theory of two-fluid
hydrodynamics. In terms of length scales this regime is equivalently defined by the condition:
$\lambda_{ex}\gg \ell_{mfp}$, where $\lambda_{ex}$ is the wavelength of the excitation and
$\ell_{mfp}$ is its mean free path. At low temperatures and low values of the density the mean
free path becomes comparable with the size of the system. In this case, which corresponds to the
condition $\omega\tau_c\gg 1$, one is probing the collisionless regime, which is properly described
by mean-field theories.
Collisionless modes can be further distinguished into collective and single-particle excitations,
depending on whether the excitation energy lies respectively well below or above the chemical
potential $\mu$. Single-particle excitations have wavelength much smaller than the healing length
of the condensate, which is defined as $\xi=1/\sqrt{8\pi a n_0}$, where $a$ is the $s$-wave scattering
length and $n_0$ is the condensate density. On the contrary, collective modes satisfy the condition:
$\lambda_{ex}\gg\xi$. Finally, in harmonically trapped systems, collective modes can behave
semiclassically if their energy is much larger than the typical trapping energy: $\hbar\omega_{ho}
\ll\hbar\omega\ll\mu$, where $\omega_{ho}$ is the harmonic oscillator frequency. If instead
$\hbar\omega\sim\hbar\omega_{ho}$, the discretization of levels becomes important and one is not
allowed to treat the excitation as quasiclassical.
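For orientation, the healing length defined above can be evaluated for typical experimental parameters; the numbers below, loosely modelled on $^{87}$Rb, are purely illustrative:

```python
from math import pi, sqrt

a = 5.3e-9        # s-wave scattering length in m (illustrative, ~ 87Rb)
n0 = 1.0e20       # condensate density in m^-3 (illustrative)

xi = 1.0 / sqrt(8*pi*a*n0)      # healing length, xi = 1/sqrt(8 pi a n0)

# consistency with the defining relation 8 pi a n0 xi^2 = 1
assert abs(8*pi*a*n0*xi**2 - 1.0) < 1e-12
```

For these illustrative values $\xi$ comes out to a few hundred nanometres, well below the micron-scale size of a typical condensate, consistent with the separation of length scales invoked above.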
Collective modes in the collision-dominated regime have been investigated in harmonically trapped
systems by several authors \cite{HYD}. The present work is focused on the study of collective
excitations in the collisionless regime. In the last years a large number of theoretical papers
have appeared in the literature addressing this problem. Mean-field approaches, which extend to
finite temperature the Gross-Pitaevskii equation for the order parameter, have been put forward
\cite{G96} and applied to the calculation of the low-energy modes in traps
\cite{HZG97,HDB98,SZ99}. However, in these approaches the noncondensate component is treated as a
static thermal bath and its dynamic coupling to the oscillations of the condensate is neglected.
The results obtained for the collective modes do not adequately reproduce the features observed
in experiments, in particular these approaches do not account for the damping of the excitations.
More accurate time-dependent mean-field schemes have been proposed \cite{PB96,MT97,G98,RB99},
which describe
the coupled dynamics of the condensate and noncondensate components. These methods have been
applied to the study of damping in trapped systems \cite{G98} and agree with results obtained
from perturbation theory \cite{PS97,FSW98}. Explicit calculation of the damping rate of the
low-energy modes in harmonic traps has been carried out in \cite{FSW98,BS98,GP99,RCGS99} and found in
good agreement with experiments. Similar methods have also been applied to the calculation of
the temperature dependence of the frequency shifts in the collisionless regime.
Bijlsma and Stoof \cite{BS98-1} have used a variational ansatz to describe the time evolution of the
condensate and the thermal cloud and have calculated the frequencies of the coupled modes in which
the two
components move either in phase or out of phase. These authors also suggest that the avoided crossing
between the in-phase and out-of-phase modes might be the reason for the features observed for the $m=0$
mode at JILA \cite{JILA97}. Olshanii has explicitly analyzed the JILA $m=2$ mode \cite{JILA97},
suggesting that the observed temperature dependence of the excitation frequency might be due to a
strong resonance between the oscillation frequency of the condensate and one of the eigenfrequencies
of the thermal cloud \cite{O98}. Fedichev and Shlyapnikov \cite{FS98} have developed a
Green's function perturbation scheme for inhomogeneous Bose-condensed gases at finite temperature
and have calculated energy shifts and damping rates of quasiclassical collective modes, which
satisfy the condition $\hbar\omega_{ho}\ll\epsilon\ll\mu$, where $\epsilon$ is the energy of the
excitation. Very recently, Reidl {\it et al.} \cite{RCGS99} have calculated by the dielectric
formalism the frequency shift of the $m=0$ and $m=2$ modes, and compared the results with the
JILA experiments.
In the present work we derive, within a time-dependent mean-field scheme, coupled equations
for the dynamics of the condensate and noncondensate components. These equations are solved
perturbatively to
second order in the interaction coupling constant. For homogeneous systems this approach is equivalent
to the finite-temperature extension of the Beliaev approximation discussed in \cite{SG98}.
In the homogeneous case we give explicit results for the temperature dependence of the velocity of
zero sound, which include effects beyond the Bogoliubov theory. We also apply our analysis to
harmonically trapped systems in the thermodynamic limit. In this regime, which is reached for systems
with a very large number of trapped particles, one can use the Thomas-Fermi approximation for the
condensate and neglect finite-size effects. Under these conditions, which are not difficult to
realize in experiments (see e.g. \cite{MIT98}), the frequencies of the collective modes are found
to change with temperature due to static and dynamic correlations beyond the Gross-Pitaevskii theory.
We calculate, as a function of temperature, the frequency shifts of the lowest compressional and
surface
modes. We find that at intermediate temperatures $T\sim 0.6$--$0.7\,T_c$, where $T_c$ is the
transition
temperature, the fractional shift due to beyond mean-field effects is of the order of 5\%.
This result should be compared with the corresponding correction
predicted at very low temperatures \cite{PS98,BP99} and arising from quantum fluctuations, which
turns out to be typically of the order of 0.5\%.
The structure of the paper is as follows. In Sec. II we develop the time-dependent mean-field
scheme and derive coupled equations for the small-amplitude oscillations of the condensate and
noncondensate components. Sec. III is devoted to spatially homogeneous systems. First we develop
the perturbation scheme and hence we calculate to second order in the interaction coupling
constant the equation of state of the system and the speed and damping rate of
zero sound. In Sec. IV we apply the same perturbation scheme to harmonically trapped systems in the
thermodynamic limit. We calculate the temperature dependence of the frequency shift of the low-energy
collective modes and discuss the comparison with experiments. Finally, we show that at zero
temperature our approach reproduces the hydrodynamic equations of superfluids.
\section{Time-dependent mean-field scheme}
Our starting point is the grand-canonical Hamiltonian of the system in the
presence of an inhomogeneous external potential $V_{ext}({\bf r})$.
In terms of the creation and annihilation particle field operators
$\psi^{\dagger}({\bf r},t)$ and $\psi({\bf r},t)$, the Hamiltonian
takes the form
\begin{eqnarray}
H^{\prime} \equiv H-\mu N &=& \int d{\bf r} \;\psi^{\dagger}({\bf r},t)
\left( -\frac{\hbar^2\nabla^2}
{2m} + V_{ext}({\bf r}) - \mu \right) \psi({\bf r},t) \nonumber \\
&+& \frac{{\rm g}}{2} \int\;d{\bf r} \;\psi^{\dagger}({\bf r},t)
\psi^{\dagger}({\bf r},t)
\psi({\bf r},t)\psi({\bf r},t) \;\;.
\label{gch}
\end{eqnarray}
In the above equation we have assumed a point-like interaction between
particles $V({\bf r}-{\bf r}')={\rm g}\delta({\bf r}-{\bf r}')$,
with the coupling constant ${\rm g}$ given by the expression
${\rm g}=4\pi\hbar^2a/m$, to lowest order in the $s$-wave scattering
length $a$. The equation of motion for the particle field operator
follows directly from the Heisenberg equation and reads
\begin{eqnarray}
i\hbar\frac{\partial}{\partial t}\psi({\bf r},t) &=& \left[\psi({\bf r},t),
H^{\prime}\right] \nonumber \\
&=& \left(-\frac{\hbar^2\nabla^2}{2m}+V_{ext}({\bf r})-\mu\right)
\psi({\bf r},t) +
{\rm g} \, \psi^{\dagger}({\bf r},t)\psi({\bf r},t)\psi({\bf r},t) \;\;.
\label{heq}
\end{eqnarray}
The dynamic equations derived in this section correspond to the linearized
time-dependent Hartree-Fock-Bogoliubov (TDHFB) approximation. This
self-consistent mean-field scheme is based on the following prescriptions
(we use the notations of Ref. \cite{G96}):
\begin{eqnarray}
a)\;\;\; && \psi({\bf r},t)=\Phi({\bf r},t)+\tilde{\psi}({\bf r},t)
\nonumber \\
&& \Phi({\bf r},t)=\langle\psi({\bf r},t)\rangle \nonumber \\
&& \langle\tilde{\psi}({\bf r},t)\rangle=0 \nonumber \\
&& \nonumber \\
b)\;\;\; && \langle\tilde{\psi}^{\dagger}({\bf r},t)\tilde{\psi}
({\bf r},t)\rangle = \tilde{n}({\bf r},t) \nonumber \\
&& \langle\tilde{\psi}({\bf r},t)\tilde{\psi}({\bf r},t)
\rangle = \tilde{m}({\bf r},t) \nonumber \\
&& \nonumber \\
c)\;\;\; && \tilde{\psi}^{\dagger}({\bf r},t)
\tilde{\psi}^{\dagger}({\bf r},t)\tilde{\psi}({\bf r},t)
\tilde{\psi}({\bf r},t)= 4\tilde{n}({\bf r},t)\tilde{\psi}^{\dagger}({\bf r},t)
\tilde{\psi}({\bf r},t) \nonumber \\
&& \;\;\;\;\;\;\;\; + \tilde{m}({\bf r},t)\tilde{\psi}^{\dagger}({\bf r},t)
\tilde{\psi}^{\dagger}({\bf r},t) + \tilde{m}^{\ast}({\bf r},t)\tilde{\psi}({\bf r},t)
\tilde{\psi}({\bf r},t)
\nonumber \\
&& \nonumber \\
d)\;\;\; && \langle\tilde{\psi}({\bf r},t)\tilde{\psi}({\bf r},t)
\tilde{\psi} ({\bf r},t)\rangle = 0 \nonumber \\
&& \langle\tilde{\psi}^{\dagger}({\bf r},t)
\tilde{\psi}({\bf r},t) \tilde{\psi}({\bf r},t)\rangle = 0 \;\;. \nonumber
\end{eqnarray}
The averages $\langle ... \rangle$ in $a)$, $b)$ and $d)$ are
nonequilibrium averages, while time-independent equilibrium averages are
indicated in this paper with the symbol $\langle ... \rangle_0$.
The prescription $a)$ is the usual decomposition of the field operator
into a condensate and a noncondensate component and defines the condensate
wave function $\Phi({\bf r},t)$.
Prescription $b)$ defines the normal, $\tilde{n}({\bf r},t)$, and anomalous,
$\tilde{m}({\bf r},t)$, noncondensate particle densities. In terms of these
quantities the interaction term in the Hamiltonian (\ref{gch}) quartic in the
noncondensate components of $\psi({\bf r},t)$ can be approximated using the
factorization given by prescription $c)$. Finally, in prescription
$d)$ all averages of cubic products of noncondensate operators are set to zero.
This is expected to be a good approximation for dilute systems. The inclusion of
the triplet correlations $\langle\tilde{\psi}\tilde{\psi}\tilde{\psi}\rangle$ and
$\langle\tilde{\psi}^{\dagger}\tilde{\psi}\tilde{\psi}\rangle$ in a time-dependent
self-consistent mean-field scheme is discussed in \cite{PB96,RB99}.
By using the above prescriptions one gets the following equation of
motion for the condensate wave function
\begin{eqnarray}
i\hbar\frac{\partial}{\partial t}\Phi({\bf r},t) &=& \left(-\frac{\hbar^2
\nabla^2}{2m}+V_{ext}({\bf r})-\mu\right)\Phi({\bf r},t) +
{\rm g}|\Phi({\bf r},t)|^2
\Phi({\bf r},t) \nonumber \\
&+& 2{\rm g}\Phi({\bf r},t) \tilde{n}({\bf r},t)
+ {\rm g}\Phi^{\ast}({\bf r},t) \tilde{m}({\bf r},t)\;\;.
\label{gpeq}
\end{eqnarray}
This equation includes the dynamic coupling between the condensate and
the noncondensate particles. If we neglect these effects,
$\tilde{n}=\tilde{m}=0$, equation (\ref{gpeq}) reduces to the usual
Gross-Pitaevskii (GP) equation.
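As a purely illustrative aside (not part of the derivation), the stationary GP limit can be obtained numerically by imaginary-time propagation. The sketch below uses a hypothetical 1D harmonic trap in units $\hbar=m=\omega=1$ with an ad hoc coupling, and compares the resulting chemical potential with the Thomas-Fermi estimate.

```python
import numpy as np

# Minimal sketch with hypothetical parameters: ground state of the 1D GP
# equation in a harmonic trap via split-step imaginary-time propagation.
# Units hbar = m = omega = 1; the 1D coupling g1d = 50 is purely ad hoc.
N, L, g1d, dt = 256, 20.0, 50.0, 1e-3
x = np.linspace(-L/2, L/2, N, endpoint=False)
dx = x[1] - x[0]
k = 2*np.pi*np.fft.fftfreq(N, d=dx)
V = 0.5*x**2

psi = np.exp(-x**2/2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)

for _ in range(5000):
    psi = np.fft.ifft(np.exp(-0.25*dt*k**2)*np.fft.fft(psi))  # half kinetic step
    psi *= np.exp(-dt*(V + g1d*np.abs(psi)**2))               # potential + mean field
    psi = np.fft.ifft(np.exp(-0.25*dt*k**2)*np.fft.fft(psi))  # half kinetic step
    psi /= np.sqrt(np.sum(np.abs(psi)**2)*dx)                 # restore unit norm

n_c = np.abs(psi)**2                                          # condensate density
kin = np.sum(np.conj(psi)*np.fft.ifft(0.5*k**2*np.fft.fft(psi))).real*dx
mu = kin + np.sum((V + g1d*n_c)*n_c)*dx                       # chemical potential
mu_TF = (0.75*g1d/np.sqrt(2))**(2.0/3.0)                      # Thomas-Fermi estimate
```

For this strongly interacting (Thomas-Fermi) regime the numerical chemical potential lands within a few percent of $\mu_{TF}$, the residual difference being the kinetic (boundary-layer) correction neglected in the Thomas-Fermi approximation.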
We are interested in the small-amplitude oscillations of the condensate,
which is only slightly displaced from its stationary value
$\Phi_0({\bf r})=\langle\psi({\bf r})\rangle_0$
\begin{equation}
\Phi({\bf r},t)=\Phi_0({\bf r})+\delta\Phi({\bf r},t) \;\;,
\label{flucphi}
\end{equation}
where $\delta\Phi({\bf r},t)$ is a small fluctuation.
In the same way, we consider small fluctuations of the normal and anomalous
particle densities
\begin{eqnarray}
\tilde{n}({\bf r},t) &=& \tilde{n}^0({\bf r})
+ \delta \tilde{n}({\bf r},t) \nonumber \\
\tilde{m}({\bf r},t) &=& \tilde{m}^0({\bf r}) + \delta \tilde{m}({\bf r},t)
\label{flucden}
\end{eqnarray}
around their equilibrium values $\tilde{n}^0({\bf r})=\langle\tilde{\psi}
^{\dagger}({\bf r})\tilde{\psi}({\bf r})\rangle_0$ and $\tilde{m}^0({\bf r})=
\langle\tilde{\psi}({\bf r})\tilde{\psi}({\bf r})\rangle_0$.
The real wave function $\Phi_0({\bf r})$
satisfies the stationary equation \cite{N1}
\begin{equation}
\left(-\frac{\hbar^2\nabla^2}{2m}+V_{ext}({\bf r})-\mu+{\rm g}n_0({\bf r})+
2{\rm g}\tilde{n}^0({\bf r})+{\rm g}\tilde{m}^0({\bf r})\right)\Phi_0({\bf r}) = 0 \;\;,
\label{statgp}
\end{equation}
where
$n_0({\bf r})=|\Phi_0({\bf r})|^2$ is the condensate density.
The time-dependent equation for $\delta\Phi({\bf r},t)$ is obtained by
linearizing the equation of motion (\ref{gpeq})
\begin{eqnarray}
i\hbar\frac{\partial}{\partial t}\delta\Phi({\bf r},t) &=& \left( -\frac
{\hbar^2\nabla^2}{2m}+V_{ext}({\bf r})-\mu+2{\rm g} n({\bf r})\right)\delta\Phi
({\bf r},t) \nonumber\\
&+& \left({\rm g}n_0({\bf r})+{\rm g}\tilde{m}^0({\bf r})\right)
\delta\Phi^{\ast}({\bf r},t)
+ 2{\rm g}\Phi_0({\bf r})\delta
\tilde{n}({\bf r},t) + {\rm g}\Phi_0({\bf r})\delta \tilde{m}({\bf r},t) \;\;,
\label{fluceq}
\end{eqnarray}
where we have introduced the total equilibrium density
$n({\bf r})=n_0({\bf r})+\tilde{n}^0({\bf r})$.
Both the stationary wave function $\Phi_0$ and the fluctuations $\delta\Phi$
depend through Eqs. (\ref{statgp}) and (\ref{fluceq}) on the normal and anomalous
noncondensate particle densities, for which we need independent equations governing their
equilibrium values and time evolution. For this purpose it is convenient to express
the noncondensate operators $\tilde{\psi}$, $\tilde
{\psi}^{\dagger}$ in terms of quasiparticle operators $\alpha_i$, $\alpha_i^
{\dagger}$ by means of the generalization to inhomogeneous systems of the Bogoliubov
canonical transformations \cite{F72}
\begin{eqnarray}
\tilde{\psi}({\bf r},t) &=& \sum_i\left(u_i({\bf r})\alpha_i(t) + v_i^{\ast}
({\bf r})\alpha_i^{\dagger}(t)\right) \;\;,
\nonumber \\
\tilde{\psi}^{\dagger}({\bf r},t) &=& \sum_i\left( u_i^{\ast}({\bf r})
\alpha_i^{\dagger}(t) + v_i({\bf r})\alpha_i(t)\right) \;\;.
\label{bogtrans}
\end{eqnarray}
The normalization condition for the functions $u_i({\bf r})$, $v_i({\bf r})$,
which ensures that the quasiparticle operators $\alpha_i$, $\alpha_i^{\dagger}$
satisfy Bose commutation relations, reads
\begin{equation}
\int d{\bf r} \left[u_i^{\ast}({\bf r})u_j({\bf r}) - v_i^{\ast}({\bf r})
v_j({\bf r})\right] = \delta_{ij} \;\;.
\label{norm}
\end{equation}
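As an aside, the normalization condition above is the statement that the transformation (\ref{bogtrans}) is symplectic. A minimal numerical sketch (real amplitudes, hypothetical Bogoliubov angle) checks that such a transformation preserves the bosonic commutator matrix:

```python
import numpy as np

# Minimal sketch (hypothetical Bogoliubov angle): a real transformation with
# u^2 - v^2 = 1 preserves the bosonic commutator matrix sigma_z = diag(1, -1),
# which is the content of the normalization condition above.
theta = 0.8
u, v = np.cosh(theta), np.sinh(theta)   # guarantees u**2 - v**2 = 1
T = np.array([[u, v], [v, u]])          # maps (alpha, alpha^dagger) to (psi, psi^dagger)
sigma_z = np.diag([1.0, -1.0])
res = T @ sigma_z @ T.T                 # should reproduce sigma_z
```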
The time evolution of $\tilde{n}({\bf r},t)$ and $\tilde{m}({\bf r},t)$ can be obtained
from the Heisenberg equations for the products of quasiparticle operators $\alpha_i^{\dagger}(t)
\alpha_j(t)$ and $\alpha_i(t)\alpha_j(t)$
\begin{eqnarray}
i\hbar\frac{\partial}{\partial t} \langle\alpha_i^{\dagger}(t)\alpha_j(t)\rangle
&=& \langle \Bigl[
\alpha_i^{\dagger}
(t)\alpha_j(t),H^{\prime} \Bigr] \rangle \;\;,
\nonumber \\
i\hbar\frac{\partial}{\partial t} \langle\alpha_i(t)\alpha_j(t)\rangle
&=& \langle \Bigl[ \alpha_i
(t)\alpha_j(t),H^{\prime} \Bigr] \rangle \;\;.
\label{fgeq}
\end{eqnarray}
In the above equations the commutators are calculated using the mean-field prescriptions
$a)$-$d)$ and the canonical transformation (\ref{bogtrans}). The calculation can be easily
done by noticing that in the Hamiltonian (\ref{gch}) only the terms quadratic and quartic
in the noncondensate operators $\tilde\psi$, $\tilde\psi^{\dagger}$ give nonvanishing
contributions, as we set to zero averages of single and cubic products of noncondensate
operators.
At equilibrium, we take the occupation of quasiparticle levels to be diagonal,
$\langle\alpha_i^\dagger\alpha_j\rangle_0=\delta_{ij}f_i^0$, while anomalous averages of
quasiparticles are zero, $\langle\alpha_i\alpha_j\rangle_0=0$. With these conditions, the
stationary equations $\langle\Bigl[\alpha_i^{\dagger}(t)\alpha_j(t),H^{\prime} \Bigr]\rangle_0 =
\langle\Bigl[\alpha_i(t)\alpha_j(t),H^{\prime} \Bigr]\rangle_0=0$ yield the following coupled
equations for the quasiparticle amplitudes $u_i({\bf r})$, $v_i({\bf r})$
\begin{eqnarray}
{\cal L}u_i({\bf r})+[{\rm g}n_0({\bf r})+{\rm g}\tilde{m}^0({\bf r})]
v_i({\bf r}) &=& \epsilon_iu_i({\bf r}) \;\;,
\nonumber \\
{\cal L}v_i({\bf r})+[{\rm g}n_0({\bf r})+{\rm g}\tilde{m}^0({\bf r})]
u_i({\bf r}) &=& - \epsilon_i
v_i({\bf r}) \;\;,
\label{bogeqs}
\end{eqnarray}
where we have introduced the hermitian operator
\begin{equation}
{\cal L} = - \frac{\hbar^2\nabla^2}{2m} + V_{ext}({\bf r}) - \mu
+ 2{\rm g}n({\bf r}) \;\;.
\label{Lop}
\end{equation}
The coupled Eqs. (\ref{bogeqs}) correspond to the static Hartree-Fock-Bogoliubov (HFB) equations
as described in Ref. \cite{G96}, and the $\epsilon_i$ are the quasiparticle energies which
fix the quasiparticle occupation numbers at equilibrium $f_i^0=[e^{\epsilon_i/k_BT}-1]^{-1}$.
The equilibrium values of the normal and anomalous particle densities are
written as
\begin{eqnarray}
\tilde{n}^0({\bf r}) &=& \sum_i \left\{ [|u_i({\bf r})|^2+|v_i({\bf r})|^2]f_i^0
+ |v_i({\bf r})|^2 \right\} \;\;,
\nonumber\\
\tilde{m}^0({\bf r}) &=& \sum_i \left\{ 2 u_i({\bf r})v_i^{\ast}({\bf r}) f_i^0
+ u_i({\bf r}) v_i^{\ast}({\bf r}) \right\} \;\;.
\label{eqdens}
\end{eqnarray}
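As an illustrative cross-check (in hypothetical units $\hbar=m=1$ with ${\rm g}n_0=1$): for a homogeneous gas the amplitudes reduce to plane waves with the standard Bogoliubov $u_p$, $v_p$, and the $T=0$ limit of $\tilde{n}^0$ above, $\tilde{n}^0=(1/V)\sum_p v_p^2$, reproduces the well-known quantum-depletion result $({\rm g}n_0 m/\hbar^2)^{3/2}/(3\pi^2)$.

```python
import numpy as np

# Illustrative check in hypothetical units hbar = m = 1 with g*n0 = 1:
# for plane waves the T = 0 normal density is (1/V) sum_p v_p^2, and the
# momentum integral gives the quantum depletion (g*n0)^{3/2}/(3 pi^2).
gn0 = 1.0
p = np.linspace(1e-4, 500.0, 500000)
eps0 = 0.5*p**2
eps = np.sqrt(eps0*(eps0 + 2.0*gn0))       # Bogoliubov spectrum
v2 = ((eps0 + gn0) - eps)/(2.0*eps)        # v_p^2
f = p**2*v2/(2.0*np.pi**2)                 # d^3p/(2 pi)^3 measure, angles done
ntilde0 = np.sum((f[1:] + f[:-1])/2)*(p[1] - p[0])   # trapezoidal integral
exact = gn0**1.5/(3.0*np.pi**2)            # known closed-form depletion
```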
Out of equilibrium we define the following normal and anomalous quasiparticle
distribution functions
\begin{eqnarray}
f_{ij}(t) &=& \langle\alpha_i^{\dagger}(t)\alpha_j(t)\rangle -
\delta_{ij}f_i^0 \;\;,
\nonumber \\
g_{ij}(t) &=& \langle\alpha_i(t)\alpha_j(t)\rangle \;\;,
\label{qpdens}
\end{eqnarray}
in terms of which the fluctuations of $\tilde{n}$ and $\tilde{m}$ take the
form
\begin{eqnarray}
\delta\tilde{n}({\bf r},t)&=&\sum_{ij} \left\{ [u_i^{\ast}({\bf r})u_j({\bf r})
+ v_i^{\ast}({\bf r})v_j({\bf r})]f_{ij}(t)+u_i({\bf r})v_j({\bf r})g_{ij}(t)
+ u_i^{\ast}({\bf r})v_j^{\ast}({\bf r})g_{ij}^{\ast}(t) \right\} \;\;,
\nonumber\\
\delta\tilde{m}({\bf r},t)&=&\sum_{ij} \left\{ 2 v_i^{\ast}({\bf r})u_j({\bf r})
f_{ij}(t)+u_i({\bf r})u_j({\bf r})g_{ij}(t)
+ v_i^{\ast}({\bf r})v_j^{\ast}({\bf r})g_{ij}^{\ast}(t) \right\} \;\;.
\label{neqdens}
\end{eqnarray}
Up to linear terms in the fluctuations $\delta\Phi$, $\delta\tilde{n}$ and
$\delta\tilde{m}$, the equation of motion for $f_{ij}$ gives the result \cite{N2}
\begin{eqnarray}
i\hbar\frac{\partial}{\partial t}f_{ij}(t)
&=& (\epsilon_j-\epsilon_i)f_{ij}(t) \,+\, 2{\rm g}(f_i^0-f_j^0)
\nonumber \\
&\times& \int d{\bf r} \; \Phi_0
\biggl[ \delta\Phi(t) \biggl(u_iu_j^{\ast}+v_iv_j^{\ast}+v_iu_j^{\ast}\biggr)
+ \delta\Phi^{\ast}(t) \biggl(u_iu_j^{\ast}+v_iv_j^{\ast}+u_iv_j^{\ast}\biggr)
\biggr]
\label{feq} \\
&+& {\rm g}(f_i^0-f_j^0)\int d{\bf r} \biggl[ 2\delta\tilde{n}(t)
\biggl(u_iu_j^{\ast}+v_iv_j^{\ast}\biggr) +\delta\tilde{m}(t)v_iu_j^\ast
+\delta\tilde{m}^\ast(t)u_iv_j^\ast\biggr] \;\;. \nonumber
\end{eqnarray}
Analogously, for the time evolution of the anomalous quasiparticle distribution
function $g_{ij}$, one obtains in the linear regime
\begin{eqnarray}
i\hbar\frac{\partial}{\partial t}g_{ij}(t)
&=& (\epsilon_j+\epsilon_i)g_{ij}(t) \,+\, 2{\rm g}(1+f_i^0+f_j^0)
\nonumber \\
&\times& \int d{\bf r} \; \Phi_0
\biggl[ \delta\Phi(t) \biggl(u_i^{\ast}v_j^{\ast}+v_i^{\ast}u_j^{\ast}
+u_i^{\ast}u_j^{\ast}\biggr)
+ \delta\Phi^{\ast}(t) \biggl(u_i^{\ast}v_j^{\ast}+v_i^{\ast}u_j^{\ast}
+v_i^{\ast}v_j^{\ast}\biggr)
\biggr]
\label{geq} \\
&+& {\rm g}(1+f_i^0+f_j^0)\int d{\bf r} \biggl[ 2\delta\tilde{n}(t)
\biggl(u_i^{\ast}v_j^{\ast}+v_i^{\ast}u_j^{\ast}\biggr)
+\delta\tilde{m}(t)u_i^{\ast}u_j^\ast
+\delta\tilde{m}^\ast(t)v_i^{\ast}v_j^\ast\biggr] \;\;.\nonumber
\end{eqnarray}
The first term on the r.h.s. of Eqs. (\ref{feq}), (\ref{geq}) describes
the free evolution of the quasiparticle states, corresponding to quasiparticle
operators which evolve in time according to $\alpha_i(t)=e^{-i\epsilon_i t/\hbar}
\alpha_i$. The second and third terms describe, respectively, the coupling to the
oscillations of the condensate and to the fluctuations $\delta\tilde{n}$ and
$\delta\tilde{m}$ of the normal and anomalous particle density.
Above the Bose-Einstein transition temperature, where the system becomes normal and
$\Phi=\tilde{m}=0$, Eq. (\ref{feq}) corresponds to the linearized time-dependent
Hartree-Fock (TDHF) equation. In the semiclassical limit TDHF is equivalent to the
collisionless Boltzmann equation for the particle distribution function \cite{RS80}.
In the framework of mean-field theories, coupled time-dependent equations for the
condensate and noncondensate components of a Bose gas have been discussed by many
authors. The TDHF scheme is discussed in \cite{MT97}.
Moreover, coupled equations of motion for the condensate and for correlation functions
of pairs and triplets of noncondensate particles have been derived in \cite{PB96},
and studied in the linear response regime in \cite{RB99}. Similar kinetic equations
were derived by Kane and Kadanoff \cite{KK65} using the formalism of non-equilibrium
Green's functions developed in \cite{KB62}. Recently, the approach of Kane and Kadanoff
has been extended to deal with a trapped Bose-condensed gas in \cite{ITG99}.
The stationary Eq. (\ref{statgp}) for $\Phi_0$,
with the normal and anomalous particle densities at equilibrium given by
(\ref{eqdens}), and Eqs. (\ref{bogeqs}) for the quasiparticles correspond
to the static self-consistent HFB approximation as reviewed by Griffin in
\cite{G96}. These equations have been solved for harmonically trapped
gases in \cite{HDB98}. As is well known \cite{GA59}, in the case of homogeneous
systems, the HFB excitation energies exhibit an unphysical gap at long
wavelengths, which is fixed by the anomalous particle density:
$\Delta=2{\rm g}\sqrt{n_0|\tilde{m}^0|}$ (see Ref. \cite{G96}).
If one neglects the anomalous particle density, $\tilde{m}^0=0$, the
HFB equations reduce to the so-called HFB-Popov approximation \cite{P65,G96,SG98},
which is gapless and in the high temperature regime coincides with the Hartree-Fock
scheme \cite{GSL81}. The HFB-Popov approximation has been applied by several authors
to interacting bosons in harmonic traps, both to calculate the frequencies of the
collective modes \cite{HZG97} and to study the thermodynamic properties of the system
\cite{GPS96,GPS97,DGGPS97}. Gapless static mean-field approximations, alternative to
HFB-Popov, have been put forward and discussed in \cite{PMCB98,RB99}.
Finally, by neglecting both the normal and the anomalous particle density,
$\tilde{n}^0=\tilde{m}^0=0$, the HFB equations reduce to the Gross-Pitaevskii theory.
From Eq. (\ref{statgp}) one recovers the stationary GP equation, while Eqs. (\ref{bogeqs})
follow from the linearization of the time-dependent GP equation. At very low temperatures,
where effects arising from the depletion of the condensate are negligible, the Gross-Pitaevskii
theory is well suited to describe dilute gases in traps. For these systems the linearized GP equation
has been solved by many authors \cite{ERBDC96,EDCB96,DGGPS97} and successfully compared to
experiments.
The linearized TDHFB mean-field approximation is a closed set of self-consistent equations
which describe the small oscillations of the
system around the static HFB solution. The dynamic Eq. (\ref{fluceq}) describes the fluctuation
$\delta\Phi$ of the condensate around the stationary solution $\Phi_0$, while Eqs.
(\ref{neqdens})-(\ref{geq}) describe the small oscillations
$\delta\tilde{n}$, $\delta\tilde{m}$ of the normal and anomalous particle density around
their equilibrium values $\tilde{n}^0$, $\tilde{m}^0$. Since the equations for the time
evolution of $\delta\Phi$, $\delta\tilde{n}$ and $\delta\tilde{m}$ are derived from the
corresponding exact Heisenberg equations, it can be easily checked that the linearized TDHFB
approach preserves important conservation laws, such as particle-number and energy conservation.
This is a general feature of time-dependent mean-field approximations \cite{RS80,BR86}.
Another important feature of linearized TDHFB is that, even though the quasiparticle energies
obtained from Eqs. (\ref{bogeqs}) exhibit a gap
at long wavelengths: $\epsilon_p\to\Delta$ as $p\to 0$, the self-consistent solution of
Eq. (\ref{fluceq}) is gapless. In fact, one can show that
this self-consistent solution satisfies the Hugenholtz-Pines theorem \cite{HP59}.
There are some questions one should address before embarking on the difficult task of
a self-consistent solution of the linearized TDHFB equations. A first problem concerns the
equilibrium anomalous density $\tilde{m}^0$ [see Eq. (\ref{eqdens})], which is ultraviolet
divergent. To second order in the interaction and for homogeneous systems the ultraviolet divergence
is canceled by the renormalization of the coupling constant (see e.g. \cite{P72}):
${\rm g}\to{\rm g}\left(1+{\rm g}\frac{1}{V}\sum_{\bf p}m/p^2\right)$. How to include
the renormalization of ${\rm g}$ in a self-consistent calculation and how to extend this
renormalization procedure to inhomogeneous systems is still
an open problem. Recently, Burnett and coworkers \cite{PMCB98,RB99} have shown that there is no
need to renormalize ${\rm g}$ if one uses, instead of a contact potential, an effective
interaction, the many-body T-matrix, which includes self-consistently the effects arising from the
anomalous density. Another problem concerns the gap exhibited by the quasiparticle energies in
Eqs. (\ref{bogeqs}). The self-consistent solution of these equations defines the equilibrium state of
the system: it fixes the noncondensate densities $\tilde{n}^0$ and $\tilde{m}^0$ through
Eqs. (\ref{eqdens}),
and the chemical potential $\mu$ and the condensate wavefunction $\Phi_0$ through Eq. (\ref{statgp}).
Even though the small oscillations around the static HFB solutions
give rise to a gapless spectrum for the fluctuations of the condensate, properties
such as the phonon velocity and their damping rate will be affected by an incorrect description
of the system at equilibrium, originating from the unphysical gap $\Delta$ in the quasiparticle
energies. In particular, if $k_BT\sim\Delta$, one expects a strong influence of the gap on the
temperature dependence of these properties. In the present work we will not solve the linearized
TDHFB equations self-consistently; instead, we solve them perturbatively to order ${\rm g}^2$.
By this method we avoid the problems mentioned above, in particular, the quasiparticle states
in (\ref{bogeqs}) are properly described by Bogoliubov theory, which is gapless.
Another point which deserves comment is the Kohn mode (dipole mode). As is well known,
in the presence of harmonic confinement the center of mass degrees of freedom separate from all
other degrees of freedom, giving rise to a collective mode of the system, the Kohn mode, in which
the equilibrium density profile oscillates rigidly at the trap frequency. The linearized TDHFB
equations obtained in this section do not describe this mode, as they do not account for the motion
of the center of mass. In fact, these equations correctly describe excitations for which the center
of mass is at rest, and we will restrict our analysis to this class of excitations. The Kohn mode is
associated with broken translational symmetry and is often referred
to as a ``spurious'' mode. For a discussion of spurious states and their appearance in linearized
time-dependent mean-field theories see Ref. \cite{BR86}.
\section{Spatially homogeneous system}
In this section we will develop, starting from the dynamic equations for the condensate
and noncondensate components derived in the previous section, a perturbation scheme for
the elementary excitations in a homogeneous system. Explicit results for the temperature
dependence of the chemical potential, damping rate and speed of zero
sound are given in the limit $a^3n_0\ll 1$, where $n_0=n_0(T)$ is the
equilibrium value of the condensate density at temperature $T$. At zero temperature, the
calculation presented here is equivalent to the Beliaev second-order approximation of the
single-particle Green's function \cite{B58}. In the high temperature regime, $k_BT\gg\mu$,
our approach corresponds to the finite-temperature extension of the Beliaev approximation
recently employed in Refs. \cite{SG98,FS98} (the former reference also gives a systematic
account of the various perturbation schemes for a uniform Bose gas). The perturbation
expansion carried out in the present work holds to order $(a^3n_0)^{1/2}$ for any temperature
in the condensed phase, except clearly very close to the transition, where the time-dependent
mean-field equations we used as starting point break down.
In a spatially homogeneous system the condensate wave function at equilibrium is constant,
$\Phi_0({\bf r})=\sqrt{n_0}$. The stationary equation (\ref{statgp}) then reads
\begin{equation}
\mu = {\rm g}n_0 + 2{\rm g}\tilde{n}^0 + {\rm g}\tilde{m}^0 \;,
\label{homcp}
\end{equation}
and represents the equation of state of the system, which fixes the chemical potential
as a function of the condensate density $n_0$ and the temperature $T$.
By using the above result for the chemical potential, equation (\ref{fluceq}) for the
fluctuations of the condensate becomes
\begin{eqnarray}
i\hbar\frac{\partial}{\partial t}\delta\Phi({\bf r},t) &=& \left( -\frac
{\hbar^2\nabla^2}{2m}+{\rm g} n_0-{\rm g} \tilde{m}^0 \right)\delta\Phi
({\bf r},t)
+ \left({\rm g}n_0+{\rm g}\tilde{m}^0\right)
\delta\Phi^{\ast}({\bf r},t)
\nonumber\\
&+& 2{\rm g}\sqrt{n_0}\delta
\tilde{n}({\bf r},t) + {\rm g}\sqrt{n_0}\delta \tilde{m}({\bf r},t) \;\;.
\label{homfc}
\end{eqnarray}
In the above equation the terms involving the equilibrium anomalous density $\tilde{m}^0$
account for the coupling between the fluctuations of the condensate and the static distribution
of noncondensate atoms, while the terms containing $\delta\tilde{n}$ and $\delta\tilde{m}$
describe the dynamic coupling between the condensate and the fluctuations of the noncondensate
component.
\subsection{Perturbation scheme}
The perturbation scheme applied to Eqs. (\ref{homcp}), (\ref{homfc}) consists in treating
the terms which give the static and dynamic coupling to the noncondensate component to {\it second}
order in ${\rm g}$. This means that the static and fluctuating parts of the normal and
anomalous density need to be calculated only to {\it first} order in
${\rm g}$. To accomplish this task one must retain in the equations for the quasiparticles
(\ref{bogeqs}), (\ref{feq}) and (\ref{geq}) only the terms which describe the coupling to the
condensate and neglect all terms arising from the coupling to the noncondensate component.
Let us suppose that the condensate oscillates with frequency $\omega$ and wave vector
${\bf q}/\hbar$
\begin{equation}
\delta\Phi({\bf r},t) = \frac{\delta\Phi_1({\bf q})}{\sqrt{V}} e^{i{\bf q}\cdot{\bf r}/\hbar}
e^{-i\omega t}
\;\;\;, \;\;\;\;
\delta\Phi^{\ast}({\bf r},t) = \frac{\delta\Phi_2({\bf q})}{\sqrt{V}}
e^{i{\bf q}\cdot{\bf r}/\hbar} e^{-i\omega t} \;\;.
\label{homfou}
\end{equation}
Furthermore, the quasiparticle amplitudes are described by the plane-wave functions
\begin{equation}
u_{\bf p}({\bf r})=\frac{u_p}{\sqrt{V}}e^{i{\bf p}\cdot{\bf r}/\hbar} \;\;\;,
v_{\bf p}({\bf r})=\frac{v_p}{\sqrt{V}}e^{i{\bf p}\cdot{\bf r}/\hbar} \;\;.
\label{hompw}
\end{equation}
To first order in ${\rm g}$ the quasiparticle equations (\ref{bogeqs}) become
\begin{eqnarray}
(p^2/2m + {\rm g}n_0)u_p + {\rm g}n_0 v_p &=& \epsilon_p u_p \;\;,
\nonumber \\
(p^2/2m + {\rm g}n_0)v_p + {\rm g}n_0 u_p &=& - \epsilon_p v_p \;\;.
\label{hombe}
\end{eqnarray}
These coupled equations coincide with the well-known Bogoliubov equations for the real
quasiparticle amplitudes $u_p$, $v_p$, which satisfy the following relations
\begin{eqnarray}
&& u_p^2=1+v_p^2=\frac{(\epsilon_p^2+{\rm g}^2n_0^2)^{1/2}+\epsilon_p}
{2\epsilon_p} \;\;,
\nonumber \\
&& u_pv_p = - \frac{{\rm g}n_0}{2\epsilon_p} \;\;,
\label{homuv}
\end{eqnarray}
with the quasiparticle energy $\epsilon_p$ given by the Bogoliubov spectrum
\begin{equation}
\epsilon_p = \left(\left(\epsilon_p^0+{\rm g}n_0\right)^2
-{\rm g}^2n_0^2\right)^{1/2} \;\;,
\label{hombs}
\end{equation}
where $\epsilon_p^0=p^2/2m$ is the free-particle energy.
Notice that, by employing the equation of state (\ref{homcp}), the full HFB equations
(\ref{bogeqs}) would coincide with the matrix equation
(\ref{hombe}) apart from a term ${\rm g}\tilde{m}^0$. This term would appear with the
{\it minus} sign in the diagonal term and with the {\it plus} sign in the off-diagonal term,
and is responsible for the gap in $\epsilon_p$ as $p\to 0$. Since we use the Bogoliubov
result (\ref{hombs}), we avoid the problem of the gap in the quasiparticle spectrum.
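A quick numerical sanity check of Eqs. (\ref{homuv}) and (\ref{hombs}) (illustrative units $\hbar=m=1$, ${\rm g}n_0=1$, chosen only for this sketch): the normalization $u_p^2-v_p^2=1$, the phonon limit $\epsilon_p\simeq p\sqrt{{\rm g}n_0/m}$ at small $p$, and the free-particle limit $\epsilon_p\simeq\epsilon_p^0+{\rm g}n_0$ at large $p$ can all be verified directly.

```python
import numpy as np

# Illustrative sanity check of the Bogoliubov amplitudes and spectrum in
# hypothetical units hbar = m = 1 with g*n0 = 1.
gn0 = 1.0
p = np.linspace(0.05, 10.0, 400)
eps0 = 0.5*p**2
eps = np.sqrt((eps0 + gn0)**2 - gn0**2)        # Bogoliubov spectrum
u2 = (np.sqrt(eps**2 + gn0**2) + eps)/(2*eps)  # u_p^2
v2 = u2 - 1.0                                  # u_p^2 - v_p^2 = 1 by construction
uv = -gn0/(2*eps)                              # u_p v_p

checks = {
    "consistency": np.allclose(uv**2, u2*v2),                 # (u v)^2 = u^2 v^2
    "phonon": abs(eps[0] - np.sqrt(gn0)*p[0]) < 1e-3*eps[0],  # eps ~ c p at small p
    "particle": abs(eps[-1] - (eps0[-1] + gn0)) < 2e-2,       # eps ~ eps0 + g n0 at large p
}
```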
In the same approximation one must neglect in Eqs. (\ref{feq}) and (\ref{geq}) the terms which
describe the coupling to the fluctuations $\delta\tilde{n}$ and $\delta\tilde{m}$ of the
noncondensate component. Due to the coupling to the
condensate, which acts as a time-dependent external drive, the only elements of the matrices
$f_{{\bf p},{\bf p}^{\prime}}$, $g_{{\bf p},{\bf p}^{\prime}}$ and $g^{\ast}_{{\bf p},{\bf p}
^{\prime}}$ which oscillate at the frequency $\omega$ are given by
\begin{eqnarray}
f_{{\bf p},{\bf q}+{\bf p}}(\omega) &=& {\rm g}\frac{\sqrt{n_0}}{\sqrt{V}}
\frac{f_p^0-f_{|{\bf q}+{\bf p}|}^0}{\hbar\omega+(\epsilon_p-\epsilon_{|{\bf q}+{\bf p}|})
+i0}
\biggl[\left(\delta\Phi_1-\delta\Phi_2\right)
\left(v_pu_{|{\bf q}+{\bf p}|}-u_pv_{|{\bf q}+{\bf p}|}\right) \biggr.
\nonumber\\
&+& \biggl.\left(\delta\Phi_1+\delta\Phi_2\right)
\left(2u_pu_{|{\bf q}+{\bf p}|}+2v_pv_{|{\bf q}+{\bf p}|}+v_pu_{|{\bf q}+{\bf p}|}
+u_pv_{|{\bf q}+{\bf p}|}\right)\biggr] \;\;,
\nonumber\\
g_{{\bf p},{\bf q}-{\bf p}}(\omega) &=& {\rm g}\frac{\sqrt{n_0}}{\sqrt{V}}
\frac{1+f_p^0+f_{|{\bf q}-{\bf p}|}^0}{\hbar\omega-(\epsilon_p+\epsilon_{|{\bf q}-{\bf p}|})
+i0}
\biggl[\left(\delta\Phi_1-\delta\Phi_2\right)
\left(u_pu_{|{\bf q}-{\bf p}|}-v_pv_{|{\bf q}-{\bf p}|}\right) \biggr.
\nonumber\\
&+& \biggl.\left(\delta\Phi_1+\delta\Phi_2\right)
\left(2u_pv_{|{\bf q}-{\bf p}|}+2v_pu_{|{\bf q}-{\bf p}|}+u_pu_{|{\bf q}-{\bf p}|}
+v_pv_{|{\bf q}-{\bf p}|}\right)\biggr] \;\;,
\label{homfg} \\
g^{\ast}_{{\bf p},-{\bf q}-{\bf p}}(\omega) &=& {\rm g}\frac{\sqrt{n_0}}{\sqrt{V}}
\frac{1+f_p^0+f_{|{\bf q}+{\bf p}|}^0}{\hbar\omega+(\epsilon_p+\epsilon_{|{\bf q}+{\bf p}|})
+i0}
\biggl[\left(\delta\Phi_1-\delta\Phi_2\right)
\left(u_pu_{|{\bf q}+{\bf p}|}-v_pv_{|{\bf q}+{\bf p}|}\right) \biggr.
\nonumber\\
&-& \biggl.\left(\delta\Phi_1+\delta\Phi_2\right)
\left(2u_pv_{|{\bf q}+{\bf p}|}+2v_pu_{|{\bf q}+{\bf p}|}+u_pu_{|{\bf q}+{\bf p}|}
+v_pv_{|{\bf q}+{\bf p}|}\right)\biggr] \;\;.
\nonumber
\end{eqnarray}
In the above equations the frequency $\omega$ has been chosen with an infinitesimally small
component on the positive imaginary axis.
As is well known (see, e.g., Ref. \cite{P72}), to treat consistently to order ${\rm g}^2$
the properties of a Bose-condensed gas one must include the proper renormalization of the
coupling constant ${\rm g}\to{\rm g}\left(1+{\rm g}\frac{1}{V}\sum_{\bf p}m/p^2\right)$.
This renormalization of ${\rm g}$ is crucial because it cancels exactly the large-$p$
ultraviolet divergence exhibited by the equilibrium anomalous density $\tilde{m}^0$.
In fact, by using the renormalized value of ${\rm g}$, the term ${\rm g}n_0+{\rm g}\tilde{m}^0$
present in Eqs. (\ref{homcp}), (\ref{homfc}) becomes
\begin{equation}
{\rm g}n_0+{\rm g}\tilde{m}^0 \to {\rm g}n_0+{\rm g}^2n_0\frac{1}{V}\sum_{\bf p}
\left( \frac{m}{p^2}-\frac{1+2f_p^0}{2\epsilon_p} \right) \;\;,
\label{homrg}
\end{equation}
and is well behaved at large $p$.
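The cancellation can be made explicit numerically (illustrative units $\hbar=m=1$, ${\rm g}n_0=1$): at $T=0$ each term in the summand of Eq. (\ref{homrg}) separately falls off like $1/p^2$, giving a divergent sum, while their difference decays like $1/p^4$.

```python
import numpy as np

# Illustrative check (hbar = m = 1, g*n0 = 1) that the renormalized summand
# in the expression above is ultraviolet convergent at T = 0 (f_p^0 = 0):
# each term alone falls off like 1/p^2, their difference like 1/p^4.
gn0 = 1.0

def summand(p):
    eps0 = 0.5*p**2
    eps = np.sqrt((eps0 + gn0)**2 - gn0**2)
    return 1.0/p**2 - 1.0/(2.0*eps)   # m/p^2 - (1 + 2 f_p^0)/(2 eps_p) at T = 0

ratio = summand(10.0)/summand(20.0)   # should approach 2^4 = 16 for a 1/p^4 tail
```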
To order ${\rm g}^2$, Eq. (\ref{homfc}) for the fluctuations of the condensate
can be finally written in the form
\begin{eqnarray}
\hbar\omega \left(\delta\Phi_1+\delta\Phi_2\right) &=&
\frac{q^2}{2m}\left(\delta\Phi_1-\delta\Phi_2\right)
+ \frac{\Sigma_{11}({\bf q},\omega)-\Sigma_{11}({\bf q},-\omega)}{2}
\left(\delta\Phi_1+\delta\Phi_2\right)
\nonumber\\
&+& \left[\frac{\Sigma_{11}({\bf q},\omega)+\Sigma_{11}({\bf q},-\omega)}{2}
-\Sigma_{12}({\bf q},\omega)\right]\left(\delta\Phi_1-\delta\Phi_2\right) \;\;,
\nonumber\\
\hbar\omega \left(\delta\Phi_1-\delta\Phi_2\right) &=& \left(\frac{q^2}{2m} + 2{\rm g}n_0\right)
\left(\delta\Phi_1+\delta\Phi_2\right)
+ \frac{\Sigma_{11}({\bf q},\omega)-\Sigma_{11}({\bf q},-\omega)}{2}
\left(\delta\Phi_1-\delta\Phi_2\right)
\label{homg2}\\
&+& \left[\frac{\Sigma_{11}({\bf q},\omega)+\Sigma_{11}({\bf q},-\omega)}{2}
+\Sigma_{12}({\bf q},\omega)\right]\left(\delta\Phi_1+\delta\Phi_2\right) \;\;.
\nonumber
\end{eqnarray}
In the above equations the self-energies $\Sigma_{11}({\bf q},\omega)$ and
$\Sigma_{12}({\bf q},\omega)$ are proportional to ${\rm g}^2$. They are obtained from
Eq. (\ref{homfc})
through the expressions (\ref{neqdens}), which give $\delta\tilde{n}$ and $\delta\tilde{m}$
in terms of the
matrices $f_{{\bf p},{\bf p}^{\prime}}$ and $g_{{\bf p},{\bf p}^{\prime}}$, and through Eqs.
(\ref{homfg}) and (\ref{homrg}). After some straightforward algebra one finds for the self-energy
$\Sigma_{11}$
\begin{eqnarray}
\Sigma_{11}({\bf q},\omega) &=& {\rm g}^2n_0 \frac{1}{V}\sum_{\bf p}
\left(\frac{m}{p^2}-\frac{1+2f_p^0}{2\epsilon_p}\right)
+{\rm g}^2n_0 \frac{1}{V}\sum_{\bf p}\left(f_p^0-f_k^0\right)
\nonumber\\
&\times& \left( \frac{2B_pA_k+8C_pA_k+4B_pB_k+4C_pC_k}{\hbar\omega+(\epsilon_p-\epsilon_k)+i0}
-\frac{2A_pB_k+8C_pB_k+4A_pA_k+4C_pC_k}{\hbar\omega-(\epsilon_p-\epsilon_k)+i0}\right)
\nonumber\\
&+& {\rm g}^2n_0 \frac{1}{V}\sum_{\bf p}\frac{1+2f_p^0}{\epsilon_p}
+{\rm g}^2n_0 \frac{1}{V}\sum_{\bf p}\left(1+f_p^0+f_k^0\right)
\label{homA}\\
&\times& \left( \frac{2A_pA_k+8C_pA_k+4A_pB_k+4C_pC_k}{\hbar\omega-(\epsilon_p+\epsilon_k)+i0}
-\frac{2B_pB_k+8C_pB_k+4B_pA_k+4C_pC_k}{\hbar\omega+(\epsilon_p+\epsilon_k)+i0}\right) \;\;,
\nonumber
\end{eqnarray}
where we have introduced the notations: ${\bf k}={\bf q}+{\bf p}$, $A_p=u_p^2$, $B_p=v_p^2$
and $C_p=u_pv_p$. Analogously for $\Sigma_{12}$ one has
\begin{eqnarray}
\Sigma_{12}({\bf q},\omega) &=& {\rm g}^2n_0 \frac{1}{V}\sum_{\bf p}
\left(\frac{m}{p^2}-\frac{1+2f_p^0}{2\epsilon_p}\right)
+{\rm g}^2n_0 \frac{1}{V}\sum_{\bf p}\left(f_p^0-f_k^0\right)
\nonumber\\
&\times& \left( \frac{4C_pB_k+4C_pA_k+4B_pB_k+6C_pC_k}{\hbar\omega+(\epsilon_p-\epsilon_k)+i0}
-\frac{4C_pA_k+4C_pB_k+4A_pA_k+6C_pC_k}{\hbar\omega-(\epsilon_p-\epsilon_k)+i0}\right)
\nonumber\\
&+& {\rm g}^2n_0 \frac{1}{V}\sum_{\bf p}\left(1+f_p^0+f_k^0\right)
\label{homB}\\
&\times& \left( \frac{4C_pB_k+4C_pA_k+4A_pB_k+6C_pC_k}{\hbar\omega-(\epsilon_p+\epsilon_k)+i0}
-\frac{4C_pB_k+4C_pA_k+4B_pA_k+6C_pC_k}{\hbar\omega+(\epsilon_p+\epsilon_k)+i0}\right) \;\;.
\nonumber
\end{eqnarray}
The above expressions for $\Sigma_{11}$ and $\Sigma_{12}$ coincide with the second-order
self-energies explicitly calculated at finite temperature by Shi and Griffin using diagrammatic
methods \cite{SG98}.
At zero temperature they correspond to Beliaev's results \cite{B58}, while in the
high-temperature regime, $k_BT\gg{\rm g}n_0$, they have been recently discussed by Fedichev
and Shlyapnikov \cite{FS98}.
By neglecting in (\ref{homg2}) the terms proportional to the self-energies, one is left with
the equations for
the fluctuations of the condensate to first order in ${\rm g}$. These equations coincide with
the quasiparticle equations (\ref{hombe}). The solution is then given by $\delta\Phi_1({\bf q})
=u_q$ and $\delta\Phi_2({\bf q})=v_q$, with $u_q$ and $v_q$ given by (\ref{homuv}). The
excitation energy is given by the Bogoliubov spectrum (\ref{hombs}).
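One can verify this first-order solution directly (illustrative numbers, $\hbar=m=1$, ${\rm g}n_0=1$, and an arbitrary momentum $q$): the pair $(u_q,v_q)$ with $\hbar\omega=\epsilon_q$ satisfies both lines of the matrix equation (\ref{hombe}) to machine precision.

```python
import numpy as np

# Illustrative check (hbar = m = 1, g*n0 = 1) that delta Phi_1 = u_q,
# delta Phi_2 = v_q with hbar*omega = eps_q solves the first-order
# quasiparticle system.
gn0, q = 1.0, 0.7
eps0 = 0.5*q**2
eps = np.sqrt((eps0 + gn0)**2 - gn0**2)
u = np.sqrt(((eps0 + gn0) + eps)/(2*eps))
v = -np.sqrt(((eps0 + gn0) - eps)/(2*eps))   # sign fixed by u*v = -g*n0/(2*eps)

res1 = (eps0 + gn0)*u + gn0*v - eps*u        # residual of the first line
res2 = (eps0 + gn0)*v + gn0*u + eps*v        # residual of the second line
```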
To order ${\rm g}^2$ one writes
\begin{equation}
\hbar\omega = \epsilon_q + \delta E - i\gamma \;\;,
\label{homee}
\end{equation}
where $\delta E$ is the real part of the frequency shift and $\gamma$ is the damping rate.
It is straightforward to obtain the second-order correction to $\hbar\omega$ from Eqs. (\ref{homg2});
one finds
\begin{equation}
\delta E - i\gamma = \Sigma_{11}({\bf q},\epsilon_q)u_q^2+2\Sigma_{12}({\bf q},\epsilon_q)u_qv_q
+\Sigma_{11}({\bf q},-\epsilon_q)v_q^2 \;\;,
\label{homde}
\end{equation}
where the self-energies have been calculated for $\hbar\omega=\epsilon_q$.
After some algebra one can cast Eq. (\ref{homde}) in the more convenient form
\begin{eqnarray}
\delta E - i\gamma &=& (u_q+v_q)^2\; {\rm g}^2n_0\frac{1}{V}\sum_{\bf p}\frac{m}{p^2}
- (u_q-v_q)^2\;{\rm g}\tilde{m}^0
\nonumber\\
&+& 4{\rm g}^2 \frac{1}{V}\sum_{\bf p}\left(f_p^0-f_k^0\right)
\left(\frac{A_{p,k}^2}{\epsilon_q+(\epsilon_p-\epsilon_k)
+i0}\right)
\label{homde1}\\
&+& 2{\rm g}^2 \frac{1}{V}\sum_{\bf p}\left(1+f_p^0+f_k^0\right)
\left(\frac{B_{p,k}^2}{\epsilon_q-(\epsilon_p+\epsilon_k)
+i0}-\frac{\tilde{B}_{p,k}^2}{\epsilon_q+(\epsilon_p+\epsilon_k)
+i0}\right)\;,
\nonumber
\end{eqnarray}
where we use ${\bf k}={\bf q}+{\bf p}$ and we have introduced the matrices
\begin{eqnarray}
A_{p,k} &=& \frac{\sqrt{n_0}}{2} \left[ (u_q+v_q) (2u_pu_k+2v_pv_k+v_pu_k+u_pv_k)
+ (u_q-v_q) (v_pu_k-u_pv_k) \right] \;\;,
\nonumber\\
B_{p,k} &=& \frac{\sqrt{n_0}}{2} \left[ (u_q+v_q) (2u_pv_k+2v_pu_k+u_pu_k+v_pv_k)
+ (u_q-v_q) (u_pu_k-v_pv_k) \right] \;\;,
\label{homAB}\\
\tilde{B}_{p,k} &=& \frac{\sqrt{n_0}}{2} \left[ (u_q+v_q) (2u_pv_k+2v_pu_k+u_pu_k+v_pv_k)
- (u_q-v_q) (u_pu_k-v_pv_k) \right] \;\;.
\nonumber
\end{eqnarray}
Result (\ref{homde1}) gives the correction to the Bogoliubov elementary
excitation energy $\epsilon_q$. The first and second terms on the r.h.s. of (\ref{homde1}) arise,
respectively,
from the renormalization of ${\rm g}$ and from the equilibrium anomalous density $\tilde{m}^0$.
The last two terms arise, instead, from the dynamic coupling between the condensate and the
noncondensate component. The real part of the r.h.s. of (\ref{homde1}) gives
the frequency shift $\delta E$, while the imaginary part gives the damping rate $\gamma$.
Notice that, concerning the damping rate, result (\ref{homde1}) coincides with the calculation
carried out
within the Popov approximation [Eq. (39) of \cite{G98}], where the condition $\tilde{m}^0=0$
was assumed. However, as discussed in \cite{G98}, the frequency shift $\delta E$ is not given
correctly to order ${\rm g}^2$ by the Popov approximation. In fact, the static anomalous density
$\tilde{m}^0$ and the renormalized coupling constant contribute to the real part of Eq. (\ref{homde1}).
\subsection{Equation of state}
Let us first discuss the equation of state (\ref{homcp}). By calculating $\tilde{n}^0$ and
$\tilde{m}^0$ using the equilibrium expressions (\ref{eqdens}) with the first-order quasiparticle
amplitudes and energies given by (\ref{homuv}) and (\ref{hombs}), one finds to order ${\rm g}^2$
\begin{equation}
\mu= {\rm g}n_0 + 2{\rm g}n_T^0 + {\rm g}n_0 (a^3n_0)^{1/2} H(\tau) \;\;.
\label{homdcp}
\end{equation}
In the above equation $n_T^0=\zeta(3/2)\lambda_T^{-3}\simeq 2.612\lambda_T^{-3}$ is the noncondensate
density of an ideal gas, which is fixed by the thermal wavelength $\lambda_T=\hbar\sqrt{2\pi/mk_BT}$.
Moreover, $H(\tau)$ is a dimensionless function
of the reduced temperature $\tau=k_BT/{\rm g}n_0$ given by
\begin{equation}
H(\tau) = \frac{40}{3\sqrt{\pi}} + \frac{\sqrt{32}}{\sqrt{\pi}}\tau\int_0^{\infty} dx
\frac{1}{e^x-1}\left(\sqrt{u-1}\frac{2u-1}{u}-2\sqrt{\tau x}\right) \;\;,
\label{homH}
\end{equation}
where we have introduced the quantity $u=\sqrt{1+\tau^2x^2}$.
Result (\ref{homdcp}) gives the chemical potential as a function of the equilibrium condensate
density $n_0$ and the temperature $T$ to second order in ${\rm g}$. It coincides with the
result of Shi and Griffin [Eq. (7.9) of \cite{SG98}].
Notice that the sum of the first two terms on the r.h.s. of (\ref{homdcp})
corresponds to the chemical potential calculated to first order in ${\rm g}$.
In Fig. 1 the dimensionless function $H(\tau)$ is plotted as a function of the reduced temperature
$\tau$.
At low temperatures, $\tau\ll 1$, the function $H(\tau)$ can be expanded as
\begin{equation}
H(\tau) \simeq \frac{40}{3\sqrt{\pi}} - \sqrt{32}\zeta(3/2)\tau^{3/2} + \frac{2\pi^{3/2}}{3}\tau^2
+ \frac{\pi^{7/2}}{10}\tau^4 \;\;.
\label{homHLT}
\end{equation}
In the same regime of temperatures, the condensate density is given in terms of the total
density $n=n_0+\tilde{n}^0$ by the following relation
\begin{equation}
n_0 \simeq n\left[1-(a^3n_0)^{1/2}\left(\frac{8}{3\sqrt{\pi}}+\frac{2\pi^{3/2}}{3}\tau^2
-\frac{\pi^{7/2}}{30}\tau^4 \right)\right] \;\;,
\label{homcLT}
\end{equation}
valid to order ${\rm g}^2$. In the above expression the first term in brackets corresponds to the
quantum depletion of the condensate, while the other two terms account for the thermal depletion
caused by phonon-type excitations. By using Eqs. (\ref{homHLT}) and (\ref{homcLT}),
one gets the following result for the low-temperature behavior of the chemical potential in
terms of the density $n$
\begin{eqnarray}
\mu &\simeq& {\rm g}n\left[1+(a^3n_0)^{1/2}\left(\frac{32}{3\sqrt{\pi}}+\frac{2\pi^{7/2}}{15}\tau^4
\right)\right]
\nonumber\\
&\simeq& \mu(T=0) + \frac{\pi^2}{60}\frac{(k_BT)^4}{n_0\hbar^3c_B^3} \;\;,
\label{homcpLT}
\end{eqnarray}
where $\mu(T=0)={\rm g}n[1+32(a^3n_0)^{1/2}/3\sqrt{\pi}]$ is the value of the chemical potential
at zero temperature \cite{LP80}, and $c_B=\sqrt{{\rm g}n_0/m}$ is the Bogoliubov velocity of sound.
The $T^4$ term in (\ref{homcpLT}) coincides with the result obtained from the thermodynamic relation
$\mu=(\partial F/\partial N)_{V,T}$, where $F$ is the low-temperature expansion of the free energy
of a Bose gas \cite{LP80,G98}.
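The thermodynamic check is straightforward: the phonon contribution to the free energy is
$F\simeq F(T=0)-V\pi^2(k_BT)^4/(90\hbar^3c_B^3)$ and, since $c_B^{-3}\propto(N/V)^{-3/2}$,
one has
\begin{equation}
\mu = \left(\frac{\partial F}{\partial N}\right)_{V,T} \simeq \mu(T=0)
+ \frac{3}{2}\,\frac{\pi^2}{90}\,\frac{(k_BT)^4}{n\hbar^3c_B^3}
= \mu(T=0) + \frac{\pi^2}{60}\frac{(k_BT)^4}{n_0\hbar^3c_B^3} \;\;,
\end{equation}
in agreement with the $T^4$ term of (\ref{homcpLT}), the difference between $n$ and $n_0$ being
of higher order in ${\rm g}$.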
At high temperatures, $\tau\gg 1$, the function $H(\tau)$ approaches the asymptotic limit $H(\tau)
\to - 12\sqrt{\pi}\tau$, and for the chemical potential in this regime of temperatures one gets
\begin{equation}
\mu \simeq {\rm g}n_0 + 2{\rm g}n_T^0 - 12\sqrt{\pi}(a^3n_0)^{1/2}k_BT \;\;,
\label{homcpHT}
\end{equation}
which coincides with the result obtained by Popov \cite{P65,SG98,FS98}.
\subsection{Zero sound: damping rate and velocity of propagation}
Zero sound, or Bogoliubov sound, is a collective oscillation of the system in the collisionless
regime, for which the restoring force acting on a given particle comes from the mean-field created
by the other particles. At zero temperature, zero sound coincides with ordinary sound and the
velocity of propagation $c$ is fixed by the compressibility of the system
$mc^2=(\partial P/\partial n)_{T=0}$.
At finite temperature one can have both zero sound and hydrodynamic
modes, depending on the temperature and the wavelength of the excitation. In this case the
velocity of zero sound cannot be fixed by thermodynamic relations alone. For an exhaustive
discussion of collisionless and hydrodynamic collective modes we refer to the books \cite{NP90,G93}.
To first order in ${\rm g}$, the excitation energy of the zero-sound mode is given by the
long-wavelength limit of the Bogoliubov dispersion relation (\ref{hombs}), which corresponds
to phonons $\epsilon_q=c_B q$ propagating with the velocity
$c_B=\sqrt{{\rm g}n_0/m}$ fixed by the condensate density.
Starting from Eq. (\ref{homde1}), we will explicitly calculate the damping rate and the temperature
dependence of the speed of zero sound to order ${\rm g}^2$.
The damping of phonons in homogeneous systems has been recently investigated by Liu \cite{L97},
using functional-integration methods, and by Pitaevskii and Stringari \cite{PS97} by means of
perturbation theory. The temperature dependence of the speed of zero sound has been investigated
by Payne and Griffin \cite{PG85} within the framework of the dielectric formalism, and by Shi and
Griffin \cite{SG98} and Fedichev and Shlyapnikov \cite{FS98} using diagrammatic methods.
\subsubsection{Damping of zero sound}
The calculation of the damping of phonons from Eq. (\ref{homde1}) has already been carried out
in \cite{G98}. Here we will only review the main results.
In the quantum regime, $\epsilon_q\gg k_BT$, the damping is governed by the Beliaev mechanism, in
which a phonon decays into a pair of excitations. This mechanism is described in Eq. (\ref{homde1})
by the imaginary part of the term containing the matrices $B$ and
$\tilde{B}$. The damping rate is given by
\begin{equation}
\frac{\gamma}{c_B q}=\frac{3q^4}{640\pi\hbar^3mn_0c_B} \;\;.
\label{homBd}
\end{equation}
This result was first obtained by Beliaev using diagrammatic techniques \cite{B58}.
In the thermal regime, $\epsilon_q\ll k_BT$, the phonon decay is dominated by a different damping
process, in which a phonon with energy $\epsilon_q$ is absorbed by a thermal excitation with energy
$\epsilon_p$ and is turned into another thermal excitation with energy $\epsilon_{|{\bf q}+{\bf p}|}$.
This mechanism is known as Landau damping and is described in Eq. (\ref{homde1}) by the imaginary part
of the term involving the matrix $A$. The result is given by (see Refs. \cite{PS97,G98})
\begin{equation}
\frac{\gamma}{c_B q} = (a^3n_0)^{1/2} F(\tau) \;\;.
\label{homLd}
\end{equation}
In the above equation $\tau=k_BT/{\rm g}n_0$ is the reduced temperature and
$F(\tau)$ is the dimensionless function
\begin{equation}
F(\tau)=4\sqrt{\pi}\int_0^\infty dx \left(e^{x/2}-e^{-x/2}\right)^{-2}
\left(1-\frac{1}{2u}-\frac{1}{2u^2}\right)^2 \;\;,
\label{homF}
\end{equation}
where $u$ is defined as in (\ref{homH}).
For temperatures $\tau\ll 1$ the function $F$ takes the asymptotic limit
$F\rightarrow 3\pi^{9/2}\tau^4/5$ and one recovers the known
result for the phonon damping \cite{AK63,HM65,LP81,PS97}
\begin{equation}
\frac{\gamma}{c_B q}=\frac{3\pi^3}{40}\frac{(k_BT)^4}{mn_0\hbar^3c_B^5} \;\;.
\label{homKd}
\end{equation}
In the opposite regime of temperatures, $\tau\gg 1$, one finds the asymptotic
limit $F\rightarrow 3\pi^{3/2}\tau/4$, and the damping coefficient is given by
\begin{equation}
\frac{\gamma}{c_B q}=\frac{3\pi}{8}\frac{k_BTa}{\hbar c_B} \;\;.
\label{homSd}
\end{equation}
The damping of phonons in this regime of temperatures was first investigated
by Sz\'{e}pfalusy and Kondor \cite{SK74}.
In Fig. 2 the dimensionless function $F(\tau)$ is plotted as a function of
$\tau$, together with its asymptotic behaviour at small and large
$\tau$. One can see that $F$ departs rather soon from the low-temperature
$\tau^4$ behaviour, while it approaches the high-temperature
linear law very slowly.
\subsubsection{Velocity of zero sound}
In contrast to the calculation of the damping rate, all terms on the r.h.s. of (\ref{homde1})
contribute to the frequency shift $\delta E$. The first two terms, which involve $\tilde{m}_0$
and the renormalization of the coupling constant, are referred to as static terms.
Concerning the other two terms,
for excitations with $\epsilon_q\ll{\rm g}n_0$, the relevant region in the calculation
of the real part of the term which contains the matrices $B$ and $\tilde{B}$ corresponds to
energies $\epsilon_p\gg\epsilon_q$. This term is referred to as the non-resonant term.
On the contrary, the resonance region gives an important contribution to the real part of the term
involving the matrix $A$. This term is referred to as the resonant term.
Let us start by analysing the non-resonant term. The contribution of this term to the energy
shift $\delta E$ can be written as
\begin{eqnarray}
\delta E_{NR} &=& -\;\frac{\epsilon_q}{\epsilon_q^0}\;{\rm g}^2n_0 \frac{1}{V}\sum_{\bf p}
\left[ f_p^0\,\frac{\epsilon_p-\epsilon_k}{2\epsilon_p\epsilon_k} \;+\;
\frac{1+2f_p^0}{2\epsilon_p} \;-\;(1+2f_p^0)\,
\frac{(\epsilon_k^0-\epsilon_p^0)^2}{4\epsilon_p\epsilon_k(\epsilon_p+\epsilon_k)}\right]
\nonumber\\
&+& \epsilon_q\;{\rm g}^2n_0\frac{1}{V}\sum_{\bf p}(1+2f_p^0)\frac{\epsilon_p+\epsilon_k}
{\epsilon_q^2-(\epsilon_p+\epsilon_k)^2}
\left[\frac{\epsilon_q^0}{\epsilon_q^2}(2u_pv_k+2v_pu_k+u_pu_k+v_pv_k)^2\right.
\nonumber\\
&+& \left. 2\frac{u_pu_k-v_pv_k}{\epsilon_p+\epsilon_k}(2u_pv_k+2v_pu_k+u_pu_k+v_pv_k)
+ \frac{\epsilon_q^2}{\epsilon_q^0}\frac{(u_pu_k-v_pv_k)^2}{(\epsilon_p+\epsilon_k)^2}
\right] \;\;,
\label{homENR}
\end{eqnarray}
where ${\bf k}= {\bf q}+{\bf p}$. The above result is valid for any excitation
energy $\epsilon_q$, and is not limited to the long-wavelength regime $\epsilon_q\ll{\rm g}n_0$.
In the phonon regime, result (\ref{homENR}) can be simplified and one gets
\begin{eqnarray}
\frac{\delta E_{NR}}{c_B q} &=& - {\rm g}\frac{1}{V}\sum_{\bf p} (1+2f_p^0)
\left[\frac{(\epsilon_p^0)^2}{4\epsilon_p^3} - \frac{{\rm g}n_0\epsilon_p^0}{6\epsilon_p^3}\right]
- \frac{1}{\epsilon_q^0} {\rm g}^2n_0\frac{1}{V}\sum_{\bf p}\frac{1+2f_p^0}{2\epsilon_p}
\nonumber\\
&-& \frac{1}{\epsilon_q^0} {\rm g}^2n_0\frac{1}{V}\sum_{\bf p} f_p^0 \frac{\epsilon_p-\epsilon_k}
{2\epsilon_p\epsilon_k} \;\;.
\label{homENR1}
\end{eqnarray}
The contribution to $\delta E$ from the resonant term can be calculated in the same way
and one gets the general result
\begin{eqnarray}
\delta E_R &=& \frac{\epsilon_q}{\epsilon_q^0} \;{\rm g}^2n_0 \frac{1}{V}\sum_{\bf p}\left[ f_p^0\,
\frac{\epsilon_p-\epsilon_k}{2\epsilon_p\epsilon_k} \;-\;
\frac{f_p^0-f_k^0}{\epsilon_p-\epsilon_k}\,
\frac{(\epsilon_k^0-\epsilon_p^0)^2}{4\epsilon_p\epsilon_k}\right]
+\epsilon_q\;{\rm g}^2n_0 \frac{1}{V}\sum_{\bf p}\frac{f_p^0-f_k^0}{\epsilon_q+\epsilon_p-\epsilon_k}
\nonumber\\
&\times& \left[\frac{\epsilon_q^0}{\epsilon_q^2}(2u_pu_k+2v_pv_k+v_pu_k+u_pv_k)^2
-2\frac{v_pu_k-u_pv_k}{\epsilon_p-\epsilon_k}(2u_pu_k+2v_pv_k+v_pu_k+u_pv_k)\right.
\nonumber\\
&+& \left. \frac{\epsilon_q^2}{\epsilon_q^0}\frac{(v_pu_k-u_pv_k)^2}{(\epsilon_p-\epsilon_k)^2}
\right] \;\;.
\label{homER}
\end{eqnarray}
In the limit $\epsilon_q\ll{\rm g}n_0$ the above expression reduces to
\begin{eqnarray}
\frac{\delta E_{R}}{c_B q} &=&
\frac{\rm g}{2V}\sum_{\bf p}\frac{\partial f_p^0}{\partial\epsilon_p}
\left[ \left(\frac{\epsilon_p^0}{\epsilon_p}+\frac{\partial\epsilon_p^0}{\partial\epsilon_p}
\right)^2\left(1-\frac{c_B}{c_B-\cos\theta \partial\epsilon_p/\partial p}\right)-
\frac{2}{3}\frac{{\rm g}n_0\epsilon_p^0}{\epsilon_p^2}\right]
\nonumber\\
&+& \frac{1}{\epsilon_q^0} {\rm g}^2n_0\frac{1}{V}\sum_{\bf p} f_p^0 \frac{\epsilon_p-\epsilon_k}
{2\epsilon_p\epsilon_k} \;\;,
\label{homER1}
\end{eqnarray}
where $\theta$ is the angle the momentum ${\bf p}$ forms with the direction of ${\bf q}$.
Notice that the last terms on the r.h.s. of (\ref{homENR1}) and (\ref{homER1}) are equal and opposite
in sign, thus they cancel in the sum $\delta E_R + \delta E_{NR}$.
Moreover, the second term on the r.h.s. of (\ref{homENR1}), which is ultraviolet divergent,
is canceled by a corresponding term arising from the contribution to
$\delta E$ of the static terms. In conclusion, the final result for the energy shift $\delta E$
in the phonon regime is well behaved and proportional to $\epsilon_q$. One finds
\begin{eqnarray}
\frac{\delta E}{c_B q}&=& \frac{\rm g}{2V}\sum_{\bf p}\frac{\partial f_p^0}{\partial\epsilon_p}
\left[ \left(\frac{\epsilon_p^0}{\epsilon_p}+\frac{\partial\epsilon_p^0}{\partial\epsilon_p}
\right)^2\left(1-\frac{c_B}{c_B-\cos\theta \partial\epsilon_p/\partial p}\right)-
\frac{2}{3}\frac{{\rm g}n_0\epsilon_p^0}{\epsilon_p^2}\right]
\nonumber\\
&+& \frac{\rm g}{2V}\sum_{\bf p}\left[ \left(\frac{m}{p^2}-\frac{1}{2\epsilon_p}
+\frac{4}{3}\frac{{\rm g}n_0\epsilon_p^0}{\epsilon_p^3}\right)-\frac{f_p^0}{\epsilon_p}
\left(1-\frac{8}{3}\frac{{\rm g}n_0\epsilon_p^0}{\epsilon_p^2}\right)\right] \;\;.
\label{homEZS}
\end{eqnarray}
The r.h.s. of the above equation gives the correction to the Bogoliubov velocity of zero
sound. By rearranging the integrals over momenta, one gets the relevant result
\begin{equation}
c=c_B\left[ 1+(a^3n_0)^{1/2} G(\tau) \right] \;\;.
\label{homEZS1}
\end{equation}
Here $G(\tau)$ is the following dimensionless function of
the reduced temperature $\tau=k_BT/{\rm g}n_0$
\begin{eqnarray}
G(\tau) &=& \frac{28}{3\sqrt{\pi}} \;+\; \frac{\sqrt{32}}{\sqrt{\pi}}\tau
\int_0^\infty dx \frac{1}{e^x-1} \frac{\sqrt{u-1}}{u}\frac{5-3u}{6(u+1)}
\;+\; \frac{\sqrt{32}}{\sqrt{\pi}}\int_0^\infty dx \frac{1}{(e^{x/2}-e^{-x/2})^2}
\nonumber\\
&\times& \frac{u-1}{3u\sqrt{u+1}}
\left[1-\frac{3}{2}\frac{(2u+1)^2(u-1)}{u^2}
\left(1+\frac{\sqrt{u+1}}{2\sqrt{2}u}\log\left|\frac{\sqrt{2}u-\sqrt{u+1}}{\sqrt{2}u+\sqrt{u+1}}
\right|\right)\right] \;\;,
\label{homG}
\end{eqnarray}
where $u$ is defined as in (\ref{homH}).
It is interesting to study Eq. (\ref{homEZS1}) in particular regimes of temperature.
At zero temperature the function $G$ takes the value $G(\tau=0) = 28/(3\sqrt{\pi})$,
and $n_0$ is related to the total density $n$ by
$n_0 = n[1-8/(3\sqrt{\pi})(a^3n_0)^{1/2}]$, which accounts for the quantum depletion.
The result for the sound velocity is
\begin{equation}
c(T=0)=\sqrt{\frac{{\rm g}n}{m}}\left[ 1+\frac{8}{\sqrt{\pi}}(a^3n_0)^{1/2} \right] \;\;.
\label{cT0}
\end{equation}
The above result, which was first found by Beliaev \cite{B58}, coincides with the one obtained
from the thermodynamic relation $c(T=0)=[n(\partial\mu(T=0)/\partial n)/m]^{1/2}$, where
$\mu(T=0)$ is given in (\ref{homcpLT}).
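Explicitly, from the zero-temperature chemical potential
$\mu(T=0)={\rm g}n[1+32(a^3n_0)^{1/2}/3\sqrt{\pi}]$ one finds
\begin{equation}
c^2(T=0) = \frac{n}{m}\frac{\partial\mu(T=0)}{\partial n}
= \frac{{\rm g}n}{m}\left[1+\frac{16}{\sqrt{\pi}}(a^3n_0)^{1/2}\right] \;\;,
\end{equation}
whose square root reproduces (\ref{cT0}) to order ${\rm g}^2$, the difference between $n$ and
$n_0$ in the correction term being of higher order.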
At low temperatures, $\tau\ll 1$, one finds the following expansion of the $G$ function
\begin{equation}
G(\tau)\simeq\frac{28}{3\sqrt{\pi}} + \frac{\pi^{3/2}}{3}\tau^2 + \frac{3\pi^{7/2}}{5}
\tau^4\log(1/\tau^2) \;\;.
\label{GLT}
\end{equation}
In this regime of temperatures the condensate density is given by the expression (\ref{homcLT})
in terms of the density $n$, and the velocity of zero sound turns out to be
\begin{equation}
c=c(T=0) + \frac{3\pi^2}{40} \frac{(k_BT)^4}{mn_0\hbar^3c_B^4} \log[m^2c_B^4/(k_BT)^2] \;\;.
\label{cLT}
\end{equation}
This result was first obtained by Andreev and Khalatnikov \cite{AK63} using kinetic equations,
and later by Ma {\it et al.} \cite{MGW71} within the framework of the dielectric formalism.
Finally, in the high temperature regime $\tau\gg 1$, the function $G$ is linear in $\tau$:
$G(\tau)\to G(\infty) \tau$, with the numerical coefficient $G(\infty)$ given by the
following expression
\begin{equation}
G(\infty)= \frac{\sqrt{\pi}}{3}(9\sqrt{2}-28) + \frac{1}{\sqrt{\pi}} \int_0^1 dx\,
\frac{x^3+3x^2-4}{\sqrt{1-x^2}(1+x)} \log\left(\frac{\sqrt{2}-\sqrt{x(1+x)}}{\sqrt{2}+\sqrt{x(1+x)}}
\right) \simeq - 7.4 \;\;.
\label{Ginf}
\end{equation}
For the speed of zero sound in this regime of temperatures one gets the result
\cite{SG98,FS98}
\begin{equation}
c=c_B+\frac{G(\infty)}{2\sqrt{\pi}} \frac{k_BTa}{\hbar} \;\;,
\label{cHT}
\end{equation}
where the numerical coefficient $G(\infty)/(2\sqrt{\pi})\simeq -2.1$ agrees with the finding
of \cite{FS98}, while it is about a factor of 6 larger than the one calculated in \cite{SG98}.
The proper description of the crossover between the low- and high-temperature regimes is provided by
Eq. (\ref{homEZS1}). The dimensionless function $G(\tau)$ is plotted
in Fig. 3. In the experiments on trapped gases the gas parameter in the center of the trap is
typically $a^3n_0\sim 10^{-5}$--$10^{-4}$. For temperatures of the order of the chemical potential,
which means $\tau\sim 1$, the correction to the Bogoliubov speed of sound amounts
to about 2--5\%.
\section{Spatially inhomogeneous system}
In this section we generalize the perturbation scheme developed for a homogeneous Bose-condensed
gas to the case of inhomogenous systems trapped by a harmonic confining potential
\begin{equation}
V_{ext}({\bf r}) = \frac{m}{2}\left(\omega_x^2x^2 + \omega_y^2y^2 + \omega_z^2z^2\right) \;\;.
\label{harp}
\end{equation}
The relevant length scale associated with the external potential (\ref{harp}) is the harmonic
oscillator length defined as
\begin{equation}
a_{ho}=\left(\frac{\hbar}{m\omega_{ho}}\right)^{1/2} \;\;,
\label{aho}
\end{equation}
where $\omega_{ho}=(\omega_x\omega_y\omega_z)^{1/3}$ is the geometric average of the oscillator
frequencies. The length scale $a_{ho}$ gives the average width of the Gaussian which describes
the ground state of non-interacting particles in the harmonic potential (\ref{harp}).
The shape of the potential $V_{ext}$ fixes the symmetry of the problem. So far all experiments
on trapped Bose gases have been realized using axially symmetric traps. In this case there are
only two distinct oscillator frequencies: $\omega_{\perp}=\omega_x=\omega_y$ and $\omega_z$.
The ratio between the axial and radial frequencies, $\lambda=\omega_z/\omega_{\perp}$, fixes the
asymmetry of the trap. For $\lambda<1$ the trap is cigar shaped, whereas for $\lambda>1$ it is disk
shaped; $\lambda=1$ corresponds to a spherically symmetric trap.
In our analysis we only consider systems with repulsive interactions ($a>0$) in the thermodynamic
limit. As extensively discussed in \cite{DGPS99}, for harmonically trapped Bose systems the
thermodynamic limit is obtained by letting the total number of trapped particles $N\to\infty$ and
$\omega_{ho}\to 0$, while keeping the product $N\omega_{ho}^3$ constant.
With this definition the Bose-Einstein transition temperature $k_BT_c^0=\hbar\omega_{ho}(N/\zeta(3))
^{1/3}$ is well defined in the thermodynamic limit.
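Indeed, with this prescription the combination entering the transition temperature remains finite:
\begin{equation}
k_BT_c^0 = \hbar\omega_{ho}\left(\frac{N}{\zeta(3)}\right)^{1/3}
= \frac{\hbar}{[\zeta(3)]^{1/3}}\left(N\omega_{ho}^3\right)^{1/3} \;\;.
\end{equation}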
In the thermodynamic limit, the condition $N_0(T)a/a_{ho}\gg 1$, which ensures the validity of the
Thomas-Fermi (TF) approximation for the condensate with occupation number $N_0$, is always satisfied
below the transition temperature. In the TF approximation one neglects the quantum-pressure term
proportional to $\nabla^2\Phi_0({\bf r})$ in the stationary equation (\ref{statgp}), and the
equilibrium profile of the condensate density is fixed by the following equation
\begin{equation}
{\rm g}n_0({\bf r})=\mu-V_{ext}({\bf r})-2{\rm g}\tilde{n}^0({\bf r})-{\rm g}\tilde{m}^0({\bf r}) \;\;,
\label{eqcd}
\end{equation}
in the central region of the trap where the r.h.s. of the above equation is positive, whereas
outside this region one has $n_0({\bf r})=0$.
The chemical potential in Eq. (\ref{eqcd}) is fixed by the normalization condition $\int d{\bf r}\;
n_0({\bf r})=N_0(T)$, with $N_0(T)$ the equilibrium condensate occupation number at temperature $T$.
To lowest order in the interaction, the profile of the condensate density has the form of the inverted
parabola
\begin{equation}
n_{TF}({\bf r})={\rm g}^{-1} \left[ \mu_{TF}(N_0)-V_{ext}({\bf r}) \right] \;\;,
\label{nTF}
\end{equation}
where
\begin{equation}
\mu_{TF}(N_0)=\frac{\hbar\omega_{ho}}{2}\left(\frac{15 N_0 a}{a_{ho}}\right)^{2/5}
\label{muTF}
\end{equation}
is the temperature dependent TF chemical potential.
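Result (\ref{muTF}) follows from the normalization condition: integrating the profile (\ref{nTF})
over the ellipsoidal region where it is positive gives
\begin{equation}
N_0 = \frac{1}{{\rm g}}\int d{\bf r}\left[\mu_{TF}-V_{ext}({\bf r})\right]
= \frac{8\pi}{15}\,\frac{\mu_{TF}}{{\rm g}}\left(\frac{2\mu_{TF}}{m\omega_{ho}^2}\right)^{3/2} \;\;,
\end{equation}
which, once solved for $\mu_{TF}$ using ${\rm g}=4\pi\hbar^2a/m$ and the definition (\ref{aho}),
yields Eq. (\ref{muTF}).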
Moreover, in the thermodynamic limit, one can show \cite{GPS97,DGPS99} that the equilibrium
properties of the system can be expressed in terms of two parameters: the reduced temperature
$t=T/T_c^0$ and the interaction parameter $\eta$ defined as the ratio
\begin{equation}
\eta=\frac{\mu_{TF}(N)}{k_BT_c^0} \;\;,
\label{eta}
\end{equation}
between the TF chemical potential at $T=0$ and the transition temperature.
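Using (\ref{muTF}) evaluated at $T=0$ together with the expression for $T_c^0$, one finds
\begin{equation}
\eta = \frac{15^{2/5}}{2}\left[\zeta(3)\right]^{1/3}\left(\frac{a}{a_{ho}}\right)^{2/5} N^{1/15}
\simeq 1.57\left(N^{1/6}\frac{a}{a_{ho}}\right)^{2/5} \;\;.
\end{equation}
Since $a_{ho}\propto\omega_{ho}^{-1/2}$, the combination $N^{1/15}a_{ho}^{-2/5}\propto
(N\omega_{ho}^3)^{1/15}$ stays constant in the thermodynamic limit, so that $\eta$ is a well
defined intensive parameter.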
The time-dependent equation (\ref{fluceq}) for the fluctuations of the condensate
only needs to be solved in the region where $n_0({\bf r})\neq 0$, according to Eq. (\ref{eqcd}).
One finds
\begin{eqnarray}
i\hbar\frac{\partial}{\partial t}\delta\Phi({\bf r},t) &=& \left( -\frac
{\hbar^2\nabla^2}{2m}+{\rm g} n_0({\bf r})-{\rm g} \tilde{m}^0({\bf r}) \right)\delta\Phi
({\bf r},t)
+ \left({\rm g}n_0({\bf r})+{\rm g}\tilde{m}^0({\bf r})\right)
\delta\Phi^{\ast}({\bf r},t)
\nonumber\\
&+& 2{\rm g}\Phi_0({\bf r})\delta
\tilde{n}({\bf r},t) + {\rm g}\Phi_0({\bf r})\delta \tilde{m}({\bf r},t) \;\;.
\label{eqcf}
\end{eqnarray}
For a trapped system in the thermodynamic limit the above equations (\ref{eqcd}) and (\ref{eqcf})
replace, respectively, Eqs. (\ref{homcp}) and (\ref{homfc}), which hold for a homogeneous system.
We are interested in the lowest-lying collective modes of the system with excitation energy
$\hbar\omega\ll\mu$. To lowest order in the interaction these modes are the solution of the
following coupled equations
\begin{eqnarray}
i\hbar\frac{\partial}{\partial t}\left[\delta\Phi({\bf r},t)+\delta\Phi^{\ast}({\bf r},t)\right]
&=& -\frac{\hbar^2\nabla^2}{2m} \left[\delta\Phi({\bf r},t)-\delta\Phi^{\ast}({\bf r},t)\right]
\;\;,
\nonumber\\
i\hbar\frac{\partial}{\partial t}\left[\delta\Phi({\bf r},t)-\delta\Phi^{\ast}({\bf r},t)\right]
&=& 2{\rm g}n_{TF}({\bf r})\left[\delta\Phi({\bf r},t)+\delta\Phi^{\ast}({\bf r},t)\right]
\;\;.
\label{hydeq}
\end{eqnarray}
These equations are obtained from (\ref{eqcf}) by neglecting all coupling terms to noncondensate
particles and neglecting also the term proportional to $\nabla^2(\delta\Phi+\delta\Phi^{\ast})$,
which is of higher order for the low-lying modes we are considering \cite{WG96}.
The oscillating solution defined by $\delta\Phi({\bf r},t)=\delta\Phi_1^0({\bf r})e^{-i\omega t}$,
$\delta\Phi^{\ast}({\bf r},t)=\delta\Phi_2^0({\bf r})e^{-i\omega t}$, with the Fourier components
fixed by the relations
\begin{eqnarray}
\left(\delta\Phi_1^0({\bf r})+\delta\Phi_2^0({\bf r})\right) &=& \sqrt{\frac{\hbar\omega}
{2{\rm g}n_{TF}({\bf r})}}\;\chi_0({\bf r}) \;\;,
\nonumber\\
\left(\delta\Phi_1^0({\bf r})-\delta\Phi_2^0({\bf r})\right) &=& \sqrt{\frac{2{\rm g}n_{TF}({\bf r})}
{\hbar\omega}}\;\chi_0({\bf r}) \;\;,
\label{hydcom}
\end{eqnarray}
reduces the coupled equations (\ref{hydeq}) to the following equation for the function
$\chi_0({\bf r})$ \cite{WG96}
\begin{equation}
m\omega^2\chi_0({\bf r}) + \nabla\Bigl[ {\rm g}n_{TF}({\bf r})\nabla\chi_0({\bf r})\Bigr] = 0
\;\;.
\label{hydeq1}
\end{equation}
The normalization condition $\int d{\bf r}\left(|\delta\Phi_1^0|^2-|\delta\Phi_2^0|^2\right)=1$
satisfied by the Fourier components $\delta\Phi_1^0$ and $\delta\Phi_2^0$, implies the normalization
condition $\int d{\bf r}\;\chi_0^{\ast}\chi_0^{\,} =1$ on the function $\chi_0({\bf r})$.
Equation (\ref{hydeq1}) was first derived at $T=0$ by Stringari \cite{S96} using the hydrodynamic
theory of superfluids, and it has then been studied by many authors \cite{WG96,FCSG97}.
For spherically symmetric traps the excitation energies $\hbar\omega\equiv\epsilon_{TF}$ obey
the dispersion law \cite{S96}
\begin{equation}
\epsilon_{TF}(n_r,l) = \hbar\omega_{ho} \left( 2n_r^2+2n_rl+3n_r+l \right)^{1/2} \;\;,
\label{symdl}
\end{equation}
where $n_r$ and $l$ are respectively the radial and the angular momentum quantum numbers.
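For instance, Eq. (\ref{symdl}) gives
\begin{equation}
\epsilon_{TF}(0,1)=\hbar\omega_{ho} \;,\;\;\;\;
\epsilon_{TF}(0,2)=\sqrt{2}\,\hbar\omega_{ho} \;,\;\;\;\;
\epsilon_{TF}(1,0)=\sqrt{5}\,\hbar\omega_{ho} \;\;,
\end{equation}
corresponding, respectively, to the dipole (center-of-mass) oscillation, the quadrupole surface
mode and the monopole (breathing) mode. The dipole energy coincides with the bare oscillator
energy $\hbar\omega_{ho}$, as required by the generalized Kohn theorem.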
In the case of axially symmetric traps, analytic results for the excitation energies are
obtained for the low- and high-lying $m=0$ modes \cite{S96}
\begin{equation}
\epsilon_{TF}(m=0)_{L,H}=\hbar\omega_{\perp} \left(2+\frac{3}{2}\lambda^2\mp\frac{1}{2}
\sqrt{9\lambda^4-16\lambda^2+16}\right)^{1/2} \;\;,
\label{m0LH}
\end{equation}
and for the surface modes of the form $\chi_m\propto r_{\perp}^{|m|} e^{im\phi}$, for which one has
\begin{equation}
\epsilon_{TF}(m)=\hbar\omega_{\perp}\sqrt{|m|} \;\;.
\label{m2}
\end{equation}
A general feature of Eq. (\ref{hydeq1}), which is explicitly reflected in the above results for
$\epsilon_{TF}$, is that the excitation energies do not depend on interaction and are proportional
to the oscillator frequencies of the harmonic potential. At finite temperature, where
${\rm g}n_{TF}=\mu_{TF}[N_0(T)]-V_{ext}$, this fact implies that $\epsilon_{TF}$ does not depend on
temperature either. This is an important difference with respect to the homogeneous case where
the corresponding excitations have the dispersion law $\epsilon_q=\sqrt{{\rm g}n_0/m}\;q$, and
depend on temperature through the condensate density. The behavior exhibited in the harmonic trap
is well understood if one notes that the values of $q$ are fixed by the boundary and vary as $1/R$,
where $R$ is the size of the condensate. In the Thomas-Fermi limit,
$R\sim\sqrt{\mu_{TF}/m\omega_{ho}^2}$ and the radius $R$ explicitly depends on the chemical potential.
On the other hand, the sound velocity is also fixed by the value of the chemical potential:
$c_B\sim\sqrt{\mu_{TF}/m}$. One finally finds that in the product $c_B q$ the chemical potential
cancels out, so that the collective frequency is proportional to the oscillator frequency
$\omega_{ho}$.
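In terms of orders of magnitude the argument reads
\begin{equation}
\epsilon \sim c_B\, q \sim \sqrt{\frac{\mu_{TF}}{m}}\;\frac{\hbar}{R}
\sim \sqrt{\frac{\mu_{TF}}{m}}\;\hbar\sqrt{\frac{m\omega_{ho}^2}{\mu_{TF}}}
= \hbar\omega_{ho} \;\;,
\end{equation}
so that the dependence on the chemical potential, and hence on interactions and temperature,
drops out of the collective frequencies.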
Provided that finite size effects can be neglected, the explicit dependence of the collective frequency
on the interaction parameter $\eta$ as well as on the reduced temperature $t=T/T_c^0$ arises due to
quantum and thermal fluctuations, which are of order ${\rm g}^2$. These fluctuations have the same physical
origin as the corrections to the Bogoliubov speed of sound given in the homogeneous case by result
(\ref{homEZS1}). The difference, however, is that in the case of harmonic traps, beyond mean-field effects
give corrections to a collective frequency which is fixed only by the oscillator frequency: a much better
situation from the experimental point of view. In the following part of this section we will explicitly
calculate the effects of quantum and thermal fluctuations on the frequencies of the lowest compressional
and surface modes.
\subsection{Perturbation scheme}
The perturbation scheme we employ for trapped systems follows the same lines as the one developed
in the homogeneous case. However, there are two important differences: first of all the
quasiparticle states are not exact plane waves, secondly the condensate density is not fixed by
a single parameter but depends on position.
Concerning the quasiparticle states, we make use of the local density (semiclassical) approximation
which amounts to setting \cite{GPS96,GPS97,P99}
\begin{equation}
u_i({\bf r}) = \frac{\bar{u}_i({\bf r})}{\sqrt{V}}e^{i\varphi_i({\bf r})} \;\;,\;\;\;\;
v_i({\bf r}) = \frac{\bar{v}_i({\bf r})}{\sqrt{V}}e^{i\varphi_i({\bf r})} \;\;,
\label{lda}
\end{equation}
where $V$ is a large volume containing the system and the real functions $\bar{u}_i$, $\bar{v}_i$
satisfy the normalization condition $\bar{u}_i^2({\bf r})-\bar{v}_i^2({\bf r})=1$.
The factor $e^{i\varphi_i({\bf r})}$ represents the rapidly varying part of the functions $u_i$ and
$v_i$, while the functions $\bar{u}_i$, $\bar{v}_i$ are assumed to be smooth functions of the
position. The phase $\varphi_i({\bf r})$, which is also assumed to be a smooth function of
${\bf r}$, characterizes the local momentum ${\bf p}=\hbar\nabla\varphi_i$ of the quasiparticle.
When summations over quasiparticle states are involved, these are replaced in the semiclassical
approximation by sums over momenta, $\sum_i ...\to\sum_{\bf p} ...$, and $\bar{u}_i({\bf r})\to
u_p({\bf r})$, $\bar{v}_i({\bf r})\to v_p({\bf r})$, where the functions $u_p({\bf r})$ and
$v_p({\bf r})$ are given, in the region of the condensate, by the following expressions
\begin{eqnarray}
&& u_p^2({\bf r})=1+v_p^2({\bf r})=\frac{(\epsilon_p^2({\bf r})+{\rm g}^2n_{TF}^2({\bf r}))^{1/2}
+\epsilon_p({\bf r})}
{2\epsilon_p({\bf r})} \;\;,
\nonumber \\
&& u_p({\bf r})v_p({\bf r}) = - \frac{{\rm g}n_{TF}({\bf r})}{2\epsilon_p({\bf r})} \;\;,
\label{uv}
\end{eqnarray}
where the position-dependent quasiparticle energies $\epsilon_p({\bf r})$ are given by
\begin{equation}
\epsilon_p({\bf r}) = \left(\bigl[\epsilon_p^0+{\rm g}n_{TF}({\bf r})\bigr]^2
-\bigl[{\rm g}n_{TF}({\bf r})\bigr]^2\right)^{1/2} \;\;.
\label{bs}
\end{equation}
For each position ${\bf r}$ the above equations coincide with the Bogoliubov expressions
(\ref{homuv}), (\ref{hombs}) with a local condensate density given by the TF density profile
$n_{TF}({\bf r})$ defined in (\ref{nTF}). The semiclassical approximation for
the excited states of a trapped Bose gas has been extensively used in the theoretical study
of the thermodynamic properties of the system \cite{GPS96,GPS97,MCT97}.
It gives a very good description of the system for temperatures $k_BT\gg\hbar\omega_{ho}$,
but is also valid at $T=0$ if the relevant energies in the summation over excited states
are much larger than the oscillator energy $\hbar\omega_{ho}$ \cite{DGGPS97}.
For large systems the oscillator energy is the smallest energy scale and vanishes in the
thermodynamic limit; as a consequence, in this limit the semiclassical approximation becomes
rigorous.
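As a consistency check (our addition, with $\epsilon_p^0=p^2/2m$), Eq. (\ref{bs}) can be rewritten as $\epsilon_p^2=(\epsilon_p^0)^2+2\epsilon_p^0\,{\rm g}n_{TF}$, which interpolates between a local phonon branch and a free-particle branch:

```latex
\begin{eqnarray}
\epsilon_p({\bf r}) &\simeq& \sqrt{\frac{{\rm g}n_{TF}({\bf r})}{m}}\,p \;\;,
\qquad \epsilon_p^0\ll{\rm g}n_{TF}({\bf r}) \;\;,
\nonumber\\
\epsilon_p({\bf r}) &\simeq& \epsilon_p^0+{\rm g}n_{TF}({\bf r}) \;\;,
\qquad \epsilon_p^0\gg{\rm g}n_{TF}({\bf r}) \;\;,
\nonumber
\end{eqnarray}
```

so that low-momentum excitations propagate with the local sound velocity $c({\bf r})=\sqrt{{\rm g}n_{TF}({\bf r})/m}$.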
The equilibrium noncondensate densities $\tilde{n}^0({\bf r})$ and $\tilde{m}^0({\bf r})$
are readily calculated employing the semiclassical approximation. One obtains
\begin{eqnarray}
\tilde{n}^0({\bf r}) &=& \frac{1}{V}\sum_i\left\{ [\bar{u}_i^2({\bf r})+\bar{v}_i^2({\bf r})]f_i^0
+ \bar{v}_i^2({\bf r}) \right\} = \frac{1}{V}\sum_{\bf p}\left\{ [u_p^2({\bf r})+v_p^2({\bf r})]
f_p^0({\bf r}) + v_p^2({\bf r}) \right\} \;\;,
\nonumber\\
\tilde{m}^0({\bf r}) &=& \frac{1}{V}\sum_i \bar{u}_i({\bf r})\bar{v}_i({\bf r}) (1+2f_i^0)
= \frac{1}{V}\sum_{\bf p} u_p({\bf r})v_p({\bf r}) \left[1+2f_p^0({\bf r})\right] \;\;,
\label{ldeq}
\end{eqnarray}
where $f_p^0({\bf r})=(e^{\epsilon_p({\bf r})/k_BT}-1)^{-1}$ is the local equilibrium quasiparticle
distribution function.
By inserting in Eq. (\ref{eqcd}) the above expressions for the noncondensate densities and
using the renormalization (\ref{homrg}) of the coupling constant, one obtains
the following result for the profile of the condensate density valid to order ${\rm g}^2$
\begin{equation}
{\rm g}n_0({\bf r})={\rm g}n_{TF}({\bf r}) + \delta\mu - {\rm g}n_{TF}({\bf r})
[a^3n_{TF}({\bf r})]^{1/2} H[\tau({\bf r})] \;\;,
\label{eqcd1}
\end{equation}
where $H(\tau)$ is the dimensionless function (\ref{homH}) of the local reduced temperature
$\tau({\bf r})=k_BT/{\rm g}n_{TF}({\bf r})$. In the above equation $\delta\mu=
\mu-\mu_{TF}(N_0)-2{\rm g}n_T^0$, with $n_T^0=\zeta(3/2)\lambda_T^{-3}$ as in (\ref{homdcp}),
is the shift in the chemical potential corresponding to the change in the condensate density
profile.
The application of the semiclassical approximation to the last two terms on the r.h.s. of
Eq. (\ref{eqcf}), which
describe the dynamic coupling to the noncondensate particles, needs a careful treatment.
Thus, for the moment, we calculate them in terms of the $u_i$ and $v_i$ functions.
Similarly to the homogeneous case [see Eqs. (\ref{homfg})], one must neglect in Eqs. (\ref{feq}),
(\ref{geq}) the terms proportional to $\delta\tilde{n}$ and $\delta\tilde{m}$.
In Fourier space, one finds for the components of the matrices $f_{ij}$, $g_{ij}$ and $g^{\ast}_{ij}$
oscillating at the frequency $\omega$
\begin{eqnarray}
f_{ij}(\omega) &=& {\rm g} \frac{f_i^0-f_j^0}{\hbar\omega+(\epsilon_i-\epsilon_j)
+i0} \int d{\bf r} \;\Phi_0
\left[(\delta\Phi_1-\delta\Phi_2)(v_iu_j^{\ast}-u_iv_j^{\ast})\right.
\nonumber\\
&+& \left.
(\delta\Phi_1+\delta\Phi_2)(2u_iu_j^{\ast}+2v_iv_j^{\ast}+v_iu_j^{\ast}+u_iv_j^{\ast})
\right] \;\;,
\nonumber\\
g_{ij}(\omega) &=& {\rm g} \frac{1+f_i^0+f_j^0}{\hbar\omega-(\epsilon_i+\epsilon_j)}
\int d{\bf r} \;\Phi_0
\left[(\delta\Phi_1-\delta\Phi_2)(u_i^{\ast}u_j^{\ast}-v_i^{\ast}v_j^{\ast})\right.
\nonumber\\
&+& \left.
(\delta\Phi_1+\delta\Phi_2)(2u_i^{\ast}v_j^{\ast}+2v_i^{\ast}u_j^{\ast}+u_i^{\ast}u_j^{\ast}
+v_i^{\ast}v_j^{\ast})
\right] \;\;,
\label{fgomega} \\
g_{ij}^{\ast}(\omega) &=& {\rm g} \frac{1+f_i^0+f_j^0}{\hbar\omega+(\epsilon_i+\epsilon_j)}
\int d{\bf r} \;\Phi_0
\left[(\delta\Phi_1-\delta\Phi_2)(u_iu_j-v_iv_j)\right.
\nonumber\\
&-& \left.
(\delta\Phi_1+\delta\Phi_2)(2u_iv_j+2v_iu_j+u_iu_j+v_iv_j)
\right] \;\;,
\nonumber
\end{eqnarray}
where we have taken $\delta\Phi({\bf r},t)=\delta\Phi_1({\bf r})\,e^{-i\omega t}$ and
$\delta\Phi^{\ast}({\bf r},t)=\delta\Phi_2({\bf r})\,e^{-i\omega t}$.
In Eqs. (\ref{fgomega}) we have neglected in the expressions for $g_{ij}(\omega)$ and
$g_{ij}^{\ast}(\omega)$ the small imaginary part of the frequency $\omega$. As discussed
in Sec. III-C, this imaginary contribution is responsible for the Beliaev damping. However,
due to discretization of levels, this damping mechanism is not effective for the low-lying
modes we are investigating.
To order ${\rm g}^2$ the equation for the low-lying oscillations of the condensate can be
written in the form
\begin{eqnarray}
\hbar\omega(\delta\Phi_1+\delta\Phi_2) &=& -\frac{\hbar^2\nabla^2}{2m}(\delta\Phi_1-\delta\Phi_2)
\nonumber\\
&-& 2{\rm g}\tilde{m}^0 (\delta\Phi_1-\delta\Phi_2)
+ 2{\rm g}\sqrt{n_{TF}}\sum_{ij}(v_i^{\ast}u_j-u_i^{\ast}v_j) f_{ij}(\omega)
\nonumber\\
&+& {\rm g}\sqrt{n_{TF}}\sum_{ij}\left[ (u_iu_j-v_iv_j) g_{ij}(\omega)
- (u_i^{\ast}u_j^{\ast}-v_i^{\ast}v_j^{\ast}) g_{ij}^{\ast}(\omega) \right] \;\;,
\nonumber\\
\hbar\omega(\delta\Phi_1-\delta\Phi_2) &=& 2({\rm g}n_{TF}+\delta\mu) (\delta\Phi_1+\delta\Phi_2)
\label{dphi+-}\\
&+& 2{\rm g}n_{TF}\left[{\rm g}\frac{1}{V}\sum_{\bf p}\frac{m}{p^2}-(a^3n_{TF})^{1/2}
H[\tau({\bf r})]\right] (\delta\Phi_1+\delta\Phi_2)
\nonumber\\
&+& 2{\rm g}\sqrt{n_{TF}}\sum_{ij}(2u_i^{\ast}u_j+2v_i^{\ast}v_j+v_i^{\ast}u_j
+u_i^{\ast}v_j) f_{ij}(\omega)
\nonumber\\
&+& {\rm g}\sqrt{n_{TF}}\sum_{ij}\Bigl[ (2u_iv_j+2v_iu_j+u_iu_j+v_iv_j) g_{ij}(\omega) \Bigr.
\nonumber\\
&+& \Bigl. (2u_i^{\ast}v_j^{\ast}+2v_i^{\ast}u_j^{\ast}+u_i^{\ast}u_j^{\ast}+v_i^{\ast}v_j^{\ast})
g_{ij}^{\ast}(\omega) \Bigr]
\;\;.
\nonumber
\end{eqnarray}
In the above equations one can recognize the terms arising from the dynamic coupling between the
condensate and the noncondensate component, which contain $f_{ij}(\omega)$, $g_{ij}(\omega)$ and
$g_{ij}^{\ast}(\omega)$, the terms arising from the coupling to the static anomalous density
$\tilde{m}^0$ and from the renormalization of {\rm g}, and, finally, the terms
proportional to $\delta\mu$ and $H(\tau)$ which come from the change in the density profile
of the condensate. The last terms have no counterpart in the homogeneous case.
In Eqs. (\ref{dphi+-}) we have neglected, as in Eqs. (\ref{hydeq}), the term proportional to
$\nabla^2(\delta\Phi_1+\delta\Phi_2)$.
Following the analysis carried out in the homogeneous case, we write the excitation energy as
$\hbar\omega=\epsilon_{TF}+\delta E - i\gamma$. From Eqs. (\ref{dphi+-}), by treating the
corrections to Eqs. (\ref{hydeq}) as small perturbations, one gets the result
\begin{eqnarray}
\delta E - i\gamma &=& \int d{\bf r}\; {\rm g}n_{TF}
\left[{\rm g}\frac{1}{V}\sum_{\bf p}\frac{m}{p^2}-(a^3n_{TF})^{1/2}H[\tau({\bf r})]\right]
|\delta\Phi_1^0+\delta\Phi_2^0|^2
\nonumber\\
&-& \int d{\bf r}\; {\rm g}\tilde{m}^0 |\delta\Phi_1^0-\delta\Phi_2^0|^2
+ 4{\rm g}^2\sum_{ij} (f_i^0-f_j^0)\frac{|A_{ij}|^2}{\epsilon_{TF}+(\epsilon_i-
\epsilon_j)+i0}
\label{encorr}\\
&+&2{\rm g}^2\sum_{ij} (1+f_i^0+f_j^0)\left( \frac{|B_{ij}|^2}{\epsilon_{TF}-
(\epsilon_i+\epsilon_j)} - \frac{|\tilde{B}_{ij}|^2}{\epsilon_{TF}+
(\epsilon_i+\epsilon_j)}\right) \;\;,
\nonumber
\end{eqnarray}
holding for the low-lying modes with $\epsilon_{TF}\ll\mu$.
Notice that the shift $\delta\mu$ of the chemical potential does not enter the result (\ref{encorr}).
In fact, in the Thomas-Fermi limit, the excitation frequencies obtained from Eq. (\ref{hydeq1})
do not depend on the value of $\mu$.
In Eq. (\ref{encorr}), the matrix elements $A_{ij}$, $B_{ij}$ and $\tilde{B}_{ij}$ are defined,
in analogy to the homogeneous case, as
\begin{eqnarray}
A_{ij}&=&\frac{1}{2}\int d{\bf r} \;\sqrt{n_{TF}}
\Bigl[ (\delta\Phi_1^0+\delta\Phi_2^0)(2u_iu_j^{\ast}+2v_iv_j^{\ast}+v_iu_j^{\ast}+u_iv_j^{\ast})
+ (\delta\Phi_1^0-\delta\Phi_2^0)(v_iu_j^{\ast}-u_iv_j^{\ast}) \Bigr] \;\;,
\nonumber\\
B_{ij}&=&\frac{1}{2}\int d{\bf r} \;\sqrt{n_{TF}}
\Bigl[ (\delta\Phi_1^0+\delta\Phi_2^0)(2u_i^{\ast}v_j^{\ast}+2v_i^{\ast}u_j^{\ast}
+u_i^{\ast}u_j^{\ast}+v_i^{\ast}v_j^{\ast})
+ (\delta\Phi_1^0-\delta\Phi_2^0)(u_i^{\ast}u_j^{\ast}-v_i^{\ast}v_j^{\ast}) \Bigr] \;\;,
\label{ABB}\\
\tilde{B}_{ij}&=&\frac{1}{2}\int d{\bf r} \;\sqrt{n_{TF}}
\Bigl[ (\delta\Phi_1^0+\delta\Phi_2^0)(2u_iv_j+2v_iu_j
+u_iu_j+v_iv_j)
- (\delta\Phi_1^0-\delta\Phi_2^0)(u_iu_j-v_iv_j) \Bigr] \;\;.
\nonumber
\end{eqnarray}
Starting from Eq. (\ref{encorr}), one can study both the damping and the frequency shift of the
low-lying modes. The calculation of the damping rates has been carried out by several authors
\cite{FSW98,BS98,GP99,RCGS99}. In Refs. \cite{FSW98,RCGS99} the damping of the $m=0$ and $m=2$ modes
has been calculated as a function of temperature and found in good agreement with experiments
\cite{JILA97,MIT98}. Concerning the frequency shifts, a calculation based on a method similar to
ours has been carried out in \cite{FS98}, but only for quasiclassical modes which satisfy
the condition $\hbar\omega_{ho}\ll\epsilon_{TF}\ll\mu$. In the present work we study the
frequency shift of the lowest-lying collective modes with $\epsilon_{TF}\sim\hbar\omega_{ho}$.
These modes have also been studied within the dielectric formalism in \cite{RCGS99}.
\subsection{Frequency shift of the collective modes}
\subsubsection{Non-resonant contribution}
Similarly to the homogeneous case, the non-resonant contribution to the frequency shift
$\delta E$ is defined as
\begin{equation}
\delta E_{NR}=
2{\rm g}^2\sum_{ij} (1+f_i^0+f_j^0) \left( \frac{|B_{ij}|^2}{\epsilon_{TF}-\epsilon_i-\epsilon_j}
-\frac{|\tilde{B}_{ij}|^2} {\epsilon_{TF}+\epsilon_i+\epsilon_j} \right) \;\;.
\label{NR0}
\end{equation}
The matrix elements of $B$ and $\tilde{B}$ are given in (\ref{ABB}).
By using the semiclassical approximation
(\ref{lda}) for the quasiparticle functions $u_i$ and $v_i$, one can write $\delta E_{NR}$ in the
following form
\begin{eqnarray}
\frac{\delta E_{NR}}{\epsilon_{TF}} = - \frac{{\rm g}}{2V^2} \sum_{ij} \int d{\bf r}\, d{\bf s}\;
e^{i{\bf s}\cdot\nabla[\varphi_i({\bf r})+\varphi_j({\bf r})]}\;
\chi_0({\bf r})\left[ \bar{K}_{1\,ij}^{NR}({\bf r},{\bf r}+{\bf s}) +
\bar{K}_{2\,ij}^{NR}({\bf r},{\bf r}+{\bf s}) \right]
\chi_0^{\ast}({\bf r}+{\bf s}) \;\;,
\label{NR1}
\end{eqnarray}
where the functions $\chi_0({\bf r})$ are the solutions of (\ref{hydeq1}),
and in $e^{i{\bf s}\cdot\nabla[\varphi_i({\bf r})+\varphi_j({\bf r})]}$ we
have neglected second derivatives of the phase $\varphi$. The smoothly varying kernels
$\bar{K}_{1\,ij}^{NR}$ and $\bar{K}_{2\,ij}^{NR}$ are defined as
\begin{eqnarray}
\bar{K}_{1\,ij}^{NR}({\bf r},{\bf r}^{\prime}) &=& \frac{1+f_i^0+f_j^0}{\epsilon_i+\epsilon_j}
\; \left(a_{ij}({\bf r})+\frac{2{\rm g}n_{TF}({\bf r})}{\epsilon_i+\epsilon_j}b_{ij}({\bf r})
\right) \left(a_{ij}({\bf r}^{\prime})+\frac{2{\rm g}n_{TF}({\bf r}^{\prime})}
{\epsilon_i+\epsilon_j}b_{ij}({\bf r}^{\prime})\right) \;\;,
\nonumber\\
\bar{K}_{2\,ij}^{NR}({\bf r},{\bf r}^{\prime}) &=& \frac{1+f_i^0+f_j^0}{\epsilon_i+\epsilon_j}
\; \frac{2{\rm g}n_{TF}({\bf r})}{\epsilon_{TF}}b_{ij}({\bf r})
\frac{2{\rm g}n_{TF}({\bf r}^{\prime})}{\epsilon_{TF}}b_{ij}({\bf r}^{\prime})\;\;,
\label{ker}
\end{eqnarray}
where we have introduced the matrices
\begin{eqnarray}
a_{ij}({\bf r}) &=& 2\bar{u}_i({\bf r})\bar{v}_j({\bf r})+2\bar{v}_i({\bf r})\bar{u}_j({\bf r})
+\bar{u}_i({\bf r})\bar{u}_j({\bf r})+\bar{v}_i({\bf r})\bar{v}_j({\bf r}) \;\;,
\nonumber\\
b_{ij}({\bf r}) &=& \bar{u}_i({\bf r})\bar{u}_j({\bf r})-\bar{v}_i({\bf r})\bar{v}_j({\bf r}) \;\;.
\label{ab}
\end{eqnarray}
In the limit $\epsilon_{TF}\ll\mu$ we can use the following gradient expansion
\begin{eqnarray}
\chi_0({\bf r})\bar{K}_{1\,ij}^{NR}({\bf r},{\bf r}+{\bf s})\chi_0^{\ast}({\bf r}+{\bf s}) \simeq
\bar{K}_{1\,ij}^{NR}({\bf r},{\bf r})\; |\chi_0({\bf r})|^2 \;\;,
\label{K1}
\end{eqnarray}
and
\begin{eqnarray}
\chi_0({\bf r})\bar{K}_{2\,ij}^{NR}({\bf r},{\bf r}+{\bf s})\chi_0^{\ast}({\bf r}+{\bf s}) &\simeq&
\bar{K}_{2\,ij}^{NR}({\bf r},{\bf r}) \Bigl[ |\chi_0({\bf r})|^2
+ \frac{1}{2}\chi_0({\bf r})\left({\bf s}\cdot\nabla\right)^2\chi_0^{\ast}({\bf r}) \Bigr]
\nonumber\\
&+& \frac{1}{2} \chi_0({\bf r})\left({\bf s}\cdot\nabla\right)\bar{K}_{2\,ij}^{NR}({\bf r},{\bf r})
\left({\bf s}\cdot\nabla\right)
\chi_0^{\ast}({\bf r}) \;\;.
\label{K2}
\end{eqnarray}
Since the kernel $\bar{K}_1^{NR}$ is already zeroth order in $\epsilon_{TF}/\mu$, we can neglect
higher order terms in the expansion (\ref{K1}). On the contrary, $\bar{K}_2^{NR}$ is of order
$(\mu/\epsilon_{TF})^2$ and we need the expansion (\ref{K2}) to second order in the displacement
${\bf s}$. Notice also that terms in the expansion (\ref{K2}) containing odd powers of ${\bf s}$
vanish in (\ref{NR1}) due to geometry. Moreover, we have neglected in (\ref{K2}) second derivatives
of the slowly varying functions $\bar{u}_i$, $\bar{v}_i$.
By the replacement $\sum_i\to\sum_{\bf p}$, and after integration by parts, one gets the result
\begin{eqnarray}
\frac{\delta E_{NR}}{\epsilon_{TF}} &=& - \frac{{\rm g}}{2V^2} \int d{\bf r}\, d{\bf s}
\, |\chi_0({\bf r})|^2 \sum_{{\bf p}{\bf q}} e^{-i{\bf q}\cdot{\bf s}/\hbar}
\left[ \bar{K}_{1\,pk}^{NR}({\bf r},{\bf r}) +
\bar{K}_{2\,pk}^{NR}({\bf r},{\bf r}) \right]
\nonumber\\
&+& \frac{{\rm g}}{12V^2} \int d{\bf r}\, d{\bf s}
\, |\nabla\chi_0({\bf r})|^2 \sum_{{\bf p}{\bf q}} s^2 e^{-i{\bf q}\cdot{\bf s}/\hbar}
\bar{K}_{2\,pk}^{NR}({\bf r},{\bf r}) \;\;,
\label{NR2}
\end{eqnarray}
where ${\bf k}={\bf q}+{\bf p}$. In the first term on the r.h.s. of (\ref{NR2}) the integration
over ${\bf s}$ gives $\delta_{{\bf q}{\bf 0}}$, while in the second term one writes
$s^2e^{-i{\bf q}\cdot{\bf s}/\hbar}=-\hbar^2\nabla_{\bf q}^2
e^{-i{\bf q}\cdot{\bf s}/\hbar}$ and integrates by parts over ${\bf q}$.
After some algebra one gets the result
\begin{eqnarray}
\frac{\delta E_{NR}}{\epsilon_{TF}} &=& - \int d{\bf r}\,
{\rm g}\frac{1}{V}\sum_{\bf p}(1+2f_p^0)
\left[ |\chi_0|^2 \frac{(\epsilon_p^0)^2}{4\epsilon_p^3} - |\nabla\chi_0|^2
\frac{\hbar^2{\rm g}^2n_{TF}^2}{m\epsilon_{TF}^2}\frac{\epsilon_p^0}{6\epsilon_p^3} \right]
\nonumber\\
&-& \int d{\bf r}\,|\chi_0|^2 \frac{{\rm g}^2n_{TF}^2}{\epsilon_{TF}^2}
{\rm g}\frac{1}{V}\sum_{\bf p}\frac{1+2f_p^0}{\epsilon_p}
\label{NR3}\\
&-& \frac{1}{6} \int d{\bf r}\,|\nabla\chi_0|^2 \frac{\hbar^2{\rm g}^2n_{TF}^2}{\epsilon_{TF}^2}
\frac{1}{V}\sum_{\bf q}\int d{\bf s}\, e^{-i{\bf q}\cdot{\bf s}/\hbar}\nabla_{\bf q}^2 \;{\rm g}
\frac{1}{V}\sum_{\bf p}\frac{f_p^0}{\epsilon_{|{\bf q}+{\bf p}|}} \;\;.
\nonumber
\end{eqnarray}
In the above equation $\epsilon_p=\epsilon_p({\bf r})$ and $f_p^0=f_p^0({\bf r})$, according to
(\ref{bs}). We notice that in the homogeneous limit, where $\chi_0({\bf r})=
e^{i{\bf q}\cdot{\bf r}/\hbar}/\sqrt{V}$ and $\epsilon_{TF}=c_B q$, the first two terms
on the r.h.s. of (\ref{NR3}) coincide with the corresponding terms in (\ref{homENR1}).
\subsubsection{Resonant contribution}
The resonant contribution to $\delta E$ is defined as
\begin{equation}
\delta E_{R}=4{\rm g}^2\sum_{ij}(f_i^0-f_j^0) \frac{|A_{ij}|^2} {\epsilon_{TF}+\epsilon_i-
\epsilon_j} \;\;,
\label{R0}
\end{equation}
where the matrix elements $A_{ij}$ are given in (\ref{ABB}).
Following the method used in the analysis of the non-resonant terms one has
\begin{eqnarray}
\frac{\delta E_{R}}{\epsilon_{TF}} = \frac{{\rm g}}{2V^2} \sum_{ij} \int d{\bf r}\, d{\bf s}\;
e^{-i{\bf s}\cdot\nabla[\varphi_i({\bf r})-\varphi_j({\bf r})]}\;
\chi_0({\bf r})\left[ \bar{K}_{1\,ij}^{R}({\bf r},{\bf r}+{\bf s}) +
\bar{K}_{2\,ij}^{R}({\bf r},{\bf r}+{\bf s}) \right]
\chi_0^{\ast}({\bf r}+{\bf s}) \;\;.
\label{R1}
\end{eqnarray}
The kernels in the above equation are defined by
\begin{eqnarray}
\bar{K}_{1\,ij}^{R}({\bf r},{\bf r}^{\prime}) &=& \frac{f_i^0-f_j^0}{\epsilon_{TF}+\epsilon_i
-\epsilon_j}
\; \left(c_{ij}({\bf r})-\frac{2{\rm g}n_{TF}({\bf r})}{\epsilon_i-\epsilon_j}d_{ij}({\bf r})
\right) \left(c_{ij}({\bf r}^{\prime})-\frac{2{\rm g}n_{TF}({\bf r}^{\prime})}
{\epsilon_i-\epsilon_j}d_{ij}({\bf r}^{\prime})\right) \;\;,
\nonumber\\
\bar{K}_{2\,ij}^{R}({\bf r},{\bf r}^{\prime}) &=& \frac{f_i^0-f_j^0}{\epsilon_i-\epsilon_j}
\; \frac{2{\rm g}n_{TF}({\bf r})}{\epsilon_{TF}}d_{ij}({\bf r})
\frac{2{\rm g}n_{TF}({\bf r}^{\prime})}{\epsilon_{TF}}d_{ij}({\bf r}^{\prime})\;\;,
\label{ker1}
\end{eqnarray}
where we have introduced the matrices
\begin{eqnarray}
c_{ij}({\bf r}) &=& 2\bar{u}_i({\bf r})\bar{u}_j({\bf r})+2\bar{v}_i({\bf r})\bar{v}_j({\bf r})
+\bar{v}_i({\bf r})\bar{u}_j({\bf r})+\bar{u}_i({\bf r})\bar{v}_j({\bf r}) \;\;,
\nonumber\\
d_{ij}({\bf r}) &=& \bar{v}_i({\bf r})\bar{u}_j({\bf r})-\bar{u}_i({\bf r})\bar{v}_j({\bf r}) \;\;.
\label{cd}
\end{eqnarray}
In the limit $\epsilon_{TF}\ll\mu$ the term in Eq. (\ref{R1}) containing the kernel
$\bar{K}_{2\,ij}^{R}$ can be treated using the gradient expansion (\ref{K2}). One gets thus
\begin{eqnarray}
\frac{{\rm g}}{2V^2} \sum_{ij} \int d{\bf r}\, d{\bf s}\; &&
e^{-i{\bf s}\cdot\nabla[\varphi_i({\bf r})-\varphi_j({\bf r})]}\; \chi_0({\bf r})
\bar{K}_{2\,ij}^{R}({\bf r},{\bf r}+{\bf s}) \chi_0^{\ast}({\bf r}+{\bf s}) =
\nonumber\\
&& - \int d{\bf r}\, |\nabla\chi_0|^2 \frac{\hbar^2{\rm g}^2n_{TF}^2}{m\epsilon_{TF}^2}
{\rm g}\frac{1}{V}\sum_{\bf p}\frac{\partial f_p^0}{\partial\epsilon_p}
\frac{\epsilon_p^0}{3\epsilon_p^2}
\label{R2}\\
&& + \frac{1}{6} \int d{\bf r}\,|\nabla\chi_0|^2 \frac{\hbar^2{\rm g}^2n_{TF}^2}{\epsilon_{TF}^2}
\frac{1}{V}\sum_{\bf q}\int d{\bf s}\, e^{-i{\bf q}\cdot{\bf s}/\hbar}\nabla_{\bf q}^2 \;
{\rm g}\frac{1}{V}\sum_{\bf p}\frac{f_p^0}{\epsilon_{|{\bf q}+{\bf p}|}} \;\;.
\nonumber
\end{eqnarray}
In the homogeneous limit, the first term on the r.h.s. of (\ref{R2}) coincides with the second term
in the square bracket on the r.h.s. of (\ref{homER1}).
Moreover, the last term on the r.h.s. of (\ref{NR3}) and (\ref{R2})
are equal and opposite in sign, and cancel out in the sum $\delta E_{NR}+\delta E_{R}$. A similar
cancellation is also present in the homogeneous case [see Eqs. (\ref{homENR1}) and (\ref{homER1})].
The contribution to $\delta E_{R}$ arising from $\bar{K}_{1\,ij}^{R}$ is more delicate.
In fact, all terms in the gradient expansion give contributions which are of the same order
in the limit $\epsilon_{TF}\ll\mu$. However, if we restrict ourselves to modes for which
$\nabla^2\chi_0=const$, the expansion (\ref{K2}) is still appropriate. In fact, higher
order terms in (\ref{K2}) contain derivatives of $\nabla^2\chi_0$ and vanish for modes with
constant Laplacian. To this class of modes belong, for example, all surface modes, for which
$\nabla^2\chi_0=0$, and the lowest breathing modes.
For the above mentioned term one finds
\begin{eqnarray}
\frac{{\rm g}}{2V^2} \sum_{ij} && \int d{\bf r}\, d{\bf s}\;
e^{-i{\bf s}\cdot\nabla[\varphi_i({\bf r})-\varphi_j({\bf r})]}\;
\chi_0({\bf r})\bar{K}_{1\,ij}^{R}({\bf r},{\bf r}+{\bf s}) \chi_0^{\ast}({\bf r}+{\bf s}) =
\nonumber\\
&& - \frac{1}{12} \int d{\bf r}\, d{\bf s}\, {\rm g}\frac{1}{V^2}\sum_{{\bf p}{\bf q}}
e^{-i{\bf q}\cdot{\bf s}/\hbar} \hbar^2\nabla_{\bf q}^2
\left[ \chi_0({\bf r})\nabla^2\chi_0^{\ast}({\bf r}) \bar{K}_{1\,pk}^{R}({\bf r},{\bf r})
+ \chi_0({\bf r})\nabla\chi_0^{\ast}({\bf r}) \cdot
\nabla \bar{K}_{1\,pk}^{R}({\bf r},{\bf r}) \right] \;\;.
\label{R3}
\end{eqnarray}
The Laplacian in momentum space can be easily calculated once the low-${\bf q}$
behavior of $\bar{K}_{1\,pk}^{R}({\bf r},{\bf r})$ has been obtained. A straightforward
calculation yields:
\begin{equation}
{\rm g}\frac{1}{V}\sum_{\bf p} \bar{K}_{1\,p|{\bf q}+{\bf p}|}^{R} \to
\frac{2 q^2}{3m\epsilon_{TF}^2} {\rm g}\frac{1}{V}\sum_{\bf p} \left(-\frac{\partial f_p^0}
{\partial\epsilon_p}\right) \epsilon_p^0 \left(1+\frac{\epsilon_p^0}{\epsilon_p}
\frac{\partial\epsilon_p}{\partial\epsilon_p^0}\right)^2 \;\;.
\label{Lqex}
\end{equation}
After some algebra, one obtains the following result for the contribution to $\delta E_{R}$
\begin{eqnarray}
\frac{{\rm g}}{2V^2} \sum_{ij} \int && d{\bf r}\, d{\bf s}\;
e^{-i{\bf s}\cdot\nabla[\varphi_i({\bf r})-\varphi_j({\bf r})]}\;
\chi_0({\bf r})
\bar{K}_{1\,ij}^{R}({\bf r},{\bf r}+{\bf s}) \chi_0^{\ast}({\bf r}+{\bf s}) =
\frac{\sqrt{32}}{3\sqrt{\pi}} \frac{\hbar^2{\rm g}}{m\epsilon_{TF}^2}
\nonumber\\
&& \times \int d{\bf r}\, |\nabla\chi_0|^2 (an_{TF})^{3/2}
\int dx \frac{\tau x}{(e^{x/2}-e^{-x/2})^2}
\left[ \frac{(u-1)^{3/2}}{u} \left(\frac{2u+1}{u+1}\right)^2 - 4\sqrt{\tau x}\right] \;\;,
\label{R4}
\end{eqnarray}
where $u=\sqrt{1+\tau^2({\bf r})x^2}$ and $\tau({\bf r})=k_BT/{\rm g}n_{TF}({\bf r})$.
We stress that the above contribution to the frequency shift
is peculiar to collective modes with constant Laplacian in harmonically trapped systems in the
thermodynamic limit. In the homogeneous case, where $\chi_0\propto e^{i{\bf q}\cdot{\bf r}}$ and
$\nabla^2\chi_0\neq const$, the contribution of this term is different and is given by the first
term in the square bracket on the r.h.s. of (\ref{homER1}).
\subsubsection{Results}
We are now in a position to calculate the shift $\delta E$ to order ${\rm g}^2$, by summing the
various contributions to the real part of Eq. (\ref{encorr}).
One gets the relevant result
\begin{eqnarray}
\frac{\delta E}{\epsilon_{TF}} &=& - \frac{4}{3\sqrt{\pi}}\frac{{\rm g}}{m\omega_{TF}^2}
\int d{\bf r}\, (an_{TF})^{3/2}\left(\chi_0\nabla^2\chi_0^{\ast} + \chi_0^{\ast}\nabla^2\chi_0\right)
\nonumber\\
&+& \int d{\bf r}\, (a^3n_{TF})^{1/2} |\chi_0|^2 G_1[\tau({\bf r})]
- \frac{{\rm g}}{m\omega_{TF}^2} \int d{\bf r}\, (an_{TF})^{3/2} |\nabla\chi_0|^2 G_2[\tau({\bf r})]
\;\;,
\label{DE}
\end{eqnarray}
where $\omega_{TF}=\epsilon_{TF}/\hbar$, and the functions $G_1(\tau)$, $G_2(\tau)$ of the local
reduced temperature $\tau({\bf r})=k_BT/{\rm g}n_{TF}({\bf r})$ are defined as follows
\begin{equation}
G_1(\tau) = \frac{\sqrt{32}}{\sqrt{\pi}} \tau \int_0^{\infty} dx \frac{1}{e^x-1}
\left( \sqrt{\tau x}-\frac{\sqrt{u-1}}{u}\,\frac{u^2+u-1}{u+1} \right)
\;\;,
\label{G1}
\end{equation}
and
\begin{eqnarray}
G_2(\tau) = \frac{\sqrt{32}}{3\sqrt{\pi}} \tau && \Biggl\{ \int_0^{\infty} dx
\frac{x}{(e^{x/2}-e^{-x/2})^2}
\left[ 4\sqrt{\tau x} - \frac{\sqrt{u-1}}{u(u+1)} - \frac{(u-1)^{3/2}}{u}
\left(\frac{2u+1}{u+1}\right)^2 \right] \Biggr.
\nonumber\\
&& - \Biggl. \int_0^{\infty} dx \frac{1}{e^x-1}\, \frac{\sqrt{u-1}}{u(u+1)} \Biggr\} \;\;,
\label{G2}
\end{eqnarray}
where, as usual, $u=\sqrt{1+\tau^2x^2}$.
The functions $G_1(\tau)$ and $G_2(\tau)$ are plotted in Fig. 4. Both are positive for any value of
$\tau$. As a consequence, $G_1(\tau)$ gives an upward shift of the excitation frequency, while
$G_2(\tau)$ gives a downward shift. The above result holds for collective modes which do not excite
the center of mass degrees of freedom. In fact, as discussed in Sec. II, the theoretical approach
developed in the present work does not describe the center of mass motion and, in particular, the
dipole mode.
At $T=0$, both $G_1$ and $G_2$ are zero and one is left with the result
\begin{equation}
\frac{\delta E}{\epsilon_{TF}} = - \frac{4}{3\sqrt{\pi}}\frac{{\rm g}}{m\omega_{TF}^2}
\int d{\bf r}\, (an_{TF})^{3/2}\left(\chi_0\nabla^2\chi_0^{\ast} + \chi_0^{\ast}\nabla^2\chi_0\right)
\;\;,
\label{DE0}
\end{equation}
which coincides with the findings of Ref. \cite{PS98}, obtained from the hydrodynamic theory of
superfluids, and of Ref. \cite{BP99}. Notice that only non-resonant terms contribute
at $T=0$; as a consequence, result (\ref{DE0}) holds in general for collective modes with
$\epsilon_{TF}\ll\mu$ and is not restricted to modes with constant Laplacian.
For the monopole (breathing) mode in a spherically symmetric
trap ($\lambda=1$), characterized by the frequency $\omega_{M}=\sqrt{5}\omega_{ho}$, one has from
Eq. (\ref{DE0}) the fractional shift \cite{PS98,BP99}
\begin{equation}
\frac{\delta\omega_M}{\omega_M} = \frac{21\sqrt{2}}{320\zeta(3)} \eta^3 \;\;,
\label{M0}
\end{equation}
expressed in terms of the parameter $\eta$.
For the $m=0$ modes in an axially symmetric trap, which have excitation frequency given by
(\ref{m0LH}), one finds the result \cite{PS98,BP99}
\begin{equation}
\frac{\delta\omega_{m=0}}{\omega_{m=0}} = \frac{21\sqrt{2}}{320\zeta(3)} \eta^3 f_{\pm}(\lambda) \;\;,
\label{m=00}
\end{equation}
where
\begin{equation}
f_{\pm}({\lambda})=\frac{1}{2} \pm \frac{8+\lambda^2}{6\sqrt{9\lambda^4-16\lambda^2+16}} \;\;,
\label{fpm}
\end{equation}
and the index $\pm$ refers to the high ($+$) and low ($-$) $m=0$ mode.
As discussed in Ref. \cite{PS98}, these frequency shifts are very small. For $\eta=0.4$, which is a
typical value for the interaction parameter in experiments, one gets a fractional shift of
the order of 0.5 \%.
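As a quick numerical illustration (a sketch we add here, not part of the original analysis), the $T=0$ shifts (\ref{M0}) and (\ref{fpm}) can be evaluated directly; $\zeta(3)$ is obtained from a truncated series.

```python
import math

# zeta(3) from a truncated series; the tail is below 1e-10 for this cutoff
zeta3 = sum(1.0 / n**3 for n in range(1, 200001))

eta = 0.4  # interaction parameter used in the text

# T = 0 fractional shift of the monopole mode, Eq. (M0)
shift_M = 21.0 * math.sqrt(2.0) / (320.0 * zeta3) * eta**3

def f_pm(lmbda, sign):
    """f_pm(lambda) of Eq. (fpm); sign = +1 (high) or -1 (low) m = 0 mode."""
    return 0.5 + sign * (8.0 + lmbda**2) / (
        6.0 * math.sqrt(9.0 * lmbda**4 - 16.0 * lmbda**2 + 16.0))

print(round(100.0 * shift_M, 2))     # about 0.49 %, the "0.5 %" quoted in the text
print(f_pm(1.0, +1), f_pm(1.0, -1))  # 1.0 0.0 in a spherical trap
```

For a spherical trap ($\lambda=1$) one finds $f_+=1$ and $f_-=0$: the high $m=0$ mode reduces to the monopole, while the low mode becomes the $l=2$ surface mode, whose $T=0$ shift (\ref{DE0}) vanishes since $\nabla^2\chi_0=0$.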
At finite temperature the terms involving the $G_1$ and $G_2$ functions contribute to the frequency
shift, and, differently from the $T=0$ case, also surface excitations with $\nabla^2\chi_0=0$ are
affected by the correction (\ref{DE}).
We consider first spherically symmetric traps. The monopole oscillation has the form
$\chi_M\propto (r^2-3R^2/5)$, where $R=\sqrt{2\mu_{TF}(N_0)/m\omega_{ho}^2}$ is the condensate radius.
The temperature dependence of the fractional shift $\delta\omega_M/\omega_M$ is given by the
equation
\begin{eqnarray}
\frac{\delta\omega_M}{\omega_M} = \frac{21\sqrt{2}}{320\zeta(3)} \eta^3
\left(\frac{N_0}{N}\right)^{1/5} && \left[ 1+\frac{16}{9\sqrt{\pi}}\int_0^1 dx\, x^{1/2}\sqrt{1-x}
\,(2-5x)^2\, G_1[\tau(x)] \right.
\nonumber\\
&& \left. - \frac{160}{9\sqrt{\pi}}\int_0^1 dx\, x^{3/2} (1-x)^{3/2} \,G_2[\tau(x)] \right]
\;\;,
\label{MT}
\end{eqnarray}
where $N_0/N$ is the equilibrium value of the condensate fraction, which is fixed by the parameter
$\eta$ and the reduced temperature $t=T/T_c^0$. The argument $\tau(x)$ of the $G_1$ and $G_2$
functions is given by the expression $\tau(x)=\bigl[(N_0/N)^{-2/5}\,t/\eta\bigr]\,(1/x)$.
In Fig. 6 the monopole frequency shift (\ref{MT}) is shown as a function of the reduced temperature
$t$ for the value $\eta=0.4$ of the interaction parameter.
Already at relatively low temperatures, $t\simeq 0.3$, the monopole frequency is found to be about
1 \% smaller than $\omega_M=\sqrt{5}\omega_{ho}$. In fact, even for such low temperatures,
the local reduced temperature is large at the boundary of the condensate, as $\tau(x)\gg 1$ if
$x\to 0$, and the contribution of this region dominates the shift (\ref{MT}).
If $\eta\ll t$, one can approximate the functions $G_1$ and $G_2$ in (\ref{MT}) with their asymptotic
behavior for $\tau\gg1$. One has
\begin{eqnarray}
G_1(\tau) &\to& G_1(\infty) \tau \;\;,
\nonumber\\
G_2(\tau) &\to& G_2(\infty) \tau \;\;,
\label{G12infty}
\end{eqnarray}
where $G_1(\infty)=5\sqrt{\pi}$ and
\begin{equation}
G_2(\infty) = \frac{4\sqrt{2}}{3\sqrt{\pi}} \int_0^1 dx\,
\left[ \frac{8+5x-x^2}{\sqrt{x}\sqrt{1-x}(1+x)^3} + \frac{4}{x^{3/2}(1-x^2)^{3/4}}
\left( 1-\frac{(1-x)^{1/4}}{(1+x)^{1/4}} \right)\right]
\simeq 22.
\label{G2inf}
\end{equation}
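The numerical value quoted in (\ref{G2inf}) can be checked with a short script (our sketch, not from the original text); the integrable endpoint singularities $x^{-1/2}$ and $(1-x)^{-3/4}$ are handled with tanh-sinh quadrature, implemented with the standard library only.

```python
import math

def tanh_sinh_01(f, n=40, h=0.1):
    """Tanh-sinh quadrature on (0, 1); robust to integrable endpoint
    singularities such as x**-0.5 and (1-x)**-0.75."""
    total = 0.0
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        e = math.exp(-2.0 * abs(u))
        # x and 1-x evaluated without cancellation near the endpoints
        if u >= 0:
            x, omx = 1.0 / (1.0 + e), e / (1.0 + e)
        else:
            x, omx = e / (1.0 + e), 1.0 / (1.0 + e)
        w = h * 0.25 * math.pi * math.cosh(t) / math.cosh(u) ** 2
        total += w * f(x, omx)
    return total

def integrand(x, omx):
    """Integrand of Eq. (G2inf); omx stands for 1 - x."""
    term1 = (8.0 + 5.0 * x - x * x) / (
        math.sqrt(x) * math.sqrt(omx) * (1.0 + x) ** 3)
    # 1 - ((1-x)/(1+x))**0.25, written with expm1 to avoid cancellation
    bracket = -math.expm1(0.25 * math.log(omx / (1.0 + x)))
    term2 = 4.0 * bracket / (x ** 1.5 * (omx * (1.0 + x)) ** 0.75)
    return term1 + term2

G2_inf = 4.0 * math.sqrt(2.0) / (3.0 * math.sqrt(math.pi)) * tanh_sinh_01(integrand)
print(round(G2_inf, 1))  # close to the value 22 quoted in Eq. (G2inf)
```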
In this case the monopole shift is given by
\begin{equation}
\frac{\delta\omega_M}{\omega_M} = \frac{7\sqrt{2\pi}}{960\zeta(3)}\;\frac{\eta^2 t}{(1-t^3)^{1/5}}
\left[17G_1(\infty)-10G_2(\infty)\right] \simeq - 1.1 \frac{\eta^2 t}{(1-t^3)^{1/5}} \;\;,
\label{MHT}
\end{equation}
where for the condensate fraction we have used the ideal gas law $N_0/N=1-t^3$.
Result (\ref{MHT}) gives a reasonably good approximation to the frequency shift $\delta\omega_M$ also
when $\eta\sim t$: for example, for $\eta=0.4$ and $t=0.8$, Eq. (\ref{MHT}) gives
$\delta\omega_M/\omega_M\simeq -0.16$, while the full calculation based on Eq. (\ref{MT}) gives $-0.11$.
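The numerical coefficient in (\ref{MHT}) follows from simple arithmetic; the short check below (ours) uses $G_1(\infty)=5\sqrt{\pi}$ and the quoted value $G_2(\infty)\simeq 22$.

```python
import math

zeta3 = sum(1.0 / n**3 for n in range(1, 200001))  # zeta(3), truncated series
G1_inf = 5.0 * math.sqrt(math.pi)
G2_inf = 22.0                                      # value quoted in Eq. (G2inf)

# Coefficient of eta**2 * t / (1 - t**3)**(1/5) in Eq. (MHT)
coef = 7.0 * math.sqrt(2.0 * math.pi) / (960.0 * zeta3) * (
    17.0 * G1_inf - 10.0 * G2_inf)
print(round(coef, 2))  # about -1.1, as quoted in Eq. (MHT)

# High-temperature estimate at eta = 0.4, t = 0.8, cf. comparison with Eq. (MT)
eta, t = 0.4, 0.8
shift = coef * eta**2 * t / (1.0 - t**3) ** 0.2
print(round(shift, 2))  # about -0.16
```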
In a surface mode the oscillation of the condensate has the form $\chi_{lm}\propto
r^l Y_{lm}(\theta,\phi)$, and the excitation frequency is given by $\omega_l=\sqrt{l}\omega_{ho}$.
For these modes one finds the following fractional shift
\begin{eqnarray}
\frac{\delta\omega_l}{\omega_l} = \frac{\sqrt{2}(2l+3)}{15\sqrt{\pi}\zeta(3)} \eta^3
\left(\frac{N_0}{N}\right)^{1/5} && \left[ \int_0^1 dx\, x^{1/2} (1-x)^{l+1/2}
\, G_1[\tau(x)] \right.
\nonumber\\
&& \left. - (l+1/2) \int_0^1 dx\, x^{3/2} (1-x)^{l-1/2} \,G_2[\tau(x)] \right]
\;\;.
\label{lmT}
\end{eqnarray}
In the limit $\eta\ll t$ one gets, by using (\ref{G12infty}), the following result
\begin{equation}
\frac{\delta\omega_l}{\omega_l} = \frac{\sqrt{2}(2l+3)}{30\zeta(3)(l+1)}\,
\frac{(l+\frac{1}{2})\Gamma(l+\frac{1}{2})}{l\,\Gamma(l)}\;
\frac{\eta^2 t}{(1-t^3)^{1/5}} \left[2G_1(\infty)-
G_2(\infty) \right] \;\;.
\label{lmHT}
\end{equation}
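The $l$ dependence of the high-temperature shift (\ref{lmHT}) can be checked directly; the helper below is our sketch, with the common constants omitted.

```python
import math

def surface_prefactor(l):
    """l-dependent factor of Eq. (lmHT), common constants omitted."""
    return (2 * l + 3) / (l + 1) * (l + 0.5) * math.gamma(l + 0.5) / (
        l * math.gamma(l))

print([round(surface_prefactor(l), 2) for l in (2, 3, 4)])  # grows with l

# The overall sign is fixed by 2*G1(inf) - G2(inf) = 10*sqrt(pi) - 22 < 0,
# i.e. a downward shift, consistent with the surface modes of Fig. 7
print(2.0 * 5.0 * math.sqrt(math.pi) - 22.0 < 0)
```

In the limit $\eta\ll t$ the magnitude of the shift therefore increases with the multipolarity $l$, in line with the behavior of the $m=2$ and $m=4$ modes discussed below.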
In the case of axially symmetric traps, the oscillations with $m=0$ symmetry have the form
$\chi_{m=0}\propto -2\mu_{TF}(s^2-2)/(ms^2\omega_{\perp}^2) + r_{\perp}^2 + (s^2-4)z^2$, where
$s=\omega_{m=0}/\omega_{\perp}$ and $\omega_{m=0}$ is given by Eq. (\ref{m0LH}) for the low and high
mode. After some algebra one gets the result
\begin{eqnarray}
\frac{\delta\omega_{m=0}}{\omega_{m=0}} = \frac{21\sqrt{2}}{320\zeta(3)} \eta^3 &&
\left(\frac{N_0}{N}\right)^{1/5}
\left\{ f_{\pm}(\lambda) - \frac{160}{9\sqrt{\pi}}\int_0^1 dx\, x^{3/2} (1-x)^{3/2}
\,G_2[\tau(x)] \right.
\\
\label{m=0T}
&& \left. + \frac{16}{9\sqrt{\pi}}\int_0^1 dx\, x^{1/2}\sqrt{1-x}
\,\biggl[4(1-x)^2+3f_\pm(\lambda)(7x^2-4x)\biggr]\, G_1[\tau(x)] \right\}
\;\;,
\nonumber
\end{eqnarray}
where $f_\pm(\lambda)$ is defined in (\ref{fpm}). In the limit $\eta\ll t$ the above result
reduces to
\begin{eqnarray}
\frac{\delta\omega_{m=0}}{\omega_{m=0}} =
\frac{7\sqrt{2\pi}}{960\zeta(3)}\;\frac{\eta^2 t}{(1-t^3)^{1/5}}
\left\{ [20-3f_\pm(\lambda)] G_1(\infty)-10G_2(\infty)\right\} \;\;.
\label{m=0HT}
\end{eqnarray}
On the contrary, surface excitations of the form $\chi_{m}\propto r_{\perp}^{|m|} e^{im\phi}$
and with excitation energy $\omega_m=\sqrt{|m|}\omega_{\perp}$, exhibit the fractional shift
(\ref{lmT}) with $l$ replaced by $|m|$.
In Fig. 5 (Fig. 6) we show the fractional shift of the mode $m=0$ low (high) as a function of the
reduced temperature $t=T/T_c^0$ and for the value $\eta=0.4$ of the interaction parameter.
We notice that in the case of the $m=0$ high mode (Fig. 6), the size of the fractional shift is
maximum for
spherically symmetric traps ($\lambda=1$) and is minimum for disk-shaped traps ($\lambda\gg 1$).
On the contrary, for the $m=0$ low mode (Fig. 5), $|\delta\omega/\omega|$ is maximum for
$\lambda\gg 1$, while
it is minimum in the $\lambda=1$ case. However, by changing the geometry of the trap, the curve of
the fractional shift remains qualitatively the same, and at intermediate temperatures,
$T\sim 0.5 T_c^0$, one finds downward shifts ranging from 1 to 4\% for both the mode $m=0$ low and
high. In Fig. 7 we show the results for the surface modes $m=2$ and $m=4$ for the same value,
$\eta=0.4$, of the interaction parameter. In the case of surface modes the fractional shift is
independent of the deformation parameter $\lambda$ and we find that the size of the shift
increases with $m$. An explanation of this behavior can be found in the fact that modes
with higher $m$ are more localized at the surface of the condensate, where $k_BT\gg
{\rm g}n_0({\bf r})$, with ${\rm g}n_0({\bf r})$ the local chemical potential. Thus, thermal effects
are more pronounced for such modes.
Experiments on the temperature dependence of the collective modes have been carried out both at JILA
\cite{JILA97} and MIT \cite{MIT98}. The JILA group has measured, as a function of temperature, the
frequency of the $m=2$ and $m=0$ low modes \cite{JILA97}. However, in these experiments, the number
of trapped particles is about 10$^4$ and beyond-Thomas-Fermi effects are expected to play a
significant role. Nevertheless, our results for the fractional shift of the $m=2$ mode in disk-shaped
geometries, shown in Fig. 7, agree both qualitatively and quantitatively with the observed behavior.
In the case of the $m=0$ low mode other effects, not included in the present analysis, might be
responsible for the features observed in the experiment. The frequency of collective excitations in
the Thomas-Fermi regime has been measured by the MIT group for the $m=0$ low mode in a cigar-shaped
trap \cite{MIT98}. In Fig. 8 we show the comparison between the experimental results and our
theoretical prediction. The calculation has been carried out with the value $\eta=0.4$ of the
interaction parameter, which is close to the experimental conditions of \cite{MIT98}. In Fig. 8,
the experimental data have been plotted as a function of the reduced temperature $T/T_c^0$
\cite{DSK}. This is possible only for temperatures above 0.5 $\mu$K, as lower temperatures were not
measurable in \cite{MIT98}.
\subsection{Hydrodynamic equations at $T=0$}
At zero temperature, superfluid systems are described by the equations of hydrodynamics
(for a general discussion see the book \cite{K65}). These equations involve the total
density $n$ of the system and the superfluid velocity ${\bf v}_s$, which is related to
the gradient of the phase of the order parameter. The hydrodynamic picture has been
successfully employed in \cite{S96} to obtain the frequencies of the collective modes
in the Thomas-Fermi regime and later in \cite{PS98} to calculate the corrections to these
frequencies due to beyond mean-field effects. We have already verified [see Eq. (\ref{DE0})]
that our perturbation scheme reproduces at $T=0$ the results obtained from hydrodynamic theory.
However, since we start from dynamic equations written in terms of the condensate wavefunction and the
noncondensate density, it is important to understand whether these equations reduce at zero
temperature to the hydrodynamics of superfluids.
At the level of Gross-Pitaevskii theory the analogy is straightforward \cite{S96,WG96}. By writing
the condensate wavefunction in terms of a modulus and a phase $\Phi({\bf r},t)=
\sqrt{n_0({\bf r},t)}e^{i\varphi({\bf r},t)}$, one has the following identifications
\begin{eqnarray}
\delta n_0({\bf r},t) &=& \Phi_0({\bf r})\left[\delta\Phi({\bf r},t)+
\delta\Phi^{\ast}({\bf r},t)\right]
\;\;,\nonumber\\
i\delta\varphi({\bf r},t) &=& \frac{1}{2\Phi_0({\bf r})}
\left[\delta\Phi({\bf r},t)-\delta\Phi^{\ast}({\bf r},t)\right] \;\;,
\label{hyddv}
\end{eqnarray}
between the fluctuations of $n_0$ and $\varphi$ and the fluctuations of the order parameter.
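These identifications can be checked symbolically by linearizing the modulus-phase representation around the ground state (phase equal to zero). The following sketch uses \texttt{sympy}, with a formal expansion parameter $t$ multiplying the fluctuations; variable names are illustrative:

```python
import sympy as sp

n0, t = sp.symbols('n0 t', positive=True)
dn0, dphi = sp.symbols('delta_n0 delta_phi', real=True)

# Order parameter Phi = sqrt(n0 + t*dn0) * exp(i*t*dphi); the formal
# parameter t extracts the linear fluctuation around the ground state.
Phi = sp.sqrt(n0 + t*dn0)*sp.exp(sp.I*t*dphi)
dPhi = sp.expand(sp.series(Phi, t, 0, 2).removeO() - sp.sqrt(n0))

Phi0 = sp.sqrt(n0)
# Identifications of Eq. (hyddv): density and phase fluctuations
assert sp.simplify(Phi0*(dPhi + sp.conjugate(dPhi)) - t*dn0) == 0
assert sp.simplify((dPhi - sp.conjugate(dPhi))/(2*Phi0) - sp.I*t*dphi) == 0
```

Both identifications hold exactly at linear order in the fluctuations.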
The coupled equations (\ref{hydeq}), holding in the Thomas-Fermi regime, are then equivalent to
\begin{eqnarray}
&& \frac{\partial\delta n_0}{\partial t} + \nabla\cdot\left(n_{TF}{\bf v}_s\right) = 0 \;\;,
\nonumber\\
&& m\frac{\partial{\bf v}_s}{\partial t} + {\rm g}\nabla\delta n_0 = 0 \;\;,
\label{shydeq}
\end{eqnarray}
where ${\bf v}_s=\hbar\nabla\varphi/m$ is the superfluid velocity.
At $T=0$, if one neglects quantum depletion, the condensate density coincides with the
total density and Eqs. (\ref{shydeq}) coincide with the linearized hydrodynamic equations.
The former of Eqs. (\ref{shydeq}) corresponds to the equation of continuity and the latter
to the Euler equation with the pressure $P$ fixed by $\partial P/\partial n_0={\rm g}n_{TF}$.
Beyond Gross-Pitaevskii theory one must replace Eqs. (\ref{hydeq}) by (\ref{dphi+-}), which include
the corrections to order ${\rm g}^2$. At $T=0$ these equations reduce to
\begin{eqnarray}
\hbar\omega(\delta\Phi_1+\delta\Phi_2) &=& -\frac{\hbar^2\nabla^2}{2m}(\delta\Phi_1-\delta\Phi_2)
- 2{\rm g}\tilde{m}^0 (\delta\Phi_1-\delta\Phi_2)
\nonumber\\
&+& {\rm g}\sqrt{n_{TF}}\sum_{ij}\left[ (u_iu_j-v_iv_j) g_{ij}(\omega)
- (u_i^{\ast}u_j^{\ast}-v_i^{\ast}v_j^{\ast}) g_{ij}^{\ast}(\omega) \right] \;\;,
\nonumber\\
\hbar\omega(\delta\Phi_1-\delta\Phi_2) &=& 2({\rm g}n_{TF}+\delta\mu) (\delta\Phi_1+\delta\Phi_2)
\label{dphi+-0}\\
&+& 2{\rm g}n_{TF}\left[{\rm g}\frac{1}{V}\sum_{\bf p}\frac{m}{p^2}-\frac{40}{3\sqrt{\pi}}
(a^3n_{TF})^{1/2}\right] (\delta\Phi_1+\delta\Phi_2)
\nonumber\\
&+& {\rm g}\sqrt{n_{TF}}\sum_{ij}\Bigl[ (2u_iv_j+2v_iu_j+u_iu_j+v_iv_j) g_{ij}(\omega) \Bigr.
\nonumber\\
&+& \Bigl. (2u_i^{\ast}v_j^{\ast}+2v_i^{\ast}u_j^{\ast}+u_i^{\ast}u_j^{\ast}+v_i^{\ast}v_j^{\ast})
g_{ij}^{\ast}(\omega) \Bigr]
\;\;,
\nonumber
\end{eqnarray}
where $\delta\mu=\mu-\mu_{TF}(N_0)$ is the change in the chemical potential
[see Eq. (\ref{eqcd1})] and the matrices $g_{ij}(\omega)$ and $g_{ij}^\ast(\omega)$ are given
in (\ref{fgomega}) with $f_i^0=f_j^0=0$. By using the semiclassical approximation (\ref{lda})
for the quasiparticle states, the above equations are written in terms of the variables
$\varphi$ and $n_0$ as
\begin{eqnarray}
- i\hbar\omega\; \delta n_0({\bf r}) &=& -\hbar\nabla\cdot
\left[ n_0({\bf r}){\bf v}_s({\bf r})\right]
- 4{\rm g}n_{TF}({\bf r})
\tilde{m}^0({\bf r})\delta\varphi({\bf r})
\nonumber\\
&+& \frac{2{\rm g}^2}{V^2}\sum_{ij} \frac{n_{TF}({\bf r})b_{ij}({\bf r})}{\epsilon_i+\epsilon_j}
\int d{\bf s}\; e^{i{\bf s}\cdot\nabla[\varphi_i({\bf r})+\varphi_j({\bf r})]}\;
\label{hydro0}\\
&\times& \left[\frac{i\hbar\omega\delta n_0({\bf r}+{\bf s})}{\epsilon_i+\epsilon_j}
\left( a_{ij}({\bf r}+{\bf s})+\frac{2{\rm g}n_{TF}({\bf r}+{\bf s})}
{\epsilon_i+\epsilon_j} b_{ij}({\bf r}+{\bf s}) \right) - 2n_{TF}({\bf r}+{\bf s})
b_{ij}({\bf r}+{\bf s})\delta\varphi({\bf r}+{\bf s}) \right] \;\;,
\nonumber
\end{eqnarray}
and
\begin{eqnarray}
i\hbar\omega\; \delta\varphi({\bf r}) &=& {\rm g} \left( 1+
{\rm g}\frac{1}{V}\sum_{\bf p}\frac{m}{p^2} \right)
\delta n_0({\bf r})
\label{hydro1}\\
&-& \frac{{\rm g}^2}{V^2}\sum_{ij} \frac{a_{ij}({\bf r})}{\epsilon_i+\epsilon_j}
\int d{\bf s}\; e^{i{\bf s}\cdot\nabla[\varphi_i({\bf r})+\varphi_j({\bf r})]}\;
\delta n_0({\bf r}+{\bf s}) \left( a_{ij}({\bf r}+{\bf s})+\frac{2{\rm g}n_{TF}({\bf r}+{\bf s})}
{\epsilon_i+\epsilon_j} b_{ij}({\bf r}+{\bf s}) \right) \;\;,
\nonumber
\end{eqnarray}
where the matrices $a_{ij}$ and $b_{ij}$ have been defined in (\ref{ab}). By using the gradient
expansion employed in the previous section for the calculation of the non-resonant contributions
to $\delta E$, we get the result
\begin{eqnarray}
i\omega \left(1+\frac{4}{\sqrt{\pi}}(a^3n_{TF}({\bf r}))^{1/2}\right)\delta n_0({\bf r})
&=& \nabla \cdot\left[ n_0({\bf r})\left(1+\frac{8}{3\sqrt{\pi}}(a^3n_{TF}({\bf r}))^{1/2}\right)
{\bf v}_s({\bf r}) \right] \;\;,
\label{hydro2}
\end{eqnarray}
and
\begin{eqnarray}
i\hbar\omega\delta\varphi&=&{\rm g}\left(1+\frac{20}{\sqrt{\pi}}(a^3n_{TF})^{1/2}\right)
\delta n_0({\bf r}) \;\;.
\label{hydro3}
\end{eqnarray}
If one takes into account the effect of quantum depletion, the local relation between condensate
density and total density is given by $n=n_0[1+8(a^3n_{TF})^{1/2}/(3\sqrt{\pi})]$, and for the
fluctuations of the two densities $\delta n=\delta n_0[1+4(a^3n_{TF})^{1/2}/\sqrt{\pi}]$.
Finally, the change in the local chemical potential $\mu_l={\rm g}n[1+32(a^3n_{TF})^{1/2}/
(3\sqrt{\pi})]$ induced by a density fluctuation is given by the following expression
$\delta n\,\partial\mu_l/\partial n = \delta n_0 {\rm g}[1+20(a^3n_{TF})^{1/2}/\sqrt{\pi}]$.
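The consistency of these first-order relations can be checked with a few lines of computer algebra. In the sketch below, $\epsilon = a^{3/2}$ is a formal small parameter, so that $(a^3 n_{TF})^{1/2} = \epsilon\sqrt{n_{TF}}$ and the expansion order is counted by powers of $\epsilon$:

```python
import sympy as sp

eps, n, g, dn0 = sp.symbols('epsilon n g delta_n0', positive=True)
x = eps*sp.sqrt(n)                 # gas parameter (a^3 n)^{1/2}, with eps = a^{3/2}
sqpi = sp.sqrt(sp.pi)

dn = dn0*(1 + 4*x/sqpi)            # total density fluctuation vs condensate one
mu_l = g*n*(1 + 32*x/(3*sqpi))     # local chemical potential with first correction

lhs = sp.expand(dn*sp.diff(mu_l, n))
rhs = dn0*g*(1 + 20*x/sqpi)        # stated first-order result

# The difference must be second order in the gas parameter
residual = sp.expand(lhs - rhs)
assert sp.simplify(residual - 64*g*dn0*eps**2*n/sp.pi) == 0
```

The residual is proportional to $\epsilon^2 n = a^3 n$, confirming that the relation $\delta n\,\partial\mu_l/\partial n = \delta n_0\,{\rm g}[1+20(a^3n_{TF})^{1/2}/\sqrt{\pi}]$ holds to first order.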
It is now straightforward to recognize Eqs. (\ref{hydro2}), (\ref{hydro3}) as the linearized
hydrodynamic equations
\begin{eqnarray}
&& \frac{\partial \delta n}{\partial t} + \nabla\cdot\left(n{\bf v}_s\right) = 0 \;\;,
\nonumber\\
&& m\frac{\partial {\bf v}_s}{\partial t} + \nabla\cdot\left(\frac{\partial\mu_l}{\partial n}
\delta n\right) = 0 \;\;,
\label{hydro4}
\end{eqnarray}
which involve the total density $n$ and the superfluid velocity ${\bf v}_s$.
\section{Concluding remarks}
In this paper we have studied the collisionless collective modes of a dilute Bose gas beyond the
Gross-Pitaevskii theory. In particular, for harmonically trapped systems in the thermodynamic
limit, we have calculated the corrections to the excitation frequencies of the low-lying collective
modes. We find that, not far below the Bose-Einstein transition temperature, the fractional
frequency shift is of the order of a few percent for typical experimental conditions and
can be measured. A direct comparison with experimental data obtained by the
MIT group with large condensates shows very good agreement. As for
Gross-Pitaevskii theory, the study of collective excitations can become a useful benchmark
also for theories beyond the mean-field approximation.
\section*{Acknowledgments}
I would like to thank S. Stringari and L. Pitaevskii for many stimulating discussions.
It is also a pleasure to thank A. Chikkatur for providing me with the experimental data
of the frequency of the $m=0$ mode and D. Stamper-Kurn for useful remarks concerning these
experimental results. Useful discussions with P. Fedichev, G. Shlyapnikov, A. Minguzzi
and P. Schuck are also gratefully acknowledged. I am also particularly indebted to
L. Pitaevskii for a critical reading of the manuscript.
\section{\label{sec:introduction}Introduction}
Understanding the nonlinear response of dense colloidal systems to shear or other mechanical driving forces on a microscopic (i.e., particle-resolved) level has become a focus of growing interest.
Recent examples include density excitations (determining frictional properties) in driven colloidal monolayers \cite{Bohlein2012,Hasnain2013,Vanossi2012,Vanossi2013}, the stick-slip motion involved in the transmission of torque in driven colloidal clutches \cite{Williams2016}, as well as heterogeneities \cite{Benzi2014, Chaudhuri2013, Hentschel2016,Swayamjyoti2016},
and diverging stress- and strain correlations \cite{Benzi2014, Nicolas2014} in sheared colloidal glasses.
Related complex microscopic behavior occurs in sheared granular matter \cite{Denisov2016} and sheared suspensions of non-Brownian particles \cite{Fornari2016}.
Developing a microscopic understanding of such shear-induced behavior is interesting not only in the general context of nonequilibrium behavior of soft-matter systems, but also is crucial for applications in nanotribology, the design of novel materials and of efficient nanomachines.
In the present paper we are concerned with the shear-induced microscopic response of thin films of spherical colloidal particles between two planar walls (slit-pore geometry).
By using Brownian Dynamics (BD) computer simulations and an analytical approach, we aim at understanding transport mechanisms under shear for both mono- and bidisperse systems.
The structural behavior of colloidal suspensions in presence of spatial confinement is nontrivial already in equilibrium; in particular, it is well established that the particles spontaneously form layers (see, e.g., \cite{Klapp2008}) which, moreover, become crystal-like ("capillary freezing") in lateral directions at sufficiently high densities \cite{Grandner2010}.
Exposing such highly correlated systems to shear flow (along a direction within the plane of the walls) leads to a breakdown of crystalline in-plane ordering after overcoming a "critical" shear rate, and a subsequent recrystallization at higher shear rates, as both computer simulations \cite{Messina2006, Vezirov2013} and experiments \cite{Reinmueller2013} reveal.
In two earlier publications \cite{Vezirov2013,Vezirov2015} we have analyzed this behavior in detail, for the exemplary case of a colloidal bilayer (of monodisperse particles) under constant shear rate \cite{Vezirov2013} or constant stress \cite{Vezirov2015} (both of these external control strategies can be experimentally realized).
One main conclusion was that the breakdown of crystalline order is related to "depinning" transitions in terms of the layer velocity from a locked into a running (sliding) state \cite{Vezirov2013}.
In this sense, the dynamical behavior of confined colloidal layers under shear bears strong similarities to the well-studied case of one-dimensional (1D) particle chains or two-dimensional (2D) particle monolayers driven over a periodic substrate \cite{Hasnain2014, Reimann2002, Risken1996}.
Inspired by this similarity, we here propose an analytical model which allows us to predict the shear-induced depinning on the basis of the structure in thermal equilibrium.
The model is essentially a variant of the well-known Frenkel-Kontorova (FK) model \cite{Braun1998,Kontorova1939}, which has been extensively used to model friction between solid (atomic or colloidal) surfaces and has also proven to be crucial for understanding driven monolayers \cite{Bohlein2012,Hasnain2013}.
It should be stressed, however, that despite all similarities, there is one crucial difference between our system and the case of driven monolayers: in the latter case, the periodic substrate represents a fixed \emph{external field}, whereas in our case, the "substrate" rather corresponds to a neighboring layer which can respond to the shear flow itself by in- and out-of-plane deformations.
Indeed, one main goal of the present study is to elucidate the implications of this difference.
A further major goal is to explore the impact of incommensurability, that is, a mismatch of structural length scales, in our sheared system. To this end we consider an asymmetric system where a layer of small colloids is sheared with respect to (crystalline) layers of larger particles.
As expected from the FK model as well as from previous, experimental \cite{Bohlein2012} and theoretical \cite{Hasnain2013, McDermott2013, Siems2015} studies of driven monolayers, we observe moving defect structures with locally enhanced density ("kinks") or locally reduced density ("antikinks").
These kinks and antikinks correspond to soliton solutions of the continuum version of the FK model (i.e., the sine-Gordon equation).
Contrary to the theoretically predicted scenario, however, in our system only the kinks participate in the particle transport, whereas the antikinks remain essentially "locked" within the moving layer.
The rest of the paper is organized as follows. In \sref{models and simulation details} we describe our (mono- or bidisperse) model systems and the details of our BD simulations.
In \sref{average motion} we give a first overview of the behavior of the different films by considering simulation results for
the average motion of the layers.
We then proceed by presenting our analytical model which targets mainly the bilayer system (in \sref{bilayer}). However, we also discuss its application to a monodisperse trilayer system (in \sref{trilayer}).
Section~\ref{sec:density excitations} is devoted to the bidisperse system, for which we discuss in detail the local transport via density excitations.
We close with a summary and conclusion in \sref{conclusion}.
\section{\label{sec:models and simulation details}Models and simulation details}
\subsection{\label{sec:model systems}Model systems}
We consider a colloidal suspension consisting of macroions of diameter $d_i$, salt ions, counterions, and solvent molecules.
Focusing on the macroions, the influence of the solvent is considered implicitly by employing the Derjaguin-Landau-Verwey-Overbeek (DLVO) approximation.
In this framework, the electrostatic interaction of the macroions is screened by the salt- and counterions leading (on a mean-field level) to a \emph{Yukawa}-like potential
\begin{equation}\label{eq:dlvo-potential}
U_{ \text{DLVO} }(r_{ij}) = V_{ij} \frac{\exp(-\kappa \, r_{ij})}{r_{ij}},
\end{equation}
with the pair interaction strength $V_{ij}$, the inverse Debye screening length $\kappa$, and the particle distance $r_{ij}$.
The interaction parameters are set in accordance with real suspensions of charged silica particles with a diameter of about $d \approx 26\,nm$ \cite{Klapp2007}, yielding $\kappa d \approx 3.2$.
In order to account for the steric repulsion between the macroions we supplement the DLVO potential by a soft-sphere (SS) potential, which is given by the repulsive part of the Lennard-Jones potential
\begin{equation}\label{eq:soft-sphere potential}
U_{ \text{SS} }(r_{ij}) = 4\epsilon_{ \text{SS} }\lr{\frac{d_{ij} }{r_{ij} } }^{12},
\end{equation}
with the interaction strength $\epsilon_{ \text{SS} }$ and the mean particle diameter $d_{ij} = \lr{d_i+d_j}/2$.
Therefore, the total particle interaction between two macroions reads
\begin{equation}\label{eq:total particle interaction}
U_{inter}(r_{ij}) = U_{ \text{DLVO} }(r_{ij}) + U_{ \text{SS} }(r_{ij}).
\end{equation}
Following previous studies, the total particle interaction potential is truncated at a cutoff radius $r_{c} \approx 3d$ and shifted accordingly \cite{Vezirov2013,Vezirov2015}.
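For illustration, the truncated-and-shifted potential of \eref{total particle interaction} can be coded compactly. In the following sketch the parameter values are illustrative placeholders, with lengths in units of $d$ and energies in units of $k_BT$:

```python
import numpy as np

def u_pair(r, V=1.0, kappa=3.2, d=1.0, eps_ss=1.0, rc=3.0):
    """DLVO + soft-sphere pair potential, truncated and shifted at rc.

    Lengths in units of the particle diameter d; V and eps_ss in k_B T.
    """
    def u_bare(x):
        return V*np.exp(-kappa*x)/x + 4.0*eps_ss*(d/x)**12
    r = np.asarray(r, dtype=float)
    return np.where(r < rc, u_bare(r) - u_bare(rc), 0.0)
```

Subtracting $u_{\rm bare}(r_c)$ removes the (small) discontinuity of the potential at the cutoff.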
To mimic the slit-pore geometry, the colloids are confined by two plane-parallel soft walls extended infinitely in $x$- and $y$-direction and separated in $z$-direction by a distance $L_z$ (see \fref{sketch}). The interaction between the colloids and the walls is described by
\begin{equation}\label{eq:soft-wall potential}
U_{wall}(z_i) = \frac{4\pi\epsilon_w}{5} \LR{ \left( \frac{d_{i,w} }{ L_z/2 - z_i } \right)^9 + \left( \frac{d_{i,w} }{ L_z/2 + z_i } \right)^9 },
\end{equation}
with $z_i$ being the $z$-coordinate of particle $i$, the mean wall diameter $d_{i,w} = (d_i+d_w)/2$, the wall diameter $d_w = d$, and the wall-interaction strength $\epsilon_w$.
Equation~(\ref{eq:soft-wall potential}) is obtained by integrating over a half-space of continuously distributed uncharged soft wall-particles, where the interaction between the wall- and the fluid particles is set to the repulsive part of the Lennard-Jones potential [see \eref{soft-sphere potential} with diameter $d_w$].
It is widely adopted as a model for the fluid-wall interaction \cite{Jean-PierreHansen2013, Klapp2007}.\\
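A corresponding sketch of the fluid-wall potential of \eref{soft-wall potential}, again with illustrative parameter values in reduced units:

```python
import numpy as np

def u_wall(z, Lz=2.2, d_iw=1.0, eps_w=1.0):
    """Integrated 9-power soft-wall potential for walls at z = +/- Lz/2."""
    pref = 4.0*np.pi*eps_w/5.0
    return pref*((d_iw/(Lz/2.0 - z))**9 + (d_iw/(Lz/2.0 + z))**9)
```

The potential is symmetric in $z$ and diverges as a particle approaches either wall.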
In this study, we focus on systems where $L_z$ is of the order of the particle diameter and the density is rather high.
In such situations the colloids arrange in well-defined layers with a solidlike in-plane structure (at least in equilibrium).
Further, we consider both one-component systems and a special type of binary mixture.
The latter involves particles with two different diameters, the idea being to create a structure with a mismatch of the underlying structural length scales of the corresponding pure systems.
Specifically, we aim to create a structure where small colloidal particles form a "top" layer on a crystal of larger particles.
In order to stabilize such an asymmetric situation (which would not arise with a symmetric external potential), we supplement the confinement potential by a linear "sedimentation" potential
\begin{equation}\label{eq:sedimentation potential}
U_{sed}(z_i) = \epsilon_{sed} d_i^3 z_i,
\end{equation}
with the sedimentation potential strength $ \epsilon_{sed} $.
This potential can be formally interpreted as the first-order term of a Taylor expansion of the gravitational potential in $z_i$ \cite{Cuetos2006, Ginot2015, Heras2016}.
The resulting force $F_{sed}\lr{\vec{r}_i} = -\nabla_{\vec{r}_i} U_{sed}\lr{z_i}$ depends for constant $ \epsilon_{sed} $ only on the diameter $d_i$ of the particle $i$ and therefore leads to the sedimentation of large colloids.
For appropriate values of $ \epsilon_{sed} $ we find stable configurations consisting of crystalline layers of large particles at the bottom and a layer of small particles on top.
\subsection{\label{sec:simulation details}Simulation details}
\begin{figure}
\includegraphics[width=1.0\linewidth,natheight=1102,natwidth=2102]{fig1.jpg}
\caption{(Color online) Sketch of the model system, involving colloidal particles in narrow slit-pore confinement and linear shear flow in $x$-direction with gradient $\dot{\gamma} z$ in $z$-direction. Periodic boundary conditions are applied in $x$- and $y$-direction. The width of the slit-pore is set to $L_z$.}
\label{fig:sketch}
\end{figure}
We perform standard (overdamped) BD simulations to examine the nonequilibrium properties and dynamics of our model systems.
The position $\vec{r}_i$ of particle $i$ is advanced according to the equation of motion \cite{Ermak1975}
\begin{equation}\label{eq:equation of motion}
\vec{r}_i \lr{ t + \delta t } = \vec{r}_i \lr{ t } +\mu \vec{F}_i \lr{ \llrr{ \vec{r} } } \delta t + \delta \vec{W}_i + \dot{\gamma} z_i \delta t \vec{e}_x,
\end{equation}
where $\vec{F}_i$ is the total conservative force
(stemming from two-particle interactions [see \eref{total particle interaction}], particle-wall interactions [see \eref{soft-wall potential}], and the sedimentation potential [see \eref{sedimentation potential}])
acting on particle $i$, $\llrr{ \vec{r} } = \vec{r}_1 ,\ldots, \vec{r}_N $ is the set of particle positions, and $\delta t$ is the time step.
Within the framework of BD, the influence of the solvent is mimicked by a single-particle frictional- and random force.
The inverse friction constant defines the mobility $\mu= D_0 /k_BT$, where $D_0$ is the short-time diffusion coefficient, $k_B$ is the Boltzmann constant, and $T$ is the temperature.
The random force is modeled by random Gaussian displacements $\delta \vec{W}_i$ with zero mean and variance $2D_0 \delta t$ for each Cartesian component.
The timescale of the system was set to $\tau=d^2/D_0$, which defines the so-called Brownian time.
We impose a linear shear profile $ \dot{\gamma} z_i \vec{e}_x $ [see last term in \eref{equation of motion}] representing flow in $x$- and gradient in $z$-direction.
The strength of the flow is characterized by the uniform shear rate $\dot{\gamma}$.
This ansatz seems plausible for systems where the impact of the walls on the driving mechanism can be neglected, such as charged colloids confined between likewise charged, smooth walls \cite{Klapp2007, Reinmueller2013a}.
For this situation, the distance between the colloids and the wall is naturally rather large, suggesting that the motion of the colloids is not directly coupled to that of the particles comprising the wall.
Thus, one may assume that the shear flow away from the wall is approximately linear.
We note that, despite the application of a linear shear profile, the real, steady-state flow profile can be nonlinear \cite{Delhommelle2003}.
The present simulation approach has also been employed in other recent simulation studies of sheared colloids \cite{Besseling2012, Cerda2008, Lander2013};
the same holds for the fact that we neglect hydrodynamic interactions.
Furthermore, similar approaches have been employed in shear flow simulations of polymers at an interface \citep{Kekre2010,Radtke2014} and active particles in confinement \cite{Apaza2016}.
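A single update according to \eref{equation of motion} amounts to an Euler-Maruyama step. The following sketch summarizes the scheme; array names and shapes are illustrative:

```python
import numpy as np

def bd_step(r, forces, dt, D0, mu, gamma_dot, rng):
    """One overdamped BD step with imposed linear shear flow.

    r, forces: (N, 3) arrays; flow is along x, gradient along z.
    """
    noise = rng.normal(0.0, np.sqrt(2.0*D0*dt), size=r.shape)
    r_new = r + mu*forces*dt + noise
    r_new[:, 0] += gamma_dot * r[:, 2] * dt   # shear term  gamma_dot * z * e_x
    return r_new
```

Each Cartesian component of the random displacement has zero mean and variance $2D_0\,\delta t$, as required by the fluctuation-dissipation relation for the overdamped dynamics.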
For the one-component bilayer- and trilayer system, the number density $\rho d^3 = 0.85$ and the slit-pore width $L_z = 2.2d, 3.2d$ are chosen following previous studies \cite{Vezirov2013,Vezirov2015}.
The particle interaction parameters are set according to experimental setups for particles with diameter $d \approx 26\,nm$ and valency $Z=35$ \cite{Zeng2011,Klapp2007}, yielding $\kappa d\approx3.2$.
For the two-component system, an additional small particle species is introduced with diameter $d_2 = 0.42d$ and valency $Z_2 = 0.17Z$, which are set according to experimental setups \cite{Zeng2011}, where we set $\kappa d \approx 3.3$ for all particles.
The number density $\rho d^3 = 1.226$ and the slit-pore width $L_z = 2.65d$ of the two-component system are chosen such that the volume density is comparable to the one-component system.
The sedimentation potential strength is set to $ \epsilon_{sed} =300k_BT/d^4$ for the two-component system and zero for all one-component systems.
In fact, we find stable asymmetric configurations in the range of $250 \leq \epsilon_{sed} \, d^4 / k_B T \leq 450$.
For smaller values of $ \epsilon_{sed} $, the sedimentation force is insufficient to prevent mixing.
Conversely, larger values of $ \epsilon_{sed} $ lead to reentrant mixing due to an unrealistically strong compression of the layers.
We consider $N=1058$ and $N = 1587$ large particles for the one-component bilayer and trilayer system, respectively.
The two-component system consists of $N_1 = 1058$ large particles and $N_2=529$ small particles.
All systems were equilibrated for more than $10^7$ steps ($t > 100\tau$), with the discrete time step $\delta t = 10^{-5}\tau$.
After that, the shear force was switched on and the simulation was carried out for an additional time period of $t = 100\tau$, in which the systems reached a steady state.
Only after this period we started with the calculation of material properties.
\section{\label{sec:average motion}Simulation results for average motion}
\begin{figure*}
\includegraphics[width=1.0\linewidth,natheight=1168,natwidth=4260]{fig2.jpg}
\caption{
(Color online) Average velocity $\av{v_R} $ in flow ($x$-) direction of the top layer relative to the bottom layer(s) for a) the one-component bilayer, b) the one-component trilayer and c) the two-component trilayer system. The corresponding in-plane structure is indicated by the filling pattern.
}
\label{fig:average motion}
\end{figure*}
As a starting point, we analyze the dynamics of the model systems by calculating the average velocity $\av{v_R}$ of the crystal layers in flow ($x$-) direction relative to each other.
The average relative velocity in $y$- and $z$-direction vanishes for all considered systems.
Results for $\av{v_R}$ in the one-component bilayer and trilayer system as well as the two-component system are plotted in Figs. \ref{fig:average motion}a)-c).
In those figures, the dynamical states of the considered systems are indicated by different patterns.
These states were distinguished by monitoring the four- and sixfold in-plane angular bond order parameters $\Psi_4, \Psi_6$ \cite{Vezirov2013}.
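The diagnostic $\av{v_R}$ follows directly from the layer trajectories. A minimal sketch, assuming unwrapped $x$-coordinates stored in hypothetical per-layer arrays:

```python
import numpy as np

def relative_velocity(x_top, x_bot, dt):
    """Mean x-velocity of the top layer relative to the bottom layer.

    x_top, x_bot: (n_frames, n_particles) arrays of unwrapped x-coordinates.
    """
    dX = x_top.mean(axis=1) - x_bot.mean(axis=1)   # relative centre of mass
    return (dX[-1] - dX[0]) / ((len(dX) - 1)*dt)
```

Using the centre-of-mass trajectory of each layer averages out the single-particle thermal noise.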
For the one-component bilayer system [\fref{average motion}a)], we observe a pronounced depinning transition at the critical shear rate $\dot{\gamma}_c^{\text{BD}}\tau \approx 214$.
For subcritical shear rates $\dot{\gamma} < \dot{\gamma}_c^{\text{BD}}$, the system is "locked", with the colloids being pinned (apart from thermal fluctuations) on the sites of the crystalline layers with quadratic in-plane structure.
Increasing the shear rate then leads to a depinning of the crystalline layers and melting of the in-plane structure.
For large shear rates, a hexagonal crystalline order is recovered, which is accompanied by a collective zig-zag motion of the colloidal crystal layers \cite{Vezirov2013}.
A similar depinning transition (yet no subsequent crystallization) is found for driven monolayers on a periodic potential \cite{Achim2009,Bohlein2012,Hasnain2013,Juniper2015}.
Using this connection, we can formulate a simple model to estimate the critical shear rate.
This is discussed in detail in \sref{bilayer}.
We now consider the one-component trilayer system.
Here, the dynamics can be characterized by the average velocity of the top layer relative to the two bottom layers, which is plotted in \fref{average motion}b) \footnote{
\label{note: symmetric trilayer}
For the symmetric trilayer system, the middle layer does not move $\av{v_{mid} } = 0$ and the outer layers move with the same velocity $\av{v_{out} } $ in opposite directions, consistent with the velocity profile imposed by the linear shear flow. Therefore, the velocity of the top layer relative to the two bottom layers is given by $ \av{v_R} = \av{v_{out} } - ( \av{v_{mid} } - \av{v_{out} } )/2 = 1.5 \av{v_{out} } $
}.
In contrast to the bilayer system, the trilayer system displays a continuous onset of motion (i.e., no jump of the velocity) due to a new intermediate laned state.
Again, for small shear rates the colloidal layers are pinned in quadratic in-plane lattices.
However, upon increasing the shear rate, the middle layer becomes unstable and splits into two sublayers, which are each pinned to one outer layer.
The colloids in the sublayers form lanes, moving with the velocity of the respectively closest outer layer \cite{Vezirov2015a}.
This leads to a nonlinear velocity profile $\av{v_R}\lr{z}$ until the melted state is reached, where a quasi-linear velocity profile is recovered.
For large shear rates, the system forms a hexagonal steady state, similar to the bilayer system (see also \sref{trilayer}).
Upon introducing a second species, the average velocity behaves very differently from that of the one-component systems, as seen in \fref{average motion}c).
Specifically, we consider the average velocity of the top layer consisting of small colloids relative to the bottom layers consisting of large colloids.
The latter are locked in a quadratic crystalline structure for all considered shear rates.
Contrary to the one-component systems, the top layer of the binary system is never pinned to the bottom layers.
Instead, the top layer (which is weakly ordered, i.e., $\Psi_4 = 0.7$ and $\Psi_6=0.36$ for $\dot{\gamma}\tau=0$) transitions continuously into a melted state with increasing shear rate.
This is accompanied by a continuous onset of motion and results in a finite average velocity for all nonzero shear rates.
In order to understand this dynamics, we investigate the local structure and dynamics of the top layer in \sref{density excitations}.
\section{\label{sec:bilayer}Theory of depinning in the bilayer system}
In this section we will present a simple model, which allows us to estimate the critical shear rate of the depinning transition based on the equilibrium configuration.
Within this model, we map the dynamics of the bilayer system to the motion of a single particle in a 1D periodic potential.
This is in the spirit of the FK model \cite{Braun2000}, which considers a 1D chain of (harmonically) coupled colloids on a periodic sinusoidal substrate potential.
Importantly, the resulting equation of motion can be solved analytically, allowing a direct (yet approximate) calculation of the average relative velocity and also of the shear stress of the bilayer system.
\subsection{\label{sec:driven monolayers}Driven monolayers}
The 1D overdamped equation of motion for a particle $i$ in a driven monolayer is given by \cite{Hasnain2013, Risken1996}
\begin{equation}\label{eq:equation of motion driven monolayer}
\mu^{-1} \dot{x}_i = \sum_{j \neq i}^{N_L} F_{inter}\lr{x_{ij} } + F_{sub}\lr{x_i} + F_d + \Gamma_{i}\lr{t}
\text{,}
\end{equation}
with $N_L$ being the number of particles in the monolayer, $F_{inter}$ the two-particle interaction force, $F_{sub}$ the periodic substrate force, $F_d$ the constant driving force, and $\Gamma_{i}=\mu^{-1}\dot{W}_i$ the random force.
In the following we focus on a special case, which involves an infinitely stiff crystalline monolayer (corresponding to the strong coupling limit).
In this limit, the average velocity of all particles is determined by the velocity of the center of mass, $X$, where $X = N_L^{-1}\sum_{i=1}^{N_L} x_i$.
Indeed, for a large number of particles, $N_L \to \infty$, the random forces acting on $X$ vanish.
Further, considering radial pair interactions, the sum of all interaction forces vanishes due to the crystal symmetries.
We can thus restrict our consideration to the motion of $X$, determined by
\begin{equation}\label{eq:equation of motion stiff monolayer}
\mu^{-1} \dot{X} = F_{sub}\lr{X} + F_d
\text{.}
\end{equation}
For a sinusoidal substrate force $F_{sub}\lr{X} = F_{max} \sin\lr{ 2\pi X / a }$, this equation can be solved analytically \cite{Risken1996}.
The resulting average relative velocity is given by \cite{Hasnain2013}
\begin{equation}\label{eq:mean velocity stiff monolayer}
\av{v_R} = a \lr{\int_{0}^{a} \dot{X}^{-1} dX }^{-1} = \mu\sqrt{F_d^2-F_{max}^2}
\text{.}
\end{equation}
Equation (\ref{eq:mean velocity stiff monolayer}) expresses the fact that the crystal monolayer is pinned ($\av{v_R} = 0$) for driving forces smaller than the critical driving force ($F_{d,c} = F_{max}$) and displays a running state ($\av{v_R} > 0$) for larger driving forces.
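The result \eref{mean velocity stiff monolayer} is readily checked numerically by integrating the inverse velocity over one lattice period. A sketch with illustrative parameters ($\mu = a = 1$):

```python
import numpy as np

def v_mean_numeric(F_d, F_max, mu=1.0, a=1.0, n=200001):
    """<v> = a / (time needed to traverse one lattice period)."""
    X = np.linspace(0.0, a, n)
    inv_v = 1.0/(mu*(F_d + F_max*np.sin(2.0*np.pi*X/a)))
    period = 0.5*(inv_v[:-1] + inv_v[1:]).sum()*(X[1] - X[0])  # trapezoid rule
    return a/period

def v_mean_analytic(F_d, F_max, mu=1.0):
    """Stiff-layer result: pinned below F_max, sliding above."""
    return mu*np.sqrt(F_d**2 - F_max**2) if F_d > F_max else 0.0
```

For $F_d = 2F_{max}$ both routes give $\av{v_R} = \mu\sqrt{3}\,F_{max}$, while for $F_d < F_{max}$ the layer remains pinned.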
\subsection{\label{sec:mapping to shear-driven system}Mapping to shear-driven system}
In order to relate the behavior of the driven monolayer to the dynamics of colloidal layers under shear flow, we need to formulate, for the shear-driven systems, an effective substrate force as well as an effective driving force.
To this end, we focus on the dynamics of the top layer, whereas the bottom layer is assumed to act as a "substrate".
From \eref{equation of motion}, the equation of motion of the center of mass $\Delta\vec{R} = N_L^{-1} \sum_{i=1}^{N_L} \vec{r}_i - \vec{R}_{bot}$ of the top layer relative to the center of mass of the bottom layer ($\vec{R}_{bot}$) in flow ($x$-) direction follows as
\begin{widetext}
\begin{equation}\label{eq:equation of motion center of mass}
\Delta\dot{R}_x = \frac{\mu}{N_L} \sum_{i=1}^{N_L} \lr{ \sum_{j\neq i}^N F_{x}^{inter}\lr{ r_{ij} } + F_x^{wall} + \mu^{-1}\dot{W}_{i,x} }+ \dot{\gamma} \Delta R_z \text{,}
\end{equation}
\end{widetext}
where $N_L$ is the number of particles of the top layer.
Again, for $N_L \to \infty$, the mean of the random forces acting on the layer vanishes, i.e., $N_L^{-1} \sum_{i=1}^{N_L} \mu^{-1}\dot{W}_{i,x} \approx 0$.
The force exerted from the confinement (see \eref{soft-wall potential}) has no $x$-component, thus $F_x^{wall}=0$.
Comparing the remaining terms with \eref{equation of motion stiff monolayer}, we identify the shear force as the driving force, i.e.,
\begin{equation}\label{eq:driving force}
F_d\lr{\Delta R_z}=\mu^{-1} \dot{\gamma} \Delta R_z \text{.}
\end{equation}
Further, the sum of particle interaction forces acting on the layer can be identified as the substrate force, i.e.,
\begin{equation}\label{eq:substrate force}
F_{sub}\lr{\llrr{ \vec{r} } } = \frac{1}{N_L}\sum_{i=1}^{N_L} \sum_{j\neq i}^N F_{x}^{inter}\lr{ r_{ij} } \text{,}
\end{equation}
%
where $\vec{F}^{inter} = -\nabla_{\vec{r}_i} U_{inter}$ is the particle interaction force [see \eref{total particle interaction}] and $\llrr{ \vec{r} } = \vec{r}_1 ,\ldots, \vec{r}_N $ is the set of particle positions.
Here we are interested in the depinning starting from the quadratic (equilibrium) state.
The particle positions (in the absence of noise) are therefore given by $ \vec{r} \lr{t} = \vec{r}_{nm} + \vec{R}\lr{t} $, with the corresponding lattice position $\vec{r}_{nm}$ and the center of mass of the layer, $\vec{R}$.
In this framework, the position on the lattice is given by the primitive vectors and is constant.
Using this ansatz, we can rewrite the substrate force (\ref{eq:substrate force}) for particles of the top layer
\begin{align}\label{eq:reduced substrate force}
F_{sub}\lr{\Delta \vec{R} } &= F_{x}^{inter,bot}\lr{ \Delta \vec{R} } \nonumber\\
&= \frac{1}{N_L}\sum_{i=1}^{N_L} \sum_{j = 1}^{N_{bot}} F_{x}^{inter}\lr{ r_{ij} } \text{,}
\end{align}
where $\vec{F}^{inter,bot}$ is the sum of interaction forces between particles of the top layer and particles of the bottom layer and $N_{bot}$ is the number of particles in the bottom layer.
The corresponding sum within the top layer is zero due to the crystal symmetry.
Inserting \eref{driving force} and \eref{reduced substrate force} into \eref{equation of motion center of mass} and neglecting the noise we obtain
\begin{equation}\label{eq:reduced equation of motion center of mass}
\mu^{-1}\Delta \dot{R}_x = F_{sub} \lr{\Delta \vec{R} } + F_d \lr{\Delta R_z}\text{.}
\end{equation}
The structure of \eref{reduced equation of motion center of mass} is already close to the corresponding monolayer equation \eref{equation of motion stiff monolayer}, yielding the strategy to calculate the critical shear rate via \eref{mean velocity stiff monolayer}.
However, in \eref{reduced equation of motion center of mass}, both the driving force (\ref{eq:driving force}) and the substrate force (\ref{eq:reduced substrate force}) still depend on the layer distance $\Delta \vec{R}$.
To proceed, we make the following \emph{ansatz} for $\Delta \vec{R}$ as function of the (relative) displacement of the center of mass,
\begin{equation}\label{eq:layer distance}
\Delta \vec{R}\lr{X} = X \vec{e}_x + \Delta R_{y}^{eq} \vec{e}_y + \Delta R_{z}\lr{X} \vec{e}_z\text{.}
\end{equation}
According to \eref{layer distance} we set the $x$-component of $\Delta \vec{R}$, $\Delta R_x$, equal to the variable $X$, which represents the center-of-mass coordinate in the 1D driven monolayer [see \eref{equation of motion stiff monolayer}].
Further, the $y$-component is set to its equilibrium value, which is constant ($\Delta R_y^{eq}$).
Indeed, from the symmetry of the system, it follows that $\Delta\dot{R}_y = F_y^{inter,bot}\lr{ \Delta R_y^{eq} } = 0$.
However, this does not hold for the displacement in $z$-direction $\Delta R_z$.
\begin{figure}
\includegraphics[width=1.0\linewidth]{fig3.pdf}
\caption{(Color online) a) Mean layer distance of the one-component bilayer system in dependence of the shear rate. b) Time dependence of the layer distance for $\dot{\gamma}\tau=280$ in the hexagonal steady state.}
\label{fig:mean layer distance bilayer}
\end{figure}
In fact, $\Delta R_z$ depends markedly on the shear rate, see \fref{mean layer distance bilayer}.
In particular, one observes a pronounced increase of $\Delta R_z$ when the system transforms from the quadratic into the hexagonal phase.
Moreover, within the hexagonal state, $\Delta R_z$ actually oscillates in time, mimicking the zig-zag motion of the particles \cite{Vezirov2013}.
In view of the strong dependence of $\Delta R_z$ on the shear rate, it is not surprising that setting $\Delta R_z$ to its constant equilibrium value ($\dot{\gamma}\tau=0$) and using this value for the calculation of $F_d$ and $F_{sub}$ yields a wrong result (specifically an overestimation) for the critical shear rate.
Indeed, this simple calculation yields $\dot{\gamma}_c\tau \approx 330$, which has to be compared to the true value (obtained from BD simulation) of $\dot{\gamma}_c^{\text{BD}}\tau \approx 214$.
A somewhat better result is obtained if one sets $\Delta R_z = \Delta R_z\lr{\dot{\gamma}}$.
However, this requires computing the nonequilibrium properties of the considered system beforehand.
A more desirable strategy would be to define all ingredients for the calculation of the critical shear rate based on the \emph{equilibrium} configuration.
To this end we model the motion of the particles by an optimal path defined for the \emph{equilibrium} configuration (see \eref{optimal path} in Appendix \ref{sec:appendix:optimal path}).
This allows us to obtain analytic expressions for $F_{sub}$ and $F_d$.
Inserting \eref{effective substrate force} and (\ref{eq:layer distance optimal path}) from Appendix \ref{sec:appendix:optimal path} into \eref{reduced equation of motion center of mass}, we obtain an equation for the relative motion of the layers in $x$-direction
\begin{equation}\label{eq:equation of motion simple model}
\dot{X} = \mu \, F_{max} \sin\lr{ \frac{2\pi}{a} X } + \dot{\gamma} Z_A \cos\lr{ \frac{2\pi}{a} X } + \dot{\gamma} Z_0 \text{.}
\end{equation}
This equation can be solved analytically; the resulting trajectories $X\lr{t}$ are given in \eref{trajectories optimal path} in Appendix \ref{sec:appendix:trajectory}.
The average velocity of the layers then follows as
\begin{equation}\label{eq:relative velocity for optimal path}
\av{v_R} = \sqrt{ \dot{\gamma}^2 \lr{ Z_0^2 - Z_{A}^2 } - \mu^2 F_{max}^2 }\text{,}
\end{equation}
yielding the critical shear rate
\begin{equation}\label{eq:critical shear rate}
\dot{\gamma}_c = \frac{\mu\,F_{max} }{ \sqrt{Z_0^2-Z_A^2} } \text{.}
\end{equation}
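The depinning predicted by \eref{critical shear rate} can be illustrated by integrating \eref{equation of motion simple model} directly; a simple explicit-Euler sketch (all parameter values below, including $Z_0$ and $Z_A$, are made up for illustration):

```python
import math

def mean_velocity(gdot, mu=1.0, F_max=1.0, a=1.0, Z0=0.5, ZA=0.3,
                  dt=2e-4, T=200.0):
    """Euler integration of the effective layer equation of motion;
    returns the mean velocity over the second half of the run."""
    n = int(T / dt)
    X = 0.0
    X_half = 0.0
    for i in range(n):
        X += dt * (mu * F_max * math.sin(2.0 * math.pi * X / a)
                   + gdot * ZA * math.cos(2.0 * math.pi * X / a)
                   + gdot * Z0)
        if i == n // 2:
            X_half = X
    return (X - X_half) / (T / 2.0)

# critical shear rate for these (illustrative) parameters
gdot_c = 1.0 / math.sqrt(0.5**2 - 0.3**2)   # = 2.5
```

Below $\dot{\gamma}_c$ the trajectory settles into a fixed point (pinned state, $\av{v_R}=0$); above it the layer runs with the mean velocity of \eref{relative velocity for optimal path}.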
From the trajectories $X\lr{t}$, particularly their long-time solution $\tilde{X}\lr{t}$ given in \eref{long-time position} in Appendix \ref{sec:appendix:trajectory}, we can further calculate the mean shear stress.
The latter is determined (neglecting kinetic contributions \cite{Vezirov2015}) via the $x$-$z$-component of the stress tensor,
\begin{equation}\label{eq:shear stress}
\sigma_{xz} = \av{ \frac{1}{V} \sum_{i}\sum_{j>i} F_{x}^{inter}\lr{r_{ij}}z_{ij} }\text{,}
\end{equation}
where $V$ is the volume of the simulation box and $z_{ij}$ is the particle distance in $z$ direction.
Within our simple model, the contribution to the shear stress from particles within the same layer vanishes ($z_{ij} = 0$).
Therefore, the relevant particle interaction forces are given by the substrate force $F_{sub}$ [see \eref{effective substrate force} in Appendix \ref{sec:appendix:optimal path}] and the particle distance is defined by the layer distance $\Delta R_z\lr{X}$ [see \eref{layer distance optimal path}].
The mean shear stress of the system then reads
\begin{equation}\label{eq:shear stress simple model}
\sigma_{xz} = \frac{N}{4Vt_{a}} \int_{0}^{t_{a}} F_{sub}\lr{ \tilde{X}\lr{t} }\Delta R_{z}\lr{ \tilde{X}\lr{t}}dt\text{,}
\end{equation}
where $N$ is the number of particles and $t_{a}=a/\av{v_R}$ is the time period for the top layer to move over one lattice position.
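With the substitution $dt = dX/\dot{X}$, the time average in \eref{shear stress simple model} reduces to a quadrature over one lattice period. Assuming, consistent with \eref{equation of motion simple model}, the forms $F_{sub}\lr{X} = F_{max}\sin\lr{2\pi X/a}$ and $\Delta R_z\lr{X} = Z_0 + Z_A\cos\lr{2\pi X/a}$ for the appendix expressions, a sketch reads (all numerical values are illustrative):

```python
import numpy as np

# illustrative parameters (hypothetical, reduced units); here gdot_c = 2.5
mu, a, F_max, Z0, ZA, N_over_V = 1.0, 1.0, 1.0, 0.5, 0.3, 1.0

def sigma_xz(gdot, n=200000):
    """Mean shear stress in the running state (requires gdot > gdot_c),
    evaluated as sigma = N/(4*V*t_a) * int_0^a F_sub*dRz/Xdot dX."""
    X = (np.arange(n) + 0.5) * (a / n)       # midpoint rule, one period
    F_sub = F_max * np.sin(2.0 * np.pi * X / a)
    dRz = Z0 + ZA * np.cos(2.0 * np.pi * X / a)
    Xdot = mu * F_sub + gdot * dRz           # > 0 in the running state
    t_a = (a / n) * np.sum(1.0 / Xdot)       # time to advance one period
    return N_over_V / (4.0 * t_a) * (a / n) * np.sum(F_sub * dRz / Xdot)
```

As $\dot{\gamma}$ grows, the weight $1/\dot{X}$ becomes increasingly uniform and the stress magnitude decays toward the free-sliding limit.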
\subsection{\label{sec:bilayer results}Numerical results for the bilayer system}
\begin{figure}
\includegraphics[width=1.0\linewidth]{fig5.pdf}
\caption{(Color online) Average velocity of the top layer relative to the bottom layer in the bilayer system from BD simulations (red) and from \eref{relative velocity for optimal path} (blue dashed), revealing the critical shear rate $\dot{\gamma}_c$.}
\label{fig:relativeVelocityBilayer}
\end{figure}
To judge the performance of the effective theory, outlined in \sref{mapping to shear-driven system}, we compare in \fref{relativeVelocityBilayer} the average velocity numerically obtained from \eref{relative velocity for optimal path} with corresponding BD simulation data.
Focusing first on the critical shear rate $\dot{\gamma}_c$, we find that the effective model is in good quantitative agreement ($\dot{\gamma}_c\tau\approx200$) with the BD results ($\dot{\gamma}_c^{\text{BD}}\tau\approx214$).
However by construction, the model predicts a \emph{continuous} transition from the pinned to the free sliding state.
This is clearly in contrast to the BD results, which indicate a discontinuous transition (accompanied by jumps in the velocity) from the quadratic to the melted state, as well as from the melted to the hexagonal state.
As analyzed in Ref. \cite{Achim2009}, these discontinuous transitions are related to the shear-induced restructuring of the in-plane order [see also \fref{average motion}a)].
Obviously, these complex processes are beyond the scope of the proposed model.
Still, the good estimate for $\dot{\gamma}_c$ suggests that the impact of structural changes occurring at larger $\dot{\gamma}$, as well as of thermal noise, can be neglected if we just focus on the depinning itself.
A further interesting aspect arising from the effective model is that the critical shear rate [see \eref{critical shear rate}] strongly depends on the distance of the layers in $z$-direction.
In particular, an increase of the layer distance leads to a reduction of the critical shear rate (due to the increase of $F_d$ as well as the decrease of $F_{max}$).
This is a physically plausible result.\\
\begin{figure}
\includegraphics[width=1.0\linewidth]{fig6.pdf}
\caption{(Color online) Shear stress of the bilayer system from BD simulations (red) and from \eref{shear stress simple model} (blue dashed).}
\label{fig:bilayer rheology}
\end{figure}
We now turn to the shear stress, $\sigma_{xz}$, in the long-time limit.
In \fref{bilayer rheology} we compare the shear rate dependence of $\sigma_{xz}$ obtained from \eref{shear stress simple model} with corresponding BD results \cite{Vezirov2015}.
Starting from the equilibrium configuration, the simulated system displays a quasi-linear increase of $\sigma_{xz}$, corresponding to an elastic deformation of the quadratic structure in the crystalline layers.
Once the system melts, it becomes mechanically unstable, as reflected by the negative slope in $\sigma_{xz}$.
Finally, after the recrystallization into a hexagonal lattice $\sigma_{xz}$ increases again with $\dot{\gamma}$ \cite{Vezirov2015}.
Similar to these BD results, the effective model predicts an approximately linear increase of $\sigma_{xz}$ for subcritical shear rates $\dot{\gamma} \leq \dot{\gamma}_c$ as well as a sharp, nonlinear increase close to the critical shear rate.
For supercritical shear rates $\dot{\gamma} > \dot{\gamma}_c$, the shear stress then decreases essentially exponentially (as seen from a logarithmic plot) to zero, which corresponds to the shear stress of a freely sliding layer.
Overall, the effective model thus provides a reasonable description of the shear stress within the quadratic and melted state, similar to the estimated average velocity discussed before.
However, for $\dot{\gamma} \gg \dot{\gamma}_c$, the shear stress deviates markedly from that of the true system, where the structure becomes again crystalline and the particles perform a characteristic zig-zag motion in $y$-direction \cite{Vezirov2013}.
\section{\label{sec:trilayer}Depinning in the symmetric trilayer system}
\begin{figure}
\includegraphics[width=1.0\linewidth]{fig7.pdf}
\caption{(Color online) Average velocity of the top layer relative to the two bottom layers of the symmetric trilayer system from BD simulations (red) and the simple model \eref{relative velocity for optimal path} (blue) with critical shear rate $\dot{\gamma}_c$.}
\label{fig:relative velocity trilayer optimal}
\end{figure}
As discussed in \sref{average motion}, the one-component trilayer system also displays a depinning transition similar to the bilayer system (see also Ref. \cite{Vezirov2015}).
Applying the mapping strategy presented in the previous section to the trilayer system, we can calculate the mean velocity of the top layer relative to the two bottom layers \cite{Note1}, which is plotted in \fref{relative velocity trilayer optimal}.
Contrary to the case of the bilayer, we find that the model here overestimates the critical shear rate.
This is due to the additional laned state \cite{Vezirov2015a}, in which the middle layer becomes unstable and splits into two sublayers.
Still, closer inspection shows that the model does predict the onset of the melted state (which occurs at $\dot{\gamma}\tau \approx 34$ according to the BD simulation) in good quantitative agreement with BD data.
This suggests that the melting of the crystal layers is indeed induced by the depinning of the outer layers.
For even larger shear rates, the average velocity of the simple model is, in fact, in nearly perfect agreement with the corresponding BD result, despite the fact that the true system has undergone an additional structural transition from a melted into a hexagonal state [see \fref{average motion}c)].
\section{\label{sec:density excitations}Asymmetric Trilayer: Density excitations}
\begin{figure}
\includegraphics[width=1.0\linewidth,natheight=746,natwidth=2027]{fig8.jpg}
\caption{(Color online) a) Side view and b) top view on the binary system in equilibrium ($\dot{\gamma}\tau=0$), displaying two quadratic bottom layers (red) and one top layer containing small particles (blue). }
\label{fig:binary configuration}
\end{figure}
In this section we turn to a binary system of large and small colloids, where the different sizes induce a mismatch of the structural length scales of the corresponding pure systems.
Applying a constant sedimentation force $F_{sed}=-\nabla_{\vec{r}}U_{sed}\lr{z_i}$ [see \eref{sedimentation potential}], we can stabilize asymmetric configurations already at $\dot{\gamma}\tau = 0$.
These consist of two bottom layers containing only large colloids and one layer of small colloids on top, as shown in \fref{binary configuration}a).
The large colloids form crystalline layers with quadratic in-plane structure.
This structure, in turn, induces a semi-crystalline structure (characterized by order parameter values $\Psi_4 = 0.7$ and $\Psi_6=0.36$ at $\dot{\gamma}\tau=0$) of the particles in the top layer [see \fref{binary configuration}b)].
We note that the density of small particles is chosen such that, in principle, all "potential valleys" created by the bottom layers are filled with exactly one small particle.
For this density, the equilibrium structure of the small particles alone is liquidlike.
For the following investigations under shear, we will consider only shear rates which are subcritical with respect to the depinning of the two bottom layers, as well as insufficient to induce a mixing of the two colloidal species.
The critical shear rate of the bottom layers follows from \eref{critical shear rate} as $\dot{\gamma}_c\tau \approx 98$.
We note that the range of relevant shear rates depends on the sedimentation potential strength $ \epsilon_{sed} $, since the latter influences the layer distance.
In contrast, we find that the dynamical behavior (in particular, the relation between the average- and the kink velocity to be discussed in \sref{estimate average motion via local dynamics}) is rather independent of the particular choice of $ \epsilon_{sed} $.
\subsection{\label{sec:structural properties}Structural properties of the top layer}
\begin{figure}
\includegraphics[width=1.0\linewidth]{fig9.pdf}
\caption{(Color online) Top view on the equilibrium configuration ($\dot{\gamma}\tau=0$) and corresponding Voronoi tessellation (black) of particles of the top layer. The particles are colored with respect to their normalized Voronoi cell areas $\rho_L A_{VC}$. The position of antikinks $\rho_L A_{VC} > 1.2$ (violet) and kinks $\rho_L A_{VC} < 0.8$ (gray) are determined via a cluster identification algorithm.}
\label{fig:voronoi configuration}
\end{figure}
To analyze the local structure of the top layer we calculate the corresponding 2D Voronoi tessellation \cite{Aurenhammer1991}, which divides the total area of the layer into "eigencells".
Each eigencell contains exactly one particle.
The boundaries of each eigencell follow from analyzing the connecting vectors $\vec{r}_{ij}$ of the central particle with all of its neighbors; each boundary then corresponds to the perpendicular bisector of $\vec{r}_{ij}$, i.e., a line perpendicular to $\vec{r}_{ij}$ and cutting it at its midpoint.
The resulting area of the Voronoi cell, $A_{VC}$, allows us to define a local density proportional to the inverse of $A_{VC}$.
The Voronoi tessellation of the top layer in equilibrium ($\dot{\gamma}\tau=0$) is shown in \fref{voronoi configuration}.
In this figure, the particles are colored according to their normalized Voronoi cell area $\rho_L A_{VC}$, where $\rho_L = N_L / L^2$ is the average 2D number density of the layer ($L$ is the length of the simulation box and $N_L$ is the number of particles in the top layer).
In a \emph{perfect} lattice, one would have $\rho_L A_{VC}=1$ throughout the system.
Inspecting \fref{voronoi configuration}, we find that the true structure in the top layer is characterized by a substantial amount of defects.
Specifically, one observes both, cells with enhanced area relative to the ideal case (corresponding to a smaller-than-average local density) and cells with reduced area (corresponding to a locally increased density).
In analogy to the 1D FK model we call these defects "antikinks" ($\rho_L A_{VC} > 1.2$) and "kinks" ($\rho_L A_{VC} < 0.8$) \cite{Braun1998, Braun2000}, respectively.
In the original FK model, an ideal kink consists of a single additional particle on a fully occupied lattice \cite{Braun1998}.
This additional particle can push a neighboring particle to the next occupied lattice position, leading to a hopping wave.
Similarly, an ideal antikink corresponds to a missing particle, allowing the neighboring particles to pull a particle to the unoccupied lattice site.
In other words, the kinks (antikinks) imply that there is more than (less than) one particle per lattice site.
Contrary to that, we find that in our system most of the defects are formed by multiple additional or missing particles.
Furthermore, the defects extend over several lattice sites.
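The Voronoi-area classification described above can be reproduced with standard tools; a minimal sketch using `scipy.spatial.Voronoi` (the slightly jittered perfect lattice below is an invented test case, not simulation data):

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_areas(points):
    """Shoelace area of every bounded Voronoi cell;
    open cells at the boundary are returned as np.nan."""
    vor = Voronoi(points)
    areas = np.full(len(points), np.nan)
    for i, ireg in enumerate(vor.point_region):
        region = vor.regions[ireg]
        if len(region) == 0 or -1 in region:
            continue                  # cell extends to infinity
        v = vor.vertices[region]
        x, y = v[:, 0], v[:, 1]
        areas[i] = 0.5 * abs(np.dot(x, np.roll(y, 1))
                             - np.dot(y, np.roll(x, 1)))
    return areas

# demo: jittered perfect square lattice with rho_L = 1, so interior
# cells must have rho_L*A_VC ~= 1 (no kinks, no antikinks)
rng = np.random.default_rng(1)
n_side = 20
pts = np.array([(i, j) for i in range(n_side) for j in range(n_side)],
               dtype=float)
pts += 1e-6 * rng.standard_normal(pts.shape)
A = voronoi_areas(pts)
interior = np.array([1 <= i < n_side - 1 and 1 <= j < n_side - 1
                     for i in range(n_side) for j in range(n_side)])
A_int = A[interior]
```

In a sheared configuration, the same routine would flag particles with $\rho_L A_{VC} < 0.8$ as kinks and $\rho_L A_{VC} > 1.2$ as antikinks.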
\begin{figure}
\includegraphics[width=1.0\linewidth]{fig10.pdf}
\caption{(Color online) Distribution of the Voronoi cell areas in the top layer for different shear rates $\dot{\gamma}\tau=0,4,8,26$ (black, red, blue, green respectively). The Voronoi cell area distribution of the quadratic bottom layers (gray dashed) is plotted as a reference. The vertical dashed lines indicate the threshold values for kinks ($\rho_L A_{VC} < 0.8$) and antikinks ($\rho_L A_{VC}>1.2$).}
\label{fig:voronoi cell area histogram}
\end{figure}
To quantify the number of particles contributing to defect structures we have calculated the time averaged distribution of Voronoi cell areas, which is plotted in \fref{voronoi cell area histogram}.
Included is the result for the bottom layers (dashed line).
These form a nearly perfect quadratic structure as reflected by the sharp peak at $\rho_L A_{VC}=1$.
Inspecting now the top layer distribution we observe, at $\dot{\gamma}\tau=0$, that $P\lr{A_{VC}}$ still has a maximum at $\rho_L A_{VC} = 1$.
However, there are also pronounced, asymmetric flanks, corresponding to particles in kinks ($\rho_L A_{VC} < 0.8$) and antikinks ($\rho_L A_{VC} > 1.2$).
We note that the left-hand flank is bounded by the tightest possible packing of small colloids.
This explains the rapid decrease of the number of particles with $\rho_L A_{VC} < 0.5$.
Such a limitation does not exist for the number of antikinks, which explains the much broader shape of $P\lr{A_{VC}}$ at the right side.
Considering now the impact of shear, we observe, first, a progressive decrease and finally, a disappearance, of the maximum of $P\lr{A_{VC}}$ at $\rho_L A_{VC}=1$.
This reflects the decrease of the number of particles with local quadratic order.
At the same time, the number of particles involved in kinks and antikinks increases.
Specifically, we observe that the area distribution for kinks increases mainly in height, but not in width, consistent with the above-mentioned limitation.
Therefore, the number of kinks with similar values of the local density increases with $\dot{\gamma}$.
This is in contrast to the antikinks, whose area distribution increases mainly in width, corresponding to an increasing size of defect structures with multiple missing particles.
Finally, for shear rates beyond the critical shear rate ($\dot{\gamma}_c\tau\approx 21$) of the idealized (crystalline) top layer (given by \eref{critical shear rate}), the semi-quadratic structure of the real top layer is essentially lost and most particles contribute to large defect structures.
\subsection{\label{sec:single particle and cluster dynamics}Single particle and cluster dynamics}
\begin{figure}
\includegraphics[width=1.0\linewidth]{fig11.pdf}
\caption{(Color online) a) Displacement $\Delta x_i\lr{t} = x_i\lr{t}-x_i\lr{0}$ ($x$-direction) of a randomly chosen single particle in the top layer as function of dimensionless time and b) distribution of the waiting times $t_w$ for various shear rates.}
\label{fig:single particle trajectories}
\end{figure}
We now turn to the time-resolved dynamical behavior.
To start with, we plot in \fref{single particle trajectories}a) the displacement of a single particle in the top layer in $x$-direction for different shear rates.
In all cases the particle spends a relatively long time at a lattice position before jumping to the next one.
In other words, the waiting time $t_w$ (defined according to the "minimum-based" definition in \cite{Gernert2014}) is larger than the Brownian timescale $\tau$ characterizing the diffusion of the free small particle over the distance $d$.
On increasing $\dot{\gamma}$, the jumps become more frequent, as expected due to the stronger drive which helps to overcome the "barriers" generated by the bottom layers.
This is also reflected by the distribution of the waiting times [see \fref{single particle trajectories}b)], whose maximum shifts to shorter times for increasing shear rates.
At this point we recall the increase of the number of kinks with $\dot{\gamma}$ discussed in \sref{structural properties}.
Having this in mind, the enhancement of the jump frequency (i.e., $1/t_w$) may be taken as an indication that the jumping particle is part of a kink.
We also note that, in contrast to the FK model, the particles in the present system can jump multiple lattice sites at once.
Clearly (see \fref{single particle trajectories}) this becomes more likely for large shear rates (e.g. $\dot{\gamma}\tau = 8$).
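A simplified version of the waiting-time analysis can be sketched as follows; the nearest-site assignment used here is an assumption standing in for the minimum-based definition of Ref. \cite{Gernert2014}:

```python
import numpy as np

def waiting_times(x, dt, a=1.0):
    """Dwell times between site-to-site jumps of a single-particle
    trajectory, using a simple nearest-site assignment."""
    sites = np.round(np.asarray(x) / a).astype(int)
    jump_steps = np.flatnonzero(np.diff(sites) != 0)
    return np.diff(jump_steps) * dt

# synthetic trajectory: the particle dwells 100 steps at each of four sites
x = np.repeat([0.0, 1.0, 2.0, 3.0], 100)
tw = waiting_times(x, dt=0.01)   # two inter-jump intervals of 1.0 each
```

A histogram of the returned dwell times then yields the waiting-time distribution, whose maximum shifts to shorter times with increasing shear rate.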
\begin{figure}
\includegraphics[width=1.0\linewidth]{fig12.pdf}
\caption{(Color online) Example of the motion of density excitations in the top layer. Parts a)-d) show a section of the top layer (color coded Voronoi tessellation) at four different time steps. The circle (purple) indicates the time-dependent position of a kink ($\rho_L A_{VC} < 0.8$).}
\label{fig:voronoi configuration dynamics}
\end{figure}
In addition to tracking single particles, we have also investigated the motion of defect structures (kinks and antikinks) involving several particles.
An example of this analysis is shown in \fref{voronoi configuration dynamics}, where a section of the top layer is plotted at four different times.
The series clearly reveals the motion of a kink in $x$-direction.
This kink is tracked via a modified Hoshen-Kopelman algorithm, which identifies clusters with enhanced local density (i.e., $\rho_L A_{VC} < 0.8$) on the underlying triangular lattice given by the Delaunay triangulation.
For antikinks, the same approach is used to track clusters with reduced local density (i.e., $\rho_L A_{VC} > 1.2$).
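The cluster identification can be sketched with standard tools; the graph-component step below stands in for the modified Hoshen-Kopelman algorithm, and the demo lattice and flagged indices are invented for illustration:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def label_defect_clusters(points, flagged):
    """Label connected clusters of flagged particles on the Delaunay
    graph; -1 marks unflagged particles."""
    flagged = np.asarray(flagged, dtype=bool)
    tri = Delaunay(points)
    edges = set()
    for s in tri.simplices:
        for k in range(3):
            edges.add(tuple(sorted((s[k], s[(k + 1) % 3]))))
    rows, cols = [], []
    for i, j in edges:
        if flagged[i] and flagged[j]:   # keep only flagged-flagged bonds
            rows.append(i)
            cols.append(j)
    n = len(points)
    adj = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    labels[~flagged] = -1
    return labels

# demo: two separated two-particle defects on a jittered 5x5 lattice
rng = np.random.default_rng(0)
pts = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
pts += 1e-3 * rng.standard_normal(pts.shape)
flagged = np.zeros(25, dtype=bool)
flagged[[0, 1, 23, 24]] = True    # sites (0,0),(0,1) and (4,3),(4,4)
labels = label_defect_clusters(pts, flagged)
```

Tracking the label centroids from frame to frame then yields the defect trajectories used for the velocity analysis.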
\begin{figure}
\includegraphics[width=1.0\linewidth]{fig13.pdf}
\caption{(Color online) Velocity of the kinklike and antikinklike defects relative to the bottom layers as functions of the shear rate. The relative velocity of the top layer (gray, dashed) is plotted as a reference.}
\label{fig:relative velocity density excitations}
\end{figure}
The tracking of the positions of the kinks and antikinks furthermore allows us to calculate their average velocity relative to the bottom layers in $x$-direction.
The resulting velocities are shown in \fref{relative velocity density excitations} as functions of the shear rate.
For comparison, we have included the average relative velocity of the top layer.
In equilibrium (i.e., $\dot{\gamma}\tau=0$), the kinks display no net motion just like the top layer.
This picture changes at finite shear forces, where the kinks move \emph{faster} than the average (i.e., $\av{v_{kink}}>\av{v_R}$).
This holds for all shear rates considered, however, the difference (more specifically, the ratio $\av{v_{kink}}/\av{v_R}$) is largest in the range $0<\dot{\gamma}\tau<5$.
Here the velocity of the kinks is nearly one order of magnitude larger than the velocity of the top layer.
This observation is in accordance with a prediction from the FK model, where the monolayer is displaced by exactly one lattice site when a single kink travels through the layer \cite{Braun1998}, i.e.,
\begin{equation}\label{eq:FK velocity relation}
\av{v_R} = \frac{N_K}{N_L} \av{v_{kink} } \text{,}
\end{equation}
with $N_K$ the number of kinks and $N_L$ the number of particles in the monolayer.
Therefore, if there is only a small number of kinks, the velocity of the layer is expected to be much slower than the velocity of the kinks.
We will come back to this point in the subsequent section~\ref{sec:estimate average motion via local dynamics}.
Increasing the shear rate leads to a corresponding increase of the number of kinks (see \fref{voronoi cell area histogram}).
As a consequence the difference between the velocities decreases.
In contrast to the kinks, the antikinks seem to be "locked" within the top layer as revealed both by the direct visualization in \fref{voronoi configuration dynamics} and by the fact that their average velocity (see \fref{relative velocity density excitations}) is nearly identical to that of the top layer.
This "locking" behavior of the antikinks is in contrast to the (original) FK model, where the antikinks move with a velocity which is the same in magnitude, but opposite in direction to that of the kinks.
The reason for the antikink motion in the FK model is the attractive harmonic interaction potential linking the particles \cite{Braun1998}.
In driven monolayers with purely repulsive interactions, the magnitude of the velocity of the antikinks is expected to be smaller than that of the kinks \cite{Bohlein2012, Hasnain2013} but still different from the average motion of the layer.
In our system, the antikinks apparently move \emph{along} the direction of the driving force, which cannot be explained by the absence of attractive interactions alone.
Instead, we interpret this phenomenon as a result of the fact that, in our system, the "substrate" acts not as an \emph{external} potential, but as a part of the layered system which responds to the behavior of the top layer.
Indeed, we find that the large particles of the bottom layer shift to higher $z$-positions in the vicinity of the antikinks.
In other words, the reduced local density in the top layer leads to a bump formed by the bottom particles.
These deformations of the bottom layers (which correspond to higher potential barriers) then prevent particles of the top layer from jumping into the empty lattice positions.
Instead, the antikinks are pushed in the direction of the driving force.
\subsection{\label{sec:estimate average motion via local dynamics}Average versus kink velocity}
\begin{figure}
\includegraphics[width=1.0\linewidth]{fig14.pdf}
\caption{(Color online) The relative velocity of the top layer (blue dashed) and the approximation via the ansatz \eref{estimated relative velocity top layer} (red). The velocity of the ideal (crystalline) top layer \eref{equation of motion simple model} and its depinning transition are included as a reference.}
\label{fig:estimated relative velocity binary}
\end{figure}
The results discussed in \sref{single particle and cluster dynamics} suggest that kinks represent the dominant mechanism for mean particle transport in the top layer.
Motivated by the corresponding formula in the FK model [see \eref{FK velocity relation}], we thus propose to describe the average velocity in our system as
\begin{equation}\label{eq:estimated relative velocity top layer}
\av{v_{R}} = \alpha\lr{\dot{\gamma}} \frac{N_{K}}{N_L} \av{v_{kink}}\text{,}
\end{equation}
where $\alpha$ is a (shear-rate dependent) factor of proportionality and $N_L$ is the number of particles in the top layer.
Of course, this relation is expected to hold only for shear rates where the shear forces are not yet sufficient to induce free sliding of the top layer and kinks are indeed the main transport mechanism.
In order to estimate this range of validity of \eref{estimated relative velocity top layer}, we calculated the critical shear rate $\dot{\gamma}_c \tau \approx 21$ for the depinning transition of the top layer (assuming that the latter is perfectly crystalline for $\dot{\gamma}<\dot{\gamma}_c$) by using the model presented in \sref{mapping to shear-driven system}.
Numerical results of this analysis are shown in \fref{estimated relative velocity binary}.
Fitting $\av{v_R}$ according to \eref{estimated relative velocity top layer} we find that $\alpha\lr{\dot{\gamma}} \approx 1.17 + 0.08 \dot{\gamma}\tau$.
The agreement between \eref{estimated relative velocity top layer} and the true layer velocity is particularly good in the range $\dot{\gamma} < \dot{\gamma}_c$.
Only for "supercritical" shear rates (i.e., $\dot{\gamma} > \dot{\gamma}_c$), we observe significant deviations.
Here, the top layer is completely melted and the system displays collective motion of the particles in large density waves.
This is obviously strongly different from the transport mechanism of kinks.
Finally, in comparison to the FK model, we find that the factor of proportionality ($\alpha\lr{\dot{\gamma}}$) is weakly shear-rate dependent, corresponding to a small thermal drift of the top layer.
However, especially for small shear rates, this drift can be neglected, reflecting that kinks are indeed the dominant transport mechanism.
Very similar results are obtained for somewhat larger sedimentation strengths \footnote{
For example, for $ \epsilon_{sed} \, d^4/k_B T = 350$, we find that $\alpha\lr{\dot{\gamma}} \approx 1.17 + 0.07\dot{\gamma}\tau$.
}.
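For completeness, the extraction of $\alpha\lr{\dot{\gamma}}$ amounts to a linear least-squares fit of the measured ratio $\av{v_R} N_L / (N_K \av{v_{kink}})$ against $\dot{\gamma}\tau$; a sketch on synthetic data (the kink fraction and kink velocity below are invented; only the fitted form $\alpha \approx 1.17 + 0.08\,\dot{\gamma}\tau$ comes from our analysis):

```python
import numpy as np

# synthetic data: impose the fitted alpha on invented kink observables
gdot_tau = np.linspace(1.0, 20.0, 40)
kink_fraction = 0.05 + 0.01 * gdot_tau      # hypothetical N_K / N_L
v_kink = np.full_like(gdot_tau, 2.0)        # hypothetical kink velocity
alpha_true = 1.17 + 0.08 * gdot_tau
v_R = alpha_true * kink_fraction * v_kink   # ansatz for <v_R>

# recover alpha(gdot) from the "measured" quantities by a linear fit
alpha_measured = v_R / (kink_fraction * v_kink)
slope, intercept = np.polyfit(gdot_tau, alpha_measured, 1)
```

On noise-free data the fit trivially recovers the imposed coefficients; applied to BD data, the residual scatter of $\alpha_{measured}$ around the linear form quantifies how well the kink picture holds.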
\section{\label{sec:conclusion}Conclusion}
Using BD simulations and an analytical approach we have studied the dynamical behavior of three types of colloidal films under planar shear flow.
Focusing on high densities and strong confinement, where the colloids arrange in two or three layers with (squarelike) crystalline order, the shear-induced dynamical behavior is similar to that of colloidal monolayers driven over a periodic substrate potential \cite{Hasnain2013, Bohlein2012}.
In particular, the symmetric (one-component) bilayer system displays a depinning transition, where the layers are "pinned" to each other up to a critical shear rate \cite{Vezirov2013}.
A similar depinning transition is also observed for the symmetric (one-component) trilayer system.
Interestingly, this does not hold for the asymmetric (two-component) trilayer system, which is characterized by a mismatch of the effective lattice constants in the top and the two bottom layers.
In this system, the top layer is never fully pinned, rather we observe the formation of kinklike defects reminiscent of the FK model \cite{Braun1998}.
From a conceptual point of view, one key result of our study is that the dynamics of the symmetric systems can be mapped to the motion of a single particle driven over an effective periodic substrate potential.
The resulting effective model can be solved analytically and yields a prediction of the critical shear rate for the depinning transition.
For the bilayer system, both the resulting average velocity of the layers and the shear stress are in good qualitative agreement with the BD simulation results.
Further, the mapping procedure reveals the relation between the critical shear rate and important system parameters such as the strength of the pair interactions and the width of the confinement.
For the symmetric trilayer system, the critical shear rate is overestimated in the sense that the effective model cannot describe the laned state which occurs in the real system between the crystalline and the melted state.
Still, the model predicts nearly correctly the onset of melting.
Another main result of our study is the observation of local transport via kinklike density excitations in the asymmetric trilayer system.
For small shear rates, the kinks provide the main mechanism for particle transport in the top layer.
The average velocity of the layer is then proportional to the average kink velocity times the number of kinks.
The factor of proportionality is weakly shear rate dependent, which we interpret as a small thermal drift due to the noise.
Interestingly, the antikink-like defects do not contribute to the particle transport, rather they are stationary relative to the top layer.
This is in contrast to the FK model and can be explained by deformations of the bottom layers in response to the locally reduced density in the top layer.
Similar to previous studies \cite{Vezirov2013, Vezirov2015}, we here employed a set of system parameters pertaining to a realistic system of charged silica particles \cite{Klapp2007, Klapp2008, Zeng2011}.
Thus, our predictions can, in principle, be tested by experiments.
In this context we note that the presence of a solvent can induce hydrodynamic interactions between the colloidal particles, which are neglected in our model.
Considering experimental studies confirming the solidlike response of strongly confined fluids \cite{Gee1990} and local transport via kinklike defect structures \cite{Bohlein2012}, we expect these interactions to affect the timescales, but not the overall behavior of the system.
In addition to a direct comparison to experiments, it would be very interesting to investigate the shear-induced dynamics of confined films for wall distances corresponding to a hexagonal or disordered equilibrium configuration as well as the dynamics of thicker binary crystalline films \cite{Horn2014,Assoud2008}.
Further interesting aspects are the impact of oscillatory shear flow and of structured walls (which can influence the crystalline structure \cite{Besseling2012, Wilms2012}) on the dynamics of the system.
We also note that there is increasing interest concerning the interplay of shear flow and strong confinement in glasslike colloidal systems (see \cite{Chaudhuri2013} for a corresponding molecular dynamics study with a much wider slit-pore width).
Especially for the latter systems, the local particle transport in defect structures might be key.
To this end, it seems vital to better understand the relation between the structural properties of kinks (as well as of antikinks) and their dynamics.
A first step in this direction would be to investigate the dependence of the defect velocity on the size of the defects, as well as corresponding relaxational time scales.
Work in these directions is in progress.
\section{Acknowledgments}
This work was supported by the Deutsche Forschungsgemeinschaft through SFB 910 (Project No. B2).
\section{INTRODUCTION}
\label{sec:intro}
Discovery of transients is an important scientific aim of time domain astronomy. Because transients can appear at any position on the sky, with different variation rates and different magnitudes, sky coverage, observation cadence and depth are all important for their discovery. However, these requirements often conflict with one another, so depending on the scientific goals we must trade them off against each other when designing telescopes and their observation strategies. The wide field small aperture telescope (WFSAT) is a kind of telescope that can obtain images with a large field of view in a cost-effective way \cite{burd2005pi, Ma2007, yuan2008chinese, cui2008antarctic, Pablo2016,ping2017the,sun2019precise}. WFSATs are widely used to detect bright transients in fast sky surveys. Moreover, because WFSATs are low cost, it is possible to build a telescope array with several WFSATs to scan the sky continuously, which further increases sky coverage and observation cadence \cite{kaiser2002pan, tonry2018atlas, ratzloff2019building, lokhorst2020wide, Liu2020}.\\
During observations, a huge amount of data is obtained by WFSATs each night. To unleash the observing capability of WFSATs and provide information on some very important targets (such as electromagnetic counterparts of gravitational waves, supernovae, tidal disruption events or asteroids), these data should be processed immediately to provide the information necessary for follow-up observations. Moreover, since a large number of transient candidates are detected by WFSATs each night and only a limited number of large-aperture telescopes are available to follow up these candidates, the accuracy of detection is important. Processing a large amount of observational data and outputting information on transients with high speed and high accuracy is beyond the capacity of contemporary methods. A new data processing method is required which needs no manual intervention and can output transient candidates with high accuracy and fast speed.\\
To satisfy the data processing requirements of WFSATs, many smart data processing methods have been proposed in recent years \cite{romano2006supernova, tachibana2018a, gonzalez2018galaxy, Burk2019astron_rcnn, Mahabal2019ML, Duev2019Deepstreaks, Jia2019Optical, Duev2019Real-bogus,2020tuprin}. These smart data processing methods use deep learning algorithms to detect or classify transient targets in observed images. In real applications, these algorithms have shown performance superior to that of traditional methods: they can automatically obtain the positions and types of transients. For example, in the task of detecting different types of astronomical targets in fast sky surveys, a faster-rcnn based astronomical target detection algorithm performs better than classic methods \cite{jia2020detection}.\\
However, although smart data processing algorithms can improve the detection ability of WFSATs, there are still gaps that need to be filled. Contemporary transient detection algorithms are post-processing methods, which means they process observed images after the fact. As a result, some dim astronomical targets that are imaged several times by the WFSATs of a telescope array are still missed, because their signal to noise ratio in any single image is too low. At the same time, some suspicious transients could be confirmed as soon as they are detected by one of the WFSATs. Detecting astronomical targets in stacked images is a possible way to locate transients with low signal to noise ratio, but the WFSATs in a telescope array may have different detection abilities, which makes it hard to design robust image stacking methods. Besides, image stacking reduces the ability to detect transients with fast variations.\\
In this paper, we report a new transient detection framework for WFSATs in telescope arrays (Astronomical taRGets detection framework for Unified telescopes -- ARGUS). In ARGUS, each WFSAT in a telescope array is equipped with an embedded device, in which an astronomical target detection neural network is deployed for real-time detection. The detection results are sent back to a data center, where a classifier based on the AdaBoost algorithm merges them. In this way, the ARGUS outputs detection results with high accuracy that can be used directly for follow-up observations. We describe the basic structure of the ARGUS in Section \ref{sec:2}, give our preliminary results in Section \ref{sec:3}, and draw our conclusions in Section \ref{sec:4}.\\
\section{SMART TRANSIENT DETECTION FRAMEWORK FOR WFSATS}
\label{sec:2}
The ARGUS includes two parts: the remote transient detection part and the data center, which receives and processes the detection results. The structure of the ARGUS is shown in figure \ref{fig:framework}. We describe each part in this Section.\\
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=14cm]{framework.eps}
\end{tabular}
\end{center}
\caption[Framework of ARGUS]
{ \label{fig:framework} The structure of the ARGUS in real applications. It includes two parts, shown in the blue and red boxes. The remote part, which includes the telescope with the embedded device, obtains coordinates and confidences of astronomical targets from each WFSAT. The detection results are sent to the data center (red box), which merges them and cross-matches the merged results with a celestial catalog. The ARGUS then sends out transient alerts to follow-up telescopes.}
\end{figure}
The remote part of the ARGUS comprises the detection algorithm and the embedded device installed at each WFSAT. In this paper, we use the NVIDIA Jetson AGX Xavier as the embedded device: its 512 tensor cores can accelerate DNNs, its power consumption (30 Watts) is low, and its footprint ($105 mm \times 105 mm$) is small. A faster-rcnn based real-time astronomical target detection neural network is deployed in the embedded device \cite{jia2020detection}. Meanwhile, the detection algorithm is also being ported to an FPGA device (the F10A provided by Inspur). We have modified the structure of the faster-rcnn to make it suitable for the detection of astronomical targets with WFSATs: we introduce Resnet-50 as the backbone network and a feature pyramid network to extract features. The detection neural network is trained with double precision on a workstation with a GTX 1080 Ti graphics card. After training, the neural network is deployed directly on the Jetson AGX Xavier, also with double precision, where it achieves 0.5 frames per second for images with $300 \times 300$ pixels. The processing speed can be increased further by using TensorRT to implement the detection network on the Jetson AGX Xavier; we will discuss this in a later paper.\\
The faster-rcnn detection network is a stand-alone astronomical target detection framework. The last layer of the detection algorithm is a classification function that outputs the positions and types of astronomical targets. One frame of detection results is shown in figure \ref{fig:remotepartdet}: targets with green bounding boxes are stars and targets with red bounding boxes are moving targets. As shown in this figure, the detection algorithm in the remote part of the ARGUS automatically performs the source extraction and deblending tasks. Moreover, tests with real observation data show that the detection algorithm performs better than classic methods \cite{jia2020detection}. For astronomical targets with low signal to noise ratio, the faster-rcnn outputs results with low confidence. Since such targets are imaged several times by the WFSATs of a telescope array, it is natural to ask whether the detection results from several telescopes can be combined to increase the detection ability of the array.\\
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=8cm]{onedet.jpg}
\end{tabular}
\end{center}
\caption[Detection result of the remote part of the ARGUS.]
{ \label{fig:remotepartdet} The detection results for one frame of images obtained by the remote part of the ARGUS in real applications. }
\end{figure}
Therefore, in the ARGUS we merge the detection results from all WFSATs. We have removed the classification layer, so the neural network directly outputs bounding boxes with confidence values in the format $(x_1, y_1, x_2, y_2, conf)$, where $x_1, y_1$ is the position of the bottom-left pixel, $x_2, y_2$ is the position of the upper-right pixel, and $conf$ is the confidence that an object is a specific target. Directly merging the detection results from several telescopes with bootstrap aggregation (averaging the confidences reported by different telescopes for the same target) is one possible approach \cite{bishop2006pattern}. However, because the image quality varies and introduces different point spread functions (PSFs), the detection results would be unstable, which can affect the final results. Besides, because there are many WFSATs, we are also interested in the performance of each individual WFSAT in the detection task, which can guide us in maintaining the instruments and optimizing our algorithms.\\
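For illustration, the naive bootstrap-aggregation merge described above can be sketched as follows. The pixel match radius and the detection lists are hypothetical assumptions; a real implementation would match detections in sky coordinates rather than pixel coordinates:

```python
import numpy as np

def merge_by_averaging(det_lists, match_radius=3.0):
    """Naive aggregation: group detections from several telescopes by
    bounding-box centre proximity and average their confidences.
    Each detection is a tuple (x1, y1, x2, y2, conf)."""
    # Reduce each bounding box to its centre plus confidence.
    pool = [(0.5 * (d[0] + d[2]), 0.5 * (d[1] + d[3]), d[4])
            for dets in det_lists for d in dets]
    merged, used = [], [False] * len(pool)
    for i, (xi, yi, ci) in enumerate(pool):
        if used[i]:
            continue
        group = [(xi, yi, ci)]
        used[i] = True
        # Greedily attach every unused detection within the match radius.
        for j in range(i + 1, len(pool)):
            xj, yj, cj = pool[j]
            if not used[j] and np.hypot(xi - xj, yi - yj) < match_radius:
                group.append((xj, yj, cj))
                used[j] = True
        xs, ys, cs = zip(*group)
        merged.append((np.mean(xs), np.mean(ys), np.mean(cs)))
    return merged

# Three telescopes report the same source with different confidences.
tels = [[(10, 10, 14, 14, 0.9)],
        [(11, 10, 15, 14, 0.5)],
        [(10, 11, 14, 15, 0.4)]]
print(merge_by_averaging(tels))  # one merged source with conf ~ 0.6
```

This simple average treats every telescope equally, which is exactly the weakness noted above: one telescope with poor image quality pulls the merged confidence down as much as a good telescope pulls it up.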
To meet these requirements, we propose a dynamical classification framework in the ARGUS to merge the detection results of all WFSATs. Our method is based on the adaptive boosting (AdaBoost) algorithm, which improves the performance of a classifier by combining several weak learners \cite{schapire2003boosting}. Although each individual learner is weak, the final classification model performs better, provided the weak learners perform better than random guessing. Following this concept, the detection algorithm of each WFSAT is treated as a weak classifier and the ARGUS merges their results to output detections with high accuracy.\\
In the data center, the ARGUS operates in two stages: a calibration stage and an implementation stage. In the calibration stage, all WFSATs observe the same sky area and their detection results are sent to the data center, where they are calibrated against a celestial catalog. We then adjust the weight of the classification algorithm of each WFSAT according to how well its detections match the catalog; the AdaBoost algorithm as defined in scikit-learn is used in this paper \cite{scikit-learn}. In the implementation stage, we directly apply these weights to merge the detection results and output celestial objects. Since the weight of each classifier is directly related to the performance of the corresponding WFSAT, WFSATs with extremely low weights are further checked by human experts for possible malfunctions. After classification, the ARGUS cross-matches the coordinates of the detected celestial objects with a star catalog to search for transient candidates: all objects detected by the ARGUS that cannot be matched to any target in the star catalog are labeled as transient candidates. \\
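The calibration stage can be illustrated with a hand-rolled discrete AdaBoost over fixed classifiers. This is a sketch with synthetic data, not the scikit-learn implementation actually used by the ARGUS: each telescope's thresholded confidence acts as a fixed weak classifier, the telescopes are taken in a fixed order (one boosting round each), and the returned per-telescope weights play the role of the weights discussed above:

```python
import numpy as np

def calibrate_weights(preds, truth):
    """Discrete AdaBoost over fixed weak classifiers.
    preds: (n_tel, n_src) array of +-1 votes (confidences thresholded),
    truth: (n_src,) array of +-1 labels from the catalogue cross-match.
    Returns one weight per telescope."""
    n_tel, n_src = preds.shape
    w = np.full(n_src, 1.0 / n_src)      # sample weights
    alphas = np.zeros(n_tel)
    for m in range(n_tel):               # one boosting round per telescope
        err = np.clip(w[preds[m] != truth].sum(), 1e-10, 1 - 1e-10)
        alphas[m] = 0.5 * np.log((1 - err) / err)
        # Up-weight the sources this telescope got wrong.
        w *= np.exp(-alphas[m] * truth * preds[m])
        w /= w.sum()
    return alphas

def merged_vote(preds, alphas):
    return np.sign(alphas @ preds)       # weighted majority vote

rng = np.random.default_rng(0)
truth = rng.choice([-1, 1], size=200)
# Telescope 0 is right ~90% of the time, telescope 1 only ~60%.
flip0 = rng.random(200) < 0.10
flip1 = rng.random(200) < 0.40
preds = np.vstack([np.where(flip0, -truth, truth),
                   np.where(flip1, -truth, truth)])
alphas = calibrate_weights(preds, truth)
print(alphas)  # the more reliable telescope gets the larger weight
```

In this toy example the reliable telescope dominates the weighted vote, which mirrors how a WFSAT with an extremely low weight would stand out as a candidate for a malfunction check.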
\section{PRELIMINARY RESULTS OF THE ARGUS}
\label{sec:3}
In this paper, we simulate a telescope array consisting of three WFSATs and one large sky survey telescope with skymaker \cite{2010Bertin} to test the ARGUS. Their detailed properties are listed in table \ref{tab:WFSATs}. Three telescopes (Tel2, Tel3 and Tel4) are WFSATs with a diameter of 1 meter; they represent a small part of the telescope array -- Sitian \cite{Liu2020}. We set different parameters for these telescopes to test whether the ARGUS can identify the telescopes with the best or worst performance through the weights assigned to each telescope in the data center. The first telescope (Tel1) represents the Simonyi Survey Telescope, which will be used for the Legacy Survey of Space and Time (LSST) \cite{tyson2002large}. Images of the same sky area obtained by these four telescopes are shown in figure \ref{fig:fourtelescopeimages}. The gray scale of these images has been stretched by a log transformation.\\
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=10cm]{alldet.eps}
\end{tabular}
\end{center}
\caption[Images of the same sky area obtained by four telescopes. ]
{ \label{fig:fourtelescopeimages} Images of the same sky area obtained by four telescopes. Gray scales of all images have been stretched by log transformation.}
\end{figure}
We use two scenarios to test the performance of the ARGUS. In the first scenario, the ARGUS merges the data from both the large telescope and the three WFSATs. In the second scenario, we compare the performance of the ARGUS with the 3 WFSATs alone to the performance of the large telescope. Four DNN based celestial object detection algorithms are trained in this part: we simulate observation data for each telescope and train each DNN model on the corresponding simulated data. \\
\begin{table}[ht]
\caption{Detailed information of WFSATs used in this paper. The Seidel aberrations include defocus, spherical aberration, coma in x and y directions, astigmatism in x and y directions.}
\label{tab:WFSATs}
\begin{center}
\begin{tabular}{|l|l|l|l|l|}
\hline
\rule[-1ex]{0pt}{3.5ex} Telescopes & Tel1 & Tel2 & Tel3 & Tel4 \\
\hline
\rule[-1ex]{0pt}{3.5ex} Diameter (meter) & 8.4 & 1.0 & 1.0 & 1.0 \\
\hline
\rule[-1ex]{0pt}{3.5ex} Seeing FWHM (arcsec) & 0.7 & 0.5 & 0.9 & 1.2 \\
\hline
\rule[-1ex]{0pt}{3.5ex} Background (mag) & 24 & 23 & 23 & 25 \\
\hline
\rule[-1ex]{0pt}{3.5ex} Pixel Scale(arcsec) & 0.2 & 0.2 & 0.2 & 0.2 \\
\hline
\rule[-1ex]{0pt}{3.5ex} Static Seidel Aberration & 0, 0.2, 0.15 & 0.1, 0.3, 0.1 & 0.2, 0.1, 0.02& 0, 0.2, 0.1 \\
\rule[-1ex]{0pt}{3.5ex} (in waves) & 0.1, 0.2, 0.1 & 0.05, 0, 0 & 0.05, 0.1, 0& 0.05, 0.1, 0 \\
\hline
\end{tabular}
\end{center}
\end{table}
In the first scenario, we merge the detection results from all 4 telescopes with the ARGUS. Because accuracy and precision are both important for transient detection, we use the f1 score ($f1 = 2*(precision*accuracy)/(precision+accuracy)$) to evaluate the performance of the detection algorithms: the closer the f1 score is to 1, the better the detection ability. The f1 scores of the ARGUS and of the individual telescopes are shown in figure \ref{fig:fourtelescopedet}. From these results we find that the ARGUS increases the detection ability of a telescope array composed of different telescopes: its detection results are better than those of most telescopes (Tel1, Tel3 and Tel4). For very dim targets, its performance is slightly worse than that of Tel2, which has a very small FWHM (0.5 arcsec).\\
Moreover, the weights of the different telescopes in the ARGUS reflect their detection abilities. After training the AdaBoost algorithm in the ARGUS, the weights of the four telescopes are: 1.198, 1.106, 1.001 and 1.018. The large telescope (Tel1) has the largest weight, because its large aperture captures enough photons. The second telescope (Tel2) has the second largest weight, because its very small FWHM makes it better at detecting dim targets. Since the FWHM is set by the observing conditions at each telescope, and WFSATs with smaller FWHM have better detection abilities, this suggests that by placing WFSATs in different regions one could compose a telescope array with a stronger and more stable transient detection ability than any single telescope at one site. We therefore test the performance of the ARGUS with three WFSATs against that of the large telescope.\\
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=8cm]{fourmerge.jpg}
\end{tabular}
\end{center}
\caption[Detection result of the ARGUS 1.]
{ \label{fig:fourtelescopedet} Detection results of the ARGUS with four telescopes for celestial objects with different magnitudes. From this figure, we can see that Tel2 has the best detection ability for dim targets ($f1\_score\_2$), while the detection ability of the large telescope is affected by the larger FWHM induced by atmospheric turbulence ($f1\_score\_1$). The ARGUS merges all detection results and performs better than most of the individual telescopes ($f1\_score\_ensemble$).}
\end{figure}
In the second scenario, we use the ARGUS to process the images obtained by Tel2, Tel3 and Tel4 and compare the detection results with those of the large telescope. The detection results are shown in figure \ref{fig:threetelescopeimages}. We find that the ARGUS effectively connects the three WFSATs and achieves a better detection ability than any single WFSAT in the array, indicating that the detection ability of the telescope array becomes better and more stable with the ARGUS. Considering that the WFSATs of a telescope array can be distributed over different regions, the ARGUS gives such a distributed array a better chance of obtaining good detection results than a single telescope at one site \cite{2020arXiv201102892B}.\\
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=8cm]{threemerge.jpg}
\end{tabular}
\end{center}
\caption[Detection result of the ARGUS 2.]
{ \label{fig:threetelescopeimages} Detection results of the ARGUS obtained with three 1 meter telescopes for celestial objects with different magnitudes. From this figure, we can see that the ARGUS has a better and more stable detection ability than any of the WFSATs ($f1\_score\_ensemble$ is larger than the best detection result of all WFSATs -- $f1\_score\_234\_high$). With only three telescopes, the ARGUS performs better than a large telescope in the detection of transients ($f1\_score\_1$).}
\end{figure}
\section{CONCLUSIONS AND FUTURE WORK}
\label{sec:4}
In this paper, we propose the ARGUS, a general purpose celestial object detection framework for WFSATs in telescope arrays. The ARGUS builds on DNN based celestial object detection algorithms: the detection algorithms running in the different WFSATs are treated as weak learners, the contribution of each WFSAT is calibrated with calibration data, and the calibrated weights are used directly to merge the detection results from the different telescopes. The ARGUS dynamically links different telescopes for transient detection and could be used in next generation sky survey projects with WFSATs \cite{ofek2020seeinglimited}, such as the Sitian \cite{Liu2020}. In the future, we will use real observation data to optimize the ARGUS and use the ARGUS to modify observation strategies to increase the observing ability of WFSATs.\\
\section{Acknowledgments}
This work is supported by National Natural Science Foundation of China (NSFC) (11503018, 61805173), the Joint Research Fund in Astronomy (U1631133) under cooperative agreement between the NSFC and Chinese Academy of Sciences (CAS). Authors acknowledge the French National Research Agency (ANR) for supporting this work through the ANR APPLY (grant ANR-19-CE31-0011) coordinated by B. Neichel. This work is also supported by Shanxi Province Science Foundation for Youths (201901D211081), Research and Development Program of Shanxi (201903D121161), Research Project Supported by Shanxi Scholarship Council of China (HGKY2019039), the Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi (2019L0225). \\
\section{INTRODUCTION}\label{intro}
Globular clusters (GCs) are homogeneous stellar systems containing stars
with a single age and metallicity, which are in principle simpler to
interpret than photometric and spectroscopic observations of the
integrated stellar light of a galaxy. GCs are therefore considered to be
powerful probes with which to understand the star formation and chemical
enrichment history of their host galaxy.
One of the basic findings from observations about GC populations in
luminous galaxies is that while thousands of GCs are associated with
luminous elliptical galaxies, a significantly smaller number of GCs
exist around spiral galaxies with similar luminosities (e.g., Harris
1991; Barmby 2003). This indicates that the specific frequency of GCs
($S_N$), which is considered to be related to the relative efficiency of
GC formation and/or survival compared to galactic halo/bulge stars,
depends on galaxy morphology. In fact, $S_N$ has also been suggested to
be correlated with local galaxy density, with galaxies in denser
environments having larger $S_N$ values (West 1993). The fact that this
trend appears to exist even when the sample of galaxies is restricted to
ellipticals suggests that GC formation efficiency is more physically
linked with galaxy environment. One possibility to explain this
observation is biased GC formation in galaxies inhabiting denser
environments (West 1993; Blakeslee 1999; McLaughlin
1999). Alternatively, a substantial number of GCs in a luminous galaxy
may have an external origin; GCs could be captured from other galaxies
through galaxy interactions and accrete onto luminous galaxies in
clusters, which would then enhance $S_N$ values as observed.
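Throughout this discussion, the specific frequency is the GC count normalized to a host luminosity of $M_V=-15$ (Harris \& van den Bergh 1981), $S_N = N_{\rm GC}\,10^{0.4(M_V+15)}$. A minimal numerical illustration with arbitrary input values (not measurements from this survey):

```python
def specific_frequency(n_gc, m_v):
    """S_N = N_GC * 10**(0.4 * (M_V + 15)); hypothetical inputs."""
    return n_gc * 10 ** (0.4 * (m_v + 15.0))

# A galaxy with M_V = -15 has S_N equal to its raw GC count ...
print(specific_frequency(100, -15.0))   # -> 100.0
# ... while at fixed N_GC, a brighter host has a lower S_N.
print(specific_frequency(1000, -17.5))  # -> 100.0
```

The normalization makes $S_N$ a luminosity-independent measure, so the comparisons above between ellipticals, spirals, and environments of different density are comparisons of GC formation/survival efficiency rather than of raw GC counts.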
Recent studies of GCs in the central regions of luminous ellipticals
conducted with the {\it Hubble Space Telescope} (HST) have revealed that
many luminous ellipticals have bimodal or multimodal colour
distributions of GCs (Gebhardt \& Kissler-Patig 1999; Larsen et al.
2001; Brodie et al. 2005). It is found that the mean colours of both red
(metal-rich) and blue (metal-poor) GC subpopulations are correlated with
the host galaxy luminosities and colours (Larsen et al. 2001; Strader,
Brodie \& Forbes 2004; Strader et al. 2005; Peng et al. 2006), which
will be important constraints on the proposed scenarios for the
formation and evolution of GC population such as multiphase collapse
scenario (Forbes, Brodie \& Grillmair 1997), merger scenario (Ashman \&
Zepf 1992), hierarchical merging scenario (Beasley et al. 2002), and
accretion scenario (C\^{o}t\'{e}, Marzke \& West 1998). The small field
of view of the HST, however, does not allow one to collect GC
populations in the outer halo of a galaxy and investigate their spatial
structures, which will also be key pieces of the puzzle (e.g., Moore et
al. 2005). Much wider-field studies (e.g., out to $\sim$ 100 kpc from
the galaxy centre) of GC populations therefore need to be performed
using ground-based data (Rhode \& Zepf 2001, 2004; Dirsch et al. 2003;
Bassino et al. 2006). Studying GC populations at large distances from
the host galaxy is of great importance because the outer halo of the
host galaxy, and even the intergalactic space, are presumed to be large
reservoirs of blue and metal-poor GCs in the accretion scenario.
\setlength{\tabcolsep}{1.5mm}
\begin{table*}
\centering
\begin{minipage}{150mm}
\begin{tabular}{lccccccccc} \hline\hline
\multicolumn{1}{c}{Field ID} &
\multicolumn{3}{c}{Field centre} &
\multicolumn{3}{c}{Integration time ($m_{\rm lim}$)} &
\multicolumn{3}{c}{Seeing size} \\[2pt]
&
&
&
&
\multicolumn{3}{c}{[sec (mag)]} &
\multicolumn{3}{c}{[arcsec]} \\
\multicolumn{1}{c}{(1)} &
\multicolumn{3}{c}{(2)} &
\multicolumn{3}{c}{(3)} &
\multicolumn{3}{c}{(4)} \\ \hline
&
$\alpha$(J2000) &
$\delta$(J2000) &
$b$(J2000) &
$B$ &
$V$ &
$I$ &
$B$ &
$V$ &
$I$ \\
Field 1 & $12^{h}31^{m}18^{s}_{\cdot}4$ & $12^{\circ}29^{\prime}13^{\prime\prime}$ &
74$^{\circ}_{\cdot}$6 &
3680 (25.6) & 1350 (25.1) & 3480 (24.6) & 1.8 & 1.0 & 1.0 \\
Field 2 & $12^{h}33^{m}42^{s}_{\cdot}8$ & $12^{\circ}27^{\prime}37^{\prime\prime}$ &
74$^{\circ}_{\cdot}$8 &
2640 (25.9) & 1350 (25.2) & 3690 (24.7) & 1.2 & 1.0 & 1.0 \\
Field 3 & $12^{h}36^{m}08^{s}_{\cdot}8$ & $12^{\circ}24^{\prime}35^{\prime\prime}$ &
74$^{\circ}_{\cdot}$9 &
1800 (25.7) & 1350 (25.2) & 2640 (24.5) & 1.5 & 1.0 & 1.1 \\
Field 4 & $12^{h}38^{m}33^{s}_{\cdot}2$ & $12^{\circ}24^{\prime}35^{\prime\prime}$ &
75$^{\circ}_{\cdot}$0 &
2280 (26.1) & 1350 (25.2) & 4380 (24.8) & 1.1 & 1.0 & 1.0 \\ \hline
HDF-N & $12^{h}36^{m}46^{s}_{\cdot}7$ & $62^{\circ}11^{\prime}50^{\prime\prime}$ &
54$^{\circ}_{\cdot}$8 &
6000 (26.6) & 4800 (25.5) & 4200 (25.2) & 0.8 & 1.1 & 0.9 \\
Lockman Hole & $10^{h}35^{m}55^{s}_{\cdot}2$ & $57^{\circ}42^{\prime}18^{\prime\prime}$ &
51$^{\circ}_{\cdot}$3 &
6000 (26.9) & 4800 (26.0) & 3600 (24.8) & 1.0 & 1.1 & 1.3 \\ \hline
\end{tabular}
\caption{Observation log. Data for HDF-N and Lockman Hole are retrieved
from SMOKA. Col. (3): The integration times were calculated including
data taken under non-photometric conditions on the first night (hence
the interpretation is not straightforward). Number in the parentheses
indicates 50 \% completeness to point sources estimated with artificial
star test. Col. (4): The seeing sizes are estimated in the stacked
images.} \label{basic}
\end{minipage}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[height=11cm,keepaspectratio,clip]{ntamurafig01.ps}
\caption{Locations and IDs of the observed fields. Each box indicates
one SCam field of view. The background is a DSS image. North is up and
East is left.} \label{deff}
\end{center}
\end{figure*}
In this paper and a companion paper (Tamura et al. 2006; Paper II
hereafter), we report a wide-field imaging survey of the GC populations
around M87 conducted with Suprime-Cam on the Subaru telescope. Several
moderately wide-field studies of GCs surrounding M87 have already been
carried out using photometry (Strom et al. 1981; McLaughlin, Harris, \&
Harris 1994; Harris, Harris, \& McLaughlin 1998; Hanes et al. 2001) and
spectroscopy (Cohen \& Ryzhov 1997; Cohen, Blakeslee \& Ryzhov 1998;
Kissler-Patig \& Gebhardt 1998; C\^{o}t\'{e} et al. 2001), but all of
these studies explored only the regions $\lesssim 10^{\prime}$
($\lesssim$ 45 kpc or 5 $r_e$) from M87. In contrast, the area of our
survey is approximately $2^{\circ} \times 0_{\cdot}^{\circ}5$ (560 kpc
$\times$ 140 kpc) extending from the centre of M87 out to $\sim$ 0.5
Mpc, which is the widest survey yet undertaken of the GC populations in
luminous galaxies.
In this paper, we focus on describing the observations, data reduction
(\S~\ref{obsredcal}) and data analyses such as selection and photometry
of GC candidates, incompleteness correction, and subtraction of
foreground and background contamination in the GC candidates
(\S~\ref{analyses}). We derive GC luminosity functions around M87 and
NGC 4552, another Virgo luminous elliptical galaxy in our survey field,
and estimate the global GC specific frequencies of these luminous
ellipticals in \S~\ref{resultsanddiscussions}. We investigate colour
distributions and spatial distributions of GC candidates in Paper II.
We adopt distances of 16.1 Mpc (distance modulus of 31.03) to M87, and
15.4 Mpc (distance modulus of 30.93) to NGC 4552, based on measurements
using the surface brightness fluctuation method (Tonry et al. 2001). An
angular scale of 1$^{\prime}$ corresponds to 4.7 kpc and 4.5 kpc at the
distance of M87 and NGC 4552, respectively.
\section{OBSERVATIONS, DATA REDUCTIONS, AND CALIBRATIONS}\label{obsredcal}
\subsection{The M87 Fields}\label{m87fields}
\subsubsection{Observations and data reductions}\label{m87obsred}
Imaging observations were performed on 17 and 18 March 2004, with
Suprime-Cam (SCam) (Miyazaki et al. 2002) on the Subaru telescope. SCam
is a mosaic CCD camera with 10 2K$\times$4K CCD chips and the field of
view is approximately $34^{\prime} \times 27^{\prime}$ on the sky. The
pixel scale is $0_{\cdot}^{\prime\prime}2$ pixel$^{-1}$. In this
observing program, a field of approximately 1 square degree
($136^{\prime} \times 27^{\prime}$) extending from M87 towards the east
was covered by 4 telescope pointings through $B$-, $V$-, and $I$-band
filters. The field IDs and locations are shown in Fig. \ref{deff} and
the observation log is presented in Table \ref{basic}. Each field was
observed with the telescope dithered by $\sim 5^{\prime\prime}$. Since
this dithering scale is smaller than the gap between the CCDs, the 10
CCD frames are not mosaicked into one continuous frame but are reduced
and analyzed individually. Typical exposure times of a single frame are
360 sec, 270 sec, and 240 sec in the $B$, $V$, and $I$ bands, respectively;
several frames were co-added to give the total exposure times listed in
Table \ref{basic}. On the first night, the sky condition was
non-photometric and the transparency was highly variable. On the second
night, it was much better but was still hazy with a little variation.
We therefore scale the data taken on the first night by shifting the
magnitude zeropoints to match with those of the data on the second
night, calibrate the reduced data based on the standard stars taken on
the second night, and check the calibration using GC photometry in the
literature (see next section for details). Typical seeing sizes during
the observations were $1_{\cdot}^{\prime\prime}5$ in $B$ band and $\sim
1_{\cdot}^{\prime\prime}0$ in $V$ and $I$ bands.
Data reduction was performed with IRAF\footnote{IRAF is distributed by
the National Optical Astronomy Observatories, which is operated by the
Association of Universities for Research in Astronomy, Inc. under
cooperative agreement with the National Science Foundation.} in a
standard manner; bias subtraction, flat-fielding, masking bad columns
and saturated pixels, sky subtraction, registration, and average
stacking with a 3 $\sigma$ clipping algorithm.
Sky subtraction was performed by employing the following two steps.
Firstly, an image was divided into a mesh of 128 $\times$ 128 pixels
($\sim 26^{\prime\prime} \times 26^{\prime\prime}$) and a median sky
value was estimated in each window after bright objects were masked. A
sky value in each pixel was then estimated by an interpolation of the
median sky values for the adjacent windows. A background image of an
object frame was created with this process which was then subtracted.
For CCD frames where bright galaxies or their envelopes are quite
extended (e.g., near M87 and NGC 4552), this method cannot be applied.
Instead, an average background was estimated as a single value using a
``blank'' CCD frame within the same exposure. It was corrected for the
sensitivity difference between the two CCD frames using the flat-field
frames.
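This two-step sky estimation can be sketched as follows. This is a simplified numpy/scipy illustration rather than the IRAF implementation actually used, and it assumes the bright-object mask has already been constructed:

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def mesh_sky_background(image, objmask, box=128):
    """Step 1: median sky in box x box windows, ignoring masked
    (bright-object) pixels. Step 2: interpolate the window medians
    back onto the full pixel grid to form a smooth background image."""
    ny, nx = image.shape
    yc = np.arange(box // 2, ny, box)  # window centres
    xc = np.arange(box // 2, nx, box)
    med = np.empty((yc.size, xc.size))
    for i, y in enumerate(yc):
        for j, x in enumerate(xc):
            win = image[y - box // 2:y + box // 2, x - box // 2:x + box // 2]
            bad = objmask[y - box // 2:y + box // 2, x - box // 2:x + box // 2]
            med[i, j] = np.median(win[~bad])
    interp = RegularGridInterpolator((yc, xc), med,
                                     bounds_error=False, fill_value=None)
    yy, xx = np.mgrid[0:ny, 0:nx]
    pts = np.column_stack([yy.ravel(), xx.ravel()])
    return interp(pts).reshape(ny, nx)
```

The returned background image is then subtracted from the object frame.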
In stacking CCD frames, aperture photometry of $20 - 30$ bright stellar
objects selected using SExtractor (Bertin \& Arnouts 1996) based on the
\verb|CLASS_STAR| index was performed in each frame and the zeropoint of
the frame was shifted so as to match that in a frame taken on the second
night. PSF matching was not performed to avoid degradation of image
quality. The stacked images are presented in Fig. \ref{f1} $-$
\ref{f4}.
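The per-frame zeropoint matching amounts to a robust offset between the two sets of bright-star magnitudes; a minimal sketch:

```python
import numpy as np

def frame_zeropoint_shift(mag_frame, mag_reference):
    """Magnitude offset to add to a frame so that photometry of the
    same bright stars matches a reference frame taken in better
    conditions. The median is robust to a few bad measurements."""
    diff = np.asarray(mag_reference) - np.asarray(mag_frame)
    return np.median(diff)
```

Applied here per CCD frame, using the $20 - 30$ bright stellar objects mentioned above.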
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm,keepaspectratio]{ntamurafig02.ps}
\end{center}
\caption{Reduced data for Field 1 ($V$ band). North is up and East is
left.} \label{f1}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm,keepaspectratio]{ntamurafig03.ps}
\end{center}
\caption{Same as Fig. \ref{f1}, but for Field 2.} \label{f2}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm,keepaspectratio]{ntamurafig04.ps}
\end{center}
\caption{Same as Fig. \ref{f1}, but for Field 3.} \label{f3}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm,keepaspectratio]{ntamurafig05.ps}
\end{center}
\caption{Same as Fig. \ref{f1}, but for Field 4.} \label{f4}
\end{figure}
\subsubsection{Photometric and astrometric calibration}
\label{m87calib}
\begin{figure*}
\begin{center}
\includegraphics[width=8cm,angle=-90,keepaspectratio]{ntamurafig06.ps}
\caption{Comparison of our photometry ($V_{\rm SCam}$, $(B-V)_{\rm
SCam}$, and $(V-I)_{\rm SCam}$) with that by H01 ($(B-V)_{\rm H01}$)
and HHM ($V_{\rm HHM}$ and $(V-I)_{\rm HHM}$). Dotted lines in the
plots for the colours approximately show reddest and bluest boundaries
expected for GCs.} \label{h01hanes}
\end{center}
\end{figure*}
Photometric calibration of the M87 fields was performed using standard
stars from Landolt (1992) which were observed at the beginning and end
of the second night. Several standard stars were imaged on each CCD chip
so that the calibration could be carried out individually. Photometry of
standard stars was performed within a $12^{\prime\prime}$ diameter
aperture. After excluding saturated stars and those in crowded regions,
magnitude zeropoints and the trends with sec $z$ and colours were
calculated. The estimated accuracy in the fitting procedure is $0.03 -
0.05$ mag.
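The calibration fit described above can be illustrated with a least-squares solution for the zeropoint, airmass (sec $z$) term, and colour term. This is a generic sketch of such a solution, not the exact fitting code used:

```python
import numpy as np

def fit_photometric_solution(m_inst, m_std, secz, colour):
    """Least-squares fit of the standard calibration model
        m_std - m_inst = zp - k * sec(z) + c * colour,
    returning (zp, k, c): zeropoint, atmospheric extinction
    coefficient, and colour term."""
    m_inst, m_std = np.asarray(m_inst), np.asarray(m_std)
    A = np.column_stack([np.ones(len(m_inst)), -np.asarray(secz),
                         np.asarray(colour)])
    coeffs, *_ = np.linalg.lstsq(A, m_std - m_inst, rcond=None)
    return coeffs
```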
Since our data include frames which were taken under non-photometric sky
conditions, we check the photometric calibration by comparing our
photometry of GCs around M87 with that in the literature. We make use of
GC photometry in the $V$ and $I$ bands by Harris, Harris, \& McLaughlin
(1998, HHM hereafter) and that in the Washington system ($C$ and $T_1$)
by Hanes et al. (2001, H01 hereafter). The GC photometry by H01 is
converted to the standard $BVI$ system by using the formulae obtained by
Geisler (1996).
PSF-fitting photometry was carried out for GCs using the PSF and ALLSTAR
tasks in the DAOPHOT package of IRAF. A PSF was determined using $\sim
20$ moderately bright (unsaturated) stellar objects in each CCD frame,
which were selected using SExtractor based on the \verb|CLASS_STAR|
index. The PSF obtained was also used for GC photometry, since the GCs
at the distance of the Virgo cluster are unresolved at the seeing sizes
of our images.
Galactic extinction was then corrected using reddening maps from
Schlegel, Finkbeiner, \& Davis (1998).
We are primarily concerned about zeropoint offsets for $V$-band
magnitude and $B-V$ and $V-I$ colours; the $V$ band image is used as a
selection band to make a catalog of GCs and $B-V$ and $V-I$ colours are
used to isolate GC candidates from other unresolved objects (see
\S~\ref{gcselection} for details). While $C-T_1$ can be converted into
$B-V$ with a small error, it is not converted to $V-I$ with a good
accuracy (Geisler 1996). We therefore decided to estimate the zeropoint
offset in $B-V$ from the comparison with H01 and those in $V$ and $V-I$
from the comparison with HHM. Our GC photometry is compared with that
from H01 and HHM in Fig. \ref{h01hanes}. Dotted lines in these plots
indicate the approximate edges of the colour range which is expected to
be occupied by GCs. These comparisons suggest that there are some
zeropoint offsets between our photometry and that in the literature. The
zeropoint offsets in $V$, $B-V$ and $V-I$ are 0.04 mag, 0.05 mag and
0.10 mag, respectively. Our magnitude and colours are corrected for these
zeropoint offsets in the following analyses.
We performed astrometric calibration against the 2MASS catalog using the
CCMAP task in IRAF. A plate solution (second order polynomial with full
cross terms) was computed for each CCD chip using stars with $J \leq 17$
mag. The fitting accuracy is typically $\sim 0_{\cdot}^{\prime\prime}2$.
\subsection{The Control Fields}\label{archive}
\begin{figure*}
\begin{center}
\includegraphics[height=12cm,keepaspectratio]{ntamurafig07.ps}
\end{center}
\caption{Overall detection completeness in the $B$, $V$, and $I$ bands.
Top and middle panels show the results in the M87 fields; bottom panels
show those in the control fields.}
\label{maglimbvi}
\end{figure*}
In addition to the M87 field data, we also analyze $BVI$ images of the
(blank) fields, HDF-N and Lockman Hole (LH), retrieved from the
Subaru-Mitaka-Okayama-Kiso Archive (SMOKA) system (Baba et al. 2002).
These data were taken with SCam on the Subaru telescope during several
observing runs in 2001: 23 and 24 Feb for $BVI$ on HDF-N and $B$ on LH,
and 22 and 23 Apr for $V$ and $I$ on LH (Capak et al. 2004). This
information is also summarized in Table \ref{basic}. These control field
data are used to estimate contamination in the sample of GC candidates
and to statistically subtract it, which is essential for investigating
properties of GC populations such as the luminosity function and colour
distribution. We emphasize that these control fields are also at high
galactic latitudes (Table \ref{basic}),
and that the data cover reasonably wide sky areas (one SCam field of
view: $\sim$ 900 arcmin$^2$) and are comparable with our data in the M87
field in terms of the filter set, limiting magnitudes, and image
quality. This minimizes the possibility of introducing any systematic
errors into the subtractive corrections for foreground and background
contamination in the GC sample.
The data reduction was carried out with SDFRED (Yagi et al. 2002; Ouchi
et al. 2004), which is a reduction pipeline optimized for SCam data of
blank fields. The basic reduction procedure is the same as that applied
to the M87 field data. The large scale of telescope dithering ($\sim
1^{\prime}$) for these control field data enables the CCD frames to be
stacked into one continuous image with sensitivity differences between
CCDs corrected by using stellar objects in the overlap regions.
To avoid any complications due to a possible drop of limiting magnitude
near the field edge, we use only the central $27^{\prime} \times
27^{\prime}$ region in the following analyses.
The magnitude zeropoints of the HDF-N data were calculated using the
photometry catalog by Capak et al. (2004). These authors did not observe
any standard stars during the observations and determined magnitude
zeropoints by exploiting the accurate photometry of objects in the
region where deep HST/WFPC2 data are available. The best-fit SEDs to the
multi-band photometry of the objects (Fern\'{a}ndez-Soto, Lanzetta \&
Yahil 1999) were used to account for the slight differences in the
filter responses between Subaru/SCam and HST/WFPC2. Since the $B$ band
data of the LH field were taken in the same runs as for the HDF-N, the
same zeropoint is adopted. The $V$ and $I$ band data of the LH field
were taken on different observing runs and standard stars were observed
at elevations similar to those of the LH field during the night.
Magnitude zeropoints were derived from these data.
Galactic extinction was then corrected using Schlegel et al. (1998).
\section{DATA ANALYSES}\label{analyses}
\subsection{Halo Light Subtraction}\label{subtraction}
As shown in Figs. \ref{f1} and \ref{f3}, M87 and NGC 4552 extend
across several CCD frames and their halos have to be subtracted to
reveal the GC populations. We removed the halos by conducting an
iterative median smoothing and subtraction (e.g., McLaughlin, Harris, \&
Hanes 1994). First, we subtract unresolved objects from an image. This
process is not mandatory, but it helps better model the extended halo
light distributions or their residuals in subsequent iterations. Object
detection was performed with SExtractor and unresolved objects were
picked out based on the \verb|CLASS_STAR| index. A PSF was determined
using moderately bright stars, which was then fitted to unresolved
objects and the fitted profiles were subtracted from the original image
using the ALLSTAR task. The resulting image was median-smoothed
to create an image with a halo light distribution which was subtracted
from the original image. This procedure was repeated 4 times with the
mesh size successively reduced (128, 64, 32, and 32 pixels).
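The iterative median smoothing and subtraction can be sketched compactly with a median filter. This is a simplified illustration of the scheme, not the actual IRAF-based implementation, and assumes point sources have already been removed:

```python
import numpy as np
from scipy.ndimage import median_filter

def subtract_halo(image, boxes=(128, 64, 32, 32)):
    """Iterative median smoothing and subtraction: at each pass the
    residual image is median-filtered with the given box size, the
    smooth component is accumulated into the halo model and removed
    from the residual."""
    residual = np.asarray(image, dtype=float).copy()
    halo = np.zeros_like(residual)
    for box in boxes:
        smooth = median_filter(residual, size=box)
        residual -= smooth
        halo += smooth
    return residual, halo
```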
\begin{figure*}
\begin{center}
\includegraphics[height=12cm,keepaspectratio]{ntamurafig08.ps}
\end{center}
\caption{\texttt{CLASS\_STAR} indices of detected objects in the $V$
band images of the M87 fields, HDF-N field, and LH field are plotted
against their magnitudes in the top, middle, and bottom panels,
respectively. Dashed line indicates the lower limit for selection of
unresolved objects (0.6).}
\label{classstar}
\end{figure*}
\subsection{Artificial Star Tests}\label{limitmag}
\begin{figure*}
\begin{center}
\includegraphics[height=14cm,angle=-90,keepaspectratio]{ntamurafig09.ps}
\end{center}
\caption{For the unresolved objects, errors of $V$-band magnitude
(left), $B-V$ colour (centre) and $V-I$ colour (right) from the
PSF-fitting photometry are plotted as a function of $V$-band
magnitude. Grey line indicates the boundary below which 80 \% of the
unresolved objects are included at a given magnitude. In the top,
middle, and bottom panels, the results in the M87 fields, HDF-N field,
and LH field are shown, respectively.} \label{magerrs}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[height=12cm,angle=-90,keepaspectratio]{ntamurafig10.ps}
\caption{The colour criterion to select GC candidates is indicated by
a shaded region. In the left panel, colours of Galactic GCs in the
catalog by Harris (2003 version) are plotted with circles, and stellar
colours based on a stellar flux library by Pickles (1998) are plotted
with asterisks. Galactic extinctions for the GC colours are corrected
using the colour excess of each GC in the catalog. In the right panel,
evolutionary tracks of the galaxy colours calculated using PEGASE v2.0
(Fioc \& Rocca-Volmerange 1997) are indicated. Open squares are
plotted at redshifts of 0, 0.5, 1.0, 1.5, and 2.0 on each track. Note
that the galaxy colours at $z = 0$ are plotted around the middle of
this panel and go towards outer regions at higher redshifts.}
\label{colourcut}
\end{center}
\end{figure*}
We performed artificial star tests for investigating detection
completeness to point sources on the $B$-, $V$-, and $I$-band images,
after bright galaxies were subtracted if necessary. Using the IRAF
STARLIST and MKOBJECT tasks, artificial stars with the same PSF as that
determined using real unresolved objects were distributed on the
original image. For a series of tests, 500 artificial stars within a
certain range of magnitude $[m, m + 0.5$ mag$]$ were generated while
successively changing the magnitude range as the test progressed.
SExtractor was then used for object detection and the fraction of
artificial stars detected (i.e., detection completeness) was calculated
as a function of magnitude. Note that faint stars could be rejected by
DAOPHOT even if they are detected by SExtractor, but the fraction of
such artificial stars turns out to be small ($\leq$ 1 \%) throughout the
magnitude range investigated in our artificial star tests. We performed
these artificial star tests on all the CCD frames in all the observed
fields (Field 1 $-$ 4)
and the overall completeness in each observing field is plotted against
$B$-, $V$-, and $I$-band magnitude in the top and middle panels of
Fig. \ref{maglimbvi}. This indicates the presence of a slight
field-to-field variation in limiting magnitude.
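The bookkeeping behind the completeness functions is simple: per magnitude bin, the recovered fraction of injected stars. A minimal sketch (illustrative helper names, not the actual analysis code):

```python
import numpy as np

def completeness_curve(mag_in, recovered, edges):
    """Fraction of injected artificial stars recovered by the object
    detection, as a function of input magnitude."""
    mag_in = np.asarray(mag_in)
    recovered = np.asarray(recovered, dtype=bool)
    frac = np.full(len(edges) - 1, np.nan)
    for k, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        sel = (mag_in >= lo) & (mag_in < hi)
        if sel.any():
            frac[k] = recovered[sel].mean()
    return frac

def mag_at_50_percent(frac, centres):
    """Faintest bin centre where completeness is still >= 50%."""
    ok = np.where(np.asarray(frac) >= 0.5)[0]
    return centres[ok[-1]] if ok.size else None
```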
The detection completeness is also a function of galactocentric
distance; for instance, limiting magnitudes are $\sim$ 0.5 mag brighter
near the centre of M87 where the noise is higher. Therefore, when
investigating the luminosity function and colour distribution of GC
candidates within an annulus at a certain distance from the host galaxy,
we correct for incompleteness using the completeness functions estimated
within the same annulus. Especially near the luminous ellipticals, we
divide the annulus into sub-annuli to follow the local variation of the
incompleteness in the annulus. The artificial star tests were also
executed on the control field images and the overall completeness
functions are indicated in the bottom panels of Fig. \ref{maglimbvi}.
The magnitudes giving 50 \% completeness on the M87 fields and the
control fields are listed in Table \ref{basic}.
Since unresolved objects are firstly selected based on the
\verb|CLASS_STAR| indices on the $V$-band image when selecting GC
candidates (see \S~\ref{gcselection}), one also needs to consider biases
associated with the selection of unresolved objects in addition to the
simple detection incompleteness as mentioned above. In the top panels of
Fig. \ref{classstar}, \verb|CLASS_STAR| indices of the detected objects
(left panel) and the artificial stars (right panel) in the M87 fields
are plotted against $V$-band magnitude. In the middle and bottom panels,
the results in the control fields are shown. These plots indicate that
the \verb|CLASS_STAR| index of a point source tends to be underestimated
at fainter magnitudes and more stellar objects are expected to be
excluded when we classify objects with \verb|CLASS_STAR| indices larger
than a certain value as unresolved. We quantify this selection effect as
functions of $V$ magnitude and distance from the host galaxy based on
these artificial star tests.
\subsection{Selection and Photometry of GC Candidates}
\label{gcselection}
\begin{figure*}
\begin{center}
\includegraphics[height=17.5cm,keepaspectratio]{ntamurafig11.ps}
\end{center}
\caption{$B-V$ and $V-I$ colour-colour diagrams for the unresolved
objects. The data in the M87 fields are divided into panels based on
the distances of the sources from M87. In the bottom right panel, the
unresolved objects in the HDF-N data (see \S~\ref{archive}) are
plotted. Black dots indicate unresolved objects which pass the colour
cut, and grey dots are those which do not. The shaded region in each
panel indicates the colour criterion. Some objects sitting outside the
colour criterion are accepted because we take into account errors in
both of the colours in applying the colour cut to these objects (see
text for details).} \label{allbvi}
\end{figure*}
We began the selection of GC candidates with object detection using
SExtractor. Firstly, we picked out all objects having at least 20
connected pixels ($\sim 0.8$ arcsec$^2$, which is approximately equal to
the FWHM area of the PSF) more than 2 $\sigma$ above the local
background (only objects selected in all the three $B$-, $V$-, and
$I$-bands were used for analysis). Secondly, we selected objects with
\verb|CLASS_STAR| indices larger than 0.6 as unresolved objects. The
$V$-band image was used for this classification because it has the best
image quality in our data. The subsequent results do not change if this
cutoff for the \verb|CLASS_STAR| index is set to 0.5 or 0.7. We note
that our criterion (\verb|CLASS_STAR| $\geq 0.6$) is more stringent than
those adopted in previous GC studies ($\geq 0.4$ in Dirsch et al. 2003
and $\geq 0.35$ in Forbes et al. 2004) using data with better image
quality than our data.
PSF-fitting photometry was performed for these unresolved objects and
their $B$, $V$, and $I$ magnitudes were measured. In this process, a
residual sky background around an object is estimated within an annulus
10$^{\prime\prime}$ away from the object with a 4$^{\prime\prime}$ width
and is subtracted. Errors in the $V$-band magnitude and $B-V$ and $V-I$
colours due to a PSF fitting error and sky subtraction error are plotted
as a function of $V$-band magnitude in the top, middle, and bottom
panels of Fig. \ref{magerrs} for unresolved objects in the M87 fields,
HDF-N field, and LH field, respectively. A colour criterion was then
imposed on these unresolved objects to isolate GC candidates. We show
this colour criterion as the shaded region in Fig. \ref{colourcut}
which includes almost all the Galactic GCs in the catalog by Harris
(1996)\footnote{The catalog used here is the version last updated in
2003.} but minimizes the contamination by foreground stars and
background galaxies. The $B-V$ and $V-I$ colour-colour diagrams of the
unresolved objects on our images are indicated in Fig. \ref{allbvi}; the
objects are divided into panels based on their distances from M87, apart
from those in the HDF-N field, which are plotted in the bottom right
panel for reference. Black dots indicate unresolved objects which pass
the colour selection, while grey dots are those which do not satisfy the
colour criterion. Some unresolved objects sitting outside the colour
criterion are accepted as GC candidates by taking into account errors in
the colours (e.g., Rhode \& Zepf 2001); if colours of unresolved objects
can satisfy the colour criterion within their errors, they are sampled
as GC candidates. This ``inclusive'' colour selection allows us to
incorporate fainter GCs which are more likely to be scattered out of the
colour criterion due to larger errors in the colours. Although
contaminating objects may also be included, they are expected to be
corrected for by the control field data (see \S~\ref{fieldpop}).
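The ``inclusive'' colour selection can be sketched as follows. The rectangular limits below are purely illustrative placeholders; the actual criterion (the shaded region in the colour-colour diagram) is more complex:

```python
def passes_colour_cut(bv, vi, e_bv, e_vi,
                      bv_lim=(0.55, 1.05), vi_lim=(0.75, 1.45)):
    """'Inclusive' colour selection: keep an object if its (B-V, V-I)
    colours can satisfy the GC criterion within their 1-sigma errors.
    The limits here are illustrative, not the criterion actually used."""
    in_bv = (bv + e_bv >= bv_lim[0]) and (bv - e_bv <= bv_lim[1])
    in_vi = (vi + e_vi >= vi_lim[0]) and (vi - e_vi <= vi_lim[1])
    return in_bv and in_vi
```

This is why some objects sitting just outside the shaded region in Fig. \ref{allbvi} are nonetheless accepted: fainter GCs with larger colour errors can still satisfy the criterion within those errors.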
\subsection{Correction of Incompleteness and Foreground and Background
Contamination} \label{fieldpop}
\begin{figure*}
\begin{center}
\includegraphics[height=12cm,angle=-90,keepaspectratio]{ntamurafig12.ps}
\end{center}
\caption{{\it Upper panels}: Luminosity functions of the unresolved
objects which pass the colour selection in the HDF-N ({\it left}) and
the LH ({\it right}). Thin line shows raw LF and thick line indicates
incompleteness-corrected LF. Shaded region at the right hand side
corresponds to the magnitude range where the completeness is lower than
50 \%. {\it Lower panels}: Colour distributions of the unresolved
objects ($V \leq 24$ mag) which pass the colour selection. Thin line
shows raw distribution and thick line indicates
incompleteness-corrected distribution.} \label{fieldlfcld}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[height=17cm,keepaspectratio]{ntamurafig13.ps}
\end{center}
\caption{GC Luminosity Functions (GCLFs) obtained within annuli
centered on M87 are indicated. The dotted line indicates a raw LF
(without incompleteness correction or control field subtraction) and
thin black line describes an LF after the incompleteness is corrected
(but no field subtraction is performed yet). The red line and the thick
black line are those where the control field subtraction is performed
after the incompleteness correction: red (black) line shows the LF
where the control field subtraction is performed using the HDF-N (LH)
data, respectively. The shaded region indicates the magnitude range
where the completeness is lower than 50 \%. In the bottom right panel,
the LF of unresolved and colour-selected contaminating objects in the
HDF-N field is shown for reference (survey area is not normalized). }
\label{gclf_cor}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[height=17cm,keepaspectratio]{ntamurafig14.ps}
\end{center}
\caption{Black line indicates GCLF after the incompleteness is
corrected and the control field is subtracted using the HDF-N data
(same as that described with the red line in Fig. \ref{gclf_cor}).
This is divided into two LFs depending on GC colour: Red and blue lines
indicate the LFs for red ($V-I > 1.1$) and blue ($V-I \leq 1.1$) GC
candidates, respectively.} \label{gclf}
\end{figure*}
In deriving a GC luminosity function (GCLF) and colour distribution,
incompleteness correction is undertaken as follows. The number of GC
candidates at a certain $V$ magnitude is firstly corrected for the
incompleteness in detection and selection of unresolved objects on the
$V$-band image. Detection incompleteness on the $B$ and $I$ band images
is then corrected; GCs with a given $V$-band magnitude are divided into
bins according to their $B-V$ and $V-I$ colours, and the numbers of
objects are multiplied by a factor to correct for the incompleteness at
the $B$ and $I$ magnitudes ($B = (B-V) + V$, $I = V - (V-I)$). These
incompleteness corrections are also applied to GC candidates found in
the control fields.
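The weighting implied by this correction can be sketched as follows, assuming the completeness curves have been turned into callables (e.g. interpolators); the helper names are illustrative:

```python
import numpy as np

def corrected_lf(v, bv, vi, comp_v, comp_b, comp_i, edges):
    """Incompleteness-corrected luminosity function: each GC candidate
    is weighted by the inverse product of the completenesses at its V
    magnitude and at the corresponding B = (B-V) + V and I = V - (V-I)
    magnitudes. comp_* are callables returning values in (0, 1]."""
    v = np.asarray(v, dtype=float)
    w = 1.0 / (comp_v(v) * comp_b(np.asarray(bv) + v)
               * comp_i(v - np.asarray(vi)))
    counts, _ = np.histogram(v, bins=edges, weights=w)
    return counts
```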
Although the selection using the two colour diagram is expected to
efficiently isolate GCs from foreground stars and background unresolved
galaxies, there are still likely to be some contaminating objects, and a
subtractive correction of this contamination is essential to investigate
GC properties in the outer halo of the host galaxy where the GC surface
number density is very low. We extract contaminating populations of
unresolved objects from the control fields by using selection criteria
identical to those adopted in the M87 fields; objects with
\verb|CLASS_STAR| $\geq$ 0.6 are selected as unresolved objects and GC
candidates are isolated by using the same colour criterion on the $B-V$
vs. $V-I$ colour-colour diagram.
When subtracting contamination in a certain region within the M87 survey
field, we take into account differences in data quality, especially
errors in the colours at a certain $V$ magnitude, between the M87 field
and control field. Since the errors in the control fields are smaller
than those in the M87 field, a smaller number of objects would be
scattered into the colour criterion in the control fields and the
contamination could be underestimated. We pick out unresolved objects
within narrow ranges ($\pm 0.1$ mag) of $V$, $B-V$ and $V-I$ from both
the M87 field and control field and calculate the differences of typical
errors in the colours between the two samples of unresolved objects. We
then randomize the measured colours of the unresolved objects in the
control field by an amount which is determined from a Gaussian
distribution whose average is zero and whose standard deviation is
estimated from the difference of the typical errors in the colours. The
observational errors in the colours of the unresolved objects in the
control field are therefore replaced with the typical errors of those in
the M87 field. This sequence is repeated in the control field for
different $V$ magnitudes and the colours successively changed, which
provides a mock catalog of unresolved objects in the control field whose
error characteristics are compatible with those in the M87 field. Based
on the colours and errors in this mock catalog, the colour selection is
performed to pick up contaminating objects in the control
field\footnote{Because this sequence involves random numbers, the GC
candidates in a control field need to be defined by a number of
attempts based on the Monte Carlo technique. But in fact, the variance
of the average population is small because the ``inclusive'' selection
of GC candidates does not give a sharp cutoff on the $B-V$ and $V-I$
colour-colour diagram.}. The luminosity function and colour distribution
of these contaminating objects including the incompleteness corrections
are then subtracted from those in the M87 field with the survey area
normalized.
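The colour-randomization step can be sketched as below. The added scatter is taken here as the quadrature difference of the typical errors, which is one plausible reading of the procedure described above; the function name is illustrative:

```python
import numpy as np

def match_colour_errors(colour, err_ctrl, err_m87, seed=None):
    """Randomize control-field colours so their effective errors match
    the larger errors of the M87 field. Assumes the extra scatter is
    sigma_add = sqrt(err_m87^2 - err_ctrl^2) (quadrature difference)."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(np.maximum(np.asarray(err_m87, float) ** 2
                               - np.asarray(err_ctrl, float) ** 2, 0.0))
    return np.asarray(colour, float) + rng.normal(size=sigma.shape) * sigma
```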
In Fig. \ref{fieldlfcld}, the $V$-band luminosity functions and $V-I$
colour distributions of GC candidates found in the control fields are
displayed. Note that these are obtained by considering the entire region
of the M87 survey field. The LF and colour distribution in the control
fields are normalized to the survey area of a target field when
subtracted. Most of these contaminating objects are likely to be
background galaxies with compact morphology; candidates are dwarf
ellipticals and blue compact dwarfs, the latter of which need to be at
higher redshifts than the former to meet the colour criterion.
Foreground stars can also be scattered into the GC colour selection due
mainly to photometric errors, but since the errors in the colours are
$\sim 0.1$ mag even at $V \sim 24.5$ mag, which is approximately the
faintest magnitude of GCs studied in this work, their contribution is
presumed to be smaller than the background galaxies.
\setlength{\tabcolsep}{1mm}
\begin{table*}
\centering
\begin{minipage}{140mm}
\begin{tabular}{llcccccc} \hline\hline
\multicolumn{2}{c}{Galaxy} &
\multicolumn{2}{c}{All GCs} &
\multicolumn{2}{c}{Red GCs} &
\multicolumn{2}{c}{Blue GCs} \\ \cline{3-4}\cline{5-6}\cline{7-8}
&
&
$V_{\rm TO}$ &
$\sigma$ &
$V_{\rm TO}$ &
$\sigma$ &
$V_{\rm TO}$ &
$\sigma$ \\
&
&
(mag) &
(mag) &
(mag) &
(mag) &
(mag) &
(mag) \\ \hline
M87 & ($1^{\prime} \leq R \leq 10^{\prime}$) &
$23.62 \pm 0.06$ & $1.50 \pm 0.04$ & $23.85 \pm 0.19$ & $1.57 \pm 0.09$ & $23.35 \pm 0.05$ & $1.38 \pm 0.04$ \\
M87 & ($1^{\prime} \leq R \leq 5^{\prime}$) &
$23.63 \pm 0.08$ & $1.48 \pm 0.05$ & $23.77 \pm 0.23$ & $1.54 \pm 0.11$ & $23.36 \pm 0.11$ & $1.44 \pm 0.05$ \\
M87 & ($5^{\prime} \leq R \leq 10^{\prime}$) &
$23.52 \pm 0.08$ & $1.44 \pm 0.05$ & $23.97 \pm 0.43$ & $1.48 \pm 0.22$ & $23.37 \pm 0.08$ & $1.36 \pm 0.05$ \\
M87 & ($R \lesssim 1^{\prime}$; K99) &
$23.67 \pm 0.07$ & $1.39 \pm 0.06$ & $-$ & $-$ & $-$ & $-$ \\
M87 & ($R \lesssim 1^{\prime}$; L01) &
$23.44^{+0.04}_{-0.08}$ & $-$ & $23.52^{+0.06}_{-0.08}$ & $-$ & $23.30^{+0.06}_{-0.12}$ & $-$ \\
NGC 4552 & ($1^{\prime} \leq R \leq 10^{\prime}$) &
$23.56 \pm 0.20$ & $1.34 \pm 0.12$ & $23.75 \pm 0.35$ & $1.40 \pm 0.18$ & $23.33 \pm 0.16$ & $1.09 \pm 0.10$ \\
NGC 4552 & ($1^{\prime} \leq R \leq 5^{\prime}$) &
$23.48 \pm 0.22$ & $1.42 \pm 0.14$ & $23.85 \pm 0.44$ & $1.59 \pm 0.24$ & $23.52 \pm 0.28$ & $1.31 \pm 0.18$ \\
NGC 4552 & ($R \lesssim 1^{\prime}$; KW01) &
$23.54 \pm 0.18$ & $1.3$ (fixed) & $-$ & $-$ & $-$ & $-$ \\
NGC 4552 & ($R \lesssim 1^{\prime}$; L01) &
$23.19^{+0.11}_{-0.15}$ & $-$ & $23.52^{+0.22}_{-0.20}$ & $-$ & $22.91^{+0.15}_{-0.18}$ & $-$ \\ \hline
\end{tabular}
\caption{Parameters of Gaussians fitted to GCLFs. In the outer region of
NGC 4552, no Gaussian fits were attempted due to the poor statistics.
The GCLF parameters in the core region ($R \lesssim 1^{\prime}$) of M87
and NGC 4552 taken from the literature are also shown for comparison:
Kundu et al. (1999; K99), Kundu \& Whitmore (2001; KW01), and Larsen et
al. (2001; L01). L01 fitted $t_5$ functions to the GCLFs and hence we
only indicate the $V_{\rm TO}$s.} \label{gclfparams}
\end{minipage}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[height=14cm,angle=-90,keepaspectratio]{ntamurafig15.ps}
\end{center}
\caption{Histograms show GCLFs obtained only using GCs at distances
$\leq 10^{\prime}$ from host galaxy centre. Dotted line indicates
incompleteness-corrected GCLF. Solid and dashed lines show GCLFs after
subtracting control field populations based on the HDF-N and LH data,
respectively. A Gaussian fitted to the GCLF is overplotted by a solid
line. Shaded region indicates the magnitude range where the
completeness is lower than 50 \%.} \label{gauss}
\end{figure*}
\section{RESULTS AND DISCUSSIONS}
\label{resultsanddiscussions}
\subsection{Globular Cluster Luminosity Function}
In Fig. \ref{gclf_cor}, GCLFs obtained within annuli centered on M87 are
presented. The dotted lines indicate raw GCLFs without incompleteness
correction or control field subtraction, and the thin black lines show
GCLFs after the incompleteness is corrected for (but no subtractive
correction for contamination is performed yet). The red and thick black
lines show GCLFs after the control field subtraction is also performed,
based on the HDF-N and LH fields, respectively.
In Fig. \ref{gclf}, the GCLFs after the incompleteness correction and
field subtraction using the HDF-N data are divided into red ($V-I >
1.1$) and blue ($V-I \leq 1.1$) GC subpopulations. The black line shows
the GCLF for the total GC population (blue $+$ red). The boundary colour
of the blue and red GCs corresponds approximately to the middle of the
peak colours in the bimodal colour distributions (see Paper II). Note
that there is a local maximum of GC number in the $70^{\prime} < r <
80^{\prime}$ bin due to the GC population of NGC 4552 at a distance of
$\sim$ 75$^{\prime}$ from M87. GCLFs in the outermost regions tend to
have some negative bins due to the field subtraction. In both of these
figures, the shaded region indicates the magnitude range where the
completeness is lower than 50 \%. Note that the completeness is
calculated by considering not only the simple detection completeness on
the $V$ band image but also the selection efficiency of unresolved
objects based on the \verb|CLASS_STAR| index and the completeness in $B$
and $I$ bands at the corresponding magnitudes depending on the GC
colours.
We investigate and discuss the spatial distributions of GC populations
in detail in Paper II.
A Gaussian fitted to the GCLF obtained using only GCs at distances
smaller than $10^{\prime}$ ($\sim$ 45 kpc) from the host galaxy is
plotted in Fig. \ref{gauss} (we fit a Gaussian to the binned data). The
fainter part of the GCLF in the shaded region where the completeness is
lower than 50 \% is not used in this fitting process. The turnover
magnitude ($V_{\rm TO}$) and dispersion ($\sigma$) of the GCLF are then
estimated to be $V_{\rm TO} = 23.62 \pm 0.06$ mag and $\sigma = 1.50 \pm
0.04$ mag for the GC population in M87. For the NGC 4552 GCs, $V_{\rm
TO} = 23.56 \pm 0.20$ and $\sigma = 1.34 \pm 0.12$ mag.
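As an illustration of this fitting step (a sketch, not the exact procedure used in our analysis), the turnover magnitude and dispersion can be recovered from binned counts by exploiting the fact that the logarithm of a Gaussian is a parabola in magnitude; the synthetic, noise-free counts below are invented for the example and use the M87 values quoted above.

```python
import numpy as np

def fit_gclf_gaussian(mags, counts):
    """Fit N(m) = A * exp(-(m - V_TO)^2 / (2 sigma^2)) to binned counts
    by fitting a parabola to log N (valid only for bins with counts > 0)."""
    mask = counts > 0
    c2, c1, c0 = np.polyfit(mags[mask], np.log(counts[mask]), 2)
    v_to = -c1 / (2.0 * c2)             # vertex of the parabola
    sigma = np.sqrt(-1.0 / (2.0 * c2))  # curvature -> dispersion
    return v_to, sigma

# Synthetic GCLF with the turnover and dispersion measured for M87.
mags = np.arange(20.0, 26.0, 0.25)
counts = 500.0 * np.exp(-(mags - 23.62) ** 2 / (2 * 1.50 ** 2))
v_to, sigma = fit_gclf_gaussian(mags, counts)
```

In practice one would fit only bins brighter than the 50 \% completeness limit, as done for Fig. \ref{gauss}.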
In Fig. \ref{gclfcol}, the GCLFs of the red and blue GC subpopulations
are shown with fitted Gaussians. Again the shaded region indicates the
magnitude range where the completeness calculated by using all the GCs
(i.e., blue $+$ red) is lower than 50 \%. This limiting magnitude does
not change significantly if only the red or blue GC subpopulation is
considered, since our $B$- and $I$-band images are deep enough not to
miss a significant number of red or blue globular clusters detected on
the $V$-band image.
The fitted Gaussians suggest that the GCLF depends on subpopulation;
the $V_{\rm TO}$ of the red GC subpopulation is $\sim 0.5$ mag and
0.4 mag fainter than that of the blue one for M87 and NGC 4552,
respectively, although the results for the NGC 4552 GCs are less
significant due to the large errors.
Larsen et al. (2001) present similar results using HST/WFPC2 data,
although their $V_{\rm TO}$ values tend to be brighter than those from
other studies. The difference in $V_{\rm TO}$ between the GC
subpopulations is probably due to a metallicity difference (Ashman,
Conti \& Zepf 1995; Elson \& Santiago 1996; Jord\'{a}n et al. 2002).
$V_{\rm TO}$ and $\sigma$ of the GCLFs are summarized in Table
\ref{gclfparams}.
\begin{figure*}
\begin{center}
\includegraphics[height=14cm,angle=-90,keepaspectratio]{ntamurafig16.ps}
\end{center}
\caption{The GCLF of all GCs, red GCs, and blue GCs is indicated by a
black solid line, dotted line, and grey solid line, respectively.
Incompleteness was corrected and contamination was subtracted based on
the HDF-N data. Gaussians fitted to the GCLFs are also plotted.}
\label{gclfcol}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[height=14cm,angle=-90,keepaspectratio]{ntamurafig17.ps}
\end{center}
\caption{Same as Fig. \ref{gclfcol}, but the GCLFs in the region of
$1^{\prime} \leq R \leq 5^{\prime}$ around M87 are compared with those
in the region of $5^{\prime} \leq R \leq 10^{\prime}$.}
\label{gclfinout}
\end{figure*}
The GCs within $10^{\prime}$ of M87 are further divided into two samples
at a boundary of $5^{\prime}$ and the GCLFs of all GCs, red GCs, and
blue GCs in the inner and outer regions are presented in Fig.
\ref{gclfinout}. Gaussians are also fitted to these GCLFs and the
$V_{\rm TO}$ and $\sigma$ are summarized in Table \ref{gclfparams}. This
indicates that the shape of the GCLF is not significantly different
between the inner region ($1^{\prime} \leq R \leq 5^{\prime}$) and the
outer region ($5^{\prime} \leq R \leq 10^{\prime}$) for either the red
or the blue GCs.
Furthermore, the GCLF shape for all GCs is consistent with that in the
core region ($\leq 1^{\prime}$) obtained by Kundu et al. (1999) using
HST/WFPC2: $V_{\rm TO} = 23.67 \pm 0.07$ mag ($\sigma = 1.39 \pm 0.06$).
This suggests that the GCLF shape of the M87 GCs is not a strong
function of distance from the host galaxy. We note that the fainter part
of the GCLF for all GCs tends to be more deficient at larger distances;
this is due to the lower contribution of the red GC subpopulation in the
outer region. Any radial dependence of the GCLF is unclear for the GC
population around NGC 4552 because the number of GCs is not large
enough, especially in the outer region ($5^{\prime} \leq R \leq
10^{\prime}$). Nevertheless, the $V_{\rm TO}$ of the GCLF
obtained at $1^{\prime} \leq R \leq 10^{\prime}$ is consistent with that
in the core region obtained by Kundu \& Whitmore (2001) using HST/WFPC2
who found $V_{\rm TO} = 23.47 \pm 0.14$ mag with a fixed value of
$\sigma = 1.3$, again suggesting that the GCLF shape does not depend
significantly on distance from the host galaxy.
\subsection{GC Specific Frequency}
Using the $V_{\rm TO}$ and $\sigma$ of the Gaussians fitted to the GCLFs
presented in Fig. \ref{gauss}, we can estimate the total number of GCs
associated with M87 or NGC 4552
and calculate a GC specific frequency ($S_N$). We first integrate a GCLF
obtained in an annulus ($1^{\prime}$ width) centered on the host galaxy
down to the magnitude where the completeness becomes 50 \%. This number
is then multiplied by a correction factor to include the fainter GCs
based on the Gaussian GCLF. This calculation is repeated out to a
certain distance from the galaxy centre. Ideally, a Gaussian would be
fitted to the GCLF obtained in each annulus, but this is impractical,
especially at large distances, where the number of GCs is small and the
statistical errors are large. We therefore use the Gaussian fitted in
the inner region ($\leq 10^{\prime}$) of M87 or NGC 4552, independently
of the distance from the host galaxy. In order to estimate the number of
GCs at distances smaller than $1^{\prime}$,
where the brightness of host galaxy halo light exceeds the linearity
regime of the CCD, the radial profile of GC surface density outside the
core region is fitted with a de Vaucouleurs law profile (see Paper II
for details) and the fitted formula is extrapolated towards the galaxy
centre. The estimated total number of M87 GCs is $12000 \pm 800$ within
$25^{\prime}$ from the galaxy centre (cf. $13200 \pm 1500$ by HHM). For
NGC 4552, the total number of GCs is estimated to be $1400 \pm 170$
within $10^{\prime}$. Note that the fraction of GCs at $R \leq
1^{\prime}$ estimated from the extrapolation is 8.5 \% for M87 GCs and
17 \% for NGC 4552 GCs. If we adopt $M_V = -22.46$ mag as the $V$-band
absolute magnitude of M87, which is also adopted by HHM (a difference in
adopted distance modulus of 0.03 mag is corrected), the $S_N$ value of
M87 is calculated to be $12.5 \pm 0.8$, which is only slightly smaller
than $14.1 \pm 1.6$ obtained by HHM. The $V$-band luminosity of NGC
4552 is obtained from our data by fitting a de Vaucouleurs law to the
$V$-band surface brightness profile and integrating it out to the same
distance (10$^{\prime}$); $M_V = -21.12$ mag. The $S_N$ value of NGC
4552 is then estimated to be $5.0 \pm 0.6$.\footnote{We calculated the
total number of GCs and $S_N$ within 10$^{\prime}$ for NGC 4552 to avoid
possible contributions of M87 GCs and intergalactic GCs outside of this
radius (see Paper II). If we fit a de Vaucouleurs law to the GC surface
density profile within 10$^{\prime}$ from the NGC 4552 centre and
integrate it out to 25$^{\prime}$ as done for M87 GCs, the number of GCs
and $S_N$ are estimated to be $2000 \pm 660$ and $6.2 \pm 2.1$,
respectively.}
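For reference, the specific frequency used above follows the standard definition $S_N = N_{\rm GC}\,10^{0.4(M_V+15)}$, i.e., the number of GCs normalized to a galaxy of $M_V = -15$; a minimal numerical check with the numbers derived in this section:

```python
def specific_frequency(n_gc, m_v):
    """Specific frequency S_N = N_GC * 10^(0.4 * (M_V + 15)):
    the GC count normalized to a host galaxy of M_V = -15."""
    return n_gc * 10.0 ** (0.4 * (m_v + 15.0))

# M87: 12000 GCs within 25', M_V = -22.46  ->  S_N ~ 12.5
# NGC 4552: 1400 GCs within 10', M_V = -21.12  ->  S_N ~ 5.0
sn_m87 = specific_frequency(12000, -22.46)
sn_n4552 = specific_frequency(1400, -21.12)
```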
\section{SUMMARY}\label{summary}
We have performed a wide-field imaging survey of the globular cluster
(GC) populations around M87 with Suprime-Cam on the 8.2m Subaru
telescope. A $2^{\circ} \times 0_{\cdot}^{\circ}5$ (560 kpc $\times$ 140
kpc) field extending from M87 to the east was observed through the $BVI$
filters. In addition to this unprecedented large survey area, our data
analysis has been optimized to study the statistical properties of GCs
as follows:
\begin{itemize}
\item GC candidates are isolated not only with an extended source cut
but also with a colour cut, where only unresolved objects falling
on a specific region of the $B-V$ and $V-I$ colour-colour diagram
are accepted. The colour criterion is defined so as to include
almost all the Galactic GCs and to avoid foreground stars and
background galaxies. This is expected to efficiently isolate
bona-fide GCs from other unresolved objects on our imaging data.
\item In order to assess foreground and background contamination which
needs to be statistically subtracted, we analyze the imaging data
on the HDF-N field and the Lockman Hole field as control field
data. These fields cover reasonably wide sky areas ($\sim
30^{\prime} \times 30^{\prime}$) and are compatible with the data
of the M87 fields in terms of the filter set ($BVI$), limiting
magnitudes, and image qualities. We therefore extract
contaminating populations using identical criteria to those
adopted in the M87 fields, minimizing the possibility of
introducing any systematic errors into the subtractive
correction.
\end{itemize}
In this paper, we have investigated the luminosity function and global
specific frequency ($S_N$) of GC candidates surrounding M87 or NGC 4552.
The $V$-band GC luminosity functions (GCLFs) were obtained in the inner
regions of M87 and NGC 4552 at distances $\leq 10^{\prime}$ from the
galaxy centres. By fitting Gaussians to the GCLFs, the turnover
magnitude is estimated to be $23.62 \pm 0.06$ mag for M87 GCs and $23.56
\pm 0.20$ mag for NGC 4552 GCs. The GCLF appears to depend on GC colour;
the turnover magnitude in the GCLF of the red GC subpopulation ($V-I >
1.1$) is $\sim$ 0.5 mag and 0.4 mag fainter than that of the blue GC
subpopulation ($V-I \leq 1.1$) for the M87 GCs and NGC 4552 GCs,
respectively.
For the M87 GCs, the GCLFs at $1^{\prime} \leq R \leq 5^{\prime}$ were
compared with those at $5^{\prime} \leq R \leq 10^{\prime}$ but no
obvious trend with radius was found in the shape of the GCLF for either
the red or blue subpopulations.
The global $S_N$ of M87 GCs and NGC 4552 GCs is estimated to be 12.5
$\pm$ 0.8 within $25^{\prime}$ and 5.0 $\pm$ 0.6 within $10^{\prime}$,
respectively.
\section*{ACKNOWLEDGEMENTS}
We are grateful to the anonymous referee for careful reading of our
manuscript and helpful comments. This work was based on data collected
at Subaru Telescope and obtained from the SMOKA science archive at
Astronomical Data Analysis Center, which are operated by the National
Astronomical Observatory of Japan. We acknowledge the members of the
Subaru telescope operation team, especially Dr. Hisanori Furusawa, for
support during the observations. This work was partly supported by
Grants-in-Aid for Scientific Research (Nos. 16540223 and 17540216) by
the Japanese Ministry of Education, Culture, Sports, Science and
Technology.
\section{Introduction}
\label{sec:intro}
\vspace{-.5em}
Depth sensing is crucial for 3D reconstruction~\cite{Newcombe11KinectFusion,Niessner2013Hashing,Whelan15rss} and scene understanding~\cite{guptaECCV14,qi2017frustum,song15SUNRGBD}.
Active depth sensors (\eg, time of flight cameras~\cite{horaud16TOF, Remondino13TOF}, LiDAR~\cite{Christian2013ASO}) measure dense metric depth, but often have
limited operating range (\eg, indoor) and spatial resolution~\cite{chan08}, consume more power, and suffer from multi-path reflection and interference between sensors~\cite{Maimone12}.
In contrast, estimating depth directly from image(s) solves these issues, but faces other long-standing challenges such as scale ambiguity and drift for monocular methods~\cite{Saxena:2008DRS6}, as well as the correspondence problem and high computational cost for stereo~\cite{Tippetts16} and multi-view methods~\cite{Seitz:2006}.
\begin{figure}[t]
\centering
\includegraphics[width=.99\linewidth]{teaser_v5.pdf}
\caption{
We propose a DL-based method to estimate depth and its uncertainty (or, confidence) continuously for a monocular video stream, with the goal of turning an RGB camera into an RGB-D camera.
Its output can be directly fed into classical RGB-D based 3D scanning methods~\cite{Newcombe11KinectFusion,Niessner2013Hashing}
for 3D reconstruction.}
\label{fig:teaser}
\vspace{-.5em}
\end{figure}
Inspired by recent success of deep learning in 3D vision~\cite{Bloesch18CodeSLAM,Chang18PSM, Fu18DORN,Godard17MonoDepth,Huang18DeepMVS,Tateno17CNNSLAM, Ummenhofer17DeMoN,Wang18DDVO,Yao18MVSNet,Zhou18DeepTAM,Zhou17SfmLearner}, in this paper, we propose a DL-based method to estimate depth and its uncertainty continuously from a monocular video stream, with the goal of effectively turning an RGB camera into an RGB-D camera.
We have two key ideas:
\begin{enumerate}
\vspace{-.5em}
\item Unlike prior work, for each pixel, we estimate a depth probability distribution rather than a single depth value, leading to an estimate of a Depth Probability Volume (DPV) for each input frame.
%
As shown in Fig.~\ref{fig:teaser}, the DPV provides both a Maximum-Likelihood-Estimate (MLE) of the depth map, as well as the corresponding per-pixel uncertainty measure.
\vspace{-.5em}
\item These DPVs across different frames are accumulated over time, as more incoming frames are processed sequentially. %
The accumulation step, originating from Bayesian filtering theory and implemented as a learnable deep network, effectively reduces depth uncertainty and improves accuracy, robustness, and temporal stability over time, as shown later in Sec.~\ref{sec:results}.
\end{enumerate}
\vspace{-.5em}
We argue that all DL-based depth estimation methods should predict \emph{not depth values but depth distributions}, and should \emph{integrate such statistical distributions over time} (\eg, via Bayesian filtering).
This is because dense depth estimation from image(s) -- especially for single-view methods -- inherently has a lot of uncertainty, due to factors such as lack of texture, specular/transparent material, occlusion, and scale drift.
While some recent work started focusing on uncertainty estimation~\cite{Gal2016Dropout,Ilg18Uncertainty,Kendall2017uncertainty,Kendall2018multi} for certain computer vision tasks, to our knowledge, we are the first to predict a depth probability volume from images and integrate it over time in a statistical framework.
We evaluate our method extensively on multiple datasets and compare with recent state-of-the-art DL-based depth estimation methods~\cite{Fu18DORN,Godard17MonoDepth,Ummenhofer17DeMoN}. We also perform the so-called ``cross-dataset'' evaluation task, which tests models trained on a different dataset without fine-tuning. We believe such cross-dataset tasks are essential to
evaluate the robustness and generalization ability~\cite{RobustVisionChallenge18}. Experimental results show that, with reasonably good camera pose estimation, our method outperforms these prior methods on depth estimation with better accuracy, robustness, and temporal stability. Moreover, as shown in Fig.~\ref{fig:teaser}, the output of the proposed method can be directly fed into RGB-D based 3D scanning methods~\cite{Newcombe11KinectFusion, Niessner2013Hashing} for
3D scene reconstruction.
\section{Related Work}
\label{sec:related}
\paragraph{Depth sensing from active sensors}
Active depth sensors, such as depth cameras~\cite{horaud16TOF, Remondino13TOF} or LiDAR sensors~\cite{Christian2013ASO} provide dense metric depth measurements as well as sensor-specific confidence measure~\cite{Reynolds-tof-conf11}.
Despite their wide usage~\cite{guptaECCV14, Newcombe11KinectFusion, qi2017frustum, Whelan15rss}, they have several inherent drawbacks~\cite{chan08, Maimone12, Pomerleau12, Tuley05LIDAR}, such as limited operating range, low spatial resolution, sensor interference, and high power consumption.
Our goal in this paper is to mimic an RGB-D sensor with a monocular RGB camera, which continuously predicts depth (and its uncertainty) from a video stream.
\vspace{-1em}
\paragraph{Depth estimation from images}
Depth estimation directly from images has been a core problem in computer vision~\cite{Saxena:2007, Seitz:2006}.
Classical single view methods~\cite{Criminisi2000, Saxena:2008DRS6} often make strong assumptions on scene structures.
Stereo and multi-view methods~\cite{Seitz:2006} rely on triangulation and suffer from finding correspondences for textureless regions, transparent/specular materials, and occlusion.
Moreover, due to global bundle adjustment, these methods are often computationally expensive for real-time applications.
For depth estimation from a monocular video, there is also scale ambiguity and drifting~\cite{Artal17ORB}.
Because of these challenges, many computer vision systems~\cite{Artal17ORB, schoenberger2016sfm} use RGB images mainly for camera pose estimation but rarely for dense 3D reconstruction~\cite{schoenberger2016mvs}.
Nevertheless, depth sensing from images has great potentials, since it addresses all the above drawbacks of active depth sensors. In this paper, we take a step in this direction using a learning-based method.
\vspace{-1em}
\paragraph{Learning-based depth estimation}
Recently, researchers have shown encouraging results for depth sensing directly from image(s), including single-view methods~\cite{Fu18DORN,Godard17MonoDepth,Zhou17SfmLearner}, video-based methods~\cite{mahjourian2018googleicp,Wang18DDVO,yin2018geonet}, depth and motion from two views~\cite{Chang18PSM,Ummenhofer17DeMoN}, and multi-view stereo~\cite{Huang18DeepMVS,Yao18MVSNet,Zhou18DeepTAM}.
A few work also incorporated these DL-based depth sensing methods into visual SLAM systems~\cite{Bloesch18CodeSLAM,Tateno17CNNSLAM}.
Despite the promising performance, however, these DL-based methods are still far from real-world applications, since their robustness and generalization ability have yet to be thoroughly tested~\cite{RobustVisionChallenge18}.
In fact, as shown in Sec.~\ref{sec:results}, we found many state-of-the-art methods degrade significantly even for simple cross-dataset tasks.
This gives rise to an increasing demand for a systematic study of uncertainty and Bayesian deep learning for depth sensing, as performed in our paper.
\vspace{-1em}
\paragraph{Uncertainty and Bayesian deep learning}
Uncertainty and Bayesian modeling have been studied for decades, with various definitions ranging from the variance of posterior distributions for low-level vision~\cite{Szeliski1990} and motion analysis~\cite{Kim11gprf} to the variability of sensor input models~\cite{Kamberova98sensorerrors}. Recently, uncertainty estimation~\cite{Gal2016Dropout,Kendall2017uncertainty} in Bayesian deep learning was introduced for a variety of computer vision tasks~\cite{Clark17VidLoc, Ilg18Uncertainty, Kendall2018multi}. In our work,
the uncertainty is defined as the posterior probability of depth, \ie, the DPV estimated from a local window of several consecutive frames. Thus, our network estimates the ``measurement uncertainty''~\cite{Kendall2017uncertainty} rather than the ``model uncertainty''. We also learn an additional network module to integrate this depth probability distribution over time in a Bayesian filtering manner, in order to improve the accuracy and robustness for depth estimation from a video stream.
\vspace{-.5em}
\section{Our Approach}
\label{sec:method}
\begin{figure*}
\centering
\includegraphics[width=0.99\textwidth]{overview_v3.pdf}
\caption{
Overview of the proposed network for depth estimation with uncertainty from a video.
Our method takes the frames in a local time window in the video as input and outputs a Depth Probability Volume (DPV) that is updated over time.
The update procedure is in a Bayesian filter fashion: we first take the difference between the local DPV estimated using the D-Net (Sec.~\ref{subsec:method_dnet}) and
the predicted DPV from previous frames to get the residual;
then the residual is modified by the K-Net (Sec.~\ref{subsec:method_kvnet}) and added back to the predicted DPV;
at last the DPV is refined and upsampled by the R-Net (Sec.~\ref{subsec:method_refinenet}), which can be used to compute the depth map and its confidence measure.
}
\label{fig:network}
\end{figure*}
Figure~\ref{fig:network} shows an overview of our proposed method for depth sensing from an input video stream, which consists of three parts. The first part (Sec.~\ref{subsec:method_dnet}) is the D-Net,
which estimates the Depth Probability Volume (DPV) for each input frame. The second part (Sec.~\ref{subsec:method_kvnet}) is the K-Net,
which helps to integrate the DPVs over time. The third part (Sec.~\ref{subsec:method_refinenet}) is the refinement R-Net,
which improves the spatial resolution of DPVs with the guidance from input images.
Specifically, we denote the depth probability volume (DPV) as $p(d;u,v)$, which represents the probability of pixel $(u,v)$ having a depth value $d$, where $d\in [d_{min}, d_{max}]$. Due to perspective projection, the DPV is defined on the 3D view frustum attached to the camera, as shown in Fig.~\ref{fig:repr}(a). $d_{min}$ and $d_{max}$ are the near and far planes of the 3D frustum, which is discretized into $N=64$ planes uniformly over the inverse of depth (\ie, disparity). The DPV contains the complete statistical distribution of depth for a given scene. In this paper, we directly use the non-parametric volume to represent the DPV; parametric models, such as a Gaussian Mixture Model~\cite{mdn94}, could also be used. Given the DPV, we can compute the Maximum-Likelihood Estimates (MLE) for depth and its confidence:
\begin{align}
\mbox{Depth}: &\hspace{1em} \hat{d}(u,v) = \sum_{d=d_{min}}^{d_{max}} p(d; (u,v))\cdot d, \\
\mbox{Confidence}: &\hspace{1em} \hat{C}(u,v) = p(\hat{d}; (u,v)).
\label{eq:dpv_depth}
\end{align}
To make notations more concise, we will omit $(u,v)$ and use $p(d)$ for DPVs in the rest of the paper.
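A minimal NumPy sketch of reading a depth map and confidence map off a normalized DPV (the toy candidates and probabilities are invented for illustration; the confidence $p(\hat{d})$ is approximated here by the probability of the candidate bin nearest the estimated depth):

```python
import numpy as np

def depth_and_confidence(dpv, depth_candidates):
    """dpv: (N, H, W) probability volume, normalized along axis 0.
    Returns the expected depth per pixel and, as the confidence,
    the probability of the candidate bin nearest that depth."""
    d = depth_candidates[:, None, None]
    depth = np.sum(dpv * d, axis=0)
    # nearest-bin lookup for p(d_hat)
    idx = np.argmin(np.abs(d - depth[None]), axis=0)
    conf = np.take_along_axis(dpv, idx[None], axis=0)[0]
    return depth, conf

cands = np.linspace(1.0, 4.0, 4)        # 4 depth candidates
dpv = np.full((4, 2, 2), 0.25)          # flat distribution: low confidence
dpv[:, 0, 0] = [0.0, 1.0, 0.0, 0.0]     # sharp peak at d=2: high confidence
depth, conf = depth_and_confidence(dpv, cands)
```

The peaked pixel yields depth 2.0 with confidence 1.0, while the flat pixels yield the mid-range depth 2.5 with confidence $1/N = 0.25$.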
When processing a video stream, the DPV can be treated as a hidden state of the system. As the camera moves, as shown in Fig.~\ref{fig:repr}(b), the DPV $p(d)$ is being \emph{updated} as new observations arrive, especially for the overlapping volumes. Meanwhile, if camera motion is known, we can easily \emph{predict} the next state $p(d)$ from the current state. This predict-update iteration naturally implies a Bayesian filtering scheme to update the DPV over time for better accuracy.
\begin{figure}
\centering
\includegraphics[width=.99\linewidth]{parameterize_v2.pdf}
\caption{Representation and update of the DPV. (a)~The DPV is defined over the 3D view frustum of the pinhole camera model.
(b)~The DPV is updated over time as the camera moves.}
\label{fig:repr}
\end{figure}
\subsection{D-Net: Estimating DPV}
\label{subsec:method_dnet}
For each frame $I_t$, we use a CNN, named D-Net, to estimate the conditional DPV, $p(d_t|I_t)$, using $I_t$ and its temporally neighboring frames.
In this paper, we consider a local time window of five frames $\mathcal{N}_t = [t-2\Delta t, t-\Delta t, t, t+\Delta t, t+2\Delta t]$, and we set $\Delta t=5$ for all our testing videos (25fps/30fps). For a given depth candidate $d$, we can compute a cost map by warping all the neighboring frames into the current frame $I_t$ and computing their differences. Thus, for all depth candidates, we can compute a cost volume, which produces the DPV after a softmax layer:
\begin{align}
L(d_t|I_t) &= \sum_{k \in \mathcal{N}_t, k\neq t} ||f(I_t)-\mbox{warp}(f(I_k); d_t, \delta T_{kt})||,\nonumber\\
p(d_t|I_t) &= \mbox{softmax}(L(d_t|I_t)),
\end{align}
where $f(\cdot)$ is a feature extractor, $\delta T_{kt}$ is the relative camera pose from frame $I_k$ to frame $I_t$, $\mbox{warp}(\cdot)$ is an operator that warps the image features from frame $I_k$ to the reference frame $I_t$, which is implemented as 2D grid sampling. In this paper, without loss of generality, we use the feature extractor $f(\cdot)$ from PSM-Net~\cite{Chang18PSM}, which outputs a feature map of 1/4 size of the input image. Later in Sec.~\ref{subsec:method_refinenet}, we learn a refinement R-Net to upsample the DPV back to the original size of the input image.
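To make the cost-volume construction concrete, here is a deliberately simplified 1-D NumPy sketch: camera motion is reduced to a pure horizontal shift, so warping a source feature row for a given depth candidate becomes a shift by its disparity, and borders are handled by circular shifts purely for brevity. None of these simplifications hold in the actual D-Net, which uses learned features and full projective warping.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def dpv_1d(f_ref, f_src, disparities):
    """Toy plane sweep over a 1-D feature row: for each disparity
    candidate, warp (shift) the source row, score it by the negative
    L1 cost against the reference, and turn the scores into a
    probability over candidates via softmax."""
    costs = np.array([np.abs(f_ref - np.roll(f_src, -d)).sum()
                      for d in disparities])
    return softmax(-costs)

f_ref = np.array([0.0, 1.0, 3.0, 2.0, 0.0, 1.0])
true_disp = 2
f_src = np.roll(f_ref, true_disp)        # source seen shifted by 2 px
p = dpv_1d(f_ref, f_src, disparities=[0, 1, 2, 3])
```

The candidate matching the true disparity has zero warping cost, so the resulting distribution peaks at index 2.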
Figure~\ref{fig:confmap} shows a depth map $\hat{d}(u,v)$ and its confidence map $\hat{C}(u,v)$ derived from the DPV of the input image, together with the depth probability distributions $p(d;u,v)$ at three selected points: the red and green points have sharply peaked distributions and hence confident depth estimates, whereas the blue point, lying in a highlight region, has a flat distribution and a low-confidence depth value.
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{method_conf.pdf}
\caption{An example of a depth map $\hat{d}(u,v)$ and its confidence map $\hat{C}(u,v)$ (blue means low confidence) derived from a Depth Probability Volume (DPV). The bottom plot shows the depth probability distributions $p(d;u,v)$ for the three selected points, respectively. The red and green points have sharp peaks, which indicates high confidence in their depth values. The blue point is in the highlight region, which results in a flat depth probability distribution and a low confidence for its depth value.}
\label{fig:confmap}
\end{figure}
\subsection{K-Net: Integrating DPV over Time}
\label{subsec:method_kvnet}
\begin{figure}
\centering
\includegraphics[width=.99\linewidth]{damping.pdf}
\caption{Comparison between different methods for integrating DPV over time. Part of the wall is occluded by the chair at frame $t$ and disoccluded in frame $t+1$.
\textbf{No filtering}: not integrating the DPV over time.
\textbf{No damping}: integrating DPV directly with Bayesian filtering.
\textbf{Global damping}: down-weighting the predicted DPV for all voxels using Eq.~\ref{eq:global_damping} with $\lambda=0.8$.
\textbf{Adaptive damping}: down-weighting the predicted DPV adaptively with the K-Net (Sec.~\ref{subsec:method_kvnet}).
Using the K-net, we get the best depth estimation for regions with/without disocclusion.
}
\label{fig:damping}
\vspace{-1em}
\end{figure}
When processing a video stream, our goal is to integrate the local estimation of DPVs over time to reduce uncertainty. As mentioned earlier, this integration can be naturally implemented as Bayesian filtering. Let us define $d_t$ as the hidden state, which is the depth (in camera coordinates) at frame $I_t$. The ``belief'' volume $p(d_t|I_{1:t})$ is the conditional distribution of the state given all the previous frames. A simple Bayesian filter can be implemented in two iterative steps:
\begin{align}
\mbox{Predict}:& \hspace{1em} p(d_t | I_{1:t}) \rightarrow p(d_{t+1} | I_{1:t}), \nonumber \\
\mbox{Update}:& \hspace{1em} p(d_{t+1} | I_{1:t}) \rightarrow p(d_{t+1} | I_{1:t+1}),
\end{align}
where the prediction step is to warp the current DPV from the camera coordinate at $t$ to the camera coordinate at $t+1$:
\begin{equation}
p(d_{t+1}| I_{1:t}) = \mbox{warp}(p(d_t|I_{1:t}), \delta T_{t,t+1}),
\label{eq:bayesian_p}
\end{equation}
where $\delta T_{t,t+1}$ is the relative camera pose from time $t$ to time $t+1$, and $\mbox{warp}(\cdot)$ here is a warping operator implemented as 3D grid sampling. At time $t+1$, we can compute the local DPV $p(d_{t+1}|I_{t+1})$ from the new measurement $I_{t+1}$ using the D-Net. This local estimate is thus used to update the hidden state, \ie, the ``belief'' volume,
\begin{equation}
\label{eq:bayesian}
p(d_{t+1} | I_{1:t+1}) = p(d_{t+1} | I_{1:t}) \cdot p(d_{t+1}|I_{t+1}).
\end{equation}
Note that we always normalize the DPV in the above equations and ensure $\int_{d_{min}}^{d_{max}} p(d)=1$. Figure~\ref{fig:damping} shows an example. As shown in the second row, with the above Bayesian filtering (labeled as "no damping"), the estimated depth map is less noisy, especially in the regions of the back wall and the floor.
However, one problem with directly applying Bayesian filtering is that it integrates both correct and incorrect information in the prediction step. For example, when there are occlusions or disocclusions, the depth values near the occlusion boundaries change abruptly. Applying Bayesian filtering directly will propagate wrong information to the next frames for those regions, as highlighted in the red box in Fig.~\ref{fig:damping}. One straightforward solution is to reduce the weight of the prediction in order to prevent incorrect information from being integrated over time. Specifically, by defining $E(d)=-\log p(d)$, Eq.~\ref{eq:bayesian} can be re-written as
\begin{equation}
E(d_{t+1} | I_{1:t+1} ) = E(d_{t+1} | I_{1:t}) + E(d_{t+1} | I_{t+1}), \nonumber
\end{equation}
where the first term is the prediction and the second term is the measurement. To reduce the weight of the prediction, we multiply a weight $\lambda\in [0,1]$ with the first term,
\begin{equation}
E(d_{t+1} | I_{1:t+1} ) = \lambda\cdot E(d_{t+1} | I_{1:t}) + E(d_{t+1} | I_{t+1}).
\label{eq:global_damping}
\end{equation}
We call this scheme ``global damping''. As shown in Fig.~\ref{fig:damping}, global damping helps to reduce the error in the disoccluded regions. However, global damping may also prevent some correct depth information from being integrated into the next frames, since it reduces the weights equally for all voxels in the DPV. Therefore, we propose an ``adaptive damping'' scheme to update the DPV:
\begin{equation}
E(d_{t+1} | I_{1:t+1} ) = E(d_{t+1} | I_{1:t}) + g(\Delta E_{t+1}, I_{t+1}),
\label{eq:kvnet}
\end{equation}
where $\Delta E_{t+1}$ is the difference between the measurement and the prediction,
\begin{equation}
\Delta E_{t+1} = E(d_{t+1}| I_{t+1}) - E(d_{t+1} | I_{1:t}),
\label{eq:kvnet_residual}
\end{equation}
and $g(\cdot)$ is a CNN, named K-Net, which learns to transform $\Delta E_{t+1}$ into a correction term for the prediction. Intuitively, for regions with correct depth probability estimates, the values in the overlapping volume of DPVs are consistent; thus the residual in Eq.~\ref{eq:kvnet_residual} is small and the DPV is barely updated in Eq.~\ref{eq:kvnet}. On the other hand, for regions with incorrect depth probability, the residual is large and the DPV is corrected by $g(\Delta E, I_{t+1})$. This way, the weight of the prediction is changed adaptively for different DPV voxels. As shown in Fig.~\ref{fig:damping}, the adaptive damping, \ie, the K-Net, significantly improves the accuracy of depth estimation. In fact, the K-Net is closely related to the derivation of the Kalman filter, where ``K'' stands for Kalman gain. Please refer to the appendix for details.
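The K-Net itself is learned, but its qualitative behavior can be sketched with a hand-crafted gate (an invented stand-in, not the trained network): where the residual between measurement and prediction is large, the update moves toward the new measurement; where it is small, most of the accumulated prediction is kept.

```python
import numpy as np

def adaptive_damping(E_pred, E_meas, tau=1.0):
    """Hand-crafted stand-in for the learned K-Net g(.): per voxel, a large
    residual |dE| (prediction and measurement disagree, e.g. at a
    disocclusion) gates the update toward the new measurement, while a
    small residual keeps most of the accumulated prediction.  E = -log p."""
    dE = E_meas - E_pred                           # residual term
    w = 1.0 / (1.0 + np.exp(-(np.abs(dE) - tau)))  # sigmoid gate in (0, 1)
    return E_pred + w * dE    # w -> 1: take measurement; w -> 0: keep prediction

E_pred = np.array([0.5, 0.5])   # accumulated belief (negative log prob.)
E_meas = np.array([0.6, 5.0])   # voxel 0 agrees; voxel 1 strongly disagrees
E_new = adaptive_damping(E_pred, E_meas)
```

Voxel 0 stays near the prediction, while voxel 1 jumps close to the measurement, which is the behavior the learned $g(\cdot)$ is meant to capture per voxel.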
\subsection{R-Net and Training Details}
\label{subsec:method_refinenet}
Finally, since the DPV $p(d_t|I_{1:t})$ is estimated at $1/4$ of the spatial resolution (in both width and height) of the input image, we employ a CNN, named R-Net, to upsample and refine the DPV back to the original image resolution. The R-Net, $h(\cdot)$, is essentially a U-Net with skip connections, which takes as input the low-resolution DPV from the K-Net $g(\cdot)$ and the image features extracted by the feature extractor $f(\cdot)$, and outputs a high-resolution DPV.
In summary, as shown in Fig.~\ref{fig:network}, the entire network has three modules, \ie, the D-Net, $f(\cdot;\Theta_1)$, the K-Net, $g(\cdot; \Theta_2)$, and the R-Net, $h(\cdot;\Theta_3)$. Detailed network architectures are provided in the appendix. The full network is trained end-to-end, with simply the Negative Log-Likelihood (NLL) loss over the depth, $\mbox{Loss} = \mbox{NLL}(p(d), d_{GT})$. We also tried to add image warping as an additional loss term (\ie, minimizing the difference between $I_t$ and the warped neighboring frames), but we found that it does not improve the quality of depth prediction.
During training, we use ground truth camera poses. For all our experiments, we use the ADAM optimizer \cite{Diederik2018Adam} with a learning rate of $10^{-5}$, $\beta_1=.9$ and $\beta_2=.999$. The whole framework, including D-Net, K-Net and R-Net, is trained together in an end-to-end fashion for 20 epochs.
\subsection{Camera Poses during Inference}
\label{subsec:method_lba}
During inference, given an input video stream, our method requires relative camera poses $\delta T$ between consecutive frames --- at least for all the first five frames --- to bootstrap the computation of the DPV. In this paper, we evaluated several options to solve this problem. In many applications, such as autonomous driving and AR, initial camera poses may be provided by additional sensors such as GPS, odometer, or IMU. Alternatively, we can also run state-of-the-art monocular visual odometry methods, such as DSO~\cite{Engel18pami}, to obtain the initial camera poses. Since our method outputs continuous dense depth maps and their uncertainty maps, we can in fact further optimize the initial camera poses within a local time window, similar to local bundle adjustment~\cite{Triggs99}.
\begin{figure}
\centering
\includegraphics[width=.99\linewidth]{pose_opt.pdf}
\caption{Camera pose optimization in a sliding local time window during inference. Given the relative camera pose from the reference frame in $\mathcal{N}_t$ to the reference frame in $\mathcal{N}_{t+1}$, we can predict the depth map for the reference frame in $\mathcal{N}_{t+1}$. Then, we optimize the relative camera poses between every source frame and the reference frame in $\mathcal{N}_{t+1}$ using Eq.~\ref{eq:opt_pose}.}
\label{fig:pose_opt}
\vspace{-1em}
\end{figure}
Specifically, as shown in Fig.~\ref{fig:pose_opt}, given $p(d_{t}|I_{1:t})$, the DPV of the reference frame $I_t$ in the local time window $\mathcal{N}_{t}$, we can warp $p(d_t|I_{1:t})$ to the reference camera view in $\mathcal{N}_{t+1}$ to predict the DPV $p(d_{t+1}|I_{1:t})$ using Eq.~\ref{eq:bayesian_p}.
Then we get the depth map $\hat{d}$ and confidence map $\hat{C}$ for the new reference frame using Eq.~\ref{eq:dpv_depth}.
The camera poses within the local time window $\mathcal{N}_{t+1}$ are optimized as:
\begin{equation}
\begin{aligned}
\vspace{1em}
\underset{ \substack{\delta T_{k,t+1} \\
k \in \mathcal{N}_{t+1}, k\neq t+1} }{\text{min.}}
\vspace{1em} \sum_k \hat{C}
|I_{t+1} - \text{warp}(I_{k}; \hat{d}; \delta T_{k, t+1}) |_1,
\end{aligned}
\label{eq:opt_pose}
\end{equation}
where $\delta T_{k,t+1}$ is the relative camera pose of frame $k$ to frame $t+1$;
$I_{k}$ is the source image at frame $k$;
$\mbox{warp}(\cdot)$
is an operator that warps the image from the source view to the reference view.
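For a fixed candidate pose, the objective of Eq.~\ref{eq:opt_pose} reduces to a confidence-weighted L1 photometric residual. The following Python sketch (hypothetical; the $\mbox{warp}(\cdot)$ operator, which needs the depth map and camera intrinsics, is assumed to be provided elsewhere) evaluates that residual given an already-warped source image:

```python
import numpy as np

# Sketch (not the authors' implementation) of the confidence-weighted
# photometric objective of Eq. (opt_pose): per-pixel L1 difference between
# the reference image and the warped source image, down-weighted where the
# depth confidence is low.
def photometric_cost(I_ref, I_warped, conf):
    return float(np.sum(conf * np.abs(I_ref - I_warped)))

I_ref = np.array([[0.2, 0.4], [0.6, 0.8]])
I_warped = np.array([[0.2, 0.5], [0.6, 1.0]])   # imperfect candidate pose
conf = np.array([[1.0, 1.0], [1.0, 0.0]])       # last pixel: unreliable depth
print(round(photometric_cost(I_ref, I_warped, conf), 6))  # 0.1
```

Only the pixels with high depth confidence contribute to the cost, so unreliable depths (e.g. on specular surfaces) do not corrupt the pose estimate.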
\vspace{-0.5em}
\section{Experimental Results}
\label{sec:results}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{confmap_v2.pdf}
\caption{Exemplar results of our approach on ScanNet~\cite{dai2017scannet}. In addition to high quality depth output, we also obtain reasonable confidence maps (as shown in the marked regions for occlusion and specularity) which correlates with the depth error. Moreover, the confidence maps accumulate correctly over time with more input frames.}
\label{fig:uncertainty}
\vspace{-0.5em}
\end{figure*}
We evaluate our method on multiple indoor and outdoor datasets~\cite{shotton13data,Sturm12iros,Gaidon16cvpr,Geiger12cvpr},
with an emphasis on accuracy and robustness. For accuracy evaluation, we argue that the widely-used statistical metrics~\cite{Eigen14depth, Ummenhofer17DeMoN} are insufficient because they only provide an overall estimate over the entire depth map. Instead, we feed the estimated depth maps directly into classical RGB-D based 3D scanning systems~\cite{Newcombe11KinectFusion,Niessner2013Hashing} for 3D reconstruction --- this shows the metric accuracy, the consistency, and the usefulness of the estimation. For robustness evaluation, we performed the aforementioned cross-dataset evaluation tasks, \ie, testing on new datasets without fine-tuning. The performance degradation on new datasets shows the generalization ability and robustness of a given algorithm.
As no prior work operates in exactly the same setting as ours, it is difficult to choose methods to compare with. We carefully select a few recent DL-based depth estimation methods and try our best to ensure a fair comparison. For single-view methods, we select DORN~\cite{Fu18DORN}, which is the current state-of-the-art~\cite{RobustVisionChallenge18}. For two-view methods, we compare with DeMoN~\cite{Ummenhofer17DeMoN}, which shows high quality depth prediction from a pair of images. We also compare with MonoDepth~\cite{Godard17MonoDepth}, a semi-supervised learning approach trained on stereo images.
To improve the temporal consistency of these per-frame estimations, we trained a post-processing network~\cite{Lai18Temporal}, but observed that it does not improve the performance.
Since there is always a scale ambiguity for depth from a monocular camera, for a fair comparison we normalize the scale of the outputs from all the above methods before computing the statistical metrics~\cite{Eigen14depth}.
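To make the scale normalization and metric computation concrete, here is a minimal Python sketch of two of the reported metrics (assuming median scaling, a common choice for removing the global scale; the exact normalization used by the authors is not specified here):

```python
import numpy as np

# Sketch of scale normalization followed by two of the statistical metrics
# of Eigen et al.: abs. rel error and the fraction of pixels with
# max(pred/gt, gt/pred) < 1.25.
def scaled_metrics(pred, gt):
    pred = pred * np.median(gt) / np.median(pred)   # remove global scale
    ratio = np.maximum(pred / gt, gt / pred)
    delta = float((ratio < 1.25).mean())            # fraction within 25%
    abs_rel = float(np.mean(np.abs(pred - gt) / gt))
    return delta, abs_rel

gt = np.array([1.0, 2.0, 4.0, 8.0])
pred = 2.0 * gt                                     # correct up to scale
delta, abs_rel = scaled_metrics(pred, gt)
print(delta, abs_rel)                               # 1.0 0.0
```

A prediction that is correct up to a global scale factor scores perfectly after normalization, which is exactly the behavior wanted when comparing scale-ambiguous monocular methods.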
The inference time of our method is $\sim$$0.7$ second per frame without pose optimization and $\sim$$1.5$ seconds per frame with pose optimization, on a workstation with a GTX 1080 GPU and 64~GB RAM, with the framework implemented in Python. The pose estimation part can be implemented in C++ to improve efficiency.
\vspace{-1em}
\paragraph{Results for Indoor Scenarios}
\begin{table}[t]
\centering
\caption{Comparison of depth estimation over the 7-Scenes dataset~\cite{shotton13data} with the metrics defined in~\cite{Eigen14depth}.}
\begin{tabular}{ rcccc }
\toprule
& $\sigma<1.25$
& abs. rel
& rmse
& scale inv. \\
\midrule
DeMoN~\cite{Ummenhofer17DeMoN}
& 31.88
& 0.3888
& 0.8549
& 0.4473\\
DORN~\cite{Fu18DORN}
& 60.05
& 0.2000
& 0.4591
& 0.2207 \\
Ours
& \textbf{69.26}
& \textbf{0.1758}
& \textbf{0.4408}
& \textbf{0.1899}\\
\bottomrule
\end{tabular}
\label{tab:result_7scenes}
\vspace{-1em}
\end{table}
We first evaluated our method for indoor scenarios, for which RGB-D sensors were used to capture dense metric depth for ground truth. We trained our network on ScanNet~\cite{dai2017scannet}.
Figure~\ref{fig:uncertainty} shows two exemplar results. As shown, in addition to depth maps, our method also outputs reasonable confidence maps (\eg, low confidence in occluded or specular regions) which correlate with the depth errors.
Moreover, with more input frames, the confidence maps accumulate correctly over time: the confidence of the books (top row) increases and the depth error decreases;
the confidence of the glass region (bottom row) decreases and the depth error increases.
For comparison, since the models provided by DORN and DeMoN were trained on different datasets, we compare with these two methods on a separate indoor dataset 7Scenes~\cite{shotton13data}. For our method, we assume that the relative camera rotation $\delta R$ within a local time window is provided (\textit{e.g.} measured by IMU). As shown in Table~\ref{tab:result_7scenes}, our method significantly outperforms both DeMoN and DORN on this dataset based on the commonly used statistical metrics~\cite{Eigen14depth}. We include the complete metrics in the appendix.
Without using an IMU, our method can also achieve better performance, as shown in Table~\ref{tab:ablation_pose}.
For qualitative comparison, as shown in Fig.~\ref{fig:result_3d}, the depth maps from our method are less noisy, sharper, and temporally more consistent. More importantly, using an RGB-D 3D scanning method~\cite{Niessner2013Hashing}, we can reconstruct a much higher quality 3D mesh with our estimated depths than with other methods. Even when compared with 3D reconstruction using a real RGB-D sensor, our result has better coverage and accuracy in some regions (\eg, monitors / glossy surfaces) that active depth sensors cannot capture.
\begin{figure*}
\centering
\includegraphics[width=.99\linewidth]{voxel_hashing_compress.pdf}
\caption{Depth and 3D reconstruction results on indoor datasets (best viewed when zoomed in).
We compare our method with DORN \cite{Fu18DORN} and DeMoN \cite{Ummenhofer17DeMoN}, in terms of both depth maps and 3D reconstruction using Voxel Hashing \cite{Niessner2013Hashing} that accumulates the estimated depth maps for multiple frames.
To show the temporal consistency of the depths, we use different numbers of depth maps for Voxel Hashing: $2$ depth maps for the first sample and $30$ depth maps for the other samples.
The depth maps from DORN contain block artifacts, as marked in the red boxes, which manifest as rippled shapes in the 3D reconstruction.
DeMoN generates sharp depth boundaries but fails to recover the depth faithfully in the regions marked in the green box. Also, the depths from DeMoN are not temporally consistent, which leads to severe misalignment artifacts in the 3D reconstructions.
In comparison, our method generates correct and temporally consistent depth maps, especially in regions with high confidence, such as the monitor, where even the Kinect sensor fails to get the depth due to low reflectance.}
\label{fig:result_3d}
\vspace{-1em}
\end{figure*}
\vspace{-1em}
\paragraph{Results for Outdoor Scenarios}
We also evaluated our method on some outdoor datasets --- KITTI~\cite{Geiger12cvpr} and virtual KITTI~\cite{Gaidon16cvpr}. The virtual KITTI dataset is used because it has dense, accurate metric depth as ground truth, while KITTI only has sparse depth values from LiDAR as ground truth. For our method, we use the camera poses measured by the IMU and GPS.
Table~\ref{tab:result_kitti} lists the comparison results with DORN~\cite{Fu18DORN}, Eigen~\cite{Eigen14depth}, and MonoDepth~\cite{Godard17MonoDepth}, which are also trained on KITTI~\cite{Geiger12cvpr}. Our method performs similarly to DORN~\cite{Fu18DORN} and better than the other two methods, based on the statistical metrics defined in~\cite{Eigen14depth}. We also tested our method with camera poses from DSO~\cite{Engel18pami} and obtained slightly worse performance (see appendix).
Figure~\ref{fig:result_kitti} shows a qualitative comparison of depth maps on the KITTI dataset. As shown, our method generates sharper and less noisy depth maps. In addition, our method outputs depth confidence maps (\eg, lower confidence on the car window). Our depth estimation is temporally consistent, which makes it possible to fuse multiple depth maps with voxel hashing~\cite{Niessner2013Hashing} in the outdoors for a large-scale dense 3D reconstruction, as shown in Fig.~\ref{fig:result_kitti}.
In Table~\ref{tab:result_vkitti}, we performed the cross-dataset task. The left shows the results with training from KITTI~\cite{Geiger12cvpr} and testing on virtual KITTI~\cite{Gaidon16cvpr}. The right shows the results with training from indoor datasets (NYUv2~\cite{Silberman12ECCV} for DORN~\cite{Fu18DORN} and ScanNet~\cite{dai2017scannet} for ours) and testing on KITTI~\cite{Geiger12cvpr}. As shown, our method performs better, which shows its better robustness and generalization ability.
\begin{table}
\centering
\caption{Comparison of depth estimation on KITTI~\cite{Geiger12cvpr}.}
\begin{tabular}{rcccc}
\toprule
& $\sigma<1.25$
& abs. rel
& rmse
& scale inv. \\
\midrule
Eigen~\cite{Eigen14depth}
& 67.80
& 0.1904
& 5.114
& 0.2628
\\
Mono~\cite{Godard17MonoDepth}
& 86.43
& 0.1238
& 2.8684
& 0.1635
\\
DORN~\cite{Fu18DORN}
& 92.62
& \textbf{0.0874}
& 3.1375
& 0.1233 \\
Ours
& \textbf{93.15}
& 0.0998
& \textbf{2.8294}
& \textbf{0.1070} \\
\bottomrule
\end{tabular}
\label{tab:result_kitti}
\vspace{-1em}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{kitti_v5.pdf}
\caption{
Depth map and 3D reconstruction for KITTI, compared with DORN \cite{Fu18DORN} and MonoDepth \cite{Godard17MonoDepth} (best viewed when zoomed in).
First row: Our depth map is sharper and contains less noise.
For specular region (marked in the pink box), the confidence is lower.
Second row, from left to right: reconstructions using depth maps of the same 100 frames estimated from MonoDepth, DORN and our method. All meshes are viewed from above. Within the 100 frames, the vehicle was travelling in a straight line without turning.
}
\label{fig:result_kitti}
\end{figure*}
\begin{table}
\vspace{-1em}
\centering
\caption{Cross-dataset tests for depth estimation in the outdoors.}
\begin{tabular}{rcccc}
\toprule
& \multicolumn{4}{c}{KITTI (train) $\rightarrow$ virtual KITTI (test)}\\
\cmidrule(r){2-5}
& $\sigma<1.25$ & abs. rel & rmse & scale inv.\\
\midrule
DORN~\cite{Fu18DORN} & 69.61 & \textbf{0.2256} & 9.618 & 0.3986 \\
Ours & \textbf{73.38} & 0.2537 & \textbf{6.452} & \textbf{0.2548} \\
\midrule
& \multicolumn{4}{c}{Indoor (train) $\rightarrow$ KITTI (test)} \\
\cmidrule(r){2-5}
& $\sigma<1.25$ & abs. rel & rmse & scale inv.\\
\midrule
DORN~\cite{Fu18DORN} & 25.44 & 0.6352 & 8.603 & 0.4448\\
Ours & \textbf{72.96} & \textbf{0.2798} & \textbf{5.437} & \textbf{0.2139}\\
\bottomrule
\end{tabular}
\label{tab:result_vkitti}
\vspace{-.5em}
\end{table}
\vspace{-1em}
\paragraph{Ablation Study}
The performance of our method relies on accurate estimates of camera poses, so we test our method with different camera pose estimation schemes:
(1) Relative camera rotation $\delta R$ is read from an IMU sensor (denoted as ``GT $R$'').
(2) $\delta R$ of all frames are initialized with DSO~\cite{Engel18pami} (denoted as ``VO pose'').
(3) $\delta R$ of the first five frames are initialized with DSO~\cite{Engel18pami} (denoted as ``$1$\textit{st} win'').
We observe that when only the camera poses in the first time window are initialized using DSO, the depth estimation performance is better than when the DSO pose initialization is used for all frames. This may seem counter-intuitive, but it is because monocular VO methods sometimes have large errors in textureless regions, while optimization with dense depths may overcome this problem.
\begin{table}
\centering
\caption{Performance on 7Scenes with different initial poses}
\begin{tabular}{rcccc}
\toprule
& $\sigma<1.25$
& abs. rel
& rmse
& scale inv. \\
\midrule
VO pose
& 60.63
& 0.1999
& 0.4816
& 0.2158
\\
$1$\textit{st} win.
& 62.08
& 0.1923
& 0.4591
& 0.2001
\\
GT $R$
& 69.26
& 0.1758
& 0.4408
& 0.1899
\\
GT pose
& 70.54
& 0.1619
& 0.3932
& 0.1586 \\
\bottomrule
\end{tabular}
\label{tab:ablation_pose}
\vspace{-1em}
\end{table}
\vspace{-1em}
\paragraph{Usefulness of the Confidence Map}
The estimated confidence maps can also be used to further improve the depth maps. As shown in Fig.~\ref{fig:conf_FBS}(a), given the depth map and the corresponding confidence, we can correct regions with low confidence due to specular reflection. Also, for 3D reconstruction algorithms, given the depth confidence, we can mask out regions with low confidence for a better reconstruction, as shown in Fig.~\ref{fig:conf_FBS}(b).
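A minimal sketch of the masking step described above (the threshold value and the convention that zero depth means ``no measurement'' are assumptions for illustration, not the authors' exact pipeline):

```python
import numpy as np

# Invalidate depths whose confidence is below a threshold before passing
# the depth map to a fusion method such as Voxel Hashing.
def mask_depth(depth, conf, thresh=0.5):
    out = depth.copy()
    out[conf < thresh] = 0.0   # treated as "no measurement" by fusion
    return out

depth = np.array([[1.0, 2.0], [3.0, 4.0]])
conf = np.array([[0.9, 0.2], [0.8, 0.4]])
print(mask_depth(depth, conf))  # low-confidence pixels are dropped
```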
\begin{figure}
\centering
\includegraphics[width=.95\linewidth]{use_conf_4.pdf}
\caption{Usefulness of depth confidence map. (a) Correct depth map using Fast Bilateral Solver \cite{BarronPoole2016}.
(b) Mask out pixels with low confidence before applying Voxel Hashing \cite{Niessner2013Hashing}.
}
\label{fig:conf_FBS}
\vspace{-2em}
\end{figure}
\vspace{-.5em}
\section{Conclusions and Limitations}
\label{sec:conclusion}
In this paper, we present a DL-based method for continuous depth sensing from a monocular video camera. Our method estimates a depth probability distribution volume from a local time window, and integrates it over time under a Bayesian filtering framework.
Experimental results show our approach achieves high accuracy, temporal consistency, and robustness for depth sensing, especially for the cross-dataset tasks. The estimated depth maps from our method can be fed directly into RGB-D scanning systems for 3D reconstruction and achieve on-par or sometimes more complete 3D meshes than using a real RGB-D sensor.
There are several limitations that we plan to address in the future. First, camera poses from a monocular video often suffer from scale drifting, which may affect the accuracy of our depth estimation.
Second, in this work we focus on depth sensing from a local time window, rather than solving it in a global context using all the frames.
\newpage
\input{sec_appendix.tex}
\newpage
{\small
\bibliographystyle{ieee}
\section{Relation of K-Net to the Kalman filter}
The update process defined in Eq.~8 of the main paper using residuals is closely related to the Kalman filter.
In a Kalman filter, given the observation $x_t$ at time $t$ and the estimated hidden state $h_{t-1}$ at time $t-1$, the updated hidden state $h_{t}$ is:
\begin{equation}
h_t = W_t h_{t-1} + K_t(x_t - V_t W_t h_{t-1})
\label{eq:kvnet_kalman}
\end{equation}
where $W_t$ is the transition matrix mapping the previous hidden state to current state;
$K_t$ is the gain matrix mapping the residual in the observation space to the hidden state space.
$V_t$ is the measurement matrix mapping the estimation in the hidden state space back to the observation space.
If we assume the measurement matrix is accurate: $x_t = V h_t$, and the gain and measurement matrices are temporally invariant,
we have:
\begin{align}
h_t &= W_t h_{t-1}
+ K(V h_t - V W_t h_{t-1}) \nonumber \\
&= W_t h_{t-1}
+ K V( h_t - W_t h_{t-1})
\label{eq:kvnet_kalman2}
\end{align}
Comparing our proposed update process in Eq.~5, Eq.~8 and Eq.~9 in the main paper
and
Kalman Filter in Eq.\ref{eq:kvnet_kalman2},
in our case the input images correspond to the observations $x_t$;
the negative-log depth probabilities correspond to the hidden states $h_t$;
the warping operator $\text{warp}(\cdot)$ corresponds to the transition matrix $W_t$;
the K-Net $g(\cdot)$ corresponds to the multiplication of the gain and measurement matrices $KV$
in Eq.~\ref{eq:kvnet_kalman2}.
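A scalar analogue of this correspondence, assuming a fixed combined gain $KV$ as in the temporally-invariant case above, can be sketched as follows (the values are illustrative):

```python
# Scalar analogue of the residual update of Eq. (kvnet_kalman2), under the
# text's assumptions of an exact measurement model and a fixed gain:
#   h_t = W h_{t-1} + KV (h_true - W h_{t-1}).
# KV = 1 jumps straight to the new state; KV = 0 keeps the prediction.
def kalman_style_update(h_prev, h_true, W, KV):
    prediction = W * h_prev
    residual = h_true - prediction
    return prediction + KV * residual

h = kalman_style_update(h_prev=1.0, h_true=3.0, W=1.0, KV=0.5)
print(h)  # 2.0 — halfway between the prediction (1.0) and the true state (3.0)
```

In the network, the learned K-Net plays the role of the fixed product $KV$, blending the warped prior DPV with the current measurement.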
\section{More Results}
\subsection{Complete metrics for Comparisons}
We show the
complete metrics for depth estimation comparisons
in Table~\ref{tab:result_7scenes}
and Table~\ref{tab:result_kitti}.
\begin{table*}[!t]
\centering
\caption{Comparison of depth estimation over the 7-Scenes dataset~\cite{shotton13data} with the metrics defined in~\cite{Eigen14depth}}
\begin{tabular}{ m{4em} m{4em} m{4.5em} m{4.5em} m{4em} m{4em} m{4em} m{4em} m{4em} }
\toprule
& $\sigma<1.25$
& $\sigma<1.25^2$
& $\sigma<1.25^3$
& abs. rel
& sq. rel
& rmse
& rmse log
& scale. inv \\
\midrule
DeMoN~\cite{Ummenhofer17DeMoN}
& 31.88
& 61.02
& 82.52
& 0.3888
& 0.4198
& 0.8549
& 0.4771
& 0.4473\\
DORN~\cite{Fu18DORN}
& 60.05
& 87.76
& 96.33
& 0.2000
& 0.1153
& 0.4591
& 0.2813
& 0.2207 \\
Ours
& \textbf{69.26}
& \textbf{91.77}
& \textbf{96.82}
& \textbf{0.1758}
& \textbf{0.1123}
& \textbf{0.4408}
& \textbf{0.2500}
& \textbf{0.1899}\\
\bottomrule
\end{tabular}
\label{tab:result_7scenes}
\end{table*}
\begin{table*}[!t]
\centering
\caption{Comparison of depth estimation over the KITTI dataset~\cite{Geiger12cvpr}.}
\begin{tabular}{m{4em} m{4em} m{4.5em} m{4.5em} m{4em} m{4em} m{4em} m{4em} m{4em} }
\toprule
& $\sigma<1.25$
& $\sigma<1.25^2$
& $\sigma<1.25^3$
& abs. rel
& sq. rel
& rmse
& rmse log
& scale. inv \\
\midrule
Eigen~\cite{Eigen14depth}
& 67.80
& 88.79
& 96.51
& 0.1904
& 1.263
& 5.114
& 0.2758
& 0.2628
\\
Mono~\cite{Godard17MonoDepth}
& 86.43
& 97.70
& \textbf{99.47}
& 0.1238
& 0.5023
& 2.8684
& 0.1644
& 0.1635
\\
DORN~\cite{Fu18DORN}
& 92.62
& \textbf{98.18}
& 99.35
& \textbf{0.0874}
& \textbf{0.4134}
& 3.1375
& 0.1337
& 0.1233 \\
Ours
& \textbf{93.15}
& 98.018
& 99.25
& 0.0998
& 0.4732
& \textbf{2.8294}
& \textbf{0.1280}
& \textbf{0.1070} \\
\bottomrule
\end{tabular}
\label{tab:result_kitti}
\end{table*}
\subsection{Results on KITTI without GPS or IMU}
In Table~\ref{tab:result_kitti_opt_pose},
we show the performance of our method on the KITTI dataset in the cases
where only the IMU measurements are available (denoted as 'GT R'),
and where neither IMU nor GPS is available (denoted as 'opt. pose').
\begin{table*}[!t]
\centering
\caption{Performance on KITTI dataset without GPS/IMU measurements}
\begin{tabular}{m{4em} m{4em} m{4.5em} m{4.5em} m{4em} m{4em} m{4em} m{4em} m{4em} }
\toprule
& $\sigma<1.25$
& $\sigma<1.25^2$
& $\sigma<1.25^3$
& abs. rel
& sq. rel
& rmse
& rmse log
& scale. inv \\
\midrule
GT R
& 89.34
& 98.30
& 99.64
& 0.1178
& 0.4490
& 3.2042
& 0.1514
& 0.1509
\\
opt. pose
& 87.78
& 97.22
& 99.10
& 0.1201
& 0.5763
& 3.5157
& 0.1672
& 0.1665 \\
\bottomrule
\end{tabular}
\label{tab:result_kitti_opt_pose}
\end{table*}
\section{Network structures}
In this section, we illustrate the network structures used in the pipeline.
\subsection{D-Net}
We show the structure of the D-Net in Table~\ref{tab:structure_DNet}.
In the paper, we set $D=64$.
\subsection{K-Net}
We show the structure of the K-Net in Table~\ref{tab:structure_KNet}.
In the paper, we set $D=64$.
\subsection{R-Net}
We show the structure of the R-Net in Table~\ref{tab:structure_RNet}.
In the paper, we set $D=64$.
\begin{table*}[!ht]
\centering
\caption{K-Net structure. The operator expand($\cdot$) repeats the image intensity in the depth dimension}
\begin{tabular}{C{4em} C{23em} C{9em} C{9em}}
\toprule
Name & Components & Input & Output dimension \\
\midrule
Input
& concat(cost\_volume, expand($I_{\text{ref}}$))
&
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ D $\times$ 4 \\
\hline
conv\_0 &
\begin{tabular}{c}
conv\_3d(3$\times$3, ch\_in=4, ch\_out=32), ReLU \\
conv\_3d(3$\times$3, ch\_in=32, ch\_out=32), ReLU
\end{tabular}
& Input
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ D $\times$ 32
\\ \hline
conv\_1 &
$\begin{bmatrix}
\text{conv\_3d(3 $ \times$3, ch\_in=32, ch\_out=32), ReLU} \\
\text{conv\_3d(3$\times$3, ch\_in=32, ch\_out=32) }
\end{bmatrix} $ $\times$ 4
& conv\_0
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ D $\times$ 32 \\ \hline
conv\_2 &
\begin{tabular}{c}
conv\_3d(3$\times$3, ch\_in=32, ch\_out=32), ReLU \\
conv\_3d(3$\times$3, ch\_in=32, ch\_out=1)
\end{tabular}
& conv\_1
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ D $\times$ 1 \\ \hline
Output
& Modified cost\_volume from the conv\_2 layer
&
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ D $\times$ 1 \\
\bottomrule
\end{tabular}
\label{tab:structure_KNet}
\end{table*}
\begin{table*}[!h]
\centering
\caption{R-Net structure}
\begin{tabular}{C{4em} C{23em} C{9em} C{9em}}
\toprule
Name & Components & Input & Output dimension \\
\midrule
Input
& cost\_volume from K-Net
&
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ D \\
\hline
conv\_0 &
\begin{tabular}{c}
conv\_2d(3$\times$3, ch\_in=64$+$D, ch\_out= 64$+$D), LeakyReLU \\
conv\_2d(3$\times$3, ch\_in=64$+$D, ch\_out= 64$+$D), LeakyReLU
\end{tabular}
& concat(Input, fusion in D-Net
)
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ (64$+$D)
\\ \hline
trans\_conv\_0 &
transpose\_conv(4$\times$4, ch\_in=64$+$D, ch\_out=D, stride=2), LeakyReLU
& conv\_0
& $\frac{1}{2}$H $\times$ $\frac{1}{2}$ W $\times$ D \\ \hline
conv\_1 &
\begin{tabular}{c}
conv\_2d(3$\times$3, ch\_in=32$+$D, ch\_out=32 $+$ D ), LeakyReLU \\
conv\_2d(3$\times$3, ch\_in=32$+$D, ch\_out=32 $+$ D),LeakyReLU
\end{tabular}
& concat(trans\_conv\_0, conv\_1 in D-Net)
& $\frac{1}{2}$H $\times$ $\frac{1}{2}$ W $\times$ (D$+$32) \\ \hline
trans\_conv\_1
& transpose\_conv(4$\times$4, ch\_in=32$+$D, ch\_out=D, stride=2 ), LeakyReLU
& conv\_1
& H $\times$ W $\times$ D \\ \hline
conv\_2
&
\begin{tabular}{c}
conv\_2d(3$\times$3, ch\_in=3$+$D, ch\_out=3$+$D ), LeakyReLU \\
conv\_2d(3$\times$3, ch\_in=3$+$D, ch\_out=D ), LeakyReLU \\
conv\_2d(3$\times$3, ch\_in= D, ch\_out=D )
\end{tabular}
& concat(trans\_conv\_1, $I_\text{ref}$)
& H $\times$ W $\times$ D \\ \hline
Output
& Upsampled and refined cost\_volume
&
& H $\times$ W $\times$ D \\
\bottomrule
\end{tabular}
\label{tab:structure_RNet}
\end{table*}
\newpage
\begin{table*}[!ht]
\centering
\caption{D-Net structure. The structure is taken from \cite{Chang18PSM}}
\begin{tabular}{C{4em} C{28em} C{4em} C{8em}}
\toprule
Name & Components & Input & Output dimension \\
\midrule
Input
& Input frame
& & H $\times$ W $\times$ 3 \\
\hline
\multicolumn{4}{c}{\textbf{CNN Layers}} \\
\hline
conv0\_1
& conv\_2d(3$\times$3, ch\_in=3, ch\_out=32, stride=2), ReLU
& Input
& $\frac{1}{2}$H $\times$ $\frac{1}{2}$ W $\times$ 32 \\
\hline
conv0\_2
& conv\_2d(3$\times$3, ch\_in=32, ch\_out=32 ), ReLU
& conv0\_1
& $\frac{1}{2}$H $\times$ $\frac{1}{2}$ W $\times$ 32\\
\hline
conv0\_3
& conv\_2d(3$\times$3, ch\_in=32, ch\_out=32), ReLU
& conv0\_2
& $\frac{1}{2}$H $\times$ $\frac{1}{2}$ W $\times$ 32\\
\hline
conv1
& $ \begin{bmatrix}
\text{conv\_2d(3$\times$3, ch\_in=32, ch\_out=32), ReLU} \\
\text{conv\_2d(3$\times$3, ch\_in=32, ch\_out=32)}
\end{bmatrix}$ $\times$ 3
& conv0\_2
& $\frac{1}{2}$H $\times$ $\frac{1}{2}$ W $\times$32 \\
\hline
conv1\_1
&
conv\_2d(3$\times$3, ch\_in=32, ch\_out=64, stride=2), ReLU
& conv1
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$64 \\
\hline
conv2
& $ \begin{bmatrix}
\text{conv\_2d(3$\times$3, ch\_in=64, ch\_out=64), ReLU} \\
\text{conv\_2d(3$\times$3, ch\_in=64, ch\_out=64)}
\end{bmatrix}$ $\times$ 15
& conv1\_1
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$64 \\
\hline
conv2\_1
&
conv\_2d(3$\times$3, ch\_in=64, ch\_out=128), ReLU
& conv2
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$128 \\
\hline
conv3
& $ \begin{bmatrix}
\text{conv\_2d(3$\times$3, ch\_in=128, ch\_out=128), ReLU} \\
\text{conv\_2d(3$\times$3, ch\_in=128, ch\_out=128)}
\end{bmatrix}$ $\times$ 2
& conv2\_1
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ 128 \\
\hline
conv4
& $ \begin{bmatrix}
\text{conv\_2d(3$\times$3, ch\_in=128, ch\_out=128, dila=2), ReLU} \\
\text{conv\_2d(3$\times$3, ch\_in=128, ch\_out=128,
dila=2)}
\end{bmatrix}$ $\times$ 3
& conv3
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ 128 \\
\hline
\multicolumn{4}{c}{\textbf{Spatial Pyramid Layers}} \\
\hline
branch1
&
\begin{tabular}{c}
avg\_pool(64$\times$64,stride=64) \\
conv\_2d(1$\times$1, ch\_in=128, ch\_out=32), ReLU \\
bilinear interpolation
\end{tabular}
& conv4
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ 32 \\
\hline
branch2
&
\begin{tabular}{c}
avg\_pool(32 $\times$ 32,stride= 32) \\
conv\_2d(1$\times$1, ch\_in=128, ch\_out=32), ReLU \\
bilinear interpolation
\end{tabular}
& conv4
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ 32 \\
\hline
branch3
&
\begin{tabular}{c}
avg\_pool(16 $\times$ 16,stride= 16) \\
conv\_2d(1$\times$1, ch\_in=128, ch\_out=32), ReLU \\
bilinear interpolation
\end{tabular}
& conv4
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ 32 \\
\hline
branch4
&
\begin{tabular}{c}
avg\_pool(8 $\times$ 8,stride= 8) \\
conv\_2d(1$\times$1, ch\_in=128, ch\_out=32), ReLU \\
bilinear interpolation
\end{tabular}
& conv4
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ 32 \\
\hline
concat
& concat(branch1, branch2, branch3, branch4, conv2, conv4)
&
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ 320
\\
\hline
fusion
&
\begin{tabular}{c}
conv\_2d(3$\times$3, ch\_in=320, ch\_out=128), ReLU \\
conv\_2d(1$\times$1, ch\_in=128, ch\_out=64), ReLU
\end{tabular}
& concat
& $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ 64
\\
\hline
Output
& The extracted image feature from the fusion layer
& & $\frac{1}{4}$H $\times$ $\frac{1}{4}$ W $\times$ 64 \\
\bottomrule
\end{tabular}
\label{tab:structure_DNet}
\end{table*}
\end{appendices}
\section{Introduction}
Detection of gravitational waves (GWs) has the potential to be a foundation for future progress in physics, including general relativity, nuclear physics, cosmology, and astrophysics.~\cite{PhysRevLett.116.061102} In particular, calibrated gravitational waveforms with low statistical and systematic errors are expected to provide significant information for new physics.
Advanced LIGO~\cite{0264-9381-32-7-074001} and Advanced Virgo~\cite{0264-9381-32-2-024001} have measured gravitational waves that are consistent with simulations from numerical relativity~\cite{PhysRevLett.116.061102}. In addition, international joint observation with Advanced LIGO, Advanced Virgo, KAGRA,~\cite{0264-9381-29-12-124007, PhysRevD.88.043007, 10.1093/ptep/ptaa125} and LIGO-India is going to provide essential information, such as the masses, spins, localization, redshift, and polarizations of gravitational wave sources. KAGRA is a 3-km Large-Scale Cryogenic Gravitational Wave Telescope located at Kamioka in Gifu prefecture, Japan.~\cite{0264-9381-29-12-124007, PhysRevD.88.043007, 10.1093/ptep/ptaa125} KAGRA employs two unique approaches: an underground site and cryogenic mirrors. The stable underground environment and the cryogenic mirrors provide low seismic noise and low thermal noise, respectively.
GWs cause differential variations of the arm length and generate power modulations in the detector readout. The power fluctuations measured by a photodetector serve both as the GW readout signal and as an error signal for controlling the differential arm length. For stable operation of the instruments, feedback control of the differential arm length is required.
This control is achieved by taking the digitized readout signal, applying a set of digital filters, and sending the opposite-phase filtered signal as a control signal to the test mass actuators. Therefore, estimating the equivalent GW strain sensed by the interferometer requires characterization of, and correction for, the feedback control loop through calibration.~\cite{PhysRevD.88.043007, 0264-9381-34-22-225001, 10.1093/ptep/ptab018, Sun_2020} By modeling the differential arm length with parameters from calibration, the calibrated gravitational wave strain is reconstructed~\cite{Sun_2020}.
Calibration uncertainties are directly translated to the errors in the absolute GW signal. The primary impact of calibration uncertainties on the parameters appears to be in the determination of the distance to the source.
Furthermore, since the estimation of the population of GW sources depends on the third power of the source distance, calibration uncertainties are also translated into uncertainties in the population estimation.~\cite{Abbott:2017xzu, Schutz_1986, Feeney:2018mkj}
Calibration uncertainties also affect coordinate reconstruction, particularly when only up to three detectors in the worldwide GW detector network can detect the GW signal. This situation often occurs because the sensitivity of the interferometer has a directional dependence. The effect of calibration uncertainties is visible in high signal-to-noise-ratio events, where the angular resolution is less affected by the detector noise.~\cite{Abbott2016,0264-9381-34-22-225001, Estevez_2018} The photon calibrator (Pcal) is used in LIGO, Virgo, and KAGRA to calibrate the interferometer response.
The Pcal can modulate the mirror surface with photon radiation pressure. In the joint observing run 3 (April 2020) of KAGRA and GEO600 (O3GK),~\cite{10.1093/ptep/ptaa120, 10.1093/ptep/ptaa125, 10.1093/ptep/ptab018, 10.1093/ptep/ptx180} KAGRA employed the photon calibrator as its primary calibrator. A summary of the calibration in O3GK is given in the summary paper.~\cite{10.1093/ptep/ptab018} The initial characterization of the photon calibrator instruments is summarized elsewhere.~\cite{bin-hua, cory} The Pcal modulates the mirror displacement by injecting a power-stabilized laser with intensity modulation. The main applications of the photon calibrator are (i) calibration of the interferometer response, (ii) monitoring of the time dependency of the interferometer response, and (iii) hardware injection to verify the analysis pipeline~\cite{PhysRevD.95.062002,cory}. The displacement of the mirror can be written as:
\begin{equation}
\label{eq1}
X = \frac{2P \cos \theta }{c} S_{tot} (f, \vec{a}, \vec{b}),
\end{equation}
where $P$ is the absolute laser power, $\theta $ is the incident angle of the Pcal laser, and $c $ is the speed of light~\cite{doi:10.1063/1.4967303,0264-9381-27-8-084024,0264-9381-26-24-245011}. $S_{tot} (f, \vec{a}, \vec{b})$ is the force-to-displacement transfer function of the suspended pendulum. The complete form can be defined as
\begin{equation}
\label{eq2}
S_{tot} (f, \vec{a}, \vec{b})=S_{Len} (f)+S_{Rot} (f, \vec{a}, \vec{b})+S_{Ela} (f, \vec{a}, \vec{b}),
\end{equation}
where $S_{Len} (f)$, $S_{Rot} (f, \vec{a}, \vec{b})$ and $S_{Ela} (f, \vec{a}, \vec{b})$ are the transfer functions of the pendulum for displacement, rotation, and elastic deformation of the mirror, respectively. In our observation frequency band, we assume that the pendulum transfer function is that of a free mass. The transfer functions are obtained from the following equations:
\begin{equation}
\label{eq3}
S_{Len}(f)=1/(M\omega^2)
\end{equation}
and
\begin{equation}
S_{Rot}(f)=\frac{\vec{a} \cdot \vec{b}}{I \omega^2} \label{eq4}
\end{equation}
where $M $ and $I = Mh^{2} /12 + Mr^{2}/4$ are the mass and moment of inertia of the test mass, $h $ and $r $ are the thickness and radius of the test mass, $\omega $
is the angular frequency of the laser power modulation, and $\vec{a}$ and $\vec{b}$ are the vectors pointing to the centers of force of the Pcal beams and the main interferometer beam, respectively~\cite{doi:10.1063/1.4967303,0264-9381-27-8-084024,0264-9381-26-24-245011}. Figure~\ref{fig0} shows a schematic view of the KAGRA interferometer and the definition of the beam vectors. The definitions of the beam position vectors are shown in Fig.~\ref{fig1}. The transfer function for elastic deformation is given below.~\cite{Hild_2007}
\begin{equation}
S_{Ela}(f, \vec{a}, \vec{b}) =\frac{\int d\xi d\eta G(\xi, \eta, \vec{b}) D(\xi, \eta, \vec{a};f)} {\int d \xi d \eta G(\xi, \eta, \vec{b})},
\end{equation}
where $D(\xi, \eta, \vec{a};f)$ is the deformation of the test mass surface normalized by the injected power, $(\xi, \eta)$ are surface coordinate parameters, and $G(\xi, \eta, \vec{b})$ is the Gaussian beam profile of the main laser beam.
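As a numerical sanity check, the rigid-body terms $S_{Len}$ and $S_{Rot}$ above can be evaluated with the KAGRA test-mass parameters from Table~I. The beam-offset values used in the call below are illustrative assumptions, not measured offsets:

```python
import math

# Nominal KAGRA end test mass parameters (Table I).
M = 22.95          # mirror mass [kg]
h = 0.150          # mirror thickness [m]
r = 0.110          # mirror radius [m] (220 mm diameter)
I = M * h**2 / 12 + M * r**2 / 4   # moment of inertia [kg m^2]

def s_len(f):
    """Free-mass force-to-displacement response |S_Len| = 1/(M w^2) [m/N]."""
    w = 2 * math.pi * f
    return 1.0 / (M * w**2)

def s_rot(f, a, b):
    """Rotation term S_Rot = (a . b)/(I w^2) for beam-offset vectors a, b [m]."""
    w = 2 * math.pi * f
    dot = a[0] * b[0] + a[1] * b[1]
    return dot / (I * w**2)

f = 100.0                                    # [Hz]
print(s_len(f))                              # ~1.1e-7 m/N at 100 Hz
print(s_rot(f, (0.0, 0.076), (0.0, 0.005)))  # one off-center beam, assumed 5 mm main-beam offset
```

At 100~Hz the single-beam rotation term is roughly an order of magnitude below the displacement term, which is why beam-position control matters mainly at the few-percent level.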
\begin{figure}[h!t]
\centering
\includegraphics[width=\linewidth]{fig0.png}
\caption{Schematic view of KAGRA interferometer. Photon calibrators are placed at X-end and Y-end stations. Top right view shows the schematic view of photon calibrator and end test mass.}
\label{fig0}
\end{figure}
The response to the excitation forces can be represented by the appropriate linear combination of normal modes. In particular, the butterfly mode and the drumhead mode are the main contributions to the elastic deformation. However, these symmetric deformation effects can be mitigated by applying at least two beams that are diametrically opposed and sufficiently displaced from the center of the test mass. This arrangement cancels $S_{Rot}$ and $S_{Ela}$, leaving only $S_{Len}$, which is the term needed for calibration. This scheme was tested and implemented in LIGO with the Advanced LIGO photon calibrator.~\cite{doi:10.1063/1.4967303}
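Concretely, since each beam's rotation term scales with $\vec{a}_i \cdot \vec{b}$, splitting the power between two diametrically opposed beams makes the net rotation term proportional to $(\vec{a}_1+\vec{a}_2)\cdot\vec{b}$, which vanishes for any main-beam offset $\vec{b}$. A minimal check (the main-beam offset below is a hypothetical value):

```python
# Two diametrically opposed Pcal beams: a1 = (0, +76 mm), a2 = (0, -76 mm).
a1 = (0.0, 0.076)
a2 = (0.0, -0.076)
b = (0.003, -0.002)   # hypothetical main-beam offset [m]

net = tuple(x + y for x, y in zip(a1, a2))   # a1 + a2 = (0, 0)
dot = net[0] * b[0] + net[1] * b[1]
print(dot)  # 0.0 -> the net S_Rot contribution cancels, leaving only S_Len
```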
We monitored the time dependency of the transfer function using the calibration pipelines~\cite{10.1093/ptep/ptab018}. This method is also available for monitoring the time dependency of the response function, such as the time dependencies of the actuation response, optical gain, and cavity pole frequency. Using this information, we can estimate the uncertainty of the gravitational wave strain, because the relative response function is proportional to the relative gravitational wave strain.
The displacement of the mirror is proportional to absolute laser power.
Therefore, we needed to calibrate the absolute Pcal laser power. LIGO's power standard is calibrated by NIST every year, and the relative uncertainty of the power standard is 0.32~\%. In addition, NIST cross-checks the absolute laser power response with standards institutes in several countries, with an absolute uncertainty of about 3~\%.~\cite{EUROMET, Bhattacharjee_2020} The Pcal power-stabilized laser also provided information for a consistency check of the response function. By using the Pcal read-back signal as the response of the receiver module output, we were able to estimate the expected response of the end test mass. The definition of the Pcal read-back signal is given in Sec.~\ref{model}. We can then compare the expected $h(t)$ from the read-back signal with the estimated $h(t)$ from the reconstruction pipeline of KAGRA.
Recently, the gravity field calibrator and Newtonian Calibrator methods have been proposed for absolute calibration.~\cite{PhysRevD.98.022005, Estevez_2021, Estevez_2018} The gravity field calibrator has a rotating disk with quadrupole mass distribution, which changes a gravity gradient around the end test mass. The demonstration of this method will be carried out in a future gravitational wave experiment.
In this paper, we summarize the photon calibrator instruments of KAGRA, together with measurements of the relative intensity noise and relative harmonics noise. In particular, the characterization of a Pcal with a 20~W laser is the first such demonstration in this field. In Section 2, we explain the specifications and design of the system. In Section 3, we discuss the measurement results of the relative power noise and the harmonics noise. In Section 4, we estimate the systematic errors.
\section{INSTRUMENTS}
The KAGRA photon calibrators were placed at the X and Y end stations, 3~km from the beam splitter of the interferometer, as shown in Fig.~\ref{fig1}. We used the X-end system for calibration of the interferometer and the Y-end system for hardware injection during the observation~\cite{PhysRevD.95.062002}. The systems were installed 34.957~m from the end test mass (ETM) as shown in Fig.~\ref{fig1}. We injected two laser beams, dubbed Path-1 and Path-2, onto the ETM. Figure~\ref{fig1} shows the layout of the KAGRA photon calibrator. The photon calibrator consisted of a transmitter module (Tx module), a receiver module (Rx module), a periscope, and a telephoto camera module (TCam module); they provide the intensity-modulated laser beam, monitor the intensity, change the height of the optical axis, and observe the beam spot position on the mirror surface, respectively. We employed a CW fiber laser whose maximum power and wavelength were 20~W and 1047~nm, respectively. To avoid coupling with the main interferometer laser, we selected a 1047~nm laser for the Pcal. To stabilize and modulate the laser power, we also installed an optical follower servo (OFS) in the Tx module. We split the beams in the Tx module to minimize the elastic deformation caused by pushing the ETM, as shown in Fig.~\ref{fig2}. Each beam position was measured with a telephoto camera (TCam). The specification summary of the KAGRA photon calibrator is shown in Table~\ref{pcal_over}.
\begin{figure}[h!t]
\centering
\includegraphics[width=\linewidth]{fig1.png}
\caption{Schematic view of the KAGRA photon calibrator. The upper right figure shows the definition of the beam positions. $\vec{a}_1$ and $\vec{a}_2$ are the beam positions of Path-1 and Path-2, respectively. $\vec{b}$ is the beam position of the main beam. The origin of these vectors is defined at the center of the mirror surface.}
\label{fig1}
\end{figure}
\begin{table}[h!]
\centering
\caption{Specification summary of the KAGRA photon calibrator. The incident angle of the Pcal is defined by the interval of the periscope mirrors and the distance between the Pcal and the end test mass. The beam waist of the input laser is measured on the Tx module. The definition of the beam position corresponds to $\vec{a}=\vec{a_1}+\vec{a_2}$ as shown in Eq.~\ref{eq2} and Fig.~\ref{fig1}. The origin of the vectors is defined at the center of the mirror surface.}
\label{pcal_over}
\begin{tabular}{cc}
\hline
Mirror material & Sapphire \\
Mirror mass & 22.95 $\pm$ 0.01~kg \\
Mirror diameter & 220~mm \\
Mirror thickness & 150~mm \\
Distance from Pcal to test mass & 34.957 $\pm$ 0.01~m \\
Maximum laser power & 20~W \\
Pcal laser wavelength & 1047~nm \\
Incident angle & 0.839 $\pm$ 0.023~deg \\
Beam waist of input laser & 265.57 $\pm$ 0.04~$\mu$m\\
Beam position $\vec{a_1}$ (Top) & (0~mm, 76~mm) \\
Beam position $\vec{a_2}$ (bottom) & (0~mm, $-$76~mm)\\
\hline
\end{tabular}
\end{table}
\begin{figure}[h!t]
\centering
\includegraphics[width=\linewidth]{fig2.png}
\caption{Optical layout of the transmitter module. We installed the CW laser and its stabilization system on the optical table. The laser beam is split by the beam splitter, and the split beams are modulated with AOMs. The schematic view of the feedback loop is shown in Fig.~\ref{fig6}.}
\label{fig2}
\end{figure}
\subsection{Transmitter module}
The transmitter module was used as an input optical system to modulate and stabilize the laser power. The optical components of the transmitter module were mounted on a 900~mm $\times $ 900~mm optical table as shown in Fig.~\ref{fig2}. To isolate the system from atmospheric fluctuations, we covered the optics with aluminum plates. The plates were coated with Tobika black to absorb scattered light. We mounted a 20~W Yb CW laser manufactured by Keopsys, whose typical beam waist and beam quality factor
$M^{2}$ are 265.6 $\pm $ 0.04 $\mu $m and 1.06 $\pm $ 0.01, respectively. The model number of the laser is CYFL-TERA-20-LP-1047-AM1-RGO-OM1-T305-C1. We split the laser beam using a beam splitter and then modulated each laser beam with an ISOMET acousto-optic modulator (AOM). The part number of the AOM is M1080-T80L-M. Using the AOMs, we were able to control the laser power from 0~W to 10~W for each path. We adjusted the offset to half of the maximum response of the AOM. The powers of the beams were monitored using photodiodes. We employed InGaAs PIN photodiodes with a diameter of 3.0~mm, made by Excelitas with part number C30665GH. We mounted five photodiodes with trans-impedance amplifier units in the transmitter module, referred to as OFSPD1, OFSPD2, TxPD1, TxPD2, and LPD, as shown in Fig.~\ref{fig2}. OFSPD1 and OFSPD2 were used to obtain the feedback signals connected to the AOMs for stabilizing the laser power. We attenuated the laser power sensed by OFSPD1 and OFSPD2 to 1~mW. The shot noise of the sensor current can be described as $\sqrt{2P\eta e}$, where $P $ is the detected power, $\eta $ is the responsivity of the photodiode, and $e $ is the elementary electric charge. The detected signals at OFSPD1 and OFSPD2 were sent to the optical follower servo. LPD is a photodetector for monitoring the laser noise out of loop. TxPD1 and TxPD2 were used to monitor the absolute power, and their responses were calibrated against the NIST power standard.
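The shot-noise formula above sets the in-loop sensing floor. A short estimate of the corresponding relative power noise, assuming a responsivity of $\eta = 0.7$~A/W for the InGaAs photodiode at 1047~nm (an illustrative value, not quoted in the text):

```python
import math

e = 1.602176634e-19   # elementary charge [C]
eta = 0.7             # assumed InGaAs responsivity at 1047 nm [A/W]
P = 1e-3              # power attenuated onto each OFSPD [W]

i_pd = eta * P                          # DC photocurrent [A]
shot = math.sqrt(2 * e * i_pd)          # current shot noise, sqrt(2 P eta e) [A/sqrt(Hz)]
rpn_shot = shot / i_pd                  # relative power noise floor [1/sqrt(Hz)]
rpn_db = 20 * math.log10(rpn_shot)
print(rpn_db)                           # roughly -153 dB/sqrt(Hz)
```

This floor lies well below the unstabilized laser noise of $-120$~dB/$\sqrt{\mathrm{Hz}}$ discussed in Section~3, so shot noise does not limit the stabilization loop.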
The beams were collimated by three lenses to reduce spherical aberration. The beam radius on the ETM was 3.5~mm. We installed pico-motors to control the beam positions on the mirror from a remote site.
\subsection{Periscope}
In order to control the beam position on the mirror surface, we installed a periscope in the vacuum chamber. We mounted a 100~mm fused silica viewport on the vacuum chamber. The details of the periscope are as follows. The structure holding the mirrors and a baffle to suppress scattered light were mounted on the optical table. Twelve mirrors, each 3~inches in diameter, were mounted in the chamber. The reflectance of each mirror at 1047~nm is 99.95~\%. The mirrors were manufactured by Opto-Sigma. We set the incident angle of the beam at 0.839~degrees and also mounted the periscope for the telephoto camera on the structure.
\begin{figure}[h!t]
\centering
\includegraphics[width=\linewidth]{fig4.png}
\caption{Optical layout of receiver module. The integrating sphere and QPDs are mounted on the optical table.}
\label{fig4}
\end{figure}
\subsection{Receiver module}
The purpose of the receiver module was to monitor the reflected absolute power and the beam positions, as shown in Fig.~\ref{fig4}. The optical components of the receiver module were mounted on a 600~mm $\times $ 600~mm optical table. A 6~inch integrating sphere was mounted with a photodetector of the same design as the TxPDs. The method for absolute power calibration is described in Sec.~\ref{model}. We mounted a beam sampler and quadrant photodetectors (QPDs) (Thorlabs PDQ80A) to monitor the drift of the beam positions. We operated the QPDs as tilt-sensing optical levers, with a sensitivity of 10~nrad each. With the pico-motors on the transmitter module and these QPDs, we were able to control and monitor the beam positions.
\subsection{Telephoto camera}
We measured the beam positions using pictures. Figure~\ref{fig5} shows the injected beams on the ETMX. We injected the lasers 76~mm above and below the center position. The beam positions induce rotation and elastic deformation effects, so to cancel these effects we needed to inject two laser beams. A drift of the beam positions on the ETM surface translates into a systematic error of the rotation and elastic deformation terms. To measure the beam positions, we monitored the mirror surface with a camera placed 34.957~m from the ETM. We therefore employed a combination of a telescope and a high-resolution camera, a so-called telephoto camera (TCam). We tuned the focal point onto the mirror surface using a Moonlight focuser. With the high-resolution camera, we could measure the beam position with 1~mm accuracy. A Nikon D810 digital camera met our requirements. The D810 has a 36-megapixel resolution with a 35.9 $\times $ 24.0~mm CMOS sensor. One pixel in the picture corresponds to 100 $\mu $m. We removed the IR-cut filter because a commercial camera is otherwise not sensitive to the laser wavelength (1047~nm). We employed a Maksutov-Cassegrain type telescope to observe the ETM surface.
The diameter of the primary mirror was 127~mm, and its focal length and focal ratio were 1500~mm and $f /12$, respectively. We mounted an LED illuminator on the cryogenic stage near the ETM, which was controlled remotely. With the combination of the LED and the telescope, we can clearly monitor the mirror surface from about 35~m away. By placing the camera far away, we minimize the scattering effect owing to the small solid angle subtended as seen from the ETM.
\begin{figure}[h!t]
\centering
\includegraphics[width=\linewidth]{fig3.png}
\caption{Transfer functions of Path-1 and Path-2. For the Pcal application, we require a unity closed-loop response with 0~degree phase difference in the observation band to minimize the frequency dependence of the calibration signal introduced by the control.}
\label{fig3}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{fig5.png}
\caption{Picture of the end test mass. The upper and lower dashed circles show the Pcal laser beams. These positions are monitored by the telephoto camera.}
\label{fig5}
\end{figure}
\section{Measurement}
The noise of the Pcal laser propagates directly to displacement noise through the suspension response. The typical relative power noise of the laser was $-$120~dB/$\sqrt{{\rm Hz}}$ for each path. This noise would move the mirror by more than the KAGRA goal sensitivity allows.
To reduce the laser noise, we needed to stabilize the laser power with servo filters, referred to as the optical follower servo.
Figure~\ref{fig3} shows the transfer function of the optical follower servo, which was set up to have two poles at 3~kHz and one at 30~kHz. The closed-loop transfer function is defined as $G/(1+G)$, where $G$ is the open-loop transfer function.
The measured unity-gain frequency was set to 40~kHz with a 50~degree phase margin.
The diagram of the feedback loop is shown in Fig.~\ref{fig6}. We separated the beams into two paths to stabilize them independently. The detected signal at each OFSPD was sent to the servo filter. To maximize the performance of the feedback loop, we had to characterize the relative power noise (RPN) and the harmonics noise (HN). In the following subsections, we explain the measured noise sources relative to the goal sensitivity of KAGRA, where the target frequency band for scientific study was between 30 and 1500~Hz.
\begin{figure}[h!t]
\centering
\includegraphics[width=\linewidth]{fig6.png}
\caption{Schematic view of the laser power stabilization system. Outloop (Inloop) PD 1 and 2 correspond to TxPD (OFSPD) 1 and 2 in Fig.~\ref{fig2}, respectively.}
\label{fig6}
\end{figure}
\begin{figure}[b!]
\centering
\includegraphics[width=\linewidth]{fig7.png}
\caption{Measured relative power noises and design sensitivity of the photon calibrator. The green curves represent the non-stabilized laser. The dashed line is the requirement from the design sensitivity of KAGRA~\cite{10.1093/ptep/ptaa125}.}
\label{fig7}
\end{figure}
\subsection{Relative power noise}
The optical follower servo suppressed the relative power noise of the Pcal laser. The amplitude spectral density of each path, $G_{RPN}$, must be suppressed as follows:
\begin{equation}
G_{RPN} \leq \frac{Lh(f) Mc\omega^2}{10 \cos \theta P_i},
\end{equation}
where $i$ denotes Path-1 or Path-2, $h (f)$ is the target sensitivity of KAGRA in observation run 3 (O3), the factor $10$ is a safety margin, and $L =$ 3000~m is the arm length of the interferometer. The open-loop transfer function is shown in Fig.~\ref{fig3}. Figure~\ref{fig7} shows the measured RPN of Path-1 and Path-2. The measured out-of-loop noise met our requirements. The main noise below 100~Hz corresponds to the relative power noise of the laser as monitored by the LPD. The noise floor above 100~Hz is limited by the noise of a 16-bit DAC (General Standards Corporation, part number PCIe-16AO16-16-FO-DF). The noise floor at low frequency is limited by the noise of the electrical circuit for the offset control of the AOM.
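The requirement formula above can be evaluated directly. The strain value used below is an assumed placeholder level, not the actual O3 sensitivity curve:

```python
import math

def rpn_requirement(f, h, P, L=3000.0, M=22.95, theta_deg=0.839, margin=10.0):
    """Required RPN upper bound per path: L h(f) M c w^2 / (margin cos(theta) P)."""
    c = 2.998e8                     # speed of light [m/s]
    w = 2 * math.pi * f
    return L * h * M * c * w**2 / (margin * math.cos(math.radians(theta_deg)) * P)

# h = 1e-23 at 100 Hz is an illustrative assumption; P = 5 W per path.
req = rpn_requirement(f=100.0, h=1e-23, P=5.0)
print(req)   # ~1.6e-6 /sqrt(Hz), i.e. about -116 dB/sqrt(Hz)
```

Because the bound scales as $\omega^2$, the requirement tightens rapidly toward low frequencies, which is where the servo gain is most needed.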
\subsection{Harmonics noise}
During the observation, we continuously injected sine-wave excitations as calibration lines. Nonlinearity of the Pcal modulation induces higher harmonics. Their amplitudes are required to be less than 10~\% of the displacement sensitivity. However, the amplitude of the displacement caused by the Pcal higher harmonics decreases with the square of the frequency due to the suspension transfer function. Therefore, the ratio of the amplitude of the $n$-th order harmonic to that of the Pcal modulation is required to be less than
\begin{equation}
\frac{n^2}{1000}\frac{h(nf)}{h(f)}. \label{eq9}
\end{equation}
We assumed that the signal-to-noise ratio (SNR) of the injected Pcal modulation at frequency $f$ is 100 and that the safety margin is 10; their product is 1000. We had to suppress the higher harmonics between 20 and 750~Hz for the $2f $ signal and between 30~Hz and 500~Hz for the $3f $ signal because of the observation band. Injecting a sine wave with the optical follower servo, we accumulated data for 2 hours to suppress the noise floor. Figure~\ref{fig8} shows the measured relative modulation harmonics with the requirement curve. The optical follower servo was required to suppress both the power modulation at the harmonics of the Pcal fundamental frequencies and the inherent laser power fluctuations.
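The harmonic bound of Eq.~\ref{eq9} is simple to evaluate. Taking the sensitivity as locally flat, $h(nf) \approx h(f)$ (an assumption for illustration only), the sketch below gives the $2f$ and $3f$ limits:

```python
def harmonic_requirement(n, h_nf, h_f, snr=100.0, margin=10.0):
    """Upper bound on the n-th harmonic amplitude relative to the Pcal line:
    n^2 / (SNR * margin) * h(nf)/h(f)."""
    return n**2 / (snr * margin) * (h_nf / h_f)

print(harmonic_requirement(2, 1.0, 1.0))  # 0.4 % of the fundamental line
print(harmonic_requirement(3, 1.0, 1.0))  # 0.9 % of the fundamental line
```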
\begin{figure}[h!t]
\centering
\includegraphics[width=\linewidth]{fig8.png}
\caption{The measured relative harmonics noises. The left and right figures show the relative harmonics of $2f$ ($n= 2$) and $3f$ ($n= 3$), respectively. The circle and triangle points correspond to Path-1 and Path-2. The dashed line is the requirement from the design sensitivity of KAGRA~\cite{10.1093/ptep/ptaa125} and Eq.~\ref{eq9}.}
\label{fig8}
\end{figure}
\section{Model} \label{model}
The systematic errors of the photon calibrator are caused by the uncertainties of the measurements. Following a previous study~\cite{doi:10.1063/1.4967303}, we included the efficiency of the absolute power calibration, the rotation, and the elastic deformation in our model, as shown in Eq.~\ref{eq1}. To fix the model, we needed to measure the absolute laser power and the beam positions; we discuss the estimation of these effects in this section.
The photon calibrator can provide information about the absolute displacement through the absolute measurement of the laser power. For calibration, the KAGRA Pcal integrating spheres at the X and Y ends are compared with independent detectors calibrated by the NIST laser power standard.~\cite{0264-9381-26-24-245011} To compare the detector responses of the photon calibrator and the NIST standard, we performed the following four measurements:
\begin{itemize}
\item Absolute calibration of laser power standard (Gold Standard which belongs to LIGO) with NIST~\cite{0264-9381-26-24-245011}: $\rho_{GS}$ [W/V].
\item Response ratio measurement at LIGO Hanford Observatory (LHO) between Working Standard KAGRA (4 inches) and Gold Standard (4 inches) : $\alpha_{\rm WSK}$.
\item Response ratio measurement at University of Toyama between a 5.3-inch integrating sphere and Working Standard KAGRA (4~inches): $\alpha_{{Toyama}}$.
\item Response ratio measurement of Pcal transmitter module and receiver module with a 5.3-inch integrating sphere: $\mathcal E $.
\end{itemize}
The excitation signal with the photon calibrator, $x^{(Pcal)}$, is defined to measure the transfer function of differential arm length:
\begin{equation}
x^{(Pcal)}=\frac{x_{Tx1}+x_{Tx2}+x_{Rx1}+x_{Rx2}}{2},
\label{eq:xpc}
\end{equation}
where $x^{(Pcal)}$ is a combination of the components of the displacement vector $\vec{x}=(x_{Tx1}, x_{Tx2}, x_{Rx1}, x_{Rx2}) $. $x^{(Pcal)}$ is used as the Pcal read-back signal, because the real-time displacement response from the Pcal can be defined by the detector response alone. Using this value, we can calibrate the interferometer response at each frequency.
Each displacement is a function of the voltage at each integrating sphere, $\vec{V}=(V_{Tx1}, V_{Tx2}, V_{Rx1}, V_{Rx2}) $. Usually, we measure the sum of the Path-1 and Path-2 responses at the RxPD. To determine the ratio of the powers at the RxPD, we define the separation efficiencies $e_{1}$ and $e_{2}$. To estimate these factors, we measure the powers of Path-1 and Path-2 independently. Each term can then be defined as $V_{Rx1} = V_{Rx}e_{1}$ and $V_{Rx2} = V_{Rx}e_{2}$. The displacement vector is given by
\begin{equation}
\vec{x}= \mathcal{S} \Gamma \vec{V},
\end{equation}
where $\Gamma $ is a coefficient matrix that converts voltage to force, all of which we describe in detail later. $\mathcal{S}$ is the transfer function matrix from the force to displacement defined as
\begin{equation}
\mathcal{S}=
\begin{pmatrix}
S_{tot}(f, \vec{a_1}, \vec{b})&0 & 0 & 0 \\
0 & S_{tot}(f, \vec{a_2}, \vec{b}) & 0 & 0 \\
0 & 0 & S_{tot}(f, \vec{a_1}, \vec{b}) & 0 \\
0 & 0 & 0 & S_{tot}(f, \vec{a_2}, \vec{b})
\end{pmatrix}.
\end{equation}
The term $S_{Ela}(f, \vec{a}, \vec{b})$ reflects the fact that the calibration forces applied by a centered photon calibrator beam produce local elastic deformations that significantly alter the sensed displacement of the interferometer. Even stiff materials like fused silica or sapphire experience small deformations when photon calibrator forces are applied. We performed modal analysis and simulation of the elastic deformation with the finite element analysis (FEA) software packages ANSYS and COMSOL. The real structure of the KAGRA ETM is not a perfect cylinder: it has two flat cuts on both sides with two ears used to hang the mirror~\cite{Ushiba_2021}. We performed the FEA on the realistic KAGRA ETM structure based on the CAD drawing. The two laser beams are reflected by the mirror, with the two beam spots above and below the center of the mirror surface. When the distance between each spot and the center is 76~mm, the elastic deformation effect is minimized.
Figure~\ref{fig9} shows the ratio of the displacement including elastic mirror motion to that of a rigid body as a function of frequency for optimally positioned beams on the KAGRA test mass, as well as for $\pm $ 1~mm and $\pm $ 3~mm offsets from the optimal positions. The results of ANSYS and COMSOL are consistent. In O3GK, the systematic errors of the $S_{Ela}(f, \vec{a}, \vec{b})$ and $S_{Rot} (f, \vec{a}, \vec{b})$ terms were negligible, and we assumed only $S_{Len} (f)$ in the O3GK analysis. However, we need to include these effects in O4 due to the improvement of the calibration uncertainty.
\begin{figure}[h!t]
\centering
\includegraphics[width=\linewidth]{fig9.png}
\caption{The ratio between the total sensed motion and rigid body motion as a function of frequency for optimally positioned beams and $\pm $ 1~mm and $\pm $ 3~mm offsets for $\vec{a_1}$ and $\vec{a_2}$. The cross check of the simulation is also done with COMSOL and shown for the optimum position and 1~mm offsets with open cross symbols.}
\label{fig9}
\end{figure}
$\Gamma $ is the $4 \times 4$ force coefficient matrix, i.e., the transfer matrix from the output voltages of the photodetectors (integrating spheres) to the forces applied by the photons.
The components of the force coefficient matrix have units of [N/V]. Each force coefficient can be written as
\begin{equation}
\Gamma= \frac{2\cos \theta}{c} \rho
\end{equation}
where $\theta $ is the incident angle of the photon calibrator as listed in Table~\ref{pcal_over}, and $c $ is the speed of light. The incident angle is determined from CAD drawings, and the variance of $\theta $ is estimated from the accuracy of the vacuum chamber; the 1-sigma value is 0.839 $\pm $ 0.023~degrees. $\rho $ is the conversion matrix from the photodetector (integrating sphere) outputs to the incident Pcal power,
\begin{equation}
\rho=\rho_{\rm GS} \alpha_{{WSK}} \alpha_{Toyama} \mathcal{E}\mathcal{D},
\end{equation}
where $\mathcal{D}$ is the diagonal detector matrix, and each diagonal element corresponds to the detector response calibrated with the 5.3~inch integrating sphere. In $\rho$, the factors $\alpha_{{WSK}} \alpha_{Toyama} \mathcal{E}\mathcal{D}$ are dimensionless; thus, the dimension of $\rho$ is [W/V]. If optical crosstalk were included in the detector matrix, off-diagonal elements would have to be determined as well. In the O3GK observation, the crosstalk of each detector was negligible; thus, we assume $\mathcal{D} = \diag(1,1,1,1)$.
The Gold Standard (GS) is a power sensor system comprising a 4~inch integrating sphere and an InGaAs photodetector. NIST provides a summary of the measurement of the GS response to a 1047~nm laser. The estimated value is $\rho_{\rm GS} =$ $-$8.0985 $\pm $ 0.0259~[W/V]~\cite{0264-9381-26-24-245011, Bhattacharjee_2020}. The calibration of the GS is transferred to a 4~inch integrating sphere
with an InGaAs detector called Working Standard KAGRA (WSK). The cross-calibration setup between the WSK and the GS is placed at LHO. The estimated value is $ \alpha_{WSK}=0.2750 \pm 0.0027 $. The measured response ratio with the
University of Toyama system was
$\alpha_{{Toyama}} = 1.39120 \pm 0.00020 $.
$\mathcal{E}=\diag(\epsilon^{(1)}_{\rm Tx}, \epsilon^{(2)}_{\rm Tx}, \epsilon^{(1)}_{\rm Rx}, \epsilon^{(2)}_{\rm Rx})$
is the $4 \times 4$ efficiency matrix describing the losses of Path-1 and Path-2 on the transmitter module side and the receiver module side. The measured elements of the efficiency matrix are:
\begin{eqnarray}
\epsilon^{(1)}_{\rm Tx} = 0.980 \pm 0.040, \\
\epsilon^{(2)}_{\rm Tx} = 0.978 \pm 0.026, \\
\epsilon^{(1)}_{\rm Rx} = 0.984 \pm 0.040,\\
\epsilon^{(2)}_{\rm Rx} = 0.974 \pm 0.026.
\end{eqnarray}
\section{Discussion and conclusion}
The development of the new photon calibrator is expected to extend the possibilities of new technologies. In previous studies, the laser powers of the photon calibrators in LIGO and Virgo were 2~W and 3~W, respectively. In this development, we increased the maximum power to 20~W. LIGO and Virgo have used photon calibrators for measuring the response function, monitoring the time-dependent interferometer response, and hardware injection tests. For the measurement of the response function, a high-power laser can probe the response in the low- and high-frequency regions with sufficient SNR, which reduces the uncertainty in these regions. For monitoring the time-dependent interferometer response, LIGO and Virgo injected sine waves during the measurement. By monitoring their amplitudes, we can track the time-dependent interferometer response; however, the number of these lines is limited by the maximum laser power of the Pcal. With the KAGRA Pcal, we can increase the maximum number of monitoring signals. For hardware injection, one application is the verification of the analysis pipelines: a simulated signal is injected and its parameters are recovered.
A detailed understanding of the waveform accuracy and the gravitational wave detector response is necessary for understanding the physics of gravitational waves themselves and of astronomical and cosmological objects. The photon calibrator method is a modern way to obtain an accurate calibration. To increase the accuracy, KAGRA has employed new techniques: (i) a 20~W CW laser to improve the signal-to-noise ratio of high-frequency responses; (ii) independent control of the upper and lower beams to verify the rotation response; and (iii) a beam position monitoring and control system with QPDs and pico-motors. These techniques gave us a better understanding of the accurate model. In this paper, we characterized the relative power noise and the harmonics noise, and the measured results met our requirements for worldwide observation run 3. In our observation, KAGRA exchanged the laser power standard detector with LIGO and Virgo, which enabled us to calibrate the relative amplitudes of the interferometers.
The accuracy of the beam positions is not the dominant systematic error in the current observation. However, we need to consider cryogenic operation: during the cooling process, the position of the end test mass moves due to the thermal expansion of the suspension. The expected displacement from thermal expansion is about 5~mm. To reduce the elastic deformation effect, we need to adjust the beam positions during the cooling process. This will be demonstrated in a future experiment.
\begin{acknowledgments}
We would like to express our gratitude to Richard Savage, Evan Goetz, Jeff Kissel, and Peter King in LIGO for supporting us in developing a design for the KAGRA photon calibrator. We also would like to thank Ayako Node, Ayako Ueda, Iwao Murakami, and Hirokazu Murakami for supporting this study. This work was supported by MEXT, JSPS Leading-edge Research Infrastructure Program, JSPS Grant-in-Aid for Specially Promoted Research 26000005, JSPS Grant-in-Aid for Scientific Research in Innovative Areas 2905: JP17H06358, JP17H06361, and JP17H06364, JSPS Core-to-Core Program A. Advanced Research Networks, JSPS Grant-in-Aid for Scientific Research (S) 17H06133, the Mitsubishi Foundation, the joint research program of the Institute for Cosmic Ray Research, University of Tokyo, National Research Foundation (NRF) and Computing Infrastructure Project of KISTI-GSDC in Korea, Academia Sinica (AS), AS Grid Center (ASGC), and the Ministry of Science and Technology (MoST) in Taiwan under the following grants: AS-CDA-105-M06, 110-2636-M-008-001, 110-2123-M-007-002, the KAGRA collaboration, the LIGO project, and the Virgo project.
\end{acknowledgments}
\section{Introduction}
The use of networks to model contact patterns or interactions between individuals has proved to be a step change in how epidemics and other spreading processes are modelled \cite{Kiss2017,Latora2017,doi:10.1137/S003614450342480,Widgren2016,Wang2016,Danon2011}. The basic ingredient of such models is to represent individuals by nodes and contacts between these as links between nodes. The use of graph-theoretical methods have helped to reveal and understand the role of contact heterogeneity, preferential mixing and clustering in how disease invade and spread \cite{Bansal879,Keeling295}. Having good network models is crucial. Simple mechanistic models that capture and preserve key properties of empirical networks are often employed as they offer greater flexibility in changing and tuning various network properties. While such models and theory are well developed for static networks, it is only recently that we have empirically measured real-world time varying forms \cite{Rocha2016,Stehle2011,Bansal879,Sah169573,Keeling295,Eubank2005,Enns2011,Steinhaeuser2009,Danon2011,Kiss1332,Skyrms01082000}.
Current underlying models for network-based epidemiology fall into a handful of categories. Some just use empirical data collected from sensors and apply an appropriate disease model to this \cite{Bansal879,Rocha2016,Stehle2011}. Others use a fairly elementary model where links appear as in gathered data, but are given lifespans drawn from a uniform distribution \cite{Rocha2016}, or are given a simple weighting drawn directly from the data \cite{Sah169573,Stehle2011}. Others take collected data and use it to convert a series of fully connected networks into sparse ones \cite{Sah169573}. Alternative methods involve the use of an idealised network \cite{Keeling295}, regular random network \cite{Bansal879}, random Poisson network \cite{Bansal879,Keeling295}, scale-free random network \cite{Bansal879} or lattice \cite{Keeling295}. In this paper our aim is to analyse an empirical time varying network, in a statistically rigorous way, and build theoretical models that are able to reproduce and mimic the behaviour observed from data.
We will re-analyse data previously collected by the SocioPatterns collaboration (\url{http://www.sociopatterns.org/}) with special focus on time-varying contact patterns in a primary \cite{Gemmetto2014,10.1371/journal.pone.0023176} and high \cite{10.1371/journal.pone.0136497} school. In particular we will focus on measuring properties such as activation time and duration of links as well as off-durations of links. We will then propose and fit candidate parametric distributions to the empirical data. Based on these, we will propose a few different theoretical time-varying network models. Two different model types are proposed. The first model assigns on-off durations to each link from an appropriate probability distribution. Our second model triggers activations at appropriate times (with inter-event times being drawn from an appropriate distribution), before selecting the link to be activated (using a probability matrix drawn from the original data) and assigning an on-duration to that link from an appropriate probability distribution. Even if these models do not capture all the important features of the real-world network, they still provide a useful first approximation. Whilst we focus on school classrooms, our approach can be adapted to modelling other types of social interactions.
\section{Data Collection and Description}
In the original data - both for the primary and high school students - the participants were equipped with sensors that deemed them `in contact' if they were within 1 to 1.5m of each other (an interaction), a threshold chosen by the organizers of the original study. This was to act as a proxy of a close-range encounter during which a communicable disease infection can be transmitted, for example, either by cough or sneeze, or directly by hand contact \cite{10.1371/journal.pone.0023176}. Every 20s, a radio packet would be exchanged between the sensors, and all packets transferred would be relayed to a central system to be recorded. This scale was deemed to allow an adequate description of person-to-person interactions that includes brief encounters \cite{10.1371/journal.pone.0023176}.
In both cases, this central system saved the data in a CSV file, with each row containing the timestamp (in 20s intervals), the IDs of the two sensors in contact, and some additional data about the two participants (such as their class). We modify the original data slightly before our initial analysis. Firstly, we remove any participants marked as staff from the data as their behaviour could be potentially anomalous when compared to that of the school children. Whilst we acknowledge that this could remove any potential impact of staff on the behaviour of pupils, we feel justified in this as staff only account for $11$ participants and approximately $5\%$ of the originally recorded links - too small a sample to draw statistically significant conclusions about their behaviour. In future work, including this additional layer may lead to an improved model - however, we feel that more data describing these interactions would be needed before we could confidently add this to a model. We also split the students into their separate classes. Whilst this results in the discarding of approximately $20\%$ of the originally recorded links, this gives us more samples to analyse; moreover, it allows for a statistical comparison between the dynamics of different classes. From a more practical perspective, this restriction to classes has a considerable impact on the runtime of the model simulations (reducing the network size from around 500 students to around 25).
The choice to restrict to classes is also justified from a modelling perspective as it is realistic to assume (at least as an initial hypothesis) that contacts outside of the classroom (during break/lunch) would follow substantially different behaviour.
We also split the data into individual days - similarly to splitting by class, this helped reduce the runtime of the simulation as well as increasing the number of samples we could analyse. Again, this is not unrealistic, as the interactions between students in the same class can reasonably be assumed to be similar from one day to another.
\subsection{Analysis of Original Data}\label{ssec:features}
A series of MATLAB functions were written to take these (separated) CSV files, perform an analysis of a variety of network and temporal features, and attempt a best-fit analysis on all appropriate results - a full list of these features is given below. Animations showing the network evolution over time were also produced. For a listing of the code and short descriptions of the functions written to carry out this analysis, please see the handbook provided in the Supplemental Materials.
We identified a variety of key features for analysis. As usual, many more features can be observed from the data, and indeed, in order to approximate a completely realistic model, many of these should be analysed and incorporated into more detailed models. Our models are just an initial step into understanding these temporal networks of social interactions, and we are only focusing on aspects that categorise and describe both the topology of the network and several temporal properties of the system. These features are presented below, along with brief definitions of these terms:\\
\noindent \textbf{Active Nodes:} The measure of active nodes at a given time $t$ is defined as the number of pupils involved in at least one interaction at time $t$, as a fraction of all pupils active during that day.\\
\textbf{Active Links:} The measure of active links at a given time $t$ is defined as the number of unique (undirected) pupil-pupil interactions at time $t$, as a fraction of all possible links for that day, equal to $\ell_{\mathrm{max}}=N(N-1)/2$, where $N$ is the number of pupils active during that day in the class under consideration.\\
\textbf{Node \& Link Activity Potential:} The activity potential of a node is defined as the number of activations involving that node, as a fraction of all node activations across the day \cite{Perra2012}. We also define an analogue for links, defined as the number of activations of that link, as a fraction of all link activations across the day.\\
\textbf{Global Clustering Coefficient:} The global clustering coefficient at a given time $t$ is defined as the ratio between the number of closed triplets and the number of connected triplets in the network \cite{doi:10.1137/S003614450342480}. That is, three times the number of triangles in the network, divided by the number of paths of length $2$ (each triangle contributing three closed triplets).\\
\textbf{Node Degree:} The degree of node $n$ is the number of active links involving it \cite{Diestel2010}.\\
\textbf{Component Features:} Defining a component as a maximal set of nodes in which every pair is joined by a path \cite{Kleinberg2010}, we can also examine properties such as component count and nodes and links per component at a given time $t$.\\
\textbf{Activation Time:} For each link, an activation time is measured - defined as the period of time it takes for that specific link to be activated for the first time.\\
\textbf{On-Duration:} For each link, on-durations are measured - defined as the period of time between the activation and deactivation of that link.\\
\textbf{Off-Duration:} For each link, off-durations are measured - defined as the period of time between the deactivation and reactivation of that link.
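As a concrete illustration of the snapshot features above, the following Python sketch (the original analysis used MATLAB) computes active nodes, active links and the global clustering coefficient from a list of (time, id1, id2) contact records; the record format and function name are ours, not from the study.

```python
def snapshot_metrics(contacts, t, n_pupils):
    """Per-timestep network features from (time, id1, id2) contact records."""
    edges = {frozenset((i, j)) for (s, i, j) in contacts if s == t}
    nodes = {v for e in edges for v in e}
    nbrs = {v: set() for v in nodes}
    for i, j in (tuple(e) for e in edges):
        nbrs[i].add(j)
        nbrs[j].add(i)
    # Connected triplets: paths of length 2, centred on each node
    triplets = sum(d * (d - 1) // 2 for d in (len(nb) for nb in nbrs.values()))
    # Each triangle is counted once per edge, i.e. three times in total
    triangles = sum(len(nbrs[i] & nbrs[j])
                    for i, j in (tuple(e) for e in edges)) // 3
    return {
        "active_nodes": len(nodes) / n_pupils,
        "active_links": len(edges) / (n_pupils * (n_pupils - 1) / 2),
        "clustering": 3 * triangles / triplets if triplets else 0.0,
    }
```

Node degree, activity potentials and component features follow similarly from the `nbrs` adjacency structure.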
\subsection{Properties Identified from Original Data}
In the initial part of this article we use the observed data to fit all of the above quantities to certain distributions. These will act as a stepping stone to the second part of this article, in which we develop theoretical models, in an attempt to recreate the observations using Monte Carlo simulations.
As we do not have any explicit theories for the dynamics of any of our chosen properties, we shall test against a series of appropriate common probability distributions \cite[pp.~899--917]{mun2008} and variations on these, representing a range of behaviours defined on the semi-infinite interval $[0,\infty).$ We will be using exponential, gamma, Rayleigh, log-normal, Mittag-Leffler, generalised Pareto and Weibull distributions. All of these will have best-fit parameters chosen using three different methods - method of moments \cite{Bowman2006}, maximum likelihood estimators \cite{Scholz2006}, and the curve fitting tool in MATLAB (non-linear least squares) - and then compared to the empirical complementary cumulative distribution functions (eCCDFs) of the original data to determine which one fits best.
This comparison was achieved by looking at a variety of statistical distances - Kolmogorov-D, Cramer-von-Mises, Kuiper, Watson, Anderson-Darling and modified versions of the Kullback-Leibler and Jensen-Shannon \cite{Stephens1974}. These distances were chosen as they emphasise a wide range of properties of the distributions being compared - with, for example, some being more sensitive to changes in the head and tail of the eCCDF, whilst others are more sensitive to changes in the middle. Finding a distribution that had `good' values for all of these distances would indicate that it was a good fit across the entirety of the compared eCCDF.
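As an illustration of this comparison step, the following stdlib-only Python sketch (the original analysis used MATLAB) computes the Kolmogorov-D distance between a sample's empirical distribution function and a candidate fitted CDF; the parameterisation conventions (exponential by its mean, log-normal by $(\mu,\sigma)$) are our illustrative choices, not the paper's fitted values.

```python
import math

def ecdf_ks_distance(sample, cdf):
    """Kolmogorov-D distance between a sample's eCDF and a candidate CDF."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for k, x in enumerate(xs, start=1):
        f = cdf(x)
        # eCDF jumps at each data point, so check both sides of the step
        d = max(d, abs(k / n - f), abs((k - 1) / n - f))
    return d

# Candidate CDFs on [0, inf); parameterisations are our conventions
def exp_cdf(mean):
    return lambda x: 1.0 - math.exp(-x / mean)

def lognormal_cdf(mu, sigma):
    return lambda x: 0.5 * (1.0 + math.erf((math.log(x) - mu)
                                           / (sigma * math.sqrt(2.0))))
```

The other distances (Cramer-von-Mises, Kuiper, etc.) differ only in how the pointwise eCDF-CDF discrepancies are aggregated.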
In over $75\%$ of cases, the curve fitting tool in MATLAB produced the statistically best parameters, with the parameters chosen using this method in the majority of the remaining cases being only slightly different to those produced using a more optimal method. As a result of this, and additionally considering that the method of moments and maximum likelihood estimation do not work with all of our chosen distributions, we shall conduct any additional analysis using only the curve fitting tool, and only results produced using this method will be presented and used throughout this paper.
\subsubsection{Results from Data}
Below we present a summary of the distributions chosen using the method described above. Best-fit parameters and comparative distances have been excluded for brevity.\\
\noindent \textbf{Active Links:} The optimal tested distribution for the primary school data was log-normal, whilst for the high school data, both the Rayleigh and log-normal distributions gave similar fits, with log-normal being slightly more optimal.\\
\textbf{Active Nodes:} The optimal tested distribution for the primary school data was gamma, whilst for the high school data, the gamma and log-normal distributions both gave similar fits, with log-normal being more optimal in all but the most extreme values.\\
\textbf{Node Activity Potential:} In both data sets, gamma and log-normal distributions gave similar fits, with gamma being fractionally better.\\
\textbf{Links per Component:} For the primary school data, gamma and log-normal distributions both gave similar fits, with log-normal being marginally more optimal; for the high school data, gamma, log-normal and Rayleigh distributions all gave similar fits, with log-normal being slightly better.\\
\textbf{Nodes per Component:} For the primary school data, gamma, log-normal and Rayleigh distributions all gave similar fits, with log-normal being slightly better; for the high school data, gamma and log-normal distributions both gave similar fits, with no clear optimal distribution.\\
\textbf{Global Clustering Coefficient:} In both data sets, gamma, log-normal and Rayleigh distributions all gave similar fits. For the primary school data there was no clear optimal distribution between these, whilst for the high school data, a gamma distribution was slightly better.\\
\textbf{Interaction Times/On-Times:} In both data sets, the optimal tested distribution was generalised Pareto.\\
\textbf{Number of Components:} In both data sets, the optimal tested distribution was gamma.\\
\textbf{Time Between Contacts/Off Times:} In both data sets, the best tested distribution was log-normal.
\subsubsection{Link Inhomogeneity}
Not surprisingly, the off-durations of links (recalling that a link is off if the participants are not in contact with each other) cannot be assumed to be homogeneous across students. This is in accordance with the realistic assumption that certain children are more popular or sociable than others. A further layer of statistical fitting determined that, when attempting to recreate the primary school data, it was optimal to have the off-durations vary link-by-link. The optimal choice for this was an exponential distribution with log-normally distributed parameters. We additionally examined the \textbf{triangle count} within the network, as well as \textbf{inter-event times} (the time between two consecutive link activations in the network). For the first, a gamma distribution was the optimal fitted distribution, whilst for the second, a log-normal distribution was selected.
These two features were chosen to be added to the list of those analysed as the triangle count offers an additional measurement of the nature of the network structure alongside the global clustering coefficient, whilst the inter-event times were necessary for building our second model.
\subsubsection{Comparing Samples}\label{sssec:comparesamples}
\begin{table}
\centering
\input{comparesamples.txt}
\caption{Acceptances of $\mathcal{H}_{0}$ (see equation \ref{eq:h0comparesamples}) at $1\%$ and $5\%$ Levels (max: 190) (see subsection \ref{sssec:comparesamples} for full explanation)}
\label{tab:comparesamples}
\end{table}
When we create our models, we aim to have little dependence on the original data - varying parameters only between differing settings (primary school vs. high school), rather than within these settings. For example, we would aim to have the parameters for the random variable generation for the model for class 5A in the primary school to be the same as those in the model for class 1B of the primary school. Therefore, our first statistical test will be to test the validity of this statement. Our $\mathcal{H}_{0}$ is
\begin{equation}\label{eq:h0comparesamples}
\begin{split}
\mathcal{H}_{0}: &\text{ The two observed samples come}\\
&\text{ from a common distribution.}
\end{split}
\end{equation}
We compute two-sample Kolmogorov-Smirnov distances \cite{smirnov1939estimate,Press1992} between each of our data sets within each setting.
We present the number of acceptances of this hypothesis (out of 190) for our primary school data samples at the 1 and 5 percent levels in Table \ref{tab:comparesamples}.
Examining these results, we conclude that while acceptance of $\mathcal{H}_{0}$ is not unanimous, it is substantial for some metrics and notable for others. Other metrics have a very low degree of matching - most noticeably active links and interaction times. Whilst this is not ideal for our aim of only varying parameters between scenarios, for brevity we shall still proceed under this assumption - although it should be noted that when we present our models we do not actually fix the parameter in the distribution for our interaction times. Instead we draw this parameter from a random distribution itself, which reflects this behaviour in the data originally collected by the SocioPatterns Collaboration.
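The two-sample Kolmogorov-Smirnov distance underlying these comparisons can be sketched in a few lines of stdlib Python (the original analysis used MATLAB); the critical-value step that turns a distance into an acceptance or rejection is omitted here.

```python
import bisect

def ks_2sample(a, b):
    """Two-sample Kolmogorov-Smirnov distance between samples a and b."""
    a_s, b_s = sorted(a), sorted(b)
    d = 0.0
    # The supremum of |F_a - F_b| is attained just after a data point,
    # so checking every observed value suffices
    for x in a_s + b_s:
        fa = bisect.bisect_right(a_s, x) / len(a_s)
        fb = bisect.bisect_right(b_s, x) / len(b_s)
        d = max(d, abs(fa - fb))
    return d
```

Acceptance of $\mathcal{H}_{0}$ then amounts to comparing this distance against the usual Kolmogorov-Smirnov critical value for the two sample sizes at the chosen significance level.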
\section{Model Creation}\label{sec:models}
The aim of our model is to recreate the dynamics seen in the original data with as few properties and parameters taken from the original data as possible. In more precise terms, we wish to test if the mechanism of interactions within the original data can be explained by a small number of key factors and identify and refine those parameters. As with any model, we doubt that we will be able to replicate every property in the original data, but it is important to examine the differences between our model and the original data, and to put a measurement on the distance between the two. Whilst there will be some properties that we will be controlling, there will be several network and temporal properties that emerge from our model that we can compare to our original data, hence giving us a measure of the distance between the two. For the sake of brevity, we will only present the results and parameter values for primary school data below. Analysis supporting our choice of distributions and parameters is provided in the Supplemental Materials.
\subsection{Model 1}
For this stage-0 model we look at each (potential) link individually and model its behaviour as an alternating renewal process (ARP). We also include an initialisation phase for each link that models the time (in seconds) until the first activation of that link. This can be seen as the following process for each link, where $X_{ij,n}$ represents the duration of the $n$-th on (or off) phase for the link $(i, j)$, with the distributions chosen using an empirical analysis of the data. Algorithmically, we present this as:
\begin{enumerate}
\item \textbf{Initialisation Phase:} Generate the initialisation time for this link with $$X_{ij}^{\mathrm{Init}} \sim \text{Exp}(6278.0)$$
\item \textbf{ARP On-Phase:} Assign the link the on-duration $$X_{ij,n}^{\mathrm{On}} \sim \text{Exp}\left(Y_{ij}\right)$$ with parameter fixed for each $(i, j)$ to $$Y_{ij} \sim \text{LogNormal}(3.5348,0.2807).$$
\item \textbf{ARP Off-Phase:} Assign the link the off-duration as $$X_{ij,n}^{\mathrm{Off}} \sim \text{LogNormal}(6.3512,1.3688).$$
\item \textbf{Repeating Process:} Repeat Stages 2 and 3 until the total time has reached or exceeded the simulation time.
\end{enumerate}
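The process above can be sketched in Python for a single link as follows. We read $\text{Exp}(\lambda)$ as an exponential with mean $\lambda$ and the log-normal parameters as $(\mu,\sigma^{2})$, matching the summary table; both readings, and the function name, are our assumptions rather than statements from the text.

```python
import math
import random

def simulate_link(t_max, rng=random):
    """Model 1 sketch: one link as an alternating renewal process.

    Returns the (start, end) on-phases, in seconds, up to t_max.
    """
    # The tables report (mu, sigma^2); lognormvariate takes sigma
    y = rng.lognormvariate(3.5348, math.sqrt(0.2807))   # per-link on-mean Y_ij
    events = []
    t = rng.expovariate(1 / 6278.0)                     # initialisation phase
    while t < t_max:
        on = rng.expovariate(1 / y)                     # ARP on-phase
        events.append((t, min(t + on, t_max)))
        t += on + rng.lognormvariate(6.3512, math.sqrt(1.3688))  # off-phase
    return events
```

A full Model 1 run simply repeats this independently for every pair of pupils in the class.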
\subsection{Model 2a}
In this stage-0 model, we deal with the system on a macroscopic basis. Times between activations are drawn from an appropriate distribution; at each of these activations, a link is chosen at random from a custom distribution constructed from the link activity potentials (as defined in subsection \ref{ssec:features}) extracted from the data and represented by a symmetric weighting matrix $M$. If the chosen link is already active in the network, this selection is discarded, and another link is chosen for that activation time. Once a link has been activated, it is given a lifespan from an appropriate distribution. This can be seen as the following process, with the distributions chosen using an empirical analysis of the data. Algorithmically, we present this as:
\begin{enumerate}
\item \textbf{Time between Activations:} Generate $$t_{i} \sim \mathrm{LogNormal}(5.6901\times 10^{-4},1.7957).$$
\item \textbf{Link Activation:} At each activation time $T_{k}$, defined as $$T_{k}=\sum_{i=0}^{k}{t_{i}},$$ a link $(n_{1},n_{2})$ is chosen using the relative weights in the matrix $M$. If $(n_{1},n_{2})$ is already active at time $T_{k}$, choose another link for this time $(n'_{1},n'_{2})$.
\item \textbf{Assign On-Durations:} This link is given the duration $$X_{n_1 n_2}^{k} \sim \mathrm{Exp}(Y_{n_1 n_2})$$ as before with parameter fixed for each $(n_1,n_2)$ to $$Y_{n_1 n_2} \sim \mathrm{LogNormal}(3.5348,0.2807).$$
\end{enumerate}
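A Python sketch of this process is given below. Here `weights` maps each link $(i,j)$, $i<j$, to its selection weight (the matrix $M$); the retry cap of 20 redraws when a chosen link is already active is our simplification of the indefinite redraw in Stage 2, and the parameter readings (exponentials by their means, log-normals as $(\mu,\sigma^{2})$) are our assumptions.

```python
import math
import random

def simulate_model2a(weights, t_max, rng=random):
    """Model 2a sketch: global activation clock plus weighted link choice."""
    links = list(weights)
    w = [weights[l] for l in links]
    # Y_{n1 n2}: per-link mean on-duration, fixed once per link
    on_mean = {l: rng.lognormvariate(3.5348, math.sqrt(0.2807))
               for l in links}
    active_until = {l: 0.0 for l in links}
    sig_t = math.sqrt(1.7957)
    events, t = [], 0.0
    while True:
        t += rng.lognormvariate(5.6901e-4, sig_t)  # time between activations
        if t >= t_max:
            return events
        for _ in range(20):        # redraw if the chosen link is active
            link = rng.choices(links, weights=w)[0]
            if active_until[link] <= t:
                dur = rng.expovariate(1 / on_mean[link])
                active_until[link] = t + dur
                events.append((t, link, dur))
                break
```
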
\subsection{Model 2b}\label{ssec:model2bcreation}
In this model, we modify our Model 2a and attempt to improve triangle count and clustering. Most of the method is similar to the earlier model, but we force chosen links to close a pair of links into a triangle at a fixed rate, reweighting our selection matrix to only account for these links (if no such links exist, we use the original selection matrix), before proceeding as before with this link selected. This can be seen as the following algorithm, with the distributions always chosen using an empirical analysis of the data:
\begin{enumerate}
\item \textbf{Time between Activations:} Generate $$t_{i} \sim \mathrm{LogNormal}(5.6901\times 10^{-4},1.7957).$$
\item \textbf{Triangulation Bias:} Generate a random number $u$ such that
$$u \sim \mathrm{Unif}[0,1].$$
If $u\geq 0.0640$ (our `forcing' rate, calculated from the data), proceed to Stage 3a, else proceed to Stage 3b.
\item \textbf{Link Activation:}
\begin{enumerate}
\item \textbf{Standard Activation:} At each activation time $T_{k}$, defined as $$T_{k}=\sum_{i=0}^{k}{t_{i}},$$ a link $(n_{1},n_{2})$ is chosen using the relative weights in the matrix $M$. If $(n_{1},n_{2})$ is already active at time $T_{k}$, choose another link for this time $(n'_{1},n'_{2})$. Proceed to Stage 4.
\item \textbf{Triangle-Biased Activation:}
\begin{enumerate}
\item \textbf{Matrix Reweighting:} Generate the (symmetric logical) matrix $C$ of links that will complete triangles. If this matrix is $0$, set $C$ to the all-ones matrix, so that the original weights are retained. Create the adjusted weighted matrix $M'$ where $M'_{ij}=C_{ij}M_{ij}$.
\item \textbf{Link Activation:} At each activation time $T_{k}$, defined as $$T_{k}=\sum_{i=0}^{k}{t_{i}},$$ a link $(n_{1},n_{2})$ is chosen using the relative weights in the adjusted matrix $M'$. If $(n_{1},n_{2})$ is already active at time $T_{k}$, choose another link for this time $(n'_{1},n'_{2})$. Proceed to Stage 4.
\end{enumerate}
\end{enumerate}
\item \textbf{Assign On-Durations:} This link is given the duration $$X_{n_1 n_2}^{k} \sim \mathrm{Exp}(Y_{n_1 n_2})$$ as usual with parameter fixed for each $(n_1,n_2)$ to $$Y_{n_1 n_2} \sim \mathrm{LogNormal}(3.5348,0.2807).$$
\end{enumerate}
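The matrix reweighting in Stage 3b, the only step that differs from Model 2a, can be sketched as follows. The representation of $M$ as a list-of-lists of weights, the function name, and the fallback to (a copy of) the original matrix when no inactive link would close a triangle - per the prose description above - are ours.

```python
def triangle_weights(M, active, n):
    """Model 2b reweighting: restrict M to links that would close a triangle.

    M is an n x n symmetric weight matrix (list of lists) with zero
    diagonal; `active` is a set of frozenset({i, j}) links currently on.
    """
    nbrs = [set() for _ in range(n)]
    for e in active:
        i, j = tuple(e)
        nbrs[i].add(j)
        nbrs[j].add(i)
    Mp = [[0.0] * n for _ in range(n)]
    found = False
    for i in range(n):
        for j in range(i + 1, n):
            # An inactive link (i, j) closes a triangle if i and j
            # share an active neighbour
            if frozenset((i, j)) not in active and nbrs[i] & nbrs[j]:
                Mp[i][j] = Mp[j][i] = M[i][j]
                found = True
    return Mp if found else [row[:] for row in M]
```
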
\subsection{Model 2c}
We shall again build upon our previous model - Model 2b - this time changing our matrix $M$. Previously, this has been a fixed matrix extracted from the data, but we wish to move to a randomly generated one to reduce this strict dependency on the original data. Analysing these (symmetric) matrices, we examine the row (or column) sums and fit a distribution to them; from the data, an appropriate choice is $$M_{i\Sigma} = \sum_{j=1}^{n}{M_{ij}} \sim \Gamma(12.3109,0.0037).$$
For our first attempt at generating an appropriate random matrix $M$, we shall assume that each term is taken from a gamma distribution with
$$M_{ij}\sim\Gamma(\mu^{A}_{i},0.0037)+\Gamma(\mu^{B}_{j},0.0037)$$
for $i<j$, $M_{ij}=0$ for $i=j$ and $M_{ij}=M_{ji}$ for $i>j$. This distribution is chosen in a simple yet natural way that ensures correlations across rows and columns. We also construct this in such a way that the choice of a self-loop is impossible, whilst also ensuring symmetry (which is to be expected as our network is undirected). Due to the additive properties of the gamma distribution, this is equivalent to the distribution
$$M_{ij}\sim\Gamma(\mu^{A}_{i}+\mu^{B}_{j},0.0037)$$
for $i<j$, $M_{ij}=0$ for $i=j$ and $M_{ij}=M_{ji}$ for $i>j$.
We can use the properties of the gamma distribution to specify the parameters $\mu^A_i$ and $\mu^B_j$ as follows. As this matrix has to be symmetric, we modify those entries below the diagonal accordingly.
To sum across a row, we first add the entries to the right of the diagonal, which is equal to
$$(n-i)\mu^{A}_{i} + \sum_{j=i+1}^{n}\mu^{B}_{j}.$$
We then notice that, by symmetry, the entries to the left of the diagonal are equal to the corresponding column sum above the diagonal, namely
$$(i-1)\mu^{B}_{i} + \sum_{j=1}^{i-1}\mu^{A}_{j},$$
giving the total sum to be
$$(n-i)\mu^{A}_{i} + \sum_{j=i+1}^{n}\mu^{B}_{j} + (i-1)\mu^{B}_{i} + \sum_{j=1}^{i-1}\mu^{A}_{j}.$$
To match the distributions for the row sums, we require that:
\begin{align*}
(n&-1)\mu^{A}_{1} + \sum_{j=2}^{n}\mu^{B}_{j}\\
&= (n-2)\mu^{A}_{2} + \sum_{j=3}^{n}\mu^{B}_{j} + \mu^{B}_{2} + \mu^{A}_{1}\\
&= (n-3)\mu^{A}_{3} + \sum_{j=4}^{n}\mu^{B}_{j} + 2\mu^{B}_{3} + \sum_{j=1}^{2}\mu^{A}_{j}\\
&= \hdots \\
&= \mu^{A}_{n-1} + \mu^{B}_{n} + (n-2)\mu^{B}_{n-1} + \sum_{j=1}^{n-2}\mu^{A}_{j}\\
&= (n-1)\mu^{B}_{n} + \sum_{j=1}^{n-1}\mu^{A}_{j} = 12.3109
\end{align*}
The trivial solution to this is $\mu^{A}_{i}=\mu^{B}_{j}=\mu^{\star} \; \forall i,j\in\{1,2,\hdots,n\}$, giving $\mu^{\star}=\frac{12.3109}{2(n-1)}$. Since each entry then has shape parameter $\mu^{A}_{i}+\mu^{B}_{j}=2\mu^{\star}$, our initial model for a randomly generated symmetric $M$ shall be with $$M_{ij} \sim \Gamma\left(\frac{12.3109}{n-1},0.0037\right)$$ for $i<j$, $M_{ij}=0$ for $i=j$ and $M_{ij}=M_{ji}$ for $i>j$. Whilst the use of this trivial solution is somewhat simplistic, we believe that the inclusion of this method is an important step as it allows us to examine behaviours and test mechanics before examining non-trivial solutions in future work.
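Generating such a matrix is straightforward; the Python sketch below is ours. Following the row-sum derivation above, with the trivial solution each entry has shape $\mu^{A}_{i}+\mu^{B}_{j}=2\mu^{\star}=12.3109/(n-1)$, so that every row sum is (exactly) $\Gamma(12.3109,0.0037)$-distributed.

```python
import random

def random_weight_matrix(n, rng=random):
    """Model 2c sketch: symmetric random selection matrix with gamma entries.

    Each off-diagonal entry has shape 2*mu_star = 12.3109/(n-1) and
    scale 0.0037, so row sums are Gamma(12.3109, 0.0037)-distributed.
    """
    k, theta = 12.3109 / (n - 1), 0.0037
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            # gammavariate takes (shape, scale); enforce symmetry and
            # a zero diagonal (no self-loops)
            M[i][j] = M[j][i] = rng.gammavariate(k, theta)
    return M
```

The expected row sum is then $12.3109 \times 0.0037 \approx 0.0456$, independent of the class size $n$.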
\subsection{Summary}\label{ssec:summary}
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|}\hline
\textbf{Model} & \textbf{Parameters} & \textbf{Parameter Values} & \textbf{Parameter Count} \\ \hline
\multirow{3}{*}{Model 1} & $X_{ij}^{\mathrm{Init}} \sim \text{Exp}(\lambda)$ & $\lambda = 6278.0$ & \multirow{3}{*}{5} \\ \cline{2-3}
& $Y_{ij} \sim \text{LogNormal}(\mu_{1},\sigma_{1}^{2})$ & $(\mu_{1},\sigma_{1}^{2}) = (3.5348,0.2807)$ & \\ \cline{2-3}
& $X_{ij,n}^{\mathrm{Off}} \sim \text{LogNormal}(\mu_{2},\sigma_{2}^{2})$ & $(\mu_{2},\sigma_{2}^{2}) = (6.3512,1.3688)$ & \\ \hline\hline
\multirow{2}{*}{Model 2a} & $t_{i} \sim \mathrm{LogNormal}(\mu_{1},\sigma_{1}^{2})$ & $(\mu_{1},\sigma_{1}^{2}) = (5.6901\times 10^{-4},1.7957)$ & \multirow{2}{*}{4} \\ \cline{2-3}
& $Y_{n_1 n_2} \sim \mathrm{LogNormal}(\mu_{2},\sigma_{2}^{2})$ & $(\mu_{2},\sigma_{2}^{2}) = (3.5348,0.2807)$ & \\ \hline \hline
\multirow{4}{*}{Model 2b} & $t_{i} \sim \mathrm{LogNormal}(\mu_{1},\sigma_{1}^{2})$ & $(\mu_{1},\sigma_{1}^{2}) = (5.6901\times 10^{-4},1.7957)$ & \multirow{4}{*}{$5+\frac{n(n-1)}{2}$} \\ \cline{2-3}
& $Y_{n_1 n_2} \sim \mathrm{LogNormal}(\mu_{2},\sigma_{2}^{2})$ & $(\mu_{2},\sigma_{2}^{2}) = (3.5348,0.2807)$ & \\ \cline{2-3}
& $u \geq u_{f}$ (our `forcing' rate) & $u_{f}=0.0640$ & \\ \cline{2-3}
& $M$ & $n\times n$ symmetric matrix & \\ \hline \hline
\multirow{4}{*}{Model 2c} & $t_{i} \sim \mathrm{LogNormal}(\mu_{1},\sigma_{1}^{2})$ & $(\mu_{1},\sigma_{1}^{2}) = (5.6901\times 10^{-4},1.7957)$ & \multirow{4}{*}{7} \\ \cline{2-3}
& $Y_{n_1 n_2} \sim \mathrm{LogNormal}(\mu_{2},\sigma_{2}^{2})$ & $(\mu_{2},\sigma_{2}^{2}) = (3.5348,0.2807)$ & \\ \cline{2-3}
& $u \geq u_{f}$ (our `forcing' rate) & $u_{f}=0.0640$ & \\ \cline{2-3}
& $M_{ij} \sim \Gamma(k,\theta)$ & $(k,\theta)=\left(\frac{12.3109}{n-1},0.0037\right)$ & \\ \hline
\end{tabular}
\caption{Summary of Model Dependencies (see subsection \ref{ssec:summary} for full explanation and subsection \ref{ssec:model2bcreation} for the definitions of $u_{f}$ and $M$)}
\label{tab:summary}
\end{table*}
In Table \ref{tab:summary} we present a concise comparative summary of the data dependencies of each of our 4 model variants. For most of our models, we feel as though the parameter count is acceptable considering the complexities of the behaviours we are attempting to capture. In Model 2b, the parameter count is much higher than reasonable due to the explicit dependence on the original data, suggesting that this would not be an ideal model to fully implement - however it is included in our analysis in order to allow us to observe the accuracy of Model 2c.
\section{Model Analysis}\label{sec:modelanalysis}
\begin{table*}
\centering
\input{distance.txt}
\caption{Selected Two-Sample Kolmogorov-Smirnov Distances (see section \ref{sec:modelanalysis} for full explanation)}
\label{tab:distances}
\end{table*}
Please note, in the figures highlighting key results, simulated data is represented by crosses whereas observed data is represented by dotted lines, with the data displayed as an eCCDF with log-log axes (with scaling preserved between models). Each colour represents a different simulation or data set. In order, the four eCCDFs shown represent active nodes, node activity potentials, component counts and the global clustering coefficients. We choose these metrics to illustrate as they represent both promising behaviours and less-optimal ones, thereby giving a representative snapshot of our results. Additionally, these eCCDFs are some of the clearer and easier ones to read, allowing us to demonstrate a number of behaviours in a brief and compact manner. It should be noted that in some cases (most apparent in the case of the global clustering coefficients) some of these eCCDFs appear not to start at $1$ as expected - this is a result of a high prevalence of the value $0$ in our data, with a large jump between this and other values. For readability, this jump has been excluded from the graphics, with our images only showing the section of the graph where the majority of our values fall.
We also present comparative data in two tables. In Table \ref{tab:distances}, we show a summary of the two-sample Kolmogorov-Smirnov distances \cite{smirnov1939estimate,Press1992} between our collection of 20 empirical samples and 20 simulated data samples from each of the 4 models presented above - showing the minimum, maximum, mean and mode of the distance between any of the 20 sets of real world data and any of the 20 sets of generated data.
We also compare horizontally, comparing each empirical data set against 50 data sets generated using our chosen metrics. We test the hypothesis $\mathcal{H}_{0}$, which in this case is
\begin{equation}\label{eq:h0comparisons}
\begin{split}
\mathcal{H}_{0}: &\text{ The chosen empirical and generated}\\
&\text{ data samples come from a common}\\
&\text{ distribution.}
\end{split}
\end{equation}
In Table \ref{tab:comparisons}, we present the total number of acceptances (out of a possible 1000) at the $5\%$-level of this hypothesis when tested on a particular metric.
\begin{center}\begin{figure}
\begin{subfigure}{0.4\columnwidth}
\includegraphics[width=\columnwidth]{echoes_orig}
\caption{Original Data}
\label{subfig:echoes_orig}
\end{subfigure}
\begin{subfigure}{0.4\columnwidth}
\includegraphics[width=\columnwidth]{echoes_m1}
\caption{Model 1}
\label{subfig:echoes_m1}
\end{subfigure}
\begin{subfigure}{0.4\columnwidth}
\includegraphics[width=\columnwidth]{echoes_m2a}
\caption{Model 2a}
\label{subfig:echoes_m2a}
\end{subfigure}
\begin{subfigure}{0.4\columnwidth}
\includegraphics[width=\columnwidth]{echoes_m2b}
\caption{Model 2b}
\label{subfig:echoes_m2b}
\end{subfigure}
\begin{subfigure}{0.4\columnwidth}
\includegraphics[width=\columnwidth]{echoes_m2c}
\caption{Model 2c}
\label{subfig:echoes_m2c}
\end{subfigure}
\caption{Long Term Behaviours for Original Data and Models - Size of Nodes \& Transparency of Links Represent Relative Activities (see last paragraph of the opening of section \ref{sec:modelanalysis} for full explanation and the relevant subsections of section \ref{sec:modelanalysis} and section \ref{sec:modelcomp} for an analysis of these results). One immediate observation is that Model 1 homogenises much faster - note the limited number of darker links.}
\label{fig:echoes}
\end{figure}\end{center}
Additionally, we present Figure \ref{fig:echoes} to highlight long-term behaviours in our model. In this figure, the transparency of each link represents its relative activity in comparison to other links, and the size of each node represents the relative activity of each node. The 5 images in this figure represent this behaviour at $t=15000$ seconds for an example of the original data, Model 1, Model 2a, Model 2b and Model 2c. Using this figure, we can see systemic behaviours, such as possible grouping of nodes into friendship groups or similar metrics that would be more difficult to measure empirically. This also gives us an intrinsic definition for link \textbf{spread}. Figure \ref{subfig:echoes_m1} demonstrates a poor spread - the long-term behaviour is relatively homogeneous with fewer darker links. Similarly, a simulation that resulted in long-term behaviour that only had darker links limited to a very small number of nodes would also suffer from poor spread. Comparatively, Figure \ref{subfig:echoes_orig} has a better edge spread - there are a higher number of darker links spread among a larger number of nodes. More precisely, this is measuring a combination of factors - including activity potentials, component structures and other network features - but allows us to get an impression of many of these features at a glance. We do not expect a perfect matching between the examples here due to the randomness of the data, but are instead looking for system-wide similarity in behaviour. Differences are expected in the placement of stronger links and nodes (and indeed, do occur between simulation runs). However, we would expect a well-fitting model to exhibit similar numbers to those in the original data and with a similar relationship between them (for example, as Figure \ref{subfig:echoes_orig} has many nodes being involved in at least one stronger link, a well-fitting model would not be expected to have all of its strong links emanating from a common node).
\subsection{Model 1}\label{ssec:model1}
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{model1}
\caption{Selected Results for Model 1. Simulated data is represented by crosses whereas observed data is represented by dotted lines. Each colour represents a different simulation or data set. See subsection \ref{ssec:model1} for full explanation.}
\label{fig:model1}
\end{figure}
Looking at Figure \ref{fig:model1}, the appropriate sections of Tables \ref{tab:distances} and \ref{tab:comparisons}, and other comparative and graphical results not directly presented in this paper for brevity, we see promising results for a first attempt at creating a model. The model produces acceptable fits for several of the examined features. Active links, active nodes and on-durations all produce graphically acceptable results, although using our Kolmogorov-Smirnov acceptances (shown in Table \ref{tab:comparisons}), there are improvements to be made in terms of these fits. For off-durations, when we compare our eCCDFs, we observe a reasonable fit in certain areas of the distribution, although this fit deteriorates for extreme values and once again we notice that our acceptances indicate that the current construction of this model requires refinement to fully capture this behaviour. For our global clustering coefficient (presented in Figure \ref{fig:model1}) and triangle count, we have poor fits when comparing the data sets graphically, although we are getting a small number of acceptances with our two-sample Kolmogorov-Smirnov tests - likely as a result of an extreme prevalence of certain values in these data sets.
For nodes per component, links per component and the component count (partially presented in Figure \ref{fig:model1}), we observe acceptable fits graphically and are indeed accepting a small number of these fits when calculating our statistical distances, as shown in Table \ref{tab:comparisons}. This also indicates that slight refinement of this fit may be possible. For node activity potential, we have a good fit, both graphically and when considering our number of Kolmogorov-Smirnov acceptances.
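The component and triangle metrics referred to throughout this section are straightforward to compute on a static snapshot of active links; a self-contained sketch, assuming the snapshot is given as a set of undirected edges (the helper name is ours):

```python
from itertools import combinations

def component_metrics(edges):
    """Return (nodes, links) per connected component, plus the
    triangle count, for a snapshot of undirected edges."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:                      # depth-first traversal
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        n_links = sum(1 for u, _ in edges if u in comp)
        components.append((len(comp), n_links))
    # every unordered triple whose three pairs are all adjacent
    triangles = sum(1 for a, b, c in combinations(adj, 3)
                    if b in adj[a] and c in adj[a] and c in adj[b])
    return components, triangles

# one triangle component and one linear (dyadic) component
comps, tri = component_metrics({('a', 'b'), ('b', 'c'), ('a', 'c'), ('d', 'e')})
```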
It is evident that this model does have noticeable differences to the observed data. We have a substantial number of small linear components in our model, which is impacting many of the features described above. Additionally, there are problems with link selection spread (defined in section \ref{sec:modelanalysis}), as can be seen when comparing the original behaviour displayed in Figure \ref{subfig:echoes_orig} with that in Figure \ref{subfig:echoes_m1}, resulting in very few popular links (reflecting strong friendships), which could also explain differences within the node activity potentials at the tail of our CCDFs.
\subsection{Model 2a}\label{ssec:model2a}
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{model2a}
\caption{Selected Results for Model 2a. Simulated data is represented by crosses whereas observed data is represented by dotted lines. Each colour represents a different simulation or data set. See subsection \ref{ssec:model2a} for full explanation.}
\label{fig:model2a}
\end{figure}
Considering Figure \ref{fig:model2a}, the appropriate sections of Tables \ref{tab:distances} and \ref{tab:comparisons} and other results measured, we see a substantially improved model. As with Model 1, we have results that appear graphically similar across the entirety, or key sections, of the distribution for active links, active nodes, global clustering coefficient, on-durations and off-durations, whilst our Kolmogorov-Smirnov distances for these indicate that there are still improvements to the fits to be made here. For our triangle count, we are seeing reasonable fits graphically and are accepting a higher number of our statistical comparisons.
Again, for nodes per component, links per component and the component count, we observe acceptable fits graphically (partially presented in Figure \ref{fig:model2a}) and are indeed accepting a small number of these fits when calculating our statistical distances - overall a slightly higher number than in Model 1, but with only small variations in each one. For node activity potential, we have a very good fit, both graphically and when considering Kolmogorov-Smirnov distances.
As can be seen when we compare Figure \ref{subfig:echoes_orig} and Figure \ref{subfig:echoes_m2a}, we are also producing an acceptable link selection spread (defined in section \ref{sec:modelanalysis}), which reflects the varying levels of friendships observed in the real world data.
However, this model is insufficient to capture the related component structure - with our generated data still having too many linear components in comparison to triangles. Although attempting to resolve this will increase our dependence on the data, it is believed to be significant enough to warrant this.
\subsection{Model 2b}\label{ssec:model2b}
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{model2b}
\caption{Selected Results for Model 2b. Simulated data is represented by crosses whereas observed data is represented by dotted lines. Each colour represents a different simulation or data set. See subsection \ref{ssec:model2b} for full explanation.}
\label{fig:model2b}
\end{figure}
Looking at Figure \ref{fig:model2b}, the relevant sections of our tables and other results measured, we see similar results to Model 2a. Again, we have fits with various levels of visual similarity to the observed data for active links, active nodes, on-durations and off-durations, whilst our Kolmogorov-Smirnov distances for these, as reported in Tables \ref{tab:distances} and \ref{tab:comparisons}, indicate that there are still issues with these fits. For our global clustering coefficient, we have reasonable fits graphically but, as with Model 2a, are still having issues with Kolmogorov-Smirnov acceptances.
Again, for nodes per component, links per component and the component count, we observe acceptable fits graphically (partially presented in Figure \ref{fig:model2b}) and note in Table \ref{tab:comparisons} a slight increase or similar levels in count of acceptances. We have a similar result for the node activity potential, with a very good graphic fit and a very high number of Kolmogorov-Smirnov acceptances.
For the triangle count in the network, we observe good fits graphically and in terms of our statistical tests, with a substantial improvement over the results obtained in Model 2a.
We also observe varying levels of popularity in the links, reflecting the various levels of friendships that can be seen in the original data - as can be seen by comparing behaviours in Figure \ref{subfig:echoes_orig} and Figure \ref{subfig:echoes_m2b}.
\subsection{Model 2c}\label{ssec:model2c}
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{model2c}
\caption{Selected Results for Model 2c. Simulated data is represented by crosses whereas observed data is represented by dotted lines. Each colour represents a different simulation or data set. See subsection \ref{ssec:model2c} for full explanation.}
\label{fig:model2c}
\end{figure}
This model performs similarly to Model 2b, with little to no difference across our examined metrics. Whilst, as illustrated in Table \ref{tab:comparisons}, some see a slight drop in the number of acceptances of the null hypothesis for the two-sample Kolmogorov-Smirnov test, others see a slight increase, and overall we see a very marginal increase in the total count. Overall behaviours and link selection weighting reflect the observed data with a reasonable degree of accuracy, as can be seen in Figure \ref{fig:model2c} and a comparison between Figures \ref{subfig:echoes_orig} and \ref{subfig:echoes_m2c}.
\section{Model Comparison}\label{sec:modelcomp}
\begin{table*}
\centering
\input{comparisons.txt}
\caption{Acceptances of $\mathcal{H}_{0}$ (see equation \ref{eq:h0comparisons}) at $5\%$ Level (max: 1000) (see section \ref{sec:modelanalysis} for full explanation)}
\label{tab:comparisons}
\end{table*}
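Acceptance counts of the kind reported in Table \ref{tab:comparisons} come from two-sample Kolmogorov-Smirnov tests at the $5\%$ level. A self-contained sketch using the asymptotic critical value is given below; in practice a library routine such as scipy's `ks_2samp` would be used, and the function names here are ours:

```python
import math

def ks_statistic(x, y):
    """Two-sample KS statistic: maximum distance between the eCDFs."""
    xs, ys = sorted(x), sorted(y)
    d = 0.0
    for v in sorted(set(xs) | set(ys)):
        fx = sum(1 for t in xs if t <= v) / len(xs)
        fy = sum(1 for t in ys if t <= v) / len(ys)
        d = max(d, abs(fx - fy))
    return d

def ks_accept(x, y, alpha=0.05):
    """Accept H0 (common distribution) when the statistic is below the
    asymptotic critical value c(alpha) * sqrt((n + m) / (n * m))."""
    n, m = len(x), len(y)
    c = math.sqrt(-0.5 * math.log(alpha / 2))   # ~1.358 for alpha = 0.05
    return ks_statistic(x, y) <= c * math.sqrt((n + m) / (n * m))

# a table cell is then the number of accepting (observed, simulated) pairs:
# acceptances = sum(ks_accept(obs, sim) for obs, sim in pairs)
accepted = ks_accept([1, 1, 2, 3, 5, 8, 13], [1, 2, 2, 3, 5, 8, 12])
```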
Overall, there is a considerable improvement across most metrics between Model 1 and Model 2a. This can be seen empirically when we examine the statistical distances between the observed data and our generated simulations and the count of $5\%$ acceptances (as illustrated in Tables \ref{tab:distances} and \ref{tab:comparisons}). Significant improvements are made to the node activity potential and triangle count, as well as a noticeable graphical improvement to the global clustering coefficient, as can be seen in Figures \ref{fig:model1} and \ref{fig:model2a}. Whilst modifications could be made to Model 1 to improve its accuracy in some of these areas (such as including the link selection preference matrix), due to its improved performance with similar levels of dependence on the data, the second model will be the basis for all future work. We also notice a substantial improvement in link selection spread (defined in section \ref{sec:modelanalysis}) as we move between these models, with Model 2a reflecting real world behaviours much more closely in our observations, as displayed in Figure \ref{fig:echoes}.
Between Model 2a and Model 2b, many metrics remain similar, although as expected from our modifications to the algorithm, we do notice a considerable improvement to the triangle count, illustrated in both Table \ref{tab:comparisons} and when observing the decrease in the maximum and mean statistical distance for this metric in Table \ref{tab:distances}. However, one of the larger problems with Model 2b is that the link selection preference matrix depends heavily on the original data, and we note that we could reduce this data draw considerably by generating this matrix rather than extracting it directly from the data. Model 2c attempts to do this, and can be considered successful as we can observe in Tables \ref{tab:distances} and \ref{tab:comparisons}, although a deeper examination of the temporal and network properties indicates that further improvements are still to be made.
\section{Model Validation}\label{sec:validation}
\begin{table*}
\centering
\input{validation.txt}
\caption{Validation Acceptances of $\mathcal{H}_{0}$ (see equation \ref{eq:h0validation}) at $5\%$ Level (max: 1000) (see subsection \ref{sec:validation} for full explanation)}
\label{tab:validation}
\end{table*}
As indicated in Table \ref{tab:comparesamples}, our approach of using the same distributions throughout all of our primary school models is not ideal. Therefore, we shall test our methods to determine whether the dynamics used in our models are a valid choice. To do this, we shall draw temporal data directly from the appropriate eCCDFs - for Model 1, these are the on-, off- and activation times, whilst for Model 2, these are the on-times and interevent times. We shall then compare the data generated using this method to the real world sample that we draw the eCCDFs from - if we have a low statistical distance between these, we can conclude that our model dynamics have validity and that any issues identified in the examination above can be significantly addressed through parameter improvements and refinements to the choice of distributions for our random values.
In Table \ref{tab:validation}, we present the results of our validation. We take each of our 20 original data samples and input the appropriate eCCDFs in the place of the random generation outlined in Methods 1, 2a and 2b, as described in section \ref{sec:models}. We do not analyse Method 2c using this form of validation since, were we to draw the link selection preference matrix from the data, it would be functionally identical to Model 2b.
We then generate 50 samples for each and compare them to the original data (for a total of 1000 comparisons for each metric and model). Please note that interaction times for all models (and the time between contacts for Model 1) have been excluded from this table, as they are controlled directly from the data and thus a validation using this metric would serve no purpose. In this table, our $\mathcal{H}_{0}$ is given as
\begin{equation}\label{eq:h0validation}
\begin{split}
\mathcal{H}_{0}: &\text{ The chosen observed and validation}\\
&\text{ samples come from a common}\\
&\text{ distribution.}
\end{split}
\end{equation}
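Drawing temporal data directly from an eCCDF amounts, for an empirical sample, to uniform resampling of the observed values (inverse-transform sampling of the empirical distribution); a sketch with our own function names:

```python
import random

def eccdf(sample):
    """Empirical CCDF: sorted values paired with P(X > v)."""
    xs = sorted(sample)
    n = len(xs)
    return xs, [1 - (i + 1) / n for i in range(n)]

def draw_from_sample(sample, rng=random):
    """Inverse-transform draw from the eCCDF of an empirical sample,
    which is equivalent to picking an observed value uniformly."""
    return sample[rng.randrange(len(sample))]

xs, ccdf = eccdf([3, 1, 2])          # xs = [1, 2, 3], ccdf = [2/3, 1/3, 0]
value = draw_from_sample([5, 7, 9])  # one of the observed durations
```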
Using this data, we can clearly see that our variations of Model 2 have considerably improved dynamics over Model 1, although there are still improvements to be made. When we compare Tables \ref{tab:comparisons} and \ref{tab:validation}, we observe that whilst choosing the `right' time structures does lead to some improvements - most notably in terms of active nodes - it is not enough to ensure a fit across all chosen metrics, and therefore changes to our overall dynamics should be considered. From our examination of these results, we conclude that efforts should be made to improve link dynamics, and we hypothesise that modifying our code to change the number of links generated in the network should improve our dynamics - especially for active nodes, although we would also expect to see improvements in our global clustering coefficient and component features. This should also improve our time between contacts, as changing the number of link activations will have a direct impact on this metric.
However, despite these small improvements still to be made to our model, we conclude that Model 2b (and therefore 2c) has justifiable dynamics and that an improvement to the random generation will lead to an improved model overall.
\section{Conclusions}
We have developed two forms of model for the social interactions observed in the original data collected by the SocioPatterns collaboration \cite{Gemmetto2014,10.1371/journal.pone.0023176,10.1371/journal.pone.0136497}. In terms of statistical distance, both of these exhibit varying degrees of matching with the original data, with the second of our models outperforming the first in almost all of our chosen metrics. We have added refinements to this, improving upon this matching, whilst continuing to minimise the amount of dependence on the underlying data.
We have also run a form of model validation and can certainly acknowledge that our model dynamics have a notable degree of validity in a number of key metrics when compared to the real world data - whilst this indicates that we still have additional improvements to make to the mechanisms in our models, we believe that our current models are a promising step in a strong direction.
We also acknowledge that further refinement for the parameters and distributions used may lead to improved matching, although we believe the models presented here provide a solid foundation from which to proceed.
\section{Future Work}
Improvements to the method for generating our matrix in Model 2c will have to be undertaken before this algorithm is finalised. Additionally, further parameter and distribution refinement in our method will also be explored, including potentially moving from a Log-Normal distribution for the interevent times to a more complicated method in order to improve the matching between the generated time between contacts and that in the real world data. We will also attempt to make modifications as proposed in our model validation in section \ref{sec:validation}, although these improvements are only hypothesised to improve model dynamics. Once we have completed our model for primary school data, we shall move to the high school data by using the same method and adjusting parameters.
We will also carry out a deeper theoretical analysis of our model and examine any interesting patterns or behaviours within it, looking at long-term behaviours and, through simulation, the potential existence of any absorbing states. Additionally, once we have a finalised model and thus a statistically rigorous understanding of the distributions behind the observed behaviours, we can propose theoretical reasoning for these choices by examining the significance and underlying mechanisms of such distributions.
Eventually, we aim to place a network-driven epidemic model on our time-varying network and examine properties of disease spread and potential predictive power, comparing both to existing models and real data.
\section*{Notes}\label{sec:notes}
Supplemental materials can be found at the following link: {\tt https://drive.google.com/drive/folders/}\\
\noindent {\tt 1nLpdt91XUNElF1es2x3sm6qemqkQGrGf?usp=sharing}
\section*{Acknowledgements}
We acknowledge useful discussion with J\'{a}nos Kert\'{e}sz at the 13th Econophysics Colloquium \& 9th Polish Symposium on Physics in Economy and Social Sciences, Warsaw, July 2017. Additionally, we would like to thank an anonymous referee for a thorough reading of this article and for the useful suggestions that improved the clarity and presentation. This research has been partially funded by an EPSRC DTP grant.
Games (and especially games played on graphs) have been intensively
used in computer science as a powerful way of modelling interactions
between several computerised
systems~\cite{Thomas-cav02,Henzinger-tark05}. Until recently, more
focus had been put on the study of purely antagonistic games
(a.k.a.~zero-sum games), which conveniently represent systems evolving
in a (hostile) environment. In~this zero-sum games setting, the
objectives of both players are opposite: the aim of one player is to
prevent the other player from achieving her own objective.
Over the last ten years, games with non-zero-sum objectives have come
into the picture: they~allow for conveniently modelling complex
infrastructures where each individual system tries to fulfil its own
objectives, while still being subject to uncontrollable actions of the
surrounding systems. As an example, consider a wireless network in
which several devices try to send data: each device can modulate its
transmit power, in order to maximise its bandwidth and reduce energy
consumption as much as possible. In~that setting, focusing only on
optimal strategies for one single agent may be too narrow.
Game-theoreticians have defined and studied many other solution
concepts for such settings, of which Nash equilibrium~\cite{nash50} is
the most prominent. A~Nash equilibrium is a strategy profile where no
player can improve the outcome of the game by unilaterally changing
her strategy. In~other terms, in a~Nash equilibrium, each individual
player has a satisfactory strategy. Notice that Nash equilibria need
not exist or be unique, and are not necessarily optimal: Nash
equilibria where all players lose may coexist with more interesting
Nash equilibria. Finding \emph{constrained} Nash equilibria (\eg,
equilibria in which some players are required to win) is thus an
interesting problem for our setting.
In this paper, we report on our recent contributions on the
computation of Nash equilibria in concurrent games (preliminary works
appeared as~\cite{BBM10a,BBMU11,BBMU12}). Concurrent games played on
graphs are a general model for interactive systems, where the agents
take their decision simultaneously. Therefore concurrent games subsume
turn-based games, where in each state, only one player has the
decision for the next move. One motivation for concurrent games is the
study of \emph{timed games} (which are games played on timed
automata~\cite{AMPS98,AFH+03}): the~semantics of a timed game is
naturally given as a concurrent game (the~players all choose
simultaneously a delay and an action to play, and the player with the
shortest delay decides for the next move---this~mechanism cannot be
made turn-based since we cannot fix \textit{a~priori} the player who
will choose the smallest delay); the region-based game
abstraction which preserves Nash equilibria also requires the formalism of
concurrent games~\cite{BBMU11,brenguier12}. Multi-agent
infrastructures can be viewed as distributed systems, which can
naturally be modelled as concurrent games.
\subsection*{Our contributions}
The paper focuses on concurrent deterministic games and on \textit{pure} Nash
equilibria, that is, strategy profiles which are deterministic (as
opposed to randomised). In this work we assume strategies only depend
on the set of states which is visited, and not on the actions that
have been played. This is a partial-information hypothesis which we
believe is relevant in the context of distributed systems, where only
the effect of the actions can be seen by the players. We will discuss
in more detail all these choices in the conclusion.
In the context exposed above, we develop a complete methodology for
computing pure Nash equilibria in (finite) games. First, in
Section~\ref{sec:suspect}, we propose a novel transformation of the
multi-player concurrent game (with a preference relation for each
player) into a two-player zero-sum turn-based game, which we call the
\emph{suspect game}. Intuitively, in the suspect game, one of the
players suggests a global move (one action per player of the original
game), with the aim to progressively build a Nash equilibrium; while
the second player aims at proving that what the first player proposes
is \emph{not} a Nash equilibrium. This transformation can be applied
to arbitrary concurrent games (even those with infinitely many states)
and preference relations for the players, and it has the property that
there is a correspondence between Nash equilibria in the original game
and winning strategies in the transformed two-player turn-based
game. The winning condition in the suspect game of course depends on
the preference relations of the various players in the original game.
Then, using that construction we develop (worst-case)
optimal-complexity algorithms for deciding the existence of
(constrained) Nash equilibria in \textit{finite} games for various
classes of preference relations. In Section~\ref{sec:single}, we focus
on qualitative $\omega$-regular objectives, \ie, preference relations
are given by single objectives (which can be reachability, B\"uchi,
parity, etc), and it is better for a player to satisfy her objective
than to not satisfy her objective. We prove the whole set of results
which are summarised in the second column of Table~\ref{table-single}
(the first column summarises the complexity in the zero-sum two-player
setting~--~called the value problem). Among the results obtained this
way, the constrained Nash equilibrium existence problem is
\NP-complete in finite games with single reachability or safety
objectives, while it is \PTIME-complete for single B\"uchi objectives.
\begin{table}[t]
\centering
\def1.1{1.1}
\begin{tabular}{@{}r||c|c@{}}
\hline
Objective & Value
& (Constrained) Existence of Nash Eq.\\
\hline
Reachability & \P-c. \cite{McNaughton93} &
\NP-c. (Sect.~\ref{subsec:reachability}) \\% & \NP-c. \\
Safety & \P-c. \cite{McNaughton93} & \NP-c. (Sect.~\ref{subsec:safety}) \\%& \NP-c. \\
B\"uchi & \P-c. \cite{McNaughton93} & \P-c. (Sect.~\ref{subsec:buchi}) \\%& \P-c. \\
co-B\"uchi & \P-c. \cite{McNaughton93}& \NP-c. (Sect.~\ref{subsec:cobuchi}) \\% &\NP-c. \\
Parity & \UP $\cap$ \co-\UP \cite{Jurdzinski98} &
$\P^\NP_\parallel$-c.\footnotemark (Sect.~\ref{subsec:rabin}) \\
Streett & \co-\NP-c. \cite{emerson1988complexity}& $\P^\NP_\parallel$-h. and
in \PSPACE \\
Rabin & \NP-c. \cite{emerson1988complexity} & $\P^\NP_\parallel$-c.
(Sect.~\ref{subsec:rabin}) \\
Muller & \PSPACE-c. \cite{Hunter07}
& \PSPACE-c. \\
Circuit & \PSPACE-c. \cite{Hunter07}
& \PSPACE-c. (Sect.~\ref{subsec:circuits}) \\
Det. B\"uchi Automata & \P-c. & \PSPACE-h.
(Sect.~\ref{sec:rabin-auto}) and in \EXPTIME \\
Det. Rabin Automata & \NP-c. & \PSPACE-h. and in \EXPTIME
(Sect.~\ref{sec:rabin-auto}) \\
\hline
\end{tabular}
\caption{Summary of the complexities for single
objectives}\label{table-single}
\begin{tabular}{@{}r|c|c|c@{}}
\hline
Preorder & Value & Existence of NE & Constr.~Exist. of~NE \\
\hline
Maximise, Disj. &
\P-c. (Sect.\ref{subsec:reducible})
& \P-c. (Sect.\ref{subsec:reducible})
& \P-c. (Sect.\ref{subsec:reducible})
\\
Subset &
\P-c. (Sect.~\ref{sec:reduc-buchi-auto}) &
\P-c. (Sect.\ref{subsec:reducible}) &
\P-c. (Sect.\ref{subsec:reducible})
\\
Conj., Lexicogr. &
\P-c. (Sect.~\ref{sec:reduc-buchi-auto}) &
\P-h., in \NP
(Sect.~\ref{subsec:monotonic}) &
\NP-c. (Sect.~\ref{subsec:monotonic})
\\
Counting &
\coNP-c. (Sect.~\ref{subsec:monotonic}) &
\NP-c. (Sect.~\ref{subsec:monotonic}) &
\NP-c. (Sect.~\ref{subsec:monotonic})
\\
Mon.~Bool.~Circuit &
\coNP-c. (Sect.~\ref{subsec:monotonic}) &
\NP-c. (Sect.~\ref{subsec:monotonic}) &
\NP-c. (Sect.~\ref{subsec:monotonic})
\\
Boolean~Circuit &
\PSPACE-c. (Sect.~\ref{subsec:general}) &
\PSPACE-c. (Sect.~\ref{subsec:general}) &
\PSPACE-c. (Sect.~\ref{subsec:general})
\\
\hline
\end{tabular}
\caption{Summary of the results for ordered B\"uchi objectives}\label{table-buchi}
\begin{tabular}{@{}r|c|c@{}}
\hline
Preorder & Value & (Constrained) Exist. of NE \\
\hline
Disjunction, Maximise & \P-c. (Sect.~\ref{reach-simple}) & \NP-c.
(Sect.~\ref{reach-simple}) \\
Subset & \PSPACE-c. (Sect.~\ref{ssec-generalcase}) & \NP-c.
(Sect.~\ref{reach-simple}) \\
Conjunction, Counting, Lexicogr. & \PSPACE-c.
(Sect.~\ref{ssec-generalcase}) & \PSPACE-c. (Sect.~\ref{ssec-generalcase})
\\
(Monotonic) Boolean Circuit & \PSPACE-c. (Sect.~\ref{ssec-generalcase})
& \PSPACE-c. (Sect.~\ref{ssec-generalcase}) \\
\hline
\end{tabular}
\caption{Summary of the results for ordered reachability objectives}\label{table-reach}
\end{table}
\footnotetext{\label{fn-pnp||}%
The complexity class $\P^\NP_\parallel$ is defined in terms of
Turing machine having access to an oracle; oracle are artificial devices
that can solve a problem in constant time, thus hiding part of the
complexity of the overall problem. The class $\P^\NP$ is the class of
problems that can be solved in polynomial time by a deterministic Turing
machine which has access to an oracle for solving \NP problems. The class
$\P^\NP_\parallel$ is the subclass where, instead of asking a sequence of
(dependent) queries to the oracle, the Turing machine is only allowed to ask
one set of queries. We~refer to~\cite{Pap94,Wag88} for more details.}
In Sections~\ref{sec:buchi} and~\ref{sec:reach}, we~extend the
previous qualitative setting to the \textit{semi-quantitative} setting
of ordered objectives. An ordered objective is a set of B\"uchi (or
reachability) objectives and a preorder on this set. The preference
relation given by such an ordered objective is then given by the value
of the plays (w.r.t. the objectives) in that preorder. Preorders of
interest are for instance conjunction, disjunction, lexicographic
order, counting preorder, maximise preorder, subset preorder, or more
generally preorders given as Boolean circuits. We provide algorithms
for deciding the existence of Nash equilibria for ordered objectives,
with (in most cases) optimal worst-case complexity. These algorithms
make use of the suspect-game construction.
The results are listed in Table~\ref{table-buchi} for B\"uchi
objectives and in Table~\ref{table-reach} for reachability objectives.
\subsection*{Examples}
Back to the earlier wireless network example, we can model a~simple
discretised version~of~it as follows. From a state, each device can
increase its power (action~$1$) or keep it unchanged (action~$0$): the
arena of the game is represented, for two devices and two levels of
power, in Figure~\ref{fig-network} (state labels are power levels).
This yields a new bandwidth allocation (which depends on the
degradation due to the other devices) and a new energy
consumption. The satisfaction of each device is measured as a
compromise between energy consumption and bandwidth allocated, and it
is given by a quantitative payoff function.\footnote{The
(quantitative) payoff for player $i$ can be expressed by $\payoff_i
= \frac{R}{\mathsf{power}_i} \Big(1- e^{-0.5 \gamma_i}\Big)^L$ where
$\gamma_i$ is the signal-to-interference-and-noise ratio for player
$i$, $R$ is the rate at which the wireless system transmits the
information in bits per seconds and $L$ is the size of the packets
in bits (\cite{SMG99}).} This can be transformed into B\"uchi
conditions and a preorder on them. There are basically two families
of pure Nash equilibria in this system: the one where the two players
choose to go and stay forever in state $(1,1)$; and the one where the
two players go to state $(2,2)$ and stay there forever.
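The footnoted payoff can be evaluated directly. In the sketch below, the values of $R$ and $L$ are illustrative, and the signal-to-interference-and-noise ratio $\gamma_i$ is taken as an input rather than derived from the other device's power:

```python
import math

def payoff(power, gamma, R=1e4, L=100):
    """Payoff of a device: (R / power) * (1 - exp(-0.5 * gamma))**L,
    i.e. throughput per unit of transmit power (see the footnote)."""
    return (R / power) * (1 - math.exp(-0.5 * gamma)) ** L

# at a very high SINR the payoff approaches R / power, so increasing the
# power only helps if it improves gamma by enough to compensate
low, high = payoff(1.0, 2.0), payoff(1.0, 4.0)
```

This trade-off between transmit power and SINR is consistent with the two families of equilibria described above.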
\begin{figure}[t]
\centering
\begin{tikzpicture}[scale=.9,yscale=.9]
\def\ensuremath{1}\xspace{\ensuremath{1}\xspace}
\def\ensuremath{0}\xspace{\ensuremath{0}\xspace}
\everymath{\scriptstyle}
\tikzset{noeud/.style={circle,draw=black,thick,fill=black!10,minimum height=8mm}}
\draw (0,0) node [noeud] (00) {$0,0$};
\draw (0,2) node [noeud] (01) {$0,1$};
\draw (2,0) node [noeud] (10) {$1,0$};
\draw (0,4) node [noeud] (02) {$0,2$};
\draw (2,2) node [noeud] (11) {$1,1$};
\draw (4,0) node [noeud] (20) {$2,0$};
\draw (2,4) node [noeud] (12) {$1,2$};
\draw (4,2) node [noeud] (21) {$2,1$};
\draw (4,4) node [noeud] (22) {$2,2$};
\draw [-latex',thick,dashed] (00) -- (01) node [pos=.5,left]
{$\ensuremath{0}\xspace,\ensuremath{1}\xspace$};
\draw [-latex',thick,dashed] (01) -- (02);
\draw [-latex',thick,dashed] (10) -- (11);
\draw [-latex',thick,dashed] (11) -- (12);
\draw [-latex',thick,dashed] (21) -- (22);
\draw [-latex',thick,dashed] (20) -- (21);
\draw [-latex',thick,dotted] (00) -- (10) node [pos=.5,above,sloped]
{$\ensuremath{1}\xspace,\ensuremath{0}\xspace$};
\draw [-latex',thick,dotted] (01) -- (11);
\draw [-latex',thick,dotted] (02) -- (12);
\draw [-latex',thick,dotted] (10) -- (20);
\draw [-latex',thick,dotted] (11) -- (21);
\draw [-latex',thick,dotted] (12) -- (22);
\draw [-latex',thick] (00) -- (11) node [midway,sloped,below] {$\ensuremath{1}\xspace,\ensuremath{1}\xspace$};
\draw [-latex',thick] (11) -- (22);
\draw [-latex',thick] (01) -- (12);
\draw [-latex',thick] (10) -- (21);
\draw [-latex',thick] (00) .. controls +(-20:1.5) and +(-70:1.5) .. (00) node
[pos=.5,right] {$\ensuremath{0}\xspace,\ensuremath{0}\xspace$};
\draw [-latex',thick] (01) .. controls +(-20:1.5) and +(-70:1.5)
.. (01);
\draw [-latex',thick] (02) .. controls +(-20:1.5) and +(-70:1.5)
.. (02);
\draw [-latex',thick] (10) .. controls +(-20:1.5) and +(-70:1.5)
.. (10);
\draw [-latex',thick] (11) .. controls +(-20:1.5) and +(-70:1.5)
.. (11);
\draw [-latex',thick] (12) .. controls +(-20:1.5) and +(-70:1.5)
.. (12);
\draw [-latex',thick] (20) .. controls +(-20:1.5) and +(-70:1.5)
.. (20);
\draw [-latex',thick] (21) .. controls +(-20:1.5) and +(-70:1.5)
.. (21);
\draw [-latex',thick] (22) .. controls +(-20:1.5) and +(-70:1.5)
.. (22);
\end{tikzpicture}
\caption{A simple game-model for the wireless network}
\label{fig-network}
\end{figure}
We describe another example, medium access control, which involves
qualitative objectives. It was first given a game-theoretic model
in~\cite{MW03}. Several users share the access to a wireless
channel. During each slot, they can choose to either transmit or wait
for the next slot. If too many users are emitting in the same slot,
then they fail to send data. Each attempt to transmit costs energy to
the players. They have to maximise the number of successful attempts
using the energy available to them. We give in Figure~\ref{fig-mac} a
possible model for that protocol, with two players, at most one
attempt per player and a congestion of $2$ (that is, the two players
should not transmit at the same time): each state is labelled with the
energy levels of the two players and the number of successful attempts
of each player. There are several Nash equilibria, and they all
give payoff $1$ to every player: they consist in reaching state
$(0,1,0,1)$ by not transmitting simultaneously.
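The equilibrium property, namely that no player gains by a unilateral deviation, can be checked mechanically. The sketch below does this for a one-shot abstraction of the protocol with an illustrative payoff table (a collision wastes energy); this is not the graph game of Figure~\ref{fig-mac} itself:

```python
def is_pure_nash(payoffs, profile):
    """Check a pure strategy profile: no player may improve her own
    payoff by changing only her own action."""
    actions = sorted({a for prof in payoffs for a in prof})
    for i in range(len(profile)):
        current = payoffs[profile][i]
        for a in actions:
            deviation = profile[:i] + (a,) + profile[i + 1:]
            if payoffs[deviation][i] > current:
                return False
    return True

# wait ('w') or transmit ('t'); transmitting alone succeeds,
# a simultaneous transmission collides and wastes energy
payoffs = {('t', 'w'): (1, 0), ('w', 't'): (0, 1),
           ('t', 't'): (-1, -1), ('w', 'w'): (0, 0)}
equilibria = [p for p in payoffs if is_pure_nash(payoffs, p)]
```

As expected, exactly the two profiles where a single player transmits are pure Nash equilibria of this one-shot version.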
\begin{figure}[t]
\centering
\begin{tikzpicture}[xscale=1.6,yscale=.9]
\def1\xspace{1\xspace}
\def0\xspace{0\xspace}
\tikzset{round/.style={rounded
corners=2mm,draw=black,thick,fill=black!10,minimum height=8mm}}
\everymath{\scriptstyle}
\draw (0,0.2) node [round] (00) {$1,0,1,0$};
\draw (0,2) node [round] (01) {$1,0,0,1$};
\draw (2,0.2) node [round] (10) {$0,1,1,0$};
\draw (2,2) node [round] (11) {$0,1,0,1$};
\draw (-45:1.5) node [round] (20) {$0,0,0,0$};
\draw [-latex',thick] (00) -- node [left] {$0\xspace,1\xspace$} (01) ;
\draw [-latex',thick] (00) -- node [above] {$1\xspace,0\xspace$}(10) ;
\draw [-latex',thick] (00) -- node [above,sloped] {$1\xspace,1\xspace$} (20) ;
\draw [-latex',thick] (10) -- node [right] {$0\xspace,1\xspace$} (11) ;
\draw [-latex',thick] (01) -- node [above] {$1\xspace,0\xspace$} (11) ;
\draw [-latex',thick] (00) .. controls +(-150:1.2) and +(150:1.2) .. (00) node
[pos=.5,below left] {$0\xspace,0\xspace$};
\draw [-latex',thick] (01) .. controls +(-150:1.2) and +(150:1.2)
.. (01) node
[pos=.5,left] {$0\xspace,0\xspace$};
\draw [-latex',thick] (10) .. controls +(30:1.2) and +(-30:1.2)
.. (10) node
[pos=.5,below right] {$0\xspace,0\xspace$};
\draw [-latex',thick] (11) .. controls +(30:1.2) and +(-30:1.2)
.. (11) node
[pos=.5,right] {$0\xspace,0\xspace$};
\draw [-latex',thick] (20) .. controls +(30:1.2) and +(-30:1.2)
.. (20) node
[pos=.5,right] {$0\xspace,0\xspace$};
\end{tikzpicture}
\caption{A simple game-model for the medium access control}
\label{fig-mac}
\end{figure}
\subsection*{Related work}
Game theory has been a very active area since the 1940's, with the
pioneering works of Von~Neumann, Morgenstern~\cite{MvN47},
Nash~\cite{nash50} and Shapley~\cite{shapley1952value}. It~has had
numerous uses in various domains, ranging from economics to human
sciences and logic. Equilibria are a central concept in
(non-zero-sum) games, as they are meant to represent rational
behaviours of the players. Many important results about existence of
various kinds of equilibria in different kinds of games have been
established~\cite{MvN47,nash50,fink64}.
For applications in logic and computer science, games played on graphs
have received more focus; also, computer scientists have been mostly
looking for algorithmic solutions for deciding the existence and
effectively computing equilibria and
$\epsilon$-equilibria~\cite{CMJ04,chatterjee05,ummels08}.
For two-player concurrent games with B\"uchi objectives, the existence of
$\epsilon$-equilibria (in~randomised strategies) was proved by
Chatterjee~\cite{chatterjee05}. However, exact Nash equilibria need not exist;
turn-based games with B\"uchi objectives are an~important subclass where Nash
equilibria (even in pure strategies) always exist~\cite{CMJ04}. When they
exist, Nash equilibria need not be unique; equilibria where all the players
lose can coexist with equilibria where some (or~all) of them win. Ummels
introduced \emph{constrained} Nash equilibria, \ie, Nash equilibria where some
players are required to win. In~particular, he~showed that the existence of
constrained Nash equilibria can be decided in polynomial time for turn-based
games with B\"uchi objectives \cite{ummels08}. In~this paper, we extend this
result to concurrent games, and to various classes of $\omega$-regular winning
objectives. For concurrent games with $\omega$-regular objectives, the
decidability of the constrained Nash equilibrium existence problem \wrt pure
strategies was established by Fisman~\ea~\cite{FKL10}, but their algorithm
runs in doubly exponential time, whereas our algorithm runs in exponential
time for objectives given as B\"uchi automata. Finally, Ummels and
Wojtczak~\cite{UW11} proved that the existence of a Nash equilibrium in pure
or randomised strategies is undecidable for \emph{stochastic} games with
reachability or B\"uchi objectives, which justifies our restriction to
concurrent games without probabilistic transitions. They also proved a similar
undecidability result for randomised Nash equilibria in non-stochastic
games~\cite{UW11a}, hence we consider only pure-strategy Nash equilibria.
Several solution concepts have been defined and studied for games on graphs.
In~particular, \emph{secure equilibria}~\cite{CHJ05b,DFKSV14} are Nash
equilibria where besides satisfying their primary objectives, the players try
to prevent the other players from achieving their own (primary) objectives.
Notice that our results in Sect.~\ref{subsec:monotonic} and
Sect.~\ref{ssec-generalcase} do apply to such lexicographic
combinations of several objectives.
Temporal logics can also be used to express properties of games. While
\ATL~\cite{AHK02} can mainly express only zero-sum properties, other logics
such as \ATL with strategy contexts~(\ATLsc)~\cite{DLM10} and \textsf{Strategy
Logic}~(\SL)~\cite{CHP10,MMV10} can be used to express rich properties in a
non-zero-sum setting.
In~terms of complexity however, model checking for such logics has high
complexity: Nash equilibria can be expressed using one quantifier alternation
(an existential quantification over strategy profiles followed with a
universal quantification over deviations); model checking this fragment of
\ATLsc or~\SL is \EXPTIME[2]-complete.
\section{Definitions}
\subsection{General definitions}\label{ssec-gendef}
In this section, we fix some definitions and notations.
\paragraph{Preorders.}
We~fix a non-empty set~$P$. A~\newdef{preorder} over~$P$ is a binary
relation~$\mathord\preorder\subseteq P\times P$ that is reflexive and
transitive. With~a preorder~$\preorder$, we~associate an
\newdef{equivalence relation}~$\sim$ defined so that $a \sim b$ if,
and only if, ${a \preorder b}$ and ${b \preorder a}$.
The~\newdef{equivalence class} of~$a$, written $[a]_\preorder$, is the
set $\{ b \in P \mid a \sim b\}$. We~also associate with~$\preorder$
a~\newdef{strict partial order}~$\prec$ defined so that ${a \prec b}$
if, and only if, ${a \preorder b}$ and ${b \not\preorder a}$.
A~preorder~$\preorder$ is said to be \newdef{total} if, for all
elements~$a,b\in P$, either ${a \preorder b}$, or ${b\preorder a}$.
An element $a$ in a subset~$P'\subseteq P$ is said to be \emph{maximal
  in~$P'$} if there is no $b \in P'$ such that $a \prec b$; it~is said
to be \emph{minimal in~$P'$} if there is no $b\in P'$ such that ${b \prec
  a}$.
A~preorder is said to be \newdef{Noetherian} (or \emph{upwards
  well-founded}) if any non-empty subset~$P'\subseteq P$ has at least
one maximal element. It is said to be \newdef{almost-well-founded} if
any non-empty lower-bounded subset $P' \subseteq P$ has a minimal
element.
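These derived relations are easy to compute when a finite preorder is given extensionally as a set of pairs. The following Python sketch (the function names and the encoding are our own, not from the text) illustrates the definitions of $\sim$, $\prec$ and maximal elements:

```python
def equiv(pre, a, b):
    # a ~ b  iff  a ≼ b and b ≼ a
    return (a, b) in pre and (b, a) in pre

def strictly_below(pre, a, b):
    # a ≺ b  iff  a ≼ b and not b ≼ a
    return (a, b) in pre and (b, a) not in pre

def maximal(pre, subset):
    # elements of `subset` with no strictly greater element in `subset`
    return {a for a in subset
            if not any(strictly_below(pre, a, b) for b in subset)}

# Example: P = {1, 2, 3} preordered by divisibility (reflexive, transitive)
P = {1, 2, 3}
pre = {(a, b) for a in P for b in P if b % a == 0}
```

Here $2$ and $3$ are the maximal elements of~$P$, since neither divides the other.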
\paragraph{Transition systems.}
A \newdef{transition system} is a pair $\calS = \tuple{\Stat,\Edg}$
where $\Stat$ is a set of states and $\Edg \subseteq \Stat \times
\Stat$ is the set of transitions. A~\newdef{path}~$\pi$ in $\calS$ is
a sequence $(s_i)_{0\leq i < n}$ (where~$n\in\N^+ \cup\{\infty\}$) of
states such that $(s_i,s_{i+1})\in \Edg$ for all~$i$ with $i+1<n$.
The~\newdef{length} of~$\pi$, denoted by~$\length\pi$, is~$n-1$.
The~set of finite paths (also called \newdef{histories}) of~$\calS$ is
denoted by~$\Hist_\calS$, the set of infinite paths (also called
\newdef{plays}) of~$\calS$ is denoted by~$\Play_\calS$, and
$\Path_\calS = \Hist_\calS\cup \Play_\calS$ is the set of all paths
of~$\calS$.
Given a path $\pi = (s_i)_{0 \le i < n}$ and an integer~$j<n$, the
\newdef{$j$-th prefix} (\resp \newdef{$j$-th suffix}, \newdef{$j$-th
state}) of~$\pi$, denoted by~$\pref\pi j$ (\resp $\pi_{\ge j}$,
$\pi_{=j}$), is the finite path~$(s_i)_{0\leq i<j+1}$ (\resp the path
$(s_{j+i})_{0 \le i <n-j}$, the state~$s_j$). If $\pi = (s_i)_{0\leq
i< n}$ is a history, we write $\last(\pi) = s_{\length\pi}$ for the
last state of~$\pi$. If~$\pi'$ is a path such that
$(\last(\pi),\pi'_{=0})\in\Edg$, then the
\newdef{concatenation}~$\pi\cdot \pi'$ is the path~$\rho$ s.t.
$\rho_{=i}=\pi_{=i}$ for~$i\leq \length\pi$ and
$\rho_{=i}=\pi'_{=(i-1-\length\pi)}$ for~$i>\length\pi$. In~the
sequel, we~write $\Hist_\calS(\stat)$, $\Play_{\calS}(\stat)$
and~$\Path_\calS(\stat)$ for the respective subsets of paths starting
in state~$\stat$. If~$\pi$~is a~play, $\Occ(\pi) = \{ s \mid \exists
j.\ \pi_{=j} =s \}$ is the set of states that appear at least once
along~$\pi$ and $\Inf(\pi) = \{ s \mid \forall i.\ \exists j \ge i.\
\pi_{=j} =s \}$ is the set of states that appear infinitely often
along~$\pi$.
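For ultimately periodic plays, represented as a lasso (a finite prefix followed by a loop repeated forever), the sets $\Occ$ and $\Inf$ can be read off directly; a small illustrative sketch (the lasso representation is our own choice, not part of the definitions above):

```python
def occ(prefix, loop):
    # states visited at least once along the play prefix · loop^ω
    return set(prefix) | set(loop)

def inf(prefix, loop):
    # states visited infinitely often: exactly those on the loop
    return set(loop)
```

For instance, the play $s_0 s_1 (s_2 s_3)^\omega$ has $\Occ = \{s_0,s_1,s_2,s_3\}$ and $\Inf = \{s_2,s_3\}$.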
\subsection{Concurrent games}
\begin{wrapfigure}r{5cm}
\hfill
\begin{minipage}{4.9cm}
\centering
\begin{tikzpicture}[scale=1,thick]
\tikzset{every node/.style={font=\scriptsize}}
\tikzset{rond/.style={circle,draw=black,thick,fill=black!10,minimum height=7mm}}
\draw (0,0) node[rond] (l0) {$\ell_0$};
\draw (2,0) node[rond] (l1) {$\ell_1$};
\draw (2,-2) node[rond] (l2) {$\ell_2$};
\draw (0,-2) node[rond] (l3) {$\ell_3$};
\path[use as bounding box] (0,.8) -- (0,-3);
\draw[-latex'] (l0) .. controls +(60:10mm) and +(120:10mm) .. (l0)
node[midway,above] {$\langle 2,2\rangle$};
\draw[-latex'] (l0) -- (l1) node[midway,above] {$\langle 1,1\rangle$};
\draw[-latex'] (l0) -- (l3) node[midway,left] {$\langle
2,1\rangle$};
\draw[-latex'] (l0) .. controls +(-30:10mm) and +(120:10mm) .. (l2)
node[midway,above right=-1mm] {$\langle 1,2\rangle$};
\draw[-latex'] (l2) .. controls +(150:10mm) and +(-60:10mm) .. (l0)
node[midway,below left=-1mm] {$\langle 1,1\rangle$};
\draw[-latex'] (l1) .. controls +(60:10mm) and +(120:10mm) .. (l1)
node[midway,above] {$\langle 1,1\rangle$};
\draw[-latex'] (l3) .. controls +(-60:10mm) and +(-120:10mm) .. (l3)
node[midway,below] {$\langle 1,1\rangle$};
\draw[-latex'] (l1) -- (l2) node[midway,right] {$\langle 1,2\rangle$};
\end{tikzpicture}
\caption{Representation of a two-player concurrent game}\label{fig-ex}
\end{minipage}
\end{wrapfigure}
Our definition of concurrent games extends the definition
in~\cite{AHK02} by allowing for more than two players, each of them
having a preorder over plays.
\begin{definition}
A \newdef{concurrent game} is a tuple
$\calG=\tuple{\Stat,\Agt,\Act,\Allow,\Tab,(\mathord\prefrel_A)_{A\in\Agt}
}$, where $\Stat$ is a finite non-empty set of states, $\Agt$~is a finite
set of players, $\Act$~is a finite set of actions, and
\begin{itemize}
\item $\Allow\colon \Stat \times \Agt \to
2^\Act\setminus\{\varnothing\}$ is a mapping indicating the
actions available to a given player in a given state;
\item $\Tab\colon \Stat\times \Act^{\Agt} \to \Stat$ associates,
with a given state and a given move of the players (\ie, an
element of $\Act^\Agt$), the state resulting from that move;
\item for each~$A\in\Agt$, $\prefrel_A$ is a preorder over
$\Stat^\omega$, called the preference relation of player~$A$.
\end{itemize}
\end{definition}
\noindent Figure~\ref{fig-ex} displays an example of a finite concurrent game.
Transitions are labelled with the moves that trigger them. We~say that a
\newdef{move} $\act_\Agt=\indice\act A\Agt\in\Act^\Agt$ is \newdef{legal}
at~$\stat$ if $\act_A\in\Allow(\stat,A)$ for all $A\in\Agt$. A~game
is~\newdef{turn-based} if for each state the set of allowed moves is a
singleton for all but at most one player.
In a concurrent game~$\calG$, whenever we arrive at a state~$\stat$,
the players simultaneously select an available action, which results
in a legal move~$\act_\Agt$; the next state of the game is then
$\Tab(\stat,\act_\Agt)$. The same process repeats
\textit{ad~infinitum} to form an infinite sequence of states.
In the sequel, as no ambiguity will arise, we~may abusively write~$\calG$ for
its underlying transition system~$(\Stat, \Edg)$ where $\Edg = \{ (s,s') \in
\Stat \times \Stat \mid \exists m_{\Agt} \in \prod_{A\in\Agt}
\Allow(s,A)\allowbreak\text{ s.t. }\Tab(s,m_\Agt) = s' \}$. The notions of
paths and related concepts in concurrent games follow from this
identification.
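This identification can be made concrete: the sketch below derives $\Edg$ from $\Allow$ and $\Tab$ for the game of Figure~\ref{fig-ex}, using an explicit dictionary encoding of our own:

```python
from itertools import product

def underlying_edges(states, agents, allow, tab):
    """Edg = {(s, s') : Tab(s, m) = s' for some legal move m}.
    `allow[s][a]` lists the actions of player `a` in state `s`;
    `tab[s][m]` is the successor for the move `m` (one action per
    player, in the fixed order of `agents`)."""
    return {(s, tab[s][m])
            for s in states
            for m in product(*(allow[s][a] for a in agents))}

# The two-player game of the figure above (actions 1 and 2)
agents = ["A", "B"]
states = ["l0", "l1", "l2", "l3"]
allow = {
    "l0": {"A": [1, 2], "B": [1, 2]},
    "l1": {"A": [1], "B": [1, 2]},
    "l2": {"A": [1], "B": [1]},
    "l3": {"A": [1], "B": [1]},
}
tab = {
    "l0": {(1, 1): "l1", (1, 2): "l2", (2, 1): "l3", (2, 2): "l0"},
    "l1": {(1, 1): "l1", (1, 2): "l2"},
    "l2": {(1, 1): "l0"},
    "l3": {(1, 1): "l3"},
}
```

Note that from~$\ell_0$ every state is reachable in one step, matching the four outgoing transitions drawn in the figure.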
\begin{remark}[Representation of finite games]
\label{remark:encoding}
In this paper, for finite games, we will assume an explicit encoding
of the transition function $\Tab$. Hence, its size, denoted
$|\Tab|$, is equal to $\sum_{s\in \Stat} \prod_{A\in\Agt}
|\Allow(s,A)| \cdot \lceil\log(|\Stat|)\rceil$. Note that it can be
exponential with respect to the number of players. A~symbolic
encoding of the transition table has been proposed in~\cite{LMO08},
in the setting of \textsc{ATL} model checking. This makes the
problem harder, as the input is more succinct (see
Remark~\ref{remark:explosion} and
Proposition~\ref{proposition:explosion} for a formal
statement). We~would also have a blowup in our setting, and prefer
to keep the explicit representation in order to be able to compare
with existing results. Notice that, as a matter of fact, there is
no way to systematically avoid an explosion: as~there are
$|\Stat|^{|\Stat|\cdot|\Act|^{|\Agt|}}$ possible transition
functions, for any encoding there is one function whose encoding
will have size at least $\lceil\log(|\Stat|)\rceil \cdot |\Stat|
\cdot |\Act|^{|\Agt|}$.
%
The total size of the game is then
\[ |\calG| = |\Stat| + |\Stat|\cdot|\Agt|\cdot|\Act| + \sum_{s\in
  \Stat} \prod_{A\in\Agt} |\Allow(s,A)| \cdot
\lceil\log(|\Stat|)\rceil + \sum_{A\in\Agt} | \preorder_A |.\] The
size of a preference relation~$\preorder_A$ will depend on how it is
encoded, and we will make it precise when it is relevant. This is
given in Section~\ref{sec:prefrel}.
\end{remark}
\begin{definition}
Let~$\calG$ be a concurrent game, and~$A\in\Agt$. A
\newdef{strategy} for~$A$ is a mapping $\sigma_A\colon \Hist_\calG
\to \Act$ such that $\sigma_A(\pi) \in \Allow(\last(\pi),A)$ for all
$\pi\in\Hist_\calG$.
%
A~strategy~$\sigma_P$ for a coalition~$P\subseteq \Agt$ is a tuple
of strategies, one for each player in~$P$. We~write
$\sigma_P=(\sigma_A)_{A\in P}$ for such a strategy.
A~\newdef{strategy profile} is a strategy for~$\Agt$. We~write
$\Strat\calG P$ for the set of strategies of coalition~$P$, and
$\Profile_\calG=\Strat\calG \Agt$.
\end{definition}
Note that, in this paper, we only consider \emph{pure} (\ie,
non-randomised) strategies. This is actually crucial in all the
constructions we give (lasso representation in
Subsection~\ref{sec:lasso} and suspect-game construction in
Section~\ref{sec:suspect}).
Notice also that our strategies are based on the sequences of visited states
(they map sequences of states to actions), which is realistic when considering
multi-agent systems. In some settings, it is more usual to base strategies on
the sequences of actions played by all the players. When dealing with Nash
equilibria, this makes a big difference: strategies based on actions can
immediately detect which player(s) deviated from their strategy; strategies based
on states will only detect deviations because an unexpected state is visited,
without knowing which player(s) are responsible for the deviation. Our
construction precisely amounts to keeping track of a list of \emph{suspects}
for some deviation.
Let~$\calG$ be a game, $P$ a~coalition, and $\sigma_P$ a~strategy
for~$P$. A~path~$\pi$ is \newdef{compatible} with the
strategy~$\sigma_P$ if, for all~$k<\length\pi$, there exists a
move~$\indicebis\act A\Agt$ \st
\begin{enumerate}
\item $\indicebis\act A\Agt$ is legal at~$\pi_{=k}$,
\item $\act_A = \sigma_A(\pi_{\le k})$ for all~$A\in P$, and
\item $\Tab(\pi_{=k}, \indicebis\act A\Agt) = \pi_{=k+1}$.
\end{enumerate}
We~write~$\Out_{\calG}(\sigma_P)$ for the set of paths (called
the~\emph{outcomes}) in~$\calG$ that are compatible with
strategy~$\sigma_P$ of~$P$. We~write $\FOut_{\calG}$ (\resp
$\IOut_{\calG}$) for the finite (\resp infinite) outcomes, and
$\Out_\calG(\stat,\sigma_P)$, $\FOut_\calG(\stat,\sigma_P)$ and
$\IOut_\calG(\stat,\sigma_P)$ for the respective sets of outcomes
of~$\sigma_P$ with initial state~$\stat$. Notice that any strategy
profile has a single infinite outcome from a given state. In the
sequel, when given a strategy profile~$\sigma_\Agt$, we~identify
$\Out(\stat, \sigma_\Agt)$ with the unique play it contains.
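The uniqueness of the outcome of a strategy profile can be illustrated on memoryless strategies (a special case of the history-based strategies defined above, chosen here to keep the sketch finite): the induced play is then a lasso, computed as follows (the encodings are ours):

```python
def outcome(start, agents, strategies, tab):
    """Unique play from `start` under a memoryless strategy profile.
    `strategies[a][s]` is the action played by `a` in state `s`;
    `tab[s][m]` is the successor for move `m`.  Since the profile is
    memoryless, the play is ultimately periodic; we return it as a
    lasso (prefix, loop)."""
    seen = {}          # state -> position of its first occurrence
    path = [start]
    while path[-1] not in seen:
        seen[path[-1]] = len(path) - 1
        move = tuple(strategies[a][path[-1]] for a in agents)
        path.append(tab[path[-1]][move])
    i = seen[path[-1]]          # start of the loop
    return path[:i], path[i:-1]

# Toy game: the joint move (1, 1) leads from s0 to s1, which loops.
tab = {"s0": {(1, 1): "s1"}, "s1": {(1, 1): "s1"}}
strategies = {"A": {"s0": 1, "s1": 1}, "B": {"s0": 1, "s1": 1}}
```

On this toy game the profile above yields the play $s_0\,(s_1)^\omega$.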
A~concurrent game involving only two players ($A$~and~$B$,~say) is
\newdef{zero-sum} if, for any two plays~$\pi$ and~$\pi'$, we~have
$\pi\prefrel_A \pi'$ if, and only~if, $\pi'\prefrel_B \pi$. Such a
setting is purely antagonistic, as both players have opposite
objectives. The most relevant concept in such a setting is that of
\emph{winning strategies}, where the aim is for one player to achieve
her objectives \emph{whatever the other
players~do}. In~\newdef{non-zero-sum} games, winning strategies are
usually too restricted, and the most relevant concepts are
\emph{equilibria}, which correspond to strategies that \emph{satisfy} (which
can be given several meanings) all the players. One of the most
studied notions of equilibrium is that of \emph{Nash equilibria}~\cite{nash50},
which we now introduce.
\subsection{Nash equilibria}
We begin by introducing some vocabulary. When $\pi \prefrel_A
\pi'$, we say that $\pi'$ is \newdef{at~least as good} as~$\pi$
for~$A$.
We say that a strategy~$\sigma_A$ for~$A$ \newdef{ensures}~$\pi$ if
every outcome of~$\sigma_A$ is at least as good as~$\pi$ for~$A$, and
that $A$ \newdef{can ensure}~$\pi$ when such a strategy exists.
Given a move~$\indicebis mA\Agt$ and an action~$m'$ for some
player~$A$, we~write ${\replaceter m A {m'}}$ for the move~$\indicebis
nA\Agt$ with $n_B=m_B$ when~$B\not=A$ and $n_A=m'$. This~is extended
to strategies in the natural way.
\begin{definition}\label{def-NE}
Let~$\calG$ be a concurrent game and let $\stat$ be a state
of~$\calG$. A~\newdef{Nash equilibrium} of~$\calG$ from~$\stat$ is
a strategy profile $\sigma_\Agt \in \Profile_\calG$ \st
$\Out(\stat,\replaceter \sigma A {\sigma'}) \prefrel_A
\Out(\stat,\sigma_\Agt)$ for all players $A\in\Agt$ and all
strategies $\sigma'\in\Strat{}A$.
\end{definition}
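Definition~\ref{def-NE} specialises to the classical notion on one-shot games with numeric payoffs (a degenerate case where a play is a single move and each preference relation compares the player's payoff). The following brute-force sketch, with a toy encoding of our own, enumerates pure Nash equilibria by checking all unilateral deviations:

```python
from itertools import product

def pure_nash(actions, payoff):
    """All pure Nash equilibria of a one-shot n-player game:
    profiles where no player strictly gains by deviating alone.
    `actions[i]` lists player i's actions; `payoff(profile)` returns
    the tuple of all players' payoffs."""
    return [prof for prof in product(*actions)
            if all(payoff(prof[:i] + (d,) + prof[i + 1:])[i]
                   <= payoff(prof)[i]
                   for i in range(len(actions))
                   for d in actions[i])]

# Matching pennies: purely antagonistic, no pure equilibrium.
mp = lambda p: (1, -1) if p[0] == p[1] else (-1, 1)
# Coordination game: two pure equilibria.
coord = lambda p: (1, 1) if p[0] == p[1] else (0, 0)
```

The two examples show that pure Nash equilibria may fail to exist, and need not be unique when they do.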
\begin{wrapfigure}r{5.4cm}
\centering
\begin{tikzpicture}[yscale=-1,scale=1.2,thick,minimum height=5mm]
\draw[dotted,rounded corners=4mm,fill=black!30!white] (0,.5) -- (1.5,.5) --
(1.5,2.4) -- (.4,3.5) -- (-.4,3.5) -- (-1.5,2.4) -- (-1.5,1.5) -- (-.5,1.5) --
(-.5,.5) -- (0,.5);
\draw[dotted,rounded corners=2mm,fill=black!10!white,opacity=.8] (-1,.7) -- (-.7,.7) --
(-.7,1.7) -- (.3,1.7) -- (.3,3.3) -- (-.2,3.3) -- (-1.3,2.2) -- (-1.3,.7) -- (-1,.7);
\draw (0,0) node[circle,draw,fill=white] (000) {};
\draw (-1,1) node[circle,draw,fill=black] (001) {};
\draw (0,1) node[circle,draw,fill=white] (010) {};
\draw (1,1) node[circle,draw,fill=white] (100) {};
\draw (-1,2) node[circle,draw,fill=white] (011) {};
\draw (0,2) node[circle,draw,fill=white] (101) {};
\draw (1,2) node[circle,draw,fill=white] (110) {};
\draw (0,3) node[circle,draw,fill=white] (111) {};
\draw[latex'-] (000) -- (001);
\draw[latex'-] (000) -- (010);
\draw[latex'-] (000) -- (100);
\draw[latex'-] (001) -- (011);
\draw[latex'-] (001) -- (101);
\draw[latex'-] (010) -- (011);
\draw[latex'-] (010) -- (110);
\draw[latex'-] (100) -- (101);
\draw[latex'-] (100) -- (110);
\draw[latex'-] (011) -- (111);
\draw[latex'-] (101) -- (111);
\draw[latex'-] (110) -- (111);
\end{tikzpicture}
\caption{Two different notions of \emph{improvements} for a non-total order.}\label{fig-improve}
\end{wrapfigure}
So, Nash equilibria are strategy profiles where no single player has an
incentive to unilaterally deviate from her strategy.
\begin{remark}
Our definition of a Nash equilibrium requires any deviation to be
worse or equivalent to the equilibrium. Another possible definition
would have been to ask any deviation to be no better than the
equilibrium. Those two definitions yield different notions of Nash
equilibria (unless the preorders are total), as illustrated in
Figure~\ref{fig-improve}:
the black node~$n$ represents $\Out(\stat,\sigma_\Agt)$, the
light-gray area contains the nodes~$n'$ such that $n'\prefrel n$,
while the dark-gray area contains the nodes~$n'$ for which
$n\not\prefrel n'$.
This alternative definition would also be meaningful, and the
techniques we develop in this paper could be adapted to handle such
a variant.
\end{remark}
In this paper we will give a general construction that relates Nash
equilibria in a game (which can be infinite) to winning strategies in
a two-player turn-based game (called the \emph{suspect game}); this
construction is presented in Section~\ref{sec:suspect}. We will then
mostly be interested in solving the decision problems that we define
next, when games are finite.
\subsection{Decision problems we will consider}\label{ssec-probs}
Given a~concurrent game
$\calG=\tuple{\Stat,\Agt,\Act,\Allow,\Tab,(\mathord\prefrel_A)_{A\in\Agt}}$
and a state~$\stat\in\Stat$, we consider the following problems:
\begin{itemize}
\item \emph{Value problem}: Given a player~$A$ and a play~$\pi$,
  is~there a strategy~$\sigma_A$ for player~$A$ such that every
  outcome~$\rho$ of~$\sigma_A$ in~$\calG$ from~$\stat$ satisfies $\pi
  \prefrel_A \rho$?
\item \emph{NE Existence problem}: Does there exist a Nash equilibrium
in~$\calG$ from~$\stat$?
\item \emph{Constrained NE existence problem}: Given two plays $\pi_A^-$
and $\pi_A^+$ for each player~$A$, does there exist a Nash
equilibrium in~$\calG$ from~$\stat$ whose outcome~$\pi$ satisfies
$\pi_A^- \prefrel_A \pi \prefrel_A \pi_A^+$ for all~$A\in \Agt$?
\end{itemize}
We will focus on decidability and complexity results of these three
problems when games are finite, for various classes of preference
relations. Complexity results will heavily rely on what preorders we
allow for the preference relation and how they are represented. We
have already discussed the representation of the game structure in
Remark~\ref{remark:encoding}. We now define and discuss the various
preference relations we will study, and explain how we encode the
various inputs to the problems.
\subsection{Focus on the preference relations we will consider}
\label{sec:prefrel}
We define the various classes of preference relations we will focus on
in the rest of the paper. We begin with single-objective preference
relations, and we then define a more general class of ordered
objectives. We fix a game $\calG =
\tuple{\Stat,\Agt,\Act,\Allow,\Tab,(\mathord\prefrel_A)_{A\in\Agt} }$.
\subsubsection{Single-objective preference relations}
\begin{definition}
An~\newdef{objective} (or \newdef{winning condition}) is an
arbitrary set of plays. A preference relation $\prefrel_A$ is
\newdef{single-objective} whenever there exists an objective
$\Omega_A$ \st: $\rho \prefrel_A \rho'$ if, and only~if, $\rho' \in
\Omega_A$ (we~then say that $\rho'$ is winning for~$A$) or
$\rho\not\in\Omega_A$ (we~then say that $\rho$ is losing for~$A$).
\end{definition}
The setting of single-objective preference relations is purely
qualitative, since a player can only win (in case the outcome is in
her objective), or lose (otherwise).
\medskip An~objective~$\Omega$ can be specified in various ways. Next
we will consider the following families of $\omega$-regular
objectives:
\begin{itemize}
\item A \textbf{reachability objective} is given by a target set $T
\subseteq \Stat$ and the corresponding set of winning plays is
defined by
\[\Omega^{\text{Reach}}_T = \{\rho \in \Play
\mid \Occ(\rho) \cap T \ne \varnothing \}. \]
\item A \textbf{safety objective} is given by a target set $T
\subseteq \Stat$ and the corresponding set of winning plays is
defined by
\[\Omega^{\text{Safety}}_T = \{\rho \in \Play \mid
\Occ(\rho) \cap T = \varnothing \}.\]
\item A \textbf{B\"uchi} objective is given by a target set $T
\subseteq \Stat$ and the corresponding set of winning plays is
defined by
\[\Omega^{\text{B\"uchi}}_T = \{ \rho \in \Play
\mid \Inf(\rho) \cap T \ne \varnothing \}.\]
\item A \textbf{co-B\"uchi objective} is given by a target set $T
\subseteq \Stat$ and the corresponding set of winning plays is
defined by
\[\Omega^{\text{co-B\"uchi}}_T =
\{ \rho \in \Play \mid \Inf(\rho) \cap T = \varnothing \}.\]
\item A \textbf{parity objective} is given by a priority function $p
\colon \Stat \to \lsem 0 , d \rsem$ (where $\lsem 0,d\rsem = [0,d]\cap
\mathbb Z$) with $d\in \N$, and the
corresponding set of winning plays is defined by
\[\Omega^{\text{Parity}}_{p} = \{ \rho \in \Play \mid
\min(\Inf(p(\rho))) \ \text{is even} \}.\]
\item A \textbf{Streett objective} is given by a tuple
$(Q_i,R_i)_{i\in\lsem 1,k\rsem}$ and the corresponding set of
winning plays is defined by
\[\Omega^{\text{Streett}}_{(Q_i,R_i)_{i\in\lsem
1,k\rsem}} = \{ \rho \in \Play \mid \forall i.\ \Inf(\rho) \cap
Q_i \ne \varnothing \Rightarrow \Inf(\rho) \cap R_i \ne \varnothing
\}.\]
\item A \textbf{Rabin objective} is given by a tuple
$(Q_i,R_i)_{i\in\lsem 1,k\rsem}$ and the corresponding set of
winning plays is defined by
\[\Omega^{\text{Rabin}}_{(Q_i,R_i)_{i\in\lsem
1,k\rsem}} = \{ \rho \in \Play \mid \exists i.\ \Inf(\rho) \cap
Q_i \ne \varnothing \land \Inf(\rho) \cap R_i = \varnothing \}.\]
\item A \textbf{Muller objective} is given by a finite set $C$, a
coloring function $c \colon \Stat \to C$, and a set $\mathcal{F}$
\subseteq 2^C$. The corresponding set of winning plays is then
defined by
\[\Omega^{\text{Muller}}_{c,\mathcal{F}} = \{ \rho \in \Play
\mid \Inf(c(\rho)) \in\mathcal{F} \}.\]
\end{itemize}
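Since membership of a play in each of the objectives above depends only on the sets $\Occ(\rho)$ and $\Inf(\rho)$ (and, for parity, on the priorities of the states in $\Inf(\rho)$), the winning conditions can be evaluated directly on such a pair; an illustrative sketch (the set-based encoding is ours):

```python
def reach(occ, inf, T):    # Occ(ρ) ∩ T ≠ ∅
    return bool(occ & T)

def safety(occ, inf, T):   # Occ(ρ) ∩ T = ∅
    return not (occ & T)

def buchi(occ, inf, T):    # Inf(ρ) ∩ T ≠ ∅
    return bool(inf & T)

def cobuchi(occ, inf, T):  # Inf(ρ) ∩ T = ∅
    return not (inf & T)

def parity(occ, inf, p):
    # winning iff the least priority seen infinitely often is even
    return min(p[s] for s in inf) % 2 == 0

def streett(occ, inf, pairs):
    return all(not (inf & Q) or (inf & R) for Q, R in pairs)

def rabin(occ, inf, pairs):
    return any((inf & Q) and not (inf & R) for Q, R in pairs)
```

On a play with $\Occ = \{a,b,c\}$ and $\Inf = \{b,c\}$, for instance, the reachability objective with target $\{a\}$ is satisfied while the B\"uchi objective with the same target is not.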
\medskip\noindent We will also consider the following other types of objectives:
\begin{itemize}
\item A \textbf{circuit objective} is given by a boolean circuit~$C$ with the
set $\Stat$ as input nodes and one output node. A~play~$\rho$ is winning if
and only if $C$~evaluates to true when the input nodes corresponding to
states in $\Inf(\rho)$ are set to \true, and all other input nodes are set to
\false. We~write $\Omega^{\text{Circuit}}_{C}$ for the set of winning plays.
Figure~\ref{fig:ex-circuit} displays an example of a circuit
for the game of Figure~\ref{fig-ex}: this Boolean
circuit defines the condition that either $\ell_3$ appears
infinitely often, or if $\ell_1$ appears infinitely often then
so does~$\ell_2$.
\begin{figure}[t]
\centering
\begin{tikzpicture}[thick]
\draw (0,4) node [draw,circle] (S0) {$\ell_0$};
\draw (1.5,4) node [draw,circle] (S1) {$\ell_1$};
\draw (3,4) node [draw,circle] (S2) {$\ell_2$};
\draw (4.5,4) node [draw,circle] (S3) {$\ell_3$};
\draw (1.5,3) node [draw,circle] (Not) {$\lnot$};
\draw (2.5,2.5) node [draw,circle] (Or) {$\lor$};
\draw (3.5,2) node [draw,circle] (Or2) {$\lor$};
\draw (3.5,1) node (Out) {~~};
\draw[-latex'] (S1) -- (Not);
\draw[-latex'] (Not) -- (Or);
\draw[-latex'] (S2) -- (Or);
\draw[-latex'] (Or) -- (Or2);
\draw[-latex'] (S3) -- (Or2);
\draw[-latex'] (Or2) -- (Out);
\end{tikzpicture}
\caption{Boolean circuit defining
the condition that either $\ell_3$ appears infinitely often, or if
$\ell_1$ appears infinitely often then so does~$\ell_2$.}
\label{fig:ex-circuit}
\end{figure}
\item A \textbf{deterministic B\"uchi automaton objective} is given by
a deterministic B\"uchi automaton~$\calA =
\tuple{Q,\Sigma,\delta,q_0,R}$, with $\Sigma=\Stat$. Then the
corresponding set of winning plays is defined by
\[\Omega^{\text{det-B\"uchi-aut}}_{\calA} = \Lang(\calA).\]
\item A \textbf{deterministic Rabin automaton objective} is given by a
deterministic Rabin automaton~$\calA = \tuple{Q,\Sigma,\delta,q_0,
(E_i,F_i)_{i\in \lsem 1 , k\rsem}}$, with $\Sigma=\Stat$. Then the
corresponding set of winning plays is defined by
\[\Omega^{\text{det-Rabin-aut}}_{\calA} = \Lang(\calA).\]
\item A \textbf{Presburger-definable objective} is given by a
Presburger formula~$\phi$ with free variables $(X_s)_{s \in
\Stat}$. The corresponding set of winning plays is defined
by \[\Omega^{\text{Presb}}_\phi = \{\rho \in \Play \mid
\phi\bigl((\#s(\rho))_{s \in \Stat}\bigr) \text{ holds}\}\] where $\#s(\rho)$ is the number
of occurrences\footnote{By convention, if $s \in \Inf(\rho)$, and
variable $X_s$ appears in~$\phi$, then $\rho \notin
\Omega^{\text{Presb}}_\phi$.} of state~$s$ along~$\rho$.
\end{itemize}
\paragraph{Encodings} For complexity issues we now make explicit how
the various objectives are encoded:
\begin{itemize}
\item Reachability, safety, B\"uchi and co-B\"uchi objectives are
given by a set $T\subseteq \Stat$, they can therefore be encoded
using $|\Stat|$ bits.
\item For parity objectives, we assume without loss of generality that
$d \le 2 \cdot |\Stat|$. The priority function has then size at
most $|\Stat| \cdot \lceil\log(2\cdot |\Stat| + 1)\rceil$.
\item Streett and Rabin objectives are given by tuples $(Q_i,R_i)_{i\in
  \lsem 1,k\rsem}$. Their sizes are given by $\sum_{i\in \lsem
  1,k\rsem} (|Q_i| + |R_i|) \cdot \lceil\log(|\Stat|)\rceil$.
\item Muller objectives are given by a coloring function and a set
$\mathcal{F}$. The size of such an objective is $|\Stat|\cdot \lceil\log(|C|)\rceil +
|\mathcal{F}| \cdot \lceil\log(|C|)\rceil$. Note that thanks to the
coloring function, this encoding can be exponentially more succinct
than an explicit representation such as the one considered
in~\cite{horn2008explicit}.
\item The size of objectives given by circuits, deterministic automata
or Presburger formulas is that of the corresponding circuits,
deterministic automata or Presburger formulas.
\end{itemize}
\paragraph{Encodings of thresholds in inputs of the value and the
constrained NE existence problems.}
For all the objectives above except those given by automata, Boolean
circuits or Presburger formulas, whether a play $\rho$ satisfies the
objective or not only depends on the sets $\Occ(\rho)$ and
$\Inf(\rho)$. The corresponding thresholds will therefore be encoded
as such pairs $(\Occ,\Inf)$.
For deterministic-automata objectives, the thresholds will also be
encoded as pairs of sets of states of the objectives, representing
respectively the set of states which are visited and the set of states
which are visited infinitely often.
For the Boolean circuit objectives, whether a play $\rho$ satisfies
the objective or not only depends on the set $\Inf(\rho)$. Therefore
we will use as encoding for the threshold a single set $\Inf$.
For the Presburger formulas objectives, we will use as encoding for
the thresholds the Parikh image of the play (\ie, the number of visits
to each of the states).
\subsubsection{Ordered objectives}
We now turn to a more general class of preference relations, allowing
for a \textit{semi-quantitative setting}.
\begin{definition}
An \newdef{ordered objective} is a pair $\omega = \langle
(\Omega_i)_{1 \le i \le n},\preorder\rangle$, where, for every $1
\le i \le n$, $\Omega_i$~is an objective, and $\preorder$~is a
preorder on $\{0,1\}^n$. A~play~$\rho$ is assigned a \newdef{payoff
vector} w.r.t. that ordered objective, which is defined
as~$\payoff_\omega(\rho) = \One_{\{i \mid
\rho\in\Omega_i\}}\in\{0,1\}^{n}$ (where $\One_S$ is the vector
$v$ such that $v_i = 1 \Leftrightarrow i \in S$). The corresponding
preference relation $\prefrel_\omega$ is then defined by $\rho
\prefrel_\omega \rho'$ \iff $\payoff_\omega(\rho) \preorder
\payoff_\omega(\rho')$.
\end{definition}
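As an illustration, the payoff vector of a play and some of the preorders defined in the sequel (subset, counting, lexicographic) can be computed as follows (a sketch; the encodings are ours):

```python
def payoff(play, objectives):
    """Payoff vector of a play: component i is 1 iff objective i is
    satisfied (objectives are given as predicates on the play)."""
    return tuple(1 if obj(play) else 0 for obj in objectives)

def subset_le(v, w):    # v ≼ w for the subset preorder
    return all(vi <= wi for vi, wi in zip(v, w))

def counting_le(v, w):  # v ≼ w for the counting preorder
    return sum(v) <= sum(w)

def lex_le(v, w):       # v ≼ w for the lexicographic order
    if v == w:
        return True
    # at the first differing component, w must have the 1
    return next(wi for vi, wi in zip(v, w) if vi != wi) == 1
```

For example, $(0,1,1)$ and $(1,0,0)$ are incomparable for the subset preorder, $(1,0,0) \preorder (0,1,1)$ for the counting preorder, and $(0,1,1) \preorder (1,0,0)$ lexicographically.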
There are many ways of specifying a preorder. We define below the
preorders on~$\{0,1\}^n$ that we consider in the
sequel. Figure~\ref{fig-preorder} displays four such preorders for
$n=3$. For the purpose of these definitions, we assume that
$\max\varnothing=-\infty$.
\begin{figure}[t]
\bgroup
\makeatletter
\def\@captype{subfigure}
\makeatother
\centering
\begin{minipage}{.3\textwidth}
\centering
\begin{tikzpicture}
\tikzset{every node/.style={font=\scriptsize}}
\draw(0,0) node (A) {$(0,0,0)$};
\draw (-1,1) node (B) {$(1,0,0)$};
\draw (0,1) node (C) {$(0,1,0)$};
\draw (1,1) node (D) {$(0,0,1)$};
\draw (-1,2) node (E) {$(1,1,0)$};
\draw (0,2) node (F) {$(1,0,1)$};
\draw (1,2) node (G) {$(0,1,1)$};
\draw (0,3) node (H) {$(1,1,1)$};
\draw[-latex'] (A) -- (B);
\draw[-latex'] (A) -- (C);
\draw[-latex'] (A) -- (D);
\draw[-latex'] (B) -- (E);
\draw[-latex'] (B) -- (F);
\draw[-latex'] (C) -- (E);
\draw[-latex'] (C) -- (G);
\draw[-latex'] (D) -- (F);
\draw[-latex'] (D) -- (G);
\draw[-latex'] (E) -- (H);
\draw[-latex'] (F) -- (H);
\draw[-latex'] (G) -- (H);
\end{tikzpicture}
\caption{Subset preorder}\label{fig-first}
\end{minipage}%
\begin{minipage}{.35\textwidth}
\centering
\begin{tikzpicture}[x=0.95cm]
\tikzset{every node/.style={font=\scriptsize}}
\draw(0,0) node (A) {$(0,0,0)$};
\draw (0,1) node (B) {$(1,0,0)$};
\draw (-1,2) node (C1) {$(0,1,0)$};
\draw (1,2) node (C2) {$(1,1,0)$};
\draw (-2,3) node (D1) {$(0,0,1)$};
\draw (-0.7,3) node (D2) {$(1,0,1)$};
\draw (0.7,3) node (D3) {$(0,1,1)$};
\draw (2,3) node (D4) {$(1,1,1)$};
\node[draw,densely dotted,rounded corners=2mm,fit=(A),inner sep=0mm] {};
\node[draw,densely dotted,rounded corners=2mm,fit=(B),inner sep=0mm] {};
\node[draw,densely dotted,rounded corners=2mm,fit=(C1)(C2),inner sep=0mm] {};
\node[draw,densely dotted,rounded corners=2mm,fit=(D1)(D2)(D3)(D4),inner sep=0mm] {};
\draw[-latex'] (0,.3) -- (0,.7);
\draw[-latex'] (0,1.3) -- (0,1.7);
\draw[-latex'] (0,2.3) -- (0,2.7);
\end{tikzpicture}
\caption{\mbox{Maximise preorder}}
\end{minipage}%
\begin{minipage}{.35\textwidth}
\centering
\begin{tikzpicture}
\tikzset{every node/.style={font=\scriptsize}}
\draw(0,0) node (A) {$(0,0,0)$};
\draw (-1,1) node (B) {$(1,0,0)$};
\draw (0,1) node (C) {$(0,1,0)$};
\draw (1,1) node (D) {$(0,0,1)$};
\draw (-1,2) node (E) {$(1,1,0)$};
\draw (0,2) node (F) {$(1,0,1)$};
\draw (1,2) node (G) {$(0,1,1)$};
\draw (0,3) node (H) {$(1,1,1)$};
\node[draw,densely dotted,rounded corners=2mm,fit=(A),inner sep=0mm] {};
\node[draw,densely dotted,rounded corners=2mm,fit=(B)(C)(D),inner sep=0mm] {};
\node[draw,densely dotted,rounded corners=2mm,fit=(E)(F)(G),inner sep=0mm] {};
\node[draw,densely dotted,rounded corners=2mm,fit=(H),inner sep=0mm] {};
\draw[-latex'] (0,.3) -- (0,.7);
\draw[-latex'] (0,1.3) -- (0,1.7);
\draw[-latex'] (0,2.3) -- (0,2.7);
\end{tikzpicture}
\caption{Counting preorder}
\end{minipage}
\bigskip
\begin{minipage}{.9\linewidth}
\centering
\begin{tikzpicture}[scale=1.1]
\tikzset{every node/.style={font=\scriptsize}}
\draw(0,0) node (A) {$(0,0,0)$};
\draw (1.4,0) node (B) {$(0,0,1)$};
\draw (2.8,0) node (C) {$(0,1,0)$};
\draw (4.2,0) node (D) {$(0,1,1)$};
\draw (5.6,0) node (E) {$(1,0,0)$};
\draw (7,0) node (F) {$(1,0,1)$};
\draw (8.4,0) node (G) {$(1,1,0)$};
\draw (9.8,0) node (H) {$(1,1,1)$};
\draw[-latex'] (A) -- (B);
\draw[-latex'] (B) -- (C);
\draw[-latex'] (C) -- (D);
\draw[-latex'] (D) -- (E);
\draw[-latex'] (E) -- (F);
\draw[-latex'] (F) -- (G);
\draw[-latex'] (G) -- (H);
\end{tikzpicture}
\caption{Lexicographic order}\label{fig-last}
\end{minipage}
\egroup
\caption{Examples of preorders (for $n=3$): dotted boxes represent
equivalence classes for the relation~$\sim$, defined as $a\sim b
\Leftrightarrow a\preorder b \land b\preorder a$; arrows represent
the preorder relation~$\preorder$ quotiented by~$\sim$.}\label{fig-preorder}
\end{figure}
\begin{enumerate}
\item \newdef{Conjunction}: $v \preorder w$ \iff either $v_i=0$ for
some~$1\leq i\leq n$, or $w_i=1$ for all~$1\leq i\leq n$. This
corresponds to the case where a player wants to achieve all her
objectives.
\item \newdef{Disjunction}: $v \preorder w$ \iff either $v_i=0$ for
all~$1\leq i\leq n$, or $w_i=1$ for some~$1\leq i\leq n$. The aim
here is to satisfy at least one objective.
\item \newdef{Counting}: $v \preorder w$ \iff $|\{i\mid v_i=1\}| \leq
  |\{i\mid w_i=1\}|$. The aim is to maximise the number of objectives
  that are satisfied.
\item \newdef{Subset}: $v \preorder w$ \iff $\{i\mid v_i=1\} \subseteq
\{i\mid w_i = 1 \}$: in this setting, a player will always struggle
to satisfy a larger (for inclusion) set of objectives.
\item \newdef{Maximise}: $v \preorder w$ \iff $\max \{i\mid v_i=1\}
\leq \max \{i\mid w_i = 1 \}$. The aim is to maximise the highest
index of the objectives that are satisfied.
\item \newdef{Lexicographic}: $v \preorder w$ \iff either $v=w$, or
there is~$1 \le i \le n$ such that $v_i=0$, $w_i=1$ and $v_j=w_j$
for all $1\leq j<i$.
\item \newdef{Boolean Circuit}: given a Boolean circuit, with input
  from $\{0,1\}^{2 n}$, $v \preorder w$ \iff the circuit evaluates to~$1$
on input $v_1 \ldots v_n w_1 \ldots w_n$.
\item \newdef{Monotonic Boolean Circuit}: same as above, with the
restriction that the input gates corresponding to~$v$ are negated,
  and no other negation appears in the circuit.
\end{enumerate}
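For concreteness, the preorders (1)--(6) can be rendered as executable predicates. The following Python sketch is purely illustrative and not part of the formal development; it mirrors the definitions above, with the convention that the maximum over an empty set of indices is~$0$ (an assumption, since the text leaves this case implicit).

```python
# Illustrative rendering of preorders (1)-(6); not part of the formal
# development. Payoff vectors are tuples over {0, 1}; each function
# returns True iff v is below w in the corresponding preorder.

def conjunction(v, w):
    # v below w iff some v_i = 0, or all w_i = 1
    return 0 in v or all(w)

def disjunction(v, w):
    # v below w iff all v_i = 0, or some w_i = 1
    return not any(v) or 1 in w

def counting(v, w):
    # compare the numbers of satisfied objectives
    return sum(v) <= sum(w)

def subset(v, w):
    # {i | v_i = 1} is included in {i | w_i = 1}
    return all(wi == 1 for vi, wi in zip(v, w) if vi == 1)

def maximise(v, w):
    # compare the highest satisfied index (assumed 0 when none holds)
    top = lambda u: max((i + 1 for i, ui in enumerate(u) if ui), default=0)
    return top(v) <= top(w)

def lexicographic(v, w):
    # v = w, or v_i = 0 and w_i = 1 at the first index where they differ
    if v == w:
        return True
    i = next(k for k in range(len(v)) if v[k] != w[k])
    return v[i] == 0 and w[i] == 1
```

For instance, `counting((1, 0, 0), (0, 1, 1))` holds while `subset((1, 0, 0), (0, 1, 1))` does not, reflecting that the subset requirement is strictly stronger than the counting one.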
\noindent In terms of expressiveness, any preorder over $\{0,1\}^n$ can be given
as a Boolean circuit: for each pair~$(v,w)$ with $v \preorder w$, it
is possible to construct a circuit whose output is~$1$ \iff the input
is $v_1 \ldots v_n w_1 \ldots w_n$; taking the disjunction of all
these circuits we obtain a Boolean circuit defining the preorder. Its
size can be bounded by $2^{2 n+3} n$, which is exponential in
general. But all the above examples (1)--(6) can be specified with a
circuit of polynomial size. In Figure~\ref{fig:boolean-subset} we give
a polynomial-size Boolean circuit for the subset preorder. In the
following, for complexity issues, we will assume that the encoding of
all preorders (1)--(6) takes constant size, and that the size of the
preorder when it is given as a Boolean circuit is precisely the size
of the circuit for input size $n$, where $n$ is the number of
objectives.
A preorder~$\preorder$ is \newdef{monotonic} if it is compatible with
the subset ordering, \ie if $\{i\mid v_i=1\} \subseteq \{i\mid w_i = 1
\}$ implies $v \preorder w$. Hence, a preorder is monotonic if
fulfilling more objectives never results in a lower payoff. All our
examples of preorders except for the Boolean circuit preorder are
monotonic. Moreover, any monotonic preorder can be expressed as a
monotonic Boolean circuit: for a pair~$(v,w)$ with~$v\preorder w$, we
can build a circuit whose output is~$1$ \iff the input is~$v_1 \ldots
v_n w_1 \ldots w_n$. We~can require this circuit to have negation at
the leaves. Indeed, if the input~$w_j$ appears negated, and if
$w_j=0$, then by monotonicity, also the input $(v,\tilde w)$ is
accepted, with $\tilde w_i=w_i$ when $i\not=j$ and $\tilde w_j=1$.
Hence the negated input gate can be replaced with~$\texttt{true}$.
Similarly for positive occurrences of any~$v_j$. Hence any monotonic
preorder can be written as a monotonic Boolean circuit. Notice that
with Definition~\ref{def-NE}, any Nash equilibrium $\sigma_\Agt$ for
the subset preorder is also a Nash equilibrium for any monotonic
preorder.
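Monotonicity as defined above can also be checked by brute force for small~$n$. The sketch below is illustrative only (the helper `is_monotonic` is invented for this example, not taken from the paper); it tests compatibility with the subset ordering by exhaustive enumeration.

```python
from itertools import product

def is_monotonic(preorder, n):
    """Check, by exhaustive enumeration of {0,1}^n (feasible only for
    small n), that fulfilling more objectives never yields a lower
    payoff: subset inclusion of satisfied objectives implies preorder."""
    def subset(v, w):
        return all(wi == 1 for vi, wi in zip(v, w) if vi == 1)
    vectors = list(product((0, 1), repeat=n))
    return all(preorder(v, w)
               for v in vectors for w in vectors if subset(v, w))

# The counting preorder is monotonic; a "fewer objectives are better"
# preorder (which needs a general, non-monotonic Boolean circuit) is not.
counting = lambda v, w: sum(v) <= sum(w)
inverted = lambda v, w: sum(v) >= sum(w)
```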
\begin{figure}[htb]
\centering{
\begin{tikzpicture}[thick]
\everymath{\scriptstyle}
\draw[black!10!white,line width=4.5mm] (-.4,0) -- (3cm+3pt,0);
\draw[black!10!white,line width=4.5mm] (4cm+3pt,0) -- +(3.4cm+3pt,0);
\draw(0,0) node[draw,minimum width=8mm, minimum height=4.5mm,inner sep=0pt] (A) {$v_1$};
\draw(A.0) node[draw,right,minimum width=8mm, minimum height=4.5mm,inner sep=0pt] (B) {$v_2$};
\draw(B.0) node[right,minimum width=10mm, minimum height=4.5mm,inner sep=0pt] (C) {$\dots$};
\draw(C.0) node[draw,right,minimum width=8mm, minimum height=4.5mm,inner sep=0pt] (D) {$v_n$};
\draw(D.0) +(1,0) node[draw,right,minimum width=8mm, minimum height=4.5mm,inner sep=0pt] (E) {$w_1$};
\draw(E.0) node[draw,right,minimum width=8mm, minimum height=4.5mm,inner sep=0pt] (F) {$w_2$};
\draw(F.0) node[right,minimum width=10mm, minimum height=4.5mm,inner sep=0pt] (G) {$\dots$};
\draw(G.0) node[draw,right,minimum width=8mm, minimum height=4.5mm,inner sep=0pt] (H) {$w_n$};
\draw (A) + (0,-0.7) node[draw,rounded corners=2mm,inner sep=1mm] (G1) {$\textsf{NOT}$};
\draw (B) + (0,-0.7) node[draw,rounded corners=2mm,inner sep=1mm] (G2)
{$\textsf{NOT}$};
\draw (C) + (0,-0.7) node {$\dots$};
\draw (D) + (0,-0.7) node[draw,rounded corners=2mm,inner sep=1mm] (G3) {$\textsf{NOT}$};
\draw (E) + (0,-2.4) node[draw,rounded corners=2mm,inner sep=1mm] (G4) {$\textsf{OR}$};
\draw (F) + (0,-1.9) node[draw,rounded corners=2mm,inner sep=1mm] (G5) {$\textsf{OR}$};
\draw (G) + (0,-1.5) node {$\dots$};
\draw (H) + (0,-1.2) node[draw,rounded corners=2mm,inner sep=1mm] (G6) {$\textsf{OR}$};
\draw (E) + (0,-3) node[draw,rounded corners=2mm,inner sep=1mm] (G7) {$\textsf{AND}$};
\draw (A) -- (G1);
\draw (B) -- (G2);
\draw (D) -- (G3);
\draw (G1) |- (G4);
\draw (G2) |- (G5);
\draw (G3) |- (G6);
\draw (E) -- (G4);
\draw (F) -- (G5);
\draw (H) -- (G6);
\draw (G4) -- (G7);
\draw (G5) |- (G7.20);
\draw (G6) |- (G7);
\draw (G7) -- +(0,-0.6);
\end{tikzpicture}
}
\caption{Boolean circuit defining the subset preorder}
\label{fig:boolean-subset}
\end{figure}
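The circuit of the figure above computes, for each~$i$, the gate $\lnot v_i \lor w_i$ and takes the conjunction of the results, which gives a linear-size circuit for the subset preorder. As a quick sanity check, this formula can be evaluated directly (an illustrative sketch, not the paper's formalism):

```python
# Direct evaluation of the subset circuit of the figure: AND over i of
# (NOT v_i OR w_i). Linear in n, matching the polynomial-size claim.

def subset_circuit(v, w):
    return all((not vi) or wi for vi, wi in zip(v, w))
```

On all inputs of a given length, this agrees with the set-inclusion definition of the subset preorder.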
Next we will be interested in two kinds of ordered objectives,
\emph{ordered reachability objectives}, where all objectives are supposed to
be reachability objectives, and \emph{ordered B\"uchi objectives}, where all
objectives are supposed to be B\"uchi objectives. Note that other classical
objectives (parity, Streett, Rabin, Muller, \etc) can be equivalently
described with a preorder given by a polynomial-size Boolean circuit over
B\"uchi objectives. For instance, each set of a~Muller condition can be
encoded as a conjunction of B\"uchi and co-B\"uchi conditions.
For ordered reachability (resp. B\"uchi) objectives, thresholds used
as inputs to the various decision problems will be given by the set of
states that are visited (resp. visited infinitely often).
\bigskip In Sections~\ref{sec:buchi} and~\ref{sec:reach}, we will be
interested in games where, for every player $A$, the preference
relation $\prefrel_A$ is given by an ordered objective $\omega_A =
\langle (\Omega^A_i)_{1 \le i \le n_A},\preorder_A\rangle$. We will
then write~$\payoff_A$ instead of $\payoff_{\omega_A}$ for the
payoffs, and if $\rho$ is a play, $\payoff(\rho) =
(\payoff_A(\rho))_{A \in \Agt}$.
\subsection{Undecidability of all three problems for single Presburger-definable objectives}
We end this section with an undecidability result in the quite
general setting of Presburger-definable preference relations.
\begin{theorem}
\label{theorem:undecidable}
The value, NE existence and constrained NE existence problems are
undecidable for finite games with preference relations given by
Presburger-definable qualitative objectives.
\end{theorem}
\begin{proof}
We first prove the result for the constrained NE existence problem, by
encoding a two-counter machine. We fix a two-counter machine, and
assume without loss of generality that the halting state is preceded
by a non-zero test for the two counters (hence if the machine halts,
the two counters have a positive value in the halting state).
We begin with defining a family of preorders. Fix two sets of
  states~$S$ and~$T$; a~play is said to be $(S=T)$-winning if the
number of visits to~$S$ equals the number of visits to~$T$, and both
are finite. Formally, $\pi \prefrel_{S=T} \pi'$ whenever $\pi$ is
not $(S=T)$-winning, or $\pi'$~is.
We use such preorders to encode the acceptance problem for two-counter
machines: the value of counter~$c_1$ is encoded as the difference
between the number of visits to~$S_1$ and~$T_1$, and similarly for
counter~$c_2$. Incrementing counter~$c_i$ thus consists in visiting a
state in~$S_i$, and decrementing consists in visiting~$T_i$; in other
terms, if instruction~$q_k$ of the two-counter machine consists in
incrementing~$c_1$ and jumping to~$q_{k'}$, then the game will have a
transition from some state~$q_k$ to a state in~$S_1$, and a transition
from there to~$q_{k'}$. The game involves three players: $A_1$,
$A_2$ and~$B$. The~aim of player~$A_1$ (resp.~$A_2$) is to visit~$S_1$
and~$T_1$ (resp.~$S_2$ and~$T_2$) the same number of times: player
$A_i$'s preference is $\prefrel_{S_i=T_i}$. The aim of player~$B$ is
to reach the state corresponding to the halting state of the
two-counter machine. Due to the assumption on the two-counter machine,
if $B$ wins, then both $A_1$ and $A_2$ lose.
\smallskip
\begin{wrapfigure}{r}{6.3cm}
\centering
\begin{tikzpicture}[thick]
\tikzstyle{square}=[draw,minimum height=5mm,minimum width=5mm, inner sep=1mm]
\tikzstyle{smsquare}=[draw,minimum height=5mm,minimum width=5mm,fill=black!20!white,inner sep=1mm]
\draw (0.2,0) node[draw,circle,minimum width=5mm] (B) {};
\draw (B.-90) node[below] (BB) {$B$};
\draw (2,0.8) node[square] (A1) {$u_i^{{\scriptscriptstyle \ne 0}}$};
\draw (A1.170) node[left] (A1B) {$A_i$};
\draw (2,-0.8) node[square] (A2) {$u_i^{{\scriptscriptstyle = 0}}$};
\draw (A2.90) node[above] (A2A) {$A_i$};
\draw (2,1.8) node[smsquare] (U) {};
\draw (1,-1.6) node[square] (S) {$s_i$};
\draw (S.0) node[right] (A2A) {$A_i$};
\draw (3,-1.6) node[square] (T) {$t_i$};
\draw (T.180) node[left] (A2A) {$A_i$};
\draw (1,-2.6) node[smsquare] (SB) {};
\draw (3,-2.6) node[smsquare] (TB) {};
\draw[-latex'] (B) -- (A1);
\draw[-latex'] (A1) -- (U);
\draw[-latex'] (A1) -- +(.8,0);
\draw[-latex'] (U) .. controls +(.5,0.9) and +(-.5,0.9) .. (U);
\draw[-latex'] (B) -- (A2);
\draw[-latex'] (A2) -- +(.8,0);
\draw[-latex'] (A2) -- (S);
\draw[-latex'] (A2) -- (T);
\draw[-latex'] (S) -- (SB);
\draw[-latex'] (T) -- (TB);
\draw[-latex'] (S) .. controls +(-1,-0.7) and +(-1,0.7) .. (S);
\draw[-latex'] (T) .. controls +(1,-0.7) and +(1,0.7) .. (T);
\draw[-latex'] (SB) .. controls +(0.5,-0.9) and +(-0.5,-0.9) .. (SB);
\draw[-latex'] (TB) .. controls +(0.5,-0.9) and +(-0.5,-0.9) .. (TB);
\draw[-latex'] (-0.5,0) -- (B);
\end{tikzpicture}
\caption{Testing whether $c_i=0$.}
\label{fig:zero-test}
\end{wrapfigure}
It~remains to encode the zero-test: this is achieved by the
module of Figure~\ref{fig:zero-test}. In~this module, player~$B$
  tries to avoid the three sink states (marked in grey), since visiting
  them would prevent her from reaching her goal.
When entering the module, player~$B$ has to choose one of the
available branches: if she decides to go to~$u_i^{{\scriptscriptstyle
\ne 0}}$, then $A_i$ could take the play into the self-loop, which
is winning for her if $S_i$ and~$T_i$ have been visited the same
number of times in the history of this path, which corresponds to
having $c_i=0$; hence player~$B$ should play
to~$u_i^{{\scriptscriptstyle \ne 0}}$ only if $c_i\not=0$, so that
    $A_i$ has no interest in going to this self-loop.
Similarly, if player~$B$ decides to go to~$u_i^{{\scriptscriptstyle
=0}}$, player~$A_i$ has the opportunity to ``leave'' the main
stream of the game, and go to~$s_i$ or~$t_i$ (obviously $s_i \in S_i$
and $t_i \in T_i$). If~the numbers of visits to~$S_i$ and~$T_i$ up to
that point are different, then player~$A_i$ has the opportunity to
make both numbers equal, and to win. Conversely, if both numbers are
equal (i.e., $c_i=0$), then going to~$s_i$ or~$t_i$ will be losing
for~$A_i$, whatever happens from there. Hence, if~$c_i=0$ when
entering the module, then player~$B$ should go
to~$u_i^{{\scriptscriptstyle =0}}$.
\medskip One can then easily show that the two-counter machine stops
if, and only if, there is a Nash equilibrium in the resulting game~$\calG$, in
which player~$B$ wins and players~$A_1$ and~$A_2$ lose.
Indeed, assume that the machine stops, and consider the strategies
where player~$B$ plays (in the first state of the test modules)
according to the value of the corresponding counter, and where
players~$A_1$ and~$A_2$ always keep the play in the main stream of the
game. Since the machine stops, player~$B$ wins, while players~$A_1$
and~$A_2$ lose. Moreover, none of them has a way to improve their
payoff: since player~$B$ plays according to the values of the
counters, players~$A_1$ and~$A_2$ would not benefit from deviating
from their above strategies.
Conversely, if there is such a Nash equilibrium, then in any visited
test module, player~$B$ always plays according to the values of the
counters: otherwise, player~$A_1$ (or~$A_2$) would have the
opportunity to win the game. By~construction, this means that the run
of the Nash equilibrium corresponds to the execution of the
two-counter machine. As~player~$B$ wins, this execution reaches the
halting state.
\bigskip Finally, it is not difficult to adapt this reduction to
involve only two players: players~$A_1$ and~$A_2$ would be replaced by
one single player~$A$, in charge of ensuring that both conditions
(for~$c_1$ and~$c_2$) are fulfilled. This requires minor changes to
the module for testing~$c_i=0$: when leaving the main stream of the
game in a module for testing counter~$c_i$, player~$A$ should be given
the opportunity (after the grey state) to visit states~$S_{3-i}$
or~$T_{3-i}$ in order to adjust that part of her objective.
\smallskip
By changing the winning condition for Player~$B$, the game~$\calG$ can also be made
zero-sum: for this, $B$~must lose if the play remains in the main stream
forever without visiting the final state; otherwise, $B$~loses if the
numbers of visits to~$s_i$ and~$t_i$ are finite and equal for both~$i=1$
and~$i=2$; $B$~wins in any other case. The objective of player~$A$ is
opposite. It~is not difficult to modify the proof above for showing that the
two-counter machine halts if, and only if, player~$B$ has a winning strategy
in this game.
\smallskip
\begin{wrapfigure}{r}{6.2cm}
\centering
\begin{tikzpicture}[thick]
\tikzset{noeud/.style={circle,draw=black,thick,fill=black!10,minimum
height=6mm,inner sep=0pt}}
\draw (0,0) node [noeud] (A) {$s_0$};
\draw (30:1.6cm) node [noeud] (B) {$s_1$};
\draw (-30:1.6cm) node [noeud] (C) {$s$};
\begin{scope}[xshift=1.1cm,yshift=-.75cm]
\draw[dashed, rounded corners=2mm] (0,0) .. controls +(90:5mm) .. (1,.5) -- (2,.9) -- (2.5,.6) --
(3,0) -- (2.5,-.8) -- (2,-.9) -- (1,-.7) .. controls +(150:5mm) and
+(-90:5mm) .. (0,0);
\draw (1.8,-0.1) node {\begin{minipage}{2.1cm}\footnotesize\centering
Copy of~$\calG$
\end{minipage}};
\end{scope}
\draw[-latex'] (A) -- (B) node[midway,above left=-2pt] {$\scriptstyle \tuple{1,1}, \tuple{2,2}$};
\draw[-latex'] (A) -- (C) node[midway,below left=-2pt] {$\scriptstyle \tuple{1,2}, \tuple{2,1}$};
\draw[-latex'] (B) .. controls +(30:10mm) and +(-30:10mm) .. (B);
\draw[dashed] (C) -- +(40:5mm);
\draw[dashed] (C) -- +(0:4mm);
\draw[dashed] (C) -- +(-40:5mm);
\end{tikzpicture}
\caption{Extending the game with an initial concurrent module}
\label{fig-init-module}
\end{wrapfigure}
Finally, by adding a small initial module depicted on
Figure~\ref{fig-init-module} to this zero-sum version of the game~$\calG$, one
can encode the halting problem for two-counter machines to the
NE existence problem. Indeed, in the zero-sum game, there is exactly one
Nash equilibrium, with only two possible payoffs (either~$A$ wins,
or~$B$~wins).
Now, assuming that $A$ loses and $B$ wins in state~$s_1$,
there is a (pure) Nash equilibrium in
the game extended with the initial module if, and only if, player~$B$
wins in the zero-sum game above.
\end{proof}
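The counter encoding in the proof above (the value of~$c_i$ is the number of visits to~$S_i$ minus the number of visits to~$T_i$) can be illustrated on a toy instruction sequence. The machine, the state names and the helper below are invented for illustration only:

```python
# Illustrative sketch of the encoding used in the proof above: along a
# run, the value of counter c1 equals (visits to S1) - (visits to T1).
# The instruction sequence and state names are invented.

def run(program):
    trace, c1 = [], 0
    for instr in program:
        if instr == "inc1":
            trace.append("s1")   # incrementing c1 visits a state of S1
            c1 += 1
        elif instr == "dec1":
            trace.append("t1")   # decrementing c1 visits a state of T1
            c1 -= 1
        trace.append("main")     # back to the main stream of the game
    return trace, c1

trace, c1 = run(["inc1", "inc1", "dec1"])
# c1 == 1 == trace.count("s1") - trace.count("t1"); the play is
# (S1=T1)-winning exactly when this difference is zero.
```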
\section{Preliminary results}\label{sec-prelim}
This section contains general results that will be applied later in
various settings. In each of the statements, we give the restrictions
on the games and on the preference relations that should be satisfied.
\subsection{Nash equilibria as lasso runs}
\label{sec:lasso}
We first characterise outcomes of Nash equilibria as ultimately
periodic runs, in the case where preference relations only depend on the set of
states that are visited, and on the set of states that are visited
infinitely often.
Note that $\omega$-regular conditions satisfy this hypothesis,
but Presburger relations such as the ones used for proving
Theorem~\ref{theorem:undecidable} do~not.
\begin{proposition}\label{lem:play-length}
Let~$\calG=\tuple{\Stat,\Agt,\Act,\Allow,\Tab,(\mathord\prefrel_A)_{A\in\Agt} }$ be
a \textbf{finite} concurrent game such that, for every player $A$,
it~holds\footnote{We recall that $\rho \sim_A \rho'$ if, and only if, $\rho
\prefrel_A \rho'$ and $\rho' \prefrel_A \rho$.} $\rho \sim_A \rho'$ as
soon as $\Inf(\rho) = \Inf(\rho')$ and $\Occ(\rho) = \Occ(\rho')$. Let~$\rho
\in \Play$. If there is a Nash equilibrium with outcome~$\rho$, then there
is a Nash equilibrium with outcome~$\rho'$ of the form $\pi \cdot
\tau^\omega$ such that $\rho \sim_A \rho'$, and where $|\pi|$ and~$|\tau|$
are bounded by~$|\Stat|^2$.
\end{proposition}
\begin{proof}
Let $\sigma_\Agt$ be a Nash equilibrium from some state~$\stat$, and
$\rho$~be its outcome. We~define a new strategy
profile~$\sigma'_{\Agt}$, whose outcome from~$\stat$ is ultimately
periodic, and then show that $\sigma'_{\Agt}$ is a Nash equilibrium
from~$\stat$.
To begin with, we inductively construct a history $\pi = \pi_0 \pi_1
\dots \pi_n$ that is not too long and visits precisely those states
that are visited by~$\rho$ (that is, $\Occ(\pi) = \Occ(\rho)$).
The initial state is $\pi_0 = \rho_0=\stat$. Then we assume we have
constructed $\pi_{\le k} = \pi_0 \dots \pi_k$ which visits exactly
the same states as $\rho_{\le k'}$ for some~$k'$. If all the states
of~$\rho$ have been visited in $\pi_{\le k}$ then the construction
is over. Otherwise there is an index~$i$ such that $\rho_{i}$ does
not appear in~$\pi_{\le k}$. We~therefore define our next target as
the smallest such~$i$: we~let $t(\pi_{\le k}) = \min \{ i \mid
\forall j \le k.\ \pi_j \neq \rho_i\}$. We~then look at the
occurrence of the current state~$\pi_k$ that is the closest to the
target in~$\rho$: we~let $c(\pi_{\le k}) = \max \{ j < t(\pi_{\le
k}) \mid \pi_k = \rho_j\}$. Then we~emulate what happens at that
  position by choosing $\pi_{k+1} = \rho_{c(\pi_{\le k})+1}$. Then
$\pi_{k+1}$ is either the target, or a state that has already been
seen before in $\pi_{\le k}$, in which case the resulting $\pi_{\le
k+1}$ visits exactly the same states as $\rho_{\le c(\pi_{\le
k})+1}$.
At each step, either the number of remaining targets strictly
decreases, or the number of remaining targets is constant but the
distance to the next target strictly decreases. Therefore the
construction terminates. Moreover, notice that between two targets
we do not visit the same state twice, and we visit only states that
have already been visited, plus the target. As~the number of
targets is bounded by~$|\Stat|$, we~get that the length of the path
$\pi$ constructed thus far is bounded by $1+|\Stat|\cdot
(|\Stat|-1)/2$.
\smallskip Using similar ideas, we now inductively construct $\tau =
\tau_0 \tau_1 \dots \tau_m$, which visits precisely those states
which are seen infinitely often along~$\rho$, and which is not too
long. Let $l$ be the least index after which the states visited
by~$\rho$ are visited infinitely often, \ie $l = \min\{ {i\in\N \mid
\forall j\geq i.\ \rho_j \in \Inf(\rho)}\}$. The~run~$\rho_{\ge
l}$ is such that its set of visited states and its set of states
visited infinitely often coincide. We~therefore define~$\tau$ in the
same way we have defined~$\pi$ above, but for play~$\rho_{\ge
l}$. As~a by-product, we also get $c(\tau_{\leq k})$, for~$k<m$.
We now need to glue $\pi$ and~$\tau$ together, and to ensure
that~$\tau$ can be glued to itself, so that $\pi \cdot \tau^\omega$
is a real run. We~therefore need to link the last state of~$\pi$
with the first state of~$\tau$ (and similarly the last state of
$\tau$ with its first state). This possibly requires appending some
more states to~$\pi$ and~$\tau$: we~fix the target of~$\pi$ and
$\tau$ to be~$\tau_0$, and apply the same construction as previously
until the target is reached. The total length of the resulting
  paths~$\pi'$ and~$\tau'$ is bounded by $1+(|\Stat| -
  1)\cdot(|\Stat|+2)/2$, which is less than~$|\Stat|^2$.
\smallskip We~let~$\rho'=\pi'\cdot{\tau'}^\omega$, and abusively
write $c(\rho'_{\leq k})$ for $c(\pi'_{\leq k})$ if~$k\leq
\length{\pi'}$ and $c(\tau'_{\leq k'})$ with $k'=(k-1-\length{\pi'})
\mod \length{\tau'}$ otherwise. We now define our new strategy
profile, having~$\rho'$ as outcome from~$\stat$. Given a
history~$h$:
\begin{itemize}
\item if $h$ followed the expected path, \ie, $h = \rho'_{\le k}$
for some~$k$, we~mimic the strategy at~$c(h)$: $\sigma'_\Agt(h) =
\sigma_\Agt(\rho'_{c(h)})$.
This way, $\rho'$ is the outcome of~$\sigma'_\Agt$ from~$\stat$.
\item otherwise we take the longest prefix $h_{\le k}$ that is a prefix
of~$\rho'$, and define $\sigma'_\Agt(h) =
\sigma_\Agt(\rho'_{c(h_{\le k})} \cdot h_{\ge k+1})$.
\end{itemize}
\noindent We now show that $\sigma'_\Agt$ is a Nash equilibrium. Assume that
one of the players changes her strategy while playing according
to~$\sigma'_\Agt$: either the resulting outcome does not deviate
  from~$\rho'$, in which case the payoff of that
player is not improved; or~it~deviates at some point, and from that
point on, $\sigma'_\Agt$~follows the same strategies as
in~$\sigma_\Agt$. Assume that the resulting outcome is an
improvement over~$\rho'$ for the player who deviated. The suffix of
the play after the deviation is the suffix of a play
of~$\sigma_\Agt$ after a deviation by the same player. By
construction, both plays have the same sets of visited and
infinitely-visited states. Hence we have found an advantageous
deviation from~$\sigma_\Agt$ for one player, contradicting the fact
that $\sigma_\Agt$ is a Nash equilibrium.
\end{proof}
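The target-chasing construction of the history~$\pi$ in the proof above can be phrased as a short algorithm. The following sketch is illustrative (it assumes the run prefix is given as a finite list of states): it compresses a prefix of~$\rho$ into a history that visits exactly the same states, uses only transitions occurring in~$\rho$, and respects the stated length bound.

```python
def compress(rho):
    """Target-chasing construction from the proof above: build a short
    history pi with the same set of visited states as the finite run
    prefix rho, every step of pi being a step occurring in rho."""
    pi = [rho[0]]
    while set(pi) != set(rho):
        # next target: first position of rho carrying a state not in pi
        t = min(i for i, q in enumerate(rho) if q not in pi)
        # closest occurrence, before the target, of the current state
        c = max(j for j in range(t) if rho[j] == pi[-1])
        pi.append(rho[c + 1])
    return pi

rho = ["a", "b", "a", "b", "a", "b", "c"]
pi = compress(rho)   # -> ["a", "b", "c"]
```

Note how the long alternation between `a` and `b` is skipped: chasing the closest occurrence of the current state before the target is exactly what yields the $1+|\Stat|\cdot(|\Stat|-1)/2$ bound.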
\subsection{Encoding the value problem as a constrained NE existence problem}
We now give a reduction that will be used to infer hardness results for the
constrained NE existence problem from the hardness of the value problem (as
defined in Section~\ref{ssec-probs}): this will be the case when the hardness
proof for the value problem involves the construction of a game satisfying the
hypotheses of the proposition.
\begin{proposition}
\label{lem:link-value-constr}
Let~$\calG=\tuple{\Stat,\Agt,\Act,\Allow,\Tab,(\mathord\prefrel_A)_{A\in\Agt}
}$ be a two-player zero-sum game played between players $A$ and $B$,
such that:
\begin{itemize}
\item the preference relation $\prefrel_{A}$ for player~$A$ is total,
Noetherian and almost-well-founded (see~Section~\ref{ssec-gendef});
  \item $\calG$ is determined, \ie, for every play~$\pi$:
\[
[\exists \sigma_A.\ \forall \sigma_B.\ \pi \prefrel_A
\Out(\sigma_A,\sigma_B)] \quad \Leftrightarrow \quad [\forall
\sigma_B.\ \exists \sigma_A.\ \pi \prefrel_A
\Out(\sigma_A,\sigma_B)].
\]
\end{itemize}
Let $\calG'$ be the (non-zero-sum) game obtained from~$\calG$ by
replacing the preference relation of player~$B$ by the one where all
plays are equivalent. Then, for every state $s$, for every play
$\pi$ from $s$, the two following properties are equivalent:
\begin{enumerate}[label=(\roman*)]
\item there is a Nash equilibrium in $\calG'$ from~$s$ with
outcome~$\rho$ such that $\pi \not\prefrel_A \rho$;
\item player $A$ cannot ensure $\pi$ from~$s$ in $\calG$.
\end{enumerate}
\end{proposition}
\begin{proof}
In this proof, $\sigma_A$ and $\sigma'_A$ (resp. $\sigma_B$ and
$\sigma'_B$) refer to player-$A$ (resp. player-$B$) strategies.
Furthermore we will write $\Out(\sigma_A,\sigma_B)$ instead of
$\Out_{\calG}(s,(\sigma_A,\sigma_B))$.
We first assume there is a Nash equilibrium $(\sigma_A,\sigma_B)$ in
$\calG'$ from $s$ such that $\pi \not\prefrel_A
\Out(\sigma_A,\sigma_B)$. Since $\prefrel_A$ is total,
$\Out(\sigma_A,\sigma_B) \prec_A \pi$. Consider a strategy
$\sigma'_A$ of player $A$ in $\calG$. As $(\sigma_A,\sigma_B)$ is a
Nash equilibrium, it holds that $\Out(\sigma'_A,\sigma_B) \prefrel_A
\Out(\sigma_A,\sigma_B)$, which implies $\Out(\sigma'_A,\sigma_B)
\prec_A \pi$. We conclude that condition $(ii)$ holds.
Assume now property $(ii)$. As the preference relation is
  Noetherian, we can select $\pi^+$, the largest element for
  $\prefrel_A$ that can be ensured by player $A$. Let $\sigma_A$ be a
corresponding strategy: for every strategy $\sigma_B$, $\pi^+
\prefrel_A \Out(\sigma_A,\sigma_B)$. Towards a contradiction,
assume now that for every strategy $\sigma'_B$, there exists a
strategy $\sigma'_A$ such that $\pi^+ \prec_A
\Out(\sigma'_A,\sigma'_B)$. Consider the set $S$ of such outcomes,
and define $\pi'$ as its minimal element (this is possible since the
order $\prefrel_A$ is almost-well-founded). Notice then that $\pi^+
\prec_A \pi'$, and also that for every strategy $\sigma'_B$, there
exists a strategy $\sigma'_A$ such that $\pi' \prefrel_A
\Out(\sigma'_A,\sigma'_B)$. Then, as the game is determined, we get
  that there exists some strategy $\sigma'_A$ such that for every
  strategy $\sigma'_B$, it holds that $\pi' \prefrel_A
\Out(\sigma'_A,\sigma'_B)$. In particular, strategy $\sigma'_A$
ensures $\pi'$, which contradicts the maximality of $\pi^+$.
  Therefore, there is some strategy $\sigma'_B$ such that, for every
  strategy $\sigma'_A$, $\pi^+ \not\prec_A \Out(\sigma'_A,\sigma'_B)$,
which means $\Out(\sigma'_A,\sigma'_B) \prefrel_A \pi^+$. We show
now that $(\sigma_A,\sigma'_B)$ is a witness for property $(i)$. We
have seen on the one hand that $\pi^+ \prefrel_A
\Out(\sigma_A,\sigma'_B)$, and on the other hand that
$\Out(\sigma_A,\sigma'_B) \prefrel_A \pi^+$. By~hypothesis, $\pi^+
\prec_A \pi$, which yields $\Out(\sigma_A,\sigma'_B) \prec_A \pi$.
Pick another strategy $\sigma'_A$ for player $A$. We have seen that
$\Out(\sigma'_A,\sigma'_B) \prefrel_A \pi^+$, which implies
$\Out(\sigma'_A,\sigma'_B) \prefrel_A
\Out(\sigma_A,\sigma'_B)$. This concludes the proof of $(i)$.
\end{proof}
\begin{remark}
Any finite total preorder is obviously Noetherian and
almost-well-founded. Also, any total preorder isomorphic to the set
of non-positive integers is Noetherian and almost-well-founded.
%
On the other hand, a total preorder isomorphic to $\{1/n \mid n \in
\mathbb{N}^+\}$ is Noetherian but not almost-well-founded.
\end{remark}
\subsection{Encoding the value problem as a NE existence problem}
\label{sec:link-value-exist}
We prove a similar result for the NE existence problem. In this reduction,
however, we have to modify the game by introducing a truly concurrent
move at the beginning of the game. This is necessary since for
turn-based games with $\omega$-regular winning conditions, there
always exists a Nash equilibrium~\cite{CMJ04}, hence the NE existence
problem would be trivial.
Let
$\calG=\tuple{\Stat,\Agt,\Act,\Allow,\Tab,(\mathord\prefrel_A)_{A\in\Agt}}$ be
a two-player zero-sum game, with players $A$ and~$B$. Given a state~$s$ of
$\calG$ and a play~$\pi$ from~$s$, we~define a game $\calG_\pi$ by adding two
states $s_0$ and~$s_1$, in the very same way as in
Figure~\ref{fig-init-module}, on page~\pageref{fig-init-module}.
From~$s_0$, $A$~and $B$ play a matching-penny
game to either go to the sink state~$s_1$, or to the state~$s$ in the
game~$\calG$.
We~assume the same hypotheses as in Proposition~\ref{lem:link-value-constr}
for the preference relation~$\prefrel_A$. Let~$\pi^+$ be a play in the highest
equivalence class for $\prefrel_A$ smaller than~$\pi$ (it exists since
$\prefrel_A$ is Noetherian). In~$\calG_\pi$, player~$B$ prefers runs that end
in~$s_1$: formally, the preference relation~$\prefrel^\pi_B$ of player $B$ in
$\calG_\pi$ is given by $\pi' \prefrel^\pi_B \pi'' \Leftrightarrow \pi'' = s_0
\cdot s_1^\omega \lor \pi' \ne s_0 \cdot s_1^\omega$. On~the other hand,
player~$A$ prefers a path of $\calG$ over going to $s_1$, if and only~if,
it~is at least as good as~$\pi$: formally, the preference relation
$\prefrel^\pi_A$ for player~$A$ in~$\calG_\pi$ is given by $s_0 \cdot \pi'
\prefrel^\pi_A s_0 \cdot \pi'' \Leftrightarrow \pi' \prefrel_A \pi''$, and
$s_0 \cdot s_1^\omega \sim^\pi_A s_0 \cdot \pi^+$.
\begin{proposition}
\label{lem:link-value-exist}
Let
$\calG=\tuple{\Stat,\Agt,\Act,\Allow,\Tab,(\mathord\prefrel_A)_{A\in\Agt} }$
be a two-player zero-sum game, with players $A$ and $B$, such that:
\begin{itemize}
\item the preference relation $\prefrel_{A}$ for player~$A$ is
total, Noetherian and almost-well-founded;
\item $\calG$ is determined.
\end{itemize}
Let $s$ be a state and $\pi$ be a play in $\calG$ from $s$. Consider
the game~$\calG_\pi$ defined above. Then the following two
properties are equivalent:
\begin{enumerate}[label=(\roman*)]
\item there is a Nash equilibrium in $\calG_\pi$ from $s_0$;
\item player $A$ cannot ensure $\pi$ from $s$ in $\calG$.
\end{enumerate}
\end{proposition}
\noindent In particular, in a given class of games, if the hardness proof of the
value problem involves a game which satisfies the hypotheses of the
proposition, and if $\calG_\pi$ belongs to that class, then the
NE existence problem is at least as hard as the complement of the value
problem.
\begin{proof}
Assume that player~$A$ cannot ensure at least $\pi$ from~$s$
  in~$\calG$. Then, according to
Proposition~\ref{lem:link-value-constr}, there is a Nash equilibrium
$(\sigma_A,\sigma_B)$ in the game~$\calG'$ of
Proposition~\ref{lem:link-value-constr} with outcome $\rho$ such
that $\pi \not\prefrel_A \rho$. Consider the strategy profile
$(\sigma^\pi_A,\sigma^\pi_B)$ in~$\calG_\pi$ that consists in
playing the same action for both players in~$s_0$, and then if the
path goes to~$s$, to play according to $(\sigma_A,\sigma_B)$.
Player~$B$ gets her best possible payoff under that strategy
profile. If $A$ could change her strategy to get a payoff better
than $s_0 \cdot \pi^+$, then it would induce a strategy in $\calG'$
giving her a payoff better than~$\rho$ (when played with strategy
$\sigma_B$), which contradicts the fact that $(\sigma_A,\sigma_B)$
is a Nash equilibrium in~$\calG'$. Therefore,
$(\sigma^\pi_A,\sigma^\pi_B)$ is a Nash equilibrium in~$\calG_\pi$.
Conversely, assume that $A$ can ensure $\pi$ from $s$ in $\calG$,
and assume towards a contradiction that there is a Nash equilibrium
$(\sigma^\pi_A,\sigma^\pi_B)$ in $\calG_\pi$ from $s_0$. Then
$\Out_{\calG_\pi}(\sigma^\pi_A,\sigma^\pi_B)$ does not end in $s_1$,
otherwise player $A$ could improve by switching to $s$ and then
playing according to a strategy which ensures $\pi$. Also,
$\Out_{\calG_\pi}(\sigma^\pi_A,\sigma^\pi_B)$ cannot end in $\calG$
  either, otherwise player $B$ would improve by switching to $s_1$. This
  is a contradiction, so there is no Nash equilibrium in $\calG_\pi$
  from $s_0$, which concludes the proof.
\end{proof}
\subsection{Encoding the constrained NE existence problem as an NE
existence problem}
\label{sec:link-constr-exist}
The next proposition makes a link between the existence of a Nash
equilibrium where a player gets a payoff larger than some bound and
the (unconstrained) existence of a Nash equilibrium in a new game.
This will allow us, in some specific cases, to transfer hardness results
from the constrained NE existence problem to the NE existence problem.
The construction is inspired by the previous one, but it applies to any
game with at least two players and to any two selected players, as
follows. Let
$\calG=\tuple{\Stat,\Agt,\Act,\Allow,\Tab,(\mathord\prefrel_A)_{A\in\Agt} }$
be a concurrent game, $s$ be a~state of $\calG$, $\rho$ be a play from
$s$, and $A_i$ and~$A_j$ be two distinct players. We define the new
game $E(\calG,A_i,A_j,\rho)$ again in the same way as on
Figure~\ref{fig-init-module}. Now, in~$s_0$, the two players $A_i$~and
$A_j$~play a matching-penny game to either go to the sink state~$s_1$,
or to state~$s$ in game~$\calG$.
For player~$A_j$, the preference relation in $E(\calG,A_i,A_j,\rho)$
is given by $\prefrel'_{A_j}$ such that $s_0\cdot s_1^\omega
\prec'_{A_j} s_0 \cdot \pi$ and $s_0 \cdot \pi \prefrel'_{A_j} s_0
\cdot \pi' \Leftrightarrow \pi \prefrel_{A_j} \pi'$, for any path
$\pi$ and $\pi'$ from $s$ in $\calG$. For player $A_i$ the preference
relation is $s_0 \cdot \pi \prefrel'_{A_i} s_0 \cdot \pi'
\Leftrightarrow \pi \prefrel_{A_i} \pi'$, for any path $\pi$ and
$\pi'$ from $s$ in $\calG$, and $s_0\cdot s_1^\omega \sim_{A_i} s_0
\cdot \rho$. For any other player~$A_k$, the preference relation
$E(\calG,A_i,A_j,\rho)$ is given by $s_0 \cdot \pi \prefrel'_{A_k} s_0
\cdot \pi' \Leftrightarrow \pi \prefrel_{A_k} \pi'$ for any path $\pi$
and $\pi'$ from $s$ in $\calG$, and $s_0\cdot s_1^\omega \sim_{A_k}
s_0 \cdot \rho$.
\begin{proposition}\label{lem:link-constr-exist}
Let
$\calG=\tuple{\Stat,\Agt,\Act,\Allow,\Tab,(\mathord\prefrel_A)_{A\in\Agt} }$
be a concurrent game, let $s$ be a state of $\calG$, and $A_i$ and
$A_j$ be two distinct players participating in $\calG$. Pick two
plays $\pi$ and $\rho$ from $s$ such that $\rho \prefrel_{A_i}
\pi$. If there is a Nash equilibrium in $\calG$ whose outcome
is~$\pi$, then there is a Nash equilibrium in
$E(\calG,A_i,A_j,\rho)$ whose outcome is ${s_0 \cdot \pi}$.
Reciprocally, if there is a Nash equilibrium in
$E(\calG,A_i,A_j,\rho)$ whose outcome is $s_0 \cdot \pi$, then there
is a Nash equilibrium in $\calG$ whose outcome is~$\pi$.
\end{proposition}
\begin{proof}
Assume that there is a Nash equilibrium $\sigma_\Agt$ in
$\calG$ with outcome $\pi$ such that $\rho \prefrel_{A_i} \pi$.
Then $s_0 \cdot s_1^\omega \prefrel'_{A_i} s_0 \cdot \pi$. Consider
the strategy profile in $E(\calG,A_i,A_j,\rho)$ that consists for
$A_i$ and~$A_j$ in playing different actions in $s_0$ and when the
path goes to~$s$, to play according to~$\sigma_\Agt$.
Players $A_i$ and~$A_j$ have no interest in changing their
strategies in~$s_0$, since for~$A_j$ all plays of~$\calG$ are better
than $s_0 \cdot s_1^\omega$, and for~$A_i$ the play $s_0 \cdot \pi$
is better than $s_0 \cdot s_1^\omega$. Hence, this is a Nash
equilibrium in game $E(\calG,A_i,A_j,\rho)$.
Reciprocally, if there is a Nash equilibrium in $E(\calG,A_i,A_j,\rho)$, its
outcome cannot end in~$s_1$, since $A_j$ would have an interest in changing
her strategy in~$s_0$ (all~plays of~$\calG$ are then better for her). The
strategies followed from~$s$ thus define a Nash equilibrium in~$\calG$.
\end{proof}
If we consider a class of games such that $E(\calG,A_i,A_j,\rho)$
belongs to that class when $\calG$ does, then the NE existence problem is
at least as hard as the constrained NE existence problem. Note
however that the reduction assumes lower bounds on the payoffs, and we
do not have a similar result for upper bounds on the payoffs. For
instance, as we will see in Section~\ref{sec:buchi}, for a conjunction
of B\"uchi objectives, we do not know whether the NE existence problem is
in \P~(as the value problem) or \NP-hard (as is the existence of an
equilibrium where all the players are losing).
\section{The suspect game}
\label{sec:suspect}
In this section, we construct an abstraction of a multi-player
game~$\calG$ as a two-player zero-sum game~$\calH$, such that there is
a correspondence between Nash equilibria in~$\calG$ and winning
strategies in~$\calH$ (formalised in forthcoming
Theorem~\ref{thm:eq-win}). This transformation does not require the
game to be finite and is conceptually much deeper than the reductions
given in the previous section; it will allow us to use algorithmic
techniques from zero-sum games to compute Nash equilibria and hence
solve the value and (constrained) NE existence problems in various
settings.
\subsection{Construction of the suspect game}
We fix a concurrent game
$\calG=\tuple{\Stat,\Agt,\Act,\Allow,\Tab,(\mathord\prefrel_A)_{A\in\Agt} }$
for the rest of the section, and begin with introducing a few extra
definitions.
\begin{definition}
\label{def:trigger}
A strategy profile~$\sigma_\Agt$ is a \emph{trigger profile} for a
play~$\pi$ from some state~$s$ if, for every player~$A\in \Agt$, for
every strategy~$\sigma'_A$ of player~$A$, the~path~$\pi$ is at
least as good as the outcome of $\replaceter \sigma A {\sigma'_A}$
from~$s$ (that~is, $\Out(s,\replaceter \sigma A {\sigma'_A})
\prefrel_A \pi$).
\end{definition}
The following result is folklore and a direct consequence of the
definition:
\begin{lemma}
A Nash equilibrium is a trigger profile for its outcome.
Reciprocally, a strategy profile which is a trigger profile for its
outcome~is a Nash equilibrium.
\end{lemma}
\begin{definition}[\cite{BBM10a}]
Given two states~$s$ and~$s'$, and a move~$m_{\Agt}$, the set of
\newdef{suspect players} for~$(s,s')$ and~$m_{\Agt}$ is the set
\[
\Susp((s,s'),m_{\Agt}) = \{A \in\Agt \mid \exists\,m' \in
\Allow(s,A).\ \Tab(s,\replaceter m A {m'}) = s'\}.
\]
Given a path~$\rho$ and a strategy profile~$\sigma_\Agt$, the set of
suspect players for~$\rho$ and~$\sigma_\Agt$ is the set of players
that are suspect along each transition of~$\rho$, \ie, it~is the set
\[
\Susp(\rho,\sigma_\Agt) = \Bigl\{A \in\Agt \Bigm| \forall i<
\size\rho.\
A\in\Susp\bigl((\rho_{=i},\rho_{=i+1}),\sigma_\Agt(\rho_{\leq
i})\bigr)\Bigr\}.
\]
\end{definition}
Intuitively, player~$A\in\Agt$ is a suspect for transition~$(s,s')$
and move~$\indicebis mA\Agt$ if she can unilaterally change her action
to activate the transition $(s,s')$: if~$s' \ne \Tab(s,
\indicebis mA\Agt)$, then this may be due to a deviation
from~$m_{\Agt}$ of any of the players in the set
$\Susp((s,s'),m_{\Agt})$, and no one else. If $s' = \Tab(s, \indicebis
mA\Agt)$, it~may simply be the case that no one has deviated, so
everyone is a potential suspect for the next moves.
Similarly, we easily infer that player~$A$ is in
$\Susp(\rho,\sigma_\Agt)$ if, and only if, there is a
strategy~$\sigma'_A$ such that $\Out(\stat,\replaceter \sigma A
{\sigma'_A})=\rho$.
Note that the notion of suspect players requires moves and arenas to
be deterministic, and therefore everything which follows assumes the
restriction to pure strategy profiles and to deterministic game
structures.
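As an illustration, the suspect set of the definition above can be computed directly from the transition table. The following Python sketch uses an encoding of our own choosing, not notation from the paper: players are indexed $0,\dots,n-1$, a move is a tuple of actions, \texttt{Tab} maps pairs (state, move) to successor states, and \texttt{Allow} maps (state, player) to the set of allowed actions.

```python
def suspects(Tab, Allow, players, s, s2, move):
    """Susp((s, s2), move): the players that can reach s2 from s by
    unilaterally changing their own component of `move`.
    Hypothetical encoding: `move` is a tuple of actions indexed by
    player; Tab maps (state, move) to the successor state; Allow maps
    (state, player) to the set of allowed actions."""
    susp = set()
    for i in players:
        for m in Allow[(s, i)]:
            deviation = move[:i] + (m,) + move[i + 1:]
            if Tab[(s, deviation)] == s2:
                susp.add(i)
                break
    return susp
```

When $s_2 = \Tab(s, m_\Agt)$, every player is returned, since the trivial deviation $m' = m_A$ qualifies; this matches the intuition above that everyone remains a suspect when no one has visibly deviated.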
\bigskip
We fix a play $\pi$ in~$\calG$. From game~$\calG$ and play~$\pi$, we build the
\emph{suspect game}~$\calH(\calG,\pi)$,
which is a two-player turn-based game defined as follows.
The players in~$\calH(\calG,\pi)$ are named~\Eve and \Adam. Since~$\calH(\calG,\pi)$ is
turn-based, its state space can be written as the disjoint union of
the~set~$V_\shortEve$ controlled by~\Eve, which is (a~subset~of)
$\Stat \times 2^\Agt$, and the set~$V_\shortAdam$ controlled by~\Adam,
which~is (a~subset~of) $\Stat \times 2^\Agt \times \Act^\Agt$. The
game is played in the following way: from a configuration~$(s,P)$
in~$V_\shortEve$, \Eve chooses a legal move~$m_\Agt$ from~$s$; the
next state is $(s,P,m_\Agt)$; then \Adam chooses some state~$s'$
in~$\Stat$, and the new configuration is~$(s',P
\cap\Susp((s,s'),m_\Agt))$. In~particular, when the state~$s'$ chosen
by \Adam is such that $s'=\Tab(s,m_\Agt)$ (we~say that \Adam
\newdef{obeys}~\Eve when this is the case), then the new configuration
is~$(s',P)$.
We define projections $\Sproj_1$ and~$\Sproj_2$ from $V_\shortEve$ on
$\Stat$ and~$2^{\Agt}$, resp., by $\Sproj_1(s,P) = s$ and
$\Sproj_2(s,P)=P$. We~extend these projections to paths in a
natural~way (but only using \Eve's states in order to avoid
stuttering), letting $\Sproj_1((s_0,P_0) \cdot (s_0,P_0,m_0) \cdot
(s_1,P_1)\cdots ) = s_0 \cdot s_1 \cdots$.
For any play~$\rho$, $\Sproj_2(\rho)$ (seen as a sequence of sets of
players of~$\calG$) is non-increasing, therefore its
limit~$\limitpi2(\rho)$ is well defined. We notice that if
$\limitpi2(\rho) \ne \emptyset$, then $\Sproj_1(\rho)$ is a play
in~$\calG$. An~outcome~$\rho$ is \emph{winning for~\Eve}, if for
all~$A\in \limitpi2(\rho)$, it~holds ${\Sproj_1(\rho) \prefrel_A \pi}$.
The~\emph{winning region} $W(\calG, \pi)$ (later simply denoted
by~$W$ when $\calG$ and $\pi$ are clear from the context) is the set
of configurations of~$\calH(\calG, \pi)$ from which \Eve has a
winning strategy.
Intuitively \Eve tries to have the players play a Nash equilibrium,
and \Adam tries to disprove that it is a Nash equilibrium, by~finding
a possible deviation that improves the payoff of one of the players.
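Concretely, the reachable part of the suspect-game arena can be built by a breadth-first exploration from $(s,\Agt)$. The sketch below (Python, reusing the same hypothetical dictionary encoding of \texttt{Tab} and \texttt{Allow} as above, which is our own assumption) enumerates \Eve's and \Adam's configurations; it inlines the suspect-set computation.

```python
from collections import deque
from itertools import product

def suspect_arena(Tab, Allow, players, init):
    """Reachable configurations of the suspect game from (init, Agt).
    Eve states are pairs (s, P); Adam states are triples (s, P, move)."""
    states = {s for (s, _) in Tab} | set(Tab.values())
    start = (init, frozenset(players))
    eve, adam = {start}, set()
    queue = deque([start])
    while queue:
        s, P = queue.popleft()
        # Eve picks a legal move from s...
        for move in product(*(sorted(Allow[(s, i)]) for i in players)):
            adam.add((s, P, move))
            # ...then Adam picks any state s2 of G
            for s2 in states:
                susp = frozenset(
                    i for i in players
                    if any(Tab[(s, move[:i] + (m,) + move[i + 1:])] == s2
                           for m in Allow[(s, i)]))
                nxt = (s2, P & susp)
                if nxt not in eve:
                    eve.add(nxt)
                    queue.append(nxt)
    return eve, adam
```

Proposition~\ref{lem:polynomial-size} below bounds the size of the set computed by this exploration when the transition table is encoded explicitly.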
\subsection{Correctness of the suspect-game construction}
The next lemma establishes a correspondence between winning strategies
in $\calH(\calG,\pi)$ and trigger profiles (and therefore Nash equilibria) in
$\calG$.
\begin{lemma}\label{lem:suspect-game}\label{lemma-suspectgame}
Let $s$ be a state of $\calG$ and $\pi$ be a play from $s$
in~$\calG$. The following two conditions are equivalent:
\begin{itemize}
\item \Eve has a winning strategy in~$\calH(\calG,\pi)$
from~$(s,\Agt)$, and its outcome~$\rho'$ from~$(s,\Agt)$ when
\Adam~obeys~\Eve is such that $\Sproj_1(\rho')=\rho$;
\item there is a trigger profile for $\pi$ in~$\calG$ from
state~$s$ whose outcome from~$s$ is~$\rho$.
\end{itemize}
\end{lemma}
\begin{proof}
Assume there is a winning strategy~$\sigma_\shortEve$ for \Eve
in~$\calH(\calG,\pi)$ from~$(s,\Agt)$, whose outcome from~$(s,\Agt)$ when \Adam
obeys \Eve is~$\rho'$ with $\Sproj_1(\rho') = \rho$. We~define the
strategy profile~$\sigma_\Agt$ according to the actions played
by~\Eve.
%
Pick a history~$g=s_1 s_2\cdots s_{k+1}$, with $s_1=s$. Let~$h$ be
the outcome of~$\sigma_\shortEve$ from~$s$ ending in a state
of~$V_\shortEve$ and such that $\Sproj_1(h)=s_1\cdots s_k$. This
history is uniquely defined as follows: the first state of~$h$
is~$(s_1,\Agt)$, and if its $(2i+1)$-st state is~$(s_i,P_i)$, then
its~$(2i+2)$-nd state is $(s_i,P_i,\sigma_\shortEve(h_{\leq 2i+1}))$
and its $(2i+3)$-rd state is $(s_{i+1}, P_i\cap
\Susp((s_i,s_{i+1}),\sigma_\shortEve(h_{\leq 2i+1})))$.
%
Now, write $(s_k,P_k)$ for the last state of~$h$, and let $h'=h\cdot
(s_k,P_k,\sigma_\shortEve(h))\cdot (s_{k+1}, P_k\cap
\Susp((s_k,s_{k+1}), \sigma_\shortEve(h)))$. Then we define
$\sigma_\Agt(g)=\sigma_\shortEve(h')$. Notice that when $g\cdot s$
is a prefix of~$\Sproj_1(\rho')$, then $g\cdot s\cdot
\sigma_\Agt(g\cdot s)$ is also a prefix of~$\Sproj_1(\rho')$. In
particular, $\Out(s,\sigma_\Agt) = \Sproj_1(\rho') = \rho$.
We~now prove that $\sigma_\Agt$ is a trigger profile for~$\pi$.
Pick a player~$A\in \Agt$, a~strategy~$\sigma'_A$ for player~$A$,
and let $g=\Out(\stat, \replaceter \sigma A {\sigma'_A})$. With a
play~$g$, we~associate a play~$h$ in~$\calH(\calG,\pi)$ in the same way as
above. Then player~$A$ is a suspect along all the transitions
of~$g$, so that she~belongs to~$\limitpi2(h)$. Now,
as~$\sigma_\shortEve$~is winning, $\Sproj_1(h) \prefrel_A \pi$, which
proves that $\sigma_\Agt$ is a trigger profile.
\medskip Conversely, assume that $\sigma_\Agt$ is a trigger profile
for~$\pi$ whose outcome is~$\rho$, and define the
strategy~$\sigma_\shortEve$ by $\sigma_\shortEve(h) =
\sigma_\Agt(\Sproj_1(h))$. Notice that the outcome~$\rho'$
of~$\sigma_\shortEve$ when \Adam obeys~\Eve satisfies
$\Sproj_1(\rho')=\rho$.
Let~$\eta$ be an outcome of~$\sigma_\shortEve$ from~$\stat$,
and~$A\in \limitpi2(\eta)$. Then $A$~is a suspect for each
transition along~$\Sproj_1(\eta)$, which means that for all~$i$, there
is a move~$m^A_i$ such that
\[
\Sproj_1(\eta_{=i+1}) = \Tab(\Sproj_1(\eta_{=i}),
\sigma_\Agt(\Sproj_1(\eta_{\leq i}))[A\mapsto m^A_i]).
\]
Therefore there is a strategy $\sigma_A'$ such that
$\Sproj_1(\eta)=\Out(s,\sigma_\Agt[A\mapsto\sigma_A'])$. Since
$\sigma_\Agt$ is a trigger profile for~$\pi$, it holds that
$\Sproj_1(\eta) \prefrel_A \pi$. As~this holds for any~$A\in
\limitpi2(\eta)$, $\sigma_\shortEve$ is winning.
\end{proof}
We now state the correctness theorem for the suspect game construction.
\begin{theorem}\label{thm:eq-win}
Let
$\calG=\tuple{\Stat,\Agt,\Act,\Allow,\Tab,(\mathord\prefrel_A)_{A\in\Agt} }$
be a concurrent game, $s$ be a state of~$\calG$, and $\pi$ be a play
in~$\calG$. The following two conditions are equivalent:
\begin{itemize}
\item there is a Nash equilibrium $\sigma_\Agt$ from~$s$ in~$\calG$
whose outcome is~$\pi$.
\item there is a play~$\rho$ from $(s,\Agt)$ in~$\calH(\calG,\pi)$,
\begin{enumerate}
\item \label{cond:proj} such that $\Sproj_1(\rho)=\pi$;
\item \label{cond:obey} along which \Adam always obeys~\Eve; and
\item\label{cond:win} such that for all indices~$i$, there is a
strategy $\sigma^i_\shortEve$ for~\Eve, for which any play in
$\rho_{\le i} \cdot \Out(\rho_{=i}, \sigma^i_\shortEve)$ is
winning for~\Eve.
\end{enumerate}
\end{itemize}
\end{theorem}
\begin{proof}
The Nash equilibrium is a trigger profile, and from
Lemma~\ref{lemma-suspectgame}, we~get a winning
strategy~$\sigma_\shortEve$ in~$\calH(\calG,\pi)$. The outcome~$\rho$
of~$\sigma_\shortEve$ from~$s$ when \Adam obeys~\Eve is such that
$\pi=\Sproj_1(\rho)$ is the outcome of the Nash equilibrium. Now
for every prefix $\rho_{\le i}$, the strategy
$\sigma_\shortEve^i\colon h \mapsto \sigma_\shortEve(\rho_{\le
i}\cdot h)$ is such that any play in $\rho_{\le i}\cdot
\Out(\rho_{=i},\sigma^i_\shortEve)$ is winning for~\Eve.
\medskip Conversely, let $\rho'$ be a path in~$\calH(\calG,\pi)$ and
assume it satisfies all three conditions. We~define a
strategy~$\lambda_\shortEve$ that follows~$\rho'$ when
\Adam~obeys. Along~$\rho'$, this strategy is defined as follows:
$\lambda_\shortEve(\rho'_{\leq 2i}) = m_\Agt$ such that
$\Tab(\Sproj_1(\rho'_{=i}),m_\Agt) = \Sproj_1(\rho'_{=i+1})$. Such a
legal move must exist since \Adam obeys~\Eve along~$\rho'$ by
condition~\ref{cond:obey}. Now, if \Adam deviates from the obeying
strategy (at step~$i$), we~make~$\lambda_\shortEve$ follow the
strategy~$\sigma_\shortEve^i$ (given by condition~\ref{cond:win}),
which will ensure that the outcome is winning for~\Eve.
The outcomes of~$\lambda_\shortEve$ are then either the path~$\rho'$,
or a path~$\rho''$ obtained by following a winning strategy after a
prefix of~$\rho'$. The path~$\rho''$ is losing for~\Adam, hence for
all $A\in \limitpi2(\rho')$, $\rho''\prefrel_A \rho'$. This proves
that $\lambda_\shortEve$ is a winning strategy. Applying
Lemma~\ref{lemma-suspectgame}, we~obtain a strategy
profile~$\sigma_\Agt$ in~$\calG$ that is a trigger profile
for~$\pi$. Moreover, the~outcome of~$\sigma_\Agt$ from~$s$
is~$\Sproj_1(\rho')$ (using condition~\ref{cond:proj}), so that
$\sigma_\Agt$ is a Nash equilibrium.
\end{proof}
\begin{remark}\label{rem:prefix-independant}
Assume the preference relations of each player~$A$ in $\calG$ are
prefix-independent, \ie, for all plays $\rho$ and $\rho'$, $\rho
\prefrel_A \rho'$ iff for all indices $i$ and $j$, $\rho_{\ge i}
\prefrel_A \rho'_{\ge j}$. Then the winning condition of \Eve is
also prefix-independent, and condition~\ref{cond:win} just states
that $\rho'$ has to stay within the winning region of~\Eve. Note
that, for prefix-dependent preference relations,
condition~\ref{cond:win} does not reduce to stay within the winning
region of~\Eve: for instance, for safety objectives, if the losing
states of all the players have been visited then any prolongation
will satisfy the condition, even though it might leave the winning
region of~\Eve.
\end{remark}
\begin{example}
We depict on Figure~\ref{fig-suspg} part of the suspect game for the game of
Figure~\ref{fig-ex}. Note that the structure of $\calH(\calG,\pi)$
does not depend on~$\pi$. Only the winning condition is affected by
the choice of~$\pi$.
\begin{figure}[t]
\begin{center}
\begin{tikzpicture}[thick]
\everymath{\scriptstyle}
\draw (0,1.5) node [draw,dashed] (A) {$\ell_0,\{\pl 1,\pl 2\}$};
%
\draw (3,3.5) node [draw] (B1) {$\ell_0,\{\pl 1,\pl 2\},\langle 1,1 \rangle$};
\draw (3,2) node [draw] (B2) {$\ell_0,\{\pl 1,\pl 2\},\langle 1,2 \rangle$};
\draw (3,1) node [draw] (B3) {$\ell_0,\{\pl 1,\pl 2\},\langle 2,1 \rangle$};
\draw (3,-.5) node [draw] (B4) {$\ell_0,\{\pl 1,\pl 2\},\langle 2,2 \rangle$};
\draw [-latex'] (A) -- (B1);
\draw [-latex'] (A) -- (B2);
\draw [-latex'] (A) -- (B3);
\draw [-latex'] (A) -- (B4);
%
\begin{scope}[yshift=1mm]
\draw (6,5.5) node [draw,dashed] (C10) {$\ell_0,\emptyset$};
\draw (6,4.5) node [draw,dashed] (C11) {$\ell_1,\{\pl 1,\pl 2\}$};
\draw (6,3.5) node [draw,dashed] (C12) {$\ell_2,\{\pl 2\}$};
\draw (6,2.5) node [draw,dashed] (C13) {$\ell_3,\{\pl 1\}$};
\draw (9,4) node [draw] (D1111) {$\ell_1,\{\pl 1,\pl 2\}, \langle 1,1\rangle$};
\draw (9,5) node [draw] (D1112) {$\ell_1,\{\pl 1,\pl 2\}, \langle 1,2\rangle$};
\draw (9,3) node [draw] (D1211) {$\ell_2,\{\pl 2\}, \langle 1,1\rangle$};
\draw (9,2) node [draw] (D1311) {$\ell_3,\{\pl 1\}, \langle 1,1\rangle$};
\end{scope}
\begin{scope}[yshift=-3mm]
\draw (6,-1) node [draw,dashed] (C41) {$\ell_1,\emptyset$};
\draw (6,0) node [draw,dashed] (C42) {$\ell_2,\{\pl 1\}$};
\draw (6,1) node [draw,dashed] (C43) {$\ell_3,\{\pl 2\}$};
\draw (9,0) node [draw] (D4211) {$\ell_2,\{\pl 1\}, \langle 1,1\rangle$};
\draw (9,1) node [draw] (D4311) {$\ell_3,\{\pl 2\}, \langle 1,1\rangle$};
\end{scope}
\draw [-latex'] (B1) -- (C10);
\draw [-latex'] (B1) -- (C11);
\draw [-latex'] (B1) -- (C12);
\draw [-latex'] (B1) -- (C13);
\draw [-latex'] (B4) .. controls +(180:2cm) and +(-90:2cm) .. (A);
\draw [-latex'] (B4) -- (C41);
\draw [-latex'] (B4) -- (C42);
\draw [-latex'] (B4) -- (C43);
\draw[-latex'] (C11.-20) .. controls +(-20:5mm) .. (D1111);
\draw[-latex'] (C11) -- (D1112);
\draw[-latex'] (C12) -- (D1211);
\draw[-latex'] (C13.-20) .. controls +(-20:5mm) .. (D1311);
\draw[-latex'] (D1111.160) .. controls +(160:5mm) .. (C11);
\draw[-latex'] (D1111) -- (C12);
\draw[-latex'] (C42) -- (D4211);
\draw[-latex'] (C43.-10) .. controls +(-10:5mm) .. (D4311.-170);
\draw[-latex'] (D4311.170) .. controls +(170:5mm) .. (C43.10);
\draw[-latex'] (D1311.160) .. controls +(160:5mm) .. (C13);
\draw[dashed] (B2) -- +(15:18mm);
\draw[dashed] (B2) -- +(5:18mm);
\draw[dashed] (B2) -- +(-5:18mm);
\draw[dashed] (B2) -- +(-15:18mm);
\draw[dashed] (B3) -- +(15:18mm);
\draw[dashed] (B3) -- +(5:18mm);
\draw[dashed] (B3) -- +(-5:18mm);
\draw[dashed] (B3) -- +(-15:18mm);
\draw[dashed] (D1112) -- +(15:18mm);
\draw[dashed] (D1112) -- +(5:18mm);
\draw[dashed] (D1112) -- +(-5:18mm);
\draw[dashed] (D1112) -- +(-15:18mm);
\draw[dashed] (D1111) -- +(5:18mm);
\draw[dashed] (D1111) -- +(-5:18mm);
\draw[dashed] (D1211) -- +(15:18mm);
\draw[dashed] (D1211) -- +(5:18mm);
\draw[dashed] (D1211) -- +(-5:18mm);
\draw[dashed] (D1211) -- +(-15:18mm);
\draw[dashed] (D1311) -- +(10:18mm);
\draw[dashed] (D1311) -- +(0:18mm);
\draw[dashed] (D1311) -- +(-10:18mm);
\draw[dashed] (D4311) -- +(10:18mm);
\draw[dashed] (D4311) -- +(0:18mm);
\draw[dashed] (D4311) -- +(-10:18mm);
\draw[dashed] (D4211) -- +(15:18mm);
\draw[dashed] (D4211) -- +(5:18mm);
\draw[dashed] (D4211) -- +(-5:18mm);
\draw[dashed] (D4211) -- +(-15:18mm);
\end{tikzpicture}
\end{center}
\caption{A small part of the suspect game for the game of
Figure~\ref{fig-ex}}\label{fig-suspg}
\end{figure}
\end{example}
In the rest of the paper, we use the suspect-game construction to
algorithmically solve the NE existence problem and the constrained NE
existence problem in finite games for large classes of preference relations.
Before that, we carefully analyse the size of the suspect game when the
original game is finite.
\subsection{Size of the suspect games when the original game is finite}
We suppose that $\calG$ is finite.
At first sight, the number of states in~$\calH(\calG,\pi)$ is exponential (in~the
number of players of~$\calG$). However, there are two cases for which
we easily see that the number of states of~$\calH(\calG,\pi)$ is actually only
polynomial:
\begin{itemize}
\item if there is a state in which all the players have several
possible moves, then the transition table (which is part of the
input, as discussed in Remark~\ref{remark:encoding})
is also exponential in the number of players;
\item if the game is turn-based, then the transition table is ``small'', but
there is always at most one suspect player (unless all of them are
suspects), so that the number of reachable states in~$\calH(\calG,\pi)$ is also small.
\end{itemize}
We now prove that, due to the explicit encoding of the set of
transitions (recall Remark~\ref{remark:encoding},
page~\pageref{remark:encoding}), this can be generalised:
\begin{proposition}\label{lem:polynomial-size}
Let $\calG=\tuple{\Stat,\Agt,\Act,\Allow,\Tab,(\mathord\prefrel_A)_{A\in\Agt}
}$ be a finite concurrent game and $\pi$ be a play in $\calG$. The
number of reachable configurations from $\Stat\times \{\Agt\}$
in~$\calH(\calG,\pi)$ is polynomial in the size of~$\calG$.
\end{proposition}
\begin{proof}
The game~$\calH(\calG,\pi)$ contains the state~$(s,\Agt)$ and the states
$(s,\Agt,m_\Agt)$, where $m_\Agt$ is a legal move from~$s$; the
number of these states is bounded by ${|\Stat| + |\Tab|}$.
The successors of those states that are not of the same form, are
the $(t,\Susp((s,t),m_\Agt))$ with $t \ne \Tab(s,m_\Agt)$. If~some
player~$A\in\Agt$ is a suspect for transition~$(s,t)$, then
besides~$m_A$, she~must have at least one other action~$m'$, for
which $\Tab(s,m_\Agt [A\mapsto m']) = t$. Thus the transition table
from state~$s$ has size at least $2^{|\Susp((s,t),m_\Agt)|}$.
The successors of $(t,\Susp((s,t),m_\Agt))$ are of the form $(t',P)$
or $(t',P,m_\Agt)$ where $P$~is a subset of $\Susp((s,t),m_\Agt)$;
there can be no more than $(|\Stat| + |\Tab|) \cdot
2^{|\Susp((s,t),m_\Agt)|}$ of them, which is bounded by $(|\Stat| +
|\Tab|)\cdot |\Tab|$. The total number of reachable states is then
bounded by $(|\Stat| + |\Tab|) \cdot (1 + (|\Stat| + |\Tab|) \cdot
|\Tab|)$.
\end{proof}
\section{Single-objective preference relations}\label{sec:single}
In this section we will be interested in finite games with
single-objective preference relations.
\medskip The value problem for finite concurrent games with $\omega$-regular
objectives has standard solutions in game theory; they are given in
Table~\ref{table-single} (page~\pageref{table-single}). Let us briefly give
some explanations. Most of the basic literature on two-player games focus on
turn-based games, and in particular algorithms for solving two-player games
with $\omega$-regular objectives only deal with turn-based games (see for
instance~\cite[Chapter~2]{GTW02}). In~particular, McNaughton developed an
algorithm to solve turn-based parity games in time $O(|\Edg| \cdot
|\Stat|^{p-1})$, where $p$ is the number of priorities~\cite{McNaughton93}.
B\"uchi games and co-B\"uchi games correspond to parity games with two
priorities, hence they are solvable in polynomial time. Similarly reachability
games and safety games can be transformed into B\"uchi games by making the
target states absorbing. Hence turn-based games with these types of objectives
can be solved in polynomial time.
Note however that we can reuse these algorithms in the concurrent case as follows. Any finite
concurrent zero-sum game with objective $\Omega$ for player $A_1$ can
be transformed into a turn-based zero-sum game with objective
$\widetilde\Omega$ for player~$A_1$: the idea is to replace any edge
labelled with a pair of actions $\langle a_1,a_2\rangle$ by two
consecutive transitions labelled with $a_1$ (belonging to player
$A_1$) and with $a_2$ (belonging to player $A_2$). Furthermore, if
$\Omega$ is an $\omega$-regular condition, then so is
$\widetilde\Omega$, and the type of the objective (reachability,
B\"uchi, etc.) is preserved (note however that this transformation only
preserves player~$A_1$'s objective). Hence the standard algorithm on the
resulting turn-based game can be applied.
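The edge-splitting transformation just described can be sketched as follows, for the two-player case; the dictionary encoding of the transition table is an assumption of ours, not the paper's.

```python
def to_turn_based(Tab):
    """Split each concurrent edge s --(a1,a2)--> t into
    s --a1--> (s, a1) --a2--> t, so that A1 moves first and A2 answers.
    Tab is a hypothetical encoding mapping (state, (a1, a2)) to the
    successor state; intermediate states (s, a1) belong to A2."""
    edges_A1, edges_A2 = set(), set()
    for (s, (a1, a2)), t in Tab.items():
        edges_A1.add((s, a1, (s, a1)))       # A1's half of the edge
        edges_A2.add(((s, a1), a2, t))       # A2's answer
    return edges_A1, edges_A2
```

In the resulting game $A_2$ observes $A_1$'s action before answering; this is harmless for the value, but it is also why only player~$A_1$'s objective is preserved.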
Lower bounds for reachability/safety
and B\"uchi/co-B\"uchi games are also folklore results, and can be
obtained by encoding the circuit-value problem (we recall the encoding
in Section~\ref{ptime-hard}).
\medskip We now focus on the NE existence problem and on the constrained
NE existence problem when each player has a single ($\omega$-regular)
objective using the suspect game construction. The results are
summarised in the second column of Table~\ref{table-single}.
Streett and Muller objectives are not explicitly mentioned in the
rest of the section. The complexity of their respective (constrained)
NE existence problems, which is given in Table~\ref{table-single}, can
easily be inferred from other ones. The $\P^\NP_\parallel$-hardness
for the NE existence problem with Streett objectives follows from the
corresponding hardness for parity objectives (parity objectives can be
encoded efficiently as Streett objectives). Hardness for the
NE existence problem in Muller games is deduced from hardness of the
value problem (which holds for turn-based games), applying
Proposition~\ref{lem:link-value-exist}. For both objectives,
membership in \PSPACE follows from \PSPACE membership for objectives
given as Boolean circuits, since they can efficiently be encoded as
Boolean circuits.
We fix for the rest of the section a multi-player finite game $\calG =
\tuple{\Stat,\Agt,\Act,\Allow,\Tab,(\mathord\prefrel_A)_{A\in\Agt} }$, and we
assume that each $\prefrel_A$ is single-objective, given by set
$\Omega_A$.
\begin{remark}
\label{remark:explosion}
Let us come back to Remark~\ref{remark:encoding} on our choice of an
explicit encoding for the set of transitions. Assuming more compact
encodings, the complexity of computing Nash equilibria for
qualitative objectives does not allow to distinguish between the
intrinsic complexity of the objectives. Indeed, in the formalism
of~\cite{LMO08}, the transition function is given in each state by a
finite sequence $((\phi_0 , s_0 ), ..., (\phi_h , s_h ))$, where
$s_i \in \Stat$, and $\phi_i$ is a boolean combination of
propositions $(A = \act)$ that evaluates to true iff agent $A$
chooses action~$\act$. The transition table is then defined as
follows: $\Tab(s, m_\Agt) = s_j$ iff $j$ is the smallest index such
that $\phi_j$ evaluates to true when, for every player $A\in\Agt$,
$A$ chooses action $m_A$. It is required that the last boolean
formula $\phi_h$ be $\top$, so that no agent can enforce a deadlock.
We can actually state the following result, whose proof is postponed
to the Appendix on page~\pageref{app}.
\begin{proposition}\label{proposition:explosion}
For finite concurrent games with compact encoding of transition
functions and with reachability/B\"uchi/safety objectives, the
constrained NE existence problem is \PSPACE-hard.
\end{proposition}
\end{remark}
\begin{remark}
\label{simplification}
It is first interesting to notice that given two plays $\pi$
and~$\pi'$ the suspect games $\calH(\calG,\pi)$
and~$\calH(\calG,\pi')$ only differ in their winning conditions.
In~particular, the structure of the game only depends on~$\calG$, and
has polynomial size (see Proposition~\ref{lem:polynomial-size}).
We~denote it with~$\calJ(\calG)$. Moreover, as~each
relation~$\prefrel_A$ is given by a single objective~$\Omega_A$, the
winning condition for \Eve in $\calH(\calG,\pi)$ can be rewritten as: for
every $A \in \limitpi2(\rho) \cap \Losers(\pi)$, $\Sproj_1(\rho)$~is
losing (in~$\calG$) for player~$A$, where $\Losers(\pi)$ is the set of
players losing along $\pi$ in $\calG$.
This winning condition only depends on $\Losers(\pi)$ (not on the
precise value of play~$\pi$). Therefore in this section, the suspect
game is denoted with~$\calH(\calG,L)$, where $L \subseteq \Agt$, and
\Eve wins play~$\rho$ if, for~every $A \in \limitpi2(\rho) \cap L$,
$A$~loses along $\Sproj_1(\rho)$ in~$\calG$. In~many cases we will be
able to simplify this winning condition, and to obtain simple
algorithms to the corresponding problems.
\end{remark}
We now distinguish between the winning objectives of the
players. There are some similarities in some of the cases (for
instance safety and co-B\"uchi objectives), but they nevertheless all
require specific techniques and proofs.
\subsection{Reachability objectives}
\label{subsec:reachability}
The value problem for a reachability winning condition is \P-complete. Below,
we design a non-deterministic algorithm that runs in polynomial time for
solving the constrained NE existence problem. We~then end this subsection with a
\NP-hardness proof of both the constrained NE existence problem and the NE
existence problem. Altogether, we prove the following result:
\begin{theorem}
For finite concurrent games with single reachability objectives, the NE
existence problem and the constrained NE existence problem are \NP-complete.
\end{theorem}
\subsubsection{Reduction to a safety game}
We assume that for every player $A$, $\Omega_A$ is a single
reachability objective given by target set~$T_A$. Given $L \subseteq
\Agt$, in the suspect game~$\calH(\calG,L)$, we show that the
objective of \Eve reduces to a safety objective. We define the safety
objective~$\Omega_L$ in $\calH(\calG,L)$ by the set $T_L = \{(s,P)
\mid \exists A\in P \cap L.\ s \in T_A\}$ of target states.
\begin{lemma}
\label{lemma:reach-to-safety}
\Eve has a winning strategy in game $\calH(\calG,L)$ iff \Eve has a
winning strategy in game $\calJ(\calG)$ with safety
objective~$\Omega_L$.
\end{lemma}
\begin{proof}
We first show that any play in $\Omega_L$ is winning in
$\calH(\calG,L)$. Let $\rho \in \Omega_L$, and let $A \in
\limitpi2(\rho) \cap L$. Toward a contradiction assume that
$\Occ(\Sproj_1(\rho)) \cap T_A \ne \emptyset$: there is a state
$(s,P)$ along $\rho$ with $s \in T_A$. Obviously $\limitpi2(\rho)
\subseteq P$, which implies that $A \in P \cap L$, and hence $(s,P) \in
T_L$. This contradicts the fact that $\rho \in \Omega_L$. We have thus shown that any
winning strategy for \Eve in $\calJ(\calG)$ with safety objective
$\Omega_L$ is a winning strategy for \Eve in $\calH(\calG,L)$.
Now assume that \Eve has no winning strategy in game $\calJ(\calG)$
with safety objective~$\Omega_L$. Turn-based games with safety
objectives being determined, \Adam has a
strategy~$\sigma_\shortAdam$ which ensures that no outcome of
$\sigma_\shortAdam$ is in $\Omega_L$. If $\rho \in
\Out(\sigma_\shortAdam)$, there is a state~$(s,P)$ along $\rho$ such
that there is $A\in P \cap L$ with $s\in T_A$. We now modify the
strategy of \Adam such that as soon as such a state is reached we
switch from $\sigma_\shortAdam$ to the strategy that always obeys
\Eve. This ensures that in every outcome~$\rho'$ of the new
strategy, we reach a state $(s,P)$ such that there is $A\in P \cap
L$ with $s\in T_A$, and $\limitpi2(\rho') = P$. This strategy of
\Adam thus makes \Eve lose the game~$\calH(\calG,L)$, and \Eve
has no winning strategy in game~$\calH(\calG,L)$.
\end{proof}
\subsubsection{Algorithm}
The algorithm for solving the constrained NE existence problem in a game
where each player has a single reachability objective relies on
Theorem~\ref{thm:eq-win} and Proposition~\ref{lem:play-length}, and on
the above analysis:
\begin{enumerate}[label=(\roman*)]
\item\label{step1} guess a lasso-shaped play $\rho = \tau_1 \cdot
\tau_2^\omega$ (with $|\tau_i| \le 2 |\Stat|^2$) in $\calJ(\calG)$,
such that \Adam obeys \Eve along $\rho$, and $\pi = \Sproj_1 (\rho)$
satisfies the constraint on the payoff;
\item\label{step3} compute the set $W(\calG,\Losers(\pi))$ of states that are
winning for~\Eve in the suspect game $\calH(\calG,\Losers(\pi))$, where
$\Losers(\pi)$ is the set of losing players along~$\pi$;
\item\label{step4} check that $\rho$ stays in $W(\calG,\Losers(\pi))$.
\end{enumerate}
First notice that this algorithm is non-deterministic and runs in
polynomial time: the witness~$\rho$ guessed in step~\ref{step1} has
size polynomial; the suspect game $\calH(\calG,\Losers(\pi))$ has also
polynomial size (Proposition~\ref{lem:polynomial-size});
Step~\ref{step3} can be done in polynomial time using a standard
attractor computation~\cite[Sect.~2.5.1]{GTW02} as the game under
analysis is equivalent to a safety game
(Lemma~\ref{lemma:reach-to-safety}); finally step~\ref{step4} can
obviously be performed in polynomial time.
Step~\ref{step1} ensures that conditions~\ref{cond:obey}
and~\ref{cond:proj} of Theorem~\ref{thm:eq-win} hold for $\rho$ and
step~\ref{step4} ensures condition~\ref{cond:win}. Correctness of the
algorithm then follows from Theorem~\ref{thm:eq-win} and
Proposition~\ref{lem:play-length}.
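The attractor computation mentioned above can be illustrated by a minimal sketch (our own generic encoding of a turn-based game as sets of states and edges, not the paper's formal construction): an \Eve-state is winning if some successor is, an \Adam-state if all its successors are.

```python
def attractor(eve_states, adam_states, edges, targets):
    """Least fixpoint of states from which Eve forces a visit to `targets`
    in a turn-based game: an Eve state is added if SOME successor is
    already winning, an Adam state if ALL successors are (and it has one)."""
    succ = {s: set() for s in eve_states | adam_states}
    for u, v in edges:
        succ[u].add(v)
    win = set(targets)
    changed = True
    while changed:
        changed = False
        for s in (eve_states | adam_states) - win:
            if s in eve_states:
                good = bool(succ[s] & win)
            else:
                good = bool(succ[s]) and succ[s] <= win
            if good:
                win.add(s)
                changed = True
    return win
```

This naive fixpoint runs in quadratic time; the linear-time variant cited in the text processes states with a worklist instead of rescanning.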
\subsubsection{Hardness}
We prove \NP-hardness of the constrained NE existence problem by encoding an
instance of \SAT as follows. We assume a set of atomic propositions $\AP =
\{x_1,\dots,x_p\}$, and we let $\phi = \bigwedge_{i=1}^n c_i$ where $c_i =
\ell_{i,1} \lor \ell_{i,2} \lor \ell_{i,3}$ with $\ell_{i,j} \in \{ x_k, \lnot
x_k \mid 1\leq k\leq p\}$. We~build the turn-based game $\calG_\phi$ with $n+1$
players $\Agt = \{ A, C_1,\dots , C_n\}$ as follows: for every $1 \le k \le
p$, player~$A$ chooses to visit either location $x_k$ or location $\lnot x_k$.
Location~$x_k$ is winning for player~$C_i$ if, and only~if, $x_k$~is one of
the literals in~$c_i$, and similarly location $\lnot x_k$ is winning for~$C_i$
if, and only~if, $\lnot x_k$ is one of the literals of~$c_i$. The construction
is illustrated on Figure~\ref{fig-jeuNP}, with the reachability objectives
defined as $\Omega_{C_i} = \{\ell_{i,1}, \ell_{i,2}, \ell_{i,3}\}$ for $1 \le
i \le n$. Now, it is easy to check that this game has a Nash equilibrium with
payoff~1 for all players $(C_i)_{1 \le i \le n}$ if, and only~if, $\phi$ is
satisfiable.
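The correspondence can be sketched with a toy encoding of our own (clauses as lists of signed literals): player~$A$'s strategy amounts to a truth assignment, and clause-player $C_i$ gets payoff~1 exactly when one of her target literals is visited, i.e. when $c_i$ is satisfied.

```python
from itertools import product

def payoffs(clauses, assignment):
    """Payoff of each clause-player C_i when A plays `assignment`:
    C_i wins iff some literal (variable, polarity) of her clause
    is visited along the play, i.e. iff the assignment satisfies c_i."""
    visited = {(x, v) for x, v in assignment.items()}
    return [any(lit in visited for lit in clause) for clause in clauses]

def all_winning_equilibrium_exists(clauses, variables):
    """A Nash equilibrium with payoff 1 for every C_i exists iff some
    assignment satisfies all clauses, i.e. iff phi is satisfiable."""
    return any(all(payoffs(clauses, dict(zip(variables, vals))))
               for vals in product([False, True], repeat=len(variables)))
```

The brute-force search over assignments is only for illustration; the reduction of course goes the other way, from \SAT to the game.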
We prove hardness for the NE existence problem by using the transformation
described in Section~\ref{sec:link-constr-exist} once for each player. We
define the game $\calG_0$ similarly to $\calG_\phi$ but with an extra
player~$C_{n+1}$ who does not control any state for now. For $1\leq i \leq n$,
we define $\calG_i = E(\calG_{i-1},C_i, C_{n+1},\rho)$, where $\rho$ is a
winning path for~$C_i$. The preference relation can be expressed in any
$\calG_i$ by a reachability condition, by giving $C_{n+1}$ a target which
is the initial state of~$\calG_\phi$. According to
Proposition~\ref{lem:link-constr-exist} there is a Nash equilibrium
in~$\calG_i$ if, and only~if, there is one in~$\calG_{i-1}$ where $C_i$~wins.
Therefore there is a Nash equilibrium in $\calG_n$ if, and only~if, $\phi$~is
satisfiable. This entails \NP-hardness of the NE existence problem.
\begin{figure*}[ht]
\centering
\begin{tikzpicture}[thick,xscale=.8,yscale=.8]
\tikzset{rond/.style={circle,draw=black,thick,fill=black!10,minimum
height=6.5mm,inner sep=0pt}}
\tikzset{carre/.style={draw=black,thick,fill=black!10,minimum
height=5mm,minimum width=5mm,inner sep=0pt}}
\everymath{\scriptstyle}
\path (-.3,1) node {$\displaystyle \calG_\phi$};
\draw (0,0) node [carre](choix-p1) {$A$};
\draw [latex'-] (choix-p1.180) -- ++(-.4,0);
\draw (1.5,1) node [rond] (p1) {$x_1$};
\draw (1.5,-1) node [rond] (nonp1) {$\neg x_1$};
\draw [-latex'] (choix-p1) -- (p1);
\draw [-latex'] (choix-p1) -- (nonp1);
\draw (3,0) node [carre] (choix-p2) {$A$};
\draw (4.5,1) node [rond] (p2) {$x_2$};
\draw (4.5,-1) node [rond] (nonp2) {$\neg x_2$};
\draw [-latex'] (p1) -- (choix-p2);
\draw [-latex'] (nonp1) -- (choix-p2);
\draw [-latex'] (choix-p2) -- (p2);
\draw [-latex'] (choix-p2) -- (nonp2);
\draw (6,0) node [carre] (choix-p3) {$A$};
\draw [-latex'] (p2) -- (choix-p3);
\draw [-latex'] (nonp2) -- (choix-p3);
\draw[dashed] (choix-p3) -- +(.75,.5);
\draw[dashed] (choix-p3) -- +(.75,-.5);
\draw (7.75,0) node {\Large\dots};
\draw (9.5,1) node [rond] (ph) {$x_p$};
\draw (9.5,-1) node [rond] (nonph) {$\neg x_p$};
\draw[dashed,latex'-] (ph) -- +(-.75,-.5);
\draw[dashed,latex'-] (nonph) -- +(-.75,.5);
\draw (11,0) node [rond] (final) {};
\draw [-latex'] (final) .. controls +(120:36pt) and +(60:36pt) .. (final);
\draw [-latex'] (ph) -- (final);
\draw [-latex'] (nonph) -- (final);
\end{tikzpicture}
\caption{Reachability game for the reduction of \SAT}
\label{fig-jeuNP}
\end{figure*}
\subsection{Safety objectives}\label{subsec:safety}
The value problem for safety objectives is \P-complete. We next show
that the constrained NE existence problem can be solved in \NP, and
conclude with \NP-hardness of both the constrained NE existence problem and
the NE existence problem. We hence prove:
\begin{theorem}
For finite games with single safety objectives, the NE existence problem and
the constrained NE existence problem are \NP-complete.
\end{theorem}
\subsubsection{Reduction to a conjunction of reachability objectives}
We assume $\Omega_A$ is a single safety objective given by set~$T_A$.
In the corresponding suspect game, we show that the goal of \Eve is
equivalent to a conjunction of reachability objectives. Let $L
\subseteq \Agt$. In suspect game $\calH(\calG,L)$, we define several
reachability objectives as follows: for each $A \in L$, we define
$T'_A = T_A \times \{P \mid P \subseteq \Agt\} \cup \Stat \times \{ P
\mid A \not\in P\}$, and we write $\Omega'_A$ for the corresponding
reachability objective.
\begin{lemma}
A play $\rho$ is winning for \Eve in $\calH(\calG,L)$ iff $\rho \in
\bigcap_{A \in L} \Omega'_A$.
\end{lemma}
\begin{proof}
Let $\rho$ be a play in $\calH(\calG,L)$, and assume it is winning
for \Eve. Then, for each $A \in \limitpi2(\rho)\cap L$, $\rho
\notin \Omega_A$, which means that the target set $T_A$ is visited
along $\Sproj_1(\rho)$, and therefore $T'_A$ is visited along $\rho$.
If $A \notin \limitpi2(\rho)$, then a state $(s,P)$ with $A \notin
P$ is visited by $\rho$: the target set $T'_A$ is visited. This
implies that $\rho \in \bigcap_{A \in L} \Omega'_A$.
Conversely let $\rho \in \bigcap_{A \in L} \Omega'_A$. For every $A
\in L$, $T'_A$ is visited by~$\rho$. Then, either $T_A$ is visited
by $\Sproj_1(\rho)$ (which means that $\rho \notin \Omega_A$) or
$A\not\in \limitpi2(\rho)$. In~particular, $\rho$~is a winning play
for \Eve in~$\calH(\calG,L)$.
\end{proof}
\subsubsection{Algorithm for solving finite zero-sum turn-based games
with a conjunction of reachability objectives} \label{conj-reach} We
now give a simple algorithm for solving zero-sum games with a
conjunction of reachability objectives. This algorithm works in
exponential time with respect to the size of the conjunction (we~will
see in Subsection~\ref{subsec:boo-reach} that the problem is
\PSPACE-complete). However for computing Nash equilibria in safety
games we will only use it for small (logarithmic size) conjunctions.
Let $\overline\calG$ be a two-player turn-based game with a winning
objective for \Eve given as a conjunction of $k$~reachability objectives
$\Omega_1,\dots,\Omega_k$. We assume vertices of \Eve and \Adam in
$\overline\calG$ are $V_\shortEve$ and $V_\shortAdam$ respectively,
and that the initial vertex is $v_0$. The idea is to construct a new
game $\overline\calG'$ that remembers the objectives that have been
visited so far. The vertices of game $\overline\calG'$ controlled by
\Eve and \Adam are $V'_\shortEve = V_\shortEve \times 2^{\lsem 1 ,
k\rsem}$ and $V'_\shortAdam = V_\shortAdam \times 2^{\lsem
1,k\rsem}$ respectively. There is a transition from~$(v,S)$ to
$(v',S')$ iff there is a transition from $v$ to $v'$ in the original
game and $S' = S\cup \{i \mid v'\in \Omega_i\}$. The reachability
objective $\Omega$ for \Eve is given by target set $\Stat \times \{\lsem
1,k\rsem\}$. It is clear that there is a winning strategy in
$\overline\calG$ from $v_0$ for the conjunction of reachability
objectives $\Omega_1,\dots,\Omega_k$ iff there is a winning strategy
in game $\overline\calG'$ from $(v_0, \{i \mid v_0 \in \Omega_i\})$
for the reachability objective $\Omega$. The number of vertices of
this new game is $|V'_\shortEve \cup V'_\shortAdam| = |V_\shortEve
\cup V_\shortAdam| \cdot 2^k$, and the size of the new transition
table $\Tab'$ is bounded by $|\Tab| \cdot 2^k$, where $\Tab$ is the
transition table of $\overline\calG$. Since an attractor computation on
$\overline\calG'$ can be done in time $\mathcal{O}(|V'_\shortEve \cup
V'_\shortAdam| \cdot |\Tab'|)$, we obtain an algorithm for solving
zero-sum games with a conjunction of reachability objectives, running
in time $\mathcal{O}(2^{2k}\cdot |V_\shortEve \cup V_\shortAdam|
\cdot |\Tab|)$.
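The product construction above can be sketched as follows (our own encoding: the product game is built on the fly from the initial state, then solved by the attractor fixpoint; this is an illustration, not the formal definition).

```python
def eve_wins_conj_reach(eve, adam, edges, objectives, v0):
    """Can Eve force visiting every target set in `objectives` at least
    once?  Product states are pairs (v, S), where S records the indices
    of the objectives already visited, as in the construction of G'."""
    k = len(objectives)
    succ = {}
    for u, v in edges:
        succ.setdefault(u, []).append(v)
    def update(v, S):
        return S | frozenset(i for i in range(k) if v in objectives[i])
    init = (v0, update(v0, frozenset()))
    states, todo, psucc = {init}, [init], {}
    while todo:                      # explore the reachable product states
        v, S = todo.pop()
        outs = [(w, update(w, S)) for w in succ.get(v, [])]
        psucc[(v, S)] = outs
        for st in outs:
            if st not in states:
                states.add(st)
                todo.append(st)
    full = frozenset(range(k))
    win = {st for st in states if st[1] == full}   # all objectives seen
    changed = True
    while changed:                   # attractor on the product game
        changed = False
        for st in states - win:
            outs = psucc[st]
            good = (any(o in win for o in outs) if st[0] in eve
                    else bool(outs) and all(o in win for o in outs))
            if good:
                win.add(st)
                changed = True
    return init in win
```

Since the memory component $S$ only grows along a play, reaching a state with $S = \lsem 1,k\rsem$ once suffices, which is why a plain reachability attractor is enough.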
\subsubsection{Algorithm}
The algorithm for solving the constrained NE existence problem for single
reachability objectives could be copied and would then be correct. It~would
however not yield an \NP upper bound. We therefore propose a refined
algorithm:
\begin{enumerate}[label=(\roman*)]
\item guess a lasso-shaped play $\rho = \tau_1 \cdot \tau_2^\omega$
(with $|\tau_i| \le |\Stat|^2$) in $\calJ(\calG)$ such that \Adam
obeys \Eve along $\rho$, and $\pi = \Sproj_1(\rho)$ satisfies the
constraint on the payoff.
Note that if $\Losers(\pi)$ denotes the set of players
losing along~$\pi$, computing $W(\calG,\Losers(\pi))$ directly would
require exponential time; we show how to avoid this expensive computation.
\item check that any \Adam-deviation along~$\rho$, say at position~$i$
(for~any~$i$), leads to a state from which \Eve has a
strategy~$\sigma^i_\shortEve$ to ensure that any play in $\rho_{\le
i}\cdot \Out(\sigma^i_\shortEve)$ is winning for her.
\end{enumerate}
Step~$(ii)$ can be done as follows: pick an \Adam-state $(s,\Agt,m_\Agt)$
along $\rho$ and a successor $(t,P)$ such that $t \ne \Tab(s,m_\Agt)$; we only
need to show that $(t,P) \in W(\calG,(\Losers(\pi) \setminus \Losers(\rho_{\le
i})) \cap P)$. We~can compute this set efficiently (in polynomial time)
using the algorithm of the previous paragraph since $2^{|P|} \le |\Tab|$
(using the same argument as in Proposition~\ref{lem:polynomial-size}).
This non-deterministic algorithm, which runs in polynomial time,
precisely implements Theorem~\ref{thm:eq-win}, and therefore correctly
decides the constrained NE existence problem.
\subsubsection{Hardness}
The \NP-hardness for the constrained NE existence problem can be proven
by encoding an instance of \SAT using a game similar to that for
reachability objectives, see Section~\ref{subsec:reachability}. We
only change the constraint which is now that all players~$C_i$ should
be losing, and we get the same equivalence.
The reduction of Section~\ref{sec:link-constr-exist} cannot be used to
deduce the hardness of the NE existence problem, since it assumes a lower
bound on the payoff, whereas here the constraint is an upper bound (``each
player should be losing''). We therefore provide an ad-hoc reduction
in this special case, which is illustrated on
Figure~\ref{fig-safety-hardnes}. We add a module at the end of the
game to enforce that in an equilibrium, all players are losing: we
add concurrent states between $A$ and each~$C_i$ (named~$A/C_i$). All players~$C_i$
are trying to avoid~$t$, and $A$ is trying to avoid~$u$.
Since $A$ has no target in $\calG_\phi$, she cannot lose before seeing $u$, and
she can always change her strategy in the concurrent states in order to
go to~$t$. Therefore an equilibrium always ends in~$t$. A~player~$C_i$ whose
target was not visited during game~$\calG_\phi$ can change her strategy in order
to go to~$u$ instead of~$t$. Hence, if there is an equilibrium in the new game,
there is one in $\calG_\phi$ in which all the $C_i$ are losing. Conversely, if
there is such an equilibrium in~$\calG_\phi$, we can extend the strategy profile
into one whose outcome goes to~$t$, and it is an equilibrium in the new game. This
proves \NP-hardness of the NE existence problem.
\begin{figure*}[t]
\centering
\begin{tikzpicture}[thick]
\draw[dashed, rounded corners=2mm] (-.3,0) .. controls +(90:5mm)
.. (1,.5) -- (2,.9) -- (3,.6) -- (3.5,0) -- (3,-.8) -- (2,-.9)
-- (1,-.7) .. controls +(150:5mm) and +(-90:5mm) .. (-.3,0);
\draw (1.85,0) node {\begin{minipage}{2.2cm}\small\raggedright
Copy of $\calG_\phi$
\end{minipage}};
\tikzstyle{rond}=[draw,circle,minimum size=6mm,inner sep=0mm]
\tikzstyle{oval}=[draw,minimum height=6mm,inner sep=0mm,rounded corners=2mm]
\draw (0.2,0) node [rond] (I) {$s$};
\draw (3,0) node [rond] (A) { };
\draw (4.3,0) node [oval] (B) {$A/C_1$};
\draw (6.5,0) node [oval] (C) {$A/C_2$};
\draw (8.7,0) node [oval] (D) {$A/C_3$};
\draw (11,0) node [rond] (T) {$t$};
\draw (7,-1.7) node [rond] (U) {$u$};
\draw[-latex'] (A) -- (B);
\draw[-latex'] (B) -- (C) node[midway,above] {$\scriptstyle \tuple{1,1}, \tuple{2,2}$};
\draw[-latex'] (B) -- (U) node[midway,left] {$\scriptstyle \tuple{1,2}, \tuple{2,1}$};
\draw[-latex'] (C) -- (D) node[midway,above] {$\scriptstyle \tuple{1,1}, \tuple{2,2}$};
\draw[-latex'] (C) -- (U) node[midway] {$\scriptstyle \tuple{1,2}, \tuple{2,1}$};
\draw[-latex'] (D) -- (T) node[midway,above] {$\scriptstyle \tuple{1,1}, \tuple{2,2}$};
\draw[-latex'] (D) -- (U) node[midway,right] {$\scriptstyle \tuple{1,2}, \tuple{2,1}$};
\draw[-latex'] (T) .. controls +(30:10mm) and +(-30:10mm) .. (T);
\draw[-latex'] (U) .. controls +(-50:10mm) and +(-120:10mm) .. (U);
\draw[dashed] (I) -- +(40:5mm);
\draw[dashed] (I) -- +(0:4mm);
\draw[dashed] (I) -- +(-40:5mm);
\draw[dashed] (A) -- +(140:5mm);
\draw[dashed] (A) -- +(180:4mm);
\draw[dashed] (A) -- +(220:5mm);
\end{tikzpicture}
\caption{Extending game~$\calG_\phi$ with final concurrent modules}
\label{fig-safety-hardnes}
\end{figure*}
\subsection{B\"uchi objectives}
\label{subsec:buchi}
The value problem for B\"uchi objectives is \P-complete. In this
subsection we design a polynomial-time algorithm for solving the
constrained NE existence problem for B\"uchi objectives. The \P-hardness
of the NE existence problem can then be inferred from the \P-hardness of
the value problem, applying Propositions~\ref{lem:link-value-constr}
and~\ref{lem:link-value-exist}. Globally we prove the following
result:
\begin{theorem}
For finite games with single B\"uchi objectives, the NE existence problem and
the constrained NE existence problem are \P-complete.
\end{theorem}
\subsubsection{Reduction to a co-B\"uchi game}
We assume that for every player $A$, $\Omega_A$ is a B\"uchi objective
given by target set $T_A$. Given $L \subseteq \Agt$, in the suspect
game~$\calH(\calG,L)$, we show that the objective of \Eve is
equivalent to a single co-B\"uchi objective. We define the co-B\"uchi
objective~$\Omega_L$ in $\calH(\calG,L)$ given by the target set $T_L
= \{(s,P) \mid \exists A\in P \cap L.\ s \in T_A\}$. Notice that the
target set is defined in the same way as for reachability objectives.
\begin{lemma}\label{lem:red-co-buchi}
A play $\rho$ is winning for \Eve in $\calH(\calG,L)$ iff $\rho \in
\Omega_L$.
\end{lemma}
\begin{proof}
Assume that $\rho$ is winning for \Eve in $\calH(\calG,L)$. Then for every $A
\in \limitpi2(\rho) \cap L$, it~holds ${\Inf(\Sproj_1(\rho)) \cap T_A =
\varnothing}$. Toward a contradiction, assume that ${\Inf(\rho)
\cap T_L \ne \varnothing}$. There exists $(s,P)$ such that there
is $A \in P \cap L$ with $s \in T_A$, which appears infinitely often
along $\rho$. In particular, $P = \limitpi2(\rho)$ (otherwise it
would not appear infinitely often along $\rho$). Hence, we have
found $A \in \limitpi2(\rho) \cap L$ such that $\Inf(\Sproj_1(\rho))
\cap T_A \ne \varnothing$, which is a contradiction. Therefore,
$\rho \in \Omega_L$.
Assume $\rho \in \Omega_L$: for every $(s,P)$ such that there exists
$A \in P \cap L$ with $s \in T_A$, $(s,P)$ appears finitely often
along $\rho$. Let $A \in \limitpi2(\rho) \cap L$, and assume towards
a contradiction that there is $s \in T_A$ such that $s$ appears
infinitely often along $\Sproj_1(\rho)$. This means that
$(s,\limitpi2(\rho))$ appears infinitely often along $\rho$, which
contradicts the above condition. Therefore, $\rho$ is winning for
\Eve in $\calH(\calG,L)$.
\end{proof}
\subsubsection{Algorithm}
\label{subsubsec:algo2}
As for reachability objectives, the winning region for \Eve in
$\calH(\calG,L)$ can be computed in polynomial time (since this is the
winning region of a co-B\"uchi game, see Lemma~\ref{lem:red-co-buchi}
above). A non-deterministic algorithm running in polynomial time
similar to the one for reachability objectives can therefore be
inferred. However, we can do better than guessing an appropriate
lasso-shaped play $\rho$, by looking at the strongly connected
components of the game: a strongly connected component of the game
uniquely defines a payoff, namely that of any play that visits
infinitely often exactly the states of that strongly connected
component. Using a clever partitioning of the set of strongly
connected components of the game, we obtain a polynomial-time
algorithm.
\medskip From now on and until the end of
Subsection~\ref{subsubsec:algo2} we relax the hypotheses on the
preference relations (that they are all single-objective with a
B\"uchi condition). We present an algorithm in a more general context,
since the same techniques will be used in
Subsection~\ref{subsec:co-reducible} (and we chose to present the
construction only once).
For the rest of this subsection we therefore make the following
assumptions on the preference relations $(\mathord\prefrel_A)_{A \in
\Agt}$. For every player $A \in \Agt$: \label{hyp:star}
\begin{enumerate}[label=(\alph*)]
\item $\prefrel_A$ only depends on the set of states which is visited
infinitely often: if $\rho$ and $\rho'$ are two plays such that
$\Inf(\rho) = \Inf(\rho')$ then $\rho \prefrel_A \rho'$ and $\rho'
\prefrel_A \rho$;
\item $\prefrel_A$ is given by an ordered objective $\omega_A$ with
preorder $\preorder_A$, and $\preorder_A$ is supposed to be
monotonic;
\item for every threshold $w^A$, we can compute in polynomial time
$S^A \subseteq \Stat$ such that $\Inf(\rho) \subseteq S^A
\Leftrightarrow \rho \prefrel_A w^A$.
\end{enumerate}
Obviously preferences given by single B\"uchi objectives do satisfy
those hypotheses. At every place where it is relevant, we will explain
how the particular case of single B\"uchi objectives is handled.
Next we write $(\star)$ for the above assumptions, and $(\star)_{a}$
(resp. $(\star)_b$, $(\star)_c$) for only the first (resp. second,
third) assumption.
\medskip We first characterise the `good' plays in $\calJ(\calG)$ in
terms of the strongly connected components they define: the strongly
connected component defined by a play is the set of states that are
visited infinitely often by the play.
We fix, for each player $A$, equivalence classes of plays $u^A$ and
$w^A$ that represent lower and upper bounds for the constrained
NE existence problem. Both can be represented as finite sets of
states, namely the sets of states that are visited infinitely often.
For each $K \subseteq \Stat$, we write $v^A(K)$ for the equivalence
class of all paths~$\pi$ that visit infinitely often exactly the states
of~$K$, \ie\ such that $\Inf(\pi) = K$. We also write $v(K)=(v^A(K))_{A \in \Agt}$. We
look for a transition system~$\langle K,E\rangle$, with $K\subseteq
\Stat$ and $E \subseteq K \times K$, for which the following
properties hold:
\begin{enumerate}
\item $u^A \preorder_A v^A(K) \preorder_A w^A$ for
all~$A\in\Agt$; \label{cond:1}
\item $\langle K,E\rangle$ is strongly connected;\label{cond:2}
\item $\forall k\in K.\ (k,\Agt) \in W(\calG,v(K))$;\label{cond:3}
\item $\forall (k,k') \in E.\ \exists (k,\Agt,m_\Agt)\in
W(\calG,v(K)).\ \Tab(k,m_\Agt) = k'$;\label{cond:4}
\item $(K\times\{\Agt\})$ is reachable from $(s,\Agt)$ in
$W(\calG,v(K))$;\label{cond:5}
\end{enumerate}
where $W(\calG,v(K))$ is the winning region of \Eve in suspect
game~$\calH(\calG,v(K))$.\footnote{Formally the suspect game has been
defined with a play as reference, and not an equivalence
class. However, in this subsection, if $\pi$ and $\pi'$ are
equivalent, the games $\calH(\calG,\pi)$ and $\calH(\calG,\pi')$ are
identical.}
If one can find such a transition system $\langle K,E \rangle$, then
we will be able to build a lasso-shaped play $\rho$ from $(s,\Agt)$ in the
suspect game that satisfies the conditions of
Theorem~\ref{thm:eq-win}. Formally, we have the following lemma:
\begin{lemma}\label{lem:characterization}
Under hypothesis $(\star)_{a}$, there is a transition system
$\langle K,E\rangle$ satisfying
conditions~\ref{cond:1}--\ref{cond:5} if, and only if, there is a
path $\rho$ from $(s, \Agt)$ in $\calH(\calG,v(K))$ that never gets
out of $W(\calG,v(K))$, along which \Adam always obeys \Eve, $u^A
\preorder_A v^A(K) \preorder_A w^A$ for all~$A\in\Agt$, and
$\Sproj_1(\Inf(\rho) \cap V_\shortEve)=K$ (which implies that
$\rho\in v^A(K)$ for all $A$).
\end{lemma}
\begin{proof}
The first implication is shown by building a path in $W(\calG,v(K))$
that successively visits all the states in~$K\times\{\Agt\}$
forever. Thanks to~\ref{cond:5}, \ref{cond:2} and~\ref{cond:4}
(and the fact that \Adam obeys~\Eve), such a path exists, and
from~\ref{cond:3} and~\ref{cond:4}, this path remains in the
winning region. From~\ref{cond:1}, we~have the condition on the
preferences.
Conversely, consider such a path~$\rho$, and let $K =
\Sproj_1(\Inf(\rho)\cap V_\shortEve)$ and $E = \{ (k,k')\in K^2 \mid
\exists (k,\Agt,m_\Agt) \in \Inf(\rho).\ \Tab(k,m_\Agt) =
k'\}$. Condition~\ref{cond:5} clearly holds.
Conditions~\ref{cond:1}, \ref{cond:3} and~\ref{cond:4} are
easy consequences of the hypotheses and construction. We~prove that
$\langle K, E \rangle$ is strongly connected. First, since \Adam
obeys~\Eve and $\rho$ starts in~$(s,\Agt)$, we~have
$\limitpi2(\rho)=\Agt$. Now, take any two states~$k$ and~$k'$ in~$K$: then
$\rho$ visits~$(k,\Agt)$ and~$(k',\Agt)$ infinitely often, and there
is a subpath of~$\rho$ between those two states, all of which states
appear infinitely often along~$\rho$. Such a subpath gives rise to a
path between~$k$ and~$k'$, as required.
\end{proof}
As a consequence, if $\langle K,E \rangle$ satisfies the five previous
conditions, by~Theorem~\ref{thm:eq-win}, there is a Nash equilibrium
whose outcome lies between the bounds~$u^A$ and~$w^A$. Our aim is to
compute efficiently all maximal pairs $\langle K, E \rangle$ that
satisfy the five conditions.
To that aim we define a recursive function \SSG (standing for ``solve
sub-game''), working on transition systems, that will decompose
efficiently any transition system that does not satisfy the five
conditions above into polynomially many disjoint sub-transition
systems \textit{via} a decomposition into strongly connected
components.
\begin{itemize}
\item if $K\times\{\Agt\} \subseteq W(\calG,v(K))$, and if for all
$(k,k') \in E$ there is a $(k,\Agt,m_\Agt)$ in $W(\calG,v(K))$
s.t. $\Tab(k,m_\Agt) = k'$, and finally if $\langle K,E\rangle$ is
strongly connected, then we set $\SSG(\langle K,E\rangle) =
\{\langle K,E\rangle\}$. This means that
conditions~\ref{cond:2}--\ref{cond:4} are satisfied by $\langle K,E\rangle$.
\item otherwise, we let
\[
\SSG(\langle K,E\rangle) = \bigcup_{\langle K',E'\rangle \in
\SCC(\langle K,E\rangle)} \SSG ( T (\langle K',E'\rangle))
\]
where $\SCC(\langle K,E \rangle)$ is the set of strongly connected
components of~$\langle K,E\rangle$ (which can be computed in linear
time), and where $T(\langle K',E'\rangle)$ is the transition system
whose set of states is $\{ k \in K' \mid (k,\Agt) \in
W(\calG,v(K'))\}$ and whose set of edges is
\[
\{(k,k') \in E'\mid \exists (k,\Agt,m_\Agt) \in W(\calG,v(K')).\
\Tab(k,m_\Agt) = k' \}.
\]
Notice that this set of edges is never empty, but $T(\langle
K',E'\rangle)$ might not be strongly connected anymore, so that this
is really a recursive definition.
\end{itemize}
The recursive function $\SSG$ decomposes any (sub-)transition system
of the game into a list of disjoint transition systems which all
satisfy conditions~\ref{cond:2}--\ref{cond:4} above.
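The recursion can be rendered schematically as follows (our own encoding; the hypothetical oracles `ok_state` and `ok_edge` stand in for membership of states and moves in the winning region $W(\calG,v(K))$ relative to the current component, which we do not implement here).

```python
def sccs(K, E):
    """Strongly connected components via forward/backward reachability
    (a simple quadratic method, sufficient for a sketch)."""
    succ, pred = {k: set() for k in K}, {k: set() for k in K}
    for u, v in E:
        succ[u].add(v)
        pred[v].add(u)
    def reach(adj, s):
        seen, todo = {s}, [s]
        while todo:
            for y in adj[todo.pop()] - seen:
                seen.add(y)
                todo.append(y)
        return seen
    remaining, comps = set(K), []
    while remaining:
        s = next(iter(remaining))
        comp = reach(succ, s) & reach(pred, s)   # SCC of s
        comps.append((frozenset(comp),
                      frozenset((u, v) for u, v in E
                                if u in comp and v in comp)))
        remaining -= comp
    return comps

def ssg(K, E, ok_state, ok_edge):
    """Recursive 'solve sub-game': return the sub-systems satisfying
    conditions (2)-(4), splitting along SCCs and pruning losing
    states/edges (the oracle calls model W(G, v(K)) membership)."""
    if (all(ok_state(K, k) for k in K)
            and all(ok_edge(K, e) for e in E)
            and len(sccs(K, E)) == 1):
        return [(frozenset(K), frozenset(E))]    # base case: keep as is
    out = []
    for K1, E1 in sccs(K, E):
        K2 = frozenset(k for k in K1 if ok_state(K1, k))
        E2 = frozenset(e for e in E1
                       if e[0] in K2 and e[1] in K2 and ok_edge(K1, e))
        if K2:
            out.extend(ssg(K2, E2, ok_state, ok_edge))
    return out
```

Each recursive call either splits the graph into strictly smaller SCCs or strictly prunes a state or edge, so the recursion terminates after at most quadratically many calls, matching the complexity argument below.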
So far the computation does not take into account the bounds for the
payoffs of the players (lower bound $u^A$ and upper bound $w^A$ for
player $A$). For each upper bound $w^A$, we assume condition
$(\star)_c$ holds. In the particular case of a single B\"uchi
objective for each player, defined by target set $T_A$, this is simply done
by setting $S^A = \Stat \setminus T_A$ if this player has to be
losing (that is, if $w^A$ does not satisfy the B\"uchi objective), and
$S^A = \Stat$ otherwise.
Now assuming we have found the appropriate set $S^A$, we define
\[
\Sol = \SSG\bigl(\langle \bigcap_{A\in\Agt} S^A , \Edg' \rangle \bigr)
\cap \bigl\{ \langle K,E\rangle \mid \forall A\in \Agt.\ u^A \preorder_A
v^A(K) \bigr\}
\]
where $\Edg'$ restricts $\Edg$ to $\bigcap_{A\in\Agt} S^A$.
We now show that the set $\Sol$ computes (in a sense that we make
clear below) the transition systems that are mentioned in
Lemma~\ref{lem:characterization}.
\begin{lemma}\label{lem:character2}
We suppose condition $(\star)$ holds.
If $\langle K,E\rangle \in \Sol$ then it satisfies
conditions~\ref{cond:1} to~\ref{cond:4}. Conversely, if $\langle
K,E\rangle$ satisfies conditions~\ref{cond:1} to~\ref{cond:4}, then
there exists $\langle K',E'\rangle \in \Sol$ such that $\langle K,E
\rangle \subseteq \langle K',E'\rangle$.
\end{lemma}
\begin{proof}
Let $\langle K,E\rangle \in \Sol$. By~definition of~\SSG, all
$(k,\Agt)$ for $k \in K$ are in~$W(\calG,v(K))$, and for all
$(k,k')\in E$, there is a state~$(k,\Agt,m_\Agt)$ in $W(\calG,v(K))$
such that $\Tab(k,m_\Agt) = k'$, and $\langle K,E\rangle$ is
strongly connected. Also, for all $A$, $u^A \preorder_A v^A(K)$
because $\Sol \subseteq \{ \langle K,E\rangle \mid u^A \preorder_A
v^A(K) \}$. Finally, for any $A \in \Agt$, $v^A(K) \preorder_A w^A$
because the set $K$ is included in $S^A$.
Conversely, assume that $\langle K,E \rangle$ satisfies the
conditions. We~show that if $\langle K,E\rangle \subseteq \langle
K',E'\rangle$ then there is $\langle K'',E''\rangle$ in
$\SSG(\langle K',E'\rangle)$ such that $\langle K,E\rangle \subseteq
\langle K'',E''\rangle$. The proof is by induction on the size of
$\langle K',E' \rangle $.
The basic case is when $\langle K',E'\rangle$ satisfies
the conditions~\ref{cond:2}, \ref{cond:3}, and~\ref{cond:4}: in that case,
$\SSG(\langle K' ,E'\rangle) = \{\langle K',E'\rangle\}$, and by
letting $\langle K'',E'' \rangle= \langle K',E' \rangle$ we get the
expected result.
We now analyze the other case. There is a strongly connected
component of $\langle K',E'\rangle$, say $\langle K'',E''\rangle$,
which contains $\langle K,E\rangle$, because $\langle K,E\rangle$
satisfies condition~\ref{cond:2}. We have $v^A(K) \preorder_A
v^A(K'')$ (because $K\subseteq K''$ and $\preorder_A$ is monotonic)
for every~$A$, and thus $W(\calG,v(K))\subseteq W(\calG,v(K''))$.
This ensures that $T(\langle K'',E''\rangle)$ contains $\langle K,E
\rangle$ as a subgraph. Since $\langle K'',E'' \rangle$ is a
subgraph of $\langle K',E' \rangle$, the graph $T(\langle K'',E''
\rangle)$ also~is. We show that they are not equal, so~that we can
apply the induction hypothesis to~$T(\langle K'',E''\rangle)$. For
this, we~exploit the fact that $\langle K',E'\rangle$ does not
satisfy one of conditions~\ref{cond:2} to~\ref{cond:4}:
\begin{itemize}
\item first, if $\langle K',E'\rangle$ is not strongly connected while
$\langle K'',E''\rangle$~is, they cannot be equal;
\item if there is some $k \in K'$ such that $(k,\Agt)$ is not in
$W(\calG,v(K'))$, then $k$ is not a vertex of $T(\langle K'' , E''
\rangle)$;
\item if there is some edge $(k,k')$ in $E'$ such that there is no state
$(k,\Agt,m_\Agt)$ in $W(\calG,v(K'))$ such that $\Tab(k,m_\Agt) =
k'$, then the edge $(k,k')$ is not in $T(\langle K'' , E''
\rangle)$.
\end{itemize}
%
We then apply the induction hypothesis to $T(\langle
K'',E''\rangle)$, and get the expected result.
Now, because of condition~\ref{cond:1},
$u^A \preorder_A v^A(K) \preorder_A w^A$.
%
Hence, due to the previous analysis, there exists $\langle
K',E'\rangle \in \SSG\left(\langle \bigcap_{A \in \Agt} S^A
, \Edg' \rangle \right)$
such that $\langle
K,E\rangle \subseteq \langle K',E'\rangle$. This concludes the
proof of the lemma.
\end{proof}
\begin{lemma}\label{lem:compute-sol}
Under assumptions $(\star)$, if for every $K$, the set
$W(\calG,v(K))$ can be computed in polynomial time, then the set
$\Sol$ can also be computed in polynomial time.
\end{lemma}
\begin{proof}
Each recursive call to \SSG applies to a decomposition in strongly
connected components of the current transition system under
consideration. Hence the number of recursive calls is bounded by
$|\Stat|^2$. Computing the decomposition in SCCs can be done in
linear time. By assumption, each set
$W(\calG,v(K))$ can be computed in polynomial time. Each set $S^A$ is
obtained by removing the target set~$T_A$ from~$\Stat$ when player~$A$
must be losing according to~$w^A$. Hence globally we can compute $\Sol$
in polynomial time.
\end{proof}
To conclude the algorithm, we need to check that
condition~\ref{cond:5} holds for one of the solutions $\langle K, E
\rangle$ in $\Sol$. It can be done in polynomial time by looking for a
path in the winning region of \Eve in $\calH(\calG,v(K))$ that reaches
$K \times \{\Agt\}$ from $(s,\Agt)$. The correctness of the algorithm
is ensured by the fact that if some $\langle K, E \rangle$ satisfies
the five conditions, there is a $\langle K',E' \rangle$ in $\Sol$ with
$K \subseteq K'$ and $E \subseteq E'$. Since $K \subseteq K'$ implies
$v^A(K) \preorder_A v^A(K')$, the winning region of \Eve in
$\calH(\calG,v(K'))$ contains that of $\calH(\calG,v(K))$; since
moreover $K \times \{\Agt\} \subseteq K' \times \{\Agt\}$, a path from
$(s,\Agt)$ to $K \times \{\Agt\}$ within $W(\calG,v(K))$ is also a
path from $(s,\Agt)$ to $K' \times \{\Agt\}$ within
$W(\calG,v(K'))$. Hence, $\langle K', E'
\rangle$ also satisfies condition~\ref{cond:5}, and therefore the five
expected conditions.
\bigskip We have already mentioned that single B\"uchi objectives do
satisfy the hypotheses~$(\star)$. Furthermore,
Lemma~\ref{lem:red-co-buchi} shows that, given $v(K)$, one can compute
the set~$W(\calG,v(K))$ as the winning region of a co-B\"uchi
turn-based game, which can be done in polynomial time (this is argued
at the beginning of the section). Therefore
Lemma~\ref{lem:compute-sol} and the subsequent analysis apply: this
concludes the proof that the constrained NE existence problem for finite
games with single B\"uchi objectives is in \P.
\subsubsection{Hardness}
\label{ptime-hard}
We recall a possible proof of \P-hardness for the value problem, from
which we will infer the other lower bounds. The circuit-value problem
can be easily encoded into a deterministic turn-based game with
B\"uchi objectives: a circuit (which we assume w.l.o.g. has only \AND-
and \OR-gates) is transformed into a two-player turn-based game, where
one player controls the \AND-gates and the other player controls the
\OR-gates. We add self-loops on the leaves. Positive leaves of the
circuit are the (B\"uchi) objective of the \OR-player, and negative
leaves are the (B\"uchi) objective of the \AND-player. Then obviously,
the circuit evaluates to true iff the \OR-player has a winning
strategy for satisfying his B\"uchi condition, which in turn is
equivalent to the fact that there is an equilibrium with payoff~$0$
for the \AND-player, by Proposition~\ref{lem:link-value-constr}. We
obtain \P-hardness for the NE existence problem, using
Proposition~\ref{lem:link-value-exist}: the preference relations in
the game constructed in Proposition~\ref{lem:link-value-exist} are
B\"uchi objectives.
\subsection{Co-B\"uchi objectives}\label{subsec:cobuchi}
The value problem for co-B\"uchi objectives is \P-complete. We now prove
that the constrained NE existence problem is in \NP, and that the
constrained NE existence problem and the
NE existence problem are \NP-hard.
We therefore deduce:
\begin{theorem}
For finite games with single co-B\"uchi objectives, the NE existence problem
and the constrained NE existence problem are \NP-complete.
\end{theorem}
The proof of this theorem is very similar to that for safety
objectives: instead of conjunctions of reachability objectives, we
now have to deal with conjunctions of B\"uchi objectives. The
constructions and algorithms need to be adapted accordingly, which is
what we present now.
\subsubsection{Reduction to a conjunction of B\"uchi conditions}
We assume that the objective of every player $A$ is a single co-B\"uchi
objective~$\Omega_A$ given by a target set~$T_A$. In the corresponding suspect
game, we show that the goal of player \Eve is equivalent to a
conjunction of B\"uchi objectives. Let $L \subseteq \Agt$. In suspect
game $\calH(\calG,L)$, we define several B\"uchi objectives as
follows: for each $A \in L$, we define $T'_A = T_A \times \{P \mid P
\subseteq \Agt\} \cup \Stat \times \{ P \mid A \not\in P\}$, and we
write $\Omega'_A$ for the corresponding B\"uchi objective.
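The lifted target sets $T'_A$ can be computed directly from the definition above. The following Python sketch does so by explicitly enumerating all subsets $P$ of the players (for illustration only; in the actual suspect game only the reachable pairs $(s,P)$ matter):

```python
from itertools import combinations

def lift_cobuchi_target(states, players, T_A, A):
    """Compute the Buechi target T'_A over suspect-game states (s, P)
    from player A's co-Buechi target set T_A, following the definition
    above: (s, P) belongs to T'_A iff s is in T_A, or A is not in P.
    Illustrative sketch enumerating all suspect sets P.
    """
    subsets = [frozenset(c)
               for r in range(len(players) + 1)
               for c in combinations(sorted(players), r)]
    return {(s, P) for s in states for P in subsets
            if s in T_A or A not in P}
```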
\begin{lemma}
A play $\rho$ is winning for \Eve in $\calH(\calG,L)$ iff $\rho \in
\bigcap_{A \in L} \Omega'_A$.
\end{lemma}
\begin{proof}
Let $\rho$ be a play in $\calH(\calG,L)$, and assume it is winning
for \Eve. Then, for each $A \in \limitpi2(\rho)\cap L$, $\rho
\notin \Omega_A$, which means that the target set $T_A$ is visited
infinitely often along $\Sproj_1(\rho)$, and therefore $T'_A$ is
visited infinitely often along $\rho$. If $A \notin \limitpi2(\rho)$, then a state
$(s,P)$ with $A \notin P$ is visited infinitely often by $\rho$: the
target set $T'_A$ is visited infinitely often. This implies that
$\rho \in \bigcap_{A \in L} \Omega'_A$.
Conversely let $\rho \in \bigcap_{A \in L} \Omega'_A$. For every $A
\in L$, $T'_A$ is visited infinitely often by $\rho$. Then, either
$T_A$ is visited infinitely often by $\Sproj_1(\rho)$ (which means
that $\rho \notin \Omega_A$) or $A\not\in \limitpi2(\rho)$. In
particular, $\rho$ is a winning play for \Eve in $\calH(\calG,L)$.
\end{proof}
\subsubsection{Algorithm for solving zero-sum games with a conjunction of
B\"uchi objectives}
We adapt the algorithm for conjunctions of reachability
objectives (page~\pageref{conj-reach}) to conjunctions of B\"uchi
objectives. Let $\calG$ be a two-player turn-based game with a
winning objective for \Eve given as a conjunction of B\"uchi
objectives $\Omega_1,\dots,\Omega_k$. The idea is to construct a new
game $\calG'$ which checks that each objective~$\Omega_i$ is visited
infinitely often. The vertices of~$\calG'$ controlled by \Eve and
\Adam are $V'_\shortEve = V_\shortEve \times \lsem 0 , k\rsem$ and
$V'_\shortAdam = V_\shortAdam \times \lsem 0,k \rsem$ respectively.
There is a transition from $(v,k)$ to $(v',0)$ iff there is a
transition from $v$ to $v'$ in the original game. For $0 \le i < k$,
there is a transition from~$(v,i)$ to $(v',i+1)$ iff there is a
transition from $v$ to $v'$ in the original game and $v'\in
\Omega_{i+1}$; otherwise there is a transition from~$(v,i)$
to~$(v',i)$. In~$\calG'$, the objective for \Eve is the B\"uchi
objective~$\Omega$ given by target set $\Stat \times \{k\}$, where
$\Stat=V_\shortEve \cup V_\shortAdam$ is the set of vertices of
$\calG$. It is clear that there is a winning strategy in $\calG$ from
$v_0$ for the conjunction of B\"uchi objectives
$\Omega_1,\dots,\Omega_k$ iff there is a winning strategy in $\calG'$
from $(v_0, 0)$ for the B\"uchi objective~$\Omega$. The number of
states of game $\calG'$ is $|\Stat'|= |\Stat| \cdot (k+1)$, and the
size of the transition table is $|\Tab'|=|\Tab| \cdot (k+1)$. Using the standard
algorithm for turn-based B\"uchi objectives~\cite{CHP08}, which works in time
$\mathcal{O}(|\Stat'| \cdot |\Tab'|)$, we~obtain an algorithm for
solving zero-sum games with a conjunction of B\"uchi objectives
running in time $\mathcal{O}(k^2\cdot |\Stat|\cdot |\Tab|)$ (hence in
polynomial time).
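The product construction can be sketched as follows in Python (an illustrative sketch: we make the counter stay put when the next target set is not visited, so that the product game is total; the encoding of states and edges is our own):

```python
def product_buchi_game(transitions, states, targets):
    """Build the product game G' checking a conjunction of Buechi
    objectives Omega_1, ..., Omega_k.  `transitions` is the edge set of
    the original game, `states` its state set, and `targets` the list of
    target sets.  A counter in 0..k records how many target sets have
    been seen in order; the Buechi target of G' is states x {k}, and the
    counter resets to 0 afterwards.
    """
    k = len(targets)
    edges = set()
    for (v, w) in transitions:
        edges.add(((v, k), (w, 0)))            # reset once all targets seen
        for i in range(k):
            if w in targets[i]:                # targets[i] plays Omega_{i+1}
                edges.add(((v, i), (w, i + 1)))
            else:
                edges.add(((v, i), (w, i)))    # counter unchanged otherwise
    buchi_target = {(v, k) for v in states}
    return edges, buchi_target
```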
\subsubsection{Algorithm}
The algorithm is the same as for reachability objectives. Only the
computation of the set of winning states in the suspect game is
different. Since we just showed that this part can be done in
polynomial time, the global algorithm still runs in
(non-deterministic) polynomial time.
\subsubsection{Hardness}
The hardness result for the constrained NE existence problem with
co-B\"uchi objectives was already proven in~\cite{ummels08}. The idea
is to encode an instance of \SAT into a game with co-B\"uchi
objectives. For completeness we describe the reduction below, and
explain how it can be modified for proving \NP-hardness of the
NE existence problem.
Let us consider an instance $\phi = c_1 \land \dots \land c_n$ of
\SSAT, where $c_i = \ell_{i,1} \lor \ell_{i,2} \lor
\ell_{i,3}$, and $\ell_{i,j} \in \{ x_k, \lnot x_k \mid 1 \le k \le
p\}$.
The game~$\calG$ is obtained from module~$M(\phi)$ depicted on
Figure~\ref{fig-M}, by joining the outgoing edge of~$c_{n+1}$ to~$c_1$. Each
module~$M(\phi)$ involves a set of players~$B_{k}$, one for each
variable~$x_k$, and a player $A_1$. Player $A_1$ controls the clause states.
Player~$B_{k}$ controls the literal states~$\ell_{i,j}$ with $\ell_{i,j} = \neg
x_k$, where she has the opportunity to go to state~$\bot$. There is no
transition to~$\bot$ for literals of the form~$x_k$. In~$M(\phi)$, assuming
that the players~$B_k$ do not play to~$\bot$, player $A_1$ has a strategy that,
for every $k$, does not visit both~$x_k$ and~$\neg x_k$ if, and only~if,
formula~$\phi$ is satisfiable.
Finally, the co-B\"uchi objective of~$B_k$ is given by
$\{x_k\}$. In~other terms, the aim of $B_k$ is to visit~$x_k$ only a
finite number of times. This way, in a Nash equilibrium, it~cannot be
the case that both $x_k$ and~$\neg x_k$ are visited infinitely often:
it~would imply that $B_k$ loses but could improve her payoff by going
to~$\bot$ (actually, $\neg x_k$ should not be visited at all if~$x_k$
is visited infinitely often). Therefore setting the objective of
$A_1$ to $\{\bot\}$, there is a Nash equilibrium where she wins iff
$\phi$ is satisfiable. This shows \NP-hardness for the constrained
NE existence problem.
For the NE existence problem, we use the transformation described in
Section~\ref{sec:link-constr-exist}. We add an extra player~$A_2$ to~$\calG$
and consider the game $\calG' = E(\calG,A_1,A_2,\rho)$, where $\rho$ is a
winning path for~$A_1$. The objective of the players in~$\calG'$ can be
described by co-B\"uchi objectives: $A_2$~has to avoid seeing $T = \{ s_1\}$
infinitely often, and $A_1$~keeps the same target as before. Applying
Proposition~\ref{lem:link-constr-exist}, there is a Nash equilibrium
in~$\calG'$ if, and only~if, there is one in~$\calG$ where $A_1$~wins; this
shows \NP-hardness for the NE existence problem.
\begin{figure}[t]
\centering
\begin{tikzpicture}[thick]
\tikzstyle{rond}=[draw,circle,minimum size=6mm,inner sep=0mm]
\tikzstyle{oval}=[draw,minimum height=6mm,inner sep=0mm,rounded corners=2mm]
\draw (0,0) node [rond] (C1) {$c_1$};
\draw (3,0) node [rond] (C2) {$c_2$};
\draw (7,0) node [rond] (CN) {$c_{n+1}$};
\draw (C1.90) node [above] {$A_1$};
\draw (C2.90) node [above] {$A_1$};
\draw (1.5,1) node [rond] (L11) {$\ell_{1,1}$};
\draw (1.5,0) node [rond] (L12) {$\ell_{1,2}$};
\draw (1.5,-1) node [rond] (L13) {$\ell_{1,3}$};
\draw (3.5,2) node [rond] (B) {$\bot$};
\draw[-latex'] (C1) -- (L11);
\draw[-latex'] (C1) -- (L12);
\draw[-latex'] (C1) -- (L13);
\draw[-latex'] (L11) -- (C2);
\draw[-latex'] (L12) -- (C2);
\draw[-latex'] (L13) -- (C2);
\draw[-latex'] (C2) -- +(1,0.5);
\draw[-latex'] (C2) -- +(1,0);
\draw[-latex'] (C2) -- +(1,-0.5);
\draw (5,0) node {\dots} ;
\draw[-latex'] (CN)+(-1,-0.5) -- (CN);
\draw[-latex'] (CN)+(-1,0) -- (CN);
\draw[-latex'] (CN)+(-1,0.5) -- (CN);
\draw[-latex'] (CN) -- + (1,0);
\draw[-latex'] (-1,0) -- (C1) ;
\draw[-latex',dotted] (L11) -- (B);
\draw[-latex',dotted] (L12) -- (B);
\draw[-latex',dotted] (L13) -- (B);
\draw[-latex'] (B) .. controls +(.5,1) and +(-.5,1) .. (B);
\end{tikzpicture}
\caption{Module $M(\phi)$, where $\phi = c_1 \land \dots \land c_n$
and $c_i = \ell_{i,1} \lor \ell_{i,2} \lor \ell_{i,3}$}\label{fig-M}
\end{figure}
\subsection{Objectives given as circuits}\label{subsec:circuits}
The value problem is known to be \PSPACE-complete for turn-based games
and objectives given as circuits~\cite{Hunter07}. The transformation
presented in the beginning of the section can be used to decide the
value problem for finite concurrent games with a single
circuit-objective, yielding \PSPACE-completeness of the value problem
in the case of finite concurrent games as well.
We now show that the (constrained) NE existence problem is also
\PSPACE-complete in this framework:
\begin{theorem}
For finite games with single objectives given as circuits, the NE existence
problem and the constrained NE existence problem are \PSPACE-complete.
\end{theorem}
\subsubsection{Reduction to a circuit objective}
We assume the preference relation of each player $A \in \Agt$ is given
by a circuit~$C_A$. Let $L \subseteq \Agt$. We define a Boolean
circuit defining the winning condition of \Eve in the suspect game
$\calH(\calG,L)$.
We define for each player $A \in \Agt$ and each set $P$ of players
(such that $\Stat \times P$ is reachable in $\calH(\calG,L)$), a
circuit $D_{A,P}$ which outputs \true for the plays $\rho$ with
$\limitpi2(\rho)=P$ (\ie the plays whose states visited infinitely
often all lie in $\Stat \times \{P\}$), and whose value by $C_A$ is \true.
We do so by making a copy of the circuit $C_A$, adding $|\Stat|$ \OR
gates $g_1, \dots, g_{|\Stat|}$ and one \AND gate $h$. There is an edge
from $(s_i,P)$ to $g_i$, and an edge from $g_{i-1}$ to $g_i$ for every
$1 < i \le |\Stat|$; there is an edge from $g_{|\Stat|}$ to $h$, an edge
from the output gate of $C_A$ to $h$, and an edge from
$h$ to the output gate of the new circuit. Inputs of $C_A$ are now the
$(s,P)$'s (instead of the $s$'s). The circuit $D_{A,P}$ is given on
Figure~\ref{fig:DAP}.
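The gate semantics of $D_{A,P}$ can be checked on a single input valuation with the following Python sketch (an illustration of the figure, with the copy of $C_A$ abstracted into a single Boolean value):

```python
def eval_D_AP(inputs_P, c_a_output):
    """Evaluate the circuit D_{A,P} from the figure above on one input
    valuation.  `inputs_P` lists the Boolean inputs for the pairs
    (s_1,P), ..., (s_n,P) (true iff that state is visited infinitely
    often), and `c_a_output` is the value computed by the copy of C_A on
    these inputs.  The OR-chain g_1..g_n collects the inputs, and the
    AND gate h conjoins its result with C_A's output.
    """
    g = False
    for bit in inputs_P:       # g_i = (s_i, P) OR g_{i-1}
        g = g or bit
    return g and c_a_output    # h = g_n AND output(C_A)
```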
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[thick]
\tikzstyle{rond}=[draw,circle,minimum size=6mm,inner sep=0mm]
\tikzstyle{oval}=[draw,minimum height=6mm,inner sep=0mm,rounded corners=2mm]
\tikzstyle{carre}=[draw,minimum height=6mm,minimum width=12mm,inner sep=0mm]
\draw (3,0.5) node [carre] (I1) {$(s_1,P)$};
\draw (5.5,0.5) node [carre] (I2) {$(s_2,P)$};
\draw (9,0.5) node [carre] (IN) {$(s_{n},P)$};
\draw (7,0.5) node (ID) {$\dots$};
\draw (10,-1.5) node[rond] (C) {$C_A$};
\draw (0,-1) node [rond] (G1) {$\lor$} node [below=7pt] {$g_1$};
\draw (1.5,-1) node [rond] (G2) {$\lor$} node [below=7pt] {$g_2$};
\draw (3,-1) node {$\dots$};
\draw (5,-1) node [rond] (GN) {$\lor$} node [below=7pt] {$g_n$};
\draw (6,-1.5) node [rond] (H) {$\land$} node [label=-120:$h$] {};
\draw[-latex'] (I1) -- (G1);
\draw[-latex'] (I2) -- (G2);
\draw[-latex'] (IN) -- (GN);
\draw[-latex'] (G1) -- (G2);
\draw[-latex'] (4,-1) -- (GN);
\draw[-latex'] (GN) -- (H);
\draw[-latex'] (C) -- (H);
\draw[-latex'] (H) -- +(0,-.8);
\draw[-latex'] (I1) -- (C);
\draw[-latex'] (I2) -- (C);
\draw[-latex'] (IN) -- (C);
\end{tikzpicture}
\caption{Circuit $D_{A,P}$}
\label{fig:DAP}
\end{figure}
We then define a circuit $E_A$ which outputs \true for the plays
$\rho$ with $A \in \limitpi2(\rho)$ and whose output by $C_A$ is
\true. We do so by taking the disjunction of the circuits $D_{A,P}$.
Formally, let $P_1,\dots,P_n$ be the sets of players such that $\Stat\times
P_i$ is reachable in the suspect game and $A\in P_i$; we include the circuits
$D_{A,P_i}$ and, writing $o_{A,P_i}$ for their output gates, we add \OR gates
$g_1,\dots,g_{n+1}$ with an edge from $o_{A,P_i}$ to $g_i$, an edge from $g_i$
to $g_{i+1}$ for every $i \le n$, and an edge from $g_{n+1}$ to the output gate.
Finally we define the circuit $F_L$, which outputs \true for the plays
$\rho$ such that there is no $A\in L$ such that $A\in \limitpi2(\rho)$
and the output of $\Sproj_1(\rho)$ by $C_A$ is \true. This corresponds
exactly to the plays that are winning for \Eve in suspect game
$\calH(\calG,L)$. We do so by negating the disjunction of all the
circuits $E_A$ for $A\in L$.
The next lemma follows from the construction:
\begin{lemma}
\label{lem:circuit}
A play $\rho$ is winning for \Eve in $\calH(\calG,L)$ iff $\rho$
evaluates circuit $F_L$ to \true.
\end{lemma}
We should notice that circuit $F_L$ has size polynomial in the size of
$\calG$, thanks to Proposition~\ref{lem:polynomial-size}.
\subsubsection{Algorithm and complexity analysis}
To solve the constrained NE existence problem we apply the same algorithm
as for reachability objectives (see
Section~\ref{subsec:reachability}). For complexity matters, the only
difference stands in the computation of the set of winning states in
the suspect game. Thanks to Lemma~\ref{lem:circuit}, we know it
reduces to the computation of the set of winning states in a
turn-based game with an objective given as a circuit (of
polynomial-size). This can be done in \PSPACE~\cite{Hunter07}, which
yields a \PSPACE upper bound for the constrained NE existence problem
(and therefore for the NE existence problem and the value problem~--~see
Proposition~\ref{lem:link-value-constr}). \PSPACE-hardness of all
problems follows from that of the value problem in turn-based
games~\cite{Hunter07}, and from Propositions~\ref{lem:link-value-constr}
and~\ref{lem:link-value-exist} (we notice that the preference
relations in the new games are easily definable by circuits).
\subsection{Rabin and parity objectives}
\label{subsec:rabin}
The value problem is known to be \NP-complete for Rabin
conditions~\cite{emerson1988complexity} and in \UP $\cap$ \co-\UP\ for
parity conditions~\cite{Jurdzinski98}.
We then notice that a parity condition is a Rabin condition with half
as many pairs as the number of priorities: assume the parity condition
is given by $p \colon \Stat \mapsto \lsem 0 , d \rsem$ with $d\in \N$;
take, for $i$ in $\lsem 0, \lfloor d/2 \rfloor\rsem$, $Q_i = p^{-1}\{2 i\}$ and
$R_i = p^{-1} \{ 2j+1 \mid j \ge i\}$. Then the Rabin objective
$(Q_i,R_i)_{0 \le i \le \lfloor d/2 \rfloor}$ is equivalent to the parity
condition given by $p$.
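This conversion is easily implemented; the following Python sketch (assuming the max-parity convention used above, where priorities range over $\lsem 0,d\rsem$) builds the Rabin pairs from a priority function:

```python
def parity_to_rabin(priority):
    """Convert a parity condition into an equivalent Rabin condition.

    `priority` maps each state to an integer in 0..d.  Returns the list
    of Rabin pairs (Q_i, R_i) from the construction above: Q_i collects
    the states of priority 2i, and R_i the states of odd priority at
    least 2i+1.  Illustrative sketch.
    """
    d = max(priority.values())
    pairs = []
    for i in range(d // 2 + 1):
        Q = {s for s, p in priority.items() if p == 2 * i}
        R = {s for s, p in priority.items() if p % 2 == 1 and p >= 2 * i + 1}
        pairs.append((Q, R))
    return pairs
```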
We design an algorithm that solves the constrained NE existence
problem in $\P^\NP_\parallel$ for Rabin objectives (see
footnote~\ref{fn-pnp||} on page~\pageref{fn-pnp||} for an informal definition
of $\P^{\NP}_\parallel$).
Our algorithm heavily uses non-determinism (via the oracle). We
then propose a deterministic algorithm which runs in exponential
time, but will be useful in Section~\ref{subsec:rabin-auto}. This
subsection ends by proving $\P^\NP_\parallel$-hardness of the
constrained NE existence problem and NE existence problem for parity
objectives.
In the end, we will have proven the following theorem:
\begin{theorem}
For finite games with single objectives given as Rabin or parity conditions,
the NE existence problem and the constrained NE existence problem are
$\P^\NP_\parallel$-complete.
\end{theorem}
\subsubsection{Reduction to a Streett game}
We assume that the preference relation of each player $A \in \Agt$ is
given by the Rabin condition $(Q_{i,A},R_{i,A})_{i\in\lsem
1,k_A\rsem}$. Let $L \subseteq \Agt$. In the suspect game
$\calH(\calG,L)$, we define the Streett objective
$(Q'_{i,A},R'_{i,A})_{i\in\lsem 1,k_A\rsem, A\in L}$, where $Q'_{i,A}
= (Q_{i,A} \times \{P \mid A \in P\}) \cup (\Stat \times \{ P \mid A
\not\in P\})$ and $R'_{i,A} = R_{i,A} \times \{P \mid A \in P\}$, and
we write $\Omega_L$ for the corresponding set of winning plays.
\begin{lemma}
\label{lemma:red-streett}
A play $\rho$ is winning for \Eve in $\calH(\calG,L)$ iff $\rho \in
\Omega_L$.
\end{lemma}
\begin{proof}
Assume $\rho$ is winning for \Eve in $\calH(\calG,L)$. For all $A
\in \limitpi2(\rho) \cap L$, $\Sproj_1(\rho)$ does not satisfy the Rabin
condition given by $(Q_{i,A},R_{i,A})_{i\in\lsem 1,k_A\rsem}$. For
all $1 \le i \le k_A$, $\Inf(\Sproj_1(\rho))\cap Q_{i,A} = \varnothing$
or $\Inf(\Sproj_1(\rho)) \cap R_{i,A}\ne \varnothing$. We infer that
for all $1 \le i \le k_A$, $\Inf(\rho) \cap Q'_{i,A} = \varnothing$
or $\Inf(\rho) \cap R'_{i,A} \ne \varnothing$. Now, if $A \notin
\limitpi2(\rho)$ then all $Q'_{i,A}$ are seen infinitely often along
$\rho$. Therefore for every $A \in L$, the Streett
condition~$(Q'_{i,A},R'_{i,A})_{i\in\lsem 1,k_A\rsem}$ is satisfied along $\rho$ (that is,
$\rho \in \Omega_L$).
Conversely, if the Streett condition $(Q'_{i,A},R'_{i,A})_{i\in\lsem
1,k_A\rsem, A\in L}$ is satisfied along $\rho$, then for every $A \in L$,
either the Rabin condition $(Q_{i,A}, R_{i,A})_{i\in\lsem 1,k_A\rsem}$ is not
satisfied along $\Sproj_1(\rho)$ or $A\not\in \limitpi2(\rho)$. This means that \Eve is winning
in $\calH(\calG,L)$.
\end{proof}
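Reading the acceptance conventions off the proof above (a play satisfies a Rabin condition iff for some pair $(Q_i,R_i)$, $Q_i$ is visited infinitely often while $R_i$ is not; the Streett condition is the dual), the two checks on the set of infinitely-visited states can be sketched in Python as:

```python
def rabin_holds(inf, pairs):
    """Rabin check on the set `inf` of infinitely-visited states:
    some pair (Q, R) has inf meeting Q but not R."""
    return any(inf & Q and not (inf & R) for (Q, R) in pairs)

def streett_holds(inf, pairs):
    """Streett check, the dual condition: every pair (Q, R) has
    inf disjoint from Q, or inf meeting R."""
    return all(not (inf & Q) or (inf & R) for (Q, R) in pairs)
```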
\subsubsection{Algorithm}
We now describe a $\P^\NP_\parallel$ algorithm for solving the
constrained NE existence problem in games where each player has a single
Rabin objective. As in the previous cases, our algorithm relies on the
suspect game construction.
Write $\calP$ for the set of sets of players of~$\Agt$ that appear as
the second item of a state of~$\calJ(\calG)$:
\[
\calP = \{P\subseteq \Agt \mid \exists s\in\Stat.\ (s,P)\text{ is a
state of }\calJ(\calG)\}.
\]
Since~$\calJ(\calG)$ has polynomial size, so does~$\calP$. Also, for
any path~$\rho$, $\limitpi2(\rho)$ is an element of~$\calP$. Hence, for a
fixed~$L$, the~number of sets~$\limitpi2(\rho)\cap L$ is
polynomial. Now, as recalled on page~\pageref{simplification}, the
winning condition for~\Eve is that the players in~$\limitpi2(\rho)\cap
L$ must be losing along $\Sproj_1(\rho)$ in $\calG$ for their Rabin
objective. We have seen that this can be seen as a Streett objective
(Lemma~\ref{lemma:red-streett}).
Now, deciding whether a state is winning in a turn-based game for a
Streett condition can be decided in
\coNP~\cite{emerson1988complexity}. Hence, given a state~$s\in\Stat$
and a set~$L$, we can decide in \coNP~whether $s$ is winning for \Eve
in~$\calH(\calG,L)$. This will be used as an oracle in our algorithm
below.
\smallskip Now, pick a set~$P\subseteq\Agt$ of suspects, \ie, for
which there exists $(s,t)\in\Stat^2$ and $m_\Agt$
s.t. $P=\Susp((s,t),m_\Agt)$. Using the same arguments as in the proof
of Proposition~\ref{lem:polynomial-size}, it~can be shown that $2^{\size P}
\leq \size\Tab$, so that the number of subsets of~$P$ is polynomial.
Now, for each set~$P$ of suspects and each~$L\subseteq P$, write
$w(L)$ for the size of the winning region of~\Eve
in~$\calH(\calG,L)$. Then the sum $\sum_{P\in\calP\setminus\{\Agt\}}
\sum_{L\subseteq P} w(L)$ is at most $\size\Stat\times \size\Tab^2$.
Assume that the exact value~$M$ of this sum is known, and consider the
following algorithm:
\begin{enumerate}
\item for each $P\in \calP\setminus\{\Agt\}$ and
each~$L\subseteq P$, guess a set $W(L)\subseteq \Stat$, which we
intend to be the exact winning region for~\Eve in~$\calH(\calG,L)$.
\item check that the sizes of those sets sum up to~$M$;
\item for each $s\notin W(L)$, check that \Eve does not have a winning
strategy from~$s$ in~$\calH(\calG,L)$. This can be checked in
non-deterministic polynomial time, as explained above.
\item guess a lasso-shaped path~$\rho=\pi\cdot \tau^\omega$ in~$\calH(\calG,L)$
starting from~$(s,\Agt)$, with $\size \pi$ and $\size \tau$ less
than $\size\Stat^2$ (following Proposition~\ref{lem:play-length})
visiting only states where the second item is~$\Agt$. This path can
be seen as the outcome of some strategy of~\Eve when \Adam
obeys. For this path, we~then check the following:
\begin{itemize}
\item along~$\rho$, the sets of winning and losing players satisfy
the original constraint (remember that we aim at solving the
constrained NE existence problem);
\item any deviation along~$\rho$ leads to a state that is winning
for~\Eve. In~other terms, pick a state~$h=(s,\Agt,m_\Agt)$
of~\Adam along~$\rho$, and pick a successor~$h'=(t,P)$ of~$h$ such
that $t\not=\Tab(s,m_\Agt)$. Then the algorithm checks that $t\in
W(L\cap P)$.
\end{itemize}
\end{enumerate}
The algorithm accepts the input~$M$ if it succeeds in finding the
sets~$W$ and the path~$\rho$ such that all the checks are
successful. This algorithm is non-deterministic and runs in polynomial
time, and will be used as a second oracle.
\medskip We now show that if $M$ is exactly the sum of the $w(L)$,
then the algorithm accepts~$M$ if, and only~if, there is a Nash
equilibrium satisfying the constraint, \ie, if, and only~if, \Eve has
a winning strategy from~$(s,\Agt)$ in~$\calH(\calG,L)$.
First assume that the algorithm accepts~$M$. This means that it~is
able, for each~$L$, to find sets~$W(L)$ of states whose complement
does not intersect the winning region of~$\calH(\calG,L)$. Since~$M$
is assumed to be the exact sum of~$w(L)$ and the size of the
sets~$W(L)$ sum up to~$M$, we~deduce that~$W(L)$ is exactly the
winning region of~\Eve in~$\calH(\calG,L)$. Now, since the algorithm
accepts, it~is also able to find a (lasso-shaped) path~$\rho$ only
visiting states having~$\Agt$ as the second component. This path has
the additional property that any ``deviation'' from a state of \Adam
along this path ends up in a state that is winning for~\Eve for
players in~$L\cap P$, where $P$ is the set of suspects for the present
deviation. This way, if during~$\rho$, \Adam deviates to a
state~$(t,P)$, then \Eve~will have a strategy to ensure that along any
subsequent play, the objectives of players in~$L\cap P$ (in~$\calG$)
are not fulfilled; hence along any run~$\rho'$, the players
in~$L\cap \limitpi2(\rho')$ are losing for their objectives
in~$\calG$, and \Eve wins in~$\calH(\calG,L)$.
Conversely, assume that there is a Nash equilibrium satisfying the
constraint. Following Proposition~\ref{lem:play-length}, we assume
that the outcome of the corresponding strategy profile has the form
$\pi\cdot\tau^\omega$. From Lemma~\ref{lemma-suspectgame}, there is a
winning strategy for~\Eve in~$\calH(\calG, L)$ whose outcome when
\Adam obeys follows the outcome of the Nash equilibrium. As~a
consequence, the outcome when \Adam obeys is a path~$\rho$ that the
algorithm can guess. Indeed, it~must satisfy the constraints, and any
deviation from~$\rho$ with set of suspects~$P$ ends in a state where
\Eve wins for the winning condition of~$\calH(\calG,L)$, hence also
for the winning condition of~$\calH(\calG,L\cap P)$, since any
path~$\rho'$ visiting~$(t,P)$ has $\limitpi2(\rho')\subseteq P$.
\medskip Finally, our global algorithm is as follows: we run the first
oracle for all the states and all the sets~$L$ that are subsets of a
set of suspects (we know that there are polynomially many such
inputs). We~also run the second oracle on all the possible values
for~$M$, which are also polynomially many. Now, from the answers of
the first oracle, we~compute the exact value of~$M$, and return the answer
given by the second oracle on that input. This algorithm runs
in~$\P^{\NP}_{\parallel}$ and decides the constrained NE existence
problem.
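The overall $\P^{\NP}_{\parallel}$ structure can be sketched as follows (an illustrative skeleton: names and signatures are our own, not from the construction, and the two oracles are passed in as black boxes):

```python
def constrained_ne_decision(first_queries, first_oracle, second_oracle, max_m):
    """Skeleton of the overall parallel-oracle procedure sketched above.
    All oracle queries are prepared in advance and asked in a single
    parallel round: `first_oracle` answers queries of the form 'is this
    state winning for Eve in the suspect game?', and `second_oracle`
    answers 'does the guess-and-check procedure accept the value M?'.
    """
    # single parallel round of oracle queries
    first_answers = [first_oracle(q) for q in first_queries]
    second_answers = {m: second_oracle(m) for m in range(max_m + 1)}
    # deterministic post-processing: the true value of M is the number
    # of positive answers from the first oracle
    m = sum(1 for a in first_answers if a)
    return second_answers[m]
```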
\subsubsection{Deterministic algorithm}
\label{rabin:det-algo}
In the next section we will need a deterministic algorithm to solve
games with objectives given as deterministic Rabin automata. We
therefore present it right now. The deterministic algorithm works by
successively trying all the possible payoffs; there are $2^{|\Agt|}$
of them. Then it computes the winning strategies of the suspect game
for that payoff. In \cite{horn2005streett} an algorithm for Streett
games is given, which works in time $\mathcal{O}(n^k\cdot k!)$, where
$n$ is the number of vertices in the game, and $k$ the size of the
Streett condition. The algorithm then has to find, in the winning region
of \Eve in $\calJ(\calG)$, a lasso that satisfies the Rabin winning
conditions of the winners and does not satisfy those of the losers. To
do so, it tries, for each winner~$A$, all the possible choices of an
elementary Rabin condition~$(Q_{i_A,A},R_{i_A,A})$ to be satisfied;
there are at most $\prod_{A \in \Agt} k_A$ possible choices. For the
losers, it tries all the possible choices of whether each $Q_{i,A}$ is
visited or not; there are $\prod_{A\in\Agt} 2^{k_A}$ such choices. It
then looks for a lasso cycle that, when $A$ is a winner, visits
$Q_{i_A,A}$ and does not visit $R_{i_A,A}$, and, when $A$ is a loser,
for every~$i$ either does not visit $Q_{i,A}$ or visits $R_{i,A}$,
according to the guessed choices. This is equivalent to
finding a path satisfying a conjunction of B\"uchi conditions and can
be done in polynomial time $\mathcal{O}(n\times \sum_{A\in \Agt}
k_A)$. The global algorithm works in
time \[\mathcal{O}\left(2^{|\Agt|} \cdot \left(|\Tab|^{3 \sum_A k_A}
\cdot (\sum_A k_A)! + \left(\prod_{A \in \Agt} k_A \cdot
2^{k_A}\right) \cdot |\Tab|^3 \cdot \sum_A k_A\right)\right)\]
Notice that the exponential does not come from the size of the graph
but from the number of agents and the number of elementary Rabin
conditions; this will be important in the next subsection, where we
reuse the algorithm on a game structure whose size is exponential.
\subsubsection{$\P^{\NP}_{\parallel}$-hardness}
We now prove $\P^{\NP}_{\parallel}$-hardness of the (constrained)
NE existence problem in the case of parity objectives. The main
reduction is an encoding of the \PARITYSAT problem, where the aim is
to decide whether the number of satisfiable instances among a set of
formulas is even. This problem is known to be complete for
$\P^{\NP}_{\parallel}$~\cite{Got95}.
Before tackling the whole reduction, we first develop some
preliminaries on single instances of \SSAT, inspired
from~\cite{CHP07}. Let us consider an instance $\phi = c_1 \land
\dots \land c_n$ of \SSAT, where $c_i = \ell_{i,1} \lor \ell_{i,2}
\lor \ell_{i,3}$, and $\ell_{i,j} \in \{ x_k, \lnot x_k \mid 1 \le k
\le p\}$. With~$\phi$, we associate a three-player game~$N(\phi)$,
depicted on Figure~\ref{fig-N1} (where the first state of~$N(\phi)$ is
controlled by~$A_1$, and the first state of each~$N'(c_j)$ is
concurrently controlled by~$A_2$ and~$A_3$). For each variable~$x_j$,
players~$A_2$ and~$A_3$ have the following target sets:
\begin{xalignat*}4
T^{A_2}_{2j} &= \{x_j\} &
T^{A_2}_{2j +1} &=\{\lnot x_j\} &\qquad
T^{A_3}_{2j +1} &= \{ x_j\} &
T^{A_3}_{2j} &= \{\lnot x_j\}
\end{xalignat*}
\begin{figure}[t]
\begin{minipage}{0.45\textwidth}
\centering
\begin{tikzpicture}[thick]
\tikzstyle{rond}=[draw,circle,minimum size=6mm,inner sep=0mm,fill=black!10]
\tikzstyle{oval}=[draw,minimum height=6mm,inner sep=0mm,rounded corners=2mm,fill=black!10]
\draw [thin,densely dotted,rounded corners=2mm] (-.5,2.3) --(2.8,2.3) -- (2.8,-2) -- (-.5,-2) --cycle;
\path (0,2) node {$N(\phi)$};
\draw (0,0) node[rond] (A) {$A_1$};
\draw (2,1.5) node[oval] (P1) {$N'(c_1)$};
\draw (2,.5) node[oval] (P2) {$N'(c_2)$};
\draw (2,-.5) node {\vdots};
\draw (2,-1.5) node[oval] (Pn) {$N'(c_n)$};
\draw[latex'-] (A.-180) -- +(-.6,0);
\draw[-latex'] (A) -- (P1);
\draw[-latex'] (A) -- (P2);
\draw[-latex'] (A) -- (Pn);
\end{tikzpicture}
\end{minipage}
\hfill
\begin{minipage}{0.45\textwidth}
\centering
\begin{tikzpicture}[thick]
\tikzstyle{rond}=[draw,circle,minimum size=6mm,inner sep=0mm,fill=black!10]
\tikzstyle{oval}=[draw,minimum height=6mm,inner sep=0mm,rounded corners=2mm,fill=black!10]
\draw [thin,densely dotted,rounded corners=2mm] (-.8,2.3) --(2.8,2.3) -- (2.8,-2) -- (-.8,-2) --cycle;
\path (0,2) node {$N'(c_i)$};
\draw (0,0) node[oval] (A) {$A_2/A_3$};
\draw (2,1.5) node[rond] (P1) {$\ell_{i,1}$};
\draw (2,0) node[rond] (P2) {$\ell_{i,2}$};
\draw (2,-1.5) node[rond] (P3) {$\ell_{i,3}$};
\draw (4,0) node[oval,dotted,inner sep=1mm] (MP) {$N(\phi)$};
\draw[latex'-] (A.-180) -- +(-.6,0);
\draw[-latex',rounded corners=1mm] (A) |- node[left,pos=.25]
{\begin{minipage}{1cm}\baselineskip=7pt\flushright
$\scriptstyle\langle 2,2\rangle$
$\scriptstyle\langle 0,1\rangle$
$\scriptstyle\langle 1,0\rangle$
\end{minipage}}
(P1);
\draw[-latex',rounded corners=1mm] (A) --
node[below] {\begin{minipage}{1cm}\baselineskip=7pt\centering
$\scriptstyle\langle 1,1\rangle$
$\scriptstyle\langle 2,0\rangle$
$\scriptstyle\langle 0,2\rangle$
\end{minipage}}
(P2);
\draw[-latex',rounded corners=1mm] (A) |-
node[left,pos=.25] {\begin{minipage}{1cm}\baselineskip=7pt\flushright
$\scriptstyle\langle 0,0\rangle$
$\scriptstyle\langle 2,1\rangle$
$\scriptstyle\langle 1,2\rangle$
\end{minipage}}
(P3);
\draw[-latex',dotted] (P1) -- (MP);
\draw[-latex',dotted] (P2) -- (MP);
\draw[-latex',dotted] (P3) -- (MP);
\end{tikzpicture}
\end{minipage}
\caption{The game~$N(\phi)$ (left), where $N'(c_i)$ is the module on the right.}\label{fig-N1}
\end{figure}
\noindent This construction enjoys interesting properties, given by the
following lemma:
\begin{lemma}\label{lemma-N}
If the formula~$\phi$ is not satisfiable, then there is a strategy
for player~$A_1$ in~$N(\phi)$ such that players~$A_2$ and~$A_3$
lose.
%
If the formula~$\phi$ is satisfiable, then for any strategy
profile~$\sigma_\Agt$, one of~$A_2$ and~$A_3$ can change her
strategy and win.
\end{lemma}
\begin{proof}
We begin with the first statement, assuming that $\phi$~is not
satisfiable and defining the strategy for~$A_1$. With a history~$h$
in~$N(\phi)$, we~associate a valuation $v^h \colon \{x_k \mid k \in
[1,p]\} \to \{\top,\bot\}$ (where $p$ is the number of distinct
variables in~$\phi$), defined as follows:
\[
v^h(x_k) = \top \ \Leftrightarrow\ \exists m.\ h_m = x_k \land
\forall m'>m .\ h_{m'} \ne \lnot x_k \qquad \text{for all
$k\in[1,p]$}
\]
We also define $v^h(\neg x_k) = \neg v^h(x_k)$. Under this
definition, $v^h(x_k)=\top$ if the last occurrence of~$x_k$ or~$\neg
x_k$ along~$h$ was~$x_k$. We~then define a strategy~$\sigma_1$ for
player~$A_1$: after a history~$h$ ending in an $A_1$-state,
we~require~$\sigma_1(h)$ to go to $N'(c_i)$ for some $c_i$ (with
least index,~say) that evaluates to false under~$v^h$ (such a~$c_i$
exists since $\phi$ is not satisfiable). This strategy enforces that
if $h\cdot\sigma_1(h)\cdot \ell_{i,j}$ is a finite outcome
of~$\sigma_1$, then $v^h(\ell_{i,j}) = \bot$, because $A_1$ has
selected a clause~$c_i$ whose literals all evaluate to~$\bot$.
Moreover, $v^{h\cdot\sigma_1(h)\cdot \ell_{i,j}}(\ell_{i,j}) =
\top$, so that for each~$j$, any outcome of~$\sigma_1$ will either
alternate between~$x_k$ and~$\neg x_k$ (hence visit both of them
infinitely often), or no longer visit any of them after some
point. Hence both~$A_2$ and~$A_3$ lose.
\medskip We now prove the second statement. Let $v$ be a valuation
under which~$\phi$ evaluates to~true, and $\sigma_\Agt$ be a
strategy profile. From~$\sigma_{A_2}$ and~$\sigma_{A_3}$, we~define
two strategies $\sigma_{A_2}'$ and~$\sigma_{A_3}'$. Consider a
finite history~$h$ ending in the first state of~$N'(c_i)$, for
some~$i$. Pick a~literal~$\ell_{i,j}$ of~$c_i$ that is true
under~$v$ (the one with least index,~say). We~set
\begin{xalignat*}2
\sigma'_{A_2}(h) &= [j - \sigma_{A_3}(h) \text{ (mod 3)}]
&
\sigma'_{A_3}(h) &= [j - \sigma_{A_2}(h) \text{ (mod 3)}].
\end{xalignat*}
It~is easily checked that, when $\sigma_{A_2}$ and~$\sigma'_{A_3}$
(or $\sigma'_{A_2}$ and $\sigma_{A_3}$) are played simultaneously in
the first state of some~$N'(c_i)$, then the game goes
to~$\ell_{i,j}$. Thus under those strategies, any visited literal
evaluates to~true under~$v$, which means that at most one of~$x_k$
and $\neg x_k$ is visited (infinitely often). Hence one of~$A_2$
and~$A_3$ is winning, which proves our claim.
\end{proof}
\medskip We now proceed by encoding an instance
\begin{xalignat*}1
\exists x^1_{1}, \dots x^1_{k}.\ &\phi^1(x^1_{1},\dots,x^1_{k}) \\
&\dots \\
\exists x^m_{1}, \dots x^m_{k}.\ &\phi^m(x^m_{1},\dots,x^m_{k})
\end{xalignat*}
of \PARITYSAT into a parity game. The game involves the three
players~$A_1$, $A_2$ and~$A_3$ of the game~$N(\phi)$ defined above,
and it~will contain a copy of~$N(\phi^r)$ for each~$1\leq r\leq m$.
The~objectives of~$A_2$ and~$A_3$ are the unions of their objectives
in each~$N(\phi^r)$, e.g. $p^{A_2}(x^1_j) = p^{A_2}(x^2_j) = \cdots =
p^{A_2} (x^m_j) = 2j$.
For each such~$r$, the game will also contain a copy of the
game~$M(\phi^r)$ depicted on Figure~\ref{fig-M}. Each
game~$M(\phi^r)$ involves an extra set of players~$B^r_{k}$, one for
each variable~$x^r_k$. As we have seen in
Section~\ref{subsec:cobuchi}, in a Nash equilibrium, it~cannot be the
case that both $x^r_k$ and~$\neg x^r_k$ are visited infinitely often.
In order to test the parity of the number of satisfiable formulas, we
then define two families of modules, depicted on Figure~\ref{fig-mod1}
to~\ref{fig-modn}. Finally, the whole game~$\calG$ is depicted on
Figure~\ref{fig-g}. In~that game, the objective of~$A_1$ is to visit
infinitely often the initial state~$\textrm{init}$.
\begin{figure}[!ht]
\begin{minipage}{0.5\textwidth}
\centering
\begin{tikzpicture}[thick,xscale=1.2]
\tikzstyle{rond}=[draw,circle,minimum size=7mm,inner sep=1mm,fill=black!10]
\tikzstyle{oval}=[draw,minimum width=10mm,inner sep=1mm,minimum height=7mm,rounded corners=2mm,fill=black!10]
\draw (0,0) node[rond] (A1) {$A_1$};
\draw (1,1) node[oval] (MP) {$M(\phi^r)$};
\draw (3,1) node[oval] (GP) {$G(\phi^{r-1})$};
\draw (1,-1) node[oval] (A2A3) {$A_2/A_3$};
\draw (3,-2) node[oval] (NP) {$N(\phi^r)$};
\draw (3,0) node[oval] (HP) {$H(\phi^{r-1})$};
\draw[-latex'] (-0.8,0) -- (A1);
\draw[-latex'] (A1) -- (MP);
\draw[-latex'] (A1) -- (A2A3);
\draw[-latex'] (MP) -- (GP);
\draw[-latex'] (A2A3) -- node[above,sloped] {$\scriptstyle\langle
1,0\rangle $} node[below,sloped] {$\scriptstyle\langle 0,1 \rangle$}
(NP);
\draw[-latex'] (A2A3) -- node[below,sloped] {$\scriptstyle\langle
1,1\rangle $} node[above,sloped] {$\scriptstyle\langle 0,0 \rangle$}
(HP);
\end{tikzpicture}
\caption{Module $H(\phi^r)$ for $r\ge 2$}\label{fig-mod1}
\end{minipage}%
\begin{minipage}{0.5\textwidth}
\centering
\begin{tikzpicture}[thick,xscale=1.2]
\tikzstyle{rond}=[draw,circle,minimum size=7mm,inner sep=1mm,fill=black!10]
\tikzstyle{oval}=[draw,minimum width=10mm,inner sep=1mm,minimum height=7mm,rounded corners=2mm,fill=black!10]
\draw (0,0) node[rond] (A1) {$A_1$};
\draw (1,1) node[oval] (MP) {$M(\phi^r)$};
\draw (3,1) node[oval] (GP) {$H(\phi^{r-1})$};
\draw (1,-1) node[oval] (A2A3) {$A_2/A_3$};
\draw (3,-2) node[oval] (NP) {$N(\phi^r)$};
\draw (3,0) node[oval] (HP) {$G(\phi^{r-1})$};
\draw[-latex'] (-0.8,0) -- (A1);
\draw[-latex'] (A1) -- (MP);
\draw[-latex'] (A1) -- (A2A3);
\draw[-latex'] (MP) -- (GP);
\draw[-latex'] (A2A3) -- node[above,sloped] {$\scriptstyle\langle
1,0\rangle $} node[below,sloped] {$\scriptstyle\langle 0,1 \rangle$}
(NP);
\draw[-latex'] (A2A3) -- node[below,sloped] {$\scriptstyle\langle
1,1\rangle $} node[above,sloped] {$\scriptstyle\langle 0,0 \rangle$}
(HP);
\end{tikzpicture}
\caption{Module $G(\phi^r)$ for $r\ge 2$}
\end{minipage}
\medskip
\defFig.{Fig.}
\captionindent=0pt
\noindent\begin{minipage}{\linewidth}
\noindent
\begin{minipage}{.3\linewidth}
\centering
\begin{tikzpicture}[thick,xscale=1.2]
\tikzstyle{rond}=[draw,circle,minimum size=7mm,inner sep=1mm,fill=black!10]
\tikzstyle{oval}=[draw,minimum width=10mm,inner sep=1mm,minimum height=7mm,rounded corners=2mm,fill=black!10]
\path[use as bounding box] (-1,.9) -- (1,-1.7);
\draw (0,0) node[oval] (M) {$M(\phi^1)$};
\draw[-latex'] (-1,0) -- (M);
\draw[-latex'] (M) -- (1,0);
\end{tikzpicture}
\caption{Module $H(\phi^1)$}
\end{minipage}\hfill
\begin{minipage}{0.32\linewidth}
\centering
\begin{tikzpicture}[thick,xscale=1.2]
\tikzstyle{rond}=[draw,circle,minimum size=7mm,inner sep=1mm,fill=black!10]
\tikzstyle{oval}=[draw,minimum width=10mm,inner sep=1mm,minimum height=7mm,rounded corners=2mm,fill=black!10]
\path[use as bounding box] (-.5,.9) -- (1.5,-1.7);
\draw (0,0) node[oval] (M) {$A_2/A_3$};
\draw (0,-1.5) node[oval] (N) {$N(\phi^1)$};
\draw[-latex'] (-1,0) -- (M);
\draw[-latex'] (M) -- node[above] {$\scriptstyle\langle
0,0\rangle $} node[below] {$\scriptstyle\langle 1,1 \rangle$}
(1.5,0);
\draw[-latex'] (M) -- node[left] {$\scriptstyle\langle
1,0\rangle $} node[right] {$\scriptstyle\langle 0,1 \rangle$}
(N);
\end{tikzpicture}
\caption{\mbox{Module $G(\phi^1)$}}\label{fig-modn}
\end{minipage}\hfill
\begin{minipage}{0.32\linewidth}
\centering
\begin{tikzpicture}[thick,xscale=1.2]
\tikzstyle{rond}=[draw,circle,minimum size=7mm,inner sep=1mm,fill=black!10]
\tikzstyle{oval}=[draw,minimum width=10mm,inner sep=1mm,minimum height=7mm,rounded corners=2mm,fill=black!10]
\path[use as bounding box] (-1,.9) -- (2.5,-1.7);
\draw (0,0) node[oval] (A1) {$\textrm{init}\vphantom{G^n}$};
\draw (2,0) node[oval] (MP) {$G(\phi^m)$};
\draw[-latex'] (-1,0) -- (A1);
\draw[-latex'] (A1) -- (MP);
\draw[-latex',rounded corners=3mm] (MP) |- (1,-1) -| (A1) ;
\end{tikzpicture}
\caption{The game $\calG$}\label{fig-g}
\end{minipage}
\end{minipage}
\end{figure}
\begin{lemma}
There is a Nash equilibrium in the game $\calG$ where $A_2$ and
$A_3$ lose and $A_1$ wins if, and only if, the number of satisfiable
formulas is even.
\end{lemma}
\begin{proof}
Assume that there is a Nash equilibrium in~$\calG$ where $A_1$ wins
and both~$A_2$ and~$A_3$ lose. Let~$\rho$ be its outcome. As~already
noted, if~$\rho$~visits module~$M(\phi^r)$ infinitely often, then
it~cannot be the case that both~$x^r_k$ and~$\neg x^r_k$ are visited
infinitely often in~$M(\phi^r)$, as otherwise $B^r_k$ would be
losing and have the opportunity to improve her payoff. This implies
that $\phi^r$ is satisfiable.
Similarly, if~$\rho$ visits infinitely often the state
of~$H(\phi^r)$ or~$G(\phi^r)$ that is controlled by~$A_2$ and~$A_3$,
then it~must be the case that $\phi^r$ is not satisfiable, since
from Lemma~\ref{lemma-N} this would imply that $A_2$ or~$A_3$ could
deviate and improve her payoff by going to~$N(\phi^r)$.
\medskip We now show by induction on~$r$ that if $\rho$ goes
infinitely often in module $G(\phi^r)$ then $\#\{ j \le r \mid
\phi^j\text{ is satisfiable}\}$ is even, and that (if~$r>1$) this
number is odd if $\rho$ goes infinitely often in module $H(\phi^r)$.
When~$r=1$, since $H(\phi^1)$ is $M(\phi^1)$, $\phi^1$ is
satisfiable, as noted above. Similarly, if $\rho$ visits $G(\phi^1)$
infinitely often, it~also visits its $A_2/A_3$-state infinitely
often, so that $\phi^1$ is not satisfiable. This proves the base
case.
Assume that the result holds up to some~$r-1$, and assume
that~$\rho$ visits~$G(\phi^{r})$ infinitely often. Two cases may
occur:
\begin{itemize}
\item it~can be the case that $M(\phi^r)$ is visited infinitely
often, as well as $H(\phi^{r-1})$. Then $\phi^r$ is satisfiable,
and the number of satisfiable formulas with index less than or
equal to~$r-1$ is~odd. Hence the number of satisfiable formulas
with index less than or equal to~$r$ is even.
\item it~can also be the case that the state $A_2/A_3$
of~$G(\phi^r)$ is visited infinitely often. Then $\phi^r$ is not
satisfiable. Moreover, since~$A_1$ wins, the play will also
visit~$G(\phi^{r-1})$ infinitely often, so that the number of
satisfiable formulas with index less than or equal to~$r$ is even.
\end{itemize}
If $\rho$ visits~$H(\phi^r)$ infinitely often, using similar
arguments we prove that the number of satisfiable formulas with
index less than or equal to~$r$ is odd.
To conclude, since~$A_1$ wins, the play visits~$G(\phi^m)$
infinitely often, so that the total number of satisfiable formulas
is even.
\medskip Conversely, assume that the number of satisfiable formulas
is even. We~build a strategy profile, which we prove is a Nash
equilibrium in which~$A_1$ wins, and~$A_2$ and~$A_3$
lose. The~strategy for~$A_1$ in the initial states of~$H(\phi^r)$
and~$G(\phi^r)$ is to go to~$M(\phi^r)$ when $\phi^r$ is
satisfiable, and to state~$A_2/A_3$ otherwise. In~$M(\phi^r)$, the
strategy is to play according to a valuation
satisfying~$\phi^r$. In~$N(\phi^r)$, it~follows a strategy along
which~$A_2$ and~$A_3$ lose (this exists according to
Lemma~\ref{lemma-N}). This defines the strategy for~$A_1$. Then
$A_2$ and~$A_3$ are required to always play the same move, so that
the play never goes to some~$N(\phi^r)$. In~$N(\phi^r)$, they can
play any strategy (they lose anyway, whatever they~do). Finally, the
strategy of~$B^r_k$ never goes to~$\bot$.
We now explain why this is the Nash equilibrium we are after. First,
as $A_1$ plays according to fixed valuations for the
variables~$x^r_k$, either $B^r_k$ wins or she does not have the
opportunity to go to~$\bot$. It~remains to prove that $A_1$ wins,
and that $A_2$ and~$A_3$ lose and cannot improve
(individually). To~see this, notice that between two consecutive
visits to~$\textrm{init}$, exactly one of~$G(\phi^r)$
and~$H(\phi^r)$ is visited. More precisely, it~can be observed that
the strategy of~$A_1$ enforces that $G(\phi^r)$ is visited if
$\#\{r<r' \leq m \mid \phi^{r'} \text{ is satisfiable}\}$ is even,
and that $H(\phi^r)$ is visited otherwise. Then if~$H(\phi^1)$ is
visited, the number of satisfiable formulas with index between~$2$
and~$m$ is odd, so that $\phi^1$ is satisfiable and $A_1$ can return
to~\textrm{init}. If~$G(\phi^1)$ is visited, an even number of
formulas with index between~$2$ and~$m$ is satisfiable, and $\phi^1$
is~not. Hence $A_1$ has a strategy in~$N(\phi^1)$ to make $A_2$
and~$A_3$ lose, so that $A_2$ and $A_3$ cannot improve their
payoffs.
\end{proof}
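The parity bookkeeping enforced by $A_1$'s strategy in the proof above can be illustrated by a small simulation. The sketch below (Python; the encoding of the modules as a \texttt{track} variable is ours, not part of the construction) follows the play through the chain of modules: in $G(\phi^r)$ and $H(\phi^r)$ with $r\ge 2$, a satisfiable $\phi^r$ routes the play through $M(\phi^r)$ and swaps the track, while an unsatisfiable one keeps it.

```python
def returns_to_init(sat):
    """sat[r-1] is True iff formula phi^r is satisfiable (r = 1..m).
    Follow A_1's strategy: the play starts in G(phi^m); at each level
    r >= 2, a satisfiable phi^r goes through M(phi^r) and swaps the
    track (G <-> H), otherwise the track is kept.  Returns True iff
    the play comes back to the init state."""
    track = 'G'                       # the whole game starts in G(phi^m)
    for r in range(len(sat), 1, -1):  # levels m down to 2
        if sat[r - 1]:
            track = 'H' if track == 'G' else 'G'
    # level 1: H(phi^1) = M(phi^1) exits iff phi^1 is satisfiable,
    # while G(phi^1) exits iff phi^1 is NOT satisfiable
    return sat[0] if track == 'H' else not sat[0]

# The play returns to init exactly when the number of satisfiable
# formulas is even, matching the lemma:
assert returns_to_init([True, True])            # two satisfiable formulas
assert not returns_to_init([True, True, True])  # three satisfiable formulas
```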
This proves hardness for the constrained NE existence problem with parity
objectives. For the NE existence problem, we use the construction of
Section~\ref{sec:link-constr-exist}, but since it can only be used to get
rid of constraints of the type ``$A_1$ is winning'', we add to the game two
players, $A_4$~and~$A_5$, whose objectives are opposite to~$A_2$ and~$A_3$
respectively, and one player~$A_6$ that will be playing matching-penny games.
The objectives for $A_4$ and~$A_5$ are definable by parity objectives, by
adding $1$ to all the priorities. Then, we consider game $\calG'=
E(E(E(\calG,A_1,A_6,\rho_1),A_4,A_6,\rho_4),A_5,A_6,\rho_5)$ where $\rho_1$,
$\rho_4$ and $\rho_5$ are winning paths for $A_1$, $A_4$ and~$A_5$
respectively. Thanks to Proposition~\ref{lem:link-constr-exist}, there is a
Nash equilibrium in~$\calG'$ if, and only~if, there is a Nash equilibrium
in~$\calG$ where $A_1$ wins and $A_2$ and $A_3$ lose. We~deduce
$\P^{\NP}_{\parallel}$-hardness for the NE existence problem with parity
objectives.
\subsection{Objectives given as deterministic Rabin automata}
\label{sec:rabin-auto}\label{subsec:rabin-auto}
In order to find Nash equilibria when objectives are given as
deterministic Rabin automata, we first define the notion of
\newdef{game simulation}, which we show has the property that when
$\calG'$~game-simulates~$\calG$, then a Nash equilibrium in the latter
game gives rise to a Nash equilibrium in the former one.
We then define the product of a game with automata (defining the
objectives of the players), and show that it game-simulates the
original game. This reduces the case of games whose objectives are
given as Rabin automata to games with Rabin objectives, which we
handled in the previous section; the resulting algorithm is in
\EXPTIME. We then show a \PSPACE lower bound for the problem in the
case of objectives given as deterministic B\"uchi automata. This
proves the following theorem:
\begin{theorem}
For finite games with single objectives given as deterministic Rabin
automata or deterministic B\"uchi automata, the NE existence problem and
the constrained NE existence problem are in \EXPTIME and \PSPACE-hard.
\end{theorem}
It~must be noticed that game simulation can be used in other contexts:
in particular, in~\cite{BBM10a} (where we introduced this notion),
it~is shown that the region-based abstraction of a timed game
game-simulates the original timed game, which provides a way of
computing Nash equilibria in timed games.
\subsubsection{Game simulation}
\label{sec:simulation}
We~define game simulation and show how that can be used to compute
Nash equilibria. We then apply it to objectives given as
deterministic Rabin automata.
\begin{definition}\label{def-gsim}
Consider two games $\calG=\tuple{\Stat,\Agt,\Act, \Allow,\Tab,
(\mathord\prefrel_A)_{A\in\Agt}}$ and
$\calG'=\tuple{\Stat',\Agt,\Act',\Allow',\Tab',
(\mathord\prefrel'_A)_{A\in\Agt}}$ with the same set~$\Agt$ of players.
A~relation $\mathord{\simulrel} \subseteq \Stat \times \Stat'$ is a
\emph{game simulation} if $s \simulrel s'$ implies that for each
move $\indicebis {m}A\Agt$ in~$\calG$ there exists a move
$\indicebis{m'}A\Agt$ in~$\calG'$ \st:
\begin{enumerate}\raggedright
\item\label{cond:sim22} $\Tab(s,\indicebis {m}A\Agt) \simulrel
\Tab'(s',\indicebis{m'}A\Agt)$, and
\item\label{cond:sim21} for each $t'\in\Stat'$ there exists
$t\in\Stat$ with $t\simulrel t'$ and
$\Susp((s',t'),\indicebis{m'}A\Agt) \subseteq
\Susp((s,t),\indicebis {m}A\Agt)$.
\end{enumerate}
If $\simulrel$~is a game simulation and $(s_0,s'_0)\in {\simulrel}$,
we~say that $\calG'$ \emph{game-simulates} (or simply
\emph{simulates})~$\calG$. When there are two paths $\rho$ and
$\rho'$ such that $\rho_{=i}\simulrel\rho'_{=i}$ for all $i\in\N$,
we will simply write $\rho \simulrel \rho'$.
A~game simulation~$\simulrel$ is \emph{preference-preserving} from
$(s_0,s'_0) \in \Stat \times \Stat'$ if for all $\rho_1 , \rho_2 \in
\Play_{\calG}(s_0)$ and $\rho'_1 , \rho'_2 \in \Play_{\calG'}(s'_0)$
with $\rho_1 \simulrel \rho'_1$ and $\rho_2 \simulrel \rho'_2$, for
all $A\in\Agt$ it~holds that $\rho_1 \prefrel_A \rho_2$ iff
$\rho'_1 \prefrel'_A \rho'_2$.
\end{definition}
As we show now, Nash equilibria are preserved by game simulation, in
the following sense:
\begin{proposition}
\label{prop:sim}
Let $\calG=\tuple{\Stat,\Agt,\Act, \Allow,\Tab,
(\mathord\prefrel_A)_{A\in\Agt}}$ and
$\calG'=\tuple{\Stat',\Agt,\Act',\Allow',\Tab',
(\mathord\prefrel'_A)_{A\in\Agt}}$ be two games involving the same set of
players. Fix two states $s_0$ and~$s_0'$ in $\calG$ and~$\calG'$
respectively, and let $\simulrel$~be a preference-preserving game
simulation from $(s_0,s_0')$. If there exists a Nash equilibrium
$\sigma_\Agt$ in~$\calG$ from~$s_0$, then there exists a Nash
equilibrium $\sigma'_\Agt$ in~$\calG'$ from~$s_0'$ with
$\Out_{\calG}(s_0,\sigma_\Agt) \simulrel
\Out_{\calG'}(s'_0,\sigma'_\Agt)$.
\end{proposition}
\begin{proof}
We fix a strategy profile~$\sigma_\Agt$ in~$\calG$, and let $\rho$ be
the outcome of $\sigma_\Agt$ from~$s_0$. We~derive a strategy
profile~$\sigma'_\Agt$ in~$\calG'$ and its outcome~$\rho'$
from $s'_0$, such that:
\begin{enumerate}[label=(\alph*)]
\item\label{eq-sima} for every $\overline\rho' \in
\Play_{\calG'}(s'_0)$, there exists $\overline\rho \in
\Play_{\calG}(s_0)$ s.t. $\overline\rho \simulrel \overline\rho'$
and $\Susp(\overline\rho',\sigma'_\Agt) \subseteq
\Susp(\overline\rho,\sigma_\Agt)$;
\item\label{eq-simb} $\rho \simulrel \rho'$.
\end{enumerate}
Assume we have done the construction, and that $\sigma_\Agt$ is a
Nash equilibrium in~$\calG$. We~prove that $\sigma'_\Agt$ is a Nash
equilibrium in~$\calG'$. Towards a contradiction, assume that some
player~$A$ has a strategy $\overline\sigma'_A$ in $\calG'$ such that
$\overline\rho' \not\prefrel'_A \rho'$, where $\overline\rho' =
\Out_{\calG'}(s'_0, \replaceter{\sigma'}{A}{\overline\sigma'_A})$.
Note that $A \in \Susp(\overline\rho',\sigma'_\Agt)$.
Applying~\eqref{eq-sima} above, there exists $\overline\rho \in
\Play_{\calG}(s_0)$ such that $\overline\rho \simulrel
\overline\rho'$ and $\Susp(\overline\rho',\sigma'_\Agt) \subseteq
\Susp(\overline\rho,\sigma_\Agt)$. In~particular, $A \in
\Susp(\overline\rho,\sigma_\Agt)$, and there exists a strategy
$\overline\sigma_A$ for~$A$ such that $\overline\rho =
\Out_{\calG}(s_0,
\replaceter{\sigma}{A}{\overline\sigma_A})$. As~${\rho \simulrel
\rho'}$ (by~\eqref{eq-simb}) and $\simulrel$~is
preference-preserving from~$(s_0,s'_0)$, $\overline\rho
\not\prefrel_A \rho$, which contradicts the fact that
$\sigma_\Agt$~is a Nash equilibrium. Hence, $\sigma'_\Agt$~is a Nash
equilibrium in~$\calG'$ from~$s'_0$.
\medskip It remains to show how we construct $\sigma'_\Agt$
(and~$\rho'$). We~first build~$\rho'$ inductively, and define
$\sigma'_\Agt$ along that~path.
\begin{itemize}
\item Initially, we let~$\rho'_{=0}=s'_0$. Since~$\simulrel$ is a
game simulation containing~$(s_0,s'_0)$, we~have $s_0\simulrel
s'_0$, and there is a move~$m'_\Agt$ associated with
$\sigma_\Agt(s_0)$ satisfying the conditions of
Definition~\ref{def-gsim}. Then $\rho_{=0}\simulrel \rho'_{=0}$,
and $\Susp(\rho'_{=0},\sigma'_\Agt(\rho'_{=0}))\subseteq
\Susp(\rho_{=0},\sigma_\Agt(\rho_{=0}))$.
\item Assume we have built $\rho'_{\le i}$ and $\sigma'_\Agt$ on all
the prefixes of~$\rho'_{\le i}$, and that they are such that
$\rho_{\leq i}\simulrel \rho'_{\leq i}$ and $\Susp(\rho'_{\leq
i},\sigma'_\Agt)\subseteq \Susp(\rho_{\leq i},\sigma_\Agt)$
(notice that $\Susp(\rho'_{\leq i},\sigma'_\Agt)$ only depends on
the value of~$\sigma'_\Agt$ on all the prefixes of~$\rho'_{\leq
i}$). In~particular, we~have $\rho_{=i}\simulrel \rho'_{=i}$, so
that with the move~$\sigma_\Agt(\rho_{\leq i})$, we~can associate
a move~$m'_{\Agt}$ (to~which we set~$\sigma'_{\Agt}(\rho'_{\leq
i})$) satisfying both conditions of Definition~\ref{def-gsim}.
This defines~$\rho'_{=i+1}$ in such a way that $\rho_{\leq i+1}
\simulrel \rho'_{\leq i+1}$; moreover, $\Susp(\rho'_{\leq
i+1},\sigma'_\Agt) = \Susp(\rho'_{\leq i},\sigma'_\Agt) \cap
\Susp((\rho'_{=i}, \rho'_{=i+1}),m'_\Agt)$ is indeed a subset of
$\Susp(\rho_{\leq i+1},\sigma_\Agt)$.
\end{itemize}
It~remains to define~$\sigma'_\Agt$ outside its
outcome~$\rho'$. Notice that, for our purposes, it~suffices to
define~$\sigma'_\Agt$ on histories starting from~$s'_0$. We~again
proceed by induction on the length of the histories,
defining~$\sigma'_\Agt$ in order to satisfy~\eqref{eq-sima} on
prefixes of plays of~$\calG'$ from~$s'_0$. At~each step, we also
make sure that for every $h' \in \Hist_{\calG'}(s'_0)$, there exists
$h \in \Hist_{\calG}(s_0)$ such that $h \simulrel h'$,
$\Susp(h',\sigma'_\Agt) \subseteq \Susp(h,\sigma_\Agt)$, and
$\sigma_\Agt(h)$ and $\sigma'_\Agt(h')$ satisfy the conditions of
Definition~\ref{def-gsim} in the last states of~$h$ and~$h'$, resp.
As we only consider histories from~$s'_0$, the case of histories of
length~zero was already handled. Assume we have defined
$\sigma'_\Agt$ for histories $h'$ of length~$i$, and fix a new
history $h'\cdot t' \in \Hist_{\calG'}(s'_0)$ of length~$i+1$ (that
is not a prefix of~$\rho'$). By~induction hypothesis, there is $h
\in \Hist_{\calG}(s_0)$ such that $h \simulrel h'$, and
$\Susp(h',\sigma'_\Agt) \subseteq \Susp(h,\sigma_\Agt)$, and
$\sigma_\Agt(h)$ and $\sigma'_\Agt(h')$ satisfy the required
properties. In~particular, with~$t'$, we~can associate~$t$
s.t. $t\simulrel t'$ and $\Susp((\last(h'),t'),\sigma'_\Agt(h'))
\subseteq \Susp((\last(h),t),\sigma_\Agt(h))$. Then $(h \cdot t)
\simulrel (h' \cdot t')$. Since~$t\simulrel t'$, there is a
move~$m'_\Agt$ associated with $\sigma_\Agt(h\cdot t)$ and
satisfying the conditions of
Definition~\ref{def-gsim}. Letting~$\sigma'_\Agt(h'\cdot
t')=m'_\Agt$, we~fulfill all the requirements of our induction
hypothesis.
We now need to lift the property from histories to infinite paths.
Consider a play $\overline\rho'\in\Play_{\calG'}(s'_0)$; we
construct a corresponding play $\overline\rho$ in~$\calG$. Set
$\overline\rho_{0} = s_0$. If $\overline\rho$ has been defined up to
index $i$ and $\overline\rho_{i} \simulrel \overline\rho'_{i}$ (this
is true for $i = 0$), thanks to the way $\sigma'_\Agt$ is
constructed, $\sigma_\Agt(\overline\rho_{\le i})$ and
$\sigma'_\Agt(\overline\rho'_{\le i})$ satisfy the conditions of
Definition~\ref{def-gsim} in $\overline\rho_{i}$ and
$\overline\rho'_{i}$, respectively. We then pick
$\overline\rho_{i+1}$ such that $\overline\rho_{i+1} \simulrel
\overline\rho'_{i+1}$ and
$\Susp((\overline\rho'_{i},\overline\rho'_{i+1}),
\sigma'_\Agt(\overline\rho'_{\le i})) \subseteq
\Susp((\overline\rho_{i},\overline\rho_{i+1}),
\sigma_\Agt(\overline\rho_{\le i}))$. This being true at each step,
the path $\overline\rho$ that is obtained, is such that
$\overline\rho \simulrel \overline\rho'$ and
$\Susp(\overline\rho',\sigma'_\Agt) \subseteq
\Susp(\overline\rho,\sigma_\Agt)$. This is the desired property.
\end{proof}
\subsubsection{Product of a game with deterministic Rabin automata}
\label{sec:productB}
After this digression on game simulation, we come back to the game
$\calG = \tuple{\Stat, \Agt, \Act, \Allow, \Tab,
(\mathord\prefrel_A)_{A\in\Agt}}$, where we assume that some player~$A$ has
her objective given by a deterministic Rabin automaton
$\calA=\tuple{Q,\Stat,\delta,q_0,(Q_{i},R_{i})_{i \in \lsem
1,n\rsem}}$ (recall that this automaton reads sequences of states
of~$\calG$, and accepts the paths that are winning for player~$A$).
We show how to compute Nash equilibria in~$\calG$ by building a
product~$\calG'$ of~$\calG$ with the automaton~$\calA$ and by
computing the Nash equilibria in the resulting game, with a Rabin
winning condition for~$A$.
We define the product of the game~$\calG$ with the automaton~$\calA$
as the game $\calG \ltimes \calA = \tuple{\Stat',
\Agt, \Act, \Allow', \Tab',(\mathord\prefrel'_A)_{A\in\Agt}}$,
where:
\begin{itemize}
\item $\Stat' = \Stat \times Q$;
\item $\Allow'((s,q),\pl j) = \Allow(s,\pl j)$
for every $\pl j \in \Agt$;
\item $\Tab'((s,q), m_\Agt) = (s',q')$ where
$\Tab(s,m_\Agt)= s'$ and $\delta (q,s) = q'$;
\item If $B=A$ then $\prefrel'_B$ is given by the internal Rabin
    condition $Q'_i = \Stat \times Q_i$ and $R'_i = \Stat \times R_i$.
Otherwise $\prefrel'_B$ is derived from~$\prefrel_B$, defined by
$\rho \prefrel'_B \overline\rho$ if, and only if, $\Rproj(\rho)
\prefrel_B \Rproj(\overline\rho)$ (where $\Rproj$ is the projection of
$\Stat'$ on~$\Stat$). Notice that if $\prefrel_B$ is an internal
Rabin condition, then so is $\prefrel'_B$.
\end{itemize}
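The product above is a synchronised product in which the automaton reads the current game state. A minimal sketch of the transition table $\Tab'$, assuming a dict-based encoding (the names \texttt{tab}, \texttt{delta}, \texttt{product\_game} are ours):

```python
def product_game(tab, delta, auto_states):
    """Transition table of the product game defined above.
    tab:   dict (game_state, move) -> game_state        (table of G)
    delta: dict (auto_state, game_state) -> auto_state  (the DRA A)
    Allowed moves in (s, q) are those allowed in s, and
    Tab'((s, q), m) = (Tab(s, m), delta(q, s)): note that the
    automaton reads the CURRENT state s, not the successor."""
    return {((s, q), m): (s2, delta[(q, s)])
            for (s, m), s2 in tab.items()
            for q in auto_states}

# Toy instance: a two-state automaton recording whether the state
# just read was 'b'.
tab   = {('a', 'x'): 'b', ('a', 'y'): 'a',
         ('b', 'x'): 'a', ('b', 'y'): 'b'}
delta = {('q0', 'a'): 'q0', ('q0', 'b'): 'q1',
         ('q1', 'a'): 'q0', ('q1', 'b'): 'q1'}
tab2 = product_game(tab, delta, ['q0', 'q1'])
assert tab2[(('a', 'q0'), 'x')] == ('b', 'q0')  # delta reads 'a', stays in q0
assert tab2[(('b', 'q0'), 'y')] == ('b', 'q1')  # delta reads 'b', moves to q1
```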
\begin{lemma}
\label{lemma:product1}
$\calG \ltimes \calA$ game-simulates~$\calG$, with game simulation
defined according to the projection: $s \simulrel (s',q)$ iff
$s=s'$. This game simulation is preference-preserving.
Conversely, $\calG$ game-simulates~$\calG \ltimes \calA$, with game
simulation defined by $(s,q) \simulrel' s'$ iff $s=s'$, which is
also preference-preserving.
\end{lemma}
\begin{proof}
We~begin with proving that both relations are
preference-preserving. First notice that if $((s_n,q_n))_{n\ge 0}$ is
a play in ${\calG \ltimes \calA}$, then its $\Rproj$-projection
$(s_n)_{n\ge 0}$ is a play in~$\calG$. Conversely, if $\rho=(s_n)_{n
\ge 0}$ is a play in~$\calG$, then there is a unique
path~$(q_n)_{n \ge 0}$ from initial state~$q_0$ in~$\calA$ which
reads~it, and $((s_n,q_n))_{n\ge 0}$ is then a path in $\calG
\ltimes \calA$ that we write $\Rproj^{-1}(\rho) = ((s_n,q_n))_{n\ge
0}$. That way, $\Rproj$~defines a one-to-one correspondence between
plays in $\calG$ and plays in~$\calG \ltimes \calA$ where the second
component starts in~$q_0$. For a player~$B \ne A$, the objective is
defined so that $\Rproj(\rho)$ has the same payoff as~$\rho$. Consider
now player~$A$: she is winning in~$\calG$ for $\rho = (s_n)_{n\ge
0}$ iff $(s_n)_{n \ge 0} \in \Lang(\calA)$ iff the unique path
$(q_n)_{n \ge 0}$ from initial state~$q_0$ that reads $(s_n)_{n\ge
0}$ satisfies the Rabin condition $(Q_i,R_i)_{i\in\lsem 1,n\rsem}$
in $\calA$ iff $\Rproj^{-1}(\rho)$ satisfies the internal Rabin
condition $(Q'_i,R'_i)_{i\in\lsem 1,n\rsem}$ in $\calG \ltimes
\calA$. This proves that~$\simulrel$ is preference-preserving.
\medskip It remains to show that both relations are game
simulations. Assume $s \simulrel (s,q)$ and pick a move $m_\Agt$ in
$\calG$. It~is also a move in~$\calG \ltimes \calA$, and
$\Tab'((s,q),m_\Agt) = (\Tab(s,m_\Agt),\delta(q,s))$. By definition
of $\simulrel$ it then holds that $\Tab(s,m_\Agt) \simulrel
\Tab'((s,q),m_\Agt)$, which proves condition~\eqref{cond:sim22} of
the definition of a game simulation. It remains to show
condition~\eqref{cond:sim21}. Pick a state $(s',q') \in \Stat'$. We
distinguish two cases:
\begin{itemize}
\item If $\delta(q,s) \ne q'$ then
$\Susp(((s,q),(s',q')),m_\Agt)=\varnothing$, and
condition~\eqref{cond:sim21} trivially holds.
\item Otherwise $\delta(q,s)=q'$. In that case,
for any move $m'_\Agt$, we have
that $\Tab(s,m'_\Agt)=s'$ if, and only~if,
$\Tab'((s,q),m'_\Agt)=(s',q')$. It~follows that
$\Susp(((s,q),(s',q')),m_\Agt) = \Susp((s,s'),m_\Agt)$, which
implies condition~\eqref{cond:sim21}.
\end{itemize}
This proves that $\calG \ltimes \calA$ game-simulates $\calG$.
We now assume $(s,q) \simulrel' s$ and pick a move $m_\Agt$ in
$\calG \ltimes \calA$. It is also a move in~$\calG$, and as
previously, condition~\eqref{cond:sim22} obviously holds. Pick now
$s' \in \Stat$. We define $q' = \delta(q,s)$, and we have $(s',q')
\simulrel' s'$ by definition of~$\simulrel'$. As before, we get
condition~\eqref{cond:sim21} because $\Susp(((s,q),(s',q')),m_\Agt)
= \Susp((s,s'),m_\Agt)$.
\end{proof}
We will solve the case where each player's objective is given by a
deterministic Rabin automaton by applying the above result
inductively. We will obtain a game where each player has an internal
Rabin winning condition. Applying Proposition~\ref{prop:sim} each
time, we~get the following result:
\begin{proposition}
Let $\calG=\tuple{\Stat,\Agt,\Act, \Allow,\Tab,
(\mathord\prefrel_A)_{A\in\Agt}}$ be a finite concurrent game, where for
each player $A$, the preference relation $\prefrel_A$ is
single-objective given by a deterministic Rabin automaton. Write
$\Agt = \{A_1,\dots,A_n\}$, and let $\calA_i$ denote the automaton
of player~$A_i$. There is a
Nash equilibrium $\sigma_\Agt$ in $\calG$ from some state~$s$ with
outcome~$\rho$ iff there is a Nash equilibrium $\sigma'_\Agt$
in~$\calG' = (((\calG \ltimes \calA_1) \ltimes \calA_2) \dots \ltimes
\calA_n)$ from $(s,q_{01},\dots,q_{0n})$ with outcome~$\rho'$, where
$q_{0i}$~is the initial state of~$\calA_i$ and $\rho$ is the
projection of $\rho'$ on $\calG$.
\end{proposition}
\subsubsection{Algorithm}
Assume that the objective of player $A_i$ is given by a deterministic
Rabin automaton~$\calA_i$. The algorithm for solving the constrained
NE existence problem starts by computing the product of the game with the
automata: $\calG' = (((\calG \ltimes \calA_1) \ltimes \calA_2) \dots
\ltimes \calA_n)$. The resulting game has size $|\calG| \times
\prod_{j\in \lsem 1,n\rsem} | \calA_j|$, which is exponential in the
number of players. For each player~$A_j$ ($1 \le j \le n$), the number
of Rabin pairs in the product game is that of the original
specification $\calA_j$, say $k_j$. We then apply the deterministic
algorithm that we have designed for Rabin objectives (see
Subsection~\ref{rabin:det-algo} page~\pageref{rabin:det-algo}), which
yields an exponential-time algorithm in our framework.
\subsubsection{Hardness}
We prove \PSPACE-hardness in the restricted case of deterministic
B\"uchi automata, by a reduction from (the~complement~of) the problem
of the emptiness of the intersection of several languages given by
deterministic finite automata. This problem is known to be
\PSPACE-complete~\cite[Lemma~3.2.3]{kozen1977lower}.
We fix finite automata~$\calA_1, \dots , \calA_n$ over alphabet~$\Sigma$.
Let~$\Sigma'=\Sigma\cup\{\init,\final\}$, where $\init$ and~$\final$ are
two special symbols not in~$\Sigma$. For
every $j\in\lsem 1,n\rsem$, we construct a B\"uchi automaton~$\calA'_j$ from
$\calA_j$ as follows. We~add a state~$F$ with a self-loop
labelled by~$\final$ and an initial state~$I$ with a transition labelled
by~$\init$ to the original initial state. We~add transitions labelled
by~$\final$ from every terminal state to~$F$. We~set the B\"uchi condition
to~$\{F\}$. If~$\calL_j$ is the language recognised by~$\calA_j$, then the
language recognised by the B\"uchi automaton~$\calA'_j$ is $\calL'_j= \init
\cdot \calL_j \cdot \final^\omega$. The~intersection of the languages
recognised by the automata~$\calA_j$ is empty if, and only~if, the
intersection of the languages recognised by the automata~$\calA'_j$ is empty.
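The transformation of each $\calA_j$ into the B\"uchi automaton $\calA'_j$ can be sketched as follows (Python, with a hypothetical tuple encoding of automata; the letters \texttt{'init'} and \texttt{'final'} stand for $\init$ and~$\final$):

```python
def buchi_from_dfa(dfa):
    """Build A'_j from the finite automaton A_j as described above:
    a fresh initial state 'I' with an 'init'-transition to the old
    initial state, a fresh state 'F' with a 'final'-self-loop and
    'final'-transitions from every accepting state; the Buchi set
    is {F}.  Encoding: dfa = (states, init, accepting, trans)."""
    states, init, accepting, trans = dfa
    trans2 = dict(trans)
    trans2[('I', 'init')] = init
    trans2[('F', 'final')] = 'F'
    for s in accepting:
        trans2[(s, 'final')] = 'F'
    return (states | {'I', 'F'}, 'I', {'F'}, trans2)

# DFA over {a, b} accepting the single word "ab":
dfa = ({'s0', 's1', 's2'}, 's0', {'s2'},
       {('s0', 'a'): 's1', ('s1', 'b'): 's2'})
states, init, buchi_set, trans = buchi_from_dfa(dfa)
# the word  init . a . b . final^omega  eventually loops on F:
q = init
for letter in ['init', 'a', 'b', 'final', 'final']:
    q = trans[(q, letter)]
assert q == 'F' and buchi_set == {'F'}
```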
We construct the game~$\calG$, with $\Stat = \Sigma'$. For each $j\in
\lsem 1 ,n \rsem$, there is a player~$A_j$ whose objective is given by
$\calA'_j$ and one special player~$A_0$ whose objective is
$\Stat^\omega$ (she~is always winning). Player~$A_0$ controls all the
states and there are transitions from any state to the states of
$\Sigma \cup \{\final\}$. Formally, $\Act= \Sigma\cup\{\final\}\cup
\{\bot\}$; for every state~$s\in \Stat$, $\Allow(s, A_0) = \Act$, if
$j\ne 0$ then $\Allow(s,A_j) = \{\bot\}$, and for all $\alpha\in
\Sigma \cup \{\final\}$, $\Tab(s,(\alpha,\bot,\dots,\bot)) = \alpha$.
\begin{lemma}
There is a Nash equilibrium in game~$\calG$ from~$\init$ where every
player wins if, and only~if, the intersection of the languages
recognised by the automata~$\calA'_j$ is not empty.
\end{lemma}
\begin{proof}
If there is such a Nash equilibrium, let~$\rho$ be its outcome.
The~path~$\rho$ forms a word over~$\Sigma'$, which is accepted by every
automaton~$\calA'_j$ since every player wins. Hence the intersection of
the languages~$\calL'_j$ is not empty.
Conversely, if a word~$w = \init \cdot w_1 \cdot w_2 \cdots$ is accepted
by all the automata, player~$A_0$ can play in a way such that everybody is
winning: if at each step~$j$ she plays~$w_j$, then the outcome is~$w$
which is accepted by all the automata. It~is a Nash equilibrium
since $A_0$ controls everything and cannot improve her payoff.
\end{proof}
Since \PSPACE is stable by complementation, this proves that the
constrained NE existence problem is \PSPACE-hard for objectives described
by B\"uchi automata.
In order to prove hardness for the NE existence problem we use results
from Section~\ref{sec:link-constr-exist}. Winning conditions in
$E(E(\dots(E(\calG,A_n,A_0,\rho_n),\dots,A_2,A_0,\rho_2),A_1,A_0,\rho_1)$,
where $\rho_j$ is a winning play for~$A_j$, can be defined by slightly
modifying automata $\calA'_1,\dots,\calA'_n$ to take into account the
new states. By Proposition~\ref{lem:link-constr-exist}, there exists
a Nash equilibrium in this game if, and only~if, there is one in~$\calG$
where all the players~win. Hence \PSPACE-hardness also holds for the
NE existence problem.
\section{Ordered B\"uchi objectives}
\label{sec:buchi}
In this section, we assume that the preference relations of the players are
given by ordered B\"uchi objectives (as~defined in
Section~\ref{sec:prefrel}), and we prove the results listed in
Table~\ref{table-buchi} (page~\pageref{table-buchi}). We~first
consider the general case of preorders given as Boolean circuits, and
then exhibit several simpler cases.
For the rest of this section, we fix a game $\calG=\tuple{\Stat,\Agt,
\Act,\Allow, \Tab,(\mathord\prefrel_A)_{A\in\Agt}}$, and assume that
$\prefrel_A$ is given by an ordered B\"uchi objective $\omega_A =
\langle (\Omega_i^A)_{1 \le i \le n_A}, \mathord\preorder_A
\rangle$.
\subsection{General case: preorders are given as circuits}
\label{subsec:general}
\begin{theorem}\label{thm:buchi-pspace}
For finite games with ordered B\"uchi objectives where preorders are
given as Boolean circuits, the value problem, the NE existence problem and
the constrained NE existence problem are \PSPACE-complete.
\end{theorem}
\begin{proof}
We explain the algorithm for the constrained NE existence problem. We
assume that for each player $A$, the preorder $\preorder_A$ is given
by a Boolean circuit $C_A$. The algorithm proceeds by trying all the
possible payoffs for the players.
Fix such a payoff $(v^A)_{A\in\Agt}$, with $v^A \in \{0,1\}^{n_A}$
for every player $A$. We~build a circuit $D_A$ which
represents a single objective for player $A$. Inputs to circuit $D_A$
will be states of the game. This circuit is constructed from $C_A$
as follows: we set the input gates $w_1, \dots, w_{n_A}$ of
circuit~$C_A$ to the value given by payoff~$v^A$; each input
gate~$v_i$ receives the disjunction of all the states
in~$\Omega_i^A$; and we negate the
output. It is not hard to check that the new circuit $D_A$ is such
that for every play $\rho$, $D_A[\Inf(\rho)]$ evaluates to \true \iff
$\payoff_A(\rho) \not\preorder_A v^A$, \ie if $\rho$ is
an improvement for player~$A$.
Circuit $D_A$ is now viewed as a single objective for player~$A$; we
write $\calG'$ for the new game. We look for Nash equilibria in
this new game, with payoff~$0$ for each player. Indeed, a Nash
equilibrium $\sigma_\Agt$ in~$\calG$ with payoff $(v^A)_{A \in
\Agt}$ is a Nash equilibrium in game $\calG'$ with payoff
$(0,\dots,0)$. Conversely a Nash equilibrium $\sigma_\Agt$ in game
$\calG'$ with payoff $(0,\dots,0)$ is a Nash equilibrium in
$\calG$ as soon as the payoff of its outcome (in~$\calG$) is
$(v^A)_{A \in \Agt}$.
We use the algorithm described in Section~\ref{subsec:circuits}
for computing Nash equilibria with single objectives given as
Boolean circuits, and we slightly modify it to take into account the
constraint that it has payoff~$v^A$ for each player~$A$. This can
be done in polynomial space, thanks to
Proposition~\ref{lem:play-length}: it is sufficient to look for
plays of the form $\pi \cdot \tau^\omega$ with $|\pi| \le |\Stat|^2$
and $|\tau|\le |\Stat|^2$.
\PSPACE-hardness was proven for single objectives given as a Boolean
circuit (the circuit evaluates by setting to \texttt{true} all
states that are visited infinitely often, and to \texttt{false} all
other states) in Section~\ref{subsec:circuits}. This kind of
objective can therefore be seen as an ordered B\"uchi objective with
a preorder given as a Boolean circuit.
\end{proof}
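To make the construction of $D_A$ concrete, here is a small Python sketch in which circuits are modelled as plain functions; all names, and the toy subset preorder used as $C_A$, are illustrative rather than part of the paper's formal setup.

```python
def make_improvement_circuit(C_A, v_A, omegas):
    """Build the circuit D_A from C_A, as in the proof above.
    C_A(v_bits, w_bits) -> True iff v_bits <=_A w_bits.
    D_A maps Inf(rho), a set of states, to True iff the induced
    payoff is an improvement over v_A, i.e. payoff(rho) not<= v_A."""
    def D_A(inf_states):
        # each former input v_i becomes the disjunction of the states in Omega_i
        payoff = tuple(int(any(s in inf_states for s in T)) for T in omegas)
        # the remaining inputs are fixed to v_A, and the output is negated
        return not C_A(payoff, v_A)
    return D_A

# toy preorder given as a "circuit": the subset preorder
def subset_leq(v, w):
    return all(wi >= vi for vi, wi in zip(v, w))

omegas = [{"s1"}, {"s2"}]          # target sets of the two objectives
D = make_improvement_circuit(subset_leq, (1, 0), omegas)
```

Evaluating `D` on the set of states visited infinitely often then decides whether a play is an improvement for the player, exactly as $D_A[\Inf(\rho)]$ does in the proof.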
\subsection{When the ordered objective can be (co-)reduced to a single
B\"uchi objective}
\label{subsec:reducible}
For some ordered objectives, the preference relation can (efficiently)
be reduced to a single objective. For instance, a~disjunction of
several B\"uchi objectives can obviously be reduced to a single
B\"uchi objective, by considering the union of the target sets.
Formally, we~say that an ordered B\"uchi objective $\omega = \langle
(\Omega_i)_{1 \le i \le n},\preorder \rangle$ is \newdef{reducible} to
a single B\"uchi objective if, given any payoff vector~$v$, we~can
construct in polynomial time a target set~$\widehat T(v)$ such that
for all paths~$\rho$, $v \preorder \payoff_\omega(\rho)$ \iff
$\Inf(\rho) \cap \widehat T(v) \neq \emptyset$. It~means that
\emph{securing} payoff~$v$ corresponds to ensuring infinitely many
visits to the new target set. Similarly, we~say that $\omega$ is
\newdef{co-reducible} to a single B\"uchi objective if for any
vector~$v$ we can construct in polynomial time a target set~$\widehat
T(v)$ such that $\payoff_\omega(\rho) \not\preorder v$ \iff
$\Inf(\rho) \cap \widehat T(v) \neq \emptyset$. It~means that
\emph{improving} on payoff~$v$ corresponds to ensuring infinitely many
visits to the new target set.
We prove the following proposition, which exploits
(co-)reducibility for efficiently solving the various problems.
\begin{proposition}\hfill
\label{prop:buchi-reducible}
\begin{itemize}
\item For finite games with ordered B\"uchi objectives which are
reducible to single B\"uchi objectives, and in which the preorders
are non-trivial\footnote{That is, there is more than one class in
the preorder.} and monotonic, the value problem is \P-complete.
\item For finite games with ordered B\"uchi objectives which are
co-reducible to single B\"uchi objectives, and in which the
preorders are non-trivial and monotonic, the NE existence problem and
the constrained NE existence problem are \P-complete.
\end{itemize}
\end{proposition}
\noindent Note that the hardness results follow from the hardness of the same
problems for single B\"uchi objectives (see
Section~\ref{subsec:buchi}). We now prove the two upper bounds.
\subsubsection{Reducibility to single B\"uchi objectives and the value problem.}
\label{lem:value-buchi-reducible}
We transform the ordered B\"uchi objective of the considered player
into a single B\"uchi objective, and use a polynomial-time
algorithm~\cite[Chapter~2]{GTW02} to solve the resulting zero-sum
(turn-based) B\"uchi game.
\subsubsection{Co-reducibility to single B\"uchi objectives and the
(constrained) NE existence problem.}
\label{subsec:co-reducible}
We assume that the ordered objectives $(\omega_A)_{A \in \Agt}$ are
all co-reducible to single B\"uchi objectives. We show that we
can use the algorithm presented in Section~\ref{subsubsec:algo2} to
solve the constrained NE existence problem in polynomial time.
We first notice that the preference relations $\prefrel_A$ satisfy the
hypotheses $(\star)$ (see page~\pageref{hyp:star}): $(\star)_a$ and
$(\star)_b$ are obvious, and $(\star)_c$ is by co-reducibility of the ordered
objectives. It means that we can apply the results of
Lemmas~\ref{lem:characterization} and~\ref{lem:character2} to the current
framework. To be able to conclude and apply Lemma~\ref{lem:compute-sol}, we
need to show that for every payoff~$v$, we~can compute in polynomial time the
set~$W(\calG,v)$ in the suspect game~$\calH(\calG,v)$.
\begin{lemma}\label{lem:w-poly}
Fix a threshold $v$. The set $W(\calG,v)$ can be computed in
polynomial time.
\end{lemma}
\begin{proof}
As the ordered objectives are co-reducible to single B\"uchi
objectives, we can construct in polynomial time target sets
$\widehat T^A(v)$ for each player $A$. The objective of $\Eve$ in
the suspect game $\calH(\calG,v)$ is then equivalent to a co-B\"uchi
objective with target set $\{(t,P) \mid t \in \widehat T^A(v)\ \text{for some}\ A \in P\}$.
The winning region $W(\calG,v)$ can then be determined using a
polynomial time algorithm of~\cite[Sect.~2.5.3]{GTW02}.
\end{proof}
\subsubsection{Applications.}
We will give preorders to which the above applies, allowing us to infer
several \P-completeness results in Table~\ref{table-buchi} (those
written with reference ``Section~\ref{subsec:reducible}'').
We first show that reducibility and co-reducibility coincide
when the preorder is total.
\begin{lemma}\label{lemma-total}
Let $\omega = \langle (\Omega_i)_{1 \le i \le n},\preorder \rangle$
be an ordered B\"uchi objective, and assume that $\preorder$ is
total. Then, $\omega$~is reducible to a single B\"uchi objective
\iff $\omega$ is co-reducible to a single B\"uchi objective.
\end{lemma}
\begin{proof}
Let~$u \in \{0,1\}^n$ be a vector. If $u$~is a maximal element, the
new target set is empty, and thus satisfies the property for
co-reducibility. Otherwise we pick a vector~$v$ among the smallest
elements that are strictly larger than~$u$. Since the ordered
objective is reducible to a single B\"uchi objective, there is a target
set~$\widehat T$ that is visited infinitely often exactly when the
payoff is at least~$v$. Since the preorder is total and by
choice of~$v$, we~have $w \not\preorder u \Leftrightarrow v
\preorder w$. Thus the target set~$\widehat T$ is visited infinitely
often when~$u$ is not larger than the payoff. Hence $\omega$ is
co-reducible to a single B\"uchi objective.
The proof of the other direction is similar.
\end{proof}
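The argument of this proof can also be phrased algorithmically. The sketch below brute-forces over all payoff vectors, so it is only illustrative (not polynomial-time), and all names, including the maximise-preorder example, are hypothetical.

```python
from itertools import product

def co_reduce(u, leq, reduce_target, n):
    """Given a total preorder leq on {0,1}^n and a reduction
    reduce_target(v) -> target set (as in the definition of
    reducibility), return a target set witnessing co-reducibility
    at payoff u, following the proof above."""
    vectors = list(product((0, 1), repeat=n))
    # elements strictly larger than u
    above = [v for v in vectors if leq(u, v) and not leq(v, u)]
    if not above:            # u is maximal: the empty target set works
        return set()
    # pick v among the smallest elements strictly larger than u
    v = min(above, key=lambda x: sum(1 for w in vectors if leq(w, x)))
    return reduce_target(v)

# example: the maximise preorder with targets T_1 = {"a"}, T_2 = {"b"}
def rank(v):
    return max((i for i, b in enumerate(v) if b), default=-1)

maximise_leq = lambda v, w: rank(v) <= rank(w)

def reduce_maximise(v):
    targets = [{"a"}, {"b"}]
    i0 = rank(v)
    if i0 == -1:             # payoff Zero: here "all states"
        return {"a", "b"}
    return set().union(*targets[i0:])
```

For instance, at payoff $u=(1,0)$ the smallest strictly larger payoffs have rank~$1$, so the co-reduction target is $T_2$.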
\begin{lemma}
\label{lemma:examples}
Ordered B\"uchi objectives with disjunction or maximise preorders
are reducible to single B\"uchi objectives. Ordered B\"uchi
objectives with disjunction, maximise or subset preorders are
co-reducible to single B\"uchi objectives.
\end{lemma}
\proof
Let $\omega = \langle (\Omega_i)_{1 \le i \le n},\preorder \rangle$
be an ordered B\"uchi objective. Assume $T_i$ is the target set
for $\Omega_i$.
Assume $\preorder$ is the disjunction preorder. If the payoff~$v$ is
different from $\Zero$ then we define~$\widehat T(v)$ as the union of
all the target sets: $\widehat T (v) = \bigcup_{i=1}^n T_i$. Then, for
every run $\rho$,
\begin{eqnarray*}
v \preorder \payoff_\omega(\rho) & \Leftrightarrow &
\text{there is some}\ i\ \text{for which}\ \Inf(\rho) \cap T_i \ne \varnothing \\
& \Leftrightarrow & \Inf(\rho)\cap \widehat T(v) \ne \varnothing
\end{eqnarray*}
If the payoff~$v$ is~$\Zero$ then we get the expected result with
$\widehat T(v) = \Stat$. Disjunction being a total preorder, it is
also co-reducible (from Lemma~\ref{lemma-total}).
We assume now that $\preorder$ is the maximise preorder. Given a
payoff~$v$, consider the index $ i_0=\max\{i\mid v_i = 1 \}$. We
then define $\widehat T(v)$ as the union of the target sets that are
above~$i_0$: $\widehat T(v) = \bigcup_{i\geq i_0} T_i$. The following
four statements are then equivalent, if $\rho$ is a run:
\begin{eqnarray*}
v \preorder \payoff_\omega(\rho) & \Leftrightarrow &
v \preorder \One_{\{i\mid \Inf(\rho)\cap T_i\ne \varnothing\}} \\
& \Leftrightarrow &
i_0 \leq \max\{i\mid \Inf(\rho) \cap T_i \ne \varnothing\} \\
& \Leftrightarrow &
\exists i\geq i_0.\ \Inf(\rho) \cap T_i \ne \varnothing
\end{eqnarray*}
Hence $\omega$ is reducible, and also co-reducible as it~is total,
to a single B\"uchi objective.
Finally, we assume that $\preorder$ is the subset preorder, and we
show that $\omega$ is then co-reducible to a single
B\"uchi objective. Given a payoff~$v$, the new target is the
union of the target sets that are not reached infinitely often for
that payoff:
$\widehat T(v) = \bigcup_{\{i \mid v_i = 0\}} T_i$. Then the following
statements are equivalent, if $\rho$ is a run:
\begin{eqnarray*}
\payoff_\omega(\rho) \not\preorder v & \Leftrightarrow &
\One_{\{i\mid \Inf(\rho)\cap T_i\ne \varnothing\}} \not\preorder v \\
& \Leftrightarrow &
\exists i.\ \Inf(\rho)\cap T_i \ne \varnothing \text{ and }v_i = 0 \\
&\Leftrightarrow& \Inf(\rho)\cap \widehat T(v) \ne
\varnothing
\rlap{\hbox to 155 pt{\hfill\qEd}}
\end{eqnarray*}
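The three constructions in this proof translate directly into code; the following sketch (illustrative names, 0-indexed objectives) computes the new target set $\widehat T(v)$ in each case.

```python
def reduce_disjunction(v, targets, states):
    # v <= payoff(rho) iff Inf(rho) meets some T_i -- except for the
    # zero payoff, which every play secures (T-hat = Stat)
    if not any(v):
        return set(states)
    return set().union(*targets)

def reduce_maximise(v, targets, states):
    # v <= payoff(rho) iff some T_i with i >= i0 is visited
    # infinitely often, where i0 = max{i | v_i = 1}
    if not any(v):
        return set(states)
    i0 = max(i for i, b in enumerate(v) if b)
    return set().union(*targets[i0:])

def co_reduce_subset(v, targets):
    # payoff(rho) not<= v iff some T_i with v_i = 0 is visited
    # infinitely often
    chosen = [T for i, T in enumerate(targets) if v[i] == 0]
    return set().union(*chosen) if chosen else set()
```

Each function runs in time linear in the total size of the target sets, matching the polynomial-time requirement of (co-)reducibility.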
As a corollary, we get the following result:
\begin{corollary}
For finite games with ordered B\"uchi objectives, with either the
disjunction or the maximise preorder, the value problem is
\P-complete.
%
For finite games with ordered B\"uchi objectives, with either the
disjunction, the maximise or the subset preorder, the NE existence problem and
the constrained NE existence problem are \P-complete.
\end{corollary}
\begin{remark}
Note that we cannot infer \P-completeness of the
value problem for the subset preorder since the subset preorder is
not total, and ordered objectives with subset preorder are not
reducible to single B\"uchi objectives. Such an ordered objective
is actually reducible to a generalised B\"uchi objective (several
B\"uchi objectives should be satisfied).
\end{remark}
\subsection{When the ordered objective can be reduced to a
deterministic B\"uchi automaton objective.}
\label{sec:reduc-buchi-auto}
For some ordered objectives, the preference relation can (efficiently)
be reduced to the acceptance by a deterministic B\"uchi automaton.
Formally, we~say that an ordered objective $\omega = \langle
(\Omega_i)_{1 \le i \le n},\preorder \rangle$ is \newdef{reducible} to
a deterministic B\"uchi automaton whenever, given any payoff
vector~$u$, we~can construct in polynomial time a deterministic
B\"uchi automaton over $\Stat$ which accepts exactly all plays $\rho$
with $u \preorder \payoff_\omega(\rho)$.
For such preorders, we will see that the value problem can be solved
efficiently by constructing the product of the deterministic B\"uchi
automaton and the arena of the game. This construction does however
not help for solving the (constrained) NE existence problems since the
number of players is a parameter of the problem, and the size of the
resulting game will then be exponential.
\begin{proposition}\label{prop:buchi-value-p}
For finite games with ordered B\"uchi objectives which are reducible
to deterministic B\"uchi automata, the value problem is \P-complete.
\end{proposition}
\begin{proof}
Given the payoff~$v^A$ for player~$A$, the algorithm proceeds by
constructing the automaton that recognises the plays with payoff
higher than~$v^A$. By performing the product with the game as
described in Section~\ref{sec:productB}, we obtain a new game, in
which there is a winning strategy \iff there is a strategy in the
original game to ensure payoff~$v^A$. In~this new game, player~$A$
has a single B\"uchi objective, so that the existence of a winning
strategy can be decided in polynomial time.
Hardness follows from that of games with single B\"uchi objectives.
\end{proof}
\subsubsection*{Applications}
We now give preorders to which the above result applies, that is,
which are reducible to deterministic B\"uchi automata objectives.
\begin{lemma}
\label{lemma:conj}
An ordered objective where the preorder is either the conjunction,
the subset or the lexicographic preorder is reducible to a
deterministic B\"uchi automaton objective.
\end{lemma}
\begin{proof}
We first focus on the \textbf{conjunction preorder}. Let $\omega =
\langle (\Omega_i)_{1 \le i \le n},\preorder \rangle$ be an ordered
B\"uchi objective, where $\preorder$ is the conjunction. For every
$1 \le i \le n$, let $T_i$ be the target set defining the B\"uchi
condition $\Omega_i$. There are only two possible payoffs: either
all objectives are satisfied, or one objective is not satisfied. For
the second payoff case, any play has a larger payoff: hence the
trivial automaton (which accepts all plays) witnesses the
property. For the first payoff case, we~construct a deterministic
B\"uchi automaton~$\calB$ as follows. There is one state for each
target set, plus one accepting state: $Q = \{q_0, q_1, \dots, q_n
\}$; the initial state is $q_0$, and the unique repeated state is
$q_n$.
%
For all $1 \le i \le n$, the transitions are $q_{i-1}
\xrightarrow{s} q_{i}$ when $s\in T_i$ and $q_{i-1} \xrightarrow{s}
q_{i-1}$ otherwise. There are also transitions $q_{n}
\xrightarrow{s} q_0$ for every $s \in \Stat$. Automaton $\calB$
describes the plays that go through each set $T_i$ infinitely
often, hence witnesses the property. It can furthermore be computed
in polynomial time. The construction is illustrated in
Figure~\ref{fig:conjunction-automaton}.
\medskip We now turn to the \textbf{subset preorder}. Let $\omega =
\langle (\Omega_i)_{1 \le i \le n},\preorder \rangle$ be an ordered
B\"uchi objective, where $\preorder$ is the subset preorder. For
every $1 \le i \le n$, let $T_i$ be the target set defining the
B\"uchi condition $\Omega_i$. Fix a payoff~$u$. A~play $\rho$ is
such that $u \preorder \payoff_\omega(\rho)$ \iff $\rho$ visits
infinitely often all sets $T_i$ with $u_i=1$. This is then
equivalent to the conjunction of all $\Omega_i$'s with $u_i=1$. We
therefore apply the previous construction for the conjunction and get
the expected result.
\medskip We finish this proof with the \textbf{lexicographic
preorder}. Let $\omega = \langle (\Omega_i)_{1 \le i \le
n},\preorder \rangle$ be an ordered B\"uchi objective, where
$\preorder$ is the lexicographic preorder. For every $1 \le i \le
n$, let $T_i$ be the target set defining the B\"uchi condition
$\Omega_i$. Let $u \in \{0,1\}^n$ be a payoff vector. We construct
the following deterministic B\"uchi automaton which recognises the
runs whose payoff is greater than or equal to~$u$.
In this automaton there is a state~$q_i$ for each $i$ such that
$u_i=1$, and a state~$q_0$ that is both initial and repeated: $Q =
\{q_0\} \cup \{q_i \mid u_i=1\}$. We write $I = \{0\} \cup \{i \mid
u_i=1\}$. For every $i \in I$, we write $\mathsf{succ}(i) = \min (I
\setminus \{j \mid j \le i\})$, with the convention that $\min
\emptyset = 0$. The transition relation is defined as follows:
\begin{itemize}
\item for every $s \in \Stat$, there is a transition $q_0
\xrightarrow{s} q_{\mathsf{succ}(0)}$;
\item for every $i \in I \setminus \{0\}$, we have the following
transitions:
\begin{itemize}
\item $q_{i} \xrightarrow{T_i} q_{\mathsf{succ}(i)}$;
\item $q_i \xrightarrow{T_k \setminus T_i} q_0$
with $k<i$ and $u_k=0$;
\item $q_i \xrightarrow{s} q_i$ for every $s \in \Stat \setminus
(T_i \cup \bigcup_{k<i,u_k=0}T_k)$.
\end{itemize}
\end{itemize}
\noindent An example of the construction is given in
Figure~\ref{fig:lexico-automaton}.
\smallskip We now prove correctness of this construction. Consider
a path that goes from~$q_0$ to~$q_0$: if the automaton is currently
in state~$q_i$, then since the last occurrence of~$q_0$, at~least
one state for each target set~$T_j$ with $j<i$ and $u_j = 1$ has
been visited. When $q_0$ is reached again, either it~is because we
have seen all the $T_j$ with $u_j = 1$, or it~is because the run
visited some target~$T_i$ with $u_i=0$ and all the $T_j$ such that
$u_j=1$ and $j<i$; in both cases, the set of targets that have been
visited between two visits to~$q_0$ describes a payoff greater
than~$u$. Assume the play~$\pi$ is accepted by the automaton; then
there is a sequence of~$q_i$ as above that is taken infinitely
often, therefore $\payoff_\omega(\pi)$ is greater than or equal
to~$u$ for the lexicographic order.
Conversely assume $v = \payoff_\omega(\pi)$ is greater than or equal
to~$u$, that we already read a prefix $\pi_{\le k}$ for some~$k$,
and that the current state is~$q_0$. Reading the first symbol
in~$\pi$ after position~$k$, the run goes to the state~$q_i$ where
$i$ is the least integer such that $u_i=1$. Either the path
visits~$T_i$ at some point, or it~visits a state in a target $T_j$,
with $j$ smaller than~$i$ and $v_j=0$, in which case the automaton
goes back to~$q_0$. Therefore from~$q_0$ the run again comes back
to~$q_0$ while reading the rest of~$\pi$, and the automaton
accepts.
\end{proof}
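As a sanity check on the conjunction construction, the automaton $\calB$ and the B\"uchi acceptance condition on ultimately periodic plays can be simulated directly. This is an illustrative sketch: states are the integers $0,\dots,n$, with $n$ the repeated state.

```python
def conjunction_automaton(targets):
    """Transition function of the deterministic Buechi automaton for
    the conjunction preorder: from q_{i-1}, move to q_i on states of
    T_i and stay otherwise; from q_n, return to q_0 on any state."""
    n = len(targets)
    def delta(q, s):
        if q == n:
            return 0
        return q + 1 if s in targets[q] else q
    return delta, n  # n is the unique repeated state

def accepts_lasso(delta, repeated, prefix, cycle):
    """Buechi acceptance on the play prefix . cycle^omega: iterate
    until a pair (position in cycle, automaton state) repeats, then
    check whether the repeated state occurs on the loop."""
    q = 0
    for s in prefix:
        q = delta(q, s)
    seen, trace, i = {}, [], 0
    while (i, q) not in seen:
        seen[(i, q)] = len(trace)
        trace.append(q)
        q = delta(q, cycle[i])
        i = (i + 1) % len(cycle)
    return repeated in trace[seen[(i, q)]:]
```

On the lasso $(\texttt{a}\,\texttt{b})^\omega$ with targets $T_1=\{\texttt{a}\}$, $T_2=\{\texttt{b}\}$ the automaton cycles through $q_0 q_1 q_2$ and accepts, while $(\texttt{a})^\omega$ gets stuck in $q_1$ and is rejected.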
\begin{figure}[!ht]
\begin{minipage}{.48\textwidth}
\centering
\begin{tikzpicture}[scale=1,thick]
\tikzstyle{rond}=[draw,circle,minimum size=6mm,inner sep=0mm,fill=black!10]
\draw (0,0) node[rond] (A) {$q_0$};
\draw (1.5,0) node[rond] (B) {$q_1$};
\draw (3,0) node[rond] (C) {$q_2$};
\draw (1,-1) node[rond,double] (D) {$q_3$};
\draw[-latex'] (-0.8,0) -- (A);
\draw[-latex'] (A) -- node[above] {$T_1$} (B);
\draw[-latex'] (B) -- node[above] {$T_2$} (C);
\draw[-latex',rounded corners=4mm] (C) |- node[below,pos=0.7]
{$T_3$} (D.0);
\draw[-latex',rounded corners=4mm] (D) -| node[left] {$\Stat$}
(A);
\draw[-latex'] (A) .. controls +(.5,1) and +(-.5,1) .. (A);
\draw[-latex'] (B) .. controls +(.5,1) and +(-.5,1) .. (B);
\draw[-latex'] (C) .. controls +(.5,1) and +(-.5,1) .. (C);
\end{tikzpicture}
\caption{The automaton for the conjunction preorder, $n = 3$}
\label{fig:conjunction-automaton}
\end{minipage}\hskip-.05\textwidth
\begin{minipage}{.57\textwidth}
\centering
\begin{tikzpicture}[scale=1,thick]
\tikzstyle{rond}=[draw,circle,minimum size=6mm,inner sep=0mm,fill=black!10]
\draw (0,0) node[rond] (A) {$q_2$};
\draw (3,0) node[rond] (B) {$q_5$};
\draw (5,0) node[rond] (C) {$q_6$};
\draw (1,-1) node[rond,double] (E) {$q_0$};
\draw[-latex'] (A) -- node[above] {$T_2$} (B);
\draw[-latex'] (B) -- node[above] {$T_5$} (C);
\draw[-latex'] (.2,-1.2) -- (E);
\draw[-latex',rounded corners=4mm] (C) |- node[below,pos=0.7]
{$T_1,T_3,T_4,T_6$} (E.0);
\draw[-latex',rounded corners=4mm] (B) |- node[above,pos=0.78]
{$T_1,T_3,T_4$} (E.20);
\draw[-latex'] (A) -- node[above,pos=0.7] {\ $T_1$} (E);
\draw[-latex',rounded corners=4mm] (E) -| node[left] {$\Stat$}
(A);
\draw[-latex'] (A) .. controls +(.5,1) and +(-.5,1) .. (A);
\draw[-latex'] (B) .. controls +(.5,1) and +(-.5,1) .. (B);
\draw[-latex'] (C) .. controls +(.5,1) and +(-.5,1) .. (C);
\end{tikzpicture}
\caption{The automaton for the lexicographic order, $n = 7$ and $u=
(0,1,0,0,1,1,0)$}\label{fig:lexico-automaton}
\end{minipage}
\end{figure}
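The lexicographic construction can likewise be sketched in code. The function below transcribes the transition relation of the proof (state $0$ stands for $q_0$, state $i\ge 1$ for $q_i$; names are illustrative).

```python
def lexicographic_automaton(u, targets):
    """Transition function of the deterministic Buechi automaton for
    the lexicographic preorder at payoff u; state 0 is both initial
    and repeated, and state i stands for q_i (only present if u_i = 1)."""
    I = [0] + [i + 1 for i, b in enumerate(u) if b]
    def succ(i):
        above = [j for j in I if j > i]
        return min(above) if above else 0   # convention: min of empty set is 0
    def delta(q, s):
        if q == 0:                          # q_0 --s--> q_succ(0), any s
            return succ(0)
        if s in targets[q - 1]:             # q_i --T_i--> q_succ(i)
            return succ(q)
        if any(s in targets[k - 1] and not u[k - 1]
               for k in range(1, q)):       # q_i --(T_k \ T_i)--> q_0
            return 0
        return q                            # self-loop otherwise
    return delta
```

With $u=(1,0)$ and targets $T_1=\{\texttt{a}\}$, $T_2=\{\texttt{b}\}$, the play $(\texttt{a})^\omega$ alternates between $q_0$ and $q_1$ and is accepted (its payoff $(1,0)$ is at least~$u$), while $(\texttt{b})^\omega$ stays in $q_1$ forever and is rejected (its payoff $(0,1)$ is lexicographically below~$u$).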
We conclude with the following corollary:
\begin{corollary}
For finite games with ordered B\"uchi objectives with either of the
conjunction, the lexicographic or the subset preorders, the value
problem is \P-complete.
\end{corollary}
\subsection{Preference relations with monotonic preorders}
\label{subsec:monotonic}
We will see in this part that monotonic preorders lead to more
efficient algorithms. More precisely we prove the following result:
\begin{proposition}\hfill\label{prop:buchi-np}
\begin{itemize}
\item For finite games with ordered B\"uchi objectives where the preorders
are given by monotonic Boolean circuits, the value problem is in \coNP,
and the NE existence problem and the constrained NE existence problem are
in \NP.
\item Completeness holds in both cases for finite games with ordered
B\"uchi objectives where the preorders are given by monotonic
Boolean circuits or with the counting preorder.
\item \NP-completeness also holds for the constrained NE existence
problem for finite games with ordered B\"uchi objectives where the
preorders admit an element~$v$ such that for every $v'$, it~holds
$v' \ne \One \Leftrightarrow v' \preorder v$.\footnote{To be fully
formal, a preorder~$\preorder$ is in fact a family
$(\mathord\preorder_n)_{n\in\N}$ (where $\preorder_n$ compares two
vectors of size~$n$), and this condition should be stated as
``\textit{for all~$n$, there is an element~$v_n\in\{0,1\}^n$
such that for all~$v'\in\{0,1\}^n$, it~holds $v' \ne \One
\Leftrightarrow v' \preorder v_n$}''.}
\end{itemize}
\end{proposition}
\noindent We first show that monotonicity of the preorders implies some
memorylessness property in the suspect game. We then give algorithms
witnessing the claimed upper bounds, and show the various lower bounds.
\subsubsection{When monotonicity implies memorylessness.}
We say that a strategy $\sigma$ is \emph{memoryless} (resp. memoryless
from state $s_0$) if there exists a function $f\colon \Stat \to \Act$
such that $\sigma(h\cdot s) = f (s)$ for every $h \in \Hist$
(resp. for every $h \in \Hist(s_0)$). A strategy profile is said
to be memoryless whenever all the individual strategies are
memoryless. We show that when the preorders used in the ordered
B\"uchi objectives are monotonic, the three problems are also easier
than in the general case. This is because we can find memoryless
trigger profiles (recall Definition~\ref{def:trigger}).
We first show this lemma, that will then be applied to the suspect
game.
\begin{lemma}\label{lem:memoryless}
Let $\calH$ be a turn-based two-player game. Call \Eve one player,
and let $\sigma_\shortEve$ be a strategy for~\Eve, and $s_0$ be a
state of~$\calH$. There is a memoryless
strategy~$\sigma'_\shortEve$ such that for every $\rho' \in
\Out_{\calH}(s_0,\sigma'_\shortEve)$, there exists $\rho \in
\Out_{\calH}(s_0,\sigma_{\shortEve})$ such that $\Inf(\rho')
\subseteq\Inf(\rho)$.
\end{lemma}
\begin{proof}
Write $\sigma_1$ for the strategy $\sigma_\shortEve$ from the statement. The proof is by induction on the size of the set
\[
S(\sigma_1) =
\{(s,m) \mid \exists h \in\Hist(\sigma_1) .\ \sigma_1(h) = m\
\text{and}\ \last(h)=s\}.
\]
If its size is the same as that of $\{ s \mid \exists h \in
\Hist(\sigma_1).\ \last(h) = s \}$ then the strategy is memoryless.
Otherwise, let~$s$ be a state at which $\sigma_1$ takes several
different actions (\ie, $|(\{s\}\times\Act) \cap S(\sigma_1)| > 1$).
We will define a new strategy~$\sigma'_1$ that takes fewer different
actions in $s$ and such that for every outcome of~$\sigma_1'$, there
is an outcome of~$\sigma_1$ that visits (at~least) the same states
infinitely often.
If $\sigma$ is a strategy and $h$~is a~history, we let $\sigma\circ
h\colon h'\mapsto \sigma(h \cdot h')$ for any history~$h'$. Then for
every~$m$ such that $(s,m) \in S(\sigma_1)$ we let ${H_m
= \{h \in \Hist(\sigma_1) \mid \last(h)=s\ \text{and}\
\sigma_1(h)=m\}}$, and for every~$h$, $h^{-1} \cdot H_m = \{h'
\mid h \cdot h' \in H_m\}$.
We pick $m$ such that $H_m$ is not empty.
\begin{itemize}
\item Assume that there is $h_0 \in \Hist(\sigma_1)$ with
$\last(h_0) = s$, such that $h_0^{-1} \cdot H_m$ is empty. We
define a new strategy $\sigma'_1$ as follows. If $h$ is a
history which does not visit $s$, then $\sigma'_1(h)=\sigma_1(h)$.
If $h$ is a history which visits $s$, then decompose $h$ as $h'
\cdot h''$ where $\last(h') = s$ is the first visit to $s$, and
define $\sigma'_1(h) = \sigma_1(h_0 \cdot h'')$. Then,
strategy~$\sigma'_1$ does not use~$m$ at state~$s$, and therefore
at least one action has been ``removed'' from the strategy. More
precisely, $|(\{s\}\times\Act) \cap S(\sigma'_1)| \le
|(\{s\}\times\Act) \cap S(\sigma_1)| - 1$. Furthermore the
condition on the states that are visited infinitely often
by outcomes of $\sigma'_1$ is also satisfied.
\item Otherwise for any $h \in \Hist(\sigma_1)$ with $\last(h) = s$,
$h^{-1}\cdot H_m$ is not empty. We will construct a strategy
$\sigma'_1$ which plays~$m$ at~$s$. Let $h$ be a history; we
first define the extension~$e(h)$ inductively as follows:
\begin{itemize}
\item $e(\varepsilon) = \varepsilon$, where $\varepsilon$ is the
empty history;
\item $e(h\cdot s) = e (h) \cdot h'$ where $h' \in (e(h))^{-1}
\cdot H_m$;
\item $e(h \cdot s') = e(h) \cdot s'$ if $s' \ne s$.
\end{itemize}
We extend the definition of $e$ to infinite outcomes in the
natural way: $e(\rho)_{i} = e(\rho_{\le i})_{i}$. We then define
the strategy $\sigma'_1 \colon h \mapsto \sigma_1(e(h))$. We show
that if $\rho$ is an outcome of $\sigma'_1$, then $e(\rho)$ is an
outcome of $\sigma_1$. Indeed assume $h$ is a finite outcome
of~$\sigma'_1$, that $e(h)$ is an outcome of~$\sigma_1$ and
$\last(h) = \last(e(h))$. If $h\cdot s$ is an outcome of
$\sigma'_1$, by construction of $e$, $e(h\cdot s) = e(h) \cdot
h'$, such that $\last(h') = s$, and $h'$ is an outcome of
$\sigma_1\circ e(h)$ and as $e(h)$ is an outcome of $\sigma_1$ by
hypothesis, that means that $e(h\cdot s)$ is an outcome of
$\sigma_1$. If $h\cdot s'$ with $s'\ne s$ is an outcome of
$\sigma'_1$, then $e(h \cdot s') = e(h) \cdot s'$, $s' \in \Tab
(\last(h), \sigma'_1(h))$, and $\sigma'_1(h) =
\sigma_1(e(h))$. Using the hypothesis that $\last(h) = \last(e(h))$
and that $e(h)$ is an outcome of~$\sigma_1$, we get that $e(h\cdot s')$
is an outcome of~$\sigma_1$. This shows that if $\rho$ is an
outcome of~$\sigma'_1$ then $e(\rho)$ is an outcome
of~$\sigma_1$. The property on states visited infinitely often
follows. Several moves have been removed from the strategy at $s$
(since the strategy is now memoryless at~$s$, playing~$m$).
\end{itemize}
In all cases, $S(\sigma'_1)$ is strictly included in
$S(\sigma_1)$, and induction entails the result.
\end{proof}
\begin{lemma}\label{lem:suspect-based}
If for every player $A$, $\preorder_A$ is monotonic, and if there is
a trigger profile for some play $\pi$ from~$s$, then there is a
memoryless winning strategy for~$\Eve$ in $\calH(\calG,\pi)$ from
state~$(s,\Agt)$.
\end{lemma}
\begin{proof}
Assume there is a trigger profile for~$\pi$. We have seen in
Lemma~\ref{lem:suspect-game} that there is then a winning
strategy~$\sigma_\shortEve$ in game $\calH(\calG,\pi)$ for $\Eve$.
Consider the memoryless strategy $\sigma'_\shortEve$ constructed as
in Lemma~\ref{lem:memoryless}. Let~$\rho'$ be an outcome of
$\sigma'_\shortEve$, there is an outcome~$\rho$
of~$\sigma_\shortEve$ such that $\Inf(\rho') \subseteq
\Inf(\rho)$. As $\sigma_\shortEve$ is winning in~$\calH(\calG,\pi)$,
for every $A \in \limitpi2(\rho)$, $\Sproj_1(\rho) \prefrel_A \pi$.
We assume the B\"uchi conditions are given by the target sets
$(T_i^A)_{A,i}$. For each player~$A$, $\{i \mid \Inf
(\Sproj_1(\rho'))\cap T^A_i\} \subseteq \{i \mid \Inf
(\Sproj_1(\rho))\cap T^A_i\}$. As the preorder is monotonic the
payoff of $\Sproj_1(\rho')$ is smaller than that of~$\Sproj_1(\rho)$:
$\Sproj_1(\rho') \prefrel_A \Sproj_1(\rho)$. So~the play $\rho'$ is
winning for~$\Eve$, and $\sigma'_\shortEve$ is a memoryless winning
strategy for~$\Eve$ in game $\calH(\calG,\pi)$.
\end{proof}
\begin{lemma}\label{lem:check-threat}
If for every player $A$, $\preorder_A$ is given by monotonic Boolean
circuits, then given a path~$\pi$, we can decide in polynomial time
if a memoryless strategy for~$\Eve$ in $\calH(\calG,\pi)$ is
winning.
\end{lemma}
\begin{proof}
Let $\sigma_\shortEve$ be a memoryless strategy in
$\calH(\calG,\pi)$ for $\Eve$. By keeping only the edges that are
taken by $\sigma_\shortEve$, we define a subgraph of the game. We
can compute in polynomial time the strongly connected components of
this graph. If one component is reachable and does not satisfy the
objective of~$\Eve$, then the strategy is not winning. Conversely
if all the reachable strongly connected components satisfy the
winning condition of $\Eve$, since the preorder is monotonic,
$\sigma_\shortEve$ is a winning strategy. Notice that since the
preorder is given as a Boolean circuit, we can check in polynomial
time whether a strongly connected component is winning or
not. Overall, the algorithm thus runs in polynomial time.
\end{proof}
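The algorithm of this proof can be sketched as follows: restrict the graph to the edges taken by the memoryless strategy, compute the reachable strongly connected components, and test each against $\Eve$'s winning condition, abstracted here as a predicate on SCCs (all names are illustrative).

```python
def reachable_sccs(graph, init):
    """SCCs of the induced subgraph that are reachable from init
    (Kosaraju's algorithm; fine for these small illustrative graphs)."""
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    order, seen = [], set()
    def dfs(u):                      # first pass: record finish order
        seen.add(u)
        for v in graph.get(u, []):
            if v not in seen:
                dfs(v)
        order.append(u)
    for u in nodes:
        if u not in seen:
            dfs(u)
    rev = {u: [] for u in nodes}     # reversed edge relation
    for u in graph:
        for v in graph[u]:
            rev[v].append(u)
    comps, seen = [], set()
    for u in reversed(order):        # second pass on the reversed graph
        if u not in seen:
            stack, c = [u], set()
            while stack:
                x = stack.pop()
                if x in seen:
                    continue
                seen.add(x)
                c.add(x)
                stack.extend(rev[x])
            comps.append(frozenset(c))
    reach, stack = set(), [init]     # keep only reachable components
    while stack:
        u = stack.pop()
        if u not in reach:
            reach.add(u)
            stack.extend(graph.get(u, []))
    return [c for c in comps if c & reach]

def strategy_is_winning(strategy_graph, init, scc_wins):
    """A memoryless strategy is winning iff every reachable SCC of the
    subgraph it induces satisfies Eve's (monotonic) condition."""
    return all(scc_wins(c) for c in reachable_sccs(strategy_graph, init))
```

The predicate `scc_wins` stands for evaluating the Boolean circuit on the payoff induced by a component, which, as in the proof, takes polynomial time per component.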
We now turn to the proof of the claimed upper bounds.
\subsubsection{Proofs for the upper bounds.}
We show that the value problem is in \coNP~for finite games with
ordered B\"uchi objectives, when preorders are given by monotonic
Boolean circuits.
As already mentioned at the beginning of Section~\ref{sec:single}, for
the value problem, we can make the concurrent game turn-based: since
player $A$ must win against any strategy of the coalition $P = \Agt
\setminus \{A\}$, she must also win in the case where the opponents'
strategies can adapt to what $A$ plays. In other terms, we can make
$A$ play first, and then the coalition. This turn-based game is
determined, so~that there is a strategy~$\sigma$ whose outcomes are
always better (for~$A$) than $v^{A}$ \iff for any strategy~$\sigma'$
of coalition $P$, there is an outcome with payoff (for~$A$) better
than~$v^{A}$. If there is a counterexample to this fact, then thanks
to Lemma~\ref{lem:memoryless} there is one with a memoryless
strategy~$\sigma'$. The \coNP\ algorithm proceeds by checking that
all the memoryless strategies of coalition~$P$ have an outcome better
than~$v^{A}$, which is achievable in polynomial time, with a method
similar to Lemma~\ref{lem:check-threat}.
\medskip We show now that the constrained NE existence problem is in
\NP~for finite games with ordered B\"uchi objectives, when preorders
are given by monotonic Boolean circuits.
The algorithm for the constrained NE existence problem proceeds by
guessing:
\begin{itemize}
\item the payoff for each player,
\item a~play of the form $\pi \cdot \tau^\omega$, where
$|\pi|\le|\Stat|^2$ and $|\tau|\le|\Stat|^2$,
\item an under-approximation $W$ of the set of winning states
in~$\calH(\calG,\pi \cdot \tau^\omega)$,
\item a memoryless strategy profile~$\sigma_\Agt$ in~$\calH(\calG,\pi
\cdot \tau^\omega)$.
\end{itemize}
We check that $\sigma_\Agt$ is a witness for the fact that the states
in $W$ are winning; thanks to Lemma~\ref{lem:check-threat}, this can
be done in polynomial time. We also verify that the play $\pi \cdot
\tau^\omega$ has the expected payoff, that the payoff satisfies the
constraints, and that it never gets out of $W$. If these conditions
are fulfilled, then the play $\pi \cdot \tau^\omega$ meets the
conditions of Theorem~\ref{thm:eq-win}, and there is a Nash
equilibrium with outcome $\pi \cdot \tau^\omega$.
Lemma~\ref{lem:suspect-based} and Proposition~\ref{lem:play-length}
ensure that if there is a Nash equilibrium, we~can find it this way.
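The payoff check in the algorithm above is straightforward, because the guessed play is ultimately periodic: the states visited infinitely often along $\pi\cdot\tau^\omega$ are exactly those of $\tau$. A minimal sketch with illustrative names:

```python
def lasso_payoff(tau, targets):
    """Payoff of the play pi . tau^omega: the states visited
    infinitely often are exactly those occurring in the cycle tau."""
    inf = set(tau)
    return tuple(int(bool(inf & T)) for T in targets)

def payoff_within(tau, targets, leq, lower, upper):
    """Check the guessed payoff constraint lower <= payoff <= upper
    for the preorder leq (here given as a Python predicate)."""
    p = lasso_payoff(tau, targets)
    return leq(lower, p) and leq(p, upper)
```

This is the polynomial-time part of the guess-and-check procedure; the remaining work is the verification of the memoryless profile from the previous lemma.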
\subsubsection{Proofs for the hardness results.}
We first prove the hardness results for the counting preorder.
\begin{lemma}\label{prop:buchi-counting-nphard}
For finite games with ordered B\"uchi objectives that use the
counting preorder, the value problem is \coNP-hard.
\end{lemma}
\begin{proof}
We~reduce (the complement of) \SAT into the value problem for
two-player turn-based games with B\"uchi objectives with the
counting preorder. Consider an instance
\[\phi = C_1 \land \cdots \land C_m\]
with $C_j=\ell_{j,1} \lor \ell_{j,2} \lor \ell_{j,3}$, over a set of
variables~$\{x_1,\ldots,x_n\}$. With~$\phi$, we~associate a
two-player turn-based game~$\calG$. Its set of states is made of
\begin{itemize}
\item a set containing the unique initial state~$V_0=\{s_0\}$,
\item a set of two states~$V_k=\{x_k, \lnot x_k\}$ for each~$1\leq
k\leq n$,
\item and a set of three states~$V_{n+j}=\{t_{j,1}, t_{j,2},
t_{j,3}\}$ for each~$1\leq j\leq m$.
\end{itemize}
Then, for each~$0\leq l\leq n+m$, there is a transition between any
state of~$V_l$ and any state of~$V_{l+1}$ (assuming
$V_{n+m+1}=V_0$).
The game involves two players: player~$B$ owns all the states, but
has no objectives (she~always loses). Player~$A$ has a set of
B\"uchi objectives defined by $T^A_{2\cdot k} = \{x_k\} \cup
\{t_{j,p} \mid \ell_{j,p} = x_k \}$, $T^A_{2\cdot k+1} = \{ \lnot
x_k\} \cup \{t_{j,p} \mid \ell_{j,p} = \lnot x_k \}$, for $1\leq
k\leq n$. Notice that at least $n$ of these target sets will be
visited infinitely often along any infinite play. We prove that if
the formula is not satisfiable, then at least $n+1$ objectives will
be fulfilled, and conversely.
\smallskip Assume the formula is satisfiable, and pick a witnessing
valuation~$v$. We define a strategy~$\sigma_B$ for~$B$ that
``follows'' valuation~$v$: from states in~$V_{k-1}$, for any~$1\leq
k\leq n$, the strategy plays towards~$x_k$ if $v(x_k)=\texttt{true}$
(and to~$\lnot x_k$ otherwise). Then, from a state in~$V_{n+l-1}$
with~$1\leq l\leq m$, it~plays towards one of the~$t_{j,p}$ that
evaluates to true under~$v$ (the one with least
index~$p$,~say). This way, the number of targets of player~$A$ that
are visited infinitely often is~$n$.
Conversely, pick a play in~$\calG$ such that at most (hence exactly) $n$
objectives of~$A$ are fulfilled. In particular, for any~$1\leq k\leq
n$, this play never visits one of~$x_k$ and~$\lnot x_k$, so that it
defines a valuation~$v$ over $\{x_1,\ldots, x_n\}$. Moreover, any
state of~$V_{n+l}$, with~$1\leq l\leq p$, that is visited infinitely
often must correspond to a literal that is made true by~$v$, as
otherwise this would make one more objective that is fulfilled
for~$A$. As~a consequence, each clause of~$\phi$ evaluates to true
under~$v$, and the result follows.
\end{proof}
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[xscale=1.4,thick]
\tikzstyle{rond}=[draw,circle,minimum size=7mm,inner sep=0mm,fill=black!10]
\draw (0,0) node [rond] (A1) {$s_0$};
\draw (1,1) node [rond] (B1) {$x_1$};
\draw (1,-1) node [rond] (C1) {$\lnot
x_1$};
\draw (2,1) node [rond](B2){$x_2$};
\draw (2,-1) node [rond](C2){$\lnot x_2$};
\draw (3,1) node [rond] (B3) {$x_3$};
\draw (3,-1) node [rond] (C3){$\lnot
x_3$};
\draw (4.5, 1.2) node [rond]
(B4) {$t_{1,1}$};
\draw (4.5, 0 ) node [rond]
(C4) {$t_{1,2}$};
\draw (4.5,-1.2) node [rond]
(D4) {$t_{1,3}$};
\draw (6, 1.2) node [rond]
(B5) {$t_{2,1}$};
\draw (6, 0 ) node [rond]
(C5) {$t_{2,2}$};
\draw (6,-1.2) node [rond]
(D5){$t_{2,3}$};
\draw (7,0) node (A6) {};
\draw [-latex'] (A1) -- (B1);
\draw [-latex'] (A1) -- (C1);
\draw [-latex'] (B1) -- (B2);
\draw [-latex'] (C1) -- (B2);
\draw [-latex'] (B1) -- (C2);
\draw [-latex'] (C1) -- (C2);
\draw [-latex'] (B2) -- (B3);
\draw [-latex'] (C2) -- (B3);
\draw [-latex'] (B2) -- (C3);
\draw [-latex'] (C2) -- (C3);
\draw [-latex'] (B3) -- (B4);
\draw [-latex'] (C3) -- (B4);
\draw [-latex'] (B3) -- (C4);
\draw [-latex'] (C3) -- (C4);
\draw [-latex'] (B3) -- (D4);
\draw [-latex'] (C3) -- (D4);
\draw [-latex'] (B4) -- (B5);
\draw [-latex'] (C4) -- (B5);
\draw [-latex'] (D4) -- (B5);
\draw [-latex'] (B4) -- (C5);
\draw [-latex'] (C4) -- (C5);
\draw [-latex'] (D4) -- (C5);
\draw [-latex'] (B4) -- (D5);
\draw [-latex'] (C4) -- (D5);
\draw [-latex'] (D4) -- (D5);
\draw[rounded corners=4mm] (B5) -| (7,-1.6);
\draw[rounded corners=4mm] (C5) -| (7,-1.6);
\draw[rounded corners=4mm] (D5) -| (7,-1.6);
\draw [rounded corners=4mm,-latex'] (7,-1.6) |- (5,-2) -| (A1);
\end{tikzpicture}
\caption{The game~$\calG$ associated with formula~\eqref{eq:counting}}
\label{fig:counting}
\end{figure}
\begin{example}
We illustrate the construction of the previous proof in
Figure~\ref{fig:counting} for the formula
\begin{equation}\label{eq:counting}
\phi = (x_1 \lor x_2 \lor \lnot x_3) \land (\lnot x_1 \lor
x_2 \lor \lnot x_3)\,.
\end{equation}
The targets for player $A$ are $T_1 = \{ x_1, t_{1,1}\}$, $T_2 = \{
\lnot x_1, t_{2,1}\}$, $T_3 = \{ x_2, t_{1,2}, t_{2,2}\}$, $T_4 =
\{ \lnot x_2\}$, $T_5 = \{ x_3\}$, $T_6 = \{ \lnot x_3, t_{1,3},
t_{2,3}\}$. Player~$A$ cannot ensure visiting four target sets
infinitely often; therefore, the formula is satisfiable.
\end{example}
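The counting argument underlying this construction can be checked mechanically. The following Python sketch uses an encoding of our own (literals as signed integers, clause states $t_{j,p}$ as triples): given the clauses and a valuation, it counts how many objectives of player~$A$ are visited infinitely often along the lasso play in which $B$ follows the valuation.

```python
def count_objectives(clauses, n, val):
    """Count the objectives of player A visited infinitely often when B
    follows the valuation `val` (variables 1..n; a literal is a signed
    integer: k for x_k, -k for the negation of x_k)."""
    # Literal states visited infinitely often in the variable blocks.
    visited = {k if val[k] else -k for k in range(1, n + 1)}
    # In each clause block, B moves to the first literal true under val.
    for j, clause in enumerate(clauses):
        sat = [p for p, lit in enumerate(clause)
               if (lit > 0) == val[abs(lit)]]
        if not sat:           # val does not satisfy clause j: no witness
            return None
        visited.add(("t", j, sat[0]))
    count = 0
    for k in range(1, n + 1):
        for lit in (k, -k):   # the two objectives attached to x_k
            target = {lit} | {("t", j, p)
                              for j, clause in enumerate(clauses)
                              for p, l in enumerate(clause) if l == lit}
            count += bool(target & visited)
    return count
```

For formula~\eqref{eq:counting} and the valuation mapping every variable to true, exactly $3=n$ objectives are visited infinitely often, as in the example.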
\begin{lemma}
For finite games with ordered B\"uchi objectives that use the
counting preorder, the NE existence problem is \NP-hard.
\end{lemma}
\begin{proof}
Let $\calG$ be the game we constructed for
Lemma~\ref{prop:buchi-counting-nphard}. We construct the game
$\calG''$ from $\calG$ as described in
Section~\ref{sec:link-value-exist}. The preference relations in
$\calG''$ can still be described with ordered B\"uchi objectives and
the counting preorder: the only target set of $B$ is $\{s_1\}$, and we add $s_1$
to $n$ different targets of $A$, where $n$ is the number of
variables as in Lemma~\ref{prop:buchi-counting-nphard}. From
Proposition~\ref{lem:link-value-exist} there is a Nash equilibrium
in~$\calG''$ from~$s_0$ \iff $A$ cannot ensure visiting at least
$n+1$ targets infinitely often. Hence the NE existence problem is
\NP-hard.
\end{proof}
This also proves \NP-hardness for the constrained NE existence problem
for ordered B\"uchi objectives with the counting preorder. Hardness
results for preorders given by monotonic Boolean circuits follow from
the above, since the counting preorder can be expressed as a
polynomial-size monotonic Boolean circuit.
We now show hardness in the special case of preorders with (roughly)
at most one maximal element below $\One$.
\begin{lemma}\label{prop:buchi-nphard}
For finite turn-based games with ordered B\"uchi objectives with a
monotonic preorder for which there is an element~$v$ such that for
every $v'$, $v' \ne \One \Leftrightarrow v' \preorder v$, the
constrained NE existence problem is \NP-hard.
\end{lemma}
\begin{proof}
Let us consider a formula $\phi = C_1 \land \cdots \land C_m$ with
$C_j=\ell_{j,1} \lor \ell_{j,2} \lor \ell_{j,3}$, over
variables~$\{x_1,\ldots,x_n\}$.
For~each variable~$x_i$, our~game has one player~$B_i$ and three
states~$s_i$, $x_i$ and~$\lnot x_i$. The objectives of~$B_i$ are the
sets~$\{x_i\}$ and~$\{\lnot x_i\}$. Transitions go from each~$s_i$
to~$x_i$ and~$\lnot x_i$, and from~$x_i$ and~$\lnot x_i$
to~$s_{i+1}$ (with $s_{n+1}=s_1$). Finally, an~extra player~$A$ has
full control of the game (\ie, she~owns all the states) and has $m$
objectives, defined by $T^A_j = \{\ell_{j,1}, \ell_{j,2},
\ell_{j,3}\}$ for $1\leq j\leq m$. The construction is illustrated
in Figure~\ref{fig:buchi-nphard}.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[xscale=1.4,thick]
\tikzstyle{rond}=[draw,circle,minimum size=7mm,inner sep=0mm,fill=black!10]
\draw (0,0) node [rond] (A1) {$s_1$};
\draw (1,.8) node [rond] (B1) {$x_1$};
\draw (1,-.8) node [rond] (C1) {$\lnot
x_1$};
\draw (2,0) node [rond] (A2) {$s_2$};
\draw (3,.8) node [rond](B2){$x_2$};
\draw (3,-.8) node [rond](C2){$\lnot x_2$};
\draw (4,0) node [rond] (A3) {$s_3$};
\draw (5,.8) node [rond] (B3) {$x_3$};
\draw (5,-.8) node [rond] (C3){$\lnot
x_3$};
\draw (6,0) node [rond] (A4) {$s_4$};
\draw (7,.8) node [rond] (B4) {$x_4$};
\draw (7,-.8) node [rond] (C4){$\lnot x_4$};
\draw [-latex'] (A1) -- (B1);
\draw [-latex'] (A1) -- (C1);
\draw [-latex'] (B1) -- (A2);
\draw [-latex'] (C1) -- (A2);
\draw [-latex'] (A2) -- (B2);
\draw [-latex'] (A2) -- (C2);
\draw [-latex'] (B2) -- (A3);
\draw [-latex'] (C2) -- (A3);
\draw [-latex'] (A3) -- (B3);
\draw [-latex'] (A3) -- (C3);
\draw [-latex'] (B3) -- (A4);
\draw [-latex'] (C3) -- (A4);
\draw [-latex'] (A4) -- (B4);
\draw [-latex'] (A4) -- (C4);
\draw[rounded corners=4mm] (B4) -| (8,-1.2);
\draw[rounded corners=4mm] (C4) -| (8,-1.2);
\draw [rounded corners=4mm,-latex'] (8,-1.2) |- (5,-1.6) -| (A1);
\end{tikzpicture}
\caption{The B\"uchi game for a formula with $4$ variables}
\label{fig:buchi-nphard}
\end{figure}
\smallskip We show that formula~$\phi$ is satisfiable \iff there is a
Nash equilibrium where each player~$B_i$ gets payoff~$\beta_i$
satisfying~$\beta_i\preorder v$ (hence $\beta_i\not=(1,1)$), and
player~$A$ gets payoff~$\One$.
First assume that the formula is satisfiable, and pick a witnessing
valuation~$u$. By~playing according to~$u$, player~$A$ can satisfy all
of her objectives (hence she cannot improve her payoff, since the
preorder is monotonic). Since she alone controls all the game, the
other players cannot improve their payoff, so that this is a Nash
equilibrium. Moreover, since~$A$ plays a memoryless strategy, only one of~$x_i$
and~$\lnot x_i$ is visited for each~$i$, so~that the payoff~$\beta_i$
for~$B_i$ satisfies~$\beta_i\preorder v$. Conversely, if there is a
Nash equilibrium with the desired payoff, then by hypothesis, exactly
one of each~$x_i$ and~$\lnot x_i$ is visited infinitely often (so that
the payoff for~$B_i$ is not~$(1,1)$), which defines a
valuation~$u$. Since in this Nash equilibrium, player~$A$ satisfies
all its objectives, one state of each target is visited, which means
that under valuation~$u$, formula~$\phi$ evaluates to true.
\end{proof}
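For a quick sanity check of this equivalence, the payoffs arising when player~$A$ follows the memoryless strategy induced by a valuation can be computed directly. The Python sketch below uses our own encoding (literals as signed integers):

```python
def payoffs(clauses, n, val):
    """Payoffs when player A plays the memoryless strategy induced by
    `val`: A's j-th objective is met iff clause j has a true literal,
    and B_i's payoff records which of x_i and its negation is visited."""
    payoff_A = tuple(int(any((lit > 0) == val[abs(lit)] for lit in c))
                     for c in clauses)
    payoff_B = {i: (1, 0) if val[i] else (0, 1) for i in range(1, n + 1)}
    return payoff_A, payoff_B

pA, pB = payoffs([[1, 2, -3], [-1, 2, -3]], 3, {1: True, 2: True, 3: True})
assert pA == (1, 1)                            # A meets all objectives
assert all(p != (1, 1) for p in pB.values())   # no B_i gets payoff (1,1)
```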
\subsubsection{Applications.}
We now describe examples of preorders satisfying the condition of
Lemma~\ref{prop:buchi-nphard}, namely the existence of an element~$v$
such that $v' \ne \One \Leftrightarrow v' \preorder v$ for every~$v'$.
\begin{lemma}\label{lem:second-maximal}
Conjunction, counting and lexicographic preorders have an element
$v$ such that $v' \ne \One \Leftrightarrow v' \preorder v$.
\end{lemma}
\begin{proof}
Consider $v = (1,\dots,1,0)$, and $v' \ne \One$. For conjunction,
there is~$i$ such that $v'_i=0$, so $v'\preorder v$. For counting,
$|\{i\mid v'_i = 1\}| < n$, so $v'\preorder v$. For the
lexicographic preorder, let $i$ be the smallest index such that
$v'_i =0$: if $i<n$, then $v'_j = v_j = 1$ for all $j<i$ and $v'_i =
0 < 1 = v_i$; if $i=n$, then $v'=v$. In both cases $v'
\preorder v$.
\end{proof}
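These three cases can also be verified exhaustively for small~$n$. The following Python sketch (the function names and encodings are ours) represents the three preorders on Boolean payoff vectors and checks the characterisation of $v=(1,\dots,1,0)$ for $n=4$:

```python
from itertools import product

def leq_conj(u, v):   # conjunction: u below v iff all(u) implies all(v)
    return all(v) or not all(u)

def leq_count(u, v):  # counting: compare the numbers of 1-components
    return sum(u) <= sum(v)

def leq_lex(u, v):    # lexicographic order on payoff vectors
    return tuple(u) <= tuple(v)

n = 4
one = (1,) * n
v = (1,) * (n - 1) + (0,)          # the element v = (1,...,1,0)
for leq in (leq_conj, leq_count, leq_lex):
    for vp in product((0, 1), repeat=n):
        # v' is below v exactly when v' differs from the maximal payoff
        assert (vp != one) == leq(vp, v)
```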
As a consequence, the result of Lemma~\ref{prop:buchi-nphard} applies
in particular to the conjunction and lexicographic preorders, for
which the constrained NE existence problem is thus \NP-complete. Hence we
get:
\begin{corollary}
For finite games with ordered B\"uchi objectives with either of the
conjunction or the lexicographic preorders, the constrained
NE existence problem is \NP-complete.
\end{corollary}\section{Ordered reachability objectives}\label{sec:reach}
In this section, we assume that preference relations of the players are
given by ordered reachability objectives (as defined in
Section~\ref{sec:prefrel}), and we prove the results listed in
Table~\ref{table-reach} (page~\pageref{table-reach}). We will first
consider the general case when preorders are given by Boolean circuits
and we will show that the various decision problems are
\PSPACE-complete. We will even notice that the hardness result holds
for several simpler preorders. We will finally improve this result in
a number of cases.
For the rest of this section, we fix a game $\calG=\tuple{\Stat,\Agt,
\Act,\Allow, \Tab,(\mathord\prefrel_A)_{A\in\Agt}}$, and we assume that
$\prefrel_A$ is given by an ordered reachability objective $\omega_A =
\langle (\Omega_i^A)_{1 \le i \le n_A},\allowbreak \mathord\preorder_A
\rangle$.
\subsection{General case: preorders are given as circuits}
\label{ssec-generalcase}
We prove the following result:
\begin{proposition}\label{prop:reach-pspace}\hfill
\begin{itemize}
\item For finite games with ordered reachability objectives where
preorders are given by Boolean circuits, the value problem, the NE
existence problem and
the constrained NE existence problem are in \PSPACE.
\item For finite two-player turn-based games with ordered
reachability objectives where preorders have~$\One$ as a unique
maximal element, the value problem is \PSPACE-hard.
\item For finite two-player games with ordered reachability
objectives where preorders have~$\One$ as a unique maximal
element, and have an element $v$ such that for every $v'$, $v' \ne
\One \Leftrightarrow v' \preorder v$, then the NE existence problem and the
constrained NE existence problem are \PSPACE-hard.
\end{itemize}
\end{proposition}
\noindent The upper bound will be proven by reduction to games with ordered
B\"uchi objectives using game-simulation.
\subsubsection{Reduction to a game with ordered B\"uchi objectives.}
We show how to transform a game $\calG$ with preferences given by
Boolean circuits over reachability objectives into a new
game~$\calG'$, with preferences given by Boolean circuits over B\"uchi
objectives. Although the size of~$\calG'$ will be exponential,
circuit preorders over B\"uchi objectives define prefix-independent
preference relations, and thus checking condition~\ref{cond:win} of
Theorem~\ref{thm:eq-win} can be made more efficient.
States of $\calG'$ store the set of states of $\calG$ that have
already been visited. The set of states of~$\calG'$ is $\Stat' = \Stat
\times 2^\Stat$. The transitions are as follows: $(s,S) \rightarrow
(s',S')$ when there is a transition $s\rightarrow s'$ in~$\calG$ and
$S' = S \cup \{s'\}$. We keep the same circuits to define the
preference relations, but the reachability objectives are transformed
into B\"uchi objectives: a~target set~$T$ is transformed into $T' = \{
(s,S) \mid S \cap T \ne \varnothing\}$. Although the game has
exponential size, the preference relations only depend on the strongly
connected component the path ends~in, so~that we will be able to use
a special algorithm, which we describe after this lemma.
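This construction can be sketched in a few lines of Python (with our own encoding of $\calG$ as a successor map): the function below builds the reachable part of $\Stat'=\Stat \times 2^\Stat$, and converts a reachability target of~$\calG$ into the corresponding B\"uchi target of~$\calG'$.

```python
def expand(trans, s0):
    """Subset-tracking expansion: states of G' are pairs (s, S) where
    S is the set of states of G visited so far."""
    init = (s0, frozenset({s0}))
    todo, trans2 = [init], {}
    while todo:
        s, S = todo.pop()
        if (s, S) in trans2:
            continue
        trans2[(s, S)] = [(t, S | {t}) for t in trans[s]]
        todo.extend(trans2[(s, S)])
    return trans2

def buchi_target(trans2, T):
    """A reachability target T of G becomes the Buechi target T'."""
    return {(s, S) for (s, S) in trans2 if S & T}

g2 = expand({"a": ["b"], "b": ["a", "c"], "c": ["c"]}, "a")
assert len(g2) == 4   # (a,{a}), (b,{a,b}), (a,{a,b}), (c,{a,b,c})
assert buchi_target(g2, {"c"}) == {("c", frozenset({"a", "b", "c"}))}
```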
We define the relation~$\simulrel$ between states of $\calG$
and~$\calG'$ by $s \simulrel s'$ \iff $s' = (s,S)$ for some
$S\subseteq \Stat$, and prove that
it is a game simulation (see Definition~\ref{def-gsim}).
\begin{lemma}\label{lem:construction-reach-general}
The relation~$\simulrel$ (resp. $\simulrel^{-1}$) is a game
simulation between $\calG$ and~$\calG'$, and it is
preference-preserving from $(s_0,(s_0,\{s_0\}))$
(resp. $((s_0,\{s_0\}),s_0)$).
\end{lemma}
\begin{proof}
Let $m_\Agt$ be a move; writing $t = \Tab(s,m_\Agt)$, we~have
$\Tab'((s,S),m_\Agt) = (t,S\cup \{t\})$. Therefore $\Tab(s,m_\Agt)
\simulrel \Tab'(s',m_\Agt)$. Let $(t,S')$ be a state of~$\calG'$;
then we also have ${t\simulrel (t,S')}$. If ${S' = S\cup \{t\}}$
then $\Susp((s,t),m_\Agt) = \Susp(((s,S),(t,S')),m_\Agt)$; otherwise
${\Susp(((s,S),(t,S')),m_\Agt) = \varnothing}$. In both cases,
condition~(2) in the definition of a game simulation is obviously
satisfied.
In the other direction, let $(s',S\cup\{s'\}) = \Tab((s,S),m_\Agt)$;
we have that $s'\simulrel (s',S\cup\{s'\})$. Let $t \in
\Stat$. Then $t \simulrel (t,S\cup \{t\})$, and $\Susp((s,t),m_\Agt)
= \Susp(((s,S),(t,S\cup\{t\})),m_\Agt)$. Hence $\simulrel^{-1}$ is a
game simulation.
\smallskip Let $\rho$ and $\rho'$ be two paths, from $s_0$ and
$(s_0,\{s_0\})$ respectively, and such that $\rho \simulrel \rho'$.
We show preference preservation, by showing that $\rho$ reaches
target set~$T$ \iff $\rho'$~visits~$T'$ infinitely often.
If~$\rho$~visits some state $s\in T$, then from that point, states
visited by $\rho'$ are of the form $(s',S')$ with $s \in S'$; all
these states are in $T'$, therefore $\rho'$ visits~$T'$ infinitely
often. Conversely, if $\rho'$ visits~$T'$ infinitely often, then it
visits some state $(s',S')$ with $S'\cap T\ne\varnothing$, which means
that $\rho$ has visited some state of~$T$. From this, we~easily
obtain preference preservation.
\end{proof}
\noindent As a corollary (Proposition~\ref{prop:sim}) we get that there is a
correspondence between Nash equilibria in $\calG$ and Nash equilibria
in $\calG'$.
\begin{lemma}
If there is a Nash equilibrium $\sigma_\Agt$ in $\calG$ from $s_0$,
then there is a Nash equilibrium $\sigma'_\Agt$ in $\calG'$ from
$(s_0,\{s_0\})$ such that $\Out_{\calG}(s_0,\sigma_\Agt) \simulrel
\Out_{\calG'}((s_0,\{s_0\}),\sigma'_\Agt)$. And vice-versa: if
there is a Nash equilibrium $\sigma'_\Agt$ in $\calG'$ from
$(s_0,\{s_0\})$, then there is a Nash equilibrium $\sigma_\Agt$ in
$\calG$ from $s_0$ such that
$\Out_{\calG'}((s_0,\{s_0\}),\sigma'_\Agt) \simulrel^{-1}
\Out_{\calG}(s_0,\sigma_\Agt)$.
\end{lemma}
Note that, if $\Out_{\calG}(s_0,\sigma_\Agt) \simulrel
\Out_{\calG'}((s_0,\{s_0\}),\sigma'_\Agt)$, then
$\Out_{\calG}(s_0,\sigma_\Agt)$ satisfies the reachability objective
with target set $T$ \iff $\Out_{\calG'}((s_0,\{s_0\}),\sigma'_\Agt)$
satisfies the B\"uchi objective with target set $T' = \{(s,S) \mid S
\cap T \ne \varnothing\}$. From this strong correspondence between
$\calG$ and $\calG'$, we get that it is sufficient to look for Nash
equilibria in game $\calG'$.
\subsubsection{How to efficiently solve the suspect game of $\calG'$.}
In game $\calG'$, preference relations are
prefix-independent. Applying Remark~\ref{rem:prefix-independant}, the
preference relation in the suspect game is then also
prefix-independent, and the payoff of a play only depends on which
strongly-connected component the path ends in. We now give an
alternating algorithm which runs in polynomial time and solves the
game~$\calH(\calG',\pi')$, where $\pi'$ is an infinite path in
$\calG'$.
\begin{lemma}\label{lem:alternating-algo}
The winner of $\calH(\calG',\pi')$ can be decided by an alternating
algorithm which runs in time polynomial in the size of
$\calG$.
\end{lemma}
\begin{proof}
Let $C^A$ be the circuit defining the preference relation of player
$A$. Let $\rho = (s_i,S_i)_{i\ge 0}$ be a path in $\calG'$; the
sequence $(S_i)_{i \ge 0}$ is non-decreasing and converges to a
limit $S(\rho)$. We have $\payoff_A(\rho) = \One_{\{i \mid T_A^i
\cap S(\rho) \ne \varnothing\}}$. Therefore the winning condition
of $\Eve$ in $\calH(\calG',\pi')$ for a play $\rho$ only depends on
the limits~$\limitpi2(\rho)$ and~$S(\Sproj_1(\rho))$. It can be
described as a single B\"uchi condition with target set $T
= \{ ((s,S),P) \mid \forall A\in P.\ C^A[ v^A(S) , w^A ] \
\text{evaluates to \true} \}$ where $v^A(S) = \One_{\{i \mid T_A^i
\cap S \ne \varnothing\}}$ and $w^A = \payoff_A(\pi')$. We now
describe the algorithm.
Initially the current state is set to $((s_0,\{s_0\}),\Agt)$. We
also keep a list of the states which have been visited, initialised
with $\Occ \leftarrow \{ ((s_0,\{s_0\}),\Agt) \}$.
Then,
\begin{itemize}
\item if the current state is $((s,S),P)$, the algorithm
existentially guesses a move~$m_\Agt$ of \Eve and we set $t =
((s,S),P,m_\Agt)$;
\item otherwise if the current state is of the form
$((s,S),P,m_\Agt)$, it universally guesses a state $s'$ which
corresponds to a move of \Adam and we set $t = ((s',S\cup
\{s'\}),P\cap \Susp((s,s'),m_\Agt))$.
\end{itemize}
If $t$ was already seen (that is, if $t \in \Occ$), the algorithm
returns $\true$ when $t\in T$ and $\false$ when $t \notin T$,
otherwise the current state is set to $t$, and we add $t$ to the
list of visited states: $\Occ \leftarrow \Occ \cup \{t\}$, and we
repeat this step. Because we stop when the same state is seen, the
algorithm stops after at most $\ell+1$ steps, where $\ell$ is the
length of the longest acyclic path. Since the size of~$S$ can only
increase and the size of~$P$ only decrease, we~bound~$\ell$ with
$|\Stat|^2 \cdot |\Agt|$.
We now prove the correctness of the algorithm. First,
$\calH(\calG',\pi')$ is a turn-based B\"uchi game, which is a
special case of parity game. Parity games are known to be
determined with memoryless
strategies~\cite{mostowski1991games,emerson1991tree}, hence
$\calH(\calG',\pi')$ is determined with memoryless strategies.
If the algorithm returns \true, then there exists a strategy
$\sigma_\shortEve$ of \Eve such that for all strategies
$\sigma_\shortAdam$ of \Adam, any outcome $\rho$ in
$\Out(\sigma_\shortEve,\sigma_\shortAdam)$ is such that there exist
$i < j \leq \ell + 1$ with $\rho_i = \rho_j \in T$, all the $\rho_k$
with $k < j$ being pairwise distinct. We extend this strategy
$\sigma_\shortEve$ to a winning strategy $\sigma'_\shortEve$ for
\Eve. We do so by ignoring the loops we see in the history;
formally, we inductively define a reduction $r$ of histories by:
\begin{itemize}
\item $r(\varepsilon) = \varepsilon$;
\item if $((s,S),P)$ does not appear in $r(h)$ then $r(h \cdot
((s,S),P)) = r(h) \cdot ((s,S),P)$;
\item otherwise $r(h \cdot ((s,S),P)) = r(h)_{\le i}$ where $i$ is
the smallest index such that $r(h)_i = ((s,S),P)$.
\end{itemize}
We then define $\sigma'_\shortEve$ for any history $h$ by
$\sigma'_\shortEve(h) = \sigma_\shortEve(r(h))$.
We show by induction that if $h$ is a history compatible with
$\sigma'_\shortEve$ from $((s_0,\{s_0\}),\Agt)$, then $r(h)$ is
compatible with $\sigma_\shortEve$ from $((s_0,\{s_0\}),\Agt)$. This
is true when $h= ((s_0,\{s_0\}),\Agt)$; now, assuming it holds for
all histories of length $\le k$, we show it for histories of length
$k+1$. Let $h\cdot s$ be a history of length $k+1$ compatible with
$\sigma'_\shortEve$. By the induction hypothesis, $r(h)$ is
compatible with $\sigma_\shortEve$, and since $\sigma'_\shortEve
(h)= \sigma_\shortEve(r(h))$, $r(h)\cdot s$ is compatible with
$\sigma_\shortEve$. If $r(h\cdot s) = r(h)\cdot s$ then $r(h\cdot
s)$ is compatible with $\sigma_\shortEve$. Otherwise $r(h\cdot s)$
is a prefix of $r(h)$, which is compatible with
$\sigma_\shortEve$; hence so is $r(h\cdot s)$.
We now show that the strategy $\sigma'_\shortEve$ we defined is
winning. Let $\rho$ be a possible outcome of
$\sigma'_\shortEve$, and let $i<j$ be the smallest indices such that
$\rho_i$ and~$\rho_j$ are of the form $((s,S(\rho)),\limitpi2(\rho))$
for some $s\in\Stat$, with $\rho_i = \rho_j$. Because there is no
repetition between $i$ and
$j-1$, we have $r(\rho_{\le j-1}) = r(\rho_{\le i -1}) \rho_{i} \cdots
\rho_{j-1}$, so that $\sigma_\shortEve(r(\rho_{\le i -1})
\rho_{i} \cdots \rho_{j-1}) = \sigma'_\shortEve(\rho_{\le j-1})$. From
this move, $\rho_j$ is a possible next state, so $r(\rho_{\le i -
1}) \rho_{i}\cdots \rho_{j}$ is a possible outcome of
$\sigma_\shortEve$. As $\rho_{i} = \rho_{j}$ and all other states
are different, by the hypothesis on $\sigma_\shortEve$ we have that
$\rho_j \in T$. This shows that $\rho$ ultimately loops in states
of $T$ and therefore $\rho$ is a winning run for \Eve.
Conversely, if \Eve has a winning strategy, she has a memoryless
one~$\sigma_\shortEve$ since this is a B\"uchi game.
We can see this strategy as an oracle for the various
existential choices in the algorithm.
Consider some universal choices in the algorithm; they correspond to
a strategy $\sigma_\shortAdam$ for \Adam. The branch corresponding to
$(\sigma_\shortEve,\sigma_\shortAdam)$ ends the first time we
encounter a loop; we write this history $h\cdot h'$ with $\last(h')
= \last(h)$. Since the strategy $\sigma_\shortEve$ is memoryless,
$h\cdot h'^\omega$ is a possible outcome. Since it is winning,
$\last(h')$ is in $T$, and therefore the branch is accepting. This
being true for all the branches given by the choices of
$\sigma_\shortEve$, the algorithm answers \true.
\end{proof}
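The exploration performed by the algorithm can be illustrated by a small determinised Python sketch (exponential-time, unlike the alternating version, and with our own encoding of a turn-based game as a successor map). It stops at the first repeated state and declares the branch winning for \Eve\ \iff that state is in the target; as the proof above shows, this is sound under the structural property of $\calH(\calG',\pi')$ that all states on a cycle agree on membership in the target.

```python
def eve_wins(succ, owner, target, state, seen=()):
    """Decide a turn-based Buechi game by stopping at the first repeated
    state. Sound only when all states on any cycle agree on membership
    in `target` (as in H(G', pi'), where membership depends only on the
    (S, P)-component, which is constant along a cycle)."""
    if state in seen:
        return state in target
    seen = seen + (state,)
    results = (eve_wins(succ, owner, target, t, seen) for t in succ[state])
    return any(results) if owner[state] == "Eve" else all(results)

succ = {"s": ["g", "b"], "g": ["g"], "b": ["b"]}
# Eve can pick the self-loop on the target state g; Adam would avoid it.
assert eve_wins(succ, {"s": "Eve", "g": "Eve", "b": "Eve"}, {"g"}, "s")
assert not eve_wins(succ, {"s": "Adam", "g": "Eve", "b": "Eve"}, {"g"}, "s")
```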
\subsubsection{Proof of the \PSPACE\ upper bounds in
Proposition~\ref{prop:reach-pspace}.}
We describe a \PSPACE~algorithm for solving the constrained NE existence
problem. The algorithm proceeds by trying all plays~$\pi$ in $\calG$
of the form described in Proposition~\ref{lem:play-length}. This
corresponds to a (unique) play $\pi'$ in $\calG'$. We check that
$\pi'$ has a payoff satisfying the constraints, and that there is a
path $\rho$ in $\calH(\calG',\pi')$, whose projection is $\pi'$, along
which \Adam obeys \Eve, and which stays in the winning region of \Eve.
This last step is done by using the algorithm of
Lemma~\ref{lem:alternating-algo} on each state $\rho$ goes through.
All these conditions are satisfied exactly when the conditions of
Theorem~\ref{thm:eq-win} are satisfied, in which case there is a Nash
equilibrium within the given bounds.
The \PSPACE\ upper bound for the value problem can be inferred from
Proposition~\ref{lem:link-value-constr}.
\subsubsection{Proof of \PSPACE-hardness for the value problem.}
We show \PSPACE-hardness of the value problem when the preorder has
$\One$ as a unique maximal element.
We reduce \QSAT to the value problem, where \QSAT is the
validity problem for quantified Boolean formulas. For an instance
of \QSAT, we assume without loss of generality that the Boolean
formula is a conjunction of disjunctive clauses\footnote{With the
convention that an empty disjunction is equivalent to~$\bot$.}.
Let $\phi = Q_1 x_1 \dots Q_p x_p.\ \phi'$, where $Q_i \in \{ \forall,
\exists \}$ and $\phi'= c_1 \land \dots \land c_n$ with $c_i =
\bigvee_{1\leq j\leq 3} \ell_{i,j}$ and $\ell_{i,j} \in \{ x_k ,
\lnot x_k \mid 1 \le k \le p\}\cup\{\top,\bot\}$. We~define a
turn-based game~$\calG(\phi)$ in the following way (illustrated in
Example~\ref{ex:reduction} below). There~is one state for each
quantifier, one for each literal, and two additional states~$\top$
and~$\bot$:
\[
\Stat = \{ Q_k \mid 1 \le k \le p\} \cup \{ x_k , \lnot x_k \mid
1 \le k\le p \} \cup \{ \top,\bot\}.
\]
The game involves two players, $A$~and~$B$. The~states~$\top$
and~$\bot$,
the existential-quantifier states and the literal states are all controlled
by~$A$, while the universal-quantifier states belong to player~$B$. For
all~$1\leq k\leq p$, the state corresponding to quantifier~$Q_k$ has two
outgoing transitions, going to $x_k$ and~$\lnot x_k$ respectively. Those two
literal states only have one transition to the next quantifier
state~$Q_{k+1}$, or to the final state~$\top$ if $k=p$. Finally,
states~$\top$ and~$\bot$ carry a self-loop
(notice that~$\bot$ is not reachable, while $\top$ will always be
visited).
Player~$A$ has one target set for each clause: if $c_i =
\bigvee_{1\leq j\leq 3} \ell_{i,j}$ then $T^A_i = \{\ell_{i,j} \mid
1\leq j\leq 3\}$. The $i$-th objective~$\Omega_i^A$ is to reach
target set~$T^A_i$. The following result is then straightforward:
\begin{lemma}\label{lem:formula-valid}
Formula $\phi$ is valid \iff player~$A$ has a strategy whose
outcomes from state~$Q_1$ all visit each target set~$T^A_i$.
\end{lemma}
\begin{proof}
We begin with the direct implication, by induction on~$p$. For the
base case, $\phi=Q_1 x_1.\ \bigwedge_i c_i$ where each $c_i$ only
involves~$x_1$, $\non x_1$, $\top$ and~$\bot$. We~consider two cases:
\begin{itemize}
\item $Q_1=\exists$: since we assume~$\phi$ to be true, there must
exist a value for~$x_1$ which makes all clauses true. If this
value is~$\top$, consider the strategy~$\sigma_\top$ of Player~$A$
such that $\sigma_\top(Q_1)=x_1$. Then each clause~$c_i$ must
have~$x_1$ as one of its literals, so that the
objective~$\Omega^A_i$ is satisfied with this strategy. The same
argument applies if the value for $x_1$ were~$\bot$.
\item $Q_1=\forall$: in that case, Player~$A$ has only one strategy.
For both values of~$x_1$, all the clauses are
satisfied. It~follows that each clause~$c_i$ must contain~$\top$,
or both $x_1$ and~$\non x_1$, so that objective~$\Omega^A_i$ is
satisfied for any strategy of player~$B$.
\end{itemize}
\noindent Now, assume that the result holds for all \QSAT instances with at
most $p-1$ quantifiers.
\begin{itemize}
\item if $Q_1 = \exists$, then one of $Q_2x_2\ldots Q_p x_p
\phi'[x_1 \leftarrow \top]$ and $Q_2x_2\ldots Q_p x_p\phi'[x_1
\leftarrow \bot]$ is valid. We~handle the first case, the second
one being symmetric. For a literal~$\lambda_k\in\{x_k,\non x_k\}$,
we~write
$T_{\lambda_k}$ for the set of target sets~$T_i^A$ such that the
clause~$c_i$ contains the literal~$\lambda_k$.
Assume $Q_2x_2\ldots Q_p x_p.\ \phi'[x_1 \leftarrow \top]$ is valid;
by~induction we know that there exists a strategy~$\sigma^{x_1}$
such that all the targets not in~$T_{x_1}$ are visited along any
outcome from state~$Q_2$ (because $\calG(Q_2x_2\ldots Q_p x_p.\
\phi'[x_1 \leftarrow \top])$ is the same game as~$\calG(\phi)$,
but with~$Q_2$ as the initial state, and with the targets
in~$T_{x_1}$ containing~$\top$ in place of~$x_1$). We~define
the strategy~$\sigma$ by $\sigma(Q_1)=x_1$ and $\sigma(Q_1 \cdot
x_1 \cdot \pathg) = \sigma^{x_1}(\pathg)$. An~outcome of~$\sigma$
will necessarily visit~$x_1$, hence visiting all the targets
in~$T_{x_1}$; because $\sigma$ follows $\sigma^{x_1}$, all the
objectives not in~$T_{x_1}$ are~met as~well.
\item if $Q_1 = \forall$, then $Q_2x_2\ldots Q_p x_p \phi'[x_1
\leftarrow \top]$ is valid. Using the induction hypothesis we
know that from~$Q_2$ there is a strategy $\sigma^{x_1}$ that
enforces a visit to all the targets not in~$T_{x_1}$. Similarly,
$Q_2x_2\ldots Q_p x_p.\ \phi'[x_1 \leftarrow \bot]$ is valid, and
there is a strategy~$\sigma^{\non x_1}$ that visits all the
objectives not in~$T_{\non x_1}$. We~define a new
strategy~$\sigma$~as follows: $\sigma(Q_1 \cdot x_1 \cdot \pathg)
= \sigma^{x_1}(\pathg)$ and $\sigma(Q_1 \cdot \non x_1 \cdot
\pathg) = \sigma^{\non x_1}(\pathg)$. Consider an outcome
of~$\sigma$: if~it visits~$x_1$, then all the objectives
in~$T_{x_1}$ are visited, and because the path
follows~$\sigma^{x_1}$, the objectives not in~$T_{x_1}$ are also
visited. The other case is similar.
\end{itemize}
\medskip\noindent We now turn to the converse implication. Assume the
formula is not valid. We prove that for any strategy~$\sigma$ of
player~$A$, there is an outcome~$\pathg$ of this strategy such that
some objective~$\Omega^A_i$ is not satisfied. We~again proceed by
induction, beginning with the case where~$p=1$.
\begin{itemize}
\item if $Q_1 = \exists$, then both $\phi'[x_1 \leftarrow \top]$ and
$\phi'[x_1 \leftarrow \bot]$ are false.
Hence, whichever literal state player~$A$ moves to from~$Q_1$, some
clause~$c_i$ evaluates to false under the corresponding valuation;
its target set~$T^A_i$ contains neither the chosen literal
nor~$\top$, and is therefore not visited along the resulting run.
\item if $Q_1=\forall$, then one of $\phi'[x_1 \leftarrow \top]$
and $\phi'[x_1 \leftarrow \bot]$ is false.
In~the former case, some clause~$c_i$ contains neither $x_1$
nor~$\top$, \ie, its literals are among~$\non x_1$ and~$\bot$.
Then along the run $Q_1 \cdot x_1 \cdot \top^\omega$,
the target set~$T^A_i$ is not
visited. The other case is similar.
\end{itemize}
Now, assuming that the result holds for formulas with~$p-1$
quantifiers, we prove the result with~$p$ quantifiers.
\begin{itemize}
\item if $Q_1 = \exists$, then both $Q_2x_2\ldots Q_p x_p.\ \phi'[x_1
\leftarrow \top]$ and $Q_2x_2\ldots Q_p x_p.\ \phi'[x_1 \leftarrow
\bot]$ are false. Using the induction
hypothesis, for any strategy of player~$A$, some run from~$Q_2$
fails to visit an objective not in $T_{x_1} \cup T_{\non x_1}$.
Hence no strategy from~$Q_1$ can
enforce a visit to all the objectives.
\item if $Q_1 = \forall$, then one of $Q_2x_2\ldots Q_p x_p \phi'[x_1
\leftarrow \top]$ and $Q_2x_2\ldots Q_p x_p \phi'[x_1 \leftarrow
\bot]$ is false.
We handle the first case, the second one being symmetric.
By the induction hypothesis, for any strategy~$\sigma$ of player~$A$
in the game $\calG(\phi'[x_1 \leftarrow \top])$, one of the
outcomes fails to visit some objective not in~$T_{x_1}$. Then
along the path $\pathg = Q_1 \cdot x_1 \cdot \pathg'$, some
objective not in $T_{x_1}$ is not visited.\forceqed
\end{itemize}
\end{proof}
\noindent We can directly conclude from this lemma that the value of the game for $A$ is
$\One$ (the~unique maximal payoff for our preorder) \iff the formula~$\phi$ is
valid, which proves that the former problem is \PSPACE-hard.
\begin{example}
\label{ex:reduction}
As an example of the construction, let us consider the formula
\begin{equation}\label{eq:formula1}
\phi = \forall x_1.\ \exists
x_2.\ \forall x_3.\ \exists x_4.\ (x_1 \lor \lnot x_2 \lor \lnot x_3)
\land (x_1 \lor x_2 \lor x_4) \land (\lnot x_4 \lor\bot \lor\bot)\,.
\end{equation}
The target sets for player~$A$ are given by $T^A_1 = \{x_1, \lnot
x_2, \lnot x_3\}$, $T^A_2 = \{ x_1, x_2, x_4\}$, and $T^A_3 =
\{\lnot x_4, \bot\}$. The structure of the game is represented in
Figure~\ref{fig:reach-game}. $B$ has a strategy that falsifies one
of the clauses whatever $A$ does, which means that the formula is
not valid.
\end{example}
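By Lemma~\ref{lem:formula-valid}, the value for player~$A$ is $\One$ \iff the formula is valid, so the example can be double-checked with a straightforward QBF evaluator. The Python sketch below uses our own encoding: quantifiers as a list of pairs, literals as signed integers, with $0$ standing for the constant~$\bot$.

```python
def valid(quants, clauses, val=None):
    """Evaluate a quantified Boolean formula in prenex CNF.
    quants: list of ('A'|'E', variable); a literal is a signed integer,
    and 0 stands for the constant falsum (never true)."""
    val = dict(val or {})
    if not quants:
        return all(any(lit != 0 and (lit > 0) == val[abs(lit)]
                       for lit in c)
                   for c in clauses)
    (q, x), rest = quants[0], quants[1:]
    branches = [valid(rest, clauses, {**val, x: b}) for b in (True, False)]
    return all(branches) if q == 'A' else any(branches)

# Formula (eq:formula1): forall x1, exists x2, forall x3, exists x4, ...
quants = [('A', 1), ('E', 2), ('A', 3), ('E', 4)]
clauses = [[1, -2, -3], [1, 2, 4], [-4, 0, 0]]
assert valid(quants, clauses) is False   # the formula is not valid
```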
\begin{figure}[!ht]
\centering{
\begin{tikzpicture}[thick]
\tikzstyle{carre}=[draw,minimum width=6mm,minimum height=6mm,inner sep=0pt,fill=black!10]
\tikzstyle{rond}=[draw,minimum width=7mm,circle,inner sep=0pt,fill=black!10]
\draw (-3.5,1.2) node [rond] {} node [right=.5cm] {player $A$};
\draw (-3.5,.3) node [carre] {} node [right=.5cm] {player $B$};
\draw (0,0) node [carre] (A1) {$\forall_1$};
\draw (1,1) node [rond] (B1) {$x_1$};
\draw (1,-1) node [rond] (C1) {$\lnot x_1$};
\draw (2,0) node [rond] (A2) {$\exists_2$};
\draw (3,1) node [rond](B2){$x_2$};
\draw (3,-1) node [rond](C2){$\lnot x_2$};
\draw (4,0) node [carre] (A3) {$\forall_3$};
\draw (5,1) node [rond] (B3) {$x_3$};
\draw (5,-1) node [rond] (C3){$\lnot x_3$};
\draw (6,0) node [rond] (A4) {$\exists_4$};
\draw (7,1) node [rond] (B4) {$x_4$};
\draw (7,-1) node [rond] (C4){$\lnot x_4$};
\draw (8,0) node [rond] (A5){$\top$};
\draw [-latex'] (A1) -- (B1);
\draw [-latex'] (A1) -- (C1);
\draw [-latex'] (B1) -- (A2);
\draw [-latex'] (C1) -- (A2);
\draw [-latex'] (A2) -- (B2);
\draw [-latex'] (A2) -- (C2);
\draw [-latex'] (B2) -- (A3);
\draw [-latex'] (C2) -- (A3);
\draw [-latex'] (A3) -- (B3);
\draw [-latex'] (A3) -- (C3);
\draw [-latex'] (B3) -- (A4);
\draw [-latex'] (C3) -- (A4);
\draw [-latex'] (A4) -- (B4);
\draw [-latex'] (A4) -- (C4);
\draw [-latex'] (B4) -- (A5);
\draw [-latex'] (C4) -- (A5);
\draw[-latex'] (A5) .. controls +(0.8,-0.5) and +(0.8,0.5) .. (A5);
\end{tikzpicture}
}
\caption{Reachability game associated with the
formula~\eqref{eq:formula1}}
\label{fig:reach-game}
\end{figure}
\subsubsection{Proof of \PSPACE-hardness for the (constrained) NE existence problem.}
We will now prove \PSPACE-hardness for the NE existence problem, under
the conditions specified in the statement of
Proposition~\ref{prop:reach-pspace}, using
Proposition~\ref{lem:link-value-exist}. We specify the new preference
relation for the construction of Section~\ref{sec:link-value-exist}.
We give $B$ one objective, which is to reach $s_1$ ($s_1$ is the sink
state introduced by the construction). In terms of preferences for
$A$, going to $s_1$ should be just below visiting all targets. For this
we use the statement in Proposition~\ref{prop:reach-pspace}, that
there is $v$ such that for every $v'$, $v' \ne \One \Leftrightarrow v'
\preorder v$, and add $s_1$ as a target to each $T^A_i$ such that $v_i
= 1$. This defines a preference relation equivalent to the one in the
game constructed in Section~\ref{sec:link-value-exist}, therefore we
deduce with Proposition~\ref{lem:link-value-exist} that the NE existence
problem is \PSPACE-hard.
\subsubsection{Applications}
\label{subsec:boo-reach}
We now notice that these hardness results apply to the conjunction,
counting and lexicographic preorders (thanks to the fact that $\One$ is
the unique maximal element for these preorders, and to
Lemma~\ref{lem:second-maximal}).
As conjunction (for instance) can easily be encoded using a
(monotonic) Boolean circuit in polynomial time, the hardness results
are also valid if the preorder is given by a (monotonic) Boolean
circuit. Finally the subset preorder can be expressed as a
polynomial-size Boolean circuit and has a maximal element. We
therefore get the following summary of results:
\begin{corollary}\hfill
\begin{itemize}
\item For finite games with ordered reachability objectives, with either the
conjunction, the counting or the lexicographic preorder, the value
problem, the NE existence problem and the constrained NE existence problem
are \PSPACE-complete.
\item For finite games with ordered reachability objectives, where the
preorders are given by (monotonic) Boolean circuits, the value problem,
the NE existence problem and the constrained NE existence problem are
\PSPACE-complete.
\item For finite games with ordered reachability objectives, with
the subset preorder, the value problem is \PSPACE-complete.
\end{itemize}
\end{corollary}
\noindent On~the other hand, the disjunction and maximise preorders do not have
a unique maximal element, so the hardness result does not carry over
to these preorders. In~the same way, for the subset preorder, there
is no~$v$ such that $v' \ne \One \Leftrightarrow v' \preorder v$, so
the hardness result does not apply. We prove later (in
Section~\ref{reach-simple}) that in these special cases, the
complexity is actually lower.
\subsection{Simple cases}
\label{reach-simple}
As for ordered B\"uchi objectives, for some ordered reachability
objectives, the preference relation can be (efficiently) (co-)reduced
to a single reachability objective. We do not give the formal
definitions; they can easily be inferred from those for B\"uchi
objectives on page~\pageref{subsec:reducible}.
\begin{proposition}\label{prop:simple-reach-p}\label{prop:reach-max-np}\hfill
\begin{itemize}
\item For finite games with ordered reachability objectives which
are reducible to single reachability objectives and in which the
preorders are non-trivial, the value problem is \P-complete.
\item For finite games with ordered reachability objectives which
are co-reducible to single reachability objectives, and in
which the preorders are non-trivial, the NE existence problem and the
constrained NE
existence problem are \NP-complete.
\end{itemize}
\end{proposition}
\begin{proof}
Since \P-hardness (resp. \NP-hardness) already holds for the value
(resp. NE existence) problem with a single reachability objective
(see~\cite[Sect.~2.5.1]{GTW02}), we only focus on the upper bounds.
We begin with the value problem: given a payoff vector~$u$ for
player~$A$, we~build the new target set~$\widehat T$ in polynomial
time, and then use a classical algorithm for deciding whether
$A$~has a winning strategy (see \cite[Sect.~2.5.1]{GTW02}). If
she~does, then she can secure payoff~$u$.
Consider now the constrained NE existence problem, and assume that the
preference relation for each player~$A$ is given by target
sets~$(T_i^A)_{1\leq i\leq n_A}$. The \NP-algorithm consists in
guessing the payoff vector~$(v_A)_{A\in\Agt}$ and an ultimately
periodic play~$\rho = \pi \cdot \tau^\omega$ with $|\pi|,|\tau| \le
|\Stat|^2$, which, for each~$A$, visits $T_i^A$ \iff
$v^A_i = 1$. We then co-reduce the payoff to a new target
set~$\widehat T^A(v^A)$ for each player~$A$.
The run~$\rho$ is the outcome of a Nash equilibrium with
payoff~$(v_A)_{A \in \Agt}$ for the original preference relation
\iff $\rho$~is the outcome of a Nash equilibrium with payoff~$0$
with the single reachability objective~$\widehat T^A(v^A)$ for
each~$A\in\Agt$. Indeed, in~both cases, this is equivalent to the
property that no player~$A$ can enforce a payoff greater than~$v^A$.
Applying the algorithm presented in
Section~\ref{subsec:reachability}, this condition can be checked in
polynomial time.
\end{proof}
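Both ingredients of the proof above can be made concrete. The winning-region computation for a single reachability objective is the classical attractor construction, and the \NP\ verification step only needs the payoff vector of the guessed lasso $\pi \cdot \tau^\omega$. The following sketch uses our own notation (\texttt{owner} maps each state to the player controlling it, \texttt{succ} gives the successors in a non-blocking turn-based arena) and is illustrative only.

```python
def attractor(owner, succ, target):
    """States from which player 'A' can force a visit to `target` in a
    turn-based game.  owner: state -> 'A' or 'B'; succ: state -> list of
    successors (assumed non-empty for every state)."""
    win = set(target)
    changed = True
    while changed:
        changed = False
        for s in owner:
            if s in win:
                continue
            if owner[s] == 'A':
                good = any(t in win for t in succ[s])  # A picks one good move
            else:
                good = all(t in win for t in succ[s])  # every B move is good
            if good:
                win.add(s)
                changed = True
    return win

def lasso_payoff(prefix, cycle, targets):
    """Payoff vector of the ultimately periodic play prefix.cycle^omega for
    reachability objectives: component i is 1 iff target set i is visited.
    Only the finite set of states occurring in prefix or cycle matters."""
    visited = set(prefix) | set(cycle)
    return tuple(1 if visited & set(T) else 0 for T in targets)
```

The fixed-point loop above is the naive quadratic version of the attractor computation; a linear-time version with predecessor counters yields the \P\ upper bound.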
We now see to which ordered objectives this result applies. It is
not difficult to realise that the same transformations as those made
in the proof of Lemma~\ref{lemma:examples} can be made as well for
reachability objectives. We therefore get the following lemma, from
which we get the remaining results in Table~\ref{table-reach}.
\begin{lemma}
Ordered reachability objectives with disjunction or maximise
preorders are reducible to single reachability objectives. Ordered
reachability objectives with disjunction, maximise or subset
preorders are co-reducible to single reachability objectives.
\end{lemma}
We conclude by stating the following corollary:
\begin{corollary}\hfill
\begin{itemize}
\item For finite games with ordered reachability objectives, with
either the disjunction or the maximise preorder, the value problem
is \P-complete.
\item For finite games with ordered reachability objectives, with either the
disjunction, the maximise or the subset preorder, the NE existence problem
and the constrained NE existence problem are \NP-complete.
\end{itemize}
\end{corollary}
\section{Conclusion}
\paragraph{Summary and impact of the results}
Concurrent games are a natural class of games, extending classical
turn-based games with more complex interactions. We have developed a
complete methodology, involving new techniques, for computing pure
Nash equilibria in this class of games. We were able to characterise
the complexity of finding Nash equilibria (possibly with constraints
on the payoff) for simple qualitative objectives first
(Section~\ref{sec:single}), and then for semi-quantitative objectives
(Sections~\ref{sec:buchi} and~\ref{sec:reach}). We would like to point
out that the algorithm for B\"uchi objectives with maximise preorder
(see Section~\ref{subsec:reducible}) has been implemented in the tool
Praline\footnote{Available on
\url{http://www.lsv.ens-cachan.fr/Software/praline/}}~\cite{brenguier13b}.
We believe the methodology we have developed in this paper can be used
in many other contexts, and that the suspect game is a very powerful
tool for analysing various properties of multi-agent
systems. Indeed, the correspondence between pure Nash equilibria in
the original game and winning strategies in the suspect game holds
with no assumption on the structure of the game. In particular it can
be applied to games given as pushdown systems, counter systems, \etc
It also makes no assumption on the preference relations; however, the
resulting winning condition in the suspect game can become very
complex if the preference relations are complex. The remaining
question is then purely algorithmic: we have to solve a two-player
turn-based game on a potentially complex arena (if the original game
structure is complex) with a potentially complex winning condition (if
the preference relations are complex).
The suspect game construction can also be adapted to compute many
other kinds of equilibria; this is for instance applied to robust
equilibria in~\cite{brenguier13}. We believe this can be used in many
other contexts.
We have also developed in this paper another tool that might have its
own interest and be useful in some other contexts: the game-simulation
(see Section~\ref{sec:simulation}). We used this tool several times
(for handling objectives given by deterministic Rabin automata, but
also for handling ordered reachability objectives). This tool can also
be used to handle more complex game structures, like we did
in~\cite{BBMU11} for timed games, when we originally introduced this
notion. In particular, the construction done in~\cite{BBMU11} shows
that we can compute Nash equilibria for timed games with all kinds of
objectives studied in the current paper.
Our future research will include extending the use of the suspect
game abstraction to other families of games, and pushing it further to
also handle truly quantitative objectives.
\paragraph{Discussion on the various hypotheses made in this paper}
We have assumed strategies are pure, and game structures are
deterministic. This is indeed a restriction, and allowing for
randomised strategies would be of great interest. Note however that
pure Nash equilibria are resistant to malicious randomised players
(that is, to deviations by randomised strategies). There is no
obvious way to modify the suspect game construction to handle either
stochastic game structures or randomised strategies. Indeed, given a
history, it is hard to detect strategy deviations if they can be
randomised, and therefore the set of suspects is hard to compute (and
actually even to define). This difficulty is not surprising, since the
existence of a Nash equilibrium in pure or randomised strategies is
undecidable for stochastic games with reachability or B\"uchi
objectives~\cite{UW11}, and the existence of a Nash equilibrium in
randomised strategies is undecidable for deterministic
games~\cite{UW11a}. However, we would like to exhibit subclasses of
stochastic games for which we can synthesize randomised Nash
equilibria; this is part of our research programme.
We have assumed that strategies are based on histories that only
record states which have been visited, and not actions which have been
played. We believe this is more relevant in the context of distributed
systems, where only the effect of an action might be visible to other
players. Furthermore, this framework is more general than the one
where every player could see the actions of the other players, since
the latter can easily be encoded in the former. In the context of
complete information (precise view of the actions), computing Nash
equilibria is rather easy since, once a player has deviated from the
equilibrium, all the other players know it and can make a coalition
against that player. To illustrate that simplification, we only
mention that the constrained NE existence problem falls in \NP for finite
games with single parity objectives (we can obtain this bound based on
the suspect game construction), if we assume that strategies can
observe actions, whereas the problem is $\P^\NP_\parallel$-hard if
strategies do not observe the actions.
Finally we have chosen the framework of concurrent games, and not that of
turn-based games as is often the case in the literature. Concurrent games
naturally appear when studying timed games~\cite{BBMU11} (the~semantics of a
timed game is that of a concurrent game, and the abstraction based on regions
that is correct for timed games is concurrent), and in the context of
distributed systems, concurrent moves are also very natural. In~fact
turn-based games are even a simpler case of concurrent games when we assume
strategies can see the actions. Of~course, the~suspect game construction
applies to turn-based games, but becomes quite simple (as~is the case if
strategies do see actions), since the set of suspect players is either the
set~$\Agt$ of all players (this is the case as long as no player has deviated from the
equilibrium), or reduces to a singleton, as~soon as a player has deviated.
To~illustrate this simplification, we notice that in the turn-based finite
games, the constrained NE existence problem is \NP-complete for single parity
objectives~\cite{ummels08} (it is $\P^\NP_\parallel$-complete in finite
concurrent games).
\bigskip\noindent \textbf{Acknowledgment.}
We would like to thank the reviewers for their numerous comments and remarks,
which helped us improve the presentation of this paper.
\bibliographystyle{myplain}
\section{Introduction} \label{sec:int}
Over the past few decades, astronomers have detected the presence of magnetic fields in galaxies at all spatial scales. These studies have mainly been performed using optical and radio observations \citep[][for reviews]{kron1994,ZH1997,BG2004,beck2015}. Radio observations measure the polarization of synchrotron radiation from relativistic electrons; this radiation is sensitive to the cosmic ray electron population, which does not closely trace the gas mass. Studies of interstellar polarization at optical wavelengths can reveal the magnetic field geometry as a result of magnetically-aligned dust grains by radiative alignment torques \citep{ALV2015}. However, the optical polarization measurements suffer from contamination by highly polarized scattered light. Linear polarization at 6.3 and 21.2 cm using the Effelsberg 100-m telescope and optical polarization \citep{scar87} have been compared in the face-on spiral galaxy M~51 \citep{BKW1987,flecher2011}. The optical and radio linear polarization show a similar magnetic field morphology within the eastern and southern quadrants, but with significant variations of the magnetic field direction, up to 60$^{\circ}$, in the western quadrant. A study of the face-on spiral galaxy NGC 6946 also shows similarities between the optical and radio polarization \citep{FBN1998}. Further observations of M~51 using near-infrared (NIR) polarization only registered upper-limit polarization across the galaxy, which ruled out dichroic absorption as the main polarization mechanism \citep{pave12}. The scattering cross section of typical interstellar dust declines much faster $(\sim \lambda^{-4})$ between $0.55$ and $1.65~\micron$ than its absorption \citep[$\sim \lambda^{-1}$,][]{jowh15}. It is likely that the optical polarization measured in the previous works \citep[i.e.][]{scar91} was due to scattering, rather than extinction by dust grains aligned with the interstellar magnetic field.
For M~51, the expected dichroic absorptive polarization at H-band, based on the measured optical polarization, is $\sim0.4$\%, but only an upper limit of $0.05\%$ was measured at H-band \citep{pave12}. Had the polarization been due to magnetically aligned dust grains, the dichroic absorptive polarization at H-band should have been higher than measured. Observations of the magnetic field geometry that are more sensitive to the denser gas and dust are needed.
Our understanding of how spiral arms form and their role in galaxy evolution is still incomplete. The most widely accepted theoretical model for spiral arms is the density-wave theory \citep{lindblad1960,LS1964,shu2016}. This theory posits that spiral structure can be described as a superposition of waves of enhanced stellar density with constant pitch angles (the angle between the tangent of the wave and circles around the galactic center) and constant pattern speed \citep{Athanassoula1984}. This theory predicts that stars form in the arms as gas moves into the wave and is compressed by its gravitational potential. Under this scheme, the spatial displacement of the spiral arms should be different for different tracers of star formation (e.g., molecular clouds, HII regions, and newly formed stars) because they appear at different phases of the wave. The spiral arms at optical/NIR wavelengths are expected to trace already born stars, while far-IR (FIR) wavelengths will trace ongoing star formation.
NGC 1068 is the nearest (D$_{L} = 13.5$ Mpc, 1\arcsec = 65~pc) grand-design spiral galaxy with both a bright active galactic nucleus (AGN) and a luminous circumnuclear starburst. This galaxy is classified as a Seyfert 2, where the active nucleus is obscured by a dusty structure, and the host is an Sb type. Associated with the AGN is a narrow-line region, i.e. ionization cones, of $\sim20\arcsec$ ($\sim1.3$~kpc) in diameter at a position angle of $\sim40^{\circ}$ East of North \citep[i.e.][]{BH1985} with the northern region protruding toward us out of the plane of the galaxy. \citet{Schinnerer2000} have presented CO observations of the central region of NGC 1068 and discussed the relationship between the stellar and gas dynamics in the galaxy. At radii $>15\arcsec$ ($>0.975$ kpc), the kinematic axis of the galaxy is approximately east-west, and at very large radii the eccentricities of the galaxy's faintest contours are consistent with the $40^{\circ}$ inclination suggested by HI data. However, there is a bright oval structure of $\sim180\arcsec$ ($\sim11.7$ kpc) diameter with its long axis perpendicular to the kinematic axis that probably corresponds to a large-scale stellar bar \citep[fig. 6 of][]{Schinnerer2000}. The inner Lindblad resonance (ILR) of this bar has a de-projected radius of $\sim18\arcsec$ ($\sim1.17$ kpc), near the location of a compact spiral or ring-like structure in $^{12}$CO, consistent with theoretical predictions that gas can be transported inward along bars and accumulate at the ILR. NIR observations in the K-band have detected the presence of a $30\arcsec$ ($1.95$ kpc) diameter inner-bar \citep{scoville1988,thronson1989}. \citet{TD1988} have suggested that for the case of the NGC~1068 outer-bar the ILR is actually split into two resonances (``inner-inner'' or iILR and ``outer-inner'' or oILR) and that the CO spiral pattern is produced by a spiral density wave between the iILR and oILR that is driven by the inner-bar.
We will refer to this annulus as the ``starburst ring''. In addition to the CO in the starburst ring, there are CO clumps extending down into the northeastern branch of the inner-bar (but not in the southwestern branch) and a compact CO ring at radii $\sim1\arcsec$ ($\sim 65$ pc) that may be gas accumulating at the ILR of the inner-bar.
There is ample evidence for an exceptionally high rate of star formation in the central region of NGC 1068, possibly the result of a recent minor merger \citep{TYT2017}. The optical surface brightness of the galactic disk at radii $\le 25\arcsec$ ($\le1.63$ kpc) (henceforth the ``starburst disk'') is among the brightest of any galaxy in the local universe \citep{KW1978,weedman1985,IOK1987}. Observations in the FIR have shown that the total luminosity of the starburst disk is $> 10^{11} L_{\odot}$ \citep{TH1980,telesco1984}. Radio observations at 1.465 GHz \citep{WU1982} show that the same region is extremely rich in the type of non-thermal synchrotron emission produced in supernova explosions of massive young stars. On the basis of 10.8 $\mu$m\ and NIR data, \citet{TD1988} and \citet{thronson1989}, respectively, have argued that almost all the actual star formation takes place within the starburst ring and that the most intense activity occurs near the outer ends of the inner-bar.
These properties have made NGC 1068 a suitable object for spatially resolved polarimetry. Using optical polarimetry of NGC~1068, \cite{scar91} found a spiral pattern that was interpreted as delineating the magnetic field geometry in the spiral arms of the galaxy. Given the effects of scattering on optical polarimetry measurements and that the host galaxy of NGC 1068 has not been observed using radio polarimetry, this association has to be questioned.
We have performed FIR polarimetric observations to image the central $2\arcmin \times 2\arcmin$ ($7.8 \times 7.8$ kpc$^{2}$) of NGC~1068 at wavelengths of 53 and 89 $\mu$m. We discuss the 53 $\mu$m\ flux from the AGN in a previous paper \citep{LR2018a}. In this paper, we present our polarimetric results, including an 89 $\mu$m\ image that for the first time reveals galactic spiral structure by means of thermal emission from magnetically aligned dust grains. At FIR wavelengths, scattering and Faraday rotation are not a factor at these scales. The dominant emission is from warm dust, which more closely samples the total gas column density than relativistic electrons producing the synchrotron emission. The paper is organized as follows: Section \ref{sec:obs} describes the observations, data reduction, and observational results. In Section \ref{sec:Bfield}, we fit our polarimetry data to a spiral galactic magnetic field model, which is then analyzed and discussed in Section \ref{sec:DIS}. In Section \ref{sec:CON} we present our conclusions.
\section{Observations and Data Reduction} \label{sec:obs}
\begin{figure*}[ht!]
\includegraphics[angle=0,scale=0.6]{fig1}
\caption{\textit{Left:} 89 $\mu$m\ total flux (color scale) with polarization vectors (white vectors) with $P/\sigma_{P} > 3.0$ and rotated by $90^{\circ}$ to show the inferred magnetic field within the central $2\arcmin\ \times 2\arcmin$ ($7.8 \times 7.8$ kpc$^{2}$). Contours are shown for $2^{n}\sigma$, where $n = 3.0, 3.5, 4.0, \dots$ and with $\sigma = 6.74\times 10^{-3}$ Jy arcsec$^{-2}$. A polarization vector of $5$\% (white vector) and the beam size of $7$\farcs$8$ (white circle) are shown. \textit{Right:} Polarized flux (color scale) with filled contours starting at $3\sigma$ with $\sigma = 6.64\times10^{-4}$ Jy arcsec$^{-2}$ and increasing in steps of $2\sigma$. Polarization vectors are shown as in the left figure. The black cross shows the location of the AGN.
\label{fig:fig1}}
\epsscale{2.}
\end{figure*}
\begin{figure*}[ht!]
\includegraphics[angle=0,scale=0.5]{fig2}
\caption{\textit{Left:} Total flux (color scale) image at 89 $\mu$m\ with overlaid streamlines of the inferred magnetic field morphology using the line integral convolution technique. \textit{Right:} \textit{HST} composite image with overlaid magnetic field vectors (blue vectors) inferred by the SOFIA/HAWC+ 89 $\mu$m\ polarimetric observations. Polarization vectors have been rotated by $90^{\circ}$ to show the inferred magnetic field. Polarization vector of $5$\% (blue vector) is shown. \label{fig:fig2}}
\epsscale{2.}
\end{figure*}
NGC~1068 was observed at $53$ and $89$ $\mu$m~as part of the guaranteed time observations (GTO, ID: 70\_0609) using the High-resolution Airborne Wideband Camera-plus (HAWC+)
\citep{Vaillancourt2007,Dowell2010,Harper2018} on the $2.7$-m Stratospheric Observatory For Infrared Astronomy (SOFIA) telescope. HAWC+ polarimetric observations simultaneously measure two orthogonal components of linear polarization arranged in two arrays of $32 \times 40$ pixels each, with pixel scales of $2$\farcs$55$ and $4$\farcs$02$ pixel$^{-1}$, and beam sizes (full width at half maximum, FWHM) of $4$\farcs$85$ and $7$\farcs$80$ at $53$ and $89$ $\mu$m, respectively. For both bands, we performed observations in a four-dither square pattern with a distance of three pixels from the center position in the array coordinate system. In each dither position, four halfwave plate position angles (PA) were taken. We used a chop frequency of 10.2 Hz, a nod time of $45$s, a chop-throw of $180$\arcsec, and a chop-angle of $90^{\circ}$ to always be along the short axis of the $32 \times 40$ pixels array. We used the science instrument reference frame (SIRF) for these observations. At $53$ $\mu$m, we observed a total of $8$ dither positions, and a total of $63$ dither positions at $89$ $\mu$m. The total on-source integration times were $0.47$h and $2.03$h at $53$ and $89$ $\mu$m, respectively. The total clock time of the observations—on-source time plus off-source time plus overheads—is 1.3h and 5.2h, respectively. Based on the morphology of the source using images from the \textit{Herschel Space Observatory}\footnote{\textit{Herschel} is an ESA space observatory with science instruments provided by European-led Principal Investigator consortia and with important participation from NASA} \citep{Pilbratt2010} and the chop-throw of $180\arcsec$ used for the HAWC+ observations, the reference beam fluxes should be $\le 1/10$ times the source flux over a source diameter of $120\arcsec$ and therefore negligible. Data were reduced using the \textsc{hawc\_dpr pipeline} v1.3.0. 
The pipeline procedure described by \citet{Harper2018} was used to background-subtract and flux-calibrate the data and compute Stokes parameters and their uncertainties. Computation of final degree and PA of polarization accounts for correction of the instrumental polarization \citep[table 4 by][]{Harper2018} with typical standard deviations after subtraction of $\sim0.3\%$, and de-biasing and PA error estimation \citep{WK1974}. Further analyses and high-level displays were done with custom \textsc{python} routines.
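For reference, the last two steps of the pipeline can be sketched as follows; this is a generic illustration of the commonly used Wardle \& Kronberg estimator, not the actual \textsc{hawc\_dpr} implementation. The degree and PA of polarization follow from the Stokes parameters, and the degree is then de-biased:

```python
import math

def pol_from_stokes(I, Q, U):
    """Degree (fraction) and PA (degrees, measured East of North) of
    linear polarization from the Stokes parameters."""
    p = math.hypot(Q, U) / I
    pa = 0.5 * math.degrees(math.atan2(U, Q))
    return p, pa

def debias(p_meas, sigma_p):
    """Ricean de-biasing of a measured polarization fraction, following
    the commonly used Wardle & Kronberg (1974) estimator:
    p_db = sqrt(p'^2 - sigma_p^2), clipped to 0 when p' < sigma_p."""
    return math.sqrt(max(p_meas**2 - sigma_p**2, 0.0))
```

The factor of $1/2$ in the PA reflects the fact that Stokes $Q$ and $U$ rotate at twice the polarization angle.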
Figure \ref{fig:fig1} shows the total flux image with overlaid polarization vectors in a $2\arcmin \times 2\arcmin$ ($7.8 \times 7.8$ kpc$^{2}$) field-of-view (FOV) at $89$ $\mu$m. Polarization vectors have been rotated by $90^{\circ}$ to show the morphology of the magnetic field. We note that the polarization orientations are ambiguous by $180^{\circ}$, thus the displayed polarization vectors for all figures should be interpreted as magnetic field lines. Each polarization vector shows a statistically independent polarization measurement with $P/\sigma_{P} \ge 3$. Polarization vectors are shown every $4.02\arcsec$ (Nyquist sampling at 89 $\mu$m). Throughout the paper we assume that dust grain alignment is perpendicular to the direction of the magnetic field. We observe extended dust emission along the spiral arms of the galaxy up to a diameter of $\sim$100\arcsec\ ($\sim6.5$~kpc) centered at the nucleus while the central $\sim1$ kpc radius emission is extended along PA of $\sim45^{\circ}$ and co-spatial with the inner-bar. The polarization map shows a spiral shape with a diameter of $\sim50$\arcsec\ ($\sim 3.25$ kpc). This spiral shape can be clearly identified in Figure \ref{fig:fig2}-left in the image made using the line integral convolution (LIC) technique \citep{CL1993}. This figure uses all polarization vectors regardless of the signal-to-noise ratio measured in our observations. Figure \ref{fig:fig2}-right shows combined \textit{HST} observations at V-band, I-band, and H$_{\alpha}$ with overlaid magnetic field vectors (blue vectors) from our 89 $\mu$m\ HAWC+ observations. The polarized flux is shown in Figure \ref{fig:fig1}-right. The polarized flux shows extended emission along the E-W direction with a peak shifted by $4\arcsec$ (Section \ref{sec:AGN}) NE from the peak of the total flux intensity.
Given the short integration time at $53$ $\mu$m, only the polarization at the nucleus of NGC~1068 was detected (a single independent vector), and the polarization map is not shown. For the total intensity image at 53 $\mu$m, we refer to the HAWC+ observations using the Lissajous observing mode presented by \citet{LR2018a}. We here report the measured nuclear degree of polarization of $1.3\pm0.3$\% and PA of polarization of $139\pm6^{\circ}$ (E-vector) within a $5$\arcsec\ ($325$ pc) diameter at $53$ $\mu$m. For 89 $\mu$m, we measured a nuclear polarization of $0.6\pm0.3$\% and PA of polarization of $156\pm13^{\circ}$ (E-vector) within an $8\arcsec$ ($520$ pc) diameter. For both bands, the measured nuclear polarization is below the instrumental polarization and above the residual polarization after instrumental polarization correction. Thus, the measured polarization is consistent with a weakly polarized core. As the fractional contribution of the AGN increases at short wavelengths and the dust emission from the host galaxy increases at long wavelengths \citep[fig. 6 by][]{LR2018a}, we expect to measure a decrease of the nuclear degree of polarization as wavelength increases. Our measurements are compatible with this behavior.
\section{Model of the large-scale galactic magnetic field}\label{sec:Bfield}
The observational results shown in Section \ref{sec:obs} suggest the presence of a large-scale magnetic field along two spiral arms. We here produce a model of the magnetic field to characterize this structure. We use an axisymmetric spiral structure \citep[i.e.][]{RG2010,BHB2010} defined in Cartesian coordinates by the form
\begin{eqnarray}
B_{x} &=& -B_{0}(r) \sin (\phi + \Psi)\cos \chi(z) \\
B_{y} &=& B_{0}(r) \cos (\phi + \Psi)\cos \chi(z) \\
B_{z} &=& B_{0}(r) \sin \chi(z)
\end{eqnarray}
\noindent
where $B_{0}(r)$ is the magnetic field strength as a function of the radial distance from the core of the galaxy, and $\Psi$ is the pitch angle defined as the angle between the azimuthal ($\phi$) direction and the magnetic field direction. $\chi(z)$ is the out-of-plane angle of the magnetic field, taken as $\chi(z) = \chi_{0}\tanh(z/z_{0})$ \citep{RG2010}.
When this magnetic field configuration is viewed at an inclination, $i$, and tilt angle\footnote{Also described as the PA of the major axis of the projected galaxy plane.}, $\theta$, of the galaxy projected on the plane of the sky, the magnetic field at the observer's frame is given by
\begin{eqnarray}
B_{x^{s}} &=& B_{x}\cos\theta + (B_{y}\cos i - B_{z}\sin i)\sin\theta \\
B_{y^{s}} &=& -B_{x}\sin\theta + (B_{y}\cos i - B_{z}\sin i) \cos\theta \\
B_{z^{s}} &=& B_{y}\sin i + B_{z}\cos i
\end{eqnarray}
\noindent
where ($x^{s},y^{s},z^{s}$) are the major axis, minor axis, and the line of sight, respectively, in the sky coordinate system.
The PA of polarization in the plane of the sky, which is parallel to the direction of the magnetic field in the ($x^{s},y^{s},z^{s}$) coordinate system, is computed as $PA_B = \arctan(B_{y^{s}} / B_{x^{s}})$. The degree of polarization as a function of the inclination and magnetic field direction in the plane of the sky is computed as $P = p_{o} (1 - (B_{z^{s}}/B_{s})^{2})$, where $p_{o}$ is a constant factor, and $B^{2}_{s} = B^{2}_{x^{s}} + B^{2}_{y^{s}} + B^{2}_{z^{s}}$ is the square of the total magnetic field strength. We assume a negligible contribution of the out-of-plane magnetic field, $\chi_{0} = 0$, thus our final magnetic field is coplanar with the disk of the galaxy. This model is purely a geometrical description and does not satisfy the divergence-free condition, as $B_{z}$ is neglected.
Since the FIR polarimetry is not directly sensitive to the magnetic field strength, there are three free model parameters: pitch angle, $\Psi$, inclination angle, $i$, and tilt axis position angle, $\theta$. Face-on view corresponds to $i = 0^{\circ}$ and edge-on to $i = 90^{\circ}$. The tilt axis has a reference point, $\theta = 0$, along the north-south direction and increases positively east from north. To fit for the model parameters, we used a Markov chain Monte Carlo (MCMC) approach with the differential evolution metropolis sampling step in the \textsc{python} code \textsc{PyMC3} \citep{pymc}. The prior distributions are set to flat within the ranges $\Psi$~=~($0^{\circ}$,$50^{\circ}$), $i$~=~($0^{\circ}$,$90^{\circ}$), and $\theta$~=~($0^{\circ}$,$90^{\circ}$). We ran the code using 15 chains with 6000 steps and a 1000-step burn-in per chain. Because the central $8\farcs3 \times 8\farcs3$ ($0.54 \times 0.54$ kpc$^{2}$) region of NGC~1068 is dominated by the inner-bar, we have excluded this region in this analysis. One could perhaps question the size of the exclusion zone, but the overall effect on the model fit (presented above) of the small number of vectors within the innermost annulus will be small.
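A minimal numerical sketch of the forward model is given below (our own code; the deprojection of sky coordinates to galaxy-plane azimuth is an assumption consistent with the geometry above, and all angles are in radians). It evaluates the predicted polarization PA and degree at a sky position for given $(\Psi, i, \theta)$:

```python
import math

def model_pol(x, y, pitch, incl, tilt, p0=1.0):
    """Sky-plane polarization PA (radians) and fractional degree predicted
    by the coplanar axisymmetric spiral model (chi_0 = 0, so B_z = 0 in
    the galaxy frame).  (x, y) is the sky position, deprojected here to
    the galaxy-plane azimuth phi."""
    # deproject sky coordinates to the galaxy plane to get the azimuth phi
    xg = x * math.cos(tilt) + y * math.sin(tilt)
    yg = (-x * math.sin(tilt) + y * math.cos(tilt)) / math.cos(incl)
    phi = math.atan2(yg, xg)
    # in-plane spiral field of unit strength (FIR polarimetry is not
    # sensitive to |B|)
    bx = -math.sin(phi + pitch)
    by = math.cos(phi + pitch)
    # project the field onto the sky frame (B_z = 0)
    bxs = bx * math.cos(tilt) + by * math.cos(incl) * math.sin(tilt)
    bys = -bx * math.sin(tilt) + by * math.cos(incl) * math.cos(tilt)
    bzs = by * math.sin(incl)
    pa = math.atan2(bys, bxs)
    b2 = bxs**2 + bys**2 + bzs**2
    p = p0 * (1.0 - bzs**2 / b2)
    return pa, p
```

For a face-on galaxy the predicted degree is uniform ($p = p_0$), while along the major axis of an inclined disk with zero pitch angle it is reduced by $\cos^2 i$, as expected from the $1 - (B_{z^{s}}/B_{s})^{2}$ dependence.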
\begin{figure}[ht]
\includegraphics[angle=0,scale=0.51]{fig3}
\caption{\textit{Top:} Posterior distributions and MAP values of the pitch, $\Psi$, inclination, $i$, and tilt, $\theta$, angles of the magnetic field model. \textit{Bottom:} Degree of polarization and PA of polarization measured relative to the radial direction centered on the galaxy nucleus, $\Delta\phi$ (black dots). An azimuthal angle equal to 0 corresponds to North, and positive values are counted counter-clockwise East of North. $\Delta \phi = 0^{\circ}$ corresponds to a perfectly radial direction, while $\Delta \phi = \pm90^{\circ}$ corresponds to a perfectly azimuthal direction. The best fit model (orange solid line) and the $1\sigma$ uncertainties (blue shaded region) are shown.
\label{fig:fig3}}
\epsscale{2.}
\end{figure}
\begin{figure*}[ht!]
\includegraphics[angle=0,scale=0.28]{fig4}
\caption{\textit{Left:} Best inferred model (blue lines) of the galactic magnetic field explaining the measured (black lines) magnetized spiral arms of the central $3 \times 3$ kpc$^{2}$ of NGC~1068. Observed vectors (black) and predicted model vectors (blue) are drawn with constant length. \textit{Right:} Total flux (color scale) with the overlaid magnetic field model (streamlines) using LIC and the measured polarization vectors (black lines). Approximate locations of the inner-bar and starburst ring are shown (Section \ref{sec:AGN}). Polarization vectors have been rotated by $90^{\circ}$ to show the inferred magnetic field.
\label{fig:fig4}}
\epsscale{2.}
\end{figure*}
Figure \ref{fig:fig3} shows the marginalized posterior distributions of the free parameters and their maximum a posteriori (MAP) values with 1$\sigma$ uncertainties. This figure also shows, as a function of azimuthal angle, the measured degree of polarization, the measured PA of polarization $\Delta\phi$ (black dots) in radial coordinates centered on the galaxy nucleus, the best fit model (orange solid lines), and 1$\sigma$ uncertainties (blue shaded region). If $\Delta \phi = 0^{\circ}$, the angle is perfectly radial, and if $\Delta \phi = \pm90^{\circ}$, the angle is perfectly azimuthal. To generate the data points in the figure, the Stokes I, Q, and U images were binned by azimuthal angle in bins of 10$^{\circ}$. Then, the debiased degree and PA of polarization were estimated for each bin.
We find a symmetric pattern of the polarization as a function of the azimuthal angle, reflecting the relationship among degree of polarization, polarization PA, and model parameters shown above. An axisymmetric spiral structure with a pitch angle of $\Psi = 16.9^{+2.7}_{-2.8}$$^{\circ}$, inclination $ i = 48.1^{+1.8}_{-1.9}$$^{\circ}$, and tilt angle of $\theta = 37.8\pm1.7$$^{\circ}$ best describes our data. The host galaxy is inclined by 40$^{\circ}$ \citep{deVaucouleurs1991}, and using the HI velocity field, \citet{brinks1997} estimated an inclination angle of $40\pm3^{\circ}$, both of which are fairly close to our estimated value. The main difference may be that \citet{deVaucouleurs1991} estimated the galaxy inclination using isophotes on optical images, and that \citet{brinks1997} used the rotation curve of the HI emission observations, while our estimate is based on a smaller region of the galaxy dominated by the FIR emission. The position angle of the kinematic major axis of HI is estimated to vary from $270\pm3^{\circ}$ in the inner $30-70\arcsec$ disk to $286\pm5^{\circ}$ beyond $100\arcsec$, which is almost East-West \citep{brinks1997}. We further discuss the pitch angle in Section~\ref{sec:PA}.
Figure \ref{fig:fig4} (left) shows the magnetic field configuration (blue vectors) of the best inferred parameters (Fig. \ref{fig:fig3}) with the overlaid measured PA of polarization rotated by $90^{\circ}$ (black vectors) of the central $3 \times 3$ kpc$^{2}$ of NGC~1068. Figure \ref{fig:fig4} (right) shows the total flux (color scale) image of NGC~1068 with the overlaid magnetic field model (streamlines) using LIC techniques and the measured polarization vectors (black lines). All polarization vectors have the same length, and their orientation shows the magnetic field morphology. The large-scale magnetic field morphology agrees with an axisymmetric spiral structure. The largest difference between our model and the observations is located within $\sim 1$ kpc of the nucleus (Section \ref{sec:AGN}). As our axisymmetric magnetic field model does not satisfy the divergence-free requirement, we cannot reproduce the field in the inner areas of the galaxy. The deviations within $\sim 1$ kpc around the nucleus may indicate a limitation of the model and/or a different magnetic field morphology as described in Section \ref{sec:AGN}.
\section{Discussion}\label{sec:DIS}
\subsection{Temperature and column density maps}\label{sec:TNh}
To support the analysis of the following sections, we have constructed temperature and column density maps of NGC~1068. We have combined our $89$ $\mu$m\ HAWC+ observations with archival \textit{Herschel Space Observatory} \citep{Pilbratt2010} observations at $70$, $160$, and $250$ $\mu$m\ taken with the PACS \citep{Poglitsch2010} and SPIRE \citep{Griffin2010} instruments. We binned each observation to the pixel scale, $6$\arcsec, of the $160$ $\mu$m\ \textit{Herschel} data. Then we extracted the intensity values of each pixel associated with the same part of the sky at each wavelength. Finally, we fit a modified blackbody function assuming a dust emissivity of $\epsilon_{\lambda} \propto \lambda^{-1.6}$ \citep[e.g.][]{Boselli2012} and with the temperature and optical depth, $\tau$, as free parameters. We estimate the column density using the relation $N_{HI+H_{2}} = \tau / (k \mu m_{H})$, where $\tau$ is the optical depth at 250 $\mu$m from the best fit modified blackbody function at a given pixel, $k = 0.1$ cm$^{2}$ g$^{-1}$ \citep{H1983}, $\mu = 2.8$ is the mean molecular mass per H molecule \citep{K2008}, and $m_{H}$ is the hydrogen mass. Figure \ref{fig:fig5} shows the estimated temperature and column density distributions within the same FOV ($2\arcmin~\times~2\arcmin$, $7.8~\times~7.8$~kpc) as Fig. \ref{fig:fig1}.
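The per-pixel model and the column-density conversion can be illustrated as follows (a sketch under the stated assumptions — cgs units, optically thin emission, emissivity $\propto \lambda^{-1.6}$ referenced to 250 $\mu$m; not the authors' pipeline code):

```python
import math

H, C, KB = 6.62607e-27, 2.99792e10, 1.38065e-16   # Planck, light speed, Boltzmann (cgs)
K_DUST, MU, M_H = 0.1, 2.8, 1.67262e-24           # cm^2 g^-1 at 250 um, mean mass per H2, g

def planck(nu_hz, t_k):
    """Planck function B_nu in cgs units."""
    return 2.0 * H * nu_hz**3 / C**2 / math.expm1(H * nu_hz / (KB * t_k))

def mod_blackbody(lam_um, t_k, tau250):
    """Optically thin modified blackbody, emissivity scaling tau ~ lambda^-1.6."""
    tau = tau250 * (250.0 / lam_um) ** 1.6
    return tau * planck(C / (lam_um * 1e-4), t_k)

def column_density(tau250):
    """N_{HI+H2} = tau / (k mu m_H), with tau the best-fit 250 um optical depth."""
    return tau250 / (K_DUST * MU * M_H)
```

A 250 $\mu$m optical depth of $10^{-3}$, for example, maps to $N \approx 2\times10^{21}$ cm$^{-2}$, within the $10^{20}$--$10^{22.4}$ cm$^{-2}$ range found in the map.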
\begin{figure}[h!]
\includegraphics[angle=0,scale=0.47]{fig5}
\caption{ \textit{Top:} Hydrogen column density map, $N_{HI+H_{2}}$. Contours start at $\log{(N_{HI+H_{2}} (cm^{-2}))} = 20$ and increase in steps of $0.2$ dex. \textit{Bottom:} Temperature map. Contours start at $20$ K and increase in steps of $1$ K. The black cross shows the location of the AGN.
\label{fig:fig5}}
\epsscale{2.}
\end{figure}
\begin{figure*}[ht!]
\includegraphics[angle=0,scale=0.9]{fig6}
\caption{\textit{Left:} \textit{HST}/F555W image with the polarization vectors observed with HAWC+ within a $1.5\arcmin\ \times 1.5\arcmin$ ($5.9 \times 5.9$ kpc$^{2}$) FOV. \textit{Right:} Zoom-in of left image to show the central $15\arcsec\ \times 15\arcsec$ ($0.98 \times 0.98$ kpc$^{2}$). The black cross shows the location of the active nucleus. Polarization vectors have been rotated by $90^{\circ}$ to show the inferred magnetic field.
\label{fig:fig6}}
\epsscale{2.}
\end{figure*}
The estimated dust temperatures lie in the range of $20-34$ K and peak in a ridge along the inner-bar. There is a strong peak temperature ($34\pm2$ K) displaced $\sim 7\arcsec$ ($\sim0.45$ kpc) NE from the position of the AGN (black cross). The dust temperature has another peak $\sim 45\arcsec$ ($\sim2.93$ kpc) to the SW, along the spiral arm, at the location of a large complex of HII regions and giant molecular clouds \citep[see Fig. \ref{fig:fig2}-right,][]{kaneko1989,tomoka2017}. Fig. \ref{fig:fig6} shows the location of the HII regions (bright compact sources) using the \textit{HST}/F555W ($\lambda_{c} = 0.54$ $\mu$m, $ \Delta\lambda = 0.11$ $\mu$m) filter to compare with our inferred magnetic field morphology.
The column densities in the mapped area lie in the range $10^{20}-10^{22.4}$ cm$^{-2}$, corresponding to optical extinctions of A$_{V} \sim 0.05-14$ mag. The distribution is extended along the inner-bar but is not as strongly peaked along the ridge as the temperatures. There is a weak maximum SW of the nucleus, and at the lowest contour levels there is an extension along the SW spiral arm. The morphology is consistent with HI absorption maps observed with the VLA with a beam size of $2\farcs1~\times~1\farcs0$ \citep{gallimore1994}. They found column densities in the range $[1-4]~\times~ 10^{21}$ cm$^{-2}$ in the region SW of the nucleus. Integrating over the column density map, we derive a mass of $[6.1~\pm~1.4]~\times~10^{8}$ M$_{\odot}$ in the central $2\arcmin~\times~2\arcmin$ ($7.8~\times~7.8$~kpc).
\subsection{On the role of magnetic fields in the spiral arms}\label{sec:BArms}
The main result of these observations is the detection of coherent magnetic fields over a $3$ kpc diameter region. Our column density map shows that all points have optical depths $< 0.2$ at $89$ $\mu$m. We converted the estimated hydrogen density, $N_{H}$, from Figure \ref{fig:fig5} to the visual extinction, $A_V$, using the standard ratio, $A_{V}/N_{H} = 5.35 \times 10^{-22}$ mag cm$^{-2}$ \citep{BSD1978}. Then, we used the typical extinction curve\footnote{The extinction curve used for the optical depth conversion can be found at \url{https://www.astro.princeton.edu/~draine/dust/dustmix.html}} of the Milky Way for R$_{V} = 3.1$ \citep{WD2001} to estimate the conversion $\tau_{89}/\tau_{0.55} = 4 \times 10^{-3}$ between the optical depths at 89 $\mu$m~and 0.55 $\mu$m. Hence, at each line of sight it is likely that we are sampling the integrated flux from aligned dust grains all the way through the galactic disk. In the outer regions of the map, the optical extinctions are also low to moderate, and there is a close alignment between visual tracers of spiral arms and our magnetic field vectors (see Fig. \ref{fig:fig1}, right and Fig. \ref{fig:fig6}, left). Within a diameter of $10$ kpc, CO is also aligned with the optical spiral structure, and H$_{\alpha}$ residual velocities reveal streaming motions along the arms \citep{D97}.
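The optical-depth bookkeeping can be made concrete with a small check (our own sketch; the $1.086$ magnitude-to-optical-depth factor is the standard conversion, and we take the extinction-curve ratio such that $\tau_{89} \simeq 4\times10^{-3}\,\tau_{0.55}$):

```python
def tau_89(n_h):
    """89 um optical depth implied by a hydrogen column density n_h (cm^-2)."""
    a_v = 5.35e-22 * n_h     # visual extinction from the A_V / N_H ratio
    tau_v = a_v / 1.086      # magnitudes of extinction -> optical depth
    return 4e-3 * tau_v      # extinction-curve ratio between 89 um and 0.55 um
```

Even at the largest column density in the map, $N \simeq 10^{22.4}$ cm$^{-2}$, this gives $\tau_{89} \approx 0.05$, consistent with the statement that every sight line is optically thin ($\tau < 0.2$) at 89 $\mu$m.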
While we discussed above some of the shortcomings of mapping magnetic fields with optical polarimetry, the V-band vectors of figure 2 in \citet{scar91} or the data shown by \citet{WT1987} line up well with our mathematical model. In the outer regions ($r > 15\arcsec$, outside the starburst ring) that is also true for our FIR polarimetry. Inside the starburst ring, where the optical depths are high (Fig. \ref{fig:fig5}-top), the V-band degree of polarization is very small, but in general, the directions follow our model. The FIR emission, on the other hand, is more highly polarized and there are significant deviations from the model (see Section \ref{sec:AGN}). From the above, we infer 1) that our FIR polarimetry results are, indeed, tracing spiral structure, 2) that, as expected, they seem to offer significant advantages over optical polarimetry for measuring the bulk properties of the ISM, particularly along lines of sight with high optical depth, and 3) that the field lines appear to follow the directions of gas flows along the spiral arms, which may indicate that the arms host strong shocks; otherwise the field would cross the arms at an angle.
\begin{figure*}[ht!]
\includegraphics[angle=0,scale=0.6]{fig7}
\caption{Degree of polarization as a function of the column density (left) and surface brightness (right). The color scale corresponds to the temperature (left) and column density (right). In all cases, polarization vectors with $\sigma_{P} < 5$\% are shown. A power-law fit (black dashed line) to the polarization vectors along the spiral arms is shown.
\label{fig:fig7}}
\epsscale{2.}
\end{figure*}
The magnetic field in the ISM is often modeled using a combination of constant and turbulent components. The trend of decreasing fractional polarization with increasing column density \citep{H99,jowh15, Jon15} provides an indirect measurement of the effect of the turbulent component. For maximally aligned dust grains along a line of sight with a constant magnetic field direction, the degree of polarization, $P$, in emission will be constant with optical depth $\tau$. If there is a region along the line of sight with some level of dust grain misalignment, it will result in a reduced degree of polarization. If the magnetic field direction in a region varies completely randomly along the line of sight with a well-defined scale length in $\tau$, we have $P \propto \tau^{-0.5}$ \citep{Jon15}. For any combination of constant and random components, the slope will be between these two limits \citep{JB15}. However, if 1) grain alignment decreases in denser regions, or 2) there is coherent cancellation of polarization (e.g. crossed field directions), or 3) there are regions of very high turbulence on very small scale lengths, the polarization can decline faster with optical depth than a slope of $-1/2$.
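The $P \propto \tau^{-1/2}$ limit for a fully random field can be illustrated with a small Monte Carlo sketch (our own illustration, not from the cited works): each independent cell along the line of sight contributes a unit Stokes vector with a random orientation, and the beam-averaged polarization is the length of the vector mean.

```python
import math, random

def mean_frac_pol(n_cells, n_trials=2000, seed=0):
    """Average fractional polarization for n_cells randomly oriented field cells."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_trials):
        q = u = 0.0
        for _ in range(n_cells):
            ang = rng.uniform(0.0, 2.0 * math.pi)   # random Stokes angle 2*chi
            q += math.cos(ang)
            u += math.sin(ang)
        total += math.hypot(q, u) / n_cells
    return total / n_trials

p_few, p_many = mean_frac_pol(4), mean_frac_pol(100)
slope = math.log(p_few / p_many) / math.log(100 / 4)
```

The recovered slope is close to $0.5$: the polarization declines as the square root of the number of independent cells, i.e. $P \propto \tau^{-1/2}$ for a fixed scale length in $\tau$, which is the limiting case quoted above.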
Figure \ref{fig:fig7} shows log/log plots of polarization versus column density and surface brightness for all polarization vectors with $\sigma_{P} < 5$\%. The column density and dust temperature for each polarization vector are shown in color scale. It is clear that the polarization is higher when the temperature, column density, and fluxes are low, although the spatial anti-correlation of temperature and column density (Figure \ref{fig:fig5}) is the more fundamental relationship. This behavior has been previously observed in molecular cloud complexes in our Galaxy and the Galactic center \citep[e.g.,][]{H99,C03}, which show a depolarization effect as flux increases. The upper envelope of the polarization measurements traces the least depolarized lines of sight on the source, which allows us to find the overall trend of the optical depth index. We use a linear fit to the log-log plot along the upper envelope of the measurements, which gives an optical depth index of $-0.5$ (black dashed line). This result indicates that most of the polarization measurements suffer from depolarization with an optical depth index steeper than $-0.5$, which may be due to high turbulence at small scales, reduced dust grain alignment in dense regions, and/or crossed fields along the LOS.
The degree of polarization in our map is typically $< 3$\%, with a peak of $\sim 7$\% (with $P/\sigma_{P} \sim 3$). The polarization is systematically higher in the outer regions than in the starburst disk at radius $\le 25\arcsec$ ($1.6$ kpc). FIR polarimetric observations of Galactic clouds typically measure degree of polarization of $3-6$\% in the $60-100$ $\mu$m\ wavelength range \citep{dotson2000}. For our observations of NGC~1068, although we have an ordered polarization spiral pattern, we identify a potential disorder in the magnetic field on scales smaller than the resolution element, $8\arcsec$ ($520$ pc), of our observations. In a sense, this should not be surprising. The forces creating large-scale structures like spiral arms result from gravity and rotation. Where there are additional local sources of energy such at those arising from the assembly and disruption of star-forming complexes, there will be multiple additional mechanisms for distorting pre-existing magnetic fields or generating new ones with orientations that differ from the large-scale field. The larger the fraction of the ISM involved in these processes, the greater the average misalignment with the general field and the greater the depolarization of the emergent FIR radiation. This is consistent with studies that show that Galactic star-forming complexes with scale-lengths of tens of parsecs are misaligned with the Galaxy’s field on kiloparsec scales \citep{S2011}. One of the most significant aspects of our observations may be that in spite of the extremely high star formation rate in the starburst disk of NGC 1068, there is still a clearly discernible spiral field over much of its extent.
The alignment efficiency of dust grains may be an issue in very dense molecular cloud cores where the extinction exceeds A$_{V} \sim 20$ and no embedded Young Stellar Objects (YSOs) illuminate the dust \citep{JB15}. To determine the effect of loss of grain alignment on our observations, we need to calculate what fraction of the flux in our 53 and 89 $\mu$m\ beams could arise from starless or pre-stellar molecular cores \citep{caselli2013}. Using $^{13}$CO and 1.1 mm dust continuum observations of the Milky Way, \citet{BH2014} find that only about 10\% or less of the mass in Giant Molecular Clouds (GMC) is contained in compact regions with A$_{v} > 10$ and temperatures of T$\sim10$ K. Given that some of these regions may contain a YSO that could produce enough radiation to align grains \citep[e.g.][]{jones2016}, this must be considered an upper limit to the mass fraction in starless cores. Given this upper limit on the mass fraction of GMCs in the form of starless cores and the $20-34$ K range of temperatures we derive for the dust emission in our beam (Figure \ref{fig:fig5}) for NGC~1068, the fraction of the flux in our beam from cloud cores with unaligned grains must be less than 1\%. Thus, in our 53 and 89~$\mu$m\ beams ($5-8$\arcsec), loss of grain alignment cannot be a significant contributing factor to the trend seen in Figure \ref{fig:fig7}. This leaves the coherent cancellation of polarization and/or regions of very high turbulence on small scale lengths as potential explanations for the slope in Figure \ref{fig:fig7}.
\subsection{The pitch angle and masses}\label{sec:PA}
As the density wave passes through the host galaxy, it enhances the star formation. The spiral density wave theory predicts that the pitch angle may vary with wavelength, as described in Section \ref{sec:int}. NGC~1068 has been studied across the electromagnetic spectrum, and the pitch angle of the arms has been estimated to be $20.6\pm4.5^{\circ}$ at 0.46 $\mu$m\ \citep{Berrier2013}, $17.3 \pm 2.2^{\circ}$ using R band images with the 2.5-m Las Cumbres Observatory (LCO) \citep{Seigar2008}, $15^{\circ}$ using the $H_{\alpha}$ velocity field \citep{emsellen2006}, and $7-10^{\circ}$ using the CO (J=1-0) emission \citep{PSM1991}. We estimated a pitch angle of $16.9^{+2.7}_{-2.8}$$^{\circ}$ in the magnetic field inferred from 89 $\mu$m\ observations, which lies between the pitch angles estimated for the spiral arms at 0.46 $\mu$m\ and in the CO (J=1-0) emission. The material in the galaxy which is sampled by our FIR measurements is spatially coincident with the $H_{\alpha}$ velocity field \citep[see fig. 7 by][]{emsellen2006}. The analysis by \citet{DSW1998} showed that the $H_{\alpha}$ velocity field pattern is spatially correlated with the starbursting HII regions, which were enhanced by a burst in the past 30 Myr. Later, \citet{emsellen2006} found that the $H_{\alpha}$ and CO intensity distributions are not spatially coincident due to star formation and extinction. We conclude that the matter sampled by our estimated pitch angle follows the starbursting regions along the spiral arms.
The spiral density wave theory has also shown that the concentration of host galaxy mass and nuclear mass determines the pitch angle \citep{RRS1975}. As a consequence, recent studies have gone further and empirically shown that the spiral arm pitch angle, $\Psi$, and the central black hole mass, $M_{BH}$, may be correlated within the modal density wave theory \citep[e.g.][]{Seigar2008, Berrier2013} via the relation
\begin{equation}
\log{(\frac{M_{BH}}{M_{\odot}})} = (8.21 \pm 0.16) - (0.062 \pm 0.009)\Psi
\end{equation}
\noindent
\citep{Berrier2013}. Using our estimated pitch angle of $16.9^{+2.7}_{-2.8}$$^{\circ}$, we estimate a nuclear mass of $\log(M/M_{\odot})= 7.16^{+0.46}_{-0.51}$. Our result is in agreement, within the uncertainties, with the black hole mass of $\log(M_{BH}/M_{\odot})=6.95 \pm 0.02$ estimated using maser modeling on the central pc of NGC~1068 \citep{LB2003}.
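The central value of this estimate is a one-line computation (our own sketch; only the central values are used here, whereas the quoted uncertainties come from propagating the errors on the fit coefficients and on the pitch angle):

```python
def log_mbh(pitch_deg, a=8.21, b=0.062):
    """log10(M_BH / M_sun) from the pitch-angle relation of Berrier et al."""
    return a - b * pitch_deg

# central value for the pitch angle inferred from the 89 um magnetic field
log_m = log_mbh(16.9)
```

This reproduces the $\log(M/M_{\odot}) \simeq 7.16$ quoted above.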
\subsection{The central region of NGC~1068}\label{sec:AGN}
We found several notable features, including discrepancies between the measured PA of polarization and the magnetic field model, within $\sim1$ kpc of the nucleus. In the map of polarized flux (right-hand panel of Figure \ref{fig:fig1}), we identify:
\begin{itemize}
\item a low polarization, $0.6\pm0.3$\%, within an $8\arcsec$ ($0.52$ kpc) diameter beam centered on the AGN,
\item a strong peak in the polarized flux at PA~$\sim22^{\circ}$ at a radius of $\sim4\arcsec$ ($\sim0.26$ kpc) from the position of the AGN,
\item three distinct minima: (a) at PA~$\sim22^{\circ}$, $8\arcsec~\le~r~\le~16\arcsec$ ($0.52-1$~kpc); (b) at PA~$\sim212^{\circ}$, $3\arcsec \le r \le 12\arcsec$ ($0.20-0.78$ kpc); and (c) at PA~$\sim240^{\circ}$, $14\arcsec \le r \le 18\arcsec$ ($0.91-1.17$ kpc), and
\item significant discrepancies between our observed polarization directions and our spiral model.
\end{itemize}
The polarization of the nuclear region has been extensively studied from ultraviolet (UV) to MIR wavelengths. The UV-optical polarization is very high, $\sim15$\% (intrinsic polarization), and it is attributed to dust scattering from the NE regions of the NLR within the central kpc \citep[e.g.][]{AM85,AHM94,K99}. The NIR polarization, $\sim7$\% (intrinsic polarization), is attributed to dichroic absorption by aligned dust grains in the pc-scale dusty obscuring torus around the active nucleus \citep[e.g.][]{B1988,Y1995,L1999,P1997,LR2015,G2015}. The MIR polarization, $<1$\%, is attributed to self-absorbed dust emission from aligned dust grains in the pc-scale optically thick dusty torus \citep{P07,LR2016}. We attribute our measured FIR polarization of $0.6\pm0.3$\% at the core to depolarization from 1) beam, $\sim8$\arcsec, averaging of a more complex underlying field, 2) the high column density toward the core (Section \ref{sec:TNh}), and/or 3) a turbulent environment caused by the jet.
It is interesting to note the difference between the nuclear polarization of the radio-quiet AGN NGC~1068 and that recently reported for the highly polarized radio-loud AGN Cygnus A \citep{LR18}. At 89 $\mu$m, Cygnus A has a nuclear polarization of $\sim10$\% with a PA of polarization (B-vector) along the equatorial axis of the torus. Although the two galaxies lie at different distances and subtend different angular diameters in the plane of the sky, we can measure the polarization of NGC~1068 over the same physical scale ($8$ kpc diameter) as Cygnus A, obtaining a polarization level of $\sim0.7$\% with a PA of polarization (E-vector) of $\sim144^{\circ}$, i.e. perpendicular to the overall extension of the polarized inner-bar (Fig. \ref{fig:fig1}). The core of NGC~1068 at 89 $\mu$m\ seems to be affected by depolarization arising from the inner-bar and/or an intrinsically weak magnetic field, while Cygnus A may have an intrinsically polarized core. Further magnetohydrodynamical models are required to test the influence of magnetic fields in AGN tori and to compare radio-loud and radio-quiet objects.
The peak of the polarized flux is located $\sim4$\arcsec\ ($\sim0.26$ kpc) NE from the AGN near the position of maximum dust temperature. This feature coincides with the northeastern radio lobe at 4.9 GHz \citep{WU1982}, which is associated with high-velocity OIII-emitting clouds within a bow-shock that separates the AGN outflow from the local ISM. \citet{YWS2001} observed a bright X-ray emission at the same position. The X-ray spectrum of this feature is a smooth continuum bremsstrahlung spectrum plus emission lines. Observations at $19.7-53$ $\mu$m\ show excess emission at $3\arcsec-6\arcsec$ ($0.20-0.39$ kpc) NE of the AGN after removal of an unresolved source at the position of the AGN \citep{LR2018a}. \citet{scar91} also found the region to be polarized at a level of $3$\%, which they attributed to the interaction of the jet with the ionization cone. At the angular resolution of the current FIR data, it is not possible to differentiate between the case in which the observed magnetic field at the position of peak polarized flux is generated by the bow shock or is intrinsic to the interstellar material with which the outflow is interacting.
The minima in polarized flux at $22^{\circ}$ (a) and $240^{\circ}$ (c) coincide with peaks in the $53$ $\mu$m\ map of \citet{LR2018a}, the locations of the most intense star formation in the starburst ring. The most likely explanation for the low polarization is that at these positions, the coherent large-scale field is completely dominated by randomly oriented fields generated during the collapse and destruction of a large number of star-forming cores.
The minimum in polarized flux at $212^{\circ}$ (b), in the direction of the counter-jet, may stem from a deficit in ISM density and CO intensity in that direction. The CO cloud density along the NE branch of the NIR bar and near the NE ionization cone appears to be greater than along the SW branch \citep[see, e.g.][]{Schinnerer2000,tosaki2017}.
The largest discrepancies between the measured polarization and the model of Section \ref{sec:Bfield} (see Figure \ref{fig:fig4}) lie at $5\arcsec \le r\le 15\arcsec$ at PA $\sim70^{\circ}$ and PA $\sim270^{\circ}$, in the directions of gaps in molecular cloud density along the starburst ring \citep{Schinnerer2000}. The gaps occur between the regions of most active star formation (see the $53$ $\mu$m\ map of \citet{LR2018a}) and the points at which the outer spiral arms diverge from the starburst ring (PA $\sim112^{\circ}$ and PA $\sim292^{\circ}$). The deviations of the observed field directions from the model are in the sense of a larger radial component in the observed vectors. In the sector toward PA $\sim 70^{\circ}$, the polarized flux is particularly strong (see Fig. \ref{fig:fig1}-right) and the vectors line up well with both the NE branch of the inner-bar (PA $\sim 45^{\circ}$) and visible dust lanes (see Fig. \ref{fig:fig2}-right). If the magnetic field traces the gas flow, this would be consistent with inward transport of gas at the leading edges of the inner-bar, with the highest present-day flow rates occurring along the NE branch of the bar. Magnetic fields tracing inward gas transport have been observed in the inner-bar of NGC 1097 in radio synchrotron polarization \citep{Beck2005}.
\section{Conclusions}\label{sec:CON}
We have presented the first FIR polarimetry of NGC 1068. We found a large-scale ($\sim3$ kpc) spiral pattern that we attribute to thermal emission from magnetically-aligned grains. There is a spatial and morphological correspondence between the 89 $\mu$m\ magnetic field vectors and other tracers (e.g. OIII, H$_{\alpha}$, optical) of the spiral arms. We found a symmetric polarization pattern as a function of the azimuthal angle arising from the projection and inclination of the disk field component in the plane of the sky. We are able to explain this behavior with an axisymmetric spiral polarization pattern with pitch angle $16.9^{+2.7}_{-2.8}$$^{\circ}$, inclination $48.1^{+1.8}_{-1.9}$$^{\circ}$, and tilt angle $37.8 \pm 1.7^{\circ}$. The matter sampled by our estimated pitch angle follows the starbursting regions along the spiral arms, and the pitch angle-mass relationship predicts a nuclear mass of $\log(M/M_{\odot}) = 7.16^{+0.46}_{-0.51}$.
Outside the starburst disk (radius $\le1.6$ kpc) in NGC 1068, the degree of polarization is similar to that seen in Milky Way sources. At smaller radii, it decreases with flux, dust temperature, and column density, and it has two minima near the locations of the ends of the NIR inner-bar in the starburst ring. This trend is consistent with dilution of the net spiral field by random fields created by injection of kinetic energy into the ISM by active star formation.
Inside the starburst ring ($<1.6$ kpc), we found evidence for large-scale coherent magnetic fields that align with visual tracers but not with our model field. The discrepancies are largest along the leading edges of the inner-bar. If the magnetic field traces gas flows, this is consistent with inward transport of gas induced by the bar. The intensity of polarized flux is stronger along position angles centered at $\sim 70^{\circ}$, suggesting that at the present time the strongest flows are along the NE branch of the inner-bar.
A peak in polarized flux intensity and dust temperature occurs at $\sim 4\arcsec$ ($\sim0.26$ kpc) NNE of the AGN near the location of the bow shock separating the AGN outflow from the surrounding ISM. This is consistent with the hypothesis that the magnetic field has been amplified at the shock interfaces along the edges of the outflow cavity, but our current angular resolution is insufficient to rule out the possibility that the field is intrinsic to the interstellar clouds with which the outflow is interacting. There is a minimum in polarized flux to the SSW of the AGN at the location of the counter-jet, possibly because of a current deficit of ISM in that direction.
The degree of FIR polarization at the position of the AGN is low, $1.3 \pm 0.3$\% within $5\arcsec$ ($0.33$ kpc) diameter at 53 $\mu$m\ and $0.6 \pm 0.3$\% within $8\arcsec$ ($0.52$ kpc) diameter at 89 $\mu$m. This is much lower than the $\sim10$\% FIR polarization of the radio-loud AGN in Cygnus A \citep{LR18}. The degree of polarization of NGC 1068 integrated over an $8$ kpc diameter region comparable to the physical scale of the region encompassed by the Cygnus A measurement is only $0.7$\%.
The results presented here, along with our prior studies of M 82 and NGC 253 \citep{jones2019}, provide evidence that FIR polarimetry can be a valuable tool for studying magnetic field structure in external galaxies, particularly in regions of high optical depth.
\acknowledgments
We are grateful to the anonymous referee, whose comments helped to clarify and improve the text. Based on observations made with the NASA/DLR Stratospheric Observatory for Infrared Astronomy (SOFIA) under the GTO Program. SOFIA is jointly operated by the Universities Space Research Association, Inc. (USRA), under NASA contract NAS2-97001, and the Deutsches SOFIA Institut (DSI) under DLR contract 50 OK 0901 to the University of Stuttgart. Portions of this work were carried out at the Jet Propulsion Laboratory, operated by the California Institute of Technology under a contract with NASA. PACS has been developed by a consortium of institutes led by MPE (Germany) and including UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI/OAA/OAP/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by the funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA/CNES (France), DLR (Germany), ASI/INAF (Italy), and CICYT/MCYT (Spain). SPIRE has been developed by a consortium of institutes led by Cardiff University (UK) and including Univ. Lethbridge (Canada); NAOC (China); CEA, LAM (France); IFSI, Univ. Padua (Italy); IAC (Spain); Stockholm Observatory (Sweden); Imperial College London, RAL, UCL-MSSL, UKATC, Univ. Sussex (UK); and Caltech, JPL, NHSC, Univ. Colorado (USA). This development has been supported by national funding agencies: CSA (Canada); NAOC (China); CEA, CNES, CNRS (France); ASI (Italy); MCINN (Spain); SNSB (Sweden); STFC, UKSA (UK); and NASA (USA).
\vspace{5mm}
\facilities{SOFIA (HAWC+), \textit{Herschel} (PACS, SPIRE), \textit{HST} (WFPC2, ACS)}
\software{\textsc{astropy} \citep{2013A&A...558A..33A},
\textsc{PyMC3} \citep{pymc},
\textsc{aplpy} \citep{RB2012},
\textsc{matplotlib} \citep{hunter2007}.
}
We consider high-dimensional estimation problems, where the number of variables $p$ can be much larger than the number of observations $n$.
In this regime, consistent estimation can be achieved by imposing low-dimensional structural constraints on the estimation parameters.
{\it Sparsity} is a prototypical structural constraint, where at most a small set of parameters can be non-zero.
A key class of sparsity-constrained estimators is based on regularized $M$-estimators using \emph{convex} penalties, with the $\ell_1$ penalty by far the most common. In the context of linear regression, the Lasso estimator~\citep{Tibshirani96} solves an $\ell_1$ regularized or constrained least squares problem, and has strong statistical guarantees, including prediction error consistency~\citep{GeerBuhl09}, consistency of the parameter estimates in some norm~\citep{GeerBuhl09,MeinshausenYu09,Dantzig2007}, and variable selection consistency~\citep{Meinshausen06,Wainwright06,Zhao06}.
In the context of sparse Gaussian graphical model (GMRF) estimation, the graphical Lasso estimator minimizes the Gaussian negative log-likelihood regularized by the $\ell_1$ norm of the off-diagonal entries of the concentration~\citep{YuaLin07,FriedHasTib2007,BanGhaAsp08}. Strong statistical guarantees for this estimator have been established (see \citet{RWRY11} and references therein).
Recently, there has been significant interest in \emph{non-convex} penalties to alleviate the bias incurred by convex approaches, including SCAD and MCP penalties \citep{FanLi91,breheny2011coordinate,zhang2010nearly,zhang2012general}. In particular,~\citet{zhang2012general} established consistency for the global optima of least-squares problems with certain non-convex penalties. \citet{LW15} showed that under some regularity conditions on the penalty, any stationary point of the objective function will lie within statistical precision of the underlying parameter vector and thus provide $\ell_2$- and $\ell_1$- error bounds for any stationary point. \citet{LW17} proved that for a class of \emph{amenable} non-convex regularizers with vanishing derivative away from the origin (including SCAD and MCP), any stationary point is able to recover the parameter support without requiring the typical incoherence conditions needed for convex penalties. All of these analyses apply to
non-convex penalties that are \emph{coordinate-wise separable}.
Our starting point is a family of $M$-estimators with trimmed $\ell_1$ regularization, which leaves the largest $h$ parameters unpenalized.
This non-convex family includes the Trimmed Lasso~\cite{gotoh2017dc,bertsimas2017trimmed} as a special case.
Unlike SCAD and MCP, trimmed regularization exactly solves constrained best subset selection for large enough values of the regularization parameter,
and offers more direct control of sparsity via the parameter $h.$ While Trimmed Lasso has been studied from an optimization perspective and with respect to its connections to existing penalties, it has \emph{not} been analyzed from a statistical standpoint.
\vskip 4pt \noindent
{\bf Contributions:}
\begin{itemize}
\item We present the \emph{first} statistical analysis of $M$-estimators with trimmed regularization, \emph{including} Trimmed Lasso. Existing results for non-convex regularizers \citep{LW15,LW17} cannot be applied as trimmed regularization is neither coordinate-wise decomposable nor ``amenable''. We provide support recovery guarantees, $\ell_\infty$ and $\ell_2$ estimation error bounds for general $M$-estimators, and derive specialized corollaries for linear regression and graphical model estimation. Our results show different regimes based on how the trimming parameter $h$ compares to the true support size.
\item To optimize the trimmed regularized problem we develop and analyze a new algorithm, which performs better than difference of convex (DC)
functions optimization~\citep{khamaru2018convergence}.
\item Experiments on sparse linear regression and graphical model estimation show $\ell_1$ trimming is
competitive with other non-convex penalties and vanilla $\ell_1$ when $h$ is selected by cross-validation, and
has consistent benefits for a wide range of values for $h$.
\item Moving beyond $M$-estimation, we apply trimmed regularization to two deep learning tasks: (i) recovering input structures of deep models and (ii) network pruning (a.k.a. sparsification, compression). Our experiments on input structure recovery are motivated by~\citet{Oymak18}, who quantify the
complexity of sparsity-encouraging regularizers by introducing the covering dimension, and demonstrate the benefits of regularization for learning over-parameterized networks.
We show trimmed regularization achieves superior sparsity pattern recovery compared to competing approaches.
For network pruning, we illustrate the benefits of trimmed $\ell_1$ over vanilla $\ell_1$ on MNIST classification using the LeNet-300-100 architecture.
Next, motivated by recently developed pruning methods based on variational Bayesian approaches \citep{dai2018vib,Louizos18}, we propose Bayesian neural networks with trimmed $\ell_1$ regularization. In our experiments, these achieve superior results compared to competing approaches with respect to both error and sparsity level.
Our work therefore indicates broad relevance of trimmed regularization in multiple problem classes.
\end{itemize}
\section{Trimmed Regularization}\label{Sec:Setup}
Trimming has typically been applied to the \emph{loss} function $\L$ of $M$-estimators.
We can handle outliers
by trimming {\it observations} with large residuals in terms of $\L$: given a collection of $n$ samples, $\Data = \{Z_1, \hdots, Z_n\}$, we solve
\[\minimize_{\th \in \Omega, \w \in \{0,1\}^n} \sum_{i=1}^n w_i \L(\th; Z_i) \quad \mbox{s.t.}
\sum_{i=1}^n w_i = n-h,
\]
where $\Omega$ denotes the parameter space (e.g., $\mathbb{R}^p$ for linear regression). This amounts to trimming $h$ outliers as we learn $\th$ (see~\citet{YLA2018} and references therein).
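As a toy illustration (our own sketch, not from the paper), trimming the loss for fixed $\th$ amounts to discarding the $h$ largest residual contributions, which is the optimal choice of the binary weights $\w$:

```python
import numpy as np

def trimmed_squared_loss(residuals, h):
    """Sum of squared residuals after discarding the h largest ones
    (the optimal choice of the binary weights w for fixed theta)."""
    r2 = np.sort(residuals ** 2)          # ascending
    return r2[:len(residuals) - h].sum()  # keep the n - h smallest

r = np.array([1.0, 2.0, 10.0])            # 10.0 acts as an outlier
print(trimmed_squared_loss(r, h=1))       # 1 + 4 = 5.0
```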
In contrast, we consider here a family of $M$-estimators with trimmed \emph{regularization} for general high-dimensional problems. We trim entries of $\th$ that incur the largest penalty using the following program:
\begin{align}\label{EqnTrimmedReg1}
\minimize_{\th \in \Omega, \, \w \in [0,1]^p} \ \ & \L(\th;\Data) + \lam \sum_{j=1}^p w_j |\theta_j| \nonumber\\
\st \ \ & {\bf 1}^\top \w \geq p-h \, .
\end{align}
Defining the order statistics of the parameter $|\theta_{(1)}| > |\theta_{(2)}| > \hdots > |\theta_{(p)}|$, we can partially minimize over $\w$ (setting $w_i$ to $0$ or $1$ based on the size of $|\theta_{i}|$),
and rewrite the reduced version of problem \eqref{EqnTrimmedReg1} in $\th$ alone:
\begin{align}\label{EqnTrimmedReg2}
\minimize_{\th \in \Omega} \ \ & \L(\th;\Data) + \lam \R(\th;h)
\end{align}
where the regularizer $\R(\th; h) := \sum_{j=h+1}^p |\theta_{(j)}|$ is the sum of the $p-h$ smallest absolute entries of $\th$. The constrained version of \eqref{EqnTrimmedReg2}
is equivalent to minimizing a loss subject to a sparsity penalty~\citep{gotoh2017dc}:
$
\minimize_{\th \in \Omega} \L(\th;\Data) \ \st \ \|\th\|_0 \leq h.
$
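For concreteness, the reduced regularizer $\R(\th;h)$ admits a one-line implementation; this NumPy sketch (ours, not the authors') also shows the exact partial minimization over $\w$:

```python
import numpy as np

def trimmed_l1(theta, h):
    """R(theta; h): sum of the p - h smallest absolute entries of theta."""
    a = np.sort(np.abs(theta))            # ascending order statistics
    return a[:len(theta) - h].sum()

def optimal_w(theta, h):
    """Exact minimizer over w in [0,1]^p with 1^T w >= p - h:
    w_j = 0 on the h largest |theta_j|, and 1 elsewhere."""
    w = np.ones_like(theta)
    w[np.argsort(-np.abs(theta))[:h]] = 0.0
    return w

theta = np.array([5.0, -0.1, 2.0, 0.3, -0.05])
print(trimmed_l1(theta, h=2))             # 0.05 + 0.1 + 0.3 = 0.45
print(optimal_w(theta, h=2))              # zeros at the two largest entries
```

Note that $\langle \mathtt{optimal\_w}(\th, h), |\th| \rangle$ reproduces $\R(\th;h)$, which is exactly the reduction from \eqref{EqnTrimmedReg1} to \eqref{EqnTrimmedReg2}.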
For statistical analysis, we focus on the reduced problem~\eqref{EqnTrimmedReg2}. When optimizing, we exploit the
structure of~\eqref{EqnTrimmedReg1}, treating weights $\w$ as auxiliary optimization variables, and derive a new fast algorithm
with a custom analysis that does not use DC structure.
We focus on two key examples: sparse linear models and sparse graphical models. We also present empirical results for trimmed regularization of deep learning tasks to show that the
ideas and methods generalize well to these areas.
\paragraph{Example 1: Sparse linear models.} In high-dimensional linear regression, we observe $n$ pairs of a real-valued target $y_i \in \reals$ and its covariates ${\bm x}_i \in \reals^p$ in a linear relationship:
\begin{align}\label{EqnLinearModel}
\y = \X \Tth + \e.
\end{align}
Here, $\y \in \reals^n$, $\X \in \reals^{n\times p}$ and $\e\in \reals^n$ is a vector of $n$ independent observation errors. The goal is to estimate the $k$-sparse vector $\Tth \in \reals^p$. According to~\eqref{EqnTrimmedReg2}, we use the least squares loss function with trimmed $\ell_1$ regularizer (instead of the standard $\ell_1$ norm in Lasso \cite{Tibshirani96}):
\begin{align}\label{EqnLS}
\minimize_{\th \in \reals^{p}} \frac{1}{n} \big\| \X \th - \y \big\|_2^2 + \lam \R(\th;h) .
\end{align}
\paragraph{Example 2: Sparse graphical models.}
GGMs form a powerful class of statistical models for representing distributions over a set of variables~\citep{Lauritzen96}, using undirected graphs to encode conditional independence conditions among variables.
In the high-dimensional setting, graph sparsity constraints are particularly pertinent for estimating GGMs. The most widely used estimator, the graphical Lasso minimizes the negative Gaussian log-likelihood regularized by the $\ell_1$ norm of the entries (or the off-diagonal entries) of the precision matrix (see~\citet{YuaLin07,FriedHasTib2007,BanGhaAsp08}). In our framework, we replace $\ell_1$ norm with its trimmed version:
$\minimize_{\Th \in \mathcal{S}^{p}_{++}} \ \textrm{trace}\big(\Sig \Th \big) -\log\det \big(\Th\big) + \lam \R(\Th_{\textrm{off}};h)$
where $\mathcal{S}^{p}_{++}$ denotes the convex cone of symmetric and strictly positive definite matrices, and $\R(\Th_{\textrm{off}};h)$ denotes the sum of the $p(p-1)-h$ smallest absolute off-diagonal entries.
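As an illustrative sketch (ours, not the authors' code), the trimmed graphical objective can be evaluated as:

```python
import numpy as np

def trimmed_glasso_objective(Theta, Sigma, lam, h):
    """trace(Sigma Theta) - log det(Theta) + lam * R(Theta_off; h)."""
    sign, logdet = np.linalg.slogdet(Theta)
    assert sign > 0, "Theta must be positive definite"
    p = Theta.shape[0]
    off = np.abs(Theta[~np.eye(p, dtype=bool)])   # the p(p-1) off-diagonals
    penalty = np.sort(off)[:off.size - h].sum()   # p(p-1) - h smallest
    return np.trace(Sigma @ Theta) - logdet + lam * penalty
```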
\paragraph{Relationship with SLOPE (OWL) penalty.}
Trimmed regularization has an apparent resemblance to the SLOPE (or OWL) penalty~\citep{bogdan2015slope,figueiredo2014sparse}, but the two are in fact distinct and pursue different goals. Indeed, the SLOPE penalty can be written as $\sum_{i=1}^p w_i |\beta_{(i)}|$ for a \emph{fixed} set of weights $w_1 \geq w_2 \geq \cdots \geq w_p \geq 0$ and where $|\beta_{(1)}|>|\beta_{(2)}|>\cdots> |\beta_{(p)}|$ are the sorted entries of $\bm\beta.$ SLOPE is convex and penalizes most heavily the parameter entries with {\it largest amplitude}, while trimmed regularization is generally non-convex and only penalizes entries with {\it smallest amplitude}; moreover, its weights are optimization variables. While the goal of trimmed regularization is to alleviate bias, SLOPE is akin to a significance test where top-ranked entries are subjected to a ``tougher'' threshold, and has been employed for clustering strongly correlated variables~\citep{figueiredo2014sparse}. Finally, from a robust optimization standpoint, trimmed regularization can be viewed as using an optimistic (min-min) model of uncertainty and SLOPE a pessimistic (min-max) counterpart. We refer the interested reader to~\citet{bertsimas2017trimmed} for an in-depth exploration of these connections.
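The contrast is easy to see numerically (our own sketch; the SLOPE weights below are arbitrary illustrative values):

```python
import numpy as np

def slope_penalty(beta, weights):
    """SLOPE: fixed non-increasing weights applied to entries sorted by
    decreasing magnitude -- the largest entries are penalized most."""
    return np.sort(np.abs(beta))[::-1] @ weights

def trimmed_l1_penalty(beta, h):
    """Trimmed l1: only the p - h smallest-magnitude entries are penalized."""
    return np.sort(np.abs(beta))[:len(beta) - h].sum()

beta = np.array([4.0, -0.2, 1.0, 0.1])
w = np.array([1.0, 0.75, 0.5, 0.25])      # hypothetical SLOPE weights
print(slope_penalty(beta, w))             # 4*1 + 1*0.75 + 0.2*0.5 + 0.1*0.25
print(trimmed_l1_penalty(beta, h=2))      # 0.2 + 0.1
```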
\paragraph{Relationship with $\ell_0$ regularization.}
The $\ell_0$ norm can be written as $\|\bm\theta\|_0 = \sum_{j=1}^{p} z_j$ with reparameterization $\theta_j = z_j \tilde\theta_j$ such that $z_j \in \{0, 1\}$ and $\tilde\theta_j \neq 0$. ~\citet{Louizos18} suggest a smoothed version via continuous relaxation on $\bm{z}$ in a variational inference framework. The variable $\bm{z}$ plays a similar role to $\bm{w}$ in our formulation in that they both learn sparsity patterns. In Section~\ref{Sec:Exp} we consider a Bayesian extension of the trimmed regularization problem where only $\bm\theta$ is treated as Bayesian, since we can optimize $\bm{w}$ without any approximation, in contrast to previous work which needs to relax the discrete nature of $\bm{z}$.
\section{Statistical Guarantees of $M$-Estimators with Trimmed Regularization}
Our goal is to estimate the \emph{true} $k$-sparse parameter vector (or matrix) $\Tth$ that is the minimizer of expected loss: $\Tth := \argmin_{\th \in \Omega} \E[\L(\th)]$. We use $\Supp$ to denote the support set of $\Tth$, namely the set of non-zero entries (i.e., $k = |\Supp|$). In this section, we derive support recovery, $\ell_\infty$ and $\ell_2$ guarantees under the following standard assumptions:
\begin{enumerate}[leftmargin=0.5cm, itemindent=0.65cm,label=\textbf{(C-$\bf{\arabic*}$)}, ref=\textnormal{(C-$\arabic*$)},start=1]
\item The loss function $\L$ is differentiable and convex.\label{Con:diff}
\vspace{-.2cm}\item {\bf (Restricted strong convexity on $\th$)} Let $\errtSet$ be the possible set of error vector on the parameter $\th$. Then, for all $\errt := \th - \Tth \in \errtSet$,
$\Biginner{\nabla \L(\Tth + \errt) - \nabla\L(\Tth) }{\errt} \geq \, \RSCcon \|\errt\|_2^2 - \RSCtolOne \frac{\log p}{n}\|\errt\|_1^2$,
where $\RSCcon$ is a ``curvature'' parameter, and $\RSCtolOne$
is a ``tolerance'' constant.\label{Con:rsc}
\end{enumerate}
In the high-dimensional setting ($p>n$), the loss function $\L$ cannot be strongly convex in general. \ref{Con:rsc} imposes strong curvature only in some limited directions where the ratio $\frac{\|\errt\|_1}{\|\errt\|_2}$ is small. This condition has been extensively studied and known to hold for several popular high dimensional problems (see \citet{Raskutti2010,NRWY12,LW15} for instance). The convexity condition of $\L$ in \ref{Con:diff} can be relaxed as shown in \cite{LW17}. For clarity, however, we focus on convex loss functions.
We begin with $\ell_\infty$ guarantees. We use a primal-dual witness (PDW) proof technique, which we adapt to the trimmed regularizer $\R(\th;h)$. The PDW method has been used to analyze the support set recovery of $\ell_1$ regularization \citep{Wainwright2006new,YRAL13} as well as decomposable and amenable non-convex regularizers \citep{LW17}. However, the trimmed regularizer $\R(\th;h)$ is neither decomposable nor amenable, thus the results of~\citet{LW17} cannot be applied. The key step of PDW is to build a restricted program: Let $\Nonreg$ be an arbitrary subset of $\{1,\hdots,p\}$ of size $h$. Denoting $\USupNonreg := \Supp \cup \Nonreg$ and $\DSupNonreg := \Supp - \Nonreg$, we consider the following restricted program:
$\Gth \in \argmin_{\th \in \reals^{\USupNonreg} : \ \th \in \Omega} \ \L(\th) + \lam \R(\th;h)$
where we fix $\Gth_j = 0$ for all $j \in \USupNonregC$. We further construct the dual variable $\Gz$ to satisfy the zero sub-gradient condition
\begin{align}\label{EqnPDW}
\nabla \L(\Gth) + \lam \Gz = 0
\end{align}
where $\Gz = (0, \Gz_{\DSupNonreg}, \Gz_{\USupNonregC})$ for $\Gth = (\Gth_{\Nonreg}, \Gth_{\DSupNonreg},0_{\USupNonregC})$ (after re-ordering indices properly) and $\Gz_{\DSupNonreg} \in \partial \|\Gth_{\DSupNonreg}\|_1$. We suppress the dependency on $\Nonreg$ in $\Gz$ and $\Gth$ for clarity. In order to derive the final statement, we will establish the strict dual feasibility of $\Gz_{\USupNonregC}$, i.e., $\|\Gz_{\USupNonregC}\|_\infty < 1$.
The following theorem describes our main theoretical result concerning \emph{any} local optimum of the non-convex program \eqref{EqnTrimmedReg2}. The theorem guarantees, under strict dual feasibility, that the non-relevant parameters of any local optimum have smaller absolute values than the relevant parameters; hence the relevant parameters are not penalized (as long as $h \geq k$).
\begin{theorem}\label{ThmSupp}
Consider the problem with trimmed regularizer \eqref{EqnTrimmedReg2} that satisfies \ref{Con:diff} and \ref{Con:rsc}. Let $\Lth$ be any local minimum of \eqref{EqnTrimmedReg2} with a sample size $n \geq \frac{2\RSCtolOne}{\RSCcon} (k+h)\log p$ and $\lam \geq 2 \|\nabla \L(\Tth) \|_\infty$. Suppose that:
\begin{enumerate}[leftmargin=0.25cm, itemindent=0.45cm,label=(\alph*)]
\item given {\bf any} selection of $\Nonreg \subseteq \{1,\hdots,p\}$ s.t. $|\Nonreg|=h$, the dual vector $\Gz$ from the PDW construction \eqref{EqnPDW} satisfies the strict dual feasibility with some $\delta \in \left(0, 1\right]$,
$\|\Gz_{\USupNonregC}\|_\infty \leq 1 - \delta$
where $\USupNonreg$ is the union of true support $S$ and $\Nonreg$,
\vspace{-.2cm}\item letting $\Q : = \int_0^1 \nabla^2 \L\big(\Tth + t(\Gth - \Tth) \big) dt$, the minimum absolute value $\Tth_{\min} := \min_{j\in \Supp} |\Tth_j|$ is lower bounded by
$\frac{1}{2}\Tth_{\min} \geq \| (\Qs)^{-1} \nabla\L(\Tth)_\USupNonreg \|_{\infty} + \lam \matnorm{(\Qs)^{-1}}{\infty}$ where $\matnorm{\cdot}{\infty}$ denotes the maximum absolute row sum of the matrix.
\end{enumerate}
Then, the following properties hold:
\begin{enumerate}[leftmargin=0.25cm, itemindent=0.45cm,label=(\arabic*)]
\item For every pair $j_1 \in \Supp, j_2 \in \SuppC$, we have $|\Lth_{j_1}| > |\Lth_{j_2}|$,
\vspace{-.4cm}\item If $h < k$, all $j \in \SuppC$ are successfully estimated as zero and $\|\Lth -\Tth\|_\infty$ is upper bounded by
\begin{align}\label{EqnThmInfty}
\big\| \big(\TQ\big)^{-1} \nabla\L(\Tth)_{\Supp} \big\|_{\infty} + \lam \matnormbig{\big(\TQ\big)^{-1}}{\infty} ,
\end{align}
\vspace{-.4cm}\item If $h \geq k$, at least the $p-h$ smallest (in absolute value) entries in $\SuppC$ are estimated exactly as zero and we have a simpler (possibly tighter) bound:
\begin{align}\label{EqnThmInftyTight}
\|\Lth -\Tth\|_\infty \leq \big\| \big(\GQs\big)^{-1} \nabla\L(\Tth)_{\GUSupNonreg} \big\|_{\infty}
\end{align}
where $\GUSupNonreg$ is defined as the $h$ largest absolute entries of $\Lth$ including $\Supp$.
\end{enumerate}
\end{theorem}
\paragraph{Remarks.} The above theorem will be instantiated for the specific cases of sparse linear and sparse graphical models in subsequent corollaries (for which we will bound terms involving $\nabla\L(\Tth)$, $\Gz$ and $\Q$). Although conditions (a) and (b) in Theorem \ref{ThmSupp} appear more stringent than in the case $h=0$ (vanilla Lasso), we will see in the corollaries that they hold uniformly over all selections of $\Nonreg$, with asymptotically the same probability as for $h=0$.
Note also that for $h=0$, we recover the results for the vanilla $\ell_1$ norm. Furthermore, by the statement $(1)$ in the theorem, if $h<k$, $\GUSupNonreg$ only contains relevant feature indices and some relevant features are not penalized. If $h \geq k$, $\GUSupNonreg$ includes all relevant indices (and some non-relevant indices). In this case, the second term in \eqref{EqnThmInfty} disappears, but the term $\big\|\big(\GQs\big)^{-1} \nabla\L(\Tth)_{\GUSupNonreg} \big\|_{\infty}$ increases as $|\GUSupNonreg|$ gets larger. Moreover, the condition that $n \asymp (k+h) \log p$ will be violated as $h$ approaches $p$. While we do not know the true sparsity $k$ a priori in many problems, we implicitly assume that we can set $h \asymp k$ (i.e., by cross-validation).
Now we turn to $\ell_2$ bound under the same conditions:
\begin{theorem}\label{ThmL2}
Consider the problem with trimmed regularizer \eqref{EqnTrimmedReg2} where all conditions in Theorem \ref{ThmSupp} hold. Then, for any local minimum of \eqref{EqnTrimmedReg2}, the parameter estimation error in terms of $\ell_2$ norm is upper bounded: for some constant $\Cltwo$,
\[
\| \Lth - \Tth \|_2 \leq
\begin{dcases*}
\Cltwo \lam \left(\sqrt{k}/2 + \sqrt{k-h}\right) & if $h < k$\\
\Cltwo \lam \sqrt{h}/2 & otherwise
\end{dcases*}
\]
\end{theorem}
\paragraph{Remarks.}
The benefit of using trimmed $\ell_1$ over standard $\ell_1$ can be clearly seen in Theorem \ref{ThmL2}. Even though both have the same asymptotic convergence rates (in fact, standard $\ell_1$ is already information theoretically optimal in many cases such as high-dimensional least squares), trimmed $\ell_1$ has a smaller constant: $\frac{3\Cltwo \lam\sqrt{k}}{2}$ for standard $\ell_1$ ($h=0$) vs. $\frac{\Cltwo \lam \sqrt{k}}{2}$ for trimmed $\ell_1$ ($h=k$).
Comparing with non-convex $(\mu,\gamma$)-amenable regularizers SCAD or MCP, we can also observe that the estimation bounds are asymptotically the same: $\| \Lth - \Tth \|_{\infty} \leq c \| (\TQ)^{-1} \nabla\L(\Tth)_{\Supp} \|_{\infty}$ and $\| \Lth - \Tth \|_2 \leq c \lam \sqrt{k}$. However, the constant $c$ here for those regularizers might be too large if $\mu$ is not small enough, since it involves $\frac{1}{\RSCcon - \mu}$ term (vs. $\frac{1}{\RSCcon}$ for the trimmed $\ell_1$.)
Moreover amenable non-convex regularizers require the additional constraint $\|\th\|_1 \leq R$ in their optimization problems for theoretical guarantees, along with further assumptions on $\Tth$ and tuning parameter $R$,
and the true parameter must be feasible for their modified program (see \citet{LW17}).
The condition $\|\bm \theta^*\|_1 \le R$ is stringent with respect to the analysis: as $p$ and $k$ increase, in order for $R$ to remain constant, $\|\bm \theta^*\|_\infty$ must shrink to get satisfactory theoretical bounds.
In contrast, while choosing the trimming parameter $h$ requires cross-validation, it is possible to set $h$ on a similar order as $k$.
We are now ready to apply our main theorem to the popular high-dimensional problems introduced in Section \ref{Sec:Setup}: sparse linear regression and sparse graphical model estimation. Due to space constraint, the results for sparse graphical models are provided in the supplementary materials.
\subsection{Sparse Linear Regression} \label{sec:slr}
Motivated by the information-theoretic bound for arbitrary methods, all previous analyses of sparse linear regression assume $n \geq c_0 k \log p$ for a sufficiently large constant $c_0$. We likewise assume $n \geq c_0 \max\{k,h\} \log p$, which is of the same order provided $h \asymp k$.
\begin{corollary}\label{CorLS}
Consider the model \eqref{EqnLinearModel} where $\e$ is sub-Gaussian. Suppose we solve \eqref{EqnLS} with the selection of:
\begin{enumerate}[leftmargin=0.25cm, itemindent=0.45cm,label=(\alph*)]
\item $\lam \geq \cLStwo \sqrt{\frac{\log p}{n}}$ for some constant $\cLStwo$ depending only on the sub-Gaussian parameters of $X$ and $\e$
\item $h$ satisfying: for any selection of $\Nonreg \subseteq [p] \text{ s.t. } |\Nonreg| = h$,
\begin{align}\label{EqnLSIncoh}
& \matnormBig{\big(\GG^{-1}\big)_{\USupNonreg \USupNonreg}}{\infty} \leq \cLSone, \qquad \matnormBig{\GG_{\USupNonregC\USupNonreg} \Big(\GG_{\USupNonreg\USupNonreg}\Big)^{-1}}{\infty} \leq \eta, \nonumber\\
&\max\Big\{\lambda_{\max}(\GG_{\USupNonregC \USupNonregC}),\lambda_{\max}\big((\GG_{\USupNonreg \USupNonreg})^{-1}\big)\Big\} \leq \cLSsix
\end{align}
where $\GG = \frac{\X^\top X}{n}$ is the sample covariance matrix and $\lambda_{\max}$ is the maximum singular value of a matrix.
\end{enumerate}
Further suppose $\frac{1}{2}\Tth_{\min} \geq \cLSthree \sqrt{\frac{\log p}{n}} + \lam \cLSone$ for some constant $\cLSthree$.
Then with high probability at least $1-\cLSfour \exp (-\cLSfive \log p)$, any local minimum $\Lth$ of \eqref{EqnLS} satisfies
\begin{enumerate}[leftmargin=0.25cm, itemindent=0.45cm,label=(\alph*)]
\item for every pair $j_1 \in \Supp, j_2 \in \SuppC$, we have $|\Lth_{j_1}| > |\Lth_{j_2}|$,
\item if $h < k$, all $j \in \SuppC$ are successfully estimated as zero and we have
\begin{align*}
& \|\Lth -\Tth\|_\infty \leq \cLSthree \sqrt{\frac{\log p}{n}} + \lam \cLSone , \nonumber\\
& \| \Lth - \Tth \|_2 \leq c_4 \sqrt{\frac{\log p}{n}} \left(\sqrt{k}/2 + \sqrt{k-h}\right) \, .
\end{align*}
\item if $h \geq k$, at least the $p-h$ smallest entries in $\SuppC$ are estimated exactly as zero and we have
\begin{align*}
\|\Lth -\Tth\|_\infty \leq \cLSthree \sqrt{\frac{\log p}{n}} , \ \ \| \Lth - \Tth \|_2 \leq \frac{c_4}{2} \sqrt{\frac{h \log p }{n}} \, .
\end{align*}
\end{enumerate}
\end{corollary}
\paragraph{Remarks.} The conditions in Corollary \ref{CorLS} are also used in previous work and may be shown to hold with high probability via standard concentration bounds for sub-Gaussian matrices. In particular \eqref{EqnLSIncoh} is known as an incoherence condition for sparse least square estimators \citep{Wainwright06_new}. In the case of vanilla Lasso, estimation will fail if the incoherence condition is violated \citep{Wainwright06_new}.
In contrast, we confirm by simulations in Section \ref{Sec:Exp} that the trimmed $\ell_1$ problem \eqref{EqnLS} can succeed even when this condition is not met. Therefore we conjecture that the incoherence condition could be relaxed in our case, similarly to the case of non-convex $\mu$-amenable regularizers such as SCAD or MCP \citep{LW17}. Proving this conjecture is highly non-trivial, since our penalty is based on a sum of absolute values, which is not $\mu$-amenable; we leave the proof for future work.
\begin{algorithm}[t]
\caption{Block Coordinate Descent for \eqref{EqnTrimmedReg1}}
\label{alg:bcd}
\begin{algorithmic}
\State {\bfseries Input:} $\lambda$, $\eta$, and $\tau$.
\State {\bfseries Initialize:} $\bm \theta^0$, $\bm w^0$, and $k=0$.
\While{not converged}
\Let{$\bm w^{k+1}$}{$\mathrm{proj}_{\cS}[\bm w^k - \tau \bm r(\bm \theta^{k})]$}
\Let{$\bm \theta^{k+1}$}{$\mathrm{prox}_{\eta\lambda\R(\cdot,\bm w^{k+1})}[\bm \theta^k - \eta \nabla \L(\bm\theta^k)]$}
\Let{$k$}{$k+1$}
\EndWhile
\State {\bfseries Output:} $\bm \theta^k$, $\bm w^k$.
\end{algorithmic}
\end{algorithm}
We develop and analyze a block coordinate descent algorithm for solving objective \eqref{EqnTrimmedReg1}, which is a highly non-convex problem because of the coupling of $\bm w$ and $\bm\theta$ in the regularizer.
The block-coordinate descent algorithm uses simple nonlinear operators:
\[
\begin{aligned}
\mathrm{proj}_{\mathcal{S}}(\bm z) &:= \arg\min_{\bm w \in \mathcal{S}} \frac{1}{2}\|\bm z - \bm w\|^2\\
\mathrm{prox}_{\eta\lambda \mathcal{R}(\cdot, \bm w^{k+1})} (\bm z) &:= \arg\min_{\bm \theta} \frac{1}{2\eta\lambda} \|\bm \theta - \bm z \|^2 + \sum_{j=1}^p w_j^{k+1} |\theta_{j}|
\end{aligned}
\]
Adding a block of weights $\bm w$ decouples the problem into simply computable pieces.
Projection onto a polyhedral set is straightforward, while the prox operator is a weighted soft thresholding step.
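For the trimmed Lasso \eqref{EqnLS}, the two updates can be sketched in NumPy as follows. This is our own minimal illustration, not the authors' implementation: we project onto the equality-constrained version of $\cS$ via bisection on the Lagrange multiplier, and the step-size defaults are illustrative.

```python
import numpy as np

def proj_capped_simplex(z, s):
    """Euclidean projection onto {w : 0 <= w <= 1, sum(w) = s},
    by bisection on the Lagrange multiplier tau."""
    lo, hi = z.min() - 1.0, z.max()
    for _ in range(100):
        tau = 0.5 * (lo + hi)
        if np.clip(z - tau, 0.0, 1.0).sum() > s:
            lo = tau          # too much mass kept: increase tau
        else:
            hi = tau
    return np.clip(z - 0.5 * (lo + hi), 0.0, 1.0)

def soft_threshold(z, t):
    """Prox of the weighted l1 norm (entrywise thresholds t >= 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def trimmed_lasso_bcd(X, y, lam, h, tau=1.0, iters=500):
    """Block coordinate descent for (1/n)||X theta - y||^2 + lam <w, |theta|>."""
    n, p = X.shape
    eta = n / (2.0 * np.linalg.norm(X, 2) ** 2)   # 1 / L_f (spectral norm)
    theta, w = np.zeros(p), np.full(p, (p - h) / p)
    for _ in range(iters):
        # w-step: projected gradient on the linear term <w, r(theta)>
        w = proj_capped_simplex(w - tau * np.abs(theta), p - h)
        # theta-step: gradient step on the loss, then weighted soft-thresholding
        grad = (2.0 / n) * X.T @ (X @ theta - y)
        theta = soft_threshold(theta - eta * grad, eta * lam * w)
    return theta, w
```

On a noiseless toy problem, the $h$ largest entries escape shrinkage (their weights are driven to zero) while the remaining entries are thresholded toward zero.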
We analyze Algorithm~\ref{alg:bcd} using the structure of~\eqref{EqnTrimmedReg1} instead of relying on the DC formulation for~\eqref{EqnTrimmedReg2}.
The convergence analysis is summarized in Theorem~\ref{th:con} below. The analysis centers on the
general objective function
\begin{equation}
\label{eq:model}
\min_{\bm \theta, \bm w} F(\bm \theta, \bm w) := \L(\bm \theta) + \lambda \sum_{i=1}^p w_i r_i(\bm \theta) + \delta(\bm w| \cS),
\end{equation}
where $\delta(\bm w| \cS)$ enforces $w \in \cS$. We let
\[
\bm r(\bm \theta) = \begin{bmatrix}
r_1(\bm \theta)&
\dots&
r_p(\bm \theta)
\end{bmatrix}^T, \quad \R(\bm \theta, \bm w) = \inner{\bm w}{\bm r(\bm \theta)}.
\]
\textcolor{black}{In the case of trimmed $\ell_1,$ $r$ is the $\ell_1$ norm, $r_i(x)=|x_i|$ and $\cS$ encodes
the constraints $0 \leq w_i \leq 1$, $\bm 1^T\bm w= p-h$.}
We make the following assumptions.
\begin{assumption}
\label{assumption}
(a) $\L$ is a smooth closed convex function with an $L_f$-Lipschitz continuous gradient; (b) $r_i$ are convex and $L_r$-Lipschitz continuous; and
(c) $\cS$ is a closed convex set and $F$ is bounded below.
\end{assumption}
In the non-convex setting, we do not have access to distances to optimal iterates or best function values,
as we do for strongly convex and convex problems. Instead, we use distance to stationarity to analyze the algorithm.
Objective~\eqref{eq:model} is highly non-convex, so we design a stationarity criterion, which
goes to $0$ as we approach stationary points. The analysis then shows
Algorithm~\ref{alg:bcd} drives this measure to $0$, i.e., the algorithm converges to stationarity.
In our setting, every stationary point of~\eqref{EqnTrimmedReg1}
corresponds to a local optimum in $\bm w$ with $\bm \theta$ fixed,
and a local optimum in $\bm \theta$ with $\bm w$ fixed.
\begin{definition}[Stationarity]
Define the stationarity condition $T(\bm \theta, \bm w)$ by
\begin{equation}
\label{eq:stationarity}
\begin{aligned}
T(\bm \theta, \bm w) = \min\{\|\bm u\|^2 + \|\bm v\|^2 : &\bm u \in \partial_\theta F(\bm \theta, \bm w),\\
&\bm v \in \partial_w F(\bm \theta, \bm w)\}.
\end{aligned}
\end{equation}
The pair $(\bm \theta, \bm w)$ is a stationary point when $T(\bm \theta, \bm w) = 0$.
\end{definition}
\begin{theorem}
\label{th:con}
Suppose Assumptions~\ref{assumption} (a-c) hold, and define the quantity $\mathcal{G}$ as follows:
\[
\mathcal{G}_k := \frac{L_f}{2}\|\bm \theta^{k+1} - \bm \theta^k\|^2 + \frac{\lambda}{\tau}\|\bm w^{k+1}- \bm w^k\|^2.
\]
With step size $\eta = 1/L_f$, we have,
\[
\begin{aligned}
\min_{k} \mathcal{G}_k &\leq \frac{1}{K}\sum_{k=1}^K\mathcal{G}_k \le \frac{1}{K}(F(\bm \theta^1) - F^*) \\
T(\bm \theta^{k+1}, \bm w^{k+1}) &\le (4 + 2\lambda L_r/L_f) \mathcal{G}_k,
\end{aligned}
\]
and therefore
\[
\min_{k = 1: K} \{T(\bm \theta^{k}, \bm w^{k})\} \leq\frac{4 + 2\lambda L_r/L_f}{K}(F(\bm \theta^1) - F^*).
\]
\end{theorem}
\textcolor{black}{The trimmed $\ell_1$ problem satisfies Assumption~\ref{assumption} and hence Theorem~\ref{th:con} holds.}
Algorithm~\ref{alg:bcd} for~\eqref{EqnTrimmedReg1}
converges at a sublinear rate measured using the distance to stationarity $T$~\eqref{eq:stationarity}, see Theorem~\ref{th:con}.
In the simulation experiments of Section~\ref{Sec:Exp}, we will observe that the iterates converge to very close points regardless of initializations.
\citet{khamaru2018convergence} use similar concepts to analyze their DC-based algorithm, since it is also developed
for a nonconvex model.
We include a small numerical experiment, comparing Algorithm 1
with~Algorithm 2 of~\cite{khamaru2018convergence}.
The authors proposed multiple approaches for DC programs; the prox-type algorithm (Algorithm 2) did particularly well for subset selection, see Figure 2 of~\cite{khamaru2018convergence}.
We generate Lasso simulation data with variables of dimension $500$, and $100$ samples.
The number of nonzero elements in the true generating variable is 10. We take $h=25$,
and apply both Algorithm~\ref{alg:bcd} and Algorithm 2 of \cite{khamaru2018convergence}.
Initial progress of the methods is comparable,
but Algorithm~\ref{alg:bcd} continues at a linear rate to a lower value of the objective, while
Algorithm 2 of~\cite{khamaru2018convergence} tapers off at a higher objective value.
We consistently observe this phenomenon for a broad range of settings, regardless of hyperparameters;
see convergence comparisons in Figure~\ref{fig:alg_compare} for $\lambda \in \{0.5, 5, 20\}$.
This comparison is very brief; we leave a detailed study comparing Algorithm~\ref{alg:bcd} with
DC-based algorithms to future algorithmic work, along with further analysis of Algorithm~\ref{alg:bcd} and its variants
under the Kurdyka-Lojasiewicz assumption~\citep{attouch2013convergence}.
\section{Experimental Results}\label{Sec:Exp}
\paragraph{Simulations for sparse linear regression.}
\begin{figure}[t]
\vskip 0pt
\centering
\subfigure[$\lambda = 0.5$]{\includegraphics[width=0.32\linewidth]{figs/obj_his1.pdf}}
\subfigure[$\lambda = 5$]{\includegraphics[width=0.32\linewidth]{figs/obj_his2.pdf}}
\subfigure[$\lambda = 20$]{\includegraphics[width=0.32\linewidth]{figs/obj_his4.pdf}}
\caption{Convergence of Algorithm~\ref{alg:bcd} (blue solid) vs. Algorithm 2 of \cite{khamaru2018convergence} (orange dot). We see consistent results across parameter settings.}
\label{fig:alg_compare}
\end{figure}
\begin{figure}[t]
\setlength{\belowcaptionskip}{10pt}
\centering
\subfigure[$p=128, k=8$]{\includegraphics[width=0.32\linewidth]{figs/linreg_recover_rate_incoherent_128.pdf}}
\subfigure[$p=256, k=16$]{\includegraphics[width=0.32\linewidth]{figs/linreg_recover_rate_incoherent_256.pdf}}
\subfigure[$p=512, k=32$]{\includegraphics[width=0.32\linewidth]{figs/linreg_recover_rate_incoherent_512.pdf}}
\subfigure[Stationarity]{\includegraphics[width=0.32\linewidth]{figs/stationary_incoherent.pdf}}
\subfigure[$\log \ell_2$-errors]{\includegraphics[width=0.32\linewidth]{figs/l2_error_incoherent.pdf}}
\caption{Results for the incoherent case of the first experiments. \textbf{(a)}$\sim$\textbf{(c)}: Probability of successful support recovery for Trimmed $\ell_1$, SCAD, MCP, and standard $\ell_1$ as sample size $n$ increases. For \textbf{(d)}, \textbf{(e)}, we adopt the high-dimensional setting with $(n, p, k) = (160, 256, 16)$, and use 50 random initializations.}
\label{Fig:incoherence_reg}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[$p=128, k=8$]{\includegraphics[width=0.32\linewidth]{figs/linreg_recover_rate_nonincoherent_128.pdf}}
\subfigure[$p=256, k=16$]{\includegraphics[width=0.32\linewidth]{figs/linreg_recover_rate_nonincoherent_256.pdf}}
\subfigure[$p=512, k=32$]{\includegraphics[width=0.32\linewidth]{figs/linreg_recover_rate_nonincoherent_512.pdf}}
\subfigure[Stationarity]{\includegraphics[width=0.32\linewidth]{figs/stationary_nonincoherent.pdf}}
\subfigure[$\log \ell_2$-errors ]{\includegraphics[width=0.32\linewidth]{figs/l2_error_nonincoherent.pdf}}
\caption{Results for the non-incoherent case. \textbf{(a)}$\sim$\textbf{(e)}: same as Figure~\ref{Fig:incoherence_reg}.}
\label{Fig:nonincoherence_reg}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[Small Regime]{\includegraphics[width=0.32\linewidth]{figs/trim_vs_standard_large_lambda.pdf}}
\subfigure[Non-incoherent]{\includegraphics[width=0.32\linewidth]{figs/linreg_h_versus_p_nonincoherent.pdf}}
\subfigure[Incoherent]{\includegraphics[width=0.32\linewidth]{figs/linreg_h_versus_p_incoherent.pdf}}
\caption{Plots for the third and last experiments. \textbf{(a)}: Trimmed Lasso versus the standard Lasso in a small regime of $\lambda$. We set $h = \lceil 0.05p \rceil$. \textbf{(b)}, \textbf{(c)}: Performance of the trimmed Lasso as the value of $h$ varies.}
\label{Fig:exp_figure}
\end{figure}
We design four experiments. For all experiments except the third one, where we investigate the effect of small regularization parameters, we choose the regularization parameters via cross-validation from the set $\log_{10}\lambda$ $\in$ $\{-3.0, -2.8, \ldots, 1.0\}$. For non-convex penalties requiring an additional parameter, we fix the values (2.5 for MCP and 3.0 for SCAD, respectively), since the results are not sensitive to these choices. When we generate feature vectors, we consider two different covariance matrices of the normal distribution, as introduced in \citet{LW17}, to see how the regularizers are affected by the incoherence condition.
In our first experiment, we generate i.i.d. observations from $x_i \sim N(0, M_2(\theta))$ where $M_2(\theta) = \theta\mathbf{11}^T+(1-\theta)I_p$ with $\theta$ = 0.7.\footnote{$M_1$ and $M_2$ as defined in \citet{LW17}.} This choice of $M_2(\theta)$ satisfies the incoherence condition~\citep{LW17}.
We give non-zero values $\beta^*$ with the magnitude sampled from $N(0, 5^2)$, at $k$ random positions, and the response variables are generated by $y_i = x_i^T\beta^* + \epsilon_i$, where $\epsilon_i \sim N(0, 1^2)$.
In Figure~\ref{Fig:incoherence_reg} (a) $\sim$ (c), we set $(p, k) = (128, 8), (256, 16), (512, 32)$ and increase the sample size $n$.
The probability of correct support recovery for trimmed Lasso is higher than that of the baselines for all sample sizes in all cases.
Figure \ref{Fig:incoherence_reg}(d) corroborates Corollary \ref{CorLS}: any local optimum with trimmed $\ell_1$ is close to points with correct support regardless of initialization;
see comparisons against baselines with the same setting in Figure \ref{Fig:incoherence_reg}(e).
In the second experiment, we replace $M_2(\theta)$ with $M_1(\theta)$, which does not satisfy the incoherence condition.\footnote{$M_1(\theta)$ is a matrix with $1$'s on the diagonal, $\theta$'s in the first $k$ positions of the $(k+1)^{\text{st}}$ row and column, and $0$'s elsewhere.}
Trimmed Lasso still outperforms the comparison approaches (Figure~\ref{Fig:nonincoherence_reg}).
Lasso is omitted from Figure \ref{Fig:nonincoherence_reg}(e) as it always fails in this setting.
Our next experiment compares Trimmed Lasso against vanilla Lasso where both $\lambda$ and the true non-zeros are small: $\log_{10}\lambda \in \{-3.0, -2.8, \ldots, -1.0\}$ and $\beta^* \sim N(0, 0.8^2)$. When the magnitude of $\Tth$ is large, standard Lasso tends to choose a small value of $\lambda$ to reduce the bias of the estimate, while Trimmed Lasso gives good performance even for large values of $\lambda$ as long as $h$ is chosen suitably. Figure~\ref{Fig:exp_figure}(a) also confirms the superiority of Trimmed Lasso in a small regime of $\lambda$ with a proper choice of $h$.
In the last experiment, we investigate the effect of choosing the trimming parameter $h$. Figure~\ref{Fig:exp_figure}(b) and (c) show that Trimmed $\ell_1$ outperforms if we set $h = k$ (note $(p-h)/p \approx 0.94$). As $h \downarrow 0$ (when $(p-h)/p = 1$), the performance approaches that of Lasso, as we can see in Corollary \ref{CorLS}.
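The interpolation between $h=k$ and $h=0$ follows directly from the definition of the trimmed $\ell_1$ penalty as the sum of the $p-h$ smallest absolute coefficients. The helper below is an illustrative sketch with toy values, not our experiment code:

```python
import numpy as np

def trimmed_l1(beta, h):
    """Trimmed l1 penalty: sum of the p - h smallest |beta_j|.

    The h largest-magnitude entries are exempt from penalization.
    """
    a = np.sort(np.abs(beta))          # magnitudes in ascending order
    return a[:len(beta) - h].sum()     # drop the h largest

beta = np.array([5.0, -4.0, 0.3, -0.2, 0.1])

# h = 0 recovers the standard l1 norm (Lasso penalty).
assert np.isclose(trimmed_l1(beta, 0), np.abs(beta).sum())

# h = k = 2 leaves the two large "true" coefficients unpenalized.
print(trimmed_l1(beta, 2))  # 0.3 + 0.2 + 0.1 = 0.6
```

With $h = 0$ nothing is exempt from penalization and the penalty reduces to the Lasso, matching the limiting behavior observed in Figure~\ref{Fig:exp_figure}(b) and (c).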
Additional experiments on sparse Gaussian Graphical Models are provided as supplementary materials.
\paragraph{Input Structure Recovery of Compact Neural Networks.}
We apply the Trimmed $\ell_1$ regularizer to recover input structures of deep models.
We follow~\citet{Oymak18} and consider the regression model $y_i = \bm{1}^T\sigma(\bm{W}^*\bm{x}_i)$ with input dimension $p = 80$, hidden dimension $z = 20$, and ReLU activation $\sigma(\cdot)$. We generate i.i.d. data $\bm{x}_i \sim N(0, I_p)$ and $\bm{W}^* \in \mathbb{R}^{z \times p}$ whose $i$th row has exactly 4 non-zero entries, located at positions $4(i-1) + 1 \sim 4i$, with magnitudes drawn from $N(0, \frac{p}{4z})$ to ensure that $\mathbb{E}[\|\bm{W}^*\bm{x}\|_{\ell_2}^2] = \|\bm{x}\|_{\ell_2}^2$.
For $\ell_0$ and $\ell_1$ regularizations, we optimize $\bm{W}$ using a projected gradient descent with prior knowledge of $\|\bm{W}^*\|_0$ and $\|\bm{W}^*\|_1$, and we use Algorithm \ref{alg:bcd} for trimmed $\ell_1$ regularization with $h = 4z$ and $(\lambda, \tau) = (0.01, 0.1)$ obtained by cross-validation.
We set the step size $\eta = 0.1$ for all approaches.
We consider two sets of simulations with varying sample size $n$ where the initial $\bm{W}_0$ is selected as (a) a small perturbation of $\bm{W}^*$ and (b) at random, as in \citet{Oymak18}.
Figure~\ref{Fig:neural_net} shows the results where black dots indicate nonzero values in the weight matrix, and we can confirm that Trimmed $\ell_1$ outperforms alternatives in terms of support recovery for both cases.
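For the $\ell_0$ baseline, the projection step in projected gradient descent is the standard hard-thresholding projection onto the $\ell_0$ ball of radius $s = \|\bm{W}^*\|_0$. A minimal sketch under that assumption (ties at the cutoff may retain extra entries):

```python
import numpy as np

def project_l0(W, s):
    """Project W onto the l0 ball {W : ||W||_0 <= s} by hard thresholding:
    keep the s largest-magnitude entries and zero out the rest."""
    flat = W.ravel().copy()
    if s < flat.size:
        # s-th largest magnitude is the retention cutoff
        cutoff = np.partition(np.abs(flat), -s)[-s]
        flat[np.abs(flat) < cutoff] = 0.0
    return flat.reshape(W.shape)

W = np.array([[0.9, -0.1, 0.05],
              [-0.02, 1.2, 0.3]])
print(project_l0(W, 3))  # keeps 0.9, 1.2, 0.3; zeros the rest
```

Each projected-gradient iteration would take a gradient step on the loss and then apply this projection to restore feasibility.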
\begin{figure}[t]
\centering
\subfigure[with good initialization (small perturbation from true signal)]
{\includegraphics[width=\linewidth]{figs/good_initialization.png}}
\subfigure[with random initialization]
{\includegraphics[width=\linewidth]{figs/random_initialization.png}}
\caption{Results for sparsity pattern recovery of deep models.}
\label{Fig:neural_net}
\end{figure}
\paragraph{Pruning Deep Neural Networks.}
Several recent studies have shown that neural networks are highly over-parameterized, and we can prune the weight parameters/neurons with marginal effect on performance.
To this end, we consider trimmed-regularization-based network pruning. Suppose we have a deep neural network with $L$ hidden layers. Let $n_i$ be the number of neurons in layer $\bm{h}_i$. The parameters of interest are $\mathcal{W} \coloneqq \{\bm{\theta}_l, \bm{b}_l\}_{l=1}^{L+1}$ for $\bm\theta_l \in \mathbb{R}^{n_{l-1} \times n_l}$ and $\bm{b}_l \in \mathbb{R}^{n_l}$, where $\bm{h}_0$ is the input feature $\bm{x}$ and $\bm{h}_{L+1}$ is the output $\bm{y}$. Then, for $l=1,\hdots, L$, $\bm{h}_l = \text{ReLU}(\bm{h}_{l-1} \bm\theta_l + \bm{b}_l)$. Since edge-wise pruning gives no actual computational benefit, we prune unnecessary \emph{neurons} through group-sparsity-encouraging regularizers. Specifically, given the weight parameter $\bm\theta \coloneqq \bm\theta_l$ between $\bm{h}_{l-1}$ and $\bm{h}_l$, we consider the group-norm extension of trimmed $\ell_1$:
\[
\mathcal{R}_l(\bm\theta, \bm{w}) \coloneqq \lambda \sum_{j=1}^{n_{l-1}} w_j \sqrt{\theta_{j,1}^2 + \cdots + \theta_{j,n_l}^2}
\]
with the constraint of $\bm{1}^T\bm{w} = n_{l-1} - h$.
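For fixed $\bm\theta$, minimizing $\mathcal{R}_l$ over feasible $\bm{w}$ sets $w_j = 0$ on the $h$ rows with largest norm, so the penalty equals $\lambda$ times the sum of the $n_{l-1}-h$ smallest row $\ell_2$-norms. A small illustrative sketch with hypothetical values:

```python
import numpy as np

def trimmed_group_l1(theta, h, lam=1.0):
    """Group trimmed l1 for a weight matrix theta (rows = input neurons):
    lam * sum of the (n_rows - h) smallest row l2-norms, i.e. the value of
    R_l at the minimizing assignment of w (w_j = 0 on the h largest rows)."""
    row_norms = np.linalg.norm(theta, axis=1)
    keep = np.sort(row_norms)[:len(row_norms) - h]
    return lam * keep.sum()

theta = np.array([[3.0, 4.0],   # row norm 5.0
                  [0.6, 0.8],   # row norm 1.0
                  [0.0, 0.5]])  # row norm 0.5
print(trimmed_group_l1(theta, h=1))  # 1.0 + 0.5 = 1.5
```

The $h$ rows with the largest norms escape shrinkage entirely, which is what allows the corresponding neurons to survive pruning without bias.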
Moreover, we can naturally make an extension to a convolutional layer with encouraging \emph{activation map sparsity} as follows. If $\bm\theta$ is a weight parameter for 2-dimensional convolutional layer (most generally used) with $\bm\theta \in \mathbb{R}^{C_{\text{out}} \times C_{\text{in}} \times H \times W}$, the trimmed regularization term that induces activation map-wise sparsity is given by
\[
\mathcal{R}_l(\bm\theta, \bm{w}) \coloneqq \lambda \sum_{j=1}^{C_{\text{out}}} w_j \sqrt{\sum_{m,n,k} \theta_{j,m,n,k}^2}
\]
for all possible indices $(m,n,k)$.
Finally, we add all penalizing terms to a loss function to have
\[
\mathcal{L}(\mathcal{W}; \mathcal{D}) + \sum_{l=1}^{L+1} \lambda_l \mathcal{R}_l(\bm{\theta}_l, \bm{w}_l)
\]
where we allow different hyperparameters $\lambda_l$ and $h_l$ for each layer.
In Table \ref{tab:naive_comparison}, we compare trimmed group $\ell_1$ regularization against vanilla group $\ell_1$ on the MNIST dataset using the LeNet-300-100 architecture \citep{Lecun98gradient-basedlearning}. Here, we set the trimming parameter $h$ to half the sparsity level of the original model. For the vanilla group $\ell_1$, we need larger $\lambda$ values to obtain sparser models, at the cost of a significant loss of accuracy. In contrast, we can control the sparsity level using the trimming parameter $h$ with little or no drop in accuracy.
\begin{table}[t]
\caption{Results on MNIST using LeNet-300-100.}
\label{tab:naive_comparison}
\begin{center}
\begin{small}
\begin{tabular}{lcccr}
\toprule
Method & Pruned Model & Error ($\%$) \\
\midrule
No Regularization & 784-300-100& \textbf{1.6} \\
grp $\ell_1$ & 784-241-67 & 1.7 \\
grp $\ell_{1_{\text{trim}}}$, $h = \text{half of original}$ & \textbf{392-150-50} & \textbf{1.6}\\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\end{table}
\begin{table}[t]
\caption{Results on MNIST classification for LeNet 300-100 with Bayesian approaches. $h = \circ$ means that the trimming parameter $h$ is set to the same sparsity level of $\circ$, and $\lambda$ sep. indicates that different $\lambda$ values are employed on each layer.}
\label{tab:l0_comparison}
\begin{center}
\begin{small}
\begin{tabular}{lccr}
\toprule
Method & Pruned Model & Error ($\%$) \\
\midrule
$\ell_0$ \citep{Louizos18} & 219-214-100 & \textbf{1.4} \\
$\ell_0$, $\lambda$ sep. \citep{Louizos18} & 266-88-33 & 1.8 \\
Bayes grp $\ell_{1_{\text{trim}}}$, $h = \ell_0$ & 219-214-100 & \textbf{1.4} \\
Bayes grp $\ell_{1_{\text{trim}}}$, $h = \ell_0$, $\lambda$ sep. & 266-88-33 & 1.6 \\
Bayes grp $\ell_{1_{\text{trim}}}$, $h < \ell_0$, $\lambda$ sep. & \textbf{245-75-25} & 1.7 \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\end{table}
\begin{table}[t!]
\caption{Results on MNIST classification for LeNet-5-Caffe with Bayesian approaches.}
\label{tab:l0_comparison_lenet}
\begin{center}
\begin{small}
\begin{tabular}{lccr}
\toprule
Method & Pruned Model & Error ($\%$) \\
\midrule
$\ell_0$ \citep{Louizos18} & 20-25-45-462 & \textbf{0.9} \\
$\ell_0$, $\lambda$ sep. \citep{Louizos18} & 9-18-65-25 & 1.0 \\
Bayes grp $\ell_{1_{\text{trim}}}$, $h < \ell_0$ & 20-25-45-150 & \textbf{0.9} \\
Bayes grp $\ell_{1_{\text{trim}}}$, $h = \ell_0$, $\lambda$ sep. & 9-18-65-25 & 1.0 \\
Bayes grp $\ell_{1_{\text{trim}}}$, $h < \ell_0$, $\lambda$ sep. & \textbf{8-17-53-19} & 1.0 \\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\end{table}
Most recently proposed algorithms for network pruning are based on a variational Bayesian approach \cite{dai2018vib, Louizos18}. Motivated by learning sparse structures via a smoothed version of the $\ell_0$ norm~\citep{Louizos18}, we propose a Bayesian neural network with trimmed regularization where we regard only $\bm\theta$ as Bayesian. Inspired by the relation between variational dropout and Bayesian neural networks~\cite{Kingma15}, we specifically choose a fully factorized Gaussian as the variational distribution, $q_{\bm\phi, \bm\alpha}(\theta_{i,j}) = \mathcal{N}(\phi_{i,j}, \alpha_{i,j}\phi_{i,j}^2)$, to approximate the true posterior, and leave $\bm{w}$ to directly learn sparsity patterns. The problem is then cast as maximizing the corresponding evidence lower bound (ELBO),
\[
\mathbb{E}_{q_{\bm\phi, \bm\alpha}}[\mathcal{L}(\mathcal{W};\mathcal{D})] - \mathbb{KL}\big(q_{\bm\phi, \bm\alpha}(\mathcal{W}) \| p(\mathcal{W})\big).
\]
Combined with trimmed $\ell_1$ regularization, the objective is
\begin{equation}
\label{bayesian_loss}
\begin{aligned}
\mathbb{E}_{q_{\bm\phi, \bm\alpha}(\bm\theta)}\Big[-\mathcal{L}(\mathcal{W}; \mathcal{D}) + \sum\limits_{l=1}^{L+1} \lambda_l \mathcal{R}_l(\bm\theta_l, \bm{w}_l)\Big] \\
+~ \mathbb{KL}(q_{\bm\phi, \bm\alpha}(\mathcal{W}) \| p(\mathcal{W}))
\end{aligned}
\end{equation}
which can be interpreted as a sum of expected loss and expected trimmed group $\ell_1$ penalizing term.
\citet{Kingma14} provide an efficient unbiased estimator of stochastic gradients for training $(\bm\phi, \bm\alpha)$ via the reparameterization trick, which avoids computing gradients of the sampling process. To speed up our method, we approximate the expected loss term in \eqref{bayesian_loss} using the local reparameterization trick~\citep{Kingma15}, while the standard reparameterization trick is used for the penalty term.
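To sketch the local reparameterization trick for the factorized Gaussian $q_{\bm\phi, \bm\alpha}$ above: the pre-activations induced by Gaussian weights are themselves Gaussian, so they can be sampled directly in activation space instead of sampling the full weight matrix. The code below is an illustration of the trick with toy shapes, not our training implementation:

```python
import numpy as np

def local_reparam_layer(x, phi, alpha, rng):
    """Sample pre-activations h = x @ W with W_ij ~ N(phi_ij, alpha_ij*phi_ij^2)
    directly in activation space (local reparameterization):
    h_j ~ N(sum_i x_i*phi_ij, sum_i x_i^2 * alpha_ij * phi_ij^2)."""
    mean = x @ phi
    var = (x ** 2) @ (alpha * phi ** 2)
    eps = rng.standard_normal(mean.shape)
    return mean + np.sqrt(var) * eps

rng = np.random.default_rng(0)
x = np.ones((4, 3))
phi = np.full((3, 2), 0.5)
# alpha = 0 makes the layer deterministic: h = x @ phi
h = local_reparam_layer(x, phi, np.zeros((3, 2)), rng)
print(h)  # every entry equals 1.5
```

Sampling one noise tensor per pre-activation (rather than one weight matrix per data point) is what reduces the gradient variance and the sampling cost.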
Trimmed group $\ell_1$ regularized Bayesian neural networks have smaller capacity with less error than other baselines (Table \ref{tab:l0_comparison}).
Our model has lower error rate and better sparsity even for convolutional network, LeNet-5-Caffe\footnote{\url{https://github.com/BVLC/caffe/tree/master/examples/mnist}} (Table \ref{tab:l0_comparison_lenet}).\footnote{We only consider methods based on sparsity encouraging regularizers. State-of-the-art VIBNet ~\citep{dai2018vib} exploits the mutual information between each layer.}
The code is available at \url{https://github.com/abcdxyzpqrst/Trimmed_Penalty}.
\section{Concluding Remarks}
In this work we studied statistical properties of high-dimensional $M$-estimators with the trimmed $\ell_1$ penalty, and demonstrated
the value of trimmed regularization compared to convex and non-convex alternatives.
We developed a provably convergent algorithm for the trimmed problem,
based on specific problem structure rather than generic DC structure, with promising numerical results.
A detailed comparison to DC based approaches is left to future work. Going beyond $M$-estimation, we showed that trimmed regularization can be beneficial for two deep learning tasks: input structure recovery and network pruning.
As future work we plan to study trimming of general decomposable regularizers, including $\ell_1 / \ell_q$ norms, and further investigate the use of trimmed regularization in deep models.
{\small \paragraph{Acknowledgement.}
This work was supported by the National Research Foundation of Korea (NRF) grant (NRF-2018R1A5A1059921), Institute of Information \& Communications Technology Planning \& Evaluation (IITP) grant (No.2019-0-01371) funded by the Korea government (MSIT) and Samsung Research Funding \& Incubation Center via SRFC-IT1702-15.}
\newpage
\section{Chip-to-chip Synchronous Communication in RQL}
RQL technology is the first superconducting digital technology to
efficiently distribute AC power through a resonant network. The
on-chip resonant clock and power network, implemented using
metamaterials \cite{strong2021ZOR}, is a proven enabler for scaling to
complex digital logic. The network has a Zeroth-Order Resonance (ZOR)
mode, providing uniform amplitude and zero clock skew across the
chip. Low skew enables synchronous communication. For a multi-chip
system the approach is extended to a clock network across chips,
with chip-to-chip communication.
The critical innovation that enables synchronous communication on the
MCM is a dendritic resonator network implemented on the carrier, which
we call a ``MegaZOR.'' The MegaZOR, designed to the same resonance
frequency as the individual chips, performs several functions. A
primary function is to equalize
phases and amplitudes of the clock signals on the chips. The chips are
not isolated, but instead the network causes them to operate as a
single resonator while tolerating a spread in their resonant
frequencies that may be larger than the individual resonance peaks.
This is achieved by a network of multiple passive transmission line
segments, optimized for a uniform response in amplitude, frequency and
phase of the on-chip resonators. The MegaZOR also transforms the
impedance from the 50\,$\Omega$ drive to the small 22\,m$\Omega$
impedance of the on-chip resonators, four of which are driven in
parallel.
\begin{figure}
\centering
\includegraphics[width=3.4in]{lbw_schem.pdf} \\
\includegraphics[width=3.5in]{lbw.pdf} \\
\caption{LBW synchronous communication link. a) The driver
low-pass filters several underlying SFQ pulses generated by
undamped, latching junctions to produce a Gaussian-pulse with 35\,GHz
spectral bandwidth. The 20\,$\Omega$ interconnect is terminated at
the receiver, which recovers the bipolar RQL-encoded data. b)
Spectre-simulated waveforms at the driver output (Tx) and receiver input
(Rx) show the attenuation and dispersion of the LBW pulse
propagating over 9\,mm short and 380\,mm long PTLs. For clarity,
propagation delay is subtracted from each waveform, and the
waveforms are progressively offset by $-1$\,mV.
\label{link}}
\end{figure}
The synchronous data link across the MCM requires that the chips have
the same frequency, phase, and amplitude within tight limits for
fabrication process variations. The delay between driver and receiver
is determined by signal propagation time on superconducting Passive
Transmission Lines (PTL) on the carrier. The clock phase in degrees of
the receiver, $\theta_r$, is
\[
\theta_r=\theta_d+\tau_{\mbox{\footnotesize PTL}}
\times f_{\mbox{\footnotesize res}} \times 360^{\circ},
\]
where $\theta_d$ is the phase of the driver,
$\tau_{\mbox{\footnotesize PTL}}$ is the electrical length of the PTL,
and $f_{\mbox{\footnotesize res}}$ is the frequency of the
clock. Latency may exceed one clock cycle, so the phase is wrapped,
e.g.\ if the PTL delay were 1025\,ps corresponding to a 100\,mm length
and the phase of the driver were $90^{\circ}$, then
$\theta_r=90+1025\,\mbox{ps} \times 5\,\mbox{GHz} \times
360^{\circ}=1935^{\circ}$, and subtracting $5 \times 360$ would yield
the nominal receiver phase of $\theta_r=135^{\circ}$. Synthesis and
timing analysis tools treat the MCM system as a single chip and assign
the phase of each receiver post-layout. Synchronous communication can
be used in systems where the fabrication-related skew between wires is
small relative to the timing window of the receiver, about one phase
of the RQL clock. For larger systems involving board-to-board
communication isochronous protocol is required, as described in
\cite{dai2021Isoch}.
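The phase assignment can be checked numerically; the helper below wraps the receiver phase modulo $360^{\circ}$ and reproduces the $135^{\circ}$ example from the text (the function itself is only an illustration of the bookkeeping):

```python
def receiver_phase(theta_d_deg, tau_ptl_s, f_res_hz):
    """Receiver clock phase in degrees: driver phase plus the PTL
    electrical length expressed in degrees at the clock frequency,
    wrapped modulo 360 because latency may exceed one clock cycle."""
    theta_r = theta_d_deg + tau_ptl_s * f_res_hz * 360.0
    return theta_r % 360.0

# Example from the text: 1025 ps PTL (~100 mm), 5 GHz clock, 90-degree driver.
print(receiver_phase(90.0, 1025e-12, 5e9))  # 135.0 (up to float rounding)
```

Timing tools apply exactly this wrap when assigning receiver phases post-layout, treating the MCM as a single chip.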
An important component for long-range RQL data links is a driver with
improved bandwidth efficiency. Our Low Band-Width (LBW) driver has
about ten times less analog spectral bandwidth than the
single-flux-quantum (SFQ) pulse. Superconducting PTLs have some loss
and dispersion at RF frequencies which can degrade the signal over
distance. As first described in \cite{kautz1978picosecond} and experimentally
confirmed in \cite{talanov2021propagation}, a single SFQ pulse with ps-scale width
has a limited range of about 10\,mm on typical Nb passive transmission
lines of sub-micron width. Increasing the pulse width increases the
PTL range to meters, enabling chip-to-chip and board-to-board
interconnect.
Fig.~\ref{link}a shows the design of the LBW data link. All components
are AC-powered through inductive transformers. The LBW driver produces
7-10 SFQ pulses per bit, which requires the mutual inductance of the
L2 transformer to be about ten times larger than the standard for RQL
circuits. The LBW produces bipolar pulses 16\,ps wide, FWHM. The LBW
is triggered by a bipolar SFQ input and includes a
floating buffer junction to prevent back-action on the driving
JTL. Circuit dynamics are those of the self-resetting gate first
described in \cite{chan1975high}. Note that the width of the generated
pulse is still low compared to the clock period, giving the latching
circuit adequate time to reset and avoiding elevated bit-error-rate
associated with Josephson switching errors. The current version of the driver can support up to 10\,Gbps throughput per link with 3\,fJ/bit dissipation, including a 300\,W/W cooling
overhead. The receiver achieves a
wide timing window of $80^{\circ}$ relative to AC clock, while
maintaining wide margins of $\pm 30\%$ on AC bias amplitude and
about $\pm 50\%$ on individual junction critical current. Spectre
simulations of the LBW driver and transmission line, shown in
Fig.~\ref{link}b, use a dispersive propagation model for 1\,$\mu$m
wide Nb 20\,$\Omega$ PTL on SiO$_2$ that accurately models the high
frequency components of the pulse, as described in \cite{talanov2021propagation}.
Different fabrication processes were used for the chips for the two
circuit demonstrations. The 5$\times$5\,mm$^2$ chips were fabricated
in the ten-metal-layer SFQ5ee process developed by Lincoln Laboratory
\cite{tolpygo2019advanced}. The on-chip resonant clock-power network
occupies the bottom two metal layers and is separated from digital
logic by a superconducting ground plane. The process features
0.5$\mu$m design rules and Josephson junctions with
10\,$\mu$A/$\mu$m$^2$ critical current density. The
10$\times$10\,mm$^2$ chips were fabricated in the D-Wave six-metal
layer process \cite{berkley2010scalable} with a modified junction
critical current density of 10\,$\mu$A/$\mu$m$^2$. The process
features 0.25$\mu$m design rules for 5 bottom metal layers and
0.5$\mu$m design rules for the top layer. Here the on-chip resonant
clock-power network occupies the top two metal layers and is again
separated from the digital logic by a superconducting ground plane. In
both fabs, the chips are passivated with dielectric and use Ti/Au/Pt
pad metallization.
The MCMs for both demonstrations used a well-characterized process for
the carrier developed at Lincoln labs \cite{LincolnMCM}, and bump
bonds also developed at Lincoln. The SMCM4m four-Nb-metal MCM process
supports the design of 20\,$\Omega$ data PTLs and clock resonator
network, with good isolation provided by a superconducting ground
plane. The In bump-bond process supports up to 10,000 bumps with
15\,$\mu$m bump diameter, 35\,$\mu$m bump pitch, and 3\,$\mu$m
post-bond bump height.
\begin{figure}
\centering
\includegraphics[width=2.8in]{fig2.pdf}
\caption{MCM interconnect. a) Pressure-contact probe packaging of
individual chip used a standard pad template with large perimeter
pads for clock and data. b) The same pads were used in the MCM
attach for clock and I/O signals to room temperature. Multiple bumps
were used for signal, ground, and mechanical support. c)
High-bandwidth chip-to-chip interconnect used single, 15\,$\mu$m
bumps for each signal and ground pad, on a 35\,$\mu$m pitch.
\label{bumps}}
\end{figure}
Two kinds of pads were involved in attaching chips to the MCM, shown
in Fig.~\ref{bumps}. The high frequency, 20\,$\Omega$ chip-to-chip
interconnect used four ground pads surrounding each signal, and had a
single bump per pad. This transition was optimized using the HFSS
3D field solver to achieve 350\,GHz analog bandwidth with $-20$\,dB
reflection. The chip-to-chip clock interconnect used large perimeter
pads with multiple bumps. Here the signal integrity requirements are
relaxed to a single tone bandwidth between 3.5-10\,GHz. The same large
perimeter pads are used to mount chips in the standard
pressure-contact dip probe and to connect MCM room temperature pads to
the chips. This configuration allows test of individual chips prior to
mounting in the MCM. In accordance with SMCM4m process design rules,
additional non-electrical bump bonds are added along the perimeter for
mechanical strength.
Process Control Monitor (PCM) and Time-of-Flight (TOF) resonators are
included on the MCM as circuit diagnostics for the MegaZOR and
interconnect PTL in order to track fabrication targeting
and spread. The PCM structures are characterized with four-port room
temperature resistance measurement. These low-cost tests are performed
for every MCM on every lot. These data indicate variations in the
metal width and thickness. Quarter and half wave TOF resonators
are tested at cryogenic temperatures using
a network analyzer to measure S-parameters. The frequency of the
resonance captures variations in the dielectrics and
metals. Correlating the PCM and TOF data gives the complete picture of
relevant fabrication process parameters.
\section{Multi-chip MCM with Meander Clock}
\begin{figure}
\centering
\includegraphics[width=3.4in]{2chip_block.pdf}
\includegraphics[width=2.7in]{9chipflip2.pdf}
\caption{Synchronous data link with a meander clock. a) Schematic of
two interconnected chips. Individual chips can be pre-tested using
the room-temperature (R-T) interface prior to populating the MCM.
The two-chip chain as shown uses the same R-T interface on the
periphery, but uses the PTL interface chip-to-chip. Clock and data
take similar paths on the MCM, with similar path lengths. b)
Microphotograph of the 32$\times$32\,mm$^2$ MCM populated with nine
5$\times$5\,mm$^2$ chips, showing two-chip and six-chip data links.
\label{the9}}
\end{figure}
The functionality of the on-chip circuitry, and the correctness of the
PTL model including transitions between chip and a carrier, was first
validated using a nine chip MCM circuit with a meander clock. The
meander clock has the advantage of tunable frequency. The meander
clock line and signal PTL delay are designed for a 2\,GHz clock rate,
but as the delays are the same the design works across a large
frequency range. The on-chip test circuit has two data paths as shown
in Fig.~\ref{the9}a connected to 4\,mV output amplifiers for room
temperature testing. The first data path is used to pretest individual
chips before populating the MCM. The second data path is configured for
connection between chips using MCM pads. On-chip circuitry designed in
this way also allows tapping into the input and output of individual
chips on the MCM before testing the link through multiple chips. In this
particular MCM design (see Fig.~\ref{the9}b) there are two serial data
links going through 6 chips and 2 chips. The last chip was reserved
for a different experiment.
Individual chips and the MCM were tested in liquid He dip probes with
standard 32-pin and 64-pin pad templates. Tests were performed
at-speed with direct connection to room-temperature electronics, using
instrumentation and methods similar to that described in
\cite{egan2021true}.
\begin{figure}
\centering
\includegraphics[width=3.3in]{fig4.pdf}
\caption{Meander clock test results. a) Measured AC bias margins for
the eight chips tested individually, at 2\,GHz and across multiple
cooldowns. Typical margins are about 6\,dBm. The shaded green
region shows a common margin of about 4.5\,dB. b) Measured output
waveforms through the six-chip chain on MCM at 8\,GHz. In data
pattern is a repetitive ``11110000111000110010'' chirp, and the
output is averaged.
\label{fig4}}
\end{figure}
\begin{table}
\renewcommand{\arraystretch}{1.3}
\caption{Measured Margins for the Meander-Clock MCM }
\label{9marg}
\centering
\begin{tabular}{cll}
{\bf Build} & {\bf Frequency} & {\bf AC Margins} \\
\hline
2 Chip Chain & \ \ \ 2\,GHz & \ \ \ 4.3\,dB \\
& \ \ \ 8\,GHz & \ \ \ 3.0\,dB \\
\hline
6 Chip Chain & \ \ \ 2\,GHz & \ \ \ 5.4\,dB \\
& \ \ \ 8\,GHz & \ \ \ 3.2\,dB \\
\hline
\end{tabular}
\end{table}
Fifteen individual chips were tested from the same wafer fabricated in
the SFQ5ee process, and eight were chosen to populate the chains on the
MCM. The individual-chip test for on-chip data paths showed up to
6\,dB ($\pm33\%$) of AC bias margin with common overlap between chips
of 4.5\,dB, as shown in Fig.~\ref{fig4}a. Operating margins across
multiple cooldowns showed about 1\,dB variation, which is within
the design specification for parasitic coupling to sequestered fluxes in
the moats. Similar measurements were taken on the two MCM links with
two and six-chip chains, at 2\,GHz and 8\,GHz clock rate. Output and
clock-return waveforms at 8\,GHz are shown in Fig.~\ref{fig4}b. Both
links were functional with measured clock margins as entered in
Table~\ref{9marg}. The operating margins at 2\,GHz for both chains are
similar to those of the individual chips. Margins at 8\,GHz are
degraded by about 2\,dB, which indicates misalignment of the pulse
arrival time within the timing window of the receiver.
\section{Four-chip MCM with a Resonant Clock}
The demonstration vehicle for synchronous communication using a
MegaZOR resonant clock is a 32$\times$32\,mm$^2$ MCM populated with
four 10$\times$10\,mm$^2$ chips, shown in Fig.~\ref{428photo}. This
build represents aggressive scaling due to the large on-chip ZOR clock
networks provisioning the entire active area of the
10$\times$10\,mm$^2$ chips. The whole system is capable of powering 4
million Josephson junctions. The increased scale stresses the
requirements for resonator amplitude and phase uniformity, bump
uniformity, and fabrication parameter spread. On-chip circuitry is
similar to that discussed in the previous section, but with an
additional synchronous data link providing a loop-back on-chip, in
parallel to the chip-to-chip data link. The four chips are connected
with two short 5\,mm PTLs and a long 54\,mm PTL that is one cycle
longer at 2\,GHz. Allocation of extra taps along the data path enables
test of all four chips individually, in pairs and in threes including
chip-to-chip communication, and complete test of data propagation
through all four chips.
\begin{figure}
\centering
\includegraphics[width=2.7in]{CedarSMCM428.pdf}
\caption{Microphotograph of the 32$\times$32\,mm$^2$ resonant-clock MCM
populated with four 10$\times$10\,mm$^2$ chips, showing short and long
signal paths between chips driven by a common resonator.
\label{428photo}}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=6.4in]{fig6.pdf}
\caption{MegaZOR design. a) Schematic showing four identical branches
of transmission line segments, each connected to the ZOR resonator
on an individual chip. The segment length is given in degrees at
the resonance frequency. b) Simulated current amplitude at the
spines of on-chip resonator as function of frequency given for
three different cases. The middle panel corresponds to the case
with total $1\%$ mismatch between individual chip resonances
frequencies with 2 chips being at $\pm 0.5\%$ and 2 chips being at
$\pm 0.167\%$ from central frequency of 2\,GHz. Maximum spine
current amplitude at the drive frequency is 42\,mA. The top and
bottom panels correspond to the cases with added carrier MegaZOR
frequency variation of $\pm 1.5\%$ on top of the frequency
distribution between chips. Maximum spine current amplitude at the
drive frequency is 50\,mA.
\label{fig6}}
\end{figure*}
The schematic of the MegaZOR on the MCM is shown in
Fig.~\ref{fig6}a. It consists of four branches each connected to the
individual chip resonator. Each branch consists of two segments with
$90^{\circ}$ and $270^{\circ}$ passive transmission line (PTL)
resonators, which positions the connections between circuit segments
at voltage anti-nodes. The impedances of the segments are adjusted to
achieve a total effective loaded quality factor of the resonator of
$Q_{\mbox{\footnotesize load}} = 73$. This represents a good
compromise between minimizing input power and maximizing the
bandwidth.
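The impedance-transforming role of the segments can be illustrated with the standard lossless transmission-line input-impedance formula. The characteristic impedance used below is a hypothetical value chosen for illustration, not the actual MegaZOR design impedance:

```python
import numpy as np

def z_in(z0, z_load, theta_deg):
    """Input impedance of a lossless line with characteristic impedance z0
    and electrical length theta_deg (degrees), terminated in z_load."""
    t = 1j * np.tan(np.deg2rad(theta_deg))
    return z0 * (z_load + z0 * t) / (z0 + z_load * t)

# A 90-degree (quarter-wave) segment inverts the load, Zin -> z0**2 / z_load,
# stepping the milliohm-scale on-chip resonator impedance up toward the drive.
zl = 0.022   # 22 mOhm on-chip resonator impedance (value from the text)
z0 = 1.0     # hypothetical segment characteristic impedance
print(abs(z_in(z0, zl, 90.0)))  # ~ z0**2 / zl = 45.5 Ohm
```

Cascading segments of different characteristic impedance in this way lets the network step between the 50\,$\Omega$ drive and the parallel combination of the on-chip resonators.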
There are two distinct MegaZOR networks for I and Q, electrically
identical but with different physical layout. In the current four
metal layer MCM process, the resonant network occupies the first metal
layer and is separated from data transmission lines by a
superconducting ground plane in the second metal layer. This
constrains the layout to minimize the number of crossovers between the
I and Q networks. Physical layout components were simulated using
Ansys HFSS to determine impedances and propagation speeds, and
sensitivities to process variations. The top-level netlist was
simulated using Keysight's Advanced Design System (ADS).
Fig.~\ref{fig6}b shows simulated current amplitudes at the spines of
the four chips in the worst-case assumption of chip frequency mismatch
range of $\pm 0.5\%$ and three different carrier frequencies at
nominal, 2\,GHz, and $\pm 1.5\%$ from nominal. In the ideal case when
there is no mismatch in frequency chip-to-chip and chip-to-MegaZOR,
the entire system has a single resonance frequency with near-zero
currents at the spines, which are voltage antinodes. The frequency
mismatch causes elevated currents in the spines due to shift from ZOR
mode towards the first mode and an increase in total drive power. For
the worst case described above, our design was optimized to a
maximum 50\,mA variation of the current in the spines. The parasitic
current in the spine divides across multiple ribs of the on-chip
resonator. For the current design with about 100 ribs at the current
antinode carrying a nominal current of 5\,mA, the total resulting
variation of the bias current amplitude as seen by the circuit
junctions is less than 1\% and the input amplitude adjustment to the
system is only 1.5\,dB.
\begin{table}
\renewcommand{\arraystretch}{1.0}
\caption{Measured Margins for Individual Chips and for Chip Chains on
the MCM with a Resonant Clock}
\label{428marg}
\centering
\begin{tabular}{c|l|l|l}
{\bf Build} & {\bf Test Path} & {\bf Resonance} & {\bf AC Margins} \\
\hline
& Chip A & 1.973\,GHz & 2.5\,dB \\
& Chip B & 1.974\,GHz & 4.5\,dB \\
& Chip C & 1.982\,GHz & 5.0\,dB \\
1 & Chip D & 1.976\,GHz & 3.0\,dB \\
& MCM Chips A-B & & untestable$^*$\\
& MCM Chips C-D & & 4.8\,dB \\
& MCM Chips A-B-C & & untestable$^*$ \\
\hline
& Chip A & 2.019\,GHz & 4.0\,dB \\
& Chip B & 2.011\,GHz & 4.0\,dB \\
& Chip C & 2.019\,GHz & 3.5\,dB \\
2 & Chip D & 2.031\,GHz & 4.5\,dB \\
& MCM Chips A-B & & 2.0\,dB \\
& MCM Chips C-D & & 2.5\,dB \\
& MCM Chips A-B-C & & 1.0\,dB \\
\hline
\multicolumn{4}{l}{$^*$Untestable due to signal continuity failures on MCM}
\end{tabular}
\end{table}
Two MCMs were built using the two sets of chips listed in
Table~\ref{428marg}, with resonant frequencies within 0.45\% of each
other and within $\pm 1\%$ of the MegaZOR frequency of the
MCM. Measurement of the on-carrier PCM resonators showed good
targeting of the fabrication process, with speed-of-light within 1\%
of the design value.
Table~\ref{428marg} also lists the measured AC clock margins for the
pretested chips and for the synchronous data links between chips. Only
the short path between chips C and D was testable on Build 1 due to
signal continuity failures. Measured margins for this path are on par
with those of the individual chips, indicating no degradation due to
the MCM transition. The applied clock frequency was tuned to the mean
resonance of Chips C and D for this test. Build 2 shows functionality
of all links between the four chips. The operating margin for the
links varied from 2.5\,dB for the short links to 1\,dB for the longest
link through Chips A, B, and C.
Simulation in ADS and HFSS, based on back-annotation of
the carrier physical layout, showed that the margin degradation is due
to inconsistent design of crossovers between I and Q carrier
resonators resulting in an imbalance between the four branches of
MegaZOR. As the imbalance between the two chips tested in Build 1 is
small, the effect on margins is minimal. We conclude that the margin
degradation could be corrected in a redesign, and that synchronous
communication at the scale of four 10$\times$10\,mm$^2$ chips is
practical.
\section{Conclusion}
In this paper we have presented design and test results for
synchronous communication links between multiple chips on an MCM using
resonant clock distribution. These results advance the state of the art in
superconducting digital technology in multiple ways. It is the first
demonstration of superconducting data links on MCM involving multiple
chips. The system demonstrates the power of synchronous communication
in RQL technology overcoming the difficulties of clock recovery and
accumulated thermal jitter and parametric timing uncertainty
associated with RSFQ technology. We have shown that digital data on
MCM can be transmitted across 54\,mm between chips with minimal
hardware overhead, as the LBW driver and receiver have only six
Josephson junctions, dissipating only 3\,fJ when the 300\,W/W
cooling overhead is taken into account. The energy transmitted per bit is
three orders of magnitude less than that of state-of-the-art CMOS BoW-style
data links for the same purpose and distance. The 2\,GHz clock
frequency in the current demonstration can be scaled up to 10\,GHz
based on circuit simulation of the LBW driver, limited by dispersion
of the Nb interconnects. Extension to multi-bit bus communication is
straightforward, with total cross-sectional bandwidth between chips
limited only by bump pitch. All told, we have enabled a critical
interconnect functionality for future RQL systems distributed on
multiple chips using 3D-IC advanced packaging.
\begin{acknowledgments}
The authors acknowledge Andrew Brownfield for developing the high-frequency
test setup and assisting with circuit test, and Paul Chang for
assisting with data reduction and analysis. The authors also
acknowledge the contributions of the Lincoln Laboratory team,
particularly Rabindra Das and Sergey Tolpygo, in numerous technical
discussions regarding the MCM carrier and bonding process, and
assistance with design. The Northrop Grumman superconducting digital
EDA and PDK teams laid the foundation for MCM design and verification.
\end{acknowledgments}
\section*{Introduction}
In the last 20 years the \emph{window} on our universe has opened to an unprecedented level,
allowing us to bridge a large fraction of the gap that existed between observational fields,
like astrophysics and cosmology, and more experimental ones, like, for instance, high energy particle
physics. A marked difference between observational and experimental disciplines which is
often emphasized is that the former ones usually lack direct control on the object that we wish
to study, which is usually not directly accessible to us. A consequence of this is that, contrary
to experimental disciplines (where we can mostly prepare the system under study to suit our needs
and tackle the data acquisition and analysis under what we judge to be the most appropriate and
fruitful conditions) in cosmology and astrophysics we usually do not have this convenience. The
lack of control on the systems under study and on their environments can then result in uncertainties
which are usually higher than the ones obtained in experimental situations and make it more challenging
to single out discrepancies between models and data. Although this stereotype is true to some
extent, the situation has now radically changed, as witnessed, for instance, by precision
measurements of the cosmic microwave background radiation, or by millisecond pulsar timing.
While it is true that, in general, we have no direct control on the phenomena that take place in our universe,
it has nevertheless to be recognized that they often probe the laws of physics in extreme situations
that we will never be able to realize in a human-controlled experiment.
So, while on the theoretical side we are still struggling to get a unified picture of fundamental
interactions at the quantum level, it is undoubted that nowadays this understanding also has,
if not primarily, to face the challenge of being consistent with astrophysical and cosmological data,
in addition to those provided by human-scale experiments: in this sense, a synergic interplay between
large and small scale physics is already a reality.
With this higher perspective in mind, we will more modestly present, in this contribution, an example
of how current observations have the potential to constrain emission models for Active Galactic Nuclei.
Our emphasis will be on the importance of a solid statistical analysis, a precious approach to obtain
a quantitative insight and guide us in challenging and, hopefully, disproving existing models in favor
of more refined and realistic ones.
Our presentation will try to be reasonably self-contained, in the sense that we will cover all the
required topics in what follows: we will, however, not be able to cover them with the required depth,
some of which can be obtained starting from the list of references. We will start in Sec.~\ref{sec:agn},
summarizing some key facts about the sources that we will consider here, namely active galactic nuclei.
Some more details will be given for blazars and their emission models in Subsec.~\ref{sec:blazars}.
We will then continue with a concise presentation of algorithms for (least square) minimization in
Sec.~\ref{sec:min}, to finally introduce a solid standard, which is the Levenberg-Marquardt approach
(Subsec.~\ref{sec:levmar}). The problem of statistical significance is then briefly addressed in
Sec.~\ref{sec:stasig}, where the Kolmogorov-Smirnov test is analyzed (Subsec.~\ref{sec:kolsmi}): other methods to test
how \emph{bad} a fit is\footnote{The commonly used name for these tests is \emph{goodness of fit}
tests, although, technically, the meaningful results are those in which these tests \emph{fail},
showing in this way that the model needs to be refined.} do exist, but will not be discussed here.
Following these prerequisites, Sec.~\ref{sec:Mrk421} presents details of a recent analysis in which
multi-wavelength datasets of the active galactic nucleus Markarian~421 were fitted to a given
emission model: this section contains three subsections, corresponding to each of the topics that are
presented in the three background sections\footnote{Numbering has been chosen so that each subsection
index matches the number of the section in which the corresponding topic is discussed in general.}:
Subsec.~\ref{sec:sou} describes the source and the chosen datasets, Subsec.~\ref{sec:chi} describes
the fit algorithm and the fit results, Subsec.~\ref{sec:sta} describes the statistical significance
of the results.
\section{\label{sec:agn}Active Galactic Nuclei}
Active Galactic Nuclei (AGNs) are galaxies characterized by a core that
appears to be brighter and more energetic than that of other galaxies.
It is, indeed, rather common that the core luminosity competes, or even
exceeds, the one of the host galaxy: in some cases it seems that
a luminosity of the order of $10 ^{4}$ times the one of a standard
galaxy can be associated with a region with a linear size significantly
smaller than $1\,$pc. In addition, non-thermal AGNs emission covers a very
broad range of the spectrum.
It is usually believed that the engine at the core of AGNs is a black-hole
with a total mass of millions or billions of solar masses: such an object is
called a supermassive black hole. Although there is only indirect
evidence that this supermassive core (SMC) is a black hole, there is
usually agreement about the fact that the energy necessary to sustain
a luminosity as large as the one mentioned above is coming from the infall of surrounding
matter onto the SMC: this process is generically called accretion. The details of matter
accretion onto the SMC are still to be understood.
Another striking feature of AGNs is, in some cases, the presence of two powerful,
highly collimated jets that shoot out in opposite directions. The way in which
these jets are formed and other fundamental aspects, for instance related to
their composition and to the details of the processes involved in the acceleration
of particles inside these jets, are also subject of active research.
From this short, qualitative, introduction it appears that there is still a lot
that we have to understand about AGNs. Nevertheless, several important
steps forward have been made and we will concentrate, in what follows, on what we
think we do actually understand. From an historical perspective, it is important
to recall that what nowadays we call AGNs comprises a very wide class of extragalactic
objects, which observationally can (and do) appear very different one from the other. For this
reason, we will report below a standard classification scheme\cite{bib:MAGIC}.
There are several characteristics that can be used to classify AGNs, and their common
ground is that they are not usually found in standard galaxies. We already mentioned
one of them, which is the \emph{high luminosity}. AGNs can reach luminosities of the
order of $10^{48}\,\mathrm{erg} \cdot \mathrm{s} ^{-1}$, which is $10,000$ times the
average luminosity of a galaxy (defined as the characteristic luminosity of the field
galaxy distribution). This is the apparent luminosity, i.e. what we see after the
emitted radiation has been, for instance, absorbed by the circum-object medium: the
intrinsic luminosity could then be even higher. The energy output is also distinctive,
and characterized by a \emph{broadband continuum emission}. Standard galaxies have a
spectrum which, at zero-th order can be considered the sum of the black body spectra of
all the stars composing the galaxy: since each star spectrum is \emph{approximately} a
black-body spectrum with characteristic temperature that equals the surface temperature
of the star, and since the surface temperature of stars spans a range of about one decade,
then a typical galaxy power output is emitted within about one decade of frequency. On the
contrary, AGNs can have a spectrum ranging from the mid-infrared to the hardest X-rays, and
such that a narrower frequency band dominating the emission is missing. AGN emission can be
moreover characterized by \emph{prominent emission lines}, another element of contrast
with most standard galaxies. The broadness of AGNs spectral lines can be
both, i) very sharp or ii) present broad wings (with variability spanning two orders of
magnitudes, depending on the source). The emission also presents a marked \emph{variability}
at high frequencies (whereas, in the optical band, variability is about 10\% on human life
timescale): there is, nevertheless, a small class of AGNs (which will be mainly interested
in what follows) having a much more marked, i.e. with much shorter timescales, variability.
Although the \emph{polarization} of AGNs emission is weak, it can be statistically appreciated
when compared with that of a standard galaxy: a frequency dependence of polarization is also
usually detected. Finally, AGNs can also be characterized by a strong \emph{radio emission},
a phenomenology that has been extensively studied since the beginning of radio astronomical
observations.
An AGN is a source that presents some (not necessarily all) of the above
properties. For instance, the vast majority of AGNs have a spectrum characterized by a broadband
continuum emission, with strong emission lines, some variability and weak polarization. Many of them
also present a small angular size. Finally, in some cases radio emission has
been detected and, occasionally, the variability and the polarization are both strong.
Several objects fall within the above definitions and we will see
later on that it is possible to devise a tentative unification scheme, the so called \emph{unified
model}. We will come to it after a concise description of the diverse observational phenomenology.
A first kind of objects that shows properties typical of AGNs are the so-called quasi stellar
sources, or \emph{QUASAR}s. QUASARs show two evident relativistic jets, but their main characteristic
is a very luminous and unresolved nucleus with angular size smaller than the arc-second. Radio
emission may also be present, in which case they are called \emph{radio QUASAR}s. QUASARs without
radio emission are instead called \emph{radio quiet QUASAR}s and they are about $20$ times more
common than radio QUASARs. Both radio and radio quiet QUASARs are found at high redshift and
without signs of a surrounding galaxy.
A second kind of objects in the AGN class are \emph{radio galaxies}. Radio galaxies
are characterized by a radio emission which is thousands of times the radio emission of a standard galaxy:
the radio emission is also apparent in two lobes, that extend in opposite directions outside a bright
radio core, to which they appear to be connected by emission jets.
In standard galaxies the radio emission can be traced to the production of relativistic electrons in
supernovae explosions. In radio galaxies, instead, the radio emission, which is characteristically
non-thermal, can be identified with synchrotron emission from ultra-relativistic particles. The spectrum,
which is a broad continuum, is markedly non-thermal also in the optical range, where it especially features
superimposed emission lines.
With some properties similar to those of radio quiet QUASARs, \emph{Seyfert galaxies} are also AGNs, but
having a lower luminosity. A further classification within Seyfert galaxies can be made based on their
spectrum, which can feature i) both broad and narrow lines (these objects are called Seyfert 1 galaxies)
or ii) only narrow lines (these galaxies are named Seyfert 2). Seyfert galaxies are so close to radio quiet
QUASARs that most of the time it is the way in which they are discovered that makes them members of one class
or the other: in particular, in the presence of a high redshift and the absence of a surrounding galaxy the object
tends to be classified as a QUASAR, while broad emission cores in known galaxies are usually classified as Seyfert
galaxies.
Finally we come to the last group of objects in the AGN family, namely \emph{blazars}. Blazars are the most
energetic sources that can be observed in the sky. They are also characterized by two powerful jets
shooting out in opposite directions at relativistic speed: their peculiarity is that we are able
to view one of them at a small angle to the object axis (which is also the jet emission direction), which means
that the jet is practically pointed at us. The emission spectrum is extremely broad, from radio to $\gamma$
frequencies and allows an additional distinction: i) when the spectrum completely lacks emission lines
we have the, so-called, \emph{BL Lacertae} (\emph{BL Lac}) \emph{objects}, which are radio loud objects with marked variability and
strong optical polarization; ii) when, instead, the spectrum also features strong broad emission lines in the
optical range, we have what are called \emph{Optical Violent Variable objects}, which in the radio band look
similar to radio quasars.
We will discuss in more detail some features of AGNs emission, with particular emphasis on blazars, in
subsection \ref{sec:blazars}. Before moving to this topic, we would like, however, to remark that,
despite their apparent differences from the observational point of view, a unified model to interpret all these
observational features has been proposed; according to this unified model, all the different features of AGNs
are related to the orientation of the source with respect to the observer\cite{bib:1995PASP107803UrryPadovani}.
All AGNs would then be compact supermassive objects and they
would be surrounded by an \emph{accretion disk} composed of hot plasma emitting thermal radiation. The broad
emission lines that are detected in some spectra, mostly in the optical range, would be caused by the presence
of a region of toroidal shape containing dense molecular clouds (called the \emph{broad emission lines region}): the
broad emission lines region can obscure the view of the central part. At a larger distance from the central
object there is also a \emph{narrow emission lines region} also filled with molecular clouds but of density lower
than the one present in the broad emission lines region. In a direction perpendicular to the plane of the accretion
disk and in opposite directions, two jets of ultra-relativistic particles emerge from the central object and
extend several kilo-parsecs away from the core. As we anticipated, this model allows for an explanation of the AGN
phenomenology, once we consider the orientation of the object with respect to the line of sight of the observer.
Radio galaxies and Seyfert galaxies have the jets nearly orthogonal to the line of sight of the observer: the torus
surrounding the core of the object obscures it and reprocesses the radiation coming from the disk and from the
broad emission lines region. The lobes define the region where the jets lose collimation, and both are responsible
for a strong radio emission. A completely different picture of the same model is obtained if we change the orientation
so that the jets now point in the direction of the observer's line of sight: now the boosted jet emission pointing
directly to the observer is dominating over all the other components, and it is the physics of the jet that is
mostly detected. At intermediate angles between the directions in which the line of sight is perpendicular or parallel
to the jets direction, both the central objects and the surrounding regions are seen: these objects are quasars,
and their characteristic emission lines are the result of the light coming from both the broad and narrow emission
line regions.
We will now provide more details on the emission model, with particular emphasis on blazars.
\subsection{\label{sec:blazars}Blazars and emission models}
Among the realizations of AGNs that we have briefly described above a very important role is played by blazars.
Although only in $\lesssim 10 \%$ of the cases it is possible to detect a jet structure in AGNs, a lot of effort
has been put into understanding the origin of the jets and the physical processes that allow for particle acceleration
in such highly collimated beams. A full modelling of AGNs, and of the formation and evolution of their features, is
still one of the fundamental open problems in astrophysics; however, it is currently accepted that jets consist of low entropy
flows associated to regions of internal/external shocks, within which the jets dissipate part of their energy.
Since a relativistic jet that moves in a direction that forms a small angle with the line of sight of the observer
is greatly amplified by \emph{relativistic beaming}, blazars observations are an exceptional tool to investigate
the nature and properties of AGN relativistic jets: indeed blazars' emission is dominated by the jet, which makes
these objects the major extragalactic source of $\gamma$-rays, despite the fact that they represent only a minority
of the AGNs population. Two major subclasses can be identified within blazars of the BL Lac type: while all of them show a spectral
energy distribution (SED) with two pronounced bumps, the first bump peaks i) in the infrared, with the second around GeV
frequencies, or ii) in the X-rays, with the second around TeV frequencies. In particular,
BL Lac objects of the first kind are called \emph{low frequency peaked} BL Lac, while BL Lac objects of the
second kind are called \emph{high frequency peaked} BL Lac (\emph{HBL}).
It is generally agreed that the low energy peak is produced by synchrotron emission. Models can differ, instead,
in the description of the very high energy $\gamma$-ray bump, and can be classified in two qualitative different
groups: \emph{leptonic models} and \emph{hadronic models}.
\begin{description}
\item[Leptonic models]$\!\!$. These models explain the very high energy $\gamma$-ray radiation by inverse
Compton scattering of photons off relativistic electrons/positrons. If the scattered photons are synchrotron
photons created by the electrons of the jet, the models are called \emph{synchrotron self Compton} (\emph{SSC})
models\cite{bib:1985ApJ298114Marscher,bib:1992ApJL397L5Maraschi,bib:1996ApJ461657Bloom}.
If, instead, the scattered photons are \emph{ambient} photons or photons coming from the environment,
the models are called \emph{external inverse Compton} (\emph{EIC}) models. The absence of emission lines in
the blazars spectra seems to favor SSC models. SSC models can invoke one region in which the relevant interactions
occur (\emph{one-zone} SSC models), or be refined to take into account more than one region. In the following
we are going to test a one zone SSC model on a set of nine simultaneous multi-wavelength SEDs.
\item[Hadronic models]$\!\!$. In this case the models take advantage of the fact that a proton component
in the jet would be subjected to less synchrotron losses as compared to electrons and could then be accelerated
more efficiently\cite{bib:2005CJAA5195Rieger}. The low energy peak is again explained by synchrotron radiation, so in these models there is
an electron component that is accelerated together with the protons. The higher energy bump is instead produced
by the interaction of the accelerated protons with matter and/or ambient photons and/or magnetic
fields\cite{bib:1992AAL253L21Mannheim,bib:1993ApJL402L29Bednarek,bib:1993AA26967Mannheim,bib:1997ApJL478L5Dar,bib:1998Sci279684Mannheim,bib:2000NAS5377Aharonian,bib:2000AA354395Pohl,bib:2001APP15121Mueche,bib:2003APP18593Mueche}.
Proton induced cascades, that can in turn induce electromagnetic cascades, and/or proton synchrotron models
have also been considered\cite{bib:1992AAL253L21Mannheim,bib:2001APP15121Mueche,bib:2003APP18593Mueche}.
\end{description}
It is also not excluded that both processes could coexist in the jet\cite{bib:2000ApJ534109Sikora,bib:2005ApJ625656Georganopoulos}.
In what follows we will be interested in
a leptonic model, specifically a one zone SSC model\cite{bib:1998ApJ509608Tavecchio,bib:2010MNRAS4011570Tavecchio}.
This model has shown a good agreement with HBL broadband emission in both the ground and excited
states\cite{bib:2008ApJ6791029Tagliaferri}. Moreover, in one zone SSC models, since there is only one population of electrons
that generates the doubly peaked emission, there is naturally a correlation between the X-ray and the very high energy
$\gamma$-ray variability. The emission zone is assumed to be a spherical blob of radius $R$, moving with a bulk
Lorentz factor $\Gamma$ in a direction forming an angle $\theta$ with respect to the observer viewing direction.
Special relativistic effects can then be described by a single parameter $\delta = (\Gamma (1 - \beta \cos \theta)) ^{-1}$.
The model assumes that the spherical blob is uniformly filled with electrons of density $n _{\mathrm{e}}$ and
threaded by a uniform, tangled magnetic field $B$. Five more parameters complete the model providing a description of the relativistic
electrons' spectrum. This is characterized by a smoothed broken power-law in $\gamma$, the Lorentz factor of the electrons,
which ranges between $\gamma _{\mathrm{min}}$ and $\gamma _{\mathrm{max}}$. The transition between the two power-laws takes place at $\gamma _{\mathrm{br}}$.
The spectral slopes at low and high energies are, respectively, $n _{1}$ and $n _{2}$. Altogether the model has nine free parameters:
three of them ($\delta$, $R$ and $B$) describe the emitting blob; the other six ($n _{1}$, $n _{2}$, $\gamma _{\mathrm{min}}$, $\gamma _{\mathrm{max}}$,
$\gamma _{\mathrm{br}}$, $n _{\mathrm{e}}$) describe the energy distribution of the electrons' plasma.
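As a numerical illustration of the parametrization just described, one can evaluate a smoothed broken power-law electron distribution and the Doppler factor $\delta$. The specific smoothing below and all parameter values are assumptions made only for illustration, not the exact form or fitted values used in the cited model papers:

```python
import numpy as np

def electron_spectrum(gamma, n_e, n1, n2, g_min, g_br, g_max):
    """Illustrative smoothed broken power-law electron density n(gamma):
    slope -n1 well below g_br, slope -n2 well above it, zero outside
    [g_min, g_max]. The exact smoothing used in one-zone SSC fits may differ."""
    gamma = np.asarray(gamma, dtype=float)
    shape = gamma**(-n1) * (1.0 + gamma / g_br)**(n1 - n2)
    return np.where((gamma >= g_min) & (gamma <= g_max), n_e * shape, 0.0)

def doppler_factor(bulk_gamma, theta):
    """Doppler factor delta = 1 / (Gamma * (1 - beta * cos(theta)))."""
    beta = np.sqrt(1.0 - 1.0 / bulk_gamma**2)
    return 1.0 / (bulk_gamma * (1.0 - beta * np.cos(theta)))

# Hypothetical parameter values, chosen only to exercise the functions:
gammas = np.logspace(2, 6, 5)
n = electron_spectrum(gammas, n_e=1e3, n1=2.0, n2=4.0,
                      g_min=1e2, g_br=1e4, g_max=1e6)
delta = doppler_factor(bulk_gamma=15.0, theta=np.deg2rad(2.0))
```

For $\theta \to 0$ the Doppler factor approaches $\Gamma (1 + \beta) \approx 2 \Gamma$, which is why a jet pointing almost exactly at the observer is so strongly amplified.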
It is crucial to realize that only \emph{simultaneous}, \emph{multi-wavelength} observations allow the determination of all
the parameters of the model. In particular, if we did not have the very high energy $\gamma$-ray observations that became
available with modern Cherenkov telescopes, we would have only knowledge of the synchrotron peak. This would give us
information about the electrons' distribution, but not on the other parameters of the model, and certainly it would not
help us to remove, for example, the residual degeneracy between the intensity of the magnetic field and the electron density.
This simple example already gives an idea of the importance of simultaneous multi-wavelength observations across all the emission
spectrum of the object.
To determine the parameters of the model that we just described, we will use a rigorous statistical approach, by fitting the
SSC model to several simultaneous multi-wavelength datasets corresponding to different activity states of the source. In this
way we will be able to constrain, within some significance level, the SSC model in each emission state (this is the primary
goal of this contribution). In turn, this also allows an analysis of the behavior of the parameters of the model in different
emission states, a point which is discussed in detail elsewhere\cite{bib:2011ApJ73314Mankuzhiyil}.
Before continuing with the analysis anticipated above, we will introduce in the following sections some basic ideas about non-linear fits and
their significance.
\section{\label{sec:min}Nonlinear $\chi ^{2}$ fitting}
In this section we discuss a technique that allows one to identify local minima of real valued functions
and that we will use in the context of non-linear least squares problems: in particular, our final goal
will be to use this technique to minimize the sum of the squares
of the deviations between a given set ${\mathcal{O}} = \{ (x _{i}, y _{i} , \sigma _{i}) | i = 1 , \dots , N\}$
of measured data points (in which $\sigma _{i}$ is the uncertainty in the measured quantity
$y _{i}$ and uncertainties in $x _{i}$ are considered small enough to be neglected) and a model
function $f ( x ; {\mathbf{p}})$ that depends on a set
${\mathbf{p}} = ( p _{j} ) _{j = 1 , \dots , n }$ of $n$ parameters. Under suitable assumptions
it can be proved\cite{bib:2003draeaftpsBevington} that the optimal values for the parameters according to the observed data
can be obtained by minimizing the Chi-Square function, i.e.
\[
\chi ^{2} ( {\mathbf{p}} )
=
\frac{1}{2} \sum _{i = 1} ^{N}
\left[
\frac{y _{i} - f (x _{i} ; {\mathbf{p}})}{\sigma _{i}}
\right] ^{2}
.
\]
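The $\chi ^{2}$ above translates directly into code; the toy model $f(x ; \mathbf{p}) = p _{0} \, e ^{-p _{1} x}$ and the data below are hypothetical, used only to exercise the function:

```python
import numpy as np

def chi_square(params, model, x, y, sigma):
    """Chi-square objective with the 1/2 prefactor used in the text:
    0.5 * sum_i ((y_i - f(x_i; p)) / sigma_i)**2."""
    residuals = (y - model(x, params)) / sigma
    return 0.5 * np.sum(residuals**2)

# Hypothetical toy model f(x; p) = p0 * exp(-p1 * x) and made-up data:
model = lambda x, p: p[0] * np.exp(-p[1] * x)
x = np.array([0.0, 1.0, 2.0])
y = np.array([2.0, 1.0, 0.5])
sigma = np.array([0.1, 0.1, 0.1])
value = chi_square(np.array([2.0, 0.7]), model, x, y, sigma)
```

Minimizing this quantity over $\mathbf{p}$ is exactly the non-linear least squares problem discussed next.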
When the function $f$ is a linear function of the parameters, a closed formula for the minimization of
$\chi ^{2}$ can be obtained. Here we are instead interested in the situation in which $f$ is a non-linear
function of the $p _{j}$. The minimization process can then be performed numerically in several iterations,
the goal of each iteration being to find a perturbation $\delta p _{j}$ of the current values of the parameters
$p _{j}$ that results in a lower value of $\chi ^{2}$. Several methods can be developed to find the values
of the parameters $p _{j}$ that minimize $\chi ^{2}$. In view of our final goal, we will concentrate on two
of these methods, namely the \emph{steepest descent method} and the \emph{Gau\ss{}-Newton} or \emph{Inverse
Hessian} method.
\subsection{The steepest descent method}
The steepest descent method\cite{bib:2004trIMMMadsen,bib:1992nricPress} is based on the evaluation of the gradient
of the objective function (the $\chi ^{2}$
in our case) with respect to the parameters $p _{j}$. The main idea of the method is that
the most direct path in the direction of a local minimum is to descend in the direction opposite to the
gradient of $\chi ^{2}$ with respect to the $p _{j}$. The components of the gradient of $\chi ^{2}$, i.e.
$\partial _{p _{j}} \chi ^{2}$ turn out to be
\[
\partial _{p _{j}} \chi ^{2}
=
- \sum _{i = 1} ^{N}
\left [
\frac{y _{i} - f (x _{i} ; \mathbf{p})}{\sigma _{i} ^{2}}
\partial _{p _{j}} f (x _{i} ; {\mathbf{p}})
\right]
.
\]
For later convenience we will set up a more compact matrix notation for the above relation, which is
\[
( \mathrm{\mathbf{grad} _{{\mathbf{p}}}} \chi ^{2} ({\mathbf{p}}) ) ^{T}
=
- ( \mathbf{y} - \mathbf{f}) ^{T} \, \mathbf{\Sigma} \, \mathbf{J}
,
\]
where the gradient is nothing but
\[
\mathrm{\mathbf{grad} _{{\mathbf{p}}}} \chi ^{2} ({\mathbf{p}})
=
( \partial _{p _{j}} \chi ^{2} ) _{j = 1 , \dots , n}
,
\]
$\mathbf{y}$ is the $N$-dimensional constant vector of the dependent variable observed data,
\[
\mathbf{y} = ( y _{i} ) _{i = 1 , \dots , N} ,
\]
$\mathbf{f}$ is the $N$ dimensional vector of the dependent variable values estimated according to the
model described by $f (x , {\mathbf{p}})$,
\[
\mathbf{f} = \mathbf{f} ({\mathbf{p}}) = ( f (x _{i} ; {\mathbf{p}}) ) _{i = 1 , \dots , N} ,
\]
$\mathbf{\Sigma}$ is the $N \times N$ diagonal matrix of the weights corresponding to the dependent variable
uncertainties in the $N$ measurements ${\mathcal{O}}$,
\[
\mathbf{\Sigma} = \mathrm{diag}( \sigma _{i} ^{-2}) _{i = 1 , \dots , N}
\]
and, finally\footnote{In this notation the $\chi ^{2}$ function can be written as
\begin{eqnarray}
\chi ^{2} ({\mathbf{p}}) & = & \frac{1}{2} (\mathbf{y} - \mathbf{f}) ^{T} \, \mathbf{\Sigma} \, (\mathbf{y} - \mathbf{f})
\nonumber \\
& = & \frac{1}{2} \mathbf{y} ^{T} \mathbf{\Sigma} \mathbf{y} - \mathbf{y} ^{T} \mathbf{\Sigma} \mathbf{f} +
\frac{1}{2} \mathbf{f} ^{T} \mathbf{\Sigma} \mathbf{f}
.
\end{eqnarray}
}, $\mathbf{J}$ is the Jacobian matrix of $\mathbf{f}$, i.e.,
\[
\mathbf{J}
=
( \partial _{p _{j}} \mathbf{f} ({\mathbf{p}}) ) _{j = 1 , \dots , n}
=
( \partial _{p _{j}} f (x _{i} ; {\mathbf{p}}) ) _{\begin{matrix}{\scriptstyle{}i = 1 , \dots , N} \hfill \\[-2mm] {\scriptstyle{}j = 1 , \dots , n} \hfill \end{matrix}}
.
\]
A perturbation $\delta \mathbf{p} = (\delta p _{j}) _{j = 1 , \dots , n}$ that updates the parameters in the direction
of the steepest descent, i.e. in the direction opposite to the gradient of $\chi ^{2}$, can then be obtained as
\[
\delta \mathbf{p} = \mu \mathbf{J} ^{T} \, \mathbf{\Sigma} \, ( \mathbf{y} - \mathbf{f}),
\]
where we used the fact that, since $\mathbf{\Sigma}$ is diagonal, $\mathbf{\Sigma} ^{T} = \mathbf{\Sigma}$.
$\mu$ is a positive real number that determines the length of the step in the steepest descent direction.
Based on the above framework, the steepest descent method consists of a sequence of parameters updates that
are always performed in the direction of the steepest descent until a minimum is found with the prescribed
accuracy. For simple objective functions the steepest descent method is recognized as a robustly convergent
approach to minimization and, when the number of parameters is high or very high, it can be considered
the most reliable, if not the only viable, method. There are, nevertheless, weak points of this method,
especially for complex models: they essentially stem from the fact that the method
does not take into account the curvature of the surface which, in our case, is the graph of the $\chi ^{2}$. Because of
this, it is possible to take too large steps in steep regions or too small steps in shallow regions: this
clearly affects the convergence of the algorithm. At the same time, particular structures in the $\chi ^{2}$
surface, as for instance narrow valleys, may also damage convergence. In the case of a narrow valley, for instance,
we would need to move a large step in the direction that points along the flat base of the valley, but only
a small one in the direction perpendicular to the valley walls. If it is true that second order information,
i.e. the use of curvature information about the $\chi ^{2}$ surface, would definitely help to improve the
method, it is often (and as we will see later on, in our case in particular) computationally expensive to
access this second order information. We will see that a good compromise can be found: to this end, we need to
first analyze another approach to minimization, which we will do in the following subsection.
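Before doing so, the steepest-descent update can be illustrated with a short numerical sketch. The linear toy model, the synthetic data and the step length $\mu$ below are hypothetical choices made only for this example:

```python
import numpy as np

def steepest_descent_step(p, x, y, sigma, f, jac, mu):
    """One update  p -> p + mu * J^T Sigma (y - f)  in the steepest descent direction."""
    r = y - f(x, p)                      # residuals  y_i - f(x_i; p)
    w = 1.0 / sigma**2                   # diagonal entries of Sigma
    J = jac(x, p)                        # N x n Jacobian  d f(x_i; p) / d p_j
    return p + mu * J.T @ (w * r)

# Hypothetical linear toy model  f(x; p) = p0 + p1 * x  with exact synthetic data
x = np.linspace(0.0, 1.0, 20)
y = 1.0 + 2.0 * x
sigma = np.ones_like(x)
f = lambda x, p: p[0] + p[1] * x
jac = lambda x, p: np.column_stack([np.ones_like(x), x])

p = np.zeros(2)
for _ in range(2000):                    # many small steps: slow but steady descent
    p = steepest_descent_step(p, x, y, sigma, f, jac, mu=0.05)
```

With a step length this small the iteration converges, but only slowly along the shallow direction of the $\chi ^{2}$ surface, which is precisely the weakness discussed above.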
\subsection{The inverse Hessian method}
Another approach that can be used to determine a minimum (in particular of the $\chi ^{2}$ function
described above) is the inverse Hessian (or Gauss--Newton) method\footnote{We will use the first denomination
here, although the second is also quite widespread.}. To give a sound motivation to this
approach\cite{bib:1992nricPress,bib:1996nmflspBjorck}
let us first consider a particular case, i.e. the one in which the dependence of the model function on the parameters
is linear. The model function can then be written as
\[
f (x , \mathbf{p}) = f ^{(0)} (x) + ( \mathbf{L} ^{(1)} (x) ) ^{T} \mathbf{p}
\stackrel{\scriptstyle\mathrm{def.}}{=} f ^{(0)} (x) + \sum _{j} ^{1 , n} L ^{(1)} _{j} (x) p _{j}
,
\]
where $\mathbf{L} ^{(1)} (x)$ is the vector $\mathbf{L} ^{(1)} (x) = (L ^{(1)} _{j} (x)) _{j = 1 , \dots , n}$.
For this linear model
\[
\mathbf{f} = \left( f ^{(0)} (x _{i}) + ( \mathbf{L} ^{(1)} (x _{i}) ) ^{T} \mathbf{p} \right) _{i = 1 , \dots , N}
\]
and
\[
\mathbf{J} = ( (\mathbf{L} ^{(1)} (x _{i}) ) ^{T}) _{i = 1 , \dots , N}
\]
so that
\[
\mathbf{f}
=
\mathbf{f} ^{(0)} + \mathbf{J} \mathbf{p} , \quad
\mathbf{f} ^{(0)} \stackrel{\scriptstyle\mathrm{def.}}{=}
(f ^{(0)} (x _{i})) _{i = 1 , \dots , N}
.
\]
The $\chi ^{2}$ is then a quadratic function of the parameters,
\[
\chi ^{2} (\mathbf{p}) = \frac{1}{2} ( \mathbf{y} - \mathbf{f} ^{(0)} ) ^{T} \mathbf{\Sigma} ( \mathbf{y} - \mathbf{f} ^{(0)} )
- ( \mathbf{y} - \mathbf{f} ^{(0)} ) ^{T} \mathbf{\Sigma} \mathbf{J} \mathbf{p}
+ \frac{1}{2} \mathbf{p}^{T} \mathbf{J}^{T} \mathbf{\Sigma} \mathbf{J} \mathbf{p}
\]
and the minimum can be obtained in closed form by algebraically solving for $\mathbf{p}$ the linear equation
\[
-
( \mathbf{y} - \mathbf{f} ^{(0)} ) ^{T} \mathbf{\Sigma} \mathbf{J}
+
\mathbf{p} ^{T} \mathbf{H}
=
0
,
\]
which was obtained remembering that, since $\mathbf{\Sigma}$ is diagonal,
$\mathbf{H} \stackrel{\scriptstyle\mathrm{def.}}{=} \mathbf{J}^{T} \mathbf{\Sigma} \mathbf{J}$
is an $n \times n$ symmetric matrix. The final result is the minimum point $\mathbf{p} _{\mathrm{min}}$,
\[
\mathbf{p} _{\mathrm{min}}
=
\mathbf{H} ^{-1} \mathbf{J} ^{T} \mathbf{\Sigma} ( \mathbf{y} - \mathbf{f} ^{(0)} )
=
\mathbf{H} ^{-1} \mathbf{J} ^{T} \mathbf{\Sigma} ( \mathbf{y} - \mathbf{f} ) + \mathbf{p}
.
\]
In this special case (the model function is linear in the parameters and, thus, the $\chi ^{2}$ is
quadratic in $\mathbf{p}$) we have that i) $\mathbf{H}$ is exactly the \emph{Hessian} of the $\chi ^{2}$,
ii) $\mathbf{H}$ is a constant
matrix (specifically, constant with respect to $\mathbf{p}$) and iii) it is possible to write in closed
form an exact solution for the minimum.
The inverse Hessian method takes advantage of the above result, by using it to deal with the general case,
in which the model function is generic. The way in which this is obtained is by iterating successive steps:
in each of them a linear approximation of the model function around the current values of the parameters
is used, which results in a quadratic approximation to the $\chi ^{2}$. The exact solution to this linearized
model is given by the equation derived just above and will provide us with a new value of the model parameters;
of course, in the fully non-linear case these new values are unlikely to realize the $\chi ^{2}$ minimum, but they
might turn out to be a much better approximation to it. By successive steps convergence to the minimum might
eventually be achieved. The advantage of this method, as opposed to the steepest descent method described in
the previous subsection, is that we are here using information related to the curvature of the $\chi ^{2}$
surface that might allow us to reach more quickly the sought minimum. At the same time, if we are not close
enough to the minimum, the linearized model will probably not be a good enough approximation of the fully
non-linear model. Several steps might be required to arrive close enough to the minimum, where the method
is particularly efficient.
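A minimal sketch of one inverse Hessian (Gauss-Newton) update, applied to a hypothetical two-parameter exponential model, could read as follows; note that only first derivatives of the model function enter, through $\mathbf{J}$:

```python
import numpy as np

def inverse_hessian_step(p, x, y, sigma, f, jac):
    """One update  p -> p + H^{-1} J^T Sigma (y - f),  with  H = J^T Sigma J."""
    r = y - f(x, p)
    w = 1.0 / sigma**2
    J = jac(x, p)
    H = J.T @ (w[:, None] * J)           # n x n symmetric matrix  J^T Sigma J
    return p + np.linalg.solve(H, J.T @ (w * r))

# Hypothetical non-linear toy model  f(x; p) = p0 * exp(p1 * x)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * x)
sigma = np.ones_like(x)
f = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])

p = np.array([1.0, -1.0])                # a reasonably close starting point
for _ in range(50):
    p = inverse_hessian_step(p, x, y, sigma, f, jac)
```

Started close enough to the minimum, the iteration converges in a handful of steps; started far away, the same update can easily diverge, in line with the discussion above.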
\subsection{\label{sec:levmar}Levenberg and Levenberg-Marquardt methods}
As we have discussed in the previous subsections, both the steepest descent method and the inverse Hessian
method have advantages and disadvantages. The first of the two iteratively tries to converge toward the
minimum by updating the parameters as
\[
\mathbf{p} \longrightarrow \mathbf{p} + \delta \mathbf{p}, \quad
\delta \mathbf{p} = \mu \mathbf{J} ^{T} \, \mathbf{\Sigma} \, ( \mathbf{y} - \mathbf{f}), \quad
\mu \in {\mathbb{R}} _{+}
;
\]
good convergence could be badly affected in situations in which the shape of the $\chi ^{2}$ surface
presents features that require an estimation of quantities related to the surface curvature, which could
be the case in the proximity of the minimum. The second method, also iteratively, tries to converge to
the minimum by updating the parameters as
\[
\mathbf{p} \longrightarrow \mathbf{p} + \delta \mathbf{p}, \quad
\delta \mathbf{p} = \mathbf{H} ^{-1} \mathbf{J} ^{T} \, \mathbf{\Sigma} \, ( \mathbf{y} - \mathbf{f})
;
\]
good convergence is more likely achieved, in this case, close to the minimum, when a linear approximation to
the model function is more appropriate, and the $\chi ^{2}$ surface, locally, could be well approximated by
a paraboloid.
An advantage common to both methods is that they only require the calculation of the first derivatives
of the model function (out of which $\mathbf{J}$ is made): no second derivatives appear in these methods, which
saves computational time. Also, from the qualitative analysis that we have carried out, the two methods
appear to be effective in complementary situations, the steepest descent being likely efficient far
away from the minimum, while the inverse Hessian could provide better convergence close to it. These considerations
strongly motivate Levenberg's proposal\cite{bib:1944QAM2164Levenberg},
which is to iteratively update the parameters according to the following rule:
\[
\mathbf{p} \longrightarrow \mathbf{p} + \delta \mathbf{p}, \quad
\delta \mathbf{p} = ( \mathbf{H} + \lambda {\mathbb{I}} ) ^{-1} \mathbf{J} ^{T} \, \mathbf{\Sigma} \, ( \mathbf{y} - \mathbf{f}), \quad
\lambda \in {\mathbb{R}} _{+},
\]
where ${\mathbb{I}}$ is the $n \times n$ identity matrix. The positive real number $\lambda$ is fixed at a small value at the
beginning of the computation: it is, then, dynamically adjusted by the algorithm according to the estimated distance
from the expected minimum. When the algorithm estimates that it is far from the minimum, $\lambda$ is progressively
increased, so that the contribution of $\mathbf{H}$ to $\mathbf{H} + \lambda {\mathbb{I}}$ becomes negligible and
the method behaves as the steepest descent one. On the contrary, when the algorithm estimates that it is closer to the expected
minimum, the value of the parameter $\lambda$ is progressively reduced, until it is so close to zero that the
contribution of $\lambda {\mathbb{I}}$ to $\mathbf{H} + \lambda {\mathbb{I}}$ is negligible and, in all respects,
the algorithm follows an inverse Hessian approach. The quantity that is computed to decide the decrease/increase
in $\lambda$ is the absolute difference between the value of $\chi ^{2}$ and the value of its quadratic approximation,
with the assumption that this approximation becomes better and better close to the minimum.
Following Levenberg, Marquardt proposed a further refinement of the model\cite{bib:1992nricPress,bib:1963SIAMJAM11431Marquardt},
hence the name of what can nowadays
be considered a robust standard for $\chi ^{2}$ minimization, i.e. the Levenberg-Marquardt method. The proposal
of Marquardt was to use, also during the steps closer to the steepest descent method, part of the information about
the curvature of the $\chi ^{2}$ surface encoded into $\mathbf{H}$ (we remark again that $\mathbf{H}$ is not the
Hessian of the $\chi ^{2}$ but it is the Hessian of the quadratic approximation to the $\chi ^{2}$ which is obtained
by linearizing the model function). The update rule for the Levenberg-Marquardt algorithm is, therefore,
\begin{equation}
\mathbf{p} \longrightarrow \mathbf{p} + \delta \mathbf{p}, \quad
\delta \mathbf{p} = ( \mathbf{H} + \lambda \mathrm{diag}(\mathbf{H}) ) ^{-1} \mathbf{J} ^{T} \, \mathbf{\Sigma} \, ( \mathbf{y} - \mathbf{f}), \quad
\lambda \in {\mathbb{R}} _{+}.
\label{eq:levmarupdalg}
\end{equation}
It is this method that we will apply in the study discussed in section~\ref{sec:Mrk421}.
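A compact sketch of the update rule~(\ref{eq:levmarupdalg}) with a simple accept/reject scheme for adjusting $\lambda$ is given below; the halving/doubling factors, the initial $\lambda$ and the toy model are illustrative assumptions, not part of the method itself:

```python
import numpy as np

def levenberg_marquardt(p, x, y, sigma, f, jac, lam=1e-3, n_iter=50):
    """Minimize chi^2 with  dp = (H + lam*diag(H))^{-1} J^T Sigma (y - f)."""
    w = 1.0 / sigma**2
    chisq = lambda q: 0.5 * np.sum(w * (y - f(x, q))**2)
    for _ in range(n_iter):
        r = y - f(x, p)
        J = jac(x, p)
        H = J.T @ (w[:, None] * J)
        A = H + lam * np.diag(np.diag(H))
        dp = np.linalg.solve(A, J.T @ (w * r))
        if chisq(p + dp) < chisq(p):
            p, lam = p + dp, 0.5 * lam   # step accepted: move toward Gauss-Newton
        else:
            lam = 2.0 * lam              # step rejected: move toward steepest descent
    return p

# Same hypothetical exponential toy model used before
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * x)
sigma = np.ones_like(x)
f = lambda x, p: p[0] * np.exp(p[1] * x)
jac = lambda x, p: np.column_stack([np.exp(p[1] * x),
                                    p[0] * x * np.exp(p[1] * x)])

p = levenberg_marquardt(np.array([1.0, -1.0]), x, y, sigma, f, jac)
```

Since a step is accepted only if it lowers the $\chi ^{2}$, the iteration is monotone and, once $\lambda$ has shrunk, inherits the fast local convergence of the inverse Hessian method.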
\section{\label{sec:stasig}Statistical Significance}
In this section we would like to discuss some selected topics about what are usually
called \emph{goodness of fit tests}. In particular we will concentrate our attention
on a particular test, the Kolmogorov-Smirnov (KS)
test\cite{bib:1933GIIA41Kolmogorov,bib:1939MS63Smirnov,bib:1943AMS14305Scheffe,bib:1949ProcBSMSP93Wolfowitz,bib:1951JASA4668Massey}.
The KS test can be used to
decide if a data sample is coming from a population with a specific distribution,
and will be thoroughly discussed in section~\ref{sec:kolsmi}. As an introduction to this more
specific topic, we will first recall the general problem. Then, in the next subsection,
we will describe a standard approach that can be used with binned data. This will give
us the possibility to appreciate some crucial difference (and advantages) of the KS test,
which will be presented in the last part of this section.
\subsection{The general problem.}
Let us consider the case in which we are given two sets of data, say ${\mathcal{O}} ^{(1)}$
and ${\mathcal{O}} ^{(2)}$ : we may be interested in quantifying our certainty about the fact
that the two sets of data are coming from populations having the same distribution function.
To be more precise, let us consider the following statement, i.e. the \emph{null hypothesis}
${\mathscr{N}} _{0}$,
\begin{description}
\item[${\mathscr{N}} _{0}$]$\!$:$\,\,$\textsc{the two sets of data} ${\mathcal{O}} ^{(1)}$ \textsc{and} ${\mathcal{O}} ^{(2)}$
\textsc{are coming from the same population distribution function}.
\end{description}
We are interested in methods that allow us to \emph{disprove} ${\mathscr{N}} _{0}$
\emph{to a certain level of confidence}; if we can succeed in \emph{disproving}
${\mathscr{N}} _{0}$, then we will conclude that ${\mathcal{O}} ^{(1)}$ and
${\mathcal{O}} ^{(2)}$ are coming from different distributions. We remark that disproving
${\mathscr{N}} _{0}$ \emph{to a certain level of confidence} is as far as we can go from
the statistical perspective and that failure in disproving ${\mathscr{N}} _{0}$ only
shows that at the given \emph{level of confidence} it is \emph{consistent} to consider
the two sets of data as coming from the same distribution. In the general statement
that we are discussing we did not make any assumption about ${\mathcal{O}} ^{(1)}$
and ${\mathcal{O}} ^{(2)}$ that, in general, can be coming from two different unknown
distributions. Later on, we will be interested in a particular case, namely the one in
which one of the two distributions is known. In this case, the null hypothesis will be
\begin{description}
\item[${\mathscr{N}} ' _{0}$]$\!$:$\,\,$\textsc{the set of data} ${\mathcal{O}} ^{(1)}$
\textsc{is coming from a population distributed as ${\mathcal{D}}$}.
\end{description}
\subsection{The Chi-Square test}
The first approach that we will describe is an accepted standard to solve the above problem
for \emph{binned} data\cite{bib:2003draeaftpsBevington,bib:1992nricPress}.
Let us then consider a binning of the sets of data ${\mathcal{O}} ^{(\alpha)}$, $\alpha = 1,2$,
in $N _{\mathrm{b}}$ bins indexed by a set of integers $i \in I$, such that $n ^{(\alpha)} _{i}$,
$\alpha = 1,2$, is the number of observed points of the ${\mathcal{O}} ^{(\alpha)}$ data falling
in the $i^{\mathrm{th}}$-bin (the binning intervals are the same for both sets of measurements).
We can then construct the following estimator
\begin{equation}
\chi ^{2} [ {\mathcal{O}} ^{(1)} , {\mathcal{O}} ^{(2)} ; N _{\mathrm{b}} ]
=
\sum _{i \in I \setminus \bar{I}}
\frac{( r ^{(1)} n ^{(1)} _{i} - r ^{(2)} n ^{(2)} _{i} ) ^{2}}{n ^{(1)} _{i} + n ^{(2)} _{i}}
,
\label{eq:chisqutwodat}
\end{equation}
where $\bar{I} = \{ j \in I \, | \, n ^{(1)} _{j} = n ^{(2)} _{j} = 0\}$ is used to exclude
from the sum bins for which the corresponding term would not be well defined and the $r ^{(\alpha)}$,
$\alpha = 1 , 2$, are defined according to the following relations:
\[
N ^{(\alpha)} = \sum _{i \in I} n ^{(\alpha)} _{i} = \# {\mathcal{O}} ^{(\alpha)}
, \quad
r ^{(\alpha)} = \left( \frac{N ^{{(1)}}}{N ^{(2)}} \right) ^{\alpha - \frac{3}{2}}
, \quad
\alpha = 1 , 2 .
\]
To consider the correct $\chi ^{2}$ statistics, we also need the number of degrees of freedom,
$\nu$, that can be associated to the test. This number is $\nu = N _{\mathrm{b}} - N _{\mathrm{c}}$,
where $N _{\mathrm{b}}$, from above, is the number of bins, and $N _{\mathrm{c}}$ is
the number of independent constraints that have been imposed on the sets of data.
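A direct implementation of the estimator~(\ref{eq:chisqutwodat}) might look as follows. The bin counts are invented for illustration, and the number of degrees of freedom is taken, for simplicity, to be the number of bins actually entering the sum (i.e. assuming no additional constraints); `chi2.sf` evaluates the Chi-square probability function $Q(\chi ^{2} | \nu)$ recalled later in this section:

```python
import numpy as np
from scipy.stats import chi2

def chi2_two_samples(n1, n2):
    """The two-sample binned chi^2 estimator with unequal dataset sizes."""
    n1, n2 = np.asarray(n1, float), np.asarray(n2, float)
    keep = (n1 + n2) > 0                          # drop bins with n1_i = n2_i = 0
    N1, N2 = n1.sum(), n2.sum()
    r1, r2 = np.sqrt(N2 / N1), np.sqrt(N1 / N2)   # r^(alpha) = (N1/N2)^(alpha - 3/2)
    num = (r1 * n1[keep] - r2 * n2[keep])**2
    return np.sum(num / (n1[keep] + n2[keep])), int(keep.sum())

n1 = [12, 30, 25, 10, 3, 0]                       # hypothetical bin counts
n2 = [10, 28, 30, 12, 5, 0]
stat, nu = chi2_two_samples(n1, n2)
p_value = chi2.sf(stat, df=nu)                    # Q(chi^2 | nu)
```

A small $p$-value would lead us to reject the null hypothesis that the two binned datasets come from the same population distribution.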
A similar test can be applied to the case in which we have only one set of data, say ${\mathcal{O}} ^{(1)}$,
and we would like to disprove if ${\mathcal{O}} ^{(1)}$ is coming from a given distribution ${\mathcal{D}}$.
As before let us consider a binning of ${\mathcal{O}} ^{(1)}$ data in $N _{\mathrm{b}}$ bins; again $n ^{(1)} _{i}$
will be the number of ${\mathcal{O}} ^{(1)}$ data falling in the $i^{\mathrm{th}}$-bin. We will, moreover,
define $\delta _{i}$ as the number of data expected in the $i^{\mathrm{th}}$-bin if the data were distributed according
to ${\mathcal{D}}$ (of course, $\delta _{i}$ does not need to be an integer). If $I$ is the set of integers indexing
the binning and $\bar{I} = \{ j \in I \, | \, n ^{(1)} _{j} = \delta _{j} = 0\}$,
we can define the following estimator
\begin{equation}
\chi ^{2} [ {\mathcal{O}} ^{(1)} ; {\mathcal{D}} ; N _{\mathrm{b}} ]
=
\sum _{i \in I \setminus \bar{I}}
\frac{( n ^{(1)} _{i} - \delta _{i} ) ^{2}}{\delta _{i}}
.
\label{eq:chisquonedat}
\end{equation}
We remark that, in this case, there are some potentially not well-defined terms in the sum, i.e.
terms corresponding to bins in which $\delta _{i} = 0$ but $n ^{(1)} _{i} \neq 0$. These terms mean
that, according to the distribution ${\mathcal{D}}$, there are no results expected in the given bin,
whereas the observed data \emph{do} have occurrences in the bin: this is a simple case in which \emph{it is disproved}
that the data can be obtained from the distribution ${\mathcal{D}}$. As in the former case the number of
degrees of freedom is needed to have the correct $\chi ^{2}$ statistics. If $N _{\mathrm{p}}$ is the number
of parameters required to know the distribution ${\mathcal{D}}$ that have been determined from the data, and
if the occurrences in each bin that are expected from the model, $\delta _{i}$, are fixed (and \emph{not}
renormalized to match the total number of observed points) then $\nu = N _{\mathrm{b}} - N _{\mathrm{p}}$.
If, instead, the constraint $\sum _{i \in I} n ^{(1)} _{i} = \sum _{i \in I} \delta _{i}$ is imposed, then
$\nu = N _{\mathrm{b}} - N _{\mathrm{p}} - 1$. Additional independent constraints that should be present,
decrease accordingly the number of degrees of freedom.
Whether we are working in the first framework, with two sets of data, or in the second, with only one set of
data and a comparison distribution, we end up with an estimator
($\chi ^{2} [ {\mathcal{O}} ^{(1)} , {\mathcal{O}} ^{(2)} ; N _{\mathrm{b}} ]$ or
$\chi ^{2} [ {\mathcal{O}} ^{(1)} ; {\mathcal{D}} ; N _{\mathrm{b}} ]$ respectively) and an associated number
of degrees of freedom $\nu$. In what follows we will write briefly $\chi ^{2} [ N _{\mathrm{b}} ]$, since our
considerations can be applied in the same way to both cases and we wish to explicitly emphasize the dependence
on the binning that we had to perform. In both definitions of $\chi ^{2} [ N _{\mathrm{b}} ]$ the terms in
the sums are not individually normal. However in the limit in which the number of bins, $N _{\mathrm{b}}$, is
large enough, or the number of events in \emph{each} bin is large enough, it is standard practice to consider
the above defined $\chi ^{2} [ N _{\mathrm{b}} ]$ as the sum of the squares of $\nu$ \emph{normal} random
variables of unit variance and zero mean\footnote{This is the reason why the denominator of~(\ref{eq:chisqutwodat})
is twice the average of $n ^{(1)} _{i}$ and $n ^{(2)} _{i}$: indeed, since the variance of the difference of
two normal variables is the sum of the two individual variances, twice their average is what is required
to obtain for each term of the sum a random variable with unit variance.}.
The Chi-square probability function $Q (\chi ^{2} | \nu)$, i.e. the probability that the sum of the squares
of $\nu$ random \emph{normal} variables with unit variance and zero mean is greater than $\chi ^{2}$, can be
used with $\chi ^{2} [ N _{\mathrm{b}}]$ to test our null hypothesis. $Q (\chi ^{2} | \nu)$ is defined as
\[
Q (\chi ^{2} | \nu)
=
\frac{1}{\Gamma ( \nu / 2 )}
\int _{\chi ^{2} / 2} ^{+ \infty}
e ^{-t} \, t ^{-1 + \nu / 2} \, d t
\]
and it is tabulated for convenience in statistics textbooks.
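As a sanity check, the integral above can be evaluated numerically and compared with the survival function of the chi-square distribution as implemented in scipy (the arguments below are arbitrary):

```python
import numpy as np
from math import gamma
from scipy.integrate import quad
from scipy.stats import chi2

def Q(chisq, nu):
    """Q(chi^2 | nu) computed directly from its integral definition."""
    integrand = lambda t: np.exp(-t) * t**(nu / 2.0 - 1.0)
    val, _ = quad(integrand, chisq / 2.0, np.inf)
    return val / gamma(nu / 2.0)

q_direct = Q(10.0, 5)
q_scipy = chi2.sf(10.0, 5)    # the same quantity from scipy
```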
The Chi-Square method we just recalled works with binned data; although it is always possible to obtain
binned data from continuous data, there is often a great deal of arbitrariness in the binning process
and it is likely that the outcome of the test will depend on the binning (a fact which we already
emphasized in our notation by writing $\chi ^{2} [ N _{\mathrm{b}} ]$ above). At the same time the Chi-square
method makes an assumption about the \emph{normality} of the data: although this assumption is at the
background of many statistical results, it might not be always satisfied and it would be desirable to
obtain ways to test the null hypothesis without relying on this assumption. This turns out to be
possible for continuous distributions, and an accepted standard is the Kolmogorov-Smirnov (KS) test, which
is discussed in the next subsection.
\subsection{\label{sec:kolsmi}The Kolmogorov-Smirnov test}
As we anticipated, the KS test can be used for continuous distributions. It avoids binning,
it does not assume normality, and it is based on the concept of empirical cumulative distribution function
which we will soon introduce. We will start our analysis by considering the case of ${\mathscr{N}} ' _{0}$,
assuming that the results in ${\mathcal{O}} ^{(1)}$ are the results obtained by sampling $N = N ^{(1)}$
independent identically distributed random variables $X _{i}$, $i = 1 , \dots , N$, distributed according
to some unknown distribution ${\mathcal{P}}$. We will denote by $P (x)$ the cumulative distribution
function associated with ${\mathcal{P}}$, i.e.
$P (x) \stackrel{\scriptstyle\mathrm{def.}}{=} {\mathrm{Prob}} (X _{1} \leq x)$.
An \emph{empirical cumulative distribution function} is a way to count how many of the observed points can
be found below the value $x$, and it is defined as
\[
P _{N} (x)
=
\frac{1}{N}
\sum _{i} ^{1,N}
{\mathbb{I}}(X _{i} \leq x)
,
\]
where ${\mathbb{I}}({\mathscr{C}}) = 1$ \texttt{if} $\,{\mathscr{C}}$ \texttt{is true}$\,$ and ${\mathbb{I}}({\mathscr{C}}) = 0$
otherwise. $P _{N} (x)$ counts the proportion of ${\mathcal{O}} ^{(1)}$ that can be found below $x$ in steps of $1/N$.
By the law of large numbers it can be seen that the proportion of ${\mathcal{O}} ^{(1)}$ that can be found below $x$
tends to the cumulative distribution function $P (x)$, i.e.
\[
P _{N} (x) \longrightarrow P (x) \quad \mbox{in probability}.
\]
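The empirical cumulative distribution function is easily implemented; the following sketch draws a (hypothetical) uniform sample, for which $P(x) = x$, and inspects the largest deviation of $P _{N}$ from $P$:

```python
import numpy as np

def empirical_cdf(sample, x):
    """P_N(x) = (1/N) * #{i : X_i <= x}, evaluated at each point of x."""
    sample = np.asarray(sample)
    return np.mean(sample[:, None] <= np.atleast_1d(x), axis=0)

rng = np.random.default_rng(0)
X = rng.uniform(size=10_000)             # uniform sample on [0,1]: P(x) = x
grid = np.linspace(0.0, 1.0, 101)
max_dev = np.max(np.abs(empirical_cdf(X, grid) - grid))
```

With $N = 10 ^{4}$ points the largest deviation is already of the order of $10 ^{-2}$, consistent with the $1 / \sqrt{N}$ scaling that motivates the KS statistics below.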
We will now prove a first result\cite{bib:1948AMS19177Feller,bib:1949JASA20343Doob}.
\paragraph{Proposition.} \emph{If $P(x)$ is continuous then the distribution of}
\begin{equation}
\sup _{x \in {\mathbb{R}}} | P _{N} (x) - P (x)|
\label{eq:supempcumdisfun}
\end{equation}
\emph{does not depend on $P$}.
\paragraph{Proof.}{$\quad$}\\
{\hrule\vspace*{2mm}\small{}\noindent{}We are interested in the behavior of the distribution of~(\ref{eq:supempcumdisfun}), i.e.
\begin{equation}
{\mathrm{Prob}} \left( \sup _{x \in {\mathbb{R}}} | P _{N} (x) - P (x)| \leq t \right) .
\label{eq:kolsmipro000}
\end{equation}
Let us define the function $P ^{-1} (z) \stackrel{\scriptstyle\mathrm{def.}}{=} \min \{ x | P (x) \geq z \}$,
where $0 \leq z \leq 1$ because this is the range of $P$, and preliminarily calculate
\begin{equation}
P _{N} (x)
=
P _{N} (P ^{-1} (z))
=
\frac{1}{N}
\sum _{i} ^{1,N}
{\mathbb{I}}(X _{i} \leq P ^{-1} (z))
=
\frac{1}{N}
\sum _{i} ^{1,N}
{\mathbb{I}}(P ( X _{i} ) \leq z )
.
\label{eq:kolsmipro001}
\end{equation}
The above expression contains $P (X _{i})$, which is uniformly distributed on the interval $[0,1]$,
because the cumulative distribution function of $P ( X _{1} )$ is given by
\[
{\mathrm{Prob}} (P (X _{1}) \leq t)
=
{\mathrm{Prob}} (X _{1} \leq P ^{-1} (t))
=
P (P ^{-1} (t))
=
t
.
\]
This implies that the random variables $Y _{i} = P (X _{i})$, $i = 1 , \dots , N$, are independent and
uniformly distributed on $[0,1]$, so that we can continue the chain of equalities in~(\ref{eq:kolsmipro001})
to obtain
\begin{equation}
P _{N} (x)
=
P _{N} (P ^{-1} (z))
=
\dots
=
\frac{1}{N}
\sum _{i} ^{1,N}
{\mathbb{I}}(P ( X _{i} ) \leq z )
=
\frac{1}{N}
\sum _{i} ^{1,N}
{\mathbb{I}}(Y _{i} \leq z )
,
\label{eq:kolsmipro002}
\end{equation}
in which the last term is independent of $P$.\\
The proof can now be quickly completed by performing a change of variable in~(\ref{eq:kolsmipro000})
and then using the result in~(\ref{eq:kolsmipro002}); we get
\begin{eqnarray}
{\mathrm{Prob}} (\sup _{x \in {\mathbb{R}}} | P _{N} (x) - P (x)| \leq t)
& = &
{\mathrm{Prob}} \left( \sup _{0 \leq z \leq 1} | P _{N} (P ^{-1} (z)) - P (P ^{-1} (z))| \leq t \right)
\nonumber \\
& = &
{\mathrm{Prob}} \left( \sup _{0 \leq z \leq 1} \left | \frac{1}{N} \sum _{i} ^{1,N} {\mathbb{I}}(Y _{i} \leq z ) - z \right | \leq t \right)
\nonumber
\end{eqnarray}
which shows that~(\ref{eq:kolsmipro000}) is independent of $P$, \emph{Q.E.D.}.\\
\hrule
}
\bigskip
The above results imply that \emph{uniformly over} ${\mathbb{R}}$ we have
\[
\sup _{x \in {\mathbb{R}}} | P _{N} (x) - P (x)| \longrightarrow 0
\]
(i.e. the largest difference between $P _{N}$ and $P$ goes to zero in probability) and
that the distribution of the above supremum does not depend on the \emph{unknown} distribution
of the sample ${\mathcal{O}} ^{(1)}$, i.e. $P$, whenever $P$ is continuous. The final step that
motivates the KS test follows from the observation that, given $x$, the central limit theorem
implies that $\sqrt{N} ( P _{N} ( x ) - P ( x ))$ converges in distribution to a normal
distribution with zero mean and variance $P (x) (1 - P (x))$ (because this is the variance
of ${\mathbb{I}} (X _{1} \leq x)$).
Moreover, $\sqrt{N} \sup _{x \in {\mathbb{R}}} | P _{N} ( x ) - P ( x ) |$ also converges in
distribution, as shown by the following proposition.
\paragraph{Proposition.} \emph{The cumulative distribution function of}
$\sqrt{N} \sup _{x \in {\mathbb{R}}} | P _{N} ( x ) - P ( x ) |$ is such that
\begin{equation}
\mathrm{Prob} \left( \sqrt{N} \sup _{x \in {\mathbb{R}}} | P _{N} ( x ) - P ( x ) | \leq t \right)
\longrightarrow
P _{\mathrm{KS}} ( t ) ,
\label{eq:proempcumdisfun}
\end{equation}
where $P _{\mathrm{KS}} (t) = 1 - 2 \sum _{k = 1} ^{\infty} (-1) ^{k - 1} \exp (- 2 k ^{2} t ^{2})$ is the cumulative
distribution function of the Kolmogorov-Smirnov distribution.\\
\paragraph{Proof.}{$\quad$}\\
{\hrule\vspace*{2mm}\small{}\noindent{}The proof of this result will not be given here\cite{bib:1948AMS19177Feller,bib:1949JASA20343Doob}.\\
\hrule
}
\bigskip
The net result of the above analysis is that if $P _{0}$ is the cumulative distribution function
of the distribution associated with ${\mathscr{N}} ' _{0}$, then we can consider the statistics
\begin{equation}
D _{N} = \sqrt{N} \sup _{x \in {\mathbb{R}}} | P _{N} (x) - P _{0} (x)| ,
\label{eq:kolsmista}
\end{equation}
which will depend only on $N$ and can then be tabulated\cite{bib:1933GIIA41Kolmogorov,bib:1939MS63Smirnov}. If $N$ is big enough the distribution
of $D _{N}$ is approximated by the Kolmogorov-Smirnov distribution. We will now consider what
happens if ${\mathscr{N}} ' _{0}$ fails, which means that $P \neq P _{0}$: in this case the
empirical cumulative distribution function will converge to $P$, it will then not approximate
$P _{0}$ and for large $N$ we will have $\sup _{x \in {\mathbb{R}}} | P _{N} (x) - P _{0} (x)| > \epsilon$
for some small enough, positive $\epsilon$, i.e.
\[
D _{N} = \sqrt{N} \sup _{x \in {\mathbb{R}}} | P _{N} (x) - P _{0} (x)| > \sqrt{N} \epsilon .
\]
This allows us to define a decision rule in the form $D _{N} \leq c$, where the constant $c$ is
fixed by the significance level, and the decision rule can be checked against tabulated values for $D _{N}$.
The Kolmogorov-Smirnov test can also be applied to ${\mathscr{N}} _{0}$, where we are interested
in understanding if the two sets of data ${\mathcal{O}} ^{(1)}$ and ${\mathcal{O}} ^{(2)}$ are coming
from the same distribution. Let $\{ X ^{(\alpha)} _{i} \} _{i = 1 , \dots , N ^{(\alpha)}}$
be the sample of ${\mathcal{O}} ^{(\alpha)}$ having cumulative distribution function $P ^{(\alpha)}$,
$\alpha = 1, 2$. If $P ^{(\alpha)} _{N ^{(\alpha)}}$, $\alpha = 1, 2$, are the corresponding empirical
cumulative distribution functions, then the two propositions above are also satisfied by the following
statistics,
\[
D _{N ^{(1)} N ^{(2)}}
=
\left(
\frac{N ^{(1)} N ^{(2)}}{N ^{(1)} + N ^{(2)}}
\right) ^{1/2}
\sup _{x \in {\mathbb{R}}}
\left |
P ^{(1)} _{N ^{(1)}} (x) - P ^{(2)} _{N ^{(2)}} (x)
\right |
,
\]
to which a similar decision rule as the above can be applied.
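Both variants of the test are available in scipy; a minimal sketch on synthetic normal samples (so that the null hypotheses are true by construction) is the following:

```python
import numpy as np
from scipy.stats import kstest, ks_2samp

rng = np.random.default_rng(1)
a = rng.normal(size=500)     # sample to be tested
b = rng.normal(size=400)     # second sample from the same population

# One-sample test of N'_0: is `a` drawn from a standard normal distribution?
res1 = kstest(a, "norm")
# Two-sample test of N_0: do `a` and `b` come from the same distribution?
res2 = ks_2samp(a, b)
```

Each result carries the supremum statistic and the associated $p$-value; note that scipy reports the unscaled supremum, i.e. without the $\sqrt{N}$ factor appearing in~(\ref{eq:kolsmista}). Since the null hypotheses hold here, neither test should reject them at any reasonable significance level.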
\subsection{\label{sec:stasigfit}Statistical Significance of $\chi ^{2}$ fits}
In the following we will be interested in determining the statistical significance of fitting
observed data points to a highly non-linear model function. In addition, the model function
will depend on several parameters, and, although at the present level of knowledge these
parameters are considered independent, the complexity of the physical situation makes it
possible that, in more refined models, some of them might show correlations. In addition to the
complex structure of the models, there is the fact that the data points will be spread across
several decades (range of the independent variable) and will come from
different instruments based, not only on different hardware/software, but also on
different physical processes for their operation. It is important under these circumstances
to have a check on the goodness of fit. Because of the complexity of the situation, the approach
that we propose is to perform a Kolmogorov-Smirnov test on the normality of the residuals
obtained after the fitting procedure. In this case our null hypothesis will be
\begin{description}
\item[\label{text:nulhyp002}${\mathscr{N}} '' _{0}$]$\!$:$\,\,$\textsc{the residuals of the non-linear fit of
multi-wavelength data to the spectral energy distribution function obtained by implementing
a given synchrotron self Compton model are normally distributed}.
\end{description}
As a final remark, we remember that there are situations in which the critical values of the
test statistics can be difficult to calculate: these include situations in which the samples
are of small size and/or the parameters of the distribution are estimated with the same data
that are being tested. For the second case a convenient solution is the inclusion of a correction
factor, but, in general, the safest way to proceed is to use Monte Carlo \label{text:MonCar}
methods, for instance to generate, under the fitted distribution, datasets of the same length as the tested one, and
use them to obtain the correct critical values.
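A sketch of this Monte Carlo strategy, for the case in which the mean and standard deviation of a normal are estimated from the very data being tested (a Lilliefors-type situation), could be the following; the sample size, the number of simulations and the significance level are arbitrary choices for the example:

```python
import numpy as np
from scipy.stats import kstest, norm

def ks_with_estimated_params(sample):
    """KS statistic against a normal whose parameters are estimated from `sample`."""
    mu, sd = sample.mean(), sample.std(ddof=1)
    return kstest(sample, norm(mu, sd).cdf).statistic

def mc_critical_value(n, n_sim=2000, alpha=0.05, seed=0):
    """Critical value of the statistic above, estimated by simulation."""
    rng = np.random.default_rng(seed)
    stats = [ks_with_estimated_params(rng.normal(size=n)) for _ in range(n_sim)]
    return np.quantile(stats, 1.0 - alpha)

crit = mc_critical_value(n=50)   # reject the null hypothesis if the statistic > crit
```

Because the parameters are fitted to each simulated dataset as well, the resulting critical value is noticeably smaller than the one read off the standard KS tables.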
\section{\label{sec:Mrk421}An application: Markarian~421}
As a \emph{field test} application of what we have seen in the previous sections, we will now apply the models and
techniques introduced above to a concrete case, i.e. the study of the emission properties of the AGN Markarian~421,
following a recent publication\cite{bib:2011ApJ73314Mankuzhiyil}. This section will be divided in subsections. In
each of them we will discuss one of the aspects that we have introduced in the sections above and there will be
a direct correlation with the subsection number and the section number in which the concepts exemplified in each
subsection have been discussed in general.
\subsection{\label{sec:sou}The source}
Markarian~421 (Mrk421) is the closest blazar (at a redshift $z = 0.030$) and the first extragalactic $\gamma$-ray source
with emission in the TeV range detected by Imaging Air Cherenkov Telescopes\cite{bib:1992Nature358477Punch,bib:1996A&A311L13Petry}.
It is, nowadays, the best known blazar, together with the similarly close Markarian~501, and falls within the class of
HBL objects. Mrk~421 is a source that shows remarkable variability, both in flux
(variations by almost two orders of magnitude were observed) and in time (flux doubling was detected on a time
scale of the order of 15 minutes\cite{bib:1996Nature383319Gaidos}).
For our purpose Mrk421 is an excellent source, which has been extensively studied across more than 19 decades in energy. In particular:
\begin{enumerate}
\item the SSC emission dominates the detected spectrum, with correlated low-high energy fluctuations;
\item the Compton peak is in the range where Cherenkov observations are effective;
\item the spectrum can be described as a single power-law.
\end{enumerate}
For the above reasons, the one-zone SSC model that we have described in the previous section is a good candidate to describe
the SED of this source. At the same time simultaneous multi-wavelength observations are available, and, among them, it was
possible to identify nine, good to very good quality, spectral energy distribution sets of simultaneous data corresponding
to different emissions states.
The detailed description of the datasets is reported below.
\begin{description}
\item[state 1 and state 2]$\!\!$. The first two datasets that we consider correspond to multi-wavelength data obtained from
campaigns triggered by a major outburst of Mrk421 that was detected by the $10$m \emph{Whipple} telescope in April
2006\cite{bib:2009ApJ703169Acciari}. It was unfortunately
impossible to promptly set up a multi-wavelength campaign because of some visibility constraints on \emph{XMM-Newton}; for this reason
simultaneous observations at different wavelengths were taken during the decaying phase of the flare. The optical monitor of XMM-Newton
was used to collect optical and ultraviolet data, whereas the EPIC-pn detector of the same telescope provided X-ray observations. Very
high energy $\gamma$-ray data were collected by \emph{MAGIC} and Whipple telescopes. Altogether this resulted in more than 7 hours of simultaneous
observations, about 4 hours of which form the datasets for state 1 and more than 3 hours the dataset for state 2.
\item[state 3]$\!\!$. A third dataset contains multi-wavelength observations initiated after a detection by the all-sky monitor of the
\emph{Rossi X-ray Timing Explorer} and by the $10$m Whipple telescope\cite{bib:2006ApJ641740Rebillot}.
The observation campaign continued during December 2002 and January 2003.
The very high energy data was obtained by Whipple between December 4, 2002 and January 15, 2003 and by the \emph{High Energy Gamma Ray Astronomy
CT1} between November 3, 2002 and December 12, 2003. However, since our analysis is centered on simultaneous observations, we have
used only the very high energy data taken at nights during which simultaneous X-ray observations were available. Optical information
consists of the average flux obtained from the \emph{Boltwood observatory} optical, \emph{KVA} and \emph{WIYN} telescopes.
\item[state 4 and state 9]$\!\!$. Two more datasets are the result of a longer campaign undertaken during 2003 and
2004\cite{bib:2005ApJ630130Blazejowski}. The
Rossi X-ray Timing Explorer was used to collect the X-ray flux, that was then classified into three sets, having low-, medium- and
high-flux, respectively. X-ray observations were then complemented by Whipple very high energy $\gamma$-ray data, taken within
one hour of the selected X-ray ones. The Whipple Observatory $1.2$m telescope and the Boltwood Observatory $0.4$m telescope provided optical
data following the same grouping method: although it was not possible to get optical data simultaneously with the remaining multi-wavelength
data, the fact that only minor variations in the optical flux were detected made it possible to consider its maximum and minimum values as
reliable approximations. In this way it was possible to obtain a medium flux dataset, corresponding to state 4, between March 8
and May 3, 2003 and a high flux dataset, corresponding to state 9, between April 16 and 20, 2004.
\item[state 5 and state 7]$\!\!$. Another two datasets are the result of a multi-wavelength campaign that took place between March 18 and
March 25, 2001\cite{bib:2008ApJ677906Fossati}.
X-ray data are the results of Rossi X-ray Timing Explorer observations, whereas the very high energy $\gamma$-ray flux was
obtained by the Whipple telescope. State 5 corresponds to a post-flare state during March 22 and 23, for which optical information
corresponds to the lowest flux detected by the $1.2$m Harvard-Smithsonian telescope on Mt. Hopkins. State 7 is, instead, the high-flux peak
of March 19, also complemented by optical data from the same instrument as for state 5, but using the highest optical flux.
\item[state 6]$\!\!$. This dataset was taken after an outburst in May 2008 and contains the results of about two and a half hours of
simultaneous observations\cite{bib:2009ApJ703169Acciari}. As for state 1 and state 2 the optical monitor of XMM-Newton was used to
collect optical and ultraviolet data, whereas the EPIC-pn detector of the same telescope provided X-ray observations. The very high
energy $\gamma$-ray flux was in this case obtained by \emph{VERITAS}.
\item[state 8]$\!\!$. The last dataset is the result of a dedicated multi-wavelength campaign on June 6, 2008\cite{bib:2009ApJ691L13Donnarumma}.
Optical data was obtained by \emph{WEBT}, whereas X-ray observations were made by the Rossi X-ray Timing Explorer and by \emph{Swift}/BAT and,
finally, very high energy $\gamma$-ray fluxes were taken by VERITAS.
\end{description}
All sets of data present a marked qualitative difference between the optical to X-ray and the very high energy $\gamma$-ray ranges:
the most striking one concerns the uncertainties in the measured flux, which are very small, when not negligible, at low energies,
and much more sizeable at very high energies. This observation will play a role in our later discussion on the statistical significance
of the fit results.
In what follows we will, first, show how to fit each of these datasets to the chosen SSC model and how to test the goodness of the fit.
An in depth interpretation of the physical conclusions about the source is beyond the scope of this lecture and can be found in the
literature\cite{bib:2011ApJ73314Mankuzhiyil}.
\subsection{\label{sec:chi}Nonlinear $\chi^{2}$ fit}
The fit algorithm will be an implementation of the Levenberg-Marquardt method discussed in
subsection \ref{sec:levmar}. As this is a standard approach in nonlinear minimization, it is
possible to conveniently find code which is optimized and efficient for general problems.
In particular we have used as a starting point the \texttt{mrqmin} function discussed in
Ref.~\refcite{bib:1992nricPress}. This subroutine executes a single minimization step,
which is an update step on the parameters as defined in (\ref{eq:levmarupdalg}).
\begin{figure}
\begin{center}
\fbox{\includegraphics[width=12cm]{fig_minflocha}}
\caption{\label{fig:minflocha} Flow chart of the minimization algorithm
(from Ref.~\protect\refcite{bib:2011ApJ73314Mankuzhiyil}). A careful choice of $\Delta \chi ^{2} _{\mathrm{NI}}$
and of the exit value for $c _{\mathrm{NI}}$ is necessary to obtain consistent results avoiding, at the same time,
unnecessary computational cost. The optimal value for the exit condition on $c _{\mathrm{NI}}$ is used in the
flow-chart. A value of $\Delta \chi ^{2} _{\mathrm{NI}}$ as small as $10 ^{-4}$ was required, especially for sets containing
a larger number of data points.}
\end{center}
\end{figure}
Implementation details and additional functions called by \texttt{mrqmin} are described
in Ref.~\refcite{bib:1992nricPress} and will not be discussed here, except for the case of
the code that is used to evaluate the SED, which we will analyze in more detail in a moment.
The single minimization step performed by \texttt{mrqmin} (or by any other implementation
of the Levenberg-Marquardt method) has to be iterated until convergence to the sought minimum
is considered satisfactory. The flow-chart of the code that we used is reported in
Fig.~\ref{fig:minflocha}. After fixing some trial values for the parameters of the model ($P _{0}$
point in parameter space) and initializing to zero an integer variable, $c _{\mathrm{NI}}$ that will
count how many consecutive individual minimization steps have been performed with a negligible
improvement in decreasing the $\chi ^{2}$, we calculate $\chi ^{2} _{0}$, i.e. the value of $\chi ^{2}$
at the initial point $P _{0}$ in parameter space. Minimization iterations then start: at the beginning
we fix the parameter corresponding to $\lambda$ in (\ref{eq:levmarupdalg}) so that we are performing
a step in which the steepest descent contribution to the algorithm is dominant. The new value of
$\chi ^{2}$ at the new point in parameter space $P$ determined by the algorithm, $\chi ^{2} (P)$,
is calculated. If the $\chi ^{2}$ did decrease in the step, we check if it decreased by a sizeable
amount. If not we increment by one the $c _{\mathrm{NI}}$ counter, before increasing the weight
of the inverse Hessian method and moving to the next iteration. If, instead, the $\chi ^{2}$ increased
at $P$, we increase the importance of the steepest descent method and reset the counter $c _{\mathrm{NI}}$
to zero. The exit condition from the loop is satisfied when $c _{\mathrm{NI}} = 4$. Another crucial
parameter that has to be fixed is the threshold, below which we consider negligible the decrease in
$\chi ^{2}$ (this is called $\Delta \chi ^{2} _{\mathrm{NI}}$ in the flow-chart). It is usually
considered unnecessary to determine the minimum of the $\chi ^{2}$ to machine accuracy or to the
roundoff limit, as the result provides only a statistical estimation of the parameters
anyway. However, in our experience, it is important to pay particular attention in setting the
exit condition value of $c _{\mathrm{NI}}$ and the negligible $\chi ^{2}$ increment, $\Delta \chi ^{2} _{\mathrm{NI}}$.
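The iteration logic described above can be sketched as follows. This is not the actual fitting code: the single Levenberg-Marquardt step is replaced by a damped-Newton step on a toy one-parameter $\chi ^{2}$, and all names are ours; the point is the bookkeeping with $c _{\mathrm{NI}}$ and $\Delta \chi ^{2} _{\mathrm{NI}}$.

```python
def chi2(p):
    # toy chi-square surface with a minimum at p = 3 (stand-in for the SSC fit)
    return (p - 3.0) ** 2

def lm_step(p, lam):
    # one damped step for the 1-d toy problem: grad / (hess * (1 + lam));
    # large lam ~ steepest descent, small lam ~ inverse-Hessian (Newton) step
    grad = 2.0 * (p - 3.0)
    hess = 2.0
    return p - grad / (hess * (1.0 + lam))

def minimise(p0, dchi2_ni=1e-4, c_exit=4, lam0=1e-3, max_iter=1000):
    p, lam = p0, lam0
    c_ni = 0                      # consecutive negligible-improvement counter
    chi2_old = chi2(p)
    for _ in range(max_iter):
        p_new = lm_step(p, lam)
        chi2_new = chi2(p_new)
        if chi2_new <= chi2_old:  # step accepted
            if chi2_old - chi2_new < dchi2_ni:
                c_ni += 1         # negligible improvement
            else:
                c_ni = 0          # sizeable improvement: restart the count
            lam /= 10.0           # more weight to the inverse-Hessian method
            p, chi2_old = p_new, chi2_new
        else:                     # step rejected
            lam *= 10.0           # more weight to steepest descent
            c_ni = 0
        if c_ni >= c_exit:
            break
    return p, chi2_old

p_min, chi2_min = minimise(p0=10.0)
```

With the values discussed in the text ($c _{\mathrm{NI}} = 4$ and $\Delta \chi ^{2} _{\mathrm{NI}} = 10 ^{-4}$) the loop exits only after four consecutive negligible improvements, which protects against premature convergence.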
\begin{figure}
\begin{center}
\fbox{\includegraphics[width=12cm]{fig_minste}}
\caption{\label{fig:minste}Snapshots of different steps in one run of the minimization code. The
step numbering is conventional and there is no rigorous correspondence between algorithm iteration
number and step number in the figure above, except that a higher step number corresponds to an iteration
that follows the iterations associated to lower-numbered steps (from Ref.~\protect\refcite{bib:2011ApJ73314Mankuzhiyil}).}
\end{center}
\end{figure}
Our experience has shown that the best results were obtained with\footnote{The values of the two
parameters may be correlated, so other choices may work as well.} $c _{\mathrm{NI}} = 4$ and
$\Delta \chi ^{2} _{\mathrm{NI}} = 10 ^{-4}$. As a further test of the minimization algorithm, we
implemented an additional step: after obtaining the minimum we perform a random change of the parameters
and repeat the minimization from this new random point in parameter space. This could help to identify
cases in which the minimization remained stuck in a local minimum different from the absolute one, but we never
faced this situation, i.e. the additional minimization step always converged to the result of the first
one. When we chose smaller/larger values for $c _{\mathrm{NI}}/\Delta \chi ^{2} _{\mathrm{NI}}$ (for instance,
$2$ and $0.01$) the minimization occasionally provided a result which, on closer inspection, turned out to be
not a good approximation to the $\chi ^{2}$ minimum\footnote{Meaning that repeating the minimization with
the optimal values for the parameters resulted in an appreciably different and lower $\chi ^{2}$ (and in
more consistent values for the obtained uncertainties on the parameters, a point which we will discuss in more
detail later on).}. This tendency was much more marked in the presence of datasets having a larger or much larger
number of data points, for which, in our experience, convergence is usually slower. A tentative visual
representation of some steps in the minimization process is given in Fig.~\ref{fig:minste} (for details, please see
the related caption).
\begin{figure}
\begin{center}
\fbox{\includegraphics[width=12cm]{fig_fitsed}}
\caption{\label{fig:fitsed} Plot of the SEDs obtained for each state with the $\chi ^{2}$ minimization
procedure described in the text (from Ref.~\protect\refcite{bib:2011ApJ73314Mankuzhiyil}). Results for
the parameters and associated uncertainties, together with the values of the reduced $\chi ^{2}$ in
each case, are reported in Tables~{\protect\ref{tab:results01}} and~{\protect\ref{tab:results02}}.}
\end{center}
\end{figure}
From the point of view of the computation time the Levenberg-Marquardt algorithm is quite efficient\footnote{As an
example, the longest minimizations required about $10 ^{2}$ iterations that on a less than average PC (with a
Intel\textregistered{} Core\texttrademark{}2 Duo CPU (E7500, $2.93$GHz) running a x$86$\_$64$ GNU/Linux with $2$GB
Ram) took about 20 minutes to run.} and most of the time was required by the derivation of the SEDs corresponding
to the current values of the parameters at each iteration. In our analysis, following a common assumption in
the literature, we have fixed $\gamma _{\mathrm{min}} = 1$; the redshift of the source is known, so high energy
data points can be corrected to take into account interaction with the extragalactic background light. We are,
then, left with eight parameters to be determined by the fit. According to (\ref{eq:levmarupdalg}) at
each step we need:
\begin{description}
\item[$\mathbf{y}$]$\!\!$: this is just the vector of the observed flux values, which is clearly known;
\item[$\mathbf{\Sigma}$]$\!\!$: this is the vector of the flux uncertainties, also known;
\item[$\mathbf{f}$]$\!\!$: this is the vector of the values of the model SED evaluated at the observed energy/frequency;
\item[$\mathbf{J}$]$\!\!$: this is a matrix which is known once the derivatives of the model SED are known;
\item[$\mathbf{H}$]$\!\!$: this matrix can be calculated knowing $\mathbf{\Sigma}$ and $\mathbf{J}$.
\end{description}
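When the model is only available as a numerical sample, $\mathbf{J}$ and $\mathbf{H}$ can be assembled as in the following sketch (central differences on a toy linear model that stands in for the sampled SED; all names here are illustrative, not the actual fit code):

```python
def numerical_jacobian(model, params, x, h=1e-6):
    """Central-difference Jacobian J[i][j] = d model(x_i; params) / d params_j.
    Two model evaluations per parameter, plus the baseline evaluation of f
    needed for the residuals: the '2 x (number of parameters) + 1' SED
    computations mentioned in the text."""
    n, k = len(x), len(params)
    J = [[0.0] * k for _ in range(n)]
    for j in range(k):
        up = list(params); up[j] += h
        dn = list(params); dn[j] -= h
        f_up, f_dn = model(up, x), model(dn, x)
        for i in range(n):
            J[i][j] = (f_up[i] - f_dn[i]) / (2.0 * h)
    return J

def approx_hessian(J, sigma):
    """Gauss-Newton form H[a][b] = sum_i J[i][a] J[i][b] / sigma_i^2,
    i.e. H built from Sigma and J only."""
    n, k = len(J), len(J[0])
    return [[sum(J[i][a] * J[i][b] / sigma[i] ** 2 for i in range(n))
             for b in range(k)] for a in range(k)]

# toy linear model standing in for the numerically sampled SED
def model(params, x):
    a, b = params
    return [a * xi + b for xi in x]

x = [0.0, 1.0, 2.0]
sigma = [1.0, 1.0, 1.0]
J = numerical_jacobian(model, [2.0, 1.0], x)
H = approx_hessian(J, sigma)
```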
It is clear that $\mathbf{y}$, $\mathbf{\Sigma}$ and $\mathbf{H}$ do not represent a problem. $\mathbf{f}$ and
$\mathbf{J}$ in standard cases, where the model function has a known analytical expression, also do not. In our
case (and in several others), however, the model SED is not analytically known and what we have is just a
discretized sample resulting from a numerical implementation of the SSC model; it is this last numerical
implementation that can be quite computationally intensive. This is especially true since the estimation of
$\mathbf{J}$ requires the SEDs partial derivatives with respect to the parameters. Then, for each minimization
iteration, we have in principle to evaluate a number of SEDs equal to twice the number of varying parameters
plus one. In our implementation we tried to reduce the load caused by this task by developing a bookkeeping
mechanism that caches some of the numerically sampled SEDs used in the previous steps: this is especially
useful when the iteration results in a $\chi ^{2}$ increase, since in this case, when returning to the previous
point in parameter space, the required SEDs are already available. \emph{Caching} reduces the number of times
the $\chi ^{2}$ minimization executable needs to stop and wait for the completion of the external module that, independently,
executes the SEDs evaluation.
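Such a bookkeeping mechanism can be sketched as below. This is illustrative only: the real implementation interacts with an external SSC executable, which is replaced here by a cheap stand-in function.

```python
class SEDCache:
    """Memoise expensive SED evaluations by parameter tuple, so that when a
    step is rejected (the chi2 increased) and the algorithm returns to the
    previous point in parameter space, the required SEDs are already there."""

    def __init__(self, evaluate, max_entries=64):
        self._evaluate = evaluate    # e.g. a call into the external SSC module
        self._cache = {}
        self._order = []             # insertion order, for simple eviction
        self._max = max_entries
        self.misses = 0

    def sed(self, params):
        key = tuple(round(p, 12) for p in params)
        if key not in self._cache:
            self.misses += 1
            self._cache[key] = self._evaluate(params)
            self._order.append(key)
            if len(self._order) > self._max:
                self._cache.pop(self._order.pop(0))   # drop the oldest entry
        return self._cache[key]

calls = []
def fake_ssc(params):                 # stand-in for the sampled SED
    calls.append(list(params))
    return [2.0 * p for p in params]

cache = SEDCache(fake_ssc)
cache.sed([1.0, 2.0])                 # computed
cache.sed([1.5, 2.0])                 # trial step: computed
cache.sed([1.0, 2.0])                 # step rejected, back here: cached, no call
```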
\begin{table}
\tbl{Best-fit single-zone SSC model parameters for the nine datasets of Mrk~421.
States are labelled according to the convention in Fig.~{\protect\ref{fig:fitsed}}
(from Ref.~\protect\refcite{bib:2011ApJ73314Mankuzhiyil}). Here parameters describing
the emitting blob together with the reduced $\chi ^{2}$ are reported. Results for the remaining
parameters can be found in Table~{\protect\ref{tab:results02}}.}
{\begin{tabular}{ l c c c | c}
\noalign{\smallskip}
\toprule
\noalign{\smallskip}
Source & $B$ & $R$ & $\delta$ & $\tilde{\chi} ^{2}$ \\
&[gauss]&[cm]& & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
State 1 & $(9 \pm 3) \times 10 ^{-1}$ & $(9 \pm 4) \times 10 ^{14}$ & $(2.0 \pm 0.5) \times 10 ^{1}$ & $0.84$\\
State 2 & $(8 \pm 6) \times 10 ^{-1}$ & $(8 \pm 4) \times 10 ^{14}$ & $(2.7 \pm 1.1) \times 10 ^{1}$ & $1.86$\\
State 3 & $(6 \pm 6) \times 10 ^{-2}$ & $(2.0 \pm 1.5) \times 10 ^{15}$ & $(1.0 \pm 0.5) \times 10 ^{2}$ & $0.91$\\
State 4 & $(1.21 \pm 0.16) \times 10 ^{-1}$ & $(1.1 \pm 1.3) \times 10 ^{15}$ & $(8 \pm 6) \times 10 ^{1}$ & $0.89$\\
State 5 & $(1.9 \pm 1.3) \times 10 ^{-1}$ & $(10 \pm 4) \times 10 ^{14}$ & $(7 \pm 5) \times 10 ^{1}$ & $0.67$\\
State 6 & $1.0 \pm 0.7$ & $(6 \pm 3) \times 10 ^{14}$ & $(2.8 \pm 1.1) \times 10 ^{1}$ & $1.39$\\
State 7 & $(4 \pm 3) \times 10 ^{-2}$ & $(2 \pm 5) \times 10 ^{15}$ & $(8 \pm 7) \times 10 ^{1}$ & $1.61$\\
State 8 & $(6 \pm 3) \times 10 ^{-2}$ & $(2.0 \pm 1.8) \times 10 ^{15}$ & $(1.1 \pm 0.4) \times 10 ^{2}$ & $0.60$\\
State 9 & $(4 \pm 3) \times 10 ^{-2}$ & $(2 \pm 4) \times 10 ^{15}$ & $(1.2 \pm 1.0) \times 10 ^{2}$ & $0.85$\\
\noalign{\smallskip}
\botrule
\end{tabular}}
\label{tab:results01}
\end{table}
\begin{table}
\tbl{Best-fit single-zone SSC model parameters for the nine datasets of Mrk~421.
States are labelled according to the convention in Fig.~{\protect\ref{fig:fitsed}}
(from Ref.~\protect\refcite{bib:2011ApJ73314Mankuzhiyil}). This table lists the
parameters that describe the energy distribution of the electron plasma. The
parameters describing the emitting blob, together with the reduced $\chi ^{2}$
obtained in the fit, can be found in Table~{\protect\ref{tab:results01}}.}
{\begin{tabular}{ l c c c c c }
\noalign{\smallskip}
\toprule
\noalign{\smallskip}
Source & $n_{\rm e}$ & $\gamma_{\rm br}$ & $\gamma_{\rm max}$ & $n_1$ & $n_2$ \\
& [cm$^{-3}$] & & & & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
State 1 & $(1.3 \pm 1.5) \times 10 ^{3}$ & $(2.6 \pm 0.9) \times 10 ^{4}$ & $(1.05 \pm 0.18) \times 10 ^{7}$ & $1.49 \pm 0.19$ & $3.77 \pm 0.11$\\
State 2 & $(1 \pm 3) \times 10 ^{3}$ & $(2.4 \pm 0.9) \times 10 ^{4}$ & $(4.1 \pm 1.1) \times 10 ^{6}$ & $1.5 \pm 0.3$ & $3.62 \pm 0.14$\\
State 3 & $(5 \pm 5) \times 10 ^{3}$ & $(7 \pm 3) \times 10 ^{4}$ & $(7 \pm 5) \times 10 ^{7}$ & $2.05 \pm 0.10$ & $4.8 \pm 0.3$\\
State 4 & $(2 \pm 5) \times 10 ^{3}$ & $(4 \pm 2) \times 10 ^{4}$ & $(8.2 \pm 1.7) \times 10 ^{6}$ & $1.8 \pm 0.3$ & $4.11 \pm 0.13$\\
State 5 & $(2 \pm 5) \times 10 ^{3}$ & $(4.5 \pm 1.9) \times 10 ^{4}$ & $(2.4 \pm 0.3) \times 10 ^{7}$ & $1.7 \pm 0.3$ & $4.3 \pm 0.18$\\
State 6 & $(4 \pm 4) \times 10 ^{3}$ & $(1.9 \pm 0.6) \times 10 ^{4}$ & $(1.8 \pm 0.4) \times 10 ^{6}$ & $1.54 \pm 0.11$ & $4.37 \pm 0.09$\\
State 7 & $(1 \pm 7) \times 10 ^{3}$ & $(8 \pm 6) \times 10 ^{4}$ & $(7 \pm 2) \times 10 ^{6}$ & $1.7 \pm 0.4$ & $4.23 \pm 0.20$\\
State 8 & $(4 \pm 9) \times 10 ^{1}$ & $(5 \pm 2) \times 10 ^{4}$ & $(1.6 \pm 0.4) \times 10 ^{7}$ & $1.5 \pm 0.2$ & $4.22 \pm 0.14$\\
State 9 & $(1 \pm 7) \times 10 ^{2}$ & $(8 \pm 9) \times 10 ^{4}$ & $(1.1 \pm 0.4) \times 10 ^{7}$ & $1.6 \pm 0.5$ & $3.9 \pm 0.2$\\
\noalign{\smallskip}
\botrule
\end{tabular}}
\label{tab:results02}
\end{table}
A first study obtained applying the $\chi ^{2}$ minimization algorithm on the nine datasets described in
Subsec.~\ref{sec:sou} results in the SEDs plotted in Fig.~\ref{fig:fitsed} and in the values of the
parameters and related uncertainties listed in Tables~\ref{tab:results01} and~\ref{tab:results02}. We
are not interested here in a detailed report of the physical conclusions that can be drawn from these
results, for which we refer the reader to Ref.~\refcite{bib:2011ApJ73314Mankuzhiyil}. We will, instead,
consider some elements relevant to the statistical analysis.
First, the reduced $\chi ^{2}$ values are reasonable: a couple of cases (states 5 and 8) might require
some additional check, as the values are slightly low. In most cases the uncertainties allow us to constrain
the parameters within physically meaningful and expected ranges. There are some exceptions, in which the uncertainties
tend to be larger than usually acceptable, like in the case of the magnetic field of state 3, or
the blob radius of state 5 and, for most states, the blob electron density. In the cases in which the
uncertainties appear to be too high, it is important to remember that these uncertainties are
obtained by considering a quadratic approximation to the $\chi ^{2}$ near the estimated minimum.
This is a good approximation only when the uncertainties are relatively small, because we cannot
expect the $\chi ^{2}$ surface to behave as the quadratic approximation far from the minimum.
It might also happen that, because of the nature of the problem/model, the $\chi ^{2}$ has a much
flatter minimum in the direction of some of the parameters. In all these cases the quadratic
approximation might overestimate the uncertainties and it would be preferable to use the criterion
that defines the uncertainty as the absolute difference between the value of each parameter
at the estimated minimum and the value of
the parameter at which the $\chi ^{2}$ has increased by one. This definition of the uncertainty in the
parameters gives, in general, asymmetric error bars, which can be an additional desirable feature in
situations in which the uncertainties make parameters that should be positive compatible with zero
or negative values. Apart from the above considerations, the fitted parameters appear compatible
with what is expected for this source. It is however important to consider in more detail the statistical
significance of these results by applying some \emph{goodness of fit} test, which we will discuss in the
following section.
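The $\Delta \chi ^{2} = 1$ criterion mentioned above can be sketched as follows. This is a one-dimensional illustration on an artificially asymmetric toy $\chi ^{2}$ surface; in the full problem one would, at each scan point, re-minimise the $\chi ^{2}$ over the remaining parameters.

```python
def delta_chi2_errors(chi2, p_min, step=1e-3, max_steps=100000):
    """Asymmetric 1-sigma errors from the Delta chi2 = 1 criterion: walk away
    from the minimum in each direction until chi2 has increased by one."""
    chi2_min = chi2(p_min)
    errs = []
    for sign in (+1.0, -1.0):
        p = p_min
        for _ in range(max_steps):
            p += sign * step
            if chi2(p) >= chi2_min + 1.0:
                break
        errs.append(abs(p - p_min))
    return tuple(errs)              # (err_plus, err_minus)

# toy chi2 surface, deliberately steeper on the left of the minimum at p = 2
def chi2(p):
    d = p - 2.0
    return d * d if d >= 0 else 4.0 * d * d

err_plus, err_minus = delta_chi2_errors(chi2, 2.0)
```

On this surface the criterion correctly returns a larger upward than downward error, the asymmetric behaviour discussed in the text.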
\subsection{\label{sec:sta}Statistical significance}
After obtaining the fit parameters, we can now proceed with the last step, i.e. discuss their statistical
significance. To this end we will consider, for each of the nine fitted SEDs, the residuals of the fit,
i.e. the differences between the observed points and the value of the fitted SED at the frequency of
the observed points. We then apply the KS test to verify if the residuals are normally
distributed (the $\mathscr{N} _{0} ''$ null-hypothesis of page~\pageref{text:nulhyp002}).
Code for the calculation of the relevant statistics $D _{N}$ (cf. Eq.~(\ref{eq:kolsmista})) is available,
for instance in Ref.~\refcite{bib:1992nricPress}. In this study, however, we used the functions that are
included in Mathematica\textregistered{} since version $8.0$: the reason for this is the fact that these
functions already implement a method for the Monte Carlo approach that we briefly mentioned at the very
end of Sec.~\ref{sec:stasig} on page~\pageref{text:MonCar}. A first application of the KS test at the
$5\%$ significance level shows that the residuals are \emph{not} normally distributed, i.e. the KS
test fails. Following this result, we applied the KS test again, this time at the $10\%$ significance level:
again the test failed in all cases, showing also at this level that the residuals are not normally distributed.
Failure of the KS test shows that the statistical significance of the fits should be carefully re-considered. In this
case it might be actually possible to explain the reason for this failure\footnote{To
have a clearer understanding of the situation, in each case we then decided to divide the data into two groups:
data at low energy (i.e. within the synchrotron region of the spectrum) and data at very high energy
(i.e. within the Compton region of the spectrum). For each dataset we then calculated the residuals for the
low-energy subset of the data and for the high energy subset of the data. We then applied to all these subsets
the KS test (we called this procedure \emph{piecewise KS test}, as for each dataset the KS test for ${\mathscr{N}} _{0} ''$
is applied separately to low- and high-energy data). Surprisingly enough, in all these cases the KS test confirmed
normality of the residuals. Further discussion of this point can be found elsewhere\cite{bib:2011ApJ73314Mankuzhiyil}.
For our present purpose this further analysis is not necessary and we will ignore it here.}, but we will not proceed
here in this direction. We will instead draw a bold conclusion and emphasize the fact that, in absence of other
explanations, the failure of the KS test could already bring us to the conclusion that the model might require
improvements: existing observations, even in presence of a successful fit with reasonable values for the parameters
and for the reduced $\chi ^{2}$, could be enough to show the inadequacy of the model. We could have never reached
this result, had we not proceeded through a rigorous statistical approach and we recognize, in this way, the extreme
importance of an in depth statistical analysis to put to the best use our models and the related observations.
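The residual-normality test used in this subsection can be sketched in Python as follows. This is a self-contained toy: the residuals are assumed already standardised and the "observed" residuals are synthetic; in the real analysis one would feed in the fit residuals and, when the normal parameters are estimated from the data, re-fit them inside the simulation loop.

```python
import math
import random

def ks_statistic(sample, cdf):
    """Two-sided KS statistic D_N between a sample and a reference CDF."""
    xs = sorted(sample)
    n = len(xs)
    return max(max(cdf(x) - i / n, (i + 1) / n - cdf(x))
               for i, x in enumerate(xs))

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ks_normality_pvalue(residuals, n_sim=2000, seed=1):
    """Monte Carlo p-value for H0: residuals ~ N(0, 1), obtained by counting
    how often same-length simulated normal samples beat the observed D_N."""
    d_obs = ks_statistic(residuals, normal_cdf)
    rng = random.Random(seed)
    n = len(residuals)
    exceed = sum(
        ks_statistic([rng.gauss(0.0, 1.0) for _ in range(n)], normal_cdf) >= d_obs
        for _ in range(n_sim))
    return exceed / n_sim

rng = random.Random(42)
good = [rng.gauss(0.0, 1.0) for _ in range(40)]   # plausibly normal residuals
bad = [rng.gauss(1.5, 1.0) for _ in range(40)]    # clearly offset residuals
p_good = ks_normality_pvalue(good)
p_bad = ks_normality_pvalue(bad)
```

The same function, applied separately to the low- and high-energy residual subsets, implements the piecewise test mentioned in the footnote above.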
\section{Conclusion}
In this contribution we have discussed all the steps that are required to perform a rigorous statistical analysis
on simultaneous multi-wavelength datasets of blazars. Although it is challenging to obtain good datasets, because
observations are often made difficult by the necessity to have several instruments simultaneously
available in the absence of observational constraints, we find it really exciting to think that such observations (which
probe several orders of magnitude in the source spectrum with different instruments and techniques) may already be
at a good enough level to allow us to discriminate between different emission models. The importance of a statistical
approach is, indeed, two-sided. On one side it can force us to improve our models, to make them compatible with the data. On
the other, it can help us to understand how to plan future instruments and observations and efficiently improve, where
it is most needed, the quality and amount of the data. We have finally shown, in the case of Mrk421, how this
analysis can be applied to a specific problem and how existing data may already suggest the need for refinements
of the emission model for this source.
\section*{Acknowledgments}
The author would like to acknowledge partial support from ICRANet.
\section{Preliminaries}\label{SEC:prelims}
\subsection{Outline}\label{SEC:outline}
The goal of this work is to study one-dimensional tiling spaces arising from non-primitive substitution rules, in terms of the topology, dynamics, and cohomology.
This study naturally divides into two cases: the case where the tiling space is minimal, and the case where it is non-minimal.
The minimal case is treated in Section \ref{SEC:minimal}, where we identify a property of a substitution, which we call \emph{tameness}, in the presence of which most of the possible pathological behaviours of non-minimal substitutions cannot occur. In particular, all aperiodic substitutions will be seen to be tame. By aperiodic, we mean that the subshift of the substitution has no periodic orbits.
The first main result is Theorem \ref{THM:wild-characterisation}, which gives a characterisation of tameness.
This theorem is used to prove the following result.
\newtheorem*{THM:main}{Theorem \ref{THM:main}}
\begin{THM:main}
Let $\sub$ be a minimal substitution with non-empty minimal subshift $X_\sub$.
There exists an alphabet $\Ze$ and a primitive substitution $\theta$ on $\Ze$ such that $X_\theta$ is topologically conjugate to $X_\sub$.
\end{THM:main}
This is similar to, but slightly stronger than, a result from the section on Open problems and perspectives (Section 6.2) of \cite{D:main-result}.
Examples and applications based on this section are then given in Section \ref{SEC:examples}.
The non-minimal case is treated in Sections \ref{SEC:non-minimal}, \ref{SEC:CISs} and \ref{SEC:non-min-examples}.
Non-minimal substitution tiling spaces fall outside the scope of much of the existing literature, which typically deals only with primitive systems. Nevertheless a theory of non-minimal substitutions is important, even to the study of primitive substitutions, because there are invariants of primitive substitution tiling spaces that are themselves non-minimal substitution tiling spaces. These invariants originally appeared in \cite{BD:proximal}, and are discussed in more detail in Examples \ref{EX:proximal} and \ref{EX:asymptotic}.
The main result of Section \ref{SEC:non-minimal} is the following theorem, which is an extension to the non-minimal setting of results obtained by Anderson and Putnam for primitive substitutions \cite{AP}, although their results apply in arbitrary dimension.
\theoremstyle{theorem}
\newtheorem*{THM:main2}{Theorem \ref{THM:homeo}}
\begin{THM:main2}
Let $\sub$ be a tame recognisable substitution. There exists a complex $\Gamma$ and a map $f\colon \Gamma \to \Gamma$ such that there is a homeomorphism $h \colon \TS_\sub \to \varprojlim(\Gamma,f)$.
\end{THM:main2}
Section \ref{SEC:CISs} is devoted to building a structure theorem of non-minimal tiling spaces in terms of their closed shift-invariant subspaces. In particular, we identify a correspondence between such subspaces and subcomplexes of the complex $\Gamma$ above. The subspaces are found to be homeomorphic to an inverse limit of self-maps acting on the corresponding subcomplex of $\Gamma$.
Examples are given in Section \ref{SEC:non-min-examples} to justify the level of care that needs to be taken in building the machinery, and to give an exposition of how the machinery is put to use when performing calculations.
\subsection{Subshifts and Tiling Spaces}\label{SUBSEC:subshifts}
Let $\Al$ be a finite alphabet and for natural numbers $n$, let $\Al^n$ be the set of words of length $n$ using symbols from $\Al$. We denote the length of the word $u = u_1\ldots u_l$ by $|u| = l$. By convention, $\Al^0 = \{\epsilon\}$ where $\epsilon$ is the empty word and $|\epsilon|=0$. Denote the union of the positive-length words by $\Al^+ = \bigcup_{n \geq 1} \Al^n$. If the empty word $\epsilon$ is also included, then we denote the union $\Al^+ \cup \{\epsilon\}$ by $\Al^\ast$. This set $\Al^*$ forms a free monoid under concatenation of words.
A \emph{substitution} $\sub$ on $\Al$ is a function $\sub \colon \Al \to \Al^+$. We can extend the substitution $\sub$ in a natural way to a morphism $\sub \colon \Al^* \to \Al^*$ given, for a word $u = u_1\ldots u_n \in \Al^n$, by setting $\sub(u) = \sub(u_1) \ldots \sub(u_n)$. The symbol $w_i$ denotes the label assigned to the $i$th component of the bi-infinite sequence $w \in \Al^\mathbb{Z}$. We may further extend the definition of a substitution to bi-infinite sequences $\sub \colon \Al^\mathbb{Z} \to \Al^\mathbb{Z}$. For a bi-infinite sequence $w \in \Al^{\mathbb{Z}}$, with $w = \ldots w_{-2} w_{-1} \cdot w_0 w_1 w_2 \ldots$ we set $$\sub(w) = \ldots \sub(w_{-2}) \sub(w_{-1}) \cdot \sub(w_0) \sub(w_1) \sub(w_2) \ldots$$ with the dot $\cdot$ representing the separator of the $(-1)$st and $0$th component of the respective sequences.
For a substitution $\sub \colon \Al \to \Al^+$ on an alphabet $\Al = \{a^1, a^2, \ldots, a^l\}$, there is an associated substitution matrix $M_{\sub}$ of dimension $l\times l$ given by setting $m_{ij}$, the $i,j$ entry of $M_{\sub}$, to be the number of times that the letter $a^i$ appears in the word $\sub(a^j)$.
A substitution $\sub$ is called \emph{primitive} if there exists a positive natural number $p$ such that the matrix $M_{\sub}^p$ has strictly positive entries. Equivalently, if there exists a positive natural number $p$ such that for all $a,a'\in\Al$ the letter $a'$ appears in the word $\sub^p(a)$.
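These definitions are straightforward to implement. The following Python sketch (names are ours) computes the substitution matrix $M_\sub$ and tests primitivity by tracking, for each letter $a$, the set of letters occurring in $\sub^k(a)$.

```python
def substitution_matrix(sub, alphabet):
    """M[i][j] = number of occurrences of alphabet[i] in sub(alphabet[j])."""
    return [[sub[b].count(a) for b in alphabet] for a in alphabet]

def is_primitive(sub, alphabet):
    """Primitivity check: track, for each letter a, the set of letters
    occurring in sub^k(a). Wielandt's bound ((l-1)^2 + 1 for an l-letter
    alphabet) guarantees that checking l^2 + 1 powers suffices."""
    full = set(alphabet)
    reach = {a: set(sub[a]) for a in alphabet}          # letters of sub^1(a)
    for _ in range(len(alphabet) ** 2 + 1):
        if all(r == full for r in reach.values()):
            return True
        # letters of sub^(k+1)(a) = union of letters of sub(c), c in sub^k(a)
        reach = {a: set().union(*(set(sub[c]) for c in r))
                 for a, r in reach.items()}
    return False

fib = {'a': 'ab', 'b': 'a'}      # the Fibonacci substitution: primitive
chain = {'a': 'ab', 'b': 'b'}    # not primitive: b never reaches a
M = substitution_matrix(fib, ['a', 'b'])
```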
For words $u, v \in \Al^*$, we write $u \subset v$ to mean $u$ is a subword of $v$, and $u \subsetneq v$ to mean $u$ is a proper subword of $v$. For a bi-infinite word $w \in \Al^\mathbb{Z}$, we similarly write $u \subset w$ to mean $u$ is a subword of $w$.
Let $\sub \colon \Al \to \Al^+$ be a substitution. We say a word $u \in \Al^\ast$ is \emph{admitted} by the substitution $\sub$ if there exists a letter $a\in\Al$ and a natural number $k\geq 0$ such that $u \subset \sub^k(a)$ and denote by $\mathcal{L}^n \subset \Al^n$ the set of all words of length $n$ which are admitted by $\sub$. Our convention is that the empty word $\epsilon$ is admitted by all substitutions. We form the \emph{language} of $\sub$ by taking the set of all admitted words $\mathcal{L} = \bigcup_{n\geq 0} \mathcal{L}^n$.
We say a bi-infinite sequence $w \in \Al^\mathbb{Z}$ is \emph{admitted} by $\sub$ if every subword of $w$ is admitted by $\sub$ and denote by $X_\sub$ the set of all bi-infinite sequences admitted by $\sub$. The set $X_\sub$ has a natural (metric) topology inherited from the product topology on $\Al^\mathbb{Z}$ and a natural shift map $\sigma \colon X_\sub \to X_\sub$ given by $\sigma(w)_i= w_{i+1}$. We call the pair $(X_\sub, \sigma)$ the \emph{subshift} associated to $\sub$ and we will often abbreviate the pair to just $X_{\sub}$ when the context is clear.
We remark that it is not necessarily the case that every word in the language of a substitution appears as the subword of a sequence in the subshift---for example, $ab$ is in the language of the substitution $\sub \colon a \mapsto ab,\: b \mapsto b$, but the subshift for this substitution consists of the single periodic sequence $\ldots bbb \ldots$, which does not contain $ab$ as a subword.
We say a word $u$ is \emph{legal} if it appears as a subword of a sequence of the subshift for the substitution $\sub$. Then the set $\hat{\mathcal{L}}_\sub$ of legal words for $\sub$ is a subset of the language $\mathcal{L}_\sub$. If $\hat{\mathcal{L}}_\sub = \mathcal{L}_\sub$ then we say that $\sub$ is an \emph{admissible} substitution---this definition is borrowed from \cite{CS:non-primitive-invariant-measures} where they give the equivalent definition that $\sub$ is admissible if every $a \in \Al$ is a legal letter. Every primitive substitution is admissible.
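The gap between admitted and legal words in the example above can be seen mechanically. The following sketch (the depth cut-off is ours; it is sufficient for this example) enumerates admitted words of a given length:

```python
def admitted_words(sub, alphabet, length, depth=8):
    # Collect all subwords of the given length of sub^k(a)
    # for every letter a and every 0 <= k <= depth.
    found = set()
    for a in alphabet:
        w = a
        for _ in range(depth + 1):
            found.update(w[i:i + length] for i in range(len(w) - length + 1))
            w = "".join(sub[c] for c in w)
    return found

sub = {"a": "ab", "b": "b"}
```

For this substitution the admitted two-letter words are exactly $ab$ and $bb$, yet only $bb$ is legal, since the subshift is the single sequence $\ldots bbb \ldots$.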
Some of the results of this work would be simplified if we chose to focus only on admissible substitutions, however this will not be an assumption that we make.
Let $L$ be a non-empty subset of the subshift $X_\sub$.
If, for every point $w$ in $L$, it is true that $L = \overline{\{\sigma^i(w) \mid i \in \mathbb{Z}\}}$, the orbit closure of $w$, then $L$ is called a \emph{minimal component} of $X_\sub$.
If the subshift $X_\sub$ is a minimal component of itself, then $\sub$ is called a minimal substitution and $X_\sub$ is called a \emph{minimal} subshift, otherwise $\sub$ and $X_\sub$ are called \emph{non-minimal}.
For a primitive substitution, any admitted word is also legal.
If $u$ and $v$ are words, let us use the notation $|v|_u$ to denote the number of occurrences of $u$ as a subword of $v$.
A subshift is called \emph{linearly recurrent} if there exists a natural number $C \in \mathbb{N}$ such that, for all legal words $u$ and $v$, if $|v| > C|u|$, then $|v|_u \geq 1$.
One fact that will play an important role in this section is the following, which was proved in \cite{DL:linearly-recurrent}.
\begin{thm}[Damanik--Lenz]\label{THM:linearly-recurrent}
Let $\sub$ be a substitution on $\Al$.
The subshift $X_\sub$ is minimal if and only if it is linearly recurrent.
\end{thm}
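As an empirical illustration of Theorem \ref{THM:linearly-recurrent}, one can test the linear-recurrence inequality on a long prefix of the fixed point of the (primitive, hence minimal) Fibonacci substitution. The constant $C = 12$ below is a safe guess of ours, not a claim from the text:

```python
def fib_word(iterations):
    # A long finite approximation of the Fibonacci fixed point.
    w = "a"
    for _ in range(iterations):
        w = "".join({"a": "ab", "b": "a"}[c] for c in w)
    return w

def max_gap(word, factor):
    # Largest distance between starting positions of consecutive occurrences.
    starts = [i for i in range(len(word) - len(factor) + 1)
              if word.startswith(factor, i)]
    return max(j - i for i, j in zip(starts, starts[1:]))

W = fib_word(18)  # length 6765
```

Every factor $u$ of the lengths tested then recurs within every window of length $12|u|$, consistent with linear recurrence; the factor counts $m+1$ reflect the fact that the Fibonacci word is Sturmian.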
We say $\sub$ is a \emph{periodic} substitution if $X_\sub$ is finite, and $\sub$ is \emph{non-periodic} otherwise. We say $\sub$ is \emph{aperiodic} if $X_\sub$ contains no $\sigma$-periodic points (equivalently, $X_\sub$ contains no periodic closed invariant subspaces). If $\sub$ is non-periodic and primitive, then $X_\sub$ is aperiodic and topologically a Cantor set (in particular $X_\sub$ is non-empty) and $\sub$ is a minimal substitution.
\begin{remark}
Often in the literature, the subshift of a substitution is defined in terms of the $\mathbb{Z}$-orbit closure of a particular bi-infinite sequence. For primitive substitutions this sequence-dependent definition coincides with our definition in terms of a language. The terms periodic, non-periodic and aperiodic are then normally associated to properties of a sequence, rather than its subshift. For our purposes, we need to define a substitution subshift independent of some generating sequence, as not all substitution subshifts contain a dense orbit.
One should therefore be careful when justifying the use of the term \emph{non-periodic} to refer to a subshift which may nevertheless contain periodic points under the shift action. In particular, there exist non-periodic substitutions whose subshifts contain sequences of all three types with respect to the sequence-based definition. For example $\sub\colon a\mapsto aa, b\mapsto aba, c\mapsto ccd, d\mapsto cd, e\mapsto bdecb$.
This potentially confusing naming convention should be read as follows: non-periodic substitution subshifts \emph{contain} a point with an infinite orbit (but may also contain finite orbits); aperiodic substitution subshifts consist \emph{only} of points with infinite orbits.
\end{remark}
\subsection{Tiling Spaces}
Let $\sub$ be a substitution on the alphabet $\Al$ with associated subshift $X_\sub$. The \emph{tiling space} associated to $\sub$ is the quotient space $$\TS_\sub = (X_\sub \times [0,1]) /{\sim}$$ where $\sim$ is generated by the relation $(w,0)\sim (\sigma(w),1)$.
The natural translation action $T \mapsto T+t$ for $t\in\mathbb{R}$ equips $\TS_\sub$ with a continuous $\mathbb{R}$ action which is minimal whenever $\sub$ is primitive. In this respect, tiling spaces are closely related to the better-known solenoids; to some degree, tiling spaces may be thought of informally as non-homogeneous solenoids. We note that there exist non-primitive substitutions whose associated tiling spaces have a minimal translation action, so primitivity is only a sufficient condition for minimality. This will be explored in Section \ref{SEC:minimal}.
\begin{defn}
Let $w = \ldots w_{-2} w_{-1} \cdot w_0 w_1 w_2 \ldots$ be a bi-infinite sequence in $X_\sub$ and let $t\in[0,1)$, so that $(w,t)$ is an element of the tiling space $\TS_\sub$. We define a map on the tiling spaces which we call $\sub \colon \TS_\sub \to \TS_\sub$, given by $$\sub(w,t) = (\sigma^{\lfloor \tilde{t} \rfloor}(\sub(w)), \tilde{t}-\lfloor \tilde{t} \rfloor)$$ where $\tilde{t} = |\sub(w_0)| \cdot t$ and $\lfloor - \rfloor$ is the floor function.
\end{defn}
This map is continuous. Intuitively, we take a unit tiling in $\TS_\sub$ with a prescribed origin and partition each tile of type $a$ uniformly with respect to the substituted word $\sub(a)$ into tiles of length $\frac{1}{|\sub(a)|}$. We then expand each tile away from the origin so that each new tile is again of unit length, and with the origin lying proportionally above the tile it appears in after partitioning the original tiling.
A substitution $\sub$ is said to be \emph{recognisable} if the map $\sub \colon \TS_\sub \to \TS_\sub$ is injective.
It is a result of Moss\'{e} \cite{M:aperiodic} that a primitive substitution is aperiodic if and only if it is recognisable.
As with subshifts, there is a notion of minimality and minimal components for tiling spaces.
We call $\Lambda \subset \TS_\sub$ a \emph{minimal component} of $\TS_\sub$ if $\Lambda = (L\times I)/\sim$ for some minimal component $L$ of the subshift $X_\sub$, and we say that $\TS_\sub$ is a \emph{minimal tiling space} if it is a minimal component of itself. In Section \ref{SEC:non-minimal} this notion of minimality will be extended to any compact dynamical system, but for now this definition is more convenient.
There are many properties of primitive substitutions which one is likely to take for granted, and so we take this opportunity to explicitly spell out some of these properties and how such properties can fail in the general case (giving both minimal and non-minimal examples where appropriate).
The following results can be found in various places in the literature. We refer the reader to \cite{S:book} for a concise resource of proofs for most of these results.
\begin{prop}
Let $\Al$ be an alphabet on $k$ letters. If $\sub \colon \Al \to \Al^+$ is primitive, then:
\begin{enumerate}
\item $X_\sub$ is non-empty
\item $X_{\sub^n} = X_\sub$ for all $n \geq 1$
\item $|\sub^n(a)| \to \infty$ as $n \to \infty$ for all $a \in \Al$
\item $\sigma \colon X_\sub \to X_\sub$ is minimal. In particular $\TS_\sub$ is connected
\item $\sub$ is non-periodic if and only if $\sub$ is aperiodic
\item $\operatorname{rk}\check{H}^1(\TS_\sub) \leq k^2-k+1$ (see \cite{GM:multi-one-d} and \cite{R:mixed-subs})
\item $\TS_\sub$ has at most $k^2$ asymptotic orbits (see \cite{BDH:asymp-orbits})
\item If $\sub$ is recognisable then $\sub$ is non-periodic
\end{enumerate}
\end{prop}
\begin{prop}Counterexamples to the properties listed above, in the absence of primitivity, are given by:
\begin{enumerate}
\item Let $\Al = \{a, b\}$. If $\sub \colon a\mapsto b,\: b\mapsto a$ then $X_\sub$ is empty
\item Let $\Al = \{0, \overline{0}, 1, X\}$. If $\sub \colon 0 \mapsto \overline{0}\overline{0}1\overline{0},\: \overline{0} \mapsto 0010,\: 1\mapsto 1,\: X \mapsto 0\overline{0}$ then $0\overline{0} \in \hat{\mathcal{L}}_\sub$ but $0\overline{0} \notin \hat{\mathcal{L}}_{\sub^2}$ and so $X_{\sub^2}\subsetneq X_\sub$
\item Let $\Al = \{a, b, c\}$. If $\sub \colon a\mapsto aaca,\: b\mapsto b,\: c\mapsto bb$ then $|\sub^n(b)| \to 1$ and $|\sub^n(c)| \to 2$ as $n \to \infty$. For a non-minimal case, see the above example for point 2 and the letter $1$
\item See the counterexample for point 2 for a connected example. The substitution $a \mapsto ab,\: b \mapsto a,\: c\mapsto cd,\: d \mapsto c$ has a tiling space $\TS$ with two connected components
\item Let $\Al = \{a, b, c, d\}$. If $\sub \colon a \mapsto ab,\: b\mapsto a,\: c \mapsto cc,\: d \mapsto ca$ then $$X_\sub = X_{Fib} \sqcup \bigcup_{n \in \mathbb{Z}} \{\sigma^n(\ldots ccc.abaab\ldots)\} \sqcup \{\ldots cc.cc \ldots\}$$ where $Fib$ is the Fibonacci substitution given by restricting $\sub$ to the subalphabet $\{a, b\}$. The substitution $\sub$ is not aperiodic because its subshift contains the point $\ldots cc.cc \ldots$, which is fixed under $\sigma$. The substitution $\sub$ is non-periodic because $X_{Fib}$ is infinite
\item A minimal counterexample will be given in Section \ref{SEC:examples}
\item A minimal counterexample will be given in Section \ref{SEC:examples}
\item Let $\Al = \{a, b\}$. If $a \mapsto ab,\: b\mapsto b$ then $X_\sub = \{\ldots bb.bb \ldots\}$ and $\TS_\sub$ is homeomorphic to a circle, with the induced substitution map $\sub \colon \TS_\sub \to \TS_\sub$ acting as the identity, hence is injective. It follows that $\sub$ is recognisable, but not non-periodic
\end{enumerate}
\end{prop}
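Counterexample (2) can be verified mechanically. Writing `z` for $\overline{0}$ (an encoding choice of ours), the sketch below enumerates admitted two-letter words up to a finite depth; for $\sub^2$ this depth suffices, since the images of $0$ and $z$ under $\sub^2$ stay inside $\{0,1\}^*$ and $\{z,1\}^*$ respectively, so deeper iterates produce no new two-letter words:

```python
def admitted2(sub, alphabet, depth):
    # Two-letter subwords of sub^k(a) for all letters a and 0 <= k <= depth.
    words = set()
    for a in alphabet:
        w = a
        for _ in range(depth + 1):
            words.update(w[i:i + 2] for i in range(len(w) - 1))
            w = "".join(sub[c] for c in w)
    return words

sub = {"0": "zz1z", "z": "0010", "1": "1", "X": "0z"}
sub_squared = {a: "".join(sub[c] for c in sub[a]) for a in sub}
```

The word $0\overline{0}$ (here `"0z"`) is admitted by $\sub$ but not by $\sub^2$, witnessing $X_{\sub^2}\subsetneq X_\sub$.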
\section{The Minimal Case}\label{SEC:minimal}
There are two main results in this section.
Firstly, we identify a property, which we call \emph{tameness}, such that any substitution with this property is well behaved, in the sense that it does not exhibit certain pathologies that can occur in the non-primitive setting.
Then the first main result is Theorem \ref{THM:wild-characterisation}, which characterises tameness.
The second main result of this section is the following.
\begin{thm}\label{THM:main}
Let $\sub$ be a minimal substitution with non-empty minimal subshift $X_\sub$.
There exists an alphabet $\Ze$ and a primitive substitution $\theta$ on $\Ze$ such that $X_\theta$ is topologically conjugate to $X_\sub$.
\end{thm}
The idea of the theorem is that non-primitive substitutions are `pathological' and primitive ones are `well behaved', and the theorem makes it possible to replace a non-primitive substitution with a primitive one if the substitution is minimal.
This is similar to, but slightly stronger than, a result from the section on Open problems and perspectives (Section 6.2) of \cite{D:main-result}.
There are three reasons for presenting this result here.
Firstly, the result of \cite{D:main-result} does not appear to be well known, but is basic enough that it seems worthwhile to draw attention to it.
Secondly, the proof appearing in \cite{D:main-result} is only a sketch, using the Chacon substitution as an illustrative example, whereas a complete proof appears here.
Thirdly, the result presented here applies to the bi-infinite context, whereas the result of \cite{D:main-result} applies to the one-sided infinite context.
In this bi-infinite context there are significant pathologies that are not possible in the one-sided infinite context.
In particular, there exist examples (such as Example \ref{EX:periodic}) of minimal substitutions for which the subshift is not generated by any finite seed.
Corollary \ref{COR:aperiodic_implies_tame}, which was originally proved in \cite{BKM:recognisable}, and which is a consequence of Theorem \ref{THM:wild-characterisation}, implies that such a pathological substitution must give rise to a periodic subshift, in which case it is easy to find a primitive substitution giving rise to the same subshift.
\begin{example}\label{EX:periodic}
Let
\[
\sub \colon \left\{ \begin{array}{l} a \mapsto ab \\ b \mapsto b \end{array} \right..
\]
Then $X_\sub$ is periodic---it contains only the constant sequence $\ldots bbb\ldots$.
This sequence does not contain any instance of the letter $a$, which is the only letter whose images under $\sub^n$ grow without bound.
\end{example}
\subsection{Periodicity and Aperiodicity}\label{SUBSEC:periodicity}
The lemmas below divide the class of substitutions with minimal subshifts into two subclasses, depending upon whether or not there is a legal letter whose images grow without bound under iteration of $\sub$.
In particular, Corollary \ref{COR:aperiodic_implies_tame} implies that, in the absence of such a legal letter, a minimal subshift must be periodic, as in Example \ref{EX:periodic}.
Some of these results apply more generally to non-minimal substitutions, and will see further use in Section \ref{SEC:non-minimal}.
These lemmas involve a partition of the alphabet into two subsets.
\begin{defn}\label{DEF:bounded-expanding}
For a substitution $\sub \colon \Al \to \Al^+$, let us say that a word $u\in \Al^+$ is \emph{bounded} with respect to $\sub$ if there exists $M\in\mathbb{N}$ such that $|\sub^n(u)|\leq M$ for all $n\in\mathbb{N}$, and \emph{expanding} if it is not bounded.
Let us denote the set of bounded letters for $\sub$ by $\Al_B$, and the set of expanding letters by $\Al_\infty$.
\end{defn}
Then $\Al$ is the disjoint union of $\Al_B$ and $\Al_\infty$, and if $X_\sub$ is non-empty then $\Al_\infty$ is non-empty.
Note also that, for every $b \in \Al_\infty$, $\sub(b)$ must contain at least one letter in $\Al_\infty$, whereas for every $a \in \Al_B$, $\sub(a)$ contains only letters in $\Al_B$.
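The partition $\Al = \Al_B \sqcup \Al_\infty$ is computable. A letter is expanding precisely when, in the directed graph sending $a$ to the letters of $\sub(a)$, it reaches a cycle passing through a letter whose image has length at least two; the sketch below (our own code, with the characterisation stated without proof) implements this:

```python
def split_alphabet(sub, alphabet):
    # reach[a] = letters occurring in sub^n(a) for some n >= 0
    reach = {a: {a} for a in alphabet}
    changed = True
    while changed:
        changed = False
        for a in alphabet:
            new = set().union(*(set(sub[b]) for b in reach[a]))
            if not new <= reach[a]:
                reach[a] |= new
                changed = True
    # c lies on a cycle iff c occurs in some sub^n(c) with n >= 1
    on_cycle = {c for c in alphabet if any(c in reach[d] for d in sub[c])}
    expanding = {a for a in alphabet
                 if any(u in on_cycle and len(sub[u]) >= 2 for u in reach[a])}
    return expanding, set(alphabet) - expanding
```

For Example \ref{EX:periodic} this returns $\Al_\infty = \{a\}$ and $\Al_B = \{b\}$; for the substitution $a\mapsto aaca,\: b\mapsto b,\: c\mapsto bb$ seen earlier, the letter $c$ is bounded even though its images have length two.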
The following definitions will be useful in separating the badly behaved non-primitive substitutions from the well behaved ones.
\begin{defn}\label{DEF:wild}
Let $\sub$ be a substitution, and let $B$ denote the set of bounded legal words for $\sub$. If $B$ is finite, we say $\sub$ is \emph{tame}. If $B$ is infinite, we say $\sub$ is \emph{wild}.
\end{defn}
The substitution in Example \ref{EX:periodic} is wild, as the periodic sequence $\ldots bb.bb \ldots$ is an element of the subshift and the words $b^n$ are all bounded for $\sub$.
\begin{example}
The substitution $\sub' \colon a \mapsto abb,\: b \mapsto bbb$ is tame, as $|\sub(u)| = 3|u|$ for all words $u$.
\end{example}
Note that the subshifts $X_\sub$ and $X_{\sub'}$ for these two examples are the same, so tameness is only a property of a substitution and not its associated subshift.
A particular goal of introducing these definitions is to show that, if a minimal substitution is wild, then it is periodic.
Wildness can be characterised in terms of the following two sets and the property $(\ast)$.
\begin{defn}\label{DEF:right-bounded}
Let $\Al_{right}\subset \Al_\infty$ denote the set of expanding letters such that for every $a\in\Al_{right}$ the rightmost letter of $\sub(a)$ is a bounded letter, and define $\Al_{left}$ similarly.
\end{defn}
\begin{defn}
Suppose that either there exists a letter $a\in \Al_{right}$ and an increasing sequence of integers $N_i$ such that the rightmost expanding letter appearing in $\sub^{N_i}(a)$ is also in $\Al_{right}$ for all $i\geq 1$, or else there exists a letter $a\in \Al_{left}$ and an increasing sequence of integers $N_i$ such that the leftmost expanding letter appearing in $\sub^{N_i}(a)$ is also in $\Al_{left}$ for all $i\geq 1$.
In such a case, we say that $\sub$ has property $(\ast)$.
\end{defn}
The following lemma is used in the proof of Theorem \ref{THM:wild-characterisation}, which characterises wildness.
\begin{lemma}\label{LEM:leftmost-rightmost-periodic}
Let $\sub$ be a substitution on $\Al$ with property $(\ast)$. Then $X_\sub$ contains a periodic sequence, the letters of which are all bounded.
\end{lemma}
\begin{proof}
Suppose that there exists a letter $a\in \Al_{right}$ and an increasing sequence of integers $N_i$ such that the rightmost expanding letter of $\sub^{N_i}(a)$ is also in $\Al_{right}$; the case with leftmost expanding letters is similar. Note that the rightmost expanding letter of $\sub^{N_k}(a)$ must also have the same property as $a$ for the shifted sequence of integers $M_{i}= N_{i-k}$. So, by possibly choosing a different $a\in \Al_{right}$ we may further assume without loss of generality that there is a power $N$ so that the rightmost expanding letter of $\sub^{N}(a)$ is $a$. So, let $\sub^{N}(a) = vau$ where $u$ is a bounded word. Then by induction, we have $$\sub^{(k+1)N}(a) = \sub^{kN}(v)\ldots \sub^{N}(v) v a u \sub^N(u) \ldots \sub^{kN}(u).$$
Now, as $u$ is a bounded word, there exists a $K$ such that $|\sub^{(K+1)N}(u)| = |\sub^{KN}(u)|$ and as there are only finitely many words of this length, by possibly replacing $\sub$ with a power, we can choose $K$ such that $\sub^{(K+1)N}(u) = \sub^{KN}(u)$. So for all $j \geq K$, the word $(\sub^{KN}(u))^j$ appears as a subword of $\sub^n(a)$ for some $n$. As such, the periodic sequence $$\ldots \sub^{KN}(u) \sub^{KN}(u)\sub^{KN}(u) \ldots$$ is admitted by $\sub$. This means that the subshift $X_\sub$ contains a periodic point, and, as $u$ is bounded, so is $\sub^{KN}(u)$.
\end{proof}
The main result of this section is the following theorem, which characterises wildness.
\begin{thm}\label{THM:wild-characterisation}
Let $\sub$ be a substitution on an alphabet $\Al$.
The following are equivalent:
\begin{enumerate}
\item $\sub$ is wild.
\item $\sub$ has property $(\ast)$.
\end{enumerate}
\end{thm}
\begin{proof}
The fact that (2) implies (1) is an immediate consequence of Lemma \ref{LEM:leftmost-rightmost-periodic}.
To see that (1) implies (2), suppose that $\sub$ does not have property $(\ast)$. Then there is no $a\in \Al_{left}$ and increasing sequence of integers $N_i$ such that the leftmost expanding letter appearing in $\sub^{N_i}(a)$ is also in $\Al_{left}$ for all $i\geq 1$. By this assumption, there exists an $N$ such that, for all $k \geq 0$, the leftmost expanding letter in $\sub^{N+k}(a)$ is not in $\Al_{left}$ for any expanding letter $a$. Let $U_{left}$ be the set of bounded words that appear at the start of any word of the form $\sub^n(a)$ for any expanding letter $a$. This set is finite because $\sub^{N+k}(a)$ will be a word of the form $ubv$ where $u$ is bounded and the leftmost letter of $\sub(b)$ is expanding and not in $\Al_{left}$. Let $k_{left} = \max\{|u| \mid u\in U_{left}\}$.
As $\sub$ does not have property $(\ast)$, there is also no $a\in \Al_{right}$ and increasing sequence of integers $N_i$ such that the rightmost expanding letter appearing in $\sub^{N_i}(a)$ is also in $\Al_{right}$ for all $i\geq 1$. Then we can similarly form $U_{right}$, the set of bounded words that appear at the end of any word of the form $\sub^n(a)$ for $a\in\Al_{right}$. Let $k_{right} = \max\{|u| \mid u\in U_{right}\}$.
It is easy to see that the only legal bounded words for $\sub$ are either bounded words appearing as subwords contained in the interior of $\sub(a)$ for an expanding $a$, or words of the form $u_1u_2$ for $u_1\in U_{right}$ and $u_2\in U_{left}$. It follows that every legal bounded word has length at most $\max\{ k_{left} + k_{right},\: \max_{a \in \Al}|\sub(a)|\}$ and so $\sub$ is tame.
\end{proof}
\begin{cor}\label{COR:leftmost-rightmost1}
If the leftmost and rightmost letter of $\sub(a)$ are elements of $\Al_\infty$ for all $a\in\Al_\infty$, then $\sub$ is tame.
\end{cor}
The next corollary, which originally appeared in \cite{BKM:recognisable}, and which says that an aperiodic substitution is tame, follows from Theorem \ref{THM:wild-characterisation} and Lemma \ref{LEM:leftmost-rightmost-periodic}.
\begin{cor}[Proposition 5.5 in \cite{BKM:recognisable}]\label{COR:aperiodic_implies_tame}
Let $\sub$ be a substitution on $\Al$. If $\sub$ is aperiodic, then $\sub$ is tame.
\end{cor}
Lemma \ref{LEM:leftmost-rightmost-periodic}, Theorem \ref{THM:wild-characterisation}, and Corollaries \ref{COR:leftmost-rightmost1} and \ref{COR:aperiodic_implies_tame} do not include minimality as a hypothesis, but in the presence of minimality there are further consequences.
Any non-periodic minimal substitution is aperiodic, so Corollary \ref{COR:aperiodic_implies_tame} implies in particular that a non-periodic minimal substitution is tame.
The following easy lemma is an immediate consequence of the definition of tameness (Definition \ref{DEF:wild}), and will be useful for rewriting tame minimal substitutions.
\begin{lemma}\label{LEM:minimal-seed}
Let $\sub$ be a tame substitution on $\Al$. If $X_\sub$ is non-empty, then it contains a bi-infinite sequence $w \in X_\sub$ with the property that there exists $M\in\mathbb{N}$ such that every word $u\subset w$ of length exceeding $M$ contains an expanding letter. In particular, $w$ contains infinitely many expanding letters.
\end{lemma}
\begin{remark}\label{REM:find-seed}
If $\sub$ is an aperiodic substitution on $\Al$, there is a recipe for finding legal expanding letters.
Let $S$ denote the set of all pointed words in $\mathcal{L}_\sub$ of the form
\begin{equation}\label{EQ:seed}
u = b_{-k-1} a_{-k}\cdots a_{-1}.a_0\cdots a_l b_{l+1},
\end{equation}
\end{equation}
where $b_{-k-1},b_{l+1}\in \Al_\infty$ and $a_i\in \Al_B$ for $-k\leq i \leq l$.
Here the pointed words $a.bc$ and $ab.c$ are different.
$S$ consists of words in the language of $\sub$, but they need not all be legal words, although by Lemma \ref{LEM:minimal-seed} $S$ contains at least one legal word.
Define a function $f:S\to S$ as follows.
For a pointed word $u$ of the form in (\ref{EQ:seed}), the words $\sub(b_{-k-1})$ and $\sub(b_{l+1})$ are subwords of $w = \sub(u)$ occurring at the beginning and the end respectively.
Each of these words contains at least one expanding letter.
Let $b^-$ be the last such letter occurring in $\sub(b_{-k-1})$, and let $b^+$ be the first such letter occurring in $\sub(b_{l+1})$.
Then $w$ contains a subword of the form $w_{m_1}\ldots w_{m_2}$, where $m_1 < 0 \leq m_2$, $w_{m_1} = b^-$, $w_{m_2} = b^+$, and $w_i \in \Al_B$ for all $m_1 < i < m_2$.
Moreover, $w_{m_1} \ldots w_{m_2} \in S$.
Therefore let us define $f(u) = w_{m_1} \ldots w_{m_2}$ (seen as a pointed word).
Choose a word $u \in S$.
Such a word can be found by considering the sets $\{ \sub^n(a) : a \in \Al\}$---by Lemma \ref{LEM:minimal-seed}, for sufficiently high $n$ this set contains a word with expanding letters in at least two positions, which can then be shifted and truncated to obtain $u\in S$.
Consider the forward $f$-orbit of $u$.
If this orbit were infinite, that would imply that $\sub$ satisfied the hypothesis of Lemma \ref{LEM:leftmost-rightmost-periodic}, using either $a = b_{-k-1}$ or $a = b_{l+1}$.
This would imply that $X_\sub$ contained a periodic point, contradicting the hypothesis of aperiodicity.
Therefore the forward $f$-orbit of $u$ is finite.
This means that some word $v$ in this forward $f$-orbit is sent to itself under a power of $f$; $v$ can then be used as the seed to produce a bi-infinite word in $X_\sub$ that is fixed under $\sub$, and both of the expanding letters in $v$ are legal.
\end{remark}
\subsection{A New Substitution}\label{SUBSEC:new-sub}
Any periodic minimal subshift is equal to the subshift of a primitive substitution of constant length (say, the substitution that sends each legal letter to the same sequence $u$ with the property that $\ldots u.uu\ldots$ is in the subshift), so in the periodic case the conclusion of Theorem \ref{THM:main} is immediately true.
Therefore we may suppose henceforth that the minimal subshift $X_\sub$ is non-periodic, and hence tame, and so by Lemma \ref{LEM:minimal-seed} and Remark \ref{REM:find-seed}, that $X_\sub$ contains a bi-infinite sequence $w$ that is invariant under $\sub^N$ and that contains a legal letter $b \in \Al_\infty$.
Moreover, $\sub^N(b)\subset \sub^N(w) \in X_\sub$ and $X_\sub$ is linearly recurrent, so for sufficiently large $N$, $\sub^N(b)$ must contain $b$.
Similarly, linear recurrence implies that any legal word $u$ appears in $\sub^{k_uN}(b)$ for some $k_u\in\mathbb{N}$.
By passing to a multiple of $N$ if necessary, we may suppose further that $\sub^N(b)$ contains at least two copies of the letter $b$.
Define $\Be := \{ bu : u \text{ does not contain } b \text{ and } bub\text{ is legal}\}$.
In the terminology of \cite{D:main-result}, these are the \emph{return words to $b$}.
Enumerate the elements of $\Be\setminus \{ b\}$: $\Be\setminus \{ b\} = \{ v_1,\ldots , v_k\}$.
If $b\in\Be$, then write $v_0 = b$.
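Return words are straightforward to compute on a long finite approximation of a sequence; the following sketch (function name ours) recovers them from consecutive occurrences of $b$:

```python
def return_words(word, b):
    # The words from one occurrence of b up to (excluding) the next.
    idx = [i for i, c in enumerate(word) if c == b]
    return {word[i:j] for i, j in zip(idx, idx[1:])}

# For the Fibonacci substitution, the return words to a are exactly a and ab.
fib = {"a": "ab", "b": "a"}
w = "a"
for _ in range(12):
    w = "".join(fib[c] for c in w)
```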
We can break $\sub^N(b)$ into block form:
\begin{align*}
\sub^N(b) & = uv_{01}\ldots v_{0r_0},
\end{align*}
where $u$ does not contain $b$ and, for $1\leq j \leq r_0$, $v_{0j}$ has the form $bv$ for some $v$ that does not contain $b$.
Moreover, as $\sub^N(b)$ contains $b$ in at least two distinct places, we know that $r_0 > 1$.
And, as $b$ is legal, so is $\sub^N(b)$, so if $j<r_0$ then the sequence $v_{0j}b$ is legal, and so $v_{0j}\in \Be$.
The word $v_{0r_0}$ need not be in $\Be$.
For each $i\geq 1$, we can write
\begin{align*}
\sub^N(v_i) & = \sub^N(b)w_iv_{i1}\ldots v_{ir_i},
\end{align*}
where $r_i\geq 0$, $w_i$ does not contain $b$, and, for $1\leq j\leq r_i$, $v_{ij}$ has the form $bv$ for some $v$ that does not contain $b$.
If $r_i > 0$, then for all $j < r_i$, the word $v_{ij}b$ appears in $\sub^N(v_i)$, and hence is legal, so $v_{ij} \in \Be$.
The word $v_{ir_i}$ need not be in $\Be$, but $v_ib$ is legal, and hence $\sub^N(v_ib)$ is legal, and this word contains $v_{ir_i}ub$.
Therefore, if $r_i>0$, then $v_{ir_i}u \in \Be$; let us denote this word by $v_{ir_i}'$.
Further, although the word $v_{0r_0}$ from above need not be in $\Be$, for all $i$ with $r_i > 0$ it is true that $v_{0r_0}w_i \in \Be$, and for all $i$ with $r_i = 0$ it is true that $v_{0r_0}w_iu \in \Be$.
Let us denote by $w_i'$ the word $v_{0r_0}w_i$ if $r_i>0$ or $v_{0r_0}w_iu$ if $r_i = 0$.
Also $v_{0r_0}u \in \Be$; let us denote this word by $v_{0r_0}'$.
Let $\Ga$ be a new alphabet, disjoint from $\Al$ and $\Be$, but with the same number of elements as $\Be$, and let $\alpha \colon \Be \to \Ga$ be a bijection of sets.
The function $\alpha$ extends naturally to a map $\Be^+ \to \Ga^+$.
For $v \in \Be$, let $\tilde{v}$ denote $\alpha(v)$.
Define a substitution $\psi \colon \Ga \to \Ga^+$ by
\begin{align*}
\psi(\tilde{v}_0) & = \tilde{v}_{01}\ldots \tilde{v}_{0r_0-1}\tilde{v}_{0r_0}'
\intertext{ if $v_0 = b\in\Be$, and }
\psi(\tilde{v}_i) & = \left\{ \begin{array}{ll} \tilde{v}_{01}\ldots \tilde{v}_{0r_0-1}\tilde{w}_i'\tilde{v}_{i1}\ldots \tilde{v}_{ir_i-1}\tilde{v}_{ir_i}' & \text{ if } r_i>0 \\
\tilde{v}_{01}\ldots \tilde{v}_{0r_0-1}\tilde{w}_i' & \text{ if } r_i=0
\end{array}\right.
\end{align*}
for all $i > 0$.
\begin{lemma}\label{LEM:primitivity}
The substitution $\psi \colon \Ga \to \Ga^+$ defined above is primitive.
\end{lemma}
\begin{proof}
For all $v\in\Be$ there exists $n_v\in\mathbb{N}$ such that $vb$ is a subword of $\sub^{n_vN}(b)$.
But the hypothesis that $b$ is a subword of $\sub^N(b)$ means that, for all $k\leq l$, $\sub^{kN}(b)$ is a subword of $\sub^{lN}(b)$.
Thus, picking $l = \max_{v\in\Be} n_v$ means that, for all $v\in\Be$, $vb$ is a word of $\sub^{lN}(b)$.
Because all of the words $\{ vb : v\in\Be\}$ can be found in $\sub^{lN}(b)$, and because any two of these can have overlap in at most their first or last letters, it is possible to find all of the elements of $\Be$ as subwords of $\sub^{lN}(b)$, no two of which share any common indices.
Moreover, for all $w \in \Be$, $\sub^N(w)$ starts with $uv_{01}$ and $b$ is a subword of $v_{01}$, so $\sub^{(l+1)N}(w)$ contains every $v \in \Be$ within the block $\sub^{lN}(v_{01})$ that begins at index $|\sub^{lN}(u)|$.
Then for all $w \in \Be$, $\psi(\tilde{w})$ starts with $\tilde{v}_{01}$, so $\psi^{l+1}(\tilde{w})$ contains $\tilde{v}$ for all $v \in \Be$.
Therefore $\psi$ is primitive.
\end{proof}
\subsection{Topological Conjugacy}\label{SUBSEC:topological-conjugacy}
The new substitution $\psi$ is related to $\sub$ (specifically, they give rise to homeomorphic tiling spaces---see Section \ref{SUBSEC:subshifts}), but it does not necessarily give rise to a topologically conjugate subshift.
For this the following result, proved in \cite[Proposition 3.1]{D:main-result} and paraphrased here, will be useful.
\begin{prop}\label{PROP:letter-expansion}
Let $\psi \colon \Ga \to \Ga^+$ be a primitive substitution and let $g$ be a map from $\Ga$ to $\Al^+$.
Let $X_g \subset \Al^\mathbb{Z}$ denote the subshift generated by $g(X_\psi)$---that is, $X_g := \{ \sigma_\Al^n(g(x)) : x \in X_\psi, n \in \mathbb{Z}\}$.
Then there exists an alphabet $\Ze$, a primitive substitution $\theta \colon \Ze \to \Ze^+$, and a map $h \colon \Ze \to \Al$ such that $h(X_\theta) = X_g$.
\end{prop}
We can apply this result to the current setting by using the substitution $\psi \colon \Ga \to \Ga^+$ defined above, which was shown to be primitive in Lemma \ref{LEM:primitivity}, and the map $g \colon \Ga \to \Al^+$ defined by $g(\tilde{v}_i) = v_i$, where $v_i \in \Al^+$ is viewed as a word possibly consisting of more than one letter.
Then the subshift $X_g$ from the statement of Proposition \ref{PROP:letter-expansion} is exactly the original substitution subshift $X_\sub$.
Therefore Proposition \ref{PROP:letter-expansion} guarantees the existence of a factor map---in fact, a one-block code \cite{LM:book}---from a primitive substitution subshift $X_\theta$ to the given minimal substitution subshift $X_\sub$.
If we look at how $\Ze$ and $\theta$ are defined in the proof of Proposition \ref{PROP:letter-expansion} in \cite{D:main-result}, then it becomes clear that the factor map $h$ is in fact a topological conjugacy---i.e., it has an inverse that is also a factor map.
Indeed, $\Ze$ is the set of all pairs $(\tilde{v},k)$, where $v \in\Be$ and $1 \leq k \leq |v|$.
Every sequence $w \in X_\sub$ can be represented uniquely as a concatenation of return words $v \in \Be$ (with the origin possibly contained in the interior of such a word).
Then there is a map $p: X_\sub\to \Ze^\mathbb{Z}$ defined in the following way on a sequence $w\in X_\sub$: If $w_j$ falls at position $k$ in the return word $v_i$, then $p(w)_j = (\tilde{v}_i,k)$.
This is a sliding block code with block size equal to $\max_{v \in \Be}|v|$, and the one-block code $h$ is its inverse.
The usefulness of Proposition \ref{PROP:letter-expansion} is in showing that $p(X_\sub)$ is in fact a primitive substitution subshift, which completes the proof of Theorem \ref{THM:main}.
\section{Examples and Applications}\label{SEC:examples}
The primitive substitution subshift $X_\theta$ is topologically conjugate to the original minimal subshift $X_\sub$, which is a very strong condition, but this comes at a price: if we follow the recipe from \cite[Proposition 3.1]{D:main-result} strictly, then the new alphabet $\Ze$ may be quite large---see the discussion after Proposition \ref{PROP:cohomology-rank}, below, for an example in which $|\Al| = 2$, $|\Ga| = 3$ and $|\Ze| = 9$.
For some computational purposes, particularly purposes involving tiling spaces, the substitution $\psi\colon \Ga \to \Ga^+$ can be just as good as $\theta$, and typically uses a smaller alphabet.
Consider the substitutions $\sub \colon \Al \to \Al^+$ and $\psi \colon \Ga \to \Ga^+$ from Theorem \ref{THM:main}, and the map $\alpha \colon \Ga \to \Be^+ \subset \Al^+$.
Then the tiling spaces $\TS_\sub$ and $\TS_\psi$ are homeomorphic via the map
\begin{align*}
f \colon \TS_\psi & \to \TS_\sub \\
(w,t) & \mapsto (\sigma^{\lfloor \tilde{t} \rfloor}(\alpha(w)), \tilde{t}-\lfloor \tilde{t} \rfloor),
\end{align*}
where $\tilde{t}=|\alpha(w_0)|\cdot t$.
This means that, for practical purposes, we can use $\TS_\psi$ to compute the topological invariants of $\TS_\sub$.
This is the approach in the following examples and applications, which illustrate the construction outlined in Section \ref{SEC:minimal}.
The first example illustrates some of the greater `freedom' in behaviour exhibited by minimal non-primitive substitutions on small alphabets.
Recall from \cite{GM:multi-one-d} and \cite{R:mixed-subs} that, if $\TS_\sub$ is the tiling space associated to a primitive substitution $\sub$ on an alphabet $\Al$ with $k$ letters, then the rank of the first \v{C}ech{} cohomology $\check{H}^1$ of $\TS_\sub$ is bounded above by $k^2-k+1$, and this bound is tight. Recall from \cite{BDH:asymp-orbits} that $X_\sub$ has at most $k^2$ asymptotic orbits (equivalently, $\TS_\sub$ has at most $k^2$ asymptotic arc components), and this bound is also tight. Both of these results fail spectacularly if we allow non-primitive minimal substitutions, which suggests that the alphabet size is not as much of a limiting factor with respect to the topological and dynamical properties of a minimal substitution.
\begin{prop}\label{PROP:cohomology-rank}
Let $\Al = \{a, b\}$ be an alphabet on only two letters. For all $n \geq 2$ there exists a minimal substitution $\sub_n \colon \Al \to \Al^+$ such that $\check{H}^1(\TS_{\sub_n})$ has rank $n$ and $X_{\sub_n}$ has at least $n$ asymptotic orbits.
\end{prop}
\begin{proof}
We construct $\sub_n$ explicitly and use the methods from Section \ref{SEC:minimal} to prove the claim.
We define our family of substitutions $\sub_n$ by
$$
\sub_n \colon
\left\{
\begin{array}{rcl}
a & \mapsto & ab \, ab^2 \, \ldots ab^{n-1} \, ab^n \, a \\
b & \mapsto & b
\end{array}
\right.
$$
We leave confirmation of minimality of the substitution $\sub_n$ to the reader. The decomposition $\Al = \Al_{\infty} \sqcup \Al_B = \{a\} \sqcup \{b\}$ is quickly found and, as $\sub_n$ satisfies the hypotheses of Corollary \ref{COR:leftmost-rightmost1} we know that $a$ can be used as the seed letter for our return words. The set $S = \{ ab^i.b^ja \mid i+j\leq n,\: 0\leq i, j \}$ defined in Remark \ref{REM:find-seed} has a fixed point under $f^1 (=\mbox{Id}_S)$, and $\sub_n(a)$ contains at least two distinct copies of $a$ so we can choose $N=1$.
The return words to $a$ are $\Be_n = \{ ab^i \mid 1\leq i\leq n \}$. Let $v_i = ab^i$. The word $\sub_n(a)$ can be written as $u v_{01} \ldots v_{0r_0}$ with $u = \epsilon$ the empty word, $r_0 = n+1$, $v_{0i} = ab^i = v_i$ for $1 \leq i \leq n$ and $v_{0r_0} = a$. For each $1 \leq i \leq n$ we can write $\sub_n(v_i) = \sub_n(a) w_i$ with $w_i = b^i$, and we note that $r_i = 0$ for each $i\geq 1$. So $w'_i = v_{0r_0} w_i u = a b^i = v_i$.
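The identification of the return words $\Be_n$ can be confirmed mechanically; the following is a minimal sketch (the helper names are ours, not part of the construction above) that iterates $\sub_n$ and splits a prefix of the fixed point at occurrences of the seed letter $a$.

```python
def sub_n(n):
    # The substitution sigma_n: a -> ab ab^2 ... ab^n a, b -> b.
    return {"a": "".join("a" + "b" * i for i in range(1, n + 1)) + "a",
            "b": "b"}

def iterate(rules, word, k):
    # Apply the substitution k times to a word.
    for _ in range(k):
        word = "".join(rules[c] for c in word)
    return word

def return_words(prefix, seed="a"):
    # Split a prefix of the fixed point at occurrences of the seed letter;
    # each complete segment between consecutive seeds is a return word.
    cuts = [i for i, c in enumerate(prefix) if c == seed]
    return {prefix[i:j] for i, j in zip(cuts, cuts[1:])}

for n in range(2, 6):
    prefix = iterate(sub_n(n), "a", 3)
    assert return_words(prefix) == {"a" + "b" * i for i in range(1, n + 1)}
```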
We form $\Ga_n = \{ \tilde{v} \mid v \in \Be_n \} = \{ \tilde{v}_i \mid 1\leq i \leq n\}$ and define, for each $1 \leq i \leq n$ the substitution $\psi_n \colon \Ga_n \to \Ga_n^+$ by
$$
\begin{array}{rcl}
\psi_n(\tilde{v}_i) & = & \tilde{v}_{01} \tilde{v}_{02} \ldots \tilde{v}_{0n} \tilde{w}'_{i} \\
& = & \tilde{v}_1 \tilde{v}_2 \ldots \tilde{v}_n \tilde{v}_i.
\end{array}
$$
This can more succinctly be written on the alphabet $\{1,\ldots, n\}$ as
$$\psi_n\colon i \mapsto 1 2 \ldots n i.$$
By Lemma \ref{LEM:primitivity} and the discussion in Section \ref{SUBSEC:subshifts}, $\psi_n$ is a primitive substitution whose tiling space $\TS_{\psi_n}$ is homeomorphic to $\TS_{\sub_n}$. By the non-periodicity of $\sub_n$ and a result of Moss\'{e} \cite{M:aperiodic}, $\psi_n$ is recognisable. We notice that $\psi_n$ is also a left-proper substitution and so by standard results about left-proper substitutions (see \cite{S:book}), the first \v{C}ech{} cohomology of $\TS_{\psi_n}$ (and hence of $\TS_{\sub_n}$) is given by the direct limit of the transpose of the incidence matrix of $\psi_n$ acting on the group $\mathbb{Z}^n$.
The incidence matrix of $\psi_n$ is the symmetric matrix $M_n = \mathbf{1}_n + I_n$ where $\mathbf{1}_n$ is the $n\times n$ matrix of all $1$s, and $I_n$ is the $n\times n$ identity matrix. It is easy to check that $M_n$ has full rank and so $$\mbox{rk} \check{H}^1(\TS_{\sub_n}) = \mbox{rk} \check{H}^1(\TS_{\psi_n}) = \mbox{rk} \varinjlim(M_n) = n.$$
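The rank claim can be confirmed mechanically. The sketch below, with illustrative helper names, builds $M_n$ and computes its rank over the rationals by Gaussian elimination; the positivity of all entries of $M_n$ also witnesses the primitivity of $\psi_n$ directly.

```python
from fractions import Fraction

def incidence(n):
    # Incidence matrix of psi_n : i -> 1 2 ... n i, i.e. M_n = 1_n + I_n.
    return [[Fraction(1 + (i == j)) for j in range(n)] for i in range(n)]

def rank(mat):
    # Rank over the rationals via Gaussian elimination.
    m = [row[:] for row in mat]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

for n in range(2, 8):
    M = incidence(n)
    assert all(e > 0 for row in M for e in row)  # psi_n is primitive
    assert rank(M) == n                          # M_n has full rank
```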
To prove the claim about asymptotic orbits, note that there exists a right-infinite sequence $v$ such that, for every $i\in \{1,2,\ldots, n\}$, there exists a left-infinite sequence $u_i$ (found by repeated substitution on the seed $i.1$) such that $u_i i.1 v$ is a point in $X_{\psi_n}$. By construction, $u_i i.1 v = u_j j.1 v$ if and only if $i=j$, and the bi-infinite sequences $u_i i.1 v$ and $u_j j.1 v$ agree on all indices to the right of the origin for every pair $i,j$. It follows that each pair $i \neq j$ gives a right asymptotic pair of distinct orbits in $X_{\psi_n}$, and so there exist at least $n$ asymptotic orbits.
Equivalently then, $\TS_{\psi_n}$ has at least $n$ asymptotic arc components. These are preserved under homeomorphism and so $\TS_{\sub_n}$ also has at least $n$ asymptotic arc components. Equivalently, $X_{\sub_n}$ has at least $n$ asymptotic orbits.
\end{proof}
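The self-similar structure used in the asymptotic-orbit argument above can be checked numerically. The following is a minimal sketch for $n = 3$, with illustrative helper names; it verifies that $\psi_n^k(i)$ ends in the letter $i$ and begins with the common prefix $\psi_n^{k-1}(1)$, which is what makes the seeds $i.1$ generate right-asymptotic sequences.

```python
def psi(word, n=3):
    # psi_n : i -> 1 2 ... n i on the alphabet {1, ..., n}, here for n = 3.
    prefix = "".join(str(j) for j in range(1, n + 1))
    return "".join(prefix + c for c in word)

def iterate(word, k):
    for _ in range(k):
        word = psi(word)
    return word

n, k = 3, 4
for i in map(str, range(1, n + 1)):
    # psi^k(i) = psi^{k-1}(1) ... psi^{k-1}(n) psi^{k-1}(i), so words grown
    # from distinct seeds i.1 share a common right-infinite limit (prefix
    # psi^{k-1}(1)) while their left limits end in distinct letters i.
    assert iterate(i, k).startswith(iterate("1", k - 1))
    assert iterate(i, k).endswith(iterate(i, k - 1))
    assert iterate(i, k).endswith(i)
```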
We emphasise that our algorithm for producing a primitive substitution with a homeomorphic tiling space is computationally much simpler than the corresponding algorithm for the substitution with a topologically conjugate subshift. As an example, compare the presentation and relatively short computation of the substitution $\psi_3$ in the proof above with the following. The reader is invited to verify, following the proof of \cite[Proposition 3.1]{D:main-result}, that the substitution $\theta$ defined by
$$
\begin{array}{lcllcllcl}
\theta(A) & = & AB & \theta(L) & = & AB & \theta(W) & = & AB \\
\theta(B) & = & LMNWXYZAB & \theta(M) & = & LMN & \theta(X) & = & LMN \\
& & & \theta(N) & = & WXYZLMN & \theta(Y) & = & WXYZ \\
& & & & & & \theta(Z) & = & WXYZ
\end{array}
$$
produces a subshift that is topologically conjugate to $X_{\sub_3}$, where the conjugacy $$h \colon \{A,B,L,M,N,W,X,Y,Z\} \to \{a,b\}$$ is given by $h(A) = h(L) = h(W) = a$ and $h(B) = h(M) = h(N) = h(X) = h(Y) = h(Z) = b$.
(Of course, a smaller alphabet could be used; the nine-letter substitution above is simply what is obtained when the recipe is followed without modification.)
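As a sanity check on the claimed conjugacy, one can compare $h$-images of $\theta$-iterates of $A$ with $\sub_3$-iterates of $a$. Empirically (this identity is our observation, not a claim made above), $h(\theta^{k+1}(A)) = \sub_3^k(a)\,b$, so in particular these images are legal words for $\sub_3$. A minimal sketch:

```python
theta = {"A": "AB", "B": "LMNWXYZAB", "L": "AB", "M": "LMN",
         "N": "WXYZLMN", "W": "AB", "X": "LMN", "Y": "WXYZ", "Z": "WXYZ"}
sigma3 = {"a": "ababbabbba", "b": "b"}  # sigma_3: a -> ab ab^2 ab^3 a
h = {c: "a" if c in "ALW" else "b" for c in theta}  # the one-block code

def iterate(rules, word, k):
    # Apply the substitution k times to a word.
    for _ in range(k):
        word = "".join(rules[c] for c in word)
    return word

# Consistency check: the h-image of every theta-iterate of A equals the
# corresponding sigma_3-iterate of a followed by a single b, hence is a
# legal word for sigma_3.
for k in range(6):
    lhs = "".join(h[c] for c in iterate(theta, "A", k + 1))
    assert lhs == iterate(sigma3, "a", k) + "b"
```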
\begin{comment}
The following example illustrates how $N$ can be greater than $1$ (One can check that the substitution is related to the Thue-Morse substitution $0\mapsto 01,\: 1\mapsto 10$ by mapping the letter $c$ to the empty word $\epsilon$).
\begin{example}\label{EX:thue-morse-c}
Let $\sub \colon a\mapsto acb,\: b\mapsto bca,\: c\mapsto c$.
The decomposition $\Al = \Al_{\infty} \sqcup \Al_B = \{a,b\} \sqcup \{c\}$ is quickly found and, as $\sub$ satisfies the hypotheses of Corollary \ref{COR:minimal-seed} we know that $a$ can be used as the seed letter for our return words. The set $$S = \{ x.cy, xc.y \mid x,y\in\Al_{\infty}\}$$ defined in Remark \ref{REM:find-seed} has a fixed point under $f^2 (=\mbox{Id}_S)$, and $\sub^2(a)$ contains at least two distinct copies of $a$ so we can choose $N=2$.
The return words to $a$ are $\Be = \{ ac,acbc,acbcbc\}$. Let $v_i = a(cb)^{i-1}c$, $i = 1,2,3$. The word $\sub^N(a) = \sub^2(a) = acbcbca$ can be written as $u v_{01} \ldots v_{0r_0}$ with $u = \epsilon$ the empty word, $r_0 = 2$, $v_{01} = acbcbc = v_3$, and $v_{0r_0} = v_{02} = a$. We can write:
\indent\indent $\sub^2(v_1) = \sub^2(a) w_1$ with
\indent\indent $w_1 = c$, $r_1 = 0$
\indent\indent $\sub^2(v_2) = \sub^2(a) w_2 v_{21} v_{22}$ with
\indent\indent $w_2 = cbc$, $v_{21} = ac = v_1$, $v_{22} = acbc = v_2$, $r_2 = 2$
\indent\indent $\sub^2(v_3) = \sub^2(a) w_3 v_{31} v_{32} v_{33} v_{34}$ with
\indent\indent $w_3 = cbc$, $v_{31} = ac = v_1$, $v_{32} = acbcbc = v_3$, $v_{33} = ac = v_1$, $v_{34} = acbc = v_2$, $r_3 = 4$.
\noindent As $u=\epsilon$, $v'_{ir_i}=v_{ir_i}$. As $r_i = 0$ only if $i = 1$, we have:
\indent\indent $w'_1 = v_{0r_0} w_1 u = ac = v_1$
\indent\indent $w'_2 = v_{0r_0} w_2 = acbc = v_2$
\indent\indent $w'_3 = v_{0r_0} w_3 = acbc = v_2$
\noindent We form $\Ga = \{ \tilde{v} \mid v \in \Be \} = \{ \tilde{v}_1, \tilde{v}_2, \tilde{v}_3\}$ and define the substitution $\psi \colon \Ga \to \Ga^+$ by
$$
\begin{array}{rcl}
\psi(\tilde{v}_1) & = & \tilde{v}_{01} \tilde{w}'_{1} \\
& = & \tilde{v}_3 \tilde{v}_1\\
\psi(\tilde{v}_2) & = & \tilde{v}_{01} \tilde{w}'_{2} \tilde{v}_{21}\tilde{v}'_{22} \\
& = & \tilde{v}_3 \tilde{v}_2 \tilde{v}_1 \tilde{v}_2.\\
\psi(\tilde{v}_3) & = & \tilde{v}_{01} \tilde{w}'_{3} \tilde{v}_{31} \tilde{v}_{32} \tilde{v}_{33} \tilde{v}'_{34}\\
& = & \tilde{v}_3 \tilde{v}_2 \tilde{v}_1 \tilde{v}_3 \tilde{v}_1 \tilde{v}_2.
\end{array}
$$
This can more succinctly be written on the alphabet $\{1, 2, 3\}$ as
$$
\begin{array}{rcl}
\psi(1) & = & 31\\
\psi(2) & = & 3212 \\
\psi(3) & = & 321312
\end{array}
$$
\end{example}
\end{comment}
\begin{comment}
The following example illustrates where $u$ may be non-trivial. It is also an example of a topologically equivalent primitive substitution on fewer letters than the original minimal substitution (one can check that the associated tiling space is homeomorphic to the tiling space of the Fibonacci substitution). We omit much of the writing and just give a list of notation so that the reader may confirm their own calculations.
\begin{example}
Let $\sub\colon a\mapsto bc,\: b\mapsto b,\: c\mapsto ca$.
\begin{itemize}
\item $\Al = \Al_{\infty} \sqcup \Al_B = \{a,c\} \sqcup \{b\}$
\item Seed letter - $a$
\item $S = \{ a.bc, c.bc, ab.c, cb.c\}$
\item $f^2$ has a fixed point
\item $\sub^4(a)$ contains at least two distinct copies of $a$ $\Longrightarrow N=4$
\item $\Be = \{ abcbc, abc\}$, $v_1 = abcbc$, $v_2 = abc$
\item $\sub^N(a) = \sub^4(a) = bcabcbca$
\item $u = bc$, $v_{01} = abcbc = v_1$, $v_{0r_0} = v_{02} = a$, $r_0 = 2$
\item $\sub^4(v_1) = \sub^4(a) w_1 v_{11} v_{12} v_{13} v_{14} v_{15} v_{16}$
\item $w_1 = bc$, $v_{11} = v_1$, $v_{12} = v_2$, $v_{13} = v_1$, $v_{14} = v_1$, $v_{15} = v_2$, $v_{16} = abc$, $r_1 = 6$
\item $\sub^4(v_2) = \sub^4(a) w_2 v_{21} v_{22} v_{23}$
\item $w_2 = bc$, $v_{21} = v_1$, $v_{22} = v_2$, $v_{23} = abc$, $r_2 = 3$
\item $v'_{16} = v_{16} u = abcbc = v_1$
\item $v'_{23} = v_{23} u = abcbc = v_1$
\item $w'_1 = v_{0r_0} w_1 = abc = v_2$
\item $w'_2 = v_{0r_0} w_2 = abc = v_2$
\end{itemize}
$$
\begin{array}{rcl}
\psi(1) & = & 12121121\\
\psi(2) & = & 12121
\end{array}
$$
\end{example}
\end{comment}
The reader is encouraged to try the example $\sub \colon a\mapsto acb,\: b\mapsto adb,\: c\mapsto dd,\: d\mapsto d$, where the function $f$ is not a bijection, and also the example of the non-primitive Chacon substitution $\sub \colon a\mapsto aaba,\: b\mapsto b$, where $\Be$ contains the single-letter return word $a$ and so $v_0$ needs special treatment.
\section{The Non-Minimal Case}\label{SEC:non-minimal}
We now turn our attention to those substitutions which give rise to non-minimal subshifts; we call such substitutions \emph{non-minimal substitutions}.
In the primitive case, a standard approach is to replace the alphabet $\Al$ with a new alphabet consisting of \emph{collared letters}, which are copies of the letters from $\Al$ containing extra information about the letters that appear around them in elements of $X_\sub$.
It is important to be careful about how we extend this idea to the non-minimal case, as a non-minimal substitution is non-primitive, so in particular it may have letters in its alphabet that do not appear in any element of $X_\sub$.
The $n$-collared alphabet, which we will define below, is a subset of $\Al^{2n+1}$, where a $(2n+1)$-letter word should be thought of as a single letter---the one at position $n+1$---along with extra information about the $n$ letters that lie to either side of it.
View the words in $\Al^{2n+1}$ as being indexed so that their middle letter is at position $0$.
Let $a \in \Al$ and let $u = a_{-n}\ldots a_{-1} a a_1 \ldots a_n \in \Al^{2n+1}$ be a word whose central letter is $a$; then we define an \emph{$n$-collared letter} to be this formal pair $(a,u)$ and denote it by $a_u$.
If $s$ is the word $c_1 c_2 \ldots c_l \in \Al^*$, then, for those $i$ and $n$ for which it is well-defined, let $c_i(n)$ denote the subword $c_{i-n} \ldots c_i \ldots c_{i+n}$ of $s$.
Suppose $\sub(a) = b_1 \ldots b_k$ and let $a_u$ be an $n$-collared letter. Note that $b_1 \ldots b_k$ is a subword of $\sub(u)$, so we can define $b_i(n)$ for each $1\leq i \leq k$.
There is an induced substitution $\sub_n'$ defined on $\Al^{2n+1}$ by
$$\sub_n'(a_u) =
({b_1})_{b_1(n)}\ldots ({b_k})_{b_k(n)}.$$
Let us define an $n$-collared substitution, $\sub_n$, by restricting $\sub_n'$ to a sub-alphabet $\Al_n\subset \Al^{2n+1}$. The definition of such a subalphabet requires some work.
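Before any restriction to a subalphabet, the induced map $\sub_n'$ is straightforward to compute. The following minimal sketch, with illustrative helper names, computes the $1$-collared image of a single collared letter for the non-primitive Chacon substitution $a\mapsto aaba$, $b\mapsto b$; the windows are well-defined here because the substitution is non-erasing.

```python
def collar_image(rules, x, a, y, n=1):
    # Induced image of the n-collared letter a_u with u = x a y (n = 1):
    # substitute u and read off the (2n+1)-letter window around each
    # letter of the central block sigma(a).
    s = rules[x] + rules[a] + rules[y]
    start = len(rules[x])
    return [(s[i], s[i - n:i + n + 1])
            for i in range(start, start + len(rules[a]))]

chacon = {"a": "aaba", "b": "b"}  # non-primitive Chacon substitution

# The 1-collared image of the collared letter a_(aab):
image = collar_image(chacon, "a", "a", "b")

# Forgetting the collars recovers sigma(a):
assert "".join(c for c, _ in image) == chacon["a"]
assert image == [("a", "aaa"), ("a", "aab"), ("b", "aba"), ("a", "bab")]
```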
In defining the collared version of a substitution, it is tempting to take as a new alphabet the set of all collared letters $a_u$ such that $u$ is a legal word for $\sub$. The problem is that this alphabet essentially ignores the illegal letters in the original alphabet. Illegal letters are still crucial in generating pieces of the subshift which nevertheless do not contain the illegal letter. We are then in the position of having to include collared versions of the illegal letters into our new alphabet in a well-defined and consistent manner. There are various valid methods of doing this and the following is only one approach of many.
We may suppose $\Al$ contains at least one letter, say $b$, otherwise $X_\sub$ is empty.
Then we define $\Al_n$ and $\sub_n$ in terms of this $b$ (so they are not, in general, unique, although we will show below in Proposition \ref{PROP:collared-substitution} that the resulting subshift is always topologically conjugate to $X_\sub$).
\begin{defn}\label{DEF:collar}
Let $n$ be a non-negative integer and let $\sub$ be a substitution on $\Al$.
Define the following two sets.
\begin{align*}
\Al_n' & := \{ a_u\in \Al^{2n+1} : u \text{ is legal } \} \\
\Al_n'' & := \{ a_u \in \Al^{2n+1} : u = a_{-n}\cdots a_n, a_i = b \text{ for all } i\neq 0, a = a_0 \text{ is illegal }\}.
\end{align*}
Then define the \emph{$n$-collared alphabet}
\[
\Al_n := \{ a_u \in \Al^{2n+1} : \text{ there exists } k\geq 0 \text{ such that } a_u \subset \sub_n'^k(c_v) \text{ for some } c_v\in \Al_n' \cup \Al_n''\}.
\]
Define the \emph{$n$-collared substitution} $\sub_n$ to be the restriction of $\sub_n'$ to $\Al_n$.
These definitions depend upon the choice of letter $b$, but let us suppose that a letter $b$ has been chosen, and that, for all $n$, $\Al_n$ and $\sub_n$ have been defined using this letter.
\end{defn}
It is clear from the way that $\Al_n$ is defined that $\sub_n(a_u) \in \Al_n^*$ for all $a_u\in\Al_n$, so $\sub_n$ is indeed a substitution on $\Al_n$.
Next let us show that $X_{\sub_n}$ is topologically conjugate to $X_\sub$.
Every bi-infinite sequence in $X_\sub$ can be rewritten using $n$-collared letters via the local rule that, if $u$ is the $(2n+1)$-letter word which contains the symbol $a$ at its center, then this instance of $a$ is replaced by $a_u$. This map embeds the subshift $X_\sub$ into the full shift $(\Al_n')^\mathbb{Z}\subseteq \Al_n^\mathbb{Z}$; call this embedding $\iota_n \colon X_\sub \to \Al_n^\mathbb{Z}$. The map $\iota_n$ is clearly a topological conjugacy onto its image (the inverse is given by forgetting the collaring, $a_u \mapsto a$).
\begin{prop}\label{PROP:collared-substitution}
$X_{\sub_n} = \iota_n(X_\sub)$.
\end{prop}
\begin{proof}
$\iota_n$ is injective, so to prove the claim it will suffice to show that it maps $X_\sub$ surjectively onto $X_{\sub_n}$.
Pick $w\in X_\sub$ and consider its image $\iota_n(w)$.
To show that this image is in $X_{\sub_n}$, it is enough to show that, for an arbitrary $n$-collared word $u\subset \iota_n(w)$ it is true that $u$ is in the language of $\sub_n$.
Say that $u$ is the subword of $\iota_n(w)$ that begins at index $i$ and ends at index $j>i$.
Consider the word $v\subset w$ that begins at index $i-n$ and ends at index $j+n$.
As $v\subset w$, it is in the language of $\sub$, and so there is some letter $a\in \Al$ and $k\geq 0$ such that $v\subset \sub^k(a)$.
The alphabet $\Al_n$ contains collared versions of all legal and illegal letters, so there is a collared letter $a_s \in \Al_n$.
By construction, $\sub_n^k(a_s)$ contains the collared word $u$.
Since $u\subset \iota_n(w)$ was arbitrary, $\iota_n(w)\in X_{\sub_n}$.
Thus $\iota_n(X_\sub)\subseteq X_{\sub_n}$.
For $w'\in X_{\sub_n}$, let $w\in\Al^\mathbb{Z}$ be the word obtained from $w'$ by forgetting collars.
To prove that $\iota_n$ is surjective, it will suffice to show that, for arbitrary $w'\in X_{\sub_n}$, it is true that $w\in X_\sub$.
Pick an arbitrary $u\subset w$.
Say that $u$ is the subword of $w$ that begins at index $i$ and ends at index $j>i$.
Consider the word $u'\subset w'$ that begins at index $i-n$ and ends at index $j+n$.
As $u'\subset w'$, it is in the language of $\sub_n$, and so there is some collared letter $a_s\in \Al_n$ and $k\geq 0$ such that $u'\subset \sub_n^k(a_s)$.
But then $\sub^k(a)$ contains the word $u$, and hence $u$ is in the language of $\sub$.
Since $u\subset w$ was arbitrary, this means $w\in X_\sub$, and so $\iota_n$ is surjective.
\end{proof}
Under the assumption that $\Al_n$ and $\Al_m$ are defined using the same letter $b$, there is a forgetful map $f_{n,m} \colon \Al_n \to \Al_m$ where, if $$u = a_{-n} \ldots a_{-m} \ldots a_{-1} a a_1 \ldots a_m \ldots a_n$$ and $$v = a_{-m} \ldots a_{-1} a a_1 \ldots a_m $$ then we define $f_{n,m}(a_u) = a_v$.
We can extend this forgetful map to $\Al_n^*$ and $\Al_n^\mathbb{Z}$.
The $n$-collared substitutions commute with the forgetful maps.
That is, $$\sub_m \circ f_{n,m} = f_{n,m} \circ \sub_n.$$
If $l<m<n$, then $f_{m,l}\circ f_{n,m} = f_{n,l}$.
Note that $\Al_0 = \Al$ and $\sub_0 = \sub$, and by Proposition \ref{PROP:collared-substitution}, $f_{n,0}$ is a topological conjugacy, from which it follows that $f_{n,m}\colon X_{\sub_n} \to X_{\sub_m}$ is a topological conjugacy for all $n \geq m \geq 0$.
Recall the following definition.
A substitution \emph{forces the border at level $k$} if every appearance of $\sub^k(a)$ as a $k$-supertile extends uniquely for all $a\in \Al$. That is, if $w \in X_\sub$ is an admitted bi-infinite sequence and it is decomposed into words of the form $\sub^k(a)$ for letters $a\in\Al$, then for every $a$, there exist unique letters $a_l, a_r$ such that every appearance of $\sub^k(a)$ in the decomposition of $w$ sits as an interior subword of a word of the form $a_l\sub^k(a)a_r$.
\begin{lemma}\label{LEM:border-forcing}
Let $\sub$ be a tame substitution. Let $N$ be one greater than the maximum length of any bounded word for $\sub$, $N = \max_{u\in B} |u|+1$. The substitution $\sub_N \colon \Al_N\to \Al_N^+$ forces the border at some level $k$.
\end{lemma}
\begin{proof}
Let $k$ be such that, for every expanding letter $a \in \Al$, we have $|\sub^k(a)| > N$. Let $a_u \in \Al_N$ be an $N$-collared letter such that $\sub^k_N(a_u)$ appears as a subword of $w \in X_{\sub_N}$, and write $u = w_{i-N} \ldots w_{i+N}$, where $w_i = a$ is the central letter. Since every bounded legal word has length at most $N-1$, there exists a letter $l = w_{i-j}$ to the left of $a$ in $u$ and a letter $r = w_{i+j'}$ to the right of $a$ in $u$ that are both expanding letters, where the indices $j$ and $j'$ can be chosen so that $j,j' \leq N$.
So, $u = w_{i-N}\ldots l \ldots a \ldots r \ldots w_{i+N}$ and then $$\sub^k(u) = \sub^k(w_{i-N})\ldots \sub^k(l) \ldots \sub^k(a) \ldots \sub^k(r) \ldots \sub^k(w_{i+N}).$$
As $|\sub^k(l)| > N$ and $|\sub^k(r)| > N$, and we know all tiles within $N$ places of $a$ in $u$, we can determine all $N$-collared tiles out to at least the rightmost $N$-collared letter of $\sub^k(l)$ to the left of any appearance of $\sub^k(a_u)$ in the decomposition of $w$ into $k$-supertiles, and out to at least the leftmost $N$-collared letter of $\sub^k(r)$ to the right of any such appearance. These tiles lie outside of $\sub^k_N(a_u)$, and so $\sub^k_N(a_u)$ extends uniquely as a $k$-supertile in $X_{\sub_N}$. It follows that $\sub_N$ forces the border at level $k$.
\end{proof}
Let $\sub$ be a substitution on the alphabet $\Al$ and let $\TS$ be the associated tiling space. Use the convention that a point $T\in \TS$ is written coordinate-wise as $(w,t)$, with $w \in X_\sub$ and $t \in [0,1)$. Recall \cite{AP} that the \emph{Anderson-Putnam complex} $\Gamma$ of $\sub$ is defined to be $\TS/{\sim}$, where $\sim$ is the equivalence relation given by taking the transitive closure of the relation $(w,t)\sim (w',t')$ if either $t = t' \in (0,1)$ and $w_0 = w'_0$, or $t = t' = 0$ and either $w_{-1} = w'_{-1}$ or $w_0 = w'_0$.
\begin{defn}
We define the \emph{$n$-collared Anderson-Putnam complex} $\Gamma_n$ to be the Anderson-Putnam complex associated to the $n$-collared substitution $\sub_n$.
Let $p_n \colon \TS \to \Gamma_n$ be the natural quotient map. We define a map $f_n \colon \Gamma_n \to \Gamma_n$ to be the unique map which makes the following square commute
$$
\begin{tikzpicture}[node distance=2.5cm, auto]
\node (00) {$\Gamma_n$};
\node (10) [node distance=2cm, above of=00] {$\TS$};
\node (01) [right of=00] {$\Gamma_n$};
\node (11) [right of=10] {$\TS$};
\draw[->] (10) to node [swap] {$p_n$} (00);
\draw[->] (11) to node [swap] {$p_n$} (01);
\draw[->] (01) to node [swap] {$f_n$} (00);
\draw[->] (11) to node [swap] {$\sub$} (10);
\draw (11) to node {$\cong$} (10);
\end{tikzpicture}
$$
\end{defn}
For a tame substitution $\sub$, let $N_\sub = \max_{u\in B} |u|+1$ be one greater than the length of the longest legal bounded word for $\sub$. The following theorem allows us to replace the hypothesis of primitivity with tameness for the classic Anderson-Putnam Theorem \cite{AP} if we allow ourselves to collar letters out to a sufficient radius.
\begin{thm}\label{THM:homeo}
Let $\sub$ be a tame recognisable substitution. The natural map $h \colon \TS \to \varprojlim(\Gamma_{N_\sub}, f_{N_\sub})$ given by $$h(x) = (p_{N_\sub}(x), p_{N_\sub}(\sub^{-1}(x)), p_{N_\sub}(\sub^{-2}(x)), \ldots )$$ is a homeomorphism.
\end{thm}
\begin{proof}
Recognisability of $\sub$ means that $\sub \colon \TS \to \TS$ has a well-defined inverse and so $h$ is well-defined. By the choice of $N_\sub$ and Lemma \ref{LEM:border-forcing}, the $N_\sub$-collared substitution $\sub_{N_\sub}$ forces the border at level $k$. Hence, a point in the inverse limit describes a unique tiling of the line, and so $h$ is both injective and surjective. As $h$ is a continuous bijection from a compact space to a Hausdorff space, $h$ is a homeomorphism.
\end{proof}
If we are only concerned with aperiodic substitutions (which are tame by Corollary \ref{COR:aperiodic_implies_tame}), then we may further reduce the list of hypotheses for this theorem by making use of a result of Bezuglyi, Kwiatkowski and Medynets \cite{BKM:recognisable}.
\begin{thm}[Bezuglyi--Kwiatkowski--Medynets]
If $\sub$ is aperiodic, then $\sub$ is recognisable.
\end{thm}
\begin{cor}
Let $\sub$ be an aperiodic substitution. The map $h \colon \TS \to \varprojlim(\Gamma_{N_\sub}, f_{N_\sub})$ is a homeomorphism.
\end{cor}
\begin{remark}
There exist recognisable substitutions which are not aperiodic, and indeed not even non-periodic. Take as an example the substitution $\sub \colon a \mapsto ab,\: b \mapsto b$ from Example \ref{EX:periodic}, whose induced substitution on the tiling space is just the identity map on a circle, and so is injective; hence $\sub$ is recognisable even though $\sub$ is a periodic substitution. This is perhaps surprising to a reader who is used to primitive substitutions, where recognisability, non-periodicity and aperiodicity are all equivalent.
\end{remark}
\section{Closed Invariant Subspaces of Non-minimal Tiling Spaces}\label{SEC:CISs}
\subsection{Invariant Subspaces}
Let $\TS$ be a compact metric space and let $G$ act continuously on the right of $\TS$ via $\rho\colon \TS \times G\to \TS$ and let $\rho_{\tau}\colon \TS \to \TS$ be given by $x \mapsto \rho(x,\tau)$. We will normally only consider $G = \mathbb{R}$ or $G = \mathbb{Z}$, but the following machinery is applicable in the general case (in particular, tiling spaces in higher dimensions which have actions of higher dimensional Euclidean groups).
If $\Lambda$ is a closed subspace of $\TS$ such that $\rho_\tau(\Lambda) = \Lambda$ for all $\tau \in G$, we call $\Lambda$ a \emph{closed invariant subspace} with respect to the action, or \emph{CIS} for short. The set of CISs $\mathcal{C}$ forms a lattice under inclusion of subspaces. The minimal non-empty elements of $\mathcal{C}$ are the minimal sets of the action on $\TS$. The unique maximal element of $\mathcal{C}$ is the whole space $\TS$. Let us observe, without making further comment, the interesting fact that $\mathcal{C}^C = \{\TS \setminus \Lambda \mid \Lambda \in \mathcal{C}\}$ is a topology on the set $\TS$ (in general, coarser than the original topology induced by the metric on $\TS$). This topology is indiscrete if and only if $(\TS,\rho)$ is minimal. Any continuous map between dynamical systems which maps orbits to orbits also induces a continuous map between the spaces endowed with the topology $\mathcal{C}^C$, and so the homeomorphism type of the topological space $(\TS,\mathcal{C}^C)$ is an orbit equivalence invariant of the dynamical system $(\TS,\rho)$.
\begin{lemma}\label{LEM:orbits}
Let $f \colon \TS \to \TS'$ be a continuous map which maps $G$-orbits to $G$-orbits. If $\Lambda$ is a CIS of $\TS$ with respect to the action of $G$ on $\TS$, then $f(\Lambda)$ is a CIS of $\TS'$ with respect to the action of $G$ on $\TS'$.
\end{lemma}
\begin{proof}
Let $\Lambda$ be a CIS of $\TS$. As $\TS$ is compact and $\Lambda$ is a closed subspace, $\Lambda$ is compact, so the image of $\Lambda$ under a continuous map is compact. As $\TS'$ is Hausdorff, a compact subspace of $\TS'$ must be closed, and so $f(\Lambda)$ is a closed subspace of $\TS'$.
Let $\mathcal{O}_x = \{ \rho_\tau(x) \mid \tau \in G\}$ be the orbit of a point $x \in \TS$ under the $G$-action. From the definition of a CIS, if $x \in \Lambda$ then $\rho_\tau(x) \in \Lambda$ for all $\tau$ and so $\Lambda$ contains $\mathcal{O}_x$ for all points $x\in \Lambda$. If $y\in f(\Lambda)$, then there exists an $x \in \Lambda$ such that $f(x) = y$. The image of an orbit under $f$ is also an orbit, and so as $f(\mathcal{O}_x) \subset f(\Lambda)$, and as $y$ is a point on that orbit, we find that $f(\mathcal{O}_x) = \mathcal{O}_y$ and $\mathcal{O}_y \subset f(\Lambda)$. Hence $f(\Lambda)$ is invariant under the action of $G$, and so forms a CIS.
\end{proof}
Let $\TS$ be a compact metric space on which the group $G$ acts on the right and let $\mathcal{C}$ be the set of CISs for $\TS$.
\begin{defn}
The \emph{inclusion diagram} $D_{\TS}$ for $\TS$ is a diagram whose objects are the elements of $\mathcal{C}$ and whose arrows $i_{jk} \colon \Lambda_j \to \Lambda_k$ are given by inclusion for every pair $j, k$ such that $\Lambda_j \subset \Lambda_k$.
The \emph{inclusion cohomology diagram} of $\TS$, denoted by $\check{H}^*(D_\TS)$, is given by the diagram of groups induced by applying the \v{C}ech{} cohomology functor to $D_{\TS}$.
\end{defn}
\begin{defn}
The \emph{quotient diagram} $D^{\TS}$ for $\TS$ is a diagram with objects $\TS/\Lambda$ for every $\Lambda \in \mathcal{C}$ and an arrow $q_{jk} \colon \TS/\Lambda_j \to \TS/\Lambda_k$ given by the quotient map for every pair $j,k$ such that $\Lambda_j \subset \Lambda_k$.
The \emph{quotient cohomology diagram} of $\TS$, denoted by $\check{H}^*(D^\TS)$, is given by the diagram of groups induced by applying the \v{C}ech{} cohomology functor to $D^{\TS}$.
\end{defn}
These diagrams clearly have the same shape given by the correspondence between $\Lambda$ and $\TS/\Lambda$ for every CIS $\Lambda$.
\begin{example}
Consider the shift-orbit $\mathcal{O}(w)$ of the sequence $w = \ldots aaaa.bcbcbc\ldots$. The closure of $\mathcal{O}(w)$ in $\{a,b,c\}^\mathbb{Z}$ also contains three additional sequences in the limit and so the subshift $X_w$ generated by $w$ is $$X_w = \mathcal{O}(w) \sqcup \Lambda_1 \sqcup \Lambda_2$$ where $\Lambda_1 = \{\ldots aaa.aaa\ldots\} $ and $\Lambda_2 = \{\ldots bcbc.bcbc\ldots,\ldots cbcb.cbcb \ldots\}$.
The dynamical system has two minimal sets $\Lambda_1$ and $\Lambda_2$ and one other proper non-empty closed invariant subspace given by their disjoint union $\Lambda_3 = \Lambda_1 \sqcup \Lambda_2$. The inclusion and quotient diagrams $D_{X_w}$ and $D^{X_w}$ for this dynamical system are given below, with the composition of arrows being implicit.
$$\begin{tikzpicture}[node distance=1cm, auto]
\node (1) {};
\node (Y) [below of=1] {};
\node (X) [right of=1] {$X_w$};
\node (2) [below of=X] {$\Lambda_3$};
\node (3) [right of=2] {};
\node (4) [below of=Y] {$\Lambda_1$};
\node (5) [right of=4] {};
\node (6) [right of=5] {$\Lambda_2$};
\node (7) [below of=4] {};
\node (8) [right of=7] {$\emptyset$};
\node (9) [right of=8]{};
\draw[right hook->] (4) to node [swap] {} (2);
\draw[right hook->] (6) to node [swap] {} (2);
\draw[right hook->] (8) to node [swap] {} (4);
\draw[right hook->] (8) to node [swap] {} (6);
\draw[right hook->] (2) to node [swap] {} (X);
\node (A) [left of=4, node distance=1.5cm] {$D_{X_w}\colon$};
\node (1b) [right of=1, node distance=6cm]{};
\node (Yb) [below of=1b] {};
\node (Xb) [right of=1b] {$\{\ast\}$};
\node (2b) [below of=Xb] {$\mathbb{Z}^*$};
\node (3b) [right of=2b] {};
\node (4b) [below of=Yb] {$X_w$};
\node (5b) [right of=4b] {};
\node (6b) [right of=5b] {$\mathbb{Z}^{**}$};
\node (7b) [below of=4b] {};
\node (8b) [right of=7b] {$X_w$};
\node (9b) [right of=8b]{};
\draw[->>] (4b) to node [swap] {} (2b);
\draw[->>] (6b) to node [swap] {} (2b);
\draw[->>] (8b) to node [swap] {} (4b);
\draw[->>] (8b) to node [swap] {} (6b);
\draw[->>] (2b) to node [swap] {} (Xb);
\node (Ab) [left of=4b, node distance=1.5cm] {$D^{X_w}\colon$};
\end{tikzpicture}$$
where $\mathbb{Z}^*$ is the one-point compactification of the integers, whose point at $\infty$ is the equivalence class $[\Lambda_3]$ shrunk to a point, and $\mathbb{Z}^{**}$ is the two-point compactification of the integers $\mathbb{Z}\cup\{-\infty,\infty\}$, whose point at $-\infty$ is the point $\{\ldots aaa.aaa \ldots\}$ and whose point at $\infty$ is the equivalence class $[\Lambda_2]$ shrunk to a point. Note that the copy of $X_w$ appearing as the quotient $X_w/\Lambda_1$ in the right-hand diagram is really $X_w$ with the single point of $\Lambda_1$ replaced (or rather renamed) by the equivalence class $[\Lambda_1]$.
\end{example}
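The limiting behaviour in the example above is easy to check numerically; the following is a minimal sketch with illustrative function names.

```python
def w(i):
    # The bi-infinite sequence w = ...aaaa.bcbcbc... from the example,
    # with the b at index 0.
    return "a" if i < 0 else ("b" if i % 2 == 0 else "c")

def window(shift, radius=4):
    # Central window of the shifted sequence sigma^shift(w).
    return "".join(w(i + shift) for i in range(-radius, radius + 1))

# Far to the left the shift-orbit accumulates on ...aaa.aaa..., and far
# to the right on the period-two orbit of ...bcbc.bcbc...
assert window(-10**6) == "a" * 9
assert window(10**6) in ("bcbcbcbcb", "cbcbcbcbc")
```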
\begin{example}
The inclusion and quotient diagrams for a compact metric space need not be finite. Take the toral product system $\TS = S^1\times S^1$ with rotation action $\rho \colon \TS \times\mathbb{R} \to \TS$ given by $\rho((\theta_1,\theta_2),t) = (\theta_1, \theta_2+t)\mod 1$. For every $\theta\in S^1$, the meridional circle $\{\theta\}\times S^1$ is closed and invariant (and in fact minimal). Moreover, every CIS of $(\TS,\rho)$ is of the form $C\times S^1$ for some closed set $C \subseteq S^1$, and so the inclusion and quotient diagrams are isomorphic (as lattices) to the lattice of closed subsets of the circle $S^1$.
More generally, if every $\rho$-orbit of the compact dynamical system $(\TS,\rho)$ is closed, then the lattice of CISs $\mathcal{C}$ is naturally isomorphic to the lattice of closed sets for the orbit space $\TS/\rho$ under the map taking every CIS to the equivalence class represented by that closed union of orbits.
\end{example}
\begin{remark}
Note that all of the arrows appearing in $D_\TS$ and $D^\TS$ commute with the $G$-action induced on the objects $\Lambda$ and $\TS/\Lambda$ for all $\Lambda \in \mathcal{C}$. (The action is well defined on quotients because either an orbit is mapped injectively onto a subspace of $\TS/\Lambda$ or it is mapped to the point $[\Lambda] \in \TS/\Lambda$.) So the inclusion and quotient diagrams both admit commuting $G$-actions.
\end{remark}
If $D$ and $E$ are diagrams of groups, we say a collection of homomorphisms $f = \{f_i \colon G \to H \mid G\in D, H \in E\}$ is a \emph{map of diagrams} if the diagram $D \sqcup_f E$ commutes, where the objects of $D \sqcup_f E$ are given by the disjoint union of the objects in $D$ and $E$ and the homomorphisms of $D \sqcup_f E$ are given by the union of the homomorphisms in $D$ and $E$ together with the homomorphisms in $f$.
\begin{defn}
Let $f \colon \TS \to \TS'$ be an orbit-preserving map. We define the \emph{induced map on inclusion cohomology diagrams} $f^* \colon \check{H}^*(D_{\TS'}) \to \check{H}^*(D_\TS)$ by $$f^* = \{f|_\Lambda^*\colon \check{H}^*(\Lambda') \to \check{H}^*(\Lambda) \mid f(\Lambda) = \Lambda'\}.$$
\end{defn}
Lemma \ref{LEM:orbits} tells us that this induced map of diagrams of groups $f^*$ is non-empty (for all non-empty $\TS'$).
\begin{lemma}\label{LEM:induced}
The induced map on inclusion cohomology diagrams $f^*$ is a map of diagrams of groups.
\end{lemma}
\begin{proof}
Suppose $f|_\Lambda^*, f|_{\Lambda'}^* \in f^*$ and suppose without loss of generality that $\Lambda \subset \Lambda'$ with inclusion map $i \colon \Lambda \to \Lambda'$. Then we must have $f(\Lambda)\subset f(\Lambda')$ and an inclusion map $j\colon f(\Lambda) \to f(\Lambda')$. If $x \in \Lambda$ then $f|_{\Lambda'} (i (x)) = f|_{\Lambda'}(x) = f(x)$ and $j ( f|_\Lambda (x)) = j(f(x)) = f(x)$. So,
$$f|_{\Lambda'} \circ i = j \circ f|_{\Lambda}$$ and then by applying the \v{C}ech{} cohomology functor we get $$i^* \circ f|_{\Lambda'}^* = f|_\Lambda^* \circ j^*$$ as required.
\end{proof}
For a CIS $\Lambda$ of $\TS$, let $q_\Lambda \colon \TS \to \TS/\Lambda$ be the corresponding quotient map. For an orbit-preserving map $g \colon \TS \to \TS'$, if $g(\Lambda) = \Lambda'$, then there is a unique map $g_\Lambda \colon \TS/\Lambda \to \TS'/\Lambda'$ such that $$g_\Lambda \circ q_\Lambda = q_{\Lambda'} \circ g.$$
\begin{defn}
Let $g \colon \TS \to \TS'$ be an orbit-preserving map. We define the \emph{induced map on quotient cohomology diagrams} $g^* \colon \check{H}^*(D^{\TS'}) \to \check{H}^*(D^\TS)$ by $$g^* = \{g_\Lambda^* \colon \check{H}^*(\TS'/\Lambda') \to \check{H}^*(\TS/\Lambda) \mid g(\Lambda) = \Lambda'\}.$$
\end{defn}
Lemma \ref{LEM:orbits} tells us that this induced map $g^*$ is non-empty.
\begin{lemma}
The induced map on quotient cohomology diagrams $g^*$ is a map of diagrams of groups.
\end{lemma}
The proof is very similar to the proof of Lemma \ref{LEM:induced}.
\begin{thm}
Inclusion and quotient cohomology diagrams, together with their induced maps, are contravariant functors from the category of $G$-actions on compact metric spaces and orbit-preserving maps to the category of diagrams of abelian groups and homomorphisms.
\end{thm}
\begin{proof}
Let $\TS \stackrel{f}{\to} \TS' \stackrel{g}{\to} \TS''$ be a pair of orbit-preserving maps. Let $\Lambda$ be a CIS of $\TS$. The map of diagrams of groups $f^* \circ g^*$ is the set of all compositions $f|_{\Lambda}^* \circ g|_{f(\Lambda)}^*$ which by functoriality of cohomology is equal to $(g \circ f)|_{\Lambda}^*$. The map of diagrams of groups $(g \circ f)^*$ is the set of all maps $(g \circ f)|_{\Lambda}^*$ for CISs $\Lambda$ of $\TS$ and so $f^* \circ g^* = (g \circ f)^*$ as required.
A similar argument shows the functoriality of the quotient cohomology diagram.
\end{proof}
\begin{cor}
Both $\check{H}^*(D_{\TS})$ and $\check{H}^*(D^{\TS})$ are at least as strong an invariant of tiling spaces (up to orbit-equivalence) as \v{C}ech{} cohomology.
\end{cor}
We will see in the next section that examples exist where $\check{H}^*(D_{\TS})$ and $\check{H}^*(D^{\TS})$ can distinguish pairs of spaces whose cohomology coincides. So they are in fact strictly stronger invariants than \v{C}ech{} cohomology on its own.
\subsection{Invariant Subspaces of Substitution Tiling Spaces}
Let $\sub$ be a tame recognisable substitution, let $\TS$ be the associated tiling space and let $\rho \colon \TS \times \mathbb{R} \to \TS$ be the associated flow on $\TS$ given by $$\rho((w,t),\tau) = (\sigma^{\lfloor t+\tau \rfloor} (w),t+\tau \mod 1).$$ Note that orbits in this setting are precisely the path components of the tiling space. So, even though the previous machinery has been defined for dynamical systems, for tiling spaces the dynamical and topological setting coincide. We could have just as easily considered the set of closed unions of path components, rather than closed invariant subspaces.
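As an aside, the suspension flow above can be sketched computationally. In the following Python snippet (the helper names are ours, and the bi-infinite word is abstracted to an accumulated shift count), $\lfloor t+\tau\rfloor$ whole shifts are applied while the fractional part becomes the new offset:

```python
from math import floor

def flow(shift, t, tau):
    """Sketch of rho((w, t), tau) = (sigma^{floor(t + tau)}(w), t + tau mod 1):
    the word w is abstracted to the total shift already applied to it."""
    s = t + tau
    k = floor(s)              # number of whole tiles the origin crosses
    return shift + k, s - k   # (new total shift, new offset in [0, 1))

# The flow property rho(rho(x, t1), t2) = rho(x, t1 + t2) holds:
k1, t1 = flow(0, 0.5, 0.75)
assert flow(k1, t1, 0.75) == flow(0, 0.5, 1.5)
```

The additivity check at the end is exactly the cocycle identity $\lfloor t+\tau_1\rfloor + \lfloor \{t+\tau_1\}+\tau_2\rfloor = \lfloor t+\tau_1+\tau_2\rfloor$.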
\begin{lemma}\label{LEM:finite-CISs}
Let $\mathcal{C}$ be the set of CISs for a tame recognisable substitution $\sub$ on the alphabet $\Al$. The set $\mathcal{C}$ is finite.
\end{lemma}
To reduce notation, we identify without further comment the tilings $T \in \TS_{\sub_N}$ and $f_{N,0}(T)\in \TS_\sub$, where $f_{N,0}$ is the induced forgetful map which removes collaring information on a collared letter $a_v \in \Al_N$.
\begin{proof}
Let $f_N \colon \Gamma_N \to \Gamma_N$ be the induced substitution map on the $N$-collared AP-complex for $\sub$ where $N$ is one greater than the length of the longest bounded legal word for $\sub$, which is well defined by the tameness of $\sub$. Let $\Lambda \in \mathcal{C}$ be a CIS of $\TS$. As $\Lambda$ is invariant under translation $\rho$, the image of $\Lambda$ under the quotient map $p_N \colon \TS \to \Gamma_N$ must be a subcomplex of $\Gamma_N$.
Now, suppose $\Lambda' \in \mathcal{C}$ and that $p_N(\Lambda) = p_N(\Lambda')$. We want to show that $\Lambda$ and $\Lambda'$ must be the same subspace. Suppose, for contradiction and without loss of generality, that $\Lambda'\setminus \Lambda$ is non-empty. Let $T$ be a tiling found in $\Lambda'$ but not in $\Lambda$. By construction then, $T$ contains a patch of tiles labelled by a word $u \in \Al^*$ which does not appear in any tiling in $\Lambda$. Suppose $u$ contains an expanding letter (if not, extend $u$ in $T$ until it does contain an expanding letter, which is possible by tameness).
Let $d$ be the length of the shortest legal word $v \in \hat{\mathcal{L}}_{\sub}$ such that $u$ is a subword of $\sub^i(v)$ for some $i$ ($d$ may be greater than $1$ as $\sub$ is not necessarily minimal). As $u$ contains an expanding letter, so then must every such $v$. Using recognisability, it is not hard to see that any such $v$ of minimal length is of the form $a_1v', v'a_2$ or $a_1v'a_2$, where $v'$ is a bounded word (possibly empty) and $a_1,a_2$ are expanding letters. As $\sub$ is tame, we conclude that $d \leq N+1$.
Of those words $v$ of minimal length such that $u$ is a subword of $\sub^i(v)$ for some $i$, let $n$ be the minimal such power. Let $\tilde{V} = \{v \mid |v|=d, u \subset \sub^n(v)\}$. Finally, let $V$ be the set of legal words of length $2N+1$ which contain a word $v \in \tilde{V}$ as a subword. As $d \leq N+1$, it is certainly true that $d\leq 2N+1$, and so $V$ is non-empty because $\tilde{V}$ is non-empty. Note that $V \subset \Al_N'$. In particular, there is a legal $N$-collared letter $a_v$ for every $v\in V$.
Recall that $\sub_N \colon \TS \to \TS$ is a homeomorphism by recognisability, and this function maps orbits to orbits, and so $\sub_N^{-n}(\Lambda)$ and $\sub_N^{-n}(\Lambda')$ are CISs of $\TS$. By our choice of $T$, $\sub_N^{-n}(T)$ is in $\sub_N^{-n}(\Lambda')$ but not in $\sub_N^{-n}(\Lambda)$. The tiling $\sub_N^{-n}(T)$ contains a tile $a_v$ with $v \in V$ and so there exists a $t \in \mathbb{R}$ so that $T_0 = \sub_N^{-n}(T)-t$ has a tile $a_v$ with $v \in V$ which contains the origin in its interior.
As $\Lambda$ and $\Lambda'$ are CISs, $T_0 \in \sub_N^{-n}(\Lambda')$ and $T_0 \notin \sub_N^{-n}(\Lambda)$.
The image of $T_0$ under the quotient map $p_N$ lies on the interior of the edge of the $N$-collared AP-complex $\Gamma_N$ which is labelled by the $N$-collared letter $a_v$.
If $p_N(\sub_N^{-n}(\Lambda))$ intersected the interior of this edge, then $\sub_N^{-n}(\Lambda)$ would contain an $N$-collared tiling which contained an $a_v$ tile, but then $\Lambda$ would contain a tiling which contained a patch labelled by the word $u$. This contradicts the choice of $u$ not being a patch in any tiling in $\Lambda$.
It follows that if $p_N(\Lambda) = p_N(\Lambda')$ for CISs $\Lambda, \Lambda'$, then $\Lambda = \Lambda'$. Hence, a CIS is fully determined by the associated subcomplex of the $N$-collared AP-complex to which it is sent under the quotient map. There are only finitely many subcomplexes of any AP-complex and so there can only be finitely many CISs in $\mathcal{C}$ of $\TS$.
\end{proof}
Lemma \ref{LEM:finite-CISs} has a similar flavour to a result of Durand that says that a linearly recurrent subshift has a finite number of non-periodic subshift factors \cite{D:lin-recurrent}.
\begin{remark}
It is important to note that the choice of $N$ large enough is key in the proof of the lemma above. If $N$ is not chosen large enough, then the quotient map $p_N \colon \TS \to \Gamma_N$ may send distinct CISs to the same subcomplex of $\Gamma_N$.
As an example, consider the substitution $\sub \colon a \mapsto aba,\: b \mapsto bbab,\: c \mapsto aa$ whose tiling space has exactly one non-empty proper CIS corresponding to the tilings which do not contain the patch labelled by the word $aa$. The $0$-collared AP-complex is `too small' to distinguish this CIS from the entire space $\TS$ in the way described in the above proof.
\end{remark}
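The invariance in this example can be checked mechanically. The following Python sketch (ours, with hypothetical helper names) verifies on sample words that the restriction of the substitution to $\{a,b\}$ maps $aa$-avoiding words to $aa$-avoiding words, while $c \mapsto aa$ reintroduces the forbidden patch:

```python
# Substitution from the remark: a -> aba, b -> bbab, c -> aa.
SUB = {"a": "aba", "b": "bbab", "c": "aa"}

def substitute(word):
    return "".join(SUB[x] for x in word)

def avoids_aa(word):
    return "aa" not in word

# Both images end in a letter other than "aa"-creating junctions here:
# every admissible junction over {a, b} without "aa" stays "aa"-free.
for w in ["ab", "ba", "bb", "bab", "abab"]:
    assert avoids_aa(w) and avoids_aa(substitute(w))
# The letter c breaks the property:
assert not avoids_aa(substitute("c"))
```

The structural reason is that $\sub(a)$ and $\sub(b)$ avoid $aa$ internally, and the only junction that could create $aa$ is $\sub(a)\sub(a)$, which requires the forbidden word $aa$ in the preimage.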
The following theorem gives a homeomorphism between a CIS and the inverse limit of a subdiagram of $\Gamma_N$.
\begin{thm}\label{THM:subcomplex}
Let $\sub$ be a tame recognisable substitution. Let $f_N \colon \Gamma_N \to \Gamma_N$ be the induced substitution map on the $N$-collared AP-complex for $\sub$. There exists an integer $n$ so that for all $\Lambda \in \mathcal{C}$, there exists a subcomplex $\Gamma_{\Lambda} \subset \Gamma_N$ such that $f_N^{n}(\Gamma_{\Lambda}) = \Gamma_{\Lambda}$ and $\varprojlim(\Gamma_{\Lambda},f_N^n) = \Lambda$.
\end{thm}
\begin{proof}
As $\sub$ is recognisable, the substitution acts as a homeomorphism on $\TS$ and so the substitution permutes CISs of the tiling space. By Lemma \ref{LEM:finite-CISs}, $\mathcal{C}$ is finite. As such, an integer $n$ can be chosen so that $\sub^n(\Lambda) = \Lambda$ for all $\Lambda \in \mathcal{C}$.
Let $p_N \colon\TS \to \Gamma_N$ be the quotient map from the tiling space to the $N$-collared AP-complex. Let $p = p_N|_\Lambda$ be the restriction of the quotient map to $\Lambda$. As $\Lambda$ is a CIS, the image $\Gamma_{\Lambda}$ of $p$ is a subcomplex of $\Gamma_N$. Recall that $p_N \circ \sub = f_N\circ p_N$, and so
\begin{equation}\label{EQN:commute}
p \circ \sub^n = f_N^n \circ p.
\end{equation}
Let $h_\Lambda \colon \Lambda \to \varprojlim(\Gamma_{\Lambda},f_N^n)$ be defined by $$h_\Lambda(x) = (p(x), p(\sub^{-n}(x)), p(\sub^{-2n}(x)), \ldots )$$ which is well defined by (\ref{EQN:commute}). As $h_\Lambda$ is a telescoped version of $h$ with modified domain and codomain, it is clearly injective, so it only remains to show that $h_\Lambda$ is surjective onto the inverse limit.
A point in the inverse limit corresponds to a unique tiling in the tiling space as $\sub_N$ forces the border. Suppose $(x_0, x_1, x_2,\ldots) \in \varprojlim(\Gamma_\Lambda,f_N^n)$ is not in the image of $h_\Lambda$. Then there exists some $i$ for which the patch described by the finite subsequence of points $(x_0 ,x_1,\ldots, x_i)$ does not appear in a tiling in $\Lambda$. But this means that the shifted sequence $(x_i, x_{i+1},\ldots)$ is also not in the image of $h_\Lambda$, as the shift is a homeomorphism, and so the point $x_i\in \Gamma_\Lambda$ must not describe the label $w_0$ of the tile at the origin of any tiling in $\Lambda$. This is impossible by how the Anderson-Putnam complex and the quotient map $p$ are defined, as $\Gamma_\Lambda$ is the image of $\Lambda$ under $p$. It follows that no such point in the inverse limit exists and $h_\Lambda$ is surjective.
\end{proof}
Just as each CIS is homeomorphic to the inverse limit of a subdiagram of $\Gamma_N$, each quotient of $\TS$ by a CIS is homeomorphic to the inverse limit of a quotient of $\Gamma_N$ by a subdiagram.
The proof of this fact uses the following lemma.
\begin{lemma}\label{LEM:quotient-surjective}
Let $(X_i,\sub_i)$ and $(Y_i,\psi_i)$ be two inverse systems of compact Hausdorff spaces $X_i, Y_i$, and let $X$ and $Y$ denote the respective inverse limits.
Let $p_i \colon X \to X_i$ and $q_i \colon Y \to Y_i$ be the projection maps onto approximants.
Let $s_i \colon X_i \to Y_i$ be a sequence of continuous maps such that $s_i \circ \sub_i = \psi_i \circ s_{i+1}$, and let $s \colon X \to Y$ be the unique continuous map such that $s_i \circ p_i = q_i \circ s$ for every $i\geq 0$.
If $\sub_i$ and $s_i$ are surjective for every $i\geq 0$ then $s$ is a surjection.
\end{lemma}
\begin{proof}
\centerline{
\xymatrix{
X \ar@{>}_s[ddd] \ar@{>}_{p_0}[dr] \ar@{>}_{p_1}[drr] \ar@{>}^{p_2}[drrr] & & & & \\
& X_0 \ar@{>>}_{s_0}[d] & X_1 \ar@{>>}^{\varphi_0}[l] \ar@{>>}_{s_1}[d] & X_2 \ar@{>>}^{\varphi_1}[l] \ar@{>>}_{s_2}[d] & \cdots \ar@{>>}^{\varphi_2}[l] \\
& Y_0 & Y_1 \ar@{>}_{\psi_0}[l] & Y_2 \ar@{>}_{\psi_1}[l] & \cdots \ar@{>}_{\psi_2}[l] \\
Y \ar@{>}^{q_0}[ur] \ar@{>}^{q_1}[urr] \ar@{>}_{q_2}[urrr] %
& & & &
}
}
Let $y \in Y$.
As $Y$ is Hausdorff and compact, $\{y\}$ is closed and hence compact.
It follows that $q_i(\{y\})$ is compact and hence closed. As $s_i$ is continuous, the preimage $D_i = s_i^{-1}(q_i(\{y\}))$ is then closed, hence compact, and as $s_i$ is surjective, $D_i \neq \emptyset$.
Note that $\psi_{i-1}(q_i(\{y\})) = q_{i-1}(\{y\})$ and so $\psi_{i-1}(s_i(D_i)) = q_{i-1}(\{y\})$.
By commutativity then, $s_{i-1}(\sub_{i-1} (D_i)) = q_{i-1}(\{y\})$. This means that $\sub_{i-1}(D_i)$ is a compact subset of $D_{i-1}$ and so, by continuity and commutativity, $p_i^{-1}(D_i)$ is a compact subset of $p_{i-1}^{-1}(D_{i-1})$.
Further, each $p_i^{-1}(D_i)$ is non-empty, since $D_i$ is non-empty and the projection $p_i$ is surjective, which follows from the surjectivity of the bonding maps $\sub_j$.
It follows that $p_0^{-1}D_0 \supset p_1^{-1}D_1 \supset p_2^{-1}D_2 \supset \cdots$ is a nested sequence of closed non-empty subsets of the compact space $X$.
By Cantor's intersection theorem, $D = \bigcap_{i\geq 0} p_i^{-1}(D_i)$ is non-empty, and by construction, $s(x)=y$ for every $x\in D$.
\end{proof}
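At the level of finite approximants, the lifting argument above amounts to a backward search through the non-empty sets $D_i$. The following Python sketch (all names are ours, not from the paper) finds one compatible thread over a finite truncation of the inverse systems:

```python
# Finite-level illustration of the lifting argument: with surjective
# bonding maps phi_i, a point downstairs is hit by a compatible thread
# upstairs, found by searching the non-empty sets D_i = s_i^{-1}(y_i).
def lift_thread(Xs, phis, ss, ys):
    """Xs[i]: finite set X_i; phis[i]: dict for phi_i : X_{i+1} -> X_i;
    ss[i]: dict for s_i : X_i -> Y_i; ys[i]: the point y_i = q_i(y).
    Returns one thread (x_0, ..., x_k) with s_i(x_i) = y_i and
    phi_i(x_{i+1}) = x_i."""
    k = len(Xs) - 1
    Ds = [{x for x in Xs[i] if ss[i][x] == ys[i]} for i in range(k + 1)]
    threads = [(x,) for x in sorted(Ds[0])]
    for i in range(k):
        threads = [t + (x,) for t in threads
                   for x in sorted(Ds[i + 1]) if phis[i][x] == t[-1]]
    return threads[0]

# A two-level example: X_0 = {0, 1}, X_1 = {0, 1, 2}.
Xs = [{0, 1}, {0, 1, 2}]
phis = [{0: 0, 1: 1, 2: 1}]                      # phi_0 is surjective
ss = [{0: "a", 1: "b"}, {0: "a", 1: "b", 2: "b"}]
t = lift_thread(Xs, phis, ss, ["b", "b"])        # a thread over y = ("b","b")
```

In the lemma itself the thread is infinite, and compactness (Cantor's intersection theorem) replaces this finite search.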
\begin{thm}\label{THM:quotient}
Let $\sub$ be a tame recognisable substitution. Let $f_N \colon \Gamma_N \to \Gamma_N$ be the induced substitution map on the $N$-collared AP-complex for $\sub$, let $\Lambda \in \mathcal{C}$, and let $\Gamma_\Lambda$ denote the subcomplex of $\Gamma_N$ that corresponds to $\Lambda$. Then for sufficiently large $N$ and for some $n\in\mathbb{N}$, $\TS/\Lambda$ is homeomorphic to $\varprojlim(\Gamma_N/\Gamma_{\Lambda},f_N^n)$.
\end{thm}
\begin{proof}
\centerline{
\xymatrix{
\Omega \ar@{>}^{q_\Lambda}[r] \ar@{>}_{p_N}[d] & \Omega/\Lambda \ar@{>}[d] \\
\Gamma_N \ar@{>}_{Q_\Lambda}[r] & \Gamma_N/\Gamma_\Lambda \\
}
}
Consider the diagram above, where the map in the bottom row is the canonical quotient map from $\Gamma_N$ to $\Gamma_N/\Gamma_\Lambda$, and the map in the right column is the unique continuous map making the diagram commute.
Let $n$ be as given in Theorem \ref{THM:subcomplex} and $F_N$ be the self-map on $\Gamma_N/\Gamma_\Lambda$ that makes the diagram
\centerline{
\xymatrix{
\Gamma_N \ar@{>}[r] \ar@{>}_{f_N^n}[d] & \Gamma_N/\Gamma_\Lambda \ar@{>}^{F_N}[d] \\
\Gamma_N \ar@{>}[r] & \Gamma_N/\Gamma_\Lambda \\
}
}
commute.
Then the universal property of the inverse limit yields a diagram
\centerline{
\xymatrix{\label{DIAG:quotient-diagram}
\Omega \ar@{>}^{q_\Lambda}[r] \ar@{>}_{h}[d] & \Omega/\Lambda \ar@{>}^H[d] \\
\varprojlim (\Gamma_N,f_N^n) \ar@{>}_s[r] & \varprojlim (\Gamma_N/\Gamma_\Lambda, F_N). \\
}
}
The map $s$ in the bottom row of this diagram is surjective by Lemma \ref{LEM:quotient-surjective}, and, as $h$ is a homeomorphism, and hence surjective, $s\circ h$ is surjective, which implies that the map $H$ in the right column is surjective as well.
We now show that $H$ is injective: let $y_1, y_2\in \Omega/\Lambda$ be distinct points.
If neither point is the collapsed point $[\Lambda]$, then their $q_\Lambda$-preimages $x_1, x_2 \in\Omega$ are distinct points not in $\Lambda$.
Thus $h(x_1)$ and $h(x_2)$ are sequences that differ beyond some finite index $i$.
Moreover, there is a finite index $j$ beyond which neither sequence has entries in $\Gamma_\Lambda$.
Then the images in $\varprojlim (\Gamma_N/\Gamma_\Lambda,F_N)$ of $y_1$ and $y_2$ are $s(h(x_1))$ and $s(h(x_2))$, the entries of which differ beyond index $\max \{ i, j\}$.
If $y_1 = [\Lambda]$, then a similar argument shows that the image of $y_2$ is a sequence whose entries avoid the collapsed point $[\Gamma_\Lambda]$ beyond a certain index, while the image of $y_1$ has entry $[\Gamma_\Lambda]$ at every index, and so these images are different.
Thus $H$ is a continuous bijection.
Its domain is a quotient of a compact space, and hence is compact.
$\Gamma_N/\Gamma_\Lambda$ is Hausdorff, as it is a quotient obtained from a compact Hausdorff space by collapsing a compact subspace to a point.
Thus the codomain of $H$ is an inverse limit of Hausdorff spaces, and hence is Hausdorff.
Then $H$ is a continuous bijection from a compact space to a Hausdorff space, and hence is a homeomorphism.
\end{proof}
\subsection{Identifying Closed Invariant Subspaces}
Let $\sub$ be a tame recognisable substitution on $\Al$, let $K$ be a subcomplex of $\Gamma_N$ and let $EV(K) = \bigcup_{i \geq 0} (f_N^n)^i(K)$ be the \emph{eventual range} of $K$, where $n$ is as in Theorem \ref{THM:subcomplex}. The eventual range of a subcomplex is itself a subcomplex. The set of eventual ranges $EV = \{EV(K) \mid K \mbox{ is a subcomplex of } \Gamma_N\}$ therefore forms a finite set.
Every CIS in $\mathcal{C}$ corresponds to a unique subcomplex in $EV$ given by the image of the CIS under the quotient map $p_N$ to the $N$-collared AP-complex, so $|\mathcal{C}| \leq |EV|$.
This inequality will often be strict: consider as an example the Chacon substitution $\sub\colon a\mapsto aaba, b\mapsto b$, which is minimal, so there is only one non-trivial CIS, yet for any $n$, the $n$-collared AP-complex will have an element of $EV$ consisting of a single edge that corresponds to a collared $b$-tile.
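The Chacon computation can be sketched at the level of letters (an abelianised stand-in for the collared complex; helper names are ours): the induced map sends a set of letters to the letters occurring in their substituted images, and the eventual range is the union of forward iterates.

```python
# Letter-level eventual ranges EV(K) = union of forward images of K,
# for the Chacon substitution a -> aaba, b -> b.
from itertools import chain

SUB = {"a": "aaba", "b": "b"}

def image(K):
    return frozenset(chain.from_iterable(SUB[x] for x in K))

def eventual_range(K):
    ev = K = frozenset(K)
    while True:
        K = image(K)
        if K <= ev:
            return ev
        ev |= K

subsets = [frozenset(), frozenset("a"), frozenset("b"), frozenset("ab")]
ranges = {eventual_range(K) for K in subsets}
# Three eventual ranges, but the (minimal) Chacon system has only two
# CISs: the empty set and the whole space.
assert ranges == {frozenset(), frozenset("b"), frozenset("ab")}
```

The stray eventual range $\{b\}$ corresponds exactly to the single collared $b$-edge described above.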
One can also be in the situation where an eventual range contains multiple expanding edges, yet does not correspond to a CIS: consider the augmented Fibonacci with a handle substitution $\sub \colon a \mapsto aab, b \mapsto ab, c \mapsto c, d \mapsto bca$, where $B = \{c\}$ and so $N=2$ and we have the subcomplex comprising the edges
$$\Gamma = \{a_{abaab}, a_{ababa}, a_{ababc}, a_{baaba}, a_{bcaab}, a_{caaba}, b_{aabaa}, b_{aabab}, b_{babaa}, b_{babca}\}.$$
This subcomplex is an eventual range as $f_N^n(\Gamma) = \Gamma$ but corresponds to no CIS in $\mathcal{C}$.
One can reduce the search for eventual ranges which correspond to a CIS by noting that whenever $\Lambda$ is a CIS, by virtue of $\Lambda$ being translation-invariant, the image of $\Lambda$ under $p_N$ must be a subcomplex which has no leaves (a leaf is a vertex with degree exactly one). It is not immediately clear whether this gives a sufficient condition for identifying all subcomplexes of $\Gamma_N$ which correspond to a CIS in $\mathcal{C}$.
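The leafless condition is straightforward to test. The following Python helper (ours) counts edge-end incidences at each vertex, so the single collared-$b$ edge from the Chacon example above is rejected, while a subcomplex such as $\Gamma_{\Lambda_1}$ of Example \ref{ex:fib+1} passes.

```python
# Necessary condition for a subcomplex to be the image of a CIS: no
# vertex of degree exactly one (a loop contributes two to its vertex).
def has_leaf(edges):
    """edges: list of directed edges (tail, head) of a subcomplex."""
    degree = {}
    for a, b in edges:
        degree[a] = degree.get(a, 0) + 1
        degree[b] = degree.get(b, 0) + 1
    return any(d == 1 for d in degree.values())

# The single collared-b edge in the Chacon example has two leaves, so it
# is an eventual range that cannot be the image of a CIS.
assert has_leaf([("ab", "ba")])
# A subcomplex like Gamma_{Lambda_1} below (edges 00->01, 01->10,
# 10->00, 10->01) is leafless.
assert not has_leaf([("00", "01"), ("01", "10"), ("10", "00"), ("10", "01")])
```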
\begin{question}
For a tame recognisable substitution, is there a one-to-one correspondence between the set of leafless eventual ranges of $\Gamma_N$ and the set of CISs $\mathcal{C}$? If not, what condition on a subcomplex $\Gamma \subset \Gamma_N$ is sufficient for $\Gamma$ to correspond to a CIS?
\end{question}
\section{Examples}\label{SEC:non-min-examples}
\begin{example}\label{ex:fib+1}
We define the \emph{Fibonacci substitution with one handle} to be given by $$\sub\colon 0\mapsto 001, \:1\mapsto 01, \:2\mapsto 021$$
By substituting $1$-collared letters (and noting that $B=\emptyset$), we find that there are two non-empty invariant subcomplexes $\Gamma_{\Lambda_1}$ and $\Gamma_{\Lambda_2}$, both fixed under $\sub_1$, corresponding to the collections of $1$-collared letters
$$\Gamma_{\Lambda_1} = \cup\{[0_{001}], [1_{010}], [0_{100}], [0_{101}]\}$$
and
$$\Gamma_{\Lambda_2} = \cup\{[0_{001}], [1_{010}], [2_{021}], [0_{100}], [0_{101}], [0_{102}], [1_{210}]\}.$$
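These collections can be recovered computationally: the $1$-collared letters correspond to the legal length-$3$ factors, which the following short Python sketch (ours) collects from a sufficiently deep iterate of the substitution.

```python
# Fibonacci substitution with one handle: 0 -> 001, 1 -> 01, 2 -> 021.
SUB = {"0": "001", "1": "01", "2": "021"}

def substitute(word):
    return "".join(SUB[x] for x in word)

def factors(word, n):
    return {word[i:i + n] for i in range(len(word) - n + 1)}

w = "2"
for _ in range(6):       # deep enough that all legal length-3 words occur
    w = substitute(w)

legal3 = factors(w, 3)
# The seven collared letters of Gamma_{Lambda_2} ...
assert legal3 == {"001", "010", "100", "101", "021", "102", "210"}
# ... and restricting to the subalphabet {0, 1} leaves the four collared
# letters of Gamma_{Lambda_1}.
assert {u for u in legal3 if "2" not in u} == {"001", "010", "100", "101"}
```

Since the letter $2$ only ever occurs via $\sub(2)$, iterating from $2$ suffices to see every legal word.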
The $1$-collared AP-complex appears in Figure \ref{fig:fib+1}. An oriented edge from $ab$ to $bc$ denotes an edge labelled by the letter $b_{abc}$ in the alphabet $\Al_1$ of $1$-collared letters.
\begin{figure}[H]
$$
\begin{tikzpicture}[->, node distance=3cm, auto, text=black]
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (10j) at (0.000000,0.000000) {$10$};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (01j) at (-2.999316,1.732446) {$01$};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (00j) at (-2.999316,-1.732446) {$00$};
\draw [->,ultra thick, bend left, draw=blue, looseness=0.7] (01j) to (10j);
\draw [->,ultra thick, bend left, draw=blue, looseness=0.7] (10j) to (00j);
\draw [->,ultra thick, bend left, draw=blue, looseness=0.7] (10j) to (01j);
\draw [->,ultra thick, bend left, draw=blue, looseness=0.7] (00j) to (01j);
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (21k) at (2.999316,1.732446) {$21$};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (02k) at (2.999316,-1.732446) {$02$};
\draw [->,thick, bend right, looseness=0.7] (10j) to (02k);
\draw [->,thick, bend right, looseness=0.7] (02k) to (21k);
\draw [->,thick, bend right, looseness=0.7] (21k) to (10j);
\end{tikzpicture}
$$
\caption{The $1$-collared AP-complex for the Fibonacci substitution with one handle, with the subcomplex $\Gamma_{\Lambda_1}$ coloured blue.}
\label{fig:fib+1}
\end{figure}
The subcomplex $\Gamma_{\Lambda_1}$ in blue corresponds to the CIS given by restricting the substitution to the subalphabet $\{0,1\}$, which is (a re-encoding of) the Fibonacci substitution; this CIS is connected and has first cohomology $\check{H}^1(\TS_{Fib}) \cong \mathbb{Z}^2$. The subcomplex $\Gamma_{\Lambda_2}$ corresponds to the CIS which is the entire tiling space, which is connected and has first cohomology $\check{H}^1(\TS) \cong \varinjlim\left(\mathbb{Z}^3,\left(\begin{smallmatrix}1&1&0\\1&2&0\\1&1&1\end{smallmatrix}\right)\right) \cong \mathbb{Z}^3$, where the unimodular matrix $\left(\begin{smallmatrix}1&1&0\\1&2&0\\1&1&1\end{smallmatrix}\right)$ is found by choosing appropriate generators of $H^1(\Gamma_1)$. The only other CIS is the empty set.
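That the direct limit collapses to $\mathbb{Z}^3$ rests on the matrix being unimodular, which is quickly checked (the snippet below is ours, not part of the formal development):

```python
# Check that the direct-limit matrix is unimodular (determinant +-1),
# so it is invertible over Z and the limit of Z^3 under it is Z^3.
M = [[1, 1, 0],
     [1, 2, 0],
     [1, 1, 1]]

def det3(m):
    """Determinant of a 3x3 integer matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

assert det3(M) == 1
```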
So $\check{H}^\ast(D_{\TS})$ is given by the diagrams
$$\begin{array}{rl}
\check{H}^0(D_{\TS}) \colon & \mathbb{Z} \to \mathbb{Z} \to 0 \\
\check{H}^1(D_{\TS}) \colon & \mathbb{Z}^3 \to \mathbb{Z}^2 \to 0
\end{array}$$
We can use Theorem \ref{THM:quotient} to see, as $\Gamma_1/\Gamma_{\Lambda_1}$ is a circle and $\sub_1$ acts on this quotient complex by a map which is homotopic to the identity, that $\check{H}^i(\TS/\Lambda_1) \cong \varinjlim(H^i(S^1),\operatorname{Id}) \cong \mathbb{Z}$ for $i=0,1$.
So $\check{H}^\ast(D^{\TS})$ is given by the diagrams
$$\begin{array}{rl}
\check{H}^0(D^{\TS}) \colon & \mathbb{Z} \to \mathbb{Z} \to \mathbb{Z} \\
\check{H}^1(D^{\TS}) \colon & 0 \to \mathbb{Z} \to \mathbb{Z}^3
\end{array}$$
Alternatively, we could have used the fact that $\Lambda_1$ is a closed connected subspace of $\TS$ and so we get an exact sequence in reduced \v{C}ech{} cohomology
$$0 \to \check{H}^1(\TS/\Lambda_1) \to \check{H}^1(\TS) \to \check{H}^1(\Lambda_1) \to 0$$
which splits (as $\check{H}^1(\Lambda_1) \cong \mathbb{Z}^2$ is free) to give $\check{H}^1(\TS) \cong \check{H}^1(\TS/\Lambda_1) \oplus \mathbb{Z}^2$. As above, we can identify $\check{H}^1(\TS/\Lambda_1)$ with $H^1(S^1)$ and so $\check{H}^1(\TS) \cong \mathbb{Z}^3$.
This distinguishes $\TS$ from the tiling space associated to the Tribonacci substitution which has $\check{H}^0(\TS_{\text{Trib}}) \cong \mathbb{Z}$ and $\check{H}^1(\TS_{\text{Trib}}) \cong \mathbb{Z}^3$ but no proper, non-empty CISs. So the diagrams $\check{H}^\ast(D_{\TS_{\text{Trib}}})$ and $\check{H}^\ast(D^{\TS_{\text{Trib}}})$ have a different shape and so cannot be isomorphic to the diagrams for $\sub$.
See Example \ref{ex:fib-proximal} for an example of a substitution with the same inclusion and quotient cohomology diagrams that nevertheless gives rise to a different tiling space.
\end{example}
Consider the following two substitutions.
`Two Tribonaccis with a bridge':
$$\sub_1 \colon 0\mapsto 0201 , \:1\mapsto 001, \:2\mapsto 0, \:\overline{0}\mapsto \overline{0}\overline{2}\overline{0}\overline{1}, \: \overline{1}\mapsto \overline{0}\overline{0}\overline{1}, \:\overline{2}\mapsto \overline{0}, \: X\mapsto 1\overline{0}$$
`Quadibonacci and Fibonacci with a bridge':
$$\sub_2 \colon 0\mapsto 0201 , \:1\mapsto 0301, \:2\mapsto 001, \:3\mapsto 0, \: \overline{0}\mapsto \overline{0}\overline{0}\overline{1}, \:\overline{1}\mapsto \overline{0}\overline{1}, \: X\mapsto 1\overline{0}$$
\begin{prop}\label{PROP:trib+trib}
$\check{H}^\ast(\TS_{\sub_1})$ is isomorphic to $\check{H}^\ast(\TS_{\sub_2})$ but they have degree $1$ inclusion cohomology diagrams
$$\begin{tikzpicture}[node distance=1cm, auto]
\node (1) {};
\node (Y) [below of=1] {};
\node (X) [right of=1] {$\mathbb{Z}^6$};
\node (2) [below of=X] {$\mathbb{Z}^6$};
\node (3) [right of=2] {};
\node (4) [below of=Y] {$\mathbb{Z}^3$};
\node (5) [right of=4] {};
\node (6) [right of=5] {$\mathbb{Z}^3$};
\node (7) [below of=4] {};
\node (8) [right of=7] {$0$};
\node (9) [right of=8]{};
\draw[->] (2) to node [swap] {} (4);
\draw[->] (2) to node [swap] {} (6);
\draw[->] (4) to node [swap] {} (8);
\draw[->] (6) to node [swap] {} (8);
\draw[->] (X) to node [swap] {} (2);
\node (A) [left of=4, node distance=1.5cm] {$\check{H}^1(D_{\TS_{1}})\colon$};
\node (1b) [right of=1, node distance=6cm]{};
\node (Yb) [below of=1b] {};
\node (Xb) [right of=1b] {$\mathbb{Z}^6$};
\node (2b) [below of=Xb] {$\mathbb{Z}^6$};
\node (3b) [right of=2b] {};
\node (4b) [below of=Yb] {$\mathbb{Z}^4$};
\node (5b) [right of=4b] {};
\node (6b) [right of=5b] {$\mathbb{Z}^2$};
\node (7b) [below of=4b] {};
\node (8b) [right of=7b] {$0$};
\node (9b) [right of=8b]{};
\draw[->] (2b) to node [swap] {} (4b);
\draw[->] (2b) to node [swap] {} (6b);
\draw[->] (4b) to node [swap] {} (8b);
\draw[->] (6b) to node [swap] {} (8b);
\draw[->] (Xb) to node [swap] {} (2b);
\node (Ab) [left of=4b, node distance=1.5cm] {$\check{H}^1(D_{\TS_{2}})\colon$};
\end{tikzpicture}$$
and degree $1$ quotient cohomology diagrams
$$\begin{tikzpicture}[node distance=1cm, auto]
\node (1) {};
\node (Y) [below of=1] {};
\node (X) [right of=1] {$0$};
\node (2) [below of=X] {$\mathbb{Z}$};
\node (3) [right of=2] {};
\node (4) [below of=Y] {$\mathbb{Z}^3$};
\node (5) [right of=4] {};
\node (6) [right of=5] {$\mathbb{Z}^3$};
\node (7) [below of=4] {};
\node (8) [right of=7] {$\mathbb{Z}^6$};
\node (9) [right of=8]{};
\draw[->] (2) to node [swap] {} (4);
\draw[->] (2) to node [swap] {} (6);
\draw[->] (4) to node [swap] {} (8);
\draw[->] (6) to node [swap] {} (8);
\draw[->] (X) to node [swap] {} (2);
\node (A) [left of=4, node distance=1.5cm] {$\check{H}^1(D^{\TS_{1}})\colon$};
\node (1b) [right of=1, node distance=6cm]{};
\node (Yb) [below of=1b] {};
\node (Xb) [right of=1b] {$0$};
\node (2b) [below of=Xb] {$\mathbb{Z}$};
\node (3b) [right of=2b] {};
\node (4b) [below of=Yb] {$\mathbb{Z}^2$};
\node (5b) [right of=4b] {};
\node (6b) [right of=5b] {$\mathbb{Z}^4$};
\node (7b) [below of=4b] {};
\node (8b) [right of=7b] {$\mathbb{Z}^6$};
\node (9b) [right of=8b]{};
\draw[->] (2b) to node [swap] {} (4b);
\draw[->] (2b) to node [swap] {} (6b);
\draw[->] (4b) to node [swap] {} (8b);
\draw[->] (6b) to node [swap] {} (8b);
\draw[->] (Xb) to node [swap] {} (2b);
\node (Ab) [left of=4b, node distance=1.5cm] {$\check{H}^1(D^{\TS_{2}})\colon$};
\end{tikzpicture}$$
\end{prop}
\begin{proof}
The proof is left as an exercise.
\end{proof}
\begin{comment}
\begin{proof}
For both substitutions $N=1$.
For $\sub_1$, the $1$-collared alphabet is given by
$$
\begin{array}{rcl}
\Al_1 & = & \{0_{001},0_{002},1_{010},2_{020},0_{100},0_{102},0_{201}\} \cup \{1_{01\overline{0}}, \overline{0}_{1\overline{02}}\} \cup\\
& & \{\overline{0}_{\overline{001}},\overline{0}_{\overline{002}},\overline{1}_{\overline{010}},\overline{2}_{\overline{020}},\overline{0}_{\overline{100}},\overline{0}_{\overline{102}},\overline{0}_{\overline{201}}\}
\end{array}
$$
The $1$-collared AP-complex for $\sub_1$ is given in Figure \ref{fig:two-trib}.
\begin{figure}[H]
$$
\begin{tikzpicture}[->, node distance=3cm, auto, text=black]
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (01i) at (2.000000,0.000000) {01};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (20i) at (0.618485,1.901966) {20};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (02i) at (-1.617476,1.176338) {02};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (00i) at (-1.618870,-1.174419) {00};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (10i) at (0.616230,-1.902698) {10};
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (00i) to (01i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (00i) to (01i);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (00i) to (02i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (00i) to (02i);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (01i) to (10i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (01i) to (10i);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (02i) to (20i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (02i) to (20i);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (10i) to (00i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (10i) to (00i);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (10i) to (02i);
\draw [->,thick, bend left, draw=blue, looseness=0.7](10i) to (02i);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (20i) to (01i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (20i) to (01i);
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (01j) at (10.000000,0.000000) {$\overline{01}$};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (20j) at (8.618485,1.901966) {$\overline{20}$};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (02j) at (6.382524,1.176338) {$\overline{02}$};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (00j) at (6.382524,-1.174419) {$\overline{00}$};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (10j) at (8.616230,-1.902698) {$\overline{10}$};
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (00j) to (01j);
\draw [->,thick, bend left, draw=red, looseness=0.7] (00j) to (01j);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (00j) to (02j);
\draw [->,thick, bend left, draw=red, looseness=0.7] (00j) to (02j);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (01j) to (10j);
\draw [->,thick, bend left, draw=red, looseness=0.7] (01j) to (10j);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (02j) to (20j);
\draw [->,thick, bend left, draw=red, looseness=0.7] (02j) to (20j);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (10j) to (00j);
\draw [->,thick, bend left, draw=red, looseness=0.7] (10j) to (00j);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (10j) to (02j);
\draw [->,thick, bend left, draw=red, looseness=0.7] (10j) to (02j);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (20j) to (01j);
\draw [->,thick, bend left, draw=red, looseness=0.7] (20j) to (01j);
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (10k) at (4.191262,0.588169) {$1\overline{0}$};
\draw [-,ultra thick, draw=white, line width=7pt] (01i) to (10k);
\draw [->,thick] (01i) to (10k);
\draw [-,ultra thick, draw=white, line width=7pt] (10k) to (02j);
\draw [->,thick] (10k) to (02j);
\end{tikzpicture}
$$
\caption{The $1$-collared AP-complex for the `Two Tribonaccis with a bridge' substitution.}
\label{fig:two-trib}
\end{figure}
We note that $\TS_1$ has five CISs:
\begin{itemize}
\item $\Lambda_1 = \emptyset$, the empty set
\item \textcolor{blue}{$\Lambda_2$}, given by restricting the substitution to the alphabet $\{0,1,2\}$
\item \textcolor{red}{$\Lambda_3$}, given by restricting the substitution to the alphabet $\{\overline{0},\overline{1},\overline{2}\}$
\item $\Lambda_4 = \textcolor{blue}{\Lambda_2} \sqcup \textcolor{red}{\Lambda_3}$, the union of the disjoint CISs above, given by restricting the substitution to the alphabet $\{0,1,2,\overline{0},\overline{1},\overline{2}\}$
\item $\Lambda_5 = \TS_1$, the full tiling space
\end{itemize}
We note that a choice of $1$-cycles generating the homology $H_1(\Gamma_1)$ of the AP-complex can be given by the oriented sum of edges $$
\begin{array}{rcl}
\gamma_1 & = & [2_{020}]+[0_{201}]+[1_{010}]+[0_{100}]+[0_{002}]\\
\gamma_2 & = & [2_{020}]+[0_{201}]+[1_{010}]+[0_{102}]\\
\gamma_3 & = & [0_{100}]+[0_{001}]+[1_{010}]\\
\gamma_4 & = & [\overline{2}_{\overline{020}}]+[\overline{0}_{\overline{201}}]+[\overline{1}_{\overline{010}}]+[\overline{0}_{\overline{100}}]+[\overline{0}_{\overline{002}}]\\
\gamma_5 & = & [\overline{2}_{\overline{020}}]+[\overline{0}_{\overline{201}}]+[\overline{1}_{\overline{010}}]+[\overline{0}_{\overline{102}}]\\
\gamma_6 & = & [\overline{0}_{\overline{100}}]+[\overline{0}_{\overline{001}}]+[\overline{1}_{\overline{010}}]
\end{array}
$$
and the substitution acts on these $1$-cycles by the matrix
$$
M =
\left(
\begin{matrix}
1 & 1 & 0 & 0 & 0 & 0 \\
2 & 1 & 2 & 0 & 0 & 0 \\
1 & 1 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 2 & 1 & 2 \\
0 & 0 & 0 & 1 & 1 & 1
\end{matrix}
\right)
$$
hence the cohomology of $\TS_1$ is given by the direct limit of the transpose of $M$.
Write $M = \begin{pmatrix}M_1 & 0 \\ 0 & M_2\end{pmatrix}$ in block matrix form. Note that $M_1, M_2$ and $M$ are all unimodular. Using Theorem \ref{THM:subcomplex}, we can identify each CIS with an inverse limit of the substitution (in this case to the first power) acting on a particular subcomplex of $\Gamma_1$. These subcomplexes can be found using the method outlined in the above discussion. We have chosen our generators $\gamma_i$ in such a way that the homology of each subcomplex is generated by some subset of these $1$-cycles. We calculate
$$
\begin{array}{rclcl}
\check{H}^1(\Lambda_1) & = & \Check{H}^1(\emptyset) & = & 0\\
\check{H}^1(\Lambda_2) & = & \varinjlim(\mathbb{Z}^3_{\langle \gamma_1,\gamma_2,\gamma_3\rangle},M_1^T) & = & \mathbb{Z}^3\\
\check{H}^1(\Lambda_3) & = & \varinjlim(\mathbb{Z}^3_{\langle \gamma_4,\gamma_5,\gamma_6\rangle},M_2^T) & = & \mathbb{Z}^3\\
\check{H}^1(\Lambda_4) & = & \check{H}^1(\Lambda_2) \oplus \check{H}^1(\Lambda_3) & = & \mathbb{Z}^6\\
\check{H}^1(\Lambda_5) & = & \varinjlim(\mathbb{Z}^3_{\langle \gamma_1,\ldots,\gamma_6\rangle},M^T) & = & \mathbb{Z}^6
\end{array}
$$
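The direct limits above stabilise because the blocks $M_1$ and $M_2$ are unimodular. As a quick illustrative check (ours, not part of the paper's machinery), the determinants can be verified with exact integer arithmetic:

```python
# Check that the blocks M1, M2 of the substitution action on H_1(Gamma_1)
# are unimodular, so each direct limit lim(Z^3, Mi^T) is simply Z^3.

def det3(m):
    # cofactor expansion along the first row (exact integer arithmetic)
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

M1 = [[1, 1, 0], [2, 1, 2], [1, 1, 1]]  # action on gamma_1, gamma_2, gamma_3
M2 = [[1, 1, 0], [2, 1, 2], [1, 1, 1]]  # action on gamma_4, gamma_5, gamma_6

print(det3(M1), det3(M2))  # -1 -1: both blocks are unimodular
```

Since $\det M = \det M_1 \cdot \det M_2 = 1$, the transpose $M^T$ is invertible over $\mathbb{Z}$, and each of the direct limits above is the stated free abelian group.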
From here, we can use the exact sequence in reduced \v{C}ech{} cohomology to find the cohomology groups of each of the quotients, except for the quotient $\TS/\Lambda_4$, where the exact sequence does not reduce nicely to a split short exact sequence, since $\tilde{H}^0(\Lambda_4) \cong \mathbb{Z}$, as $\Lambda_4$ is composed of exactly two connected components.
Hence, to find the cohomology of the quotient $\TS/\Lambda_4$ in this case, we use Theorem \ref{THM:quotient}, and identify the quotient complex $\Gamma_1/\Gamma_{\Lambda_4}$, given in Figure \ref{fig:trib+trib_quotient_complex}. This quotient complex is a circle. The induced substitution acts on $\Gamma_1/\Gamma_{\Lambda_4}$ by a map which is homotopic to the identity, and so $\check{H}^1(\TS/\Lambda_4) \cong \varinjlim(H^1(S^1),\operatorname{Id}^*) \cong \mathbb{Z}$.
\begin{figure}
$$
\begin{tikzpicture}[node distance=4cm, auto]
\node (1) {$1\overline{0}$};
\node (2) [below of=1] {$\Gamma_{\Lambda_4}$};
\draw [->,thick, bend left, looseness=1.7] (2) to (1);
\draw [->,thick, bend left, looseness=1.7] (1) to (2);
\end{tikzpicture}
$$
\caption{The quotient complex $\Gamma_1/\Gamma_{\Lambda_4}$ for the `Two Tribonaccis with a bridge' substitution.}
\label{fig:trib+trib_quotient_complex}
\end{figure}
For the second substitution, we give the $1$-collared AP-complex for $\sub_2$ in Figure \ref{fig:quad-fib}. The calculation of cohomology for $\sub_2$ is similar to that for $\sub_1$, except that we can choose six generating $1$-cycles of $H_1(\Gamma_1)$ in such a way that the matrix $M$ acting on $H_1$ has a block diagonal structure with $M_1$ and $M_2$ of ranks $4$ and $2$ respectively, and with all three of $M_1, M_2, M$ still being unimodular. We leave the details to the reader.
\begin{figure}[H]
$$
\begin{tikzpicture}[->, node distance=3cm, auto, text=black]
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (01i) at (2.000000,0.000000) {01};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (30i) at (1.247244,1.563452) {30};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (03i) at (-0.444382,1.950006) {03};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (20i) at (-1.801497,0.868683) {20};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (02i) at (-1.802525,-0.866547) {02};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (00i) at (-0.446693,-1.949478) {00};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (10i) at (1.245390,-1.564929) {10};
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (00i) to (01i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (00i) to (01i);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (00i) to (02i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (00i) to (02i);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (01i) to (10i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (01i) to (10i);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (02i) to (20i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (02i) to (20i);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (03i) to (30i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (03i) to (30i);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (10i) to (00i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (10i) to (00i);
\draw [-,ultra thick, bend right, draw=white, line width=7pt, looseness=0.7] (10i) to (02i);
\draw [->,thick, bend right, draw=blue, looseness=0.7] (10i) to (02i);
\draw [-,ultra thick, bend right, draw=white, line width=7pt, looseness=0.7] (10i) to (03i);
\draw [->,thick, bend right, draw=blue, looseness=0.7] (10i) to (03i);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (20i) to (01i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (20i) to (01i);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (30i) to (01i);
\draw [->,thick, bend left, draw=blue, looseness=0.7] (30i) to (01i);
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (01j) at (10.000000,0.000000) {$\overline{01}$};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (00j) at (7.000684,1.732446) {$\overline{00}$};
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (10j) at (7.000684,-1.732446) {$\overline{10}$};
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (01j) to (10j);
\draw [->,thick, bend left, draw=red, looseness=0.7] (01j) to (10j);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (10j) to (00j);
\draw [->,thick, bend left, draw=red, looseness=0.7] (10j) to (00j);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (10j) to (01j);
\draw [->,thick, bend left, draw=red, looseness=0.7] (10j) to (01j);
\draw [-,ultra thick, bend left, draw=white, line width=7pt, looseness=0.7] (00j) to (01j);
\draw [->,thick, bend left, draw=red, looseness=0.7] (00j) to (01j);
\node [circle,inner sep = 0pt, outer sep = 2pt, minimum size=2mm] (10k) at (4.500342,0.866223) {1$\overline{0}$};
\draw [-,ultra thick, draw=white, line width=7pt] (01i) to (10k);
\draw [->,thick] (01i) to (10k);
\draw [-,ultra thick, draw=white, line width=7pt] (10k) to (00j);
\draw [->,thick] (10k) to (00j);
\end{tikzpicture}
$$
\caption{The $1$-collared AP-complex for the `Quadibonacci and Fibonacci with a bridge' substitution.}
\label{fig:quad-fib}
\end{figure}
\end{proof}
\end{comment}
Hence, $\check{H}^1(D_{\TS})$ and $\check{H}^1(D^{\TS})$ can distinguish tiling spaces which have the same cohomology and lattice structure of CISs.
\subsection{Discussion}
\subsubsection{Barge-Diamond Complexes for Non-primitive Substitutions}
One may ask why we have been using collared Anderson-Putnam complexes and not Barge-Diamond complexes \cite{BD} in the sections focussing on non-minimal substitutions. Indeed, (a slightly modified version of) the BD-complex is a suitable replacement for the $N$-collared AP-complex, and most results from the previous section would hold with very little changed. However, the advantages afforded by the Barge-Diamond method are less apparent when there exist bounded letters in the alphabet. When all letters are expanding and the substitution is aperiodic, a very similar argument to the original proof presented by Barge and Diamond \cite{BD} will carry through, and one can then apply the usual method of replacing the induced substitution on the BD-complex with a homotopic map which is simplicial on the vertex-edges.
When there exist bounded words in the subshift, the usual BD-complex with an $\epsilon$-ball collaring at each point\footnote{See \cite{BDHS:other} for an explanation of what it means to \emph{collar points} in the tiling, instead of collaring tiles.} does not suffice to get the necessary homeomorphism to the inverse limit (for broadly the same reasons that the $1$-collaring does not suffice to induce border-forcing when $B$ is non-empty).
Instead, the approach that one should take is to collar each point with a ball of radius $N-1+\epsilon$---this is equivalent to replacing the substitution with its $(N-1)$-collared substitution and then using the $\epsilon$-ball collaring on this collared substitution (and so we are using the usual BD-complex $K_{\sub_{N-1}}$ for the collared substitution $\sub_{N-1}$). This has the advantage of needing to collar out one fewer time than in the AP-complex approach. Moreover, we can still replace the induced substitution map with a homotopic map which acts simplicially on the distinguished subcomplex of transition edges. Unlike in the minimal case, it is not necessarily true that $\tilde{H}^0(K_{\sub_{N-1}})$ is trivial, as $\TS_\sub$ may have multiple connected components, and so extension problems coming from the Barge-Diamond exact sequence will in general be more difficult.
To illustrate this alternative method, we present a brief example calculation of cohomology for the Chacon substitution.
\begin{example}
Let $\sub$ be given by $\sub \colon a \mapsto aaba,\: b \mapsto b$, the Chacon substitution on the alphabet $\{a, b\}$. Let $$1 = a_{aaa},\: 2 = a_{aab},\: 3 = b_{aba},\: 4 = a_{bab},\: 5 = a_{baa}.$$ The $1$-collared substitution is given by $$\sub_1 \colon 1 \mapsto 1235,\: 2 \mapsto 1234,\: 3 \mapsto 3,\: 4 \mapsto 5234,\: 5 \mapsto 5235$$ and the BD-complex is given in Figure \ref{fig:chacon-bd}.
\begin{figure}[h]
$$\begin{tikzpicture}[->, node distance=2cm, auto]
\node [fill,circle,draw,inner sep = 0pt, outer sep = 0pt, minimum size=1mm] (ai) at (2.000000,0.000000) {};
\node [fill,circle,draw,inner sep = 0pt, outer sep = 0pt, minimum size=1mm] (ao) at (1.618173,1.175379) {};
\node [fill,circle,draw,inner sep = 0pt, outer sep = 0pt, minimum size=1mm] (bi) at (0.618485,1.901966) {};
\node [fill,circle,draw,inner sep = 0pt, outer sep = 0pt, minimum size=1mm] (bo) at (-0.617358,1.902333) {};
\node [fill,circle,draw,inner sep = 0pt, outer sep = 0pt, minimum size=1mm] (ci) at (-1.617476,1.176338) {};
\node [fill,circle,draw,inner sep = 0pt, outer sep = 0pt, minimum size=1mm] (co) at (-2.000000,0.001185) {};
\node [fill,circle,draw,inner sep = 0pt, outer sep = 0pt, minimum size=1mm] (di) at (-1.618870,-1.174419) {};
\node [fill,circle,draw,inner sep = 0pt, outer sep = 0pt, minimum size=1mm] (do) at (-0.619612,-1.901600) {};
\node [fill,circle,draw,inner sep = 0pt, outer sep = 0pt, minimum size=1mm] (ei) at (0.616230,-1.902698) {};
\node [fill,circle,draw,inner sep = 0pt, outer sep = 0pt, minimum size=1mm] (eo) at (1.616779,-1.177296) {};
\draw (ai) edge[bend right=110, looseness=3, ->, thick, swap] node {$1$}(ao);
\draw (bi) edge[bend right=110, looseness=3, ->, thick, swap] node {$2$}(bo);
\draw (ci) edge[bend right=110, looseness=3, ->, thick, swap] node {$3$}(co);
\draw (di) edge[bend right=110, looseness=3, ->, thick, swap] node {$4$}(do);
\draw (ei) edge[bend right=110, looseness=3, ->, thick, swap] node {$5$}(eo);
\draw [-,thick, bend right, draw=white, line width=4pt, looseness=0.7] (ao) to (bi);
\draw [->, thick, bend right, looseness=0.7] (ao) to (bi);
\draw [-,thick, bend right, draw=white, line width=4pt, looseness=0.7] (bo) to (ci);
\draw [->, thick, bend right, looseness=0.7] (bo) to (ci);
\draw [-,thick, bend right, draw=white, line width=4pt, looseness=0.7] (co) to (di);
\draw [->, thick, bend right, looseness=0.7] (co) to (di);
\draw [->, thick, bend right, looseness=0.7, draw=red] (co) to (ei);
\draw [-,thick, bend right, draw=white, line width=7pt, looseness=0.7] (do) to (ci);
\draw [->, thick, bend right, looseness=0.7, draw=red] (do) to (ci);
\draw [-,thick, bend right, draw=white, line width=4pt, looseness=0.7] (eo) to (ai);
\draw [->, thick, bend right, looseness=0.7, draw=red] (eo) to (ai);
\draw [->, thick, bend right, looseness=0.7] (eo) to (bi);
\end{tikzpicture} $$
\caption{The Barge-Diamond complex $K_{\sub_1}$ for the $1$-collared Chacon substitution, with the subcomplex of transition edges in the eventual range coloured red.}
\label{fig:chacon-bd}
\end{figure}
The eventual range of the map $g$ acting on the subcomplex $S$ of transition edges is the collection $\{e_{35}, e_{43}, e_{51}\}$ coloured in red. The substitution acts on this eventual range like the identity. Note that $S$ has exactly three connected components, all of which are contractible. It follows that the Barge-Diamond exact sequence for this substitution is given by $$0 \to \mathbb{Z}^2 \to \varinjlim \left(\mathbb{Z}^5, \left(
\begin{smallmatrix}
1&1&1&0&1\\
1&1&1&1&0\\
0&0&1&0&0\\
0&1&1&1&1\\
0&1&1&0&2
\end{smallmatrix}
\right)\right) \to \check{H}^1(\TS) \to 0 \to 0$$
\end{example}
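As a small illustrative check (ours, not from the paper), the $5\times 5$ integer matrix in the exact sequence above is the abelianisation of the $1$-collared substitution $\sub_1$, with entry $(i,j)$ counting the occurrences of the letter $j$ in $\sub_1(i)$; note that this matrix depends only on letter counts, not on their order within each image word:

```python
# The abelianisation of the 1-collared Chacon substitution sigma_1:
# entry (i, j) counts occurrences of letter j in sigma_1(i).

sigma1 = {1: "1235", 2: "1234", 3: "3", 4: "5234", 5: "5235"}

def abelianisation(sub, letters):
    return [[sub[i].count(str(j)) for j in letters] for i in letters]

A = abelianisation(sigma1, [1, 2, 3, 4, 5])
expected = [
    [1, 1, 1, 0, 1],
    [1, 1, 1, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 1, 1, 1, 1],
    [0, 1, 1, 0, 2],
]
print(A == expected)  # True
```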
Experience with examples suggests that the eventual range of $S$ under the induced substitution will often have multiple connected components whenever $N > 1$, and especially when $\sub$ is not minimal, and so we seem to lose the advantage normally afforded to us with Barge-Diamond calculations, where it is often the case that the exact sequence splits. In fact, it is probably more efficient in the above example to directly find generators of the cohomology of the entire complex $K_{\sub_1}$ (where in this case there are only three generators) and to calculate the induced substitution on $H^1(K_{\sub_1})$ in order to calculate $\check{H}^1(\TS)$. If we do that, we find that $\check{H}^1(\TS) \cong \varinjlim \left(\mathbb{Z}^3, \left(\begin{smallmatrix}0&1&0\\-1&3&1\\-1&1&1\end{smallmatrix}\right)\right)$.
\subsubsection{Extensions of Substitutions by Other Substitutions}
So far, the examples of non-minimal substitutions that we have presented have been relatively tame---the tiling spaces have all been a finite collection of minimal tiling spaces which are possibly connected by a finite number of path components which asymptotically approach some sub-collection of the minimal sets. In particular, by quotienting out by the disjoint union of the minimal sets, we are left with a space homeomorphic to a cell complex. While these spaces are interesting, and serve as good test cases for our machinery, the range of possible behaviours for non-minimal substitutions is much more varied.
For instance, we could break the asymptotic behaviour in the above described examples, and instead have new path components which approach minimal sets proximally, instead of asymptotically.
\begin{example}\label{ex:fib-proximal}
Consider the substitution $$\sub \colon 0 \mapsto 001,\: 1 \mapsto 01,\: 2\mapsto X021X,\: X \mapsto X$$ whose proximal path component is the orbit of the word $$\ldots 0010010100101X00100101X001X021X01X00101X00100101001001 \ldots$$ where the sparse appearances of the symbols $X$, which become more rare the further one travels from the single $2$, serve to break the asymptotic nature of the handle.
There is also a single asymptotic handle associated to the bi-infinite word $$\ldots 0010010100101X00100101001001 \ldots.$$
The lattice of CISs for this substitution is $\emptyset \to \Lambda_{Fib} \to \Lambda_{Fib+1} \to \TS$ where $\Lambda_{Fib}$ is a Fibonacci tiling space, $\Lambda_{Fib+1}$ includes the asymptotic handle and $\TS$ is the full tiling space which includes the proximal handle.
The inclusion cohomology diagram in degree $1$ is given by
$H^1(D_{\TS})\colon \: \mathbb{Z}^4 \to \mathbb{Z}^3 \to \mathbb{Z}^2 \to 0$.
\end{example}
\begin{comment}
\begin{example}
Another interesting examples is given by $$ \sub \colon 0 \mapsto 001,\: 1 \mapsto 01,\: a \mapsto aaba,\: b\mapsto b,\: X \mapsto 1aXa0$$ which has a Fibonacci minimal CIS, a Chacon minimal CIS, two single path components associated to the sequences
$$\begin{array}{c}
\ldots 0010010100101aabaaababaaba \ldots\\
\ldots aabaaababaaba0010010100100 \ldots
\end{array}$$
which are asymptotic to the two distinct minimal sets in either direction, and a final path component associated to the sequence
$$\ldots 00101aabaaababaaba01aaba1aXa0aaba001aabaaababaaba00100101 \ldots $$
which is proximal to all of the other CISs in both directions and asymptotic to none.
\end{example}
\end{comment}
To support this direction of exploring more varied behaviour, we introduce a curious family of examples where the quotients of the tiling space by the CISs are of particular interest, and where there is a natural factor map onto the minimal set of the tiling space. In particular the complement $\TS \setminus \Lambda_{\operatorname{min}}$ will often have uncountably many path components, where $\Lambda_{\operatorname{min}}$ is the disjoint union of the minimal CISs. One might think of this construction as `extending' one substitution by another in a proximal fashion.
Suppose that $\sub$ and $\psi$ are substitutions on $\Al$ and $\Be$ respectively, with $\sub$ primitive and suppose that $|\Be| \leq |\Al|$. Let $i \colon \Be \to \Al$ be an injection. Assume that for each $b \in \Be$, if $\psi(b) = b_1 \ldots b_n$, then there exists an interior subsequence $(a_{k_1},\ldots, a_{k_n})$ of $\sub(i(b)) = a_1 \ldots a_m$ of the form $(i(b_1), \ldots, i(b_n))$ (if not, take a high enough power of $\sub$ so that there is). Here by interior, we mean that $a_{k_1} \neq a_1$ and $a_{k_n} \neq a_m$.
\begin{defn}
Let $\sub$ and $\psi$ be as above and choose an injection $i\colon \Be \to \Al$ and a set of subsequences $S = \{s_{b} = (a_{k_1},\ldots, a_{k_n}) \mid b \in \Be\}$ of $\sub(i(b))$ as above.
Define a new substitution $[\sub,\psi]_S$ on the alphabet $\Al \sqcup \Be$ by $[\sub,\psi]_S(a) = \sub(a)$ for all $a \in \Al$, and for $b \in \Be$ by $[\sub,\psi]_S(b) = \sub(i(b))$, except that each occurrence of $a_{k_j}$ is replaced with $b_j$.
\end{defn}
There is a natural factor map $\TS_{[\sub,\psi]_S} \to \TS_\sub$ given by mapping the letters $b \in \Be$ to $i(b)$.
\begin{example}\label{EX:solenoid}
If $\sub \colon 0 \mapsto 00100101,\: 1 \mapsto 00101$ and $\psi \colon a \mapsto aa$, then we could choose the injection $a \mapsto 0$ and then choose as the subsequence of $\sub(i(a)) = 0_{(1)} 0_{(2)} 1_{(3)} 0_{(4)} 0_{(5)} 1_{(6)} 0_{(7)} 1_{(8)}$ the sequence $(0_{(4)}, 0_{(5)})$, so that $S = \{(0_{(4)},0_{(5)})\}$. Then our extended substitution $[\sub,\psi]_S$ is given by $$[\sub,\psi]_S \colon 0 \mapsto 00100101,\: 1\mapsto 00101,\: a \mapsto 001aa101.$$
\end{example}
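The construction in the definition above is mechanical, and can be sketched in a few lines of code; the function name \texttt{extend} and the 1-indexed position convention are our own choices, not the paper's. Running it on the data of the example above reproduces the extended substitution:

```python
# A sketch of the extension [phi, psi]_S from the definition above.

def extend(phi, psi, inj, S):
    """Build [phi, psi]_S on the disjoint union of the two alphabets.

    phi, psi : substitutions as dicts, letter -> image word
    inj      : injection from psi's alphabet into phi's alphabet
    S        : dict b -> tuple of 1-indexed positions in phi(inj[b])
               at which the letters of psi(b) are substituted in
    """
    new = dict(phi)                      # old letters substitute as before
    for b, positions in S.items():
        image = list(phi[inj[b]])
        for letter, pos in zip(psi[b], positions):
            image[pos - 1] = letter      # overwrite a_{k_j} with b_j
        new[b] = "".join(image)
    return new

phi = {"0": "00100101", "1": "00101"}    # square of the Fibonacci substitution
psi = {"a": "aa"}                        # doubling substitution
ext = extend(phi, psi, {"a": "0"}, {"a": (4, 5)})
print(ext["a"])  # 001aa101
```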
\begin{example}
Let $\psi = \operatorname{Id}$ be the substitution on the alphabet $\{x\}$ given by $\operatorname{Id}(x) = x$, and let $i \colon \{x\} \to \Al$ be given by $i(x)=a$ for some $a \in \Al$. As $\sub$ is primitive by assumption, let $a_{k_1}$ be an occurrence of the letter $a$ in the interior of the word $\sub^n(a)$ for some positive natural $n$. Let $S=\{(a_{k_1})\}$.
The substitution $[\sub,\operatorname{Id}]_S$ is just the substitution $\sub$ with a single handle. That is, the tiling space for $[\sub,\operatorname{Id}]_S$ is just the tiling space for $\sub$ with a single extra one-dimensional path component which asymptotically approaches the minimal component in both directions. The image of the handle under the factor map onto $\TS_\sub$ is precisely the orbit of the limit word $\lim_{j \to \infty}\sub^{jn}(a)$ expanded about the interior letter $a_{k_1}$ appearing in $\sub^n(a)$. By iterating this method, we can add as many handles as we like.
\end{example}
\begin{comment}
\begin{example}
The substitution $$[\operatorname{TM},\operatorname{PD}] \colon 0 \mapsto 01101001,\: 1 \mapsto 10010110,\: a \mapsto 011ab001,\: b \mapsto 10a1a110$$ is an extension of the (cube of the) Thue-Morse substitution $\operatorname{TM}\colon 0\mapsto 01,\: 1\mapsto 10$ by the period doubling substitution $\operatorname{PD} \colon a \mapsto ab,\: b \mapsto aa$.
\end{example}
\end{comment}
In general, the substitution tiling space $\TS_{[\sub,\psi]_S}$ has exactly one non-empty proper CIS which is exactly the tiling space $\TS_\sub$ given by restriction of the substitution to the subalphabet $\Al$.
There is a close relationship between the quotient complex $\Gamma_N/\Gamma_{\TS_\sub}$ and the AP-complex $\Gamma_\psi$ of the substitution $\psi$. Let $f \colon \Gamma_N/\Gamma_{\TS_\sub} \to \Gamma_N/\Gamma_{\TS_\sub}$ and $g \colon \Gamma_{\psi} \to \Gamma_{\psi}$ be the respective bonding maps. It would appear that more often than not there is a map $h \colon \Gamma_N/\Gamma_{\TS_\sub} \to \Gamma_\psi$ which conjugates these bonding maps up to homotopy, that is $g \circ h \simeq h \circ f$. This would seem to suggest a close relationship between the spaces $\TS_{[\sub,\psi]_S}/\TS_\sub$ and $\TS_\psi$, perhaps up to shape equivalence\footnote{For an introduction and overview of the r\^{o}le of shape theory in the study of tiling spaces, we refer the reader to \cite{CH:codim-one-attractors}}.
\begin{question}
What is the relationship between $\TS_{[\sub,\psi]_S}/\TS_\sub$ and $\TS_\psi$?
\end{question}
\begin{comment}
We note that in Example \ref{EX:solenoid} above, $\psi$ is periodic, and we suspect that $\TS_{[\sub,\psi]_S}/\TS_\sub$ is shape equivalent to the dyadic solenoid---in fact $\Gamma_1 / \Gamma_{\TS_{\sub}}$ is a circle, and the induced substitution $[\sub,\psi]_S$ on this quotient complex is homotopic to the doubling map, so by Theorem \ref{THM:quotient} the first cohomology of the quotient is given by $\check{H}^1(\TS_{[\sub,\psi]_S}/\TS_\sub) \cong \mathbb{Z}[1/2]$. As $\TS_\psi$ in this case is a circle with $\check{H}^1(\TS_\psi) \cong \mathbb{Z}$, it would appear that the relationship hinted at above relies at least on the recognisability of $\psi$.
If $S$ is chosen differently, then the quotient complex $\Gamma_{\TS_\sub}$ can be different. For example if $S' = \{(0_{(2)},0_{(7)})\}$ then $\Gamma_{\TS_\sub}$ is homotopy equivalent to a wedge of two circles. However, the induced map on cohomology acts like the matrix $\left(\begin{smallmatrix}1&1\\1&1\end{smallmatrix}\right)$ and so we still find that $\check{H}^1(\TS_{[\sub,\psi]_{S'}}/\TS_\sub) \cong \mathbb{Z}[1/2]$.
\end{comment}
The importance of the choice of the set of subsequences $S$ in the construction of $[\sub,\psi]_S$ is not immediately apparent. It seems unlikely that the resulting tiling space is independent of the choice of $S$. By taking powers of $\sub$, one can generate infinitely many distinct such choices. By construction, the inclusion and quotient cohomology diagrams of these spaces will all be very similar (if not identical), and so a stronger invariant is likely needed to distinguish such substitutions topologically.
\begin{question}
Does there exist a pair of substitutions $\sub,\psi$ and sets of subsequences $S, S'$ such that $\TS_{[\sub,\psi]_S}$ and $\TS_{[\sub,\psi]_{S'}}$ are not homeomorphic? If such behaviour is typical, what tools are needed to topologically or dynamically distinguish such pairs of spaces in general?
\end{question}
\begin{example}\label{EX:proximal}
In \cite{BD:proximal}, Barge and Diamond outline a method for associating, to a primitive aperiodic substitution $\sub$, a new substitution $\tilde{\sub}$ which is non-minimal. They show that the homeomorphism type of the tiling space $\TS_{\tilde{\sub}}$ is a homeomorphism invariant of the tiling space $\TS_\sub$, and so the cohomology $\check{H}^i(\TS_{\tilde{\sub}})$ is also a topological invariant for $\TS_\sub$. The method for forming the substitution $\tilde{\sub}$ from the so-called \emph{balanced pairs of words associated to pairs of asymptotic composants} is involved, and it would be cumbersome to reproduce the construction here, so the reader is referred to the paper \cite{BD:proximal}.
Using this construction, it can be shown that given the Fibonacci substitution $\sub_{Fib} \colon 0\mapsto 001,\: 1\mapsto 01$, the associated substitution $\tilde{\sub}_{Fib}$ is given by $\tilde{\sub}_{Fib} \colon a \mapsto aab,\: b \mapsto ab,\: c \mapsto acab$. The tiling space of this substitution is orbit equivalent to a Fibonacci with one handle substitution $[\sub_{Fib},\operatorname{Id}]_S$ (the equivalence is given by the single $c$ tile absorbing the $a$ tile to its right).
\end{example}
\begin{example}\label{EX:asymptotic}
Consider the substitutions
$$
\begin{array}{rlll}
\sub_1 \colon & a \mapsto cab & b \mapsto ac & c \mapsto a\\
\sub_2 \colon & a \mapsto bbac & b \mapsto a & c \mapsto b.
\end{array}
$$
It is an exercise for the reader to check that we have cohomology groups $\check{H}^1(\TS_{\sub_1}) \cong \check{H}^1(\TS_{\sub_2}) \cong \mathbb{Z}^5$. So, cohomology does not distinguish the tiling spaces of these two substitutions. It is also the case that several other invariants of primitive substitution tiling spaces fail to distinguish these substitutions. We can instead form the two new substitutions $\tilde{\sub}_1, \tilde{\sub}_2$. We omit the specific presentations of these substitutions owing to their extremely large size---$\tilde{\sub}_1$ has an alphabet of 19 letters and $\tilde{\sub}_2$ an alphabet of 87 letters.
Using the results of this work, we can calculate that $\operatorname{rk}\check{H}^1(\TS_{\tilde{\sub}_1}) = 17$ and $68 \leq \operatorname{rk}\check{H}^1(\TS_{\tilde{\sub}_2}) \leq 74$ and so by the result of Barge and Diamond, these invariants distinguish the substitutions $\sub_1$ and $\sub_2$. Hence we have $\TS_{\sub_1} \not\cong \TS_{\sub_2}$.
\emph{Acknowledgement.} The authors thank Scott Balchin for writing a computer program to determine the substitutions $\tilde{\sub}_1$ and $\tilde{\sub}_2$ after it became apparent that hand calculations would not be feasible in a reasonable amount of time.
\end{example}
\bibliographystyle{abbrv}
\section{Introduction}
Although its basis was laid in 1982 \cite{We82}, the dynamical
triangulation model of quantum gravity has received a lot of interest
in the last few years. See e.g.\ \cite{AgMi92a,AmJu92}. In this
model, the path integral over euclidean metrics is defined by a sum
over simplicial complexes. This sum can then be approximated using
Monte Carlo techniques, where a computer program generates
appropriately weighted configurations of simplices.
To generate all these configurations we need an algorithm that is
ergodic, i.e.\ a set of moves that can transform any triangulation
into any other triangulation with the same topology. A well known set
of moves that satisfy this condition are the so-called $(k,l)$ moves,
whose ergodicity was shown in \cite{GrVa92}.
\section{Noncomputability}
Unfortunately, the number of moves we need to get from one
configuration to another can be very large. To be more precise, the
following theorem holds: if the manifold under consideration is
unrecognizable, then for any set of local moves the number of moves
needed to get from one configuration of $N$ simplices to another such
configuration is not bounded by a computable function of $N$. This
was shown in ref.\ \cite{NaBe93}. We will explain some of the terms
in this theorem in a way that is not mathematically precise, but
hopefully intuitively clear. See \cite{NaBe93} for details.
A manifold is unrecognizable if, given a triangulation $A$ of this
manifold, there does not exist an algorithm that, given as input an
arbitrary triangulation $B$, can decide whether $A$ and $B$ are
homeomorphic. The definition of unrecognizability is not important
for the rest of this article, it is only important to know that for
some manifolds the above theorem holds. Certain four dimensional
manifolds are unrecognizable, but for the sphere $S^4$, which is
usually used in dynamical triangulation, this is not known. It is
known, however, that the five dimensional sphere $S^5$ is
unrecognizable.
Local moves are moves that involve a number of simplices that is
bounded by a constant, in other words a number that does not grow with
the volume of the configuration.
A computable function is a function from ${\cal N}$ to ${\cal N}$ that can be
computed by a large enough computer. Although the computable
functions are only an infinitesimally small fraction of all the
functions from ${\cal N}$ to ${\cal N}$, most functions one can think of are
computable. A fast-growing example of a computable function would be
$N!!\cdots!$ with $N$ factorial signs.
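As a toy illustration (ours, not the paper's), the iterated factorial is easy to program even though its values explode almost immediately; a sketch:

```python
# N!!...! with repeated factorials grows enormously, but is still
# computable by a straightforward program.
import math

def iterated_factorial(n, times):
    # apply the factorial 'times' times to n, with exact big integers
    for _ in range(times):
        n = math.factorial(n)
    return n

print(iterated_factorial(3, 1))            # 6
print(iterated_factorial(3, 2))            # 720
print(len(str(iterated_factorial(3, 3))))  # number of digits of 720!
```

Already three iterations starting from $3$ produce $720!$, an integer with well over a thousand digits.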
The above theorem might seem a terrible obstacle for numerical
simulation, but the theorem says nothing about the behaviour of the
number of moves needed to generate any particular size of
configuration. In fact, take any function $g(n)$ with the property
that it is not bounded by a computable function. Replacing any finite
number of values of this function will result in another function
$g'(n)$ that is also not bounded by a computable function.
\section{Barriers}
\newcommand{\ensuremath{N_{\mathrm{int}}(N)}}{\ensuremath{N_{\mathrm{int}}(N)}}
From the theorem stated above it follows that for an unrecognizable
manifold the maximum size \ensuremath{N_{\mathrm{int}}(N)}\ of the intermediate configurations
needed to interpolate between any two configurations of size $N$ is
also not bounded by a computable function of $N$. If \ensuremath{N_{\mathrm{int}}(N)}\ had
such a bound, then the number of possible configurations of size at
most \ensuremath{N_{\mathrm{int}}(N)}\ would bound the number of moves needed (a shortest
interpolation never visits the same configuration twice), which would
violate the theorem. A simple computable bound on the number of
configurations of size $N$ is $((d+1)N)!$, where $d$ is the dimension
of the simplices.
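The counting step can be made concrete with a small sketch (our own; the bound is deliberately crude):

```python
from math import factorial

def configuration_bound(size: int, d: int) -> int:
    # Crude computable bound: a triangulation built from `size` simplices of
    # dimension d uses at most (d+1)*size vertex slots, so there are at most
    # ((d+1)*size)! labelled ways to glue them.
    return factorial((d + 1) * size)

def move_bound(n_int: int, d: int) -> int:
    # If all intermediate configurations have size at most n_int, a shortest
    # sequence of moves never revisits a configuration, so its length is at
    # most the total number of configurations of size up to n_int.
    return sum(configuration_bound(s, d) for s in range(1, n_int + 1))
```

The point of the argument above is then that a computable bound on $N_{\mathrm{int}}(N)$ would turn `move_bound` into a computable bound on the number of moves, contradicting the theorem.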
It was pointed out in \cite{AmJu94} that this means that for such a
manifold there must exist very large barriers between certain points
in configuration space. Although the theorem itself does not settle
this, it seems natural that these barriers occur at all scales. We
can then apply the following method, which was formulated
in \cite{AmJu94}. We start from an initial configuration with minimum
size. For $S^4$ and $S^5$, there is a unique configuration of minimum
size with 6 and 7 simplices respectively. We increase the volume to
some large number and let the system evolve for a while, which might
take it over a large barrier. Next, we rapidly decrease the volume,
hoping to trap the configuration on the other side of this barrier.
We can check whether this has happened by trying to decrease the
volume even more. If this brings us back to the initial
configuration, we have gone full circle and cannot have been trapped
at the other side of such a barrier. Conversely, if we get stuck we
are apparently in a metastable state, i.e.\ at a point in
configuration space where the volume has a local minimum.
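The logic of this procedure can be caricatured on a toy configuration space (entirely our own construction, not the actual triangulation moves): a main chain of volumes with a dead-end side branch whose bottom is a local volume minimum.

```python
def neighbours(state):
    # Toy configuration space. States ("main", v) form a chain with volumes
    # v = 7, 8, 9, ...; states ("side", v), v = 12, ..., 20, form a dead-end
    # branch attached to the main chain at volume 20. Its bottom ("side", 12)
    # is a local volume minimum: a metastable state.
    kind, v = state
    if kind == "main":
        nbrs = [("main", v + 1)]
        if v > 7:
            nbrs.append(("main", v - 1))
        if v == 20:
            nbrs.append(("side", 20))
        return nbrs
    nbrs = []
    if v < 20:
        nbrs.append(("side", v + 1))
    if v > 12:
        nbrs.append(("side", v - 1))
    if v == 20:
        nbrs.append(("main", 20))
    return nbrs

def cool(state):
    # Rapid cooling: keep taking a volume-decreasing move while one exists.
    while True:
        down = [s for s in neighbours(state) if s[1] < state[1]]
        if not down:
            return state
        state = down[0]
```

Cooling from anywhere on the main chain goes full circle back to the minimal volume 7, while cooling from the side branch gets stuck at volume 12 -- which is how a metastable state would reveal itself.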
This was tried in \cite{AmJu94} for $S^4$, but no metastable states
were found. To judge the significance of this, it is useful to
investigate the situation for a manifold which is known to be
unrecognizable. It is rather difficult to construct a four
dimensional manifold for which this is known, but if we go to five
dimensions this is easy, because already the sphere $S^5$ is not
recognizable.
\section{Results}
Because my program for dynamical triangulation was written for any
dimension, it was not difficult to investigate $S^5$. The description
by Catterall in \cite{Ca94} of his dynamical triangulation program for
arbitrary dimension turned out to be a very close description of mine,
presumably because both were based on ideas put forward in
\cite{BrMa93,Br93}. The Regge-Einstein action in the five dimensional
model is
\begin{equation}
S = \kappa_5 N_5 - \kappa_3 N_3
\end{equation}
where $N_i$ is the number of simplices of dimension $i$. This is not
the most general action linear in $N_i$ in five dimensions as this
would take three parameters, but for the purposes of this paper this
is not relevant.
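Computing this action for a given configuration is immediate (a trivial sketch of ours):

```python
def regge_einstein_action(n5, n3, kappa5, kappa3):
    # S = kappa_5 * N_5 - kappa_3 * N_3, where n5 counts the 5-simplices and
    # n3 the 3-simplices (hinges) of the triangulation.
    return kappa5 * n5 - kappa3 * n3
```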
I generated 26, 24 and 8 configurations at $N_5 = 8000$, $16000$ and
$32000$ simplices respectively. These were recorded every 1000 sweeps,
starting after the first 1000 sweeps, where a sweep is $N_5$
accepted moves. All configurations were made at curvature coupling
$\kappa_3 = 0$, making each configuration contribute equally to the
partition function, in other words making them appear equally likely
in the simulation. Judging from the number of hinges $N_3$, the system
appeared to be thermalized after about 6000 sweeps, irrespective of the
volume.
The critical value of $\kappa_5$ (the bare cosmological constant)
below which the volume diverges was measured as explained in
\cite{CaKoRe94b,BaSm94b}. It turned out to be 0.8252(4), 0.8366(5)
and 0.8446(8) for the three volumes used. The last error cannot be
trusted, because of the low statistics at the largest volume.
Starting with these configurations, the volume was decreased by
setting $\kappa_5$ to a fixed number larger than the critical value.
We call this process cooling, because it attempts to reach a
configuration of minimum volume and thereby minimum action.
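A minimal Metropolis caricature of such a cooling run at $\kappa_3 = 0$ (our sketch; the real updates are the $(k,l)$-moves, not blind volume steps):

```python
import random
from math import exp

def toy_cooling_run(volume, kappa5, moves, rng):
    # Every proposed move changes the volume by +/-1 (in five dimensions all
    # moves change N_5) and is accepted with Metropolis probability
    # min(1, exp(-kappa5 * dv)), with a floor at the 7-simplex minimal
    # triangulation of S^5.
    for _ in range(moves):
        dv = rng.choice([-1, +1])
        if volume + dv < 7:
            continue
        if dv < 0 or rng.random() < exp(-kappa5 * dv):
            volume += dv
    return volume

final = toy_cooling_run(32000, kappa5=2.0, moves=200_000, rng=random.Random(1))
```

With $\kappa_5$ well above the critical value the volume drifts steadily down towards the floor; larger couplings suppress the upward fluctuations needed for the configuration to change at all.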
\begin{figure}[t]
\includegraphics[width=\textwidth]{inset2.eps}
\caption{A typical cooling run starting at $N_5 = 32000$, using
$\kappa_5 = 2$. The horizontal units are 1000 accepted moves. The
vertical axis is the number of 5-simplices. The inset is a blowup
of a small part of the curve.}
\label{run2fig}
\end{figure}
\begin{figure}[t]
\includegraphics[width=\textwidth]{inset.eps}
\caption{As figure \protect\ref{run2fig}, but with $\kappa_5 = 3$.}
\label{run3fig}
\end{figure}
For each configuration we cooled four times with $\kappa_5 = 2$ and
two times with $\kappa_5 = 3$. For both values of $\kappa_5$ used one
of the runs is shown in figures \ref{run2fig} and \ref{run3fig}. In
the insets we can see the typical volume fluctuations that occurred.
These were of the order 30 at $\kappa_5 = 2$ and 6 at $\kappa_5 = 3$.
The volume would first decrease very quickly until it reached roughly
a quarter of the starting value, and then decreased much more slowly.
In all cases the initial configuration of 7 simplices was reached. We
also tried $\kappa_5 = 4$. The same pattern of fast and slow cooling
was seen, but the slow stage was so slow that, due to CPU constraints,
these runs had to be stopped before either a stable situation or the
minimal volume was reached.
There is an important difference between four and five dimensions. In
four dimensions there is a move that leaves the volume constant.
Therefore the system can evolve at constant volume. In five
dimensions this is not possible, because all moves change the volume.
In this case the volume has to fluctuate for the configuration to
change. This is why much larger values of the cosmological constant
(such as were used in \cite{AmJu94}) would effectively freeze the
system.
\begin{figure}[t]
\includegraphics[width=\textwidth]{times.ps}
\caption{Number of moves needed for cooling at $\kappa_5 = 2$
as a function of the number of sweeps at the large volume of 16000
simplices. Horizontal units are 1000 sweeps, vertical units are
1000 accepted moves.}
\label{timefig}
\end{figure}
Initially, before the system was thermalized, there was a strong
positive correlation between the time used to evolve the system at
large volume and the time needed to cool the system back to the
initial configuration. The rates of fast and slow cooling did not
change, but the volume at which the slow cooling set in became larger.
Some of these times are shown in figure \ref{timefig} for the case of
16000 simplices. This trend did not continue, however, and the cooling
times seemed to converge.
\section{Discussion}
So, contrary to expectation, no metastable states were seen. Small
volume fluctuations were necessary, but these gave no indication of
the high barriers expected.
It is not clear why we see no metastable states. There are several
possibilities. First, the barriers might be much larger than 32000
simplices, so that we would need to start the cooling from extremely
large volumes. Second, there might be no barriers much larger than the
volumes we looked at: even though the size of the intermediate
configurations needed is not bounded by a computable function, it may
still grow very slowly at these volumes. Third, the metastable
regions in configuration space might be very small, and the chance
that we see one is therefore also very small.
It was speculated in \cite{AmJu94} that the absence of visible
metastable states might indicate that $S^4$ is indeed recognizable.
The results shown in this paper for $S^5$ (which is known to be
unrecognizable) indicate that, unfortunately, the results for $S^4$
say nothing about its recognizability. This, of course, in no way
invalidates the conclusion of \cite{AmJu94} that the number of
unreachable configurations of $S^4$ seems to be very small.
Recognizability is not the only thing that matters. Even if $S^4$ is
recognizable, this says little about the actual number of $(k,l)$
moves needed to interpolate between configurations, except that this
number might be bounded by a computable function.
\section{Introduction}
The concept of a triangulated category, introduced independently by Verdier \cite{MR1453167} and Puppe \cite{MR0150183}, appears naturally in various branches of mathematics. It is omnipresent in areas like algebraic geometry, stable homotopy theory, and representation theory; triangulated categories give a common framework for doing modern homological algebra in very different contexts.
Now, it might happen that we want to examine a triangulated category $\catT$ which is somehow `too big', making it hopeless to really understand certain facets off the bat. For instance, $\catT$ might have coproducts. In this scenario compact objects can be helpful: Following Beligiannis--Reiten~\cite{MR2327478}, any set $\catX$ of compact objects gives rise to a decomposition of the ambient category, in the form of a (stable) t-structure
\begin{align} \label{align the classical t-structure}
\left({}^\perp(\catX^\perp), \catX^\perp\right). \tag{$\text{t}_1$}
\end{align}
If $\catT$ is even generated by compact objects, then more tools become available: Neeman~\cite{MR1308405} proved that $\catT$ satisfies Brown representability, which in turn unveils a fairly constructive localization theory. This covers many categories that occur in nature; for instance, derived categories of rings and of (most) schemes are compactly generated.
Alas, the naive dual approach leads to a rather empty theory, in the sense that the resulting notion of cocompactness seldom appears in categories we wish to study: We show in Theorem~\ref{theorem cocompacts in derived category} that if $\catA$ is Grothendieck abelian, then the only cocompact object in $\catD(\catA)$ is $0$. On the other hand, non-trivial cocompact objects can only exist in \( \catK (\Modcat R) \) if we put certain subtle restrictions on the underlying set-theory (see Remark~\ref{rem cocompacts in homotopy category}).
So it becomes a natural goal to identify a more applicable variant of cocompactness, that is, a notion which not only allows for far-reaching theorems, but also actually shows up in categories one might be interested in.
In \cite{MR3946864}, coveting a more potent dual of (\ref{align the classical t-structure}), we introduced the weaker notion of 0-cocompactness ad hoc, and showed that if $\catX$ is a set of $0$-cocompact objects, then
\begin{align} \label{align our t-structure}
\left({}^\perp\catX, ({}^\perp\catX)^\perp\right) \tag{$\text{t}_2$}
\end{align}
is a (stable) t-structure in $\catT$. The point was that the definition was forgiving enough to apply to the homotopy categories studied in that paper.
In fact, the analogy to the classical theory has recently been strengthened with a Brown representability theorem for $0$-cocompactly cogenerated triangulated categories \cite{modoi2020weight}. Appendix~\ref{section dual brown rep} offers an enhanced version which additionally provides an explicit construction of the representing objects.
\medskip
In the current manuscript we explain how $0$-cocompact objects---as well as their duals, the $0$-compacts---come up in connection with certain dualities. In particular, there is an abundance of such objects in several categories of interest.
Below we present the main results of the paper, divided into three areas. All categories are $k$-categories for some commutative ring $k$. In this brief introduction we suppress any assumptions on existence of (co)products in the categories that appear; the full picture can be found in the pertinent sections.
\subsection*{I. Partial Serre duality}
The study and use of Serre functors goes back to the work of Bondal--Kapranov \cite{MR1039961} on mutations of exceptional collections. Since its introduction, the concept has become important in several areas. For instance, in representation theory the presence of a Serre functor is equivalent to the existence of almost split triangles \cite{MR910167,MR1112170}, and identifying objects along the Serre functor is the idea behind cluster categories \cite{MR2249625}; for geometers the Serre functor is a most utile weapon for handling the derived category of coherent sheaves \cite{MR1818984}. This classical concept of a Serre functor is limited to triangulated categories which are $\Hom$-finite over some field; we extend the notion in the following sense.
Fix an injective cogenerator $I$ of $\Modcat k$, and write $(-)^\ast=\Hom_k(-,I)$. A \emph{partial Serre functor} for a subcategory $\catX$ of a triangulated category $\catT$, is a functor $\mathbb S\colon \catX\to\catT$ such that \[\catT(X,T)^\ast\cong\catT(T,\mathbb SX)\] naturally in $X\in\catX$ and in $T\in\catT$.
Over a field $k$, \cite{BIKP,MR3914144} established such duality formulae in certain singularity categories of algebras and in stable module categories of finite group schemes. On the other hand, for $k=\mathbb Z$ and $I=\mathbb Q/\mathbb Z$ the object $\mathbb SX$ is known by topologists as the `Brown--Comenetz dual' of $X$ \cite{MR405403}.
Experts may recall that the fact that a Serre functor in the sense of \cite{MR1039961} is exact, is a non-trivial result. In Theorem~\ref{theorem main general result on partial serre} (proven in Appendix~\ref{section exactness of serre}) we offer an enhancement to our setup: The collection $\catX$ of all the `Serre dualizable objects' in $\catT$ is a triangulated subcategory. Moreover, there is a partial Serre functor $\mathbb S \colon\catX\to\catT$ which is exact.
However, our main motivation for studying this concept is that it links the notions of compactness and 0-cocompactness.
\begin{introtheorem1
If $\mathbb S\colon \catX\to\catT$ is a partial Serre functor, then $\catX$ consists of compact objects while $\mathbb S(\catX)$ consists of 0-cocompact objects.
\end{introtheorem1}
And so begins the important task of finding examples of partial Serre functors; this becomes a method for identifying 0-cocompact objects in practice.
\subsection*{II. Homotopy categories} If $\catT$ is a compactly generated triangulated category, then by Brown representability each compact object is Serre dualizable, and so there is a partial Serre functor $\mathbb S\colon\catT^{\compacts}\to\catT$. It follows that $\catT$ is also 0-cocompactly cogenerated, by the essential image $\mathbb S(\catT^{\compacts})$ --- see Corollary~\ref{corollary compact generation gives partial serre}.
However, such an approach does not reveal any explicit information about the induced 0-cocompact objects. We would really prefer to actually \emph{construct} partial Serre functors, not least in categories where Brown representability fails.
Let $R$ be any ring (if no other base ring is available, we can always choose $k=\mathbb Z$). Write $(-)^\vee=\Hom_R(-,R)$ and $\nu=(-^\vee)^\ast$, and let $\modcat R$ be the category of finitely presented $R$-modules.
\medskip
Each bounded complex $M$ over $\modcat R$ appears in an exact sequence \[P_1\stackrel{p}\to P_0\to M \to 0,\] where the $P_i$ are contractible and belong to $\Cb(\proj R)$. Indeed, this is nothing but a projective presentation in the category of complexes. Let $\mathbb S_{\mathsf M} M = \Ker\left(\nu(p)\right)[2]$.
\begin{introtheorem2
Let $R$ be a ring. Then $\mathbb S_{\mathsf M}$ defines a partial Serre functor \[\mathbb S_{\mathsf M} \colon \Kb(\modcat R)\to\catK(\Modcat R).\]
\end{introtheorem2}
In Proposition~\ref{proposition AR translate without contractibles} we show how to calculate $\mathbb S_{\mathsf M}M$ using just complexes of projectives, and not projective objects in the category of complexes. In particular, if $M$ is a module, then the 0-cocompact object $\mathbb S_{\mathsf M} M$ is simply the complex \[\tau M\hookrightarrow \nu P'_1 \to \nu P'_0,\] where $P'_1\to P'_0\to M\to0$ is a projective presentation in $\modcat R$ and $\tau M$ is the `usual' AR-translate of $M$.
By construction, the colocalizing subcategory of $\catK(\Inj R)$ generated by the essential image $\mathbb S_{\mathsf M}(\modcat R)$, consists of complexes of pure-injectives. This spurs Corollary~\ref{corollary existence of pure resolutions}, which shows that each complex of $R$-modules admits a pure-injective (and a pure-projective) resolution.
On the other hand, if $\Lambda$ is an Artin algebra then, choosing $I$ as an injective envelope of the semisimple $k/\!\rad k$, the functor $\mathbb S_{\mathsf M}$ becomes an auto-equivalence. In particular, $\Kb(\modcat \Lambda)$ is a set of 0-cocompact objects in $\catK(\Modcat\Lambda)$ (Observation~\ref{observation 0-cocompacts in Kb(Lambda)}).
\medskip
Recall from \cite{MR3212862,MR2923949} that the inclusion $\catK(\Inj R)\hookrightarrow\catK(\Modcat R)$ admits a left adjoint $\lambda$. If $X$ is a left bounded complex, then $\lambda X$ is an injective resolution of $X$.
Dually, by e.g.\ \cite{MR2680406} the inclusion $\catK(\Proj R)\hookrightarrow\catK(\Modcat R)$ admits a right adjoint $\rho$. If $X$ is a right bounded complex, then $\rho X$ is a projective resolution of $X$.
\begin{introtheorem3
Let \( R \) be a ring, and let \( \catK^{\bounded}_{\mathsf{fpr}}(R) \) be the subcategory of \( \Kb(\Modcat R) \) consisting of complexes admitting degree-wise finitely generated projective resolutions.
\begin{enumerate}
\item There is a partial Serre functor \( \mathbb S_{\mathsf I} \colon \lambda( \catK^{\bounded}_{\mathsf{fpr}}(R)) \to \catK(\Inj R) \) given by \[\mathbb S_{\mathsf I}(\lambda X) = \nu \rho X. \]
\item There is a partial Serre functor \( \mathbb S_{\mathsf P} \colon \rho(\catK^{\bounded}_{\mathsf{fpr}}(R^{\op}))^{\vee} \to \catK(\Proj R) \) given by \[ \mathbb S_{\mathsf P}((\rho X)^{\vee}) = \rho(X^\ast). \]
\end{enumerate}
\end{introtheorem3}
Recently, the authors of \cite{BIKP} showed that if $A$ is a Gorenstein algebra which is finite dimensional over a field, then there are Serre functors \[ \mathbb S_{\mathsf{sg}}\colon\Dsg(A)\to\Dsg(A) \text{ and } \mathbb S_{\mathsf G}\colon\Gstable A \to \Gstable A.\] We now realize that there is a bigger picture here: In Section~\ref{section transferring serre} we explain how partial Serre duality may be transported along an adjoint triple, and thus
\begin{itemize}
\item if $R$ is a noetherian ring, then $\mathbb S_{\mathsf I}$ induces a partial Serre functor \[\mathbb S_{\mathsf{sg}} \colon \Dsg(R)\to\Kac(\Inj R) \text{ (Theorem~\ref{theorem partial serre functor on Dsg}) and }\]
\item if $R$ also has a dualizing complex, then $\mathbb S_{\mathsf P}$ induces a partial Serre functor \[\mathbb S_{\mathsf{G}} \colon(\GStable R)^{\compacts}\to \GStable R \text{ (Theorem~\ref{theorem partial serre on gproj}). }\]
\end{itemize}
\subsection*{III. Auslander--Reiten theory} The concept of an almost split triangle in a triangulated category $\catT$, due to Happel \cite{MR910167}, is a powerful combinatorial tool: In fortunate cases, the collection of such triangles determines all the morphisms in $\catT$.
Existence theorems in this direction can thus be of some impact. For instance, if $\catT$ satisfies Brown representability and $X$ is a compact object with local endomorphism ring, then there is an almost split triangle
\begin{align*} \label{align intro almost split triangle}
\tau X \to M \to X \to \tau X [1]. \tag{$\ast$}
\end{align*}
Here $\tau X [1]$ is the representing object of $\Hom_{\End(X)}(\catT(X,-), I_X)$, where $I_X$ is an injective envelope of the simple $\End(X)$-module---see e.g.\ Beligiannis~\cite{MR2079606} or Krause~\cite{MR1803642}. However, $\tau X$ can sometimes be calculated using a more global approach:
\begin{introtheorem4
Suppose that the triangulated category $\catT$ is idempotent complete, and that $\mathbb S\colon\catX\to\catT$ is a partial Serre functor.
For each object $X$ in $\catX$ with local endomorphism ring, the triangle (\ref{align intro almost split triangle}) exists and appears as a summand of a triangle
\begin{align} \label{align our triangle}
\mathbb SX[-1]\to N \to X \to \mathbb SX. \tag{$\ast\ast$}
\end{align}
Suppose moreover that $k$ is noetherian, and that $I$ is the direct sum of the injective envelopes of the simple $k$-modules. If $\End(X)$ is $k$-finite, then (\ref{align our triangle})$=$(\ref{align intro almost split triangle}).
\end{introtheorem4}
By their definition, AR-triangles are completely self-dual. Partial Serre duality, on the other hand, is not self-dual: The Serre dual of a compact object is just 0-cocompact. In Section~\ref{section non-degeneracy} we introduce the notion that \emph{composition from $X$ to $Y$ is non-degenerate} if any non-zero $\catT$-submodule of $\catT(X,-)$ or $\catT(-,Y)$ contains a non-zero morphism $X\to Y$.
In the light of Proposition~\ref{proposition comp to T(X,SX) is non-degenerate}, this is a weaker form of partial Serre duality: If $X$ admits a Serre dual, then composition from $X$ to $\mathbb S X$ is non-degenerate. Still, there is an existence criterion for almost split triangles even in these terms: Suppose that \( X \) and \( Y \) have local endomorphism rings, and that either $\End(X)$ or $\End(Y)$ is artinian. In Corollary~\ref{corollary non-deg implies almost split triangle} we show that if composition from $X$ to $Y$ is non-degenerate, then there is an almost split triangle of the form
\begin{align*} \label{align generic almost split triangle}
Y[-1]\to E \to X \to Y. \tag{$\Delta$}
\end{align*}
Conversely, one might ask what the existence of such a triangle in general tells us about $X$. Clearly, composition from $X$ to $Y$ must be non-degenerate. Now the point is that this weaker form of duality \emph{is} self-dual, capturing more than classical compactness: In Theorem~\ref{theorem non-degeneracy implies 0-cocompactness with correct def} we show that if composition from $X$ to $Y$ is non-degenerate, then $X$ is 0-compact and $Y$ is 0-cocompact. It follows in particular that any object $X$ which appears in an almost split triangle of the form (\ref{align generic almost split triangle}), is 0-compact---see Corollary~\ref{corollary almost split forces 0-(co)compactness}.
It might be helpful to summarize some of the connections between these versions of compactness and duality in a graph.
\[\begin{tikzpicture}
\node (uu) at (6.5,2) {$X$ is compact};
\node (ur) at (0,2) {$X$ is `Serre dualizable'};
\node (ul) [text width=4cm] at (6.5,0) {$X$ appears in almost split $\tau X\to M \to X \to $};
\node (d) at (0,0) {$X$ has a `non-degenerate partner'};
\node (dd) at (0,-2) {$X$ is 0-compact};
\draw[-implies, double equal sign distance] (uu) to node[right,text width=2cm]{\scriptsize if local \( \End \) and Brown rep., \cite{MR2079606,MR1803642}, Thm~\ref{theorem Krause AR-theorem}}(ul);
\draw[-implies, double equal sign distance,bend right=30] (uu) to node[above]{\scriptsize$\text{if Brown rep.}$}(ur);
\draw[-implies, double equal sign distance] (ur) to node [above]{\scriptsize Thm~\ref{theorem partial Serre implies X compact and Y 0-cocompact}} (uu);
\draw[-implies, double equal sign distance] (ul) to node [below] {\scriptsize Thm~\ref{theorem almost split vs non deg}} (d);
\draw[-implies, double equal sign distance] (ur) to node [left] {\scriptsize Thm~\ref{proposition comp to T(X,SX) is non-degenerate}} (d);
\draw[-implies, double equal sign distance] (d) to node [right] {\scriptsize Thm~\ref{theorem non-degeneracy implies 0-cocompactness with correct def}} (dd);
\draw[-implies, double equal sign distance] (ur) to node [below left=-2mm,text width=2cm] {\scriptsize if local \( \End \), Thm~\ref{theorem our triangle summand of AR triangle}}(ul);
\end{tikzpicture}\]
\begin{ackn}
The authors thank Lidia Angeleri H\"ugel, Rosanna Laking, Jorge Vit\'oria and Georgios Dalezios for many discussions, comments and questions.
\end{ackn}
\section{(Co)compactness and 0-(co)compactness}
In a triangulated category $\catT$ with coproducts, an object $X$ is \emph{compact} if the natural morphism \[\coprod \catT(X,Y_i)\to\catT(X,\coprod Y_i)\] is invertible for each family $\{Y_i\}$. The subcategory of all compact objects in $\catT$ is denoted by $\catT^{\compacts}$. Dually---albeit appearing far less frequently in the literature---if $\catT$ admits products, then an object $Y$ is \emph{cocompact} if the natural morphism \[\coprod\catT(X_i,Y)\to\catT(\prod X_i, Y)\] is an isomorphism for each collection $\{X_i\}$.
We will now recall the more recent notion of $0$-cocompactness as introduced in \cite{MR3946864}, together with its dual. A bit of terminology is involved:
For a class of objects $\catX$ in $\catT$, an object $G$ is said to be a \emph{contravariant $\catX$-ghost} if $\catT(G,\catX)=0$; a morphism $g$ is a \emph{contravariant $\catX$-ghost} if $\catT(g,\catX)=0$. Dually, an object $G$ is a \emph{covariant $\catX$-ghost} if $\catT(\catX,G)=0$. In an abelian category, a diagram \[A_0 \stackrel{a_0}{\to} A_1 \stackrel{a_1}{\to} A_2 \to \cdots\] is \emph{dual Mittag-Leffler} (\emph{dual ML}) if the increasing chain $\Ker a_i \subset \Ker a_{i+1} a_i \subset \cdots$ stabilizes for each $i$.
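To fix intuition, here is a small illustration of the dual Mittag-Leffler condition (our example, not from the original text):

```latex
% Illustration: a diagram of k-modules that is not dual ML.
% Let $a$ be the left shift $a(x_0,x_1,x_2,\dots)=(x_1,x_2,\dots)$ on $k^{(\mathbb N)}$:
\[
  k^{(\mathbb N)} \xrightarrow{\;a\;} k^{(\mathbb N)} \xrightarrow{\;a\;} k^{(\mathbb N)} \to \cdots
\]
% Here $\Ker a^j$ consists of the sequences supported on the first $j$
% coordinates, so the chain $\Ker a \subset \Ker a^2 \subset \cdots$ is
% strictly increasing and never stabilizes. By contrast, any diagram of
% finite-dimensional vector spaces is dual ML, since an increasing chain of
% subspaces of a finite-dimensional space must stabilize.
```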
\begin{definition} \label{definition 0-cocompactness and 0-compactness}
An object $X\in\catT$ is \emph{$0$-cocompact} if $\holim\mathbb T$ is a contravariant $X$-ghost for each sequence \[\mathbb T\colon \cdots\to T_2\to T_1 \to T_0\] such that $\colim \catT(\mathbb T,X)=0$ and $\catT(\mathbb T,X[1])$ is dual ML.
Dually, $X$ is called \emph{$0$-compact} if $\hocolim\mathbb T$ is a covariant $X$-ghost for each sequence \[\mathbb T\colon T_0\to T_1 \to T_2 \to \cdots\] such that $\colim \catT(X,\mathbb T)=0$ and $\catT(X[1],\mathbb T)$ is dual ML.
\end{definition}
It is clear that any (co)compact object is $0$-(co)compact; the question of the converse is more interesting. Indeed, the fact that $0$-cocompactness in practice does appear more often than cocompactness, is the mainspring of our work.
\begin{example} \label{example K(vect)}
Consider the category \( \catK( \Modcat k ) \) for a field \( k \). Then the only cocompact object is \( 0 \), while all objects are \( 0 \)-cocompact.
Since any complex is homotopy equivalent to a complex with zero differential we have \( \catK( \Modcat k ) \simeq (\Modcat k)^{\mathbb{Z}} \), and can mainly argue in the category of vector spaces. For the first claim, note that \( \coprod_I \Hom_k(k, X) \subsetneq \Hom_k( \prod_I k, X) \) whenever the index set \( I \) is infinite and \( X \) is non-zero.
For the second claim, let \( X \in \catK( \Modcat k) \) and let \( \mathbb{T} \) be as in the definition of \( 0 \)-cocompactness. We assume that all complexes have zero differential. Let \( d \) be a degree such that \( X^d \neq 0 \), and pick an element \( ( \cdots, t_2, t_1, t_0 ) \in \limit T_i^d \). If \( t_n \neq 0 \), then we can find a linear map \( T_n^d \to X^d \) which does not send \( t_n \) to \( 0 \). This linear map gives rise to a non-zero element of \( \colim \Hom_k(T_i^d, X^d) \), and hence of \( \colim \Hom_{\catK(\Modcat k)}(T_i, X) \), contradicting the first assumption on the sequence. It follows that \( \limit T_i^d = 0 \).
Now assume that the sequence \( \cdots \to T_2^{d-1} \to T_1^{d-1} \to T_0^{d-1} \) does not satisfy the Mittag-Leffler condition. That means there is a subsequence
\[ \cdots \xrightarrow{\varphi_3} T_{n_2}^{d-1} \xrightarrow{\varphi_2} T_{n_1}^{d-1} \xrightarrow{\varphi_1} T_{n_0}^{d-1} \]
such that \( T_{n_0}^{d-1} \supsetneq \Imm \varphi_1 \supsetneq \Imm \varphi_1\varphi_2 \supsetneq \cdots \). It follows that the maps in the sequence
\[ \Hom(T_{n_0}^{d-1}, X^d) \to \Hom(\Imm \varphi_1, X^d) \to \Hom(\Imm \varphi_1\varphi_2, X^d) \to \cdots \]
are all proper epimorphisms. Note that \( X^d = X[1]^{d-1} \), thus we have a contradiction to the assumption that \( \Hom(\mathbb{T}, X[1]) \) is dual ML. It follows that the sequence \( \cdots \to T_2^{d-1} \to T_1^{d-1} \to T_0^{d-1} \) does satisfy the Mittag-Leffler condition, and in particular \( \limit^1 T_i^{d-1} = 0 \).
Finally note that \( \holim \mathbb{T} = \limit \mathbb{T} \oplus \limit^1 \mathbb{T}[-1] \). Thus we have \( \Hom( \holim \mathbb{T}, X) = 0 \) if and only if for any \( d \) such that \( X^d \neq 0 \) we have \( \limit \mathbb{T}^d = 0 \) and \( \limit^1 \mathbb{T}^{d-1} = 0 \), which are exactly the two points established above.
\end{example}
\begin{theorem} \label{theorem cocompacts in derived category}
Let \( \catA \) be a Grothendieck abelian category. The only cocompact object in \( \catD( \catA ) \) is \( 0 \).
\end{theorem}
In the proof we will utilize the following observation.
\begin{lemma} \label{lemma limits not cocompact}
Let $C$ be a cocompact object in a triangulated category $\catT$. If $C = \holim C_i$, then \( C \) is a direct summand of a finite direct sum \( \bigoplus_{i=1}^n C_i \).
\end{lemma}
\begin{proof}
Consider the triangle \( C \to \prod_i C_i \to \prod_i C_i \to C[1] \) defining the homotopy limit. Note that for any \( n \) we have the following short exact sequence.
\[ \begin{tikzcd}
0 \ar[r] \ar[d, >->] & \bigoplus_{i=1}^n C_i \ar[r] \ar[d, >->] & \bigoplus_{i=1}^n C_i \ar[r] \ar[d, >->] & 0 \ar[d, >->] \\
C \ar[r] \ar[d,->>] & \prod C_i \ar[r] \ar[d,->>] & \prod C_i \ar[r] \ar[d,->>] & C[1] \ar[d,->>] \\
C \ar[r] & \prod_{i>n} C_i \ar[r] & \prod_{i>n} C_i \ar[r] & C[1]
\end{tikzcd} \]
The induced map in the top row is an isomorphism, whence the monomorphism splits coherently. It follows that the bottom row also is a direct summand of the middle row, and in particular is a triangle.
By cocompactness of \( C \), and hence also \( C[1] \), the map \( \prod_i C_i \to C[1] \) vanishes on all but finitely many factors. Thus by choosing \( n \) sufficiently large we can ensure that the rightmost map in the bottom row vanishes. It follows that \( C \) is a direct summand of \( \prod_{i>n} C_i \). Again using that \( C \) is cocompact it is in fact a direct summand of a finite subproduct.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem cocompacts in derived category}]
Note first that any object is the homotopy limit of a sequence of bounded complexes of injectives (given by canonical truncation on the left and stupid truncation on the right). Thus it follows from Lemma~\ref{lemma limits not cocompact} that any cocompact object is a bounded complex of injectives. So consider a non-contractible bounded complex of injectives \( I \). Without loss of generality we assume that $I$ is concentrated in non-positive degrees, and that \( I^{-1} \to I^0 \) is not a split epimorphism.
We consider the map \( \Sigma \colon (I^0)^{(\mathbb N)} \to I^0 \), given by summing the entries of a finite sequence. Note that for any map \( \varphi \colon (I^0)^{(\mathbb N)} \to I^0 \) which factors through projection to a finite subcoproduct \( (I^0)^{(\mathbb N)} \to (I^0)^{ \{1, \ldots, n\} } \), the difference \( \Sigma - \varphi \) is still a split epimorphism.
Now, since \( I^0 \) is injective we may extend \( \Sigma \) to a map \( \hat \Sigma \colon (I^0)^{ \mathbb N } \to I^0 \). It follows immediately that also \( \hat \Sigma - \varphi \) is a split epimorphism for any \( \varphi \in \Hom_{\catA}(I^0, I^0)^{(\mathbb{N})} \). Consequently, the natural map
\[ \Hom_{\catD( \catA)}( I^0, I)^{(\mathbb N)} \to \Hom_{\catD(\catA)}((I^0)^{\mathbb{N}}, I) \]
does not hit \( \hat \Sigma \), hence is not an isomorphism.
\end{proof}
\begin{remark} \label{rem cocompacts in homotopy category}
For homotopy categories the situation is more subtle, and depends on our model of set theory. We will use methods we learned from \cite{MR1914985}---see this book also for a full account of the unexplained notation.
On the one hand, one can see that if there is a measurable cardinal, then no category \( \catK( \Modcat R ) \) can contain a non-zero cocompact object: If \( I \) is a set admitting a non-principal \( \omega_1 \)-ultrafilter, then an easy adaptation of \cite[Example~3.1]{MR1914985} gives an element of \( \Hom_{\catK(\Modcat R)}( X^I, X) \) which does not lie in the image of the canonical map from \( \Hom_{\catK(\Modcat R)}(X, X)^{(I)} \) for any object \( X \).
On the other hand, if there is no measurable cardinal, and \( R \) is a slender ring (for instance \( R = \mathbb Z \), see \cite[Section~III.2]{MR1914985}), then the {\L}o\'s--Eda-theorem (see \cite[Corollary~3.3]{MR1914985}) implies that \( R \), considered as a complex concentrated in one degree, is cocompact.
\end{remark}
When it comes to closure properties, (co)compact objects are much better behaved than \( 0 \)-(co)compacts: The triangulated subcategory $\catT^{\compacts}$ is thick, and often easy to describe completely in concrete examples. The collection of $0$-cocompact objects in $\catT$, on the other hand, is typically much more difficult to control. For instance, a summand of a $0$-cocompact object need not be $0$-cocompact again.
We do however have the following closure property.
\begin{lemma} \label{lemma products}
Let \( X_i \) be a set-indexed collection of \( 0 \)-cocompact objects. Then \( \prod X_i \) is \( 0 \)-cocompact.
\end{lemma}
\begin{proof}
Since \( \Hom \) commutes with products in the second argument, it suffices to observe that a product of sequences has vanishing colimit only if each factor has vanishing colimit, and similarly is dual ML only if each factor is dual ML.
\end{proof}
\subsection*{Compact generation}
For each collection $\catS$ of objects in $\catT$ we consider \[\catS^\perp=\{T\in\catT \vert \catT(\catS,T[n])=0 \text{ for each } n\}.\] $\catS$ is said to \emph{generate} $\catT$ if $\catS^\perp=0$; if $\catT$ admits a generating set consisting of compact objects, then $\catT$ is \emph{compactly generated}. If $\catT$ is compactly generated by $\catS$, then $\catT$ coincides with its smallest triangulated subcategory which contains $\catS$ and is closed under coproducts.
The following is Neeman's Brown representability theorem from \cite{MR1812507}.
\begin{theorem} \label{theorem brown representability}
If $\catT$ is a compactly generated triangulated category, then $\catT$ satisfies Brown representability, that is, each cohomological functor $\catT^{\op}\to\Ab$ which takes coproducts of $\catT$ to products in $\Ab$, is isomorphic to $\catT(-,T)$ for some $T\in\catT$.
\end{theorem}
Some useful consequences of Theorem~\ref{theorem brown representability}, extracted from \cite{MR1308405, MR1812507}, are:
\begin{theorem} \label{thm:neemanthreeadjoints}
Suppose $F\colon \catT'\to\catT$ is an exact functor between triangulated categories with $\catT'$ compactly generated.
\begin{enumerate}
\item $F$ admits a right adjoint if and only if $F$ preserves coproducts.
\item $F$ admits a left adjoint if and only if $F$ preserves products.
\item If $F$ admits a right adjoint $G$, then $F$ preserves compact objects if and only if $G$ preserves coproducts.
\end{enumerate}
\end{theorem}
Let us also recall a trick from \cite[Theorem~2.1]{MR1191736}.
\begin{theorem} \label{theorem Neeman trick}
Let $\catT$ be a compactly generated triangulated category, and let $X\in\catT^{\compacts}$. Then the subcategory $X^\perp$ of $\catT$ is compactly generated again.
Moreover, the left adjoint to the inclusion $X^\perp\hookrightarrow\catT$ induces an equivalence \[(X^\perp)^{\compacts}\simeq\catT^{\compacts}/\thick X\] up to direct summands.
\end{theorem}
\subsection*{0-cocompact cogeneration}
For a class of objects $\catS$ in $\catT$ we also consider \[{}^\perp\catS=\{T\in\catT \vert \catT(T[n],\catS)=0 \text{ for each } n\}.\] $\catS$ \emph{cogenerates} $\catT$ if ${}^\perp\catS=0$; if $\catT$ admits a cogenerating set which consists of 0-cocompact objects, then $\catT$ is \emph{0-cocompactly cogenerated}. By \cite[Theorem~6.6]{MR3946864}, if $\catT$ is $0$-cocompactly cogenerated by $\catS$, then $\catT$ coincides with its smallest triangulated subcategory which contains $\catS$ and is closed under products.
We end this section with some observations which are (weak) duals of results from the previous subsection. As one would expect, this story is far less complete than its classical counterpart.
For our dual version of Theorem~\ref{theorem brown representability} we refer to Appendix~\ref{section dual brown rep}.
As for Theorem~\ref{thm:neemanthreeadjoints}, we have the following partial `$0$-cocompact dual':
\begin{1weakdual} \label{theorem 1weakdual}
Let $F\colon\catT'\to\catT$ be a triangle functor with a right adjoint $G$. If $F$ preserves countable products, then $G$ preserves $0$-cocompact objects.
In particular, if $F$ additionally reflects $0$-objects, then $G$ takes any set of $0$-cocompact cogenerators for $\catT$ to a set of $0$-cocompact cogenerators for $\catT'$.
\end{1weakdual}
\begin{remark}
In contrast to the situation of Theorem~\ref{thm:neemanthreeadjoints}, here we do not get an ``if and only if'' statement. Indeed we have seen in Example~\ref{example K(vect)} that all objects in \( \catK(\Modcat k) \) are \( 0 \)-cocompact for a field \( k \). Thus any endofunctor of \( \catK(\Modcat k) \) preserves \( 0 \)-cocompacts, but clearly not every left adjoint endofunctor preserves countable products.
\end{remark}
\begin{proof} Let $X\in\catT$ be $0$-cocompact and consider a sequence \[\mathbb T \colon\cdots \to T_2\to T_1 \to T_0\] in $\catT'$ such that $\colim\catT'(\mathbb T, GX)=0$ and $\catT'(\mathbb T, GX[1])$ is dual ML. By adjunction we have that $\colim\catT(F\mathbb T, X)=0$ and that $\catT(F\mathbb T, X[1])$ is dual ML, hence $\catT(\holim F\mathbb T, X)=0$. If $F$ preserves countable products then it preserves homotopy limits, so in particular \[0=\catT(\holim F\mathbb T,X)=\catT(F\holim\mathbb T,X)\cong\catT'(\holim\mathbb T, GX)\] i.e.\ $GX$ is $0$-cocompact. The last claim follows immediately.
\end{proof}
Finally, our statement corresponding to Theorem~\ref{theorem Neeman trick} is
\begin{2weakdual} \label{theorem 2weakdual}
Suppose $\catT$ is 0-cocompactly cogenerated by $\catS$, and let $X\in\catS$. Then the subcategory ${}^\perp X$ of $\catT$ is $0$-cocompactly cogenerated again.
\end{2weakdual}
\begin{proof}
The stable t-structure (\ref{align our t-structure}) --- see page~\pageref{align our t-structure} --- shows that the subcategory ${}^\perp X$ is an aisle, so by \cite{MR907948} the inclusion ${}^\perp X \hookrightarrow\catT$ admits a right adjoint $G$. We now observe that ${}^\perp X$ is closed under countable products: Take a countable subset $\{T_i\}$ of ${}^\perp X$. Then $\prod T_i = \holim \mathbb T$ for the obvious system
\[\mathbb T\colon \cdots \to T_2\oplus T_1\oplus T_0 \to T_1\oplus T_0 \to T_0,\]
and $\catT(\mathbb T,X[i])$ is the zero-sequence for each \( i \). In particular it is dual ML with vanishing colimit, so $\prod T_i\in{}^\perp X$ by the $0$-cocompactness of $X$. It follows from Theorem~\ref{thm:neemanthreeadjoints}${}^{\op}$ that the essential image $G(\catS)$ is a set of $0$-cocompact cogenerators for ${}^\perp X$.
\end{proof}
\section{Partial Serre functors}\label{section rel serre}
Recall that \( \catT \) is a \( k \)-category for some commutative ring \( k \). Let us choose an injective \( k \)-module \( I \), which cogenerates \( \Modcat k \). We write \( (-)^\ast = \Hom_k (-,I) \). Typical examples include $k$ being artinian and $I$ the injective envelope of $k/\!\rad k$, or $k$ being the ring of integers and $I=\mathbb Q/\mathbb Z$.
\begin{observation}
We collect a few central properties of the functor \( (-)^\ast \), all of which are immediate.
\begin{itemize}
\item \( (-)^\ast \) is contravariant and exact.
\item \( (-)^\ast \) reflects \( 0 \)-objects and isomorphisms.
\item There is a natural monomorphism \( \id \to (-)^{\ast\ast} \).
\end{itemize}
\end{observation}
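For instance, if \( k \) is a field and \( I = k \), then \( (-)^\ast \) is ordinary vector space duality, and the natural monomorphism is the evaluation map
\[ M \to M^{\ast\ast}, \qquad m \longmapsto \left[ \phi \mapsto \phi(m) \right], \]
which is invertible precisely when \( M \) is finite-dimensional.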
\begin{definition}
A \emph{partial Serre functor} for a subcategory \( \catX\subset\catT \) is a functor \( \mathbb S \colon \catX \to \catT \) such that
\[ \catT(X,T)^\ast \cong \catT(T,\mathbb SX) \]
naturally in \( X \in \catX \) and in \( T \in \catT \).
\end{definition}
The functorial properties of partial Serre functors are interesting in their own right. In particular, when $\catT$ is a triangulated category, a partial Serre functor is automatically a triangle functor. However, here we are mostly concerned with the connection between partial Serre functors and notions of (co)compactness. Therefore the proof of the following theorem, which summarizes the functorial properties, is postponed to Appendix~\ref{section exactness of serre}.
\begin{theorem} \label{theorem main general result on partial serre}
Suppose \( \catT \) is a triangulated category, and let \( \catX \) be the full subcategory consisting of all objects \( X \) such that \( \catT(X, -)^\ast \) is representable.
Then \( \catX \) is a triangulated subcategory of \( \catT \), and there is a partial Serre functor \( \mathbb S \colon \catX \to \catT \). Moreover, \( \mathbb S \) is a triangle functor.
\end{theorem}
\begin{observation}
Any partial Serre functor $\mathbb S\colon \catX\to\catT$ is faithful, since we have \[\catT(X,Y)\hookrightarrow\catT(X,Y)^{\ast \ast} \cong \catT(\mathbb SX, \mathbb SY)\] for each $X,Y\in\catX$ by applying the duality formula twice.
On the other hand, $\mathbb S$ is full only when the subcategory $\catX$ has `sufficiently small' $\Hom$-sets: If $k$ is a field, this amounts to $\catX$ being $\Hom$-finite; if $k=\mathbb Z$ and $I=\mathbb Q/\mathbb Z$, then $\mathbb S$ is full provided that $\catT(X,Y)$ is a finite abelian group for each $X,Y\in\catX$.
\end{observation}
Partial Serre duality links $0$-cocompact objects to compact objects:
\begin{theorem} \label{theorem partial Serre implies X compact and Y 0-cocompact}
Let $\catT$ be a triangulated category. If $X,Y\in\catT$ satisfy \[\catT(X,-)^\ast \cong \catT(-,Y),\] then $X$ is compact and $Y$ is $0$-cocompact.
\end{theorem}
\begin{proof}
Let us first show that $X$ is compact. Consider a set of objects $\{T_i\}\subset \catT$, and for each $i$ let $\mu_i \colon T_i \to \coprod T_i$ be the canonical morphism. By assumption we have
\[ \begin{tikzcd}
\catT(X,\coprod T_i)^\ast \ar[r,"\cong"]\ar[d,"{\catT(X,\mu_i)^\ast}"] & \catT(\coprod T_i, Y) \ar[d,"\mu_i^\ast"] \\
\catT(X, T_i)^\ast \ar[r,"\cong"] & \catT(T_i, Y)
\end{tikzcd} \]
and taking products in the lower row gives another commutative diagram:
\[ \begin{tikzcd}
\catT(X,\coprod T_i)^\ast \ar[r,"\cong"]\ar[d] & \catT(\coprod T_i, Y) \ar[d] \\
\prod\catT(X, T_i)^\ast \ar[r,"\cong"] & \prod\catT(T_i, Y)
\end{tikzcd} \]
Since the right hand vertical morphism is invertible, so is the left hand vertical one. Moreover, the left hand vertical morphism is the dual of the natural morphism \[\coprod \catT(X, T_i) \to \catT(X, \coprod T_i),\] so the claim follows since $(-)^\ast$ reflects isomorphisms.
We now show that $Y$ is $0$-cocompact. Let \[\mathbb T\colon \cdots \to T_2 \to T_1 \to T_0\] be a sequence such that $\colim\catT(\mathbb T,Y)=0$ and $\catT(\mathbb T, Y[1])$ is dual ML. We want to conclude that $\holim \mathbb T$ is a contravariant $Y$-ghost. Equivalently, we can show that $\holim\mathbb T$ is a covariant $X$-ghost. The triangle \[\holim \mathbb T \to \prod T_i\to\prod T_i\to\holim\mathbb T[1]\] induces a long exact sequence
\[\begin{tikzpicture}[descr/.style={fill=white,inner sep=1.5pt}]
\matrix (m) [
matrix of math nodes,
row sep=1em,
column sep=2.5em,
text height=1.5ex, text depth=0.25ex
]
{ \cdots & \catT(X,\prod T_i[-1]) & \catT(X,\prod T_i[-1]) & \catT(X,\holim\mathbb T)\\
& \catT(X,\prod T_i) & \catT(X,\prod T_i) & \cdots. \\
};
\path[overlay,->, font=\scriptsize]
(m-1-1) edge (m-1-2)
(m-1-2) edge (m-1-3)
(m-1-3) edge (m-1-4)
(m-1-4) edge[out=355,in=175] (m-2-2)
(m-2-2) edge (m-2-3)
(m-2-3) edge (m-2-4);
\end{tikzpicture}\]
In particular there is a short exact sequence \[0\to\limit^1\catT(X,\mathbb T[-1])\to\catT(X,\holim\mathbb T)\to\limit\catT(X,\mathbb T)\to0,\] and it suffices to show that the outer terms vanish. Since the system $\catT(\mathbb T,Y[1])$ is the dual of the system $\catT(X,\mathbb T[-1])$, it follows that the latter is ML, so its derived limit vanishes. On the other hand, the vanishing of $\colim\catT(\mathbb T,Y)\cong\colim\left(\catT(X,\mathbb T)^\ast\right)$ implies the vanishing of $\colim\left(\catT(X,\mathbb T)^\ast\right)^\ast\cong \limit\left(\catT(X,\mathbb T)^{\ast\ast}\right)$. Moreover, since $\limit$ is left exact the monomorphism of diagrams $\catT(X,\mathbb T)\to\catT(X,\mathbb T)^{\ast\ast}$ induces a monomorphism $\limit\catT(X,\mathbb T)\to\limit\left(\catT(X,\mathbb T)^{\ast\ast}\right)$, whence $\limit\catT(X,\mathbb T)=0$.
\end{proof}
In particular, Theorem~\ref{theorem partial Serre implies X compact and Y 0-cocompact} says that if $\mathbb S\colon\catX\to\catT$ is a partial Serre functor, then $\catX\subset\catT^{\compacts}$, while the essential image $\mathbb S(\catX)$ consists of $0$-cocompact objects.
\begin{corollary} \label{corollary compact generation gives partial serre}
Let $\catT$ be a compactly generated triangulated category. Then there is a partial Serre functor $\mathbb S\colon \catT^{\compacts}\to\catT$, and the essential image $\mathbb S(\catT^{\compacts})$ is a set of $0$-cocompact cogenerators for $\catT$.
\end{corollary}
\begin{proof}
For each compact object $X$, the functor $\catT(X,-)^\ast$ is representable by Brown representability (Theorem~\ref{theorem brown representability}), so by Theorem~\ref{theorem main general result on partial serre} there is a partial Serre functor $\mathbb S \colon \catT^{\compacts}\to \catT$. The set $\mathbb S(\catT^{\compacts})$ consists of $0$-cocompact objects by Theorem~\ref{theorem partial Serre implies X compact and Y 0-cocompact}, hence the last claim follows from the fact that $(-)^\ast$ reflects the vanishing of $k$-modules.
\end{proof}
\begin{example} \label{example 0-cocompacts in D(Lambda)}
Let $\Lambda$ be an Artin algebra. Then $\catD(\Modcat \Lambda)^{\compacts}=\perf\Lambda$, and \[\mathbb S=-\otimes_\Lambda^{\mathbb L}D\Lambda\colon\perf\Lambda\to\catD(\Modcat\Lambda)\] is a partial Serre functor (we will give a more general argument in Example~\ref{ex Serre for derived}) inducing an equivalence $\perf\Lambda\simeq\mathbb S(\perf\Lambda)=\{\text{bounded complexes over }\inj\Lambda\}$ of subcategories of $\catD(\Modcat\Lambda)$. Note that $\mathbb S$ is an autoequivalence on $\perf\Lambda$ if and only if $\Lambda$ is Gorenstein, a fact already observed in \cite{MR910167}, in which case each perfect complex is 0-cocompact in $\catD(\Modcat\Lambda)$.
\end{example}
Recall that a triangle $X\to Y\to Z\to X[1]$ is called \emph{pure} if the morphism $Z\to X[1]$ is a covariant $\catT^{\compacts}$-ghost. Equivalently, for each compact object $C$, the induced sequence $0\to\catT(C,X)\to\catT(C,Y)\to\catT(C,Z)\to0$ is exact. An object $E\in\catT$ is \emph{pure-injective} if $\catT(-,E)$ takes pure triangles to short exact sequences.
\begin{remark}
If $\mathbb S\colon\catX\to\catT$ is a partial Serre functor, then $\mathbb SX$ is pure-injective for each $X\in\catX$. Indeed, we only need to check that $\catT(-,\mathbb SX)\cong\catT(X,-)^\ast$ takes pure triangles to short exact sequences. As $X$ is compact, the functor $\catT(X,-)$ does enjoy this property by definition. Since $(-)^\ast$ is exact, the claim follows.
\end{remark}
We do not know if $0$-cocompactness in general implies pure-injectivity. On the other hand, as was pointed out to us by Angeleri H\"ugel, there are pure-injective objects which are not $0$-cocompact:
\begin{example}
Since $\mathbb Q$ is injective as a $\mathbb Z$-module, it is pure-injective in $\catD(\Modcat \mathbb Z)$. However, $\mathbb Q$ is not $0$-cocompact: In the proof of Theorem~\ref{theorem Neeman trick}${}^{\op}$ we saw that if an object $X$ is $0$-cocompact, then the subcategory ${}^\perp X$ is closed under countable products, and it is not hard to realize that ${}^\perp \mathbb Q$ does not enjoy this property. Indeed, let $p$ be a prime, and consider the Prüfer $p$-group $P= \colim \mathbb Z/(p^i)$. Then $\Hom_{\mathbb Z}(P,\mathbb Q)=0$, since $\Hom_{\mathbb Z}(\mathbb Z/(p^i),\mathbb Q)=0$ for each $i$. However, the element $(p^{-1}+\mathbb Z,\, p^{-2}+\mathbb Z,\, \ldots)$ is not torsion, so the torsion submodule of $P^{\mathbb N}$ is a proper submodule; since $P^{\mathbb N}$ is divisible, the quotient by its torsion submodule is a non-zero $\mathbb Q$-vector space, whence $\Hom_{\mathbb Z}(P^{\mathbb N}, \mathbb Q)\ne0$.
Similarly, for a field $k$, the pure-injective $k(X)$ is not $0$-cocompact in $\catD(\Modcat k[X])$.
\end{example}
\section{Partial Serre functors in homotopy categories} \label{section construction of rel serre}
The aim of this section is to construct, in elementary terms, partial Serre functors for homotopy categories of module categories (Theorem~\ref{theorem partial serre for Kb(mod R)}), as well as for homotopy categories of injective or of projective modules (Theorem~\ref{theorem serre functors in K(Inj) and K(Proj)}).
The homotopy category of all modules is typically not compactly generated, and does not even satisfy Brown representability. Thus we cannot apply the general abstract existence result from Corollary~\ref{corollary compact generation gives partial serre}. Also for the homotopy categories of injectives or of projectives, the results in the current section apply beyond the cases where these categories are known to be compactly generated.
Even in cases where the abstract existence result does apply, having an explicit construction can be useful: For instance if we want to explicitly describe almost split triangles, rather than just claim their existence, we need to have an explicit description of the corresponding partial Serre functor first.
Before actually constructing anything, we record some facts from homological algebra that will be useful in the sequel.
\subsection*{Totalization} By a double complex $X$ we mean a diagram
\[ \begin{tikzcd}
& [-1.6em]\ar[d,-,thick,dotted] & \ar[d,-,thick,dotted]&[-1.6em] \\[-1em]
\ar[r,-,thick,dotted]& X^{i,j}\ar[r,"r_X"]\ar[d,"c_X"]&[4em] X^{i+1,j} \ar[d,"c_X"]& \ar[l,-,thick,dotted] \\
\ar[r,-,thick,dotted]& X^{i,j+1}\ar[r,"r_X"]& X^{i+1,j+1} & \ar[l,-,thick,dotted] \\[-1em]
& \ar[u,-,thick,dotted] & \ar[u,-,thick,dotted]&
\end{tikzcd} \]
in which the rows and columns are complexes, and moreover each square commutes. In the following discussion, we denote by $\Tot^\amalg(X)$ the totalization of $X$ with respect to coproducts, while $\Tot^\Pi(X)$ is the totalization of $X$ with respect to products. We denote by $\partial_X$ the differential on either variant of the total complex. Recall that \[\partial_X\vert_{\text{\tiny$X^{i,j}$}}=r_X + (-1)^i c_X.\]
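Concretely, in degree \( n \) we have
\[ \Tot^\amalg(X)^n = \coprod_{i+j=n} X^{i,j} \qquad \text{and} \qquad \Tot^\Pi(X)^n = \prod_{i+j=n} X^{i,j}, \]
so the two totalizations coincide whenever each diagonal \( \{ X^{i,j} \mid i+j=n \} \) contains only finitely many non-zero terms.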
Let $f\colon X\to Y$ be a morphism of double complexes. Then we can form a double complex $\Cone^{\row}(f)$ by taking the row-wise mapping cones of $f$ and equipping the columns of this object with the obvious differentials. In explicit terms, if $X=(X^{i,j}, r_X, c_X)$ and $Y=(Y^{i,j}, r_Y, c_Y)$, then $\Cone^{\row}(f)$ is as follows.
\[ \begin{tikzcd}
&[-1.6em] \ar[d,-,thick,dotted] &[2em] \ar[d,-,thick,dotted]&[-1.6em] \\[-1em]
\ar[r,-,thick,dotted] & Y^{i,j}\oplus X^{i+1,j}\ar[r,"\left(\begin{pampmatrix}r_Y& f\\0& -r_X\end{pampmatrix}\right)"]\ar[d,"\left(\begin{pampmatrix}c_Y& 0\\0& c_X\end{pampmatrix}\right)"]& Y^{i+1,j}\oplus X^{i+2,j} \ar[d,"\left(\begin{pampmatrix}c_Y& 0\\0& c_X\end{pampmatrix}\right)"]& \ar[l,-,thick,dotted] \\[.5em]
\ar[r,-,thick,dotted]& Y^{i,j+1}\oplus X^{i+1,j+1} \ar[r,"\left(\begin{pampmatrix}r_Y& f\\0& -r_X\end{pampmatrix}\right)"]& Y^{i+1,j+1}\oplus X^{i+2,j+1} & \ar[l,-,thick,dotted] \\[-1em]
& \ar[u,-,thick,dotted] & \ar[u,-,thick,dotted]&
\end{tikzcd} \]
\begin{lemma} \label{lemma totalization is associative}
Let $f\colon X \to Y$ be a morphism of double complexes. Then \[\Cone\left(\Tot^\ast(X)\xrightarrow{\Tot^\ast(f)}\Tot^\ast(Y)\right)=\Tot^\ast\left(\Cone^{\row}(f)\right)\] as complexes for $\ast\in\{\amalg,\Pi\}$.
\end{lemma}
\begin{proof}
Recall that \[\Cone(\Tot^\Pi(f))^n = \left(\prod_{i+j=n} Y^{i,j} \right)\oplus \left(\prod_{i+j=n+1}X^{i,j}\right),\] and that the differentials $d_{\catC}$ of this complex are given by \[d_{\catC}\vert_{\text{\tiny$Y^{i,j}$}} = \partial_Y \text{ and } d_{\catC}\vert_{\text{\tiny$X^{i,j}$}}=f-\partial_X.\] On the other hand, $\Cone^{\row}(f)^{i,j}=Y^{i,j}\oplus X^{i+1,j}$, and the differentials $d_{\catC^{\row}}$ of this double complex are given by $d_{\catC^{\row}}\vert_{\text{\tiny$Y^{i,j}$}} = r_Y + c_Y$ and $d_{\catC^{\row}}\vert_{\text{\tiny$X^{i,j}$}} = f - r_X + c_X$. In particular, \[\Tot^\Pi(\Cone^{\row}(f))^n = \prod_{i+j=n}(Y^{i,j}\oplus X^{i+1,j}),\] so the two complexes of the lemma do coincide in each degree. Moreover, the differentials $\partial_{\catT}$ of the latter complex are given by \[\partial_{\catT}\vert_{\text{\tiny$Y^{i,j}$}}= r_Y + (-1)^i c_Y = \partial_Y\] and, since $X^{i,j}$ lives in degree $(i-1,j)$ of this complex, \[\partial_{\catT}\vert_{\text{\tiny$X^{i,j}$}} = f - r_X + (-1)^{i-1}c_X = f-\partial_X.\]
Of course, the same can be said using $\coprod$ instead of $\prod$.
\end{proof}
If $f\colon X\to Y$ is a morphism of double complexes, then for each $n$ we have a chain map $f^{\bullet, n}\colon X^{\bullet, n}\to Y^{\bullet, n}$. Visually, $f^{\bullet, n}$ is the `horizontal layer' at height $n$ in the triple complex $f$. Notice that $f^{\bullet, n}$ is a quasi-isomorphism if and only if the $n$'th row of the double complex $\Cone^{\row}(f)$ is acyclic.
\begin{lemma} \label{lemma our acyclic assembly}
Let $f\colon X\to Y$ be a morphism of double complexes.
\begin{enumerate}
\item If $X$ and $Y$ are left bounded and each $f^{\bullet, n}$ is a quasi-isomorphism, then the chain map \[\Tot^\amalg(f)\colon\Tot^\amalg(X)\to\Tot^\amalg(Y)\] is a quasi-isomorphism.
\item If $X$ and $Y$ are right bounded and each $f^{\bullet, n}$ is a quasi-isomorphism, then the chain map \[\Tot^\Pi(f)\colon\Tot^\Pi(X)\to\Tot^\Pi(Y)\] is a quasi-isomorphism.
\end{enumerate}
\end{lemma}
\begin{proof}
Claim $(2)$ is dual to claim $(1)$, so it suffices to prove the latter.
By Lemma~\ref{lemma totalization is associative}, it suffices to show that $\Tot^\amalg(\Cone^{\row}(f))$ is acyclic. But by assumption, $\Cone^{\row}(f)$ is a double complex in which each row is acyclic and each diagonal is bounded on the lower left. The claim now follows from the Acyclic Assembly Lemma, see e.g.\ \cite[Lemma 2.7.3 and subsequent Remark]{MR1269324}.
\end{proof}
When $R$ is a ring and $X,Y\in\catC(\Modcat R)$, we write $\HOM_R(X,Y)$ for the double complex having $\Hom_R(X^i,Y^j)$ in position $(i,j)$. In particular \[\Hom_R(X,Y)=\Tot^\Pi(\HOM_R(X,Y)),\] meaning $\Hom_{\catC}(X,Y)=\Z^0 \Hom_R(X,Y)$, and $\Hom_{\catK}(X,Y)=\HH^0 \Hom_R(X,Y)$.
If $Z\in\catC(\Modcat R^{\op})$, then $X\tilde\otimes_R Z$ denotes the double complex having $X^j \otimes_R Z^i$ in position $(i,j)$---note the choice of coordinates---and \[X\otimes_R Z = \Tot^\amalg(X\tilde\otimes_R Z).\] The usual $\otimes$--$\Hom$-adjunction extends to an isomorphism of double complexes
\begin{align} \label{align adjunction type iso for double cpxs}
(X\tilde\otimes_R Z)^\ast \cong \HOM_R(X,Z^\ast).
\end{align}
Let us write $(-)^\vee=\Hom_R(-,R)$. Note that this functor induces an equivalence \( (\proj R)^{\op} \to \proj R^{\op} \), and hence also \[ \catC(\proj R)^{\op} \to \catC(\proj R^{\op}) \text{ and }\catK(\proj R)^{\op} \to \catK(\proj R^{\op}).\]
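Indeed, the quasi-inverse is again given by \( \Hom_{R^{\op}}(-,R) \), the point being that for \( P \in \proj R \) the evaluation map
\[ P \to P^{\vee\vee}, \qquad p \longmapsto \left[ \phi \mapsto \phi(p) \right], \]
is natural in \( P \) and invertible, since it is clearly invertible for \( P = R \).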
\begin{lemma} \label{lemma hom alg fact}
Let $R$ be a ring, let $M$ be a complex of $R$-modules, and let \( P \) be a complex of finitely generated projective $R$-modules. Then we have an isomorphism \[\HOM_R(P,M)\cong M\tilde\otimes_R P^\vee \] of double complexes, which is natural in $P$ and in $M$.
\end{lemma}
\begin{proof}
It suffices to observe that for $M\in\Modcat R$ and $P\in\proj R$, there is a natural morphism $M\otimes_R P^\vee \to \Hom_R(P,M)$ given by $m\otimes \phi\longmapsto\left[p\mapsto m\cdot\phi(p)\right]$, which is invertible when $P=R$.
\end{proof}
\begin{remark}
If \( P \) (or \( M \)) is a bounded complex then this isomorphism clearly implies that
\[ \Hom_R(P, M) \cong M \otimes_R P^{\vee}. \]
However, in general these two complexes are different, since the left hand side is a product totalization while the right hand side is a coproduct totalization.
\end{remark}
\subsection*{The homotopy category of modules} \label{subsection rel serre duality in K(Mod)} Let $R$ be a ring. Akin to the classical Auslander--Reiten translation of a finitely presented module, we find for each bounded complex $M$ of finitely presented $R$-modules a complex $\mathbb{S} M$ as follows. Let \[P_1 \stackrel{p}\to P_0 \to M \to 0\] be a projective presentation in the category $\Cb(\modcat R)$. Explicitly, this means that $P_1$ and $P_0$ belong to $\Cb(\proj R)$, and are moreover contractible. Define
\[ \mathbb{S} M = \Ker(\nu(p))[2],\]
where $\nu=(-^\vee)^\ast$.
The fact that this \( \mathbb{S} \) defines a partial Serre functor \( \Kb(\modcat R) \to \catK(\Modcat R) \) is a relatively straightforward extension of the familiar results from $\modcat R$, but we give a thorough account for the convenience of the reader.
We first remark that the required projective presentation exists:
\begin{construction} \label{construction proj pres by contractibles}
For a bounded complex $M$ over $\modcat R$, say \[M = \left(M^0 \to M^1\to M^2 \to \cdots \to M^n\right),\] pick epimorphisms $P_0^i\to M^i$ with $P_0^i\in\proj R$. Form the commutative diagram
\[ \begin{tikzcd}
P_0^0 \ar[r] \ar[d,->>]& P_0^0\oplus P_0^1 \ar[r] \ar[d,->>] & P_0^1\oplus P_0^2 \ar[r]\ar[d,->>] & \cdots \ar[r] & P_0^{n-1}\oplus P_0^n \ar[r]\ar[d,->>]& P^n_0\ar[d,->>]\\
M^0 \ar[r] & M^1 \ar[r] & M^2 \ar[r] & \cdots \ar[r] & M^n \ar[r]& 0
\end{tikzcd} \]
where the top row is equipped with the obvious differentials making it a contractible complex. Now repeat, replacing $M$ by the induced complex of kernels, to obtain a contractible complex $P_1$, and hence a projective presentation of $M$ in $\Cb(\modcat R)$.
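For definiteness, writing \( \pi^i \colon P_0^i \twoheadrightarrow M^i \) for the chosen epimorphisms and \( d_M \) for the differential of \( M \), one may take the differential \( P_0^{i-1}\oplus P_0^i \to P_0^{i}\oplus P_0^{i+1} \) of the top row to be \( \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \), and the vertical map in degree \( i \) to be
\[ \begin{pmatrix} d_M \pi^{i-1} & \pi^i \end{pmatrix} \colon P_0^{i-1}\oplus P_0^i \longrightarrow M^i. \]
One checks directly that the squares commute, that each vertical map is an epimorphism, and that \( \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \) is a contracting homotopy for the top row.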
\end{construction}
Now we work towards a version of Auslander's defect formula.
\begin{lemma} \label{lemma.prep_for_defect}
Let \( R \) be a ring and \( P \) a finite contractible complex of finitely generated projectives. Then
\[ \Hom_{\catC(\Modcat R)}(P, -)^\ast \cong \Hom_{\catC(\Modcat R)}(-, \nu P[1]) \]
functorially in \( P \).
\end{lemma}
\begin{proof}
As double complexes, we know that
\begin{align*}
\HOM_R(P, -)^\ast & \cong (- \tilde\otimes_R P^{\vee})^\ast && \text{Lemma~\ref{lemma hom alg fact}} \\
& \cong \HOM_R(-, (P^{\vee})^\ast) && \text{(\ref{align adjunction type iso for double cpxs})} \\
& = \HOM_R(-, \nu P).
\end{align*}
Since \( P \) is a finite complex, all these double complexes have finite diagonals. So in particular \( \Tot^{\amalg} \) and \( \Tot^{\Pi} \) coincide and commute with dualizing. Therefore, as complexes we have
\[ \Hom_R(P, -)^{\ast} \cong \Hom_R(-, \nu P). \]
Note that since \( P \) is contractible these two complexes are exact. It follows that
\begin{align*}
\Hom_{\catC(\Modcat R)}(P, -)^\ast &= (Z^0(\Hom_R(P, -)))^{\ast} \\
& \cong Z^1(\Hom_R(P, -)^{\ast}) \\
& \cong Z^1(\Hom_R(-, \nu P)) \\
& = Z^0(\Hom_R(-, \nu P[1])) = \Hom_{\catC(\Modcat R)}(-, \nu P[1]). \qedhere
\end{align*}
\end{proof}
Let $\mathbb E \colon 0\to A\to B\to C\to0$ be an extension in $\catC(\Modcat R)$. The \emph{defects} $\mathbb E_{\defect}$ and $\mathbb E^{\defect}$ are defined, respectively, by exactness of the sequences
\begin{align*}
& 0\to\Hom_{\catC}(C,-)\to\Hom_{\catC}(B,-)\to\Hom_{\catC}(A,-)\to\mathbb E_{\defect}\to0; \\
& 0\to\Hom_{\catC}(-,A)\to\Hom_{\catC}(-,B)\to\Hom_{\catC}(-,C)\to\mathbb E^{\defect} \to0.
\end{align*}
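Note that the defects measure the failure of \( \mathbb E \) to be split: each of \( \mathbb E_{\defect} \) and \( \mathbb E^{\defect} \) vanishes identically if and only if \( \mathbb E \) is split, since for instance
\[ \mathbb E_{\defect}(A) = 0 \iff \id_A \text{ extends along } A \to B. \]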
\begin{proposition}[Auslander's defect formula] \label{proposition defect formula}
Let $R$ be a ring and take an extension $\mathbb E \colon 0\to A\to B\to C\to0$ in $\catC(\Modcat R)$. For each $M\in\Cb(\modcat R)$ there is an isomorphism \[\mathbb E^{\defect}(M)^\ast\cong \mathbb E_{\defect} (\mathbb{S} M [-1])\] which is natural in $M$.
\end{proposition}
\begin{proof}
Let $P_1 \stackrel{p}\to P_0\to M\to0$ be a projective presentation. Note that the \( P_i \) satisfy the assumptions of Lemma~\ref{lemma.prep_for_defect} above. The exact sequence
\[ 0 \to \Hom_{\catC(\Modcat R)}(M, -) \to \Hom_{\catC(\Modcat R)}(P_0, -) \to \Hom_{\catC(\Modcat R)}(P_1, -) \]
dualizes to the upper row in the following diagram of functors on $\catC(\Modcat R)$ with exact rows, where we write \( (-,-) \) for \( \Hom_{\catC(\Modcat R)}(-, -) \).
\[ \begin{tikzcd}
& (P_1, -)^\ast \ar[r] \ar[d,"\cong"]& (P_0, -)^\ast \ar[r,->>] \ar[d,"\cong"]& (M, -)^\ast \\
(-, \Ker \nu(p)[1]) \ar[r,>->]& (-,\nu P_1[1])\ar[r] & (-, \nu P_0[1])
\end{tikzcd} \]
The vertical isomorphisms are precisely the ones provided by Lemma~\ref{lemma.prep_for_defect}. Note that \( \Ker \nu(p)[1] = \mathbb{S}M[-1] \) by definition of \( \mathbb{S} \).
From $\mathbb E$ we thus get the following commutative diagram with exact rows and columns.
\[ \begin{tikzcd}
(C, \mathbb SM[-1]) \ar[r,>->] \ar[d,>->] & (B,\mathbb S M[-1]) \ar[r] \ar[d,>->] & (A,\mathbb SM[-1]) \ar[d,>->] \\
(P_1, C)^\ast \ar[r,>->] \ar[d] & (P_1,B)^\ast \ar[r,->>] \ar[d] & (P_1,A)^\ast \ar[d] \\
(P_0, C)^\ast \ar[r,>->] \ar[d,->>] & (P_0,B)^\ast \ar[r,->>] \ar[d,->>] & (P_0,A)^\ast \ar[d,->>] \\
(M, C)^\ast \ar[r] & (M,B)^\ast \ar[r,->>] & (M,A)^\ast
\end{tikzcd} \]
The Snake Lemma yields that the cokernel in the first row, which is \( \mathbb{E}_{\defect}(\mathbb{S}M[-1]) \), coincides with the kernel in the last row, which is \( \mathbb{E}^{\defect}(M)^\ast \).
\end{proof}
\begin{theorem} \label{theorem partial serre for Kb(mod R)}
Let \( R \) be a ring. Then \( \mathbb{S} \) defines a partial Serre functor
\[ \mathbb{S} \colon \Kb(\modcat R) \to \catK(\Modcat R) \colon M \longmapsto \Ker(\nu(p))[2]. \]
\end{theorem}
\begin{proof}
Let \( M \in \Kb(\modcat R) \) and \( X \in \catK(\Modcat R) \). We need to show that
\[ \Hom_{\catK}(M, X)^\ast \cong \Hom_{\catK}(X, \mathbb{S}M) \]
naturally in \( M \) and \( X \).
Let \( C_X =\coCone(\id_X) \). Then we have the short exact sequence
\[ \mathbb{E} \colon 0 \to X[-1] \to C_X \to X \to 0 \]
in \( \catC(\Modcat R) \). Moreover, a map to \( X \) is null-homotopic if and only if it factors through \( C_X \). In other words,
\[ \Hom_{\catK}(-, X) = \mathbb{E}^{\defect}. \]
Similarly \( \Hom_{\catK}(X[-1], -) = \mathbb{E}_{\defect} \). We know from Proposition~\ref{proposition defect formula} that
\begin{align*} \Hom_{\catK}(M, X)^\ast &= \mathbb E^{\defect}(M)^\ast \\ & \cong \mathbb E_{\defect} (\mathbb{S} M [-1]) = \Hom_{\catK}(X[-1], \mathbb{S} M[-1]) = \Hom_{\catK}(X, \mathbb{S} M). \qedhere
\end{align*}
\end{proof}
\begin{observation} \label{observation 0-cocompacts in Kb(Lambda)}
Let $\Lambda$ be an Artin algebra.
We can now offer an arguably more conceptual explanation than the one given in \cite[Corollary~6.10]{MR3946864} of the fact that bounded complexes of finitely generated modules are $0$-cocompact objects in the homotopy category $\catK(\Modcat \Lambda)$: We choose \( I \) to be an injective envelope of \( k /\!\rad k \).
In this setup $\mathbb{S}$ admits a quasi-inverse \( \mathbb{S}^- \). Explicitly, for each $M$ choose an injective copresentation $0\to M \to I^0 \to I^1$ in $\Cb(\modcat \Lambda)$, and let \[\mathbb{S}^-(M)= \Cok\left(\nu^- I^0 \to \nu^-I^1\right)[-2].\] Here we use the fact that $\nu^- = \Hom_{\Lambda^{\op}}((-)^\ast,\Lambda)$ is a quasi-inverse of \( \nu \) as a functor from finitely generated projective to finitely generated injective modules. It is immediate from the construction that \( \mathbb{S} \) and \( \mathbb{S}^- \) are mutually quasi-inverse.
In particular, the partial Serre functor $\mathbb S$ is an auto-equivalence on the subcategory $\Kb(\modcat\Lambda)$, which thus consists of $0$-cocompact objects by Theorem~\ref{theorem partial Serre implies X compact and Y 0-cocompact}.
\end{observation}
For explicit calculations, it is sometimes convenient not to need projective objects in the category of complexes, but rather just complexes of projectives. The following gives an alternative way of calculating $\mathbb S M$ using these.
\begin{proposition} \label{proposition AR translate without contractibles}
If $Q\stackrel{q}\to P \to M \to 0$ is an exact sequence in $\Cb(\modcat R)$ with $Q$ and $P$ consisting of projective modules, then we have \[\mathbb{S} M = \Tot\left(\Ker(\nu(q)) \hookrightarrow\nu Q \xrightarrow{\nu(q)} \nu P \right) \]
in \( \catK(\Modcat R) \). Here \( \nu P \) is the \( 0 \)-th column of the double complex.
\end{proposition}
\begin{proof} Observe that if \( P \) and \( Q \) happen to be projective in the category of complexes, then the claimed formula is just a restatement of the definition of \( \mathbb{S} \). Indeed, in that case \( \nu P \) and \( \nu Q \) are contractible, and thus the total complex is isomorphic to \( \Ker \nu(q)[2] \). The proof now consists of two independent steps, showing respectively that we may replace \( P \) and \( Q \) by projectives in the category of complexes, without changing the result of the formula.
\emph{Step 1:} Let $Q\stackrel{q}\to P \to M \to 0$ be an exact sequence of complexes with $Q$ and $P$ consisting of projectives.
Choose $\overline P = \coCone(\id_P)$. Then $\overline P$ is projective in the category of complexes, and appears in a canonical degree-wise split exact sequence \[0 \to P[-1] \stackrel{f}\to \overline P \stackrel{g}\to P \to 0.\] Since $g$ is an epimorphism, the pullback of $g$ and $q$ is even bicartesian. In particular, the middle row of the following diagram is also exact.
\[ \begin{tikzcd}
Q \ar[r,"q"] & P \ar[r,->>] & M \ar[d,equal] \\
\tilde Q \ar[r,"\tilde q"] \ar[u] & \overline P \ar[r,->>] \ar[u,swap,"g"] & M \\
P[-1] \ar[r,equal] \ar[u,swap,"h"] & P[-1] \ar[u,swap,"f"]
\end{tikzcd} \]
Notice that $\tilde Q$ is again a complex of projectives; in fact, $\tilde Q = \coCone(q)$. Moreover, since the middle column is degree-wise split, so is the leftmost column. Indeed, a splitting of $h$ is obtained by composing $\tilde q$ with a splitting of $f$. It follows that application of $\nu$ and then totalization gives
\begin{align*} \label{align first step}
\Tot\left(\Ker(\nu(q)) \hookrightarrow \nu Q \xrightarrow{\nu(q)} \nu P\right) = \Tot\left(\Ker(\nu(\tilde q)) \hookrightarrow \nu \tilde Q \xrightarrow{\nu(\tilde q)} \nu \overline P\right) \tag{$\ast$}
\end{align*}
up to a contractible summand.
\emph{Step 2:} Let $Q\stackrel{q}\to P \to M \to 0$ be an exact sequence of complexes with $Q$ and $P$ consisting of projectives.
Choose $\overline Q = \coCone(\id_Q)$. Then $\overline Q$ is projective in the category of complexes, and appears in a canonical degree-wise split exact sequence \[0 \to Q[-1] \stackrel{f}\to \overline Q \stackrel{g}\to Q \to 0.\] Form the commutative diagram
\[ \begin{tikzcd}
Q \ar[r,"q"] & P \ar[r,->>] & M \ar[d,equal] \\
\overline Q \ar[r,"p"] \ar[u] & P \ar[r,->>] \ar[u,equal] & M \\
Q[-1] \ar[u,swap,"f"]
\end{tikzcd} \]
and note that the bottom row is also exact: since $g$ is an epimorphism, $p$ and $q$ have the same image. Application of $\nu$ yields the commutative diagram below; the fact that the bottom left corner is $\nu Q[-1]$ follows from the Snake Lemma.
\[ \begin{tikzcd}
\Ker(\nu(q)) \ar[r,>->] & \nu Q \ar[r,"\nu(q)"] & \nu P \ar[d,equal] \\
\Ker(\nu(p)) \ar[r,>->] \ar[u] & \nu \overline Q \ar[r,"\nu(p)"] \ar[u] & \nu P \\
\nu Q[-1] \ar[r,equal] \ar[u,swap,"h"] & \nu Q[-1] \ar[u,swap,"\nu(f)"]
\end{tikzcd} \]
Since the middle column is degree-wise split, so is the leftmost one: A splitting of $h$ is given by composing $\Ker(\nu(p))\hookrightarrow\nu \overline Q$ with a splitting of $\nu(f)$. It follows that, up to contractible summands,
\begin{align*} \label{align second step}
\Tot\left(\Ker(\nu(q))\hookrightarrow\nu Q \xrightarrow{\nu(q)} \nu P\right) = \Tot\left(\Ker(\nu(p))\hookrightarrow \nu \overline Q\xrightarrow{\nu(p)} \nu P\right). \tag{$\ast\ast$}
\end{align*}
Now to complete the proof of the proposition, suppose $Q\stackrel{q}\to P \to M \to 0$ is an exact sequence in $\Cb(\modcat R)$ such that $Q$ and $P$ consist of projective modules. Successive application of the above two steps reveals a commutative diagram
\[ \begin{tikzcd}
Q \ar[r,"q"] & P \ar[r,->>] & M \ar[d,equal] \\
\tilde Q \ar[r,"\tilde q"] \ar[u] & \overline P \ar[r,->>] \ar[u] \ar[d,equal] & M \ar[d,equal] \\
\overline Q \ar[r,"p"] \ar[u] & \overline P \ar[r,->>] & M
\end{tikzcd} \]
with exact rows, where $\overline Q$ and $\overline P$ are projective objects in the category of complexes. Then (\ref{align first step}) and (\ref{align second step}) give us, up to contractible summands,
\begin{align*}
\Tot\left(\Ker(\nu(q))\hookrightarrow \nu Q \xrightarrow{\nu(q)}\nu P\right) & = \Tot\left(\Ker(\nu(\tilde q))\hookrightarrow \nu \tilde Q \xrightarrow{\nu(\tilde q)}\nu \overline P\right) \\
& = \Tot\left(\Ker(\nu(p))\hookrightarrow \nu \overline Q \xrightarrow{\nu(p)}\nu \overline P \right) \\
& = \Ker(\nu (p))[2] \\ & = \mathbb S M. \qedhere
\end{align*}
\end{proof}
\begin{example}
If $M$ is an $R$-module, then we can simply use a projective presentation in $\modcat R$ in order to calculate $\mathbb{S} M$. That is, an exact sequence $P_1\stackrel{p}\to P_0 \to M \to 0$ of modules with $P_0, P_1 \in\proj R$, resulting in \[\mathbb{S} M = \Tot\left(\tau M \hookrightarrow \nu P_1 \to \nu P_0\right)=\left(\tau M \hookrightarrow \nu P_1 \to \nu P_0\right),\] where $\tau$ denotes the `usual' Auslander--Reiten translation on the category of finitely presented modules.
Note that \( \nu \) is right exact, so in this case \( \mathbb{S} M \) is quasi-isomorphic to \( \nu M \).
In particular, if \( M \) is a finitely generated projective module, then our formula degenerates and \( \mathbb{S} M = \nu M \).
\end{example}
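For a concrete sanity check (our own small computation, using only the formula of the example above), let \( \Lambda = k[x]/(x^2) \) and let \( M = k \) be the simple module, with projective presentation \( \Lambda \xrightarrow{\,x\,} \Lambda \to k \to 0 \).

```latex
% Our illustration: \Lambda = k[x]/(x^2) is self-injective, so
% \nu\Lambda \cong \Lambda, and \nu(x\cdot) is again multiplication by x.
% Since \Ker(x\colon\Lambda\to\Lambda) = x\Lambda \cong k, the formula gives
\[
  \mathbb{S}k
  \;=\;
  \bigl(\, \underbrace{x\Lambda}_{\tau k \,\cong\, k}
         \hookrightarrow \Lambda \xrightarrow{\;x\;} \Lambda \,\bigr).
\]
% Cohomology: \HH^{0} = \Lambda/x\Lambda \cong k \cong \nu k, while
% \HH^{-1} = \Ker(x)/x\Lambda = 0 and \HH^{-2} = 0.  So \mathbb{S}k is
% quasi-isomorphic to \nu k, as the example predicts.
```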
\subsection*{Homotopy categories of injectives and of projectives} Let $R$ be a ring. It follows from \cite{MR2439608} that the inclusion $\catK(\Proj R)\hookrightarrow\catK(\Modcat R)$ admits a right adjoint $\rho$. Note that if $X$ is right bounded, then $\rho X$ is a projective resolution of $X$. Indeed, let $Q\in\catK(\Proj R)$ and suppose $P$ is a right bounded complex of projectives which is quasi-isomorphic to $X$. Denoting by $Q^{\leq n} $ the obvious quotient of $Q$, we get \[\Hom_{\catK}(Q,X) =\Hom_{\catK}(Q^{\leq n},X) \cong\Hom_{\catK}(Q^{\leq n},P)=\Hom_{\catK}(Q,P) \] for sufficiently large $n$, where the isomorphism in the middle holds because $Q^{\leq n}$ is homotopically projective.
On the other hand, by \cite{MR3212862}, the inclusion $\catK(\Inj R)\hookrightarrow\catK(\Modcat R)$ admits a left adjoint $\lambda$. If $X$ is left bounded, then $\lambda X$ is an injective resolution of $X$.
\begin{theorem} \label{theorem serre functors in K(Inj) and K(Proj)}
Let \( R \) be a ring and let \( \catK^{\bounded}_{\mathsf{fpr}}(R) \) denote the subcategory of \( \Kb(\Modcat R) \) consisting of complexes admitting degree-wise finitely generated projective resolutions.
\begin{enumerate}
\item There is a partial Serre functor \( \mathbb{S} \colon \lambda( \catK^{\bounded}_{\mathsf{fpr}}(R) ) \to \catK(\Inj R) \) given by \[ \mathbb{S}(\lambda X) = \nu \rho X.\]
\item There is a partial Serre functor \( \mathbb{S} \colon \rho(\catK^{\bounded}_{\mathsf{fpr}}(R^{\op}))^{\vee} \to \catK(\Proj R) \) given by \[ \mathbb{S}((\rho X)^{\vee}) = \rho(X^\ast). \]
\end{enumerate}
\end{theorem}
\begin{remark}
\begin{enumerate}
\item In particular it follows that \( \lambda( \catK^{\bounded}_{\mathsf{fpr}}(R) ) \) is a set of compact objects in \( \catK(\Inj R) \). Note that if \( R \) is right coherent then \( \lambda(\catK^{\bounded}_{\mathsf{fpr}}(R)) \) is nothing but the bounded derived category of finitely presented modules, realized inside \( \catK(\Inj R) \) via injective resolutions. If \( R \) is even noetherian then it is shown in \cite{MR2157133} that these are in fact all the compact objects, and that \( \catK(\Inj R) \) is compactly generated.
\item Similarly, the set \( \rho(\catK^{\bounded}_{\mathsf{fpr}}(R^{\op}))^{\vee} \) consists of compact objects in \( \catK( \Proj R ) \). If \( R \) is left coherent then this is equivalent to the opposite of the bounded derived category of finitely presented left \( R \)-modules. If \( R \) additionally admits a dualizing complex then it is shown in \cite{MR2132765} that these are in fact all the compact objects, and that \( \catK(\Proj R ) \) is compactly generated.
\end{enumerate}
\end{remark}
\begin{proof}
(1): Let $I\in\catK(\Inj R)$, and let $X\in \catK^{\bounded}_{\mathsf{fpr}}(R)$. Pick a projective resolution $\rho X$ of $X$ with finitely generated terms, and let $\rho X \to X$ be a quasi-isomorphism. In particular, $\rho X$ is right bounded. Hence, by Lemma~\ref{lemma our acyclic assembly}, the induced morphism $\HOM_R(X,I) \to \HOM_R(\rho X, I)$ of left bounded double complexes totalizes to a quasi-isomorphism \[\Tot^\amalg(\HOM_R(X,I)) \to \Tot^\amalg(\HOM_R(\rho X, I)).\] Since $X$ is a bounded complex, this means in particular that
\begin{align*}
\Hom_{\catK}(X,I) & = \HH^0\Tot^\Pi(\HOM_R(X, I)) \\
& = \HH^0\Tot^\amalg(\HOM_R(X, I)) \\
& \cong \HH^0\Tot^\amalg(\HOM_R(\rho X, I)).
\end{align*}
Moreover, since $\rho X$ consists of finitely generated projective $R$-modules, Lemma~\ref{lemma hom alg fact} gives an isomorphism of double complexes\[\HOM_R(\rho X,I)\cong I\tilde\otimes_R(\rho X)^\vee.\]
With these observations we can verify the claim of the theorem as follows.
\begin{align*}
\Hom_{\catK}(\lambda X, I)^\ast & \cong \Hom_{\catK}(X, I)^\ast && \text{\( \lambda \) is left adjoint} \\
& \cong \HH^0 \Tot^\amalg(\HOM_R(\rho X,I))^\ast \\
& \cong \HH^0 \Tot^\amalg (I\tilde\otimes_R (\rho X)^\vee)^\ast \\
& \cong \HH^0 \Tot^\Pi \left((I\tilde\otimes_R (\rho X)^\vee)^\ast\right) && \text{dual of \( \amalg \) is \( \Pi \)} \\
& \cong \HH^0 \Tot^\Pi \HOM_R(I, ((\rho X)^\vee)^\ast) && \text{by (\ref{align adjunction type iso for double cpxs})} \\
& = \Hom_{\catK}(I, \nu\rho X).
\end{align*}
(2): Let $P\in\catK(\Proj R)$, and let $Y\in\catK^{\bounded}_{\mathsf{fpr}}(R^{\op})$. Pick a projective resolution $\rho Y$ of $Y$ with finitely generated terms, and let $\rho Y \to Y$ be a quasi-isomorphism. Since $Y$ and $\rho Y$ are right bounded, so are $P\tilde\otimes_R Y $ and $P \tilde\otimes_R \rho Y$. In particular, by Lemma~\ref{lemma our acyclic assembly} the induced morphism $P\tilde\otimes_R\rho Y \to P \tilde\otimes_R Y$ of double complexes totalizes to a quasi-isomorphism \[\Tot^\Pi(P\tilde\otimes_R\rho Y)\to\Tot^\Pi(P\tilde\otimes_R Y).\] Moreover, since $\rho Y$ consists of finitely generated projective $R^{\op}$-modules, we have \[\HOM_R((\rho Y)^\vee, P) \cong P \tilde\otimes_R \rho Y\] by Lemma~\ref{lemma hom alg fact}. Combining these observations yields
\begin{align*}
\Hom_{\catK}((\rho Y)^\vee, P) & = \HH^0\Tot^\Pi(\HOM_R((\rho Y)^\vee, P)) \\
& \cong \HH^0\Tot^\Pi(P \tilde\otimes_R \rho Y) \\
& \cong \HH^0\Tot^\Pi(P \tilde\otimes_R Y) \\
& \cong \HH^0\Tot^\amalg(P \tilde\otimes_R Y),
\end{align*}
where the last isomorphism holds since $Y$ is a bounded complex. The claim now follows from the calculation below.
\begin{align*}
(\HH^0\Tot^\amalg(P \tilde\otimes_R Y))^\ast & \cong \HH^0\Tot^\Pi\left((P \tilde\otimes_R Y)^\ast \right) && \text{dual of \( \amalg \) is \( \Pi \)}\\
& \cong \HH^0\Tot^\Pi\HOM_R(P, Y^\ast) && \text{by (\ref{align adjunction type iso for double cpxs})} \\
& = \Hom_{\catK}(P, Y^\ast) \\
& \cong \Hom_{\catK}(P, \rho (Y^\ast)) && \text{\( \rho \) is right adjoint} \qedhere
\end{align*}
\end{proof}
\section{Transferring partial Serre functors to subcategories} \label{section transferring serre}
Let $R$ be a noetherian ring. Recall that an $R$-module is \emph{Gorenstein projective} if it appears as a boundary of a totally acyclic complex over $\Proj R$. Such modules form the full subcategory $\GProj R$ of $\Modcat R$. We write $\Gproj R = \GProj R \cap \modcat R$.
The categories $\GProj R$ and $\Gproj R$ are Frobenius exact, so the stabilizations $\GStable R$ and $\Gstable R$ are triangulated. The \emph{singularity category} of $R$ is the Verdier quotient $\Dsg(R)=\Db(\modcat R)/\perf R$, and there is an exact embedding \[\Gstable R\hookrightarrow\Dsg(R).\]
If $R$ is Gorenstein, then each finitely generated $R$-module admits a `Gorenstein projective approximation'. This means that the inclusion $\Gstable R\hookrightarrow\stablemodules R$ admits a right adjoint functor $\GP$---confer the classical \cite{buchweitz1986maximal,MR1044344}, see also \cite[Proposition~2.5]{BIKP}.
The following is \cite[Theorem~2.9]{BIKP}.
\begin{theorem} \label{theorem BIKP}
Let $A$ be a Gorenstein algebra which is finite dimensional over a field. There are (classical) Serre functors \[ \Gstable A\xrightarrow{[-1]\circ\GP\circ\nu} \Gstable A \text{\,\, and \,\,\,} \Dsg(A)\xrightarrow{[-1]\circ\overline{\mathbb{L} \nu}}\Dsg(A).\]
Here, in \( \nu \), the duality \( (-)^\ast \) is taken with respect to the base field. Note that the derived functor \( \mathbb L \nu \) induces a well-defined functor on singularity categories, denoted by \( \overline{ \mathbb L \nu} \) above.
\end{theorem}
The goal of the current section is to extend Theorem~\ref{theorem BIKP}, by relaxing the size condition, lifting the homological restriction of Gorensteinness, and providing partial Serre functors inside larger ambient categories. This will be achieved in Theorem~\ref{theorem partial serre functor on Dsg} and Theorem~\ref{theorem partial serre on gproj}. Our strategy is to investigate how the partial Serre functors of Theorem~\ref{theorem serre functors in K(Inj) and K(Proj)} induce partial Serre functors in certain subcategories of $\catK(\Inj R)$ and of $\catK(\Proj R)$. This will rely on the following general observation.
\begin{lemma} \label{lemma serre duality from an adjoint triple}
Take an adjoint triple of triangle functors
\[ \begin{tikzcd}
\catT' \ar[r,"e"] \ar[r,<-,"\bigR",bend right=40] \ar[r,<-,"\bigL",bend left=40] &[2em] \catT
\end{tikzcd} \]
and let $\mathbb S\colon \catX \to \catT$ be a partial Serre functor for a subcategory $\catX\subset\catT$.
Then the essential image $\bigL(\catX)\subset\catT'$ admits a partial Serre functor $\mathbb S'\colon \bigL(\catX) \to \catT'$ given by $\mathbb S'(\bigL X)= \bigR\mathbb S X$ for each $X\in\catX$.
\end{lemma}
\begin{proof}
$\catT' (\bigL X,-)^\ast \cong \catT(X, e-)^\ast \cong \catT(e-, \mathbb S X) \cong \catT'(-, \bigR \mathbb S X)$.
\end{proof}
\begin{example} \label{ex Serre for derived}
Note that in Lemma~\ref{lemma serre duality from an adjoint triple} it suffices for \( \bigL \) to be defined on \( \catX \) and \( \bigR \) to be defined on \( \mathbb{S} (\catX) \). An instance employing such a partially defined right adjoint is the following.
Consider \( \catD(\Modcat R) \), identified (via injective Cartan--Eilenberg resolutions) with the full subcategory of homotopically injective complexes in \( \catK(\Inj R) \). Note that this inclusion has a left adjoint: the natural projection \( q \colon \catK(\Inj R) \to \catD(\Modcat R) \). In general \( q \) does not preserve compact objects, so our inclusion cannot have a right adjoint in general.
However if we consider \( \catX = \lambda(\Kb(\proj R)) \), the category of perfect complexes as a subcategory of \( \catK(\Inj R) \) via injective resolutions, then the situation improves: By Theorem~\ref{theorem serre functors in K(Inj) and K(Proj)} we have a partial Serre functor
\[ \mathbb{S}_{\catK(\Inj R)} \colon \catX \to \catK(\Inj R) \colon (\lambda X) \longmapsto \nu X \]
for \( X \in \Kb(\proj R) \). Of course now the essential image \( \mathbb{S}_{\catK(\Inj R)} (\catX) \) consists of finite complexes of injectives, and thus lies inside our subcategory of homotopically injective complexes. Trivially we get a right adjoint defined on \( \mathbb{S}_{\catK(\Inj R)} (\catX) \), namely the identity.
Thus we obtain the partial Serre functor \( \Kb(\proj R) \to \catD(\Modcat R) \) given by \( \nu \).
\end{example}
\begin{observation} \label{observation induced rel serre along left adjoint}
Consider a compactly generated triangulated category $\catT$ and some $X\in\catT^{\compacts}$. By Theorem~\ref{theorem Neeman trick} the subcategory $X^\perp\subset\catT$ is compactly generated again, so Corollary~\ref{corollary compact generation gives partial serre} yields the existence of partial Serre functors \[\mathbb S\colon\catT^{\compacts}\to\catT \text{ and } \mathbb S'\colon (X^{\perp})^{\compacts}\to X^{\perp}.\] We may now observe that the latter is induced by the former: The subcategory $X^\perp$ is closed under products and, since $X$ is compact, also under coproducts. By Theorem~\ref{thm:neemanthreeadjoints} the inclusion functor $X^\perp \hookrightarrow \catT$ thus admits a left adjoint $\bigL$ and a right adjoint $\bigR$. Moreover, $(X^\perp)^{\compacts}= \bigL(\catT^{\compacts})$ by Theorem~\ref{theorem Neeman trick}. In particular, since the partial Serre functor $\mathbb S'$ is unique up to natural isomorphism, Lemma~\ref{lemma serre duality from an adjoint triple} yields the following commutative diagram.
\[ \begin{tikzcd}
(X^\perp)^{\compacts} \ar[d,"\mathbb S' "] & \catT^{\compacts} \ar[d,"\mathbb S"] \ar[l,swap,"\bigL"] \\
X^\perp & \catT \ar[l,swap,"\bigR"]
\end{tikzcd} \]
\end{observation}
\subsection*{The singularity category}
Let $\Kac(\Inj R)$ be the subcategory of acyclic complexes in $\catK(\Inj R)$. Note that $\lambda (R)$ is compact when considered as an object in $\catK(\Inj R)$, and moreover that $\Kac(\Inj R) = (\lambda (R))^\perp$ as subcategories of $\catK(\Inj R)$. Using this fact, one may construct --- see e.g.\ \cite{MR2157133} --- for $R$ noetherian, an adjoint triple
\[ \begin{tikzcd}
\Kac(\Inj R) \ar[r,hookrightarrow] & \catK(\Inj R). \ar[l,bend right=45,swap,"\bigL"] \ar[l,bend left=45,swap,"\bigR"]
\end{tikzcd} \]
The left adjoint $\bigL$ takes the subcategory $\catK(\Inj R)^{\compacts}\simeq\Db(\modcat R)$ to a compact generating set for $\Kac(\Inj R)$; up to direct summands, $\Kac(\Inj R)^{\compacts}$ is equivalent to the singularity category $\Dsg(R)$: More explicitly, the sequence of constructions
\[ \Dsg(R) \xrightarrow{\text{pick preimage}} \Db(\modcat R) \xrightarrow{\bigL \lambda} \Kac(\Inj R)^{\compacts} \]
is a well-defined functor, which is fully faithful and dense up to summands.
\begin{theorem} \label{theorem partial serre functor on Dsg}
Let $R$ be a noetherian ring. There is a partial Serre functor \[\mathbb S\colon \Dsg(R)\to\Kac(\Inj R)\] given as follows. For an object \( \overline{X} \) in \( \Dsg(R) \), pick a representative \( X \in \Db(\modcat R) \). Then $\mathbb S \overline{X} =\bigR \nu \rho (X)$.
\end{theorem}
\begin{proof}
In view of the adjoint triple above, the claim follows immediately from Theorem~\ref{theorem serre functors in K(Inj) and K(Proj)} and Observation~\ref{observation induced rel serre along left adjoint} --- note here that \( \overline{X} \) is identified with \( \bigL \lambda X \) when considering it an object of \( \Kac(\Inj R) \).
\end{proof}
It may not be completely obvious that Theorem~\ref{theorem partial serre functor on Dsg} is in fact a generalization of Theorem~\ref{theorem BIKP}. However, if \( R \) is Gorenstein in the sense of Iwanaga \cite{MR597688}, then we have the following more direct description.
\begin{corollary} \label{cor partial serre for Dsg(gorenstein)}
Let \( R \) be an Iwanaga--Gorenstein noetherian ring. The partial Serre functor $\mathbb S\colon \Dsg(R)\to\Kac(\Inj R)$ of Theorem~\ref{theorem partial serre functor on Dsg} is induced by the functor \( \mathbb{L} \nu[-1] \colon \Db(\modcat R) \to \Db(\Modcat R) \). More explicitly we have
\[ \mathbb{S} \bigL \lambda X = \bigL \lambda \mathbb{L} \nu X[-1] \]
for \( X \in \Db(\modcat R) \).
\end{corollary}
\begin{proof}
Note first that \( \mathbb{L} \nu \) maps finite complexes of projectives to finite complexes of injectives, which in turn vanish when pushed to \( \Kac(\Inj R) \). In particular both sides of the above equation vanish on \( \Kb(\proj R) \), and we may replace \( X \) by a finitely generated Gorenstein projective module concentrated in degree \( 0 \). Note in particular that after this replacement \( \mathbb{L} \nu X = \nu X \).
By definition of Gorenstein projective there is a totally acyclic complex \( P \) of finitely generated projectives, such that \( X = \boundary^1 P \). After applying \( \nu \) we obtain the following complex, which is still exact.
\[ \nu P \colon \underbrace{\cdots \to \nu P^{-1} \to \nu P^0}_{\nu \rho X} \to \nu P^1 \to \nu P^2 \to \cdots \]
Recall that there are no maps from acyclic complexes to left bounded complexes of injectives in the homotopy category. Therefore the natural map
\[ \Hom_{\catK}(-, \nu P) \to \Hom_{\catK}(-, \nu \rho X) \]
induces an isomorphism of functors on \( \Kac(\Inj R) \). Since \( \nu P \in \Kac(\Inj R) \) this means that \( \bigR \nu \rho X = \nu P \).
On the other hand, observe that the right half of \( \nu P \) is an injective resolution of \( \nu X \), more precisely
\[ \lambda \nu X [-1] = \left( \cdots \to 0 \to 0 \to \nu P^1 \to \nu P^2 \to \cdots\right), \]
where the shift is due to the fact that the complex on the right hand side begins in degree \( 1 \). Similarly to before, we can observe that \( \bigL \lambda \nu X [-1] = \nu P\): Here we use that \( \Hom_{\catK}(I, \Kac(\Inj R)) = 0 \) for any right bounded complex of injectives \( I \). Indeed, given any map of complexes from \( I \) to an acyclic complex of injectives, using the fact that the terms of \( I \) have finite projective dimension (see \cite[Theorem~2]{MR597688}) one iteratively constructs a null-homotopy from right to left.
In particular \( \nu P \) lies in the image of \( \bigL \lambda \) again, and we have
\[ \mathbb{S} \bigL \lambda X \overset{\text{Thm~\ref{theorem partial serre functor on Dsg}}}{=} \bigR \nu \rho (X) = \nu P = \bigL \lambda \nu X [-1]. \qedhere \]
\end{proof}
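To see Corollary~\ref{cor partial serre for Dsg(gorenstein)} in action, here is a minimal example of our own, with \( R = k[x]/(x^2) \) and \( X = k \) the simple module.

```latex
% Our illustration: R = k[x]/(x^2), X = k, with complete resolution
\[
  P \;=\; \bigl( \cdots \to R \xrightarrow{\,x\,} R \xrightarrow{\,x\,} R \to \cdots \bigr),
  \qquad X \;=\; \boundary^1 P \;\cong\; k.
\]
% Since R is self-injective with \nu R \cong R, applying \nu returns an
% isomorphic complex, \nu P \cong P.  Its left half computes \nu\rho X,
% while its right half 0 \to \nu P^1 \to \nu P^2 \to \cdots is an
% injective resolution of \nu k \cong k, i.e. \lambda\nu X[-1].  Hence
\[
  \mathbb{S}\, \bigL\lambda X \;=\; \bigR \nu\rho X \;=\; \nu P
  \;=\; \bigL\lambda\, \nu X[-1],
\]
% so on \Dsg(R) \simeq \stablemodules R the partial Serre functor acts as
% \nu[-1], which fixes the unique indecomposable object k.
```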
\subsection*{Gorenstein projectives} Before we go on, let us briefly revisit the Gorenstein projective approximation functor. Let $\Ktac(\Proj R)$ denote the subcategory of totally acyclic complexes in $\catK(\Proj R)$, i.e.\ those exact complexes which remain exact under $\Hom_R(-,\Proj R)$. Recall e.g.\ from \cite[Observation~2.21]{MR3946864} that if the noetherian ring $R$ admits a dualizing complex, then there is an adjoint triple
\[ \begin{tikzcd}
\Ktac(\Proj R) \ar[r,hookrightarrow] & \catK(\Proj R). \ar[l,bend right=45,swap,"\bigL"] \ar[l,bend left=45,swap,"\bigR"]
\end{tikzcd} \]
The construction of this triple utilizes the fact that if \( D_R \) is a dualizing complex then \[\Ktac(\Proj R)=\left(R\oplus \rho\RHom(D_R,R)\right)^\perp\] as subcategories of $\catK(\Proj R)$, and that both $R$ and $\rho\RHom(D_R,R)$ are compact in $\catK(\Proj R)$. See \cite[Proposition~2.15]{MR3946864}.
\begin{proposition} \label{proposition description of the functor GP}
Let $R$ be a noetherian ring which admits a dualizing complex. The inclusion $\GStable R\hookrightarrow\stableModules R$ admits a right adjoint $\GP\colon\stableModules R\to\GStable R$, which is given by choosing a preimage in $\Modcat R$ before applying \[\catK(\Modcat R)\stackrel{\rho}\to\catK(\Proj R)\stackrel{\bigR}\to\Ktac(\Proj R)\stackrel{\boundary^1}\to\GStable R.\]
\end{proposition}
\begin{proof}
Recall that taking (e.g.\ first) boundaries gives a triangle equivalence \[\boundary^1\colon\Ktac(\Proj R)\stackrel{\simeq}\to\GStable R.\] The quasi-inverse $\CR$ of $\boundary^1$ takes a Gorenstein projective $R$-module $X$ to its \emph{complete resolution} $\CR X$, i.e.\ a totally acyclic complex over $\Proj R$ with $\boundary^1(\CR X)=X$.
Now, take $X\in\GProj R$ and $M\in\Modcat R$. We first observe that \[\underline\Hom_R(X,M)\cong\Hom_{\catK}(\CR X, M).\] Indeed, there is an epimorphism $\phi\colon \Hom_R(X,M)\to\Hom_{\catK}(\CR X,M)$, as indicated by the following diagram.
\[ \begin{tikzcd}[row sep=4mm]
\CR X\colon \cdots \ar[r] & P^{-1} \ar[r] \ar[dd] & P^0 \ar[dr,->>] \ar[dd] \ar[rr] && P^1 \ar[r]\ar[dd] & \cdots \\
&&&X\ar[ur,>->] \ar[dl]&&& \\
M\colon \cdots \ar[r] & 0 \ar[r] & M \ar[rr] && 0 \ar[r] & \cdots
\end{tikzcd} \]
The total acyclicity of $\CR X$ means in particular that the morphism $\iota\colon X\hookrightarrow P^1$ is a left $\Proj R$-approximation of $X$. Hence, a morphism $f\colon X\to M$ factors through $\Proj R$ if and only if it factors through $\iota$, which is further equivalent to $f\in\Ker\phi$.
To complete the proof, it now suffices to use the right adjointness of $\rho$ and $\bigR$, and the fact that $\boundary^1$ and $\CR$ are quasi-inverse:
\begin{align*}
\underline\Hom_R(X,M) & \cong \Hom_{\catK}(\CR X, M) \\
& \cong \Hom_{\catK}(\CR X, \bigR\rho M) \\
& \cong \underline\Hom_R(X, \boundary^1\bigR\rho M) \qedhere
\end{align*}
\end{proof}
\begin{remark}
In the case of an Artin algebra $\Lambda$, this description of the functor $\GP$ is fairly explicit. Indeed, the standard dual $D\Lambda$ is a dualizing complex, and $\bigR(X)$ can be calculated by taking `iterated approximations' of $X$ by products of copies of $\Lambda \oplus \rho D\Lambda$. For details, confer \cite[Theorem~6.6; Corollary~6.12]{MR3946864}.
\end{remark}
We are now ready to complete this subsection with the following extension of Theorem~\ref{theorem BIKP} for categories of Gorenstein projectives.
\begin{theorem} \label{theorem partial serre on gproj}
Let $R$ be a noetherian ring which admits a dualizing complex. There is a partial Serre functor \[\mathbb S\colon (\GStable R)^{\compacts}\to\GStable R.\]
For \( X \in \Gstable R \) it is given by $\mathbb S(X)=\GP\nu(X)[-1]$.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{theorem partial serre on gproj}]
By Observation~\ref{observation induced rel serre along left adjoint}, the adjoint triple ensures the existence of a partial Serre functor \[\mathbb S_{\tac} \colon \Ktac(\Proj R)^{\compacts}\to\Ktac(\Proj R),\] induced by the partial Serre functor $\catK(\Proj R)^{\compacts}\to\catK(\Proj R)$ of Theorem~\ref{theorem serre functors in K(Inj) and K(Proj)}.
It is clear that we can transfer this partial Serre functor to \( \GStable R \) as the composition \[\mathbb S\colon(\GStable R)^{\compacts}\xrightarrow{\CR}\Ktac(\Proj R)^{\compacts}\xrightarrow{\mathbb S_{\tac}} \Ktac(\Proj R)\stackrel{\boundary^1}\to\GStable R.\]
\medskip
Now let \( X \in \Gproj R \), and consider
\[ \begin{tikzcd}
\CR X\colon \cdots \ar[r] & P^{-1} \ar[r] & P^0 \ar[dr,->>] \ar[rr] && P^1 \ar[r] & P^2 \ar[r]& \cdots \\
&&&X\ar[ur,>->] &&&
\end{tikzcd} \]
Observe that $\left(\cdots \to (P^2)^{\vee} \to (P^1)^{\vee} \to 0 \to \cdots\right)$ is a projective resolution of \( X^{\vee}[1] \) (when considering \( (P^i)^{\vee} \) in homological degree \( -i \)). It follows that the right half of the above complex is \( \rho( X^{\vee}[1] )^{\vee} \). Since there are no maps in the homotopy category from right bounded complexes of projectives to acyclic complexes we see that
\[ \Hom_{\catK}( \CR X, -) \to \Hom_{\catK}(\rho(X^{\vee}[1])^{\vee}, -) \]
induces a natural isomorphism on \( \Ktac(\Proj R) \), i.e.\ \( \CR X = \bigL (\rho( X^{\vee}[1])^{\vee}) \).
Now
\begin{align*}
\mathbb{S}(X) & = \boundary^1 \mathbb S_{\tac}\CR(X) = \boundary^1 \mathbb S_{\tac} \bigL (\rho( X^{\vee}[1])^{\vee}) \\
& = \boundary^1 \bigR \mathbb{S}_{\catK(\Proj R)} (\rho( X^{\vee}[1])^{\vee}) && \text{by Observation~\ref{observation induced rel serre along left adjoint}} \\
& = \boundary^1 \bigR \rho \underbrace{(X^{\vee}[1])^{\ast}}_{= \nu X[-1]} && \text{by Theorem~\ref{theorem serre functors in K(Inj) and K(Proj)}} \\
& = \GP\nu(X)[-1] && \text{by Proposition~\ref{proposition description of the functor GP}} \qedhere
\end{align*}
\end{proof}
\section{Pure resolutions}
Let $R$ be a ring. Following Cohn \cite{MR106918}, a sequence \[\mathbb E\colon0\to A\to B\to C\to0\] in $\Modcat R$ is called \emph{pure exact} if $\mathbb E\otimes_R N$ is exact for each $N\in\Modcat R^{\op}$ or, equivalently, if $\Hom_R(M,\mathbb E)$ is exact for each $M\in\modcat R$. The pure exact sequences define an exact structure on $\Modcat R$; an $R$-module is \emph{pure-projective} (resp. \emph{pure-injective}) if it is projective (injective) with respect to this exact structure. We denote by $\PProj R$ (resp. $\PInj R$) the subcategory of pure-projectives (pure-injectives) in $\Modcat R$.
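For orientation, two elementary checks over \( R = \mathbb{Z} \) (our own illustrations, not needed in the sequel):

```latex
% (1) The exact sequence 0 -> Z --2--> Z -> Z/2 -> 0 is NOT pure:
%     tensoring with N = Z/2 yields
\[
  0 \to \mathbb{Z}/2 \xrightarrow{\;\cdot 2 \,=\, 0\;} \mathbb{Z}/2
    \to \mathbb{Z}/2 \to 0,
\]
%     in which the first map fails to be injective.
% (2) By contrast, a direct sum is always a pure submodule of the
%     corresponding product, so
\[
  0 \to \bigoplus_{n\geq 1} \mathbb{Z}/2^n \to \prod_{n\geq 1} \mathbb{Z}/2^n
    \to Q \to 0
\]
%     (Q denoting the cokernel) is pure exact; it is a standard example
%     of a pure exact sequence which does not split.
```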
In this section we will utilize the following theorem. For a collection of objects $\catS$ in a triangulated category, we denote by $\Loc(\catS)$ the smallest triangulated subcategory which contains $\catS$ and is closed under coproducts. On the other hand, $\Coloc(\catS)$ is the smallest triangulated subcategory which contains $\catS$ and is closed under products.
\begin{theorem} \label{theorem the two t-structures}
Let $\catT$ be a triangulated category.
\begin{enumerate}
\item If $\catT$ admits coproducts and $\catS$ is a set of compact objects, then \[\left({}^\perp(\catS^\perp), \catS^\perp\right)\] is a stable t-structure in $\catT$. Moreover, ${}^\perp(\catS^\perp)=\Loc(\catS)$.
\item If $\catT$ admits products and $\catS$ is a set of 0-cocompact objects, then \[\left({}^\perp\catS, ({}^\perp\catS)^\perp\right)\] is a stable t-structure in $\catT$. Moreover, $({}^\perp\catS)^\perp=\Coloc(\catS)$.
\end{enumerate}
\end{theorem}
\begin{proof}
For (1) confer e.g.\ \cite{MR2327478,MR2927802,MR1191736}; (2) is \cite[Theorem~6.6]{MR3946864}.
\end{proof}
Since any finitely presented $R$-module is pure-projective and, as with any notion of projectivity, $\PProj R$ is closed under summands and coproducts, we have
\begin{equation} \label{eqn inclusion 1}
\Loc(\modcat R)\subset\catK(\PProj R), \tag{$\text{i}_1$}
\end{equation} up to isomorphism.
On the other hand, recall from Section~\ref{section rel serre} the functor $(-)^\ast=\Hom_k(-,I)$ (note that we may always choose \( k = \mathbb Z \) if there is no other base ring). For each $N\in\Modcat R^{\op}$, the dual $N^\ast$ belongs to $\PInj R$. Indeed, if $\mathbb E$ is a pure exact sequence, then the sequence $\Hom_R(\mathbb E, N^\ast)\cong \Hom_k(\mathbb E\otimes_R N, I)$ is exact. From the description in Theorem~\ref{theorem partial serre for Kb(mod R)} of the partial Serre functor $\mathbb S\colon \Kb(\modcat R)\to\catK(\Modcat R)$, we infer
\begin{equation} \label{eqn inclusion 2}
\Coloc(\mathbb S(\modcat R))\subset\catK(\PInj R). \tag{$\text{i}_2$}
\end{equation}
A complex $X$ of $R$-modules is \emph{pure acyclic} if $\Hom_R(M,X)$ is acyclic for each $M\in\modcat R$. Let $\Kpac(\Modcat R)$ be the subcategory of $\catK(\Modcat R)$ consisting of pure acyclic complexes. A chain map $f\colon X \to Y$ is a \emph{pure quasi-isomorphism} if $\Cone(f)\in\Kpac(\Modcat R)$, and the \emph{pure derived category} of $R$ is the Verdier quotient \[\Dpure(R)=\frac{\catK(\Modcat R)}{\Kpac(\Modcat R)}.\] The first point of this section is that $\Dpure(R)$ may be realized both as a subcategory of $\catK(\PProj R)$ and as a subcategory of $\catK(\PInj R)$:
\begin{theorem} \label{theorem Dpure as subcat of KPProj and KPInj}
Let \( R \) be a ring.
\begin{enumerate}
\item The pair
\[ ( \Loc( \modcat R ), \Kpac(\Modcat R) ) \]
is a stable t-structure in \( \catK(\Modcat R) \). In particular \( \Dpure(R) \) is equivalent to \( \Loc( \modcat R) \), and the natural quotient functor \( \catK(\Modcat R) \to \Dpure(R) \) has a fully faithful left adjoint.
\item The pair
\[ ( \Kpac(\Modcat R), \Coloc(\mathbb S(\modcat R)) ) \]
is a stable t-structure in \( \catK(\Modcat R) \). In particular \( \Dpure(R) \) is equivalent to \( \Coloc( \mathbb S( \modcat R)) \), and the natural quotient functor \( \catK(\Modcat R) \to \Dpure(R) \) has a fully faithful right adjoint.
\end{enumerate}
\end{theorem}
\begin{proof}
Observe that \( \Kpac(\Modcat R) = (\modcat R)^{\perp} \) by definition, and thus it follows that we also have \( \Kpac(\Modcat R) = {}^{\perp}\mathbb S(\modcat R) \). Moreover, by Theorem~\ref{theorem partial Serre implies X compact and Y 0-cocompact} we know that \( \modcat R \) consists of compact objects while \( \mathbb S(\modcat R) \) consists of \( 0 \)-cocompact objects.
Now the stable t-structures exist by Theorem~\ref{theorem the two t-structures}. For the final claims, note that for a stable t-structure we always have that the localization by the aisle is the co-aisle and vice-versa.
\end{proof}
\begin{definition} Let $R$ be a ring. An object of $\catK(\Modcat R)$ is called
\begin{enumerate}
\item \emph{homotopically pure-projective} if it belongs to ${}^\perp\Kpac(\Modcat R)$; and
\item \emph{homotopically pure-injective} if it belongs to $\Kpac(\Modcat R)^\perp$.
\end{enumerate}
\end{definition}
\begin{corollary} \label{cor homotopically pure proj} Let $R$ be a ring.
\begin{enumerate}
\item The subcategory of homotopically pure-projectives coincides with the subcategory \( \Loc(\modcat R) \). In particular it is contained in \( \catK( \PProj R) \) (up to isomorphism).
\item The subcategory of homotopically pure-injectives coincides with the subcategory \( \Coloc(\mathbb{S}(\modcat R)) \). In particular it is contained in \( \catK( \PInj R) \) (up to isomorphism).
\end{enumerate}
\end{corollary}
\begin{proof}
By Theorem~\ref{theorem Dpure as subcat of KPProj and KPInj}, we have \( \Loc( \modcat R) = {}^{\perp}\Kpac(\Modcat R) \). The ``in particular''-statement follows with \eqref{eqn inclusion 1}. The dual argument proves the second point.
\end{proof}
\begin{remark}
By \cite[Theorem~3.6]{MR3537821} the class of homotopically pure-projectives in $\catK(\Modcat R)$ actually \emph{coincides} with $\catK(\PProj R)$. Combining this fact with our discussion it follows immediately that there is a triangle equivalence
\[\Dpure(R)\simeq\catK(\PProj R)\]
for any ring $R$.
\end{remark}
Let $X$ be a complex of $R$-modules. A \emph{pure-projective resolution} of \( X \) is a pure quasi-isomorphism $P \to X$, with \( P \) homotopically pure-projective. Dually, a \emph{pure-injective resolution} of \( X \) is a pure quasi-isomorphism \( X \to I \), with \( I \) homotopically pure-injective.
The existence of pure-projective (resp. pure-injective) resolutions was established for left (resp. right) bounded complexes in \cite{MR3473427}. We can now remove these restrictions:
\begin{corollary} \label{corollary existence of pure resolutions}
Let $R$ be a ring. Each complex of $R$-modules admits
\begin{enumerate}
\item a pure-projective resolution; and
\item a pure-injective resolution.
\end{enumerate}
\end{corollary}
\begin{proof}
Take $X\in\catK(\Modcat R)$. By Theorem~\ref{theorem Dpure as subcat of KPProj and KPInj}, there is a triangle
\[ P \to X \to A \to P[1] \]
with \( P \in \Loc(\modcat R) \) and \( A \in \Kpac(\Modcat R) \). By Corollary~\ref{cor homotopically pure proj}, \( P \) is homotopically pure-projective. Since \( A \) is pure acyclic the map \( P \to X \) is a pure quasi-isomorphism.
The proof of the second point is dual.
\end{proof}
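In the special case of a single module \( M \), regarded as a complex concentrated in degree \( 0 \), let us sketch how a pure-projective resolution can also be obtained by hand. Recall the standard fact that an epimorphism onto \( M \) is pure precisely when every morphism from a finitely presented module to \( M \) factors through it. Hence the canonical morphism
\[ \coprod_{\substack{F \in \modcat R \\ f \in \Hom_R(F, M)}} F \longrightarrow M \]
is a pure epimorphism, and its source, being a direct sum of finitely presented modules, is pure-projective. Iterating this construction on the successive kernels yields a pure quasi-isomorphism to \( M \) from a non-positively graded complex of pure-projectives; compare the left bounded case treated in \cite{MR3473427}.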
In \cite{MR3473427}, a \emph{pure-projective resolution} of $X$ is defined to be a pure quasi-isomorphism $P \to X$ where $P\in\catK(\PProj R)$ is such that $\Hom_R(P,-)\colon \catK(\Modcat R)\to\catK(\Modcat k)$ preserves pure acyclicity. Dually, a \emph{pure-injective resolution} of $X$ is a pure quasi-isomorphism $X \to I$ where $I\in\catK(\PInj R)$ is such that $\Hom_R(-,I)$ preserves pure acyclicity. We finish this section by showing that the definition of pure-projective and -injective resolutions we worked with here is in fact equivalent to the one in \cite{MR3473427}.
\begin{lemma} \label{closure prop of pure acyclics}
Let $R$ be a ring and let $X$ be a pure acyclic complex of $R$-modules.
\begin{enumerate}
\item The complex $\Hom_k(F, X)$ is pure acyclic for each $F\in\modcat k$.
\item The complex $F \otimes_k X$ is pure acyclic for each \( k \)-module \( F \).
\end{enumerate}
\end{lemma}
\begin{proof}
(1): Let \( M \in \modcat R \). By the adjunction formula we have
\[
\Hom_R(M, \Hom_k(F, X)) = \Hom_R(F \otimes_k M, X).
\]
The latter complex is acyclic, because \( F \otimes_k M \) is a finitely presented \( R \)-module again and $X$ is pure acyclic.
(2): Let \( M \in \Modcat R^{\op} \). We have
\[
(F \otimes_k X) \otimes_R M = F \otimes_k (X \otimes_R M) = (X \otimes_R M) \otimes_k F = X \otimes_R (M \otimes_k F),
\]
where the middle identity holds since \( k \) acts centrally on \( R \). The final tensor product is acyclic by definition of pure acyclicity.
\end{proof}
\begin{proposition} \label{prop.characterizations of homotopically pure proj/inj}
Let $R$ be a ring.
\begin{enumerate}
\item Let $P$ be a complex of $R$-modules. The functor \( \Hom_R(P, -) \) preserves pure acyclicity if and only if $P$ is homotopically pure-projective.
\item Let $I$ be a complex of $R$-modules. The following are equivalent
\begin{enumerate}[label=(\roman*)]
\item $I$ is homotopically pure-injective.
\item \( \Hom_R(-, I) \) preserves pure acyclicity.
\item \( \Hom_R(-, I) \) maps pure acyclic complexes to contractible complexes.
\end{enumerate}
\end{enumerate}
\end{proposition}
\begin{proof}
(1): We observe first that \( P \) is homotopically pure-projective if and only if \( \Hom_R(P, -) \) maps pure acyclic complexes to acyclic complexes. This is just because the homology of the complex $\Hom_R(P, X)$, where $X$ is pure acyclic, is the \( \Hom \)-space in the homotopy category which is zero because $P$ lies in ${}^{\perp}\Kpac(\Modcat R) $.
So it only remains to show that $\Hom_R(P, X)$ is in fact pure acyclic. Let \( F \in \modcat k \). Then we have the isomorphism
\[
\Hom_k(F, \Hom_R(P, X)) = \Hom_R(F \otimes_k P, X) = \Hom_R(P, \Hom_k(F, X)),
\]
and the claim follows since \( \Hom_k(F, X) \) is pure acyclic by Lemma~\ref{closure prop of pure acyclics}.
(2): As in (1), we see that $(ii)$ implies $(i)$. The implication from $(iii)$ to $(ii)$ is immediate. It remains to be shown that $(i)$ implies $(iii)$. So let \( X \) be a pure acyclic complex. Then for any \( k \)-module \( F \) we have
\[
\Hom_k(F, \Hom_R(X, I)) = \Hom_R(F \otimes_k X, I)
\]
which is acyclic, because \( F \otimes_k X \) is pure acyclic by Lemma~\ref{closure prop of pure acyclics}. It follows (picking \( F \) to be cycles of the complex \( \Hom_R(X, I) \)) that \( \Hom_R(X, I) \) is contractible.
\end{proof}
\begin{corollary}
Let \( R \) be a commutative ring and let \( I\in\Modcat R \) be pure-injective. If \( 0 \to A \to B \to C \to 0 \) is a pure exact sequence of $R$-modules, then
\[ 0 \to \Hom_R(C, I) \to \Hom_R(B, I) \to \Hom_R(A, I) \to 0 \]
is split exact.
\end{corollary}
\begin{proof}
Since \( R \) is commutative, we may choose \( k = R \). The pure exact sequence is a pure acyclic complex, and \( I \) --- considered as a complex concentrated in degree \( 0 \) --- is homotopically pure-injective. Now the claim follows from the implication from (i) to (iii) in Proposition~\ref{prop.characterizations of homotopically pure proj/inj}(2).
\end{proof}
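For instance, with \( R = k = \mathbb{Z} \) and \( I = \mathbb{Q}/\mathbb{Z} \) (which is injective, hence pure-injective), the corollary recovers the classical fact that the character dual
\[ 0 \to \Hom_{\mathbb{Z}}(C, \mathbb{Q}/\mathbb{Z}) \to \Hom_{\mathbb{Z}}(B, \mathbb{Q}/\mathbb{Z}) \to \Hom_{\mathbb{Z}}(A, \mathbb{Q}/\mathbb{Z}) \to 0 \]
of a pure exact sequence of abelian groups is split exact.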
\begin{remark}
In \cite[Theorem~5.4]{MR3220541} \v{S}\v{t}ov\'{\i}\v{c}ek stated a version of Corollary~\ref{corollary existence of pure resolutions} for an additive finitely accessible category $\mathcal{A}$, using the language of cotorsion pairs. More precisely, he proved that $(\catC(\PProj \mathcal{A}), \catC_{\mathsf{pac}}(\mathcal{A}))$ and $(\catC_{\mathsf{pac}}(\mathcal{A}), \catC(\PInj \mathcal{A}))$ are functorially complete hereditary cotorsion pairs in the category of complexes $\catC(\mathcal{A})$ with the induced
pure exact structure. It would be interesting to find a proof in the general context of additive finitely accessible categories using the direct approach of appealing to the stable t-structures in Theorem~\ref{theorem the two t-structures}.
\end{remark}
\section{Almost split triangles} \label{section AR-triangles}
Let $\catT$ be a triangulated category. Recall that a triangle \[A \stackrel{a}\to B \stackrel{b}\to C \to A[1]\] is called \emph{almost split} if $a$ is left almost split and $b$ is right almost split. In this case $\End_{\catT}(A)$ and $\End_{\catT}(C)$ are local rings.
\begin{theorem}[Beligiannis \cite{MR2079606}, Krause \cite{MR1803642}] \label{theorem Krause AR-theorem}
Let \( X \in \catT \) have local endomorphism ring. Denote by $I_X$ an injective envelope of the simple $\End_{\catT}(X)$-module. If the functor $\Hom_{\End_{\catT}(X)}(\catT(X,-), I_X)$ is representable, then $X$ appears in an almost split triangle
\begin{align*} \label{align Krauses AR triangle}
\tau X \to M \to X \to \tau X[1]. \tag{$\Delta_{\tau}$}
\end{align*}
\end{theorem}
\begin{proof}[Idea of proof]
Let $\rho\colon \End_{\catT}(X)\longrightarrow\mathrel{\mkern-14mu}\rightarrow \End_{\catT}(X)/\!\rad \End_{\catT}(X) \hookrightarrow I_X$ be the canonical map. By assumption the functor \( \Hom_{\End_{\catT}(X)}(\catT(X,-), I_X) \) is representable, and we can choose \( \tau X \) such that \( \tau X[1] \) is a representative. In other words, there is a natural isomorphism
\[ \phi \colon \Hom_{\End_{\catT}(X)}(\catT(X,-), I_X) \to \catT(-,\tau X [1]). \]
It is routine to check that we have an almost split triangle \[\tau X \to M \to X\xrightarrow{\phi_X(\rho)} \tau X[1]. \qedhere\]
\end{proof}
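It is worth pointing out a standard situation in which the representability hypothesis of Theorem~\ref{theorem Krause AR-theorem} is automatic. If \( \catT \) is compactly generated and \( X \) is compact, then the functor
\[ \Hom_{\End_{\catT}(X)}(\catT(X,-), I_X) \colon \catT^{\op} \to \Modcat \mathbb{Z} \]
is cohomological and sends coproducts in \( \catT \) to products, hence is representable by Brown representability; this is how almost split triangles are produced in \cite{MR1803642}.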
In Theorem~\ref{theorem Krause AR-theorem} the computation of the object $\tau X$ relies on intrinsic properties of the ring $\End_{\catT}(X)$. Our goal now is to show that in the presence of a partial Serre functor, a more unified approach to calculating $\tau$ is sometimes available. We keep our injective cogenerator $I$ of $\Modcat k$, and $(-)^\ast = \Hom_k (-,I)$.
\begin{lemma} \label{lemma our triangle with right almost split map}
Suppose $\mathbb S\colon \catX\to\catT$ is a partial Serre functor. Then each $X\in\catX$ with local endomorphism ring appears in a triangle \begin{align*} \label{align our triangle with right almost split map}
\mathbb SX[-1]\to N \stackrel{n}\to X \to \mathbb SX \tag{$\Delta_{\mathbb S}$}
\end{align*}
with $n$ right almost split.
\end{lemma}
\begin{proof}
By assumption there is an isomorphism $\phi\colon \End_{\catT}(X)^\ast \to \catT(X,\mathbb SX)$. For each non-zero linear form $\gamma$ on $\End_{\catT}(X)$ which vanishes on $\rad \End_{\catT}(X)$, the triangle \[\mathbb SX[-1]\to N \to X \xrightarrow{\phi(\gamma)} \mathbb SX\] has the desired property. Indeed, it suffices to observe that any radical morphism $Y\to X$ composes to zero with $\phi(\gamma)$.
\end{proof}
Our first aim is to show that in the setup of Lemma~\ref{lemma our triangle with right almost split map} there is an almost split triangle ending in \( X \), and moreover that this triangle is a direct summand of (\ref{align our triangle with right almost split map}).
\begin{theorem} \label{theorem our triangle summand of AR triangle}
Assume \( \catT \) is idempotent closed. (Note that this is automatic for instance if \( \catT \) has countable products or coproducts.) Suppose $\mathbb S\colon \catX\to\catT$ is a partial Serre functor. Let \( X \in \catX \) be an object with local endomorphism ring. Then
\begin{enumerate}
\item The functor $\Hom_{\End_{\catT}(X)}(\catT(X,-), I_X)$ of Theorem~\ref{theorem Krause AR-theorem} is representable, so in particular the almost split triangle (\ref{align Krauses AR triangle}) exists.
\item The triangle (\ref{align Krauses AR triangle}) is a direct summand of (\ref{align our triangle with right almost split map}).
\end{enumerate}
\end{theorem}
For the proof, we prepare the following two lemmas.
\begin{lemma} \label{lemma functors summand}
Let \( X \in \catT \) be an object with local endomorphism ring. Then the functor \( \Hom_{\End_{\catT}(X)}(\catT(X,-), I_X) \) is a direct summand of \( \catT(X, -)^\ast \).
\end{lemma}
\begin{proof}
We write \( E = \End_{\catT}(X) \), and denote by \( S \) its simple module.
Since \( \catT(X, -)^\ast = \Hom_{E}(\catT(X, -), E^*) \) it suffices to show that \( I_X \) is a direct summand of \( E^\ast \). Observe that since $\Hom_E(-, E^\ast) = (- \otimes_E E)^\ast$, the $E$-module \( E^\ast \) is injective. Thus, by definition of \( I_X \) it suffices to show that there is a monomorphism from \( S \) to \( E^\ast \).
Since \( I \) is a cogenerator of \( \Modcat k \) there is a non-zero map \( \rho \colon S \to I \). Now we obtain the desired injection as
\[ S \to E^\ast \colon s \longmapsto [ e \mapsto \rho(se) ]. \qedhere \]
\end{proof}
\begin{lemma} \label{lemma AR triangle summand}
Let \( \catT \) be a triangulated category, let \( \tau X \to M \stackrel{m}\to X \stackrel{s}\to \tau X [1] \) be an almost split triangle, and let \( Y \to N \stackrel{n}\to X \to Y[1] \) be a triangle with \( n \) right almost split. Then the former triangle is a direct summand of the latter.
\end{lemma}
\begin{proof}
Consider the following diagram.
\[ \begin{tikzcd}
\Delta_1\colon & \tau X \ar[r] & M \ar[r,"m"] & X \ar[d,equal] \ar[r,"s"] & \tau X[1] \\
\Delta_2\colon & Y \ar[r] & N \ar[r,"n"] & X \ar[r] & Y[1]
\end{tikzcd} \]
Since $m$ and $n$ are both right almost split, the former factors through the latter, and vice versa. This gives rise to morphisms of triangles $\iota\colon\Delta_1 \to \Delta_2$ and $\pi\colon\Delta_2 \to \Delta_1 $. In particular, there are morphisms $i\colon \tau X \to Y$ and $p\colon Y \to \tau X$ such that $(pi)[1] \circ s=s$. But since $\End_{\catT}(\tau X)$ is local, the non-zero morphism $s$ is left minimal. Hence $i$ is a split monomorphism, i.e.\ (\( \Delta_1 \)) is a summand of (\( \Delta_2 \)).
\end{proof}
Now the proof of Theorem~\ref{theorem our triangle summand of AR triangle} is very short.
\begin{proof}[Proof of Theorem~\ref{theorem our triangle summand of AR triangle}]
The first point follows from Lemma~\ref{lemma functors summand}: Since \( \catT \) is idempotent closed, direct summands of representable functors are representable again. Once the first point is established, the second one is an immediate application of Lemma~\ref{lemma AR triangle summand}.
\end{proof}
Our next aim is to show that in certain cases, there is no difference between the triangles (\ref{align Krauses AR triangle}) and (\ref{align our triangle with right almost split map}). More precisely, we will show the following.
\begin{theorem} \label{theorem.S triangle almost split}
Assume that \( k \) is noetherian, and let \( I = \coprod_{\mathfrak{m} \in \MaxSpec k} I(k / \mathfrak{m}) \) be the direct sum of the injective envelopes of the simple \( k \)-modules.
Let $\mathbb S\colon \catX\to\catT$ be a partial Serre functor. If the endomorphism ring of $X\in\catX$ is local and finite over $k$, then the triangle (\ref{align our triangle with right almost split map}) is almost split.
\end{theorem}
Also for the proof of this theorem we prepare several lemmas.
\begin{lemma} \label{lemma.induces local map}
Let \( k \) be noetherian, and let \( E \) be a finite \( k \)-algebra which is local. Then \( \mathfrak{m} = \Ker\left( k \to E / \!\rad E\right) \) is a maximal ideal of \( k \).
\end{lemma}
\begin{proof}
Since the target of the map above is a skewfield, the image of \( k \) in it is a domain, and we observe that \( \mathfrak{m} \) is a prime ideal. Moreover we note that the quotient field of \( k / \mathfrak{m} \) is a \( k \)-submodule of \( E / \rad E \). Since \( k \) is noetherian this quotient field is also finite over $k$, whence even over \( k / \mathfrak{m} \). However, the quotient field of an integral domain is finite over the domain only if the domain is already a field. The only remaining possibility is that \( \mathfrak{m} \) is a maximal ideal.
\end{proof}
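For a concrete illustration of Lemma~\ref{lemma.induces local map}, take \( k = \mathbb{Z} \) and \( E = \mathbb{Z}/p^n \) for a prime \( p \). This is a finite local \( \mathbb{Z} \)-algebra with \( \rad E = pE \), and indeed
\[ \mathfrak{m} = \Ker\big( \mathbb{Z} \to E/\!\rad E \big) = \Ker\big( \mathbb{Z} \to \mathbb{Z}/p \big) = (p) \]
is a maximal ideal of \( \mathbb{Z} \).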
\begin{lemma} \label{lemma no maps to other maximal}
Let \( k \) be noetherian, and let \( E \) be a finite \( k \)-algebra which is local. Let \( \mathfrak{m} \) be as in Lemma~\ref{lemma.induces local map} above. Then \( \Hom_{k}(E, \coprod_{\substack{\mathfrak{n} \in \MaxSpec k \\ \mathfrak{n} \neq \mathfrak{m}}} I(k / \mathfrak{n})) = 0. \)
\end{lemma}
\begin{proof}
For \( x \in k \setminus \mathfrak{m} \) we observe that \( x \) becomes invertible in \( E \) by Lemma~\ref{lemma.induces local map}.
Let \( \varphi \colon E \to I(k / \mathfrak{n} ) \) for some maximal ideal \( \mathfrak{n} \neq \mathfrak{m} \). Since \( E \) is finitely generated, so is \( \Imm \varphi \). It follows that \( (\Imm \varphi) \mathfrak{n}^s = 0 \) for some \( s \). Choose \( x \in \mathfrak{n} \setminus \mathfrak{m} \). Now \( x \) acts both nilpotently and invertibly on \( \Imm \varphi \), whence \( \varphi = 0 \).
\end{proof}
\begin{lemma} \label{lemma Hom preserves injective envelope}
Let $k$ be a commutative noetherian ring, and let $E$ be a finite \( k \)-algebra which is local. Let \( I \) be as in Theorem~\ref{theorem.S triangle almost split}.
Then $\Hom_k(E,I)$ is an injective envelope of $E/\!\rad E$.
\end{lemma}
\begin{proof}
We have already established, in the proof of Lemma~\ref{lemma functors summand}, that there is a monomorphism from \( E /\!\rad E \) to \( \Hom_k(E, I) \). It follows immediately from its description that this factors through $ \Hom_k(E / \!\rad E, I) \hookrightarrow \Hom_k(E, I).$
Let \( \mathfrak{m} \) be as in Lemma~\ref{lemma.induces local map}. Note that by Lemma~\ref{lemma no maps to other maximal} we may replace \( I \) by \( I(k / \mathfrak{m}) \) without affecting the \( \Hom \)-sets.
Observe that \( \Hom_k(E / \!\rad E, I(k / \mathfrak{m})) = \Hom_k(E / \!\rad E, k/\mathfrak{m}) \), since \( \mathfrak{m} \) annihilates \( E / \!\rad E \) by construction. It follows in particular that the induced monomorphism \( E / \!\rad E \to \Hom_k(E / \!\rad E, I(k/\mathfrak{m})) \) is an isomorphism, since these two objects are finite dimensional of the same dimension over \( k / \mathfrak{m} \).
Thus we need to show that \( \Hom_k(E / \!\rad E, I(k/\mathfrak{m})) \) is an essential submodule of \( \Hom_k(E, I(k/\mathfrak{m})) \). In other words, we need to show that any non-zero submodule of $\Hom_k(E,I(k/\mathfrak{m}))$ contains a morphism which vanishes on $\rad E$.
To this end, we show that for each $\phi\in\Hom_k(E,I(k/\mathfrak{m}))$ there is some $n$ such that $\phi(\rad E)^n=0$. Note that \( E / \mathfrak{m} E \) is local with radical \( \rad E / \mathfrak{m} E \), and moreover finite dimensional over $k / \mathfrak{m}$. It follows that \( \rad E / \mathfrak{m} E \) is nilpotent, that is there is \( s \) such that \( (\rad E)^s \subseteq \mathfrak{m} E\). Moreover, since $E$ is finitely generated over $k$, so is \( \Imm \phi \); being a finitely generated submodule of \( I(k/\mathfrak{m}) \), it is annihilated by \( \mathfrak{m}^t \) for some $t$. Now \( \phi\big((\rad E)^{st}\big) \subseteq \phi(\mathfrak{m}^t E) = \mathfrak{m}^t \phi(E) = 0 \), so \( n = st \) works.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem.S triangle almost split}]
We write $\End_{\catT}(X)=E$. By Lemma~\ref{lemma Hom preserves injective envelope} we know that \( \Hom_k(E,I) \) is an injective envelope of \( E/\!\rad E \). The argument in the proof of Lemma~\ref{lemma functors summand} shows that
\[ \Hom_E(\catT(X,-), I_X) \cong \catT(X, -)^\ast. \]
Thus \( \tau X [1] \) of Theorem~\ref{theorem Krause AR-theorem} coincides with \( \mathbb{S} X \). It follows that the triangles (\ref{align Krauses AR triangle}) and (\ref{align our triangle with right almost split map}) coincide (by Theorem~\ref{theorem our triangle summand of AR triangle} or directly by comparing the two constructions).
\end{proof}
Connecting back to Section~\ref{section construction of rel serre}, we obtain the following application.
\begin{corollary} \label{corollary Kb(Lambda) has AR}
If $\Lambda$ is an Artin algebra, then $\Kb(\modcat\Lambda)$ has almost split triangles.
\end{corollary}
\begin{proof}
We found a partial Serre functor for \( \catK(\Modcat \Lambda) \) in Section~\ref{section construction of rel serre}, and argued in Observation~\ref{observation 0-cocompacts in Kb(Lambda)} that if we choose \( I \) to be an injective envelope of the semisimple \( k / \!\rad k \), then this functor induces an auto-equivalence on \( \Kb(\modcat \Lambda) \). As the assumptions of Theorem~\ref{theorem.S triangle almost split} are satisfied, we have almost split triangles completely inside \( \Kb(\modcat \Lambda) \), starting and ending in any object with local endomorphism ring in that subcategory.
\end{proof}
\section{Non-degeneracy} \label{section non-degeneracy}
For a partial Serre functor $\mathbb S$, there is no symmetry between the objects \( X \) and \( \mathbb{S} X \). For instance we have seen in Theorem~\ref{theorem partial Serre implies X compact and Y 0-cocompact} that \( X \) is compact, while \( \mathbb{S} X \) is only \( 0 \)-cocompact. Similarly, in the construction of almost split triangles (Theorem~\ref{theorem BIKP}) the third term is required to be compact, while the first term will typically not be cocompact. However the definition of almost split triangles is completely self-dual.
In this section we study the following concept, which will serve as a weaker but symmetric version of partial Serre duality.
\begin{definition}
Let $X,Y\in \catT$. We say that \emph{composition from \( X \) to \( Y \) is non-degenerate} if the following conditions are satisfied.
\begin{enumerate}
\item For each $0\neq f \colon X\to T$ there is some $g\colon T \to Y$ such that $gf\neq 0$.
\item For each $0\neq g \colon T\to Y$ there is some $f\colon X \to T$ such that $gf\neq 0$.
\end{enumerate}
\end{definition}
\begin{remark}
Composition from \( X \) to \( Y \) is non-degenerate if and only if any non-zero $\catT$-submodule of $\catT(X,-)$ or $\catT(-,Y)$ contains a non-zero map $X\to Y$.
\end{remark}
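As a first (rather degenerate) example, suppose \( k \) is a field and \( \catT = \catK(\Modcat k) \). Every complex of vector spaces splits, so each \( T \in \catT \) is isomorphic to \( \coprod_i H^i(T)[-i] \), and \( \catT(k, T) \cong H^0(T) \). Hence a non-zero morphism \( f \colon k \to T \) corresponds to a non-zero element of \( H^0(T) \), and composing with the projection onto \( H^0(T) \) followed by a suitable linear functional,
\[ g \colon T \to H^0(T) \to k, \]
gives \( gf \neq 0 \); the dual argument verifies the second condition. Thus composition from \( k \) to \( k \) is non-degenerate.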
Our aim is to show that composition being non-degenerate is closely linked to almost split triangles (Theorem~\ref{theorem almost split vs non deg}) and partial Serre functors (Proposition~\ref{proposition comp to T(X,SX) is non-degenerate}). Then, in Theorem~\ref{theorem non-degeneracy implies 0-cocompactness with correct def}, we will show that even this weak notion of duality implies that the two objects are \( 0 \)-compact and \( 0 \)-cocompact, respectively.
\begin{theorem} \label{theorem almost split vs non deg}
Let $X,Y\in\catT$ be such that $\End_{\catT}(X)$ and $\End_{\catT}(Y)$ are local rings. Let \( f \colon X \to Y \) be a non-zero morphism. We denote by
\[ \Delta \colon \hspace{.5em} Y[-1] \stackrel{d}\to E \stackrel{e}\to X \stackrel{f}\to Y \]
the triangle ending in \( f \). Then the following are equivalent.
\begin{enumerate}[label=(\roman*)]
\item \( \Delta \) is an almost split triangle;
\item \( d \) is left almost split;
\item \( e \) is right almost split;
\item \( gf = 0 \) whenever \( g \) is not a split monomorphism;
\item \( fh = 0 \) whenever \( h \) is not a split epimorphism;
\item for each \( 0 \neq t \colon T \to Y \) there is \( s \colon X \to T \) such that \( ts = f \);
\item for each \( 0 \neq s \colon X \to S \) there is \( t \colon S \to Y \) such that \( ts = f \);
\item composition from \( X \) to \( Y \) is non-degenerate, and \( f \cdot \rad \End_{\catT}(X) = 0 \);
\item composition from \( X \) to \( Y \) is non-degenerate, and \( \rad \End_{\catT}(Y) \cdot f = 0 \);
\item composition from \( X \) to \( Y \) is non-degenerate, and any non-zero \( \End_{\catT}(X) \)-submodule of \( \catT(X, Y) \) contains \( f \);
\item composition from \( X \) to \( Y \) is non-degenerate, and any non-zero \( \End_{\catT}(Y)^{\op} \)-submodule of \( \catT(X, Y) \) contains \( f \).
\end{enumerate}
\end{theorem}
\begin{proof}
$(iv) \iff (v)$: Suppose $(iv)$ holds and let $h\colon H\to X$ be such that $fh\neq 0$. Consider the following diagram.
\[ \begin{tikzcd}[scale=.8]
X \ar[r,"f"] & Y \ar[d,equal] \\
H \ar[r,"fh"] \ar[u,swap,"h"] & Y \ar[r,"g"] & \Cone(fh) \ar[r] & \text{}
\end{tikzcd} \]
Then $g$ is not split mono which by assumption implies $g f = 0$. Thus there is some $s\colon X\to H$ such that $f=fhs$, which in turn implies that $hs$ is invertible in $\End_{\catT}(X)$, since this ring is local. So $h$ is a split epimorphism. A dual argument shows that $(v)\implies (iv)$.
\smallskip \noindent
\( (iii) \iff (v) \): This is just the fact that a morphism factors through \( e \) if and only if it becomes \( 0 \) when composing with \( f \) --- a basic property of triangles.
\smallskip \noindent
\( (ii) \iff (iv) \) is the dual of \( (iii) \iff (v) \).
\smallskip
Now we have seen that \( (ii) \) to \( (v) \) are equivalent. Since \( (i) \iff (ii) \wedge (iii) \) by definition, it follows that also \( (i) \) is equivalent to these statements.
\smallskip \noindent
\( (iv) \iff (vi) \): Consider the triangle \( T \stackrel{t}\to Y \stackrel{g}\to G \to T[1] \). Note that we can construct \( t \) from \( g \) and vice versa. Moreover \( t \) is non-zero if and only if \( g \) is not split mono. Now the claimed equivalence is the fact that \( f \) factors through \( t \) if and only if it becomes zero when composing with \( g \).
\smallskip \noindent
\( (v) \iff (vii) \) is the dual of \( (iv) \iff (vi) \).
\smallskip
Now we know that \( (i) \) to \( (vii) \) are equivalent. Clearly \( (vi) \) and \( (vii) \) combined imply that composition from \( X \) to \( Y \) is non-degenerate. Moreover \( f \cdot \rad \End_{\catT}(X) = 0 \) is a special case of \( (v) \), and \( \rad \End_{\catT}(Y) \cdot f = 0 \) is a special case of \( (iv) \). Thus we know that \( (i) \) through \( (vii) \) imply \( (viii) \) and \( (ix) \).
\medskip \noindent
$(viii)\implies(xi)$: It clearly suffices to consider cyclic submodules $\End_{\catT}(Y)\cdot g$ for $0\neq g \in \catT(X,Y)$. Consider the triangle $\Cone(g)[-1] \stackrel{\alpha}\to X \stackrel{g}\to Y \to \Cone(g)$. Suppose $f\alpha \neq 0$. By non-degeneracy of composition from \( X \) to \( Y \) there is some $\beta\colon X\to \Cone(g)[-1]$ such that $f\alpha \beta \neq 0$. But since $g\neq 0$, $\alpha$ is not split epi, which implies $\alpha\beta\in \rad \End_{\catT}(X)$. This contradicts $(viii)$, hence $f\alpha=0$. Thus $f$ factors through $g$, i.e.\ $f \in \End_{\catT}(Y)\cdot g$.
\smallskip \noindent
\( (ix) \implies (x) \) is the dual of \( (viii) \implies (xi) \).
\smallskip \noindent
\( (x) \implies (vi) \): Let \( 0 \neq t \colon T \to Y \). By non-degeneracy there is a map \( s_1 \colon X \to T \) such that \( t s_1 \neq 0 \). By assumption we thus have \( f \in t s_1 \End_{\catT}(X) \), i.e.\ there is \( s_2 \in \End_{\catT}(X) \) such that \( t s_1 s_2 = f \).
\smallskip \noindent
\( (xi) \implies (vii) \) is the dual of \( (x) \implies (vi) \).
\end{proof}
In particular the above theorem says that any almost split triangle gives rise to a non-degenerate composition. In case that one of the endomorphism rings is artinian, we have the following converse.
\begin{corollary} \label{corollary non-deg implies almost split triangle}
Let \( X \) and \( Y \) be objects in \( \catT \) with local endomorphism rings, and assume that at least one of these two endomorphism rings is artinian.
If composition from \( X \) to \( Y \) is non-degenerate, then there is an almost split triangle
\[ Y[-1] \to E \to X \to Y. \]
\end{corollary}
\begin{proof}
Note that \( \catT(X, Y) \neq 0 \) by definition of non-degeneracy. Assume \( \End_{\catT}(X) \) is artinian. This implies that \( \rad \End_{\catT}(X) \) is nilpotent. It follows that there is some non-zero \( f \in \catT(X, Y) \) such that \( f \cdot \rad \End_{\catT}(X) = 0 \). The claim now follows from implication \( (viii) \implies (i) \) in Theorem~\ref{theorem almost split vs non deg} above.
\end{proof}
\begin{proposition} \label{proposition comp to T(X,SX) is non-degenerate}
Let $\mathbb S\colon\catX\to\catT$ be a partial Serre functor. Then composition from \( X \) to \( \mathbb S X \) is non-degenerate for each $X\in\catX$.
\end{proposition}
\begin{proof}
Let us start with a non-zero morphism \( f \colon X \to T \), and complete it to a triangle \( \Cone(f)[-1] \to X \stackrel{f}\to T \to \Cone(f) \). By the naturality of the isomorphism defining partial Serre duality we have the following commutative square.
\[ \begin{tikzcd}
\catT(X, \mathbb S X) \ar[r,"\cong"] \ar[d] & \catT(X, X)^\ast \ar[d] \\
\catT(\Cone(f)[-1], \mathbb S X) \ar[r,"\cong"] & \catT(X, \Cone(f)[-1])^\ast
\end{tikzcd} \]
Since \( f \) is non-zero the map \( \catT(X, \Cone(f)[-1]) \to \catT(X, X) \) is not onto, and hence its dual is not mono. It follows that the left vertical map above is not mono either, that is, there is a non-zero map from \( X \) to \( \mathbb{S} X \) whose composition with \( \Cone(f)[-1] \to X \) vanishes. It follows that this map factors through \( f \).
Now take a non-zero $g\colon T\to\mathbb SX$. By assumption we have a natural isomorphism \[\phi\colon\catT(-,\mathbb SX)\to\catT(X,-)^\ast.\] Let $\eta=\phi_T(g)$. Then $\eta$ is non-zero, so in particular there is some $f\colon X\to T$ such that $\eta(f)\neq 0$. We claim that $gf\neq 0$. Of course, it suffices to show that $\phi_X(gf)\neq0$. But by the commutative diagram
\[ \begin{tikzcd}
\catT(T,\mathbb SX) \ar[r,"\phi_T"] \ar[d]& \catT(X,T)^\ast \ar[d] \\
\catT(X,\mathbb SX) \ar[r,"\phi_X"] & \catT(X,X)^\ast
\end{tikzcd} \]
we have $\phi_X(gf)=\eta(f\circ -)$, which is non-zero since so is $\eta(f)$.
\end{proof}
\begin{remark}
An object $X$ may have several `non-degenerate partners'. Indeed, if $\mathbb S X$ and $\tau X$ exist, then composition from $X$ to either is non-degenerate. However, in general $\tau X$ is only a summand of $\mathbb S X$.
\end{remark}
\begin{theorem} \label{theorem non-degeneracy implies 0-cocompactness with correct def}
Let $X,Y\in\catT$ be such that composition from \( X \) to \( Y \) is non-degenerate. Then $X$ is $0$-compact and $Y$ is $0$-cocompact.
\end{theorem}
The proof of this result relies on the following observation.
\begin{lemma} \label{lemma non-degeneracy implies covariant and contravariant ghosts coincide}
Let $X,Y\in\catT$ be such that composition from \( X \) to \( Y \) is non-degenerate. Then the following statements hold.
\begin{enumerate}
\item An object is a covariant $X$-ghost if and only if it is a contravariant $Y$-ghost.
\item A morphism $f\colon S\to T$ is a covariant $X$-ghost if and only if it is a contravariant $Y$-ghost.
\end{enumerate}
\end{lemma}
\begin{proof}
Since composition from \( X \) to \( Y \) is non-degenerate, $(1)$ is clear and
\begin{align*}
\text{$f$ is a covariant $X$-ghost} & \text{$\iff$ $f\alpha=0$ for each $\alpha\colon X\to S$} \\
&\text{$\iff$ $\beta f\alpha=0$ for each $\alpha\colon X\to S$ and $\beta\colon T\to Y$} \\
&\text{$\iff$ $\beta f=0$ for each $\beta\colon T\to Y$}\\
&\text{$\iff$ $f$ is a contravariant $Y$-ghost.} \qedhere
\end{align*}
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem non-degeneracy implies 0-cocompactness with correct def}] We show that $Y$ is $0$-cocompact; the proof that $X$ is $0$-compact is dual.
Take a sequence \[\mathbb T \colon \cdots \to T_2\to T_1\to T_0\] in $\catT$ such that $\catT(\mathbb T[-1], Y)$ is dual ML and $\colim \catT(\mathbb T, Y)=0$. It suffices to show that $\catT(X,\holim \mathbb T)$ vanishes. As in the proof of Theorem~\ref{theorem partial Serre implies X compact and Y 0-cocompact} there is a short exact sequence \[0\to\limit^1\catT(X,\mathbb T[-1])\to\catT(X,\holim\mathbb T)\to\limit\catT(X,\mathbb T)\to 0,\] so we need only prove that the outer terms are zero.
We first show that $\limit\catT(X,\mathbb T)$ vanishes. So assume to the contrary that there is some $\left(\dots, \phi_2,\phi_1,\phi_0\right)\in\limit\catT(X,\mathbb T)$ with $\phi_i\neq0$. Then, by non-degeneracy of composition from \( X \) to \( Y \), there is some $\psi\colon T_i\to Y$ such that $\psi\phi_i\neq0$. But by assumption, the image of $\psi$ in $\colim\catT(\mathbb T,Y)$ vanishes, that is the composition $T_j\to T_i\stackrel{\psi}\to Y$ is zero for sufficiently large $j$. In particular, the non-zero $\psi\phi_i$ factors through the zero morphism $T_j\to Y$, as indicated by the following diagram, and we have a contradiction.
\[ \begin{tikzcd}[scale=.8]
X \ar[dr,swap,"\phi_j"] \ar[r,"\phi_i"] & T_i \ar[r,"\psi"] & Y \\
& T_j \ar[u]
\end{tikzcd} \]
Let us now show that $\limit^1\catT(X,\mathbb T[-1])=0$. It suffices to demonstrate that \[\catT(X,\mathbb T[-1])= \cdots \to\catT(X,T_2[-1])\stackrel{t_2}\to\catT(X,T_1[-1])\stackrel{t_1}\to\catT(X,T_0[-1])\] is ML. Assume to the contrary that for some $m$, the sequence of subgroups
\[\Imm t_m \supset\Imm t_m t_{m+1} \supset \cdots\]
does not stabilize. Without loss of generality, we may assume that $m=0$ and that each image is properly contained in the previous one. In other words, for each $i$ there is some $\phi_i\colon X\to T_0[-1]$ such that $\phi_i$ factors through $T_i[-1]$, say via $\psi_i$, but not through $T_{i+1}[-1]$. The following diagram, with the bottom row a triangle,
\[ \begin{tikzcd}[scale=.8]
X \ar[r,"\psi_i"]\ar[dr,"\phi_i"]\ar[d,dashed,"\nexists"] & T_i[-1] \ar[d] \\
T_{i+1}[-1] \ar[r] & T_0[-1] \ar[r] & \Cone \ar[r] & T_{i+1}
\end{tikzcd} \]
reveals that the composition \[X\stackrel{\psi_i}\to T_i[-1]\to T_0[-1]\to \Cone\] is non-zero. By non-degeneracy of composition from \( X \) to \( Y \), there is some non-zero \[X\stackrel{\psi_i}\to T_i[-1]\to T_0[-1]\to \Cone\to Y.\] In particular, we can find a morphism $\omega_i\colon T_0[-1]\to Y$ such that the composition $T_i[-1]\to T_0[-1] \stackrel{\omega_i}\to Y$ is non-zero, while $T_{i+1}[-1]\to T_0[-1] \stackrel{\omega_i}\to Y$ does vanish. In other words, in the commutative diagram
\[ \begin{tikzcd}
\Ker_i \ar[r,>->] & \catT(T_0[-1],Y) \ar[d,equal] \ar[r,] & \catT(T_i[-1],Y) \ar[d] \\
\Ker_{i+1} \ar[r,>->] & \catT(T_0[-1],Y) \ar[r,] & \catT(T_{i+1}[-1],Y) \\
\end{tikzcd} \]
with exact rows, we have $\omega_i\in\Ker_{i+1}\setminus\Ker_i$. In particular, the sequence \[\Ker_1\subsetneq\Ker_2\subsetneq\Ker_3\subsetneq\cdots\] does not stabilize, contradicting the assumption that $\catT(\mathbb T[-1], Y)$ is dual ML.
\end{proof}
\begin{corollary} \label{corollary almost split forces 0-(co)compactness}
Let \( X \to Y \to Z \to X[1] \) be an almost split triangle in a triangulated category. Then \( X \) is \( 0 \)-cocompact and \( Z \) is \( 0 \)-compact.
\end{corollary}
\begin{proof}
This is an immediate consequence of Theorem~\ref{theorem almost split vs non deg} and Theorem~\ref{theorem non-degeneracy implies 0-cocompactness with correct def}.
\end{proof}
\section{Introduction}
Rotation curves derived from neutral hydrogen observations
at the outer regions of spiral
galaxies unambiguously show that substantial amounts of
dark matter are required (Bosma 1978; Begeman 1987, 1989).
Any physically reasonable distribution of this dark matter
necessitates the presence of at least some of that in the
inner optical disc region, contributing in some degree to the
total rotation in that region.
Unfortunately, from the observed rotation curve and light
distribution one cannot a priori determine the
ratio of dark to luminous matter (van Albada et al. 1985).
This means that the
M/L ratio of the disc cannot be determined from a rotation curve analysis
only. There are arguments which might lead to the so-called
``maximum disc hypothesis'' (hereafter: md hypothesis;
van Albada \& Sancisi 1986; Freeman 1992),
favouring the maximum possible rotational
contribution of the disc. However, no hard evidence exists
proving this hypothesis. Not only to determine the amount
of dark matter in a galaxy, but also to constrain mass models of
dark matter and galaxy formation scenarios (Katz \& Gunn 1991),
it is of the utmost importance to know the relative rotational
contribution of the disc.
Stellar velocity dispersions provide a direct measure
of the local surface density of a disc and from that the
disc rotation can be calculated.
For a sample of 12 disc-dominated galaxies such
dispersions have been measured. Results are
summarized, discussed and analysed by Bottema (1993, hereafter B93).
It appears that the magnitude of the stellar velocity dispersions,
both in the radial and vertical direction, is proportional to
the square root of the surface density. That can be explained when for
a stellar disc, a constant M/L ratio is combined with the observed
constancy of the scaleheight as a function of radius (van der Kruit \&
Searle 1981a,b, 1982). To compare the stellar kinematics of different
galactic discs the dispersion was parameterized by fitting a radial
relation to the observations and taking the dispersion value at,
for instance, one scalelength. Comparison of the inclined and face-on
systems showed that the ratio of vertical to radial dispersion is close
to 0.6, as is observed in the solar neighbourhood. Moreover, larger and
more massive discs have larger velocity dispersions. These matters are
illustrated in Fig. 1, for the sample of 12 galaxies with
observed dispersions.
For an exponential disc a simple relation can be derived
(Freeman 1970; B93) between the maximum rotation of a disc and the
vertical velocity dispersion ${\langle}v^2_z{\rangle}^{1/2}$:
\begin{equation}
v^{\rm disc}_{\rm max} = 0.88\; {\langle}v^2_z{\rangle}^{1/2}_{R=0} \sqrt{ {{h}\over{z_0}} },
\end{equation}
where the maximum is reached at a radius of 2.2$h$. This relation
involves only the radial scalelength ($h$) and the vertical sech$^2$
scale parameter $z_0$ (van der Kruit \& Searle 1981a; see also Eq. 26),
which is approximately equal to twice the exponential scaleheight for
suitable distances above the plane. The observed dispersions
then allow the calculation of the rotation which the disc can supply
to the total galactic rotation. For a reasonable $h/z_0$ behaviour
it appears that the maximum disc contribution is
63\% $\pm$ 10\%, roughly independent of the mass of a galaxy. The
missing rotation then has to be supplied by the dark halo (and the
bulge, if present). Note that this 63\% criterion does \underbar{not}
depend on the colour and surface brightness of the disc. When
applying the md hypothesis the disc contributes at 2.2 scalelengths
between 85 to 90\% of the observed maximum rotation. The value following
from the observed velocity dispersion is considerably lower, which
leads to disc masses and M/L ratios being a factor of two smaller.
Nevertheless, with the 63\% contribution, the disc is still dominant
in the inner regions.
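As a numerical illustration of Eq. (1), the short Python sketch below evaluates the maximum rotation of a disc from its central vertical dispersion; the input values are those quoted for NGC 6503 in Sect. 6 ($\ifmmode {{\langle}v^2_z{\rangle}^{1/2} = 33$ km~s$^{-1}$, $h/z_0 = 6$).

```python
import math

def v_disc_max(sigma_z0, h_over_z0):
    """Eq. (1): maximum rotation (at R = 2.2h) of an exponential disc
    from its central vertical velocity dispersion (km/s)."""
    return 0.88 * sigma_z0 * math.sqrt(h_over_z0)

# Values quoted for NGC 6503 in Sect. 6: sigma_z = 33 km/s, h/z0 = 6
v = v_disc_max(33.0, 6.0)
print(round(v))   # 71 km/s, matching the value derived in Sect. 6
```

Since disc mass scales as the square of the disc rotation, the difference between the 63\% criterion and the 85--90\% of the md hypothesis, $(0.875/0.63)^2 \approx 1.9$, is indeed the factor of two in disc mass mentioned above.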
Ideally if one would like to know the disc rotational contribution
one should take a spectrum of the galaxy and determine the
velocity dispersion of the disc. That is, unfortunately, quite difficult
and time consuming. How then to determine quickly the amount of disc
rotation and disc mass? For a ``normal'' galaxy or sample with normal
galaxies the 63\% criterion can safely be assumed although with
an error of 10\%. Here normal means
comparable to the sample in B93 with $v^{\rm obs}_{\rm max}$
$\ga$ 100 km~s$^{-1}$, B-V around 0.7 and central surface brightness (${\mu}_0$)
close to the value of Freeman's (1970) law;
${\mu}_0 = {\mu}_{0,F}$ = 21.65 B-mag. arcsec$^{-2}$.
Problems arise for galaxies which are small, faint, or blue.
Small galaxies are often faint, meaning having a low surface
brightness (LSB) although occasionally also large LSB galaxies are found
(de Blok \& McGaugh 1996). Such small and/or faint galaxies generally
have rotation curves which do not reach a maximum velocity
over the observed radial extent (Casertano \& van Gorkom 1991).
Obviously the 63\% criterion then has no meaning, but could the
description be generalized in some way?
\begin{figure}[htbp]
\psfig{figure=maxd.f1.ps,width=8.8cm}
\caption{
Observed disc stellar velocity dispersions of a sample of 12
spiral galaxies (B93) as a function of the absolute luminosity of
the old disc population (see Sect. 2).
The disc dispersion is parameterized in
such a way that for ${\langle}v^2_z{\rangle}^{1/2} / {\langle}v^2_R{\rangle}^{1/2} = 0.6$ data for
face-on and inclined systems should fall on the same relation; as
appears to be the case. Obviously, the brighter and more
massive galaxies have larger velocity dispersions.
}
\end{figure}
Most of the light of very blue galaxies originates from a
young stellar population which has a negligible
mass and consequently this light
is not representative for the massive stellar population
which determines the velocity dispersion. Therefore a population
correction is needed when comparing velocity dispersion with the
brightness of a galaxy. This was dealt with in B93 by assigning
a so-called ``old disc population'' absolute magnitude ($M_{\rm od}$)
to a disc. An extensive description of this procedure is given in Sect. 2
such that for a galaxy with arbitrary colour the disc rotational
contribution can be determined.
The remainder of this paper deals with the construction of a general
description for the maximum rotation of a galactic disc. To that aim in
Sect. 2 a simple but adequate population correction procedure is discussed.
Section 3 analyses the situation for a single colour, fixed central
surface brightness
disc and in Sect. 4 this is generalized for an arbitrary disc.
The inferred mass-to-light ratio is calculated in Sect. 5,
and Sect. 6 describes two examples: the normal spiral NGC 6503
and LSB galaxy NGC 1560. Finally in Sect. 7 the method and its
applicability is discussed and conclusions are formulated.
Throughout a Hubble constant of 75 km~s$^{-1}$ Mpc$^{-1}$ is adopted.
\section{The old disc population}
Velocity dispersions measure the local mass density in the disc.
To make any comparison between dynamical quantities derived from the
dispersion and the emitted light one would like a reasonable
indication of the light emitted by the population
that contains nearly all the mass in the disc.
In principle, the light of any young, massless population should
be subtracted.
A mass-to-light ratio $(M/L)_{\rm od}$ can then be assigned
to the remaining old disc population
and the
principal assumption for the remainder of this paper will be
that this $(M/L)_{\rm od}$ is the same for all galactic discs.
This is equivalent to assuming an equal M/L ratio for galaxies having
the same colour. Such an assumption is perfectly reasonable
and several arguments for this are given in B93. The basic underlying
hypothesis is that for the low mass stars the IMF is the same for all discs
and that the range of metallicities is not too broad. Although there is
no proof that the low mass end of the IMF is universal, there is
certainly no indication for the contrary (Laughlin \& Bodenheimer 1993;
Wyse 1995).
To obtain the luminosity of the old disc, the bulge light, of course,
also has to be subtracted from the total light of the galaxy.
In Bottema (1988) a so-called ``poor man's'' population synthesis (pmps)
was performed to treat the problem of colour gradients in the disc
of NGC 3198. This pmps has been applied to the galaxy sample in B93
and will presently be described and discussed in detail.
A galactic disc is assumed to consist of only two stellar populations;
an old disc population and a young disc population defined as:
\medskip
The old disc population:
\begin{itemize}
\item contains \underbar{all} the mass in a disc.
\item has B-V = 0.97.
\end{itemize}
\medskip
The young disc population:
\begin{itemize}
\item contains no mass.
\item has B-V = -0.03.
\end{itemize}
Using the observed B-V colour the amount of light from each component
can be determined in the B or the V band. This is illustrated in Fig. 2,
where as a function of B-V colour the ratio of old disc to total disc
light is presented. For example, for B-V = 0.6, 52\% of the light
in the B-band originates from the old disc population. The absolute
magnitude of the old disc population in the B-band ($M^B_{\rm od}$) is
related to the total magnitude in B ($M^B_{\rm tot}$) as
\begin{equation}
M^B_{\rm od} = M^B_{\rm tot} - ct,
\end{equation}
with the correction term $ct$ given by
\begin{equation}
ct = 2.5\; {\rm log}_{10}\left[ {{ 1 - 0.973 W}\over{1.470W}} \right],
\end{equation}
and
\begin{equation}
W = 10^{-0.4(B-V)}.
\end{equation}
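As a sanity check on Eqs. (2)--(4), the sketch below (Python) evaluates the correction term and the resulting old-disc light fraction $L_{\rm od}/L_{\rm tot} = 10^{0.4ct}$; it reproduces the 52\% figure quoted above for B-V = 0.6, and the $ct \approx -0.5$ used later for B-V = 0.7.

```python
import math

def ct(b_minus_v):
    """Correction term of Eqs. (3)-(4): M_od = M_tot - ct."""
    w = 10.0 ** (-0.4 * b_minus_v)
    return 2.5 * math.log10((1.0 - 0.973 * w) / (1.470 * w))

def old_disc_fraction(b_minus_v):
    """B-band light fraction of the old disc: L_od/L_tot = 10^{0.4 ct}."""
    return 10.0 ** (0.4 * ct(b_minus_v))

print(round(old_disc_fraction(0.6), 2))   # 0.52 -> the 52% quoted above
print(round(ct(0.7), 2))                  # -0.49, i.e. M_od ~ M_tot + 0.5
```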
This treatment has its shortcomings.
A complete population synthesis (Larson \& Tinsley 1978;
Bruzual \& Charlot 1993; Worthey 1994) would be preferable, but is
much more complicated, both in the handling of such models and in
their application to the present specific situation. Even then,
effects of dust and metallicity are not, or only partially, included.
In Sect. 7 the influence of particularly these two parameters on the
employed pmps is investigated. It appears that a good assessment of
the effects can be made; they counteract one another and are
individually always below a 15\% level for the galaxies of interest.
\begin{figure}[htbp]
\psfig{figure=maxd.f2.ps,width=8.8cm}
\caption{
The proportion of the light of the old stellar, mass containing,
population for different B-V colours.
}
\end{figure}
The pmps gives a total mass-to-light ratio $(M/L)_B$ proportional
to $10^{0.4ct}$, and after stellar velocity dispersions are compared with
the luminosity of galaxies in the following sections, the absolute scale
of the mass-to-light ratio is fixed at
\begin{equation}
(M/L)_B = 2.84\; 10^{0.4ct} = 1.93\; 10^{0.4(B-V)} - 1.88
\end{equation}
in solar units.
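Equation (5) can be checked directly; note that at B-V = 0.97, the colour assigned to the pure old disc population, it returns the old-disc value 2.84 that appears as the coefficient of $10^{0.4ct}$. A short sketch (Python):

```python
def ml_B(b_minus_v):
    """Eq. (5): total B-band mass-to-light ratio in solar units."""
    return 1.93 * 10.0 ** (0.4 * b_minus_v) - 1.88

print(round(ml_B(0.97), 2))   # 2.84: the old-disc (M/L)_B
print(round(ml_B(0.70), 2))   # 1.8:  the value adopted for B-V = 0.7
```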
This can be compared with predictions of stellar population models.
Unfortunately such models cannot predict the absolute scale of
the mass-to-light ratio, only the functionality with colour. This is caused
by the uncertainty of the IMF at the low mass end. For instance,
$M/L \propto m_l^{1-x}$, proportional to the low mass cutoff $(m_l)$
to the power $1-x$, where $x = 1.35$ for a Salpeter (1955) IMF.
This means that mass-to-light ratios can be increased simply by adding
more low mass stars. The uncertainty in absolute $M/L$ ratio is even
more increased because $M/L$ ratios derived from observations are
proportional to the adopted Hubble constant. Hence $M/L$ ratios can only
be compared differentially and presently values will be fixed at
$(M/L)_B = 1.79$ for B-V = 0.7 as given by Eq. (5). Results of the pmps
are compared with population synthesis models of Tinsley (1981, hereafter
T81) and those of Jablonka \& Arimoto (1992, hereafter JA92). Presented
in Fig. 3 is a comparison of the B-band mass-to-light ratio as a function
of B-V, for the pmps, T81 and JA92. In all cases $(M/L)_B$ is increasing
towards redder colours, though for the T81 models more steeply
than the others. It is striking that, despite its simplicity, the
pmps shows a nearly identical trend as the other more sophisticated models.
This supports the applicability of the method.
\begin{figure}[htbp]
\psfig{figure=maxd.f3.ps,width=8.8cm}
\caption{
Total mass-to-light ratio (in B) versus B-V colour according to
Tinsley (T81), the described poor man's population synthesis (pmps), and
according to Jablonka \& Arimoto (JA92). The curves have been scaled
to coincide at $(M/L)_B = 1.79$ for B-V = 0.7. Despite its simplicity,
the predicted mass-to-light ratio for the pmps is similar to that of the
sophisticated population synthesis models.
}
\end{figure}
The present paper deals with the amount of mass in a galactic disc,
which is given by the observed velocity dispersions. Therefore it is
possible to fix the absolute scale of the mass-to-light ratio when
comparing dispersions with luminosities and colours.
To that aim a few relations will be derived for an exponential
disc, leading to a relation which can eventually be compared with
the observed dispersions in Fig. 1. For an exponential old
disc
\begin{equation}
L_{\rm od} = 2\pi ({\mu}_0)_{\rm od} h^2
\end{equation}
\begin{equation}
{\sigma}_0 = ({\mu}_0)_{\rm od} \; \left( {{M}\over{L}}
\right)_{\rm od}
\end{equation}
and
\begin{equation}
v_{\rm max,od} = v^{\rm disc}_{\rm max} = 0.88\; \sqrt{\pi G {\sigma}_0 h},
\end{equation}
(Freeman, 1970) such that
\begin{equation}
v^4_{\rm max} = 0.3 \pi G^2 ({\mu}_0)_{\rm od}
\left( {{M}\over{L}} \right)^2_{\rm od} L_{\rm od}
\end{equation}
where $L_{\rm od}$ is the total luminosity of the old disc,
$({\mu}_0)_{\rm od}$ the central surface brightness of the old disc
in linear units e.g. $L_{\odot} {\rm pc}^{-2}$,
${\sigma}_0$ the central surface density and $h$ the scalelength.
Equation (9) holds exactly and as noted above, $(M/L)_{\rm od}$
is considered a universal constant.
\section{All discs have the same colour, B-V = 0.7, and obey Freeman's law}
For such a situation Eq. (9) can be written as
\begin{equation}
M_{\rm od} = -10\; {\rm log}_{10}(v^{\rm disc}_{\rm max}) + P
\end{equation}
which is a kind of Tully-Fisher relation. It appears that
for an exponential old disc this TF relation holds with
a coefficient of exactly 10. Or,
\begin{equation}
v^{\rm disc}_{\rm max} = 10^{0.1P}\cdot 10^{-0.1M_{\rm od}}
\end{equation}
For an exponential disc the maximum rotation (at 2.2$h$) is
related to the vertical velocity dispersion through Eq. (1),
which, when combined with Eq. (11) gives
\begin{equation}
{\langle}v^2_z{\rangle}^{1/2}_{R=0} = A^{-1} \sqrt{ {{z_0}\over {h}} } \; 10^{-0.1M_{\rm od}}.
\end{equation}
This relation is equal to Eq. (19) in B93 for
\begin{equation}
A = 0.88 \; 10^{-0.1P}
\end{equation}
For Eq. (12) a fit can be made to the observed velocity
dispersions as a function of $M_{\rm od}$ by choosing a certain $h/z_0$
behaviour. In B93 three choices of this behaviour are presented: one where
$h/z_0$ is constant at five, secondly a functionality such that the
dispersion versus luminosity relation (Fig. 1) becomes linear, and thirdly,
an intermediate situation where $h/z_0 = 0.6M_{\rm od} + 17.5$.
All three give a satisfactory fit to the dispersion data yielding a
disc TF relation (Eq. 10) with almost the same constant $P$. Still, the last
behaviour is preferred.
This is because then the $h/z_0$ value is somewhat larger
for the smaller galaxies as might have been observed (Bottinelli et al. 1983;
Heidmann et al. 1972). In addition, the fit to the observed
dispersions is marginally better than for the $h/z_0 = 5$
(van der Kruit \& Searle 1981a, b, 1982) case (see Fig. 8 in B93).
A linear dispersion versus $M_{\rm od}$ relation
leads to the undesired property that for the least massive galaxies
the velocity dispersion in the disc becomes negative.
Observations of $h/z_0$ values are scarce and it is not a priori
predictable if and how $h/z_0$ is related to galaxy size.
Therefore, at the moment, with the limited information available,
the best suited linear functionality is adopted. Individual deviations
in $h/z_0$ values will certainly be the largest source of scatter
in any diagram of velocity dispersion versus galaxy size.
\begin{figure}[htbp]
\psfig{figure=maxd.f4.ps,width=8.8cm}
\caption{
The observed velocity dispersion values of Fig. 1. Given by
the solid line and shaded area is a fit to these data
of Eq. (12) for the adopted $h/z_0$ behaviour. The data are
for galactic discs with on average ${\mu}_0 \sim 21.65$
B-mag. arcsec$^{-2}$ and B-V = 0.7.
Also indicated are the expected dispersion functionalities
for lower surface brightness galaxies with ${\mu}_0 = 23$ and
24 B-mag. arcsec$^{-2}$.
}
\end{figure}
Taking $h/z_0 = 0.6 M_{\rm od} + 17.5$, Eq. (12) can now be compared
with the observed dispersions. This has been done in Fig. 8b in
B93 and presently in Fig. 4. The best fit is achieved for
$A = 0.75$. By eye an error estimate has been made which is shown in
Fig. 4 as the shaded area around
the best fit for $A = 0.75 \pm 0.1$. From this a $(M/L)_B$ ratio
was derived for B-V equal to 0.7 of 1.79 $\pm$ 0.48. Substituting
the value found for $A$ into Eq. (13) and Eq. (11) one gets
\begin{equation}
v^{\rm disc}_{\rm max} = (1.17 \pm 0.16)\; 10^{-0.1M_{\rm od}}
\end{equation}
providing the maximum rotation of a single colour (B-V = 0.7) galactic disc
where $M_{\rm od}$ is given by Eq. (2) as $M_{\rm od} = M_{\rm tot} + 0.5$.
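For illustration, Eq. (14) in numbers; the magnitudes below are arbitrary example inputs, not data from the paper.

```python
def v_disc_max_freeman(M_od):
    """Eq. (14): disc-only TF relation for B-V = 0.7 discs obeying
    Freeman's law (velocity in km/s)."""
    return 1.17 * 10.0 ** (-0.1 * M_od)

print(round(v_disc_max_freeman(-20.0)))   # 117 km/s
print(round(v_disc_max_freeman(-17.5)))   # 66 km/s
```

Because the relation has a coefficient of exactly 10, doubling the disc rotation corresponds to a brightening of $10\,{\rm log}_{10}2 \approx 3$ mag in $M_{\rm od}$.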
In Fig. 5 this maximum rotation of the disc only is compared
with the observed maximum rotation for
the total galaxy. The observations are for optical emission line
rotation curves of a sample of Sc and Sb galaxies
by Rubin et al. (1985) and of the galaxies by Mathewson et al. (1992).
For the latter absolute B magnitudes were obtained from the ESO-LV
catalogue (Lauberts \& Valentijn 1989) by Rhee (1996).
The shaded area shown in Fig. 5
corresponds to the error given by the
shading in Fig. 4. It is obvious, comparing the disc-only
TF relation (Eq. 14) with the data in Fig. 5 that the
maximum rotation of the disc is considerably lower than
the observed maximum rotation. In fact, this is the
63\% criterion cast into a Tully-Fisher representation.
This now provides the possibility
to investigate the consequences for
a galactic disc with less restricted parameters.
\section{Discs with different colours and different central
surface brightnesses}
To investigate this general case, Eq. (9) is rewritten as
\begin{equation}
v^4_{\rm max} = 0.3 \pi G^2 {\mu}_{0,F}
{{({\mu}_0)_{\rm od} }\over{{\mu}_{0,F} }} \left( {{M}\over{L}}
\right)^2_{\rm od} L_{\rm od}
\end{equation}
where ${\mu}_{0,F}$ is Freeman's value which is constant
(by definition).
Use
\begin{equation}
{{ ({\mu}_0)_{\rm od} }\over{ {\mu}_0 }} =
{{ L_{\rm od} } \over{ L_{\rm tot} }} = 10^{0.4ct}
\end{equation}
and one finds
\begin{equation}
v^{\rm disc}_{\rm max} = {\rm const} \ast \left( {{ {\mu}_0 }
\over{ {\mu}_{0,F} }} \right)^{1/4} \;
10^{0.1ct}\;
10^{-0.1M_{\rm od} }
\end{equation}
Define the average colour correction term
${\langle}ct{\rangle}$
for the sample
for which dispersions have been measured.
The sample has
galaxies with colours all close to B-V = 0.7 so that $ {\langle}ct{\rangle}
\sim -0.5$.
Insert this and Eq. (1) into Eq. (17) to get
\begin{eqnarray}
{\langle}v^2_z{\rangle}^{1/2}_{R=0} &=&
{{\rm const}\over{0.88}} 10^{0.1 {\langle}ct{\rangle} } \sqrt{ {{z_0}\over{h}}}
{{ 10^{0.1ct} }\over{ 10^{0.1 {\langle}ct{\rangle} } }}\nonumber \\
& & \cdot \left( {{ {\mu}_0 }\over{ {\mu}_{0,F} } } \right)^{1/4}
\; 10^{-0.1M_{\rm od} }
\end{eqnarray}
which can in principle again be fitted to Bottema's sample
of galactic disc dispersion measurements.
For these $ {\mu}_0 \sim {\mu}_{0,F}$ and $ct \sim {\langle}ct{\rangle} = -0.5$.
Adopt again $h/z_0 = 0.6M_{\rm od} + 17.5$, and one finds
for the same fit to the same data that:
\begin{equation}
{\rm const} \ast 10^{0.1 {\langle}ct{\rangle} } = 1.17
\end{equation}
Substitute back into Eq. (17) to find:
\begin{equation}
v^{\rm disc}_{\rm max} = 1.17\; \left( {{ {\mu}_0 }\over { {\mu}_{0,F} } }
\right)^{1/4} 10^{0.1(ct- {\langle}ct{\rangle} )}\; 10^{-0.1M_{\rm od} }
\end{equation}
Once more $M_{\rm od}$ can be converted to observed absolute magnitudes
\begin{equation}
v^{\rm disc}_{\rm max} = 1.17\; \left( {{ {\mu}_0 }\over { {\mu}_{0,F} }} \right)^{1/4}
10^{-0.1M^B_{\rm tot}} \cdot 10^{0.2(ct + 0.25)}
\end{equation}
which is the most general expression for the maximum rotational
velocity of a disc and hence the principal result
of this paper.
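As a concrete check of Eq. (21), the sketch below reproduces the disc-only maximum rotations listed in Table 1 (Sect. 6) for NGC 6503 and NGC 1560. Note that the surface-brightness ratio must be converted from mag. arcsec$^{-2}$ to a linear ratio before taking the fourth root.

```python
def v_disc_max(M_B_tot, mu0, ct, mu0_F=21.65):
    """Eq. (21): disc-only maximum rotation (km/s) from the total B
    magnitude, the B-band central surface brightness (mag/arcsec^2)
    and the colour correction term ct."""
    sb_ratio = 10.0 ** (-0.4 * (mu0 - mu0_F))   # linear mu_0 / mu_{0,F}
    return (1.17 * sb_ratio ** 0.25
            * 10.0 ** (-0.1 * M_B_tot) * 10.0 ** (0.2 * (ct + 0.25)))

# Parameters (M_B, mu_0, ct) from Table 1 in Sect. 6
print(round(v_disc_max(-18.76, 20.90, -0.78)))   # 82 km/s (NGC 6503)
print(round(v_disc_max(-15.87, 23.23, -0.78)))   # 25 km/s (NGC 1560)
```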
The stellar velocity dispersion of a disc is found by substituting
Eq. (19) into Eq. (18) to get
\begin{eqnarray}
{\langle}v^2_R{\rangle}^{1/2}_{R=0} &=&
1.33 \sqrt{ {{z_0}\over{h}} }
\left( {{ {\mu}_0 } \over { {\mu}_{0,F} }} \right)^{1/4}
10^{0.1(ct - {\langle}ct{\rangle} )}\nonumber \\
& & \cdot \; 10^{-0.1M_{\rm od}}
\end{eqnarray}
This relation is shown in Fig. 4 for B-V = 0.7 and
${\mu}_0$ = 23 and 24 mag. arcsec$^{-2}$.
Different B-V values have not been plotted,
to avoid confusion, but the result of any preferred colour - surface
brightness combination can be inferred from Eq. (22).
Figure 4 shows that the LSB discs have lower stellar velocity dispersions
than normal discs with the same luminosity.
However, this is only valid for an isolated stellar disc.
For small and/or LSB discs, for example, there may be large quantities of gas
available, which will increase the dispersion. Also a dark halo will
increase the stellar velocity dispersion in the outer parts of the
disc (B93). The extrapolation to $M_{\rm od} > -18$ in Fig. 4 is for the
adopted behaviour of $h/z_0$ as a function of luminosity. For a
different behaviour the result will, of course, be different.
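The surface-brightness dependence of Eq. (22) is weak but noticeable: at fixed $M_{\rm od}$ and colour, the predicted dispersion scales with the fourth root of the linear central surface brightness. A quick illustration of the scaling factor plotted in Fig. 4 (Python):

```python
def sb_factor(mu0, mu0_F=21.65):
    """Dispersion scaling of Eq. (22): (mu_0/mu_{0,F})^{1/4}, with the
    surface brightnesses converted from mag/arcsec^2 to a linear ratio."""
    return (10.0 ** (-0.4 * (mu0 - mu0_F))) ** 0.25

print(round(sb_factor(23.0), 2))   # 0.73: ~27% below a Freeman-law disc
print(round(sb_factor(24.0), 2))   # 0.58
```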
\setlength{\unitlength}{1cm}
\begin{figure*}
\begin{minipage}{12.1cm}
\begin{picture}(12.1,12.4)
\psfig{figure=maxd.f5.ps,width=12.1cm}
\end{picture}
\end{minipage}
\hfill
\begin{minipage}{5cm}
\begin{picture}(5,0.1)
\end{picture}
\caption{
Representation of the TF relation for \underbar{observed} maximum rotations
given by the squares and crosses, and for the \underbar{disc only}
maximum rotation, as implied by the measured disc velocity
dispersions, given by the lines.
Observational rotations are derived from optical emission
lines for Sb and Sc galaxies by Rubin et al. (1985) and for a sample of late
type galaxies by Mathewson et al. (1992, MFB). The disc-only maximum
rotation follows from Eq. (21) and is shown for a few central surface
brightness and colour combinations; the shaded area corresponds to the
shading in Fig. 4.
The disc-only has considerably lower rotational velocities than observed
for the total galaxy, a difference becoming even more pronounced for the
lower surface brightness systems. As an example, for NGC 1560, the disc-only
and total rotation is indicated by the left
and right asterisk respectively.
}
\end{minipage}
\end{figure*}
At this stage the implications of the general disc-only
TF relation (Eq. 21), will be investigated.
For ${\mu}_0 = {\mu}_{0,F}$ and B-V = 0.7,
Eq. (14) is retrieved as a special case which is already
depicted in Fig. 5 where disc-only rotations are compared
with total observed rotations of a galaxy.
In addition in Fig. 5
Eq. (21) is plotted for B-V = 0.7 and ${\mu}_0$ = 23 and 24
mag. arcsec$^{-2}$ and for B-V = 0.4 with ${\mu}_0$ is 24;
the regime of low surface brightness galaxies. For such objects
the disc-only rotation is at lower velocities than that of the
normal surface brightness discs and at even lower velocities than
given by the observations. Since LSB galaxies seem to follow the same
observed TF relation as non-LSB galaxies (Zwaan et al. 1995), it
can be concluded that LSB discs contain a larger fraction
of dark matter than normal discs.
\section{The mass-to-light ratio}
For an exponential disc:
\begin{equation}
{\langle}v^2_z{\rangle}^{1/2}_{R=0} = \sqrt{ \pi G {\mu}_0 \left( {{M}\over{L}}\right)_{\rm tot}
z_0 }
\end{equation}
Substituting Eq. (22) into Eq. (23) and
eliminating the dispersion, one gets after some
algebra and unit conversions:
\begin{equation}
\left( {{M}\over{L}} \right)_B = 28.1 \;\; 10^{-0.2 M_B^{\odot} } \;
10^{0.4 ct} 10^{-0.2 {\langle}ct{\rangle} }
\end{equation}
where $M_B^{\odot}$ is the absolute magnitude of
the sun. For $M_B^{\odot} = 5.48$ (Allen 1973) Eq. (24)
changes to
\begin{equation}
\begin{array}{lcl}
\left( {{M}\over{L}} \right)_B &=& 1.79\;\;
10^{0.4ct} \; 10^{0.2(0.5 - {\langle}ct{\rangle} )}\\
&=& 1.93\;\; 10^{0.4(B-V)} - 1.88,
\end{array}
\end{equation}
being equal to the result already given in Eq. (5).
This shows that the observed dispersions actually fix the
mass-to-light ratio of the stellar population in an absolute sense.
There is a small discrepancy with B93 because in that paper
$M_B^{\odot} = 5.41$ was used (Allen 1963) leading to a coefficient
of 1.85 instead of 1.79 in Eq. (25). Therefore now for
$B-V = 0.7$, the value used for the one colour, one brightness
disc in B93, one finds $ct = -0.5$ and $ {\langle}ct{\rangle} = -0.5 \Rightarrow
(M/L)_B = 1.79$.
Note that the mass-to-light ratio of the general disc does \underbar{not}
depend on the central surface brightness of the disc. This is not surprising
since one of the assumptions was that the old disc population has
the same mass-to-light ratio for all discs. Hence discs with equal
colours also have equal total mass-to-light ratios irrespective of
the brightness.
\section{Two examples: NGC 6503 and NGC 1560}
To get a feeling for the implications of the results derived,
two specific examples will be discussed. For two galaxies for which
detailed and well resolved rotation curves have been measured, a decomposition
will be made of these curves into the contributions of the galactic
constituents. This is done for the maximum disc hypothesis situation and
for the disc contribution determined by velocity dispersions
and colour as given in Eq. (21).
The galaxy is supposed to consist of three components. First, a
disc with density distribution $\rho (z)$ as
\begin{equation}
\rho (z) = \rho (z=0)\; {\rm sech}^2 \left( {{z}\over{z_0}}\right),
\end{equation}
with thickness parameter $z_0$ equal to 1/6 of the radial
scalelength. The radial density distribution was proportional to the
observed radial photometric profile. A rotation curve was calculated
according to Casertano (1983). Secondly, a thin gas layer
with surface density proportional to the observed radial H\,{\sc i} density
profile, multiplied by a factor of 1.4 to account for helium.
Thirdly, disc and gaslayer are embedded in a spherical pseudo
isothermal dark halo (Carignan \& Freeman 1985) with rotation
curve
\begin{equation}
v_{\rm halo}(R) = v_{\rm max}^{\rm halo}\;
\sqrt{ 1 - {{R_{\rm core}}\over{R}} {\rm arctan}\left(
{{R}\over{R_{\rm core}}}\right) }
\end{equation}
A least-squares fit is made of the sum
of the individual contributions to the observed rotation
and best fitting parameters
$v^{\rm halo}_{\rm max}$ and $R_{\rm core}$ are determined.
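The fitting step can be sketched as follows. This is a toy illustration with synthetic data: the ``observed'' curve is built from a schematic disc term plus a halo with known parameters, and the two halo parameters of Eq. (27) are then recovered by a brute-force least-squares search (the actual fits use the photometric and H\,{\sc i} profiles of each galaxy).

```python
import math

def v_halo(R, v_max, R_core):
    """Pseudo-isothermal halo rotation curve, Eq. (27)."""
    return v_max * math.sqrt(1.0 - (R_core / R) * math.atan(R / R_core))

def v_disc(R):
    """Toy disc term peaking at R = 2.2 kpc with amplitude 80 km/s."""
    return 80.0 * (R / 2.2) * math.exp(1.0 - R / 2.2)

# Synthetic "observed" curve: halo with v_max = 110 km/s, R_core = 1.5 kpc
radii = [0.5 * i for i in range(1, 21)]               # 0.5 .. 10 kpc
v_obs = [math.hypot(v_disc(R), v_halo(R, 110.0, 1.5)) for R in radii]

# Brute-force least-squares fit for the two halo parameters
best = None
for vm in [100.0 + 0.5 * j for j in range(41)]:       # 100 .. 120 km/s
    for rc in [0.5 + 0.05 * k for k in range(41)]:    # 0.5 .. 2.5 kpc
        chi2 = sum((vo - math.hypot(v_disc(R), v_halo(R, vm, rc))) ** 2
                   for R, vo in zip(radii, v_obs))
        if best is None or chi2 < best[0]:
            best = (chi2, vm, rc)

print(best[1], best[2])   # recovers 110.0 and 1.5
```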
The examples are NGC 6503 and NGC 1560; the first being a normal
surface brightness galaxy of moderate size and the second a typical example
of an LSB galaxy with ${\mu}^B_0$ = 23.23 mag. arcsec$^{-2}$.
A number of relevant galaxy parameters and results of
the fit are given in Table 1. Figures 6 and 7 show the
results for NGC 6503 and NGC 1560 respectively. For the
md hypothesis case and velocity dispersion implied case the fits are
equally valid. But there are appreciable differences in the
parameters of the individual components.
In the case where the disc mass is based on dispersions,
the disc is less massive and core radius and asymptotic halo velocity are
smaller than for the md hypothesis situation.
This implies that there is approximately a factor of two more
dark matter present in the disc region,
a conclusion which also applies for galaxies other than the two discussed
here.
\begin{figure}[htbp]
\psfig{figure=maxd.f6.ps,width=8.8cm}
\caption{
{\bf a and b}
Rotation curve decomposition for NGC 6503. The dots are the
observed values and upper solid line is the fit to these for:
{\bf a.} disc mass and rotation implied by stellar velocity dispersions
(Eq. 21) and {\bf b.} The maximum disc hypothesis.
}
\end{figure}
NGC 6503 offers the unique opportunity to compare maximum velocities of
a disc
derived in three different ways because it is in the sample for which
dispersions have been measured. Based on Eq. (21) a $v^{\rm disc}_{\rm max}$ of 82
$\pm$ 11 km~s$^{-1}$ is found. According to the 63\% criterion ($\pm$ 10\%)
one finds $v^{\rm disc}_{\rm max} = 76 \pm 8$ km~s$^{-1}$ and the maximum velocity of the disc
can be calculated directly from the observed dispersions using
Eq. (1) with $h/z_0 = 6$ and ${\langle}v^2_R{\rangle}^{1/2}_{R=h} = {\langle}v^2_z{\rangle}^{1/2}_{R=0} = 33 \pm 4$ km~s$^{-1}$ such that
$v^{\rm disc}_{\rm max} = 71 \pm 9$ km~s$^{-1}$. The three results agree and show which
errors can be expected.
For NGC 1560, $v^{\rm disc}_{\rm max}$ based on dispersion is 25 km~s$^{-1}$
which is plotted in the TF relation in Fig. 5. The observed maximum
H\,{\sc i} rotation is 78 km~s$^{-1}$ and optically one would obtain at $\sim$
4${{1}\over{2}}$ $h$ a maximum velocity of 72 km~s$^{-1}$.
The average of these two is also given in Fig. 5 representing the
observed maximum rotation. The distance between disc only and observed
data point then gives a graphical
representation of the amount of dark matter. It illustrates nicely
the dominance of that component in LSB galaxies.
\begin{table*}
\caption{Parameters of the galaxies}
\begin{flushleft}
\begin{tabular}{llll}
\noalign{\hrule\smallskip}
& NGC 6503 & NGC 1560 & Ref.\\
\noalign{\smallskip\hrule\smallskip}
$(B-V)^o_{\rm T}$ & 0.57 & 0.57 & RC3/RC3, but corrected \\
Distance (Mpc) & 6.0 & 3.0 & Bottema (1989)/ Broeils (1992) \\
$M_B^{o,i}$ (mag) & -18.76 & -15.87 & Sandage \& Tammann (1981)/
Broeils (1992) \\
${\mu}^B_0$ (mag. arcsec$^{-2}$) & 20.9 & 23.23 & Wevers et al. (1986)
+ Bottema (1989)/ Broeils (1992) \\
$ct$ & -0.78 & -0.78 & Eq. (3) \\
$v^{\rm disc}_{\rm max}$ (km~s$^{-1}$) & 82 & 25 & Eq. (21) \\
$M_{\rm disc}$ ($10^9\; M_{\odot}$) & 5.42 & 0.52 & calculated \\
$R_{\rm core}^{\rm halo}$ (kpc) & 1.08 & 2.63 & lsq fit \\
$v^{\rm halo}_{\rm max}$ (km~s$^{-1}$) & 111 & 89 & lsq fit \\
$v^{\rm disc}_{\rm max}$ md hyp. (km~s$^{-1}$) & 108 & 43 & lsq fit \\
$M_{\rm disc}$ md hyp. ($10^9\; M_{\odot}$) & 9.46 & 1.53 & calculated \\
$R^{\rm halo}_{\rm core}$ md hyp. (kpc) & 3.37 & 15 & lsq fit \\
$v^{\rm halo}_{\rm max}$ md hyp. (km~s$^{-1}$) & 119 & 243 & lsq fit \\
\noalign{\smallskip\hrule\medskip}
\end{tabular}
\end{flushleft}
\end{table*}
\section{Discussion and conclusions}
The developed description for maximum disc rotational velocities
(Eq. 21) can be applied for all discs, no matter the form
of the observed rotation curve. Furthermore, an additional value of the
method is that it has a physical basis, namely the equal
mass-to-light ratio for stellar populations having the same colour.
This mass-to-light ratio is gauged by the observed velocity dispersion
of the sample of galactic discs. The present description is unlike
that of the md hypothesis, which is purely ad-hoc, or the
63\% criterion which is established observationally.
\begin{figure}[htbp]
\psfig{figure=maxd.f7.ps,width=8.8cm}
\caption{
{\bf a and b}
As Fig. 6, but now for the LSB galaxy NGC 1560.
}
\end{figure}
However, there are some disadvantages of the method. To calculate
the maximum velocity of the disc according to Eq. (21), one needs
to know the absolute magnitude, central surface brightness, and
the colour of the disc. The absolute magnitude depends on the distance
and galactic and internal absorption correction. If these are ill
determined, appreciable errors will result. To obtain the central
surface brightness a correction to face-on is needed, also a source
of errors. In addition the colour can never be determined with infinite
accuracy. Fortunately none of the parameters enters into Eq. (21)
in a dominant manner so that some error will not directly generate
a huge error in the maximum rotational velocity of the disc.
The poor man's population synthesis as described in section 2
applies to a metallicity regime not too far from solar abundances.
For the majority of the nearby high surface brightness
(hereafter HSB; meaning ${\mu}_0 \sim {\mu}_{0,F}$) galactic discs
the metallicity can indeed be assumed to lie close to solar.
However, for the LSB galactic discs abundance determinations
(Mc Gaugh 1994) indicate a metallicity content of typically 0.3
to 0.1 times the solar values. This could pose a problem and therefore
the effect of a lower metallicity on the pmps will be investigated.
Fewer metals in a stellar population of the same age produce
a bluer B-V colour. For a solar abundance a B-V of 0.97
for the old disc population (5 to 10 Gyrs) was assumed in section 2.
For a metal-poorer od population the colour can be determined from
Worthey (1994, his fig 34). Assuming a worst case scenario with
$Z = 0.1 Z_{\odot}$ the od B-V colour has to be decreased to $\sim$ 0.8.
The young disc colour is assumed to be roughly independent
of metallicity remaining at B-V = -0.03; indicated by
the observed small range in colours of young star
forming regions. The pmps was repeated for (B-V)$_{\rm yd}$ = -0.03 and
(B-V)$_{\rm od}$ = 0.8 giving a new colour correction term
$ct^{\prime} = 2.5\; {\rm log}_{10}[(1-0.973W)/1.116W]$.
Comparison with the solar abundance colour correction term $ct$
(Eq. 3) shows that $ct^{\prime} - ct = 0.30$ for all colours. This
means that for $Z = 0.1 Z_{\odot}$ the old disc population is 0.3
magnitudes brighter for the same observed colour; and hence the stellar
disc contains more mass generating a higher maximum rotation.
The latter can now be calculated simply from Eq. (21) by substituting the
metal-poor $ct^{\prime}$, resulting in a maximum disc velocity for
$Z = 0.1 Z_{\odot}$ that is 15\% higher than that for a solar abundance.
Applied to the LSB example NGC 1560 this means that if
$Z$(N1560) $= 0.1 Z_{\sun}$ the maximum disc velocity should
be increased from 25 km~s$^{-1}$ to 29 km~s$^{-1}$. This still falls
substantially below the md hypothesis value of 43 km~s$^{-1}$. It should
be kept in mind that this calculation is for a case
expected to be a limiting situation. The 15\% increase is
therefore a maximum.
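The quoted 15\% increase can be reproduced with a short numerical sketch. It assumes, purely for illustration, that the 0.30 mag brightening at fixed colour translates into a pure disc-mass factor and that the maximum disc velocity scales as the square root of the disc mass at fixed scale length (an illustrative assumption, not Eq. 21 itself); the function name and values are ours:

```python
import math

def vmax_scaled(v_solar, delta_mag):
    """Scale a maximum disc velocity when the old-disc population is
    delta_mag magnitudes brighter at the same observed colour.
    Assumes M_disc proportional to L and v_max proportional to
    sqrt(M_disc) at fixed scale length (illustrative assumption)."""
    mass_factor = 10.0 ** (0.4 * delta_mag)  # luminosity (hence mass) factor
    return v_solar * math.sqrt(mass_factor)

# ct' - ct = 0.30 mag for Z = 0.1 Z_sun (worst case), NGC 1560 disc:
print(vmax_scaled(25.0, 0.30))  # ~28.7 km/s, i.e. the quoted ~15% increase
```

With these assumptions the factor is $\sqrt{10^{0.12}} \simeq 1.15$, so 25 km~s$^{-1}$ becomes $\sim$29 km~s$^{-1}$, as stated above.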
In relation to LSBs there is another matter that has to be discussed.
The central surface brightness is an observed quantity; only a
correction to face-on is made but no absorption correction. HSB
discs are semi transparent with most of the extinction concentrated
in the inner regions (Huizinga \& van Albada 1992; Giovanelli
et al. 1994). On the other hand, low luminosity spiral discs and
LSB discs appear to be by and large transparent over the whole
extent of the disc (Bosma et al. 1992; Mc Gaugh 1994). Thus the
Freeman's law value derived for HSB discs of 21.65 B mag. arcsec$^{-2}$
is compromised by extinction with respect to LSB discs.
In order to correctly compare central surface densities of
HSB and LSB systems a correction should be applied to the
observed central surface brightness. Such a correction has
to either make the surface brightness of HSBs larger, or that of LSBs smaller.
Either way, in calculating maximum disc rotations the
central surface brightness quotient ${\mu}_0/{\mu}_{0,F}$ has to be
lowered for LSB systems with the typical amount of extinction in HSB discs.
Various studies indicate that this typical extinction
ranges between 0.5 to 1 mag. in B (Keel 1983; Andredakis \& van der Kruit
1992; White \& Keel 1992; Knapen \& van der Kruit 1991; Byun et al. 1994;
Huizinga 1994). Inserting this correction in Eq. (21) results in a
lower maximum disc rotation for LSB systems of the order of 10 to 20\%.
According to the scheme developed in this paper a maximum disc
rotational velocity can be calculated for a galactic disc akin to
that of the solar neighbourhood. The lower metallicity
in LSBs would lead to a higher rotational velocity of at most 15\%.
On the other hand a different dust content of LSBs would lead to
a lower rotational velocity of 10 to 20\%. There is no direct
evidence that a lower metal content in discs is accompanied by less dust,
but it is more than likely. Therefore it is expected that in LSBs both
effects approximately cancel such that the original description also
applies for these systems. Moreover, effects of up to 15\% will leave
all conclusions essentially unchanged.
The maximum disc velocity following from Eq. (21) is for a strict
exponential disc. In that sense it can be applied very well in a
statistical way comparing galaxies with one another as for example
in Fig. 5. For individual cases, when the photometric profile
deviates strongly from exponential, the method can in principle
be extended to be used in a radially differential way.
Finally, a compilation of the main conclusions:
\begin{itemize}
\item
A general relation has been derived giving the maximum rotation of a
galactic disc as a function of its absolute magnitude, central
surface brightness, and colour. As a physical basis for this
relation serves an adopted universal M/L ratio as a function
of colour.
\item
This relation and the involved M/L ratios are fixed in
an absolute sense by the observed stellar velocity dispersions.
\item
Comparing derived maximum rotations of a disc with
the observed total rotation shows that even in the disc region,
normal galaxies contain an appreciable amount of dark matter.
Disc only rotations are lower by a factor of 0.7 compared
to the rotation implied by the maximum disc hypothesis.
\item
Low surface brightness galaxies contain an even larger
amount of dark matter.
\item
Derived disc M/L ratios are 1.79 $\pm$ 0.48 in the B-band
for a B-V of 0.7. This M/L value is around a factor
of two lower than the md hypothesis values.
\end{itemize}
\begin{acknowledgements}
I thank W. de Blok, R. Giovanelli, J. van der Hulst,
and M.-H. Rhee for stimulating
discussions, encouragement and criticism. The Kapteyn Institute is
acknowledged for hospitality and support.
\end{acknowledgements}
\section{\protect\bigskip Introduction}
\quad There has been a lot of interest in recent years in the study of non-commutative
canonical-type quantum mechanics, quantum field theory and string theory
$\left[ 1,2,3\right]$. On the other hand, the solutions of classical dynamical
problems of physical systems obtained in terms of complex space variables
are well-known. There is also interest in complex quantum mechanical
systems (in two dimensions) $\left[ 4,5,6\right]$, in which we consider a
relativistic quantum Landau problem and harmonic oscillator in
non-commutative complex space $\left[ 7\right]$ (so the coordinate and momentum
operators of this space are written as $\hat{z}=\hat{x}+i\hat{y}$ and
$p_{\hat{z}}=\left( p_{x}-ip_{y}\right) /2$), where
\begin{equation}
\hat{x}^{\mu }=x^{\mu }-\frac{\theta ^{\mu \nu }}{2}p_{\nu }.
\end{equation}
For the non-commutative space of canonical-type, the parameter $\theta ^{\mu
\nu }$ is an anti-symmetric real matrix of length-square dimension. It
appears that the most natural places to search the non-commutative complex
coordinates effects are non-relativistic and relativistic quantum mechanics systems
in two dimensions. So far many interesting topics in
non-commutative non-relativistic and relativistic quantum mechanics in two
dimensions such as oscillator in the presence of uniform magnetic field
$\left[ 8-25\right] $, and Landau problem $\left[ 26-31\right] $, have been
studied extensively. The purpose of this paper is to study the relativistic
Landau problem and relativistic oscillator in the presence of uniform complex
magnetic field on non-commutative complex space, where the non-commutative
complex coordinates could give an additional contribution. In this work, we
apply the algebraic techniques of creation and annihilation operators
to solve the relativistic Landau problem and relativistic oscillator in the
presence of uniform magnetic field on non-commutative complex space. In this
formalism, the degeneracy of the energy levels is completely removed.
This paper is organized as follows: In section 2, the Landau problem in
non-commutative complex space is exactly solved, the corresponding exact
energy levels and algebraic function states are obtained respectively, and
the non-relativistic limit of the energy spectrum is derived. In section 3, the
Klein-Gordon eigenvalues equation of oscillator in the presence of symmetric
complex gauge field is exactly solved in non-commutative complex space. The
conclusion is given in Section 4.
\section{ Landau problem in Non-commutative complex space}
\quad The conditions under which the relativistic Landau problem in non-commutative
quantum complex space and the Pauli equation are equivalent theories are
explored. In two-dimensional space, the complex coordinate system $\left( z,
\bar{z}\right) $ and momentum $\left( p_{z},\bar{p}_{z}\right) $ are defined
by:
\begin{eqnarray}
z &=&x+iy,\quad \bar{z}=x-iy, \\
p_{z} &=&\frac{1}{2}\left( p_{x}-ip_{y}\right) ,\quad \bar{p}_{z}=\frac{1}{2}
\left( p_{x}+ip_{y}\right) =-p_{\bar{z}}
\end{eqnarray}
We are interested in introducing the non-commutative complex operators of
coordinates and momentum in a two-dimensional space:
\begin{eqnarray}
\hat{z}=\hat{x}+i\hat{y}=z+i\theta \bar{p}_{z}, &\qquad &\widehat{\bar{z}}=
\hat{x}-i\hat{y}=\bar{z}-i\theta p_{z}, \\
\hat{p}_{z}=p_{z}=-i\frac{d}{dz}, &\qquad &\widehat{\bar{p}}_{z}=\bar{p}
_{z}=i\frac{d}{d\bar{z}}
\end{eqnarray}
The non-commutative algebra $(1)$ can be rewritten as:
\begin{equation}
\left[ \hat{z},\hat{\bar{z}}\right] =2\theta ,\,\left[ \hat{z},p_{\hat{\bar{z}}}
\right] =\left[ \hat{\bar{z}},p_{\hat{z}}\right] =0,\,\left[ \hat{z},p_{\hat{
z}}\right] =\left[ \hat{\bar{z}},p_{\hat{\bar{z}}}\right] =\hbar ,\,\left[ p_{\hat{
z}},p_{\hat{\bar{z}}}\right] =0.
\end{equation}
Now we will discuss the relativistic Landau problem in non-commutative
complex quantum space in this formulation. We consider an electron of charge
$e$ and mass $m$ moving in a complex space in the presence of a symmetric gauge
complex potential $A\left( i\frac{B}{2}z,-i\frac{B}{2}\bar{z}\right) $; the
relativistic quantum equation in complex space is defined by the following
form:
\begin{equation}
\left( 2\bar{p}_{z}-eA_{z}\right) \left( 2p_{z}-e\bar{A}_{z}\right) \psi
=\left( E^{2}-m^{2}\right) \psi .
\end{equation}
which can be written in commutative complex space as:
\begin{equation}
\left( 4\bar{p}_{z}p_{z}+\frac{e^{2}B^{2}}{4}z\bar{z}-eB\left(
L_{z}+1\right) \right) \psi =\left( E^{2}-m^{2}\right) \psi .
\end{equation}
where $L_{z}=i\left( zp_{z}-\bar{z}\bar{p}_{z}\right) ,$ is the $z$-component
of the orbital angular momentum, then the Hamiltonian of the
system is given by:
\begin{equation}
H=\frac{2}{m}p_{z}p_{\bar{z}}+m\frac{\omega _{c}^{2}}{2}z\bar{z}-\omega
_{c}\left( L_{z}+1\right) ,\text{ \ \ \ \ }\omega _{c}=\frac{eB}{2m}
\end{equation}
In a non-commutative complex space, eq.(7) is described by the following
equation:
\begin{equation}
\left(
\begin{array}{cc}
\left( 2p_{z}-e\widehat{\bar{A}}_{z}\right) \left( 2\bar{p}_{z}+e\widehat{A}
_{z}\right) & 0 \\
0 & \left( 2\bar{p}_{z}+e\widehat{A}_{z}\right) \left( 2p_{z}-e\widehat{\bar{
A}}_{z}\right)
\end{array}
\right) \psi =\left( E^{2}-m^{2}\right) \psi .
\end{equation}
Using the definition of the non-commutative complex coordinates, we can
rewrite this equation in a commutative complex space as:
\begin{equation}
\left( \frac{2}{\tilde{m}}p_{z}p_{\bar{z}}+\frac{\tilde{m}}{2}\tilde{\omega}
^{2}z\bar{z}-\frac{eB}{2m}L_{z}-s_{z}\frac{e^{2}B^{2}}{4m}\theta -\frac{
e^{2}B^{2}}{8m}\theta L_{z}\right) \psi =\bar{E}\psi ,
\end{equation}
\bigskip
where $\tilde{m}=m\left( 1+\frac{eB}{2}\theta \right) $, $\tilde{
\omega}=\frac{eB}{2\tilde{m}}\left( 1+\frac{eB}{4}\theta \right) $, $s_{z}=\pm
1/2$ and $\bar{E}=\frac{E^{2}-m^{2}+eB}{2m}$.
We note that the term $s_{z}\frac{e^{2}B^{2}}{4m}\theta $ is similar to the
spin-magnetic moment interaction and the term $\frac{e^{2}B^{2}}{8\tilde{m}}\theta L_{z}$
is similar to the spin-orbit interaction. So equation (11)
is similar to the equation of an electron with spin $\frac{1}{2}$ in a plane
under a symmetric gauge field. Thus, the corresponding Hamiltonian of
equation (11) is written as:
\begin{equation}
H=\frac{2}{\tilde{m}}p_{z}p_{\bar{z}}+\tilde{m}\frac{\tilde{\omega}^{2}}{2}z
\bar{z}-\omega _{c}\left( 1+\frac{eB}{4}\theta \right) L_{z}-s_{z}\frac{
e^{2}B^{2}}{4m}\theta
\end{equation}
A critical point is obtained when the coefficient of $L_{z}$
equals zero, i.e. when the non-commutative parameter is $\theta =-\frac{4}{eB}$. At
this critical point the Hamiltonian of the system is:
\begin{equation}
H=\frac{1}{2\tilde{m}}p^{2}+\frac{m}{2}\left( \frac{eB}{2m}\right)
^{2}r^{2}+2\frac{e}{2m}Bs_{z},
\end{equation}
where it represents the oscillation of single electron with spin $\frac{1}{2}$
in a constant magnetic field, where the energy spectrum is given by:
\begin{equation}
E^{2}=2eB\left( n\pm \frac{1}{2}\right) +m^{2}
\end{equation}
The non-relativistic limit is given as:
\begin{equation}
E_{nr}=\frac{eB}{m}\left( n\pm \frac{1}{2}\right) \text{ \ \ \ with \ \ }
n=0,1,2,...
\end{equation}
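As a numerical consistency check (not part of the derivation itself), one can verify that the relativistic spectrum of Eq. (14) reduces to the non-relativistic limit of Eq. (15) when $eB \ll m^{2}$; the parameter values below are hypothetical and units are chosen with $\hbar = c = 1$:

```python
import math

def E_rel(m, eB, n, s):  # Eq. (14): E^2 = 2eB(n + s) + m^2, with s = +/- 1/2
    return math.sqrt(m * m + 2.0 * eB * (n + s))

def E_nr(m, eB, n, s):   # Eq. (15): E_nr = (eB/m)(n + s)
    return eB * (n + s) / m

m, eB = 100.0, 1.0       # hypothetical values with eB << m^2
for n in range(3):
    for s in (+0.5, -0.5):
        # with the rest energy subtracted, the spectra agree to O(eB/m^2)
        assert abs((E_rel(m, eB, n, s) - m) - E_nr(m, eB, n, s)) < 1e-3
```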
Each of these energy levels splits into two levels; hence we can say
that the particle in non-commutative complex space describes an electron
with spin $1/2$ in a magnetic field. The non-commutativity automatically
creates the total magnetic moment of a particle with spin $1/2$, which in
turn shifts the energy spectrum.
If $\theta \neq -\frac{4}{eB},$ the equation $(11)$ can be written according
to the eigenvalues equation as following:
\begin{equation}
H\psi =\bar{E}\psi
\end{equation}
To solve this equation, we can use the algebraic techniques of creation and
annihilation operators. To this aim, we define:
\begin{eqnarray}
a &=&\frac{2ip_{z}+e\tilde{B}\bar{z}}{2\sqrt{e\tilde{B}}}, \\
b &=&\frac{-2i\bar{p}_{z}+e\tilde{B}z}{2\sqrt{e\tilde{B}}},
\end{eqnarray}
where $\tilde{B}=\frac{B}{2}\left( 1+\frac{eB}{4}\theta \right) $, the
corresponding creation operators $a^{+}$ and $b^{+}$ satisfy the usual
commutation relations:
\begin{equation}
\left[ a,a^{+}\right] =\left[ b,b^{+}\right] =1.
\end{equation}
All the other commutation relations are zero. Now we can write the equation $(16)$ in
terms of the operators $a^{+}a$ and $b^{+}b$ as:
\begin{equation}
\left( \frac{e\tilde{B}}{\tilde{m}}\left( b^{+}b+a^{+}a+1\right) +\frac{e
\tilde{B}}{m}+\frac{e\tilde{B}}{m}\left( b^{+}b-a^{+}a\right) +s_{z}\frac{
e^{2}B^{2}}{4m}\theta \right) \psi ^{\sigma _{z}}=\bar{E}\psi ^{\sigma _{z}},
\end{equation}
where $\sigma _{z}=\pm 1$, the states $\psi ^{\sigma _{z}}$ are labeled
by the number $n_{1}$ for the quanta excitation of the operator $a$, and the number
$n_{2}$ for the quanta excitation of the operator $b$:
\begin{eqnarray}
a^{+}a\psi _{n_{1},n_{2}}^{\sigma _{z}} &=&n_{1}\psi _{n_{1},n_{2}}^{\sigma
_{z}} \\
b^{+}b\psi _{n_{1},n_{2}}^{\sigma _{z}} &=&n_{2}\psi _{n_{1},n_{2}}^{\sigma
_{z}}
\end{eqnarray}
The energy levels of the equation $(16)$ are given as follows:
\begin{eqnarray}
E^{2} &=&m^{2}+2\frac{me\tilde{B}}{\tilde{m}}\left( n_{2}+n_{1}+1\right) +2e
\tilde{B}\left( n_{2}-n_{1}\right) -eB\pm \frac{e^{2}B^{2}}{4}\theta
\end{eqnarray}
The non-relativistic limit is given as:
\begin{equation}
E_{nr}=\frac{e\tilde{B}}{\tilde{m}}\left( n_{2}+n_{1}+1\right) +\frac{e
\tilde{B}}{m}\left( n_{2}-n_{1}\right) -\frac{eB}{2m}\pm \frac{e^{2}B^{2}}{8m
}\theta
\end{equation}
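The lifting of the degeneracy by $\theta$ can be illustrated with a short numerical sketch of the non-relativistic spectrum (24); all parameter values are hypothetical and units are chosen with $\hbar = c = 1$:

```python
def E_nr(n1, n2, sz, m=1.0, e=1.0, B=1.0, theta=0.0):
    """Non-relativistic NC Landau levels, Eq. (24); sz = +/- 1/2."""
    Bt = 0.5 * B * (1.0 + 0.25 * e * B * theta)  # B-tilde
    mt = m * (1.0 + 0.5 * e * B * theta)         # m-tilde
    return (e * Bt / mt * (n1 + n2 + 1) + e * Bt / m * (n2 - n1)
            - e * B / (2.0 * m) + sz * e * e * B * B * theta / (4.0 * m))

# theta = 0: ordinary Landau levels, degenerate in n1 and in the spin label
levels0 = {round(E_nr(n1, n2, sz), 9)
           for n1 in range(3) for n2 in range(3) for sz in (+0.5, -0.5)}
# theta != 0: the degeneracy is lifted
levels1 = {round(E_nr(n1, n2, sz, theta=0.1), 9)
           for n1 in range(3) for n2 in range(3) for sz in (+0.5, -0.5)}
print(len(levels0), len(levels1))  # few distinct levels vs many more
```

For $\theta = 0$ the 18 sample states collapse onto the usual Landau levels $E_{nr}=(eB/m)\,n_{2}$, while any non-zero $\theta$ splits them, as discussed in the text.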
In this formulation, an important observation is that the degeneracy of this
spectrum is removed and each level splits into two. Such effects are similar to
the Zeeman splitting in a commutative space. We find that the four
eigenstate components for the $n^{th}$ Landau level with the quantum number
$\sigma _{z}$ have the form:
\begin{equation}
\psi _{n_{1},n_{2}}^{\sigma _{z}}=\left\vert n_{1},n_{2}\right\rangle
\left\vert \pm \right\rangle
\end{equation}
where
\begin{eqnarray}
\psi _{0,0}^{\sigma _{z}} &=&\left\vert 0,0\right\rangle \left\vert \pm
\right\rangle =\left\vert 0\right\rangle \left\vert \pm \right\rangle \\
\psi _{n_{1},0}^{\sigma _{z}} &=&\frac{\left( a^{+}\right) ^{n_{1}}}{\sqrt{
n_{1}!}}\left\vert 0\right\rangle \left\vert \pm \right\rangle \\
\psi _{0,n_{2}}^{\sigma _{z}} &=&\frac{\left( b^{+}\right) ^{n_{2}}}{\sqrt{
n_{2}!}}\left\vert 0\right\rangle \left\vert \pm \right\rangle \\
\psi _{n_{1},n_{2}}^{\sigma _{z}} &=&\frac{\left( a^{+}\right)
^{n_{1}}\left( b^{+}\right) ^{n_{2}}}{\sqrt{n_{1}!n_{2}!}}\left\vert
0\right\rangle \left\vert \pm \right\rangle
\end{eqnarray}
and
\begin{eqnarray}
s_{z}\psi _{n_{1},n_{2}}^{\sigma _{z}} &=&\frac{1}{2}\sigma _{z}\psi
_{n_{1},n_{2}}^{\sigma _{z}} \\
a^{+}a\psi _{n_{1},n_{2}}^{\sigma _{z}} &=&n_{1}\psi _{n_{1},n_{2}}^{\sigma
_{z}} \\
b^{+}b\psi _{n_{1},n_{2}}^{\sigma _{z}} &=&n_{2}\psi _{n_{1},n_{2}}^{\sigma
_{z}}
\end{eqnarray}
There are four independent states:\\
$$\psi _{n_{1},0}^{+}\left( z,\bar{z}\right),\psi _{n_{1},0}^{-}\left( z,\bar{z}\right) $$
and\\
$$\psi_{0,n_{2}}^{+}\left( z,\bar{z}\right),\psi _{0,n_{2}}^{-}\left( z,\bar{z}\right)$$
These particles are positioned at the four equivalent points
$\left( z^{+},\bar{z}^{+},z_{-},\bar{z}_{-}\right)$. These results come from the
fact that the particle has a spin $\frac{1}{2}$ induced by the non-commutativity
effects in complex space.
\section{ Relativistic Landau problem plus oscillator potential in non-commutative
complex space }
\quad We consider the oscillatory motion of an electron in the complex space $\left( z,
\bar{z}\right) $, subjected to a complex gauge potential field $A\left( -i
\frac{B}{2}\bar{z},i\frac{B}{2}z\right) $, where $B$ is the magnetic
field. In this gauge, the relativistic quantum equation in complex space
can be defined by the following equation:
\begin{equation}
\left[ \left( 2p_{z}+ie\frac{B}{2}\bar{z}\right) \left( 2p_{\bar{z}}-ie\frac{
B}{2}z\right) +m^{2}\omega ^{2}z\bar{z}\right] \psi =\left(
E^{2}-m^{2}\right) \psi ,
\end{equation}
which can be rewritten as:
\begin{equation}
\left( 4p_{\bar{z}}p_{z}+\left( m^{2}\omega ^{2}+\frac{e^{2}B^{2}}{4}\right)
z\bar{z}-eB\left( L_{z}+1\right) \right) \psi =\left( E^{2}-m^{2}\right)
\psi .
\end{equation}
\bigskip The corresponding Hamiltonian of the equation (34) is:
\begin{equation}
H=\frac{2}{m}p_{z}p_{\bar{z}}+\frac{m}{2}\left( \omega ^{2}+\omega
_{c}^{2}\right) z\bar{z}-\omega _{c}\left( L_{z}+1\right)
\end{equation}
The eigenvalues for the Hamiltonian in equation $(35)$ are:
\begin{equation}
E^{2}=2m\left( \omega ^{2}+\omega _{c}^{2}\right) ^{1/2}\left(
n_{1}+n_{2}+1\right) +eB\left( n_{1}-n_{2}-1\right) +m^{2}
\end{equation}
The non-relativistic limit is given as:
\begin{equation}
E_{nr}=\left( \omega ^{2}+\omega _{c}^{2}\right) ^{1/2}\left(
n_{1}+n_{2}+1\right) +\omega _{c}\left( n_{1}-n_{2}-1\right)
\end{equation}
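The limit (37) follows from Eq. (36) via $E_{nr}=(E^{2}-m^{2})/2m$ together with $\omega _{c}=eB/2m$; this can be checked numerically with hypothetical parameter values (units with $\hbar = c = 1$):

```python
import math

m, w, wc = 50.0, 0.3, 0.2  # hypothetical mass, oscillator and cyclotron frequencies
eB = 2.0 * m * wc          # from omega_c = eB/(2m)

def E2(n1, n2):            # Eq. (36) for E^2
    return 2.0 * m * math.sqrt(w * w + wc * wc) * (n1 + n2 + 1) \
        + eB * (n1 - n2 - 1) + m * m

def E_nr(n1, n2):          # Eq. (37)
    return math.sqrt(w * w + wc * wc) * (n1 + n2 + 1) + wc * (n1 - n2 - 1)

# (E^2 - m^2)/(2m) reproduces Eq. (37) term by term
for n1 in range(3):
    for n2 in range(3):
        assert abs((E2(n1, n2) - m * m) / (2.0 * m) - E_nr(n1, n2)) < 1e-9
```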
In the non-commutative complex space $\left( \hat{z},\hat{\bar{z}}\right) $,
the relativistic quantum oscillator in a complex symmetric gauge potential
field $A=\left( i\frac{B}{2}z,-i\frac{B}{2}\bar{z}\right)$ is described by
the following equation:
\begin{eqnarray}
\left(
\begin{array}{cc}
\left( 2p_{z}+ie\frac{B}{2}\hat{\bar{z}}\right) \left( 2p_{\bar{z}}-ie\frac{B
}{2}\hat{z}\right) +m^{2}\omega ^{2}\hat{\bar{z}}\hat{z} & 0 \\
0 & \left( 2p_{\bar{z}}-ie\frac{B}{2}\hat{z}\right) \left( 2p_{z}+ie\frac{B}{
2}\hat{\bar{z}}\right) +m^{2}\omega ^{2}\hat{z}\hat{\bar{z}}
\end{array}
\right) \psi \notag \\=\left( E^{2}-m^{2}\right) \psi
\end{eqnarray}
Using the relations $(3$) and $(4)$ we can rewrite the equation $(38)$ in
commutative complex space as:
\begin{eqnarray}
\left( 4(1-e\frac{B}{4}\theta )^{2}p_{z}p_{\bar{z}}+\left( m^{2}\omega
^{2}+\left( e\frac{B}{2}\right) ^{2}\right) z\bar{z}-2\left( e\frac{B}{2}
\right) \left( L_{z}+1\right) \right. \notag \\
\left. -\left( m^{2}\omega ^{2}+\left( e\frac{B}{2}\right) ^{2}\right)
\theta \left( L_{z}\mp 1\right) \right) \psi =\left( E^{2}-m^{2}\right) \psi
\end{eqnarray}
The equation $(39)$ can be written in a very simple way as:
\begin{equation}
\left( \frac{2}{\tilde{m}}p_{z}p_{\bar{z}}+\frac{\tilde{m}}{2}\varpi ^{2}z
\bar{z}-\left( \frac{\tilde{m}}{2}\varpi ^{2}\theta +\omega _{c}\right)
L_{z}+s_{z}m\varpi ^{2}\theta \right) \psi =\bar{E}\psi ,
\end{equation}
where $\varpi ^{2}=\frac{\omega ^{2}+\left( \frac{eB}{2m}\right) ^{2}}{(1+e
\frac{B}{2}\theta )}$, $\tilde{m}=m(1+e\frac{B}{2}\theta )$ and $\bar{E}=
\frac{E^{2}-m^{2}+eB}{2m}$. The equation $\left( 40\right) $ is similar to
the Pauli equation of motion for a fermion of spin $\frac{1}{2}$ in a
constant magnetic field. A critical point is obtained when the
coefficient of $L_{z}$ equals zero; in this case the relation
between the magnetic field and the non-commutative parameter is given by:
\begin{equation}
B=-\frac{m^{2}\omega ^{2}}{2e}\theta
\end{equation}
The negative sign means that the non-commutative parameter is in the opposite
direction of the vector $\overrightarrow{L}_{z}.$ If we substitute the
parameter $\theta $ in Eq. $(40)$, it leads to an oscillator with spin
$\frac{1}{2}$ on a commutative complex space in a constant magnetic field:
\begin{equation}
\left( \frac{2}{m}p_{z}p_{\bar{z}}+\frac{m}{2}\left( \omega ^{2}+\left(
\frac{eB}{2m}\right) ^{2}\right) z\bar{z}-2\frac{\omega ^{2}+\left( \frac{eB
}{2m}\right) ^{2}}{m^{2}\omega ^{2}}s_{z}B\right) \psi =\bar{E}\psi ,
\end{equation}
The corresponding Hamiltonian of this equation is written as:
\begin{equation}
H=\frac{2}{m}p_{z}p_{\bar{z}}+\frac{m}{2}\left( \omega ^{2}+\left( \frac{eB}{
2m}\right) ^{2}\right) z\bar{z}-2\frac{\omega ^{2}+\left( \frac{eB}{2m}
\right) ^{2}}{m^{2}\omega ^{2}}s_{z}B
\end{equation}
The eigenvalues for the Hamiltonian in equation $(43)$ are:
\begin{equation}
E^{2}=2m\left( \omega ^{2}+\left( \frac{eB}{2m}\right) ^{2}\right)
^{1/2}\left( 2n+1\right) \mp 2\frac{\omega ^{2}+\left( \frac{eB}{2m}\right)
^{2}}{m\omega ^{2}}B-eB+m^{2}
\end{equation}
The non-relativistic limit is given as:
\begin{equation}
E_{nr}=\left( \omega ^{2}+\left( \frac{eB}{2m}\right) ^{2}\right)
^{1/2}\left( 2n+1\right) -\frac{eB}{2m}\mp \frac{\omega ^{2}+\left( \frac{eB
}{2m}\right) ^{2}}{m^{2}\omega ^{2}}B
\end{equation}
At the critical point, the energy spectrum splits into two levels. So
the charged oscillator in non-commutative complex space at the critical point
is similar to a Pauli particle in ordinary commutative space.
If $B\neq -\frac{m^{2}\omega ^{2}}{2e}\theta ,$ the equation $(40)$ can be
written according to the eigenvalues equation as following:
\begin{equation}
H\psi =\bar{E}\psi
\end{equation}
where
\begin{equation}
H=\frac{2}{\tilde{m}}p_{z}p_{\bar{z}}+\frac{\tilde{m}}{2}\varpi ^{2}z\bar{z}
-\left( \frac{\tilde{m}}{2}\varpi ^{2}\theta +\omega _{c}\right)
L_{z}+s_{z}m\varpi ^{2}\theta
\end{equation}
To solve Eq.$(46)$, we can use the algebraic techniques of creation and
annihilation operators. For this purpose, we define:
\begin{eqnarray}
\tilde{a} &=&\frac{2ip_{z}+\tilde{m}\varpi \bar{z}}{2\sqrt{\tilde{m}\varpi }}
, \\
\tilde{b} &=&\frac{-2i\bar{p}_{z}+\tilde{m}\varpi z}{2\sqrt{\tilde{m}\varpi }}
,
\end{eqnarray}
\bigskip We can now write the equation $(46)$ in terms of these operators as:
\begin{equation}
\left( \varpi \left( \tilde{a}^{+}\tilde{a}+\tilde{b}^{+}\tilde{b}+1\right)
-\left( \frac{m}{2}\varpi ^{2}\theta +\omega _{c}\right) \left( \tilde{a}^{+}
\tilde{a}-\tilde{b}^{+}\tilde{b}\right) +s_{z}m\varpi ^{2}\theta \right)
\psi _{n_{1},n_{2}}^{\sigma _{z}}=\bar{E}\psi _{n_{1},n_{2}}^{\sigma _{z}},
\end{equation}
where the states $\psi _{n_{1},n_{2}}^{\sigma _{z}}$ are labeled by the
number $n_{1}$ for the quanta excitation of the operator $\tilde{a}$,
and the number $n_{2}$ for the quanta excitation of the operator $\tilde{b}$:
\begin{eqnarray}
\tilde{a}^{+}\tilde{a}\psi _{n_{1},n_{2}}^{\sigma _{z}} &=&n_{1}\psi
_{n_{1},n_{2}}^{\sigma _{z}} \\
\tilde{b}^{+}\tilde{b}\psi _{n_{1},n_{2}}^{\sigma _{z}} &=&n_{2}\psi
_{n_{1},n_{2}}^{\sigma _{z}}
\end{eqnarray}
The energy eigenvalues for eq. $(50)$ are given by:
\begin{equation}
E^{2}=m^{2}-eB+2m\varpi \left( n_{1}+n_{2}+1\right) -2m\left( \frac{m}{2}
\varpi ^{2}\theta +\omega _{c}\right) \left( n_{1}-n_{2}\right) \pm
m^{2}\varpi ^{2}\theta ,
\end{equation}
The non-relativistic limit is given as:
\begin{equation}
E_{nr}=\varpi \left( n_{1}+n_{2}+1\right) -\left( \frac{m\varpi
^{2}\theta +2\omega _{c}}{2}\right) \left( n_{1}-n_{2}\right) -\frac{eB}{2m}
\pm \frac{m}{2}\varpi ^{2}\theta
\end{equation}
where $n_{1},n_{2}=0,1,2,\ldots$ and $m_{l}=n_{1}-n_{2}=0,\pm 1,\pm 2,\ldots$\\
Each energy level splits into two levels (labeled by
$n_{1},n_{2}$), which removes the degeneracy. This is similar to the Zeeman effect.
Hence we can say that the particle in non-commutative complex space describes
a particle with spin $1/2$ in a magnetic field; therefore the system with
spin in a magnetic field will have a resonance $\left[ 32\right] $. Then the
critical value $\theta =-\frac{2eB}{m^{2}\omega ^{2}}$ can be considered
as a resonance point. At this point, the system can be treated as a Landau problem
with spin $1/2$.
\section{Conclusion}
\quad In this work we started from a charged relativistic quantum particle and
a charged oscillator in a uniform magnetic field in a canonical
non-commutative complex space. By using the Moyal product up to first order
in the non-commutative parameter $\theta $, we derived the deformed
relativistic Landau problem and Klein-Gordon oscillator equations. By
solving them exactly we found that the degeneracy of the energy levels is
removed: to first order in $\theta $, each level is shifted and split into two
levels, similar to the Zeeman splitting in an ordinary commutative space. In
addition, we also obtained the non-relativistic limit of the energy spectrum.
\section{Introduction and motivation}
\label{sec:intro}
According to today's knowledge, Active Galactic Nuclei (AGN) are powered
by accretion onto a supermassive black hole ($10^6-10^{10}\,M_{\odot}$,
e.g.\/ \citealp{Shankar_04}) residing in their centres. Thereby,
gravitational energy is converted into heat by viscous processes within
the surrounding accretion disk, which extends from the
marginally stable orbit up to several thousands of Schwarzschild radii.
The emitted UV/optical light illuminates the attached, toroidally shaped dust
reservoir.
The concept of this {\it obscuring torus} was introduced in order to unify mainly two
classes of observed spectral energy distributions (SEDs):
one shows a peak in the UV-region with overlaid broad and narrow optical emission lines,
the other class shows only narrow optical emission lines.
This can be interpreted as an inclination angle dependence. For
viewing angles within the dust-free cone of the torus (type\,1 sources), direct signatures of the
accretion disk (a peak in the UV-range) and the region close to the centre
within the funnel of the torus show up. This is where gas moves fast and, therefore, produces
broad emission lines (the region is hence called the {\it Broad Line
Region (BLR)} of the nucleus). For edge-on lines of sight (type\,2 sources), the direct view onto the
centre is blocked and optical emission
lines can only be detected from gas beyond the torus funnel. Being further away from
the centre, it moves slower and hence produces narrow emission lines only.
This is the so-called {\it Unified Scheme
of Active Galactic Nuclei} \citep{Antonucci_93,Urry_95}.
First evidence for this scenario came from spectropolarimetric observations of
type~2 sources \citep{Miller_83}, clearly displaying type~1
signatures in the polarised light, which is scattered by electrons and
tenuous dust within the funnel above the torus.
The opening angle of the torus can be estimated with the help of statistics of the different
types of Seyfert galaxies.
\citet{Maiolino_95} find a ratio of Sy~2 to Sy~1 galaxies of
4:1 in their sample, which results in an opening angle of the light cones of
$74\degr$, in concordance with many observations of ionisation cones of
individual galaxies.
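This standard estimate can be reproduced in a few lines by identifying the type-1 fraction with the solid-angle fraction subtended by the two dust-free polar cones; the function name is ours:

```python
import math

def cone_opening_deg(sy2_to_sy1):
    """Full opening angle (in degrees) of the two polar cones whose
    combined solid angle matches the type-1 fraction 1/(1 + R).
    Two cones of half-angle t cover a sky fraction (1 - cos t)."""
    type1_fraction = 1.0 / (1.0 + sy2_to_sy1)
    half_angle = math.acos(1.0 - type1_fraction)
    return 2.0 * math.degrees(half_angle)

print(cone_opening_deg(4.0))  # ~73.7 degrees, i.e. the quoted 74 degrees
```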
Direct support for the idea of geometrically thick tori comes from recent
interferometric observations in the mid-infrared \citep[e.g.\/][]{Jaffe_04,Tristram_07}.
These tori are made up of at
least three components: (i) hot ionised gas, (ii) warm molecular gas and (iii) dust.
Krolik \& Begelman (1988) proposed that the dusty part has to be organised in a
clumpy structure in order to prevent the grains from being
destroyed by the hot surrounding gas (with temperatures
of the order of $10^6~$K) in which the clouds are supposed to be embedded.
Another hint for the clumpy nature of the obscuring material -- in this case
mainly for the distribution of neutral gas -- comes
from X-ray measurements of the absorbing column density.
\citet{Risaliti_02} claim that the observed
variability of these measurements
on timescales from months to several years can be explained by a clumpy
structure of the torus.
Combining X-ray absorbing column densities with spectral information further
strengthens the claim for a clumpy distribution of the dust \citep{Shi_06}.
Earlier work on torus simulations concentrated mostly on smooth dust
distributions
\citep[e.g.\/][]{Pier_92b,Granato_94,Bemmel_03,Schartmann_05}.
This was mainly caused by the lack of appropriate (3D)
radiative transfer codes and computational
power.
Nevertheless, such models are good approximations for the case that
the clumps that build up the torus are small compared to the total
torus size, as is also shown in a parameter study described in
Sect.~\ref{sec:volfill}.
These continuous models are able to
describe the gross observable features of these objects (see
\citealp[e.~g.][]{Schartmann_05}). However, problems arose from
silicate dust emission features that were too strong compared to the
observations when looking directly onto the
inner rims of the model structures (face-on views). Such features had never been
observed before that time, although almost all models showed them for the face-on
view. Therefore, much theoretical effort was undertaken in order to find models
showing no silicate feature at all in the face-on case, while retaining the silicate absorption
feature in the edge-on case.
\citet{Manske_98} for example succeeded in avoiding silicate emission features
with a flared dust disk of high optical depth in combination with an
anisotropic radiation characteristic of the central illuminating source.
A very promising
idea was to solve
the problem naturally by splitting the
dust distribution into single clouds.
This was first attempted by \citet{Nenkova_02}.
A one-dimensional code for the simulation of radiative transfer through
single clumps was used and, in a second step, the torus and its emitted
SED was assembled
by adding many clouds of different aspect angles
with the help of a statistical method.
With this approach, they could show that a clumpy dust distribution of this kind can significantly
smear out the prominent silicate emission feature of the SEDs of type~1 objects
at $10\,\muup$m for a large range of parameter values, without the
fine-tuning required by the previously proposed solutions based on
special continuous models. Subsequently, full
two-dimensional radiative transfer calculations were undertaken by
\citet{Dullemond_05}. Clouds were modelled as concentric rings.
A direct comparison between these kinds of clumpy models and the corresponding
continuous models did not show evidence for a systematic suppression of the
silicate feature in emission in the clumpy models.
Meanwhile, silicate features
in emission were found with the help of the
Infrared Spectrograph (IRS) onboard the Spitzer space telescope
\citep[e.~g.][]{Siebenmorgen_05,Hao_05,Sturm_05,Weedman_05}.
For these kinds of studies, Spitzer
is superior to other available facilities, due to its high
sensitivity and the coverage of a wavelength range including both silicate
features (at $9.7\,\muup$m and $18.5\,\muup$m)
and the surrounding continuum emission. Silicate emission
features were found in different levels of AGN activity,
ranging from very luminous quasars down to weak LINERs.
These findings are in good agreement with a geometrical unification
by an optically thick dusty torus, as silicate emission features can
be produced even in the simplest models.
One has to be cautious, however: due to the large beam of the Spitzer space telescope
and the low temperatures measured,
it is unclear whether these silicate features result from dust emission
in the innermost parts of the torus
or from optically thin regions surrounding them.
Very detailed simulations of clumpy tori were undertaken recently by
\citet{Hoenig_06}. They apply a method similar to that of \citet{Nenkova_02}, but use a
2D radiative transfer code for the simulation of SEDs of individual spherical clumps at
various positions in the torus and with various illumination patterns: directly
illuminated and/or illuminated by reemitted light of surrounding clouds.
In a second step, these clouds are distributed according to physical models by
\citet{Vollmer_04} and \citet{Beckert_04}. A comparison of the resulting SEDs
and images with spectroscopic and
interferometric observations shows good agreement.
This model is characterised by a large number of small clouds with a very large
optical depth, especially close to
the centre. We compare our models with these models in Sect.\,\ref{sec:torus_other}.
Despite the detection of geometrically thick dust tori in nearby Seyfert
galaxies (e.g.\/ \citealp{Jaffe_04,Tristram_07}), many questions remain: How are
these tori formed? How are they stabilised against gravity? Do steady torus
solutions exist?
Several attempts to answer these questions have been made. For example
\citet{Krolik_88} and \citet{Beckert_04} support the scale-height of their
tori with discrete clumps moving at supersonic velocities, whose motion is
maintained mainly by elastic collisions mediated by strong magnetic fields.
Other groups replace the torus by a magnetically-driven wind solution \citep{Koenigl_94}.
The most recent suggestion comes from \citet{Krolik_07}, building on an idea of
\citet{Pier_92a}, where the scale-height of
tori can be maintained with the help of infrared radiation pressure, as shown
with an idealised analytical model. A more detailed review of possible solutions
and their drawbacks is given in \citet{Krolik_07}.
Another possible scenario, where the effects of stellar feedback from a nuclear cluster
play a major role, is discussed in \citet{Schartmann_08}.
In this paper, we address the implications of clumpiness on the temperature
structure, the infrared spectral
energy distributions, surface brightness distributions as well as interferometric visibilities
by implementing fully three-dimensional
radiative transfer calculations through a clumpy dust distribution and discuss
the possible mechanisms causing this behaviour.
In Sect.\,\ref{sec:Model}, a description of our model is
given, before we present the basic results for our standard model (Sect.\,\ref{sec:results_stanmodel})
and for several parameter studies (Sect.\,\ref{sec:param_study})
and discuss the findings (Sect.\,\ref{sec:discussion}), as well as differences and
similarities to other models. In Sect.\,\ref{sec:MIDI_interferometry} we interpret our results in
terms of MIDI interferometric observations and compare them to data for the
Circinus galaxy. Finally, we draw our conclusions in
Sect.\,\ref{sec:conclusions}.
\section{The model}
\label{sec:Model}
\subsection{Assembly of our clumpy standard model}
\label{sec:torus_assembly}
We apply a very simple, wedge-like torus geometry with a half opening angle of
$45\degr$ in order to gain
resolution. In our previous two-dimensional continuous {\it TTM}-models \citep{Schartmann_05},
the simulation of the whole
$\theta$-range was necessary, due to the radial as well as $\theta$-dependence of the
dust distribution. It resulted from an equilibrium between turbulent pressure forces and forces due to an
effective potential. The latter is mainly made up of gravitational forces due to the central black hole and the
central stellar distribution, as well as rotation.
The cloudy dust distribution is set up on a spherical three-dimensional
grid $\vec{r}=(r,\theta,\phi)$, which is linear in $\theta$ and $\phi$ and logarithmic in $r$.
To obtain the clumpy density structure,
the following procedure is applied:
A random number generator (RAN2 taken from \citealp{Press_92})
determines the radial coordinate of the clump centre, which is equally distributed between the
inner and outer radius. The $\theta$ and $\phi$ coordinates are chosen such that
the resulting points are equally distributed on spherical shells.
In a second step, the spatial distribution found so far is coupled to
the dust density distribution of the continuous model:
\begin{equation}
\label{equ:den_dis}
\rho_{\mathrm{cont}}(r,\theta,\phi) = \rho_0 \, \left(\frac{r}{1\,\mathrm{pc}}\right)^{\alpha}.
\end{equation}
The radii of individual clumps $a_{\mathrm{clump}}$ vary with distance from the centre according to a
distribution
\begin{equation}
a_{\mathrm{clump}} = a_0 \,\left(\frac{r_{\mathrm{clump}}}{1\,\mathrm{pc}}\right)^{\beta}.
\end{equation}
All cells within this clump radius are homogeneously filled with dust.
All clumps possess the same optical depth $\tau_{9.7\,\muup\mathrm{m}}^{\mathrm{clump}}$,
measured along a radial ray through the clump centre.
We further require that clumps always
have to be completely contained within the model space, but are allowed
to intersect. Such a combination of intersecting clumps will be called
a {\it cloud} from now on.
Thus, a cloud may contain overdensities, where the intersection happens.
A clump size distribution as described above seems to be reasonable, as
shear forces due to the differential rotation increase towards the centre.
Therefore, clouds are more easily disrupted in the inner part of the torus.
Furthermore, clouds become compressed when moving towards the centre due to the
increasing ambient pressure in a deeper potential well.
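The assembly recipe above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the production code: a NumPy random generator stands in for the RAN2 routine, and the requirement that clumps be fully contained in the model space is reduced to a purely radial check.

```python
import numpy as np

rng = np.random.default_rng(42)   # stand-in for the RAN2 generator used in the paper

# Parameters of the standard clumpy model (Table 1)
R_IN, R_OUT = 0.4, 50.0           # inner / outer torus radius [pc]
THETA_OPEN = np.deg2rad(45.0)     # half opening angle of the torus
A0, BETA = 0.2, 1.0               # clump size law: a = A0 * (r / 1 pc)**BETA
N_CLUMP = 400

def draw_clumps(n=N_CLUMP):
    """Draw clump centres (r, theta, phi) and radii following the recipe
    in the text: r uniform between R_IN and R_OUT, (theta, phi) uniform
    on spherical shells within the wedge, clump radii growing with r."""
    centres, radii = [], []
    while len(radii) < n:
        r = rng.uniform(R_IN, R_OUT)
        a = A0 * r**BETA
        # clumps must be fully contained in the model space; this sketch
        # only checks the radial containment
        if r - a < R_IN or r + a > R_OUT:
            continue
        # equal distribution on spherical shells inside the wedge:
        # cos(theta) uniform in [-cos(THETA_OPEN), +cos(THETA_OPEN)]
        cos_t = rng.uniform(-np.cos(THETA_OPEN), np.cos(THETA_OPEN))
        phi = rng.uniform(0.0, 2.0 * np.pi)
        centres.append((r, np.arccos(cos_t), phi))
        radii.append(a)
    return np.array(centres), np.array(radii)

centres, radii = draw_clumps()
```

Intersecting clumps are allowed, as in the text; a full implementation would additionally fill all grid cells within each clump radius with dust and check the angular containment of the clumps.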
All other routines and algorithms used in this paper are identical to the
modelling described in \citet{Schartmann_05} and will only be mentioned briefly in
Sect.\,\ref{sec:preconditions}.
The main model parameters of the continuous and clumpy distributions
are summarised in Table\,\ref{tab:model_param},
\begin{table}
\centering
\caption[Model parameters for continuous and clumpy wedge models]{Main model
parameters for our continuous and clumpy standard model.}
\begin{tabular}{lcc}
{\bf both models} & & \\
\hline
\hline
inner radius of the torus & {\bf $R_{\mathrm{in}}$} & 0.4 pc \\
outer radius of the torus & $R_{\mathrm{out}}$ & 50 pc \\
half opening angle of the torus & {\bf $\theta_{\mathrm{open}}$} & $45\degr$ \\
total optical depth in equatorial plane
& {\bf $\left<\tau_{9.7\,\muup\mathrm{m}}^{\mathrm{equ}}\right>_{\phi}$} & 2.0 \\
exponent of continuous density distribution & {\bf $\alpha$} & $-0.5$ \\
number of grid cells in $r$ direction & & 97 \\
number of grid cells in $\theta$ direction & & 31 \\
number of grid cells in $\phi$ direction & & 120 \\
& & \\
{\bf additional in clumpy model} & & \\
\hline
\hline
number of clumps & {\bf $N_{\mathrm{clump}}$ } & 400 \\
exponent of clump size distribution & {\bf $\beta$} & 1.0 \\
constant of clump size distribution & {\bf $a_0$} & 0.2 pc\\
optical depth of each clump & {\bf $\tau_{9.7\,\muup\mathrm{m}}^{\mathrm{clump}}$ } & 0.38 \\
average number of cells per clump & & 272 \\
\end{tabular}
\label{tab:model_param}
\end{table}
where the numerical values refer to our clumpy and continuous standard model.
The dust density distribution for the clumpy case is shown in Fig.~\ref{fig:standardmodel_vis}.
The torus possesses a volume filling factor of 30\% and the dust
mass was chosen such that the optical depth of the torus within the
equatorial plane (averaged over all angles $\phi$) reaches a value of two at
$9.7\,\muup$m. With this value, the resulting absorption column densities are
in concordance with observations of Seyfert type\,2 galaxies
obtained with the IRS
spectrometer onboard Spitzer (see e.~g.~\citealp{Shi_06}), and the modelled silicate absorption
feature depth compares well with observations. If not stated otherwise, the optical depth
always refers to a wavelength of $9.7\,\muup$m throughout this paper.
\begin{figure}
\centering
\resizebox{0.8\hsize}{!}{\includegraphics[angle=0,clip]{./figures/fig01.eps}}
\caption[3D rendering of the clump distribution of our standard clumpy model]{3D
rendering of the clump distribution of our standard clumpy torus model.
The chosen inclination angle corresponds to a Seyfert\,1 type (face-on) view onto the
torus.}
\label{fig:standardmodel_vis}
\end{figure}
As in \citet{Nenkova_02}, all clumps possess the same optical
depth in our standard model. A total optical depth within the equatorial plane
of $\left<\tau_{9.7\,\muup\mathrm{m}}^{\mathrm{equ}}\right>_{\phi}$ = 2.0
results in an optical depth of $\tau_{9.7\,\muup\mathrm{m}}^{\mathrm{clump}}=0.38$ along
a radial ray through the centre of the clump.
The corresponding continuous model has the same geometrical structure, continuously filled with dust according to the
density distribution given in Eq.\,\ref{equ:den_dis}.
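How this normalisation works can be illustrated with a small Monte Carlo sketch (our illustration under simplifying assumptions, not the actual code): draw a clump set following Sect.\,2.1, shoot equatorial rays from the origin, and sum the chord optical depths, noting that a homogeneous sphere of radius $a$ with central-ray optical depth $\tau_c$ contributes $\tau_c\sqrt{1-(d/a)^2}$ at impact parameter $d$. Rescaling $\tau_c$ so that the $\phi$-average reaches the target of 2.0 yields a value of the same order as the quoted 0.38; the exact number depends on the random realisation and on details such as the grid discretisation, which this sketch ignores.

```python
import numpy as np

rng = np.random.default_rng(7)
R_IN, R_OUT, A0, BETA, N_CLUMP = 0.4, 50.0, 0.2, 1.0, 400
COS_OPEN = np.cos(np.deg2rad(45.0))
TAU_TARGET = 2.0                     # desired phi-averaged equatorial optical depth

# Draw clump centres (Cartesian coordinates) and radii as in Sect. 2.1
xyz, radii = [], []
while len(radii) < N_CLUMP:
    r = rng.uniform(R_IN, R_OUT)
    a = A0 * r**BETA
    if r - a < R_IN or r + a > R_OUT:
        continue
    ct = rng.uniform(-COS_OPEN, COS_OPEN)
    st = np.sqrt(1.0 - ct**2)
    ph = rng.uniform(0.0, 2.0 * np.pi)
    xyz.append((r * st * np.cos(ph), r * st * np.sin(ph), r * ct))
    radii.append(a)
xyz, radii = np.array(xyz), np.array(radii)

def ray_tau(phi, tau_c=1.0):
    """Summed chord optical depth along an equatorial ray from the origin,
    for homogeneous clumps with central-ray optical depth tau_c."""
    u = np.array([np.cos(phi), np.sin(phi), 0.0])
    s = xyz @ u                        # projection of the centres onto the ray
    d2 = (xyz**2).sum(axis=1) - s**2   # squared impact parameter of the ray
    hit = (s > 0.0) & (d2 < radii**2)
    return tau_c * np.sqrt(1.0 - d2[hit] / radii[hit]**2).sum()

phis = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
mean_tau = np.mean([ray_tau(p) for p in phis])   # <tau> for tau_c = 1
tau_clump = TAU_TARGET / mean_tau                # clump tau needed for <tau> = 2
```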
\subsection{Summary of methods}
\label{sec:preconditions}
A very brief overview of the dust composition, the heating source and the numerical method of
radiative transfer will be given in this section.
Although several hints \citep{Maiolino_01b,Maiolino_01a,Jaffe_04}
point to the possibility that dust in
the nuclear regions of AGN is dominated by large grains,
we will limit our
present investigation to the classic MRN-model \citep{Mathis_77}
for three reasons: first, and
most important, we aim for comparability with our earlier paper on
continuous tori \citep{Schartmann_05}. Second, we have tested that our
essential results about the change in grain size distribution (Sect.\,3.9 in
\citealp{Schartmann_05}) remain unchanged when distributing the dust in a clumpy structure.
Third, our approach, which explicitly takes into account
the size-dependent sublimation radius, is generically more robust against changes in the
grain distribution than calculations that ignore this effect.
For our current simulations, we represent the MRN-model by three different
grain species with 5 different grain sizes each. Assigning each grain size its
own sublimation radius then partially accounts for the
destruction of small grains in the harsh environment of the quasar,
as small grains sublimate at larger distances from the centre.
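For illustration, the MRN size distribution, $n(a)\,\mathrm{d}a \propto a^{-3.5}\,\mathrm{d}a$ between $0.005$ and $0.25\,\muup$m, can be discretised into five size bins as follows. The logarithmic binning and the geometric-mean representative sizes are our assumptions for this sketch, not necessarily the binning used inside MC3D.

```python
import numpy as np

A_MIN, A_MAX = 0.005, 0.25   # MRN grain size limits [micron]
Q = -3.5                     # MRN exponent: n(a) da ~ a**Q da
N_BINS = 5                   # five sizes per grain species, as in the text

edges = np.logspace(np.log10(A_MIN), np.log10(A_MAX), N_BINS + 1)

# number fraction per bin: analytic integral of a**Q over each bin, normalised
frac = (edges[1:]**(Q + 1) - edges[:-1]**(Q + 1)) / (A_MAX**(Q + 1) - A_MIN**(Q + 1))
a_rep = np.sqrt(edges[:-1] * edges[1:])   # geometric-mean representative sizes
```

By number, the smallest size bin dominates; these are exactly the grains that are removed first close to the centre, since small grains possess the largest sublimation radii.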
The dust distribution is heated by a point-like, central accretion disk
with the SED of a mean
quasar spectrum (see Fig.\,3b in \citealp{Schartmann_05}).
The radiation characteristic is chosen to follow a $|\cos(\theta)|$ law for
all wavelengths. For the simulations shown in this paper, the accretion disk SED
is normalised to a bolometric luminosity of $1.2 \times 10^{11}
\, L_{\sun}$, except for the comparison with the Circinus galaxy.
\begin{figure}[t!]
\centering
\includegraphics[angle=0,clip,width=1.0\linewidth]{./figures/fig02.eps}
\caption[Quality tests]{
{\it a)} SEDs for a photon number study. The solid
curves refer to our standard model and the dotted curves (identical to the solid ones)
result after
doubling the number of photon packages. {\it b)} SEDs for a resolution study:
high resolution (solid curves -- our standard model) and a factor of 3
reduced number of grid cells (dotted curves). Shown are the cases for inclination
angles $0\degr$ and $90\degr$.}
\label{fig:quality_check}
\end{figure}
In order to obtain the temperature, the SEDs and the
surface brightness distributions of the dusty torus, we use the
three-dimensional radiative transfer code MC3D\footnote{MC3D ({\it Monte Carlo 3D}) has been tested
extensively against other radiative transfer codes for 2D structures \citep{Pascucci_04} and we also
performed a direct comparison for the special case of AGN dust tori with the
simulations of \citet{Granato_94}, one of the standard torus models for comparison,
calculated with his grid based code (see Fig.~4 in
\citealp{Schartmann_05}).} \citep{Wolf_99a,Wolf_03b}. We apply the
Monte Carlo procedure mainly for the calculation of temperature distributions and
the scattering part whenever necessary,
whereas SEDs and surface brightness maps for dust
reemission are
obtained with the included raytracer.
The main advantage compared to other codes is MC3D's capability to cope with real
three-dimensional dust density distributions, needed for a realistic modelling of
the dust
reemission from a clumpy torus.
For this paper, we implemented the automatic determination of the sublimation surfaces of
the various grain species in
three dimensions. As we expect the sublimation to happen along irregularly
shaped surfaces in a three dimensional, discontinuous model, a raytracing
technique is used to solve the (1D) radiative transfer equation
approximatively in all directions of the model space.
For further information on the radiative transfer procedure used and the other
preconditions (mainly primary source and dust composition), see
\citet{Wolf_99a}, \citet{Wolf_00}, \citet{Wolf_01,Wolf_03b} and \citet{Schartmann_05}.
\subsection{Resolution study}
\label{sec:mc3d_restest}
In Fig.~\ref{fig:quality_check}a, we show SEDs for our standard
clumpy model (solid line, $5\cdot 10^6$ monochromatic photon packages)
and for the same model, but with twice as many
photon packages ($10^7$) used for the simulation of the temperature distribution (dotted graphs,
identical with solid lines).
Despite slight differences in the temperature distributions of single grains, we find an almost
identical behaviour in the displayed SEDs, with differences smaller than the thickness of the lines.
Maps at $12\,\muup$m display the same distribution, with
slight artefacts along the projected torus axis that are not visible
in the individual surface brightness distributions
and have no noticeable effect on the interferometric visibility distributions calculated from these maps.
In Fig.~\ref{fig:quality_check}b, the solid
curves display the SEDs for our high spatial resolution standard model and
the dotted lines refer to a model with roughly a factor of three fewer grid
cells. Only very small deviations are visible at short wavelengths.
Fig.~\ref{fig:quality_check} clearly shows that the results and conclusions we draw from our simulations
are affected neither by photon noise nor by insufficient spatial resolution.
\begin{figure}[b!]
\resizebox{1.0\hsize}{!}{\includegraphics[angle=0]{./figures/fig03.eps}}
\caption[Comparison between the radial temperature distribution of the
clumpy model with the continuous model]{Comparison between radial temperature distributions
(for the smallest silicate grains) in all directions of the clumpy standard
model (panel a) with the
temperature distribution of the corresponding
continuous model (panel b). The red curve indicates the temperature averaged over
all angles $\theta$ and $\phi$.}
\label{fig:temp_comp_equ}
\end{figure}
\section{Analysis of our standard model}
\label{sec:results_stanmodel}
Most SEDs discussed in this paper show pure dust reemission only,
and an azimuthal viewing angle of $45\degr$ is used, if not stated otherwise.
\subsection{Temperature distribution}
In Fig.~\ref{fig:temp_comp_equ}, the temperature distribution of all cells in all $\theta$ and
$\phi$ directions for the smallest silicate grain component is plotted a) for our
clumpy standard model and b) for the corresponding
continuous model. The red curves show the radial temperature profile, averaged
over all $(\theta,\phi)$ directions. It is evident that the
spread of temperature values for a given distance from the primary
radiation source is much larger for the clumpy models than for the
continuous ones.
Higher temperatures are possible even in the outer parts of the torus, as
dust-free or optically thin lines of sight can extend far out, depending on the
distribution of the individual clumps. Therefore, direct illumination
of clouds is possible even at large radii.
In the continuous model, the
scatter decreases significantly from $2\,$pc outwards. Further in,
the $\theta$-dependent radiation characteristic of the primary
source causes greater scatter, owing to higher temperatures further away from the
midplane.
\subsection{Viewing angle dependence}
\label{sec:viewing_angle}
\begin{figure}[b!]
\centering
\resizebox{\hsize}{!}{\includegraphics[angle=0]{./figures/fig04.eps}}
\caption[Viewing angle dependence of the SEDs of our clumpy standard
model]{Dependence of
the SEDs on the viewing
angle: {\bf a)} different inclination angles for a common azimuthal angle
$\phi$\,$=$\,$45\degr$. Inclination angles shown are:
$0\degr$ (solid line), $30\degr$ (dotted line), $60\degr$ (dashed
line), $90\degr$ (dashed-dotted line).
{\bf b)} Different azimuthal
angles $\phi$ for a common inclination angle of $i=60\degr$. Azimuthal angles shown are:
$0\degr$ (solid line), $45\degr$ (dotted line) and $180\degr$ (dashed
line).}
\label{fig:standardmodel_theta_dep_sed}
\end{figure}
Fig.~\ref{fig:standardmodel_theta_dep_sed}a shows the dependence of the spectral
energy distributions on the inclination of the torus.
One can only see a clear distinction between lines of sight within the
dust-free funnel ($0\degr$ and
$30\degr$ inclinations) and those within the wedge-shaped disk ($60\degr$
and $90\degr$). This was already reported by \citet{Granato_94}, based on their
continuous wedge models.
In our case, it is caused by the relatively large volume filling fraction and the large
clouds in the outer part of the torus. Therefore, the dust density distribution,
which we chose for simplicity, depends only weakly on the polar angle.
In our previous (2D) modelling (see \citealp{Schartmann_05}),
we obtained the expected smooth transition in
the polar direction.
In Fig.~\ref{fig:standardmodel_theta_dep_sed}b, the azimuthal angle is
varied for a constant inclination angle of $60\degr$.
Nearly identical SEDs result, which is understandable when considering our large volume
filling factors. The largest deviations appear at the shortest wavelengths, where
the emission results from the hottest parts of the torus, which are also the most centrally
concentrated parts. Therefore, this wavelength range is most sensitive to
changes of the optical depth along the direct line of sight towards the centre.
The dependence on the inclination angle of images is shown in Fig.\,\ref{fig:dustmass_im}
(upper two rows).
It is especially interesting that the different inclination angles look
very similar, which was not the case for the continuous model
(see lower two rows of Fig.\,\ref{fig:dustmass_im}).
There, the images at larger inclination angles are dominated by the boundaries of
the disk, which are not so well defined in the clumpy case.
In the zoomed-in images
(Fig.~\ref{fig:random_distribution}, upper row), the basic features of our
model are directly visible, as one can see the different illumination
patterns of clouds: Clouds in the innermost part are fully illuminated
and, therefore, show bright inner rims and cold outer parts. Other clouds are
partly hidden behind clouds further in and appear as bright spots only.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[angle=0]{./figures/fig05.eps}}
\caption[Comparison of images with various dust masses]{Inclination angle study of images of clumpy
(first two rows) and
continuous models (third and fourth row) with two different dust masses
at $12\,\muup$m. Shown are the extreme cases with half of the mass of the
standard model (first and third row) and eight times the mass (second and fourth row). For details of the
mass study, see Sect.~\ref{sec:dustmass_study}.
The inclination angles
$i=0\degr$, $60\degr$, $90\degr$ are shown in different columns. The scaling is logarithmic with
a range between the maximum value of all images and the $10^{-6}$th
fraction of it (excluding the central point source). Labels are sizes in pc.}
\label{fig:dustmass_im}
\end{figure}
\subsection{Wavelength dependency}
\label{sec:wavelength_dependency}
Fig.~\ref{fig:standardmodel_sb_wave_dep} shows the wavelength dependency of
our standard model. At short wavelengths, the hottest inner parts
dominate the brightness distribution.
Further out, a few more directly illuminated clumps are
visible as bright spots.
At the longest wavelengths, emission arises from clumps all over the
torus, as colder dust emits strongest at these wavelengths. This dust is spread
over a larger volume, due to the steeply decreasing temperature distribution
at small radii.
Furthermore, the extinction
curve has dropped by a large factor at these wavelengths and, therefore, the torus becomes
optically thin and the whole range of cloud sizes is visible.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[angle=0]{./figures/fig06.eps}}
\caption[Wavelength dependence of the surface brightness distributions of our
clumpy standard model]{Wavelength dependence of the surface brightness distributions:
$\lambda=4.6\,\muup$m (upper row),
$\lambda=9.7\,\muup$m (second row), $\lambda=12.0\,\muup$m (third row) and
$\lambda=30.2\,\muup$m (lower row).
Within the rows, the inclination angle changes from face-on view (leftmost
panel) over $60\degr$ to $90\degr$ (rightmost panel).
The images are given in
logarithmic scaling with a range of values between the
global maximum of all images and the
$10^{-5}\mathrm{th}$ fraction of it (central source excluded). Labels are in pc.}
\label{fig:standardmodel_sb_wave_dep}
\end{figure}
\section{Parameter variations}
\label{sec:param_study}
\subsection{Different realisations of the clumpy distribution}
As already discussed in \citet{Schartmann_05}, SEDs of dust reemission depend strongly on the distribution of
dust in the innermost region. Changing the random arrangement of clumps -- as
done in this section -- is therefore expected to cause significant changes in the
SEDs, especially for a small number of clouds. The second
important parameter is the optical depth of the single clumps.
The larger it is, the stronger is the
dependence of the SEDs on the dust distribution in the innermost region.
In our modelling, the small number of clouds is expected to
cause large differences in the observed SEDs. But this effect is partially compensated --
in most of the simulations -- by optically thin individual clumps, resulting in a more similar behaviour
of the SEDs.
\begin{figure*}
\centering
\resizebox{0.6\hsize}{!}{\includegraphics[angle=0]{./figures/fig07.eps}}
\caption[Dependence on the random arrangements of clumps]{Different random
arrangements of
clumps. The rows show three different inclination angles:
$0\degr$ (first row), $60\degr$ (second row), $90\degr$ (third row). Given in columns are
the SEDs (first column) and the images at $12\,\muup$m for the three
different random cloud arrangements (column 2 to 4), all having the
same parameters (see Table~\ref{tab:model_param}). Here,
the solid line in the SED corresponds to the first column of images, the
dotted line to the second and the dashed line to the third column.
Images
are given in logarithmic colour scale ranging from the maximum of
all images to the $10^{-4}$th part of it (excluding the central source).
Labels denote distance to the centre in pc.}
\label{fig:random_distribution}
\end{figure*}
Looking at the simulated SEDs
and matching them with the images,
the following results can be seen (compare to Fig.~\ref{fig:random_distribution}):
At $0\degr$ inclination angle (upper row), we observe nearly identical
SEDs. The dashed line (corresponding to the fourth column) shows a slightly
enhanced flux at short wavelengths,
as a larger number of clouds are close to the
central source. In the third column,
the cloud number density in the central part is the lowest of the three examples.
For the case of the middle row ($i=60\degr$), the largest deviations are visible for the case
of the dotted line (third column). Here, the silicate feature even appears in emission.
This is visible in the surface brightness distributions, as more directly illuminated clouds
are visible on unobscured lines of sight, resulting in a brighter central region compared to the other two maps.
At an inclination angle of $90\degr$ (third row), absorption along the line of sight increases
drastically from the second to the fourth column, visible in a deepening of the silicate absorption
feature and the darkening of the central region of the surface brightness distributions.
\subsection{Different volume filling factors}
\label{sec:volfill}
Starting
from the standard model with a volume filling factor of $30\%$ and 400 clumps
within the whole model space, we halved the filling factor by distributing only 160
clumps within the calculation domain and doubled it, for which 1500 clumps were
needed due to the applied procedure of randomly distributing clumps.
\begin{figure}[b!]
\centering
\resizebox{\hsize}{!}{\includegraphics[angle=0]{./figures/fig08.eps}}
\caption[Surface brightness distributions at various volume filling factors
of our clumpy standard model]{Different volume filling factors:
$15\%$ (upper row), $30\%$ (middle
row), $60\%$ (lower row) for $\lambda=12.0\,\muup$m.
From left to right, the inclination angle
changes: $i=0\degr$, $60\degr$, $90\degr$. The scaling is logarithmic with
a range between the maximum value of all images and the $10^{-6}$th
fraction of it (excluding the central point source). Length scales are given in pc.}
\label{fig:volfill_sb}
\end{figure}
\begin{figure}[b!]
\centering
\resizebox{0.75\hsize}{!}{\includegraphics[angle=0]{./figures/fig09.eps}}
\caption[SEDs at various volume filling factors
of our clumpy standard model]{Different volume filling factors:
$15\%$ (dotted line), $30\%$ (solid
line), $60\%$ (dashed line).
Rows show SEDs for the three inclination angles: $i=0\degr$, $60\degr$, $90\degr$. The
viewing angle in $\phi$-direction is always $45\degr$.}
\label{fig:volfill_sed}
\end{figure}
The resulting surface brightness distributions at $\lambda\,=\,12.0\,\muup$m are
shown in Fig.~\ref{fig:volfill_sb}
with the three models given in different rows and for three different
inclination angles: $i=0\degr$, $60\degr$, $90\degr$.
In the case of the lowest volume filling factor, individual clouds are
visible. The distribution of the surface brightness of these individual clouds
reflects the temperature structure within single clumps. The directly
illuminated clumps are hotter and, therefore, appear brighter. When adding
more and more clumps (increasing the volume filling factor), the chance of
directly illuminating clumps further out decreases and at higher filling factors
it is only possible for clumps close to the funnel. This is clearly visible at
higher inclination angles: the higher the volume filling factor, the clearer
the x-shaped feature appears, as only clumps within or close to the funnel can
be directly illuminated. At a volume filling factor of $60\%$, the surface
brightness distribution looks very similar to that of the corresponding
continuous model (compare to Fig.~\ref{fig:dustmass_im}).
For large volume filling factors and close to edge-on, substructure is only visible from
clouds in a viewing direction towards the dust-free cones.
The corresponding SEDs are shown in Fig.~\ref{fig:volfill_sed}.
With increasing filling factor, more and more flux at short wavelengths
appears for the face-on case, as seen in \citet{Schartmann_05}.
The shape of the clumpy model SEDs resembles that of the
corresponding continuous model most closely (compare to Fig.~\ref{fig:dustmass}, right column) for
the highest volume filling factor.
Concerning the silicate feature, it increases slightly in
emission as the amount of dust at the appropriate temperature increases as
well (visible at the transition from the lowest to the medium volume filling factor).
The silicate absorption feature at higher inclinations strongly depends on the viewing
angle (compare Fig.~\ref{fig:volfill_sed}b and c) especially for the model with
the smallest number of clumps (dotted line).
Thus, this study validates the simplification of using a smooth
dust distribution for very high torus volume filling factors, as assumed in previous
simulations.
\subsection{Dust mass study}
\label{sec:dustmass_study}
To study the dependence of the SEDs on the
optical depth of the torus, we carried out a study with
0.5, 1, 2, 4 and 8 times the dust mass in the standard model.
This leads to a $\phi$-averaged optical
depth at $9.7\,\muup$m within the equatorial plane of
$\left<\tau_{9.7\,\muup\mathrm{m}}^{\mathrm{equ}}\right>_{\phi} = 1, 2, 4, 8, 16$.
Single clumps then change from optically thin to optically thick
($\tau_{9.7\,\muup\mathrm{m}}^{\mathrm{clump}} = 0.19, 0.38, 0.76, 1.52, 3.04$).
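Since the geometry is fixed and only the density normalisation $\rho_0$ is rescaled, every optical depth in the model scales linearly with the dust mass; the quoted clump values follow directly from the standard-model value of 0.38. A trivial check, shown here only to make the scaling explicit:

```python
# Optical depth is proportional to the density normalisation (tau ~ kappa * rho * L),
# so rescaling the dust mass rescales every optical depth by the same factor.
mass_factors = [0.5, 1, 2, 4, 8]

tau_equ = [2.0 * f for f in mass_factors]               # 1, 2, 4, 8, 16
tau_clump = [round(0.38 * f, 2) for f in mass_factors]  # 0.19, 0.38, 0.76, 1.52, 3.04
```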
\begin{figure*}
\centering
\resizebox{0.6\hsize}{!}{\includegraphics[angle=0]{./figures/fig10.eps}}
\caption[SEDs for different enclosed dust masses]{SEDs for different enclosed dust masses. The left
column shows the
case for the clumpy models and a face-on view (upper row) as well as an edge-on
view (lower row). In the right column, continuous models are displayed.
The solid line corresponds to the standard model, the dotted to half of the mass,
the dashed double the mass, the dash-dotted to four times the mass and the
dash-triple-dotted to eight times the mass of the standard model. \label{fig:dustmass}}
\end{figure*}
\begin{figure}
\centering
\resizebox{0.75\hsize}{!}{\includegraphics[angle=0]{./figures/fig11.eps}}
\caption[Close-up of the spectrum between 7 and $15\,\muup$m for the face-on
case of the dust mass study]{Close-up
of the spectrum between 7 and $15\,\muup$m of the face-on case of the dust mass study for the clumpy
model in linear display. The lines are defined as in Fig.~\ref{fig:dustmass}.}
\label{fig:dustmass_close}
\end{figure}
The resulting behaviour of the SEDs is shown in
Fig.~\ref{fig:dustmass}, where it is also compared to the corresponding
continuous models. Concerning the silicate feature (in emission) for the
face-on case (top row), we see a very similar behaviour of the SEDs of the clumpy and
continuous model. Increasing the mass and with it the total optical depth leads to a flattening of
the SED around the silicate feature, even more pronounced in the continuous case.
In addition, a slight shift of the maximum of the
silicate feature towards longer wavelengths is visible for the case of the highest dust
mass, apparent in the zoom-in around the silicate feature
(Fig.~\ref{fig:dustmass_close}).
This is due to the increasing underlying
continuum towards longer wavelengths.
Although the principal behaviour of the silicate feature is identical for our
clumpy and our continuous model, the reasons
differ: In the case of a continuous wedge-like torus, the inner, directly illuminated
walls are only visible through a small amount of dust. From step to
step, the walls become more opaque and shield the directly illuminated
inner rim better,
decreasing the height of the silicate feature.
This was not the case for
our continuous {\it TTM}-models in \citet{Schartmann_05}. With them, it was not possible
to significantly reduce the silicate feature height within reasonable optical depth ranges.
This was caused by the fully visible, directly illuminated
inner funnel.
Therefore,
in the wedge-shaped continuous models,
the reduction of the feature is an artefact caused by the unphysical, purely geometrically
motivated shape. Furthermore, it also involves very deep and so far unobserved
silicate absorption features for
the edge-on case (see lower right panel in Fig.~\ref{fig:dustmass}).
Concerning the clumpy model, the explanation for the flattening of the
silicate feature in the edge-on case with increasing dust mass of the torus will be given in
Sect.~\ref{sec:model_reduction}.
The Wien branches show different behaviour when looking face-on.
For the clumpy torus model, increasing
the optical depth moves the Wien branch to longer
wavelengths, as expected for the edge-on case. This is understandable when most of the directly
illuminated surfaces of the clouds are hidden behind other clouds, an argument which
does not hold if the clouds in the inner part are too optically thin.

For the edge-on case, we qualitatively obtain a comparable behaviour as in the
continuous case, because of the large number of clumps and the same
optical depth within the equatorial plane.
But a very important difference can be seen in the appearance of the silicate
feature in absorption:
when we want to have only very weak silicate emission features in the
face-on case, a large optical depth is needed, resulting in an unphysically
deep silicate feature in absorption in the edge-on case of the continuous models,
whereas the silicate feature remains
moderate for many lines of sight for the clumpy model, where we see a large
scatter for different random arrangements of clumps (compare to Fig.~\ref{fig:random_distribution}).
Concerning surface brightness distributions (see
Fig.~\ref{fig:dustmass_im}), one can see that
the objects appear smaller at mid-infrared wavelengths for the case of higher dust
masses: the larger the optical
depth, the brighter the inner region and the dimmer the outer part.
This is caused by a steepening of the radial dust temperature distribution with
increasing mass of the objects, as the probability of
photon absorption increases in the central region. Especially in the
continuous case, the asymmetry at intermediate
inclination angles becomes visible at larger optical depths; it is caused by
line-of-sight extinction by cold dust in the outer parts of the torus.
\subsection{Concentration of clumps in radial direction}
\begin{figure}[t!]
\centering
\resizebox{\hsize}{!}{\includegraphics[angle=0]{./figures/fig12.eps}}
\caption[Surface brightness distributions for various slopes of the density
distribution]{Surface brightness distributions for various slopes of
the density distribution in the corresponding continuous
model ($\rho_{\mathrm{cont}} \propto r^{\alpha}$)
leading to different concentrations of clumps in the radial
direction in the clumpy model. The slopes are: $\alpha=0.0$ (upper row), $\alpha=-0.5$ (middle
row), $\alpha=-1.0$ (lower row). From left to right, the inclination angle
increases from face-on to edge-on: $i=0\degr$, $60\degr$,
$90\degr$. Shown are images at $12\,\muup$m with a logarithmic color scale
ranging from the maximum of all images to the $10^{-6}$th part of it.
Labels denote the distance to the centre in pc.}
\label{fig:concentration_sb}
\end{figure}
As already described in the model section (\ref{sec:Model}), clump positions
are also chosen in accordance with the density distribution of the
corresponding continuous model. Therefore, changing the slope of this radial density
distribution, defined to be
$\rho_{\mathrm{cont}}(r,\theta,\phi) = \rho_0 \, \left(\frac{r}{1\,\mathrm{pc}}\right)^{\alpha}$,
leads to a different concentration of clumps along radial
rays. In this section, we vary the slope $\alpha$ of the distribution from a
homogeneous dust distribution ($\alpha=0.0$) through $\alpha=-0.5$ (our standard model) to $\alpha = -1.0$.
Decreasing $\alpha$ leads to an enhancement of the clump number density towards the
central region. In order to keep the volume filling fraction at a constant
level of $30\%$, we need to increase the number of clumps, as
their size decreases towards the central region. All clumps possess the same
optical depth. In order to have a constant mean optical
depth in the midplane,
the total dust mass has to be decreased. For an overview of the modified parameters see Table\,
\ref{tab:slope}.
\begin{table}[!b]
\caption[Varied parameters of the clump concentration study]{Varied parameters of the clump concentration study.}
\centering
\begin{tabular}{lccc}
\hline
\hline
Parameter & $\alpha=0$ & $\alpha=-0.5$ & $\alpha=-1$ \\
\hline
No. of clumps & 250 & 400 & 900 \\
Dust mass [$\mathrm{M}_{\sun}$] & 22950 & 12562 & 6418 \\
\hline
\end{tabular}
\label{tab:slope}
\end{table}
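The radial placement of clumps according to $\rho_{\mathrm{cont}} \propto r^{\alpha}$ can be sketched with inverse-CDF sampling: the probability of a clump lying between $r$ and $r+\mathrm{d}r$ scales as $\rho(r)\,r^2\,\mathrm{d}r \propto r^{\alpha+2}\,\mathrm{d}r$. The following illustration uses placeholder radii and clump numbers, not the paper's actual grid or code:

```python
import random

def sample_clump_radii(n, r_in, r_out, alpha, seed=42):
    """Draw clump radii whose number density follows rho ~ r**alpha.

    With the spherical volume element, the radial pdf is ~ r**(alpha + 2),
    so for alpha > -3 the cumulative distribution inverts analytically.
    """
    rng = random.Random(seed)
    p = alpha + 3.0  # exponent appearing in the integrated (cumulative) pdf
    radii = []
    for _ in range(n):
        u = rng.random()
        radii.append((u * (r_out**p - r_in**p) + r_in**p) ** (1.0 / p))
    return radii

# Illustrative values only (not the parameters of the models above):
radii_flat = sample_clump_radii(20000, 1.0, 30.0, alpha=0.0)
radii_steep = sample_clump_radii(20000, 1.0, 30.0, alpha=-1.0)
```

A more negative $\alpha$ pulls the clump population inwards: the median of `radii_steep` lies inside the median of `radii_flat`, matching the enhanced central concentration described above.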
The change of clump concentration can be seen directly from the simulated images at $12.0\,\muup$m
in Fig.~\ref{fig:concentration_sb}, especially in the face-on case (first column). In the
upper panel, single reemitting clumps are visible in the central region. This
changes more and more to a continuous emission for the case of the highest
cloud concentration in the centre due to multiple clumps along the line of
sight and intersecting clumps. At higher inclination angles, the higher
concentration leads to a sharper peak of the surface brightness.
The same behaviour is visible in the corresponding SEDs
shown in Fig.~\ref{fig:concentration_sed}. Decreasing the amount
of dust in the centre near the heating source leads to decreasing flux at
near-infrared wavelengths, whereas the flux at far-infrared wavelengths
increases (reflecting the enhancement of dust in the outer part).
\begin{figure}[ht]
\centering
\resizebox{0.95\hsize}{!}{\includegraphics[angle=0]{./figures/fig13.eps}}
\caption[SEDs for the clump concentration study]
{SEDs for the clump concentration study. The varied slopes of the underlying density distribution are:
$\alpha=0.0$ (dotted line), $\alpha=-0.5$ (solid
line), $\alpha=-1.0$ (dashed line):
{\bf a)} face-on,
{\bf b)} edge-on.}
\label{fig:concentration_sed}
\end{figure}
\subsection{Dependence on the clump size distribution}
In our clumpy standard model, a radially changing clump size
proportional to the radial distance to the
centre was chosen.
In this section, we test the effects of decreasing the slope
$\beta$
of the radial size distribution $a_{\mathrm{clump}} = a_0
\,\left(\frac{r_{\mathrm{clump}}}{1\,\mathrm{pc}}\right)^{\beta}$
of the clumps. This is done such that the volume filling fraction as well
as the optical depth in the midplane, averaged over all azimuthal angles $\phi$, remain
constant. This is achieved by changing the proportionality constant $a_0$ of the
clump size distribution and the total dust mass of
the torus.
Doing this results in very well resolved clumps in the inner part. Beyond a
distance of approximately $25\,$pc, the number of grid cells per clump drops below the value
of our standard model.
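The bookkeeping behind keeping the filling fraction fixed while changing $\beta$ can be illustrated directly: with $a_{\mathrm{clump}} = a_0\,(r/1\,\mathrm{pc})^{\beta}$, the volume of a single clump scales as $r^{3\beta}$, so flattening the size law at fixed clump positions changes the total clump volume, and $a_0$ and the dust mass must be rescaled. A sketch with placeholder positions and an illustrative $a_0$ (not the standard model's values):

```python
import math

def clump_radius(r, a0, beta):
    """Clump radius a_clump = a0 * (r / 1 pc)**beta, with r in pc."""
    return a0 * r**beta

def total_clump_volume(positions, a0, beta):
    """Summed volume of spherical clumps placed at the given radii (pc^3)."""
    return sum(4.0 / 3.0 * math.pi * clump_radius(r, a0, beta)**3
               for r in positions)

# 400 placeholder clump positions spread between 1 and 30 pc:
positions = [1.0 + 29.0 * k / 399.0 for k in range(400)]

v_scaled = total_clump_volume(positions, a0=0.2, beta=1.0)  # r-scaled sizes
v_const = total_clump_volume(positions, a0=5.0, beta=0.0)   # constant 5 pc
```

For these positions the constant 5 pc clumps enclose roughly twice the volume of the $\beta=1$ population, so the filling fraction (and hence $a_0$ or the total mass) has to be readjusted whenever $\beta$ changes.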
\begin{figure}[b!]
\resizebox{1.0\hsize}{!}{\includegraphics[angle=0]{./figures/fig14.eps}}
\caption[Images for a clumpy distribution with a constant clump size]{Images
at $12\,\muup$m and inclination angles $i=0\degr$ and
$90\degr$ for a clumpy distribution with a constant clump size, independent
of the radial position. The scaling is logarithmic with
a range between the maximum value of all images and the $10^{-6}$th
fraction of it (excluding the central point source).}
\label{fig:clump_size_sb}
\end{figure}
Surface brightness distributions for the extreme case of a constant clump size are
shown in Fig.~\ref{fig:clump_size_sb}. The large clump radius of $5\,$pc
even in the central region, together with the exclusion of partial clumps at the
model space border, leads to a density
distribution with a quite large, unevenly shaped central cavity,
as can be seen in the face-on view (left panel of Fig.~\ref{fig:clump_size_sb}).
The inner rim is given by only a few
intersecting clumps, instead of the otherwise defined spherical central
cavity. Therefore, in the edge-on case, the surface brightness distribution
shows an inner boundary, which is bent towards the centre (convex shaped).
In these models, due to the large clump
size in the inner region, many clumps intersect, producing a nearly continuous
dust distribution at the inner boundaries, which lets the x-shaped structure
typical of continuous models appear again. For the same reason, the
extinction band due to the $|\cos(\theta)|$-radiation characteristic is visible in
the edge-on view. Especially at the $90\degr$ inclination angle,
single clumps are directly visible (above and below the centre).
In these cases, their shading
directly shows the illumination pattern due to the primary source (accretion
disk), emission from other clumps and extinction from the foreground dust
distributions.
\begin{figure}
\centering
\resizebox{0.95\hsize}{!}{\includegraphics[angle=0]{./figures/fig15.eps}}
\caption[Dependence of SEDs on the clump size for different inclination
angles]{Dependence
on the clump size for different inclination angles
(rows: $0\degr$, $60\degr$, $90\degr$). The solid line corresponds to our standard model with
$\beta = 1$. For the dashed line,
clumps have equal size ($a_{\mathrm{clump}} = 5\,$pc, $\beta=0$), independent of the
radial position and the dotted line corresponds to an intermediate model
with $\beta=0.5$.}
\label{fig:clump_size_sed}
\end{figure}
The corresponding SEDs (Fig.~\ref{fig:clump_size_sed}) mainly reflect the
increase of the inner cavity and, therefore, the lack of flux at short wavelengths. The
convex shape of that region causes a larger directly illuminated area at the
funnel walls and, therefore, slightly strengthens the silicate emission
feature in a face-on view. A different appearance (emission/absorption)
of the $10\,\muup$m feature at $i=60\degr$ (middle panel) is seen. This is due to the
lower number density of clumps in the inner part, enforced by the restriction of
having only whole clumps within the model space.
A dust mass study for the case of the large,
constant-diameter clump model ($\beta=0$) reveals the same behaviour as discussed in
Sect.~\ref{sec:dustmass_study} when viewing the torus edge-on.
However, the face-on case differs:
only the relative height of the silicate feature changes slightly. This was already
observed in our {\it TTM}-models in \citet{Schartmann_05} and is due to the
now inwardly bent inner walls of the funnel (see Fig.\,\ref{fig:clump_size_sb}, right panel),
caused here by the very large and
spherical clumps in the innermost torus region.
\section{Discussion}
\label{sec:discussion}
\subsection{Explanation for the reduction of the silicate feature}
\label{sec:model_reduction}
\begin{figure}[b!]
\centering
\resizebox{0.75\hsize}{!}{\includegraphics[]{./figures/fig16.eps}}
\caption[Sketch of our clumpy torus model]{Sketch of our clumpy torus model.
Indicated in yellow are directly
illuminated surfaces of the clumps.
$i$ is the inclination angle, $\theta_{\mathrm{open}}$ is the half
opening angle of the torus.}
\label{fig:explain_model}
\end{figure}
The results shown in the preceding subsections can be explained with the following
model, which was partially discussed by \citet{Nenkova_02}.
It is illustrated in Fig.~\ref{fig:explain_model},
where yellow denotes directly illuminated clump surfaces.
Many of the explained features can also be seen
in the zoomed-in versions of the surface brightness distributions
(Fig.~\ref{fig:random_distribution}) for the face-on case (upper row).
As already pointed out in \citet{Schartmann_05}, the SEDs of dust tori in the mid-infrared
wavelength range are
mainly determined by the inner few parsecs of the toroidal dust
distribution. In each of the central clouds of the clumpy model,
the dust temperature drops from
the inner directly illuminated edge towards the cloud's outer surface.
With an inclination angle close to
$i=90\degr$, we expect -- for realistic volume filling factors --
behaviour of the SED comparable to the continuous case. The silicate feature has a smaller
depth, as discussed in Sect.\,\ref{sec:dustmass_study}. But the situation changes with decreasing
inclination angle. Here, one has to distinguish between different cases:
\begin{enumerate}
\item With a relatively high volume filling factor and a not too small
extension of the clouds in the central region of the torus, it is likely
that the directly illuminated part of most of the clouds is hidden from direct
view by other clouds. Therefore, the directly illuminated surface area
is reduced compared to the corresponding continuous
model. As this area is responsible for the emission fraction of the silicate
feature within the SED, it shows less silicate emission.
In order to produce such a shadowing effect, clouds have to
possess a large enough optical depth, which means that they have to be
either small or massive ($\tau_{\mathrm{clump}} \propto
m_{\mathrm{clump}}\,a_{\mathrm{clump}}^{-2}$, where $m_{\mathrm{clump}}$ is
the mass of the clump).
\item If the optical depth of the clumps in the innermost
region is too small, we expect the silicate feature to appear in strong emission.
\item Another way to produce silicate features in emission is a model that
possesses only a small number of clumps in the inner part, making the
shadowing effect inefficient.
\item A further effect on the silicate feature strength
arises from the grain-dependent sublimation implemented
in our models. As graphite grains possess a higher sublimation temperature than
silicate grains, they are able to partially shelter the silicate grains from
direct irradiation.
\end{enumerate}
Thus, the strength of the silicate feature is mainly determined by
the distribution, size and optical depth of the clouds in the direct vicinity
of the sublimation surface of the dust.
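The scaling in point 1, $\tau_{\mathrm{clump}} \propto m_{\mathrm{clump}}\,a_{\mathrm{clump}}^{-2}$, is just the column density through a clump of mass $m$ and radius $a$; a two-line check of the "small or massive" statement (arbitrary normalisation, not a physical opacity):

```python
def clump_tau(m, a, kappa=1.0):
    """Optical depth scaling for a uniform spherical clump:
    tau ~ kappa * m / a**2 (opacity times column density;
    the numerical prefactor is irrelevant for the scaling)."""
    return kappa * m / a**2

tau_ref = clump_tau(m=1.0, a=1.0)
tau_half_size = clump_tau(m=1.0, a=0.5)    # same mass, half the radius
tau_double_mass = clump_tau(m=2.0, a=1.0)  # double mass, same radius
```

Halving the clump radius at fixed mass quadruples the optical depth, while doubling the mass only doubles it; either route makes a clump opaque enough to shadow.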
We will see in the next section that this finding explains well why
\citet{Nenkova_02}
and \citet{Dullemond_05} come to different conclusions concerning the reduction of the
strength of the silicate feature due to clumpiness, as the distributions (and
sizes) of their clouds differ.
\subsection{Comparison with other torus models}
\label{sec:torus_other}
The results of \citet{Nenkova_02} -- the pioneering work in the field of clumpy tori --
are broadly consistent with the explanations given in Sect.\,\ref{sec:model_reduction}
of this paper.
\citet{Dullemond_05} model 2D clumps in the form of rings with a two-dimensional
radiative transfer code. In contrast to all other simulations, no systematic
reduction of the silicate feature due to clumpiness is found. This is understandable
in light of the explanations given in Sect.\,\ref{sec:model_reduction}, as their model features
a small clump number density in the central region, so that
shadowing effects are rather small. Therefore, they find both strengthening and
reduction of the silicate feature, depending on the random ring distribution.
\begin{figure}[b!]
\centering
\resizebox{0.95\hsize}{!}{\includegraphics[]{./figures/fig17.eps}}
\caption[Comparison of our clumpy standard model with simulations by
S.~F.~H\"onig]{Comparison
of our clumpy standard model and two other random realisations of the clump
distribution (blue lines) with
simulations done by S.~F.~H\"onig
(private communication, described in \citealp{Hoenig_06}), shown
by the yellow lines, for 10 different random realisations of their
model. The latter are scaled with a factor of 2.2 in order to give
rough agreement between the two models (see text for further explanation).}
\label{fig:hoenig_comp}
\end{figure}
A comparison of our clumpy standard model and two other random cloud
distributions with simulations by
\citet{Hoenig_06} is shown in Fig.~\ref{fig:hoenig_comp}.
They follow a different, multi-step approach: 2D radiative transfer
calculations of individual clouds at different positions and with various illumination
patterns within the torus are carried out.
In a second step, the SED of the total system is calculated.
The cloud distribution and parameters such as optical depth or size arise from an accretion
scenario of self-gravitating clouds close to the shear limit \citep{Vollmer_04,Beckert_04}.
The advantage of this approach is that
resolution problems can be overcome easily, as real radiative transfer
calculations are only needed in 2D for single clumps.
Characteristic of their modelling are small cloud sizes with very high optical depths
in the inner part of the torus and a large number of clumps.
For comparison, a cloud at the sublimation radius of their model has a radial size of
$R_{\mathrm{cloud}}=0.02\,$pc with an optical depth of
$\tau_{0.55\,\muup\mathrm{m}}^{\mathrm{clump}}\approx 250$.
In our standard model, clouds at the sublimation radius are four times larger and possess
an optical depth of only $\tau_{0.55\,\muup\mathrm{m}}^{\mathrm{clump}}\approx 3$.
The large optical depth in the innermost part, in combination with the large
number density there, significantly reduces the silicate feature by shadowing,
compared to their single-clump
calculations. Their finding that the silicate emission
feature can be reduced further by increasing
the number density of clumps in the innermost part perfectly fits
our explanation presented in Sect.\,\ref{sec:model_reduction}.
Deviations between the two approaches (see Fig.~\ref{fig:hoenig_comp}) are mainly due to
the approximately eight times larger primary luminosity and the larger optical
depth (at least in the midplane) of the \citet{Hoenig_06} modelling compared to
our standard model. Furthermore,
in our simulations only dust reemission SEDs are shown.
This leads to relatively higher fluxes at short
wavelengths compared to long wavelengths for the $i=30\degr$ case
(Fig.~\ref{fig:hoenig_comp}a) and to more extinction within the midplane and,
therefore, a shift of the Wien branch towards longer wavelengths in the edge-on
case (lower panel).
\section{MIDI interferometry}
\label{sec:MIDI_interferometry}
Even with the largest single-dish mid-infrared telescopes, it is impossible to directly resolve the dust torus
of the nearest Seyfert galaxies.
Therefore, interferometric measurements are needed. Recently,
\citet{Jaffe_04} succeeded for the first time in resolving the dusty structure around an AGN in the
mid-infrared wavelength range. In
this case, they probed the active nucleus of the nearby Seyfert\,2 galaxy NGC\,1068 with the help of the
MID-infrared interferometric Instrument (MIDI, \citealp{Leinert_03}). It is located at the European Southern
Observatory's (ESO's) Very Large Telescope Interferometer (VLTI) laboratory on Cerro Paranal in Chile.
Its main objective is the coherent combination of the beams of
two 8.2\,m diameter Unit Telescopes (UTs) in order to obtain structural properties of the observed objects
at high angular resolution. A spatial resolution of up to $\lambda / (2\,B) \approx 10\,$mas
at a wavelength of $\lambda=10\,\muup$m can be obtained
for the largest possible separation of two Unit Telescopes of $B \approx 120\,$m.
Operating in the N-band ($8-13.5\,\muup$m),
it is perfectly suited to detect thermal emission of dust in the innermost parts
of nearby Seyfert galaxies.
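The quoted resolution follows directly from $\lambda/(2B)$; converting radians to milliarcseconds gives about 8.6 mas, i.e. the $\approx 10\,$mas order of magnitude of the text:

```python
import math

wavelength = 10.0e-6   # observing wavelength in m
baseline = 120.0       # longest UT-UT separation in m

theta_rad = wavelength / (2.0 * baseline)           # angular resolution, rad
mas_per_rad = (180.0 / math.pi) * 3600.0 * 1000.0   # rad -> milliarcsec
theta_mas = theta_rad * mas_per_rad                 # ~8.6 mas
```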
MIDI is designed as a classical Michelson interferometer. Being a two-element beam combining instrument, it
measures so-called visibility amplitudes.
Visibility is defined as the ratio between the correlated flux and the total flux.
Its interpretation is not straightforward, since no direct image can be reconstructed.
Therefore, a model has to be assumed,
which can then be compared to the visibility data.
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[]{./figures/fig18.eps}}
\caption[Visibilities of the continuous model]{Visibilities of our continuous standard model
at a wavelength of $12\,\muup$m
plotted against the projected baseline length.
Colour of the visibility distributions
refers to different position angles of the projected baseline w.~r.~t.~the torus axis.
Each panel shows a different inclination angle, as
indicated in the upper right corner.}
\label{fig:vis_angles_cont}
\end{figure}
MIDI works in dispersed mode, which means that visibilities for the whole wavelength range
are derived. The dust emission is probed depending on the orientation of the projected baseline.
Point-like objects result in a visibility of one, as the correlated flux equals the total flux.
The more extended the object, the lower the visibility.
With the help of a density distribution, surface brightness distributions in the mid-infrared
can be calculated by applying
a radiative transfer code. A Fourier transform of the brightness distribution
then yields the visibility information,
depending on the baseline orientation and length within the so-called {\it
U-V-plane} (or {\it Fourier-plane}).
The main goal of the following analysis is to investigate whether MIDI can distinguish
between clumpy and continuous torus models of the kind presented above.
Furthermore, we try to derive characteristic features of the respective models and
show a comparison to data obtained for the Circinus galaxy.
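The image-to-visibility step described above can be sketched with a discrete Fourier transform: the visibility is the modulus of the transform of the surface brightness distribution, normalised by the total flux. A minimal numpy illustration with synthetic images (grid size and source widths are arbitrary):

```python
import numpy as np

def visibility_map(image):
    """|FT(image)| / total flux: 1 for a point source, < 1 once resolved."""
    ft = np.fft.fftshift(np.fft.fft2(image))
    return np.abs(ft) / image.sum()

n = 128
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]

# Point source: all flux in a single pixel -> flat visibility of one.
point = np.zeros((n, n))
point[n // 2, n // 2] = 1.0

# Extended source: broad Gaussian -> visibility drops towards longer
# baselines (higher spatial frequencies).
gauss = np.exp(-(x**2 + y**2) / (2.0 * 8.0**2))

v_point = visibility_map(point)
v_gauss = visibility_map(gauss)
```

`v_point` is one everywhere (correlated flux equals total flux), while `v_gauss` equals one only at zero spatial frequency and falls off with baseline length -- exactly the behaviour used to interpret the model visibilities that follow.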
\subsection{Model visibilities}
\label{sec:model_visi}
\begin{figure}
\centering
\resizebox{\hsize}{!}{\includegraphics[]{./figures/fig19.eps}}
\caption[Visibilities of our clumpy standard model]{Visibilities of our clumpy standard model at a wavelength of $12\,\muup$m
plotted against the projected baseline length. Colour of the visibility distributions
refers to different position angles of the baseline w.~r.~t.~the torus axis.
Each panel shows a different inclination angle, as
indicated in the upper right corner.}
\label{fig:vis_angles_clumpy}
\end{figure}
In Fig.~\ref{fig:vis_angles_cont},
calculated visibilities for four inclinations of our continuous standard model at
a wavelength of $\lambda$\,$=$\,$12\,\muup$m are
shown. Various orientations of the projected baseline are colour coded (the given
position angle is counted anti-clockwise from the projected torus axis).
Due to the axisymmetric setup, all lines
coincide for the face-on case. For all other inclination angles, visibilities
decrease until a position angle of $90\degr$ is reached and increase
symmetrically again. This means that the torus appears elongated
perpendicular to the torus axis at this wavelength.
Fig.~\ref{fig:vis_angles_clumpy} shows the same study, but for the
corresponding clumpy model. The basic behaviour is the same, but the visibilities
show fine structure
and the scatter is much greater, especially visible in the
comparison of the $i=0\degr$ cases.
Furthermore, while all of the curves of the continuous model monotonically
decrease with baseline length, we see rising and falling values with increasing baseline
length for the same position angle in the clumpy case. In addition, for the continuous models,
curves do not intersect, in contrast to our clumpy models.
However, to detect such fine structure in observed MIDI data, a very high
accuracy of the visibility measurements of the order of $\sigma_v \approx 0.02$
and a very dense sampling are required.
\begin{figure*}
\centering
\resizebox{0.75\hsize}{!}{\includegraphics[]{./figures/fig20.eps}}
\caption[Comparison of visibilities of our standard clumpy and continuous
model at different wavelengths]{Visibilities of our clumpy (first two panels) and continuous (last two
panels) standard
model at different wavelengths (colour coded),
plotted against the projected baseline length for the two position angles $0\degr$ (in
torus axis direction) and $90\degr$ (along the midplane) for an
edge-on view onto the torus ($i=90\degr$).}
\label{fig:vis_lam_clump}
\end{figure*}
In Fig.~\ref{fig:vis_lam_clump}, the wavelength dependence of the visibilities
is shown.
The first two panels represent the
case of the clumpy standard model and the third and fourth the continuous model.
Each of the two panels of the respective model visualises a different position angle (counted
anti-clockwise from the projected torus axis).
An inclination angle of $90\degr$ is used in all panels.
Three different wavelengths are colour coded: $8.2\,\muup$m at the beginning of
the MIDI-range (black dotted line), $9.8\,\muup$m within the silicate feature (blue) and $12.6\,\muup$m at
the end of the MIDI wavelength range (yellow), outside the silicate feature.
While the continuous model results in smooth curves (see also Fig.~\ref{fig:vis_angles_cont}),
much fine structure is visible for the case of the clumpy model.
The differences between the displayed wavelengths relative to the longest
wavelength are smaller for the clumpy models than in the continuous case.
\begin{figure}[t!]
\centering
\resizebox{\hsize}{!}{\includegraphics[]{./figures/fig21.eps}}
\caption[Visibilities of our clumpy standard model plotted against position angle]{Visibilities of our clumpy standard
model at different inclination angles (as annotated in the upper right corner)
plotted against the position angle for various projected baseline lengths (colour coded)
and a wavelength of $12\,\muup$m.}
\label{fig:vis_inc_posang}
\end{figure}
Fig.~\ref{fig:vis_inc_posang} shows visibilities for our clumpy standard
model at $12\,\muup$m, plotted against the position angle (counter-clockwise from the
projected torus axis). Baselines are colour coded between $20\,$m and $100\,$m in steps of
$20\,$m. A longer baseline means that structures are better resolved, leading
to decreasing visibilities. For the case of inclination angles close to
edge-on, the visibility distribution changes from more or less flat to a
characteristic oscillating distribution at longer baselines (from $60\,$m
onwards) with minima around $100\degr$ and $300\degr$. This means that our
torus model appears more elongated within the equatorial plane and has the
smallest width along the projected torus axis at this wavelength. But this only applies to the innermost
part; the torus as a whole looks approximately spherically symmetric.
At small inclination angles no such preferred elongation
is visible.
\subsection{Comparison with MIDI-data for the Circinus galaxy}
\label{sec:MIDI_comp}
\begin{table}[b!]
\caption[Parameters for the Circinus model]{Circinus model parameters: For an explanation of the parameters
see Sect.\,\ref{sec:Model} and Table\,\ref{tab:model_param}. $M_{\mathrm{BH}}$ is the mass of the central
black hole \citep[from][]{Greenhill_03} and $L_{\mathrm{disk}}/L_{\mathrm{edd}}$
is the Eddington luminosity ratio resulting for the assumed luminosity of the central source.
}
\centering
\begin{tabular}{lcclc}
\hline
\hline
Parameter & Value & \hspace{0.5cm} & Parameter & Value \\
\hline
{\bf $R_{\mathrm{in}}$} & 0.6 pc & & {\bf $N_{\mathrm{clump}}$ } & 500 \\
$R_{\mathrm{out}}$ & 30 pc & & {\bf $\beta$} & 1.0 \\
{\bf $\theta_{\mathrm{open}}$} & $65\degr$ & & {\bf $a_0$} & 0.2 pc\\
{\bf $\left<\tau_{9.7\,\muup\mathrm{m}}^{\mathrm{equ}}\right>_{\phi}$} & 3.9
& & {\bf $\tau_{9.7\,\muup\mathrm{m}}^{\mathrm{clump}}$ } & 0.96 \\
{\bf $M_{\mathrm{BH}}$} & $1.7 \cdot 10^6\,M_{\sun}$ & & {\bf $L_{\mathrm{disk}}/L_{\mathrm{edd}}$} & 30\%\\
{\bf $\alpha$} & -0.5 & & & \\
\hline
\end{tabular}
\label{tab:Cir_param}
\end{table}
Unfortunately, a fitting procedure involving a large parameter study is not possible
with our current model, due to the very long
computational times of the order of 30 to 40 hours per inclination angle (including calculation
of the temperature distribution, the SED and surface brightness distribution).
Therefore, we applied the following procedure:
From our experience with modelling the SED of the Circinus galaxy with our previously used continuous
{\it Turbulent Torus Models} (see \citealp{Schartmann_05}),
we adopt the size of the object used there. Furthermore, we
tried to stay as close to our clumpy standard model as possible (for the parameters of the
clumpy standard model, compare to Table\,\ref{tab:model_param}) and copied the parameters $\alpha$, $\beta$
and $a_0$. The rest of the parameters were changed, in order to obtain the best possible adaptation
to the data, within the investigated parameter range.
The comparison of our current clumpy Circinus model as described above
and in Table\,\ref{tab:Cir_param} (yellow stars) to
interferometric observations with MIDI \citep{Tristram_07} of the
Circinus galaxy (black) is shown in
Fig.~\ref{fig:vis_circinus}.
In contrast to the presentation of continuous visibility curves above,
single measurements of combinations of various baseline lengths and position angles are
displayed in this plot. Position angle now refers to the angle on the sky measured from north
in a counter-clockwise direction. The rotation axis of our simulated torus has a position angle of
approximately $-45\degr$
according to this definition.
The black numbers denote the length of the projected baseline (given in m) of the corresponding data point.
From the approximate correspondence of the model values with the data, one can see that the
size of the emitting region at the two wavelengths is reproduced
quite well. Most of the local extrema of the curve can be reproduced for the case of $9.1\,\muup$m.
Larger deviations are visible for $\lambda = 12.0\,\muup$m. The good adaptation is partly due to
the changes in baseline length. Longer baselines naturally result in smaller visibilities, as we are probing
smaller and smaller structures (see also Fig.\,\ref{fig:vis_angles_clumpy}).
Larger visibilities at shorter or equal baselines and similar position angles
must therefore correspond to those curves in
Fig.\,\ref{fig:vis_angles_clumpy} whose visibility increases with baseline length, or to a very
inhomogeneous distribution of dust with position angle. Both can be interpreted as signs of clumpiness.
The SED of the same Circinus model is plotted
over current high resolution data in Fig.\,\ref{fig:sed_circinus}.
The NIR (near-infrared) data points were obtained with the NACO camera at the VLT and corrected for foreground
extinction by $A_{\mathrm{V}}=6\,$mag \citep{Prieto_04}.
Different symbols refer to various aperture sizes (see figure caption). The thick green line
shows the MIDI spectrum \citep{Tristram_07} and the black line is our Circinus model as
discussed above for an aperture of $0.4\arcsec$ in radius; the yellow line denotes
the same model, but calculated for the whole simulated model space.
Both modelled SEDs include the direct radiation of the central source (calculated with real Monte Carlo radiative
transfer), which in these examples dominates over dust reemission at wavelengths below about 2 to $3\,\muup$m
and shows some noise due to the low photon packet numbers used.
In contrast to our continuous Circinus model in
\citet{Schartmann_05}, enough nuclear radiation can be observed
in order to explain the turnover of the SED at small wavelengths and we do not
need to assume scattering by material (dust and electrons) within the torus
funnel. As can be seen from these figures, our model is able to qualitatively explain
the SED as well as the visibility information.
However, as we are not able to investigate the whole parameter range
of our models, we cannot exclude that a different parameter set can
describe the data equally well. This degeneracy problem was already pointed out
by \citet{Galliano_03} for the case of SED fitting. Adding new clumpiness parameters will
even strengthen this degeneracy. On the other hand, adding more data such as more visibility
information will place more constraints and will weaken this problem.
\begin{figure*}
\begin{minipage}[b]{0.47\linewidth}
\resizebox{1.0\hsize}{!}{\includegraphics[]{./figures/fig22.eps}}
\caption[Comparison of model visibilities with data of the Circinus
galaxy]{Comparison of
model visibilities (yellow stars and lines) for an azimuthal
viewing angle of
$\phi=225\degr$ with MIDI observations for two different
wavelengths. The baseline length for all data points is given above the data.
Data courtesy of \citet{Tristram_07}. \vspace{0.4cm}}
\label{fig:vis_circinus}
\end{minipage}
\hfill
\begin{minipage}[b]{0.47\linewidth}
\resizebox{1.0\hsize}{!}{\includegraphics[]{./figures/fig23.eps}}
\caption[Comparison of SEDs of our clumpy Circinus model with high
resolution data of
the Circinus galaxy]{Comparison of model SEDs with data for the Circinus galaxy.
Different symbols refer to various aperture radii:
blue stars -- $0.38\arcsec$ (NACO), red rectangle -- $0.1\arcsec$ (HST/NICMOS)
and black triangle -- $1.0\arcsec$ (ESO/TIMMI2).
Data compilation by \citealp{Prieto_04}. The thick green line shows
the total MIDI spectrum \citep{Tristram_07}.
Our model (see model parameters in Table\,\ref{tab:Cir_param}) is calculated for an aperture radius
of $0.4\arcsec$ (black line) and the total model space (yellow line).
}
\label{fig:sed_circinus}
\end{minipage}
\end{figure*}
\section{Conclusions}
\label{sec:conclusions}
In this paper, we implemented a new clumpy torus model in three
dimensions. For computational reasons, a wedge-shaped disk
is used. In the discussion of our results, we place special emphasis on the
comparison with continuous models and their
differentiation using interferometric observations, such as with MIDI.
In \citet{Schartmann_05}, we had found that the SEDs of AGN
tori in the mid-infrared
wavelength range are mainly determined by the innermost part of the torus.
With the presented clumpy torus models, this claim can be
further strengthened.
According to the new simulations, the silicate feature strength is mainly
determined by the number density and distribution, as well as the
optical depth and size of the clumps in the inner region. With a
sufficiently high optical depth
of the clouds in the inner part, shadowing effects become important,
which hide the illuminated cloud surfaces from direct view
and, thereby, reduce the silicate feature in emission. At the same time,
enough lines of sight with low optical depth remain so that only weak
absorption features result for the edge-on case.
Continuous models with special and unrealistic morphologies (like the
wedge-shaped tori used here) are also
able to weaken the silicate emission feature for the face-on view when applying an
anisotropic radiation characteristic, but fail to simultaneously
account for moderate absorption features, when looking edge-on to the torus.
Due to the large clumps in our model, appreciable scatter in the SEDs
of different random realisations of the torus is expected.
A contrary effect is caused by the small optical depth of the single
clumps and also of many dust-free lines of sight towards the centre.
Direct comparisons between the calculated interferometric visibilities of clumpy models and the corresponding
continuous models show that clumpy models naturally possess more
fine structure, which can partly be resolved by MIDI.
We also showed that these kinds of models are able to qualitatively
describe the available interferometric visibility and high resolution spectroscopic data of the
Circinus galaxy at the same time. Currently, it is one of the best studied
Seyfert galaxies in terms of
mid-infrared visibility measurements \citep{Tristram_07}.
The decreasing slope of the SED at short wavelengths
can be described with our clumpy model, whereas it was at odds with the
continuous model described in \citet{Schartmann_05}.
\vspace{0.5cm}
\begin{acknowledgements}
We would like to thank the anonymous referee for comments, as well as
C.\,P.\,Dullemond for useful discussions and
S.\,F.\,H\"onig for providing some of his torus models for the
comparison with our work. S.\,W.\,was supported by the German
Research Foundation (DFG) through the Emmy Noether grant WO\,857/2.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Let $k>0$ be the wave number, let $\mathbb{R}^2_{+}:=\mathbb{R}\times (0, \infty)$ be the upper half plane, and let $W:=\mathbb{R}\times (0, h)$ be the waveguide in $\mathbb{R}^2_{+}$. We set $\Gamma_a:=\mathbb{R}\times\{ a\}$ for $a>0$. Let $n \in L^{\infty}(\mathbb{R}^2_{+})$ be real-valued, $2\pi$-periodic with respect to $x_1$ (that is, $n(x_1+2\pi,x_2)=n(x_1,x_2 )$ for all $x=(x_1,x_2) \in \mathbb{R}^2_{+}$), and equal to one for $x_2>h$. We assume that there exists a constant $n_0>0$ such that $n \geq n_0$ in $\mathbb{R}^2_{+}$. Let $q \in L^{\infty}(\mathbb{R}^2_{+})$ be real-valued with compact support in $W$, and set $Q:=\mathrm{supp}\,q$. In this paper, we consider the following scattering problem: for fixed $y \in \mathbb{R}^2_{+} \setminus \overline{W}$, determine the scattered field $u^{s} \in H^{1}_{loc}(\mathbb{R}^2_{+})$ such that
\begin{equation}
\Delta u^{s}+k^2(1+q)nu^{s}=-k^{2}qnu^{i}(\cdot, y) \ \mathrm{in} \ \mathbb{R}^2_{+}, \label{1.1}
\end{equation}
\begin{equation}
u^{s}=0 \ \mathrm{on} \ \Gamma_0. \label{1.2}
\end{equation}
Here, the incident field $u^{i}$ is given by $u^{i}(x,y)=G_n(x,y)$, where $G_n$ is the Dirichlet Green's function in the upper half plane $\mathbb{R}^2_{+}$ for $\Delta +k^2n$, that is,
\begin{equation}
G_n(x,y):=G(x,y)+\tilde{u}^{s}(x,y), \label{1.3}
\end{equation}
where $G(x,y):=\Phi_k(x,y)-\Phi_k(x,y^{*})$ is the Dirichlet Green's function in $\mathbb{R}^2_+$ for $\Delta +k^2$, and $y^{*}=(y_1, -y_2)$ is the reflected point of $y$ at $\mathbb{R}\times \{0\}$. Here, $\Phi_k(x,y)$ is the fundamental solution to Helmholtz equation in $\mathbb{R}^2$, that is,
\begin{equation}
\Phi_k(x,y):= \displaystyle \frac{i}{4}H^{(1)}_0(k|x-y|), \ x \neq y. \label{1.4}
\end{equation}
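As a purely illustrative numerical sketch (not part of the analysis; the wave number, source point, and function names below are arbitrary choices), the fundamental solution (\ref{1.4}) and the image construction $G(x,y)=\Phi_k(x,y)-\Phi_k(x,y^{*})$ can be checked with SciPy's Hankel function: $G$ vanishes on $\Gamma_0$ and satisfies the Helmholtz equation away from $y$.

```python
import numpy as np
from scipy.special import hankel1

def Phi(k, x, y):
    """Fundamental solution (i/4) H_0^(1)(k|x-y|) of the 2-D Helmholtz equation."""
    r = np.hypot(x[0] - y[0], x[1] - y[1])
    return 0.25j * hankel1(0, k * r)

def G(k, x, y):
    """Half-plane Dirichlet Green's function via the image point y* = (y1, -y2)."""
    return Phi(k, x, y) - Phi(k, x, (y[0], -y[1]))

k = 2.0
y = (0.3, 1.5)
# On the boundary x2 = 0 the distances |x-y| and |x-y*| coincide, so G vanishes there.
for x1 in (-2.0, 0.0, 1.7):
    print(abs(G(k, (x1, 0.0), y)))   # ~ 0 up to rounding
```

A finite-difference check of $\Delta G + k^2 G = 0$ away from the source confirms the construction.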
Here, $\tilde{u}^{s}$ is the scattered field of the unperturbed problem for the incident field $G(\cdot,y)$; that is, $\tilde{u}^{s}$ vanishes for $x_2=0$ and solves
\begin{equation}
\Delta \tilde{u}^{s}+k^2n\tilde{u}^{s}=k^{2}(1-n)G(\cdot, y) \ \mathrm{in} \ \mathbb{R}^2_{+}. \label{1.5}
\end{equation}
If we impose a suitable radiation condition introduced by Kirsch and Lechleiter \cite{Kirsch and Lechleiter2}, the unperturbed solution $\tilde{u}^{s}$ is uniquely determined. Later, we will explain the exact definition of this radiation condition (see Definition 2.4).
\par
In order to show the well-posedness of the perturbed scattering problem (\ref{1.1})--(\ref{1.2}), we make the following assumption.
\begin{ass}
We assume that $k^2$ is not an eigenvalue of $-\frac{1}{(1+q)n}\Delta$ in $H^{1}_{0}(\mathbb{R}^{2}_{+})$, that is, every $v \in H^{1}(\mathbb{R}^{2}_{+})$ which satisfies
\begin{equation}
\Delta v+k^2(1+q)nv=0 \ \mathrm{in} \ \mathbb{R}^2_{+}, \label{1.6}
\end{equation}
\begin{equation}
v=0 \ \mathrm{on} \ \Gamma_0, \label{1.7}
\end{equation}
has to vanish for $x_2>0$.
\end{ass}
If we assume in addition that $q$ and $n$ satisfy $\partial_2 \bigl((1+q)n\bigr) \geq 0$ in $W$, then every $v$ satisfying (\ref{1.6})--(\ref{1.7}) vanishes; that is, under this assumption Assumption 1.1 holds for every $k>0$. We will prove this in Section 6. Our aim in this paper is to show the following theorem.
\begin{thm}
Let Assumptions 1.1 and 2.1 hold and let $k>0$ be regular in the sense of Definition 2.3 and let $f \in L^{2}(\mathbb{R}^2_{+})$ such that $\mathrm{supp}f=Q$. Then, there exists a unique solution $u \in H^{1}_{loc}(\mathbb{R}^2_{+})$ such that
\begin{equation}
\Delta u+k^2(1+q)nu=f \ \mathrm{in} \ \mathbb{R}^2_{+}, \label{1.8}
\end{equation}
\begin{equation}
u=0 \ \mathrm{on} \ \Gamma_0, \label{1.9}
\end{equation}
and $u$ satisfies the radiation condition in the sense of Definition 2.4.
\end{thm}
Roughly speaking, the radiation condition of Definition 2.4 requires a decomposition of the solution $u$ into a part $u^{(1)}$ which decays in the direction of $x_1$, and a finite combination $u^{(2)}$ of {\it propagative modes} which do not decay in $x_1$ but decay exponentially in the direction of $x_2$.
\par
This paper is organized as follows. In Section 2, we briefly recall the radiation condition introduced in \cite{Kirsch and Lechleiter2}, and show that the solution of (\ref{2.1})--(\ref{2.2}) has the integral representation (\ref{2.18}). Under the radiation condition in the sense of Definition 2.4, we show the uniqueness of $u^{(2)}$ and $u^{(1)}$ in Sections 3 and 4, respectively. In Section 5, we show the existence of $u$. In Section 6, we give an example of $n$ and $q$ satisfying Assumption 1.1.
\section{A radiation condition}
In Section 2, we briefly recall a radiation condition introduced in \cite{Kirsch and Lechleiter2}. Let $f \in L^{2}(\mathbb{R}^2_{+})$ have the compact support in $W$. First, we consider the following problem: Find $u \in H^{1}_{loc}(\mathbb{R}^2_{+})$ such that
\begin{equation}
\Delta u+k^2nu=f \ \mathrm{in} \ \mathbb{R}^2_{+}, \label{2.1}
\end{equation}
\begin{equation}
u=0 \ \mathrm{on} \ \Gamma_0. \label{2.2}
\end{equation}
(\ref{2.1}) is understood in the variational sense, that is,
\begin{equation}
\int_{\mathbb{R}^2_{+}} \bigl[ \nabla u \cdot \nabla \overline{\varphi}-k^2nu\overline{\varphi} \bigr]dx=-\int_W f \overline{\varphi}dx, \label{2.3}
\end{equation}
for all $\varphi \in H^{1}(\mathbb{R}^2_{+})$ with compact support. For such a problem, it is natural to impose the {\it upward propagating radiation condition}, that is, $u(\cdot, h) \in L^{\infty}(\mathbb{R})$ and
\begin{equation}
u(x)=2\int_{\Gamma_h}u(y)\frac{\partial\Phi_k(x,y)}{\partial y_2} ds(y),\ x_2>h. \label{2.4}
\end{equation}
However, even with this condition we cannot expect uniqueness for this problem (see Example 2.3 of \cite{Kirsch and Lechleiter2}). In order to introduce a {\it suitable radiation condition}, Kirsch and Lechleiter discussed the limiting absorption solution of this problem, that is, the limit of the solution $u_{\epsilon}$ of $\Delta u_{\epsilon}+(k+i\epsilon)^2nu_{\epsilon}=f$ as $\epsilon \to 0$. For the details, we refer to \cite{Kirsch and Lechleiter1, Kirsch and Lechleiter2}.
\par
Let us prepare for the exact definition of the radiation condition. First we recall that the {\it Floquet Bloch transform} $T_{per} : L^{2}(\mathbb{R}) \to L^{2}\bigl( (0, 2\pi) \times (-1/2, 1/2) \bigr)$ is defined by
\begin{equation}
T_{per}f(t, \alpha) = \tilde{f}_{\alpha}(t) := \sum_{m \in \mathbb{Z}}f(t+ 2\pi m)e^{-i\alpha(t+ 2\pi m)}, \label{2.5}
\end{equation}
for $(t, \alpha) \in (0, 2\pi) \times (-1/2, 1/2)$. The inverse transform is given by
\begin{equation}
T^{-1}_{per}g(t) = \int^{1/2}_{-1/2}g(t, \alpha)e^{i\alpha t}d\alpha,\ t \in \mathbb{R}. \label{2.6}
\end{equation}
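The transform pair (\ref{2.5})--(\ref{2.6}) can be verified numerically for a rapidly decaying function. The following sketch is illustrative only: it truncates the sum over $m$ at $|m|\leq 60$ and discretizes the $\alpha$-integral by the midpoint rule; all function names and parameters are ad hoc.

```python
import numpy as np

def T_per(f, t, alphas, M=60):
    """Floquet-Bloch transform: (T_per f)(t, alpha) = sum_m f(t+2*pi*m) e^{-i alpha (t+2*pi*m)}."""
    ms = np.arange(-M, M + 1)
    shifts = t + 2.0 * np.pi * ms
    vals = f(shifts)
    return np.array([np.sum(vals * np.exp(-1j * a * shifts)) for a in alphas])

def T_inv(g_vals, t, alphas):
    """Inverse transform: integral of g(t, alpha) e^{i alpha t} over alpha in (-1/2, 1/2)."""
    # midpoint rule on a uniform grid; the interval has length one
    return np.mean(g_vals * np.exp(1j * alphas * t))

N = 2000
alphas = -0.5 + (np.arange(N) + 0.5) / N   # midpoints of (-1/2, 1/2)
f = lambda s: np.exp(-s**2)                # rapidly decaying test function

for t in (0.0, 1.3, 4.0):
    recon = T_inv(T_per(f, t, alphas), t, alphas)
    print(t, abs(recon - f(t)))            # reconstruction error is tiny
```

The exactness of the round trip rests on $\int_{-1/2}^{1/2}e^{-2\pi i m\alpha}\,d\alpha=\delta_{m,0}$, which the midpoint rule reproduces to machine precision on a uniform grid.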
By taking the Floquet Bloch transform with respect to $x_1$ in (\ref{2.1})--(\ref{2.2}), we have for $\alpha \in [-1/2, 1/2]$
\begin{equation}
\Delta \tilde{u}_{\alpha}+2i\alpha \frac{\partial \tilde{u}_{\alpha}}{\partial x_1} + (k^2n-\alpha^2)\tilde{u}_\alpha=\tilde{f}_{\alpha} \ \mathrm{in} \ (0,2\pi) \times (0, \infty). \label{2.7}
\end{equation}
\begin{equation}
\tilde{u}_\alpha=0 \ \mathrm{on} \ (0,2\pi)\times \{0 \}. \label{2.8}
\end{equation}
By taking the Floquet Bloch transform with respect to $x_1$ in (\ref{2.4}), $\tilde{u}_\alpha$ satisfies the {\it Rayleigh expansion} of the form
\begin{equation}
\tilde{u}_{\alpha}(x)=\sum_{n \in \mathbb{Z}}u_{n}(\alpha)e^{inx_1+i\sqrt{k^2-(n+\alpha)^2}(x_2-h)}, \ x_2>h, \label{2.9}
\end{equation}
where $u_{n}(\alpha):=(2\pi)^{-1}\int_{0}^{2\pi}\tilde{u}_{\alpha}(x_1,h)e^{-inx_1}dx_1$ are the Fourier coefficients of $\tilde{u}_{\alpha}(\cdot,h)$, and $\sqrt{k^2-(n+\alpha)^2}=i\sqrt{(n+\alpha)^2-k^2}$ if $|n+\alpha|>k$.
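The branch convention separates finitely many propagating modes ($|n+\alpha|\leq k$, real exponent in $x_2$) from evanescent ones. A small numerical sanity check (purely illustrative; the values $k=1.3$, $\alpha=0.2$, $h=1$ and the helper names are arbitrary):

```python
import numpy as np

k, alpha, h = 1.3, 0.2, 1.0            # illustrative values, not from the paper

def beta(n):
    """sqrt(k^2-(n+alpha)^2), with the branch i*sqrt((n+alpha)^2-k^2) when |n+alpha| > k."""
    s = k**2 - (n + alpha)**2
    return np.sqrt(s) if s >= 0 else 1j * np.sqrt(-s)

def mode(n, x1, x2):
    """One quasi-periodic Rayleigh mode e^{i(n+alpha) x1 + i beta_n (x2-h)}."""
    return np.exp(1j * (n + alpha) * x1 + 1j * beta(n) * (x2 - h))

# dispersion relation (n+alpha)^2 + beta_n^2 = k^2 holds on both branches,
# so each mode solves the Helmholtz equation exactly
for n in range(-5, 6):
    assert abs((n + alpha)**2 + beta(n)**2 - k**2) < 1e-12
# only finitely many modes propagate
propagating = [n for n in range(-50, 51) if abs(n + alpha) <= k]
print(propagating)                      # here: [-1, 0, 1]
# modes with |n+alpha| > k are evanescent: they decay exponentially for x2 > h
print(abs(mode(2, 0.0, h + 5.0)))
```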
\par
For $R \in (0,\infty]$ we set $C_{R}:=(0,2\pi) \times (0, R)$, and denote by $H^{1}_{per}(C_R)$ the subspace of $2\pi$-periodic functions in $H^{1}(C_R)$. We also set $H^{1}_{0,per}(C_{R}):=\{u \in H^{1}_{per}(C_{R}) : u = 0 \ \mathrm{on}\ (0,2\pi)\times \{0 \} \}$, equipped with the $H^{1}(C_R)$ norm. The space $H^{1}_{0,per}(C_{R})$ has the inner product of the form
\begin{equation}
\langle u, v \rangle_{*}=\int_{C_h}\nabla u \cdot \nabla \overline{v}dx + 2\pi \sum_{n \in \mathbb{Z}}\sqrt{n^2+1}u_n\overline{v_n}, \label{2.10}
\end{equation}
where $u_n=(2\pi)^{-1}\int_{0}^{2\pi}u(x_1,R)e^{-inx_1}dx_1$.
The problem (\ref{2.7})--(\ref{2.9}) is equivalent to the following operator equation (see section 3 in \cite{Kirsch and Lechleiter2}),
\begin{equation}
\tilde{u}_{\alpha}-K_{\alpha}\tilde{u}_{\alpha}=\tilde{f}_{\alpha} \ \mathrm{in} \ H^{1}_{0,per}(C_h),\label{2.11}
\end{equation}
where the operator $K_{\alpha}:H^{1}_{0,per}(C_h) \to H^{1}_{0,per}(C_h)$ is defined by
\begin{eqnarray}
\langle K_{\alpha}u, v \rangle_{*}&=&-\int_{C_h}\left[i\alpha \biggl(u \frac{\partial \overline{v}}{\partial x_1} -\overline{v}\frac{\partial u}{\partial x_1}
\biggr)+(\alpha^2-k^2n)u\overline{v}\right]dx
\nonumber\\
&+& 2\pi i \sum_{|n+\alpha|\leq k}u_n\overline{v_n} \bigl( \sqrt{k^2-(n+\alpha)^2}-i\sqrt{n^2+1} \bigr)
\nonumber\\
&+& 2\pi \sum_{|n+\alpha|> k}u_n\overline{v_n} \bigl(\sqrt{n^2+1}- \sqrt{(n+\alpha)^2-k^2} \bigr).
\label{2.12}
\end{eqnarray}
For some $\alpha \in (-1/2, 1/2]$, the uniqueness of this problem fails. We call $\alpha$ an {\it exceptional value} if the operator $I-K_{\alpha}$ fails to be injective. Because of the difficulty of treating $\alpha$ such that $|\alpha+l|=k$ for some $l \in \mathbb{Z}$ in periodic scattering problems, we set $A_k:=\{\alpha \in (-1/2, 1/2]: \exists l \in \mathbb{Z} \ s.t. \ |\alpha+l|=k \}$ and make the following assumption:
\begin{ass}
For every $\alpha \in A_k$, $I-K_{\alpha}$ has to be injective.
\end{ass}
The following properties of exceptional values were shown in \cite{Kirsch and Lechleiter2}.
\begin{lem}
Let Assumption 2.1 hold. Then, there exist only finitely many exceptional values $\alpha \in (-1/2, 1/2]$. Furthermore, if $\alpha$ is an exceptional value, then so is $-\alpha$. Therefore, the set of exceptional values can be described by $\{\alpha_j:j\in J \}$, where $J \subset \mathbb{Z}$ is finite and $\alpha_{-j}=-\alpha_j$ for $j \in J$. For each exceptional value $\alpha_j$ we define
\begin{equation}
X_j:=\left\{ \phi \in H^{1}_{loc}(\mathbb{R}^2_+):\begin{array}{cc}
\Delta \phi+2i\alpha_j\frac{\partial \phi}{\partial x_1}+(k^2n-\alpha_j^2)\phi=0 \ \mathrm{in} \ \mathbb{R}^2_+, \\
\phi=0 \ \mathrm{for} \ x_2=0, \ \ \ \phi \ \mathrm{is} \ 2\pi \mathrm{-periodic} \ \mathrm{for}\ x_1, \\
\phi \ \mathrm{satisfies \ the \ Rayleigh\ expansion}\ (\ref{2.9})
\end{array}
\right\} \nonumber
\end{equation}
Then, each $X_j$ is finite dimensional. We set $m_j=\mathrm{dim}X_j$. Furthermore, every $\phi \in X_j$ is evanescent, that is, there exist $c>0$ and $\delta>0$ such that $|\phi(x)|, \ |\nabla \phi(x)|\leq ce^{-\delta |x_2|}$ for all $x\in \mathbb{R}^2_+$.
\end{lem}
Next, we consider the following eigenvalue problem in $X_j$: Determine $d \in \mathbb{R}$ and $\phi \in X_j$ such that
\begin{equation}
\int_{C_{\infty}}\left[-i\frac{\partial \phi}{\partial x_1}+\alpha_j \phi \right] \overline{\psi} dx= dk\int_{C_{\infty}}n\phi \overline{\psi}dx,\label{2.13}
\end{equation}
for all $\psi \in X_j$. We denote the eigenvalues and eigenfunctions of this problem by $d_{l,j}$ and $\phi_{l,j}$, that is,
\begin{equation}
\int_{C_{\infty}}\left[-i\frac{\partial \phi_{l,j}}{\partial x_1}+\alpha_j \phi_{l,j} \right] \overline{\psi} dx= d_{l,j}k\int_{C_{\infty}}n\phi_{l,j} \overline{\psi}dx,\label{2.14}
\end{equation}
for every $l=1,...,m_j$ and $j \in J$. We normalize the eigenfunctions $\{\phi_{l,j}: l=1,...,m_j \}$ so that
\begin{equation}
k\int_{C_{\infty}}n\phi_{l,j} \overline{\phi_{l',j}}dx=\delta_{l,l'},\label{2.15}
\end{equation}
for all $l, l'$. We will assume that the wave number $k>0$ is {\it regular} in the following sense.
\begin{dfn}
$k>0$ is {\it regular} if $d_{l,j}\neq 0$ for all $l=1,...m_j$ and $j \in J$.
\end{dfn}
Now we are ready to define the radiation condition.
\begin{dfn}
Let Assumption 2.1 hold, and let $k>0$ be regular in the sense of Definition 2.3. We set
\begin{equation}
\psi^{\pm}(x_1):=\frac{1}{2} \left[ 1\pm \frac{2}{\pi}\int_{0}^{x_1/2}\frac{\sin t}{t}dt \right] , \ x_1 \in \mathbb{R}.\label{2.16}
\end{equation}
Then, $u \in H^{1}_{loc}(\mathbb{R}^2_{+})$ satisfies the {\it radiation condition} if $u$ satisfies the upward propagating radiation condition (\ref{2.4}) and has a decomposition of the form $u=u^{(1)}+u^{(2)}$, where $u^{(1)} \bigl|_{\mathbb{R} \times (0,R)} \in H^{1}(\mathbb{R} \times (0,R))$ for all $R>0$, and $u^{(2)}\in L^{\infty}(\mathbb{R}^{2}_{+})$ has the following form
\begin{equation}
u^{(2)}(x)=\psi^{+}(x_1)\sum_{j \in J} \sum_{d_{l,j}>0}a_{l,j}\phi_{l,j}(x)+\psi^{-}(x_1)\sum_{j \in J} \sum_{d_{l,j}<0}a_{l,j}\phi_{l,j}(x) \label{2.17}
\end{equation}
where $a_{l,j} \in \mathbb{C}$ are some constants, and $\{d_{l,j},\phi_{l,j}: l=1,...,m_j \}$ are the normalized eigenvalues and eigenfunctions of the problem (\ref{2.13}).
\end{dfn}
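Up to normalization, the cutoffs (\ref{2.16}) are shifted sine integrals: $\psi^{\pm}(x_1)=\frac{1}{2}[1\pm\frac{2}{\pi}\mathrm{Si}(x_1/2)]$, so $\psi^{+}+\psi^{-}=1$, $\psi^{+}\to 1$ as $x_1\to\infty$ and $\psi^{+}\to 0$ as $x_1\to-\infty$. A purely illustrative check using SciPy's sine integral (the function names are ad hoc):

```python
import numpy as np
from scipy.special import sici

def psi_plus(x1):
    """psi^+(x1) = (1/2)[1 + (2/pi) Si(x1/2)]; Si is odd, so evaluate at |x1| and flip sign."""
    si = np.sign(x1) * sici(abs(x1) / 2.0)[0]
    return 0.5 * (1.0 + (2.0 / np.pi) * si)

def psi_minus(x1):
    """psi^-(x1) = 1 - psi^+(x1)."""
    return 1.0 - psi_plus(x1)

# psi^+ interpolates smoothly between 0 at -infinity and 1 at +infinity,
# so it switches on the right-going part of u^(2) and switches off the left-going part.
for x1 in (-200.0, -10.0, 0.0, 10.0, 200.0):
    print(x1, psi_plus(x1), psi_minus(x1))
```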
\begin{rem}
It is obvious that we can replace $\psi^{\pm}$ by any smooth functions $\tilde{\psi}^{\pm}$ with $\tilde{\psi}^{+}(x_1)=1+\mathcal{O}(1/x_1)$ as $x_1\to \infty$, $\tilde{\psi}^{+}(x_1)=\mathcal{O}(1/x_1)$ as $x_1\to -\infty$, and $\frac{d}{dx_1}\tilde{\psi}^{+}(x_1)\to 0$ as $|x_1|\to \infty$ (and analogously for $\tilde{\psi}^{-}$).
\end{rem}
The following was shown in Theorems 2.2, 6.6, and 6.8 of \cite{Kirsch and Lechleiter2}.
\begin{thm}
Let Assumption 2.1 hold and let $k>0$ be regular in the sense of Definition 2.3. For every $f \in L^{2}(\mathbb{R}^2_{+})$ with compact support in $W$, there exists a unique solution $u_{k+i \epsilon} \in H^{1}(\mathbb{R}^{2}_{+})$ of the problem (\ref{2.1})--(\ref{2.2}) with $k$ replaced by $k+i\epsilon$. Furthermore, $u_{k+i \epsilon}$ converges as $\epsilon \to +0$ in $H^{1}_{loc}(\mathbb{R}^{2}_{+})$ to some $u \in H^{1}_{loc}(\mathbb{R}^{2}_{+})$ which satisfies (\ref{2.1})--(\ref{2.2}) and the radiation condition in the sense of Definition 2.4. Moreover, the solution $u$ of this problem is uniquely determined.
\end{thm}
We have recalled the radiation condition and its properties. Finally in this section, we will show the following integral representation.
\begin{lem}
Let $f \in L^2(\mathbb{R}^2_+)$ have compact support in $W$, and let $u$ be a solution of (\ref{2.1})--(\ref{2.2}) satisfying the radiation condition in the sense of Definition 2.4. Then, $u$ has the integral representation
\begin{equation}
u(x)=k^2\int_{W} (n(y)-1) u(y)G(x,y)dy-\int_{W} f(y)G(x,y)dy, \ \ x \in \mathbb{R}^2_+ \label{2.18}
\end{equation}
\end{lem}
\begin{proof}[Proof of Lemma 2.7]
Let $\epsilon >0$ be small enough and let $u_{\epsilon} \in H^{1}(\mathbb{R}^{2}_{+})$ be the solution of the problem (\ref{2.1})--(\ref{2.2}) with $k$ replaced by $k+i\epsilon$, that is, $u_{\epsilon}$ satisfies
\begin{equation}
\Delta u_{\epsilon}+(k+i\epsilon)^2nu_{\epsilon}=f \ \mathrm{in} \ \mathbb{R}^2_{+}, \label{2.19}
\end{equation}
\begin{equation}
u_{\epsilon}=0 \ \mathrm{on} \ \Gamma_0. \label{2.20}
\end{equation}
Let $G_{\epsilon}(x,y)$ be the Dirichlet Green's function in the upper half plane $\mathbb{R}^2_{+}$ for $\Delta +(k+i\epsilon)^2$, and let $x \in \mathbb{R}^{2}_+$ be fixed. Let $r>0$ be large enough such that $x \in B_r(0)$, where $B_r(0) \subset \mathbb{R}^{2}$ is the open ball with center $0$ and radius $r$. By Green's representation theorem in $B_r(0)\cap \mathbb{R}^{2}_+$ we have
\begin{eqnarray}
u_{\epsilon}(x)&=&\int_{\partial B_r(0) \cap \mathbb{R}^{2}_+}\bigl[\frac{\partial u_{\epsilon}}{\partial \nu}(y)G_{\epsilon}(x,y)-u_{\epsilon}(y)\frac{\partial G_{\epsilon}}{\partial \nu}(x,y)\bigr]ds(y)
\nonumber\\
&-&\int_{B_r(0)\cap \mathbb{R}^{2}_+} \bigl[\Delta u_{\epsilon}(y)+(k+i\epsilon)^2u_{\epsilon}(y)\bigr]G_{\epsilon}(x,y)dy
\nonumber\\
&=&\int_{\partial B_r(0) \cap \mathbb{R}^{2}_+}\bigl[\frac{\partial u_{\epsilon}}{\partial \nu}(y)G_{\epsilon}(x,y)-u_{\epsilon}(y)\frac{\partial G_{\epsilon}}{\partial \nu}(x,y)\bigr]ds(y)
\nonumber\\
&+&(k+i\epsilon)^2\int_{B_r(0)\cap \mathbb{R}^{2}_+} (n(y)-1) u_{\epsilon}(y)G_{\epsilon}(x,y)dy
\nonumber\\
&-&
\int_{B_r(0)\cap \mathbb{R}^{2}_+} f(y)G_{\epsilon}(x,y)dy.
\label{2.21}
\end{eqnarray}
Since $u_{\epsilon} \in H^{1}(\mathbb{R}^{2}_{+})$, the first term on the right hand side converges to zero as $r \to \infty$. Therefore, letting $r \to \infty$ we have, for $x \in \mathbb{R}^{2}_+$,
\begin{equation}
u_{\epsilon}(x)=(k+i\epsilon)^2\int_{W} (n(y)-1) u_{\epsilon}(y)G_{\epsilon}(x,y)dy-\int_{W} f(y)G_{\epsilon}(x,y)dy.\label{2.22}
\end{equation}
We will show that (\ref{2.22}) converges as $\epsilon \to 0$ to
\begin{equation}
u(x)=k^2\int_{W} (n(y)-1) u(y)G(x,y)dy-\int_{W} f(y)G(x,y)dy.\label{2.23}
\end{equation}
Indeed, by the arguments in (3.8) and (3.9) of \cite{Chandler and Christopher}, $G_{\epsilon}(x,y)$ satisfies the estimate
\begin{equation}
|G_{\epsilon}(x,y)| \leq C \frac{x_2 y_2}{1+|x-y|^{3/2}}, \ |x-y|>1, \label{2.24}
\end{equation}
where the constant $C$ is independent of $\epsilon>0$. Then, by the Lebesgue dominated convergence theorem, the second integral in (\ref{2.22}) converges as $\epsilon \to 0$ to the corresponding integral in (\ref{2.23}). It remains to consider the convergence of the first integral in (\ref{2.22}).
\par
By the beginning of the proof of Theorem 6.6 in \cite{Kirsch and Lechleiter2}, $u_{\epsilon}$ can be written in the form $u_{\epsilon}=u^{(1)}_{\epsilon}+u^{(2)}_{\epsilon}$, where $u^{(1)}_{\epsilon}$ converges to $u^{(1)}$ in $H^{1}(W)$, and $u^{(2)}_{\epsilon}$ is of the form, for $x \in W$,
\begin{equation}
u^{(2)}_{\epsilon}(x)=\sum_{j \in J} \sum_{l=1}^{m_j}y_{l,j}\int ^{1/2}_{-1/2}\frac{e^{i\alpha x_1}}{i\epsilon-d_{l,j}\alpha}d\alpha \ \phi_{l,j}(x), \label{2.25}
\end{equation}
which converges pointwise to $u^{(2)}(x)$. Here, the $y_{l,j} \in \mathbb{C}$ are constants. From the convergence of $u^{(1)}_{\epsilon}$ in $H^{1}(W)$, we obtain that $\int_{W} (n(y)-1) u^{(1)}_{\epsilon}(y)G_{\epsilon}(x,y)dy$ converges to $\int_{W} (n(y)-1) u^{(1)}(y)G(x,y)dy$ as $\epsilon \to 0$.
\par
By the argument of (b) in Lemma 6.1 of \cite{Kirsch and Lechleiter2} we have
\begin{eqnarray}
\lefteqn{\psi_{l,j,\epsilon}(x_1):=\int ^{1/2}_{-1/2}\frac{e^{i\alpha x_1}}{i\epsilon-d_{l,j}\alpha}d\alpha}
\nonumber\\
&=&
-\frac{i}{|d_{l,j}|}\int ^{|d_{l,j}|/(2\epsilon)}_{-|d_{l,j}|/(2\epsilon)}\frac{\cos(t \epsilon x_1/|d_{l,j}|)}{1+t^{2}}dt-2id_{l,j}\int ^{x_1/2}_{0}\frac{t\sin t}{x^2_1\epsilon^2+d_{l,j}^2t^{2}}dt, \ \ \ \ \ \ \ \ \ \ \ \label{2.26}
\end{eqnarray}
which implies that for all $x_1 \in \mathbb{R}$
\begin{eqnarray}
\lefteqn{\bigl|\psi_{l,j,\epsilon}(x_1)\bigr|\leq C\biggl(\int^{\infty}_{-\infty}\frac{dt}{1+t^2}+\int^{|x_1|/2}_{0}\biggl|\frac{\sin t}{t}\biggr|dt\biggr)}
\nonumber\\
&\leq&
C\biggl(\int^{\infty}_{-\infty}\frac{dt}{1+t^2}+\int^{1}_{0}\biggl|\frac{\sin t}{t}\biggr|dt+\int^{|x_1|+1}_{1}\frac{1}{t}dt\biggr)
\nonumber\\
&\leq&
C\bigl(1+\log(|x_1|+1)\bigr), \label{2.27}
\end{eqnarray}
where the constant $C$ is independent of $\epsilon>0$. Then, we have for $y \in W$
\begin{equation}
\bigl|(n(y)-1) u^{(2)}_{\epsilon}(y)G_{\epsilon}(x,y)\bigr| \leq \frac{C\bigl(1+\log(|y_1|+1)\bigr)}{1+|x-y|^{3/2}},\label{2.28}
\end{equation}
where $C$ is independent of $y$ and $\epsilon$. The right hand side of (\ref{2.28}) is an integrable function of $y$ in $W$. Then, by the Lebesgue dominated convergence theorem, $\int_{W} (n(y)-1) u^{(2)}_{\epsilon}(y)G_{\epsilon}(x,y)dy$ converges to $\int_{W} (n(y)-1) u^{(2)}(y)G(x,y)dy$ as $\epsilon \to 0$. Therefore, (\ref{2.23}) has been shown.
\end{proof}
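The elementary bound behind (\ref{2.27}), namely $\int_{0}^{X}|\sin t/t|\,dt\leq 1+\log(X+1)$, can be checked numerically. This is a purely illustrative sketch (the constant $1$ here plays the role of $C$, and the helper name is ad hoc):

```python
import numpy as np
from scipy.integrate import quad

def I(X):
    """Integral of |sin t / t| over (0, X), split at multiples of pi for accuracy."""
    total, a = 0.0, 0.0
    while a < X:
        b = min(a + np.pi, X)
        # np.sinc(t/pi) = sin(t)/t, with the removable singularity at t = 0 handled
        val, _ = quad(lambda t: abs(np.sinc(t / np.pi)), a, b)
        total, a = total + val, b
    return total

for X in (1.0, 10.0, 100.0):
    print(X, I(X), 1.0 + np.log(X + 1.0))   # the integral stays below 1 + log(X+1)
```

The integral in fact grows only like $(2/\pi)\log X$, comfortably below the bound.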
\section{Uniqueness of $u^{(2)}$}
In Section 3, we will show the uniqueness of $u^{(2)}$ in Theorem 1.2.
\begin{lem}
Let Assumption 2.1 hold and let $k>0$ be regular in the sense of Definition 2.3. If $u \in H^{1}_{loc}(\mathbb{R}^2_{+})$ satisfies
\begin{equation}
\Delta u+k^2(1+q)nu=0 \ \mathrm{in} \ \mathbb{R}^2_{+}, \label{3.1}
\end{equation}
\begin{equation}
u=0 \ \mathrm{on} \ \Gamma_0, \label{3.2}
\end{equation}
and $u$ satisfies the radiation condition in the sense of Definition 2.4, then $u^{(2)}=0$ in $\mathbb{R}^2_{+}$.
\end{lem}
\begin{proof}[{\bf Proof of Lemma 3.1}]
By the definition of the radiation condition, $u$ is of the form $u=u^{(1)}+u^{(2)}$ where $u^{(1)} \bigl|_{\mathbb{R} \times (0,R)} \in H^{1}(\mathbb{R} \times (0,R))$ for all $R>0$, and $u^{(2)}\in L^{\infty}(\mathbb{R}^{2}_{+})$ has the form
\begin{equation}
u^{(2)}(x)=\psi^{+}(x_1)\sum_{j \in J} \sum_{d_{l,j}>0}a_{l,j}\phi_{l,j}(x)+\psi^{-}(x_1)\sum_{j \in J} \sum_{d_{l,j}<0}a_{l,j}\phi_{l,j}(x), \label{3.3}
\end{equation}
where $a_{l,j} \in \mathbb{C}$ are some constants, and $\{d_{l,j},\phi_{l,j}: l=1,...,m_j \}$ are the normalized eigenvalues and eigenfunctions of the problem (\ref{2.13}). Here, by Remark 2.5, the function $\psi^{+}$ is chosen as a smooth function such that $\psi^{+}(x_1)=1$ for $x_1\geq \eta$ and $\psi^{+}(x_1)=0$ for $x_1\leq -\eta$, and $\psi^{-}:=1-\psi^{+}$, where $\eta>0$ is some positive number.
\vspace{3mm}\\
{\bf Step1} (Green's theorem in $\Omega_N$): We set $\Omega_N:=(-N,N) \times (0, \phi(N))$, where $\phi(N):=N^{s}$; an appropriate $s \in (0,1)$ will be chosen later. Let $R>h$ be large and fixed, and let $N$ be large enough such that $\phi(N)>R$. We denote by $I_{\pm N}^{R}:=\{\pm N \}\times (0,R)$, $I_{\pm N}^{\phi(N)}:=\{\pm N \}\times (R,\phi(N))$, and $\Gamma_{\phi(N), N}:=(-N,N)\times \{\phi(N) \}$ (see the figure below). We set $I_{\pm N}:=I_{\pm N}^{R} \cup I_{\pm N}^{\phi(N)}$. \par \vspace{3mm}
\begin{tikzpicture}
\path[draw,-{Stealth[length=3mm]}] (-5, 0) -- (5,0) node[above right] {\large $x_1$};
\path[draw,-{Stealth[length=3mm]}] (0, -0.5) -- (0,4) node[right=2mm] {\large $x_2$} ;
\coordinate (O) at (0,-0.25) node at (O) [right] {$O$};
\draw (4,3) -- (4,0) node [below] {$N$};
\draw (-4,3) -- (-4,0) node [below] {$-N$};
\draw (-4,3) -- (0,3);
\draw (0,3) -- (4,3);
\draw (-4,1.5) -- (0,1.5);
\draw (0,1.5) -- (4,1.5);
\node (A) at (0.5,3) [below] {$\phi(N)$};
\node (B) at (0.2,1.5) [below] {$R$};
\node (C) at (-1,4) [below] {$\Gamma_{\phi(N), N}$};
\node (D) at (0,3) [above] {\scalebox{6.7}[1]{\rotatebox{270}{$\Biggl\{$}}};
\node (E) at (-4.2,0) [above] {{\large$ \Biggl\{$}};
\node (F) at (-4.2,1.5) [above] {{\large$ \Biggl\{$}};
\node (G) at (4.2,0) [above] {{\large$ \Biggr\}$}};
\node (H) at (4.2,1.5) [above] {{\large$ \Biggr\}$}};
\node (I) at (-4.7,0.3) [above] {$I_{-N}^{R}$};
\node (J) at (-4.75,1.8) [above] {$I_{-N}^{\phi(N)}$};
\node (K) at (4.6,0.3) [above] {$I_{N}^{R}$};
\node (L) at (4.8,1.85) [above] {$I_{N}^{\phi(N)}$};
\end{tikzpicture}
\vspace{5mm}\par
By Green's first theorem in $\Omega_N$ and $u=0$ on $(-N,N)\times \{ 0\}$, we have
\begin{eqnarray}
\lefteqn{ \int_{\Omega_N}\{-k^2(1+q)n|u|^{2}+|\nabla u|^{2} \}dx=\int_{\Omega_N}\{ \overline{u}\Delta u+|\nabla u|^{2} \}dx}
\nonumber\\
&=&\int_{I_{N}} \overline{u}\frac{\partial u}{\partial x_1} ds-\int_{I_{-N}} \overline{u}\frac{\partial u}{\partial x_1} ds +\int_{\Gamma_{\phi(N),N}} \overline{u}\frac{\partial u}{\partial x_2} ds
\nonumber\\
&=&\int_{I_{N}} \overline{u^{(2)}}\frac{\partial u^{(2)}}{\partial x_1} ds-\int_{I_{-N}} \overline{u^{(2)}}\frac{\partial u^{(2)}}{\partial x_1} ds\nonumber
\end{eqnarray}
\begin{eqnarray}
&+&\int_{I_{N}} \overline{u^{(1)}}\frac{\partial u^{(1)}}{\partial x_1} ds+\int_{I_{N}} \overline{u^{(1)}}\frac{\partial u^{(2)}}{\partial x_1} ds+\int_{I_{N}} \overline{u^{(2)}}\frac{\partial u^{(1)}}{\partial x_1} ds
\nonumber\\
&-&\int_{I_{-N}} \overline{u^{(1)}}\frac{\partial u^{(1)}}{\partial x_1} ds-\int_{I_{-N}} \overline{u^{(1)}}\frac{\partial u^{(2)}}{\partial x_1} ds-\int_{I_{-N}} \overline{u^{(2)}}\frac{\partial u^{(1)}}{\partial x_1} ds
\nonumber\\
&+&\int_{\Gamma_{\phi(N),N}} \overline{u}\frac{\partial u}{\partial x_2} ds.\label{3.4}
\end{eqnarray}
By the same argument as in Theorem 4.6 of \cite{Kirsch and Lechleiter1} and Lemma 6.3 of \cite{Kirsch and Lechleiter2}, we can show that
\begin{eqnarray}
\lefteqn{ \int_{I_{N}} \overline{u^{(2)}}\frac{\partial u^{(2)}}{\partial x_1} ds-\int_{I_{-N}} \overline{u^{(2)}}\frac{\partial u^{(2)}}{\partial x_1} ds }
\nonumber\\
&+&\int_{I_{N}^{R}} \overline{u^{(1)}}\frac{\partial u^{(1)}}{\partial x_1} ds+\int_{I_{N}^{R}} \overline{u^{(1)}}\frac{\partial u^{(2)}}{\partial x_1} ds+\int_{I_{N}^{R}} \overline{u^{(2)}}\frac{\partial u^{(1)}}{\partial x_1} ds
\nonumber\\
&-&\int_{I_{-N}^{R}} \overline{u^{(1)}}\frac{\partial u^{(1)}}{\partial x_1} ds-\int_{I_{-N}^{R}} \overline{u^{(1)}}\frac{\partial u^{(2)}}{\partial x_1} ds-\int_{I_{-N}^{R}} \overline{u^{(2)}}\frac{\partial u^{(1)}}{\partial x_1} ds
\nonumber\\
&=&\frac{1}{2 \pi}\sum_{j \in J} \sum_{d_{l,j},d_{l',j}>0}\overline{a_{l,j}}a_{l',j}\int_{C_{\phi(N)}}\overline{\phi_{l,j}}\frac{\partial \phi_{l',j}}{\partial x_1}dx
\nonumber\\
&-&\frac{1}{2 \pi}\sum_{j \in J} \sum_{d_{l,j},d_{l',j}<0}\overline{a_{l,j}}a_{l',j}\int_{C_{\phi(N)}}\overline{\phi_{l,j}}\frac{\partial \phi_{l',j}}{\partial x_1}dx+o(1),\label{3.5}
\end{eqnarray}
and the first and second terms on the right hand side converge as $N \to \infty$ to $\frac{ik}{2\pi}\sum_{j \in J}\sum_{d_{l,j}>0}|a_{l,j}|^{2}d_{l,j}$ and $-\frac{ik}{2\pi}\sum_{j \in J}\sum_{d_{l,j}<0}|a_{l,j}|^{2}d_{l,j}$, respectively. Therefore, taking the imaginary part in (\ref{3.4}) yields
\begin{eqnarray}
\lefteqn{0=\mathrm{Im}\Biggl[\frac{1}{2\pi} \sum_{j \in J} \sum_{d_{l,j},d_{l',j}>0}\overline{a_{l,j}}a_{l',j}\int_{C_{\phi(N)}}\overline{\phi_{l,j}}\frac{\partial \phi_{l',j}}{\partial x_1}dx \Biggr]}
\nonumber\\
&-&\mathrm{Im}\Biggl[\frac{1}{2\pi} \sum_{j \in J}\sum_{d_{l,j},d_{l',j}<0}\overline{a_{l,j}}a_{l',j}\int_{C_{\phi(N)}}\overline{\phi_{l,j}}\frac{\partial \phi_{l',j}}{\partial x_1}dx\Biggr]
\nonumber\\
&+&\mathrm{Im}\int_{I_{N}^{\phi(N)}} \overline{u^{(1)}}\frac{\partial u^{(1)}}{\partial x_1} ds+\mathrm{Im}\int_{I_{N}^{\phi(N)}} \overline{u^{(1)}}\frac{\partial u^{(2)}}{\partial x_1} ds+\mathrm{Im}\int_{I_{N}^{\phi(N)}} \overline{u^{(2)}}\frac{\partial u^{(1)}}{\partial x_1} ds
\nonumber\\
&-&\mathrm{Im}\int_{I_{-N}^{\phi(N)}} \overline{u^{(1)}}\frac{\partial u^{(1)}}{\partial x_1} ds-\mathrm{Im}\int_{I_{-N}^{\phi(N)}} \overline{u^{(1)}}\frac{\partial u^{(2)}}{\partial x_1} ds-\mathrm{Im}\int_{I_{-N}^{\phi(N)}} \overline{u^{(2)}}\frac{\partial u^{(1)}}{\partial x_1} ds
\nonumber\\
&+&\mathrm{Im}\int_{\Gamma_{\phi(N),N}} \overline{u}\frac{\partial u}{\partial x_2}ds+o(1).\label{3.6}
\end{eqnarray}
We set
\begin{equation}
J_{\pm}(N):=\pm \mathrm{Im}\int_{I_{\pm N}^{\phi(N)}} \overline{u^{(1)}}\frac{\partial u^{(1)}}{\partial x_1} ds\pm \mathrm{Im}\int_{I_{\pm N}^{\phi(N)}} \overline{u^{(1)}}\frac{\partial u^{(2)}}{\partial x_1} ds\pm \mathrm{Im}\int_{I_{\pm N}^{\phi(N)}} \overline{u^{(2)}}\frac{\partial u^{(1)}}{\partial x_1} ds,\label{3.7}
\end{equation}
and we will show that $\limsup_{N\to \infty}J_{\pm}(N)\geq 0$.
\vspace{2mm}\\
{\bf Step2 ($\limsup_{N\to \infty}J_{\pm}(N)\geq 0$):}
By the Cauchy--Schwarz inequality we have
\begin{eqnarray}
\lefteqn{|J_{+}(N)|\leq \biggl(\int^{\phi(N)}_{R} |u^{(1)}(N,x_2)|^2dx_2\biggr)^{1/2}\biggl(\int^{\phi(N)}_{R} \biggl|\frac{\partial u^{(1)}}{\partial x_1} (N,x_2)\biggr|^2dx_2\biggr)^{1/2}}
\nonumber\\
&+&\biggl(\int^{\phi(N)}_{R} |u^{(1)}(N,x_2)|^2dx_2\biggr)^{1/2}\biggl(\int^{\phi(N)}_{R} \biggl|\frac{\partial u^{(2)}}{\partial x_1} (N,x_2)\biggr|^2dx_2\biggr)^{1/2}
\nonumber\\
&+&\biggl(\int^{\phi(N)}_{R} |u^{(2)}(N,x_2)|^2dx_2\biggr)^{1/2}\biggl(\int^{\phi(N)}_{R} \biggl|\frac{\partial u^{(1)}}{\partial x_1} (N,x_2)\biggr|^2dx_2\biggr)^{1/2}
\nonumber\\
&\leq&\biggl(\int^{\phi(N)}_{R} |u^{(1)}(N,x_2)|^2dx_2\biggr)^{1/2}\biggl(\int^{\phi(N)}_{R} \biggl|\frac{\partial u^{(1)}}{\partial x_1} (N,x_2)\biggr|^2dx_2\biggr)^{1/2}
\nonumber\\
&+&C(\phi(N)-R)^{1/2}\biggl(\int^{\phi(N)}_{R} |u^{(1)}(N,x_2)|^2dx_2\biggr)^{1/2}
\nonumber\\
&+&C(\phi(N)-R)^{1/2}\biggl(\int^{\phi(N)}_{R} \biggl|\frac{\partial u^{(1)}}{\partial x_1} (N,x_2)\biggr|^2dx_2\biggr)^{1/2}.\label{3.8}
\end{eqnarray}
In order to estimate $u^{(1)}$, we will show the following lemma.
\begin{lem}
$u^{(1)}$ has an integral representation of the form
\begin{eqnarray}
u^{(1)}(x)&=&\int_{y_2>0}\sigma(y)G(x,y)dy+k^{2}\int_{W} \bigl(n(y)(1+q(y))-1\bigr) u^{(1)}(y)G(x,y)dy
\nonumber\\
&+&k^2\int_{Q} n(y)q(y)u^{(2)}(y)G(x,y)dy, \ \ x_2>0,\label{3.9}
\end{eqnarray}
where $\sigma:=\Delta u^{(2)}+k^2nu^{(2)}$.
\end{lem}
\begin{proof}[Proof of Lemma 3.2]
First, we will consider an integral representation of $u^{(2)}$. Let $N>0$ be large enough. By Green's representation theorem in $(-N,N)\times (0,N^{1/4})$, we have
\begin{eqnarray}
u^{(2)}(x)&=&\int_{(-N,N) \times \{N^{1/4} \}} \bigl[u^{(2)}(y)\frac{\partial G}{\partial y_2}(x,y)-G(x,y)\frac{\partial u^{(2)}}{\partial y_2}(y)\bigr]ds(y)
\nonumber
\end{eqnarray}
\begin{eqnarray}
&+&\biggl(\int_{\{N \} \times (0, N^{1/4})}-\int_{\{-N \} \times (0, N^{1/4})}\biggr) \bigl[u^{(2)}(y)\frac{\partial G}{\partial y_1}(x,y)-G(x,y)\frac{\partial u^{(2)}}{\partial y_1}(y)\bigr]ds(y)
\nonumber\\
&-&\int_{(-N,N) \times (0,N^{1/4})}\bigl[\sigma(y)+k^2(1-n(y))u^{(2)}(y)\bigr]G(x,y)dy.\label{3.10}
\end{eqnarray}
By Lemma 3.1 of \cite{Chandler and Christopher}, the Dirichlet Green's function $G(x,y)$ satisfies the estimate
\begin{equation}
|G(x,y)|, \ |\nabla_y G(x,y)| \leq C \frac{x_2 y_2}{1+|x-y|^{3/2}}, \ |x-y|>1. \label{3.11}
\end{equation}
By Lemma 2.2 we have that $|u^{(2)}(x)|, \ \bigl|\frac{\partial u^{(2)}(x)}{\partial x_2}\bigr| \leq ce^{-\delta |x_2|}$ for all $x \in \mathbb{R}^{2}_{+}$, and some $c,\delta >0$. Then, we obtain
\begin{eqnarray}
\lefteqn{\Biggl|\int_{(-N,N) \times \{N^{1/4} \}} \bigl[u^{(2)}(y)\frac{\partial G}{\partial y_2}(x,y)-G(x,y)\frac{\partial u^{(2)}}{\partial y_2}(y)\bigr]ds(y)\Biggr|}
\nonumber\\
&\leq&C \int_{-N}^{N} \frac{x_2 e^{-\delta N^{1/4}}}{|N^{1/4}-x_2|^{3/2}} dy_1 \leq C\frac{x_2 N e^{-\delta N^{1/4}}}{|N^{1/4}-x_2|^{3/2}}.\label{3.12}
\end{eqnarray}
Furthermore,
\begin{eqnarray}
\lefteqn{\Biggl|\int_{\{\pm N \} \times (0, N^{1/4}) } \bigl[u^{(2)}(y)\frac{\partial G}{\partial y_1}(x,y)-G(x,y)\frac{\partial u^{(2)}}{\partial y_1}(y)\bigr]ds(y)\Biggr|}
\nonumber\\
&\leq&C \int_{0}^{N^{1/4}} \frac{x_2 y_2}{|\pm N-x_1|^{3/2}} dy_2 \leq C\frac{x_2 N^{1/2}}{|\pm N-x_1|^{3/2}}.\label{3.13}
\end{eqnarray}
Therefore, letting $N \to \infty$ in (\ref{3.10}) we get
\begin{eqnarray}
u^{(2)}(x)=-\int_{y_2>0}\sigma(y)G(x,y)dy+k^2\int_{W}(n(y)-1)u^{(2)}(y)G(x,y)dy. \ \ \ \label{3.14}
\end{eqnarray}
By Lemma 2.7, we have (substitute $-k^2qnu$ for $f$ in (\ref{2.18}))
\begin{equation}
u(x)=k^2\int_{W}\bigl(n(y)-1\bigr) u(y)G(x,y)dy+k^2\int_{Q}q(y)n(y)u(y)G(x,y)dy. \label{3.15}
\end{equation}
Combining (\ref{3.14}) with (\ref{3.15}) we have
\begin{eqnarray}
u^{(1)}(x)&=&-u^{(2)}(x)+k^2\int_{W}\bigl(n(y)-1\bigr) u(y)G(x,y)dy+k^2\int_{Q}q(y)n(y)u(y)G(x,y)dy
\nonumber
\end{eqnarray}
\begin{eqnarray}
&=&\int_{y_2>0}\sigma(y) G(x,y)dy-k^2\int_{W}(n(y)-1)u^{(2)}(y)G(x,y)dy
\nonumber\\
&+&k^2\int_{W}\bigl(n(y)-1\bigr) u(y)G(x,y)dy+k^2\int_{Q}q(y)n(y)u(y)G(x,y)dy
\nonumber\\
&=&\int_{\mathbb{R}_{+}^{2}}\sigma(y) G(x,y)dy+k^{2}\int_{W} \bigl( n(y)(1+q(y))-1\bigr) u^{(1)}(y)G(x,y)dy
\nonumber\\
&+&k^2\int_{Q}n(y)q(y)u^{(2)}(y)G(x,y)dy.\label{3.16}
\end{eqnarray}
Therefore, Lemma 3.2 has been shown.
\end{proof}
\vspace{0.5cm}
We set $u^{\pm}(x):=\sum_{j \in J} \sum_{d_{l,j}\lessgtr 0}a_{l,j}\phi_{l,j}(x)$. Then, a direct calculation shows
\begin{equation}
\sigma(y)=\frac{d^{2} \psi^{+}(y_1)}{d y^{2}_1}u^{+}(y)+2\frac{d \psi^{+}(y_1)}{d y_1}\frac{\partial u^{+}(y)}{\partial y_1}+\frac{d^{2} \psi^{-}(y_1)}{d y^{2}_1}u^{-}(y)+2\frac{d \psi^{-}(y_1)}{d y_1}\frac{\partial u^{-}(y)}{\partial y_1},\label{3.17}
\end{equation}
which implies that $\mathrm{supp}\sigma \subset (-\eta, \eta)\times (0,\infty)$. By Lemma 3.2 we have for $R<x_2<\phi(N)$
\begin{eqnarray}
\lefteqn{|u^{(1)}(N,x_2)|, \ \biggl|\frac{\partial u^{(1)}}{\partial x_1} (N,x_2)\biggr|\leq C\int_{(-\eta, \eta)\times (0,\infty)} |\sigma(y)|\frac{\phi(N) y_2}{|N-\eta|^{3/2}}dy}
\nonumber\\
&+&
C\int_{W} |u^{(1)}(y)|\frac{\phi(N) h}{(1+|N-y_1|)^{3/2}}dy
+C\int_{Q} \frac{\phi(N)|u^{(2)}(y)|}{|N-y_1|^{3/2}}dy
\nonumber\\
&\leq&C\frac{\phi(N)}{N^{3/2}}+C\phi(N)\int_{W} \frac{|u^{(1)}(y)|}{(1+|N-y_1|)^{3/2}}dy.\label{3.18}
\end{eqnarray}
We have to estimate the second term on the right-hand side. The following lemma was shown in Lemma 4.12 of \cite{Chandler}.
\begin{lem}
Let $\varphi \in L^{2}_{loc}(\mathbb{R})$ be such that
\begin{equation}
\sup_{A>0}\Bigl\{ (1+A^{2})^{-\epsilon}\int^{A}_{-A}|\varphi(t)|^{2}dt\Bigr\}< \infty, \label{3.19}
\end{equation}
for some $\epsilon>0$. Then, for every $\alpha \in [0,\frac{1}{2}-\epsilon)$ there exists a constant $C>0$ and a sequence $\{A_m \}_{m\in \mathbb{N}}$ such that $A_m \to \infty$ as $m \to \infty$ and
\begin{equation}
\int_{K_{A_m}}|\varphi(t)|^{2}dt \leq C A^{-\alpha}_m, \ m \in\mathbb{N},\label{3.20}
\end{equation}
where $K_A:=K^{+}_{A}\cup K^{-}_{A}$, $K^{+}_{A}:=(-A^{+},A^{+})\setminus(-A,A)$, $K^{-}_{A}:=(-A,A)\setminus(-A^{-},A^{-})$, and $A^{\pm}:=A\pm A^{1/2}$ for $A \in [1, \infty)$.
\end{lem}
Applying Lemma 3.3 to $\varphi=\bigl(\int^{h}_{0} \bigl|u^{(1)} (\cdot,y_2)\bigr|^2dy_2\bigr)^{1/2} \in L^{2}(\mathbb{R})$, we obtain a sequence $\{N_m \}_{m\in \mathbb{N}}$ such that $N_m \to \infty$ as $m \to \infty$ and
\begin{equation}
\int_{K_{N_m}}\int_{0}^{h}|u^{(1)} (y_1,y_2)|^{2}dy_1dy_2 \leq C N^{-1/4}_m, \ m \in\mathbb{N}.\label{3.21}
\end{equation}
Then, by the Cauchy--Schwarz inequality we have
\begin{eqnarray}
\lefteqn{\int_{W} \frac{|u^{(1)}(y)|}{(1+|N_m-y_1|)^{3/2}}dy=\biggl(\int_{-N_{m}^{-}}^{N_{m}^{-}}+\int_{K_{N_{m}}}+\int_{\mathbb{R}\setminus [-N_m^+,N_m^+]}\biggr)\int_{0}^{h} \frac{|u^{(1)}(y)|}{(1+|N_m-y_1|)^{3/2}}dy}
\nonumber\\
&\leq&C\biggl(\int_{-N_{m}^{-}}^{N_{m}^{-}}\frac{dy_1}{(1+N_m-|y_1|)^{3}}\biggr)^{1/2}+C\biggl(\int_{K_{N_m}}\int_{0}^{h}|u^{(1)} (y_1,y_2)|^{2}dy_1dy_2 \biggr)^{1/2}
\nonumber\\
&+&C\biggl(\int_{\mathbb{R}\setminus [-N_m^+,N_m^+]}\frac{dy_1}{(1+|y_1|-N_m)^{3}}\biggr)^{1/2}
\nonumber\\
&\leq&C\biggl(\int_{0}^{N_{m}^{-}}\frac{dy_1}{(1+N_m-y_1)^{3}}\biggr)^{1/2}+CN^{-1/8}_m+C\biggl(\int_{N_{m}^+}^{\infty}\frac{dy_1}{(1+y_1-N_m)^{3}}\biggr)^{1/2}
\nonumber\\
&\leq&CN^{-1/8}_m.\label{3.22}
\end{eqnarray}
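For completeness, the first elementary integral in (\ref{3.22}) can be computed explicitly (the third one is analogous): since $N_m-N_{m}^{-}=N_{m}^{1/2}$,
\begin{eqnarray}
\int_{0}^{N_{m}^{-}}\frac{dy_1}{(1+N_m-y_1)^{3}}
&=&\frac{1}{2}\biggl(\frac{1}{(1+N_m-N_{m}^{-})^{2}}-\frac{1}{(1+N_m)^{2}}\biggr)
\nonumber\\
&=&\frac{1}{2}\biggl(\frac{1}{(1+N_{m}^{1/2})^{2}}-\frac{1}{(1+N_m)^{2}}\biggr)\leq \frac{C}{N_m},
\nonumber
\end{eqnarray}
so its square root is of order $N_{m}^{-1/2}\leq N_{m}^{-1/8}$, which is absorbed into the final bound in (\ref{3.22}).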
With (\ref{3.18}) we have for $m \in \mathbb{N}$,
\begin{eqnarray}
|u^{(1)}(N_m,x_2)|, \ \biggl|\frac{\partial u^{(1)}}{\partial x_1} (N_m,x_2)\biggr|
\leq C\frac{\phi(N_m)}{N_m^{1/8}}.\label{3.23}
\end{eqnarray}
Therefore, by (\ref{3.8}) we have
\begin{eqnarray}
|J_{+}(N_m)|
&\leq&C(\phi(N_m)-R)\frac{\phi(N_m)^{2}}{N_m^{1/4}}
+C(\phi(N_m)-R)\frac{\phi(N_m)}{N_m^{1/8}}
\nonumber\\
&\leq&C(\phi(N_m)-R)\frac{\phi(N_m)^{2}}{N_m^{1/8}}\leq C\frac{\phi(N_m)^{3}}{N_m^{1/8}}.\label{3.24}
\end{eqnarray}
Since $\phi(N)=N^s$, if we choose $s \in (0,1)$ such that $3s<\frac{1}{8}$, that is, $0<s<\frac{1}{24}$, the right-hand side in (\ref{3.24}) converges to zero as $m \to \infty$. Therefore, $\limsup_{N\to \infty}J_{+}(N)\geq0$. By the same argument as for $J_{+}$, we can show that $\limsup_{N\to \infty}J_{-}(N)\geq0$, which yields Step 2.
\vspace{5mm}\\
Next, we discuss the last term in (\ref{3.6}). By the same argument as in Lemma 3.2, applying Green's representation theorem in $x_2>h$ and using the Dirichlet Green's function $G_h$ of $\mathbb{R}^{2}_{x_2>h}(:=\mathbb{R}\times (h, \infty))$ instead of $G$, we see that $u^{(1)}$ also admits another integral representation for $x_2>h$:
\begin{eqnarray}
u^{(1)}(x)&=&\int_{y_2>h}\sigma(y)G_h(x,y)dy+2\int_{\Gamma_h} u^{(1)}(y)\frac{\partial \Phi_k(x,y)}{\partial y_2}ds(y)
\nonumber\\
&=:&v^{1}(x)+v^{2}(x),\label{3.25}
\end{eqnarray}
where $G_h$ is defined by $G_h(x,y):=\Phi_k(x,y)-\Phi_k(x,y^*_h)$ with $y^*_h:=(y_1, 2h-y_2)$. We define an approximation $u^{(1)}_N$ of $u^{(1)}$ by
\begin{eqnarray}
u^{(1)}_N(x)&:=&\int_{y_2>0}\chi_{\phi(N)-1}(y_2)\sigma(y)G(x,y)dy+2\int_{\Gamma_h} \chi_{N}(y_1)u^{(1)}(y)\frac{\partial \Phi_k(x,y)}{\partial y_2}ds(y)
\nonumber\\
&=:&v^{1}_{N}(x)+v^{2}_{N}(x), \ \ \ x_2>h,\label{3.26}
\end{eqnarray}
where, for $a>0$, $\chi_{a}$ is defined by
\begin{equation}
\chi_{a}(t):=\left\{
\begin{array}{rl}
1 & \quad \mbox{for $|t|\leq a$} \\
0 & \quad \mbox{for $|t|> a$}.
\end{array}\right.\label{3.27}
\end{equation}
By Lemma 3.4 of \cite{Chandler and Zhang2} and Lemma 2.1 of \cite{Chandler and Zhang1} we can show that $v^{1}_{N}$ and $v^{2}_{N}$ satisfy the upward propagating radiation condition, which implies that so does $u^{(1)}_N$. Furthermore, by the definition of $u^{(1)}_N$ we can show that $u^{(1)}_N(\cdot, \phi(N)-1) \in L^{2}(\mathbb{R})\cap L^{\infty}(\mathbb{R})$. Then, by Lemma 6.1 of \cite{Chandler and Zhang2} we have that
\begin{equation}
\mathrm{Im}\int_{\Gamma_{\phi(N)}}\overline{u^{(1)}_N}\frac{\partial u^{(1)}_N}{\partial x_2}ds \geq0.\label{3.28}
\end{equation}
Combining (\ref{3.6}) with (\ref{3.28}) we have
\begin{eqnarray}
\lefteqn{0\geq-\mathrm{Im}\int_{\Gamma_{\phi(N)}}\overline{u^{(1)}_N}\frac{\partial u^{(1)}_N}{\partial x_2}ds}
\nonumber\\
&=&\mathrm{Im}\Biggl[ \frac{1}{2\pi}\sum_{j \in J} \sum_{d_{l,j},d_{l',j}>0}\overline{a_{l,j}}a_{l',j}\int_{C_{\phi(N)}}\overline{\phi_{l,j}}\frac{\partial \phi_{l',j}}{\partial x_1}dx \Biggr]
\nonumber\\
&-&
\mathrm{Im}\Biggl[\frac{1}{2\pi}\sum_{j \in J} \sum_{d_{l,j},d_{l',j}<0}\overline{a_{l,j}}a_{l',j}\int_{C_{\phi(N)}}\overline{\phi_{l,j}}\frac{\partial \phi_{l',j}}{\partial x_1}dx\Biggr]+J_{+}(N)+J_{-}(N)
\nonumber\\
&+&
\mathrm{Im}\int_{\Gamma_{\phi(N),N}} \overline{u}\frac{\partial u}{\partial x_2}ds-\mathrm{Im}\int_{\Gamma_{\phi(N)}}\overline{u^{(1)}_N}\frac{\partial u^{(1)}_N}{\partial x_2}ds+o(1).\label{3.29}
\end{eqnarray}
We split the last two terms in (\ref{3.29}) as
\begin{equation}
\mathrm{Im}\int_{\Gamma_{\phi(N),N}} \overline{u}\frac{\partial u}{\partial x_2}ds-\mathrm{Im}\int_{\Gamma_{\phi(N)}}\overline{u^{(1)}_N}\frac{\partial u^{(1)}_N}{\partial x_2}ds=:L(N)+M(N),\label{3.30}
\end{equation}
where
\begin{equation}
L(N):=\mathrm{Im}\int_{\Gamma_{\phi(N),N}}\overline{u^{(1)}}\frac{\partial u^{(1)}}{\partial x_2}ds-\mathrm{Im}\int_{\Gamma_{\phi(N)}}\overline{u^{(1)}_N}\frac{\partial u^{(1)}_N}{\partial x_2}ds,\label{3.31}
\end{equation}
\begin{equation}
M(N):=\mathrm{Im}\int_{\Gamma_{\phi(N),N}}\overline{u^{(1)}}\frac{\partial u^{(2)}}{\partial x_2}ds+\mathrm{Im}\int_{\Gamma_{\phi(N),N}}\overline{u^{(2)}}\frac{\partial u^{(1)}}{\partial x_2}ds+\mathrm{Im}\int_{\Gamma_{\phi(N),N}}\overline{u^{(2)}}\frac{\partial u^{(2)}}{\partial x_2}ds.\label{3.32}
\end{equation}
By Lemma 3.2 we can show $|u^{(1)}(x_1, \phi(N))|$, $|\frac{\partial u^{(1)}}{\partial x_2}(x_1,\phi(N))|\leq C\phi(N)$ for $x_1\in \mathbb{R}$, and by Lemma 2.2 we have $|u^{(2)}(x_1, \phi(N))|$, $|\frac{\partial u^{(2)}}{\partial x_2}(x_1, \phi(N))|\leq Ce^{-\delta \phi(N)}$ for $x_1\in \mathbb{R}$. Then, we have
\begin{eqnarray}
|M(N)|
&\leq&
\int^{N}_{-N}|u^{(1)}(x_1, \phi(N))|\Bigl|\frac{\partial u^{(2)}}{\partial x_2}(x_1,\phi(N))\Bigr|dx_1
\nonumber\\
&+&
\int^{N}_{-N}|u^{(2)}(x_1, \phi(N))|\Bigl|\frac{\partial u^{(1)}}{\partial x_2}(x_1,\phi(N))\Bigr|dx_1
\nonumber\\
&+&\int^{N}_{-N}|u^{(2)}(x_1, \phi(N))|\Bigl|\frac{\partial u^{(2)}}{\partial x_2}(x_1,\phi(N))\Bigr|dx_1
\nonumber\\
&\leq&C(N\phi(N)e^{-\delta \phi(N)}+Ne^{-2\delta \phi(N)})
\nonumber\\
&\leq&
CN\phi(N)e^{-\delta \phi(N)},
\label{3.33}
\end{eqnarray}
which implies that $M(N)=o(1)$ as $N \to \infty$. It remains to show that $\limsup_{N\to \infty}L(N)\geq0$.\vspace{1mm}\\
{\bf Step 3 ($\limsup_{N\to \infty}L(N)\geq0$):}
First, we observe that
\begin{eqnarray}
|L(N)|&\leq&
\biggl|\mathrm{Im}\int_{\Gamma_{\phi(N),N}}\overline{u^{(1)}}\frac{\partial u^{(1)}}{\partial x_2}ds-\mathrm{Im}\int_{\Gamma_{\phi(N),N}}\overline{u^{(1)}}\frac{\partial u^{(1)}_N}{\partial x_2}ds\biggr|
\nonumber\\
&+&\biggl| \mathrm{Im}\int_{\Gamma_{\phi(N),N}}\overline{u^{(1)}}\frac{\partial u^{(1)}_N}{\partial x_2}ds-\mathrm{Im}\int_{\Gamma_{\phi(N),N}}\overline{u^{(1)}_N}\frac{\partial u^{(1)}_N}{\partial x_2}ds\biggr|
\nonumber\\
&+&\biggl| \mathrm{Im}\int_{\Gamma_{\phi(N)}\setminus\Gamma_{\phi(N),N} }\overline{u^{(1)}_N}\frac{\partial u^{(1)}_N}{\partial x_2}ds\biggr|
\nonumber
\end{eqnarray}
\begin{eqnarray}
&\leq&
\int_{-N}^{N}|u^{(1)}(x_1,\phi(N))|\Bigl|\frac{\partial u^{(1)}}{\partial x_2}(x_1,\phi(N))-\frac{\partial u^{(1)}_N}{\partial x_2}(x_1,\phi(N))\Bigr|ds\nonumber\\
&+&\int_{-N}^{N}|u^{(1)}(x_1,\phi(N))-u^{(1)}_N(x_1,\phi(N))|\Bigl|\frac{\partial u^{(1)}_N}{\partial x_2}(x_1,\phi(N))\Bigr|ds
\nonumber\\
&+&\int_{\mathbb{R}\setminus (-N,N)}|u^{(1)}_N(x_1,\phi(N))|\Bigl|\frac{\partial u^{(1)}_N}{\partial x_2}(x_1,\phi(N))\Bigr|ds\label{3.34}.
\end{eqnarray}
By Lemma 2.2, $\sigma$ decays exponentially in $y_2$. Then, we have for $x_1\in \mathbb{R}$,
\begin{eqnarray}
\lefteqn{|v^{1}(x_1,\phi(N))|, \ \biggl|\frac{\partial v^{1}}{\partial x_2} (x_1,\phi(N))\biggr|, \ |v^{1}_N(x_1,\phi(N))|, \ \biggl|\frac{\partial v^{1}_N}{\partial x_2} (x_1,\phi(N))\biggr|}
\nonumber\\
&\leq&
C\int_{(-\eta, \eta)\times(0,\infty)} \frac{e^{-\delta y_2}\phi(N) y_2}{(1+|x_1-y_1|)^{3/2}}dy\leq C\frac{\phi(N)}{(1+|x_1|)^{3/2}},\label{3.35}
\end{eqnarray}
and
\begin{eqnarray}
\lefteqn{|v^{1}(x_1,\phi(N))-v^{1}_N(x_1,\phi(N))|, \ \biggl|\frac{\partial v^{1}}{\partial x_2} (x_1,\phi(N))-\frac{\partial v^{1}_N}{\partial x_2} (x_1,\phi(N))\biggr|}
\nonumber\\
&\leq&
C\int_{(-\eta, \eta)\times(\phi(N)-1,\infty)} \frac{e^{-\delta y_2}\phi(N) y_2}{(1+|x_1-y_1|)^{3/2}}dy
\nonumber\\
&\leq&
C\biggl(\int_{\phi(N)}^{\infty} e^{-\delta y_2}y_2dy_2\biggr)\frac{\phi(N)}{(1+|x_1|)^{3/2}}\leq C\frac{e^{-\delta \phi(N)}\phi(N)^{2}}{(1+|x_1|)^{3/2}}.\label{3.36}
\end{eqnarray}
Since the fundamental solution $\Phi_k(x,y)$ of the Helmholtz equation satisfies the following estimates (see, e.g., \cite{Chandler and Christopher}) for $|x-y|\geq 1$:
\begin{equation}
\biggl|\frac{\partial \Phi_k}{\partial y_2}(x,y)\biggr|\leq C\frac{|x_2-y_2|}{1+|x-y|^{3/2}}, \ \ \ \biggl|\frac{\partial^{2} \Phi_k}{\partial x_2 \partial y_2}(x,y)\biggr|\leq C\frac{|x_2-y_2|^2}{1+|x-y|^{3/2}}, \label{3.37}
\end{equation}
we can show that for $x_1 \in \mathbb{R}$
\begin{equation}
|v^{2}(x_1,\phi(N))|\leq C\phi(N)W_{\infty}(x_1), \ \ \ |v^{2}_N(x_1,\phi(N))|\leq C\phi(N)W_{N}(x_1),\label{3.38}
\end{equation}
and
\begin{equation}
\biggl|\frac{\partial v^{2}}{\partial x_2}(x_1,\phi(N))\biggr|\leq C\phi(N)^2 W_{\infty}(x_1), \ \ \ \biggl|\frac{\partial v^{2}_N}{\partial x_2}(x_1,\phi(N))\biggr| \leq C\phi(N)^2 W_{N}(x_1),\label{3.39}
\end{equation}
and
\begin{equation}
|v^{2}(x_1,\phi(N))-v^{2}_N(x_1,\phi(N))|\leq C\phi(N)\bigl(W_{\infty}(x_1)- W_{N}(x_1)\bigr),\label{3.40}
\end{equation}
and
\begin{equation}
\biggl|\frac{\partial v^{2}}{\partial x_2}(x_1,\phi(N))-\frac{\partial v^{2}_N}{\partial x_2}(x_1,\phi(N))\biggr|\leq C\phi(N)^2 \bigl(W_{\infty}(x_1)- W_{N}(x_1)\bigr),\label{3.41}
\end{equation}
where, for $N \in (0, \infty]$, $W_N$ is defined by
\begin{equation}
W_N(x_1):=\int_{-N}^{N}\frac{|u^{(1)}(y_1,h)|}{(1+|x_1-y_1|)^{3/2}}dy_1, \ \ \ x_1 \in \mathbb{R}.\label{3.42}
\end{equation}
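For orientation, we indicate how the first bound in (\ref{3.38}) arises; the remaining bounds in (\ref{3.38})--(\ref{3.41}) follow in the same way, the second estimate in (\ref{3.37}) producing the factor $\phi(N)^2$. For $x=(x_1,\phi(N))$ and $y=(y_1,h)\in\Gamma_h$ we have $|x_2-y_2|=\phi(N)-h\leq \phi(N)$, so the representation of $v^{2}$ and the first estimate in (\ref{3.37}) give
\begin{eqnarray}
|v^{2}(x_1,\phi(N))|&\leq&2\int_{-\infty}^{\infty} |u^{(1)}(y_1,h)|\biggl|\frac{\partial \Phi_k}{\partial y_2}(x,y)\biggr|dy_1
\nonumber\\
&\leq&C\phi(N)\int_{-\infty}^{\infty} \frac{|u^{(1)}(y_1,h)|}{(1+|x_1-y_1|)^{3/2}}dy_1=C\phi(N)W_{\infty}(x_1),
\nonumber
\end{eqnarray}
where we used the elementary comparison of $1+|x-y|^{3/2}$ with $(1+|x_1-y_1|)^{3/2}$.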
Using (\ref{3.35})--(\ref{3.41}), we continue to estimate (\ref{3.34}). By the Cauchy--Schwarz inequality we have
\begin{eqnarray}
\lefteqn{|L(N)|\leq C\int_{-N}^{N}\Bigl\{\frac{\phi(N)}{(1+|x_1|)^{3/2}}+\phi(N)W_{\infty}(x_1)\Bigr\}}
\nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times\Bigl\{\frac{\phi(N)^{2}e^{-\delta \phi(N)}}{(1+|x_1|)^{3/2}}+\phi(N)^2\bigl(W_{\infty}(x_1)-W_{N}(x_1)\bigr)\Bigr\}dx_1
\nonumber\\
&+&\int_{-N}^{N}\Bigl\{\frac{\phi(N)^{2}e^{-\delta \phi(N)}}{(1+|x_1|)^{3/2}}+\phi(N)\bigl(W_{\infty}(x_1)-W_{N}(x_1)\bigr)\Bigr\}
\nonumber\\
&&\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \times \Bigl\{\frac{\phi(N)}{(1+|x_1|)^{3/2}}+\phi(N)^2W_{N}(x_1)\Bigr\}dx_1
\nonumber\\
&+&\int_{\mathbb{R}\setminus (-N,N)}\Bigl\{\frac{\phi(N)}{(1+|x_1|)^{3/2}}+\phi(N)W_{N}(x_1)\Bigr\}\Bigl\{\frac{\phi(N)}{(1+|x_1|)^{3/2}}+\phi(N)^2W_{N}(x_1)\Bigr\}dx_1
\nonumber
\end{eqnarray}
\begin{eqnarray}
&\leq&
C\phi(N)^{3}\int_{-N}^{N}W_{\infty}(x_1)\bigl(W_{\infty}(x_1)-W_{N}(x_1)\bigr)dx_1
\nonumber\\
&+&C\phi(N)^{3}\int_{-N}^{N}\frac{1}{(1+|x_1|)^{3/2}}\bigl(W_{\infty}(x_1)-W_{N}(x_1)\bigr)dx_1
\nonumber\\
&+&C\phi(N)^{2}\int_{\mathbb{R}\setminus (-N,N)}\frac{1}{(1+|x_1|)^{3}}dx_1+C\phi(N)^{2}\int_{\mathbb{R}\setminus (-N,N)}\frac{1}{(1+|x_1|)^{3/2}}W_{N}(x_1)dx_1
\nonumber\\
&+&C\phi(N)^{3}\int_{\mathbb{R}\setminus (-N,N)}|W_{N}(x_1)|^2dx_1+o(1)
\nonumber\\
&\leq&
C\phi(N)^{3}\biggl\{\Bigl(\int_{-N}^{N}\bigl(W_{\infty}(x_1)-W_{N}(x_1)\bigr)^2dx_1\Bigr)^{1/2}+\Bigl(\int_{\mathbb{R}\setminus (-N,N)}W_{N}(x_1)^2dx_1\Bigr)^{1/2}\biggr\}
\nonumber\\
&& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ +o(1).\label{3.43}
\end{eqnarray}
Finally, we will estimate $\bigl(W_{\infty}(x_1)-W_{N}(x_1)\bigr)$ and $W_{N}(x_1)$. Since $u^{(1)}(\cdot,h) \in L^{2}(\mathbb{R})$, by Lemma 3.3 there exists a sequence $\{N_m \}_{m\in \mathbb{N}}$ such that $N_m \to \infty$ as $m \to \infty$ and
\begin{equation}
\int_{K_{N_m}}|u^{(1)}(y_1,h)|^{2}dy_1 \leq C N^{-\frac{1}{4}}_m, \ m \in\mathbb{N},\label{3.44}
\end{equation}
where $K_{A}$, $K^{\pm}_{A}$, and $A^{\pm}$ are as in Lemma 3.3.
\par
By the Cauchy--Schwarz inequality we have for $|x_1|>N_m$,
\begin{eqnarray}
\int_{-N_{m}^{-}}^{N_{m}^{-}}\frac{|u^{(1)}(y_1,h)|}{(1+|x_1-y_1|)^{3/2}}dy_1
&\leq&\biggl(\int_{-N_{m}^{-}}^{N_{m}^{-}}|u^{(1)}(y_1,h)|^{2}dy_1\biggr)^{1/2}\biggl(\int_{-N_{m}^{-}}^{N_{m}^{-}}\frac{dy_1}{(1+|x_1|-y_1)^{3}}\biggr)^{1/2}
\nonumber\\
&\leq& \frac{C}{1+|x_1|-N^{-}_m},\label{3.45}
\end{eqnarray}
and
\begin{eqnarray}
\int_{K^{-}_{N_m}}\frac{|u^{(1)}(y_1,h)|}{(1+|x_1-y_1|)^{3/2}}dy_1
&\leq&\biggl(\int_{K_{N_m}}|u^{(1)}(y_1,h)|^{2}dy_1\biggr)^{1/2}\biggl(\int_{K^{-}_{N_m}}\frac{dy_1}{(1+|x_1|-y_1)^{3}}\biggr)^{1/2}
\nonumber\\
&\leq& \frac{C}{N^{1/8}_{m}(1+|x_1|-N_m)}.\label{3.46}
\end{eqnarray}
Therefore, we obtain
\begin{eqnarray}
\lefteqn{\int_{\mathbb{R}\setminus (-N_m,N_m)}W_{N}(x_1)^2dx_1}
\nonumber\\
&\leq&C\int_{N_m}^{\infty}\frac{dx_1}{(1+|x_1|-N^{-}_m)^2}+\frac{C}{N_m^{1/4}}\int_{N_m}^{\infty}\frac{dx_1}{(1+|x_1|-N_m)^2}
\nonumber\\
&\leq&\frac{C}{1+N^{1/2}_m}+\frac{C}{N^{1/4}_{m}}\ \leq \ \frac{C}{N^{1/4}_{m}}.\label{3.47}
\end{eqnarray}
By the Cauchy--Schwarz inequality we have for $|x_1|<N_m$,
\begin{eqnarray}
\lefteqn{\int_{\mathbb{R}\setminus (-N_{m}^{+},N_{m}^{+})}\frac{|u^{(1)}(y_1,h)|}{(1+|x_1-y_1|)^{3/2}}dy_1}
\nonumber\\
&\leq&\biggl(\int_{\mathbb{R}\setminus (-N_{m}^{+},N_{m}^{+})}|u^{(1)}(y_1,h)|^{2}dy_1\biggr)^{1/2}\biggl(\int_{\mathbb{R}\setminus (-N_{m}^{+},N_{m}^{+})}\frac{dy_1}{(1+y_1-|x_1|)^{3}}\biggr)^{1/2}
\nonumber\\
&\leq& \frac{C}{1+N^{+}_m-|x_1|},\label{3.48}
\end{eqnarray}
and
\begin{eqnarray}
\int_{K^{+}_{N_m}}\frac{|u^{(1)}(y_1,h)|}{(1+|x_1-y_1|)^{3/2}}dy_1
&\leq&\biggl(\int_{K_{N_m}}|u^{(1)}(y_1,h)|^{2}dy_1\biggr)^{1/2}\biggl(\int_{K^{+}_{N_m}}\frac{dy_1}{(1+y_1-|x_1|)^{3}}\biggr)^{1/2}
\nonumber\\
&\leq& \frac{C}{N^{1/8}_{m}(1+N_m-|x_1|)}.\label{3.49}
\end{eqnarray}
Therefore, we obtain
\begin{eqnarray}
\lefteqn{\int_{-N_m}^{N_m}\bigl(W_{\infty}(x_1)-W_{N}(x_1)\bigr)^2dx_1}
\nonumber\\
&\leq&C\int_{-N_m}^{N_m}\frac{dx_1}{(1+N^{+}_m-|x_1|)^2}+\frac{C}{N_m^{1/4}}\int_{-N_m}^{N_m}\frac{dx_1}{(1+N_m-|x_1|)^2}
\nonumber\\
&\leq&\frac{C}{1+N^{1/2}_m}+\frac{C}{N^{1/4}_{m}}\ \leq \ \frac{C}{N^{1/4}_{m}}.\label{3.50}
\end{eqnarray}
Therefore, collecting (\ref{3.43}), (\ref{3.47}), and (\ref{3.50}), we conclude that $|L(N_m)|\leq C\frac{\phi(N_m)^{3}}{N_m^{1/8}}$. Since $\phi(N)=N^s$, if we choose $s \in (0,1)$ such that $3s<\frac{1}{8}$, that is, $0<s<\frac{1}{24}$, the term $\frac{\phi(N_m)^{3}}{N_m^{1/8}}$ converges to zero as $m \to \infty$. Therefore, $\limsup_{N\to \infty}L(N)\geq0$, which yields Step 3.
\vspace{5mm}\\
By taking $\limsup_{N \to \infty}$ in (\ref{3.29}) we have that
\begin{eqnarray}
0&\geq&\frac{k}{2\pi} \sum_{j \in J} \Biggl[ \sum_{d_{l,j}>0}|a_{l,j}|^2d_{l,j} -\sum_{d_{l,j}<0}|a_{l,j}|^2d_{l,j}\Biggr]
\nonumber\\
&+&\limsup_{N \to \infty}\Bigl(J_{+}(N)+J_{-}(N)+L(N)\Bigr).\label{3.51}
\end{eqnarray}
By Steps 2 and 3, choosing $0<s<\frac{1}{24}$, the right-hand side is non-negative. Therefore, $a_{l,j}=0$ for all $l,j$, which yields $u^{(2)}=0$. Theorem 3.1 has been shown, and in the next section we will show the uniqueness of $u^{(1)}$.
\end{proof}
\section{Uniqueness of $u^{(1)}$}
In Section 4, we will show the following lemma.
\begin{lem}
If $u \in H^{1}_{loc}(\mathbb{R}^2_+)$ satisfies
\begin{description}
\item[(i)] $u \in H^{1}(\mathbb{R}\times (0,R))$ for all $R>0$,
\item[(ii)] $\Delta u+k^2(1+q)nu=0 \ \mathrm{in} \ \mathbb{R}^2_{+}$,
\item[(iii)]$u$ vanishes for $x_2=0$,
\item[(iv)] There exists $\phi \in L^{\infty}(\Gamma_h)\cap H^{1/2}(\Gamma_h)$ with $u(x)=2\int_{\Gamma_h}\phi(y)\frac{\partial\Phi_k(x,y)}{\partial y_2} ds(y)$ for $x_2>h$,
\end{description}
then, $u \in H^{1}_{0}(\mathbb{R}^{2}_{+})$.
\end{lem}
Using Lemma 4.1, we obtain the uniqueness of the solution in Theorem 1.2.
\begin{thm}
Let Assumptions 1.1 and 2.1 hold and let $k>0$ be regular in the sense of Definition 2.3. If $u \in H^{1}_{loc}(\mathbb{R}^2_{+})$ satisfies (\ref{3.1}), (\ref{3.2}), and the radiation condition in the sense of Definition 2.4, then $u$ vanishes for $x_2>0$.
\end{thm}
\begin{proof}[{\bf Proof of Theorem 4.2}]
Let $u \in H^{1}_{loc}(\mathbb{R}^2_{+})$ satisfy (\ref{3.1}), (\ref{3.2}), and the radiation condition in the sense of Definition 2.4. By Lemma 3.1, $u^{(2)} = 0$ for $x_2>0$. Then, $u^{(1)}$ satisfies the assumptions (i)--(iv) of Lemma 4.1, which implies that $u^{(1)} \in H^{1}_{0}(\mathbb{R}^{2}_{+})$. By Assumption 1.1, $u^{(1)}$ vanishes for $x_2>0$, which yields the uniqueness.
\end{proof}
\begin{proof}[{\bf Proof of Lemma 4.1}]
Let $R>h$ be fixed. We set $\Omega_{N,R}:=(-N,N) \times (0, R)$ where $N>0$ is large enough. We denote by $I^R_{\pm N}:=\{\pm N \}\times (0,R)$, $\Gamma_{R, N}:=(-N,N)\times \{R \}$, and $\Gamma_{R}:=(-\infty,\infty) \times \{R \}$. By Green's first theorem in $\Omega_{N,R}$ and assumptions {\bf(ii)}, {\bf(iii)} we have
\begin{eqnarray}
\lefteqn{ \int_{\Omega_{N,R}}\{-k^2(1+q)n|u|^{2}+|\nabla u|^{2} \}dx=\int_{\Omega_{N,R}}\{ \overline{u}\Delta u+|\nabla u|^{2} \}dx}
\nonumber\\
&=&\int_{I^R_{N}} \overline{u}\frac{\partial u}{\partial x_1} ds-\int_{I^R_{-N}} \overline{u}\frac{\partial u}{\partial x_1} ds +\int_{\Gamma_{R,N}} \overline{u}\frac{\partial u}{\partial x_2} ds. \label{4.3}
\end{eqnarray}
By assumption {\bf (i)}, the first and second terms on the right-hand side of (\ref{4.3}) go to zero as $N \to \infty$. Then, taking the imaginary part and letting $N \to \infty$ in (\ref{4.3}), we have
\begin{equation}
\mathrm{Im} \int_{\Gamma_R} \overline{u} \frac{\partial u}{\partial x_2}ds = 0. \label{4.4}
\end{equation}
By considering the Floquet Bloch transform with respect to $x_1$ (see the notation of (\ref{2.5})), we can show that
\begin{equation}
\int_{\Gamma_R} \overline{u} \frac{\partial u}{\partial x_2}ds = \int_{-1/2}^{1/2}\int_{0}^{2\pi} \overline{\tilde{u}_{\alpha}}(x_1, R)\frac{\partial \tilde{u}_{\alpha}(x_1, R)}{\partial x_2}dx_1d\alpha. \label{4.5}
\end{equation}
Since the upward propagating radiation condition is equivalent, via the Floquet Bloch transform, to the Rayleigh expansion (see the proof of Theorem 6.8 in \cite{Kirsch and Lechleiter2}), we can show that
\begin{equation}
\tilde{u}_{\alpha}(x)=\sum_{n \in \mathbb{Z}}u_{n}(\alpha)e^{inx_1+i\sqrt{k^2-(n+\alpha)^2}(x_2-h)}, \ x_2>h, \label{4.6}
\end{equation}
where $u_{n}(\alpha):=(2\pi)^{-1}\int_{0}^{2\pi}\tilde{u}_{\alpha}(x_1,h)e^{-inx_1}dx_1$. From (\ref{4.4})--(\ref{4.6}) we obtain that
\begin{eqnarray}
0&=&\mathrm{Im}\int_{-1/2}^{1/2}\int_{0}^{2\pi} \overline{\tilde{u}_{\alpha}}(x_1, R)\frac{\partial \tilde{u}_{\alpha}(x_1, R)}{\partial x_2}dx_1d\alpha
\nonumber\\
&=&\mathrm{Im}\sum_{n \in \mathbb{Z}}\int_{-1/2}^{1/2}2\pi|u_{n}(\alpha)|^2i\sqrt{k^2-(n+\alpha)^2}d\alpha. \label{4.7}
\end{eqnarray}
Here, we write $k=n_0+r$ with $n_0 \in \mathbb{N}_0$ and $r \in [-1/2,1/2)$. Then by (\ref{4.7}) we have
\begin{equation}
u_n(\alpha)=0 \ \mathrm{for} \ |n|<n_0, \ \mathrm{a.e.} \ \alpha \in (-1/2, 1/2), \nonumber
\end{equation}
\begin{equation}
u_{n_0}(\alpha)=0 \ \mathrm{for} \ \alpha \in (-1/2, r), \nonumber
\end{equation}
\begin{equation}
u_{-n_0}(\alpha)=0 \ \mathrm{for} \ \alpha \in (-r, 1/2) \label{4.8}.
\end{equation}
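The implication from (\ref{4.7}) can be seen termwise: for $(n+\alpha)^2<k^2$ the square root is real and positive, so
\begin{equation}
\mathrm{Im}\Bigl(i\sqrt{k^2-(n+\alpha)^2}\Bigr)=\sqrt{k^2-(n+\alpha)^2}>0, \nonumber
\end{equation}
while for $(n+\alpha)^2>k^2$ the term $i\sqrt{k^2-(n+\alpha)^2}=-\sqrt{(n+\alpha)^2-k^2}$ is real and does not contribute to the imaginary part. Hence the right-hand side of (\ref{4.7}) is a sum of non-negative terms, each of which must vanish, and the three cases $|n|<n_0$, $n=n_0$, and $n=-n_0$ give exactly (\ref{4.8}).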
By (\ref{4.8}) we have
\begin{eqnarray}
&&\int_{-1/2}^{1/2}\int_{0}^{2\pi}\int_{R}^{\infty} |\tilde{u}_{\alpha}(x)|^2dx_2dx_1d\alpha
\nonumber\\
&=&2\pi \int_{-1/2}^{1/2} \sum_{|n|>n_0}|u_{n}(\alpha)|^2 \int_{R}^{\infty}e^{-2\sqrt{(n+\alpha)^2-k^2}(x_2-h)}dx_2d\alpha \nonumber\\
&+&2\pi \int_{r}^{1/2} |u_{n_0}(\alpha)|^2 \int_{R}^{\infty}e^{-2\sqrt{(n_0+\alpha)^2-k^2}(x_2-h)}dx_2d\alpha\nonumber\\
&+&2\pi \int_{-1/2}^{-r} |u_{-n_0}(\alpha)|^2 \int_{R}^{\infty}e^{-2\sqrt{(-n_0+\alpha)^2-k^2}(x_2-h)}dx_2d\alpha \nonumber
\end{eqnarray}
\begin{eqnarray}
&\leq& 2\pi \sum_{|n|>n_0} \int_{-1/2}^{1/2} \frac{|u_{n}(\alpha)|^2 e^{-\sqrt{(n+\alpha)^2-k^2}(R-h)} }{\sqrt{(n+\alpha)^2-k^2}}d\alpha \nonumber\\
&+& 2\pi \int_{r}^{1/2} \frac{|u_{n_0}(\alpha)|^2 e^{-\sqrt{(n_0+\alpha)^2-k^2}(R-h)}}{\sqrt{(n_0+\alpha)^2-k^2}}d\alpha \nonumber\\
&+& 2\pi \int_{-1/2}^{-r} \frac{|u_{-n_0}(\alpha)|^2 e^{-\sqrt{(-n_0+\alpha)^2-k^2}(R-h)}}{\sqrt{(-n_0+\alpha)^2-k^2}}d\alpha \nonumber\\
&\leq& C \sum_{|n|>n_0} \int_{-1/2}^{1/2} |u_{n}(\alpha)|^2d\alpha \nonumber\\
&+& C \int_{r}^{1/2} \frac{|u_{n_0}(\alpha)|^2}{\sqrt{\alpha-r}}d\alpha + C \int_{-1/2}^{-r} \frac{|u_{-n_0}(\alpha)|^2}{\sqrt{-\alpha-r}}d\alpha,
\label{4.9}
\end{eqnarray}
and
\begin{eqnarray}
&&\int_{-1/2}^{1/2}\int_{0}^{2\pi}\int_{R}^{\infty} |\partial_{x_1} \tilde{u}_{\alpha}(x)|^2dx_2dx_1d\alpha \nonumber\\
&\leq&2\pi \sum_{|n|>n_0} \int_{-1/2}^{1/2} \frac{|u_{n}(\alpha)|^2 n^2 e^{-\sqrt{(n+\alpha)^2-k^2}(R-h)}}{\sqrt{(n+\alpha)^2-k^2}}d\alpha \nonumber\\
&+&2\pi \int_{r}^{1/2} \frac{|u_{n_0}(\alpha)|^2 n^2_0 e^{-\sqrt{(n_0+\alpha)^2-k^2}(R-h)}}{\sqrt{(n_0+\alpha)^2-k^2}}d\alpha \nonumber\\
&+&2\pi \int_{-1/2}^{-r} \frac{|u_{-n_0}(\alpha)|^2 n^2_0 e^{-\sqrt{(-n_0+\alpha)^2-k^2}(R-h)}}{\sqrt{(-n_0+\alpha)^2-k^2}}d\alpha \nonumber
\end{eqnarray}
\begin{eqnarray}
&\leq& C \sum_{|n|>n_0} \int_{-1/2}^{1/2} |u_{n}(\alpha)|^2d\alpha \nonumber\\
&+& C \int_{r}^{1/2} \frac{|u_{n_0}(\alpha)|^2}{\sqrt{\alpha-r}}d\alpha + C \int_{-1/2}^{-r} \frac{|u_{-n_0}(\alpha)|^2}{\sqrt{-\alpha-r}}d\alpha.
\label{4.10}
\end{eqnarray}
By the same argument as in (\ref{4.10}) we have
\begin{eqnarray}
\lefteqn{\int_{-1/2}^{1/2}\int_{0}^{2\pi}\int_{R}^{\infty} |\partial_{x_2} \tilde{u}_{\alpha}(x)|^2dx_2dx_1d\alpha
\leq C \sum_{|n|>n_0} \int_{-1/2}^{1/2} |u_{n}(\alpha)|^2d\alpha}
\nonumber\\
&+& C \int_{r}^{1/2} \frac{|u_{n_0}(\alpha)|^2}{\sqrt{\alpha-r}}d\alpha + C \int_{-1/2}^{-r} \frac{|u_{-n_0}(\alpha)|^2}{\sqrt{-\alpha-r}}d\alpha.
\label{4.11}
\end{eqnarray}
It is well known that the Floquet Bloch transform is an isomorphism between $H^1(\mathbb{R}^2_{+})$ and $L^2\bigl((-1/2,1/2)_{\alpha}; H^1((0,2\pi)\times \mathbb{R})_{x}\bigr)$ (see, e.g., Theorem 4 in \cite{Lechleiter2}). Therefore, we obtain from (\ref{4.9})--(\ref{4.11})
\begin{eqnarray}
\left\| u \right\|^2_{H^1(\mathbb{R}\times (R, \infty))}&\leq& C \int_{-1/2}^{1/2}\int_{0}^{2\pi}\int_{R}^{\infty}|\tilde{u}_{\alpha}(x)|^2+|\partial_{x_1} \tilde{u}_{\alpha}(x)|^2+ |\partial_{x_2} \tilde{u}_{\alpha}(x)|^2dx_2dx_1d\alpha \nonumber\\
&\leq& C \sum_{|n|>n_0} \int_{-1/2}^{1/2} |u_{n}(\alpha)|^2d\alpha \nonumber\\
&+& C \int_{r}^{1/2} \frac{|u_{n_0}(\alpha)|^2}{\sqrt{\alpha-r}}d\alpha + C \int_{-1/2}^{-r} \frac{|u_{-n_0}(\alpha)|^2}{\sqrt{-\alpha-r}}d\alpha
\nonumber\\
&\leq& C\int_{-1/2}^{1/2} \int_{0}^{2\pi} |\tilde{u}_{\alpha}(x_1,h)|^2dx_1d\alpha \nonumber\\
&+& C \int_{r}^{1/2} \frac{|u_{n_0}(\alpha)|^2}{\sqrt{\alpha-r}}d\alpha + C \int_{-1/2}^{-r} \frac{|u_{-n_0}(\alpha)|^2}{\sqrt{-\alpha-r}}d\alpha.
\label{4.12}
\end{eqnarray}
If we can show that
\begin{equation}
\exists \delta>0 \ \ \mathrm{and} \ \ \exists C>0 \ \ \mathrm{s.t.} \ \ |u_{\pm n_0}(\alpha)| \leq C \ \ \mathrm{for} \ \ \mathrm{all} \ \ \alpha \in (-\delta \pm r,\delta \pm r),\label{4.13}
\end{equation}
then the right-hand side of (\ref{4.12}) is finite, which yields Lemma 4.1.
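Indeed, the singular weights in (\ref{4.12}) cause no trouble once (\ref{4.13}) is available: for small $\delta>0$,
\begin{equation}
\int_{r}^{r+\delta} \frac{|u_{n_0}(\alpha)|^2}{\sqrt{\alpha-r}}d\alpha \leq C^2\int_{0}^{\delta}\frac{dt}{\sqrt{t}}=2C^2\sqrt{\delta}<\infty, \nonumber
\end{equation}
while on $(r+\delta,1/2)$ the weight $(\alpha-r)^{-1/2}$ is bounded by $\delta^{-1/2}$ and $u_{n_0}(\cdot)$ is square integrable; the term with $u_{-n_0}$ is treated in the same way.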
\par Finally, we will show (\ref{4.13}). By the same argument as in Section 3 of \cite{Kirsch and Lechleiter2} we have
\begin{equation}
(I-K_{\alpha})\tilde{u}_{\alpha}=f_{\alpha} \ \mathrm{in} \ H^{1}_{0,per}(C_h),\label{4.14}
\end{equation}
where the operator $K_{\alpha}$ is defined by (\ref{2.12}) and $f_{\alpha}:=-(T_{per}k^2nqu)(\cdot, \alpha)$. Since the function $k^2nqu$ has compact support, $\left\|f_{\alpha} \right\|^2_{H^1(C_h)}$ is bounded with respect to $\alpha$. By Assumption 2.1 and the compactness of the operator $K_{\alpha}$, $(I-K_{\alpha})$ is invertible if $\alpha \in A_k$. Since $\pm r \in A_k$, $(I-K_{\pm r})$ is invertible. Since the exceptional values are finitely many (see Lemma 2.2), $(I-K_{\alpha})$ is also invertible if $\alpha$ is close to $\pm r$. Therefore, there exists $\delta>0$ such that $(I-K_{\alpha})$ is invertible for all $\alpha \in (-\delta+r,\delta+r) \cup (-\delta-r,\delta-r)$.
\par
The operator $(I-K_{\alpha})$ is of the form
\begin{equation}
(I-K_{\alpha})=(I-K_{\pm r}) \Bigl(I-(I-K_{\pm r})^{-1}[I-K_{\pm r}-(I-K_{\alpha})] \Bigr)=(I-K_{\pm r})(I-M_{\alpha}),\label{4.15}
\end{equation}
where $M_{\alpha}:=(I-K_{\pm r})^{-1}(K_{\alpha}-K_{\pm r})$. Next, we will estimate $(K_{\alpha}-K_{\pm r})$. By the definition of $K_{\alpha}$ we have for all $v, w \in H^{1}_{0,per}(C_h)$,
\begin{eqnarray}
\langle (K_{\alpha}-K_{\pm r})v, w \rangle_{*}&=&-\int_{C_h}\left[i(\alpha \mp r) \biggl(v \frac{\partial \overline{w}}{\partial x_1} -\overline{w}\frac{\partial v}{\partial x_1}
\biggr)+(\alpha^2-r^2)v\overline{w}\right]dx
\nonumber\\
&+& 2\pi i \sum_{|n|\neq n_0}v_n\overline{w_n} \bigl( \sqrt{k^2-(n+\alpha)^2}-\sqrt{k^2-(n\pm r)^2} \bigr)
\nonumber\\
&+& 2\pi i \sum_{|n|= n_0}v_n\overline{w_n} \bigl( \sqrt{k^2-(n+\alpha)^2}-\sqrt{k^2-(n\pm r)^2} \bigr).\nonumber\\ \label{4.16}
\end{eqnarray}
Since
\begin{equation}
|\sqrt{k^2-(n+\alpha)^2}-\sqrt{k^2-(n\pm r)^2}|= \biggl|\frac{\pm 2nr+r^2-2n\alpha-\alpha^2}{\sqrt{k^2-(n+\alpha)^2}+\sqrt{k^2-(n\pm r)^2}} \biggr|
\nonumber
\end{equation}
\begin{eqnarray}
\leq \left\{ \begin{array}{ll}
\frac{|n||\alpha \pm r|+|r^2-\alpha^2|}{\sqrt{|k^2-(n\pm r)^2|}} & \quad \mbox{for $|n| \neq n_0$} \\
\frac{|n||\alpha \pm r|+|r^2-\alpha^2|}{\sqrt{|r+\alpha||r-\alpha|}} & \quad \mbox{for $|n|=n_0$}, \\
\end{array} \right. \label{4.17}
\end{eqnarray}
we have for all $\alpha \in (-\delta+r,\delta+r) \cup (-\delta-r,\delta-r)$
\begin{eqnarray}
|\langle (K_{\alpha}-K_{\pm r})v, w \rangle_{*}|&\leq&C|\alpha\mp r| \left\|v \right\|_{H^1(C_h)}\left\|w \right\|_{H^1(C_h)}
\nonumber\\
&+&C \sum_{|n|\neq n_0}|v_n||w_n| \frac{|n||\alpha\mp r|}{\sqrt{|k^2-(n\pm r)^2|}}
\nonumber\\
&+&C \sum_{|n|= n_0}|v_n||w_n| n_0 \sqrt{|\alpha \mp r|}
\nonumber\\
&\leq&C \sqrt{|\alpha \mp r|}\left\|v \right\|_{H^1(C_h)}\left\|w \right\|_{H^1(C_h)}.
\label{4.18}
\end{eqnarray}
(we retake a smaller $\delta>0$ if necessary). This implies that there is a constant $C>0$, independent of $\alpha$, such that $\left\| K_{\alpha}-K_{\pm r} \right\| \leq C\sqrt{|\alpha \mp r|}$. Therefore, by the properties of Neumann series, there is a small $\delta>0$ such that for all $\alpha \in (-\delta+r,\delta+r) \cup (-\delta-r,\delta-r)$
\begin{equation}
(I-M_{\alpha})^{-1}=\sum_{n=0}^{\infty}M_{\alpha}^{n}\ \ \mathrm{and} \ \ \left\|M_{\alpha} \right\| \leq 1/2.\label{4.19}
\end{equation}
By Cauchy-Schwarz, the boundedness of the trace operator, and (\ref{4.19}) we have
\begin{eqnarray}
|u_{\pm n_0}(\alpha)|&\leq& \int_{0}^{2\pi}|\tilde{u}_{\alpha}(x_1,h)|dx_1\leq C\left\|\tilde{u}_{\alpha} \right\|_{H^1(C_h)}
\nonumber\\
&=&
C\left\|(I-M_{\alpha})^{-1}(I-K_{\pm r})^{-1} f_{\alpha} \right\|_{H^1(C_h)} \nonumber\\
&\leq&
C\left\| (I-M_{\alpha})^{-1} \right\| \left\| (I-K_{\pm r})^{-1} f_{\alpha} \right\| \nonumber\\
&\leq&
C\sum_{n=0}^{\infty}\left\|M_{\alpha} \right\|^n \leq C\sum_{n=0}^{\infty}(1/2)^n <\infty,
\label{4.20}
\end{eqnarray}
where the constant $C>0$ is independent of $\alpha$. Therefore, we have shown (\ref{4.13}).
\end{proof}
\section{Existence}
In the previous sections we discussed the uniqueness part of Theorem 1.2. In this section, we show the existence. Let Assumptions 1.1 and 2.1 hold and let $k>0$ be regular in the sense of Definition 2.3. Let $f \in L^{2}(\mathbb{R}^2_{+})$ be such that $\mathrm{supp}f=Q$. We define the solution operator $S:L^{2}(Q)\to L^{2}(Q)$ by
$Sg:=v\bigl|_{Q}$ where $v$ satisfies the radiation condition and
\begin{equation}
\Delta v+k^2nv=g, \ \mathrm{in} \ \mathbb{R}^2_{+}, \label{5.1}
\end{equation}
\begin{equation}
v=0 \ \mathrm{on} \ \Gamma_0. \label{5.2}
\end{equation}
Remark that by Theorem 2.6 we can define such an operator $S$, and $S$ is a compact operator since the restriction of the solution $v$ to $Q$ is in $H^{1}(Q)$ and the embedding $H^{1}(Q) \hookrightarrow L^{2}(Q)$ is compact. We define the multiplication operator $M:L^{2}(Q)\to L^{2}(Q)$ by $Mh:=k^{2}nqh$. We will show the following lemma.
\begin{lem}
$I_{L^{2}(Q)}+SM$ is invertible.
\end{lem}
\begin{proof}[{\bf Proof of Lemma 5.1}]
By the definition of operators $S$ and $M$
we have $SMg=v\bigl|_Q$, where $v$ is a radiating solution of (\ref{5.1})--(\ref{5.2}) with $g$ replaced by $k^{2}nqg$. If we assume that $(I_{L^{2}(Q)}+SM)g=0$, then $g=-v\bigl|_{Q}$, which implies that $v$ satisfies $\Delta v+k^2n(1+q)v=0$ in $\mathbb{R}^2_{+}$. By the uniqueness we have $v=0$ in $\mathbb{R}^2_{+}$, and hence $g=0$, that is, $I_{L^{2}(Q)}+SM$ is injective. Since the operator $SM$ is compact, by Fredholm theory we conclude that $I_{L^{2}(Q)}+SM$ is invertible.
\end{proof}
We define $u$ as the solution of
\begin{equation}
\Delta u+k^2nu=f-M(I_{L^{2}(Q)}+SM)^{-1}Sf \ \mathrm{in} \ \mathbb{R}^2_{+}, \label{5.3}
\end{equation}
satisfying the radiation condition and $u=0$ on $\Gamma_0$. Since
\begin{eqnarray}
u\bigl|_{Q}
&=&S(f-M(I_{L^{2}(Q)}+SM)^{-1}Sf)
\nonumber\\
&=&(I_{L^{2}(Q)}+SM)(I_{L^{2}(Q)}+SM)^{-1}Sf-SM(I_{L^{2}(Q)}+SM)^{-1}Sf
\nonumber\\
&=&(I_{L^{2}(Q)}+SM)^{-1}Sf,\label{5.4}
\end{eqnarray}
we have that
\begin{equation}
\Delta u+k^2nu=f-k^{2}nqu, \ \mathrm{in} \ \mathbb{R}^2_{+}, \label{5.5}
\end{equation}
and $u$ is a radiating solution of (\ref{1.8})--(\ref{1.9}). Therefore, the existence part of Theorem 1.2 has been shown.
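The operator identity used in (\ref{5.4}), namely $S\bigl(f-M(I+SM)^{-1}Sf\bigr)=(I+SM)^{-1}Sf$, is purely algebraic, so it can be sanity-checked with random finite-dimensional stand-ins for $S$ and $M$. The Python sketch below is illustrative only; the matrices are not the actual operators, and they are scaled so that $I+SM$ is safely invertible.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Random stand-ins for S and M, scaled so that ||SM|| <= 1/4
# and I + SM is therefore invertible.
S = rng.standard_normal((n, n))
S /= 2.0 * np.linalg.norm(S, 2)
M = rng.standard_normal((n, n))
M /= 2.0 * np.linalg.norm(M, 2)
f = rng.standard_normal(n)

I = np.eye(n)
inv = np.linalg.inv(I + S @ M)

lhs = S @ (f - M @ inv @ S @ f)  # S applied to the modified right-hand side
rhs = inv @ S @ f                # (I + SM)^{-1} S f

resid = np.linalg.norm(lhs - rhs)
assert resid < 1e-10
```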
\section{Example of Assumption 1.1}
In Section 6, we will show the following lemma in order to give an example of Assumption 1.1.
\begin{lem}
Let $q$ and $n$ satisfy $\partial_2 \bigl((1+q)n\bigr) \geq 0$ in $W$, and let $v \in H^{1}(\mathbb{R}^2_+)$ satisfy (\ref{1.6})--(\ref{1.7}). Then, $v$ vanishes for $x_2>0$.
\end{lem}
\begin{proof}[{\bf Proof of Lemma 6.1}]
Let $R>h$ be fixed. For $N>0$ we set $\Omega_{N,R}:=(-N,N) \times (0, R)$, $I_{\pm N}^{R}:=\{\pm N \}\times (0,R)$, $\Gamma_{R, N}:=(-N,N)\times \{R \}$, and $\Gamma_{0, N}:=(-N,N)\times \{0 \}$. By Green's first theorem in $\Omega_{N,R}$ we have
\begin{eqnarray}
\lefteqn{ \int_{\Omega_{N,R}}\{-k^2(1+q)n|v|^{2}+|\nabla v|^{2} \}dx=\int_{\Omega_{N,R}}\{ \overline{v}\Delta v+|\nabla v|^{2} \}dx}
\nonumber\\
&=&\int_{I_{N}^{R}} \overline{v}\partial_1 v ds-\int_{I_{-N}^{R}} \overline{v}\partial_1 v ds +\int_{\Gamma_{R,N}} \overline{v}\partial_2 v ds.\label{6.1}
\end{eqnarray}
Since $v \in H^{1}(\mathbb{R}^2_+)$, the first and second terms on the right hand side of (\ref{6.1}) go to zero as $N \to \infty$. Then, by taking the imaginary part of (\ref{6.1}) and letting $N \to \infty$ we have
\begin{equation}
\mathrm{Im}\int_{\Gamma_{R}}\overline{v}\partial_2 v ds=0.\label{6.2}
\end{equation}
By a direct calculation, we have
\begin{eqnarray}
\lefteqn{2\mathrm{Re}\bigl(\partial_2\overline{v}(\Delta v+k^2(1+q)nv)\bigr)}
\nonumber\\
&=&2\mathrm{Re}\bigl(\nabla\cdot(\partial_2\overline{v}\nabla v)\bigr)-\partial_2(|\nabla v|^2)+k^2(1+q)n\partial_2(|v|^2), \label{6.3}
\end{eqnarray}
which implies that
\begin{eqnarray}
\lefteqn{0=2\mathrm{Re}\int_{\Omega_{N,R}}\partial_2 \overline{v}\bigl(\Delta v+k^2(1+q)nv\bigr)dx=2\mathrm{Re}\int_{\Omega_{N,R}}\nabla\cdot(\partial_2\overline{v}\nabla v)dx}
\nonumber\\
&-&\int_{\Omega_{N,R}}\partial_2(|\nabla v|^2)dx+\int_{\Omega_{N,R}}k^2(1+q)n\partial_2(|v|^2)dx
\nonumber\\
&=&2\mathrm{Re}\biggl(-\int_{\Gamma_{0,N}}\partial_2\overline{v}\partial_2vds+\int_{I_N^R}\partial_2\overline{v}\partial_1vds-\int_{I_{-N}^R}\partial_2\overline{v}\partial_1vds+\int_{\Gamma_{R,N}}\partial_2\overline{v}\partial_2vds \biggr)
\nonumber\\
&-&\biggl(-\int_{\Gamma_{0,N}}|\nabla v|^2ds+\int_{\Gamma_{R,N}}|\nabla v|^2ds \biggr)
\nonumber\\
&-&\int_{\Gamma_{0,N}}k^2(1+q)n|v|^2ds+\int_{\Gamma_{R,N}}k^2(1+q)n|v|^2ds-\int_{\Omega_{N,R}}k^2\partial_2 \bigl((1+q)n \bigr)|v|^2dx
\nonumber\\
&=&-\int_{\Gamma_{0,N}}|\partial_2v|^2ds+\int_{\Gamma_{R,N}}\bigl(|\partial_2v|^2-|\partial_1v|^2+k^2|v|^2\bigr)ds
\nonumber\\
&-&\int_{\Omega_{N,R} \cap W}k^2\partial_2 \bigl((1+q)n \bigr)|v|^2dx+o(1).\label{6.4}
\end{eqnarray}
Since $\partial_2 \bigl((1+q)n\bigr) \geq 0$ in $W$, we have
\begin{equation}
\int_{\Gamma_{0,N}}|\partial_2v|^2ds \leq \int_{\Gamma_{R,N}}\bigl(|\partial_2v|^2-|\partial_1v|^2+k^2|v|^2\bigr)ds+o(1).\label{6.5}
\end{equation}
By taking the limit as $N \to \infty$ we have
\begin{equation}
\int_{\Gamma_{0}}|\partial_2 v|^2ds\leq \int_{\Gamma_{R}}\bigl(|\partial_2v|^2-|\partial_1v|^2+k^2|v|^2\bigr)ds.\label{6.6}
\end{equation}
By Lemma 6.1 of \cite{Chandler and Zhang2} we have
\begin{equation}
\int_{\Gamma_{R}}\bigl(|\partial_2v|^2-|\partial_1v|^2+k^2|v|^2\bigr)ds \leq
2\mathrm{Im}\int_{\Gamma_{R}}\overline{v}\partial_2 vds.\label{6.7}
\end{equation}
From (\ref{6.2}), (\ref{6.6}), and (\ref{6.7}) we obtain that $\partial_2 v=0$ on $\Gamma_0$. Since also $v=0$ on $\Gamma_0$, Holmgren's theorem and the unique continuation principle yield $v=0$ in $\mathbb{R}^{2}_{+}$.
\end{proof}
\section*{Acknowledgments}
The author thanks Professor Andreas Kirsch, who supported him in the study of this waveguide problem and gave many comments that improved this paper.
\section{Introduction}
Spin models on complex networks have been studied extensively \cite{DGM1,IT,KAI}.
As an example of such studies, we investigate a spin model on
random graphs with arbitrary degree distributions,
that is, the behavior of spins on a non-growing network.
We investigate a Potts gauge glass model \cite{NS}
as a spin model.
The Potts gauge glass model is a spin-glass model
and is an extended model of the Edwards-Anderson model \cite{EA}
which is known as a spin-glass model.
The understanding of the Edwards-Anderson model
on random graphs and on the Bethe lattice is still incomplete \cite{DGM1, VB, MP},
and the same is true of the Potts gauge glass model
on random graphs and on the Bethe lattice.
The Nishimori line \cite{NS} is a line on the phase diagram
for the exchange interactions and the temperature.
The internal energy and the upper bound of the specific heat
are exactly calculated on the Nishimori line \cite{NS}.
The location of the multicritical point
on the square lattice was conjectured,
and it was shown that
the conjectured value is in good agreement
with the other numerical estimates
\cite{NN}.
We will obtain results on the Nishimori line.
A percolation transition of networks can occur:
a network is divided into many networks
by deleting some of its nodes and/or links.
A percolation transition of clusters can also occur.
A cluster consists of fictitious bonds
put between spins,
and one of the clusters becomes a giant component when the clusters percolate.
We discuss the percolation transition of a cluster.
We investigate
the percolation transition of the Fortuin-Kasteleyn (FK)
cluster in the FK random cluster model \cite{FK, KF}.
In the ferromagnetic spin model,
the percolation transition point
of the FK cluster
agrees with the phase transition point.
For example,
the agreement in the ferromagnetic Ising model is described
in Ref.~\citen{CK}.
On the other hand,
in the Edwards-Anderson model that has a conflict
in the interactions,
the percolation transition point of the FK cluster
disagrees with the phase transition point.
It was pointed out by de Arcangelis et al. that,
despite the disagreement,
the correct understanding of the percolation phenomenon
of the FK cluster in the Edwards-Anderson model
is important since a dynamical transition,
which is characterized by a parameter
called the Hamming distance or damage,
occurs at a temperature very close to the percolation
temperature, and the dynamical transition and
the percolation transition are related to
a transition for a signal propagating between spins \cite{ACP}.
We analytically obtain
the percolation thresholds of the FK cluster
for the Potts gauge glass model.
We use a gauge transformation for deriving results.
The gauge transformation was proposed in Ref.~\citen{NS}.
In addition to the application of the gauge transformation,
results are shown by applying a criterion \cite{Y}
for spin models on the random graphs
with arbitrary degree distributions.
In Ref.~\citen{Y}, by applying the criterion with
a gauge transformation,
the percolation thresholds of the FK cluster
for the Edwards-Anderson model on the random graphs
with arbitrary degree distributions were analytically
calculated on the Nishimori line.
We also show the results for the infinite-range model.
This article is organized as follows.
First in \S\ref{sec:2}, a complex network model and the Potts gauge glass model
are described. The FK cluster is described in \S\ref{sec:3}
and appendix.
A criterion for percolation of cluster is
explained in \S\ref{sec:4}.
We will find in \S\ref{sec:5} the percolation thresholds.
This article is summarized in \S\ref{sec:6}.
\section{A complex network model and a Potts gauge glass model} \label{sec:2}
A network consists of nodes and links.
A link connects a pair of nodes.
The complex network model that
we investigate is
random graphs with arbitrary degree distributions.
The networks have no correlation between nodes.
The node degree, $k$, is given with
a distribution $p (k)$.
The links are randomly connected between nodes.
We define a variable $b (i, j)$:
$b (i, j)$ is one when nodes $i$ and $j$
are connected by a link,
and zero when they are not.
The degree $k (i)$ of node $i$ is given by
\begin{equation}
k (i) = \sum_j b (i, j) \, .
\end{equation}
The coordination number (the average of the node degree for links),
$\langle k \rangle_N$, is given by
\begin{equation}
\langle k \rangle_N = \frac{1}{N} \sum_i^N k (i) \, ,
\end{equation}
where $\langle \, \rangle_N$ is the average over the entire network.
$N$ is the number of nodes.
The average of the square of the node degree for links,
$\langle k^2 \rangle_N$, is given by
\begin{equation}
\langle k^2 \rangle_N = \frac{1}{N} \sum_i^N k^2 (i) \, .
\end{equation}
We define
\begin{equation}
a = \frac{2 \langle k \rangle_N}{\langle k^2 \rangle_N} \, , \label{eq:a}
\end{equation}
where $a$ represents an aspect of the network.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{./a.eps}
\end{center}
\caption{
Relation between the aspect $a$ and the model on the network.
\label{fig:a}
}
\end{figure}
Figure~\ref{fig:a} shows the
relation between the aspect $a$ and the model on the network.
The network is almost a complete graph when $a$
is close to zero, and
the model on the network is almost an infinite-range model.
The model on the network is the infinite-range model
when $\langle k \rangle_N = N - 1$,
$\langle k^2 \rangle_N = (N - 1)^2$, and $a = 2 / (N - 1)$.
The network consists of many cycle graphs
when $a$ is one and $\langle k \rangle_N$ is two.
The model on the network consists of many chain models
when $a$ is one and $\langle k \rangle_N$ is two.
In the Erd\H{o}s-R\'enyi (ER) random graph model and in the Gilbert model,
the distribution of node degree is the Poisson distribution \cite{DGM1}.
The ER random graph model is a network model wherein the network
consists of a fixed number of nodes and
a fixed number of links, and the links are randomly connected between the nodes.
The Gilbert model is a network model wherein
the link between nodes is connected with a given probability.
In the ER random graph model and in the Gilbert model,
$\langle k^2 \rangle_N = \langle k \rangle_N (\langle k \rangle_N + 1)$,
and $a = 2 / (\langle k \rangle_N + 1)$.
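The relations $\langle k^2 \rangle_N = \langle k \rangle_N (\langle k \rangle_N + 1)$ and $a = 2 / (\langle k \rangle_N + 1)$ follow from the moments of the Poisson distribution and can be checked by sampling (Python sketch; the mean degree $4$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(2)
lam = 4.0                            # mean degree, arbitrary choice
k = rng.poisson(lam, size=1_000_000)

k1 = k.mean()                        # <k>_N
k2 = (k.astype(float) ** 2).mean()   # <k^2>_N
a = 2.0 * k1 / k2                    # aspect a = 2<k>_N / <k^2>_N

# Poisson moments give <k^2> = lam*(lam + 1), hence a = 2/(lam + 1).
print(a, 2.0 / (lam + 1.0))
```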
The Hamiltonian for a Potts gauge glass model, ${\cal H}$,
is given by \cite{NS}
\begin{equation}
{\cal H} = - \frac{J}{2 q}
\sum^N_i \sum_{\{ j | b(i, j) = 1\}}
\sum_{r_{i, j} = 1}^{q - 1} e^{\frac{2 \pi i}{q} ( \nu_{i, j} + q_i - q_j) r_{i, j}} \, ,
\label{eq:Hamiltonian}
\end{equation}
where $q_i$ denotes the state of the spin on node $i$, and $q_i = 0, 1, \ldots, q - 1$.
$\nu_{i, j}$ denotes a variable related to the strength
of the exchange interaction between
the spins on nodes $i$ and $j$, and $\nu_{i, j} = 0, 1, \ldots, q - 1$.
$q$ is the total number of states that a spin takes.
We use representations: $\lambda_i = e^{\frac{2 \pi i}{q} q_i}$ and
$J^{(r_{i, j})}_{i, j} = J e^{\frac{2 \pi i}{q} \nu_{i, j} r_{i, j}}$.
Then, the Hamiltonian (Eq.~(\ref{eq:Hamiltonian}))
is given by
\begin{equation}
{\cal H} = - \frac{1}{2 q} \sum^N_i \sum_{\{ j | b(i, j) = 1\}}
\sum_{r_{i, j} = 1}^{q - 1} J^{( r_{i, j} )}_{i, j}
\lambda^{r_{i, j}}_i
\lambda^{q - r_{i, j}}_j \, .
\end{equation}
The value of $\nu_{i, j}$ is given with a distribution $P (\nu_{i, j} )$.
The distribution $P ( \nu_{i,j} )$
is given by
\begin{equation}
P ( \nu_{i, j} ) = p \, \delta_{ \nu_{i, j}, 0} + \frac{1 - p}{q - 1}
(1 - \delta_{ \nu_{i, j}, 0} ) \, , \label{eq:Pnuij}
\end{equation}
where
$p$ is the probability that the exchange interaction between the spins
is ferromagnetic. $\delta$ is the Kronecker delta.
The normalization of $P ( \nu_{i, j} )$ is given by
\begin{equation}
\sum^{q - 1}_{\nu_{i, j} = 0}
P ( \nu_{i, j} ) = 1 \, . \label{eq:pnuijnorm}
\end{equation}
When $\nu_{i, j} = 0$ ($J^{( r_{i, j} )}_{i, j} = J$)
for all $(i, j)$ pairs,
the model becomes the ferromagnetic Potts model.
When $q = 2$, the model becomes the Edwards-Anderson model
and is especially called the $\pm J$ Ising model.
In Ref.~\citen{T},
it was pointed out that
a gauge transformation has no effect on thermodynamic quantities.
To calculate thermodynamic quantities,
a gauge transformation wherein the transformation is performed by
\begin{equation}
J^{( r_{i, j} ) }_{i, j} \to J^{( r_{i, j} )}_{i, j} \mu^{q - r_{i, j}}_i
\mu^{r_{i, j}}_j \, , \quad \lambda_i \to \lambda_i \mu_i \label{eq:GaugeT}
\end{equation}
is used, where $\mu_i = e^{\frac{2 \pi i}{q} \tilde{q}_i}$, and
$\tilde{q}_i$ is an arbitrary value of $q_i$.
This gauge transformation was proposed in Ref.~\citen{NS}.
By the gauge transformation,
the Hamiltonian ${\cal H}$ part becomes ${\cal H} \to {\cal H}$.
By using
Eqs.~(\ref{eq:Pnuij}) and (\ref{eq:pnuijnorm}),
the distribution $P ( \nu_{i, j} )$ is rewritten as \cite{NS}
\begin{equation}
P ( \nu_{i, j} ) = A e^{\frac{\beta_{\rm P} }{q}
\sum_{r_{i, j} = 1}^{q - 1} J^{(r_{i, j})}_{i, j} ( \nu_{i, j} )} \, ,
\label{eq:Pnuij2}
\end{equation}
where $A$ and $\beta_P$ are respectively
\begin{eqnarray}
A &=& \frac{1}{e^{\frac{\beta_P J}{q} (q - 1)}
+ (q - 1) e^{- \frac{\beta_P J}{q}}}
\, , \label{eq:A} \\
\beta_P &=& \frac{1}{J}
\ln \biggl[ p \biggl( \frac{q - 1}{1-p} \biggr)
\biggr] \, . \label{eq:betaPnuij}
\end{eqnarray}
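Since $\sum_{r_{i,j}=1}^{q-1} J^{(r_{i,j})}_{i,j}(\nu_{i,j}) = J(q-1)$ for $\nu_{i,j}=0$ and $-J$ for $\nu_{i,j}\neq 0$, Eq.~(\ref{eq:Pnuij2}) with $A$ and $\beta_P$ of Eqs.~(\ref{eq:A}) and (\ref{eq:betaPnuij}) must reproduce $P(0)=p$ and $P(\nu_{i,j} \neq 0)=(1-p)/(q-1)$ of Eq.~(\ref{eq:Pnuij}). A short numerical check (Python; the values $q=3$, $p=0.8$, $J=1$ are arbitrary):

```python
import numpy as np

q, p, J = 3, 0.8, 1.0                 # arbitrary test values

beta_P = np.log(p * (q - 1) / (1 - p)) / J          # Eq. (betaPnuij)
A = 1.0 / (np.exp(beta_P * J * (q - 1) / q)
           + (q - 1) * np.exp(-beta_P * J / q))     # Eq. (A)

def P(nu):
    # Eq. (Pnuij2); the sum over r is real for integer nu.
    s = sum(J * np.exp(2j * np.pi * nu * r / q) for r in range(1, q))
    return A * np.exp(beta_P / q * s.real)

print(P(0), p)                        # should coincide
print(P(1), (1 - p) / (q - 1))        # should coincide
```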
By performing the gauge transformation,
the distribution $P (\nu_{i, j})$ part becomes
\begin{eqnarray}
\prod_{\langle i, j \rangle} P (\nu_{i, j})
&=&
A e^{\frac{\beta_{\rm P} }{q} \sum_{\langle i, j \rangle}
\sum_{r_{i, j} = 1}^{q - 1} J^{(r_{i, j})}_{i, j} ( \nu_{i, j} )}
\nonumber \\ &\to&
\frac{A}{q^N} \sum_{\{ \mu_i \}} e^{\frac{\beta_{\rm P} }{q} \sum_{\langle i, j \rangle}
\sum_{r_{i, j} = 1}^{q - 1}
J^{( r_{i, j} )}_{i, j} ( \nu_{i, j} ) \mu^{q - r_{i, j}}_i
\mu^{r_{i, j}}_j } \, , \label{eq:Pnuij3}
\end{eqnarray}
where $\langle x, y \rangle$ denotes the nearest neighbor pairs connected by links.
\section{The Fortuin-Kasteleyn cluster} \label{sec:3}
The bond for the FK cluster is put between spins
with probability $P_{\rm FK} (q_i , q_j, \nu_{i, j})$.
The value of $P_{\rm FK}$ depends on
the interaction between spins and the states of spins.
We call the bond the FK bond in this article.
$P_{\rm FK} (q_i, q_j, \nu_{i, j})$ is given by
\begin{equation}
P_{\rm FK} (q_i, q_j, \nu_{i, j}) =
1 - e^{- \frac{\beta}{q} [
\sum_{r_{i, j} = 1}^{q - 1} J^{(r_{i, j} ) }_{i, j} (\nu_{i, j} )
\lambda^{r_{i, j}}_i ( q_i )
\lambda^{q - r_{i, j}}_j ( q_j ) + J ]} \, ,
\label{eq:PbondliljJij}
\end{equation}
where $\beta$ is the inverse temperature, and $\beta = 1 / k_B T$.
$k_B$ is the Boltzmann constant, and $T$ is the temperature.
By connecting the FK bonds, the FK clusters are generated.
In the Appendix, we derive Eq.~(\ref{eq:PbondliljJij}).
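Inserting the explicit phases into Eq.~(\ref{eq:PbondliljJij}) shows that the exponent equals $-\beta J$ when $\nu_{i,j}+q_i-q_j \equiv 0 \pmod q$ and $0$ otherwise, so that $P_{\rm FK}=1-e^{-\beta J}$ for a satisfied pair and $P_{\rm FK}=0$ otherwise, in agreement with Eqs.~(\ref{eq:a-9}) and (\ref{eq:a-10}) of the Appendix. A numerical check (Python; $q=4$ and $\beta J=0.7$ are arbitrary):

```python
import numpy as np

q, beta, J = 4, 0.7, 1.0              # arbitrary test values

def P_FK(qi, qj, nu):
    # Eq. (PbondliljJij) with lambda_i = exp(2*pi*i*q_i/q) and
    # J^{(r)}_{ij} = J exp(2*pi*i*nu*r/q); the bracket is real.
    s = sum(J * np.exp(2j * np.pi * (nu + qi - qj) * r / q)
            for r in range(1, q))
    return 1.0 - np.exp(-beta / q * (s.real + J))

for qi in range(q):
    for qj in range(q):
        for nu in range(q):
            satisfied = (nu + qi - qj) % q == 0
            expected = 1.0 - np.exp(-beta * J) if satisfied else 0.0
            assert abs(P_FK(qi, qj, nu) - expected) < 1e-12
```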
By the gauge transformation,
the $P_{\rm FK}$ part becomes $P_{\rm FK} \to P_{\rm FK}$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.65\linewidth]{./network-cluster-Potts.eps}
\end{center}
\caption{
Network and FK cluster.
Three nodes, six links, three spins, an FK bond, and an FK cluster are depicted.
Spins are aligned on each node and are represented by spin states.
In this picture,
two spins are in state one and one spin is in state zero.
\label{fig:network-cluster-Potts}
}
\end{figure}
Figure \ref{fig:network-cluster-Potts} shows a conceptual diagram
of a network and an FK cluster.
Three nodes, six links, three spins, an FK bond, and an FK cluster are depicted.
Spins are aligned on each node and are represented by spin states.
The thermodynamic quantity of the FK bond put between
the spins on nodes $i$ and $j$,
$[\langle b_{\rm FK} (i, j) \rangle_T ]_R$, is given by
\begin{equation}
[\langle b_{\rm FK} (i, j) \rangle_T ]_R =
[\langle P_{\rm FK} (q_i, q_j, \nu_{i, j}) \rangle_T ]_R \, , \label{eq:bbondijTR}
\end{equation}
where $\langle \, \rangle_T$ is the thermal average,
and $[ \, ]_R$ is the random configuration average.
The thermodynamic quantity of the node degree for FK bonds at node $i$,
$[\langle k_{\rm FK} (i) \rangle_T ]_R$, is given by
\begin{equation}
[\langle k_{\rm FK} (i) \rangle_T ]_R
= [\langle \sum_{\{ j | b(i, j) = 1\}}
P_{\rm FK} (q_i, q_j, \nu_{i, j}) \rangle_T ]_R \, . \label{eq:kbondiTR}
\end{equation}
The thermodynamic quantity of the square of the node degree for FK bonds at node $i$,
$[\langle k^2_{\rm FK} (i) \rangle_T ]_R$, is given by
\begin{eqnarray}
& & [\langle k^2_{\rm FK} (i) \rangle_T ]_R \nonumber \\
&=& [\langle \sum_{\{ j | b(i, j) = 1\}}
\sum_{\{ l | b(i, l) = 1\}} P_{\rm FK} (q_i, q_j, \nu_{i, j})
P_{\rm FK} (q_i, q_l, \nu_{i, l}) (1 - \delta_{j, l} )
\nonumber \\
&+& \sum_{\{ j | b(i, j) = 1\}} P_{\rm FK} (q_i, q_j, \nu_{i, j}) \rangle_T ]_R \, .
\label{eq:k2bondiTR}
\end{eqnarray}
The thermodynamic quantity of the node degree for FK bonds,
$[\langle k_{\rm FK} \rangle_T ]_R$, is given by
\begin{equation}
[\langle k_{\rm FK} \rangle_T ]_R = \frac{1}{N} \sum_i^N
[\langle k_{\rm FK} (i) \rangle_T ]_R \, . \label{eq:kbondTR}
\end{equation}
The thermodynamic quantity of the square of the node degree for FK bonds,
$[\langle k^2_{\rm FK} \rangle_T ]_R$, is given by
\begin{equation}
[\langle k^2_{\rm FK} \rangle_T ]_R = \frac{1}{N} \sum_i^N
[\langle k^2_{\rm FK} (i) \rangle_T ]_R \, . \label{eq:k2bondTR}
\end{equation}
\section{A criterion for percolation of cluster} \label{sec:4}
We use a conjectured criterion for deriving the percolation thresholds.
This criterion is a criterion of the percolation of cluster for spin models
on the random graphs with arbitary degree distributions,
and is given by \cite{Y}
\begin{equation}
[\langle k^2_{\rm bond} \rangle_T ]_R
\ge 2 [\langle k_{\rm bond} \rangle_T ]_R \, ,
\label{eq:k2k1bond}
\end{equation}
where $k_{\rm bond}$ is a quantity for a bond put between spins.
$k_{\rm bond}$ for the FK bond is $k_{\rm FK}$ for example.
Equation~(\ref{eq:k2k1bond}) holds with strict inequality when
the cluster is percolated,
and with equality when
the cluster is at the percolation transition point.
In addition, Eq.~(\ref{eq:k2k1bond}) holds for a sufficiently
large number of nodes in the case that
the magnitude of the bond does not depend on the degree $k (i)$.
We consider the condition that
the magnitude of the bond does not depend on the degree $k (i)$.
We define a variable for the inverse temperature $\beta$
as $\rho (\beta )$.
We set
\begin{equation}
0 < \rho (\beta ) \le 1 \, . \label{eq:rhorange}
\end{equation}
We consider a case that
$[\langle b_{\rm bond} (i, j) \rangle_T]_R$,
$[\langle k_{\rm bond } (i) \rangle _T]_R$, and
$[\langle k^2_{\rm bond} (i) \rangle_T ]_R$ are respectively written as
\begin{equation}
[\langle b_{\rm bond} (i, j) \rangle_T ]_R =
\rho (\beta ) \, , \label{eq:bondij}
\end{equation}
\begin{equation}
[\langle k_{\rm bond} (i) \rangle_T ]_R =
\rho (\beta ) \, k (i) \, , \label{eq:k1bondi}
\end{equation}
\begin{eqnarray}
[\langle k^2_{\rm bond} (i) \rangle_T ]_R &=&
\rho^2 ( \beta ) \, k (i)
[ k (i) - 1 ] \nonumber \\
&+& \rho ( \beta ) \, k (i) \, .
\label{eq:k2bondi}
\end{eqnarray}
In this case, no bias with respect to $k (i)$ appears
in the statistics of the bonds.
Therefore, we refer to the case in which
$[\langle b_{\rm bond} (i, j) \rangle_T]_R$,
$[\langle k_{\rm bond } (i) \rangle _T]_R$, and
$[\langle k^2_{\rm bond} (i) \rangle_T ]_R$ are respectively written as
Eqs.~(\ref{eq:bondij}), (\ref{eq:k1bondi}), and (\ref{eq:k2bondi}) as the case
in which the magnitude of the bond does not depend on $k (i)$.
This criterion is conjectured and has not been derived exactly yet.
On the other hand, it was confirmed~\cite{Y} that
the criterion is exact at several extremal points
when applied to the Edwards-Anderson model.
In this article,
we do not examine this criterion and
just apply this criterion to the present system.
\section{Results} \label{sec:5}
We will obtain the percolation thresholds of the FK cluster in this section.
By using Eqs.~(\ref{eq:GaugeT}), (\ref{eq:Pnuij3}),
(\ref{eq:PbondliljJij}), and (\ref{eq:bbondijTR}),
when $\beta = \beta_P$,
the thermodynamic quantity of the FK bond put between
the spins on nodes $i$ and $j$,
$[\langle b_{\rm FK} (i, j) \rangle_T ]_R$, is obtained as
\begin{eqnarray}
& & [\langle b_{\rm FK} (i, j ) \rangle_T ]_R \nonumber \\
&=& \sum_{ \{ \nu_{l, m} \}}
\prod_{\langle l, m \rangle} P ( \nu_{l, m})
\frac{\sum_{\{ q_l \} }
P_{\rm FK} ( q_i, q_j, \nu_{i, j})
\, e^{ - \beta_P {\cal H} (\{ q_l \}, \{ \nu_{l, m} \})}}
{\sum_{\{ q_l \} }
e^{- \beta_P {\cal H} (\{ q_l \}, \{ \nu_{l, m} \})}} \nonumber \\
&=& \frac{A^{N_B}}{q^N} \sum_{ \{ \nu_{l, m} \}}
\sum_{\{ q_l \} } P_{\rm FK} (q_i, q_j, \nu_{i, j})
\, e^{- \beta_P {\cal H} (\{ q_l \}, \{ \nu_{l, m} \})} \nonumber \\
&=& \frac{e^{\beta_P J} - 1}{e^{\beta_P J} + q - 1 } \, ,
\label{eq:bbondij}
\end{eqnarray}
where $N_B$ is the number of all links, and $N_B = N \langle k \rangle_N / 2$.
By using Eqs.~(\ref{eq:GaugeT}), (\ref{eq:Pnuij3}),
(\ref{eq:PbondliljJij}), and (\ref{eq:kbondiTR}),
when $\beta = \beta_P$,
the thermodynamic quantity of the node degree for FK bonds at node $i$,
$[\langle k_{\rm FK} (i) \rangle_T ]_R$, is obtained as
\begin{equation}
[\langle k_{\rm FK} (i) \rangle_T ]_R =
\biggl( \frac{e^{\beta_P J} - 1}{e^{\beta_P J} + q - 1 } \biggr)
\, k (i) \, . \label{eq:k1bondi2}
\end{equation}
By using
Eqs.~(\ref{eq:GaugeT}), (\ref{eq:Pnuij3}), (\ref{eq:PbondliljJij}),
and (\ref{eq:k2bondiTR}),
when $\beta = \beta_P$,
the thermodynamic quantity of the square of the node degree for FK bonds at node $i$,
$[\langle k^2_{\rm FK} (i) \rangle_T ]_R$, is obtained as
\begin{eqnarray}
& & [\langle k^2_{\rm FK} (i) \rangle_T ]_R \nonumber \\
&=&
\biggl( \frac{e^{\beta_P J} - 1}{e^{\beta_P J} + q - 1 } \biggr)^2
\, k (i) [ k (i) - 1 ]
\nonumber \\
&+&
\biggl( \frac{e^{\beta_P J} - 1}{e^{\beta_P J} + q - 1 } \biggr)
\, k (i) \, .
\label{eq:k2bondi2}
\end{eqnarray}
When we set
\begin{equation}
\rho ( \beta_P ) =
\frac{e^{\beta_P J} - 1}{e^{\beta_P J} + q - 1 }
\, , \label{eq:rho}
\end{equation}
Eqs.~(\ref{eq:bbondij}),
(\ref{eq:k1bondi2}), and (\ref{eq:k2bondi2}) are
respectively formulated as Eqs.~(\ref{eq:bondij}),
(\ref{eq:k1bondi}), and (\ref{eq:k2bondi}).
Therefore,
the magnitude of the bond does not depend on $k (i)$.
By using Eqs.~(\ref{eq:kbondTR}), (\ref{eq:k2bondTR}), (\ref{eq:k2k1bond}),
(\ref{eq:k1bondi2}), and (\ref{eq:k2bondi2}), we obtain
\begin{equation}
\frac{2 (e^{\beta_P J} - 1)}{2 (e^{\beta_P J} - 1) + q}
\ge \frac{2 \langle k \rangle_N}{\langle k^2 \rangle_N} \, . \label{eq:bondPc}
\end{equation}
Equation~(\ref{eq:bondPc}) holds with strict inequality when
the cluster is percolated,
and with equality when
the cluster is at the percolation transition point.
From Eqs.~(\ref{eq:rhorange}) and (\ref{eq:rho}),
there is the percolation transition point for $0 < \beta_P \le \infty$.
From Eq.~(\ref{eq:bondPc}),
there is the percolation transition point for $0 < a \le 1$.
By using Eqs.~(\ref{eq:betaPnuij}) and (\ref{eq:bondPc}),
the probability $p$ that
the interaction is ferromagnetic is obtained as
\begin{equation}
p = \frac{2 (1 - a) + a q}{(2 - a) q} \label{eq:pbond}
\end{equation}
at the percolation transition point.
By using Eqs.~(\ref{eq:betaPnuij}) and (\ref{eq:pbond}),
the percolation transition temperature $T_P$ is obtained as
\begin{equation}
T_P = \frac{J}{k_B \ln \biggl[ \frac{2 ( 1 - a ) + a q}{2 (1 - a)}
\biggr]}
\, . \label{eq:TPbond}
\end{equation}
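Equations~(\ref{eq:pbond}) and (\ref{eq:TPbond}) can be verified against the equality case of Eq.~(\ref{eq:bondPc}): substituting $e^{\beta_P J} = p(q-1)/(1-p)$ with $p$ from Eq.~(\ref{eq:pbond}) should give exactly $a$ on the left-hand side. A Python sketch (the sampled values of $a$ and $q$ are arbitrary):

```python
import numpy as np

J = kB = 1.0

for a in (0.1, 0.5, 0.9):
    for q in (3, 10, 100):
        p = (2 * (1 - a) + a * q) / ((2 - a) * q)     # Eq. (pbond)
        e = p * (q - 1) / (1 - p)                     # exp(beta_P J)
        lhs = 2 * (e - 1) / (2 * (e - 1) + q)         # Eq. (bondPc), equality
        assert abs(lhs - a) < 1e-12
        T_P = J / (kB * np.log((2 * (1 - a) + a * q) / (2 * (1 - a))))
        assert abs(T_P - J / (kB * np.log(e))) < 1e-12  # Eq. (TPbond)
```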
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{./GPG.eps}
\end{center}
\caption{
Percolation thresholds of the FK cluster for the Potts gauge glass model.
(a) The relation between the aspect $a$ and the probability
$p$ is shown.
(b) The relation between the aspect $a$ and the percolation
transition temperature $T_P$ is shown.
The solid line shows the result of $q = 3$,
the dotted line shows the result of $q = 10$,
and the short-dashed line shows the result of $q = 100$.
$J / k_B$ is set to 1.
\label{fig:GPG}
}
\end{figure}
Figure~\ref{fig:GPG} shows
the percolation thresholds of the FK cluster for the Potts gauge glass model.
Figure~\ref{fig:GPG}(a) shows
the relation between the aspect $a$ and the probability $p$.
Eq.~(\ref{eq:pbond}) is used for showing Fig.~\ref{fig:GPG}(a).
Figure~\ref{fig:GPG}(b) shows
the relation between the aspect $a$ and the percolation
transition temperature $T_P$.
Eq.~(\ref{eq:TPbond}) is used for showing Fig.~\ref{fig:GPG}(b).
The solid line shows the result of $q = 3$,
the dotted line shows the result of $q = 10$,
and the short-dashed line shows the result of $q = 100$.
$J / k_B$ is set to $1$.
For the ferromagnetic Potts model on the same network,
the phase transition temperature $T^{({\rm Ferro})}_C$ is \cite{DGM2}
\begin{equation}
T^{({\rm Ferro})}_C
= \frac{J}{k_B \ln \biggl[ \frac{2 ( 1 - a ) + a q}{2 (1 - a)}
\biggr]} \, .
\end{equation}
$T_P$ (Eq.~(\ref{eq:TPbond}))
coincides with
$T^{({\rm Ferro})}_C$.
The complete graph corresponds to $a \sim 0$.
We set
$\langle k \rangle_N = N - 1$, $\langle k^2 \rangle_N = (N - 1)^2$,
$a = 2 / (N - 1)$, and $J \to J / \sqrt{N}$.
From the settings, the model on the network becomes the infinite-range model.
By using Eq.~(\ref{eq:pbond}),
the probability $p^{({\rm IR})}$ that
the interaction is ferromagnetic is obtained as
\begin{equation}
p^{({\rm IR})} = \frac{N - 3 + q}{(N - 2) q} \to \frac{1}{q} \label{eq:pbond2}
\end{equation}
for a sufficiently large number of nodes at the percolation transition point.
By using Eq.~(\ref{eq:TPbond}),
the percolation transition temperature $T^{({\rm IR})}_P$ is obtained as
\begin{equation}
T^{({\rm IR})}_P =
\frac{J}{k_B \sqrt{N} \ln ( 1 + \frac{q}{N - 3})}
\to \frac{J \sqrt{N}}{k_B q} \label{eq:TPbond2}
\end{equation}
for a sufficiently large number of nodes.
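The limits in Eqs.~(\ref{eq:pbond2}) and (\ref{eq:TPbond2}) can be confirmed by evaluating the finite-$N$ expressions at increasing $N$ (Python sketch; $q=3$ is an arbitrary choice):

```python
import numpy as np

J = kB = 1.0
q = 3

for N in (10**4, 10**6, 10**8):
    p_IR = (N - 3 + q) / ((N - 2) * q)                      # Eq. (pbond2)
    T_IR = J / (kB * np.sqrt(N) * np.log(1 + q / (N - 3)))  # Eq. (TPbond2)
    # Ratios against the limiting values 1/q and J*sqrt(N)/(kB*q):
    print(N, p_IR * q, T_IR * kB * q / (J * np.sqrt(N)))
```

Both printed ratios approach one as $N$ grows, consistent with $p^{({\rm IR})} \to 1/q$ and $T^{({\rm IR})}_P \to J\sqrt{N}/(k_B q)$.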
\section{Summary} \label{sec:6}
In this article,
the Potts gauge glass model on the random graphs with arbitrary
degree distributions was investigated.
The value of
$[\langle b_{\rm FK} (i, j) \rangle_T]_R$,
$[\langle k_{\rm FK} (i) \rangle_T]_R$, $[\langle k^2_{\rm FK} (i) \rangle_T]_R$,
$[\langle k_{\rm FK} \rangle_T]_R$, and $[\langle k^2_{\rm FK} \rangle_T]_R$
on the Nishimori line were shown.
They are quantities for the FK bonds and are exact even for a finite number of nodes.
It is known that
the internal energy and the upper bound of the specific heat
are exactly calculated
on the Nishimori line in the Potts gauge glass model
without the dependence
of the network (lattice) \cite{NS}.
In this article,
it was realized that, as a property of the Nishimori line,
the magnitude of the FK bond does not depend on the degree $k (i)$.
In this article,
we showed the percolation thresholds of the FK cluster.
It was shown that
the percolation transition temperature $T_P$ (Eq.~(\ref{eq:TPbond}))
on the Nishimori line
for the Potts gauge glass model on the present network coincides with
the phase transition temperature $T^{({\rm Ferro})}_C$ \cite{DGM2}
for the ferromagnetic Potts model on the same network.
The percolation thresholds of the FK cluster
for the infinite-range model were also shown.
We used a conjectured criterion Eq.~(\ref{eq:k2k1bond})
to obtain the percolation thresholds.
For this criterion, it was confirmed~\cite{Y} that
it is exact at several extremal points
when applied to the Edwards-Anderson model.
Therefore, our entire set of results may be exact.
\section*{Acknowledgment}
We would like to thank F. Igl\'oi for useful comments.
\section*{Appendix: the probabilities for connecting spins}
We will derive Eq.~(\ref{eq:PbondliljJij})
according to the method of Kawashima and Gubernatis in Ref.~\citen{KG}.
We define the weight of two spins on nodes connected by a link
as $w (q_i, q_j, \nu_{i, j})$.
$w (q_i, q_j, \nu_{i, j})$ is given by
\begin{equation}
w (q_i, q_j, \nu_{i, j})
= \exp \biggl\{ \frac{\beta J}{q}
\sum_{r_{i, j} = 1}^{q - 1}
\exp \biggl[ \frac{2 \pi i}{q}
\biggl( \nu_{i, j} + q_i - q_j \biggr) r_{i, j}
\biggr] \biggr\} \, .
\label{eq:a-1}
\end{equation}
We define the weight $w$ for $\nu_{i, j} + q_i - q_j = 0$
as $w_{\rm para}$. We obtain
\begin{equation}
w_{\rm para} (q_i, q_j, \nu_{i, j})
= \exp \biggl[ \frac{\beta J (q - 1) }{q} \biggr] \, . \label{eq:a-2}
\end{equation}
We define the weight $w$ for $\nu_{i, j} + q_i - q_j \ne 0$
as $w_{\rm anti}$. We obtain
\begin{equation}
w_{\rm anti} (q_i, q_j, \nu_{i, j})
= \exp \biggl( - \frac{\beta J}{q} \biggr) \, . \label{eq:a-3}
\end{equation}
We define the weight of graph for connecting two spins as $w (g_{\rm conn})$.
We define the weight of graph for disconnecting two spins
as $w (g_{\rm disc})$.
We are able to write
\begin{eqnarray}
w_{\rm para} (q_i, q_j, \nu_{i, j})
&=& w (g_{\rm conn}) + w (g_{\rm disc}) \, , \label{eq:a-5} \\
w_{\rm anti} (q_i, q_j, \nu_{i, j})
&=& w (g_{\rm disc}) \, . \label{eq:a-6}
\end{eqnarray}
By using Eqs.~(\ref{eq:a-2}), (\ref{eq:a-3}),
(\ref{eq:a-5}), and (\ref{eq:a-6}),
we obtain
\begin{eqnarray}
w (g_{\rm conn})
&=& \exp \biggl[ \frac{\beta J (q - 1) }{q} \biggr]
- \exp \biggl( - \frac{\beta J}{q} \biggr) \, , \label{eq:a-7} \\
w (g_{\rm disc})
&=& \exp \biggl( - \frac{\beta J}{q} \biggr) \, . \label{eq:a-8}
\end{eqnarray}
We define the probability of connecting the two spins as
$P_{\rm para} (g_{\rm conn})$ for $\nu_{i, j} + q_i - q_j = 0$,
and as $P_{\rm anti} (g_{\rm conn})$ for $\nu_{i, j} + q_i - q_j \ne 0$.
We can then write
\begin{eqnarray}
P_{\rm para} (g_{\rm conn})
&=& \frac{w (g_{\rm conn})}{w (g_{\rm conn}) + w (g_{\rm disc})} \,
, \label{eq:a-9} \\
P_{\rm anti} (g_{\rm conn})
&=& 0 \, . \label{eq:a-10}
\end{eqnarray}
By using Eqs.~(\ref{eq:a-7}), (\ref{eq:a-8}), (\ref{eq:a-9}), and (\ref{eq:a-10}),
we derive Eq.~(\ref{eq:PbondliljJij}).
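As a consistency check, the ratio in Eq.~(\ref{eq:a-9}) simplifies to $1-e^{-\beta J}$, independently of $q$. A minimal numerical sketch of Eqs.~(\ref{eq:a-7})--(\ref{eq:a-10}) (the function and argument names are ours, not from the text):

```python
import math

def connection_probability(beta, J, q, delta):
    """Bond-activation probability for two q-state Potts spins.
    `delta` stands for nu_ij + q_i - q_j; the arithmetic is understood
    modulo q in practice."""
    if delta % q != 0:
        return 0.0  # Eq. (a-10): never connect in the "anti" case
    w_conn = math.exp(beta * J * (q - 1) / q) - math.exp(-beta * J / q)  # Eq. (a-7)
    w_disc = math.exp(-beta * J / q)                                     # Eq. (a-8)
    return w_conn / (w_conn + w_disc)                                    # Eq. (a-9)
```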
\section{Introduction}
The short-time motion of small particles fluctuating in a Newtonian fluid is strongly affected by fluid inertia, that is, the vorticity of the host fluid surrounding the dispersed particles.
If a dispersed particle accelerates due to Brownian forces, it affects the motion of the host fluid in the neighborhood of the particle, while the vorticity generated by the particle's motion then affects the motion of the same particle.
These effects are referred to as memory effects and have an important role in the dynamical motion of a dispersion on short time scales \cite{Weitz,Zhu,lukic}.
The vorticity diffuses away on a kinematic time scale $\tau_\nu = a^2/\nu$, where $a$ is the radius of the spherical particle and $\nu$ is the kinematic viscosity of the host fluid ($\sim 10^{-6}$ s for $1\ \mu$m in water).
When the vorticity diffuses away much faster than the particle moves, {\it i.e.}, $\tau_\nu \ll \tau_B=M/6\pi\eta a$, where $M$ is the mass of a Brownian particle and $\eta$ is the shear viscosity of the host fluid, the motion of a Brownian particle is well approximated by the normal Langevin equation, which is valid for strong damping (Reynolds number $Re\rightarrow 0$) or long time scales;
however,
this equation, in which the effects of fluid inertia are ignored, is not applicable to a dispersion composed of neutrally buoyant particles since $\tau_B$ is comparable to $\tau_\nu$.
For a complete understanding of the short-time motion of a dispersion, the inertias of the particle and the host fluid cannot be neglected.
One way to account for memory effects is to simultaneously resolve the fluid motion with the particle motion as a boundary condition to be satisfied.
We refer to this approach as the direct numerical simulation (DNS) approach.
Within the DNS approach, various numerical methods have been developed \cite{LB,FP,LB0,SR,NS,IBM}, and the power-law decay behavior in the velocity autocorrelations of a free Brownian particle has been accurately reproduced.
This slow relaxation of the correlation behavior is one of the main features of memory effects.
Although most of these methods have been applied to dispersions composed of free Brownian particles at thermal equilibrium, dispersions under flows that are far from equilibrium have not been examined in detail, even for the simple case of a single Brownian particle in a shear flow on a short time scale, $\tau_{\nu}$.
Furthermore, most numerical methods used for concentrated dispersions under shear, which are widely used for measuring rheological properties, are limited to a Reynolds number of zero, and the short-time motions of the dispersed particles in concentrated dispersions cannot be correctly tracked.
Recently, we have developed a numerical method, known as the ``Smoothed Profile Method (SPM)'' \cite{key1,key2}, for the DNS of particulate flows.
Its computational accuracy and efficiency have been examined carefully
by Lio {\it et al.} for several flow problems \cite{err}.
The SPM has been applied to a dispersion composed of free Brownian particles at thermal equilibrium \cite{key3,key5}.
We have also succeeded in reproducing the power-law decay behavior in the translational and rotational velocity autocorrelations of a Brownian particle, and these results
agree well with hydrodynamic analytical solutions for a free Brownian particle in an infinite fluid that account for memory effects.
In order to simulate dispersions in non-equilibrium conditions on short time scales in which memory effects become significant, we have modified the SPM.
The primary objective of the present work is to accurately examine the short-time motion of Brownian particles in a simple shear flow by using the modified SPM.
In this paper, we first present the modified SPM, in which external forces are introduced to impose a shear flow into the system.
In order to validate the method, we next compare the numerical results for the mean square displacement (MSD) with a hydrodynamic analytical solution of a Brownian particle in simple shear flows that account for fluid inertia.
Furthermore, we apply our method to a dispersion composed of many spherical particles under shear.
The MSD in the vorticity direction is then calculated for several volume fractions, and the time evolution is discussed.
Finally, the shear-induced diffusion coefficients are measured from the long-time behavior of the MSD, and the dependence of the determined diffusion coefficients on the volume fraction is examined.
\section{Simulation Method}
Let us consider a monodisperse dispersion of repulsive spherical particles in a Newtonian host fluid.
The dispersion is subjected to shear by an external force.
The position of the $i$th particle is ${\bm R_i}$, the translational velocity is ${\bm V_i}$, and the rotational velocity is ${\bm \Omega_i}$.
The velocity and pressure fields of the host fluid are $\bm v(\bm x, t)$ and $p(\bm x, t)$, respectively.
These field quantities are defined on three-dimensional Cartesian grids: $\bm x \in [0,L_x]\times[-L_y/2,L_y/2]\times[0,L_z]$.
In order to distinguish the particle and fluid domains on the grids, a smoothed function $\phi(\bm x, t)$, which is equal to $1$ in the particle domains and $0$ in the fluid domains, is introduced.
These domains are separated by a thin interfacial domain of thickness $\xi$.
The length unit is taken to be the lattice spacing $\Delta$, and the time unit is $\rho_f\Delta^2/\eta$, where $\rho_f$ denotes the density of the host fluid.
The time evolution of the $\it i$th dispersed particle with mass $M_i$ and moment of inertia $\bm I_i$ is governed by Newton's equations of motion:
\begin{align}
M_i \dot {\bm V_i}&= {\bm F^H_i} + {\bm F^C_i} + {\bm G_i^V},\ \ \
\dot {\bm R_i} = {\bm V_i},\\
{\bm I_i}\cdot \dot{\bm \Omega_i} &= {\bm N^H_i} + {\bm G_i^\Omega},
\end{align}
where $\bm F^H_i$ and $\bm N^H_i$ are the hydrodynamic forces and torques exerted by the host fluid on the particle \cite{key1,key2}.
$\bm F_i^C$ is a repulsive force that is employed to prevent particle overlaps, and a truncated Lennard-Jones potential, $V(r_{ij})=4[(\sigma/r_{ij})^{36}-(\sigma/r_{ij})^{18}+1/4]$ for $r_{ij}<2^{1/18}\sigma$ or $V(r_{ij})=0$, is adopted in this work.
Here $r_{ij}=|\bm R_i - \bm R_j|$, and $\sigma=2a$ represents the diameter of particles.
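A minimal sketch of this repulsive pair potential (the function name is ours; an energy scale $\epsilon=1$ is implied by the expression above):

```python
def repulsive_LJ(r_ij, sigma):
    """Truncated Lennard-Jones potential used to prevent particle
    overlaps; sigma = 2a is the particle diameter.  The shift +1/4
    and the cutoff 2**(1/18)*sigma make V vanish continuously at
    the cutoff, leaving a purely repulsive interaction."""
    if r_ij >= 2 ** (1 / 18) * sigma:
        return 0.0
    s18 = (sigma / r_ij) ** 18
    return 4 * (s18 * s18 - s18 + 0.25)
```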
$\bm G_i^V$ and $\bm G_i^\Omega$ are random forces and torques, respectively, due to thermal fluctuations.
These fluctuations are introduced as white noise with a zero mean and correlations $ \langle \bm G_i^V(t) \bm G_j^V(0)\rangle= \alpha^V \bm I\delta(t)\delta_{ij}$ and $\langle \bm G_i^{\Omega}(t) \bm G_j^{\Omega}(0)\rangle= \alpha^\Omega \bm I\delta(t)\delta_{ij}$, where $\alpha^V$ and $\alpha^\Omega$ are numerical parameters that control the translational and rotational particle temperatures, namely, $T^V$ and $T^\Omega$ \cite{key3,key5}.
The angular brackets denote taking an average over an equilibrium ensemble.
The temperatures are determined by the following procedure:
first, a single Brownian particle at thermal equilibrium is simulated with fixed $\alpha^V$ and $\alpha^{\Omega}$.
Then the translational and rotational long-time diffusion coefficients are obtained from the simulation.
By comparing these diffusion coefficients with the Stokes-Einstein diffusion coefficients, $D^V_0=k_BT^V/6\pi\eta a$ for the translational motion and $D^{\Omega}_0=k_BT^{\Omega}/8\pi\eta a^3$ for the rotational motion, we can finally determine the temperatures.
Since both $\alpha^V$ and $\alpha^{\Omega}$ are chosen to satisfy $k_BT^V=k_BT^{\Omega}$, the temperatures are simply written as $k_BT$ throughout this paper.
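The Stokes-Einstein values used in this calibration can be written as a minimal sketch (the function name is ours):

```python
import math

def stokes_einstein(kBT, eta, a):
    """Stokes-Einstein diffusion coefficients against which the
    measured long-time diffusion coefficients are matched to fix
    the translational and rotational temperatures."""
    D_V = kBT / (6 * math.pi * eta * a)           # translational
    D_Omega = kBT / (8 * math.pi * eta * a ** 3)  # rotational
    return D_V, D_Omega
```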
The time evolution of the host fluid is governed by the Navier-Stokes equations:
\begin{align}
\rho_f(\partial_t {\bm v} + {\bm v}\cdot \nabla {\bm v}) &= \nabla \cdot\bm \sigma
+\rho_f\phi{\bm f_p} +\rho_f \bm f^{shear},
\label{nseq}\\
\bm \sigma &= -p\bm I + \eta\{\nabla \bm v + (\nabla \bm v)^T\}
\end{align}
where the incompressibility condition $\nabla \cdot \bm v=0$ is imposed. $\bm f^{shear}(\bm x, t)$ is an external force field that is introduced to enforce the following velocity profile:
\begin{align}
v_{x}(y) &=
\begin{cases}
\dot\gamma (-y - L_y/2), &(-L_y/2<y\leq -L_y/4)\\
\dot\gamma y, &(-L_y/4<y\leq L_y/4)\\
\dot\gamma (-y + L_y/2), &(L_y/4<y\leq L_y/2)\\
\end{cases}
\end{align}
where $\dot\gamma$ denotes the shear rate of the imposed flow and $y$ denotes the distance in the velocity-gradient direction.
This velocity profile, schematically depicted in Fig. \ref{profile}, enables us to solve the motion of the host fluid with periodic boundary conditions.
Couette flow profiles are approximately formed over a range from $y=-L_y/4$ to $y=L_y/4$, although the zigzag profile becomes a strict Couette flow only when $L_y \rightarrow \infty$.
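A minimal sketch of this zigzag profile (the function name is ours):

```python
def v_x(y, gdot, Ly):
    """Zigzag shear profile imposed by the external force field:
    periodic in y, Couette-like on the central interval (-Ly/4, Ly/4]."""
    if y <= -Ly / 4:
        return gdot * (-y - Ly / 2)
    elif y <= Ly / 4:
        return gdot * y
    return gdot * (-y + Ly / 2)
```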
$\phi {\bm f_p}$ represents the body force that ensures the rigidity of particles and the appropriate no-slip boundary conditions at the fluid/particle interface, as elaborated further in references \cite{key1,key2,err}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]
{fig1.eps}
\end{center}
\caption{\label{profile}{Schematic diagram of the zigzag velocity flow. The arrows represent the velocity vectors of the host fluid in the flow direction. The $x$, $y$, and $z$ axes represent the flow, velocity-gradient, and vorticity directions, respectively.}}
\end{figure}
\section{Results and Discussion}
The computational domain is three-dimensional, with $64 \times 64 \times 64$ grid points and periodic boundary conditions.
The numerical parameters for both the host fluid and the spherical particles are $\Delta=1$, $\eta=1$, $\rho_f=1$, $a=5$, $\xi=2$, and particle density $\rho_p=1$.
The imposed shear rate is $\dot\gamma=0.005$, and the temperature is $k_BT=0.07$.
The temperature was determined by measuring the long-time diffusion coefficient of a single Brownian particle at
thermal equilibrium before the shear was imposed.
This system has dimensionless parameters such that the Peclet number $Pe=6\pi\eta a^3\dot\gamma/k_BT\simeq 170$ and the particle Reynolds number $Re_p=\rho_p\dot\gamma a^2 /\eta= 0.125$.
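These dimensionless numbers and the characteristic times quoted in this section follow directly from the stated parameters; a minimal check (variable names ours):

```python
import math

# Parameters quoted in the text (lattice units: Delta = 1)
eta, rho_f, rho_p = 1.0, 1.0, 1.0
a, kBT, gdot = 5.0, 0.07, 0.005

Pe = 6 * math.pi * eta * a ** 3 * gdot / kBT   # Peclet number, ~170
Re_p = rho_p * gdot * a ** 2 / eta             # particle Reynolds number
tau_nu = a ** 2 * rho_f / eta                  # kinematic time
D0 = kBT / (6 * math.pi * eta * a)             # Stokes-Einstein coefficient
tau_D = a ** 2 / D0                            # diffusion time, ~3.4e4
```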
Initially, the spherical particle is located at the central position of the system.
The volume fraction of a single particle is $\Phi=0.002$.
An important feature of a single Brownian particle in a simple shear flow is that the MSD in the flow direction varies asymptotically with time as $t^3$.
Although one would expect a linear $t$-dependence of the MSD in the long-time limit, this non-diffusional behavior arises from a coupling between diffusive motion in the velocity-gradient direction and convective motion due to shear in the flow direction.
Theoretical solutions to this problem show that the MSD in the flow direction ($x$) is composed of three parts:
\begin{align}
\langle R^x_i(t)^2 \rangle = v^2_x(y_0)t^2 +\langle R^x_i(t)^2 \rangle_{\dot\gamma} + \langle R^x_i(t)^2 \rangle_0 \label{msdx}
\end{align}
assuming that, at $t=0$, the particle is at the initial position $(0, y_0, 0)$.
The first term on the right-hand side of Eq. (\ref{msdx}) is a pure shear contribution and represents the simple translation of a particle along the streamline in the flow direction.
The second term is the coupling term described above, which represents the convection induced by diffusion.
The third term, which has the same form as the MSD of a free Brownian particle, is a purely thermal contribution.
Similar to those in the $x$ direction, the MSDs in the velocity-gradient direction ($y$) and the vorticity direction ($z$) are derived theoretically:
$\langle R^y_i(t)^2 \rangle=\langle R^y_i(t)^2 \rangle_0$ and
$\langle R^z_i(t)^2 \rangle=\langle R^z_i(t)^2 \rangle_0$.
These quantities are determined analytically for a single Brownian particle obeying the normal Langevin equation (LE) or the generalized Langevin equation (GLE) with memory effects \cite{ANA,ANA1,ANA2}.
The time evolution of the MSD in the $x$ direction ($\bigcirc$) for a single spherical particle under shear is shown in Fig. \ref{pict2}.
The MSD was calculated via
$\langle |R_i^x(t)-R^x_i(0)-v_x(y_0)t|^2\rangle$
to eliminate the purely shear contribution that corresponds to the first term of Eq. (\ref{msdx}).
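Schematically, this subtraction can be computed as follows (a sketch with illustrative names; the ensemble average over independent trajectories is implied, and $v_x(y_0)=\dot\gamma y_0$ is assumed for a particle in the central Couette region):

```python
def shear_free_sq_displacement(x_traj, y0, gdot, dt):
    """Squared displacement in the flow direction with the pure
    streamline translation v_x(y0)*t removed; x_traj is x(t)
    sampled every dt, starting from x(0)."""
    x0 = x_traj[0]
    return [(x - x0 - gdot * y0 * n * dt) ** 2
            for n, x in enumerate(x_traj)]
```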
The MSD is scaled by $k_BT$.
The MSD is in excellent agreement with
the analytical solution for the GLE.
For short times of up to $t\sim10^2$, the motion of the Brownian particle in the $x$ direction is like that of a free Brownian particle obeying the GLE.
For long times, $t\gg \tau_\nu=25$, the MSD asymptotically approaches $2D_0\dot\gamma^2 t^3/3$, where $D_0=k_BT/6\pi\eta a$.
This $t^3$ regime in the MSD is approached much more slowly, as $t^{-1/2}$, than in the analytical solution for the LE.
The transient behavior of the MSD depends strongly on whether the memory effects are taken into account.
The MSD $\langle |R^z_i(t)-R^z_i(0)|^2\rangle$ in the $z$ direction ($\blacksquare$) is also plotted in Fig. \ref{pict2}.
The numerical results for the MSD in the $z$ direction agree well with the analytical solution for the GLE of a free Brownian particle, where the linear $t$ regime is approached in a much slower manner, as $t^{-1/2}$, and diffusive motion is attained on time scales of $O(10^4)$.
The diffusion time characterizing the diffusive motion is $\tau_D=a^2/D_0\simeq 3.4 \times 10^4$, which measures the particle diffusion over the particle size.
Another dynamical quantity of interest is the positional cross-correlation $\langle R^x_i(t)R^y_i(t)\rangle$ of a Brownian particle in a simple shear flow.
Its analytical form has been derived for both the LE and the GLE.
The long-time behaviors in both cases show the same dependence on $t$, namely $D_0\dot\gamma t^2$.
The cross-correlation was calculated via $\langle (R_i^x(t) - R_i^x(0)-V^x_i(0)t)(R_i^y(t)-R_i^y(0))\rangle$.
The numerical results ($\bigtriangleup$) are plotted in Fig. \ref{pict2} and are in good agreement with the analytical solution for the GLE.
This detailed analysis of the MSD is sufficient to confirm the validity of our method of incorporating memory effects;
however, within our method, it is assumed that the thermal fluctuations can be represented by white, Gaussian noise.
This means that correlations between the thermal noise at different times are completely ignored, although
in the theoretical framework of the GLE the noise is correlated.
The thermal noise memory, which we ignore, is considered to affect the particle's motion on very fast time scales $t<\tau_B\sim 4.5$.
On these very short time scales, $t<\tau_B$,
a gap between the hydrodynamic analytical solutions and the numerical results is clearly observed in previous studies \cite{key3, key5}.
Although our method is not applicable to particle motion for $t<\tau_B$, the method can be accurately applied to the motion of particles in a shear flow on time scales $t>\tau_B$, in which the memory effects become significant.
\begin{figure}[hbt]
\begin{center}
\includegraphics[scale=0.8]
{fig2.eps}
\end{center}
\caption{\label{pict2}{Mean square displacement scaled by $k_BT$ of a single spherical particle fluctuating in a Newtonian fluid under shear at $k_BT=0.07$ and $\dot\gamma=0.005$: ($\bigcirc$) flow direction, ($\blacksquare$) vorticity direction, and ($\bigtriangleup$) positional cross-correlation.
The analytical solution of the MSD in the flow direction ($x$) for a Brownian particle in a shear flow: the dashed line represents the LE and the solid line the GLE. {The dash-dotted line is the analytical solution of the MSD in the vorticity direction ($z$) for the GLE, which has the same form as that for the GLE of a free Brownian particle.}
The analytical solution of the positional cross-correlation for a Brownian particle in a shear flow: the dotted line represents the LE and the dashed line the GLE. The Brownian time $\tau_B\simeq 4.5$, the kinematic time $\tau_\nu=25$, and the diffusion time $\tau_D\simeq 3.4\times 10^4$. The parameters for both the particle and fluid are $a=5$, $\xi=2$, $\rho_p=1$, $\rho_f=1$, and $\eta=1$. The time unit is $\Delta^2/\nu=1$, and the length unit is $\Delta=1$.}}
\end{figure}
Most numerical approaches performed so far for concentrated
particle dispersions are based on Stokesian dynamics \cite{SD};
those simulations are thus valid only in the zero-Reynolds-number limit.
Recently, concentrated dispersions at finite Reynolds number have been
simulated using DNS methods based on stochastic rotation dynamics \cite{SRDm} or the lattice Boltzmann method \cite{LB1, LB2, LB3}.
We have applied the modified SPM to concentrated dispersions at a finite
Reynolds number $Re_p=0.125$ in order to examine the short-time motion
of Brownian particles in a shear flow at finite volume fractions.
Simulations were performed at several volume fractions, from a very dilute
case with $\Phi=0.002$ to very dense cases with $\Phi=0.4$ and $0.5$,
which are considerably higher than those in previous DNS simulations.
Since $Pe\simeq170$, the hydrodynamic shear forces become dominant over the thermal forces in the particles' motion.
Initially, the dispersed particles are randomly distributed.
Figure \ref{msd_t3} shows the time evolution of the MSD in the $z$ direction for several volume fractions, up to
$\Phi=0.5$.
The time is scaled by $1/\dot\gamma$.
As the volume fraction is increased,
the MSD grows more rapidly in time due to the hydrodynamic and direct interactions between the particles, resulting in an increase in the slope of the MSD at small strains $\dot\gamma t\sim O(10^{-1})$.
For high volume fractions $0.2 \leq \Phi \leq 0.4$,
the diffusive behavior is attained at a much smaller strain, $\dot\gamma t \sim O(1)$, than in the dilute case $\Phi=0.002$, where the particle behaves as a single Brownian particle.
Accelerated Stokesian dynamics (ASD) simulations for non-Brownian particles at $\Phi=0.2$ show that
the diffusive region is attained only at strains
of at least 10 \cite{ASD}.
Compared with these numerical results, the onset of the diffusive region in the present simulations is much faster.
The time at which the diffusive region is attained shifts to shorter times at higher volume fractions.
For the highest volume fraction $\Phi=0.5$, however, we see that the
Brownian particles are trapped within the effective cages formed by the
surrounding particles.
This is because the particles start to form string-like objects in the
flow direction, and finally the whole system evolves into a
two-dimensional ordered structure in the plane perpendicular to the
flow,
similarly to the flow-induced ordering commonly observed in experiments \cite{EXOD1,EXOD0,EXOD2} and by computer simulations
\cite{NMD,SD01,SD00} under shear flow.
In the present case with $Re_p = 0.125$ and $Pe \sim 170$,
we found that the shear-induced ordering occurs at a volume fraction
between $\Phi=0.4$ and $0.5$.
The long-time diffusion coefficient $D_{z}$ was calculated by linearly fitting the data over the diffusive regions.
The inset of Fig. \ref{msd_t3} shows the volume fraction dependence of $D_{z}$ scaled by $\dot\gamma a^2$.
The diffusion coefficient increases rapidly up to $\Phi=0.3$ and reaches a plateau for $\Phi>0.3$.
This behavior of $D_z$ is remarkably different from that at thermal equilibrium, where the diffusion coefficient decreases with increasing volume fraction.
The enhancement of the diffusion coefficient with increasing volume fraction is a typical characteristic of shear-induced diffusion coefficients.
These results exhibit the same qualitative behavior as the experimental results obtained for non-Brownian particles
\cite{EXP0,EXP,EXP1}.
\begin{figure}[tbh]
\begin{center}
\includegraphics[scale=0.9]{fig3.eps}
\end{center}
\caption{\label{msd_t3}{The time evolution of the mean square displacement in the vorticity direction ($z$) for several volume fractions $\Phi$ at $k_BT=0.07$ and $\dot\gamma=0.005$. The Peclet number $Pe\simeq 170$ and the particle Reynolds number $Re_p=0.125$.
The inset shows the volume fraction dependence of $D_z$ scaled by $\dot\gamma a^2$.
The parameters for both the particle and fluid are $a=5$, $\xi=2$, $\rho_p=1$, $\rho_f=1$, and $\eta=1$.}}
\end{figure}
\section{Conclusion}
In conclusion, the short-time motion of Brownian particles in an incompressible Newtonian fluid under shear was examined by using a modified SPM in which external forces are introduced to approximately form a simple shear flow throughout the entire system with periodic boundary conditions.
The validity of the method was carefully examined by comparing the present numerical results for the MSD with the hydrodynamic analytical solution of the GLE of a single Brownian particle in a simple shear flow.
{In the present study, we aimed to modify the original SPM by
incorporating thermal fluctuations so that the modified SPM is valid for
$\tau_B<t$, while other computational methods such as Brownian dynamics
and ASD, which are based on the LE that ignores memory effects due to
fluid motions, are valid only for $\tau_B\ll \tau_D<t$.}
Simulations were then performed for monodisperse dispersions of repulsive spherical particles at volume fractions ranging from $\Phi=0.002$ to $0.50$.
We found that the MSD in the vorticity direction grows rapidly in time and with increasing particle volume fraction. At a strain $\dot\gamma t\sim O(1)$, the diffusive region is attained for $0.2 \leq \Phi \leq 0.4$.
The onset of the diffusive region shifts to shorter times at higher volume fractions.
For $\Phi=0.5$, however, the particles are no longer diffusive because of
the shear-induced ordering.
The diffusion coefficient in the vorticity direction was obtained from the long-time behavior of the MSD.
For volume fractions of up to $0.3$, the diffusion coefficient rises rapidly with increasing volume fraction.
It then levels off for volume fractions beyond $0.3$.
\section{\bf Introduction}
In a previous paper \cite{ITD2} we studied a scalar field theory model in which each field
was associated with a distinct metric. For simplicity we assumed that spacetime
was represented by a flat background and parametrised by coordinates $x^\mu$
such that the theory is invariant under the translations $x^\mu\rightarrow x^\mu+a^\mu$.
The metrics are then independent of the coordinates $x^\mu$. We formulated the theory so
that it was invariant under general linear transformations $x^\mu\rightarrow M^\mu_{~~\nu}x^\nu$.
The implications of the model were that in addition to a renormalisation of the
coupling constants and masses such theories required a renormalisation of the metrics.
An examination of the renormalisation group showed that the important effect was
a renormalisation of the {\it relationships} between the metrics and their associated lightcones.
These relationships are therefore dependent on the renormalisation scale $\mu$ (we
use dimensional regularisation). We found that at each stage the associated lightcones must
overlap by sharing some interior vectors that are timelike in all the metrics. This constraint
on the lightcones originates in a requirement that the evolution of the full system
of interacting quantum fields is causal for some set of observers. The renormalisation of the metric relationship
will have implications for models in which two (or more) metrics become dynamical degrees
of freedom.
Of course such models exhibit a breakdown of Lorentz invariance and correspond to a subset
of $CPT$-even violations. A more general set of effects has been the focus of a wide range
of investigations by many authors \cite{KOST1,KOST2,KOST3,KOST4,COLGL1,COLGL2}. We believe however that
our approach of concentrating on multimetric theories sheds further light on this particular sector of
the violation of Lorentz invariance.
In this paper we consider a version of QED, Bimetric QED (BIMQED) that associates one metric with
the electromagnetic and another with the electron field. A simple extension of the model could
involve the introduction of a third metric associated with the muon field, though we do not
pursue this here. Of course we do not see this model as any more than a demonstration of the ideas
in a simple gauge theory since there is so far no observational reason for anticipating a breakdown
of Lorentz invariance in QED. The same analysis applied to theories with non-abelian gauge groups
is also of interest especially in relation to high energy scattering. In fact the model is related to,
but in some ways simpler than, those investigated by Nielsen and Ninomiya \cite{NLSN1}, and subsequently by Nielsen
and Picek \cite{NLSN2} and Chadha and Nielsen \cite{NLSN3}.
Our starting point is a formulation of electrodynamics that has been referred to as "pre-metric" \cite{ITIN}.
We examine this formulation and show that in fact there is a {\it preferred} metric. Identifying this metric
also permits a clear statement of the nature of Lorentz symmetry breaking for electrodynamics.
The breaking of Lorentz symmetry is associated with a tensor that has the symmetry properties
of the Weyl tensor in general relativity. We refer to this as a Weyl-like tensor (WLT).
The Petrov classification \cite{PTRV} for such tensors can be used to identify the different kinds of symmetry
breaking that are possible. Of course this analysis is consistent with other work \cite{KLINK,SCHRK,LEHN} on
the breaking of Lorentz symmetry in electrodynamics. A feature of our approach is that while we do use the standard
perturbation expansion in electric charge we are not constrained to treat Lorentz symmetry breaking
perturbatively, unless this happens to be convenient.
\section{\label{PREFM} Preferred Metric in Electrodynamics}
The pre-metric formulation of electrodynamics \cite{ITIN} in its most general form replaces
Maxwell's equations for the gauge field $A_\mu(x)$ with a modified set of the form
\begin{equation}
\partial_\mu {\tilde{U}}^{\mu\nu\sigma\tau}F_{\sigma\tau}(x)=0,
\label{GINVEQM1}
\end{equation}
where
\begin{equation}
F_{\sigma\tau}(x)=\partial_\sigma A_\tau(x)-\partial_\tau A_\sigma(x),
\label{EMT}
\end{equation}
and the (constant) tensor density ${\tilde{U}}^{\mu\nu\sigma\tau}$ satisfies
\begin{equation}
{\tilde{U}}^{\mu\nu\sigma\tau}=-{\tilde{U}}^{\nu\mu\sigma\tau}=-{\tilde{U}}^{\mu\nu\tau\sigma}.
\end{equation}
In this general formulation there is no requirement that ${\tilde{U}}^{\mu\nu\sigma\tau}={\tilde{U}}^{\sigma\tau\mu\nu}$.
Indeed the non-vanishing of the antisymmetric contribution ${\tilde{U}}^{\mu\nu\sigma\tau}-{\tilde{U}}^{\sigma\tau\mu\nu}$
produces what are referred to as skewon effects \cite{ITIN}. Since however we wish to derive our
dynamical equations from a Lagrangian formulation we will exclude skewon effects. We will also
introduce a (constant) metric $g_{\mu\nu}$ and set
\begin{equation}
{\tilde{U}}^{\mu\nu\sigma\tau}=\Omega U^{\mu\nu\sigma\tau},
\end{equation}
where
\begin{equation}
\det g_{\mu\nu}=-\Omega^2,
\label{DET1}
\end{equation}
and the tensor $U^{\mu\nu\sigma\tau}$ satisfies
\begin{eqnarray}
U^{\mu\nu\sigma\tau}&=&-U^{\nu\mu\sigma\tau},\\\nonumber
U^{\mu\nu\sigma\tau}&=&-U^{\mu\nu\tau\sigma},\\\nonumber
U^{\mu\nu\sigma\tau}&=&U^{\sigma\tau\mu\nu},
\end{eqnarray}
together with
\begin{equation}
U^{\mu\nu\sigma\tau}+U^{\mu\sigma\tau\nu}+U^{\mu\tau\nu\sigma}=0.
\end{equation}
As remarked by Nielsen and Ninomiya \cite{NLSN1}, this gives $U^{\mu\nu\sigma\tau}$ the algebraic properties
of the Riemann tensor, although there is no necessary connection; however, see refs \cite{ITDSJH,SHHO1,SHHO2,SHHO3}.
For the given metric there is a standard decomposition of $U^{\mu\nu\sigma\tau}$ in the form
\begin{equation}
U^{\mu\nu\sigma\tau}=\frac{1}{12}U(g^{\mu\sigma}g^{\nu\tau}-g^{\mu\tau}g^{\nu\sigma})
+\frac{1}{2}(g^{\mu\sigma}S^{\nu\tau}+S^{\mu\sigma}g^{\nu\tau}-g^{\mu\tau}S^{\nu\sigma}-S^{\mu\tau}g^{\nu\sigma})
-C^{\mu\nu\sigma\tau},
\end{equation}
where
\begin{eqnarray}
S^{\mu\sigma}&=&U^{\mu\sigma}-\frac{1}{4}Ug^{\mu\sigma},\\\nonumber
U^{\mu\sigma}&=&g_{\nu\tau}U^{\mu\nu\sigma\tau},\\\nonumber
U&=&g_{\mu\sigma}U^{\mu\sigma},
\end{eqnarray}
and
\begin{equation}
g_{\nu\tau}C^{\mu\nu\sigma\tau}=0.
\end{equation}
Clearly $g_{\mu\sigma}S^{\mu\sigma}=0$ and $C^{\mu\nu\sigma\tau}$ has the algebraic properties of the Weyl tensor.
So far the metric is arbitrary. We can arrive at a preferred metric by demanding that
the tensor $U^{\mu\nu\sigma\tau}$ is most nearly like $g^{\mu\sigma}g^{\nu\tau}-g^{\mu\tau}g^{\nu\sigma}$.
We implement this idea by requiring that the overlap amplitude $U^{\mu\nu\sigma\tau}(g_{\mu\sigma}g_{\nu\tau}-g_{\mu\tau}g_{\nu\sigma})$
is stationary with respect to variations of $g_{\mu\sigma}$ subject to the constraint in eq(\ref{DET1}).
Introducing the Lagrange multiplier $\lambda$ and setting
\begin{equation}
{\cal F}=U^{\mu\nu\sigma\tau}(g_{\mu\sigma}g_{\nu\tau}-g_{\mu\tau}g_{\nu\sigma})-\lambda\det g_{\mu\sigma}
\end{equation}
we require that
\begin{equation}
\frac{\partial {\cal F}}{\partial g_{\mu\sigma}}=0.
\end{equation}
This yields
\begin{equation}
U^{\mu\sigma}+\lambda\Omega^2g^{\mu\sigma}=0.
\label{STMT}
\end{equation}
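Explicitly, contracting eq(\ref{STMT}) with $g_{\mu\sigma}$ and using $g_{\mu\sigma}g^{\mu\sigma}=4$ fixes the Lagrange multiplier:

```latex
\begin{equation}
U+4\lambda\Omega^2=0
\quad\Longrightarrow\quad
U^{\mu\sigma}=-\lambda\Omega^2g^{\mu\sigma}=\frac{1}{4}Ug^{\mu\sigma}.
\end{equation}
```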
It follows that for this {\it stationary} value of the metric that
\begin{equation}
S^{\mu\sigma}=0.
\end{equation}
Of course we are assuming that $U^{\mu\nu\sigma\tau}$ is such that it yields a unique solution of eq(\ref{STMT})
with the right type of (lightcone generating) metric.
The action for the electromagnetic field that gives rise to eq(\ref{GINVEQM1}) is $S_{(p)}$ where
\begin{equation}
S_{(p)}=-\frac{1}{8}\int d^4x\Omega U^{\mu\nu\sigma\tau}F_{\mu\nu}(x)F_{\sigma\tau}(x).
\label{EMACT}
\end{equation}
For a given metric we are free to adjust the normalisation of the field $A_\mu(x)$ so that
the normalisation of $U^{\mu\nu\sigma\tau}$ is such that $U=12$. Then $S_{(p)}$ is given by
eq(\ref{EMACT}) where
\begin{equation}
U^{\mu\nu\sigma\tau}=(g^{\mu\sigma}g^{\nu\tau}-g^{\mu\tau}g^{\nu\sigma})-C^{\mu\nu\sigma\tau}.
\label{PRFMD}
\end{equation}
It follows that when the Weyl-like tensor $C^{\mu\nu\sigma\tau}$ vanishes $S_{(p)}$ is invariant under the
Lorentz group that leaves $g_{\mu\nu}$ invariant and there is no birefringence in the evolution of the
electromagnetic field. However when $C^{\mu\nu\sigma\tau}$ is non-vanishing we do encounter birefringence and
the breaking of Lorentz invariance. The possible ways in which Lorentz symmetry breaking occurs can
therefore be given a Petrov classification appropriate to the tensor $C^{\mu\nu\sigma\tau}$.
\section{\label{EQM} Equations of Motion}
The gauge invariant equations of motion, eq(\ref{GINVEQM1}), now take the form
\begin{equation}
(g^{\mu\sigma}g^{\nu\tau}-g^{\mu\tau}g^{\nu\sigma}-C^{\mu\nu\sigma\tau})\partial_\sigma\partial_\mu A_\nu(x)=0.
\label{GINVEQM2}
\end{equation}
This is the standard form with Lorentz symmetry breaking for electrodynamics.
Our analysis makes clear that there is no lack of generality in this form
and that we can always choose the preferred metric for which the Lorentz
symmetry breaking is described by the traceless WLT $C^{\mu\nu\sigma\tau}$.
If we seek a solution of the form
\begin{equation}
A_\nu(x)={\varepsilon}_\nu e^{-iq.x},
\end{equation}
then ${\varepsilon}_\nu$ satisfies
\begin{equation}
M^{\tau\nu}{\varepsilon}_\nu=0,
\label{GINVEQM3}
\end{equation}
where
\begin{equation}
M^{\tau\nu}=q^2g^{\tau\nu}-q^\tau q^\nu-C^{\mu\nu\sigma\tau}q_\mu q_\sigma,
\label{GINVEQM4}
\end{equation}
and we set $q^\mu=g^{\mu\nu}q_\nu$.
Of course eq(\ref{GINVEQM3}) has the solution ${\varepsilon}_\mu\propto q_\mu$ for any value of $q_\mu$.
It corresponds to a gauge degree of freedom. The physical solutions appear only when $q_\mu$ is constrained
to satisfy a dispersion relation that permits the rank of the matrix $M^{\tau\nu}$ to drop below the value 3
and its kernel to have a dimension greater than 1. See also ref \cite{ITIN}.
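This gauge zero mode is easy to confirm symbolically. The following sketch (a SymPy check under stated assumptions: a flat metric of signature $(+,-,-,-)$, and a tensor with the pair symmetries of $C^{\mu\nu\sigma\tau}$ modelled as $X^{\mu\nu}Y^{\sigma\tau}+Y^{\mu\nu}X^{\sigma\tau}$ with hypothetical antisymmetric matrices $X$, $Y$) verifies that $M^{\tau\nu}q_\nu$ vanishes identically:

```python
import sympy as sp

q0, q1, q2, q3 = sp.symbols('q0:4', real=True)
g = sp.diag(1, -1, -1, -1)          # Minkowski metric, signature (+,-,-,-)
qc = sp.Matrix([q0, q1, q2, q3])    # covariant q_mu
qup = g * qc                        # q^mu (g equals its own inverse here)
qsq = (qup.T * qc)[0]               # q^2

# A tensor with the pair symmetries of C, modelled (as an assumption) by
# C^{mu nu sigma tau} = X^{mu nu} Y^{sigma tau} + Y^{mu nu} X^{sigma tau}
# with X, Y antisymmetric; then N^{nu tau} = x^nu y^tau + y^nu x^tau where
# x^nu = X^{mu nu} q_mu and y^tau = Y^{sigma tau} q_sigma.
X = sp.Matrix([[0, 1, 2, 3], [-1, 0, 4, 5], [-2, -4, 0, 6], [-3, -5, -6, 0]])
Y = sp.Matrix([[0, 7, 1, 2], [-7, 0, 3, 5], [-1, -3, 0, 8], [-2, -5, -8, 0]])
x = X.T * qc
y = Y.T * qc
N = x * y.T + y * x.T

M = qsq * g - qup * qup.T - N       # M^{tau nu} of eq(GINVEQM4)
print(sp.expand(M * qc))            # zero column: eps proportional to q is a zero mode
```

The check relies only on the antisymmetry of $C^{\mu\nu\sigma\tau}$ in each index pair, which forces $N^{\nu\tau}q_\nu=0$.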
Following conventional lines of reasoning
we can explore plane wave solutions by imposing the gauge condition
\begin{equation}
q^\nu{\varepsilon}_\nu=0.
\label{GC1}
\end{equation}
Eq(\ref{GINVEQM3}) becomes
\begin{equation}
\left\{q^2g^{\tau\nu}-C^{\mu\nu\sigma\tau}q_\mu q_\sigma\right\}{\varepsilon}_\nu=0.
\label{GINVEQM5}
\end{equation}
Contracting eq(\ref{GINVEQM5}) with $q_\tau$, and using the antisymmetry of $C^{\mu\nu\sigma\tau}$ in its final index pair, we see that it implies
\begin{equation}
q^2q^\nu{\varepsilon}_\nu=0.
\end{equation}
Hence eq(\ref{GC1}) can be imposed in a consistent manner. The problem then reduces to
finding the conditions on $q_\mu$ that allow eq(\ref{GINVEQM5}) to have nontrivial solutions.
We will return to the issue of gauge conditions later.
\subsection{\label{NP} Newman-Penrose Tetrad}
It is convenient to reformulate these equations of motion in terms of a Newman-Penrose
tetrad \cite{NEWPEN,JMS}. It comprises four null vectors, $l_\mu$, $n_\mu$, $m_\mu$ and ${\bar m}_\mu$
where $l_\mu$ and $n_\mu$ are real and $m_\mu$ and ${\bar m}_\mu$ are complex conjugates.
They satisfy the relations
\begin{equation}
l^\mu l_\mu=n^\mu n_\mu=m^\mu m_\mu={\bar m}^\mu {\bar m}_\mu=0,
\end{equation}
and
\begin{equation}
l^\mu m_\mu=l^\mu {\bar m}_\mu=n^\mu m_\mu=n^\mu {\bar m}_\mu=0,
\end{equation}
together with
\begin{equation}
l^\mu n_\mu=-m^\mu {\bar m}_\mu=1.
\end{equation}
We have also
\begin{equation}
g^{\mu\nu}=l^\mu n^\nu+n^\mu l^\nu-m^\mu{\bar m}^\nu-{\bar m}^\mu m^\nu.
\end{equation}
Hence ${\varepsilon}$ can be re-expressed in terms of its components in the NP tetrad basis,
\begin{equation}
{\varepsilon}^\mu=l^\mu(n.{\varepsilon})+n^\mu(l.{\varepsilon})-m^\mu({\bar m}.{\varepsilon})-{\bar m}^\mu(m.{\varepsilon}).
\end{equation}
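These relations can be confirmed with an explicit flat-space realisation of the tetrad. The sketch below (a SymPy check; the particular choice $l^\mu=(1,0,0,1)/\sqrt{2}$, $n^\mu=(1,0,0,-1)/\sqrt{2}$, $m^\mu=(0,1,i,0)/\sqrt{2}$ with signature $(+,-,-,-)$ is an assumption, any NP tetrad would do) verifies the null, orthogonality and normalisation conditions together with the completeness relation for $g^{\mu\nu}$:

```python
import sympy as sp

g = sp.diag(1, -1, -1, -1)               # signature (+,-,-,-)
s = 1 / sp.sqrt(2)
l  = sp.Matrix([1, 0, 0,  1]) * s        # contravariant NP tetrad
n  = sp.Matrix([1, 0, 0, -1]) * s
m  = sp.Matrix([0, 1,  sp.I, 0]) * s
mb = sp.Matrix([0, 1, -sp.I, 0]) * s     # complex conjugate of m

def dot(v, w):                           # v^mu w_mu
    return sp.simplify((v.T * g * w)[0])

# null and orthogonality relations
nulls = [dot(l, l), dot(n, n), dot(m, m), dot(mb, mb),
         dot(l, m), dot(l, mb), dot(n, m), dot(n, mb)]
norms = [dot(l, n), dot(m, mb)]          # expect 1 and -1
# completeness: g^{mu nu} = l^mu n^nu + n^mu l^nu - m^mu mb^nu - mb^mu m^nu
ginv = l * n.T + n * l.T - m * mb.T - mb * m.T
print(nulls, norms, sp.simplify(ginv - g))
```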
For later convenience we reformulate eq(\ref{GINVEQM5}) also in the tetrad basis.
Introduce $N^{\nu\tau}$ where
\begin{equation}
N^{\nu\tau}=C^{\mu\nu\sigma\tau}q_\mu q_\sigma,
\end{equation}
and the matrix entries $N_{ll}=N^{\nu\tau}l_\nu l_\tau$, $N_{ln}=N^{\nu\tau}l_\nu n_\tau$ {\it etc}.
We have then
\begin{equation}
\left(
\begin{array}{cccc}
q^2-N_{ln}&-N_{ll}&N_{l{\bar m}}&N_{lm}\\
-N_{nn}&q^2-N_{nl}&N_{n{\bar m}}&N_{nm}\\
-N_{mn}&-N_{ml}&q^2+N_{m{\bar m}}&N_{mm}\\
-N_{{\bar m} n}&-N_{{\bar m} l}&N_{{\bar m}\mb}&q^2+N_{{\bar m} m}
\end{array}
\right)
\left(
\begin{array}{c}
l.{\varepsilon}\\ n.{\varepsilon}\\ m.{\varepsilon}\\ {\bar m}.{\varepsilon}
\end{array}
\right)=0.
\label{GINVEQM6}
\end{equation}
For non-trivial solutions we require the vanishing of the determinant of the matrix in eq(\ref{GINVEQM6}).
In examples below we will see that this determinant has a factor of $(q^2)^2$ corresponding to
gauge modes. The remaining factor yields the lightcone structure of the physical modes.
\section{\label{PETROV} Petrov Classification of Lorentz Symmetry Breaking}
The Petrov classification of Weyl-like tensors (WLTs) \cite{PTRV} can be expressed in a number of ways.
A powerful way of understanding the structure of WLTs is the Newman-Penrose (NP) formalism together
with the Penrose spinor approach \cite{PENRIN}.
A succinct account of this formalism is provided by Stewart \cite{JMS}. A simple account may also
be found in \cite{PODON}. An important concept in
relation to a WLT is that of a principal null direction (PND). Such a PND is represented
by a null vector, $v_\mu$, that satisfies the constraint
\begin{equation}
v_{[\alpha}C_{\mu]\nu\sigma[\tau}v_{\beta]}v^\nu v^\sigma=0.
\label{PR1}
\end{equation}
In general there are four distinct directions that are solutions of this equation and
the WLT is Petrov class I. When the constraint has one double root and there are three
distinct PNDs the WLT is Petrov class II. The case of two double roots is Petrov class D.
The PND corresponding to a double root in these two cases satisfies a modified
(but consistent) constraint
\begin{equation}
C_{\mu\nu\sigma[\tau}v_{\beta]}v^\nu v^{\sigma}=0.
\label{PR2}
\end{equation}
When there is a triple root, and two distinct PNDs, the WLT is of Petrov class III
and the PND corresponding to the triple root satisfies a further modified constraint
\begin{equation}
C_{\mu\nu\sigma[\tau}v_{\beta]}v^\sigma=0.
\label{PR3}
\end{equation}
Finally when all four roots coincide the WLT is of Petrov class N and the PND
satisfies the constraint
\begin{equation}
C_{\mu\nu\sigma\tau} v^\sigma=0.
\label{PR4}
\end{equation}
The underlying algebra that supports these results together with further implications
utilises the NP and spinor formalism that is explained in refs \cite{NEWPEN,PENRIN,JMS,PODON}.
\subsection{\label{CANP} Canonical Forms for the Weyl-like Tensor}
It is useful for the purposes of explicit calculation to identify canonical forms
associated with the Petrov classification of the WLTs. While these are not unique
they may be expressed in terms of the NP tetrad.
It is convenient to introduce the antisymmetric tensors, $A_{\mu\nu}$, $B_{\mu\nu}$ and $D_{\mu\nu}$
where
\begin{eqnarray}
A_{\mu\nu}&=&l_\mu m_\nu-l_\nu m_\mu\nonumber\\
B_{\mu\nu}&=&{\bar m}_\mu n_\nu-{\bar m}_\nu n_\mu\nonumber\\
D_{\mu\nu}&=&l_\mu n_\nu-l_\nu n_\mu+{\bar m}_\mu m_\nu-{\bar m}_\nu m_\mu
\end{eqnarray}
For class N we can choose the single PND to be $l_\mu$ and the WLT to
have the form
\begin{equation}
C_{\mu\nu\sigma\tau}=A_{\mu\nu}A_{\sigma\tau}+\mbox{c.c.},
\end{equation}
where $\mbox{c.c.}$ indicates complex conjugate.
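As a consistency check, this canonical form does place $l_\mu$ in the class N category: it satisfies the constraint eq(\ref{PR4}). A SymPy sketch (the explicit flat-space tetrad components here are an assumption made purely for the check):

```python
import sympy as sp
from itertools import product

g = sp.diag(1, -1, -1, -1)
s = 1 / sp.sqrt(2)
l = sp.Matrix([1, 0, 0, 1]) * s          # contravariant PND l^mu
m = sp.Matrix([0, 1, sp.I, 0]) * s
lc, mc = g * l, g * m                    # covariant components

A  = lc * mc.T - mc * lc.T               # A_{mu nu} = l_mu m_nu - l_nu m_mu
Ab = A.conjugate()
def C(mu, nu, si, ta):                   # class-N WLT: C = A A + c.c.
    return A[mu, nu] * A[si, ta] + Ab[mu, nu] * Ab[si, ta]

# eq(PR4): C_{mu nu sigma tau} l^sigma = 0 for the class-N PND
vals = {sp.simplify(sum(C(mu, nu, si, ta) * l[si] for si in range(4)))
        for mu, nu, ta in product(range(4), repeat=3)}
print(vals)   # {0}
```

The contraction vanishes because $A_{\sigma\tau}l^\sigma=(l.l)m_\tau-(m.l)l_\tau=0$.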
For class III we have
\begin{equation}
C_{\mu\nu\sigma\tau}=A_{\mu\nu}D_{\sigma\tau}+D_{\mu\nu}A_{\sigma\tau}+\mbox{c.c.}.
\end{equation}
For class D
\begin{equation}
C_{\mu\nu\sigma\tau}=\lambda\{A_{\mu\nu}B_{\sigma\tau}+B_{\mu\nu}A_{\sigma\tau}
+D_{\mu\nu}D_{\sigma\tau}\} +\mbox{c.c.}.
\end{equation}
For class II
\begin{equation}
C_{\mu\nu\sigma\tau}=\lambda\{A_{\mu\nu}A_{\sigma\tau}+\frac{1}{6}[A_{\mu\nu}B_{\sigma\tau}+B_{\mu\nu}A_{\sigma\tau}
+D_{\mu\nu}D_{\sigma\tau}]\}+\mbox{c.c.}.
\end{equation}
For class I
\begin{equation}
C_{\mu\nu\sigma\tau}=\mu\{A_{\mu\nu}A_{\sigma\tau}+B_{\mu\nu}B_{\sigma\tau}\}
+\lambda\{A_{\mu\nu}B_{\sigma\tau}+B_{\mu\nu}A_{\sigma\tau}+D_{\mu\nu}D_{\sigma\tau}\}+\mbox{c.c.}.
\end{equation}
In classes I, II, and D, $\lambda$ and $\mu$ are complex parameters. In classes III and N,
any such complex parameter can be absorbed into the definition of $l$ and $m$
by subjecting them to an appropriate Lorentz transformation and rotation respectively.
However in the context of renormalisation analysis this may not always be convenient.
Where appropriate we will reinstate coefficients.
\subsection{\label{PETO} Example for Petrov Class O}
The very simplest case of Petrov class O, in which $C^{\mu\nu\sigma\tau}$ vanishes is
not included in the above list. For a non-interacting electromagnetic field it implies
no Lorentz symmetry breakdown. However in the case of QED, Lorentz symmetry may be broken
because of differing lightcone structure for the photons and the electrons without
the introduction of a WLT into the dynamics. We will study such cases later in the context of
renormalisation theory.
\subsection{\label{PETN} Example for Petrov Class N}
The simplest non-trivial case is Petrov class N. Eq(\ref{GINVEQM6}) becomes
\begin{equation}
\left(
\begin{array}{cccc}
q^2&0&0&0\\
-(m.q)^2-({\bar m}.q)^2&q^2&(l.q)(m.q)&(l.q)({\bar m}.q)\\
-(l.q)({\bar m}.q)&0&q^2&(l.q)^2\\
-(l.q)(m.q)&0&(l.q)^2&q^2
\end{array}
\right)\left(
\begin{array}{c}
l.{\varepsilon}\\ n.{\varepsilon}\\ m.{\varepsilon}\\ {\bar m}.{\varepsilon}
\end{array}
\right)=0.
\label{GINVEQM7}
\end{equation}
It is easy to show that the determinant $\Delta$ of the matrix in eq(\ref{GINVEQM7})
is given by
\begin{equation}
\Delta=(q^2)^2((q^2)^2-(l.q)^4).
\label{DELPN1}
\end{equation}
We have then either
\begin{equation}
q^2=0~~\mbox{(twice)},
\end{equation}
or
\begin{equation}
q^2=\pm (l.q)^2.
\label{DELPN2}
\end{equation}
When $q^2=0$, $(n.{\varepsilon})$ is arbitrary and there remains the solution for which ${\varepsilon}\propto q$.
The general solution is then
\begin{equation}
{\varepsilon}_\tau=\alpha q_\tau+\beta l_\tau,
\end{equation}
where $\alpha$ and $\beta$ are arbitrary parameters.
When $q^2\ne 0$ we find $l.{\varepsilon}=0$. Then we have
\begin{eqnarray}
q^2m.{\varepsilon}+(l.q)^2{\bar m}.{\varepsilon}&=&0,\nonumber\\
(l.q)^2m.{\varepsilon}+q^2{\bar m}.{\varepsilon}&=&0.
\end{eqnarray}
Eq(\ref{DELPN2}) then allows non-trivial solutions ${\varepsilon}^{(\pm)}$ which
satisfy
\begin{equation}
(m\pm{\bar m}).{\varepsilon}^{(\pm)}=0.
\label{DELPN3}
\end{equation}
In the present example then we see that Lorentz symmetry breakdown is revealed
by the birefringence associated with the two lightcones implicit in eq(\ref{DELPN2}).
The corresponding polarisation vectors are determined through eq(\ref{DELPN3})
by the spatial axes $m$ and ${\bar m}$.
One further point of significance is that by making a Lorentz transformation of
coordinates in the $l$-$n$ plane with a hyperbolic angle $\psi$,
the PND vector becomes $e^\psi l_\mu$. Correspondingly $n_\mu\rightarrow e^{-\psi}n_\mu$.
The size of the components of $l_\mu$, as remarked above, can therefore be
adjusted arbitrarily simply by making an appropriate choice of $\psi$.
In a sense then, the same Lorentz symmetry breaking situation
can be viewed as either large or small depending which coordinate basis is appropriate
for describing the motion of the relevant observer.
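The determinant eq(\ref{DELPN1}) can also be verified directly. The sketch below (SymPy again; the explicit flat-space tetrad is an assumption, and the class-N form $C^{\mu\nu\sigma\tau}=A^{\mu\nu}A^{\sigma\tau}+\mbox{c.c.}$ is the one used above) assembles the tetrad-basis matrix of eq(\ref{GINVEQM7}) from $N^{\nu\tau}=C^{\mu\nu\sigma\tau}q_\mu q_\sigma$ and checks its determinant against $(q^2)^2((q^2)^2-(l.q)^4)$:

```python
import sympy as sp

q0, q1, q2, q3 = sp.symbols('q0:4', real=True)
g = sp.diag(1, -1, -1, -1)               # g equals its own inverse here
s = 1 / sp.sqrt(2)
l  = sp.Matrix([1, 0, 0,  1]) * s        # contravariant NP tetrad
n  = sp.Matrix([1, 0, 0, -1]) * s
m  = sp.Matrix([0, 1,  sp.I, 0]) * s
mb = sp.Matrix([0, 1, -sp.I, 0]) * s

qc = sp.Matrix([q0, q1, q2, q3])         # covariant q_mu
def dot(vup, wcov):                      # v^mu w_mu
    return (vup.T * wcov)[0]
lq  = dot(l, qc)
qsq = dot(g * qc, qc)                    # q^2

# class-N WLT: a^nu = A^{mu nu} q_mu with A^{mu nu} = l^mu m^nu - l^nu m^mu
a  = lq * m  - dot(m, qc) * l
ab = lq * mb - dot(mb, qc) * l
N  = a * a.T + ab * ab.T                 # N^{nu tau}
Op = qsq * g - N                         # q^2 g^{tau nu} - N^{tau nu}

lo, no, mo, mbo = g * l, g * n, g * m, g * mb   # covariant tetrad
rows = [lo, no, mo, mbo]
# coefficients of (l.eps, n.eps, m.eps, mb.eps) in the tetrad expansion of eps_nu
cols = [no, lo, -mbo, -mo]
M = sp.Matrix(4, 4, lambda i, j: (rows[i].T * Op * cols[j])[0])

Delta  = sp.expand(M.det())
target = sp.expand(qsq**2 * (qsq**2 - lq**4))
print(sp.expand(Delta - target))         # 0
```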
\subsection{\label{PETIII} Example for Petrov Class III}
When the WLT is Petrov class III there are two PNDs, a triple root $l_\mu$ and a single root $n_\mu$.
They can be embedded in the NP tetrad as above and used to construct the WLT thus
\begin{equation}
C_{\mu\nu\sigma\tau}=A_{\mu\nu}D_{\sigma\tau}+D_{\mu\nu}A_{\sigma\tau}+\mbox{c.c.}.
\end{equation}
We find that eq(\ref{GINVEQM6}) becomes
\begin{equation}
{\cal M}\left(
\begin{array}{c}
l.{\varepsilon}\\ n.{\varepsilon}\\ m.{\varepsilon}\\ {\bar m}.{\varepsilon}
\end{array}
\right)=0,
\label{GINVEQM8}
\end{equation}
where the columns, ${\cal M}_i$ $(i=1,2,3,4)$, of the matrix ${\cal M}$ are given by
\begin{equation}
{\cal M}_1=
\left(
\begin{array}{c}
q^2+l.qm.q+l.q{\bar m}.q\\
-2n.qm.q-2n.q{\bar m}.q\\
-l.qn.q-m.q{\bar m}.q+(m.q)^2\\
-l.qn.q-m.q{\bar m}.q+({\bar m}.q)^2
\end{array}
\right)
\end{equation}
\begin{equation}
{\cal M}_2=
\left(
\begin{array}{c}
0\\
q^2+l.qm.q+l.q{\bar m}.q\\
(l.q)^2\\
(l.q)^2
\end{array}
\right)
\end{equation}
\begin{equation}
{\cal M}_3=
\left(
\begin{array}{c}
-(l.q)^2\\
m.q{\bar m}.q+l.qn.q-({\bar m}.q)^2\\
q^2-l.qm.q-l.q{\bar m}.q\\
2l.q{\bar m}.q
\end{array}
\right)
\end{equation}
\begin{equation}
{\cal M}_4=
\left(
\begin{array}{c}
-(l.q)^2\\
l.qn.q+m.q{\bar m}.q-(m.q)^2\\
2l.qm.q\\
q^2-l.qm.q-l.q{\bar m}.q
\end{array}
\right)
\end{equation}
After some calculation the determinant of ${\cal M}$ can be obtained in the form
\begin{equation}
\det{\cal M}=(q^2)^2[(q^2-(l.q)^2)^2-(4l.qm.q-(l.q)^2)(4l.q{\bar m}.q-(l.q)^2)].
\label{DELPIII1}
\end{equation}
As expected on general grounds, see ref \cite{ITIN}, the second factor in eq(\ref{DELPIII1})
is a quartic expression, the vanishing of which
yields the dispersion relations for the two physical photon modes. There are two branches
\begin{equation}
q^2-(l.q)^2=\pm (l.q)\sqrt{(4m.q-l.q)(4{\bar m}.q-l.q)}.
\end{equation}
However in contrast to the previous example for Petrov class N, the quartic does not have quadratic factors.
Therefore the birefringence in this case {\it cannot} be described by two distinct conventional lightcones.
As is implicit in the derivation, the absolute strength of $C_{\mu\nu\sigma\tau}$ can be adjusted by
changing coordinates through Lorentz boosts in the $l_\mu$-$n_\mu$ plane and rotations in the $m$-${\bar m}$ plane.
In this way Petrov class III has features in common with class N.
\subsection{\label{PETD} Example for Petrov Class D}
When the WLT is Petrov class D, there are again two PNDs, $l_\mu$ and $n_\mu$ both
being double roots. They can be embedded in the NP tetrad and yield a WLT of the form
\begin{equation}
C_{\mu\nu\sigma\tau}=\lambda\{A_{\mu\nu}B_{\sigma\tau}+B_{\mu\nu}A_{\sigma\tau}
+D_{\mu\nu}D_{\sigma\tau}\}+\mbox{c.c.}.
\end{equation}
We find that eq(\ref{GINVEQM6}) takes the form eq(\ref{GINVEQM8}) where
the columns of ${\cal M}$
are (we denote the complex conjugate of $\lambda$ by ${\bar \ll}$)
\begin{equation}
{\cal M}_1=
\left(
\begin{array}{c}
q^2+(\lambda+{\bar \ll})(l.qn.q+m.q{\bar m}.q)\\
-(\lambda+{\bar \ll})(n.q)^2\\
(2\lambda-{\bar \ll})n.qm.q\\
(2{\bar \ll}-\lambda)n.q{\bar m}.q
\end{array}
\right)
\end{equation}
\begin{equation}
{\cal M}_2=
\left(
\begin{array}{c}
-(\lambda+{\bar \ll})(l.q)^2\\
q^2+(\lambda+{\bar \ll})(l.qn.q+m.q{\bar m}.q)\\
(2{\bar \ll}-\lambda)l.qm.q\\
(2\lambda-{\bar \ll})l.q{\bar m}.q
\end{array}
\right)
\end{equation}
\begin{equation}
{\cal M}_3=
\left(
\begin{array}{c}
-(2\lambda-{\bar \ll})l.q{\bar m}.q\\
-(2{\bar \ll}-\lambda)n.q{\bar m}.q\\
q^2-(\lambda+{\bar \ll})(l.qn.q+m.q{\bar m}.q)\\
(\lambda+{\bar \ll})({\bar m}.q)^2
\end{array}
\right)
\end{equation}
\begin{equation}
{\cal M}_4=
\left(
\begin{array}{c}
-(2{\bar \ll}-\lambda)l.qm.q\\
-(2\lambda-{\bar \ll})n.qm.q\\
(\lambda+{\bar \ll})(m.q)^2\\
q^2-(\lambda+{\bar \ll})(l.qn.q+m.q{\bar m}.q)
\end{array}
\right)
\end{equation}
In the present case the coefficient $\lambda$ cannot be absorbed by a redefinition of the
vectors of the NP tetrad. The determinant of ${\cal M}$ can be obtained in the form
\begin{equation}
\det{\cal M}=(1+\lambda+{\bar \ll})(q^2)^2\{(q^2-(\lambda+{\bar \ll})l.qn.q+\rho m.q{\bar m}.q)^2
-9\lambda{\bar \ll}(1+\kappa)(1+{\bar \kk})(m.q)^2({\bar m}.q)^2\},
\label{DELPD1}
\end{equation}
where the parameters $\kappa$ and $\rho$ are given by
\begin{equation}
\kappa=\frac{1-(2\lambda-{\bar \ll})}{1+\lambda+{\bar \ll}},
\end{equation}
and
\begin{equation}
\rho=\lambda+{\bar \ll}-\frac{9\lambda{\bar \ll}}{1+\lambda+{\bar \ll}}.
\end{equation}
The dispersion relation for the physical modes implied by the vanishing of $\det {\cal M}$
does factorise in this case and yields two distinct light cones each with a dispersion
relation that is quadratic in $q$,
\begin{equation}
q^2-(\lambda+{\bar \ll})l.qn.q+\rho m.q{\bar m}.q=\pm 3\sqrt{\lambda{\bar \ll}(1+\kappa)(1+{\bar \kk})}m.q{\bar m}.q.
\end{equation}
\subsection{\label{PETII} Example for Petrov Class II}
When the WLT is Petrov class II, there are three PNDs. In terms of the
basis of the NP tetrad the double root is $l_\mu$ and
the two single roots are $l_\mu+n_\mu\mp i(m_\mu-{\bar m}_\mu)$.
The WLT has the form
\begin{equation}
C_{\mu\nu\sigma\tau}=\lambda\{A_{\mu\nu}A_{\sigma\tau}
+\frac{1}{6}[A_{\mu\nu}B_{\sigma\tau}+B_{\mu\nu}A_{\sigma\tau}
+D_{\mu\nu}D_{\sigma\tau}]\}+\mbox{c.c.}.
\end{equation}
We find that eq(\ref{GINVEQM6}) takes the form eq(\ref{GINVEQM8}) where
the columns of ${\cal M}$ are
\begin{equation}
{\cal M}_1=
\left(
\begin{array}{c}
q^2+\frac{1}{6}(\lambda+{\bar \ll})(l.qn.q+m.q{\bar m}.q)\\
-\lambda(m.q)^2-{\bar \ll}({\bar m}.q)^2-\frac{1}{6}(\lambda+{\bar \ll})(n.q)^2\\
-{\bar \ll} l.q{\bar m}.q+\frac{1}{6}(2\lambda-{\bar \ll})n.qm.q\\
-\lambda l.qm.q+\frac{1}{6}(2{\bar \ll}-\lambda)n.q{\bar m}.q
\end{array}
\right)
\end{equation}
\begin{equation}
{\cal M}_2=
\left(
\begin{array}{c}
-\frac{1}{6}(\lambda+{\bar \ll})(l.q)^2\\
q^2+\frac{1}{6}(\lambda+{\bar \ll})(l.qn.q+m.q{\bar m}.q)\\
\frac{1}{6}(2{\bar \ll}-\lambda)l.qm.q\\
\frac{1}{6}(2\lambda-{\bar \ll})l.q{\bar m}.q
\end{array}
\right)
\end{equation}
\begin{equation}
{\cal M}_3=
\left(
\begin{array}{c}
-\frac{1}{6}(2\lambda-{\bar \ll})l.q{\bar m}.q\\
\lambda l.qm.q-\frac{1}{6}(2{\bar \ll}-\lambda)n.q{\bar m}.q\\
q^2-\frac{1}{6}(\lambda+{\bar \ll})(l.qn.q+m.q{\bar m}.q)\\
\lambda(l.q)^2+\frac{1}{6}(\lambda+{\bar \ll})({\bar m}.q)^2
\end{array}
\right)
\end{equation}
\begin{equation}
{\cal M}_4=
\left(
\begin{array}{c}
-\frac{1}{6}(2{\bar \ll}-\lambda)l.qm.q\\
{\bar \ll} l.q{\bar m}.q-\frac{1}{6}(2\lambda-{\bar \ll})n.qm.q\\
{\bar \ll}(l.q)^2+\frac{1}{6}(\lambda+{\bar \ll})(m.q)^2\\
q^2-\frac{1}{6}(\lambda+{\bar \ll})(l.qn.q+m.q{\bar m}.q)
\end{array}
\right)
\end{equation}
We find that
\begin{eqnarray}
\det{\cal M}&=&(1+\frac{1}{6}(\lambda+{\bar \ll}))(q^2)^2\{[(1-\frac{1}{12}(\lambda+{\bar \ll}))q^2
+\frac{1}{2}{\bar \ll}(\kappa-1)m.q{\bar m}.q]\nonumber\\
&&\times[(1-\frac{1}{12}(\lambda+{\bar \ll}))q^2+\frac{1}{2}\lambda({\bar \kk}-1)m.q{\bar m}.q]\nonumber\\
&&-[\lambda(l.q)^2+\frac{1}{2}\lambda(\kappa+1)({\bar m}.q)^2][{\bar \ll}(l.q)^2+\frac{1}{2}{\bar \ll}({\bar \kk}+1)(m.q)^2]\}.
\label{DELPII1}
\end{eqnarray}
Here we have
\begin{equation}
\kappa=\frac{1-(2\lambda-{\bar \ll})/6}{1+(\lambda+{\bar \ll})/6}.
\end{equation}
Clearly the quartic expression providing the dispersion relation for the physical modes
again does not, in general, factorise into separate quadratic factors so the lightcone structure is
more complex than two separate simple lightcones. A special case is real $\lambda$, for which
factorisation does occur.
\subsection{\label{PETI} Example for Petrov Class I}
When the WLT is Petrov class I, there are four PNDs. A canonical form
for the WLT in terms of the basis of the NP tetrad is
\begin{equation}
C_{\mu\nu\sigma\tau}=\mu\{A_{\mu\nu}A_{\sigma\tau}+B_{\mu\nu}B_{\sigma\tau}\}
+ \lambda\{A_{\mu\nu}B_{\sigma\tau}+B_{\mu\nu}A_{\sigma\tau}
+D_{\mu\nu}D_{\sigma\tau}\}+\mbox{c.c.}.
\end{equation}
We find that eq(\ref{GINVEQM6}) takes the form eq(\ref{GINVEQM8}) where
the columns of ${\cal M}$ are
\begin{equation}
{\cal M}_1=
\left(
\begin{array}{c}
q^2+(\lambda+{\bar \ll})(l.qn.q+m.q{\bar m}.q)\\
-\mu(m.q)^2-{\bar \mu}({\bar m}.q)^2-(\lambda+{\bar \ll})(n.q)^2\\
-{\bar \mu} l.q{\bar m}.q+(2\lambda-{\bar \ll})n.qm.q\\
-\mu l.qm.q+(2{\bar \ll}-\lambda)n.q{\bar m}.q
\end{array}
\right)
\end{equation}
\begin{equation}
{\cal M}_2=
\left(
\begin{array}{c}
-\mu({\bar m}.q)^2-{\bar \mu}(m.q)^2-(\lambda+{\bar \ll})(l.q)^2\\
q^2+(\lambda+{\bar \ll})(l.qn.q+m.q{\bar m}.q)\\
-\mu n.q{\bar m}.q+(2{\bar \ll}-\lambda)l.qm.q\\
-{\bar \mu} n.qm.q+(2\lambda-{\bar \ll})l.q{\bar m}.q
\end{array}
\right)
\end{equation}
\begin{equation}
{\cal M}_3=
\left(
\begin{array}{c}
{\bar \mu} n.qm.q-(2\lambda-{\bar \ll})l.q{\bar m}.q\\
\mu l.qm.q-(2{\bar \ll}-\lambda)n.q{\bar m}.q\\
q^2-(\lambda+{\bar \ll})(l.qn.q+m.q{\bar m}.q)\\
\mu(l.q)^2+{\bar \mu}(n.q)^2+(\lambda+{\bar \ll})({\bar m}.q)^2
\end{array}
\right)
\end{equation}
\begin{equation}
{\cal M}_4=
\left(
\begin{array}{c}
\mu n.q{\bar m}.q-(2{\bar \ll}-\lambda)l.qm.q\\
{\bar \mu} l.q{\bar m}.q-(2\lambda-{\bar \ll})n.qm.q\\
\mu(n.q)^2+{\bar \mu}(l.q)^2+(\lambda+{\bar \ll})(m.q)^2\\
q^2-(\lambda+{\bar \ll})(l.qn.q+m.q{\bar m}.q)
\end{array}
\right)
\end{equation}
The determinant of ${\cal M}$ is
\begin{equation}
\det {\cal M}=(q^2)^2\{\Delta_0+\mu\Delta_1+{\bar \mu}\Delta_2+\mu^2\Delta_3+{\bar \mu}^2\Delta_4
+\mu{\bar \mu}\Delta_5+\mu^2{\bar \mu}\Delta_6+\mu{\bar \mu}^2\Delta_7\},
\label{DELPI1}
\end{equation}
where
\begin{eqnarray}
\Delta_0&=&\frac{1}{4}(\lambda+{\bar \ll}+1)(\lambda+{\bar \ll}-2)(q^2)^2+18\lambda{\bar \ll} l.qn.qm.q{\bar m}.q,\nonumber\\
\Delta_1&=&-3{\bar \ll}(2\lambda-{\bar \ll}-2)[(l.q)^2(m.q)^2+(n.q)^2({\bar m}.q)^2],\nonumber\\
\Delta_2&=&-3\lambda(2{\bar \ll}-\lambda-2)[(l.q)^2({\bar m}.q)^2+(n.q)^2(m.q)^2],\nonumber\\
\Delta_3&=&-[(\lambda+{\bar \ll}+1)((l.q)^2(n.q)^2+(m.q)^2({\bar m}.q)^2)+2(2{\bar \ll}-\lambda-1)l.qn.qm.q{\bar m}.q],\nonumber\\
\Delta_4&=&-[(\lambda+{\bar \ll}+1)((l.q)^2(n.q)^2+(m.q)^2({\bar m}.q)^2)+2(2\lambda-{\bar \ll}-1)l.qn.qm.q{\bar m}.q],\nonumber\\
\Delta_5&=&-(\lambda+{\bar \ll}+1)[(l.q)^4+(n.q)^4+(m.q)^4+({\bar m}.q)^4],\nonumber\\
\Delta_6&=&-[(l.q)^2({\bar m}.q)^2+(n.q)^2(m.q)^2],\nonumber\\
\Delta_7&=&-[(l.q)^2(m.q)^2+(n.q)^2({\bar m}.q)^2].
\end{eqnarray}
The quartic factor in eq(\ref{DELPI1}) yields the dispersion relation for the physical modes.
Unsurprisingly it does not exhibit any obvious factorisation properties. Hence we expect
that in this case also there are no simple lightcones controlling photon propagation. There may be
special choices of the parameters that do allow factorisation.
\section{\label{GENGF}Generalised Gauge Fixing for the EM Field}
In the above discussion we obtained the dispersion relations for physical modes
by imposing the gauge condition $g^{\mu\nu}\partial_\mu A_\nu(x)=0$. However
in anticipation of issues that arise in the context of the renormalisation of gauge theories
with a multi-metric structure we examine a more general form of gauge fixing for the
electromagnetic (EM) field. We introduce a metric-like object $\Lambda^{\mu\nu}$ and
impose the gauge condition
\begin{equation}
\Lambda^{\mu\nu}\partial_\mu A_\nu(x)=0.
\end{equation}
In appendix \ref{GFIX2} we present the standard argument using the functional formalism
to derive the gauge fixed action for the EM field. It is
\begin{equation}
S_{(p)}=\int d^nx \{-\frac{1}{8}U^{\mu\nu\sigma\tau}F_{\mu\nu}(x)F_{\sigma\tau}(x)
-\frac{1}{2}\Lambda^{\mu\nu}\Lambda^{\sigma\tau}\partial_\mu A_\nu(x)\partial_\sigma A_\tau(x)
-\partial_\mu{\bar c}(x)\Lambda^{\mu\nu}\partial_\nu c(x)\}.
\end{equation}
Here $c(x)$ and ${\bar c}(x)$ are the ghost fields.
Because we intend to use dimensional regularisation we express our results in $n$ dimensions.
The argument identifying the preferred metric generalises straightforwardly to $n$ dimensions,
hence we can express $U^{\mu\nu\sigma\tau}$ using eq(\ref{PRFMD}) interpreted in $n$ dimensions.
The equation of motion for the photon field is then
\begin{equation}
(g^{\mu\sigma}g^{\nu\tau}-(g^{\mu\tau}g^{\nu\sigma}-\Lambda^{\mu\nu}\Lambda^{\sigma\tau})
-C^{\mu\nu\sigma\tau})\partial_\sigma\partial_\mu A_\nu(x)=0,
\label{EQMOTPH}
\end{equation}
and those for the ghost fields are
\begin{eqnarray}
\Lambda^{\mu\nu}\partial_\mu\partial_\nu c(x)&=&0,\nonumber\\
\Lambda^{\mu\nu}\partial_\mu\partial_\nu{\bar c}(x)&=&0.
\label{EQMOTGST}
\end{eqnarray}
Clearly $\Lambda^{\mu\nu}$ plays the role of the (inverse) metric for the ghost fields.
It follows that the null mass-shell condition for the ghosts is
\begin{equation}
\Lambda^{\mu\nu}q_\mu q_\nu=0.
\end{equation}
Even in the case of no Lorentz symmetry breaking this null mass-shell is distinct
from the photon null mass-shell unless we set $\Lambda^{\mu\nu}=g^{\mu\nu}$.
In QED, of course the ghosts do not interact with the photons. For this reason the ghosts
are usually ignored in QED calculations. We will keep them in mind in particular because they
play a more significant role in the corresponding situation in non-abelian gauge theories.
The presence of multiple null mass-shells or multiple lightcones in the theory raises the same
issues dealt with in a previous paper discussing a bimetric model with scalar fields. For the
moment we will restrict our observations to the requirement that the parameters of the theory
should be constrained so that there exist foliations of spacetime that are spacelike with respect to
all relevant metrics in order to permit a causal structure in the theory including
the ghosts \cite{ITD2}.
\subsection{\label{PHWFS} Photon Wavefunctions}
Plane wave solutions of eq(\ref{EQMOTPH}) have the form
\begin{equation}
A_\nu(x)={\varepsilon}_\nu e^{-iq.x},
\end{equation}
where ${\varepsilon}_\nu$ satisfies
\begin{equation}
M^{\tau\nu}{\varepsilon}_\nu=0,
\label{WF1}
\end{equation}
with
\begin{equation}
M^{\tau\nu}={\cal M}^{\tau\nu}-q^\tau q^\nu+Q^\tau Q^\nu,
\end{equation}
where we have set $Q^\mu=\Lambda^{\mu\nu}q_\nu$ and where
\begin{equation}
{\cal M}^{\tau\nu}=q^2g^{\tau\nu}-N^{\tau\nu}.
\end{equation}
Recall $N^{\tau\nu}=N^{\nu\tau}=C^{\mu\nu\sigma\tau}q_\mu q_\sigma$.
In this notation the ghost mass shell condition is
\begin{equation}
q.Q=0.
\end{equation}
We have also
\begin{equation}
N^{\mu\nu}q_\nu=q_\mu N^{\mu\nu}=0.
\end{equation}
We then find easily that eq(\ref{WF1}) implies
\begin{equation}
q.QQ.{\varepsilon}=0.
\end{equation}
Provided that $q$ does not lie on the ghost mass shell the photon wavefunction satisfies
\begin{equation}
Q.{\varepsilon}=0,
\end{equation}
which is precisely the gauge condition we wish to impose.
In order that eq(\ref{WF1}) have a solution it is necessary that $M^{\tau\nu}$ be singular. The inverse
of $M^{\tau\nu}$ is $M_{\nu\lambda}$ satisfying $M^{\tau\nu}M_{\nu\lambda}={\delta}^\tau_\lambda$ and has the form
\begin{equation}
M_{\tau\nu}=\left({\delta}^\rho_\tau-\frac{q_\tau Q^\rho}{Q.q}\right){\cal M}_{\rho\lambda}\left({\delta}^\lambda_\nu-\frac{Q^\lambda q_\nu}{Q.q}\right)
+\frac{q_\tau q_\nu}{(Q.q)^2},
\label{WF2}
\end{equation}
where ${\cal M}_{\rho\lambda}$ is the inverse of ${\cal M}^{\rho\lambda}$.
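The structure of eq(\ref{WF2}) can be tested in the simplest setting. The sketch below is a SymPy check in Petrov class O, i.e. $C=0$ so that ${\cal M}_{\rho\lambda}=g_{\rho\lambda}/q^2$; the rational entries of $\Lambda^{\mu\nu}$ and $q_\mu$ are hypothetical test values chosen off both the photon and ghost mass-shells. It confirms that the stated $M_{\tau\nu}$ really inverts $M^{\tau\nu}$:

```python
import sympy as sp

g = sp.diag(1, -1, -1, -1)                       # preferred metric (= its inverse)
# hypothetical gauge-fixing "metric" Lambda^{mu nu} and covariant q_mu,
# chosen so that q^2 != 0 (photon shell) and Q.q != 0 (ghost shell)
Lam = sp.Matrix([[1, sp.Rational(1, 5), 0, 0],
                 [sp.Rational(1, 5), -1, 0, 0],
                 [0, 0, -1, sp.Rational(1, 7)],
                 [0, 0, sp.Rational(1, 7), -1]])
q = sp.Matrix([3, 1, sp.Rational(1, 2), 2])

Q   = Lam * q                                    # Q^mu = Lambda^{mu nu} q_nu
qup = g * q                                      # q^mu
qsq = (qup.T * q)[0]                             # q^2
s   = (Q.T * q)[0]                               # Q.q

# Petrov class O: C = 0, so Mc^{tau nu} = q^2 g^{tau nu}, Mc_{rho lam} = g/q^2
M    = qsq * g - qup * qup.T + Q * Q.T           # M^{tau nu}
Mc_i = g / qsq
P    = sp.eye(4) - q * Q.T / s                   # delta^rho_tau - q_tau Q^rho / Q.q
Minv = P * Mc_i * P.T + q * q.T / s**2           # eq(WF2)

print(sp.simplify(M * Minv))                     # 4x4 identity
```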
Clearly we can expect $M^{\tau\nu}$ to be singular when either
\begin{equation}
Q^\mu q_\mu=\Lambda^{\mu\nu}q_\mu q_\nu=0,
\end{equation}
or ${\cal M}^{\rho\lambda}$ is singular.
That is either $q_\mu$ lies on the ghost mass-shell or $q_\mu$ satisfies $\det{\cal M}^{\rho\lambda}=0$.
There may be special cases when these two constraints intersect and $q_\mu$ lies
in the intersection. For simplicity of exposition we will not consider these special cases explicitly.
The former condition above signals the presence of ghost contributions to the photon propagator.
This is not different from the standard case except that our gauge condition separates the ghost mass-shell
from that of the physical modes. The physical modes are associated with the vanishing of $\det{\cal M}^{\rho\lambda}$.
However as we have seen from our examination of the various Petrov classes $\det{\cal M}^{\rho\lambda}$, in four
dimensions, contains a factor $(q^2)^2$. This remains true in $n$ dimensions. Nevertheless $q^2=0$ does
not correspond to a singularity of $M^{\nu\tau}$. We show this explicitly in appendix \ref{FALSING}.
The relationship between elements of the kernels of $M^{\tau\nu}$ and ${\cal M}^{\tau\nu}$ can be exhibited
directly. Let ${\varepsilon}'$ satisfy
\begin{equation}
{\cal M}^{\tau\nu}{\varepsilon}'_\nu=0.
\end{equation}
It follows, assuming $q^2\ne 0$, that $q.{\varepsilon}'=0$. Now introduce ${\varepsilon}$ where
\begin{equation}
{\varepsilon}={\varepsilon}'+\alpha q,
\end{equation}
$\alpha$ being chosen so that $Q.{\varepsilon}=0$. It is then easy to check that
\begin{equation}
M^{\tau\nu}{\varepsilon}_\nu=0.
\end{equation}
This relationship, a gauge transformation of course, explains why the
same physical dispersion relation emerges from either method of fixing the gauge.
Because a factor of $(q^2)^2$ can be extracted from $\det {\cal M}^{\tau\nu}$
it follows that the vanishing of the remaining factor is homogeneous in $q$
of degree $2(n-2)$ \cite{ITIN}. Confining attention to positive energy solutions we can
expect in general $n-2$ branches for the physical dispersion relation. As we have seen
from our analysis of the Petrov classification, these may or may not factorise into separate
lightlike mass-shells.
There is also a solution for which ${\varepsilon}\propto q$, when $q$ lies on the ghost mass-shell.
The final contribution to the full suite of $n$ solutions is not pure plane wave but contains a secular term.
It is not unique but can be chosen to have the form
\begin{equation}
A_\tau(x)=(a_\tau+iq_\tau x^0)e^{-iq_\mu x^\mu}.
\end{equation}
Here $q_\tau$ lies on the ghost mass-shell. The requirement that the above wavefunction
is a solution of eq(\ref{EQMOTPH}) only fixes $a_\tau$ up to a gauge transformation $a_\tau\rightarrow a_\tau+\alpha q_\tau$.
It is convenient to complete the determination of $a_\tau$ by choosing $\alpha$ so that $q.a=0$.
An analogous solution appears in conventional gauge fixing in standard gauge theories
where it can be understood as the limit of a difference of two coinciding solutions in
the Stuckelberg approach to gauge theories.
\subsection{\label{PHGF} Photon Green's Functions}
We can compute the photon Green's functions from the generating functional, $Z_{(p)}[J]$,
where
\begin{equation}
Z_{(p)}[J]=\int d[A]\exp\left\{iS_{(p)}+i\int d^nx A_\mu(x)J^\mu(x)\right\},
\end{equation}
and
\begin{equation}
S_{(p)}=\int d^nx \{-\frac{1}{8}U^{\mu\nu\sigma\tau}F_{\mu\nu}(x)F_{\sigma\tau}(x)
-\frac{1}{2}\Lambda^{\mu\nu}\Lambda^{\sigma\tau}\partial_\mu A_\nu(x)\partial_\sigma A_\tau(x)\}.
\label{PHACTION}
\end{equation}
Here $S_{(p)}$ is the gauge fixed action without the ghosts, which do not play a role in QED calculations.
On completing the square in the exponent in the functional integral we find that
\begin{equation}
Z_{(p)}[J]=C\exp\left\{i\int d^nxd^nx'(-\frac{1}{2}J^\sigma(x)N_{\sigma\tau}(x-x')J^\tau(x'))\right\},
\label{GENF4}
\end{equation}
where
\begin{equation}
N_{\tau\lambda}(x-x')=-\int\frac{d^nq}{(2\pi)^n}e^{-iq_\mu(x-x')^\mu}M_{\tau\lambda}(q).
\end{equation}
The overall factor $C=Z_{(p)}[0]$ is irrelevant to subsequent calculations. It will be omitted
from now on.
The two-point Green's function for free photons is $iG_{F\mu\nu}(x-x')$ where
\begin{equation}
iG_{F\mu\nu}(x-x')=\frac{1}{Z[J]}\frac{{\delta}}{i{\delta} J^\mu(x)}\frac{{\delta}}{i{\delta} J^\nu(x')}Z[J]|_{J=0}
=\int \frac{d^nq}{(2\pi)^n} e^{-iq_\mu(x-x')^\mu}(-iM_{\mu\nu}(q)).
\end{equation}
\subsection{\label{PHGFREP} Representation for the Photon Green's Function}
It is convenient, for later use in computing divergences, to construct a representation for the photon
Green's function. The essential step is to obtain a representation for the core quantity ${\cal M}_{\rho\lambda}(q)$.
First we introduce the matrix ${\cal M}^\mu_{~~\lambda}(q)$ given by
\begin{equation}
{\cal M}^\mu_{~~\lambda}={\cal M}^{\mu\nu}(q)g_{\nu\lambda}=q^2{\delta}^\mu_\lambda-N^\mu_{~~\lambda}(q),
\end{equation}
where
\begin{equation}
N^\mu_{~~\lambda}(q)=N^{\mu\nu}(q)g_{\nu\lambda}=C^{\mu\sigma~~\tau}_{~~~~\lambda}q_\sigma q_\tau.
\end{equation}
Formally we write
\begin{equation}
{\cal M}^\mu_{~~\lambda}(q)=({\bf \Mc}(q))^\mu_{~~\lambda},
\end{equation}
and
\begin{equation}
N^\mu_{~~\lambda}(q)=({\bf N}(q))^\mu_{~~\lambda},
\end{equation}
hence, in matrix notation,
\begin{equation}
{\bf \Mc}(q)=q^2{\bf 1}-{\bf N}(q).
\end{equation}
We have then
\begin{equation}
{\cal M}_{\nu\rho}(q)=g_{\nu\lambda}({\bf \Mc}^{-1}(q))^\lambda_{~~\rho}.
\end{equation}
We now introduce the identity
\begin{equation}
{\bf \Mc}^{-1}(q)=-i\int_0^\infty du\exp\{iu({\bf \Mc}(q)+i{\varepsilon})\}=-i\int_0^\infty du e^{iu(q^2+i{\varepsilon})}\exp\{-iu{\bf N}(q)\}.
\end{equation}
We set
\begin{equation}
\exp\{-iu{\bf N}(q)\}=\exp\{-i{\bf N}(-i\partial_z)\}e^{i\sqrt{u}\,q.z},
\end{equation}
with the proviso that afterwards we set $z=0$.
We then have the result
\begin{equation}
{\bf \Mc}^{-1}(q)=-i\exp\{-i{\bf N}(-i\partial_z)\}\int_0^\infty du e^{iu(q^2+q.z/\sqrt{u}+i{\varepsilon})}.
\label{INVREP}
\end{equation}
We will find this result useful later. It has the advantage of exhibiting the
dependence of the photon propagator on the WLT, $C^{\mu\nu\sigma\tau}$.
\subsection{\label{EGF} Electron Green's Functions}
Because we wish to associate a new lightcone with the electron field we introduce
a vierbein ${\bar e}^a_{~~\mu}$ which allows us to create a new metric ${\bar g}_{\mu\nu}=\eta_{ab}{\bar e}^a_{~~\mu}{\bar e}^b_{~~\nu}$
and appropriate Dirac matrices ${\bar \gg}^\mu={\bar e}_{~~a}^\mu\gamma^a$. Here $\gamma^a$ are standard Dirac matrices
that satisfy $\{\gamma^a,\gamma^b\}=2\eta^{ab}$, hence
\begin{equation}
\{{\bar \gg}^\mu,{\bar \gg}^\nu\}=2{\bar g}^{\mu\nu}.
\end{equation}
The volume element is again $d^nx$. This can be achieved by adjusting appropriately the normalisation of the
electron field.
The action for the free electron field is $S_{(e)}$ where
\begin{equation}
S_{(e)}=\int d^nx{\bar \psi}(x)(i{\bar \gg}^\mu\partial_\mu-m)\psi(x),
\label{EACTION}
\end{equation}
and $m$ is the mass parameter of the electron.
The Green's functions for the electron field are obtained from the generating functional $Z_{(e)}[\eta,{\bar \eta}]$
where
\begin{equation}
Z_{(e)}[\eta,{\bar \eta}]=\int d[\psi]d[{\bar \psi}]\exp\left\{iS_{(e)}+i\int d^nx({\bar \eta}(x)\psi(x)+{\bar \psi}(x)\eta(x))\right\},
\end{equation}
and $\eta(x)$ and ${\bar \eta}(x)$ are anticommuting fields. Completing the square in the exponent in the functional integral
we find
\begin{equation}
Z_{(e)}[\eta,{\bar \eta}]=C' \exp\left\{-i\int d^nxd^nx'{\bar \eta}(x)\Delta(x-x')\eta(x')\right\},
\end{equation}
where
\begin{equation}
\Delta(x-x')=\int \frac{d^nq}{(2\pi)^n}\frac{({\bar \gg}^\mu q_\mu+m)e^{-iq_\mu(x-x')^\mu}}{{\bar g}^{\mu\nu} q_\mu q_\nu-m^2+i\epsilon}.
\end{equation}
The coefficient $C'$ can be omitted. We have
\begin{equation}
iS_F(x-x')=\frac{1}{Z[J,\eta,{\bar \eta}]}\frac{1}{i}\frac{{\delta}}{{\delta} \eta(x')}\frac{1}{i}\frac{{\delta}}{{\delta} {\bar \eta}(x)}Z[J,\eta,{\bar \eta}]|_{J=\eta={\bar \eta}=0},
\end{equation}
giving the result
\begin{equation}
iS_F(x-x')=i\int\frac{d^nq}{(2\pi)^n}\frac{({\bar \gg}^\mu q_\mu+m)e^{-iq_\mu(x-x')^\mu}}{{\bar g}^{\mu\nu}q_\mu q_\nu-m^2+i\epsilon}.
\end{equation}
\section{\label{BASMOD} Bimetric QED}
We now arrange for the electrons to interact with the electromagnetic field.
Of course we must allow for the renormalisation of the bare parameters of the theory.
We denote these bare parameters with a zero suffix. The bare electric charge is $-e_0$ and
the bare mass parameter is $m_0$. From our previous paper we can anticipate that the
metrics in the theory will also require renormalisation so we make the replacements
$g_{\mu\nu}\rightarrow g_{0\mu\nu}$, ${\bar e}^a_{~~\mu}\rightarrow {\bar e}^a_{0~\mu}$ and ${\bar \gg}^\mu\rightarrow {\bar \gg}_0^\mu={\bar e}^\mu_{0~a}\gamma^a$.
We also have ${\bar g}_{\mu\nu}\rightarrow{\bar g}_{0\mu\nu}=\eta_{ab}{\bar e}^a_{0~\mu}{\bar e}^b_{0~\nu}$. We must also
allow for the necessity of renormalising the WLT representing Lorentz symmetry breaking for the photon field
and make the replacement $C^{\mu\nu\sigma\tau}\rightarrow C^{\mu\nu\sigma\tau}_0$ and hence the obvious
corresponding replacement $U^{\mu\nu\sigma\tau}\rightarrow U^{\mu\nu\sigma\tau}_0$.
In addition the gauge fixing metric is modified with $\Lambda^{\mu\nu}\rightarrow \Lambda_0^{\mu\nu}$, in order to
accommodate possible renormalisation effects.
We deviate a little from the approach of \cite{ITD2} by identifying the volume elements of the two metrics
as $d^nx$, which then remains unrenormalized. Instead we
rely, in a conventional way, on field renormalisations to render the Green's functions of the theory finite.
In \cite{ITD2} we absorbed this field renormalisation into the renormalisation of the volume elements. In fact a
study of the renormalisation group in \cite{ITD2} showed that it may be factored out again. In the present model it
turns out that we cannot in any case carry out this manoeuvre, because the photon action depends quadratically on the
(inverse) metric. We will therefore adopt the more conventional scheme and treat the field renormalisations
independently of the metric.
The action for the electromagnetic field $A_\mu(x)$ is the gauge fixed action $S_{(p)}$, now
given by eq(\ref{PHACTION}) with the replacement of parameters by their bare versions.
We omit the ghost fields from now on since they play no role in QED calculations.
The action for the electron field, $S_{(e)}$, is obtained similarly from eq(\ref{EACTION})
together with the inclusion of the electromagnetic interaction to yield
\begin{equation}
S_{(e)}=\int d^nx{\bar \psi}(x)(i{\bar \gg}_0^\mu\partial_\mu+e_0{\bar \gg}_0^\mu A_\mu(x)-m_0)\psi(x).
\end{equation}
The total action for the theory is
\begin{equation}
S=S_{(p)}+S_{(e)}.
\end{equation}
The generating functional for the Green's functions is
\begin{equation}
Z[J,\eta,{\bar \eta}]=\int d[A]d[\psi]d[{\bar \psi}]\exp\{iS+i\int d^nx(J^\mu(x)A_\mu(x)+({\bar \eta}(x)\psi(x)+{\bar \psi}(x)\eta(x)))\}.
\label{GENF1}
\end{equation}
When $e_0=0$, $S$ reduces to the free action $S_f$ and $Z[J,\eta,{\bar \eta}]$ becomes the free-particle generating functional,
$Z_f[J,\eta,{\bar \eta}]$, where
\begin{equation}
Z_f[J,\eta,{\bar \eta}]=\int d[A]d[\psi]d[{\bar \psi}]\exp\{iS_f+i\int d^nx(J^\mu(x)A_\mu(x)+({\bar \eta}(x)\psi(x)+{\bar \psi}(x)\eta(x)))\}.
\label{GENF2}
\end{equation}
The full generating functional, $Z[J,\eta,{\bar \eta}]$, can be reconstructed from $Z_f[J,\eta,{\bar \eta}]$ by
means of the standard formula
\begin{equation}
Z[J,\eta,{\bar \eta}]=\exp\left\{ie_0\int d^nx
\frac{1}{i}\frac{{\delta}}{{\delta} \eta(x)}{\bar \gg}^\mu_0\frac{1}{i}\frac{{\delta}}{{\delta} J^\mu(x)}\frac{1}{i}\frac{{\delta}}{{\delta}{\bar \eta}(x)}\right\}
Z_f[J,\eta,{\bar \eta}].
\label{GENF3}
\end{equation}
\subsection{\label{GF}Green's Functions with Interaction}
The Green's functions that are important for a study of the renormalisation of the theory are
the two-point Green's function for photons, $G_{\mu\nu}(x-x')$, where
\begin{equation}
iG_{\mu\nu}(x-x')=\frac{1}{Z[J,\eta,{\bar \eta}]}\frac{1}{i}\frac{{\delta}}{{\delta} J^\mu(x)}\frac{1}{i}\frac{{\delta}}{{\delta} J^\nu(x')}
Z[J,\eta,{\bar \eta}]|_{J=\eta={\bar \eta}=0};
\label{GF1}
\end{equation}
the two-point Green's function for electrons, $S(x-x')$, where
\begin{equation}
iS(x-x')=\frac{1}{Z[J,\eta,{\bar \eta}]}\frac{1}{i}\frac{{\delta}}{{\delta} \eta(x')}\frac{1}{i}\frac{{\delta}}{{\delta} {\bar \eta}(x)}Z[J,\eta,{\bar \eta}]|_{J=\eta={\bar \eta}=0};
\label{GF2}
\end{equation}
and the vertex function $V_\mu(x,x',y)$, where
\begin{equation}
iV_\mu(x,x',y)=\frac{1}{Z[J,\eta,{\bar \eta}]}\frac{1}{i}\frac{{\delta}}{{\delta} J^\mu(y)}\frac{1}{i}\frac{{\delta}}{{\delta} \eta(x')}\frac{1}{i}\frac{{\delta}}{{\delta} {\bar \eta}(x)}
Z[J,\eta,{\bar \eta}]|_{J=\eta={\bar \eta}=0}.
\label{GF3}
\end{equation}
The lowest approximation to $G_{\mu\nu}(x-x')$ is obtained by substituting $Z_f$ for $Z$ in eq(\ref{GF1}). We have
\begin{equation}
iG_{F\mu\nu}(x-x')=\frac{1}{i}\int \frac{d^nq}{(2\pi)^n}
e^{-iq_\mu(x-x')^\mu}M_{0\mu\nu}(q).
\label{GF4}
\end{equation}
We have made the substitution $M_{\mu\nu}\rightarrow M_{0\mu\nu}$ to indicate that bare parameters are involved in the
construction of $M_{0\mu\nu}$.
In the same way we obtain the lowest approximation to $S(x-x')$. It is
\begin{equation}
iS_F(x-x')=-\frac{1}{i}\int\frac{d^nq}{(2\pi)^n}\frac{({\bar \gg}_0^\mu q_\mu+m_0)e^{-iq_\mu(x-x')^\mu}}{{\bar g}_0^{\mu\nu}q_\mu q_\nu-m_0^2+i\epsilon}.
\label{GF5}
\end{equation}
The lowest approximation to $iV_\mu(x,x',y)$ is
\begin{equation}
iV_{F\mu}(x,x',y)= \int d^ny'iG_{F\mu\nu}(y-y')iS_F(x-y')(ie_0{\bar \gg}_0^\nu) iS_F(y'-x').
\label{GF6}
\end{equation}
\subsection{\label{FEYN} Feynman Rules for Bimetric QED}
The Feynman rules for computing the perturbative expansions of Green's functions (in terms of bare parameters) can
be read off from the expansion for $Z[J,\eta,{\bar \eta}]$ in eq(\ref{GENF3}) and the Green's function formulae in eqs(\ref{GF1}), (\ref{GF2}).
The Feynman diagrams are the conventional ones with the lines corresponding to $iG_{F\mu\nu}(x-x')$
for photons, $iS_F(x-x')$ for electrons and $ie_0{\bar \gg}_{0\mu}$ for each vertex. In momentum space a photon line with
momentum $q_\mu$ is associated with a factor
$$
iG_{F\mu\nu}(q)=-iM_{0\mu\nu}(q),
$$
and each electron line with
$$
iS_F(q)=i\frac{({\bar \gg}_0^\mu q_\mu+m_0)}{{\bar g}_0^{\mu\nu}q_\mu q_\nu-m_0^2+i\epsilon}.
$$
The vertex is $ie_0{\bar \gg}_{0\mu}$. Of course there is momentum conservation at each vertex and each loop momentum $q_\mu$ is
integrated with a measure
$$
\int\frac{d^nq}{(2\pi)^n}.
$$
Each electron loop has associated with it a factor of $(-1)$.
\subsection{\label{RENORM} Renormalisation}
We use dimensional regularisation with minimal subtraction to renormalise the theory. We introduce a scale $\mu$ so that the electron
charge can be expressed in the form
\begin{equation}
e_0=(\mu)^{(4-n)/2}e(1+\sum_{k=1}^\infty a^{(k)}(n)(e^2)^k).
\label{REN1}
\end{equation}
The term in the sum that is $O(e^2)$ is simply a single pole at $n=4$. The renormalized charge $e$ is dimensionless.
Similarly the bare mass is expressed in the form
\begin{equation}
m_0=m(1+\sum_{k=1}^\infty b^{(k)}(n)(e^2)^k),
\label{REN2}
\end{equation}
where $m$ is the renormalized mass parameter. Again the term that is $O(e^2)$ is a simple pole at $n=4$.
The new aspect of the renormalisation procedure required for bimetric QED is that we must also
allow for a renormalisation of the metrics thus
\begin{equation}
g_0^{\mu\nu}=g^{\mu\nu}+\sum_{k=1}^\infty g^{(k)\mu\nu}(e^2)^k.
\label{REN3}
\end{equation}
Similarly for the vierbein associated with the electron
\begin{equation}
{\bar e}_{0~a}^{\mu}={\bar e}_{~~a}^{\mu}+\sum_{k=1}^\infty {\bar e}_{~~~~~a}^{(k)\mu}(e^2)^k.
\label{REN4}
\end{equation}
The WLT term also requires renormalisation
\begin{equation}
C_0^{\mu\nu\sigma\tau}=C^{\mu\nu\sigma\tau}+\sum_{k=1}^\infty C^{(k)\mu\nu\sigma\tau}(e^2)^k.
\label{REN5}
\end{equation}
Finally
\begin{equation}
\Lambda_0^{\mu\nu}=\Lambda^{\mu\nu}+\sum_{k=1}^\infty \Lambda^{(k)\mu\nu}(e^2)^k.
\label{REN6}
\end{equation}
Again the terms of $O(e^2)$ contain a simple pole at $n=4$. Higher order terms contain poles of increasingly higher order.
Note that since we require $\det g_0^{\mu\nu}=\det g^{\mu\nu}=-1$, it follows that
\begin{equation}
g_{\mu\nu}g^{(1)\mu\nu}=0.
\label{REN7}
\end{equation}
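The origin of eq(\ref{REN7}) is the first-order variation of the determinant, $\det(g+th)\simeq\det g\,(1+t\,g_{\mu\nu}h^{\mu\nu})$, so preserving the unit determinant forces the $O(e^2)$ counterterm to be traceless. A numerical spot-check (the perturbation is a random illustrative choice):

```python
import numpy as np

# d/dt det(g + t h)|_{t=0} = det(g) * g_{mu nu} h^{mu nu}: a counterterm that
# preserves det g_0^{mu nu} = -1 must satisfy g_{mu nu} g^{(1)mu nu} = 0.
rng = np.random.default_rng(0)
g_up = np.diag([1.0, -1.0, -1.0, -1.0])
h = rng.standard_normal((4, 4))
h = h + h.T                                          # symmetric perturbation
g_dn = np.linalg.inv(g_up)
h_traceless = h - 0.25 * np.einsum('mn,mn->', g_dn, h) * g_up

t = 1e-6
d_det = (np.linalg.det(g_up + t * h_traceless) - np.linalg.det(g_up)) / t
assert abs(np.einsum('mn,mn->', g_dn, h_traceless)) < 1e-12
assert abs(d_det) < 1e-3                             # variation vanishes at first order
```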
The renormalisation procedure is carried out by inserting these parameter expansions into the bare perturbation
series and re-expanding in the renormalised charge $e$. The $n$-dependent coefficients of powers of $e^2$ are adjusted so that
there are sufficient cancellations of poles at $n=4$ that residual singular structure can be removed by appropriate
field renormalisation factors yielding finally Green's functions that are without divergent terms at $n=4$.
\section{\label{VACP} Vacuum Polarisation}
The lowest order contribution to the photon two-point function is represented by the Feynman
diagram in Fig \ref{FIG1}.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\linewidth]{FIG1}
\caption{Contributions to the photon propagator to $O(e_0^2)$. }
\label{FIG1}
\end{figure}
Making use of the Feynman rules above we obtain to order $e_0^2$
\begin{equation}
iG_{\mu\nu}(q)=iG_{F\mu\nu}(q)+iG_{F\mu\sigma}(q)[i\Sigma^{\sigma\tau}(q)]iG_{F\tau\nu}(q),
\label{VACP1}
\end{equation}
where
\begin{equation}
i\Sigma^{\sigma\tau}(q)=(-1)(ie_0)^2\int \frac{d^nk}{(2\pi)^n}{\hbox{Tr}}\{{\bar \gg}_0^\sigma iS_F(k-q){\bar \gg}_0^\tau iS_F(k)\}.
\label{VACP2}
\end{equation}
If we introduce the inverse of the Green's function, $G^{\mu\nu}(q)$ where $G^{\mu\nu}(q)G_{\nu\lambda}(q)={\delta}^\mu_\lambda$
and correspondingly $G_F^{\mu\nu}(q)$ where $G_F^{\mu\nu}(q)G_{F\nu\lambda}(q)={\delta}^\mu_\lambda$ we find to second order
\begin{equation}
G^{\mu\nu}(q)=G_F^{\mu\nu}(q)+\Sigma^{\mu\nu}(q).
\label{VACP3}
\end{equation}
Since our calculation is an expansion to one loop we restrict the coupling constant expansion to $O(e^2)$. We have then
\begin{equation}
\Sigma^{\mu\nu}(q)=ie^2\mu^{(4-n)}\int\frac{d^nk}{(2\pi)^n}\frac{{\hbox{Tr}}[{\bar \gg}^\mu({\bar \gg}^\lambda(k-q)_\lambda+m){\bar \gg}^\nu({\bar \gg}^\rho k_\rho+m)]}
{({\bar g}^{\alpha\beta}(k-q)_\alpha(k-q)_\beta-m^2+i\epsilon)({\bar g}^{\gamma\delta}k_\gamma k_\delta-m^2+i\epsilon)}.
\label{VACP4}
\end{equation}
The calculation can be performed along essentially conventional lines. The divergent behaviour is
exhibited by calculating to $O(q^2)$ and we find
\begin{equation}
\Sigma^{\mu\nu}(q)\simeq -\frac{e^2}{3}\frac{d(n)}{(4\pi)^{n/2}}\left(\frac{\mu}{m}\right)^{4-n}
\Gamma(2-n/2)\left({\bar g}^{\mu\nu}{\bar g}^{\alpha\beta}-{\bar g}^{\mu\beta}{\bar g}^{\alpha\nu}\right)q_\alpha q_\beta.
\label{VACP5}
\end{equation}
Here $d(n)$ is the dimension of the $\gamma$-matrix representation. Of course $d(4)=4$. Therefore
we have a pole at $n=4$ of the form
\begin{equation}
\Sigma^{\mu\nu}(q)\simeq \frac{e^2}{6\pi^2}\frac{1}{n-4}
\left({\bar g}^{\mu\nu}{\bar g}^{\alpha\beta}-{\bar g}^{\mu\beta}{\bar g}^{\alpha\nu}\right)q_\alpha q_\beta.
\label{VACP6}
\end{equation}
Let the tensor $W^{\mu\alpha\nu\beta}$ be given by
\begin{equation}
W^{\mu\alpha\nu\beta}={\bar g}^{\mu\nu}{\bar g}^{\alpha\beta}-{\bar g}^{\mu\beta}{\bar g}^{\alpha\nu}.
\label{VACP7}
\end{equation}
Since it has the same symmetry properties as $U^{\mu\alpha\nu\beta}$, $W^{\mu\alpha\nu\beta}$ can (in four dimensions)
be expressed in the form
\begin{equation}
W^{\mu\alpha\nu\beta}=\frac{1}{12}W(g^{\mu\nu}g^{\alpha\beta}-g^{\mu\beta}g^{\alpha\nu})
+\frac{1}{2}(V^{\mu\nu}g^{\alpha\beta}+g^{\mu\nu}V^{\alpha\beta}-V^{\mu\beta}g^{\alpha\nu}-g^{\mu\beta}V^{\alpha\nu})
-V^{\mu\alpha\nu\beta},
\label{VACP8}
\end{equation}
where
\begin{eqnarray}
W^{\mu\nu}&=&W^{\mu\alpha\nu\beta}g_{\alpha\beta},\nonumber\\
W&=&W^{\mu\nu}g_{\mu\nu},\nonumber\\
V^{\mu\nu}&=&W^{\mu\nu}-\frac{1}{4}Wg^{\mu\nu},\nonumber\\
V^{\mu\alpha\nu\beta}g_{\alpha\beta}&=&0.
\label{VACP9}
\end{eqnarray}
Here $V^{\mu\alpha\nu\beta}$ is a WLT constructed ultimately from ${\bar g}^{\mu\nu}$. Later we will examine
how different choices for ${\bar g}^{\mu\nu}$ influence the Petrov class of $V^{\mu\alpha\nu\beta}$. Of course
there is an $n$-dimensional version of this argument. However since we are simply calculating pole residues
at $n=4$ we will find, here and later, that the 4-dimensional calculation is sufficient.
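The decomposition in eq(\ref{VACP8}) and the trace conditions in eq(\ref{VACP9}) can be verified numerically. The sketch below uses an illustrative random ${\bar g}^{\mu\nu}$, solves eq(\ref{VACP8}) for $V^{\mu\alpha\nu\beta}$, and confirms that it is traceless and antisymmetric in its first index pair:

```python
import numpy as np

rng = np.random.default_rng(1)
g_up = np.diag([1.0, -1.0, -1.0, -1.0])        # g^{mu nu}
pert = 0.1 * rng.standard_normal((4, 4))
gbar_up = g_up + pert + pert.T                 # sample gbar^{mu nu} (illustrative)
g_dn = np.linalg.inv(g_up)                     # g_{mu nu}

def wedge(A, B):
    # A^{mu nu} B^{alpha beta} - A^{mu beta} B^{alpha nu}, index order (mu,alpha,nu,beta)
    return np.einsum('mn,ab->manb', A, B) - np.einsum('mb,an->manb', A, B)

W4 = wedge(gbar_up, gbar_up)                   # W^{mu alpha nu beta}
W2 = np.einsum('manb,ab->mn', W4, g_dn)        # W^{mu nu}
W = np.einsum('mn,mn->', W2, g_dn)             # W
V2 = W2 - 0.25 * W * g_up                      # traceless part V^{mu nu}

# V^{mu alpha nu beta} solved from the decomposition
V4 = (W / 12.0) * wedge(g_up, g_up) + 0.5 * (wedge(V2, g_up) + wedge(g_up, V2)) - W4

assert np.allclose(np.einsum('manb,ab->mn', V4, g_dn), 0.0, atol=1e-10)
assert np.allclose(V4, -np.swapaxes(V4, 0, 1))  # antisymmetry in (mu, alpha)
```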
From eq(\ref{GF4}) and eq(\ref{VACP3}) we see that
\begin{equation}
G_F^{\mu\nu}(q)=-M_0^{\mu\nu}(q)+\Sigma^{\mu\nu}(q),
\end{equation}
that is
\begin{equation}
G_F^{\mu\nu}(q)=-(g_0^{\mu\nu}g_0^{\alpha\beta}-g_0^{\mu\beta}g_0^{\alpha\nu}+\Lambda_0^{\mu\beta}\Lambda_0^{\alpha\nu}-C_0^{\mu\alpha\nu\beta}
-\frac{e^2}{6\pi^2}\frac{1}{n-4}W^{\mu\alpha\nu\beta})q_\alpha q_\beta.
\end{equation}
Using the expansions to $O(e^2)$ in eq(\ref{REN3}) to eq(\ref{REN5}) we see that we can remove
some of the UV poles at $n=4$ by choosing
\begin{equation}
e^2g^{(1)\mu\nu}=\frac{e^2}{12\pi^2}\frac{1}{n-4}V^{\mu\nu},
\label{VACP10}
\end{equation}
and
\begin{equation}
e^2C^{(1)\mu\alpha\nu\beta}=\frac{e^2}{6\pi^2}\frac{1}{n-4}V^{\mu\alpha\nu\beta}-\frac{e^2}{72\pi^2}\frac{W}{n-4}C^{\mu\alpha\nu\beta},
\label{VACP11}
\end{equation}
with the result
\begin{equation}
G_F^{\mu\nu}(q)=-\left\{\left(1-\frac{e^2}{72\pi^2}\frac{W}{n-4}\right)(g^{\mu\nu}g^{\alpha\beta}-g^{\mu\beta}g^{\alpha\nu}-C^{\mu\alpha\nu\beta})
+\Lambda_0^{\mu\beta}\Lambda_0^{\alpha\nu}\right\}q_\alpha q_\beta.
\end{equation}
This may be expressed in the form
\begin{equation}
G_F^{\mu\nu}(q)=-\left(1-\frac{e^2}{72\pi^2}\frac{W}{n-4}\right)\left\{(g^{\mu\nu}g^{\alpha\beta}-g^{\mu\beta}g^{\alpha\nu}-C^{\mu\alpha\nu\beta})
+\Lambda^{\mu\beta}\Lambda^{\alpha\nu}\right\}q_\alpha q_\beta,
\end{equation}
provided we arrange the expansion for the ghost mass-shell metric to satisfy
\begin{equation}
\Lambda_0^{\mu\beta}=\left(1-\frac{e^2}{144\pi^2}\frac{W}{n-4}\right)\Lambda^{\mu\beta}.
\end{equation}
The renormalised Green's function, $G_{RF}^{\mu\nu}$ is then obtained by means of the appropriate multiplicative photon
wavefunction renormalisation yielding
\begin{equation}
G_{RF}^{\mu\nu}(q)=-\left\{(g^{\mu\nu}g^{\alpha\beta}-g^{\mu\beta}g^{\alpha\nu})
+\Lambda^{\mu\beta}\Lambda^{\alpha\nu}-C^{\mu\alpha\nu\beta}\right\}q_\alpha q_\beta.
\end{equation}
The reason then that we introduced a distinct ghost mass-shell metric was to permit
this multiplicative renormalisation for the photon Green's function.
\section{\label{ELPROP} Electron Propagator}
The lowest contributions to the electron propagator are shown in Fig \ref{FIG2}.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\linewidth]{FIG2}
\caption{Contributions to the electron propagator to $O(e_0^2)$. }
\label{FIG2}
\end{figure}
To $O(e_0^2)$ we have
\begin{equation}
iS(p)=iS_F(p)+iS_F(p)i\Sigma(p)iS_F(p),
\end{equation}
where
\begin{equation}
i\Sigma(p)=(ie_0)^2\int\frac{d^nq}{(2\pi)^n}{\bar \gg}_0^\mu iS_F(p-q){\bar \gg}_0^\nu(-iM_{0\mu\nu}(q)).
\end{equation}
To second order we have
\begin{equation}
S^{-1}(p)=S_F^{-1}(p)+\Sigma(p).
\label{PROPREN1}
\end{equation}
Restricting the calculation to $O(e^2)$ we find
\begin{equation}
\Sigma(p)=(ie)^2\mu^{4-n}\int\frac{d^nq}{(2\pi)^n}{\bar \gg}^\mu
\frac{({\bar \gg}^\sigma(p_\sigma-q_\sigma)+m)}{{\bar g}^{\alpha\beta}(p_\alpha-q_\alpha)(p_\beta-q_\beta)-m^2+i{\varepsilon}}{\bar \gg}^\nu(-iM_{\mu\nu}(q)).
\end{equation}
The UV divergences in $\Sigma(p)$ are contained in the first two terms of the Taylor series
\begin{equation}
\Sigma(p)=\Sigma(0)+p_\mu\Sigma^\mu(0)+O(p^2),
\end{equation}
where
\begin{equation}
\Sigma^\mu(0)=\frac{\partial}{\partial p_\mu}\Sigma(p)|_{p=0}.
\end{equation}
We have then
\begin{equation}
\Sigma(0)=(ie)^2\mu^{4-n}m\int\frac{d^nq}{(2\pi)^n}\frac{{\bar g}^{\mu\nu}}{{\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2+i{\varepsilon}}(-iM_{\mu\nu}(q)),
\label{PROPREN1A}
\end{equation}
and
\begin{equation}
\Sigma^\tau(0)=-(ie)^2\mu^{4-n}\int\frac{d^nq}{(2\pi)^n}\frac{{\bar \gg}^\mu({\bar \gg}^\sigma q_\sigma-m){\bar \gg}^\tau ({\bar \gg}^\rho q_\rho-m){\bar \gg}^\nu}{({\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2)^2}(-iM_{\mu\nu}(q)).
\label{PROPREN1B}
\end{equation}
It follows that
\begin{equation}
\Sigma^\tau(0)=H^\tau_{~~\rho}{\bar \gg}^\rho,
\end{equation}
where
\begin{equation}
H^\tau_{~~\rho}=H^{(1)\tau}_{~~~~\rho}+H^{(2)\tau}_{~~~~\rho},
\label{PROPREN1C}
\end{equation}
with
\begin{equation}
H^{(1)\tau}_{~~~~~\rho}=(ie)^2\mu^{4-n}\int\frac{d^nq}{(2\pi)^n}\frac{{\delta}^\mu_\rho{\bar g}^{\tau\nu}+{\delta}^\nu_\rho{\bar g}^{\tau\mu}-{\delta}^\tau_\rho{\bar g}^{\mu\nu}}{{\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2+i{\varepsilon}}(-iM_{\mu\nu}(q)),
\label{PROPREN1D}
\end{equation}
and
\begin{equation}
H^{(2)\tau}_{~~~~~\rho}=-2(ie)^2\mu^{4-n}\int\frac{d^nq}{(2\pi)^n}\frac{{\bar g}^{\tau\beta}q_\beta q_\sigma({\delta}^\mu_\rho{\bar g}^{\sigma\nu}+{\delta}^\nu_\rho{\bar g}^{\sigma\mu}-{\delta}^\sigma_\rho{\bar g}^{\mu\nu})}{({\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2+i{\varepsilon})^2}(-iM_{\mu\nu}(q)).
\label{PROPREN1E}
\end{equation}
It is useful to split $H^\tau_{~~\rho}$ into a trace part and a traceless part,
\begin{equation}
H^\tau_{~~\rho}=\frac{1}{n}H{\delta}^\tau_\rho + h^\tau_{~~\rho},
\end{equation}
where
\begin{equation}
H=H^\tau_{~~\tau},
\end{equation}
and
\begin{equation}
h^\tau_{~~\tau}=0.
\end{equation}
The pole at $n=4$ in $H$ determines the field renormalisation of the electron propagator
while the pole in $h^\tau_{~~\rho}$ fixes the counter term in ${\bar e}^\mu_{0~a}$. We have then from
eq(\ref{PROPREN1})
\begin{equation}
S^{-1}(p)={\bar \gg}_0^\mu p_\mu-m_0+\Sigma(0)+\Sigma^\mu(0)p_\mu,
\end{equation}
where we retain only the pole contributions in $\Sigma(0)$ {\it etc}. Using eq(\ref{REN2}) and eq(\ref{REN4})
we have
\begin{equation}
S^{-1}(p)=({\bar e}^\mu_{~~a}+e^2{\bar e}^{(1)\mu}_{~~~~~a})\gamma^ap_\mu-m(1+e^2b^{(1)})
+\Sigma(0)+(h^\mu_{~~\rho}+\frac{1}{4}H{\delta}^\mu_\rho){\bar \gg}^\rho p_\mu.
\end{equation}
If we set
\begin{equation}
e^2{\bar e}^{(1)\mu}_{~~~~~a}{\bar e}^a_{~~\rho}=-h^\mu_{~~\rho};
\label{PROPREN2}
\end{equation}
and
\begin{equation}
me^2b^{(1)}=\frac{1}{4}mH+\Sigma(0)
\label{PROPREN3}
\end{equation}
then eq(\ref{PROPREN1}) becomes
\begin{equation}
S^{-1}(p)=(1+\frac{1}{4}H)({\bar \gg}^\mu p_\mu-m).
\label{PROPREN4}
\end{equation}
Finally we see that the field renormalisation for the electron is
\begin{equation}
Z_e=(1-\frac{1}{4}H).
\label{PROPREN5}
\end{equation}
Hence the renormalized inverse propagator $S^{-1}_R(p)=Z_eS^{-1}(p)$ is finite to $O(e^2)$.
It is useful to note that eq(\ref{PROPREN2}) implies
\begin{equation}
{\bar g}^{\mu\nu}_0={\bar g}^{\mu\nu}-h^\mu_{~~\rho}{\bar g}^{\rho\nu}-h^\nu_{~~\rho}{\bar g}^{\rho\mu}.
\label{PROPREN6}
\end{equation}
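Eq(\ref{PROPREN6}) follows from inserting the counterterm of eq(\ref{PROPREN2}) into ${\bar g}_0^{\mu\nu}=\eta^{ab}{\bar e}^\mu_{0~a}{\bar e}^\nu_{0~b}$ and dropping $O(h^2)$. A numerical spot-check, with an illustrative vierbein and a small random $h^\mu_{~~\rho}$ (the factor $e^2$ absorbed into $h$):

```python
import numpy as np

# With e^2 ebar^{(1)mu}_a ebar^a_rho = -h^mu_rho, the bare metric satisfies
# gbar_0^{mu nu} = gbar^{mu nu} - h^mu_rho gbar^{rho nu} - h^nu_rho gbar^{rho mu} + O(h^2).
rng = np.random.default_rng(2)
eta = np.diag([1.0, -1.0, -1.0, -1.0])
E = np.eye(4) + 0.1 * rng.standard_normal((4, 4))   # ebar^mu_a (illustrative)
gbar = E @ eta @ E.T                                # gbar^{mu nu}
h = 1e-4 * rng.standard_normal((4, 4))              # small h^mu_rho

E0 = E - h @ E                                      # ebar_0 = ebar + ebar^{(1)}, ebar^{(1)} = -h ebar
gbar0 = E0 @ eta @ E0.T
rhs = gbar - h @ gbar - (h @ gbar).T                # right-hand side of the relation
assert np.allclose(gbar0, rhs, atol=1e-6)           # agreement up to O(h^2)
```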
\section{\label{VTX} Vertex}
The complete vertex amplitude to $O(e^3)$ corresponds to the diagrams in Fig \ref{FIG3}.
It has the form $V^\tau(p,p')=ie_0{\bar \gg}_0^\tau+{\cal V}^\tau(p,p')$ where
\begin{figure}[t]
\centering
\includegraphics[width=0.6\linewidth]{FIG3}
\caption{Contributions to the electron-photon vertex to $O(e_0^3)$. }
\label{FIG3}
\end{figure}
\begin{equation}
{\cal V}^\tau(p,p')=(ie)^3\int\frac{d^nq}{(2\pi)^n}\frac{{\bar \gg}^\mu i({\bar \gg}^\sigma(p'_\sigma-q_\sigma)+m){\bar \gg}^\tau i({\bar \gg}^\rho(p_\rho-q_\rho)+m){\bar \gg}^\nu(-iM_{\mu\nu}(q))}{({\bar g}^{\alpha'\beta'}(p'_{\alpha'}-q_{\alpha'})(p'_{\beta'}-q_{\beta'})-m^2)({\bar g}^{\alpha\beta}(p_\alpha-q_\alpha)(p_\beta-q_\beta)-m^2)}.
\end{equation}
The divergence is contained in
\begin{equation}
{\cal V}^\tau(0,0)=-(ie)^3\int\frac{d^nq}{(2\pi)^n}{\bar \gg}^\mu\frac{({\bar \gg}^\sigma q_\sigma-m){\bar \gg}^\tau({\bar \gg}^\rho q_\rho-m)}{({\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2)^2}{\bar \gg}^\nu(-iM_{\mu\nu}(q)).
\end{equation}
It follows that
\begin{equation}
{\cal V}^\tau(0,0)=ie\Sigma^\tau(0).
\end{equation}
Hence the divergence of ${\cal V}^\tau(0,0)$ is the same as that of $\Sigma^\tau(0)$ up to a factor $ie$.
Using eq(\ref{REN1}) we have
\begin{equation}
V^\mu(0,0)=\mu^{(4-n)/2}ie(1+e^2a^{(1)})({\bar e}^\mu_{~~a}+e^2{\bar e}^{(1)\mu}_{~~~~~a})\gamma^a+ie\Sigma^\mu(0).
\end{equation}
It follows that if we use eq(\ref{PROPREN2}) we find to $O(e^3)$
\begin{equation}
V^\mu(0,0)=ie(1+\frac{1}{4}H+e^2a^{(1)}){\bar \gg}^\mu=ie(1+\frac{1}{4}H)(1+e^2a^{(1)}){\bar \gg}^\mu.
\end{equation}
The renormalized vertex $V_R^\mu(0,0)=Z_e\sqrt{Z_\gamma}V^\mu(0,0)$ is finite provided we set
\begin{equation}
1+e^2a^{(1)}=\frac{1}{\sqrt{Z_\gamma}},
\end{equation}
that is
\begin{equation}
e^2a^{(1)}=-\frac{e^2}{144\pi^2}\frac{W}{n-4},
\label{CHARGEREN1}
\end{equation}
or
\begin{equation}
e_0^2=\mu^{4-n}e^2\left(1-\frac{e^2}{72\pi^2}\frac{W}{n-4}\right).
\label{CHARGEREN2}
\end{equation}
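The step from eq(\ref{CHARGEREN1}) to eq(\ref{CHARGEREN2}) is just the square of the charge expansion truncated at $O(e^4)$; a symbolic check:

```python
import sympy as sp

# Squaring e_0 = mu^{(4-n)/2} e (1 + e^2 a1), with a1 = -W/(144 pi^2 (n-4)),
# must reproduce e_0^2 = mu^{4-n} e^2 (1 - e^2 W/(72 pi^2 (n-4))) up to O(e^6).
e, W, n = sp.symbols('e W n')
a1 = -W / (144 * sp.pi**2 * (n - 4))
lhs = sp.expand((e * (1 + e**2 * a1))**2)
rhs = sp.expand(e**2 * (1 - e**2 * W / (72 * sp.pi**2 * (n - 4))))
diff = sp.expand(lhs - rhs)
assert sp.simplify(diff - a1**2 * e**6) == 0   # difference is pure O(e^6)
```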
\section{\label{POLES} Pole Divergences at $n=4$}
The pole divergences at $n=4$ can be exhibited explicitly by making use of
the photon propagator representation in eq(\ref{INVREP}). For example
from eq(\ref{PROPREN1A}) we find
\begin{equation}
\Sigma(0)=-i(ie)^2\mu^{4-n}m{\bar g}^{\mu\nu}g_{\mu\rho}(e^{-i{\bf N}(-i\partial_z)})^\rho_{~~\nu}L(z),
\label{DIV1}
\end{equation}
where
\begin{equation}
L(z)=-i\int du\int\frac{d^nq}{(2\pi)^n}\frac{1}{{\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2+i{\varepsilon}}e^{iu(q^2+z.q/\sqrt{u}+i{\varepsilon})},
\end{equation}
and where finally we set $z=0$. We can now express $L(z)$ in the form
\begin{eqnarray}
L(z)&=&(-i)^2\int dudv\int\frac{d^nq}{(2\pi)^n}e^{iu(q^2+z.q/\sqrt{u})+iv({\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2+i{\varepsilon})}\nonumber\\
&=&(-i)^2\int_0^1dx\int_0^\infty d\lambda\ll\int\frac{d^nq}{(2\pi)^n}e^{i\lambda({\hat g}^{\alpha\beta}(x)q_\alpha q_\beta+xz^\alpha q_\alpha/\sqrt{u}-m^2(1-x)+i{\varepsilon})},
\end{eqnarray}
where we have set $u=\lambda x$ and $v=\lambda(1-x)$ and have introduced the
interpolated (inverse) metric ${\hat g}^{\alpha\beta}(x)=xg^{\alpha\beta}+(1-x){\bar g}^{\alpha\beta}$. This metric was introduced in reference \cite{ITD2}
where it was emphasised that it should remain non-singular for $0\le x\le 1$ as a condition of
acceptable causal structure. This was ensured by the requirement that lightcones associated
with $g_{\alpha\beta}$ and ${\bar g}_{\alpha\beta}$ overlap so that there exists a shared set of spacetime vectors
that are timelike in both metrics. The same point holds here. Making this assumption we can evaluate $L(z)$ as
\begin{equation}
L(z)=(-i)^2\int_0^1dx\int_0^\infty d\lambda\ll\int\frac{d^nq'}{(2\pi)^n}e^{i\lambda({\hat g}^{\alpha\beta}q'_\alpha q'_\beta+x^2{\hat g}_{\alpha\beta}z^\alpha z^\beta/(4u)-m^2(1-x)+i{\varepsilon})},
\end{equation}
where $q_\alpha=q'_\alpha-\frac{x}{2\sqrt{u}}{\hat g}_{\alpha\beta}(x)z^\beta$ and ${\hat g}_{\alpha\beta}(x)$ is the inverse of ${\hat g}^{\alpha\beta}(x)$.
The non-singularity condition on ${\hat g}^{\alpha\beta}(x)$ allows us to evaluate the gaussian integral
yielding
\begin{equation}
L(z)=(-i)^2\int_0^1dx\int_0^\infty d\lambda\ll\, i\left(\frac{\pi}{i\lambda}\right)^{n/2}\frac{1}{(2\pi)^n}\frac{1}{\sqrt{-\det{\hat g}^{\alpha\beta}(x)}}e^{ix{\hat g}_{\alpha\beta}(x)z^\alpha z^\beta/4}e^{-i\lambda m^2(1-x)}.
\end{equation}
Hence
\begin{eqnarray}
L(z)&=&\frac{i}{(4\pi)^{n/2}}\int_0^1dx\Gamma(2-\frac{n}{2})(m^2(1-x))^{n/2-2}\frac{1}{\sqrt{-\det{\hat g}^{\alpha\beta}(x)}}e^{ix{\hat g}_{\alpha\beta}(x)z^\alpha z^\beta/4}\nonumber\\
&=&-\frac{i}{8\pi^2}\frac{1}{n-4}\int_0^1dx\frac{1}{\sqrt{-\det{\hat g}^{\alpha\beta}(x)}}e^{ix{\hat g}_{\alpha\beta}(x)z^\alpha z^\beta/4}.
\label{POL1}
\end{eqnarray}
Finally we can reconstruct $\Sigma(0)$ using eq(\ref{DIV1}).
Similarly we see that
\begin{equation}
H^{(1)\tau}_{~~~~\rho}=ie^2\mu^{4-n}({\delta}^\mu_\rho {\bar g}^{\tau\nu}+{\delta}^\nu_\rho{\bar g}^{\tau\mu}-{\delta}^\tau_\rho{\bar g}^{\mu\nu})g_{\mu\lambda}(e^{-i{\bf N}(-i\partial_z)})^\lambda_{~~\nu}L(z).
\end{equation}
From a slightly more complex calculation we find
\begin{equation}
H^{(2)\tau}_{~~~~\rho}=-2ie^2\mu^{4-n}{\bar g}^{\tau\lambda}({\delta}^\mu_\rho{\bar g}^{\sigma\nu}+{\delta}^\nu_\rho{\bar g}^{\sigma\mu}-{\delta}^\sigma_\rho{\bar g}^{\mu\nu})g_{\mu\kappa}(e^{-i{\bf N}(-i\partial_z)})^\kappa_{~~\nu}L_{\lambda\sigma}(z),
\end{equation}
where
\begin{equation}
L_{\lambda\sigma}(z)=-i\int du\int\frac{d^nq}{(2\pi)^n}\frac{q_\lambda q_\sigma}{({\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2)^2}e^{iu(q^2+z.q/\sqrt{u}+i{\varepsilon})}.
\end{equation}
The pole at $n=4$ has the form
\begin{equation}
L_{\lambda\sigma}(z)=-\frac{1}{8\pi^2}\frac{1}{n-4}\int_0^1dx\frac{(1-x)}{\sqrt{-\det{\hat g}^{\alpha\beta}(x)}}(\frac{i}{2}{\hat g}_{\lambda\sigma}(x)+\frac{x}{4}z^\alpha z^\beta{\hat g}_{\lambda\alpha}(x){\hat g}_{\sigma\beta}(x))e^{ix{\hat g}_{\alpha\beta}(x)z^\alpha z^\beta/4}.
\label{POL2}
\end{equation}
\section{\label{WLTBM} Petrov Class of Bimetrically Generated WLTs}
It would be interesting to develop a general theory of how the bimetric structure of the theory
affects the nature of Lorentz symmetry breaking through the renormalisation process.
At present this seems rather difficult. For now we confine attention to some particular
examples involving the simpler Petrov classes.
We will adopt a minimal approach that assumes the presence of contributions to the WLT $C^{\mu\nu\sigma\tau}$
only of a type forced on us by the need to accommodate the divergences accompanying the WLT $V^{\mu\nu\sigma\tau}$
in section \ref{VACP}.
\subsection{\label{WLT_0} Class O}
In fact the simplest non-trivial case is class O, for which $V^{\mu\nu\sigma\tau}=0$. There are three
cases. The first two arise when there is a coordinate frame in which ${\bar g}^{\mu\nu}$
has the form
\begin{equation}
{\bar g}^{\mu\nu}=bg^{\mu\nu}\pm(a-b)k^\mu k^\nu.
\label{METRIC_O}
\end{equation}
We choose $+$ or $-$ according as $k^\mu$ is timelike $(g_{\mu\nu} k^\mu k^\nu>0)$ or spacelike $(g_{\mu\nu} k^\mu k^\nu<0)$.
In order to maintain $\det {\bar g}^{\mu\nu}=-1$ we impose $ab^3=1$.
There is a third case $k^\mu=l^\mu$ where the lightlike vector $l^\mu$ satisfies $g_{\mu\nu}l^\mu l^\nu=0$
and
\begin{equation}
{\bar g}^{\mu\nu}=g^{\mu\nu}+wl^\mu l^\nu.
\end{equation}
It is easy to verify that the WLT $V^{\mu\nu\sigma\tau}$ vanishes in all three cases. In our minimal approach we therefore
assume $C^{\mu\nu\sigma\tau}$ vanishes. In the timelike case the underlying reason, of course, is that we are maintaining
invariance under the rotation group that leaves $k^\mu$ invariant and a WLT cannot exhibit such an invariance unless it
is null. Similar remarks apply in the other cases. Under these circumstances although we still have Lorentz symmetry
breaking, the lightcone associated with photons being distinct from that associated with electrons, we do not have
birefringence for the photons.
\subsection{\label{WLT_N} Class N}
The next most simple case is class N. Such a Lorentz symmetry breaking situation may be induced
by using the NP tetrad to express the electron inverse metric in the form
\begin{equation}
{\bar g}^{\mu\nu}=g^{\mu\nu}+l^\mu({\bar c} m^\nu+c{\bar m}^\nu)+({\bar c} m^\mu+c{\bar m}^\mu)l^\nu+\alpha l^\mu l^\nu,
\label{METN1}
\end{equation}
where for the moment $\alpha$ is an arbitrary real parameter.
It is readily verified that in this case $\det {\bar g}^{\mu\nu}=-1$ and
\begin{equation}
V^{\mu\nu\sigma\tau}={\bar c}^2A^{\mu\nu}A^{\sigma\tau}+c^2{\bar A}^{\mu\nu}{\bar A}^{\sigma\tau}.
\label{METN2}
\end{equation}
These two cases will be examined in detail later. However a further example provides
additional insight into the effect of vacuum polarisation in the bimetric context.
\subsection{\label{WLT_D} Class D}
One simple way of constructing a new metric from the standard one is to consider a
shearing of space-time. Such a shearing is represented by the mapping $x^\mu\rightarrow x'^\mu=T^\mu_{~~\nu}x^\nu$,
where
\begin{equation}
T^\mu_{~~\nu}={\delta}^\mu_\nu+vf^\mu h_\nu,
\end{equation}
and
\begin{eqnarray}
h^2&=&1\nonumber\\
f^2&=&-1\nonumber\\
f.h&=&0.
\end{eqnarray}
We can then define a new metric to be of the form
\begin{equation}
{\bar g}^{\mu\nu}=T^\mu_{~~\sigma}T^\nu_{~~\tau}\eta^{\sigma\tau}.
\end{equation}
If now we calculate $W^{\mu\nu\sigma\tau}={\bar g}^{\mu\sigma}{\bar g}^{\nu\tau}-{\bar g}^{\mu\tau}{\bar g}^{\nu\sigma}$,
we obtain
\begin{eqnarray}
W^{\mu\nu\sigma\tau}&=&\eta^{\mu\sigma}\eta^{\nu\tau}-\eta^{\mu\tau}\eta^{\nu\sigma}\nonumber\\
&&+\eta^{\mu\sigma}(v(f^\nu h^\tau+f^\tau h^\nu)+v^2f^\nu f^\tau)\nonumber\\
&&+\eta^{\nu\tau}(v(f^\mu h^\sigma+f^\sigma h^\mu)+v^2f^\mu f^\sigma)\nonumber\\
&&-\eta^{\mu\tau}(v(f^\nu h^\sigma+f^\sigma h^\nu)+v^2f^\nu f^\sigma)\nonumber\\
&&-\eta^{\nu\sigma}(v(f^\mu h^\tau+f^\tau h^\mu)+v^2f^\mu f^\tau)\nonumber\\
&&-v^2(f^\mu h^\nu-f^\nu h^\mu)(f^\sigma h^\tau-f^\tau h^\sigma).
\end{eqnarray}
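This expansion can be spot-checked numerically; in the sketch below $h$ and $f$ are taken along the time axis and one space axis, an illustrative choice satisfying $h^2=1$, $f^2=-1$, $f.h=0$:

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])
h = np.array([1.0, 0.0, 0.0, 0.0])        # h.h = +1
f = np.array([0.0, 1.0, 0.0, 0.0])        # f.f = -1, f.h = 0
v = 0.4
T = np.eye(4) + v * np.outer(f, eta @ h)  # T^mu_nu = delta^mu_nu + v f^mu h_nu
gbar = T @ eta @ T.T                      # gbar^{mu nu} = T^mu_sig T^nu_tau eta^{sig tau}

# Direct definition W^{mu nu sig tau} = gbar^{mu sig} gbar^{nu tau} - gbar^{mu tau} gbar^{nu sig}
W_direct = (np.einsum('ms,nt->mnst', gbar, gbar)
            - np.einsum('mt,ns->mnst', gbar, gbar))

# The expansion in the text: eta-times-P terms, with P the deviation of gbar
# from eta, plus the quadratic bivector term.
P = v * (np.outer(f, h) + np.outer(h, f)) + v**2 * np.outer(f, f)
FH = np.outer(f, h) - np.outer(h, f)      # f^mu h^nu - f^nu h^mu
W_exp = (np.einsum('ms,nt->mnst', eta, eta) - np.einsum('mt,ns->mnst', eta, eta)
         + np.einsum('ms,nt->mnst', eta, P) + np.einsum('ms,nt->mnst', P, eta)
         - np.einsum('mt,ns->mnst', eta, P) - np.einsum('mt,ns->mnst', P, eta)
         - v**2 * np.einsum('mn,st->mnst', FH, FH))

assert np.allclose(gbar, eta + P)
assert np.allclose(W_direct, W_exp)
```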
When we extract $V^{\mu\nu\sigma\tau}$ we obtain
\begin{eqnarray}
V^{\mu\nu\sigma\tau}&=&-v^2\{\frac{1}{3}(\eta^{\mu\sigma}\eta^{\nu\tau}-\eta^{\mu\tau}\eta^{\nu\sigma})\nonumber\\
&&+\frac{1}{2}\eta^{\mu\sigma}(f^\nu f^\tau-h^\nu h^\tau)\nonumber\\
&&+\frac{1}{2}\eta^{\nu\tau}(f^\mu f^\sigma-h^\mu h^\sigma)\nonumber\\
&&-\frac{1}{2}\eta^{\mu\tau}(f^\nu f^\sigma-h^\nu h^\sigma)\nonumber\\
&&-\frac{1}{2}\eta^{\nu\sigma}(f^\mu f^\tau-h^\mu h^\tau)\nonumber\\
&&-(f^\mu h^\nu-f^\nu h^\mu)(f^\sigma h^\tau-f^\tau h^\sigma)\}.
\end{eqnarray}
Setting $f^\mu=(l^\mu+n^\mu)/\sqrt{2}$ and $h^\mu=(l^\mu-n^\mu)/\sqrt{2}$ we find
\begin{equation}
V^{\mu\nu\sigma\tau}=\frac{1}{6}v^2\{A^{\mu\nu}B^{\sigma\tau}+B^{\mu\nu}A^{\sigma\tau}+D^{\mu\nu}D^{\sigma\tau}\}+\mbox{c.c.}.
\end{equation}
In this case of sheared lightcones we find that the WLT is indeed of Petrov class D. However since the coefficient
is real it is not quite the most general case for class D.
The remaining classes, although obviously worth investigating, are considerably more elaborate.
For example the case for which ${\bar g}^{\mu\nu}$ is obtained from $g^{\mu\nu}$ by separate rescalings
in each of the timelike and spacelike directions leads to a WLT of Petrov class I. We postpone such
a completion of our program for a later discussion.
\section{\label{RENEX} Special Examples of Renormalized Bimetric Theory.}
We look in more detail at renormalisation in two special cases, Petrov classes O and N, which
are particularly tractable.
\subsection{\label{RENEX0} Bimetric Theory with Petrov Class O }
It is convenient to construct the three metric cases in class O by introducing a reference metric
which we choose to be the standard Lorentz metric $\eta^{\mu\nu}$. For the timelike case
we have
\begin{eqnarray}
g_0^{\mu\nu}&=&\beta_0\eta^{\mu\nu}+(\alpha_0-\beta_0) k^\mu k^\nu,\nonumber\\
g^{\mu\nu}&=&\beta\eta^{\mu\nu}+(\alpha-\beta) k^\mu k^\nu,\nonumber\\
{\bar g}_0^{\mu\nu}&=&{\bar \bb}_0\eta^{\mu\nu}+({\bar \aa}_0-{\bar \bb}_0) k^\mu k^\nu,\nonumber\\
{\bar g}^{\mu\nu}&=&{\bar \bb}\eta^{\mu\nu}+({\bar \aa}-{\bar \bb}) k^\mu k^\nu,
\end{eqnarray}
where $k^\mu=(1,0,0,0)$ and hence $\eta_{\mu\nu}k^\mu k^\nu=1$. We also find it convenient to set ${\bar \aa}=a\alpha$ and ${\bar \bb}=b\beta$.
Each of the above metrics has a determinant of $-1$. In particular we have $\alpha\beta^3={\bar \aa}{\bar \bb}^3=1$.
This implies that $ab^3=1$. The inverses of the above metrics are easily constructed by
the replacements $\alpha\rightarrow\alpha^{-1}$ and $\beta\rightarrow\beta^{-1}$ {\it etc}. We have also
\begin{equation}
{\bar g}^{\mu\nu}=bg^{\mu\nu}+\alpha(a-b)k^\mu k^\nu.
\end{equation}
We then find that
\begin{equation}
W^{\mu\nu\sigma\tau}=b^2(g^{\mu\sigma}g^{\nu\tau}-g^{\mu\tau}g^{\nu\sigma}) +\alpha(a-b)(g^{\mu\sigma}k^\nu k^\tau
+g^{\nu\tau}k^\mu k^\sigma-g^{\mu\tau}k^\nu k^\sigma-g^{\nu\sigma}k^\mu k^\tau)
\end{equation}
It follows that
\begin{equation}
V^{\mu\sigma}=\frac{1}{2}b(b-a)(g^{\mu\sigma}-4\alpha k^\mu k^\sigma),
\end{equation}
and
\begin{equation}
W=6b(a+b)=6(b^2+b^{-2}).
\end{equation}
It is easily confirmed that the WLT $V^{\mu\nu\sigma\tau}$ vanishes. The pole divergence in $C_0^{\mu\nu\sigma\tau}$
at $n=4$ therefore also vanishes and we are free to apply our minimal assumption that $C^{\mu\nu\sigma\tau}=0$.
In that case the representation of the photon propagator simplifies and we can deduce
\begin{equation}
\Sigma(0)=-i(ie)^2\mu^{4-n}m{\bar g}^{\mu\nu}g_{\mu\nu}L(0)=m\frac{e^2}{8\pi^2}\frac{1}{n-4}{\bar g}^{\mu\nu}g_{\mu\nu}\int_0^1dx\frac{1}{\sqrt{-\det{\hat g}^{\alpha\beta}(x)}},
\end{equation}
and
\begin{eqnarray}
H^{(1)\tau}_{~~~~\rho}&=&-i(ie)^2\mu^{4-n}(2g_{\mu\rho}{\bar g}^{\tau\mu}-{\delta}^\tau_\rho {\bar g}^{\mu\nu}g_{\mu\nu})L(0)\nonumber\\
&=&\frac{e^2}{8\pi^2}(2g_{\mu\rho}{\bar g}^{\tau\mu}-{\delta}^\tau_\rho {\bar g}^{\mu\nu}g_{\mu\nu})\frac{1}{n-4}\int_0^1dx\frac{1}{\sqrt{-\det{\hat g}^{\alpha\beta}(x)}}.
\end{eqnarray}
From eq(\ref{POL2}) we have
\begin{eqnarray}
H^{(2)\tau}_{~~~~\rho}&=&2i(ie)^2\mu^{4-n}(2g_{\rho\nu}{\bar g}^{\sigma\nu}-{\delta}^\sigma_\rho g_{\mu\nu}{\bar g}^{\mu\nu}){\bar g}^{\tau\beta}L_{\beta\sigma}(0)\nonumber\\
&=&-\frac{e^2}{8\pi^2}\frac{1}{n-4}(2g_{\rho\nu}{\bar g}^{\sigma\nu}-{\delta}^\sigma_\rho {\bar g}^{\mu\nu}g_{\mu\nu}){\bar g}^{\tau\beta}\int_0^1dx\frac{1-x}{\sqrt{-\det{\hat g}^{\alpha\beta}(x)}}{\hat g}_{\beta\sigma}(x).
\end{eqnarray}
Using the explicit form for $g^{\mu\nu}$ and $k^\mu$ indicated in subsection \ref{WLT_0} we have
\begin{equation}
{\bar g}^{\alpha\beta}=\left(
\begin{array}{cccc}
a\alpha&0&0&0\\
0&-b\beta&0&0\\
0&0&-b\beta&0\\
0&0&0&-b\beta
\end{array}
\right)
\end{equation}
We have then
\begin{equation}
{\hat g}^{\alpha\beta}(x)=\left(
\begin{array}{cccc}
\alpha(a+(1-a)x)&0&0&0\\
0&-\beta(b+(1-b)x)&0&0\\
0&0&-\beta(b+(1-b)x)&0\\
0&0&0&-\beta(b+(1-b)x)
\end{array}
\right)
\end{equation}
The inverse matrix ${\hat g}_{\alpha\beta}(x)$ is obvious and
\begin{equation}
-\det {\hat g}^{\alpha\beta}(x)=(a+(1-a)x)(b+(1-b)x)^3.
\end{equation}
We require the integrals
\begin{equation}
J_0=\int_0^1dx\frac{1}{(a+(1-a)x)^{1/2}(b+(1-b)x)^{3/2}}=\frac{2}{\sqrt{b}(\sqrt{a}+\sqrt{b})}.
\end{equation}
\begin{equation}
J_1=\int_0^1dx\frac{1-x}{(a+(1-a)x)^{3/2}(b+(1-b)x)^{3/2}}=\frac{2}{\sqrt{ab}(\sqrt{a}+\sqrt{b})^2}.
\end{equation}
\begin{equation}
J_2=\int_0^1dx\frac{1-x}{(a+(1-a)x)^{1/2}(b+(1-b)x)^{5/2}}=\frac{2}{3}\frac{(\sqrt{a}+2\sqrt{b})}{b^{3/2}(\sqrt{a}+\sqrt{b})^2}.
\end{equation}
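These closed forms are easily verified numerically. The following Python sketch (illustrative only; the values of $a$ and $b$ are arbitrary positive numbers, and \texttt{numpy} is assumed) evaluates the three integrals by Simpson quadrature and also checks the identity $a(a-3b)J_1-3b(a+b)J_2=-4$, which fixes the trace of $H^{(2)}$ quoted below.

```python
import numpy as np

# Numerical check of the closed forms for J0, J1, J2; the identities
# hold for all a, b > 0, so the parameter values below are arbitrary.
a, b = 2.0, 1.3
x = np.linspace(0.0, 1.0, 200001)

def simpson(y):
    # composite Simpson rule on the uniform, odd-length grid x
    h = x[1] - x[0]
    return h / 3 * (y[0] + y[-1] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum())

A = a + (1 - a) * x
B = b + (1 - b) * x
J0 = simpson(A**-0.5 * B**-1.5)
J1 = simpson((1 - x) * A**-1.5 * B**-1.5)
J2 = simpson((1 - x) * A**-0.5 * B**-2.5)

sa, sb = np.sqrt(a), np.sqrt(b)
assert abs(J0 - 2 / (sb * (sa + sb))) < 1e-10
assert abs(J1 - 2 / (np.sqrt(a * b) * (sa + sb)**2)) < 1e-10
assert abs(J2 - (2 / 3) * (sa + 2 * sb) / (b**1.5 * (sa + sb)**2)) < 1e-10

# consistency of the trace of H^(2): a(a-3b)J1 - 3b(a+b)J2 = -4 identically
assert abs(a * (a - 3 * b) * J1 - 3 * b * (a + b) * J2 + 4.0) < 1e-9
```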
The poles at $n=4$ are then
\begin{equation}
\Sigma(0)=m\frac{e^2}{8\pi^2}\frac{2(a+3b)}{\sqrt{b}(\sqrt{a}+\sqrt{b})}\frac{1}{n-4}.
\end{equation}
\begin{equation}
H^{(1)\tau}_{~~~~\rho}=\frac{e^2}{8\pi^2}
\left(
\begin{array}{cccc}
(a-3b)J_0&0&0&0\\
0&-(a+b)J_0&0&0\\
0&0&-(a+b)J_0&0\\
0&0&0&-(a+b)J_0
\end{array}
\right)\frac{1}{n-4}.
\end{equation}
\begin{equation}
H^{(2)\tau}_{~~~~\rho}=-\frac{e^2}{8\pi^2}\left(
\begin{array}{cccc}
a(a-3b)J_1&0&0&0\\
0&-b(a+b)J_2&0&0\\
0&0&-b(a+b)J_2&0\\
0&0&0&-b(a+b)J_2
\end{array}
\right)\frac{1}{n-4}.
\end{equation}
\begin{equation}
H^{(1)}=H^{(1)\tau}_{~~~~\tau}=-\frac{e^2}{8\pi^2}2(a+3b)J_0\frac{1}{n-4}.
\end{equation}
The traceless part is
\begin{equation}
h^{(1)\tau}_{~~~~\rho}=H^{(1)\tau}_{~~~~\rho}-\frac{1}{4}H^{(1)}{\delta}^\tau_\rho
=\frac{e^2}{8\pi^2}\frac{3}{2}(a-b)J_0T^\tau_{~~\rho}\frac{1}{n-4},
\end{equation}
where $T^\tau_{~~\rho}$ is the diagonal traceless matrix with diagonal
entries $(1,-1/3,-1/3,-1/3)$. We have also
\begin{equation}
H^{(2)}=H^{(2)\tau}_{~~~~\tau}=\frac{e^2}{8\pi^2}\frac{4}{n-4}.
\end{equation}
The traceless part is
\begin{equation}
h^{(2)\tau}_{~~~~\rho}
=-\frac{e^2}{8\pi^2}\frac{2a^{3/2}+a\sqrt{b}-4\sqrt{a}b+b^{3/2}}{\sqrt{b}(\sqrt{a}+\sqrt{b})^2}T^\tau_{~~\rho}\frac{1}{n-4}.
\end{equation}
On combining these results and setting $a=b^{-3}$ we find
\begin{equation}
H=H^{(1)}+H^{(2)}=-\frac{e^2}{8\pi^2}\frac{4(2b^4-b^2+1)}{b^2(1+b^2)}\frac{1}{n-4},
\end{equation}
and
\begin{equation}
h^\tau_{~~\rho}=h^{(1)\tau}_{~~~~\rho}+h^{(2)\tau}_{~~~~\rho}=\frac{e^2}{8\pi^2}f(b)T^\tau_{~~\rho}\frac{1}{n-4},
\end{equation}
where
\begin{equation}
f(b)=\frac{(1-b^2)(1+3b^2+4b^4)}{b^2(1+b^2)^2}.
\end{equation}
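As a cross-check, the coefficient of $T^\tau_{~~\rho}$ in $h^{(1)}+h^{(2)}$ can be evaluated numerically and compared with $f(b)$. The Python fragment below (illustrative; the value of $b$ is arbitrary) confirms the identity once the unimodularity constraint $a=b^{-3}$ is imposed.

```python
import numpy as np

b = 1.7                 # any b > 0 will do
a = b**-3               # unimodularity alpha*beta^3 = 1 gives a = b^-3
sa, sb = np.sqrt(a), np.sqrt(b)

J0 = 2 / (sb * (sa + sb))
h1 = 1.5 * (a - b) * J0                                   # coefficient of T in h^(1)
h2 = -(2 * a**1.5 + a * sb - 4 * sa * b + b**1.5) \
     / (sb * (sa + sb)**2)                                # coefficient of T in h^(2)

f = (1 - b**2) * (1 + 3 * b**2 + 4 * b**4) / (b**2 * (1 + b**2)**2)
assert abs((h1 + h2) - f) < 1e-12
```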
Note that $h^\tau_{~~\rho}$ vanishes when $b=1$ as it should since this value corresponds to
the restoration of Lorentz symmetry.
\subsubsection{\label{REGP_0}Renormalisation Group for Bimetric Theory - Petrov Class O}
With the above information we can calculate the renormalisation counterterms for the bare parameters of the theory.
In the present model $W=6(b^2+b^{-2})$ with the result that
\begin{equation}
e_0^2=\mu^{4-n}e^2\left(1-\frac{e^2}{12\pi^2}\frac{(b^2+b^{-2})}{n-4}\right).
\label{RENEX1}
\end{equation}
Assuming that the renormalisation process works beyond our second order calculation
we can explore the implication of the renormalisation group for this model.
Setting $t=\log(\mu/\mu_S)$ where $\mu_S$ is a standard scale for which the corresponding renormalised
charge, $e_S$ is small,
we can use the lack of dependence of the bare parameter $e_0$ on $\mu$ to deduce that
\begin{equation}
\frac{d}{dt}e_0^2=0.
\end{equation}
It follows then from eq(\ref{RENEX1}) to $O(e^2)$ that
\begin{equation}
\frac{d}{dt}(e^2)=e^2\left(-(4-n)+\frac{e^2}{12\pi^2}(b^2+b^{-2})\right)
\label{REGP_01}
\end{equation}
The bare metric $g^{\mu\nu}_0$ is a diagonal matrix with entries $(\alpha_0,\beta_0,\beta_0,\beta_0)$.
We can infer to $O(e^2)$ from eq(\ref{PROPREN6}) that
\begin{equation}
e^2g^{(1)\mu\nu}=\frac{e^2}{24\pi^2}b(b-a)\frac{1}{n-4}(\beta\eta^{\mu\nu}-(\beta+3\alpha)k^\mu k^\nu)
\label{RENEX2}
\end{equation}
We find then
\begin{eqnarray}
\alpha_0&=&\alpha\left(1-3\frac{e^2}{24\pi^2}b(b-a)\frac{1}{n-4}\right),\nonumber\\
\beta_0&=&\beta\left(1+\frac{e^2}{24\pi^2}b(b-a)\frac{1}{n-4}\right).
\end{eqnarray}
Note that these results are of course consistent (to $O(e^2)$) with the relation $\alpha_0\beta_0^3=1$.
Again, since the bare parameter $\beta_0$ is independent of $\mu$, we can conclude that
\begin{equation}
\frac{d\beta}{dt}=-\beta\frac{e^2}{24\pi^2}b(b-a)=-\beta\frac{e^2}{24\pi^2}(b^2-b^{-2}).
\end{equation}
From eq(\ref{PROPREN6}) we can deduce that
\begin{eqnarray}
{\bar \aa}_0&=&{\bar \aa}\left(1-\frac{e^2}{4\pi^2}f(b)\frac{1}{n-4}\right),\nonumber\\
{\bar \bb}_0&=&{\bar \bb}\left(1+\frac{e^2}{12\pi^2}f(b)\frac{1}{n-4}\right).
\end{eqnarray}
This is consistent with ${\bar \aa}_0{\bar \bb}_0^3=1$, and we have
\begin{equation}
\frac{d{\bar \bb}}{dt}=-{\bar \bb}\frac{e^2}{12\pi^2}f(b).
\end{equation}
Recalling ${\bar \bb}=b\beta$ we have
\begin{equation}
\frac{{\bar \bb}_0}{\beta_0}=b\left(1+\frac{e^2}{24\pi^2}F(b)\frac{1}{n-4}\right),
\end{equation}
where
\begin{equation}
F(b)=2f(b)-(b^2-b^{-2}).
\end{equation}
We obtain the result
\begin{equation}
\frac{db}{dt}=-b\frac{e^2}{24\pi^2}F(b).
\end{equation}
From eq(\ref{PROPREN4}) we see that
\begin{equation}
m_0=m\left(1+\frac{e^2}{8\pi^2}\frac{4b^4+b^2+1}{b^2(1+b^2)}\frac{1}{n-4}\right).
\end{equation}
The renormalisation group equation is
\begin{equation}
\frac{dm}{dt}=-\frac{e^2}{8\pi^2}\frac{4b^4+b^2+1}{b^2(1+b^2)}m.
\label{REGP_02}
\end{equation}
These RG equations have a particularly significant fixed point at $e^2=0$ and $b=1$ which corresponds
to the Lorentz invariant case at zero coupling. For small departures from Lorentz invariance, $b=1+y$
where $y$ is small, we find on expanding in powers of $y$ and retaining only linear terms
\begin{equation}
\frac{d(e^2)}{dt}=\frac{e^4}{6\pi^2},
\label{REGP_04}
\end{equation}
and
\begin{equation}
\frac{dy}{dt}=3\frac{e^2}{6\pi^2}y.
\label{REGP_05}
\end{equation}
The solution for the RG trajectory in the neighbourhood of the fixed point is
\begin{equation}
\frac{e^2}{e_S^2}=\left(1-\frac{e_S^2}{6\pi^2}t\right)^{-1},
\end{equation}
and
\begin{equation}
\frac{y}{y_S}=\left(1-\frac{e_S^2}{6\pi^2}t\right)^{-3},
\end{equation}
where $e_S$ and $y_S$ are the assigned values of $e$ and $y$ at $t=0$ or $\mu=\mu_S$.
This shows that the fixed point is IR attractive and that in its neighbourhood we have
\begin{equation}
\frac{y}{y_S}=\left(\frac{e^2}{e_S^2}\right)^3.
\end{equation}
It follows that in the IR limit the theory exhibits the same
behaviour as that implied by the analysis in references \cite{NLSN1,KOST4}. In the same approximation
we find from eq(\ref{REGP_02})
\begin{equation}
\frac{dm}{dt}=-\frac{3e^2}{8\pi^2}m,
\end{equation}
and therefore we have the result, identical with that for the Lorentz case,
\begin{equation}
m=m_S\left(1-\frac{e_S^2}{6\pi^2}t\right)^{9/4},
\end{equation}
where $m_S$ is the value of the mass parameter when $\mu=\mu_S$.
This gives a description of the behaviour of the effective parameters in the neighbourhood of the
point $e^2=0$, $b=1$.
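The closed-form trajectories near the fixed point can be checked against a direct numerical integration of the linearized flow equations. The sketch below (illustrative starting values, a simple RK4 stepper, \texttt{numpy} assumed) flows towards the infrared and compares with the quoted solutions for $e^2$, $y$ and $m$.

```python
import numpy as np

e2_S, y_S, m_S = 0.3, 0.05, 1.0    # illustrative values at the standard scale t = 0

def rhs(s):
    e2, y, m = s
    return np.array([e2**2 / (6 * np.pi**2),        # d(e^2)/dt
                     3 * e2 * y / (6 * np.pi**2),   # dy/dt, with b = 1 + y
                     -3 * e2 * m / (8 * np.pi**2)]) # dm/dt

state, t, dt = np.array([e2_S, y_S, m_S]), 0.0, -0.01
for _ in range(20000):                              # flow towards the infrared
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    state += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

e2, y, m = state
u = 1 - e2_S * t / (6 * np.pi**2)
assert abs(e2 - e2_S / u) < 1e-7      # e^2 = e_S^2 (1 - e_S^2 t / 6 pi^2)^(-1)
assert abs(y - y_S * u**-3) < 1e-7    # equivalently y/y_S = (e^2/e_S^2)^3
assert abs(m - m_S * u**2.25) < 1e-6  # m = m_S (1 - e_S^2 t / 6 pi^2)^(9/4)
```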
However in our approach we can compute the complete RG trajectory without constraint on $b$,
provided of course that $e^2$ does not become too large. The results are illustrated in Fig.\ref{FIG4}.
An important observation is that no matter how small $e^2$ or how large $b$ the RG trajectory
never approaches the axis $e^2=0$ except at the fixed point discussed above.
The axis $e^2=0$ is of course a line of fixed points corresponding to
a theory with no coupling between electrons and photons. Such a non-interacting theory can
maintain any Lorentz symmetry breaking imposed on it. The conclusion is then that no matter how weak
the electron-photon coupling or how large the Lorentz symmetry breaking at higher energies
the theory will at least in the massless case exhibit Lorentz symmetry in the IR limit as proposed in
reference \cite{NLSN1}.
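This qualitative picture is easy to reproduce numerically. The sketch below integrates the full (unlinearized) class O flow for $e^2$ and $b$ towards the infrared, starting from a large Lorentz breaking, and confirms that the trajectory is driven back towards $b=1$ with a decreasing coupling; the starting values are arbitrary.

```python
import numpy as np

def f(b):
    return (1 - b**2) * (1 + 3 * b**2 + 4 * b**4) / (b**2 * (1 + b**2)**2)

def F(b):
    return 2 * f(b) - (b**2 - b**-2)

def rhs(s):
    # d(e^2)/dt = e^4 (b^2 + b^-2)/12 pi^2,  db/dt = -b e^2 F(b)/24 pi^2
    e2, b = s
    return np.array([e2**2 * (b**2 + b**-2) / (12 * np.pi**2),
                     -b * e2 * F(b) / (24 * np.pi**2)])

e2_S, b_S = 0.5, 2.0                 # strong Lorentz breaking at the standard scale
state, dt = np.array([e2_S, b_S]), -0.05
for _ in range(40000):               # flow towards the infrared, down to t = -2000
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    state += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

e2, b = state
assert e2 < e2_S                     # the coupling decreases towards the IR
assert 1.0 < b < b_S                 # b is driven monotonically back towards 1
```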
\begin{figure}[t]
\centering
\includegraphics[width=0.6\linewidth]{FIG4}
\caption{Renormalisation group trajectories for Petrov class O: the rotationally invariant case.}
\label{FIG4}
\end{figure}
The case in which the vector $k^\mu$ is spacelike results in an essentially identical analysis
which need not be repeated explicitly. The case in which $k^\mu$ is replaced by a lightlike vector is
different in detail but yields essentially similar results, in particular that Lorentz symmetry is
restored in the IR limit. It can be viewed as a special case of the model
discussed in the next section and we do not treat it separately.
\subsection{\label{RENEXN} Bimetric Theory with Petrov Class N }
The next most tractable example is provided by the choice that ${\bar g}^{\mu\nu}$ has the structure
exhibited in eq(\ref{METN1}). There is no loss of generality in choosing the parameter $c$ to be real.
In order to parametrise the various metrics we introduce in this case a reference metric which
we are free to choose to be $\eta^{\mu\nu}$
and an associated NP tetrad $l,n,m,{\bar m}$ with the properties $l^\mu=\eta^{\mu\nu}l_\nu$, $l^2=n^2=m^2={\bar m}^2=0$,
$l.n=-m.{\bar m}=1$, and $l.m=l.{\bar m}=n.m=n.{\bar m}=0$. We now construct the parametrised metrics in the form
\begin{eqnarray}
g_0^{\mu\nu}&=&\eta^{\mu\nu}+s_0P^{\mu\nu}+r_0l^\mu l^\nu,\nonumber\\
g^{\mu\nu}&=&\eta^{\mu\nu}+sP^{\mu\nu}+rl^\mu l^\nu,\nonumber\\
{\bar g}_0^{\mu\nu}&=&\eta^{\mu\nu}+u_0P^{\mu\nu}+v_0l^\mu l^\nu,\nonumber\\
{\bar g}^{\mu\nu}&=&\eta^{\mu\nu}+uP^{\mu\nu}+vl^\mu l^\nu,
\end{eqnarray}
where
\begin{equation}
P^{\mu\nu}=l^\mu(m^\nu+{\bar m}^\nu)+(m^\mu+{\bar m}^\mu)l^\nu.
\end{equation}
Of course the parameters $s,r,u,v$ are the renormalised versions of $s_0,r_0,u_0,v_0$
so that $s_0=s+e^2s^{(1)}$ to $O(e^2)$ where $s^{(1)}$ has a pole at $n=4$ and similarly for the other parameters.
We also require an NP tetrad $l(s),m(s),n(s),{\bar m}(s)$ associated with the metric $g^{\mu\nu}$.
We achieve this by setting $l_\mu(s)=l_\mu$, $m_\mu(s)=m_\mu$. By imposing the relations
$l^\mu(s)=g^{\mu\nu}l_\nu(s)$ etc, we find
\begin{eqnarray}
l^\mu(s)&=&l^\mu\nonumber,\\
m^\mu(s)&=&m^\mu-sl^\mu\nonumber,\\
{\bar m}^\mu(s)&=&{\bar m}^\mu-sl^\mu.
\end{eqnarray}
It is easily checked that $l^\mu(s)l_\mu(s)=m^\mu(s)m_\mu(s)={\bar m}^\mu(s){\bar m}_\mu(s)=0$
and $l^\mu(s)m_\mu(s)=l^\mu(s){\bar m}_\mu(s)=0$. In addition $m^\mu(s){\bar m}_\mu(s)=-1$.
Although we will not make use of it we give for completeness the form of the remaining element
of the tetrad thus $n_\mu(s)=n_\mu-(s^2+s+r/2)l_\mu-s(m_\mu+{\bar m}_\mu)$ and
$n^\mu(s)=g^{\mu\nu}n_\nu(s)=n^\mu+(s^2-s+r/2)l^\mu$.
It is also easily shown that
\begin{equation}
g_{\mu\nu}=\eta_{\mu\nu}-sP_{\mu\nu}-(2s^2+r)l_\mu l_\nu.
\end{equation}
The relation between $g^{\mu\nu}$ and ${\bar g}^{\mu\nu}$ is
\begin{equation}
{\bar g}^{\mu\nu}-g^{\mu\nu}=cP^{\mu\nu}+wl^\mu l^\nu,
\end{equation}
where $c=u-s$ and $w=v-r$. If we define $P^{\mu\nu}(s)=l^\mu(m^\nu(s)+{\bar m}^\nu(s))+(m^\mu(s)+{\bar m}^\mu(s))l^\nu$
then we have
\begin{equation}
P^{\mu\nu}(s)=P^{\mu\nu}-4sl^\mu l^\nu.
\end{equation}
Hence
\begin{equation}
{\bar g}^{\mu\nu}=g^{\mu\nu}+cP^{\mu\nu}(s)+(w+4sc)l^\mu l^\nu.
\label{METN3}
\end{equation}
This is of the same form as eq(\ref{METN1}) with the parameter $\alpha=w+4sc$. The NP tetrad
is of course that appropriate to $g^{\mu\nu}$ as constructed here. The result after some algebra is that
\begin{equation}
V^{\mu\nu\sigma\tau}=c^2(A^{\mu\nu}A^{\sigma\tau}+{\bar A}^{\mu\nu}{\bar A}^{\sigma\tau}),
\end{equation}
with $c$ real. Note that strictly speaking we should have used
$A^{\mu\nu}(s)=l^\mu(m^\nu(s)+{\bar m}^\nu(s))-l^\nu(m^\mu(s)+{\bar m}^\mu(s))$ but it is
obvious that $A^{\mu\nu}(s)=A^{\mu\nu}$. We also have $W=12$ and
\begin{equation}
V^{\mu\sigma}=2cP^{\mu\sigma}(s)+2(w+2sc+c^2)l^\mu l^\sigma.
\end{equation}
This may also be expressed in the form
\begin{equation}
V^{\mu\sigma}=2cP^{\mu\sigma}+2(w-2sc+c^2)l^\mu l^\sigma.
\end{equation}
From eqs(\ref{VACP10}) and eq(\ref{CHARGEREN2}) we find here that
\begin{eqnarray}
e_0^2&=&\mu^{4-n}e^2\left(1-\frac{e^2}{6\pi^2}\frac{1}{n-4}\right),\nonumber\\
s_0&=&s+\frac{e^2}{6\pi^2}\frac{c}{n-4},\nonumber\\
r_0&=&r+\frac{e^2}{6\pi^2}\frac{(w+c^2)}{n-4}.
\label{METN4}
\end{eqnarray}
Again we use the minimalist approach which allows us to write
\begin{eqnarray}
C_0^{\mu\nu\sigma\tau}&=&\kappa_0(A^{\mu\nu}A^{\sigma\tau}+{\bar A}^{\mu\nu}{\bar A}^{\sigma\tau}),\nonumber\\
C^{\mu\nu\sigma\tau}&=&\kappa(A^{\mu\nu}A^{\sigma\tau}+{\bar A}^{\mu\nu}{\bar A}^{\sigma\tau}).
\end{eqnarray}
From eq(\ref{VACP11}) we find
\begin{equation}
\kappa_0=\kappa\left(1-\frac{e^2}{6\pi^2}\frac{1}{n-4}\right)+\frac{e^2}{6\pi^2}\frac{c^2}{n-4}.
\label{METN5}
\end{equation}
In order to discuss the renormalisation of the electron parameters it is necessary
to consider the lowest order photon propagator. Although the representation for the photon propagator
in eq(\ref{INVREP}) is useful for exhibiting the pole divergences at $n=4$ in a general context,
it is implicitly a power series in the WLT associated with birefringence and the breakdown of Lorentz
invariance. In the present case of Petrov class N, it is possible and more convenient to obtain a
complete expression for the photon propagator that can be used in perturbation theory
calculations. We will choose the gauge so that $\Lambda^{\mu\nu}=g^{\mu\nu}$.
In lowest order in $e^2$ the inverse photon propagator is given by
\begin{equation}
M^{\mu\sigma}(q)={\cal M}^{\mu\sigma}(q)=q^2g^{\mu\sigma}-C^{\mu\nu\sigma\tau} q_\nu q_\tau.
\end{equation}
That is, for Petrov class N,
\begin{equation}
M^{\mu\sigma}(q)=q^2g^{\mu\sigma}-\kappa(P^\mu P^\sigma+{\bar P}^\mu{\bar P}^\sigma),
\end{equation}
where
\begin{equation}
P^\mu=A^{\mu\nu}q_\nu=l^\mu m^\nu(s)q_\nu-m^\mu(s)l^\nu q_\nu=l^\mu m^\nu q_\nu-m^\mu l^\nu q_\nu,
\end{equation}
and
\begin{equation}
{\bar P}^\mu={\bar A}^{\mu\nu}q_\nu=l^\mu {\bar m}^\nu(s) q_\nu-{\bar m}^\mu(s)l^\nu q_\nu=l^\mu {\bar m}^\nu q_\nu-{\bar m}^\mu l^\nu q_\nu.
\end{equation}
If we set $P_\mu=g_{\mu\nu}P^\nu$ and ${\bar P}_\mu= g_{\mu\nu}{\bar P}^\nu$ then
\begin{equation}
P^2=P^\mu P_\mu={\bar P}^2={\bar P}^\mu{\bar P}_\mu=0,
\end{equation}
and
\begin{equation}
P.{\bar P}=P^\mu{\bar P}_\mu=-(l.q)^2=-(l^\mu q_\mu)^2.
\end{equation}
It is then easily verified that the inverse of $M^{\mu\nu}(q)$ is
\begin{equation}
M_{\mu\nu}(q)=\frac{1}{q^2}g_{\mu\nu}+\kappa\frac{(P_\mu P_\nu+{\bar P}_\mu{\bar P}_\nu)}{(q^2-\kappa(l.q)^2)(q^2+\kappa(l.q)^2)}
-\kappa^2\frac{(l.q)^2(P_\mu{\bar P}_\nu+{\bar P}_\mu P_\nu)}{q^2(q^2-\kappa(l.q)^2)(q^2+\kappa(l.q)^2)}.
\end{equation}
This may be rewritten as
\begin{equation}
M_{\mu\nu}(q)=\frac{1}{q^2}g_{\mu\nu}
+\kappa\frac{(P_\mu P_\nu+{\bar P}_\mu{\bar P}_\nu)}{(g^{(-)\alpha\beta}q_\alpha q_\beta)(g^{(+)\alpha\beta}q_\alpha q_\beta)}
-\kappa^2\frac{(l.q)^2(P_\mu{\bar P}_\nu+{\bar P}_\mu P_\nu)}{q^2(g^{(-)\alpha\beta}q_\alpha q_\beta)(g^{(+)\alpha\beta}q_\alpha q_\beta)},
\end{equation}
where
\begin{equation}
g^{(\pm)\alpha\beta}=g^{\alpha\beta}\pm\kappa l^\alpha l^\beta.
\end{equation}
In discussing the divergence structure of the electron propagator we find from eq(\ref{PROPREN1A}) that
\begin{equation}
\Sigma(0)=\Sigma^{(1)}+\Sigma^{(2)}+\Sigma^{(3)},
\end{equation}
where
\begin{equation}
\Sigma^{(1)}=ie^2m\mu^{4-n}\int\frac{d^nq}{(2\pi)^n}\frac{{\bar g}^{\mu\nu}g_{\mu\nu}}
{q^2({\bar g}^{\alpha\beta} q_\alpha q_\beta-m^2)},
\end{equation}
\begin{equation}
\Sigma^{(2)}=ie^2m\mu^{4-n}\int\frac{d^nq}{(2\pi)^n}\frac{{\bar g}^{\mu\nu}(P_\mu P_\nu+{\bar P}_\mu{\bar P}_\nu)}
{(g^{(+)\alpha\beta} q_\alpha q_\beta)(g^{(-)\alpha\beta} q_\alpha q_\beta)({\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2)},
\end{equation}
\begin{equation}
\Sigma^{(3)}=-ie^2m\mu^{4-n}\int\frac{d^nq}{(2\pi)^n}\frac{(l.q)^2{\bar g}^{\mu\nu}(P_\mu{\bar P}_\nu+{\bar P}_\mu P_\nu)}
{q^2(g^{(+)\alpha\beta} q_\alpha q_\beta)(g^{(-)\alpha\beta}q_\alpha q_\beta)({\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2)}.
\end{equation}
It is easily checked that ${\bar g}^{\mu\nu}g_{\mu\nu}=n$, ${\bar g}^{\mu\nu}P_\mu P_\nu={\bar g}^{\mu\nu}{\bar P}_\mu{\bar P}_\nu=0$
and ${\bar g}^{\mu\nu}P_\mu{\bar P}_\nu=-(l.q)^2$. We have then
\begin{equation}
\Sigma^{(1)}=ie^2m\mu^{4-n}nI,
\end{equation}
where
\begin{equation}
I=\int\frac{d^nq}{(2\pi)^n}\frac{1}{q^2({\bar g}^{\alpha\beta} q_\alpha q_\beta-m^2)},
\end{equation}
\begin{equation}
\Sigma^{(2)}=0,
\end{equation}
and
\begin{equation}
\Sigma^{(3)}=2ie^2m\mu^{4-n}l^\mu l^\nu l^\sigma l^\tau I_{\mu\nu\sigma\tau},
\end{equation}
where
\begin{equation}
I_{\mu\nu\sigma\tau}=\int\frac{d^nq}{(2\pi)^n}\frac{q_\mu q_\nu q_\sigma q_\tau}
{q^2(g^{(+)\alpha\beta} q_\alpha q_\beta)(g^{(-)\alpha\beta}q_\alpha q_\beta)({\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2)}.
\end{equation}
In appendix (\ref{INTEGRALS}) we show that
\begin{equation}
I\simeq -\frac{i}{8\pi^2}\frac{1}{n-4},
\end{equation}
and, making use of results for integrals listed there we can show that
\begin{equation}
l^\mu l^\nu l^\sigma l^\tau I_{\mu\nu\sigma\tau}=0.
\end{equation}
It follows that
\begin{equation}
\Sigma(0)\simeq\frac{e^2}{2\pi^2}m\frac{1}{n-4}.
\end{equation}
We have also, referring to eq(\ref{PROPREN1D}) and eq(\ref{PROPREN1E}),
\begin{equation}
H^{(1)\tau}_{~~~~~\rho}=H^{(11)\tau}_{~~~~~~\rho}+H^{(12)\tau}_{~~~~~~\rho}+H^{(13)\tau}_{~~~~~~\rho},
\end{equation}
and
\begin{equation}
H^{(2)\tau}_{~~~~~\rho}=H^{(21)\tau}_{~~~~~~\rho}+H^{(22)\tau}_{~~~~~~\rho}+H^{(23)\tau}_{~~~~~~\rho},
\end{equation}
where
\begin{equation}
H^{(11)\tau}_{~~~~~~\rho}
=ie^2\mu^{4-n}({\delta}^\mu_\rho{\bar g}^{\tau\nu}+{\delta}^\nu_\rho{\bar g}^{\tau\mu}-{\delta}^\tau_\rho{\bar g}^{\mu\nu})g_{\mu\nu}I,
\end{equation}
\begin{equation}
H^{(12)\tau}_{~~~~~~\rho}=ie^2\mu^{4-n}({\delta}^\mu_\rho{\bar g}^{\tau\nu}+{\delta}^\nu_\rho{\bar g}^{\tau\mu}-{\delta}^\tau_\rho{\bar g}^{\mu\nu})
\kappa K_{\mu\nu},
\end{equation}
where
\begin{equation}
K_{\mu\nu}=\int\frac{d^nq}{(2\pi)^n}\frac{P_\mu P_\nu+{\bar P}_\mu{\bar P}_\nu}
{(g^{(+)\alpha\beta}q_\alpha q_\beta)(g^{(-)\alpha\beta}q_\alpha q_\beta)({\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2)}.
\end{equation}
\begin{equation}
H^{(13)\tau}_{~~~~~~\rho}=-ie^2\mu^{4-n}({\delta}^\mu_\rho{\bar g}^{\tau\nu}+{\delta}^\nu_\rho{\bar g}^{\tau\mu}-{\delta}^\tau_\rho{\bar g}^{\mu\nu})
\kappa^2J_{\mu\nu},
\end{equation}
with
\begin{equation}
J_{\mu\nu}=\int\frac{d^nq}{(2\pi)^n}\frac{(l.q)^2(P_\mu{\bar P}_\nu+{\bar P}_\mu P_\nu)}
{q^2(g^{(+)\alpha\beta}q_\alpha q_\beta)(g^{(-)\alpha\beta}q_\alpha q_\beta)({\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2)}.
\end{equation}
With the aid of integrals evaluated in appendix \ref{INTEGRALS} it can be shown that
\begin{equation}
K_{\mu\nu}\simeq 0,
\end{equation}
and
\begin{equation}
J_{\mu\nu}\simeq 0.
\end{equation}
Hence
\begin{equation}
H^{(1)\tau}_{~~~~~\rho}\simeq\frac{e^2}{8\pi^2}({\delta}^\mu_\rho{\bar g}^{\tau\nu}+{\delta}^\nu_\rho{\bar g}^{\tau\mu}-{\delta}^\tau_\rho{\bar g}^{\mu\nu})g_{\mu\nu}\frac{1}{n-4}.
\end{equation}
We have
\begin{equation}
H^{(21)\tau}_{~~~~~~\rho}=-2ie^2\mu^{4-n}{\bar g}^{\tau\beta}I'_{\beta\sigma}({\delta}^\mu_\rho{\bar g}^{\sigma\nu}+{\delta}^\nu_\rho{\bar g}^{\sigma\mu}-{\delta}^\sigma_\rho{\bar g}^{\mu\nu})g_{\mu\nu},
\end{equation}
where
\begin{equation}
I'_{\beta\sigma}=\int\frac{d^nq}{(2\pi)^n}\frac{q_\beta q_\sigma}{q^2({\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2)^2},
\end{equation}
\begin{equation}
H^{(22)\tau}_{~~~~~~\rho}=-2ie^2\kappa\mu^{4-n}{\bar g}^{\tau\beta}
({\delta}^\mu_\rho{\bar g}^{\sigma\nu}+{\delta}^\nu_\rho{\bar g}^{\sigma\mu}-{\delta}^\sigma_\rho{\bar g}^{\mu\nu})K'_{\beta\sigma\mu\nu},
\end{equation}
where
\begin{equation}
K'_{\beta\sigma\mu\nu}=\int\frac{d^nq}{(2\pi)^n}\frac{q_\beta q_\sigma(P_\mu P_\nu+{\bar P}_\mu{\bar P}_\nu)}
{(g^{(+)\alpha\beta}q_\alpha q_\beta)(g^{(-)\alpha\beta}q_\alpha q_\beta)({\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2)^2},
\end{equation}
\begin{equation}
H^{(23)\tau}_{~~~~~~\rho}=2ie^2\kappa^2\mu^{4-n}{\bar g}^{\tau\beta}
({\delta}^\mu_\rho{\bar g}^{\sigma\nu}+{\delta}^\nu_\rho{\bar g}^{\sigma\mu}-{\delta}^\sigma_\rho{\bar g}^{\mu\nu})
l^\xi l^\eta K'_{\beta\sigma\xi\eta\mu\nu},
\end{equation}
where
\begin{equation}
K'_{\beta\sigma\xi\eta\mu\nu}=\int\frac{d^nq}{(2\pi)^n}\frac{q_\beta q_\sigma q_\xi q_\eta(P_\mu{\bar P}_\nu+{\bar P}_\mu P_\nu)}
{q^2(g^{(+)\alpha\beta}q_\alpha q_\beta)(g^{(-)\alpha\beta}q_\alpha q_\beta)({\bar g}^{\alpha\beta}q_\alpha q_\beta-m^2)^2}.
\end{equation}
Again using the integrals evaluated in appendix \ref{INTEGRALS} it can be shown that
$H^{(22)\tau}_{~~~~~~\rho}\simeq H^{(23)\tau}_{~~~~~~\rho}\simeq 0$. Hence
\begin{equation}
H^{(2)\tau}_{~~~~~~\rho}(0)\simeq-\frac{e^2}{4\pi^2}({\delta}^\mu_\rho{\bar g}^{\tau\nu}+{\delta}^\nu_\rho{\bar g}^{\tau\mu}-{\delta}^\tau_\rho{\bar g}^{\mu\nu})g_{\mu\nu}\frac{1}{n-4}.
\end{equation}
It follows that
\begin{equation}
H^\tau_{~~\rho}(0)=-\frac{e^2}{8\pi^2}({\delta}^\mu_\rho{\bar g}^{\tau\nu}+{\delta}^\nu_\rho{\bar g}^{\tau\mu}-{\delta}^\tau_\rho{\bar g}^{\mu\nu})g_{\mu\nu}\frac{1}{n-4},
\end{equation}
and
\begin{equation}
H=H^\tau_{~~\tau}(0)=-\frac{e^2}{2\pi^2}\frac{1}{n-4}.
\end{equation}
We have then
\begin{equation}
h^\tau_{~~\rho}=-\frac{e^2}{4\pi^2}({\bar g}^{\tau\mu}g_{\mu\rho}-{\delta}^\tau_\rho)\frac{1}{n-4}.
\end{equation}
Introducing the expressions for ${\bar g}^{\mu\nu}$ we find
\begin{equation}
h^\tau_{~~\rho}=\frac{e^2}{6\pi^2}(cP^\tau_{~~\rho}(s)+(w+4sc)l^\tau l_\rho)\frac{1}{n-4}.
\end{equation}
We find
\begin{equation}
h^\mu_{~~\rho}{\bar g}^{\rho\nu}=\frac{e^2}{6\pi^2}(cP^{\mu\nu}+(w-2c^2)l^\mu l^\nu)\frac{1}{n-4}.
\end{equation}
Recalling that $e^2g^{(1)\mu\nu}=-h^\mu_{~~\rho}{\bar g}^{\rho\nu}-h^\nu_{~~\rho}{\bar g}^{\rho\mu}$ we find
\begin{eqnarray}
u_0&=&u-\frac{e^2}{3\pi^2}\frac{c}{n-4},\nonumber\\
v_0&=&v-\frac{e^2}{3\pi^2}\frac{w-2c^2}{n-4}.
\label{METN6}
\end{eqnarray}
\subsubsection{\label{RENGP_N} Renormalisation Group for Bimetric Theory - Petrov Class N}
From eqs(\ref{METN4}) using the independence of the bare charge on the renormalisation scale
we find
\begin{equation}
\frac{de^2}{dt}=e^2\left((n-4)+\frac{e^2}{6\pi^2}\right).
\end{equation}
In 4 dimensions this becomes
\begin{equation}
\frac{de^2}{dt}=\frac{(e^2)^2}{6\pi^2},
\end{equation}
with the solution
\begin{equation}
\frac{e^2}{e_S^2}=\left(1-\frac{e_S^2}{6\pi^2}t\right)^{-1},
\end{equation}
where $e_S$ is the value of the coupling when $\mu=\mu_S$.
From eqs(\ref{METN4}) and eqs(\ref{METN6}) we have the result
\begin{equation}
u_0-s_0=c\left(1-\frac{e^2}{2\pi^2}\frac{1}{n-4}\right).
\end{equation}
Since $u_0$ and $s_0$ are independent of the renormalisation scale
it follows that
\begin{equation}
\frac{1}{c}\frac{dc}{dt}=\frac{e^2}{2\pi^2}.
\end{equation}
The solution is
\begin{equation}
\frac{c}{c_S}=\left(1-\frac{e_S^2}{6\pi^2}t\right)^{-3},
\end{equation}
$c_S$ being the value of the coupling $c$ at the scale $\mu_S$.
Similarly
\begin{equation}
v_0-r_0=w-\frac{e^2}{2\pi^2}\frac{w-c^2}{n-4},
\end{equation}
leading to
\begin{equation}
\frac{dw}{dt}=\frac{e^2}{2\pi^2}(w-c^2),
\end{equation}
with the solution
\begin{equation}
w=\left(w_S-c_S^2\left(\left(1-\frac{e_S^2}{6\pi^2}t\right)^{-3}-1\right)\right)\left(1-\frac{e_S^2}{6\pi^2}t\right)^{-3}.
\end{equation}
We see immediately that in the infrared limit $\mu\rightarrow 0$ or $t\rightarrow -\infty$
\begin{eqnarray}
e^2&\rightarrow&0,\nonumber\\
c&\rightarrow&0,\nonumber\\
w&\rightarrow&0.
\end{eqnarray}
Hence as we expect the infrared limit is the weak coupling limit and in this limit
both $c$ and $w$ vanish bringing the metrics $g^{\mu\nu}$ and ${\bar g}^{\mu\nu}$ into
coincidence thus potentially removing the breakdown of Lorentz invariance, at least in the massless case.
We have also from eq(\ref{METN5})
\begin{equation}
\frac{d\kappa}{dt}=\frac{e^2}{6\pi^2}\kappa-\frac{e^2}{6\pi^2}c^2.
\end{equation}
Using the above results we have
\begin{equation}
\kappa=\left(\kappa_S-\frac{c_S^2}{5}\left(\left(1-\frac{e_S^2}{6\pi^2}t\right)^{-5}-1\right)\right)\left(1-\frac{e_S^2}{6\pi^2}t\right)^{-1}.
\end{equation}
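These closed forms can be checked against a direct numerical integration of the one-loop flow equations for $e^2$, $c$, $w$ and $\kappa$. The Python sketch below (illustrative starting values, RK4 stepper, \texttt{numpy} assumed) flows towards the infrared and compares the result with the proposed solutions.

```python
import numpy as np

# starting data at t = 0 (mu = mu_S); the values are illustrative
e2_S, c_S, w_S, kap_S = 0.4, 0.2, 0.15, 0.1
k = e2_S / (6 * np.pi**2)

def closed_form(t):
    # proposed solutions of the one-loop flow equations
    u = 1 - k * t
    e2  = e2_S / u
    c   = c_S * u**-3
    w   = (w_S - c_S**2 * (u**-3 - 1)) * u**-3
    kap = (kap_S - c_S**2 / 5 * (u**-5 - 1)) * u**-1
    return np.array([e2, c, w, kap])

def rhs(s):
    # d(e^2)/dt = e^4/6 pi^2,  dc/dt = (e^2/2 pi^2) c,
    # dw/dt = (e^2/2 pi^2)(w - c^2),  d(kappa)/dt = (e^2/6 pi^2)(kappa - c^2)
    e2, c, w, kap = s
    return np.array([e2**2 / (6 * np.pi**2),
                     e2 * c / (2 * np.pi**2),
                     e2 * (w - c**2) / (2 * np.pi**2),
                     e2 * (kap - c**2) / (6 * np.pi**2)])

state, t, dt = closed_form(0.0), 0.0, -0.005
for _ in range(40000):                     # flow towards the infrared
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    state += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt

err = np.max(np.abs(state - closed_form(t)))
assert err < 1e-8                          # closed forms agree with the integration
assert state[0] < e2_S                     # the coupling decreases towards the IR
```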
It follows that in the infrared limit $\kappa$ vanishes and hence so does any birefringence.
Note that we can choose $c_S=\kappa_S=0$ which implies $c=\kappa=0$ and therefore we can consistently set
$s=u=\kappa=0$. This leaves $w$ as the only significant remaining variable which provides a
Lorentz breaking scenario that is the lightlike case of Petrov class O referred to in the previous section.
Were we then to choose $w_S=0$ we would return to a situation of Lorentz invariance. If however $c_S\ne 0$
then we induce nonvanishing values for $w$. For the massless case we can again argue that Lorentz invariance
returns in the infrared limit. However if we examine the renormalisation group for the mass we find
\begin{equation}
m=m_S\left(1-\frac{e_S^2}{6\pi^2}t\right)^{9/4}.
\end{equation}
This is the same behaviour as the Petrov class O model in the neighbourhood of the IR fixed point.
However in this case it holds for finite values of the Lorentz breaking parameters. Again we
require a closer examination of the infrared limit in this case to deal with a non-zero mass
for the electron.
\section{\label{CONC} Conclusions}
We have examined the breakdown of Lorentz invariance in QED
through a premetric formulation of electrodynamics parametrised by a
tensor $U^{\mu\nu\sigma\tau}$ that has the same symmetry properties
as the Riemann tensor. However we showed that in fact there is a {\it preferred}
metric $g^{\mu\nu}$ that allows us to decompose $U^{\mu\nu\sigma\tau}$ in the form
$$
U^{\mu\nu\sigma\tau}=g^{\mu\sigma}g^{\nu\tau}-g^{\mu\tau}g^{\nu\sigma}-C^{\mu\nu\sigma\tau}.
$$
where $C^{\mu\nu\sigma\tau}$ has the symmetry properties of the Weyl tensor.
We can therefore use the Petrov classification for $C^{\mu\nu\sigma\tau}$ to
delineate all the possible forms of Lorentz symmetry breaking in electrodynamics
and ultimately in QED. Apart from the null case, for which there is no Lorentz
symmetry breaking in electrodynamics (QED requires further analysis), all the other
canonical examples exhibit birefringence. We established the dispersion relations
for each Petrov class. In all cases this has the form of a homogeneous quartic
constraint on the wave vector of the mode. In some cases this quartic has two
quadratic factors each corresponding to a particular polarisation. Each factor
yields a separate and distinct light cone. In other cases the quartic does not factorise
in this way and hence is inherently more complex than a simple double lightcone structure.
In examining the plane wave solutions of the general Lorentz symmetry breaking case we
made use of the gauge condition on the electromagnetic field $g^{\mu\nu}\partial_\mu A_\nu(x)=0$.
However, motivated by the potential absence of Lorentz symmetry we also explored a more general
gauge condition $\Lambda^{\mu\nu}\partial_\mu A_\nu(x)=0$. The choice of gauge condition does not
affect the physical solutions but it does affect the unphysical ones. In fact $\Lambda^{\mu\nu}$
determines the light cone for these unphysical modes and also for the associated ghost
modes. The latter do not play any role in electrodynamics or QED, but they do in a non-abelian
gauge theory. In fact it turns out that the more general gauge condition
comes into its own when we consider the renormalisation program for QED.
We examine the general structure of renormalised BIMQED to one loop order in
perturbation theory but without assuming that the Lorentz symmetry breaking
is itself small. The nature of this breaking is determined by the metric ${\bar g}^{\mu\nu}$
governing the propagation of the electron field through the tensor
$W^{\mu\nu\sigma\tau}={\bar g}^{\mu\sigma}{\bar g}^{\nu\tau}-{\bar g}^{\mu\tau}{\bar g}^{\nu\sigma}$ which appears as
a factor in the residue of the UV divergence of the vacuum polarisation diagram.
The standard decomposition of this tensor leads to a traceless Weyl-like tensor $V^{\mu\nu\sigma\tau}$.
The Petrov class of this tensor can be used to determine the nature of the Lorentz symmetry breaking
in the model. We give examples, though by no means a complete list, of how different choices
for ${\bar g}^{\mu\nu}$ lead to different Petrov classes for $V^{\mu\nu\sigma\tau}$.
Finally we apply the renormalisation program in detail to the two simplest Petrov
classes of symmetry breaking, namely class O and class N. We derive the renormalisation
group flows in these cases and conclude that Lorentz symmetry breaking is suppressed in the
infra-red limit at least in the massless case. The results are entirely consistent with previous
analyses. In our case we are not restricted to small deviations and can show, to $O(e^2)$, that the result
holds however large the breaking at shorter distances. That is, there appear to be no
unexpected fixed points for nonvanishing Lorentz symmetry breaking.
It is of course of great interest to examine the corresponding case of a non-abelian
gauge theory such as QCD where the weak coupling fixed point occurs in the ultraviolet
rather than the infrared limit. We will consider this case in a later paper.
\section{Introduction}
\label{Introduction}
Trans-Neptunian Objects (TNOs) are thought to be among the least evolved relics of the solar system formation, residing in the outer parts of the solar system, where the influence of the Sun is less severe than in the inner parts of the solar system. Thus, these icy objects are very important bodies that carry plenty of information on the physical and dynamical processes that shaped our solar system. Therefore, they are key bodies for our understanding of the formation and evolution of the solar system.
According to \cite{Fernandez2020} ``we are at the beginning of the exploration of the Trans-Neptunian region so we look forward to new and
important discoveries that will very likely revolutionize our current view of how the solar system formed and evolved''.
At the time of this writing (February 2020) there are 2416 TNOs (including Pluto), 1085 Scattered Disc Objects (SDOs) plus Centaurs, and 24 Neptune Trojans as listed by the Minor Planet Center\footnote{\url{https://www.minorplanetcenter.net/iau/lists/MPLists.html}.}.
In order to study these objects, there are many different observational strategies. Among them, stellar occultations offer the most powerful means of observing TNOs from the ground to determine key physical properties such as size and shape. Through stellar occultations, sizes and shapes with kilometric accuracy can be derived, as well as accurate geometric albedos (when the data are combined with reflected light measurements). This technique is especially fruitful in combination with thermal measurements and modeling \citep{Muller2018}. Detecting potential atmospheres is also possible through occultations. Besides, after the recent discovery of a dense ring around Haumea \citep{Ortiz2017}, in the context of the previous findings of a ring system around the centaur Chariklo \citep{Braga-Ribas2014} and a structure in Chiron closely resembling that of Chariklo \citep{Ortiz2015, Sickafoose2019}, there is even more interest in the field of stellar occultations by TNOs, as more rings can potentially be found in the Trans-Neptunian region.
However, the process from predicting to observing stellar occultations by TNOs is complex and presents many difficulties. As a result, most of the positive occultation detections thus far have been made from single sites, which gives only limited information \cite[for a review see, e.g.,][]{Ortiz2020}.
Fortunately, there have been several cases with multichord detections, from which plenty of information was retrieved and published \cite[e.g.,][]{Elliot2010,Sicardy2011,Ortiz2012,Braga-Ribas2013,Benedetti2016,Schindler2017,DiasOliveira2017,Ortiz2017,Leiva2017,Benedetti2019}.
Here we present the observations and the results of the stellar occultation by 2002~TC$_{302}$ on 28$^{th}$ January 2018, which set a record in the number of detections, together with additional information on rotational light curves and time series astrometry to try to put together a coherent picture of this body. The results are interpreted in the context of other bodies of similar size and features.
\section{Occultation predictions}
\label{Occultation predictions}
The occultation by 2002~TC$_{302}$ on 28$^{th}$ January 2018 was predicted within our program of physical characterization of TNOs by means of stellar occultations. Important international efforts in this regard are currently being coordinated within the framework of the Lucky Star project\footnote{\url{http://lesia.obspm.fr/lucky-star/}} and the observations of this event were organized in that context of collaboration. The prediction was made in different steps with different star catalogs. We used the HSOY \citep{Altmann2017} and UCAC5 \citep{Zacharias2017} catalogs because the Gaia DR2 catalog \citep{Lindegren2018} did not exist until April 2018 and Gaia DR1 did not have information on proper motions, whereas UCAC5 and HSOY contained that information for a bright subset of Gaia stars. Additionally, we used different orbit solutions for the TNO (from the AstOrb\footnote{\url{https://asteroid.lowell.edu/main/astorb}}, MPCORB\footnote{\url{https://minorplanetcenter.net/iau/MPCORB.html}}, JPL Horizons\footnote{\url{https://ssd.jpl.nasa.gov/?horizons}}, AstDys\footnote{\url{https://newton.spacedys.com/astdys/}} sites and our own orbit fits). Once the potential occultation seemed favorable enough to be observed, we carried out specific astrometric monitoring runs for 2002~TC$_{302}$ to narrow down the shadow path uncertainty. As it is well known, the positions of stars down to $\sim $20 magnitude in V are now available to a good accuracy, of the order of or below a milliarcsecond (mas) thanks to the Gaia DR2 catalog including proper motion information \citep{Lindegren2018}, but unfortunately, the positions of the TNOs are not known to that level of accuracy. Hence, specific methods have to be developed to solve this problem \citep[see][]{Ortiz2020}. 
Usually, a careful astrometric monitoring of the target TNO with sufficiently large telescopes and within a few months to a few days prior to the potential occultation combined with a specific analysis of the measurements, is one of the preferred techniques that have yielded positive occultation results. But care must be taken with the existence of satellites, with contamination from faint background stars, and with other effects that may bias the astrometric observations. The main parameters of the star relevant for the occultation are listed in Table \ref{tb:physical_properties}. Note that the angular diameter was estimated according to the expressions given in \citet{vanBelle1999} with colors from the NOMAD catalog \citep{Zacharias2004}.
From the astrometric monitoring of 2002~TC$_{302}$ carried out in several days of October, November, December 2017 and January 2018 through the use of the Sierra Nevada 1.5m telescope (Granada, Spain) and the Calar Alto 1.2m telescope (Almer\'ia, Spain), we obtained refined predictions with respect to the initial one. From the latest measurements made just a few days prior to the stellar occultation by 2002~TC$_{302}$, the path of the occultation was predicted to be favorable to a large area in Europe (see Figure \ref{predictmap}), although there was some concern that these measurements could be affected by the presence of a potential satellite and the centroid biased to a wrong position. Nevertheless, the prediction made with a specific orbital fit of the NIMA type \citep{Desmars2015} to all the available data indicated a similar path on Earth\footnote{\url{http://lesia.obspm.fr/lucky-star/predictions/single.php?p=3492}}. The final path, reconstructed from the fit to the occultation chords described in subsequent sections,
is depicted in Figure \ref{observedmap}.
\begin{figure}
\includegraphics[width=\columnwidth]{Figura1_mapa_articulo_201801_preocultacion_recortada.jpg}
\caption{Map showing the shadow path prediction (blue lines) for the stellar occultation by 2002~TC$_{302}$ on 28$^{th}$ January 2018, based on the last set of astrometry measurements obtained prior to the occultation. The green line shows the center of the shadow path. The motion of the shadow is from north to south. The width of the shadow path in the map is 500 \si{\kilo\meter}. The real shadow path of the occultation, obtained after the analysis of the occultation chords, is shown in Figure \ref{observedmap}.}
\label{predictmap}
\end{figure}
The astrometric observations at the 1.5-m telescope of Sierra Nevada Observatory consisted of CCD images taken with the $2{\rm k}\times2{\rm k}$ Versarray camera\footnote{\url{https://www.osn.iaa.csic.es/en/page/ccdt150-camera}}, which has a field of view (FoV) of $7.8\times7.8$ arcmin and a scale of 0.23 arcsec/pixel. The integration time was 400 s and no filters were used in order to maximize the signal to noise ratio of the observations. Typical seeing ranged from 1.3 to 2.5 arcsec. The Calar Alto 1.2m telescope images were acquired using the $4{\rm k}\times4{\rm k}$ DLR camera\footnote{\url{https://www.caha.es/CAHA/Instruments/IA123/index.html}}, which has a FoV of $22\times22$ arcmin and a pixel scale of 0.32 arcsec/pixel. We used the Johnson-Cousins $R$ filter to avoid fringing in the near-IR, which is a known issue in this camera. The typical seeing was around 1.5 arcsec and the integration times used were 400 s. At both telescopes, images were obtained using $2\times2$ binning mode and sidereal tracking. All images were bias subtracted and flatfield corrected using bias frames taken each night and using a median of flatfield frames obtained on each observing night (if this was not possible, flatfield frames from a previous night were used).
The images were analyzed with our own software that extracts the sources of the images, solves for the plate constants using a specific astrometric catalog (selected by the user) and then performs the astrometry of the target. The astrometric catalog used was Gaia DR1 because Gaia DR2 had not been released prior to the occultation. Therefore, no proper motion corrections were applied to the reference stars. This process is fully automated, but visual inspection was made to discard images in which the target could be blended with a background star and to discard images with cosmic ray hits near the target. Also, care was taken so that charge bleeding or blooming or ghosts from bright stars in the FoV or any anomalous aspect did not affect the TNO measurements. Online Table 1 lists all the measurements referred to the J2000 equinox. The NIMA prediction also included observations performed at Pico dos Dias Observatory (OPD) in October (18/10/2017) and November (12/11/2017) with the 1.6-m Perkin Elmer Telescope and using the Andor-IKon camera\footnote{\url{http://www.lna.br/opd/instrum/ccd/manual_ikon.pdf}} (pixel scale = 0.180 arcsec/pixel, FoV $=7\times7$ arcmin). The images were calibrated with bias and flatfields taken during the same night and the exposure times used were 180 s with the Johnson $I$ filter.
\begin{figure}
\includegraphics[width=\columnwidth]{Figura2_mapa_articulo_201801_ocultacion_real_corregida_recortada.jpg}
\caption{Map showing the shadow path (blue lines) reconstructed after the occultation results were obtained. The width of the shadow path used in the map is 500 \si{\kilo\meter}, which is the equivalent-area diameter derived from the occultation. The green line shows the center of the shadow path. The shadow moved from north to south. The blue marks show the observatory sites where the occultation was detected. The two red marks indicate the two observatory sites closest to the shadow, where the event was negative.}
\label{observedmap}
\end{figure}
\begin{table*}
\centering
\caption{ Main characteristics of the occulted star.}
\label{tb:physical_properties}
\begin{tabular}{cccccc}
\hline
\hline
Gaia ID & RA & Dec & G & Ang. Diam. & Speed \\
&(hh mm ss.s)&(\degr ~\arcmin ~\arcsec)&(mag)&(mas)&(\si{\kilo\meter\per\second})\\\hline
130957817758443648 & 02 21 49.3853105797 & +28 24 13.439342645 & 15.6 & 0.009 & 4.77 \\\hline
\end{tabular}
\tablefoot{Abbreviations are defined as follows: Gaia DR2 identification number (Gaia ID), J2000 coordinates of the star from Gaia DR2 (right ascension and declination, RA and Dec., respectively), G magnitude (G), angular diameter (Ang. Diam.), speed relative to the observer (speed).}
\end{table*}
\section{Occultation observations}
\label{Occultation Observations}
The occultation observations were performed with different telescopes and with various camera setups. The main observational details for the occultation observations at the main sites involved in the campaign are listed in Table \ref{tb:sites}. We list only the sites where the event was positive or where a close miss to the final shadow path was produced (providing at least constraints to the final fit of a shape model). More observatories than those listed in Table \ref{tb:sites} monitored the event, but did not achieve positive results for a variety of reasons (because they were far from the final occultation path or because they were clouded out or had technical problems). Unfortunately, a complete list of all the observatories that participated in the campaign cannot be derived, because there were amateur observers alerted through different internet tools and other procedures that we could not monitor. Nevertheless, we are keeping a registry\footnote{\url{http://asteroidstnos.iaa.es/content/sharedfiles}} of all the observers who report their participation. This registry will be updated whenever new information becomes available.
Most of the observations from the sites in Table \ref{tb:sites} consisted of sequences of images taken with different telescopes and different CCDs or CMOS detectors, as specified in the table. Other observations, indicated also in the table, were acquired in video mode. The video observations required a different analysis as explained in Section \ref{Analysis}. No filters were used to maximize the number of photons received in order to get the highest possible signal to noise ratio. Table \ref{tb:sites} shows the names of the observing sites, their topocentric coordinates (longitude and latitude), the exposure time, the cycle time between consecutive exposures, the diameters of the telescopes, and the detector manufacturers and models.
The observations started typically 15 minutes before the predicted time for the occultation and were finalized around 15 minutes after the event. This was done in order to determine a good baseline for the photometric analysis, and to assess its noise level before and after the occultation event. The moon was 90\% full at 52 degrees from the target. This means that considerable sky background affected the observations and therefore the signal to noise ratio of the observations was not as high as it could have been in the absence of moon illumination. Weather conditions were mostly clear in all the observatories except at Tavolaia, where intermittent clouds were present. Nevertheless, all the observing sites achieved enough signal to noise ratio in the images (see last column of Table \ref{tb:times}) so that the occultation brightness drop was clearly detected.
The campaign around this occultation was a major achievement because no stellar occultation by a TNO had ever been observed with so many chords across the main body and with near misses.
\begin{table*}
\sisetup{
table-format = 2.3 ,
table-number-alignment = center ,
}
\centering
\caption{Observatories and characteristics of the observations.}
\label{tb:sites}
\begin{tabularx}{\textwidth}{p{3cm}p{2cm}c*{3}{S}Y}
\hline
\hline
Site & \multicolumn{1}{Y}{Longitude~(E)} & Latitude (N) & \multicolumn{1}{Y}{Exp. time} & \multicolumn{1}{Y}{Cycle time} & \multicolumn{1}{Y}{Telescope diameter} & Detector \\
& \multicolumn{1}{Y}{(\degr)} & (\degr) & \multicolumn{1}{Y}{(s)} & \multicolumn{1}{Y}{(s)} & \multicolumn{1}{Y}{(cm)} & \\\hline
Crni Vrh & 14.071083 & 45.945833 & 3.4 & 4.995 & 60 & Apogee Alta U9000HC \\
Asiago & 11.568806 & 45.849444 & 5.0 & 8.301 & 67 & Moravian G4-16000LC, KAF-16803 \\
S. Marcello Pistoiese & 10.803889 & 44.064167 & 3.0 & 3.933 & 60 & Apogee Alta U6 \\
Tavolaia & 10.673306 & 43.736500 & 7 & 7.23 & 40 & ASI 174MM \\
Mount Agliale & 10.514944 & 43.995278 & 2.0 & 3.718 & 50 & FLI proline 4710\\
La Spezia & 9.853528 & 44.126278 & 4.0 & 7.830 & 40 & Sbig STXL 6303e \\
Gnosca & 9.024028 & 46.231444 & 2.56 & 2.568 & 28 & video WAT-910HX-RC, ICX429ALL \\
Varese, Schiaparelli Observatory & 8.770278 & 45.868056 & 2.0 & 5.716 & 84 & SBIG STX-16803 \\
Observatoire de Cote d'Azur, Nice & 7.299833 & 43.725806 & 2.2 & 2.2005 & 40 & ASI 174MM \\
Aosta Valley & 7.478333 & 45.789444 & 2.0 & 2.695 & 40 & FLI1001E \\
Biot, Nice & 7.077778 & 43.617222 & 20.0 & 26.463 & 20 & video QSI 583 wsg, KAF 8300 \\
Vinon sur Verdon & 5.796111 & 43.737778 & 10 & 30.467 & 30 & Atik 383L, KAF 8300\\\hline\hline
Near Misses & & & & & \\\hline
Osservatorio Salvatore Di Giacomo & 14.564056 & 40.623944 & 2 & 3.4 & 50 & FLI PL4220, E2V CCD42-40-1-368 \\
Agerola & 14.571556 & 40.626083 & 2.5 & 2.5 & 25 & ASI 178M, IMX178 \\
Sabadell & 2.090167 & 41.550000 & 2.56 & 2.56 & 50 & Watec 910HX-RC \\
San Esteve & 1.872528 & 41.493750 & 2 & 2 & 40 & Point Grey Chameleon3 \\\hline
\end{tabularx}
\end{table*}
\section{Analysis of the occultation observations}
\label{Analysis}
Synthetic aperture photometry of the occultation star (blended with the TNO) was derived for the sequences of images of the different cameras in order to obtain the light curves for each site. The synthetic aperture photometry results were derived by means of an Interactive Data Language (IDL) code that uses the implementation of the well known \texttt{DAOPHOT} photometry package \citep{Stetson1987}. Also, synthetic aperture photometry was derived for comparison stars close to the target star in the FoV of the cameras, so that sky transparency fluctuations as well as seeing variations could be taken into account. The final light curves were obtained by taking the ratio of the occulted star flux in ADUs to that of a comparison star (or the combination of several comparison stars if this was possible). We carefully monitored the dispersion of the final light curves and chose the synthetic aperture diameters as well as the rest of the parameters of the synthetic aperture technique to get the least possible scatter in the photometry. Centroid tracking to recenter the apertures was done for the reference stars but not for the occultation star, whose position was kept fixed with respect to the references. The time of each photometry point was derived from the time stamps in the FITS headers of the images. It must be noted that some cameras inserted header times rounded off to the nearest second or truncated seconds. In these cases, we interpolated times for each point by using linear fits to the times versus frame number, in the same way as described in \citet{Sicardy2011}. It must also be noted that for the Tavolaia observations, a problem in the acquisition prevented the times from being saved in the headers of the images, so we could only use the time of the recording of the file to disk as provided by the operating system.
Hence, the timing of these observations is more uncertain than the rest, and in fact, we have accounted for this as explained in section \ref{Size and shape}.
The rest of the sites other than Tavolaia used internet NTP time servers to synchronize their image acquisition computers. Even though this technique is capable of providing accuracies of 0.01 s, the way in which different operating systems and camera control software deal with time (and possible shutter opening delays) is often not accurate to the level of 0.1 s. In fact, errors up to a few tenths of a second have been reported in internet-synchronized devices under Windows operating systems \cite[e.g.,][]{BarryGault2015}. As we mention in a paragraph below, these time errors are nevertheless smaller than those arising from the square-well fits to determine ingress and egress times (given the photometric errors and relatively large exposure times as well as large readout times).
The video observations required a specific analysis. We used the Tangra video analysis tool\footnote{\url{http://www.hristopavlov.net/Tangra3/}} to derive the light curves and the timings, which came from the video-inserted GPS-based time stamps whose accuracy is on the order of a millisecond once the small time delays of the video cameras (which depend on the manufacturer) are taken into account.
For the instrumental delay Tangra uses the tables provided by
Gerhard Dangl.\footnote{\url{http://www.dangl.at/ausruest/vid_tim/vid_tim1.htm}}
All the light curves were normalized to the mean brightness level of the blended star and TNO, outside the main occultation event, by dividing the flux by that mean level. Hence, the mean level of the normalized light curve has the value 1 outside the occultation part. The light curves in normalized flux are shown in Figure \ref{observedlightcurves}.
The uncertainties in the fluxes were derived from the photometry using Poisson noise estimations as obtained in the photometry software package \texttt{DAOPHOT}. Note that all the uncertainties or error bars given throughout the paper are 1$\sigma$. The average values of the theoretical uncertainties were computed and compared with the standard deviation of the observations. If the theoretical errors departed from the observed standard deviation, the individual errors were multiplied by the ratio of the observed to the computed standard deviation. This was often needed because the gain values (number of electrons per count on the detector) were not well known for most of the devices. Hence, we assigned a different error bar to each individual measurement (in other words, we did not assume a constant error bar equal to the standard deviation of the measurements). This is particularly relevant during the occultation, when the relative uncertainties are much higher than outside it.
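The error-bar rescaling described above can be sketched as follows. This is our own minimal illustration, not the authors' IDL code; the function name and the use of the out-of-occultation baseline as the reference scatter are assumptions:

```python
import numpy as np

def rescale_errors(baseline_flux, err_theory):
    """Multiply per-point theoretical (Poisson) error bars by the ratio of the
    observed scatter of the out-of-occultation baseline to their mean value,
    compensating for poorly known detector gains."""
    scale = np.std(baseline_flux, ddof=1) / np.mean(err_theory)
    return err_theory * scale
```

Each point keeps its own relative error; only the overall scale is adjusted to match the observed dispersion.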
\begin{figure*}
\centering
\includegraphics[width=0.9\hsize]{figure3Ortizetal2020.png}
\caption{Light curves in normalized flux from all the observatories. The light curves have been displaced in the horizontal axis to account for the different longitudes so that all the occultation drops are visually aligned.}
\label{observedlightcurves}
\end{figure*}
Once all the occultation light curves were derived, we proceeded to fit square well models to the parts of the light curves that showed the occultation. The fits were performed using the same expressions and methodology as in other occultation works \cite[e.g.,][]{Braga-Ribas2014,Benedetti2016,DiasOliveira2017,Ortiz2017,Benedetti2019}. From those fits, the star disappearance and reappearance times were derived and their uncertainties were determined as those values providing fits such that the values of $\chi^2$ were within the interval [$\chi^2_{min}$,$\chi^2_{min} + 1$]. The disappearance and reappearance times and their errors are listed in Table \ref{tb:times}.
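The square-well fitting procedure can be illustrated with a short sketch. This is our own minimal implementation, not the authors' code; the fixed bottom level, the brute-force grid search, and the sub-exposure sampling are simplifying assumptions (the published fits also explore the $\chi^2_{min}+1$ contour for the 1$\sigma$ bounds):

```python
import numpy as np

def model_flux(t_start, exp, t_in, t_out, bottom=0.012, n=50):
    """Square-well occultation model (flux 1 outside [t_in, t_out], `bottom`
    inside), averaged over each finite exposure starting at t_start [s]."""
    grid = t_start[:, None] + exp * (np.arange(n)[None, :] + 0.5) / n
    well = np.where((grid >= t_in) & (grid <= t_out), bottom, 1.0)
    return well.mean(axis=1)

def fit_square_well(t_start, exp, flux, err, t_grid):
    """Brute-force chi^2 minimization over candidate (t_in, t_out) pairs."""
    best_chi2, best_pair = np.inf, None
    for t_in in t_grid:
        for t_out in t_grid:
            if t_out <= t_in:
                continue
            m = model_flux(t_start, exp, t_in, t_out)
            chi2 = np.sum(((flux - m) / err) ** 2)
            if chi2 < best_chi2:
                best_chi2, best_pair = chi2, (t_in, t_out)
    return best_pair, best_chi2
```

Averaging the model over each exposure is what makes the fit sensitive to partial occultations within a single frame, which dominates the timing uncertainty when the cycle time is long.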
\begin{table*}
\centering
\caption{Disappearance and reappearance times.}
\label{tb:times}
\begin{tabular}{cr@{\ \ $\pm$ \ \ }lr@{\ \ $\pm$ \ \ }lc}
\hline
\hline
Site &\multicolumn{2}{c}{Ingress time} &\multicolumn{2}{c}{Egress time} & \multicolumn{1}{c}{rms of the} \\
& \multicolumn{2}{c}{(hh:mm:ss.s $\pm$ s.s)} & \multicolumn{2}{c}{(hh:mm:ss.s $\pm$ s.s)} & \multicolumn{1}{c}{normalized flux} \\
\hline
Crni Vrh &21:52:51.687 & 1.150 &21:53:35.510 & 0.350 & 0.085 \\
Asiago &21:52:43.400 & 1.400 &21:54:08.875 & 0.200 & 0.042 \\
S. Marcello Pistoiese &21:53:22.837 & 0.475 &21:54:54.051 & 0.163 & 0.062 \\
Tavolaia &21:53:36.3 & 2.5 &21:55:08.7 & 2.5 & 0.36 \\
Mount Agliale &21:53:23.150 & 0.300 &21:54:57.450 & 0.100 & 0.061 \\
La Spezia &21:53:18.850 & 0.500 &21:54:55.550 & 2.400 & 0.139 \\
Gnosca &21:52:37.170 & 1.250 &21:54:21.850 & 1.250 & 0.173 \\
Varese, Schiaparelli Observatory &21:52:42.585 & 0.100 &21:54:24.775 & 0.100 & 0.049 \\
Observatoire de Cote d'Azur, Nice &21:53:39.661 & 0.280 &21:55:23.301 & 0.720 & 0.174 \\
Aosta Valley &21:52:49.240 & 0.440 &21:54:31.565 & 0.420 & 0.176 \\
Biot, Nice &21:53:46.800 & 5.500 &21:55:24.750 & 3.500 & 0.174 \\
Vinon sur Verdon &21:53:52.08 & 6.70 &21:55:21.13 & 6.00 & 0.096 \\
\hline
\end{tabular}
\end{table*}
Since the brightness drops at the occultation were all sharp, there is no evidence at all for a global atmosphere in 2002~TC$_{302}$. The square well models provided very satisfactory fits, without any need for incorporating an atmosphere. Specific calculations to derive upper limits for the pressure of putative atmospheres of different compositions would be needed, but such calculations are beyond the scope of this paper because the range of possible atmospheric compositions and temperature profiles is too wide. However, to give an idea, based on similar calculations made for Makemake \citep{Ortiz2012}, Quaoar \citep{Braga-Ribas2013}, and 2003~AZ$_{84}$ \citep{DiasOliveira2017}, for which similar noise levels of the light curves were obtained and given that the sizes of the bodies were similar, we can guess that 2002~TC$_{302}$ lacks a global atmosphere with upper limits of the order of 100 nbar in pressure for N$_2$ and CH$_4$ compositions, as described in the papers mentioned above.
The apparent magnitude of 2002~TC$_{302}$ on 22$^{nd}$ January 2018, measured with respect to the Gaia DR1 G band, turned out to be 20.40 $\pm$ 0.07 mag, using the image set from the 1.5-m Sierra Nevada telescope taken closest to the occultation date. Given that the occulted star has a G magnitude of 15.589 according to the Gaia DR1 catalog, this means that the expected brightness at the bottom of the occultation would be 0.012 in normalized flux.
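The quoted bottom level follows from elementary arithmetic on the two magnitudes given above (a consistency check only):

```python
m_tno, m_star = 20.40, 15.589                 # G magnitudes quoted above
ratio = 10 ** (-0.4 * (m_tno - m_star))       # F_TNO / F_star
bottom = ratio / (1.0 + ratio)                # normalized flux with the star occulted
print(round(bottom, 3))  # → 0.012
```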
The derived times of ingress and egress and their uncertainties were the basis to determine the chords of the occultation, once the positions of the TNO were projected in the plane of the sky. The chords were then used for the subsequent step of determining the projected size and shape.
\section{Projected size and shape}
\label{Size and shape}
Since we expect that large TNOs like 2002~TC$_{302}$ should be triaxial ellipsoids or spheroids \citep[e.g.,][]{TancrediFavre2008}, and the two-dimensional projection of these shapes is an ellipse, it makes sense to fit such an ellipse to the extremities of the chords.
\begin{figure*}
\centering
\includegraphics[width=0.9\hsize]{ChordsLinearFit_2002TC302_12may2020_OLD-shifts.png}
\caption{Chords of the occultation in the sky plane and a linear fit to the centers of the chords. North is up, East to the left. Each color represents a different chord as labeled in the insert. The time shifts needed for centers of the chords to be aligned are also shown in the insert. The black dots denote the centers of the chords and the dashed line represents the fit.}
\label{linearfitchordscenters}
\end{figure*}
Because, as explained in section \ref{Occultation Observations}, the chord from Tavolaia was affected by a considerable uncertainty in time due to the technical issue, we decided to look for the best shifts of the chords that would result in their centers lying on a straight line. We know that this condition must be satisfied by an ellipse and we have the a priori knowledge that the projected shape of 2002~TC$_{302}$ must be an ellipse. Hence, by shifting the chords in this way we make sure that we get the shape that best matches with the theoretical one. Note that, in addition to small topographic relief, which can cause small decentering of the chords, unexplained time shifts of up to 27s have been reported before \citep{Elliot2010}, and \citet{Braga-Ribas2013} also identified smaller but noticeable shifts, so the possibility to shift the chords must be considered. Hence, a linear fit of the chord centers (with each chord center weighted by its nominal uncertainty) was performed (see Figure \ref{linearfitchordscenters}). The shifts were determined from the residuals of the straight line fit.
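The weighted straight-line fit to the chord centers, and the shifts taken from its residuals, can be sketched as follows. This is our own minimal illustration, not the authors' code; the function name, the $(f, g)$ sky-plane coordinate convention, and the choice of abscissa are assumptions:

```python
import numpy as np

def chord_center_shifts(centers, sigmas):
    """Weighted least-squares straight line through the chord midpoints in the
    sky plane; the residuals are the offsets used to shift each chord.
    centers: (N, 2) array of midpoints (f, g) [km]; sigmas: (N,) 1-sigma
    uncertainties of the midpoints [km]."""
    f, g = centers[:, 0], centers[:, 1]
    W = np.diag(1.0 / sigmas ** 2)
    A = np.vstack([g, np.ones_like(g)]).T      # fit f = slope * g + intercept
    slope, intercept = np.linalg.solve(A.T @ W @ A, A.T @ W @ f)
    return f - (slope * g + intercept)         # per-chord residuals
```

A chord with an anomalous timing, such as the Tavolaia one, shows up as the largest residual and is shifted along its track accordingly.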
The ellipse fit was carried out following the same methods as in previous occultations, \cite[e.g.,][]{Braga-Ribas2013,Braga-Ribas2014,Benedetti2016,DiasOliveira2017,Ortiz2017,Benedetti2019}. The fitted parameters are the center of the ellipse, the semiaxes and the tilt angle of the ellipse.
The resulting ellipse fit is illustrated in Figure \ref{fittochords}. The axes of the ellipse are 543.2 $\pm$ 18 \si{\kilo\meter} and 459.5 $\pm$ 11 \si{\kilo\meter}, with a position angle of $3\pm1$ degrees. In this case the errors were determined by using the same procedure as in previous occultation works \cite[e.g.,][]{Braga-Ribas2013,Braga-Ribas2014,Benedetti2016,DiasOliveira2017,Ortiz2017,Benedetti2019}. The equivalent diameter in projected area is 499.6 \si{\kilo\meter}.
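The equivalent-area diameter quoted above follows directly from the fitted axes (the area of an ellipse with full axes $a$ and $b$ equals that of a circle of diameter $\sqrt{ab}$):

```python
import math

a, b = 543.2, 459.5        # fitted ellipse axes [km]
d_eq = math.sqrt(a * b)    # diameter of the circle with the same projected area
print(round(d_eq, 1))  # → 499.6
```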
Whether the three-dimensional shape of 2002~TC$_{302}$ is a triaxial ellipsoid (with $a>b>c$, and $a$, $b$, $c$ being the semiaxes of the body) or an oblate spheroid (with $a=b>c$) is something that cannot be determined from an occultation alone (several occultations at different rotational phases would be needed, or rotational light curves should be obtained to complement the occultation information). In our case we combined rotational light curves with the occultation. We discuss the two possible shapes in the next sections.
\begin{figure*}
\centering
\includegraphics[width=0.9\hsize]{Figure5Ortizetal2020.png}
\caption{Chords of the occultation in the plane of the sky and elliptical fit to the chords. The color coding is the same as that in Figures \ref{observedlightcurves} and \ref{linearfitchordscenters}. The red segments at the chord extremities show the uncertainties from the ingress and egress fits. Note the asymmetry between ingress and egress timing uncertainties in tracks 1, 2, and 6. This is due to operational overheads of the detectors, and if the ingress (or the egress) takes place during one of these periods, the uncertainty is larger than that solely due to the photometry noise. The near misses at Sabadell and Agerola are indicated as dashed red lines (easternmost and westernmost, respectively).}
\label{fittochords}
\end{figure*}
\section{Light curves of 2002~TC$_{302}$ to determine the rotational period}
\label{Rotational light curves}
The occultation-derived ellipse is just an instantaneous projection of the three-dimensional shape. Therefore, in order to interpret the occultation results, determining the rotation period and rotational light curve is important. If the rotational light curve is double peaked, then it is very likely that the object is a triaxial body, although it could also be an oblate spheroid with a large irregularity. There are also cases of bodies with an oblate shape that present double-peaked rotational light curves arising from albedo variability on their surfaces. A notable example of this is the dwarf planet Ceres, whose oblate shape is well known from stellar occultations \citep[e.g.,][]{GomesJunior2015} and the DAWN spacecraft visit \citep[e.g.,][]{Russel2016}, while it exhibits a low-amplitude double-peaked rotational light curve due to albedo features \cite[e.g.,][]{Chamberlain2007}.
\cite{Thirouin2012} obtained in 2010 the rotational light curve of 2002~TC$_{302}$ by using the 1.5-m telescope at Sierra Nevada Observatory. That rotational light curve appeared to be single-peaked with a period of 5.41 h, although periods of 4.87 and 6.08 \si{\hour} were also possible.
Unfortunately, the amplitude of the variability was not high (only $0.04\pm0.01$ mag), and in such cases, the confidence in the determined rotation period is difficult to assess. In fact, a clear example of this problem is the dwarf planet Makemake, for which 24-h aliases can have similar or even higher spectral power than the true rotation period when using data with noise levels similar to the amplitude of the variability. For Makemake, potential periods of 11.24, 11.41, and 20.57 \si{\hour} were initially identified in the periodogram derived shortly after its discovery \citep{Ortiz2007}, but it was later found that the 11.41 \si{\hour} period was the closest 24-h alias of a preferred single-peaked period of 7.77 \si{\hour} \citep{HeinzeDeLahunta2009}. However, the most recent work on Makemake photometry, using a very large time series, indicates that a double-peaked rotational light curve is favored and the true rotation period is twice the 11.41 \si{\hour} period \citep{Hromakina2019}. Therefore, finding the correct rotation period and rotational light curve of bodies with low-amplitude variability is considerably difficult, and there is a clear bias toward detecting shorter periods, which are much easier to find than longer ones \citep{Sheppard2008}.
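The alias relation invoked above, $1/P_{\rm alias} \approx 1/P \pm 1/24\,\mathrm{h}$, can be checked numerically for the two Makemake periods quoted (a rough consistency check; the residual few-percent mismatch is expected since the published periods are rounded):

```python
p_true, p_alias = 7.77, 11.41               # hours, periods quoted for Makemake
delta = abs(1.0 / p_true - 1.0 / p_alias)   # frequency difference [1/h]
mismatch = abs(delta - 1.0 / 24.0) * 24.0   # fractional deviation from 1/24 h
print(mismatch < 0.05)  # → True
```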
Hence, it was important to analyze more data on 2002~TC$_{302}$ to try to shed light on its rotational light curve and rotation period. Within our program of physical characterization of TNOs, we had observed 2002~TC$_{302}$ in 2014 and 2016 with the 1.5-m telescope at Sierra Nevada Observatory and with the 1.2-m telescope at Calar Alto observatory in specific photometry runs. After the successful occultation we also carried out specific runs in 2018 and 2019.
The observations were performed in the same way and with the same instruments as described in section \ref{Occultation predictions}.
The methods and tools used to extract the photometry were the same as those explained in \cite{Fernandez-Valenzuela2019}. A total of 875 measurements were obtained, which can be found as on-line material.
The observing runs in 2019 were the most complete ones in terms of the number of consecutive observation nights and the coverage in number of hours per night (and also in terms of signal to noise ratio). In those runs we observed the target for 7 to 9 nights in a row, and most of the nights covered more than 8 hours and up to 10 hours on the target. From the photometry of those 2019 nights it was already clear that a rotation period of $\sim5.41$ h was not present in the data. The light curves folded to that period (or to values close to it) showed neither convincing variability nor low dispersion. Given that most of the nights had data spanning more than 8 h and no clear variability was seen, periods longer than 8 h appeared to be favored.
There is additional reasoning to favor that the preferred rotation period for 2002~TC$_{302}$ should be longer than $\sim5.4$ \si{\hour}.
According to \cite{TancrediFavre2008} the typical size for which hydrostatic equilibrium would be expected in icy bodies like the TNOs is well below that of 2002~TC$_{302}$. The equilibrium shapes can be Maclaurin or Jacobi shapes \citep{Chandrasekhar1987}.
The minimum density that a Maclaurin body with a 5.4 \si{\hour} rotation could have is $\sim$ 1150 \si{\kilo\gram\per\meter\cubed}. At this density, the Maclaurin sequence gives an axial ratio $a/c$ larger than 2.5. Hence, in order to give rise to the projected axial ratio $a/c$ of 1.18 seen in the occultation, the aspect angle would have to be extremely low, which is very unlikely; besides, such a density would be considerably high for a TNO of this size \citep[see e.g. density vs. size plots in][]{Grundy2015,BiersonandNimmo2019,Grundy2019}. Another possibility is a Jacobi body; however, given the short rotation period and the axis ratio detected from the occultation, this would require an even larger density.
For more plausible densities, well below 1150 \si{\kilo\gram\per\meter\cubed}, there is no hydrostatic equilibrium shape possible for a homogeneous body with rotation period of 5.4 \si{\hour}. There is the possibility that 2002~TC$_{302}$ has adopted an oblate spheroid shape but that the true density could be smaller than predicted by the hydrostatic equilibrium, if the system behaves like a granular medium or if the object is not homogeneous (differentiated). These are two scenarios to explain the low density of Haumea compared to the hydrostatic equilibrium one for a homogeneous body, as shown in \citet{Ortiz2017}. In other words, the same reasons that make Haumea less dense than expected could be at play for 2002~TC$_{302}$. Therefore, in summary, either 2002~TC$_{302}$ is governed by granular physics (and/or it is differentiated) or the rotation of the body is slower than 5.4 \si{\hour}. We think the latter possibility is more plausible.
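The minimum Maclaurin density discussed above can be checked numerically. The following sketch (an illustration, not the exact computation used here) evaluates the normalized Maclaurin relation $\omega^2/(\pi G \rho)$ along the sequence and inverts its maximum for a 5.41 \si{\hour} spin; it yields $\sim$1100 \si{\kilo\gram\per\meter\cubed} and $a/c\sim2.7$, consistent with the values quoted above to within rounding and the exact period adopted.

```python
import numpy as np

G = 6.674e-11  # m^3 kg^-1 s^-2

# Maclaurin relation: omega^2 / (pi G rho) as a function of eccentricity e
e = np.linspace(0.01, 0.999, 5000)
f = 2.0*np.sqrt(1.0 - e**2)*(3.0 - 2.0*e**2)*np.arcsin(e)/e**3 \
    - 6.0*(1.0 - e**2)/e**2

omega = 2.0*np.pi/(5.41*3600.0)        # spin rate for a 5.41 h period
rho_min = omega**2/(np.pi*G*f.max())   # density at the peak of the sequence
a_over_c = 1.0/np.sqrt(1.0 - e[np.argmax(f)]**2)  # axis ratio at that point
print(rho_min, a_over_c)               # ~1.1e3 kg/m^3 and a/c ~ 2.7
```

The peak of the sequence occurs near $e \approx 0.93$; any lower density makes a 5.41 \si{\hour} Maclaurin spin impossible.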
On the other hand, as already mentioned, it is well known that the scientific literature is biased toward short periods \citep{Sheppard2008} and in order to look for long rotation periods, extensive datasets are needed.
Hence, we combined all our runs in order to look for long periods in the data. To combine all our runs from 2014 to 2019, we did not use absolute photometry, because it is often extremely difficult to achieve 0.02 mag accuracy in absolute calibrations using standard Johnson or Sloan filters, and we would still have to correct for solar phase angle effects. Absolute calibrations are even more problematic with unfiltered observations, which was our case (chosen to achieve a high signal to noise ratio). For all the above, the use of absolute calibrations results in jumps of several hundredths of a magnitude from run to run, which is unacceptable for deriving low-amplitude light curves. Hence, we normalized the fluxes of each campaign to the mean value of the run; in other words, the fluxes were divided by the mean flux of the run. For long runs this method should work properly, because the mean flux in the run should be similar to the mean flux averaged over a rotation cycle, but for short runs or for very long rotation periods this method can introduce small shifts in the photometry and spurious frequencies in the periodograms. In our case, all runs lasted more than 3 days, so combining the runs by normalizing to the mean value is much better than using absolute calibrations, but we cannot completely rule out that some such effects are present.
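The per-run normalization described above amounts to dividing each flux by the mean flux of its observing run; a minimal sketch with illustrative (not real) measurements:

```python
import numpy as np

# illustrative measurements: one flux per image, tagged with its run label
run = np.array(["2014A", "2014A", "2019B", "2019B", "2019B"])
flux = np.array([102.0, 98.0, 51.0, 49.0, 50.0])

rel_flux = np.empty_like(flux)
for r in np.unique(run):
    sel = run == r
    rel_flux[sel] = flux[sel]/flux[sel].mean()  # normalize within the run
# runs with different absolute calibrations now share a common relative scale
```

After this step, the combined relative fluxes from all campaigns can be searched for periodicities without calibration jumps between runs.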
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure6Ortizetal2020.png}
\caption{Phase Dispersion Minimization results for the entire photometry data set from 2014 to 2019. The horizontal line indicates where the minimum value is obtained, which corresponds to a period of $\sim$56.1 h.}
\label{PDMphotometry}
\end{figure}
Once we combined all the datasets we analyzed all the photometry (with times corrected for light travel time) from 2014 to 2019 using the Phase Dispersion Minimization (PDM) technique. The PDM technique has the advantage over other period-finding techniques that a period is found independently of the shape of the light curve, and it is therefore more robust in finding double-peaked light curves caused by shape effects. We found that a period of $\sim$56.1 h gives a clear minimum (see Figure \ref{PDMphotometry}). This period corresponds to a shape-induced rotational light curve because a relative minimum at $\sim 28$ h (half the best period) is also seen in the PDM plot. In the plot, there is also a sharp minimum at $\sim64$ h, but it is considerably less pronounced, so our preferred period is $\sim$56.1 h. From the analysis with the PDM technique it appears that we have a shape-induced rotational light curve of low amplitude with a period of $\sim$56.1 h. The peak-to-valley amplitude of a fourth-order Fourier fit to the light curve folded to 56.1 h is $0.06\pm0.01$ mag (see Figure \ref{LightCurve}). This period is consistent with the period found in the astrometry residuals (explained in the next section) and would therefore be consistent with the orbital period of a potential satellite whose spin and orbit are locked. However, it is possible that two or more photometric periods (from the satellite and from the primary) are superimposed in the light curve, making the analysis of this low-amplitude light curve even more complicated and the identification of a single period difficult.
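The PDM statistic minimizes the pooled dispersion of the phase-folded data; a simplified sketch with synthetic data (illustrative only, not the actual pipeline) recovers a 56.1 h period of low amplitude:

```python
import numpy as np

def pdm_theta(t, y, period, nbins=12):
    """PDM statistic: pooled within-bin variance of the phase-folded
    data divided by the total variance (low theta = good period)."""
    phase = (t/period) % 1.0
    num, dof = 0.0, 0
    for k in range(nbins):
        sel = y[(phase >= k/nbins) & (phase < (k + 1)/nbins)]
        if sel.size > 1:
            num += (sel.size - 1)*np.var(sel, ddof=1)
            dof += sel.size - 1
    return (num/dof)/np.var(y, ddof=1)

# synthetic low-amplitude light curve with a 56.1 h period
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 2000.0, 800))            # hours
y = 0.03*np.sin(2*np.pi*t/56.1) + 0.01*rng.standard_normal(t.size)

periods = np.linspace(40.0, 80.0, 2000)
best = periods[np.argmin([pdm_theta(t, y, p) for p in periods])]
```

Because the statistic only measures dispersion within phase bins, it makes no assumption about the light curve being sinusoidal, which is why it handles double-peaked, shape-induced curves well.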
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figure7lightcurve_corrected_c.png}
\caption{Photometry measurements of 2002 TC$_{302}$ folded to a period of 56.0642 h. The small dots correspond to the single data points, whereas the asterisks represent the median values binned in phase bins of 0.083. The error bars show the dispersion of the data in each bin. The solid line represents a smoothed curve using a width of 80 points and the dashed line shows a fourth-order Fourier fit to the data. The light-travel-corrected epoch for zero phase is JD 2456895.252777.}
\label{LightCurve}
\end{figure}
\section{Time series astrometry}
\label{Astrometry}
The same large image dataset that was obtained for photometry purposes was also analyzed astrometrically with the same tools and techniques described in section \ref{Occultation predictions}. The astrometry was derived with respect to the Gaia DR2 catalog because it was already available when we started this analysis (but not at the time when the occultation predictions were made). The complete astrometry dataset from 2014 to 2019 is presented as online material. An orbital fit to all the data was carried out and the residuals to the orbit were obtained in both Right Ascension and Declination.
The standard deviation of the residuals is 0.06 arcsec. This is somewhat larger than expected given the signal to noise ratio of the observations, which could mean that there are systematic errors or that there are real short-term or long-term oscillations in the data. If the oscillations are real, they could be tied to the presence of a satellite.
The epochs of the residuals were corrected for light travel time and the residuals were analyzed using the Lomb-Scargle periodogram technique, as done for Orcus to reveal the oscillation caused by its satellite Vanth \citep{Ortiz2011}. The periodogram of the declination residuals, depicted in Figure \ref{Lomb-residuals}, shows three main peaks at 0.4239, 0.4265, and 0.4408 cycles/day (56.61, 56.27, and 54.44 h, respectively). The spectral powers of these peaks are well above any others and their significance levels are well above 99.9\%. The peak at 0.4239 cycles/day has a higher spectral power than the other two, but the other two cannot be ruled out, so it is clear that there is a periodic signal with a frequency in the range 0.4239 to 0.4408 cycles/day, or periods between 54.4 and 56.6 h. In right ascension (RA) the periodogram of the residuals (Figure \ref{Lomb-residuals-RA}) shows peaks at 0.4233 and 0.4260 cycles/day, although the highest peak in the periodogram corresponds to a frequency of 0.05823 cycles/day, or a period of 412.1 h (17.17 days). If such a 17.17-day oscillation were caused by a satellite, the satellite would have been easily spotted in Hubble Space Telescope (HST) images (see discussion), so we tend to believe that it is an artifact, because the residuals in RA showed larger scatter and might be more affected by differential chromatic refraction than those in declination.
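As an illustration of this kind of periodogram analysis (with synthetic data, not the actual residuals), scipy's Lomb-Scargle implementation recovers a low-amplitude oscillation near 0.424 cycles/day from unevenly sampled points:

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0.0, 120.0, 300))         # days, uneven sampling
f_true = 0.4239                                    # cycles/day
y = 0.009*np.sin(2*np.pi*f_true*t) + 0.003*rng.standard_normal(t.size)

freqs = np.linspace(0.3, 0.6, 3000)                # cycles/day
# scipy expects angular frequencies and a zero-mean signal
power = lombscargle(t, y - y.mean(), 2*np.pi*freqs)
f_peak = freqs[np.argmax(power)]                   # close to f_true
```

The Lomb-Scargle method is appropriate here because the residuals are irregularly spaced in time, which rules out a plain FFT.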
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figure8periodogram_astrometry_dec_c.png}
\caption{Lomb-Scargle periodogram of the Declination residuals of an orbital fit to the astrometry measurements of 2002 TC$_{302}$. }
\label{Lomb-residuals}
\end{figure}
It therefore appears that the residuals have a periodicity in the range 0.4239 to 0.4408 cycles/day. Although the exact frequency/period is difficult to determine within this interval, our preferred one is 0.4239 cycles/day (56.61 h), which is very similar to the period found with the Phase Dispersion Minimization (PDM) technique in the photometric datasets (see section \ref{Size and shape}). The peak-to-valley amplitude of a sinusoidal fit to the RA residuals folded to the 56.61 \si{\hour} period is 0.017 $\pm$ 0.006 arcsec. The peak-to-valley amplitude of a sinusoidal fit to the declination residuals phased to the 56.61 \si{\hour} period is 0.009 $\pm$ 0.003 arcsec.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{Figure8periodogram_astrometry_RA_c.png}
\caption{Lomb-Scargle periodogram of the Right Ascension residuals of an orbital fit to the astrometry measurements of 2002 TC$_{302}$. }
\label{Lomb-residuals-RA}
\end{figure}
All this phenomenology is consistent with the idea that a satellite could be causing oscillations in the position of the photocenter. This is analyzed in some detail in the discussion (section \ref{Discussion}).
In principle, the study of the phases of the RA and Declination residuals (when folded to the orbital period) could give an idea of the orbit orientation of the satellite. If the orbit is more face-on, the RA and Declination residuals should be out of phase with one another. For instance, a circular, clockwise, face-on orbit would first show the maximum of the Declination residuals, followed by the minimum of the RA residuals; after that, the minimum of the Declination residuals would follow, and finally the maximum of the RA residuals would be reached. If the orbit is more edge-on, the RA and Declination residuals would be more in phase. Unfortunately, in our case we do not know the exact orbital period, and the phases change dramatically depending on the period.
Note that the 56.61 \si{\hour} period is close, but not an exact match to the $\sim$ 56.1 \si{\hour} period in the photometry. We would expect the two periods to be identical if the putative satellite has its spin locked to its orbit.
\section{Discussion}
\label{Discussion}
Not counting the Pluto-Charon system, this is the best observed occultation by a TNO in terms of the number of chords published in the literature thus far. The occultation by the dwarf planet Haumea had set a record the year before this occultation \citep{Ortiz2017}. More recently, a stellar occultation by the TNO Huya was observed and the number of chords was even larger than for 2002~TC$_{302}$, but the analysis of the Huya results is still ongoing and only preliminary results have been presented in \cite{Santos-Sanz2019}. The large number of chords obtained on 2002~TC$_{302}$ has allowed us to determine its size and shape accurately.
From Herschel and Spitzer thermal measurements, \cite{Fornasier2013} derived an equivalent-area diameter of $584.1^{+105}_{-88}$ \si{\kilo\meter} for 2002~TC$_{302}$, with uncertainties at the 1 $\sigma$ confidence level. This is around 84 \si{\kilo\meter} larger than the value derived here from the occultation, $(543\times460)^{1/2}=499.8$ \si{\kilo\meter}. Even though the values are compatible within the large 88 \si{\kilo\meter} error bar of the thermal measurements (which is only at the 1 $\sigma$ level), it is pertinent to note that thermal models tend to underestimate the real effective diameters of ellipsoidal bodies, especially those with high obliquity, as pointed out by \cite{Brown1985} and as clearly demonstrated for Haumea \citep{Ortiz2017}. In other words, we expected the occultation to give a larger equivalent-area diameter than the thermal models, but found the opposite.
Therefore, the difference of $\sim84$ \si{\kilo\meter} could be even more significant.
The difference can potentially be explained by the existence of a satellite. Note that the equivalent-area diameter $D$ of a binary satisfies $D^2 = (543 \times 460) + D_s^2$, where $D_s$ is the equivalent-area diameter of the putative satellite. Setting $D = 584$ \si{\kilo\meter} and solving for $D_s$ we come up with $D_s=302$ \si{\kilo\meter}. An upper limit to the diameter of the putative satellite can be obtained from $(543\times460) + (D_s^{max})^2 = (584+105)^2$. The resulting maximum diameter for the satellite, $D_s^{max}$, would be 474 \si{\kilo\meter}, which is almost as large as the main body and probably too large to have gone undetected in the occultation. If the albedo of the potential satellite were smaller than that of the primary, a somewhat smaller satellite diameter would be possible while still giving the thermal output modeled in \cite{Fornasier2013}. For that reason we give a plausible size range of 100 \si{\kilo\meter} to 300 \si{\kilo\meter} for the potential satellite.
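The arithmetic above can be reproduced directly; a quick check using the values from the text:

```python
import math

d_thermal = 584.0          # km, radiometric equivalent-area diameter
a, b = 543.0, 460.0        # km, occultation ellipse axes
d_occ = math.sqrt(a*b)     # ~499.8 km, occultation equivalent-area diameter

d_sat = math.sqrt(d_thermal**2 - a*b)                 # ~302 km
d_sat_max = math.sqrt((d_thermal + 105.0)**2 - a*b)   # ~474 km
```

The quadratic combination reflects that the thermal measurement senses the total projected area of primary plus satellite.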
It is also worth mentioning that the 88 km error bar of the thermal diameter reported in \cite{Fornasier2013} is strikingly large compared to other error bars determined for TNOs of similar size. When looking at the fit to the spectral energy distribution of 2002~TC$_{302}$ in that paper (their figure 15), it turns out that the fit is poor.
According to Kiss (2019, priv. comm.), the Herschel PACS fluxes used in \cite{Fornasier2013} came from the combination of observations at two epochs, but a close look at the Herschel PACS images of the second epoch revealed contamination from a bright source. If only the uncontaminated epoch is used to derive the fluxes, the fit improves considerably, the error bars decrease, and the fitted effective diameter is somewhat larger. With this information, the difference between the thermal diameter and the occultation diameter is probably significant at more than 2$\sigma$.
The existence of a satellite of at least around 100 \si{\kilo\meter} in size and up to $\sim$300 \si{\kilo\meter}, close to the body, might explain at least part of the difference in size with the thermal measurements and could also explain the oscillations in the high-accuracy time series astrometry as well as the photometry variability.
Regarding the photometric variability, if 2002~TC$_{302}$ has a satellite larger than 100 \si{\kilo\meter} with an albedo similar to that of the main body, the satellite would contribute at least $(100/499.8)^2 \approx 4\%$ of the total flux of the system, or around 0.05 mag. A satellite with a diameter of 200 \si{\kilo\meter} would contribute around 0.16 mag, and one of 300 \si{\kilo\meter} around 0.26 mag, so the rotational modulation from the satellite would contribute below 0.05, 0.16, and 0.26 mag, respectively, and might explain the low amplitude of the light curve. Hence, we believe that an unresolved satellite could account for a large part of the short-term variability observed. Given that a 200 \si{\kilo\meter} satellite with 30\% variability due to shape irregularities would produce oscillations of $0.3\times0.16$, or around 0.05 mag, which is not far from the 0.06 mag variability observed, the scenario of a $\sim200$ \si{\kilo\meter} satellite producing the observed light curve looks coherent. Note that TNOs in the 200 \si{\kilo\meter} size range may already be too small to have hydrostatic-equilibrium-dominated shapes, and there is some evidence that small TNOs have higher light curve amplitudes than larger TNOs; in fact, there is a correlation of light curve amplitude with absolute magnitude \citep{Sheppard2008}.
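These flux fractions follow from the ratio of projected areas (assuming, as above, equal albedos for primary and satellite); a short check of the numbers used in the text:

```python
d_primary = 499.8  # km, equivalent-area diameter of the primary
for d_sat in (100.0, 200.0, 300.0):
    frac = (d_sat/d_primary)**2     # satellite's share of area, hence flux
    print(d_sat, round(frac, 2))    # fractions of roughly 0.04, 0.16, 0.36

# a 200 km satellite with ~30% shape-driven variability modulates the
# total flux by roughly 0.3 * 0.16 ~ 0.05, i.e. ~0.05 mag for small amplitudes
modulation = 0.3*(200.0/d_primary)**2
```

For small amplitudes, a fractional flux change translates almost one-to-one into magnitudes, which is why the 0.05 estimate can be compared directly with the observed 0.06 mag.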
A potential scenario to explain our photometric observations would be that the putative satellite gives rise to the $\sim$56.1 \si{\hour} period whereas the main body would have been slowed down by the tidal interaction with the satellite and be an oblate spheroid with low variability, which would be difficult to detect and disentangle from the longer period. The periodicity of $\sim$56.1 h seems consistent with the orbital period of an unresolved satellite from the astrometry residuals, although the two periods do not exactly match. Note that the light curve would be induced by the shape of the satellite, not by eclipses.
According to the different expressions for the tidal locking timescales applied to TNOs with satellites in \cite{Thirouin2014} and \cite{Fernandez-Valenzuela2019}, the putative satellite would have synchronized its orbit and spin in times of the order of one to one hundred million years, assuming typical values for the tidal dissipation parameter Q and for the rigidity of ice, and assuming a satellite radius of 100 \si{\kilo\meter} and a density of 600 \si{\kilo\gram\per\meter\cubed}. This timescale is orders of magnitude shorter than the age of the solar system. Hence, it appears likely that the putative satellite would be tidally locked, although a caveat must be made in the sense that most tidal timescale expressions rely on oversimplifications of the complex physics of the tidal interaction \citep{EfroimskyWilliams2009}. Therefore we cannot rule out that the rotation period of the potential satellite differs from the orbital period.
Regarding the rotation period of the primary, as mentioned before, we could not firmly determine it. It appears possible that the primary rotation could also be tidally locked with the satellite orbital period, but in that case, a large body such as 2002~TC$_{302}$, presumably in hydrostatic equilibrium or close to it, spinning at $\sim$56.1 h would have an oblate shape with axial ratios close to 1. This is far from the observed 1.18 value for the projected axial ratio seen in the occultation. Hence, a faster spin period than $\sim$56.1 h for the primary seems to be required. If the primary rotated at half that period the axial ratio of a Maclaurin body with a density around 800 kg/m$^3$ would be 1.02, still considerably below 1.18, so this possibility also seems incompatible with the occultation observations and a shorter rotation period for the primary would be favored.
The putative satellite would have to be very close to the main body so that HST observations did not reveal it. There are only two images of 2002~TC$_{302}$ in the HST archive, and no satellite has been reported from them. HST cannot resolve binary objects separated by less than several tens of mas, which is the diffraction limit of the telescope. Therefore, any satellite orbiting at less than $\sim 2000$ \si{\kilo\meter} from the main body would be challenging to detect.
Assuming that the density of 2002~TC$_{302}$ is around 800 \si{\kilo\gram\per\meter\cubed}, which is the density expected for a TNO of this size \cite[according to the plots shown in, e.g.,][]{Grundy2015,BiersonandNimmo2019,Grundy2019}, the mass of the central body can be estimated. This allows us to compute the distance at which a putative satellite would have an orbital period of $\sim$56.1 \si{\hour}. This distance would be $\sim$ 1780 \si{\kilo\meter}.
Given that 2002~TC$_{302}$ is currently at 43 au, a semimajor axis of 1780 \si{\kilo\meter} subtends 0.058 arcsec. This is the maximum angular separation and would already be challenging to resolve with HST. During most of the orbit the angular separation would be well below the resolution limit of HST, but it may be possible to resolve the pair using the Near Infrared Camera on board the James Webb Space Telescope, which has a pixel scale of 0.031 arcsec/pixel at short wavelengths (0.6 to 2.3 \si{\micro\meter}).
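The orbital distance quoted above follows from Kepler's third law; a minimal sketch (the result scales as the cube root of the assumed mass, i.e. it is sensitive to the adopted diameter and density; with a density of 800 \si{\kilo\gram\per\meter\cubed}, a value near the quoted $\sim$1780 \si{\kilo\meter} is recovered for an effective diameter close to the radiometric 584 \si{\kilo\meter}):

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2

def orbit_radius_km(diameter_km, rho, period_h):
    """Circular-orbit radius for a test-mass satellite around a
    homogeneous sphere of the given diameter (km) and density (kg/m^3)."""
    r = 0.5*diameter_km*1e3
    mass = rho*(4.0/3.0)*math.pi*r**3
    t = period_h*3600.0
    return (G*mass*t**2/(4.0*math.pi**2))**(1.0/3.0)/1e3

print(orbit_radius_km(584.0, 800.0, 56.1))   # ~1790 km
```

Including the satellite's own mass would increase the distance only by a few percent, since the separation scales as the cube root of the total mass.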
If the putative satellite has an effective equivalent-area diameter of 200 \si{\kilo\meter} and that of the main body is 499.8 \si{\kilo\meter}, the ratio of areas and therefore the ratio of brightness is (200/499.8)$^2$ = 0.16. Hence, the photocenter would lie at a distance of $0.058 \times 0.16 = 0.009$ arcsec from the central body and this means that the total oscillation would be twice that value, or 0.018 arcsec. This is close to the fitted amplitude of the oscillation of the astrometric residuals. A small correction must be applied because the photocenter rotates around the barycenter and the main body is not exactly at the barycenter, but 0.004 arcsec away from it. Therefore, the expected amplitude of the residuals would be 0.014 arcsec, which is even closer to the observational results. This derivation assumed equal albedo for the satellite and the main body. The reality may be somewhat different, so the needed diameter for the satellite may not be exactly 200 \si{\kilo\meter}, depending on the exact value of its geometric albedo.
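The photocenter estimate above, step by step (an arithmetic check of the in-text values, again assuming equal albedos for satellite and primary):

```python
sep = 0.058                # arcsec, maximum angular separation
frac = (200.0/499.8)**2    # satellite flux fraction, ~0.16
photo = sep*frac           # photocenter offset from the primary, ~0.009 arcsec
swing = 2.0*photo          # full peak-to-valley oscillation, ~0.018 arcsec
bary = 0.004               # arcsec, primary's own offset from the barycenter
expected = swing - bary    # ~0.014 arcsec, near the fitted residual amplitude
```

A different satellite albedo would change `frac`, and hence the satellite diameter needed to reproduce the observed amplitude, as noted in the text.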
A close satellite would also have been capable of slowing down the rotation of 2002~TC$_{302}$ through tidal interaction. This would also explain why 2002~TC$_{302}$ could have adopted a nearly oblate shape instead of a more triaxial shape, as is the case for 2003~VS$_2$, which is a very similar body to 2002~TC$_{302}$ in terms of size \citep[its triaxial shape has been determined using the same techniques presented in this work;][]{Benedetti2019}. A slow rotation of $\sim20$ \si{\hour} could be explained by the tidal interaction with, for instance, a $\sim 200$ \si{\kilo\meter} satellite orbiting at $\sim$ 1800 \si{\kilo\meter} from the primary.
In this regard, it is pertinent to analyze the system in terms of the specific angular momentum (angular momentum divided by $(G M^3 R)^{1/2}$ where $G$ is the gravitational constant, $M$ the mass and $R$ the radius of the body).
The specific total angular momentum ($H$) of a system formed by a primary and a satellite
(understood as the sum of the orbital angular momentum plus the angular momentum of the primary and that of the satellite)
was computed according to the following equations from \cite{DescampsMarchis2008}:
\begin{multline}
H =\frac{q}{(1+q)^{\frac{13}{6}}}\sqrt{\frac{a(1-e^{2})}{R_{p}}} + \frac{2}{5} \frac{\lambda_{p}}{(1+q)^{\frac{5}{3}}} \Omega + \\
\ \frac{2}{5} \lambda_{s} \frac{q^{\frac{5}{3}}}{(1+q)^{\frac{7}{6}}}\left (\frac{R_{p}}{a} \right)^{\frac{3}{2}}
\end{multline}
where {\it q} is the secondary-to-primary mass ratio, {\it a} the semimajor axis, {\it e} the eccentricity, and {\it R$_{p}$} the primary radius.
The {\it $\Omega$} parameter is the normalized spin rate expressed as:
\begin{equation}
\Omega = \frac{\omega_{p}}{\omega_{c} }
\end{equation}
where $\omega_{p}$ is the primary rotation rate and $\omega_{c}$ the critical spin rate for a spherical body:
\begin{equation}
\omega_{c} = \sqrt{\frac{GM_{p}}{{R_{p}^3}}};
\end{equation}
here {\it G} is the gravitational constant and {\it M$_{p}$} the mass of the primary.
Assuming a triaxial primary with semi-axes $a_{0} > a_{1} > a_{2}$, the $\lambda_{p}$ shape parameter is
\begin{equation}
\lambda_{p}= \frac{1+\beta^{2}}{2(\alpha\beta)^\frac{2}{3}}
\end{equation}
where $\alpha$ = $a_{2}/a_{0}$ and $\beta$ = $a_{1}/a_{0}$. \\
In this work, we considered the primary to be nearly spherical ($\lambda_{p}=1.0$) and the satellite to be somewhat nonspherical, with $\lambda_{s}=1.2$.
Using those expressions, the specific angular momentum of
a body of the size of 2002~TC$_{302}$ (with an expected density around 800 \si{\kilo\gram\per\meter\cubed}) spinning at a primordial $\sim$ 7.7 \si{\hour} period would be $\sim$ 0.2. The primordial rotation period is taken from the Maxwellian fit to the rotation periods of TNOs presented in \cite{Duffard2009}.
On the other hand, a body with a rotation period of 20 \si{\hour} would have a much smaller specific angular momentum, but if 2002~TC$_{302}$ rotates at 20 \si{\hour} and has, for instance, a tidally locked satellite with a mass ratio of 0.065 to the primary orbiting at 1780 \si{\kilo\meter} from the main body, the specific angular momentum would be 0.21, meaning that such a satellite could have slowed down 2002~TC$_{302}$ from a primordial spin to a rotation period of $\sim20$ \si{\hour} while conserving the total angular momentum of the system. Note that a body with a size around 200 \si{\kilo\meter} and with the same density as the central body would have a mass ratio of 0.065 (which is the $q$ parameter that enters the specific angular momentum expression). If the density of the satellite is somewhat smaller than that of the primary, a slightly larger size would be needed for the satellite to give the required angular momentum. Other satellite configurations with smaller mass ratios are possible, and would have slowed down the primary to a period shorter than 20 \si{\hour}. For instance, a satellite with a 145 \si{\kilo\meter} diameter would give a mass ratio around 0.024 and would have slowed down the primary to $\sim$ 10 \si{\hour} if orbiting at 1780 \si{\kilo\meter}. In summary, a satellite with a size ranging from $\sim$ 145 to $\sim$ 200 \si{\kilo\meter} orbiting at $\sim$ 1800 \si{\kilo\meter} seems to offer a good overall fit to the phenomenology observed.
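The angular momentum budget can be evaluated directly from the expressions above; a sketch with the parameter values discussed in the text reproduces the quoted numbers to within rounding:

```python
import math

G = 6.674e-11  # m^3 kg^-1 s^-2

def specific_H(q, a_over_rp, omega_norm, lam_p=1.0, lam_s=1.2, e=0.0):
    """Normalized total angular momentum of a binary (Descamps & Marchis 2008):
    orbital term + primary spin term + satellite spin term."""
    orb = q/(1.0 + q)**(13.0/6.0)*math.sqrt(a_over_rp*(1.0 - e**2))
    spin_p = 0.4*lam_p/(1.0 + q)**(5.0/3.0)*omega_norm
    spin_s = 0.4*lam_s*q**(5.0/3.0)/(1.0 + q)**(7.0/6.0)*a_over_rp**-1.5
    return orb + spin_p + spin_s

rho, r_p = 800.0, 249.9e3                        # kg/m^3, m
omega_c = math.sqrt(G*rho*4.0*math.pi/3.0)       # critical spin rate
# primordial single body spinning at 7.7 h:
h0 = specific_H(0.0, 1.0, (2*math.pi/(7.7*3600.0))/omega_c)
# 20 h primary plus a q = 0.065 satellite orbiting at 1780 km:
h1 = specific_H(0.065, 1780e3/r_p, (2*math.pi/(20.0*3600.0))/omega_c)
print(round(h0, 2), round(h1, 2))                # ~0.2 in both configurations
```

The near-equality of the two values is the point of the argument: the orbital angular momentum of the satellite can absorb the spin lost by the primary.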
Using the projected area of the occultation ellipse and the absolute magnitude of 2002~TC$_{302}$ \citep[H$_V$=4.23;][]{Tegler2016}, we can derive the geometric albedo as done in, e.g., \cite{Ortiz2017} for Haumea. The resulting value is 0.147 for 2002~TC$_{302}$. Comparing the geometric albedo of 2002~TC$_{302}$ to those of similar-size TNOs for which stellar occultations have been observed (so that the size, and hence the geometric albedo, is accurately known), it turns out that 2002~TC$_{302}$ has a higher geometric albedo than 2003~AZ$_{84}$ \citep[after accounting for its known satellite, as shown in][]{Ortiz2020}, a slightly higher albedo than 2003~VS$_2$ \citep{Benedetti2019}, and also a slightly higher albedo than G!k\'un$||$'h\`omd\'im\`a (2007~UK$_{126}$) \citep[also accounting for its known satellite,][]{Ortiz2020}. The geometric albedo of 2002~TC$_{302}$ is also higher than that of Quaoar, which is considerably larger than 2002~TC$_{302}$. The somewhat higher albedo than expected might also be a hint that 2002~TC$_{302}$ could have a large satellite, because in that case we would be using the H$_V$ corresponding to the combination of the satellite and the main body, whereas we should use a larger H$_V$ value, which would decrease the computed geometric albedo. A satellite of 200 \si{\kilo\meter} with an albedo similar to the primary would contribute around 16\% of the brightness, so the geometric albedo would be around 16\% smaller, or around 0.127. This value is closer to the geometric albedo determined from Herschel and Spitzer measurements \citep{Fornasier2013} and to those of similar-size TNOs. Nevertheless, we cannot expect that the geometric albedo of TNOs depends only on size.
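The geometric albedo follows from the standard diameter-magnitude relation; a sketch (the exact constant and H value adopted can shift the result by a few thousandths, so this lands near, not exactly at, the quoted 0.147):

```python
def geometric_albedo(diameter_km, h_mag, c=1329.0):
    """p_V from D = c * 10^(-H/5) / sqrt(p_V), with c ~ 1329 km for V band."""
    return (c/diameter_km)**2*10.0**(-0.4*h_mag)

p = geometric_albedo(499.8, 4.23)   # ~0.14-0.15
p_corr = p/1.16                     # if a satellite adds ~16% of the flux
```

Dividing by 1.16 removes the satellite's assumed flux contribution, which is equivalent to using a fainter H$_V$ for the primary alone.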
Different formation processes in different areas of the solar nebula may imply different surface compositions, and subsequent dynamical and evolutionary processes, including collisions, may also have important effects on the final albedo of a TNO. Therefore, the albedo argument is just another hint and cannot be considered conclusive evidence. According to \cite{Barkume2008}, 2002~TC$_{302}$ has water ice spectroscopically detected, and since water ice is highly reflective, it makes sense that 2002~TC$_{302}$ could be somewhat brighter than the average (non-water-ice-bearing) TNO; but note that 2003~VS$_2$, 2007~UK$_{126}$, 2003~AZ$_{84}$ and Quaoar also show indications of water ice in their spectra \citep{Barucci2011}, whereas their geometric albedos are lower than that of 2002~TC$_{302}$.
In the context of the Uranian satellites it is well known that they show strong water ice bands, and have a variety of moderately low albedos, starting just a little higher than that of 2002~TC$_{302}$ \citep[e.g.][]{Buratti1991} that depend on the mixture of water ice with pollutants. There is also abundant laboratory and modeling literature that describes how the visible albedo and near-infrared absorption band depths in granular water ice are affected by admixture of dark pollutants \citep[e.g.][]{Clark1984}.
While there are clear indications for a satellite, none of the 12 occultation light curves detected the putative satellite. Given that the inter-spacing of the chords is smaller than 200 \si{\kilo\meter}, the putative satellite should have been detected if it had been sufficiently close to the main body at the time of the occultation. But the sampling of the chords is not good enough away from the main body in the cross-track direction, so a satellite could have been easily missed. We know that Huya, a TNO of similar size to 2002~TC$_{302}$, has a large and close satellite of 213$\pm$30 \si{\kilo\meter} in diameter, as estimated from unresolved thermal observations \citep{Fornasier2013}, but this satellite was not detected in the stellar occultation preliminarily reported in \cite{Santos-Sanz2019}, despite a large number of chords. This clearly illustrates that the putative satellite of 2002~TC$_{302}$ could have easily gone undetected.
Besides the lack of a satellite detection, no ring features or dust structures were detected during the occultation either. From the occultation dataset with the least scatter, the upper limit to the width of a Chariklo-like dense ring is 7 \si{\kilo\meter} at 3$\sigma$. This means that a ring similar to that observed at Chariklo would have been missed, but a ring similar in width to that of Haumea would have been detectable. An intermediate ring between those of Chariklo and Haumea, in terms of width, would also have been detected if it existed.
\section{Conclusions}
\label{conclusion}
2002~TC$_{302}$ caused a stellar occultation on 28$^{th}$ January 2018 from which its projected size and shape at the time of the occultation could be derived with high accuracy. Not counting the Pluto-Charon system, this is the best observed occultation by a TNO in terms of the number of chords published in the scientific literature thus far. The elliptical fit to the 12 chords has a major axis of $543\pm18$ \si{\kilo\meter} and a minor axis of $460\pm11$ \si{\kilo\meter}. This implies an equivalent-area diameter of 499.8 \si{\kilo\meter} and a geometric albedo of 0.147 (for an absolute magnitude of 4.23). The smaller equivalent diameter than the radiometrically derived value from Herschel and Spitzer observations, together with the higher albedo, could be hints of the presence of a large satellite close to 2002~TC$_{302}$. There are other hints from ground-based observations that point in that direction.
From the sharp disappearance and reappearance of the star in the occultation, we can conclude that 2002~TC$_{302}$ lacks a global atmosphere, with an upper limit of the order of 100 nbar. No ring features or dust structures were detected close to the nucleus, although the data lacked the needed quality to discover a dense ring of the width of Chariklo's. However, a dense ring of the width of that of Haumea would have been easily detected. An intermediate ring in terms of width would also have been detected if present.
\begin{acknowledgements}
This research was partially based on data taken at the Sierra Nevada Observatory, which is operated by the Instituto de Astrof\'isica de
Andaluc\'ia (CSIC). This research is also partially based on data taken at the German-Spanish Calar Alto observatory, which is jointly
operated by the Max-Planck-Institut für Astronomie and the Instituto de Astrof\'isica de Andaluc\'ia (CSIC). Part of the results were also based on observations taken at the 1.6m telescope at Pico dos Dias Observatory. This research was partially based on observations collected at the Schmidt telescope 67/92 cm (Asiago, Italy) of the INAF - Osservatorio Astronomico di Padova.
Funding from Spanish projects
AYA2014-56637-C2-1-P, AYA2017-89637-R, from FEDER, and Proyecto de Excelencia de la Junta de Andaluc\'ia 2012-FQM1776 is acknowledged.
We would like to acknowledge financial support by the Spanish grant AYA-RTI2018-098657-JI00 ``LEO-SBNAF'' (MCIU/AEI/FEDER, UE) and the financial support from the State Agency for Research of the Spanish MCIU through the ``Center of Excellence Severo Ochoa'' award for the Instituto de Astrof\'isica de Andaluc\'ia (SEV- 2017-0709). Part of the research received funding from the European Union's Horizon 2020 Research and Innovation Programme, under grant agreement no. 687378 and from the ERC programme under Grant Agreement no. 669416 Lucky Star.
The following authors acknowledge the respective CNPq grants: FB-R 309578/2017-5; RV-M 304544/2017-5, 401903/2016-8; J.I.B.C.
308150/2016-3; MA 427700/2018-3, 310683/2017-3, 473002/2013-2. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001 and the National Institute of Science and Technology of the e-Universe project (INCT do e-Universo, CNPq grant 465376/2014-2). GBR acknowledges CAPES-FAPERJ/PAPDRJ grant E26/203.173/2016, MA FAPERJ grant
E-26/111.488/2013 and ARGJr FAPESP grant 2018/11239-8.
E. F-V. acknowledges support from the 2017 Preeminent Postdoctoral Program (P$^3$) at UCF.
C.K., R.S., A.F-T., and G.M. have been supported by the K-125015 and GINOP-2.3.2-15-2016-00003 grants of the Hungarian National Research, Development and Innovation Office (NKFIH), Hungary.
G.M. was also supported by the Hungarian National Research, Development and Innovation Office (NKFIH) grant PD-128360. R.K. and T.P. were supported by the VEGA 2/0031/18 grant.
We acknowledge the use of Occult software by D. Herald.
\end{acknowledgements}
\bibliographystyle{aa}
1,477,468,750,165 | arxiv | \section{Introduction}
Thermoelectric effects in nanostructures \cite{Pekola-reviews,Casati-review,Sothmann-Sanchez-Jordan-review,Haupt-review} and molecules \cite{Paulsson-Datta2003,Reddy2007}
are of great current interest. They might enable
efficient electricity generation and refrigeration \cite{books,DiSalvo-review,Shakouri-reviews}, and could also lead to new types of sub-Kelvin refrigeration,
cooling electrons in solid-state samples to lower temperatures than with conventional cryostats
\cite{Pekola-reviews},
or cooling fermionic atomic gases \cite{Grenier2012,Brantut-Grenier-et-al2013,Grenier2014}.
However, they are also extremely interesting at the level of fundamental physics,
since they allow one to construct the simplest possible quantum machine that converts heat flows into useful work (electrical power in this case) or vice versa.
This makes them an ideal case study for {\it quantum thermodynamics},
i.e. the thermodynamics of quantum systems \cite{QuantumThermodyn-book}.
\begin{figure}
\includegraphics[width=\columnwidth]{figure1.pdf}
\caption{\label{Fig:thermocouple}
(a) The simplest heat-engine is a thermocouple circuit made of two thermoelectrics
(filled and open circles).
The filled and open circles are quantum systems with opposite thermoelectric responses,
an example could be that in (b).
For a heat-engine, we assume $T_L > T_R$, so heat flows as shown,
generating a current $I$, which provides power to a load (battery charger, motor, etc.) that converts
the
electrical power into some other form of work.
The same thermocouple circuit can act as a refrigerator if one replaces the load with a power supply that generates the current $I$. This induces the heat flow out of Reservoir $L$, which thereby
refrigerates Reservoir $L$, so $T_L < T_R$.
Note that in both cases the circuit works because the two
thermoelectrics are electrically in series but thermally in parallel.
In (b), $N$ indicates the number of transverse modes in the narrowest part of the quantum system.
}
\end{figure}
The simplest heat-engine is a thermocouple circuit, as shown in Fig.~\ref{Fig:thermocouple}.
It consists of a pair of thermoelectrics with opposite thermoelectric responses
(filled and open circles) and a load, connected in a ring.
Between each such circuit element is a big reservoir of electrons; the reservoir on the left ($L$)
is hotter than the others, $T_L > T_R$, so heat flows from left to right.
One thermoelectric's response causes an electric current to flow in the opposite direction
to the heat flow (filled circle), while the other's causes an electric current to flow
in the same direction as the heat flow (open circle).
Thus, the two thermoelectrics turn heat energy into electrical work: a current $I$ flowing
through the load.
The load is assumed to be a device that turns the electrical work into some other form of work;
it could be a battery-charger (turning electrical work into chemical work) or a motor (turning electrical work
into mechanical work).
The same thermocouple circuit can be made into a refrigerator simply by replacing the
load with a power supply. The power supply does work to establish the current $I$ around the circuit,
and this current through the thermoelectrics can ``drag'' heat out of reservoir $L$.
In other words, the electrical current and heat flow are the same as for the heat-engine,
but now the former causes the latter rather than vice versa.
Thus, the refrigerator cools reservoir $L$, so $T_L < T_R$.
The laws of classical thermodynamics inform us that entropy production can never be negative, and that maximal efficiency occurs
when a system operates reversibly (zero entropy production). Thus, they place fundamental bounds
on heat-engine and refrigerator efficiencies, known as Carnot efficiencies.
In both cases, the efficiency is defined as the power output divided by the power input.
For the heat-engine, the power input is the heat current out of the hotter reservoir (reservoir $L$), $J_L$,
and the power output is the electrical power generated $P_{\rm gen}$.
Thus, the heat-engine (eng) efficiency is
\begin{eqnarray}
\eta_{\rm eng} = P_{\rm gen}\big/ J_L.
\label{Eq:eff-eng}
\end{eqnarray}
This efficiency can never exceed Carnot's limit,
\begin{eqnarray}
\eta_{\rm eng}^{\rm Carnot} &=& 1-T_R/ T_L,
\label{Eq:Carnot-eng}
\end{eqnarray}
where we recall that we have $T_L > T_R$.
For the refrigerator the situation is reversed: the load is replaced by a power supply, and the power input is the electrical power that the circuit absorbs from the power supply, $P_{\rm abs}$. The power output is the heat current out of the colder reservoir (reservoir $L$), $J_L$. This is called the cooling power, because it is the rate at which the circuit removes heat energy from reservoir $L$.
Thus, the refrigerator (fri) efficiency is
\begin{eqnarray}
\eta_{\rm fri} = J_L \big/ P_{\rm abs}.
\label{Eq:eff-fri}
\end{eqnarray}
This efficiency is often called the coefficient of performance or COP.
It can never exceed Carnot's limit,
\begin{eqnarray}
\eta_{\rm fri}^{\rm Carnot} &=& (T_R/T_L -1)^{-1}, \ \
\label{Eq:Carnot-fri}
\end{eqnarray}
where we recall that $T_L < T_R$ (opposite of heat-engine).
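These two Carnot limits are simple to evaluate. The following Python sketch (the temperatures are illustrative values of our own choosing, not taken from the text) implements Eqs.~(\ref{Eq:Carnot-eng}) and (\ref{Eq:Carnot-fri}):

```python
# Illustrative sketch of the Carnot limits in Eqs. (Carnot-eng) and (Carnot-fri).
# All temperatures below are arbitrary example values.

def eta_eng_carnot(T_L, T_R):
    """Heat-engine Carnot efficiency, 1 - T_R/T_L (requires T_L > T_R)."""
    return 1.0 - T_R / T_L

def eta_fri_carnot(T_L, T_R):
    """Refrigerator Carnot COP, 1/(T_R/T_L - 1) (requires T_L < T_R)."""
    return 1.0 / (T_R / T_L - 1.0)

print(eta_eng_carnot(700.0, 280.0))  # 1 - 280/700 = 0.6
print(eta_fri_carnot(280.0, 700.0))  # 1/(700/280 - 1) = 2/3
```

Note that the COP can exceed unity when $T_L$ and $T_R$ are close, whereas the heat-engine efficiency never can.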
Strangely, the laws of classical thermodynamics do not appear to place a fundamental bound on the power output associated with reversible (Carnot efficient) operation. Most textbooks say that reversibility requires ``small'' power output, but rarely define what ``small'' means.
The central objective of Ref.~[\onlinecite{2014w-prl}] was to find the meaning of ``small'', and find a fundamental upper bound on the efficiency of an irreversible system in which the power output was {\it not} small.
Ref.~[\onlinecite{2014w-prl}] did this for the class of quantum thermoelectrics that
are well modelled by a scattering theory,
which enables one to straightforwardly treat quantum and thermodynamic effects on an equal footing.
It summarized two principal results absent from classical thermodynamics.
Firstly, there is a quantum bound (qb) on the power output, and no quantum system can exceed this bound
(open circles in Fig.~\ref{Fig:summary}).
Secondly, there is an upper bound on the efficiency at any given power output less than this bound
(thick black curves in Fig.~\ref{Fig:summary}).
The efficiency at given power output can only reach Carnot efficiency when the power output
is very small compared to the quantum bound on power output. The upper bound on efficiency then
decays monotonically as one increases the power output towards the quantum bound.
The objective of this article is to explain in detail the methods used to derive these results,
along with the other results that were summarized in Ref.~[\onlinecite{2014w-prl}].
\begin{figure}
\includegraphics[width=\columnwidth]{figure2.pdf}
\caption{\label{Fig:summary}
The thick black curves are qualitative sketches of the maximum efficiency as a function of
heat-engine power output (main plot), or refrigerator cooling power (inset),
with the shaded regions being forbidden.
Precise plots of such curves for different temperature ratios, $T_R/T_L$,
are shown in Fig.~\ref{Fig:allpowers}.
The colored loops (red, grey and blue) are typical sketches of the efficiency versus power of {\it individual} heat-engines
as we increase the load resistance (direction of arrows on loop).
The power output $P_{\rm gen}=IV$ vanishes when the load resistance is
zero (for which $V=0$) or infinite (for which $I=0$), with a maximum at an intermediate resistance (open square).
The curves have a characteristic loop form \cite{Casati-review};
however, the exact shape of the loop depends on many system-specific details, such as charging effects.
The dashed blue loop is for a typical non-optimal system (always well below the upper bound),
while the solid red and grey loops are for systems which achieve the upper bound for a
particular value of the load.
The star marks the Curzon-Ahlborn efficiency.
}
\end{figure}
\subsection{Contents of this article}
This article provides detailed derivations of the results in Ref.~[\onlinecite{2014w-prl}].
The first part of this article is an extended introduction.
Section~\ref{Sect:literature} is a short review of the relevant literature.
Section~\ref{Sect:Unique} discusses how we define temperature, heat and entropy.
Section~\ref{Sect:entropy-prod} recalls the connection between efficiency and entropy production
in any thermodynamic machine.
Section~\ref{Sect:ScatteringTheory} reviews the nonlinear scattering theory,
which section~\ref{Sect:over-estimates} uses to make very simple over-estimates
of a quantum system's maximum power output.
The second part of this article considers how to optimize a system
which is free of relaxation and has no phonons or photons.
Section~\ref{Sect:guess-heat} gives a hand-waving explanation of the optimal heat engine,
while Section~\ref{Sect:eng} gives the full derivation.
Section~\ref{Sect:guess-fri} gives a hand-waving explanation of the optimal refrigerator,
while Section~\ref{Sect:fri} gives the full derivation.
Section~\ref{Sect:chain} proposes a system which could in principle come arbitrarily close to the
optimal properties given in sections~\ref{Sect:eng} and \ref{Sect:fri}.
Section~\ref{Sect:in-parallel} considers many quantum thermoelectrics in parallel.
The third part of this article considers certain effects neglected
in the above idealized system.
Section~\ref{Sect:ph} adds the parasitic effect of phonons or photons carrying heat in parallel with the electrons.
Section~\ref{Sect:Relax} treats relaxation within the quantum system.
\section{Comments on existing literature}
\label{Sect:literature}
There is much interest in using thermoelectric effects to cool fermionic atomic
gases \cite{Grenier2012,Brantut-Grenier-et-al2013,Grenier2014},
which are hard to cool via other methods.
This physics is extremely similar to that in this work, but there is a crucial difference.
For the electronic systems that we consider, we can assume the temperatures to be much less than the reservoir's Fermi energy, and so take all electrons to have
the same Fermi wavelength. In contrast, fermionic atomic gases have temperatures of order the Fermi energy, so the high-energy particles in a reservoir have a different wavelength from the low-energy ones.
Thus, our results do not apply to atomic gases,
although our methodology does\cite{Grenier2014}.
\subsection{Nonlinear systems and the figure of merit $ZT$}
\label{Sect:nonlinear+ZT}
Engineers commonly state that wide-ranging applications for thermoelectrics would require them to have
a dimensionless figure of merit, $ZT$, greater than three.
This figure of merit is a dimensionless combination of the linear-response coefficients \cite{books}, $ZT= TGS^2/\Theta$,
for temperature $T$,
Seebeck coefficient $S$, electrical conductance $G$, and thermal conductance $\Theta$.
Yet for us, $ZT$ is just a way to characterize the efficiency, via
\begin{eqnarray}
\eta_{\rm eng} = \eta_{\rm eng}^{\rm carnot} {\sqrt{ZT+1} -1 \over \sqrt{ZT+1} +1},
\nonumber
\end{eqnarray}
with a similar relationship for refrigerators.
Thus, someone asking for a device with $ZT > 3$
actually requires one with an efficiency of more than one third of the Carnot efficiency.
This is crucial, because the efficiency is a physical quantity in linear and nonlinear situations,
while $ZT$ is only meaningful in the linear-response regime \cite{Zebarjadi2007,Grifoni2011,2012w-pointcont,Meair-Jacquod2013,Michelini2014,Azema-Lombardo-Dare2014}.
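This correspondence between $ZT$ and efficiency is easily checked numerically; the following sketch (the function name is ours) evaluates the relation above:

```python
from math import sqrt

def eta_over_carnot(ZT):
    """Maximum linear-response efficiency in units of the Carnot efficiency:
    (sqrt(ZT+1) - 1)/(sqrt(ZT+1) + 1)."""
    r = sqrt(ZT + 1.0)
    return (r - 1.0) / (r + 1.0)

print(eta_over_carnot(3.0))   # (2-1)/(2+1) = 1/3, as stated above
print(eta_over_carnot(1e9))   # approaches 1 (Carnot) as ZT -> infinity
```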
Linear-response theory rarely fails for bulk semiconductors,
even when $T_L$ and $T_R$ are very different. Yet it is completely
{\it inadequate} for the quantum systems that we consider here.
Linear-response theory requires the temperature drop on the scale of
the electron relaxation length $l_{\rm rel}$
(distance travelled before thermalizing) to be much less than the average temperature.
For a typical millimetre-thick bulk thermoelectric between a diesel motor's exhaust system
($T_L\simeq 700$K) and its surroundings ($T_R\simeq 280$K),
the relaxation length (inelastic scattering length) is of order the mean free path; typically 1-100nm.
The temperature drop on this scale is tens of thousands of times smaller than the temperature drop
across the whole thermoelectric. This is absolutely tiny compared with the average temperature,
so linear-response \cite{Mahan-Sofo1996} works well, even though $(T_L-T_R)/T_L$ is of order one.
In contrast, for quantum systems ($L \ll l_{\rm rel}$), the whole temperature drop occurs on the scale of a few nanometres or less, and so linear-response theory is inapplicable whenever
$(T_L-T_R)/T_L$ is not small.
\subsection{Carnot efficiency}
\label{Sect:Carnot}
A system must be reversible (create no entropy) to have Carnot efficiency;
proposals exist to achieve this in bulk \cite{Mahan-Sofo1996} or quantum
\cite{Humphrey-Linke2005,Kim-Datta-Lundstrom2009,Jordan-Sothmann-Sanchez-Buttiker2013} thermoelectrics.
It requires that electrons only pass between reservoirs L and R at the energy where
the occupation probabilities are identical in the two reservoirs \cite{Humphrey-Linke2005}.
Thus, a thermoelectric requires two things to be reversible.
Firstly, it must have a $\delta$-function-like transmission \cite{Mahan-Sofo1996,Humphrey-Linke2005,Kim-Datta-Lundstrom2009,Jordan-Sothmann-Sanchez-Buttiker2013,Sothmann-Sanchez-Jordan-Buttiker2013},
which only lets electrons through at energy $\epsilon_0$.
Secondly \cite{Humphrey-Linke2005}, the load's resistance must be such
that $eV = \epsilon_0 (1-T_R/T_L)$,
so the reservoirs' occupations are equal at $\epsilon_0$, see Fig.~\ref{Fig:Fermi}.
By definition this means the current vanishes, and thus so does the power output, $P_{\rm gen}$.
However, one can see how $P_{\rm gen}$ vanishes by
considering a quantum system which lets electrons through in a tiny energy window $\Delta$ from $\epsilon_0$
to $\epsilon_0+\Delta$; see Fig.~\ref{Fig:tophat-width}.
When we take $\Delta\big/(k_{\rm B} T_{L,R}) \to 0$,
one has Carnot efficiency; however, we will see
(leading order term in Eq.~(\ref{Eq:Pgen-eng-lowpower})) that
\begin{eqnarray}
P_{\rm gen} \propto {1 \over \hbar} \Delta^2,
\label{Eq:power-for-Delta-to-zero}
\end{eqnarray}
which vanishes as $\Delta\big/(k_{\rm B} T_{L,R}) \to 0$.
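The reversibility condition can be verified directly from the Fermi functions. In the sketch below (all numbers are illustrative choices of ours; energies are measured from $\mu_L=0$ in units where $k_{\rm B}=1$), the occupations of the two reservoirs coincide at $\epsilon_0$ once $eV = \epsilon_0(1-T_R/T_L)$:

```python
from math import exp

def fermi(eps, mu, kT):
    """Fermi-Dirac occupation at energy eps, for chemical potential mu."""
    return 1.0 / (exp((eps - mu) / kT) + 1.0)

kT_L, kT_R = 2.5, 1.0             # hot left, cold right (illustrative units)
eps0 = 3.0                        # transmission energy of the delta-like filter
eV = eps0 * (1.0 - kT_R / kT_L)   # the reversible bias discussed above

f_L = fermi(eps0, 0.0, kT_L)      # occupation of reservoir L at eps0
f_R = fermi(eps0, eV, kT_R)       # occupation of reservoir R at eps0
print(f_L, f_R)                   # the two occupations are identical
```

With equal occupations at the only transmitted energy, no net current flows, which is why the power output vanishes in this limit.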
\subsection{Heat-engine efficiency at finite power output and Curzon-Ahlborn efficiency}
\label{Sect:eff-CA}
To increase the power output beyond that of a reversible system,
one has to consider irreversible machines which generate a finite amount of entropy
per unit of work generated.
Curzon and Ahlborn\cite{Curzon-Ahlborn1975} popularized the idea
of studying the efficiency of a heat-engine running at its maximum power output.
For classical pumps, this efficiency is $\eta_{\rm eng}^{\rm CA} = 1- \sqrt{T_R/T_L}$, which is now called the Curzon-Ahlborn efficiency,
although it was already given in Refs.~[\onlinecite{Yvon1956,Chambadal57,Novikov57}].
As refrigerators, these pumps have an efficiency at maximum cooling power of zero, although
Refs.~[\onlinecite{Velasco1997,Tomas2012,Apertet2013,Correa2014}] discuss ways around this.
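For orientation, the Curzon-Ahlborn value is easily compared with the Carnot limit; the following sketch (with illustrative temperatures of our own choosing) does so:

```python
from math import sqrt

def eta_CA(T_L, T_R):
    """Curzon-Ahlborn efficiency at maximum power, 1 - sqrt(T_R/T_L)."""
    return 1.0 - sqrt(T_R / T_L)

def eta_carnot(T_L, T_R):
    """Carnot efficiency, 1 - T_R/T_L (requires T_L > T_R)."""
    return 1.0 - T_R / T_L

T_L, T_R = 700.0, 280.0
print(eta_CA(T_L, T_R))       # about 0.37
print(eta_carnot(T_L, T_R))   # 0.6; CA always lies below Carnot
```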
The response of a given heat-engine is typically a ``loop''
of efficiency versus power (see Fig.~\ref{Fig:summary}) as one varies the
load on the system\cite{Casati-review}. For a peaked transmission function with width $\Delta$
(see e.g.~Fig.~\ref{Fig:tophat-width}), the loop moves to the left as one reduces $\Delta$.
In the limit $\Delta \to 0$, the whole loop is squashed onto the $P_{\rm gen}=0$ axis.
In linear-response language, this machine has $ZT \to \infty$.
In this limit,
the efficiency at maximum power can be very close to that of Curzon and Ahlborn \cite{Esposito2009-thermoelec} (the star in Fig.~\ref{Fig:summary}),
just as its maximum efficiency
can be that of Carnot\cite{Humphrey-Linke2005} (see previous section).
However, its maximum power output is $\propto eV\Delta/\hbar$ for small $\Delta$
(where $V$ is finite, chosen to ensure maximum power), which vanishes for $\Delta \to 0$, although it is much larger than Eq.~(\ref{Eq:power-for-Delta-to-zero}).
Fig.~\ref{Fig:summary} shows that a system with larger $\Delta$ (such as the red curve) operating near its maximum efficiency will have both higher efficiency and higher power output than one with small $\Delta$ (left-most grey curve) operating at maximum power.
This article shows how to derive the thick black curve in
Fig.~\ref{Fig:summary}, thereby showing that there is a fundamental trade-off between efficiency
and power output
in optimal thermodynamic machines made from thermoelectrics \cite{footnote:casati-review}.
As such, our work overturns the idea that maximizing efficiency at maximum power is the best route
to machines with both high efficiency and high power.
It also overturns the idea that systems with the
narrowest transmission distributions (the largest $ZT$ in linear-response)
are automatically the best thermoelectrics.
At this point we mention that other works\cite{Nakpathomkun-Xu-Linke2010,Leijnse2010,Meden2013,Hershfield2013} have studied efficiencies
for various systems with finite width transmission functions, for which power outputs can be finite.
In particular, Ref.~[\onlinecite{Hershfield2013}] considered a boxcar transmission function,
which is the form of transmission function that we have shown can be made optimal \cite{2014w-prl}.
\subsection{Pendry's quantum bound on heat-flow}
\label{Sect:Pendry}
An essential ingredient in this work
is Pendry's upper bound \cite{Pendry1983} on the heat-flow through a quantum system between two reservoirs of fermions.
He found this bound using a scattering theory of the type discussed in
Section~\ref{Sect:ScatteringTheory} below.
It is a concrete example of a general principle due to Bekenstein \cite{Bekenstein},
and the same bound applies
in the presence of thermoelectric effects \cite{2012w-2ndlaw}.
The bound on the heat flow out of reservoir $L$
is achieved when all the electrons and holes arriving at the quantum system from reservoir $L$ escape into reservoir $R$
without impediment, while there is no back-flow of electrons or holes from reservoir $R$ to $L$.
The easiest way to achieve this is to couple reservoir $L$ through the
quantum system to a reservoir $R$ at zero temperature, and then ensure the quantum system does not reflect any particles. In this case the heat current equals
\begin{eqnarray}
J^{\rm qb}_L = {\pi^2 \over 6h} N k_{\rm B}^2 T_L^2,
\label{Eq:Jqb}
\end{eqnarray}
where $N$ is the number of transverse modes in the quantum system.
We refer to this as the quantum bound (qb) on heat flow,
because it depends on the quantum wave nature of the electrons;
it depends on $N$, which is given by the cross-sectional area of the quantum system
divided by $\lambda_{\rm F}^2$, where $\lambda_{\rm F}$ is the electron's Fermi wavelength.
As such $J_L^{\rm qb}$ is ill-defined within classical thermodynamics.
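To give a sense of scale, Eq.~(\ref{Eq:Jqb}) can be evaluated with SI constants; the sketch below is our own numerical illustration for a single mode at 1\,K:

```python
from math import pi

K_B = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J s

def J_qb(N, T_L):
    """Quantum bound on heat flow, (pi^2 / 6h) N k_B^2 T_L^2, in watts."""
    return (pi**2 / (6.0 * H)) * N * K_B**2 * T_L**2

print(J_qb(1, 1.0))   # roughly 5e-13 W: about half a picowatt per mode at 1 K
```

The bound scales linearly with the mode number $N$ and quadratically with $T_L$, as the formula requires.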
\section{Uniquely defining temperature, heat and entropy}
\label{Sect:Unique}
\begin{figure}
\includegraphics[width=\columnwidth]{figure3.pdf}
\caption{\label{Fig:Unique}
To implement the procedure in Section~\ref{Sect:Unique},
one starts with the circuit unconnected, as in (a); one then connects the circuit, as in (b).
After a long time $t_{\rm expt}$, one disconnects the circuit, returning to (a).
The circles are the quantum thermoelectrics,
as in Fig.~\ref{Fig:thermocouple}.
}
\end{figure}
Works on classical thermodynamics have shown that the definition of heat and entropy flows
can be fraught with difficulties. For example, the rate of change of entropy cannot always be uniquely defined in classical continuum
thermodynamics\cite{Kern1975,Day1977,book:irreversible-thermodyn}.
Here the situation is even more difficult, since the
electrons within the quantum systems (circles in
Fig.~\ref{Fig:thermocouple}) are not at equilibrium, and so their temperature cannot be defined.
Thus, it is crucial to specify the logic which leads to our definitions of temperature, heat flow and entropy flow.
Our definition of heat flow originated in Refs.~[\onlinecite{Engquist-Anderson1981,Sivan-Imry1986,Butcher1990}],
the rate of change of entropy is then found using the Clausius relation \cite{Footnote:Sivan-Imry} (see below).
To explain these quantities and show they are unambiguous, we
consider the following three step procedure for a heat engine.
An analogue procedure works for a refrigerator.
\begin{itemize}
\item[]{\bf Step 1.}
Reservoir $L$ is initially decoupled from the rest of the circuit (see Fig.~\ref{Fig:Unique}a),
has internal heat energy $Q_L^{(0)}$, and is in internal equilibrium at temperature $T_L^{(0)}$.
The rest of the circuit is in equilibrium at temperature $T_R^{(0)}$
with internal heat energy $Q_R^{(0)}$. The internal heat energy is
the total energy of the reservoir's electron gas minus the energy which that gas would have in its ground-state.
As such, the internal energy can be written as a sum over electrons and holes, with an electron at energy $\epsilon$ above the reservoir's chemical potential (or a hole at energy $\epsilon$ below that chemical potential) contributing $\epsilon$ to this internal heat energy.
The initial entropies are then $S^{(0)}_i = Q^{(0)}_i \big/ T^{(0)}_i$ for $i=L,R$.
\item[]{\bf Step 2.}
We connect reservoir $L$ to the rest of the circuit (see Fig.~\ref{Fig:Unique}b)
and leave it connected for a long time $t_{\rm expt}$.
While we assume $t_{\rm expt}$ is long, we also assume that the reservoirs are all large enough that the energy distributions within them change very little during time $t_{\rm expt}$.
Upon connecting the circuit elements, we assume a transient response during a time $t_{\rm trans}$,
after which the circuit achieves a steady-state.
We ensure that $t_{\rm expt}\gg t_{\rm trans}$,
so the physics is dominated by this steady-state.
Even then the flow will be noisy \cite{Blanter-Buttiker} due to the fact that
electrons are discrete with probabilistic dynamics.
So we also ensure that $t_{\rm expt}$ is much longer than the noise correlation time, so that the noise in the currents is negligible compared to the average currents.
\item[]{\bf Step 3.}
After the time $t_{\rm expt}$, we disconnect reservoir $L$ from the rest of the circuit. Again, there will be a transient response, however we assume that a weak relaxation mechanism within the reservoirs will cause the two parts of the circuit to each relax to internal equilibrium (see Fig.~\ref{Fig:Unique}a).
After this one can unambiguously identify the temperature, $T_i$, internal energy $Q_i$
and Clausius entropy $S_i=Q_i\big/ T_i$ of the two parts of the circuit (for $i=L,R$).
Since the reservoirs are large, we assume $T_i = T_i^{(0)}$.
\end{itemize}
Thus, we can unambiguously say that
the heat-current out of reservoir $i$ {\it averaged} over the time $t_{\rm expt}$ is
\begin{eqnarray}
\langle J_i \rangle = \big (Q^{(0)}_i - Q_i \big) \big/ t_{\rm expt}.
\end{eqnarray}
For the above thermocouple, we treat the currents for each thermoelectric separately,
writing the heat current out of reservoir $L$ as $J_L+J_{L'}$, where
$J_L$ is the heat current from reservoir $L$ into the lower thermoelectric in Fig.~\ref{Fig:thermocouple}
(the filled circle),
and $J_{L'}$ is the heat current from reservoir $L$ into the upper thermoelectric in Fig.~\ref{Fig:thermocouple}
(the open circle).
Treating each thermoelectric separately is convenient,
and also allows one to generalize the results to ``thermopiles'',
which contain hundreds of thermoelectrics arranged so that they are electrically in series, but
thermally in parallel.
The average rate of change of entropy in the circuit is
$\langle \dot S_{\rm circuit} \rangle = \langle \dot S \rangle +\langle \dot S' \rangle$,
where $\langle \dot S \rangle$ is the average rate of change of entropy associated with the lower
thermoelectric in Fig.~\ref{Fig:thermocouple}, while $\langle \dot S' \rangle$ is that for the upper thermoelectric. Then
\begin{eqnarray}
\langle \dot S\rangle = \langle \dot S_L \rangle + \langle \dot S_R \rangle
= -{\langle J_{\rm L} \rangle \big/ T_L} \,-\, {\langle J_{\rm R} \rangle \big/ T_R}\,,
\label{Eq:average-dotS-def}
\end{eqnarray}
while $\langle \dot S'\rangle$ is the same with $J_L,J_R,T_R$ replaced by $J_{L'},J_{R'},T_{R'}$.
We neglect the entropy of the thermoelectrics and the load, by assuming their initial and final states are the same.
This will be the case if they are small compared to the reservoirs, so their initial and final states are simply given by the temperature $T_R$.
The nonlinear scattering theory in Ref.~[\onlinecite{Christen-ButtikerEPL96}] captures
long-time average currents (usually called the DC response in electronics),
such as electrical current $\langle I_i \rangle$ and heat current $\langle J_i \rangle$,
see references in Section~\ref{Sect:ScatteringTheory}.
It is believed to be exact for non-interacting particles, and
also applies when interactions can be
treated in a mean-field approximation (see again section \ref{Sect:ScatteringTheory}).
A crucial aspect of the scattering theory is that we do not need to describe the
non-equilibrium state of the quantum system during step 2.
Instead, we only need the quantum system's
transmission function, defined in section \ref{Sect:ScatteringTheory}.
In this article we will {\it only} discuss the long-time average of the rates of flows
(not the noisy instantaneous flows), and thus
will not explicitly indicate the average; so $I_i$, $J_i$ and $\dot S_i$ should
be interpreted as
$\langle I_i \rangle$, $\langle J_i \rangle$ and $\langle\dot S_i \rangle$.
\section{Entropy production}
\label{Sect:entropy-prod}
There are little-known universal relations between efficiency, power, and entropy production,
which follow trivially from the laws of thermodynamics \cite{Cleuren2012}.
Consider the lower thermoelectric in Fig.~\ref{Fig:thermocouple}a (filled circle),
with $J_L$ and $J_R$ being steady-state heat currents into it from reservoirs $L$ and $R$.
Then the first law of thermodynamics is
\begin{eqnarray}
J_R + J_L=P_{\rm gen},
\label{Eq:firstlaw}
\end{eqnarray}
where $P_{\rm gen}$ is the electrical power generated.
The Clausius relation for the
rate of change of total entropy averaged over long times as in Eq.~(\ref{Eq:average-dotS-def}), is
\begin{eqnarray}
\dot S = -{J_L \over T_L} + {J_L - P_{\rm gen} \over T_R},
\label{Eq:secondlaw}
\end{eqnarray}
where we have used Eq.~(\ref{Eq:firstlaw}) to eliminate $J_R$.
For a heat engine, we take $J_L$ to be positive, which means $T_L > T_R$
and $J_R$ is negative.
We use Eq.~(\ref{Eq:eff-eng}) to replace $J_L$ with $P_{\rm gen}/\eta_{\rm eng}$
in Eq.~(\ref{Eq:secondlaw}).
Then, the rate of entropy production by a heat-engine with efficiency
$\eta_{\rm eng}(P_{\rm gen})$ at
power output $P_{\rm gen}$ is
\begin{eqnarray}
\dot S (P_{\rm gen})
&=&
{P_{\rm gen} \over T_R} \left({\eta_{\rm eng}^{\rm carnot} \over \eta_{\rm eng}(P_{\rm gen})} -1 \right),
\label{Eq:dotS-eng}
\end{eqnarray}
where the Carnot efficiency, $\eta_{\rm eng}^{\rm carnot}$, is given in Eq.~(\ref{Eq:Carnot-eng}).
Hence, knowing the efficiency at power $P_{\rm gen}$, tells us the entropy production at that power. Maximizing the former minimizes the latter.
For refrigeration, the load in Fig.~\ref{Fig:thermocouple} is replaced by a power supply,
the thermoelectric thus absorbs a power $P_{\rm abs}$ to extract heat from the cold reservoir.
We take reservoir $L$ as cold ($T_L < T_R$), so $J_L$ is positive.
We replace $P_{\rm gen}$ by $-P_{\rm abs}$ in Eqs.~(\ref{Eq:firstlaw},\ref{Eq:secondlaw}).
We then use Eq.~(\ref{Eq:eff-fri}) to replace $P_{\rm abs}$ by $J_L/\eta_{\rm fri}$.
Then the rate of entropy production by a refrigerator at cooling power $J_L$ is
\begin{eqnarray}
\dot S (J_L)
&=&
{J_L \over T_R} \left({1\over \eta_{\rm fri}(J_L)} -{1 \over \eta_{\rm fri}^{\rm carnot}} \right),
\label{Eq:dotS-fri}
\end{eqnarray}
where the Carnot efficiency, $\eta_{\rm fri}^{\rm carnot}$, is given in Eq.~(\ref{Eq:Carnot-fri}).
Hence knowing a refrigerator's efficiency at cooling power $J_L$ gives us its entropy production,
and we see that maximizing the former minimizes the latter.
Eqs.~(\ref{Eq:dotS-eng},\ref{Eq:dotS-fri}) hold for systems modelled by scattering theory, because
this theory satisfies the laws of thermodynamics \cite{Bruneau2012,2012w-2ndlaw}.
The rate of entropy production is zero when the efficiency is that of Carnot,
but becomes increasingly positive as the efficiency reduces.
In this article, we calculate the maximum efficiency for given power output,
and then use Eqs.~(\ref{Eq:dotS-eng},\ref{Eq:dotS-fri}) to
get the minimum rate of entropy production at that power output.
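The algebra leading from Eq.~(\ref{Eq:secondlaw}) to Eq.~(\ref{Eq:dotS-eng}) can also be checked numerically; in the sketch below (all wattages and temperatures are arbitrary illustrative values), the two expressions coincide:

```python
def S_dot_clausius(J_L, P_gen, T_L, T_R):
    """Entropy production rate from the Clausius form, Eq. (secondlaw)."""
    return -J_L / T_L + (J_L - P_gen) / T_R

def S_dot_from_eff(P_gen, eta, T_L, T_R):
    """Entropy production rate from Eq. (dotS-eng), with eta_carnot = 1 - T_R/T_L."""
    return (P_gen / T_R) * ((1.0 - T_R / T_L) / eta - 1.0)

T_L, T_R = 700.0, 280.0
J_L, P_gen = 10.0, 3.0          # heat input and power output (illustrative)
eta = P_gen / J_L               # the heat-engine efficiency, Eq. (eff-eng)
print(S_dot_clausius(J_L, P_gen, T_L, T_R))
print(S_dot_from_eff(P_gen, eta, T_L, T_R))  # same value: the two forms agree
```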
\section{Nonlinear Scattering Theory}
\label{Sect:ScatteringTheory}
This work uses Christen and B\"uttiker's nonlinear scattering theory \cite{Christen-ButtikerEPL96},
which treats electron-electron interactions as mean-field charging effects.
Refs.~[\onlinecite{Sanchez-Lopez2013,2012w-pointcont,Meair-Jacquod2013}] added thermoelectric effects by following works on linear-response
\cite{Engquist-Anderson1981,Sivan-Imry1986,Butcher1990}.
Particle and heat flows are given by the transmission function, ${\cal T}_{RL}(\epsilon)$,
for electrons to go from left ($L$) to right ($R$) at energy $\epsilon$, where ${\cal T}_{RL}(\epsilon)$ is a {\it self-consistently} determined function of $T_L$, $T_R$ and $V$.
In short, this self-consistency condition originates from the fact that electrons injected from the leads change the charge distribution in the quantum system, which in turn changes the behaviour of those
injected electrons (via electron-electron interactions).
The transmission function can be determined self-consistently with the charge distribution,
if the latter is treated in a time-independent mean-field manner (neglecting single electron effects).
We note that the same nonlinear scattering theory was also derived for resonant level models
\cite{Humphrey-Linke2005,Nakpathomkun-Xu-Linke2010} using functional RG to treat single-electron charging effects \cite{Meden2013}.
The scattering theory for the heat current is based on the observation that
an electron leaving reservoir $i$ at energy $\epsilon$ is carrying heat $\epsilon - \mu_i$ out of that reservoir
\cite{Butcher1990},
where $\mu_i$ is the reservoir's chemical potential. Thus, a reservoir is cooled by removing an
electron above the Fermi surface, but heated by removing an electron below the Fermi surface.
It is convenient to treat empty states below a reference chemical potential
(which we define as $\epsilon=0$),
as ``holes''. Then we do not need to keep track of a full Fermi sea of electrons, but only
the holes in that Fermi sea.
The heat currents out of reservoirs $L$ and $R$ and into the quantum system
are then
\begin{eqnarray}
J_L \! &=& \!
{1 \over h} \sum_\mu \int_0^\infty {\rm d}\epsilon
\, (\epsilon - \mu e^{\operatorname{-}} V_L) \,
{\cal T}^{\mu\mu}_{RL}(\epsilon) \, \big[f_L^\mu (\epsilon) - f_R^\mu (\epsilon)\big],
\nonumber \\
\label{Eq:JL}
\\
J_R \! &=& \!
{1 \over h} \sum_\mu \int_0^\infty {\rm d}\epsilon
\, (\epsilon - \mu e^{\operatorname{-}} V_R) \,
{\cal T}^{\mu\mu}_{RL}(\epsilon) \, \big[f_R^\mu (\epsilon) - f_L^\mu (\epsilon)\big],
\nonumber \\
\label{Eq:JR}
\end{eqnarray}
where $ e^{\operatorname{-}} $ is the electron charge ($ e^{\operatorname{-}} <0$),
so $ e^{\operatorname{-}} V_i$ is the chemical potential of reservoir $i$ measured from
the reference chemical potential ($\epsilon=0$).
The sum is over
$\mu=1$ for ``electron'' states (full states above the reference chemical potential),
and $\mu=-1$ for ``hole'' states (empty states below that chemical potential).
The Fermi function for particles entering
from reservoir $j$, is
\begin{eqnarray}
f_j^\mu(\epsilon) = \left(1+\exp\left[(\epsilon - \mu e^{\operatorname{-}} V_j)\big/ (k_{\rm B} T_j) \right] \right)^{-1}.
\label{Eq:Fermi}
\end{eqnarray}
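For concreteness, Eq.~(\ref{Eq:Fermi}) can be coded directly; a sketch in our own notation (energies in units where $k_{\rm B}=1$):

```python
import math

def fermi(eps, mu, eV_j, kT_j):
    """Fermi factor f_j^mu(eps) of Eq. (Fermi).

    eps  : energy, measured from the reference chemical potential
    mu   : +1 for electron states, -1 for hole states
    eV_j : e^- V_j, the chemical potential of reservoir j
    kT_j : k_B T_j
    """
    return 1.0 / (1.0 + math.exp((eps - mu * eV_j) / kT_j))
```

It equals $1/2$ at $\epsilon=\mu e^{\operatorname{-}} V_j$ and decays monotonically with increasing energy.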
The transmission function, ${\cal T}^{\nu\mu}_{ij}(\epsilon)$, is the probability that
a particle $\mu$ with energy $\epsilon$ entering the quantum system from reservoir $j$ will
exit into reservoir $i$ as a particle $\nu$ with energy $\epsilon$.
We only allow $\nu=\mu$ here, since we do not consider electron-to-hole scattering within the quantum system (such scattering is only common when superconductors are present).
Interactions mean that ${\cal T}^{\mu\mu}_{RL}(\epsilon)$
is a {\it self-consistently} determined function of $T_L$, $T_R$, $V_L$ and $V_R$.
The system generates power $P_{\rm gen} = (V_R-V_L) I_L$,
so
\begin{eqnarray}
P_{\rm gen} \! &=& \!
{1\over h} \sum_\mu \int_0^\infty {\rm d}\epsilon
\ \mu e^{\operatorname{-}} (V_R-V_L)\,
{\cal T}^{\mu\mu}_{RL}(\epsilon) \, \big[f_L^\mu (\epsilon) - f_R^\mu (\epsilon)\big],
\nonumber \\
\label{Eq:Pgen}
\end{eqnarray}
It is easy to verify that Eqs.~(\ref{Eq:JL}-\ref{Eq:Pgen})
satisfy the first law of thermodynamics, Eq.~(\ref{Eq:firstlaw}).
This theory assumes the quantum system to be relaxation-free,
although decoherence is allowed as it does not change the structure of Eqs.~(\ref{Eq:JL}-\ref{Eq:Pgen}).
Relaxation is discussed in Section~\ref{Sect:Relax}.
We define the voltage drop as $V=V_R-V_L$. Without loss of generality we take the reference chemical potential to be that of reservoir $L$, so
\begin{eqnarray}
V_L=0, \qquad V_R=V,
\label{Eq:def-V}
\end{eqnarray}
then $J_L$ and $P_{\rm gen}$ coincide with Eqs.~(8,9) in Ref.~[\onlinecite{2014w-prl}].
Numerous works have found the properties of thermoelectric systems
from their transmission functions, ${\cal T}_{RL}(\epsilon)$. Linear-response examples include
Refs.~[\onlinecite{Engquist-Anderson1981,Sivan-Imry1986,Butcher1990,Molenkamp1992,Paulsson-Datta2003,Vavilov-Stone2005,
Nozaki2010,jw-epl,Casati2011,Sanchez-Serra2011,Saha2011,Karlstrom2011,jwmb,Hwang-Lopez-Lee-Sanchez2014,Sothmann-Nernst-engine,Linke2013-onsager}],
while nonlinear responses were considered in
Refs.~[\onlinecite{
Galperin2007-2008,
Murphy2008,
Nakpathomkun-Xu-Linke2010,
Sanchez-Lopez2013,2012w-pointcont,Meair-Jacquod2013,Meden2013,Linke2013b,Battista2014,Sierra-Sanchez2014}],
see Refs.~[\onlinecite{Casati-review,Sothmann-Sanchez-Jordan-review,Haupt-review}]
for recent reviews.
However, here we do not ask for the efficiency of a given system;
instead, we ask which system would achieve the highest efficiency, and what that efficiency is.
This is similar in spirit to Ref.~[\onlinecite{Mahan-Sofo1996}], except that we maximize the efficiency
for given power output.
We need to answer this question in the context of the mean-field treatment of electron-electron interactions\cite{Christen-ButtikerEPL96}, in which the transmission function for any given system is the solution of the above mentioned self-consistency procedure.
Despite this complexity, any transmission function (including all mean-field
interactions) must obey
\begin{eqnarray}
0\leq{\cal T}^{\mu\mu}_{RL}(\epsilon)\leq N \ \ \hbox{ for all }\epsilon,
\label{Eq:basic-limits-on-transmisson}
\end{eqnarray}
where $N$ is the number of transverse modes at the narrowest point in the nanostructure,
see Fig.~\ref{Fig:thermocouple}.
Let us assume that this is the {\it only} constraint on the transmission function.
That is, we assume that for any given $T_L$, $T_R$ and $V$,
a clever physicist could engineer any desired transmission function, so long as it
obeys Eq.~(\ref{Eq:basic-limits-on-transmisson}). Presumably they could do this
either by solving the self-consistency equations for ${\cal T}^{\mu\mu}_{RL}(\epsilon)$,
or by experimental trial and error.
Thus, in this work, we find the ${\cal T}^{\mu\mu}_{RL}(\epsilon)$
which maximizes the efficiency given solely the constraint
in Eq.~(\ref{Eq:basic-limits-on-transmisson}),
and get this maximum efficiency.
We then rely on future physicists to find a way to construct a system with this
${\cal T}^{\mu\mu}_{RL}(\epsilon)$ (although some hints are given in Section~\ref{Sect:chain}).
\section{From thermoelectric optimization
to thermocouple optimization}
\label{Sect:transforming-from-full-to-open}
The rest of this article considers optimizing a single thermoelectric.
However, an optimal thermocouple heat engine (or refrigerator) consists of two systems with opposite thermoelectric responses (full and open circles in Fig.~\ref{Fig:thermocouple}).
So here we explain how to get the optimal thermocouple from the optimal thermoelectric.
Suppose the optimal system between $L$ and $R$ (the full circle)
has a given transmission function
${\cal T}_{RL}^{\mu,\mu} (\epsilon)$, which we will find in Section~\ref{Sect:eng}.
This system generates an electron flow parallel to heat flow
(so electric current is anti-parallel to heat flow, implying a negative Peltier coefficient).
The system between $L$ and $R'$ (the open circle) must have the opposite response.
For this we interchange the role played by electrons and holes compared with
${\cal T}_{RL}^{\mu,\mu} (\epsilon)$, so the optimal system between $L$ and $R'$ has
\begin{eqnarray}
{\cal T}_{R'L}^{\mu,\mu} (\epsilon) &=& {\cal T}_{RL}^{-\mu,-\mu} (\epsilon).
\end{eqnarray}
If the optimal bias for the system between $L$ and $R$ is $V$ (which we will also find in Section~\ref{Sect:eng}), then the optimal bias for the system between $L$ and $R'$ is $-V$.
Then the heat flow from reservoir $L$ into $R'$ equals that from $L$ into $R$,
while the electrical current from $L$ into $R'$ is opposite to that from $L$ into $R$,
and so $P_{\rm gen}$ is the same for each thermoelectric.
The load across the thermocouple (the two thermoelectrics) must be chosen such that
the bias across the thermocouple is $2V$. The condition that the charge current out of $L$ equals that into $L$ will then ensure that both thermoelectrics are at their optimal bias.
In the rest of this article we discuss power output and heat input {\it per thermoelectric}.
For a thermocouple, one simply needs to multiply these by
two, so the efficiency is unchanged but the power output is doubled.
\section{Simple estimate of bounds on power output}
\label{Sect:over-estimates}
One of the principal results of Ref.~[\onlinecite{2014w-prl}]
is the quantum bounds on the power output of heat-engines and refrigerators.
The exact derivation of these bounds is given in Sections~\ref{Sect:qb-eng}
and \ref{Sect:qb-fri}. Here, we give simple arguments for their basic form
based on Pendry's limit of heat flow discussed in Section~\ref{Sect:Pendry} above.
For a refrigerator, it is natural to argue that the upper bound on cooling power will be closely related to Pendry's bound, Eq.~(\ref{Eq:Jqb}).
We will show in Section~\ref{Sect:qb-fri} that this is the case.
A two-lead thermoelectric can extract as much as half of $J^{\rm qb}_L$.
In other words, the cooling power of any refrigerator must obey
\begin{eqnarray}
J_L &\leq& {1 \over 2} J^{\rm qb}_L \ =\ {\pi^2 \over 12h} N k_{\rm B}^2 T_L^2.
\end{eqnarray}
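The $\pi^2/12$ prefactor is easy to check numerically: in units $h=k_{\rm B}=1$ and per mode ($N=1$), full transmission into an empty cold reservoir gives Pendry's flow from a reservoir at $T_L=1$, with the electron and hole channels each contributing $\int_0^\infty x\,{\rm d}x/(1+{\rm e}^x)=\pi^2/12$. A minimal sketch (the quadrature scheme is ours and purely illustrative):

```python
import math

# Pendry's heat flow in units h = k_B = 1, N = 1, T_L = 1:
# sum over mu of the integral of x / (1 + e^x), i.e. 2 * pi^2/12 = pi^2/6.
def pendry_heat_flow(n_steps=200000, x_max=50.0):
    dx = x_max / n_steps
    total = 0.0
    for i in range(n_steps):
        x = (i + 0.5) * dx                  # midpoint rule
        total += x / (1.0 + math.exp(x)) * dx
    return 2.0 * total                      # sum over mu = +1, -1

J_qb = pendry_heat_flow()                   # should be pi^2/6 = 1.6449...
J_cool_bound = 0.5 * J_qb                   # the cooling bound above: pi^2/12
```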
Now let us turn to a heat-engine operating between a hot reservoir $L$ and cold reservoir $R$.
Following Pendry's logic, we can expect that the heat current into the quantum system from reservoir $L$ cannot be
more than $J_L^{\hbox{\scriptsize over-estimate}} ={\pi^2 \over 6h} N k_{\rm B}^2 (T_L^2-T_R^2)$.
Similarly, no heat engine
can exceed Carnot's efficiency, Eq.~(\ref{Eq:Carnot-eng}).
Thus, we can safely assume any system's power output is less than
\begin{eqnarray}
P_{\rm gen}^{\hbox{\scriptsize over-estimate}}
&=& \eta_{\rm eng}^{\rm carnot} J_L^{\hbox{\scriptsize over-estimate}}
\nonumber \\
&=& {\pi^2 N k_{\rm B}^2 (T_L+T_R) (T_L-T_R)^2 \over 6h \ T_L} .
\end{eqnarray}
We know this is a significant over-estimate, because maximal heat flow cannot coincide with Carnot efficiency. Maximum heat flow requires that ${\cal T}^{\mu\mu}_{RL}(\epsilon)$ be maximal for all $\epsilon$ and $\mu$, while
Carnot efficiency requires a ${\cal T}^{\mu\mu}_{RL}(\epsilon)$ with a
$\delta$-function-like dependence on $\epsilon$ (see Section~\ref{Sect:Carnot}).
Nonetheless, the full calculation in Section~\ref{Sect:qb-eng} shows that the true quantum bound on
power output is such that \cite{footnote:qb2}
\begin{eqnarray}
P_{\rm gen} &\leq& P_{\rm gen}^{\rm qb2} \,\equiv\,
A_0\, {\pi^2 \over h} N k_{\rm B}^2 \big(T_L-T_R\big)^2, \quad \quad
\end{eqnarray}
where $A_0 \simeq 0.0321$.
Thus, the simple over-estimate of the bound,
$P_{\rm gen}^{\hbox{\scriptsize over-estimate}}$,
differs from the true bound $P_{\rm gen}^{\rm qb2}$ by a factor of
$(1+T_R/T_L)/(6A_0)$. In other words, it overestimates the quantum bound by a factor between 5.19 and 10.38 (5.19 when
$T_R=0$ and 10.38 when $T_R=T_L$).
This is not bad for such a simple estimate.
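The factor quoted here follows from one line of arithmetic; a quick check (using the value $A_0\simeq 0.0321$ stated above):

```python
# Over-estimate factor (1 + T_R/T_L) / (6 A_0), with A_0 = 0.0321 from the text.
A0 = 0.0321

def overestimate_factor(t_ratio):
    """t_ratio = T_R / T_L, between 0 and 1 for a heat engine."""
    return (1.0 + t_ratio) / (6.0 * A0)
```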
\begin{figure}
\includegraphics[width=0.9\columnwidth]{figure4.pdf}
\caption{\label{Fig:Fermi}
Sketch of Fermi functions $f_L^\mu(\epsilon)$ and $f_R^\mu(\epsilon)$
in Eq.~(\ref{Eq:Fermi}), when $\mu e^{\operatorname{-}} V$ is positive,
and $T_L > T_R$. Eq.~(\ref{Eq:Eps0-guess}) gives the point where the two curves cross, $\epsilon_0$.}
\end{figure}
\section{Guessing the optimal transmission for a heat-engine}
\label{Sect:guess-heat}
Here we use simple arguments to guess the
transmission function which will maximize a heat-engine's efficiency for a given power output.
We consider the flow of electrons
from reservoir $L$ to reservoir $R$ (the filled circle in Fig.~\ref{Fig:thermocouple}a,
remembering $ e^{\operatorname{-}} <0$, so electron flow is in the opposite direction to $I$).
To produce power, the electrical current must flow against a bias, so we require $ e^{\operatorname{-}} V$ to be positive,
with $V$ as in Eq.~(\ref{Eq:def-V}).
Inspection of the integrand of Eq.~(\ref{Eq:Pgen})
shows that it only gives positive contributions to the power
output, $P_{\rm gen}$, when $\mu \big(f^\mu_L(\epsilon) - f^\mu_R(\epsilon)\big) >0$.
From Eq.~(\ref{Eq:Fermi}),
one can show that $f^\mu_L(\epsilon)$ and $f^\mu_R(\epsilon)$ cross at
\begin{eqnarray}
\epsilon_0 = \mu e^{\operatorname{-}} V \big/ (1-T_R/T_L),
\label{Eq:Eps0-guess}
\end{eqnarray}
see Fig.~\ref{Fig:Fermi}.
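The crossing point of Eq.~(\ref{Eq:Eps0-guess}) can be verified numerically; a sketch with illustrative parameters of our own choosing (units $k_{\rm B}=1$):

```python
import math

def fermi(eps, mu, eV_j, kT_j):
    """Fermi factor of Eq. (Fermi), with k_B = 1."""
    return 1.0 / (1.0 + math.exp((eps - mu * eV_j) / kT_j))

# Illustrative numbers (ours): T_L = 1.0, T_R = 0.2, e^- V = 0.4, mu = +1.
T_L, T_R, eV, mu = 1.0, 0.2, 0.4, 1
eps0 = mu * eV / (1.0 - T_R / T_L)     # Eq. (Eps0-guess): here eps0 = 0.5
f_L = fermi(eps0, mu, 0.0, T_L)        # reservoir L: V_L = 0
f_R = fermi(eps0, mu, eV, T_R)         # reservoir R: V_R = V
```

At $\epsilon_0$ the two Fermi functions coincide, as in Fig.~\ref{Fig:Fermi}.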
Since $ e^{\operatorname{-}} V$ is positive, we maximize the power output by blocking the transmission of
those electrons ($\mu=1$) which have $\epsilon< \epsilon_0$, and by blocking the transmission of all holes ($\mu=-1$).
For $\mu=1$, all energies above $\epsilon_0$ add to the power output.
Hence, maximizing transmission for all $\epsilon > \epsilon_0$
will maximize the power output, giving $P_{\rm gen}=P_{\rm gen}^{\rm qb}$.
However, a detailed calculation, such as that in Section~\ref{Sect:eng},
is required to find the $V$ which will maximize $P_{\rm gen}$; remembering that $P_{\rm gen}$
depends directly on $V$ as well as indirectly (via the above choice of $\epsilon_0$).
Now we consider maximizing the efficiency at a given power output $P_{\rm gen}$, where
$P_{\rm gen} < P_{\rm gen}^{\rm qb}$.
Comparing the integrands
in Eqs.~(\ref{Eq:JL},\ref{Eq:Pgen}), we see that $J_L$ contains an extra factor of energy $\epsilon$
compared to $P_{\rm gen}$. As a result, the transmission of electrons ($\mu=1$) with large $\epsilon$ enhances the heat current much more than it enhances the power output. This means that the higher an electron's $\epsilon$ is,
the less efficiently it contributes to power production.
Thus, one would guess that it is optimal to have an upper cut-off on
transmission, $\epsilon_1$, which would be just high enough to ensure the
desired power output $P_{\rm gen}$, but no higher. Then the transmission function will look like a
``band-pass filter'' (the ``boxcar'' form in Fig~\ref{Fig:tophat-width}),
with $\epsilon_0$ and $\epsilon_1$ further apart for higher power outputs. This guess is correct,
however the choice of $V$ affects both $\epsilon_0$ and $\epsilon_1$, so the
calculation in Section~\ref{Sect:eng} is necessary to find the $V$, $\epsilon_0$ and $\epsilon_1$ which
maximize the efficiency for given $P_{\rm gen}$.
\section{Maximizing heat-engine efficiency
for given power output}
\label{Sect:eng}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figure5.pdf}
\caption{\label{Fig:tophat-width}
How the optimal ``boxcar'' transmission changes with increasing required power output.
At maximum power output, a heat engine has $\epsilon_1 = \infty$ while $\epsilon_0$ remains finite.
At maximum cooling power, a refrigerator has $\epsilon_1 = \infty$ and $\epsilon_0=0$.
The qualitative features follow this sketch for all $T_R/T_L$,
however the details depend on
$T_R/T_L$, see Fig.~\ref{Fig:Delta+V}.
}
\end{figure}
Now we present the central calculations of this article,
finding the maximum efficiency of a quantum thermoelectric
with {\it given} power output.
In this section we consider heat-engines, while Section~\ref{Sect:fri} addresses refrigerators.
For a heat engine, our objective is to find
the transmission function, ${\cal T}^{\mu\mu}_{RL}(\epsilon)$, and bias, $V$,
that maximize the efficiency $\eta_{\rm eng}(P_{\rm gen})$ for given power output $P_{\rm gen}$.
To do this we treat ${\cal T}^{\mu\mu}_{RL}(\epsilon)$
as a set of many slices each of width $\delta \to 0$, see the sketch in Fig.~\ref{Fig:T-functions}a.
We define $\tau^{\mu}_\gamma$ as the height of the $\gamma$th slice, which is at energy
$\epsilon_\gamma \equiv \gamma\delta$.
Our objective is to find the optimal value of $\tau^{\mu}_\gamma$ for each $\mu,\gamma$,
and optimal values of the bias, $V$; all
under the constraint of fixed $P_{\rm gen}$.
Often such optimization problems are formidable;
however, this one is fairly straightforward.
The efficiency is maximum for a fixed power, $P_{\rm gen}$, if
$J_L$ is minimum for that $P_{\rm gen}$.
If we make an infinitesimal change of $\tau^{\mu}_\gamma$ and $V$,
we note that
\begin{eqnarray}
\delta P_{\rm gen} &=& \left. {\partial P_{\rm gen} \over \partial \tau^{\mu}_\gamma} \right|_V \delta\tau^{\mu}_\gamma \ +\ P'_{\rm gen} \,\delta V,
\label{Eq:deltaPgen}
\\
\delta J_L &=& \left. {\partial J_L \over \partial \tau^{\mu}_\gamma} \right|_V \delta\tau^{\mu}_\gamma \ +\ J'_L \,\delta V,
\label{Eq:deltaJL}
\end{eqnarray}
where $|_x$ indicates that the derivative is taken at constant $x$,
and a prime indicates $\partial/\partial V$ for fixed transmission functions.
If we want to fix $P_{\rm gen}$ as we change $\tau^{\mu}_\gamma$, we must change the bias $V$ to compensate.
For this, we set $\delta P_{\rm gen}=0$ in Eq.~(\ref{Eq:deltaPgen}), solve for $\delta V$, and substitute the result into
Eq.~(\ref{Eq:deltaJL}).
Then $J_L$ decreases (increasing efficiency)
for an infinitesimal increase of $\tau^{\mu}_\gamma$ at fixed $P_{\rm gen}$, if
\begin{eqnarray}
\left.{\partial J_L \over \partial \tau^{\mu}_\gamma} \right|_{P_{\rm gen}}
&=&
\left.{\partial J_L \over \partial \tau^{\mu}_\gamma} \right|_V
- {J'_L \over P'_{\rm gen}}
\left.{\partial P_{\rm gen} \over \partial \tau^{\mu}_\gamma }\right|_V \ <\ 0.
\qquad \label{Eq:eng-condition}
\end{eqnarray}
Comparing
Eq.~(\ref{Eq:JL}) and Eq.~(\ref{Eq:Pgen}), one sees that
\begin{eqnarray}
\left.{\partial J_L\over \partial \tau^{\mu}_\gamma }\right|_V &=&
{\epsilon_\gamma\over \mu e^{\operatorname{-}} V}\,
\left.{\partial P_{\rm gen} \over \partial \tau^{\mu}_\gamma }\right|_V ,
\label{Eq:change-JL-to-change-Pgen}
\end{eqnarray}
where $V$ is given in Eq.~(\ref{Eq:def-V}).
Thus, the efficiency $\eta_{\rm eng}(P_{\rm gen})$ grows with a small increase of $\tau^{\mu}_\gamma$ if
\begin{eqnarray}
\left(\epsilon_\gamma - \mu e^{\operatorname{-}} V {J'_L \over P'_{\rm gen}} \right) \times
\left.{\partial P_{\rm gen} \over \partial \tau^{\mu}_\gamma }\right|_V \ <\ 0,
\label{Eq:eng-condition2}
\end{eqnarray}
where $P_{\rm gen}$, $P'_{\rm gen}$, $J_L$, $J'_L$ and $ e^{\operatorname{-}} V$ are positive.
\begin{figure}
\includegraphics[width=0.85\columnwidth]{figure6.pdf}
\caption{\label{Fig:T-functions}
A completely arbitrary transmission function $ {\cal T}_{RL}^{\mu\mu} (\epsilon)$
(see Section \ref{Sect:eng}). We take it to have
infinitely many slices of width $\delta \to 0$, so slice $\gamma$ has energy
$\epsilon_\gamma \equiv \gamma \delta$ and height $\tau^{\mu}_\gamma$. We find the optimal height for each slice.
}
\end{figure}
For what follows, let us define
two energies
\begin{eqnarray}
\epsilon_0 &=& e^{\operatorname{-}} V \big/ (1-T_R/T_L),
\label{Eq:eng-bounds-eps0}
\\
\epsilon_1 &=& e^{\operatorname{-}} V \, J'_L / P'_{\rm gen}.
\label{Eq:eng-bounds-eps1}
\end{eqnarray}
One can see that
$ \left.\left({\partial P_{\rm gen}/\partial\tau^{\mu}_\gamma }\right)\right|_V >0$
when both $\mu=1$ and $\epsilon > \epsilon_0$, and is negative otherwise.
Thus, for $\mu=1$, Eq.~(\ref{Eq:eng-condition2}) is satisfied when $\epsilon_\gamma$ is between
$\epsilon_0$ and $\epsilon_1$. For $\mu=-1$, Eq.~(\ref{Eq:eng-condition2}) is never satisfied.
A heat-engine is only useful if $P_{\rm gen}>0$, and this is only true for
$\epsilon_0 <\epsilon_1$.
Hence, if $\mu=1$ and $\epsilon_0 <\epsilon<\epsilon_1$, then
$\eta_{\rm eng}(P_{\rm gen})$ is maximum for $\tau^{\mu}_\gamma$ at its maximum value,
$\tau^{\mu}_\gamma=N$.
For all other $\mu$ and $\epsilon_\gamma$, $\eta_{\rm eng}(P_{\rm gen})$ is maximum
for $\tau^{\mu}_\gamma$ at its minimum value,
$\tau^{\mu}_\gamma=0$.
Since the left-hand side of
Eq.~(\ref{Eq:eng-condition2}) is not zero for any $\epsilon_\gamma\neq \epsilon_0,\epsilon_1$,
there are no stationary points, which is why
$\tau^{\mu}_\gamma$ never takes a value between its maximum and minimum values.
Thus, the optimal ${\cal T}^{\mu\mu}_{RL}(\epsilon)$ is a ``boxcar'' or ``top-hat'' function,
\begin{eqnarray}
{\cal T}^{\mu\mu}_{RL}(\epsilon)
\! &=& \! \left\{ \! \begin{array}{cl}
N & \hbox{ for } \mu=1 \ \hbox{ \& } \ \ \epsilon_0 \! <\! \epsilon \!
<\! \epsilon_1 \phantom{\big|}
\\
0 & \hbox{ otherwise } \phantom{\big|} \end{array} \right. \quad
\label{Eq:top-hat}
\end{eqnarray}
see Fig.~\ref{Fig:T-functions}b.
It hence acts as a band-pass filter, only allowing flow between $L$ and $R$ for electrons ($\mu=1$) in the energy window between
$\epsilon_0$ and $\epsilon_1$.
Substituting a boxcar transmission function with arbitrary $\epsilon_0$ and $\epsilon_1$ into
Eqs.~(\ref{Eq:JL},\ref{Eq:Pgen}) gives
\begin{eqnarray}
J_L &=& N \,
\big[F_L(\epsilon_0)-F_R(\epsilon_0)-F_L(\epsilon_1)+F_R(\epsilon_1) \big],
\label{Eq:JL-eng}
\\
P_{\rm gen} \!\! &=& \!N e^{\operatorname{-}} V \, \big[G_L(\epsilon_0)-G_R(\epsilon_0)
-G_L(\epsilon_1)+G_R(\epsilon_1) \big], \qquad
\label{Eq:Pgen-eng}
\end{eqnarray}
where we define
\begin{eqnarray}
F_j(\epsilon) = {1 \over h} \int_\epsilon^\infty
{ x \ {\rm d} x \over
1+ \exp\big[(x- e^{\operatorname{-}} V_j)\big/(k_{\rm B} T_j)\big] },
\label{Eq:Fintegral}
\\
G_j(\epsilon) = {1 \over h}\int_\epsilon^\infty
{{\rm d} x \over
1+ \exp\big[(x- e^{\operatorname{-}} V_j)\big/(k_{\rm B} T_j)\big] },
\label{Eq:Gintegral}
\end{eqnarray}
which are both positive for any $\epsilon>0$.
Remembering that we took $V_L=0$ and $V_R=V$,
these integrals are
\begin{eqnarray}
F_L(\epsilon) &=& \epsilon G_L(\epsilon) -{(k_{\rm B} T_L)^2 \over h}
{\rm Li}_2\big[-{\rm e}^{-\epsilon/(k_{\rm B} T_L)}\big], \qquad \\
F_R(\epsilon) &=& \epsilon G_R(\epsilon) -{(k_{\rm B} T_R)^2 \over h}
{\rm Li}_2\big[-{\rm e}^{-( \epsilon- e^{\operatorname{-}} V)/(k_{\rm B} T_R)}\big], \qquad \\
G_L(\epsilon) &=& {k_{\rm B} T_L\over h}\ln\big[1+{\rm e}^{-\epsilon/(k_{\rm B} T_L)}\big],
\\
G_R(\epsilon) &=& {k_{\rm B} T_R\over h}\ln\big[1+{\rm e}^{-(\epsilon- e^{\operatorname{-}} V)/(k_{\rm B} T_R)}\big],
\end{eqnarray}
where ${\rm Li}_2(z)$ is the dilogarithm function,
${\rm Li}_2(z)= \int_0^\infty t \, {\rm d}t \big/({\rm e}^t/z -1)$.
We are only interested in cases where $\epsilon_0$ fulfills the condition in Eq.~(\ref{Eq:eng-bounds-eps0}).
In this case, $(\epsilon_0- e^{\operatorname{-}} V)/(k_{\rm B} T_R) = \epsilon_0/(k_{\rm B} T_L)$, which means $G_R(\epsilon_0)$ and $F_R(\epsilon_0)$ are related to $G_L(\epsilon_0)$ and $F_L(\epsilon_0)$ by
\begin{eqnarray}
G_R(\epsilon_0)&=&{T_R \over T_L}\, G_L(\epsilon_0),
\label{Eq:G_R}
\\
F_R(\epsilon_0)-\epsilon_0 G_R(\epsilon_0)&=&{T_R^2\over T_L^2}\, \left( F_L(\epsilon_0) -\epsilon_0 G_L(\epsilon_0) \right). \quad
\end{eqnarray}
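These closed forms are straightforward to check against the defining integrals, Eqs.~(\ref{Eq:Fintegral},\ref{Eq:Gintegral}); a sketch in units $h=k_{\rm B}=1$ (function names ours), which also exhibits the relation $G_R(\epsilon_0)=(T_R/T_L)\,G_L(\epsilon_0)$:

```python
import math

# Closed forms for F_j and G_j, in units h = k_B = 1.
def Li2(z, terms=200):
    """Dilogarithm Li_2(z) by its power series; adequate for |z| <= 1."""
    return sum(z**k / k**2 for k in range(1, terms + 1))

def G(eps, kT, eV_j):
    """G_j(eps) for reservoir temperature kT and chemical potential eV_j."""
    return kT * math.log(1.0 + math.exp(-(eps - eV_j) / kT))

def F(eps, kT, eV_j):
    """F_j(eps) = eps G_j(eps) - kT^2 Li2(-exp(-(eps - eV_j)/kT))."""
    return eps * G(eps, kT, eV_j) - kT**2 * Li2(-math.exp(-(eps - eV_j) / kT))

def F_numeric(eps, kT, eV_j, n=100000, x_max=40.0):
    """Direct midpoint-rule evaluation of the integral in Eq. (Fintegral)."""
    dx = (x_max - eps) / n
    total = 0.0
    for i in range(n):
        x = eps + (i + 0.5) * dx
        total += x / (1.0 + math.exp((x - eV_j) / kT)) * dx
    return total
```

For example, with $T_L=1$, $T_R=0.2$ and $e^{\operatorname{-}} V=0.4$ (so $\epsilon_0=0.5$), one finds $G(0.5,0.2,0.4)=0.2\,G(0.5,1,0)$, as required.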
\begin{figure}
\includegraphics[width=\columnwidth]{figure7.pdf}
\caption{\label{Fig:bounds}
Solutions of the transcendental equations giving optimal
$\epsilon_1$ (heat-engine) or $\epsilon_0$ (refrigerator).
In (a), the red curve is the optimal $\epsilon_1(V)$ for $\epsilon_1> \epsilon_0$,
and the thick black line is $\epsilon_0$ in Eq.~(\ref{Eq:eng-bounds-eps0}).
The red circle and red arrow indicate the low and high power limits discussed in the text.
In (b), the red curve is the optimal $\epsilon_0(V)$ for $\epsilon_0<\epsilon_1$, and the thick black line
is $\epsilon_1$ in Eq.~(\ref{Eq:fri-bounds-eps1}). }
\end{figure}
Eq.~(\ref{Eq:eng-bounds-eps1})
tells us that $\epsilon_1$ depends on $J_L$ and $P_{\rm gen}$,
but that these in turn depend on $\epsilon_1$.
Hence, to find $\epsilon_1$, we substitute Eqs.~(\ref{Eq:JL-eng},\ref{Eq:Pgen-eng}) into
Eq.~(\ref{Eq:eng-bounds-eps1}) to get a transcendental equation for
$\epsilon_1$ as a function of $V$ for given $T_R/T_L$.
This equation is too hard to solve analytically
(except in the high and low power limits, discussed in Sections \ref{Sect:qb-eng} and \ref{Sect:eff-at-given-power} respectively).
The red curve in Fig.~\ref{Fig:bounds}a is
a numerical solution for $T_R/T_L=0.2$.
Having found $\epsilon_1$ as a function of $V$ for given $T_R/T_L$,
we can use Eqs.~(\ref{Eq:JL-eng},\ref{Eq:Pgen-eng}) to get
$J_L(V)$ and $P_{\rm gen}(V)$. We can then invert the second relation to get
$V(P_{\rm gen})$. At this point we can find $J_L(P_{\rm gen})$,
and then use Eq.~(\ref{Eq:eff-eng}) to get the quantity that we desire --- the
maximum efficiency at given power output, $\eta_{\rm eng}(P_{\rm gen})$.
In Section~\ref{Sect:qb-eng}, we do this procedure analytically
for high power ($P_{\rm gen} =P_{\rm gen}^{\rm qb2}$),
and in Section~\ref{Sect:eff-at-given-power}, we do this procedure analytically
for low power ($P_{\rm gen} \ll P_{\rm gen}^{\rm qb2}$).
For other cases, we only have a numerical
solution for the transcendental equation for
$\epsilon_1$ as a function of $V,T_R/T_L$, so we must do everything numerically.
\begin{figure}
\includegraphics[width=\columnwidth]{figure8.pdf}
\caption{\label{Fig:Delta+V}
(a) Plots of optimal $\Delta$ (left) and $ e^{\operatorname{-}} V$ (right)
for a heat-engine with given power output, $P_{\rm gen}$,
for $T_R/T_L=$ 0.05, 0.1, 0.2, 0.4, 0.6 and 0.8.
We get $\epsilon_0$ from $ e^{\operatorname{-}} V$ by using Eq.~(\ref{Eq:eng-bounds-eps0}).
(b) Plots of optimal $\Delta$ (left) and $ e^{\operatorname{-}} V$ (right) for a refrigerator
with a given cooling power output, $J_L$, for $T_R/T_L=$ 1.05, 1.2, 1.5, 2, 4 and 10.
We get $\epsilon_1$ from $ e^{\operatorname{-}} V$ by using
Eq.~(\ref{Eq:fri-bounds-eps1}).
}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{figure9.pdf}
\caption{\label{Fig:allpowers}
Efficiencies of (a) heat-engines and (b) refrigerators.
In (a) the curves are the maximum allowed heat-engine efficiency as a function of
power outputs for $T_R/T_L= 0.05,0.2,0.4,0.6,0.8$ (from top to bottom).
In (b) the curves are the maximum allowed refrigerator efficiency as a function of
cooling power for $T_R/T_L= 1.05,1.2,1.5,2,4$ (from top to bottom).
In both (a) and (b) the horizontal black lines indicate Carnot efficiency for each $T_R/T_L$,
while the dashed black curves are the analytic theory for small cooling power,
given in Eq.~(\ref{Eq:eta-eng-small-Pgen}) or Eq.~(\ref{Eq:eta-fri-smallJ}).
The circles mark the analytic result for maximum power output.
}
\end{figure}
Fig.~\ref{Fig:Delta+V}a gives the values of $\Delta=(\epsilon_1-\epsilon_0)$
and $ e^{\operatorname{-}} V$ which result from solving the transcendental equation numerically
for a variety of different $T_R/T_L$. Eq.~(\ref{Eq:eng-bounds-eps0}) then relates
$\epsilon_0$ to $ e^{\operatorname{-}} V$.
The qualitative behaviour of the resulting boxcar transmission function is shown in Fig.~\ref{Fig:tophat-width}.
This numerical evaluation enables us to find the efficiency as a function of
$P_{\rm gen}$ and $T_R/T_L$, which we
plot in Fig.~\ref{Fig:allpowers}a.
\subsection{Quantum bound on heat engine power output}
\label{Sect:qb-eng}
Here we want to find the highest possible power output of the heat-engine.
In the previous section,
we had the power as a function of two independent parameters,
$V$ and $\epsilon_1$, with $\epsilon_0$ given by Eq.~(\ref{Eq:eng-bounds-eps0}).
However, we know that Eq.~(\ref{Eq:eng-bounds-eps1})
will then determine a line in this two-dimensional parameter space (Fig.~\ref{Fig:bounds}a),
which we can parametrize by the parameter $V$.
The maximum possible power corresponds to $P_{\rm gen}'=0$,
where we recall $P_{\rm gen}' \equiv {\rm d} P_{\rm gen} \big/ {\rm d} V$.
This has two consequences. The first is that, from Eq.~(\ref{Eq:eng-bounds-eps1}),
we see that $P_{\rm gen}'=0$ means that $\epsilon_1 \to \infty$.
Thus, the transmission function ${\cal T}_{RL}^{\mu\mu}(\epsilon)$
takes the form of a Heaviside step function, $\theta(\epsilon-\epsilon_0)$,
where $\epsilon_0$ is given in Eq.~(\ref{Eq:eng-bounds-eps0}).
Taking Eq.~(\ref{Eq:Pgen-eng}) combined with Eq.~(\ref{Eq:G_R})
for $\epsilon_1 \to \infty$, gives
\begin{eqnarray}
P_{\rm gen}\big(\epsilon_1\to\infty\big) &=&
N e^{\operatorname{-}} V \, \left(1-{T_R \over T_L}\right)\ G_L\left( { e^{\operatorname{-}} V \over 1-T_R/T_L}\right).
\nonumber
\end{eqnarray}
The second consequence of $P_{\rm gen}'=0$, is that the $V$-derivative of this expression
must be zero. This gives us the condition that
\begin{eqnarray}
(1+B_0)\ln[1+B_0] +B_0\ln[B_0] =0
\end{eqnarray}
where we define $B_0= \exp[- e^{\operatorname{-}} V/(k_{\rm B} T_L-k_{\rm B} T_R)] = \exp[-\epsilon_0/(k_{\rm B} T_L)]$.
Numerically solving this equation gives $B_0 \simeq 0.318$.
Eq.~(\ref{Eq:eng-bounds-eps0}) means that this
corresponds to $ e^{\operatorname{-}} V = -k_{\rm B} (T_L-T_R) \ln[0.318]= 1.146 \,k_{\rm B} (T_L-T_R)$,
indicated by the red arrow in Fig.~\ref{Fig:bounds}a.
Substituting this back into $P_{\rm gen}\big(\epsilon_1\to\infty\big)$ gives
the maximum achievable value of $P_{\rm gen}$,
\begin{eqnarray}
P_{\rm gen}^{\rm qb2} =
A_0\, {\pi^2 \over h} N k_{\rm B}^2 \big(T_L-T_R\big)^2 \quad \quad
\label{Eq:P-qb2}
\end{eqnarray}
with
\begin{eqnarray}
A_0 \equiv B_0\ln^2[B_0]\big/\big[\pi^2(1+B_0)\big] \simeq 0.0321.
\end{eqnarray}
We refer to this as the quantum bound (qb) on power output\cite{footnote:qb2},
because of its origin in the Fermi wavelength of the electrons, $\lambda_{\rm F}$.
We see this in the fact that $P_{\rm gen}^{\rm qb2}$ is proportional to
the number of transverse modes in the quantum system, $N$,
which is given by the cross-sectional area of the quantum system divided by $\lambda_{\rm F}^2$. This quantity has no analogue in classical thermodynamics.
The efficiency at this maximum power, $P_{\rm gen}^{\rm qb2}$, is
\begin{eqnarray}
\eta_{\rm eng} (P_{\rm gen}^{\rm qb2})
&=& \eta_{\rm eng}^{\rm Carnot}\big/ \big( 1+C_0 (1+T_R/T_L) \big),
\label{Eq:Eff-at-Pqb2}
\end{eqnarray}
with
\begin{eqnarray}
C_0=-(1+B_0){\rm Li}_2(-B_0)\big/\big(B_0\ln^2[B_0]\big) \simeq 0.936.
\end{eqnarray}
As such, it varies with $T_R/T_L$, but is always more than $0.3\,\eta_{\rm eng}^{\rm Carnot}$.
This efficiency is less than Curzon and Ahlborn's efficiency for all $T_R/T_L$
(although not much less).
However, the power output here is infinitely larger than the maximum power output of systems
that achieve Curzon and Ahlborn's efficiency, see Section~\ref{Sect:eff-CA}.
The form of Eq.~(\ref{Eq:Eff-at-Pqb2}) is very different from
Curzon and Ahlborn's efficiency.
However, we note in passing that Eq.~(\ref{Eq:Eff-at-Pqb2}) can easily be
written as
$\eta_{\rm eng} (P_{\rm gen}^{\rm qb2})
= \eta_{\rm eng}^{\rm carnot} \big/ \left[(1+2C_0) -C_0\eta_{\rm eng}^{\rm carnot}\right]$,
which is reminiscent of the efficiency at maximum power found
for very different systems (certain classical stochastic heat-engines) in Eq.~(31) of
Ref.~[\onlinecite{Schmiedl-Seifert2008}].
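The numerical constants $B_0$, $A_0$ and $C_0$ quoted above are easily reproduced by bisection; a minimal sketch (function names ours):

```python
import math

# Reproduce B_0, A_0 and C_0 of Section "Quantum bound on heat engine power".
def bisect(f, lo, hi, iters=200):
    """Bisection for an increasing function f with a root in (lo, hi)."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def Li2(z, terms=200):
    """Dilogarithm by power series, adequate for |z| <= 1."""
    return sum(z**k / k**2 for k in range(1, terms + 1))

# (1 + B) ln(1 + B) + B ln B = 0 is increasing through its root:
B0 = bisect(lambda B: (1 + B) * math.log(1 + B) + B * math.log(B), 1e-9, 1.0)
A0 = B0 * math.log(B0)**2 / (math.pi**2 * (1.0 + B0))
C0 = -(1.0 + B0) * Li2(-B0) / (B0 * math.log(B0)**2)
```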
\subsection{Optimal heat-engine at low power output}
\label{Sect:eff-at-given-power}
Now we turn to the opposite limit, that of low power output, $P_{\rm gen}\ll P_{\rm gen}^{\rm qb2}$,
where we expect the maximum efficiency to be close to Carnot efficiency. In this limit,
$\epsilon_1$ is close to $\epsilon_0$.
Defining $\Delta= \epsilon_1- \epsilon_0$, we
expand Eqs.~(\ref{Eq:JL-eng},\ref{Eq:Pgen-eng}) in small $\Delta$ up to order $\Delta^3$.
This gives
\begin{eqnarray}
J_L &=& {P_{\rm gen} \over 1-T_R/T_L}
\ +\ {N\,\Delta^3\, (1-T_R/T_L) \over 3h\,k_{\rm B} T_R} g\!\left(x_0\right), \qquad
\label{Eq:JL-eng-lowpower}
\\
P_{\rm gen} &=& {N\,\epsilon_0 \,\Delta^2 \,(1-T_R/T_L)^2 \over 2h\,k_{\rm B} T_R}
\nonumber \\
& & \times \bigg[ g\!\left(x_0\right)
+ {\Delta \, (1+T_R/T_L) \over 3 \, k_{\rm B} T_R} \,{{\rm d} g(x_0)\over {\rm d} x_0}
\bigg],
\label{Eq:Pgen-eng-lowpower}
\end{eqnarray}
where Eq.~(\ref{Eq:eng-bounds-eps0}) was used to write $ e^{\operatorname{-}} V$ in terms of
$\epsilon_0$,
and we defined $x_0=\epsilon_0/(k_{\rm B} T_L)$,
and $g(x)={\rm e}^x/(1+{\rm e}^x)^2$.
Thus, for small $\Delta$ we find that,
\begin{eqnarray}
\eta_{\rm eng}(\Delta) = \eta_{\rm eng}^{\rm Carnot} \left(1-{2\Delta \over 3x_0k_{\rm B} T_L} + \cdots\right).
\label{Eq:Efficiency-in-terms-of-Delta}
\end{eqnarray}
Eq.~(\ref{Eq:eng-bounds-eps1}) gives a
transcendental equation for $x_0$ and $\Delta$.
However, $\Delta$ drops out when it is small, and the transcendental equation
reduces to
\begin{eqnarray}
x_0 \tanh[x_0/2]=3,
\label{Eq:transcendental-small-Delta}
\end{eqnarray}
for which $x_0 \equiv \epsilon_0/(k_{\rm B} T_L) \simeq 3.24$.
Eq.~(\ref{Eq:eng-bounds-eps0}) means that this
corresponds to $ e^{\operatorname{-}} V =3.24 \,k_{\rm B} (T_L-T_R)$, indicated by the circle in Fig.~\ref{Fig:bounds}a.
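Eq.~(\ref{Eq:transcendental-small-Delta}) is easily solved numerically; a minimal bisection sketch (not from the original analysis, but using only the equation above) reproduces $x_0\simeq 3.24$:

```python
import math

# Solve x*tanh(x/2) = 3 by bisection; the left side is
# monotonically increasing for x > 0, so the root is unique.
def f(x):
    return x * math.tanh(x / 2.0) - 3.0

lo, hi = 1.0, 10.0          # f(1) < 0 < f(10), so the root is bracketed
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid
x0 = 0.5 * (lo + hi)
# x0 is approximately 3.24
```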
Now we can use Eq.~(\ref{Eq:Pgen-eng-lowpower}) to lowest order in $\Delta$,
to rewrite Eq.~(\ref{Eq:Efficiency-in-terms-of-Delta}) in terms of
$P_{\rm gen}$. This gives the efficiency for small $P_{\rm gen}$ as,
\begin{eqnarray}
\eta_{\rm eng} \big(P_{\rm gen}\big) = \eta_{\rm eng}^{\rm Carnot}
\left(1- 0.478
\sqrt{ {T_R \over T_L} \ {P_{\rm gen} \over P_{\rm gen}^{\rm qb2}}} \,+ \cdots
\right)\!, \quad
\label{Eq:eta-eng-small-Pgen}
\end{eqnarray}
where the dots indicate terms of order $(P_{\rm gen} /P_{\rm gen}^{\rm qb2})$ or higher.
Eq.~(\ref{Eq:dotS-eng}) then gives the minimum rate of entropy production at power output $P_{\rm gen}$,
\begin{eqnarray}
\dot S \big(P_{\rm gen}\big) = 0.478{P_{\rm gen}^{\rm qb2} \over \sqrt{T_LT_R} }
\left( {P_{\rm gen} \over P_{\rm gen}^{\rm qb2}}\right)^{3/2} \,+ {\cal O}[P_{\rm gen}^2] . \quad
\label{Eq:dotS-eng-small-Pgen}
\end{eqnarray}
Thus, the maximal efficiency at small $P_{\rm gen}$
is that of Carnot minus a term that
grows like $P_{\rm gen}^{1/2}$ (dashed curves in Fig.~\ref{Fig:allpowers}a),
and the associated minimal rate of entropy production goes like $P_{\rm gen}^{3/2}$.
Note that Eq.~(\ref{Eq:Efficiency-in-terms-of-Delta})
shows that Carnot efficiency occurs at
any $x_0$ (i.e.\ any $\epsilon_0$) when $\Delta$ is strictly zero (and so $P_{\rm gen}$ is strictly zero).
However, for arbitrary $x_0$ the factor 0.478
in Eq.~(\ref{Eq:eta-eng-small-Pgen}) is replaced by
$\sqrt{ 8\pi^2 A_0/[9 x_0^3 g(x_0)]}$. The value of $x_0$ that satisfies
Eq.~(\ref{Eq:transcendental-small-Delta})
is exactly the one which minimizes
this prefactor (its minimum being 0.478), and thus maximizes the efficiency for any small but
finite $P_{\rm gen}$.
\section{Guessing the optimal transmission for a refrigerator}
\label{Sect:guess-fri}
Here we use simple arguments to guess the
transmission function which maximizes a refrigerator's efficiency for given cooling power.
The arguments are similar to those for heat-engines (Section~\ref{Sect:guess-heat}), although some crucial differences will appear.
We consider the flow of electrons
from reservoir $L$ to reservoir $R$ (the filled circle in Fig.~\ref{Fig:thermocouple}a,
remembering $ e^{\operatorname{-}} <0$ so electrons flow in the opposite direction to $I$).
To refrigerate, the thermoelectric must absorb electrical power, so the current must be driven by a bias;
this requires $ e^{\operatorname{-}} V$ to be negative,
with $V$ as in Eq.~(\ref{Eq:def-V}).
Inspection of the integrand of Eq.~(\ref{Eq:JL})
shows that it only gives positive contributions to the cooling power
output, $J_L$, when $\big(f^\mu_L(\epsilon) - f^\mu_R(\epsilon)\big) >0$.
Since $T_L< T_R$ and $ e^{\operatorname{-}} V<0$, we can use Eq.~(\ref{Eq:Fermi}) to show that this is
never true for holes ($\mu=-1$), and is only true for
electrons ($\mu=1$) with energies $\epsilon < \epsilon_1$, where
\begin{eqnarray}
\epsilon_1 = - e^{\operatorname{-}} V \big/ (T_R/T_L-1).
\label{Eq:Eps1-guess}
\end{eqnarray}
Thus, it is counter-productive to allow the transmission of electrons with $\epsilon > \epsilon_1$, or the transmission of any holes.
Note that this argument gives us an {\it upper} cut-off on electron transmission energies, despite the fact it gave a {\it lower} cut-off for the heat engine (see Eq.~(\ref{Eq:Eps0-guess}) and the text around it).
All electron ($\mu=1$) energies from zero to $\epsilon_1$ contribute
positively to the cooling power $J_L$.
To maximize the cooling power, one needs to maximize $\big(f^\mu_L(\epsilon) - f^\mu_R(\epsilon)\big)$;
this is done by taking $ e^{\operatorname{-}} V \to -\infty$, for which $\epsilon_1 \to \infty$.
This logic gives the maximum cooling power, which Section~\ref{Sect:fri}
will show equals ${\textstyle{\frac{1}{2}}} J_L^{\rm qb}$.
Now we consider maximizing the efficiency at a given cooling output $J_L$,
when $J_L <{\textstyle{\frac{1}{2}}} J_L^{\rm qb}$.
Comparing the integrands
in Eqs.~(\ref{Eq:JL},\ref{Eq:Pgen}), we see that the extra factor of $\epsilon$ in $J_L$ means that transmitting electrons at low energies has little effect on the cooling power, while costing
a similar electrical power to transmission at higher energies.
Thus, it would seem optimal to have a lower cut-off on
transmission, $\epsilon_0$, which is just low enough to ensure the
desired cooling power $J_L$, but no lower.
Then the transmission function acts as a
``band-pass filter'' (the ``boxcar'' in Fig.~\ref{Fig:tophat-width}),
with $\epsilon_0$ and $\epsilon_1$ further apart for higher cooling power. This is correct;
however, the choice of $V$ affects $\epsilon_0$ and $\epsilon_1$, so the
calculation in Section~\ref{Sect:fri} is necessary to find the $V$, $\epsilon_0$ and $\epsilon_1$ which
maximize the efficiency for cooling power $J_L$.
\section{Maximizing refrigerator efficiency for given cooling power}
\label{Sect:fri}
Here we find the maximum refrigerator efficiency,
also called the coefficient of performance (COP), for given cooling power $J_L$.
The method is very similar to that for heat-engines, and here we mainly summarize the differences.
The refrigerator efficiency increases for a fixed cooling power, $J_L$, if the electrical
power absorbed
$P_{\rm abs}=-P_{\rm gen}$ decreases for fixed $J_L$.
This is so if
\begin{eqnarray}
\left.{\partial P_{\rm abs} \over \partial \tau^{\mu}_\gamma }\right|_{J_L}
&=&
\left.{\partial P_{\rm abs} \over \partial \tau^{\mu}_\gamma }\right|_V
- {P'_{\rm abs} \over J'_L}
\left.{\partial J_L \over \partial \tau^{\mu}_\gamma }\right|_V \ <\ 0, \qquad
\label{Eq:fri-condition}
\end{eqnarray}
where we recall that a prime denotes $({\rm d} / {\rm d} V)$.
This is nothing but Eq.~(\ref{Eq:eng-condition})
with $J_L \to P_{\rm abs}$ and $P_{\rm gen} \to J_L$.
Using Eq.~(\ref{Eq:change-JL-to-change-Pgen}), we see that
$\eta_{\rm fri}(J_L)$ grows with $\tau^{\mu}_\gamma$ for
\begin{eqnarray}
\left( {-\mu e^{\operatorname{-}} V \over \epsilon_\gamma} - {P'_{\rm abs} \over J'_L} \right) \times
\left.{\partial J_L \over \partial \tau^{\mu}_\gamma }\right|_V \ <\ 0,
\label{Eq:fri-condition2}
\end{eqnarray}
where $P_{\rm abs}$, $P'_{\rm abs}$, $J_L$, $J'_L$ and $- e^{\operatorname{-}} V$ are all positive.
To proceed we define the following energies
\begin{eqnarray}
\epsilon_0 &=& - e^{\operatorname{-}} V \,J'_L / P'_{\rm abs},
\label{Eq:fri-bounds-eps0}
\\
\epsilon_1 &=& {- e^{\operatorname{-}} V \big/ (T_R/T_L-1)}.
\label{Eq:fri-bounds-eps1}
\end{eqnarray}
Then one can see that
$ \left.\left({\partial J_L/\partial \tau^{\mu}_\gamma}\right)\right|_V$ is positive
when both $\mu=1$ and $\epsilon < \epsilon_1$,
and is negative otherwise.
Thus, for $\mu=-1$, Eq.~(\ref{Eq:fri-condition2}) is never satisfied.
For $\mu=1$, Eq.~(\ref{Eq:fri-condition2}) is satisfied when $\epsilon_\gamma$ lies between
$\epsilon_0$ and $\epsilon_1$.
A refrigerator is only useful if $J_L>0$ (i.e.\ it removes heat from the cold reservoir),
and this is only true for $\epsilon_0 <\epsilon_1$.
Hence, if $\mu=1$ and $\epsilon_0 <\epsilon<\epsilon_1$, then
$\eta_{\rm fri}(J_L)$ grows upon increasing $\tau^{\mu}_\gamma$.
Thus, the optimum is when such $\tau^{\mu}_\gamma=N$.
For all other $\mu$ and $\epsilon_\gamma$, $\eta_{\rm fri}(J_L)$
grows upon decreasing $\tau^{\mu}_\gamma$.
Thus, the optimum is when such $\tau^{\mu}_\gamma=0$.
This gives the boxcar transmission function in Eq.~(\ref{Eq:top-hat}), with
$\epsilon_0$ and $\epsilon_1$ given by Eqs.~(\ref{Eq:fri-bounds-eps0},\ref{Eq:fri-bounds-eps1}).
Comparing with Eqs.~(\ref{Eq:eng-bounds-eps0},\ref{Eq:eng-bounds-eps1}),
we see these energies are the opposite way around for a refrigerator compared to
a heat-engine (up to a minus sign).
Substituting Eqs.~(\ref{Eq:JL-eng},\ref{Eq:Pgen-eng}) into
Eq.~(\ref{Eq:fri-bounds-eps0}), one gets a transcendental equation for
$\epsilon_0$ as a function of $V$ for given $T_R/T_L$.
This equation is too hard to solve analytically
(except in the high and low power limits, discussed in Sections \ref{Sect:qb-fri} and \ref{Sect:lowpower-fri}).
The red curve in Fig.~\ref{Fig:bounds}b is
a numerical solution for $T_R/T_L=1.5$.
Having found $\epsilon_0$ as a function of $V$ for given $T_R/T_L$,
we can use Eqs.~(\ref{Eq:JL-eng},\ref{Eq:Pgen-eng}) to get
$J_L(V)$ and $P_{\rm abs}(V)=-P_{\rm gen}(V)$. We can then invert the first relation to get
$V(J_L)$. Now, we can find $P_{\rm abs}(J_L)$,
and then use Eq.~(\ref{Eq:eff-fri}) to get the quantity that we desire --- the
maximum efficiency (or COP), $\eta_{\rm fri}(J_L)$, at cooling power $J_L$.
Fig.~\ref{Fig:Delta+V}b gives the values of $\Delta=(\epsilon_1-\epsilon_0)$
and $ e^{\operatorname{-}} V$ which result from
solving the transcendental equation numerically.
As noted, $\epsilon_1$ is related to $ e^{\operatorname{-}} V$ by Eq.~(\ref{Eq:fri-bounds-eps1}).
The qualitative behaviour of the resulting boxcar transmission function is sketched in Fig.~\ref{Fig:tophat-width}.
This numerical evaluation enables us to find efficiency as a function of $J_L$ and $T_R/T_L$,
which we plot in Fig.~\ref{Fig:allpowers}b.
\subsection{Quantum bound on refrigerator cooling power}
\label{Sect:qb-fri}
To find the maximum allowed cooling power, $J_L$, we look for the place where $J'_L=0$.
From Eq.~(\ref{Eq:fri-bounds-eps0}) we see that this immediately implies $\epsilon_0 =0$.
Taking Eq.~(\ref{Eq:JL-eng}) with $\epsilon_0=0$, we note by using Eq.~(\ref{Eq:Fintegral})
that $F_L(0)-F_R(0)$ grows monotonically as one takes $- e^{\operatorname{-}} V \to \infty$.
Similarly, for $\epsilon_1$ given by Eq.~(\ref{Eq:fri-bounds-eps1}),
we note by using Eq.~(\ref{Eq:Fintegral}) and $T_R > T_L$ that
$F_R(\epsilon_1)-F_L(\epsilon_1)$ grows monotonically as one takes $- e^{\operatorname{-}} V \to \infty$.
Thus, we can conclude that $J_L$ is maximal for $- e^{\operatorname{-}} V \to \infty$,
which implies $\epsilon_1 \to \infty$ via Eq.~(\ref{Eq:fri-bounds-eps1}).
Physically, this corresponds to all electrons arriving at the quantum system
from reservoir $L$ being transmitted into reservoir $R$, but all holes arriving from reservoir $L$
being reflected back into reservoir $L$. At the same time, reservoir $R$ is so strongly
biased that it has no electrons with $\epsilon>0$ (i.e.\ no electrons above reservoir $L$'s chemical potential) to
carry heat from R to L.
In this limit, $F_L(\epsilon_1)=F_R(\epsilon_1)=F_R(\epsilon_0)=0$,
so the maximal refrigerator cooling power is
\begin{eqnarray}
J_L = {\pi^2 \over 12 h} N k_{\rm B}^2 T_L^2 ,
\label{Eq:J-qb-fri}
\end{eqnarray}
where we used the fact that ${\rm Li}_2[-1] = -\pi^2/12$.
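The numerical constant $\pi^2/12$ appearing here equals $-{\rm Li}_2(-1)=\sum_{n\geq 1}(-1)^{n+1}/n^2$, which a short series sum confirms (an illustrative sketch):

```python
import math

# Partial sum of -Li_2(-1) = sum_{n>=1} (-1)^(n+1) / n^2,
# which converges to pi^2 / 12 (the alternating zeta value eta(2)).
s = sum((-1) ** (n + 1) / n ** 2 for n in range(1, 200001))
# alternating series => truncation error below 1/200001^2 ~ 2.5e-11
assert abs(s - math.pi ** 2 / 12.0) < 1e-9
```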
This is exactly half the quantum bound on the heat current that can flow out of reservoir $L$ given in
Eq.~(\ref{Eq:Jqb}). That quantum bound is achieved by coupling reservoir $L$ to another reservoir at a temperature of
absolute zero, through a contact with $N$ transverse modes.
By definition, a refrigerator cools reservoir $L$ below the temperature of the other reservoirs around it;
we have shown that its cooling power in doing so is always less than or
equal to $J_L^{\rm qb}/2$. However, it is intriguing that the maximum cooling power is independent
of the temperature, $T_R$, of the environment of the reservoir being cooled (reservoir $L$).
In short, the best refrigerator can remove all electrons (or all holes) that reach it from reservoir $L$,
but it cannot remove all electrons {\it and} all holes at the same time.
It is easy to see that the efficiency of the refrigerator (COP) at this maximum possible cooling power
is zero, simply because $|V| \to \infty$, so the power absorbed $P_{\rm abs} \to \infty$.
However, one gets exponentially close to this limit for
$- e^{\operatorname{-}} V \gg k_{\rm B} T_R$, for which $P_{\rm abs}$ is large but finite, and so $\eta_{\rm fri}(J_L)$ remains finite (see Fig.~\ref{Fig:allpowers}b).
\subsection{Optimal refrigerator at low cooling power}
\label{Sect:lowpower-fri}
Now we turn to the opposite limit, that of low cooling power output, $J_L\ll J_L^{\rm qb}$,
where we expect the maximum efficiency to be close to Carnot efficiency. In this limit,
$\epsilon_0$ is close to $\epsilon_1$.
Defining $\Delta= \epsilon_1- \epsilon_0$, we
expand Eqs.~(\ref{Eq:JL-eng},\ref{Eq:Pgen-eng}) in small $\Delta$ up to order $\Delta^3$.
This gives
\begin{eqnarray}
J_L &=& {P_{\rm abs} \over T_R/T_L-1}
\ -\ {N\,\Delta^3 \,(T_R/T_L-1) \over 3h\,k_{\rm B} T_R} g\!\left(x_1\right), \qquad
\label{Eq:JL-fri-lowpower}
\\
P_{\rm abs}&=& {N\,\epsilon_1 \,\Delta^2 \,(T_R/T_L-1)^2 \over 2h\,k_{\rm B} T_R}
\nonumber \\
& & \times \bigg[ g\!\left(x_1\right)
- {\Delta \, (T_R/T_L+1) \over 3 \, k_{\rm B} T_R} \,{{\rm d} g(x_1)\over {\rm d} x_1}
\bigg],
\label{Eq:Pabs-fri-lowpower}
\end{eqnarray}
where Eq.~(\ref{Eq:fri-bounds-eps1}) was used to write $ e^{\operatorname{-}} V$ in terms of
$\epsilon_1$,
and we define $x_1=\epsilon_1/(k_{\rm B} T_L)$,
and $g(x)={\rm e}^x/(1+{\rm e}^x)^2$.
Thus, for small $\Delta$ we find that
the efficiency is
\begin{eqnarray}
\eta_{\rm fri}(\Delta) = \eta_{\rm fri}^{\rm Carnot} \left(1-{2\Delta \over 3x_1k_{\rm B} T_L} + \cdots\right).
\label{Eq:COP-in-terms-of-Delta}
\end{eqnarray}
Note that this is the same as Eq.~(\ref{Eq:Efficiency-in-terms-of-Delta}) for the heat-engine at low power output,
except that $x_0$ is replaced by $x_1$, and the Carnot efficiency is that of the refrigerator
rather than that of the heat-engine.
Eq.~(\ref{Eq:fri-bounds-eps0}) gives a
transcendental equation for $x_1$ and $\Delta$.
However, $\Delta$ drops out when it is small,
and the transcendental equation reduces to
\begin{eqnarray}
x_1 \tanh [x_1/2]=3,
\label{Eq:condition-fri-smallJ}
\end{eqnarray}
for which $x_1\equiv \epsilon_1/(k_{\rm B} T_L) = 3.2436\cdots$.
Again this is the same as for a heat-engine, Eq.~(\ref{Eq:transcendental-small-Delta}),
but with $x_1$ replacing $x_0$.
Eq.~(\ref{Eq:fri-bounds-eps1}) means that this
corresponds to $- e^{\operatorname{-}} V =3.2436 \,k_{\rm B} (T_R-T_L)$, indicated by the circle in Fig.~\ref{Fig:bounds}b.
Now we can use Eq.~(\ref{Eq:JL-fri-lowpower}) to lowest order in $\Delta$,
to rewrite Eq.~(\ref{Eq:COP-in-terms-of-Delta}) in terms of
$J_L$. This gives the efficiency (or coefficient of performance, COP)
for small $J_L$ as,
\begin{eqnarray}
\eta_{\rm fri}(J_L) = \eta_{\rm fri}^{\rm Carnot}
\left(1- 1.09
\sqrt{
\,{T_R \over T_R-T_L}\ {J_L \over J_L^{\rm qb}} }\, + \cdots \right)\!,
\nonumber \\
\label{Eq:eta-fri-smallJ}
\end{eqnarray}
where the dots indicate terms of order $(J_L/J_L^{\rm qb})$ or higher.
Eq.~(\ref{Eq:dotS-fri}) gives the minimum rate of entropy generation at cooling power output $J_L$, as
\begin{eqnarray}
\dot S \big(J_L\big) =
1.09
{J^{\rm qb}_L \over T_L}\sqrt{1-{T_L\over T_R}}
\left({J_L \over J_L^{\rm qb}} \right)^{3/2}\, + {\cal O}[J_L^2].
\nonumber \\
\label{Eq:dotS-fri-smallJ}
\end{eqnarray}
Thus, we conclude that the maximum efficiency at small $J_L$ is that of Carnot minus a term that
grows like $J_L^{1/2}$ (dashed curves in Fig.~\ref{Fig:allpowers}b), while the associated
minimum entropy production goes like $J_L^{3/2}$.
We note that Carnot efficiency occurs at $J_L =0$ at any $x_1=\epsilon_1/(k_{\rm B} T_L)$.
However, then the 1.09 factor in Eq.~(\ref{Eq:eta-fri-smallJ})
becomes $\sqrt{ 4\pi^2/[27 x_1^3 g(x_1)]}$.
The condition in Eq.~(\ref{Eq:condition-fri-smallJ}) minimizes this factor (the minimum being 1.09),
and thereby maximizes the efficiency for given $J_L$.
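The minimization just described can be checked numerically; the sketch below (illustrative, using only the quantities defined above) confirms that the prefactor $\sqrt{4\pi^2/[27 x_1^3 g(x_1)]}$ attains its minimum value $\simeq 1.09$ at the $x_1$ solving Eq.~(\ref{Eq:condition-fri-smallJ}):

```python
import math

def g(x):
    # g(x) = e^x / (1 + e^x)^2, as defined in the text
    return math.exp(x) / (1.0 + math.exp(x)) ** 2

def prefactor(x):
    # sqrt(4 pi^2 / (27 x^3 g(x))), the factor replacing 1.09
    # in Eq. (eta-fri-smallJ) for arbitrary x
    return math.sqrt(4.0 * math.pi ** 2 / (27.0 * x ** 3 * g(x)))

x1 = 3.2436    # solution of x*tanh(x/2) = 3, Eq. (condition-fri-smallJ)
p_min = prefactor(x1)
# p_min is close to 1.09, and below the prefactor at nearby x
```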
\section{Implementation with a chain of quantum systems}
\label{Sect:chain}
\begin{figure}[t]
\includegraphics[width=\columnwidth]{figure10.pdf}
\caption{\label{Fig:band}
(a) A chain of single level quantum dots with their energy levels aligned at energy $E_0$.
(b) Transmission function when all hoppings are equal (note the strong oscillations).
(c) Transmission function when all hoppings are carefully chosen (see text).
To aid comparison all bandwidths in the plots have been normalized.
}
\end{figure}
The previous sections have shown that maximum efficiency
(at given power output) occurs when the thermoelectric system has a boxcar transmission function
with the right position and width. In the limit of maximum power, the boxcar becomes a
Heaviside step-function. Here, we give a detailed recipe for engineering such transmission functions
for non-interacting electrons, and then discuss how to include mean-field interaction effects.
A Heaviside step-function is easily
implemented with a point contact, whose transmission function is
\cite{Buttiker-pointcont},
\begin{eqnarray}
{\cal T}_{\rm L,isl}(\epsilon) = \left(1+ \exp \left[- {\epsilon-E(V) \over D_{\rm tunnel} }\right] \right)^{-1},
\label{Eq:transmission-pc}
\end{eqnarray}
where $E(V)$ is the height of the energy barrier induced by the point contact,
and $D_{\rm tunnel}$ is a measure of tunnelling through the point contact.
A sufficiently long point contact exhibits negligible tunnelling, $D_{\rm tunnel} \to 0$,
so the transmission
function simplifies to the desired Heaviside step-function, $\theta[\epsilon-E(V)]$.
For a potential implementation of a boxcar function we consider a
chain of sites (quantum dots or molecules) with one level per site, as sketched in Fig.~\ref{Fig:band}a.
The objective is that
the hoppings between sites, $\{t_i\}$,
will cause the states to hybridize to form a band centred at $E_0$, with a width
given by the hopping\cite{Buttiker-private-comm}.
Neglecting electron-electron interactions,
the hopping Hamiltonian for five sites in the chain ($k=5$) can be written as
\begin{eqnarray}
{\cal H}_{\rm chain} = \left(\begin{array}{ccccc}
-{\rm i} a_0 /2 \ & t_1 & 0 & 0 & 0 \\
t_1 & 0 & t_2 & 0 & 0 \\
0 & t_2 & 0 & t_3 & 0 \\
0 & 0 & t_3 & 0 & t_4 \\
0 & 0 & 0 & t_4 & \ -{\rm i} a_0/2
\end{array}\right).
\end{eqnarray}
This is easily generalized to arbitrary chain length, $k$.
Here we treat $a_0$ as a phenomenological parameter; in reality it would be given by $|t_0|^2$ multiplied by the density of states in the reservoir.
The fact that particles escape from the chain into the reservoirs means that the wavefunction for any given particle
in the chain will decay with time.
To model this, the Hamiltonian must be non-Hermitian, with the non-Hermiticity entering in the matrix elements for coupling to the reservoirs (top-left and bottom right matrix elements).
These induce an imaginary contribution to each eigenstate's energy $E_i$, with
the wavefunction of any eigenstate decaying at a rate given by the imaginary part of $E_i$.
The non-Hermiticity of $ {\cal H}_{\rm chain}$ also means that its left and right eigenvectors differ.
Defining $\big | \psi_i^{\rm (r)}\big \rangle$ as the $i$th right eigenvector of the matrix
${\cal H}_{\rm chain}$, and $\big\langle\psi_i^{\rm (l)} \big|$ as the $i$th left eigenvector,
we have
$\big\langle\psi_i^{\rm (l)} \big | \psi_j^{\rm (r)}\big \rangle = \delta_{ij}$ and
$\big\langle\psi_i^{\rm (l)} \big | {\cal H}_{\rm chain} \big | \psi_i^{\rm (r)}\big \rangle = E_i$.
The resolution of unity is
$\sum_{i} \big | \psi_i^{\rm (r)}\big \rangle \, \big\langle\psi_i^{\rm (l)} \big | = {\bm 1}$,
where ${\bm 1}$ is the $k$-by-$k$ unit matrix.
We define $|1\rangle$ as the vector whose first element is one while all its other elements are zero, and
$|k\rangle$ as the vector whose last element (the $k$th element) is one while all its other elements are zero.
Then the transmission probability at energy $\epsilon$ is given by
\begin{eqnarray}
{\cal T}_{RL}(\epsilon) &=&
\left| \big\langle k \big| \ \left[\epsilon -{\cal H}_{\rm chain}\right]^{-1} \big| 1 \big\rangle
\right|^2 \ a_0 \, ,
\end{eqnarray}
where $[\cdots]^{-1}$ is a matrix inverse.
To evaluate this matrix inverse, we introduce a resolution of unity
to the left and right of $\left[\epsilon -{\cal H}_{\rm chain}\right]^{-1}$.
This gives
\begin{eqnarray}
{\cal T}_{RL} &=&
\sum_i \
\left|{\big\langle k \big | \psi_i^{\rm (r)}\big\rangle \ \big\langle\psi_i^{\rm (l)} \big| 1 \big\rangle
\over \epsilon-E_i} \right|^2 \ a_0.
\label{Eq:T-for-chain}
\end{eqnarray}
For any given set of hoppings $a_0, t_1,\cdots t_k$, one can easily
use a suitable eigenvector finder (we used Mathematica) to evaluate this equation numerically,
while an analytic solution is straightforward\cite{Grenier-private} for $k\leq 3$.
When all hoppings in the chain are equal,
there is a mismatch between the electrons' hopping dynamics in the chain and their free motion in the reservoirs. This causes resonances in the transmission, giving the
Fabry-Perot-type oscillations in Fig.~\ref{Fig:band}b for $k=5$.
However, we can carefully tune the hoppings (to be smallest in the middle of the chain and increasing towards the ends) to get the smooth transmission functions in Fig.~\ref{Fig:band}c.
The $k=5$ curve in Fig.~\ref{Fig:band}c has $t_1=t_4= 0.39a_0$ and $t_2=t_3= 0.28 a_0$,
and we choose $a_0= 1.91$ to normalize the band width to 1.
As the number of sites in the chain, $k$, increases, the transmission function tends to the desired boxcar function.
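To illustrate, the following sketch (not the code used for Fig.~\ref{Fig:band}; a small complex Gaussian-elimination solver stands in for Mathematica, and the variable names are ours) evaluates the transmission of Eq.~(\ref{Eq:T-for-chain}), as written, for the $k=5$ parameters quoted above:

```python
# k = 5 chain with the tuned hoppings quoted in the text;
# solves (eps - H) x = |1>, then T_RL(eps) = a0 * |<k|x>|^2,
# i.e. the transmission formula evaluated exactly as written.
a0 = 1.91
t1 = t4 = 0.39 * a0
t2 = t3 = 0.28 * a0

H = [
    [-0.5j * a0, t1,  0.0, 0.0, 0.0],
    [t1,         0.0, t2,  0.0, 0.0],
    [0.0,        t2,  0.0, t3,  0.0],
    [0.0,        0.0, t3,  0.0, t4 ],
    [0.0,        0.0, 0.0, t4,  -0.5j * a0],
]

def solve(A, b):
    """Gaussian elimination with partial pivoting (complex entries)."""
    n = len(b)
    M = [list(A[i]) + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0j] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def transmission(eps):
    A = [[(eps if i == j else 0.0) - H[i][j] for j in range(5)] for i in range(5)]
    x = solve(A, [1.0, 0.0, 0.0, 0.0, 0.0])
    return a0 * abs(x[4]) ** 2

T_centre = transmission(0.0)   # middle of the band (E_0 = 0)
T_far = transmission(10.0)     # far outside the band
```

The transmission is large at the band centre and negligible far outside the band, as in Fig.~\ref{Fig:band}c.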
The above logic assumes no electron-electron interactions.
When we include interaction effects at the mean-field level, things get more complicated.
If the states in the chain are all at the same energy $E_0$ when the chain is unbiased,
they will not be aligned when there is a bias between the reservoirs, because the reservoirs
also act as gates on the chain states. To engineer a chain where the energies are aligned at the optimal bias, one must adjust the confinement
potential of the dots in the chain (or adjust the chemistry of the molecules in the chain)
so that their energies are sufficiently out of alignment at zero bias that they all align
at optimal bias. In principle, we have the control to do this. However, in practice it would
require a great deal of trial-and-error experimental fine tuning.
We do not enter further into such practical issues here.
Rather, we use the above example to show that there is no {\it fundamental} reason that the bound
on efficiency cannot be achieved.
\section{Many quantum systems in parallel}
\label{Sect:in-parallel}
To increase the efficiency at given power output, one must increase the number of transverse modes, $N$.
This is because the efficiency decays with the power output divided by
the quantum bounds in Eqs.~(\ref{Eq:P-qb2},\ref{Eq:J-qb-fri}),
and these bounds go like $N$.
However, a strong thermoelectric response requires a
transmission function that is highly energy-dependent; this typically occurs only when the
quantum system (point contact, quantum dot or molecule) has dimensions of about a wavelength,
which implies that $N$ is of order one.
Crucial exceptions (beyond the scope of this work) are systems containing superconductors,
either SNS structures\cite{Pekola-reviews} or Andreev interferometers \cite{Chandra98}
(see also Ref.~[\onlinecite{jw-epl}] and references therein),
where strong thermoelectric effects occur for large $N$.
In the absence of a superconductor, the only way to get large $N$
is to construct a device consisting of many $N=1$ systems in parallel, such as
a surface covered with a certain density of such systems
\cite{Jordan-Sothmann-Sanchez-Buttiker2013,Sothmann-Sanchez-Jordan-Buttiker2013}. In this case $P_{\rm gen}^{\rm qb2}$ and $J_{\rm L}^{\rm qb}$ in
Eqs.~(\ref{Eq:P-qb2},\ref{Eq:J-qb-fri}) become
bounds on the power per unit area, with $N$ being replaced by the number of transverse modes
per unit area.
With this one modification, all calculations and results in this article
can be applied directly to such a situation.
Carnot efficiency is achieved for a large enough surface area that
the power per unit area is much less than $P_{\rm gen}^{\rm qb2}$ and $J_{\rm L}^{\rm qb}$.
It is worth noting that the number of modes per unit area cannot exceed $\lambda_{\rm F}^{-2}$, for Fermi wavelength
$\lambda_{\rm F}$.
From this we can get a feeling for the magnitude of the bounds discussed in this article.
Take a typical semiconductor thermoelectric (with $\lambda_{\rm F}\sim 10^{-8}$m),
placed between reservoirs at 700 K and 300 K (typical temperatures for a thermoelectric recovering electricity
from the heat in the exhaust gases of a diesel engined car).
Eq.~(\ref{Eq:P-qb2}) tells us that to get 100 W of power output from a semiconductor thermoelectric
one needs a cross section of at least 4 mm$^2$.
Then Eq.~(\ref{Eq:eta-eng-small-Pgen}) tells us that to get this power at 90\% of Carnot efficiency,
one needs a cross section of at least 0.4 cm$^2$.
Remarkably, it is {\it quantum mechanics} which gives these bounds,
even though the cross sections in question are macroscopic.
\section{Phonons and photons carrying heat in parallel with electrons}
\label{Sect:ph}
\begin{figure}
\includegraphics[width=\columnwidth]{figure11.pdf}
\caption{\label{Fig:thermocouple-phonons}
The thermocouple heat-engine in Fig.~\ref{Fig:thermocouple}, showing the heat flow due
to phonon and photons, which carry heat from hot to cold by all possible routes (in parallel with the heat carried by the electrons).
This always reduces the efficiency, so it should be minimized with suitable thermal insulation.}
\end{figure}
Any charge-less excitation (such as phonons or photons)
will carry heat from hot to cold, irrespective of the thermoelectric properties of the system.
While some of the phonons and photons will flow through the thermoelectric quantum system,
most will flow via other routes, see Fig.~\ref{Fig:thermocouple-phonons}.
A number of theories for these phonon or photon heat currents
take the form
\begin{eqnarray}
J_{\rm ph}= \alpha (T_L^\kappa-T_R^\kappa),
\label{Eq:J_ph}
\end{eqnarray}
where $J_{\rm ph}$ is the heat flow out of the L reservoir due to phonons or photons.
The textbook example of such a theory is black-body radiation between the two reservoirs,
in which case $\kappa=4$ and $\alpha$ is the Stefan-Boltzmann constant.
An example relevant to suspended sub-Kelvin nanostructures
is a situation where a finite number $N_{\rm ph}$ of
phonon or photon modes carry heat between the two reservoirs
\cite{Pendry1983,photons,phonons,2012w-pointcont},
in which case $\kappa=2$ and $\alpha \leq N_{\rm ph}\pi^2 k_{\rm B}^2/(6h)$.
One of the biggest practical challenges for quantum thermoelectrics is that phonons and photons
will often carry much more heat than the electrons. This is simply because the hot reservoir
can typically radiate heat in all directions as phonons or photons, while electrons only carry heat
through the few nanostructures connected to that reservoir.
Thus, in many cases the phonon or photon heat flow will dominate over the electronic one.
However, progress is being made in blocking phonon and photon flow, by suspending the nanostructure
to minimize phonon flow \cite{phonons} and engineering the electromagnetic environment to minimize photon flow \cite{photons}, and it can be hoped that phonon and photon effects
will be greatly reduced in the future.
Hence, here we consider the full range from weak to strong phonon or photon heat flows.
For compactness in what follows we will only refer to phonon heat flows (usually the dominant parasitic effect).
However, strictly one should consider $J_{\rm ph}$ as the sum of the heat flow carried by phonons, photons and any more exotic charge-less excitations that might exist in a given circuit (mechanical oscillations, spin-waves, etc.).
\begin{figure}
\includegraphics[width=\columnwidth]{figure12.pdf}
\caption{\label{Fig:allpowers-phonons}
Plots of the maximum efficiency allowed when there is a phonon heat flow, $J_{\rm ph}$,
in parallel with the heat carried by the electrons.
The curves in (a) are for $T_R/T_L=0.2$, with $J_{\rm ph} = 0, 0.01, 0.1, 1$ (from top to bottom); the curves
come from Eq.~(\ref{Eq:eng-e+ph}) with $\eta_{\rm eng}(P_{\rm gen})$ given
in Fig.~\ref{Fig:allpowers}a.
The curves in (b) are for $T_R/T_L=1.5$, with $J_{\rm ph} = 0, 0.02, 0.1, 0.4$ (from top to bottom); the curves
come from Eq.~(\ref{Eq:fri-e+ph}) with $\eta_{\rm fri}(J)$ given
in Fig.~\ref{Fig:allpowers}b.
The maximum cooling power (open circles) is $({\textstyle{\frac{1}{2}}} J^{\rm qb}_L-J_{\rm ph})$.
}
\end{figure}
\subsection{Heat-engine with phonons}
For heat-engines, the phonon heat flow is in parallel with the electronic heat flow, so
the heat flow for a given $P_{\rm gen}$ is $(J_L+J_{\rm ph})$, rather than just $J_L$
(as it was in the absence of phonons).
Thus, the efficiency in the presence of the phonons is
\begin{eqnarray}
\eta^{\rm e+ph}_{\rm eng}(P_{\rm gen})={P_{\rm gen} \over J_L(P_{\rm gen})+J_{\rm ph}}.
\end{eqnarray}
Writing this in terms of the efficiency, we get
\begin{eqnarray}
\eta_{\rm eng}^{\rm e+ph} (P_{\rm gen})
&=& \big[ \eta_{\rm eng}^{-1} (P_{\rm gen}) +J_{\rm ph} / P_{\rm gen} \big]^{-1},
\label{Eq:eng-e+ph}
\end{eqnarray}
where $\eta_{\rm eng} (P_{\rm gen})$ is the efficiency for $J_{\rm ph}=0$.
Given the maximum efficiency at given power in the absence of phonons,
we can use this result to find the maximum efficiency for a given phonon heat flow,
$J_{\rm ph}$.
An example of this is shown in Fig.~\ref{Fig:allpowers-phonons}a.
It shows that for finite $J_{\rm ph}$, Carnot efficiency is not possible at any power output.
Phonons have a huge effect on the efficiency at small power output.
Whenever $J_{\rm ph}$ is non-zero, the efficiency vanishes at zero power output,
with
\begin{eqnarray}
\eta^{\rm e+ph}_{\rm eng}(P_{\rm gen})=P_{\rm gen}\big/J_{\rm ph} \ \ \ \hbox{ for } \ P_{\rm gen} \ll J_{\rm ph}.
\label{Eq:eta-with-phonons-smallP}
\end{eqnarray}
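A quick numerical sketch of this limit (illustrative values; the finite Carnot value stands in for $\eta_{\rm eng}(P_{\rm gen}\to 0)$):

```python
# Eq. (eng-e+ph): eta = 1 / (1/eta_eng + J_ph/P_gen).
# Since eta_eng(P_gen -> 0) tends to a finite (Carnot) value,
# the J_ph/P_gen term dominates for P_gen << J_ph,
# giving eta ~ P_gen / J_ph, Eq. (eta-with-phonons-smallP).
eta_carnot = 0.8   # illustrative value of eta_eng(P_gen -> 0)
J_ph = 1.0         # phonon heat flow, arbitrary units

for P in (1e-3, 1e-4, 1e-5):
    full = 1.0 / (1.0 / eta_carnot + J_ph / P)
    assert abs(full - P / J_ph) / (P / J_ph) < 2e-3
```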
As $J_{\rm ph}$ increases, the range of applicability of this small $P_{\rm gen}$ approximation (shown as dashed lines in Fig.~\ref{Fig:allpowers-phonons}) grows towards the maximum power $P_{\rm gen}^{\rm qb2}$ (open circles).
In contrast, phonon heat flows have little effect on the efficiency near the maximum power output, until these flows become strong enough that $J_{\rm ph} \sim P_{\rm gen}$.
For strong phonon flow, where $J_{\rm ph} \gg P_{\rm gen}$,
Eq.~(\ref{Eq:eta-with-phonons-smallP}) applies at all powers up to the maximum, $P_{\rm gen}^{\rm qb2}$.
Then, the efficiency is maximal when the power is maximal, where maximal power is the quantum bound given in Eq.~(\ref{Eq:P-qb2}).
Thus, the system with both maximal power and maximal efficiency is that with
a Heaviside step transmission function (see section~\ref{Sect:chain}).
\subsection{Refrigerator with phonons}
For a refrigerator to extract heat from a reservoir at rate $J$ in
the presence of phonons carrying a back flow of heat $J_{\rm ph}$,
that refrigerator must extract heat at a rate $J_L=J+J_{\rm ph}$.
Note that for clarity, in this section we take $J_{\rm ph}$ to be positive when $T_L< T_R$
(opposite sign of that in Eq.~(\ref{Eq:J_ph})).
Thus, the efficiency, or COP, in the presence of phonons,
is the heat current extracted, $J$, divided by the electrical power
required to extract heat at the rate $J_L=(J+J_{\rm ph})$.
This means that
\begin{eqnarray}
\eta_{\rm fri}^{\rm e+ph}(J) &=& {J \, \eta_{\rm fri}(J+J_{\rm ph}) \over J+J_{\rm ph}} ,
\label{Eq:fri-e+ph}
\end{eqnarray}
where $\eta_{\rm fri} (J)$ is the efficiency for $J_{\rm ph}=0$.
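Eq.~(\ref{Eq:fri-e+ph}) is straightforward to implement as a wrapper around the phonon-free COP curve. In the sketch below, {\tt toy\_cop} is an invented stand-in for $\eta_{\rm fri}(J)$ (it is {\it not} the scattering-theory result); only {\tt cop\_with\_phonons} encodes the equation itself.

```python
def cop_with_phonons(J, J_ph, cop_no_phonons):
    """Eq. (fri-e+ph): to deliver useful cooling at rate J against a
    phonon back-flow J_ph, the electrons must extract J_L = J + J_ph,
    of which only the fraction J / J_L is useful."""
    J_L = J + J_ph
    return (J / J_L) * cop_no_phonons(J_L)

def toy_cop(J_L, cop_carnot=5.0, J_max=1.0):
    """Invented, monotonically decreasing phonon-free COP curve,
    standing in for the actual eta_fri(J_L) of the text."""
    return cop_carnot * (1.0 - (J_L / J_max) ** 0.5)
```

Note that for $J \ll J_{\rm ph}$ the wrapper reduces to $J\,\eta_{\rm fri}(J_{\rm ph})/J_{\rm ph}$, the small-cooling-power behaviour quoted at the end of this subsection.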
We can use this result to find the maximum efficiency for a given phonon heat flow,
$J_{\rm ph}$.
An example is shown in Fig.~\ref{Fig:allpowers-phonons}b.
Eq.~(\ref{Eq:fri-e+ph}) means that the phonon flow suppresses the maximum cooling power,
so $J$ must now obey
\begin{eqnarray}
J&\leq& {\textstyle{\frac{1}{2}}} J_L^{\rm qb} -J_{\rm ph}
\label{Eq:Jqb-phonons}
\end{eqnarray}
with $J_L^{\rm qb}$ given in Eq.~(\ref{Eq:Jqb}).
Thus, the upper bound (open circles) in Fig.~\ref{Fig:allpowers-phonons}b
moves to the left as $J_{\rm ph}$ increases.
When the reservoir being refrigerated (reservoir $L$) is at ambient temperature, $T_R$,
then $J_{\rm ph}=0$ while $J_L^{\rm qb}$ is finite. However, as reservoir $L$ is refrigerated (reducing $T_L$),
$J_{\rm ph}$ grows, while $J_L^{\rm qb}$ shrinks.
As a result, at some point (before $T_L$ gets to zero) one arrives at $J_{\rm ph} = {\textstyle{\frac{1}{2}}} J_L^{\rm qb}$,
and further cooling of reservoir $L$ is impossible. Thus, given the $T_L$-dependence of $J_{\rm ph}$ for a given system, one can easily find the lowest temperature
to which reservoir $L$ can be refrigerated, by solving the equation
$J_{\rm ph} = {\textstyle{\frac{1}{2}}} J_L^{\rm qb}$ for $T_L$.
To achieve this temperature, one needs
the refrigerator with the maximum cooling power (rather than the most efficient one); this
is a system with a Heaviside step transmission function (see section~\ref{Sect:chain}). Such a system's refrigeration capacities were discussed in Ref.~[\onlinecite{2012w-pointcont}].
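The condition $J_{\rm ph} = {\textstyle{\frac{1}{2}}} J_L^{\rm qb}$ is easy to solve numerically by bisection. The sketch below uses invented toy models for both temperature dependences (a $T^4$-type phonon back-flow, and the $J_L^{\rm qb}\propto T_L^2$ scaling); the prefactors are arbitrary illustrations, not derived values.

```python
def lowest_base_temperature(J_ph, J_qb, T_R, tol=1e-10):
    """Bisect on f(T) = J_ph(T) - J_qb(T)/2 over (0, T_R].  Cooling of
    reservoir L stops at the T_L where f changes sign: below it the
    phonon back-flow exceeds half the quantum bound on heat extraction."""
    f = lambda T: J_ph(T) - 0.5 * J_qb(T)
    lo, hi = 1e-12, T_R   # f(lo) > 0 and f(hi) < 0 for the models below
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy temperature dependences with made-up prefactors:
T_R = 1.0
J_ph = lambda T_L: 0.2 * (T_R**4 - T_L**4)   # phonon back-flow into L
J_qb = lambda T_L: T_L**2                    # quantum bound on J_L
```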
We also note that, as with the heat-engine, phonons have a huge effect on the efficiency at small cooling power,
as can be seen in Fig.~\ref{Fig:allpowers-phonons}b.
Whenever $0< J_{\rm ph}<{\textstyle{\frac{1}{2}}} J_L^{\rm qb}$, the efficiency vanishes for small cooling power, with
\begin{eqnarray}
\eta^{\rm e+ph}_{\rm fri}(J)=J \ {\eta_{\rm fri}(J_{\rm ph}) \over J_{\rm ph}}\ \ \ \ \hbox{ for } \ J \ll J_{\rm ph}.
\end{eqnarray}
\section{Relaxation in a quantum system without B-field}
\label{Sect:Relax}
Elsewhere in this article, we neglected relaxation in the quantum system. In other words, we assumed
that electrons traverse the system in a time much less than the time for
inelastic scattering from phonons, photons or other electrons.
We now consider systems in which there is such relaxation, and ask
if this relaxation could enable a system to exceed the bounds found above for relaxationless systems.
To make progress, we restrict our interest to systems with negligible
external magnetic field (B-field) \cite{Footnote:Error-my-PRL}.
As yet, we have not been able to consider the rich interplay of relaxation and B-field
\cite{Casati2011,Sanchez-Serra2011,Entin-Wohlman2012}.
We use the voltage-probe model \cite{voltage-probe} shown in Fig.~\ref{Fig:relax}a.
A system with relaxation is modeled as a phase-coherent scatterer coupled to a
fictitious reservoir $M$ (a region in which relaxation occurs instantaneously).
The rate of the relaxation is controlled by the transmission
of the lead coupling to reservoir $M$.
We then separate the phase-coherent scatterer
into scatterers 1,2 and 3, as shown in Fig.~\ref{Fig:relax}b, each with their own transmission functions
${\cal T}_{ij}(\epsilon)$ with $i,j \in L,M,R$.
We assume that the transmission is unchanged under reversal of direction,
so ${\cal T}_{ij}(\epsilon)={\cal T}_{ji}(\epsilon)$ for all $\epsilon$ and $i,j$.
This condition is guaranteed by time-reversal symmetry whenever the B-field has a negligible effect on the electron and hole dynamics. However, it also applies for any B-field when all particles relax as they traverse the quantum system (then ${\cal T}_{LR}(\epsilon)={\cal T}_{RL}(\epsilon)=0$, which is sufficient
to force ${\cal T}_{ij}(\epsilon)={\cal T}_{ji}(\epsilon)$ for all $i,j$).
\begin{figure}
\includegraphics[width=\columnwidth]{figure13.pdf}
\caption{\label{Fig:relax}
(a) A quantum system in which relaxation occurs is modelled phenomenologically by
a coherent quantum system coupled to a third fictitious reservoir $M$ in which the relaxation occurs.
(b) The same model after we have separated the system's scattering matrix
into three components. The dashed arrows are the exchange of phonons or photons.
The arm containing scatterers 1 and 2 is shown in (c) for a heat-engine,
and in (d) for a refrigerator.
}
\end{figure}
If the relaxation involves electron-phonon or electron-photon interactions
(typically any system which is not sub-Kelvin),
the phonons or photons with which the electrons interact usually
flow easily between the system and the reservoirs.
Thus, these phonons or photons can carry heat current
between the fictitious reservoir $M$ and reservoirs $L,R$ (dashed arrows in Fig.~\ref{Fig:relax}).
The total electrical and heat currents into reservoir $M$ must be zero,
and this constraint determines reservoir $M$'s bias, $V_M$, and temperature, $T_M$.
\subsection{Method of over-estimation}
The optimal choice of ${\cal T}_{ML}$ and ${\cal T}_{RM}$ depends on
$T_M$, while $T_M$ depends on the heat current,
and thus on ${\cal T}_{ML}$ and ${\cal T}_{RM}$.
The solution and optimization of this self-consistency problem has been beyond our ability
to resolve, even though we have restricted ourselves to a simple model of relaxation in a system with negligible B-field.
Instead, we make a simplification
which leads to an {\it over-estimate} of the efficiency.
We assume $V_M,T_M$ are free parameters (not determined from ${\cal T}_{ML}$ and ${\cal T}_{RM}$),
with $T_M$ between $T_L$ and $T_R$.
If we find the optimal ${\cal T}_{ML}$ and ${\cal T}_{RM}$ for given $T_M$, and
then find the optimal $T_M$ (irrespective of whether it is consistent with ${\cal T}_{ML}$ and ${\cal T}_{RM}$ or not), we have an over-estimate of the
maximal efficiency.
Even with this simplification, we have only been able to address the low-power and high-power limits.
However, we show below that this over-estimate is sufficient to prove the following.
\begin{itemize}
\item[(1)] At low power, relaxation cannot make the system's efficiency exceed that of the optimal relaxation-free system
with $N_{\rm max}$ modes.
\item[(2)] Relaxation cannot make a system's power exceed that of the maximum possible power of a
relaxation-free system with $N_{\rm max}$ modes.
\end{itemize}
Here, $N_L$ and $N_R$ are defined as
the number of transverse modes in the system to the left and right of the region where relaxation occurs,
and
\begin{eqnarray}
N_{\rm max}={\rm max}[N_L,N_R].
\label{Eq:Nmax}
\end{eqnarray}
\subsection{Efficiency of heat-engine with relaxation}
\label{Sect:eng-eff-relax}
To get the efficiency for our model of a quantum system with relaxation, we must find the efficiency for
the system in Fig.~\ref{Fig:relax}b. This system has two ``arms''.
One arm contains scatterers 1 and 2, and we define its efficiency as $\eta_{\rm eng}^{(1\&2)}$.
The other arm contains scatterer 3,
and we define its efficiency as $\eta_{\rm eng}^{(3)}$.
The efficiency of the full system, $ \eta_{\rm eng}^{\rm total}(P_{\rm gen})$, is given by
\begin{eqnarray}
{1 \over \eta_{\rm eng}^{\rm total}(P_{\rm gen}) } =
{p_{\rm rel} \over \eta_{\rm eng}^{(1\&2)} (p_{\rm rel}P_{\rm gen}) }
+{q_{\rm rel} \over \eta_{\rm eng}^{(3)} ( q_{\rm rel}P_{\rm gen}) }.
\label{Eq:heatengines-in-parallel}
\end{eqnarray}
Here $p_{\rm rel}$ is the proportion of transmitted electrons that have passed through the arm containing scatterers 1 and 2, while $q_{\rm rel}=(1-p_{\rm rel})$ is the proportion that have passed through the arm containing scatterer 3.
Physically, $p_{\rm rel}$ is
the probability that an electron entering the quantum system relaxes before transmitting,
while $q_{\rm rel}$ is the probability that it transmits before relaxing.
One sees from Eq.~(\ref{Eq:heatengines-in-parallel}) that
the maximal efficiency for a given $p_{\rm rel}$ occurs when both $\eta_{\rm eng}^{(1\&2)}$
and $\eta_{\rm eng}^{(3)}$ are maximal.
The upper-bound on $\eta_{\rm eng}^{(3)}$
is that given in section~\ref{Sect:eng} with $q_{\rm rel}N_L$
modes to the left and $q_{\rm rel}N_R$ modes to the right.
Our objective now is to find the maximum $\eta_{\rm eng}^{(1\&2)}$
with $N_1=p_{\rm rel}N_L$ modes on the left and $N_2=p_{\rm rel}N_R$ modes on the right.
More precisely our objective is to find an {\it over-estimate} of this maximum.
For the heat flows indicated in Fig.~\ref{Fig:relax}c, the efficiency is
\begin{eqnarray}
\eta_{\rm eng}^{(1\&2)} &\equiv& P_{\rm gen}^{(1\&2)}\big/J
\nonumber \\
&=&\!\! {1 \over J}\left[
P_{\rm gen}^{(1)}(J_1;T_M,T_L) + P_{\rm gen}^{(2)}(J_2;T_R,T_M)\right]\! , \qquad \
\end{eqnarray}
where $J_1=J-J_1^{\rm ph}-J^{\rm ph}$ and $J_2 =J-J_2^{\rm ph}-J^{\rm ph}-P_{\rm gen}^{(1)}$.
One sees that $\eta_{\rm eng}^{(1\&2)}$ is maximal for given $T_M$ when
$J^{\rm ph}=J_1^{\rm ph}=J_2^{\rm ph}=0$ (these heat currents cannot be negative
because $T_L > T_M> T_R$).
Thus, to get our over-estimate of the maximal efficiency for given $T_M$,
we assume these phonon and photon heat-currents are zero.
Then, with a little algebra, one finds that
\begin{eqnarray}
1-\eta_{\rm eng}^{(1\&2)}\big(P_{\rm gen}^{(1\&2)}\big) = \left(1-\eta_{\rm eng}^{(1)}\big(P_{\rm gen}^{(1)}\big) \right)\left(1-\eta_{\rm eng}^{(2)}\big(P_{\rm gen}^{(2)}\big) \right),
\nonumber
\end{eqnarray}
where $P_{\rm gen}^{(1)}$ and $P_{\rm gen}^{(2)}$ are related to $P_{\rm gen}^{(1\&2)}$ by
\begin{eqnarray}
P_{\rm gen}^{(\mu)} = P_{\rm gen}^{(1\&2)} \eta_{\rm eng}^{(\mu)} \big/ \eta_{\rm eng}^{(1\& 2)},
\label{Eq:P1-or-2}
\end{eqnarray}
for $\mu=1,2$.
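The multiplicative identity above is easy to sanity-check numerically: at vanishing power each stage runs at its Carnot efficiency, so the product of the two stages' inefficiencies collapses to the overall Carnot inefficiency $T_R/T_L$, independent of $T_M$. A short sketch (temperature values are arbitrary illustrations):

```python
def carnot_eff(T_hot, T_cold):
    """Carnot efficiency of a heat-engine between T_hot and T_cold."""
    return 1.0 - T_cold / T_hot

def combined_inefficiency(T_L, T_M, T_R):
    """(1 - eta1)(1 - eta2) for stages L->M and M->R at zero power,
    which the identity above says equals 1 - eta_Carnot = T_R / T_L."""
    return (1.0 - carnot_eff(T_L, T_M)) * (1.0 - carnot_eff(T_M, T_R))
```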
For given $T_M$, one maximizes $\eta_{\rm eng}^{(1\&2)}$ by independently maximizing
$\eta_{\rm eng}^{(1)}$ and $\eta_{\rm eng}^{(2)}$.
For low powers, Eq.~(\ref{Eq:eta-eng-small-Pgen}) with $P,N,T_R\to P_1, N_1, T_M$ gives $\eta_{\rm eng}^{(1)}$,
while with $P,N,T_L\to P_2, N_2, T_M$ gives $\eta_{\rm eng}^{(2)}$.
In this limit, we can
treat efficiencies in Eq.~(\ref{Eq:P1-or-2}) to zeroth order in $P_{\rm gen}^{(1\&2)}$, taking
them to be Carnot efficiencies, so
\begin{eqnarray}
P_{\rm gen}^{(1)} \simeq {T_L-T_M \over T_L-T_R}P_{\rm gen}^{(1\&2)}, \quad
P_{\rm gen}^{(2)} \simeq {T_M-T_R \over T_L-T_R}P_{\rm gen}^{(1\&2)}.
\nonumber
\end{eqnarray}
Then some algebra gives the over-estimate of efficiency at low powers for given $T_M$, to be
\begin{eqnarray}
\eta_{\rm eng}^{(1\&2)} \leq
\eta_{\rm eng}^{\rm Carnot}
\left(1- 0.478
\sqrt{ {T_R \over T_L} \ {P_{\rm gen} \ K_{\rm rel}\over P_{\rm gen}^{\rm qb2}(N=1)} } \right)\! ,
\quad
\end{eqnarray}
with $P_{\rm gen}^{\rm qb2}(N=1)$ given by Eq.~(\ref{Eq:P-qb2}) with $N=1$, and
\begin{eqnarray}
K_{\rm rel} =
\sqrt{{1 \over N_1}\,{T_R(T_L-T_M) \over T_M(T_L-T_R)}}
+ \sqrt{{1 \over N_2}\,{T_L(T_M-T_R) \over T_M(T_L-T_R)}},\quad
\label{Eq:Krelax}
\end{eqnarray}
where $N_1= p_{\rm rel} N_L$ and $N_2=p_{\rm rel} N_R$ are respectively the number of transmission modes in scattering matrices 1 and 2.
The over-estimate of $\eta_{\rm eng}^{(1\&2)}$ is maximal when $T_M$ is chosen to minimize
$K_{\rm rel}$.
The two minima of $K_{\rm rel}$ are at $T_M=T_R$ and $T_M=T_L$,
for which the values of $K_{\rm rel}$
are $1/\sqrt{N_1}$ and $1/\sqrt{N_2}$ respectively.
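These endpoint minima are easy to confirm numerically: scanning $K_{\rm rel}$ of Eq.~(\ref{Eq:Krelax}) over $T_M\in[T_R,T_L]$ shows every interior value exceeding the smaller endpoint value $1/\sqrt{{\rm max}[N_1,N_2]}$. The temperatures and mode numbers below are arbitrary illustrations.

```python
import math

def K_rel(T_M, T_L, T_R, N1, N2):
    """K_rel of Eq. (Krelax) as a function of the intermediate T_M."""
    dT = T_L - T_R
    return (math.sqrt(T_R * (T_L - T_M) / (N1 * T_M * dT))
            + math.sqrt(T_L * (T_M - T_R) / (N2 * T_M * dT)))

# Scan T_M over [T_R, T_L] for illustrative parameter values.
T_L, T_R, N1, N2 = 2.0, 1.0, 3, 5
grid = [T_R + (T_L - T_R) * k / 1000 for k in range(1001)]
values = [K_rel(T, T_L, T_R, N1, N2) for T in grid]
```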
Thus, we have
\begin{eqnarray}
K_{\rm rel} \geq 1/\sqrt{p_{\rm rel}N_{\rm max}}\ ,
\label{Eq:Krelax-limit}
\end{eqnarray}
with $N_{\rm max}$ in Eq.~(\ref{Eq:Nmax}). Thus, whatever $T_M$ may be,
\begin{eqnarray}
\eta_{\rm eng}^{(1\&2)} \left(P_{\rm gen}^{(1\&2)}\right) &\leq&
\eta_{\rm eng}^{\rm Carnot}
\nonumber \\
& & \times
\left(\! 1- 0.478
\sqrt{ {T_R \over T_L} {P_{\rm gen}^{(1\&2)} \over P_{\rm gen}^{\rm qb2}(p_{\rm rel}N_{\rm max}) } } \right)\! .
\nonumber \\
\label{Eq:eta1&2-bound}
\end{eqnarray}
Since $P_{\rm gen}^{(1\&2)} = p_{\rm rel} P_{\rm gen}$, we can simplify Eq.~(\ref{Eq:eta1&2-bound})
by noting that
\begin{eqnarray}
{P_{\rm gen}^{(1\&2)} \over P_{\rm gen}^{\rm qb2}(p_{\rm rel}N_{\rm max}) } =
{P_{\rm gen} \over P_{\rm gen}^{\rm qb2}(N_{\rm max}) }
\end{eqnarray}
where $P_{\rm gen}$ is the total power generated by the combined system made of scatterers 1,2 and 3.
Then substituting the result into Eq.~(\ref{Eq:heatengines-in-parallel}), we get an over-estimate of the
efficiency at power output $P_{\rm gen}$ which is equal to the upper bound we found in the absence of relaxation, Eq.~(\ref{Eq:eta-eng-small-Pgen}).
Thus, we can conclude that for small power outputs,
no quantum system with relaxation within it can exceed the upper-bound on efficiency found for a {\it relaxation-free} system with $N_{\rm max}$ transverse modes.
Since the proof is based on an over-estimate of the efficiency for a system with relaxation,
we cannot say if a system with finite relaxation can approach the bound in Eq.~(\ref{Eq:eta-eng-small-Pgen}).
Unlike in the relaxation-free case, we cannot say what properties the quantum system with relaxation
(as given in terms of the properties of the effective scatterers 1, 2 and 3) are necessary to maximize the efficiency
at given power output. We simply know that it cannot exceed Eq.~(\ref{Eq:eta-eng-small-Pgen}).
\subsection{Refrigerator with relaxation}
Our objective is to find an over-estimate of the maximal efficiency of a refrigerator that is made of quantum systems in which relaxation occurs.
The efficiency of the system with relaxation,
$ \eta_{\rm fri}^{\rm total}(J_L)$, is given by
\begin{eqnarray}
\eta_{\rm fri}^{\rm total}(J_L) =
p_{\rm rel} \eta_{\rm fri}^{(1\&2)} (p_{\rm rel}J_L)
+q_{\rm rel} \eta_{\rm fri}^{(3)} ( q_{\rm rel}J_L),
\label{Eq:fridges-in-parallel}
\end{eqnarray}
thus we need to find an upper bound on $\eta_{\rm fri}^{(1\&2)}$.
We make an over-estimate of this efficiency by taking
$T_M$ to be a free parameter between $T_L$ and $T_R$.
For given $T_M$, the efficiency of the combined systems 1 and 2
is
\begin{eqnarray}
\eta_{\rm fri}^{(1\&2)} (J)= J \Big/ \big[ P_{\rm abs}^{(1)}(J_1)
+ P_{\rm abs}^{(2)}(J_2) \big],
\end{eqnarray}
where $J_1=J+J_1^{\rm ph}+J^{\rm ph}$ and $J_2=J+J_2^{\rm ph} +J^{\rm ph}+P_{\rm abs}^{(1)}$,
see Fig.~\ref{Fig:relax}d.
This efficiency is maximized when $J_1^{\rm ph},J_2^{\rm ph},J^{\rm ph} = 0$
(since $T_L<T_M<T_R$ means these currents are not negative). Then a little algebra gives
\begin{eqnarray}
1+{1 \over \eta_{\rm fri}^{(1\&2)}(J)}
= \left[
1+{1 \over \eta_{\rm fri}^{(1)}(J)}
\right]\left[
1+{1 \over \eta_{\rm fri}^{(2)}\big(J_2\big)}
\right] \! , \qquad
\end{eqnarray}
where $J_2= J+P_{\rm abs}^{(1)}= J\big[1+1/\eta_{\rm fri}^{(1)}(J)\big]$.
Thus, to maximize $\eta_{\rm fri}^{(1\&2)}(J)$ for given $T_M$, one must maximize both
$\eta_{\rm fri}^{(1)}$ and $\eta_{\rm fri}^{(2)}$.
For low power, this can be done using Eq.~(\ref{Eq:eta-fri-smallJ})
(much as for the heat-engine in Section~\ref{Sect:eng-eff-relax} above)
giving
\begin{eqnarray}
\eta_{\rm fri}^{(1\&2)}
\leq \eta_{\rm fri}^{\rm Carnot}
\! \left(\! 1- 1.09
\sqrt{
{T_R \over T_R-T_L}{J_L K_{\rm rel}\over J_L^{\rm qb}(N=1)}}\,\right) \! , \ \
\end{eqnarray}
where $K_{\rm rel}$ is given in Eq.~(\ref{Eq:Krelax}), and $J_L^{\rm qb}(N=1)$ is given by
Eq.~(\ref{Eq:J-qb-fri}) with $N=1$.
The over-estimate of $\eta_{\rm fri}^{(1\&2)}$ is maximal when $K_{\rm rel}$ is minimal,
see Eq.~(\ref{Eq:Krelax-limit}).
Substituting this into Eq.~(\ref{Eq:fridges-in-parallel}), we see that the efficiency
with relaxation does not exceed the result in Eq.~(\ref{Eq:eta-fri-smallJ})
for a {\it relaxation-free} system with $N_{\rm max}$ transverse modes.
\subsection{Quantum bounds on power with relaxation}
For a heat-engine, the arm with scatterers 1 and 2,
has a maximum power,
\begin{eqnarray}
P_{\rm gen}^{(1\&2)} \leq A_0\, {\pi^2 \over h} k_{\rm B}^2
\left[ N_1\big(T_L-T_M\big)^2+ N_2\big(T_M-T_R\big)^2 \right].
\nonumber
\end{eqnarray}
Since $(T_L-T_M)^2 +(T_M-T_R)^2 \leq (T_L-T_R)^2$, the power of the full system
cannot exceed the maximum power of a relaxation-less system, Eq.~(\ref{Eq:P-qb2}),
with $N_{\rm max}$ modes.
For a refrigerator, the arm containing scatterers 1 and 2
has a maximum cooling power,
\begin{eqnarray}
J \leq \left\{
\begin{array}{l}
\pi^2 N_1 k_{\rm B}^2 T_L^2 \big/(12 h) \\
\pi^2 N_2 k_{\rm B}^2 T_M^2 \big/(12 h) -P_{\rm abs}^{(1)} \ ,
\end{array}\right.
\end{eqnarray}
where $P_{\rm abs}^{(1)}$ is the electrical power absorbed by scatterer 1.
The upper (lower) term is the limit on the heat-flow into scatterer 1 (scatterer 2),
noting that the heat-flow into scatterer 2 is $J+P_{\rm abs}^{(1)}$.
Unless $N_2 \gg N_1$, the lower limit is the more restrictive one.
In any case, the cooling power of the full system
can never exceed the maximum power of a relaxation-less system, Eq.~(\ref{Eq:J-qb-fri}), with
$N_{\rm max} $ modes.
\section{Conclusions}
\label{Sect:conclusions}
The upper bound on efficiency at zero power (i.e.~Carnot efficiency) is classical,
since it is independent of the wavelike nature of
the electrons. However, this work on thermoelectrics shows that the
upper bound on efficiency at finite power is quantum,
depending on the ratio of the thermoelectric's cross-section to the electrons' Fermi wavelength.
If one thought that electrons were classical (strictly zero wavelength), one would believe that Carnot efficiency was achievable at any power output. Quantum mechanics appears to tell us that this is not so.
However, a crucial point for future work is to discover how universal our bounds on efficiency at
finite power are. Our bounds currently rely on the quantum system being (a) well modelled by the
nonlinear scattering theory with its mean-field treatment of electron-electron interactions,
(b) coupled to only two reservoirs (hot and cold), and (c) relaxation free.
Under certain conditions we have also shown that they apply when there is relaxation in the quantum system.
We cannot yet prove that our results are as general as Pendry's bound on heat flow\cite{Pendry1983},
which applies for arbitrary relaxation and for more than two reservoirs \cite{2012w-2ndlaw},
as well as for electronic Luttinger liquids\cite{Kane-Fisher} and bosons\cite{Pendry1983}.
It also remains to be seen if our bound occurs in systems with strong electron-electron interactions (Coulomb blockade, Kondo physics, etc.).
More generally, we wonder whether similar bounds apply to those thermodynamic machines that do not rely on thermoelectric effects, such as Carnot heat engines.
\section{Acknowledgements}
I am very grateful to M.~B\"uttiker for the suggestion which led to the implementation
in Section~\ref{Sect:chain}. I thank P.~H\"anggi for questions on
entropy flow which led to section~\ref{Sect:Unique}.
I thank L.~Correa for questions which led to a great improvement of section~\ref{Sect:eff-CA}.
I thank C.~Grenier for an analytic solution of Eq.~(\ref{Eq:T-for-chain}) for $k=3$.
One of the simplest sets that is widely studied by and most important to many mathematicians, in particular analysts and topologists, is Cantor's ternary set (also referred to as the middle third Cantor set), introduced by H.~Smith \cite{HS:1975} and by G.~Cantor \cite{GC:1883}. Cantor's ternary set is generated by the simple recipe of dividing the unit interval $[0,1]$ into three parts, removing the open middle interval, and then continuing the process so that at each stage, each remaining subinterval is similarly subdivided into three and the middle open interval removed. Continuing this process \textit{ad infinitum} one obtains a non-empty set consisting of an infinite number of points.
We now formally define Cantor's ternary set in arithmetic terms.
\begin{definition}\label{def:CantorSet}
\textit{Cantor's ternary set} is defined to be the set
\begin{align*}
C \mathrel{:=} \left\{ \sum_{n \in \mathbb{N}} \frac{\omega_{n}}{3^{n}} : \omega_{n} \in \{ 0, 2 \} \; \text{for all} \; n \in \mathbb{N} \right\}.
\end{align*}
\end{definition}
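Definition \ref{def:CantorSet} translates directly into a finite-depth membership test: expand $x$ in base $3$ and check that the digits all lie in $\{0,2\}$. The sketch below (a naive illustration, not part of the formal development) uses exact rational arithmetic; note that ternary-rational points such as $1/3 = 0.1_3 = 0.0\overline{2}_3$ have two expansions, and the greedy expansion used here may be the ``wrong'' one.

```python
from fractions import Fraction

def ternary_digits(x, n):
    """First n digits of the greedy ternary expansion of x in [0, 1)."""
    digits = []
    for _ in range(n):
        x *= 3
        d = int(x)          # floor, since x is non-negative
        digits.append(d)
        x -= d
    return digits

def in_cantor_naive(x, depth=60):
    """True if the greedy ternary expansion of x uses only digits {0, 2}.
    Correct for points with a unique expansion; endpoints like 1/3,
    which also equals 0.0222..., need their second expansion checked."""
    return all(d in (0, 2) for d in ternary_digits(Fraction(x), depth))
```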
Throughout, $C$ will denote Cantor's ternary set equipped with the subspace topology induced by the Euclidean metric. Below we list some of the remarkable properties of Cantor's ternary set. For a proof of Property (1), more commonly known as the Hausdorff--Alexandroff Theorem, we refer the reader to \cite[Theorem 30.7]{Willard:1970}. Both F. Hausdorff \cite{Hausdorff:1927} and P. S. Alexandroff \cite{A:1927} published proofs of this result in 1927. For Properties (2) to (8), we refer the reader to \cite{Falconer:1990} or \cite[Counterexample 29]{SS:1978}; for a proof of Properties (9) and (10), the definition of the Lebesgue measure, Hausdorff dimension and that of a self-similar set, we refer the reader to \cite{Falconer:1990}. For basic definitions of topological concepts see \cite{Munkres:2000} or \cite{Willard:1970}.
\begin{enumerate}[label=(\arabic*),leftmargin=0.75cm]
\item Any compact metric space is the continuous image of $C$.
\item The set $C$ is totally disconnected.
\item The set $C$ is perfect.
\item The set $C$ is compact.
\item The set $C$ is nowhere dense in the closed unit interval $[0, 1]$.
\item The set $C$ is Hausdorff.
\item The set $C$ is normal.
\item The cardinality of $C$ is equal to that of the continuum.
\item The one dimensional Lebesgue measure and outer Jordan content of $C$ are both zero.
\item The set $C$ is a self-similar set and has Hausdorff dimension equal to $\log(2)/\log(3)$.
\end{enumerate}
In this note we are interested in whether the Hausdorff--Alexandroff Theorem can be strengthened. To avoid any misunderstandings regarding the compactness condition that might stem from different naming traditions, we shall explicitly define what we mean by compact. Note that a compact space does not have to be Hausdorff.
\begin{definition}[Compact]\label{defn:compact}
Given a subset $A \subseteq X$ of a topological space $(X,\tau)$, an \textit{open cover} of $A$ is a collection of open sets whose union contains $A$. An \textit{open subcover} is a sub-collection of an open cover whose union still contains $A$. We call a subset $A$ of $X$ \textit{compact} if every open cover has a finite open subcover.
\end{definition}
It is known that for compact Hausdorff spaces the properties (i) metrizability, (ii) second-countability and (iii) being a continuous image of Cantor's ternary set are equivalent. This follows from the fact that a compact Hausdorff space is metrizable if and only if it is second-countable (see for instance \cite[p. 218]{Munkres:2000}), the Hausdorff--Alexandroff Theorem, and the fact that the continuous image of a compact metric space (in our case the Cantor set $C$) in a compact Hausdorff space is again a compact metrizable space (\cite[Corollary 23.2]{Willard:1970}).
Obviously the cardinality of a space that is the continuous image of $C$ cannot exceed that of the continuum. This restriction on cardinality is a necessary, but not a sufficient condition because there are compact Hausdorff spaces with cardinality equal to that of the continuum which are not second-countable, for instance the Alexandroff one-point compactification of the discrete topological space $(\mathbb{R},\mathcal{P}(\mathbb{R}))$, where $\mathcal{P}(\mathbb{R})$ denotes the power set of $\mathbb{R}$.
Restricting the cardinality even further leads to a sufficient condition. If we only look at countably infinite target spaces, we can deduce that for compact countably infinite spaces the Hausdorff property already implies metrizability. In a countably infinite Hausdorff space, every point is a $G_\delta$ point, which together with compactness implies that the space is first-countable \cite[Problem 16.A.4]{Willard:1970}; since the space is countably infinite, it follows that it is also second-countable. Hence, we have a space which is compact, Hausdorff and second-countable and so, using the above mentioned equivalence, metrizable. Therefore, any compact countably infinite Hausdorff space is the continuous image of $C$. Is this strong restriction on the space's cardinality a sufficient condition for the Hausdorff--Alexandroff Theorem, i.e., is every compact countably infinite topological space the continuous image of $C$? This is precisely the question we address and, in fact, show that the answer is no by exhibiting a counterexample.
\begin{theorem}\label{thm:MainThm}
There exists a compact countably infinite topological space $( T,\tau )$ which is not the continuous image of $C$.
\end{theorem}
In order to prove this result we require an auxiliary result, Lemma \ref{lem:CounterEx}. In the proof of this result we rectify an error in \cite[Counterexample 99]{SS:1978}.
The main idea behind the proof of Theorem \ref{thm:MainThm} is to choose a specific non-Hausdorff space and to show that if there exists a continuous map from the Cantor set into this space, then the continuous map must push-forward the Hausdorff property of Cantor's ternary set, which will be a contradiction to how the target space was originally chosen.
\section{Proof of Theorem \ref{thm:MainThm}.}\label{sec:proof}
\begin{lemma}\label{lem:CounterEx}
There exists a countable topological space $(T, \tau)$ with the following properties:
\begin{enumerate}[label=(\alph*),leftmargin=0.75cm]
\item $(T, \tau)$ is compact,
\item $(T, \tau)$ is non-Hausdorff, and
\item every compact subset of $T$ is closed with respect to $\tau$.
\end{enumerate}
\end{lemma}
\begin{proof}
This proof is based on \cite[Counterexample 99]{SS:1978}. We define $T \mathrel{:=} (\mathbb{N}\times\mathbb{N}) \cup \left\{x,y\right\}$, namely the Cartesian product of the set of natural numbers $\mathbb{N}$ with itself unioned with two distinct arbitrary points $x$ and $y$. We equip the set $T$ with the topology $\tau$ whose base consists of all sets of the form:
\begin{enumerate}[label=(\roman*),leftmargin=0.75cm]
\item $\{(m,n)\}$, where $(m,n)\in \mathbb{N}\times\mathbb{N}$,
\item $T \setminus A$, where $A \subset (\mathbb{N}\times\mathbb{N}) \cup \{y\}$ contains $y$ and is such that the cardinality of the set $A \cap \{ (m, n) : n \in \mathbb{N}\}$ is finite for all $m \in \mathbb{N}$; that is, the set $A$ contains at most finitely many points on each row (these sets are the open neighbourhoods of $x$) and
\item $T \setminus B$, where $B \subset (\mathbb{N}\times\mathbb{N}) \cup \{x\}$ contains $x$ and is such that there exists an $M \in \mathbb{N}$, so that if $(m, n) \in B \cap (\mathbb{N} \times \mathbb{N})$, then $m \leq M$; that is $B$ contains only points from at most finitely many rows (these sets are the open neighbourhoods of $y$).
\end{enumerate}
Property (a) follows from the observation that any open cover of $T$ contains at least one open neighbourhood $U \supseteq T\setminus A$ of $x$ and one open neighbourhood $V \supseteq T \setminus B$ of $y$ with $A$ and $B$ as given above. The points not already contained in these two open sets are contained in $T \setminus (U \cup V) \subseteq T \setminus \left(\left(T\setminus A\right)\cup \left(T\setminus B\right)\right) = A \cap B$ which, by construction, is a finite set. In this way a finite open subcover can be chosen and hence the topological space $(T, \tau)$ is compact.
To see why $(T, \tau)$ has Property (b), consider open neighbourhoods of $x$ and $y$. An open neighbourhood of $x$ contains countably infinitely many points on each row of the lattice $\mathbb{N}\times\mathbb{N}$; an open neighbourhood of $y$ contains countably infinitely many full rows. It follows that there are no disjoint open neighbourhoods $U \ni x$ and $V \ni y$ and thus $T$ is non-Hausdorff.
We use contraposition to prove Property (c). Suppose that $E \subset T$ is not closed. Note that we may assume that $E$ is a strict subset of $T$ since $T$ itself is closed by the fact that $\emptyset \in \tau$. By construction of the topology, a set that is not closed cannot contain both $x$ and $y$. Also, there needs to be at least one point in the closure $\overline{E}$ of $E$, but not already in $E$; this has to be one of the points $x$ or $y$, because singletons $\{(m,n)\}$ which are subsets of the lattice $\mathbb{N}\times\mathbb{N}$ are open. Thus the point $(m,n)$ cannot be a limit point of $E$. We shall now check both cases, that is, (i) if $x\in \overline{E}\setminus E$ and (ii) if $y\in \overline{E}\setminus E$.
\begin{enumerate}[label=(\roman*),leftmargin=0.75cm]
\item If $x\in \overline{E}\setminus E$, then every open neighbourhood of $x$ has a non-empty intersection with $E$. It follows that there is at least one row in $\mathbb{N}\times\mathbb{N}$ that shares infinitely many points with $E$. Denote this row by $B$. Then the open cover $\{T \setminus B\} \cup \{ \{b\} : b \in B \cap E \}$ of $E$ cannot be reduced to a finite open subcover, and therefore, $E$ is not compact.
\item If $y\in \overline{E}\setminus E$, then, similar to (i), we have that $E$ contains points from infinitely many rows. Take one point from each of these rows and call the resulting set $A$. Then the open cover $\{T \setminus A\} \cup \{ \{ a \} : a \in A \cap E \}$ of $E$ cannot be reduced to a finite open subcover and hence $E$ is not compact.
\end{enumerate}
\vspace{-1.5em}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:MainThm}]
Assume that there exists a surjective continuous map $f: C \to T$. We will show that this implies that $\left(T,\tau\right)$ is Hausdorff which contradicts Lemma \ref{lem:CounterEx}(b).
Choose two distinct points $u, v\in T$. Since singletons are compact, Lemma \ref{lem:CounterEx}(c) implies that the sets $\left\{ u \right\}$ and $\left\{ v \right\} $ are closed in $T$ with respect to $\tau$. Therefore, their pre-images under $f$ are non-empty closed subsets of $C$ and have disjoint open neighbourhoods $U(u)$ and $U(v)$, as $C$ is normal. The complements $U(u)^{c}$ and $U(v)^{c}$ of these open neighbourhoods are compact subsets of $C$. Thus, their images under $f$ are compact in $T$ because $f$ is continuous and they are closed because of Lemma \ref{lem:CounterEx}(c). Therefore, $V\left( u \right)\mathrel{:=}f\left(U\left( u \right)^{c}\right)^{c}$ and $V\left( v \right)\mathrel{:=}f\left(U\left( v \right)^{c}\right)^{c}$ are open neighbourhoods of $u$ and $v$ respectively. We claim that these sets are disjoint:
\begin{align*}
V\left( u \right)\cap V\left(v \right) = f\left(U\left(u \right)^{c}\right)^{c}\cap f\left(U\left(v \right)^{c}\right)^{c} &= \left(f\left(U\left( u \right)^{c}\right)\cup f\left(U\left( v \right)^{c}\right)\right)^{c}\\
&=\left(f\left(U\left(u \right)^{c}\cup U\left( v \right)^{c}\right)\right)^{c}\\
&=\left(f\left(\left(U\left( u \right)\cap U\left( v \right)\right)^{c}\right)\right)^{c}
=\left(f\left(\emptyset^{c}\right)\right)^{c}
= \emptyset.
\end{align*}
Hence we have separated the points $u$ and $v$ by open neighbourhoods. Since $u, v \in T$ were chosen arbitrarily, we conclude that $\left(T,\tau\right)$ is Hausdorff, giving the desired contradiction.
\end{proof}
\section{Introduction}
An {\it overlap} is a word of the form $axaxa$, where $a$ is a single
letter and $x$ is a (possibly empty) word. In 1980, Earl Fife
\cite{Fife:1980} proved a theorem characterizing the infinite binary
overlap-free words as encodings of paths in a finite automaton.
Berstel \cite{Berstel:1994} later simplified the exposition, and both
Carpi \cite{Carpi:1993a} and Cassaigne \cite{Cassaigne:1993b} gave an
analogous analysis for the case of finite words.
In a previous paper \cite{Shallit:2011}, the second author gave a new
approach to Fife's theorem, based on the factorization theorem
of Restivo and Salemi \cite{Restivo&Salemi:1985a} for overlap-free words.
In this paper, we extend this analysis
by applying it to the case of $\frac{7}{3}$-power-free words.
Given a rational number $\frac{p}{q} > 1$, we define a word $w$ to be a
$\frac{p}{q}$-power if $w$ can be written in the form $x^n x'$ where $n
= \lfloor p/q \rfloor$, $x'$ is a (possibly empty)
prefix of $x$, and $|w|/|x| = p/q$.
The word $x$ is called a {\it period} of $w$, and $p/q$ is an
{\it exponent} of $w$. If $p/q$ is the largest exponent of $w$,
we write $\exp(w) = p/q$. We also say that $w$ is {\it $|x|$-periodic}.
For example, the word
{\tt alfalfa} is a $\frac{7}{3}$-power, and the corresponding period is
{\tt alf}. Sometimes, as is routine in the literature,
we also refer to $|x|$ as the period; the context
should make it clear which is meant.
A word, whether finite or infinite, is {\it $\beta$-power-free}
if it contains no factor $w$ that is an $\alpha$-power for $\alpha\geq
\beta$. A word is {\it $\beta^+$-power-free} if it contains no
factor $w$ that is an $\alpha$-power for $\alpha > \beta$. Thus, the
concepts of ``overlap-free'' and ``$2^+$-power-free'' coincide.
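These definitions can be checked mechanically on small examples. The following brute-force sketch (the function names are ours, not standard) computes $\exp(w)$ for a finite word and tests $\beta$-power-freeness; it is intended only for short words.

```python
from fractions import Fraction

def exponent(w):
    """exp(w) = |w|/q, where q is the smallest period of w."""
    q = next(q for q in range(1, len(w) + 1)
             if all(w[k] == w[k - q] for k in range(q, len(w))))
    return Fraction(len(w), q)

def is_power_free(w, beta):
    """True iff w is beta-power-free, i.e. no factor of w
    has exponent >= beta."""
    return all(exponent(w[i:j]) < beta
               for i in range(len(w))
               for j in range(i + 1, len(w) + 1))
```

For instance, `exponent("alfalfa")` returns the fraction $7/3$, matching the example above, and `is_power_free("0010010", Fraction(7, 3))` is `False`, since $0010010$ is itself a $\frac{7}{3}$-power.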
\section{Notation and basic results}
Let $\Sigma$ be a finite alphabet. We let $\Sigma^*$ denote the set
of all finite words over $\Sigma$ and $\Sigma^\omega$ denote the
set of all (right-) infinite words over $\Sigma$. We say
$y$ is a {\em factor} of a word $w$ if there exist words
$x, z$ such that $w = xyz$.
If $x$ is a finite word, then $x^\omega$ represents the
infinite word $xxx \cdots$.
From now on we fix $\Sigma = \lbrace 0,1 \rbrace$. The most famous
infinite binary overlap-free word is $\bf t$, the Thue-Morse word,
defined as the fixed point, starting with $0$, of the Thue-Morse
morphism $\mu$, which maps $0$ to $01$ and $1$ to $10$. We have
$$ {\bf t} = t_0 t_1 t_2 \cdots = 0110100110010110 \cdots .$$
The morphism $\mu$ has a second fixed point, ${\overline{\bf t}}$, which is
obtained from $\bf t$ by applying the complementation coding
defined by $\overline{0} = 1$ and $\overline{1} = 0$.
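The morphism $\mu$, its iterates, and the complementation coding are easy to generate; a short sketch (names ours):

```python
def mu(w):
    """The Thue-Morse morphism: 0 -> 01, 1 -> 10."""
    return "".join({"0": "01", "1": "10"}[c] for c in w)

def mu_power(k, start="0"):
    """mu^k(start); for start = "0" this is the length-2^k prefix of t."""
    w = start
    for _ in range(k):
        w = mu(w)
    return w

def complement(w):
    """The complementation coding 0 -> 1, 1 -> 0."""
    return "".join({"0": "1", "1": "0"}[c] for c in w)
```

For example, `mu_power(4)` yields the 16-letter prefix $0110100110010110$ of $\bf t$ shown above, and `complement(mu_power(4))` equals `mu_power(4, "1")`, reflecting that ${\overline{\bf t}}$ is both the second fixed point of $\mu$ and the complement of $\bf t$.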
We let ${\cal F}_{7/3}$ denote the set of (right-) infinite binary
$\frac{7}{3}$-power-free words. We point out that these words are of
particular interest, because $\frac{7}{3}$ is the largest exponent $\alpha$
such that there are only polynomially-many $\alpha$-power-free words of
length $n$ \cite{Karhumaki&Shallit:2004}.
The exponent $\frac{7}{3}$ plays a special role in
combinatorics on words, as testified to by the many papers mentioning
this exponent (e.g.,
\cite{Kolpakov&Kucherov:1997,Shur:2000,Karhumaki&Shallit:2004,Rampersad:2005,Aberkane&Currie:2005,Blondel&Cassaigne&Jungers:2009}).
We now state a
factorization theorem for infinite $\frac{7}{3}$-power-free words:
\begin{theorem}
Let ${\bf x} \in {\cal F}_{7/3}$, and let $P = \lbrace p_0, p_1, p_2, p_3, p_4
\rbrace$, where $p_0 = \epsilon$, $p_1 = 0$, $p_2 = 00$,
$p_3 = 1$, and $p_4 = 11$. Then there exists ${\bf y} \in {\cal F}_{7/3}$ and
$p \in P$ such that ${\bf x} = p \mu({\bf y})$. Furthermore, this
factorization is unique, and $p$ is uniquely determined by inspecting
the first $5$ letters of ${\bf x}$.
\end{theorem}
\begin{proof}
The first two claims follow immediately from the version for finite
words, as given in \cite{Karhumaki&Shallit:2004}. The last claim
follows from exhaustive enumeration of cases.
\end{proof}
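The factorization step is easy to carry out in code. The sketch below (names ours) returns all ways to write a finite word as $p\,\mu(y)$ with $p \in P$; on a sufficiently long prefix of an infinite $\frac{7}{3}$-power-free word the list is a singleton, in line with the uniqueness claim, though very short or odd-length truncations can behave differently.

```python
P = ["", "0", "00", "1", "11"]          # p_0, p_1, p_2, p_3, p_4

def mu(w):
    return "".join({"0": "01", "1": "10"}[c] for c in w)

def mu_inverse(w):
    """Inverse of mu; None if w is not a mu-image."""
    if len(w) % 2:
        return None
    blocks = {"01": "0", "10": "1"}
    out = []
    for i in range(0, len(w), 2):
        b = blocks.get(w[i:i + 2])
        if b is None:
            return None
        out.append(b)
    return "".join(out)

def factor_once(w):
    """All ways to write the finite word w as p mu(y) with p in P."""
    res = []
    for idx, p in enumerate(P):
        if w.startswith(p):
            y = mu_inverse(w[len(p):])
            if y is not None:
                res.append((idx, y))
    return res
```

On the 16-letter prefix of $\bf t$, for instance, `factor_once("0110100110010110")` returns the single candidate `(0, "01101001")`: here $p = p_0 = \epsilon$ and ${\bf y} = {\bf t}$, as expected.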
We can now iterate this factorization theorem to get
\begin{corollary}
Every infinite $\frac{7}{3}$-power-free word $\bf x$ can be written
uniquely in the form
\begin{equation}
{\bf x} = p_{i_1} \mu( p_{i_2} \mu ( p_{i_3} \mu ( \cdots ) ) ) \label{qq}
\end{equation}
with $i_j \in \lbrace 0, 1, 2, 3, 4 \rbrace$ for
$j \geq 1$, subject to the understanding
that if there exists $c$ such that $i_j = 0$ for
$j \geq c$, then we also need to specify whether the ``tail'' of the
expansion represents $\mu^\omega(0) = {\bf t}$ or $\mu^\omega(1) = {\overline{\bf t}}$.
Furthermore, every truncated expansion
$$p_{i_1} \mu(p_{i_2} \mu (p_{i_3} \mu (\cdots p_{i_{n-1}} \mu(p_{i_n})
\cdots )))$$
is a prefix of $\bf x$, with the understanding that if
$i_n = 0$, then we need to replace $p_{i_n}$ with either
$p_1 = 0$ (if the ``tail'' represents $\bf t$) or $p_3 = 1$ (if the ``tail''
represents ${\overline{\bf t}}$).
\end{corollary}
\begin{proof}
The form (\ref{qq}) is unique, since each $p_i$ is uniquely determined
by the first 5 characters of the associated word.
\end{proof}
Thus, we can associate each infinite binary $\frac{7}{3}$-power-free
word $\bf x$ with the
essentially unique infinite sequence
of indices ${\bf i} := (i_j)_{j \geq 1}$ coding elements in $P$,
as specified by (\ref{qq}). If $\bf i$ ends in $0^\omega$, then
we need an additional element (either $1$ or $3$) to disambiguate
between $\bf t$ and ${\overline{\bf t}}$ as the ``tail''.
In our notation, we separate this additional element with a
semicolon so that, for example, the string $000\cdots; 1$ represents
$\bf t$ and $000\cdots; 3$ represents ${\overline{\bf t}}$.
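The truncated expansions can be computed by folding the indices from the inside out; a sketch (names ours), where `tail` stands for a finite truncation or for a long prefix of $\bf t$ or ${\overline{\bf t}}$:

```python
P = ["", "0", "00", "1", "11"]          # p_0, p_1, p_2, p_3, p_4

def mu(w):
    return "".join({"0": "01", "1": "10"}[c] for c in w)

def expand(indices, tail=""):
    """p_{i_1} mu(p_{i_2} mu( ... mu(tail) ... )) for a finite index list."""
    w = tail
    for i in reversed(indices):
        w = P[i] + mu(w)
    return w
```

For example, `expand([0, 0, 0], "0")` gives $\mu^3(0) = 01101001$, the 8-letter prefix of $\bf t$.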
Of course, not every possible sequence $(i_j)_{j \geq 1}$ of indices
corresponds to an
infinite $\frac{7}{3}$-power-free word. For example, every infinite word coded
by an infinite sequence beginning $400\cdots$ contains a $\frac{7}{3}$-power.
Our goal is to characterize precisely,
using a finite automaton, those infinite sequences corresponding to
$\frac{7}{3}$-power-free words.
Next, we recall some connections between the morphism $\mu$ and the
powers over the binary alphabet. Below $x$ is an arbitrary finite or
right-infinite word.
\begin{lemma}
\label{squares}
If the word $\mu(x)$ has a prefix $zz$, then the word $x$ has the prefix $\mu^{-1}(z)\mu^{-1}(z)$.
\end{lemma}
\begin{proof}
Follows immediately from \cite[Lemma 1.7.2]{Allouche&Shallit:2003}.
\end{proof}
\begin{lemma}
\label{exps}
(1) For any real $\beta>1$, we have $\exp(x)=\beta$ iff $\exp(\mu(x))=\beta$.\\
(2) For any real $\beta\ge2^+$, the word $x$ is $\beta$-power-free iff $\mu(x)$ is $\beta$-power-free.
\end{lemma}
\begin{proof}
For (1), see \cite[Prop.~1.1]{Shur:2000}. For (2), see \cite[Prop.~1.2]{Shur:2000} or \cite[Thm.~5]{Karhumaki&Shallit:2004}.
\end{proof}
\begin{lemma} \label{cor}
Let $p$ be a positive integer. If the longest $p$-periodic prefix of
the word $\mu(x)$ has the exponent $\beta \geq 2$, then the longest
$(p/2)$-periodic prefix of $x$ also has the exponent $\beta$.
\end{lemma}
\begin{proof}
Let $zzz'$ (where $|z|=p$ and $z'$ is a possibly empty prefix of
$z^{\omega}$) be the longest $p$-periodic prefix of $\mu(x)$.
Lemma~\ref{squares} implies that $p$ is even. If $|z'|$ is odd, let $a$
be the last letter of $z'$. The next letter $b$ in $\mu(x)$ is fixed by
the definition of $\mu$: $b\ne a$. By the definition of period, another $a$
occurs $p$ symbols to the left of the last letter of $z'$. Since $p$ is
even, this $a$ also fixes the next letter $b$. Hence the prefix
$zzz'b$ of $\mu(x)$ is $p$-periodic, contradicting the
maximality of $zzz'$. Thus $|z'|$ is even. Therefore $x$ begins with
the $\beta$-power $\mu^{-1}(z)\mu^{-1}(z)\mu^{-1}(z')$ of period
$p/2$.
It remains to note that if $x$ has a $(p/2)$-periodic prefix $y$ of
exponent $\alpha>\beta$, then by Lemma~\ref{exps}\,(1), the
$p$-periodic prefix $\mu(y)$ of $\mu(x)$ also has the exponent $\alpha$,
contradicting the hypotheses of the lemma.
\end{proof}
\section{The main result}
For each finite word $w \in \lbrace 0,1,2,3,4 \rbrace^*$,
$w = i_1 i_2 \cdots i_r$, and an infinite word
${\bf x} \in \lbrace 0, 1 \rbrace^\omega$, we
define
\begin{align*}
C_w ({\bf x}) &= p_{i_1} \mu( p_{i_2} \mu ( p_{i_3} \mu( \cdots {\bf x} \cdots))) \text{ and}\\
F_w &= \lbrace {\bf x} \in \Sigma^\omega \ : \ C_w ({\bf x}) \in {\cal F}_{7/3} \rbrace.
\end{align*}
Note that $F_w\subseteq {\cal F}_{7/3}$ for any $w$ in view of Lemma~\ref{exps}\,(2).
\begin{figure}[htp]
\centerline{
\unitlength=1mm
\begin{picture}(100,180)(-1,-90)
\gasset{Nw=6,Nh=4.5,AHnb=0,Nframe=n}
\node(eps)(0,0){\small$F_{\epsilon}$}
\node(0)(12,0){\small$F_0$}
\node(0a)(18.5,0){\small$=F_{\epsilon}$}
\node(1)(12,61){\small$F_1$}
\node(2)(12,16){\small$F_2$}
\drawedge(eps,0){}
\drawedge(eps,1){}
\drawedge(eps,2){}
\node(10)(30,73){\small$F_{10}$}
\node(10a)(36.5,73){\small$=F_{\epsilon}$}
\node(11)(30,67){\small$F_{11}$}
\node(12)(30,61){\small$F_{12}$}
\node(12a)(36.5,61){\small$=F_2$}
\node(13)(30,55){\small$F_{13}$}
\node(14)(30,49){\small$F_{14}$}
\node(14a)(36.5,49){\small$=\varnothing$}
\node(21)(30,28){\small$F_{21}$}
\node(21a)(36.5,28){\small$=\varnothing$}
\node(22)(30,22){\small$F_{22}$}
\node(22a)(36.5,22){\small$=\varnothing$}
\node(20)(30,16){\small$F_{20}$}
\node(23)(30,10){\small$F_{23}$}
\node(23a)(37,10){\small$=F_{13}$}
\node(24)(30,4){\small$F_{24}$}
\node(24a)(36.5,4){\small$=\varnothing$}
\drawedge(1,10){}
\drawedge(1,11){}
\drawedge(1,12){}
\drawedge(1,13){}
\drawedge(1,14){}
\drawedge(2,20){}
\drawedge(2,21){}
\drawedge(2,22){}
\drawedge(2,23){}
\drawedge(2,24){}
\node[Nw=7](110)(55,88){\small$F_{110}$}
\node(110a)(62,88){\small$=F_3$}
\node[Nw=7](111)(55,82){\small$F_{111}$}
\node(111a)(62.5,82){\small$=F_{11}$}
\node[Nw=7](112)(55,76){\small$F_{112}$}
\node(112a)(62,76){\small$=F_2$}
\node[Nw=7](113)(55,70){\small$F_{113}$}
\node(113a)(62.5,70){\small$=F_{13}$}
\node[Nw=7](114)(55,64){\small$F_{114}$}
\node(114a)(62,64){\small$=\varnothing$}
\node[Nw=7](130)(55,58){\small$F_{130}$}
\node[Nw=7](131)(55,52){\small$F_{131}$}
\node(131a)(62.5,52){\small$=F_{31}$}
\node[Nw=7](132)(55,46){\small$F_{132}$}
\node(132a)(62,46){\small$=\varnothing$}
\node[Nw=7](133)(55,40){\small$F_{133}$}
\node(133a)(62,40){\small$=\varnothing$}
\node[Nw=7](134)(55,34){\small$F_{134}$}
\node(134a)(62,34){\small$=\varnothing$}
\node[Nw=7](200)(55,28){\small$F_{200}$}
\node(200a)(62,28){\small$=\varnothing$}
\node[Nw=7](201)(55,22){\small$F_{201}$}
\node(201a)(62,22){\small$=\varnothing$}
\node[Nw=7](202)(55,16){\small$F_{202}$}
\node(202a)(62,16){\small$=\varnothing$}
\node[Nw=7](203)(55,10){\small$F_{203}$}
\node[Nw=7](204)(55,4){\small$F_{204}$}
\node(204a)(62,4){\small$=F_{4}$}
\drawedge[curvedepth=-3.5](11,110){}
\drawedge[curvedepth=-2.7](11,111){}
\drawedge[curvedepth=-2](11,112){}
\drawedge[curvedepth=-1](11,113){}
\drawedge(11,114){}
\drawedge(13,130){}
\drawedge[curvedepth=1](13,131){}
\drawedge[curvedepth=2](13,132){}
\drawedge[curvedepth=2.7](13,133){}
\drawedge[curvedepth=3.5](13,134){}
\drawedge[curvedepth=-1.5](20,200){}
\drawedge[curvedepth=-0.8](20,201){}
\drawedge(20,202){}
\drawedge[curvedepth=0.8](20,203){}
\drawedge[curvedepth=1.5](20,204){}
\node[Nw=8](1300)(80,73){\small$F_{1300}$}
\node(1300a)(89,73){\small$=F_{130}$}
\node[Nw=8](1301)(80,67){\small$F_{1301}$}
\node(1301a)(88,67){\small$=F_1$}
\node[Nw=8](1302)(80,61){\small$F_{1302}$}
\node(1302a)(88,61){\small$=F_2$}
\node[Nw=8](1303)(80,55){\small$F_{1303}$}
\node(1303a)(87.5,55){\small$=\varnothing$}
\node[Nw=8](1304)(80,49){\small$F_{1304}$}
\node(1304a)(87.5,49){\small$=\varnothing$}
\node[Nw=8](2030)(80,28){\small$F_{2030}$}
\node(2030a)(89,28){\small$=F_{310}$}
\node[Nw=8](2031)(80,22){\small$F_{2031}$}
\node(2031a)(87.5,22){\small$=\varnothing$}
\node[Nw=8](2032)(80,16){\small$F_{2032}$}
\node(2032a)(87.5,16){\small$=\varnothing$}
\node[Nw=8](2033)(80,10){\small$F_{2033}$}
\node(2033a)(88.5,10){\small$=F_{33}$}
\node[Nw=8](2034)(80,4){\small$F_{2034}$}
\node(2034a)(88,4){\small$=F_4$}
\drawedge[curvedepth=-1.5](130,1300){}
\drawedge[curvedepth=-0.8](130,1301){}
\drawedge(130,1302){}
\drawedge[curvedepth=0.8](130,1303){}
\drawedge[curvedepth=1.5](130,1304){}
\drawedge[curvedepth=-2](203,2030){}
\drawedge[curvedepth=-1.2](203,2031){}
\drawedge[curvedepth=-0.6](203,2032){}
\drawedge(203,2033){}
\drawedge[curvedepth=0.8](203,2034){}
\node(3)(12,-61){\small$F_3$}
\node(4)(12,-16){\small$F_4$}
\drawedge(eps,3){}
\drawedge(eps,4){}
\node(30)(30,-73){\small$F_{30}$}
\node(30a)(36.5,-73){\small$=F_{\epsilon}$}
\node(33)(30,-67){\small$F_{33}$}
\node(34)(30,-61){\small$F_{34}$}
\node(34a)(36.5,-61){\small$=F_4$}
\node(31)(30,-55){\small$F_{31}$}
\node(32)(30,-49){\small$F_{32}$}
\node(32a)(36.5,-49){\small$=\varnothing$}
\node(43)(30,-28){\small$F_{43}$}
\node(43a)(36.5,-28){\small$=\varnothing$}
\node(44)(30,-22){\small$F_{44}$}
\node(44a)(36.5,-22){\small$=\varnothing$}
\node(40)(30,-16){\small$F_{40}$}
\node(41)(30,-10){\small$F_{41}$}
\node(41a)(37,-10){\small$=F_{31}$}
\node(42)(30,-4){\small$F_{42}$}
\node(42a)(36.5,-4){\small$=\varnothing$}
\drawedge(3,30){}
\drawedge(3,33){}
\drawedge(3,34){}
\drawedge(3,31){}
\drawedge(3,32){}
\drawedge(4,40){}
\drawedge(4,43){}
\drawedge(4,44){}
\drawedge(4,41){}
\drawedge(4,42){}
\node[Nw=7](330)(55,-88){\small$F_{330}$}
\node(330a)(62,-88){\small$=F_1$}
\node[Nw=7](333)(55,-82){\small$F_{333}$}
\node(333a)(62.5,-82){\small$=F_{33}$}
\node[Nw=7](334)(55,-76){\small$F_{334}$}
\node(334a)(62,-76){\small$=F_4$}
\node[Nw=7](331)(55,-70){\small$F_{331}$}
\node(331a)(62.5,-70){\small$=F_{31}$}
\node[Nw=7](332)(55,-64){\small$F_{332}$}
\node(332a)(62,-64){\small$=\varnothing$}
\node[Nw=7](310)(55,-58){\small$F_{310}$}
\node[Nw=7](313)(55,-52){\small$F_{313}$}
\node(313a)(62.5,-52){\small$=F_{13}$}
\node[Nw=7](314)(55,-46){\small$F_{314}$}
\node(314a)(62,-46){\small$=\varnothing$}
\node[Nw=7](311)(55,-40){\small$F_{311}$}
\node(311a)(62,-40){\small$=\varnothing$}
\node[Nw=7](312)(55,-34){\small$F_{312}$}
\node(312a)(62,-34){\small$=\varnothing$}
\node[Nw=7](400)(55,-28){\small$F_{400}$}
\node(400a)(62,-28){\small$=\varnothing$}
\node[Nw=7](403)(55,-22){\small$F_{403}$}
\node(403a)(62,-22){\small$=\varnothing$}
\node[Nw=7](404)(55,-16){\small$F_{404}$}
\node(404a)(62,-16){\small$=\varnothing$}
\node[Nw=7](401)(55,-10){\small$F_{401}$}
\node[Nw=7](402)(55,-4){\small$F_{402}$}
\node(402a)(62,-4){\small$=F_{2}$}
\drawedge[curvedepth=3.5](33,330){}
\drawedge[curvedepth=2.7](33,333){}
\drawedge[curvedepth=2](33,334){}
\drawedge[curvedepth=1](33,331){}
\drawedge(33,332){}
\drawedge(31,310){}
\drawedge[curvedepth=-1](31,313){}
\drawedge[curvedepth=-2](31,314){}
\drawedge[curvedepth=-2.7](31,311){}
\drawedge[curvedepth=-3.5](31,312){}
\drawedge[curvedepth=1.5](40,400){}
\drawedge[curvedepth=0.8](40,403){}
\drawedge(40,404){}
\drawedge[curvedepth=-0.8](40,401){}
\drawedge[curvedepth=-1.5](40,402){}
\node[Nw=8](3100)(80,-73){\small$F_{3100}$}
\node(3100a)(89,-73){\small$=F_{310}$}
\node[Nw=8](3103)(80,-67){\small$F_{3103}$}
\node(3103a)(88,-67){\small$=F_3$}
\node[Nw=8](3104)(80,-61){\small$F_{3104}$}
\node(3104a)(88,-61){\small$=F_4$}
\node[Nw=8](3101)(80,-55){\small$F_{3101}$}
\node(3101a)(87.5,-55){\small$=\varnothing$}
\node[Nw=8](3102)(80,-49){\small$F_{3102}$}
\node(3102a)(87.5,-49){\small$=\varnothing$}
\node[Nw=8](4010)(80,-28){\small$F_{4010}$}
\node(4010a)(89,-28){\small$=F_{130}$}
\node[Nw=8](4013)(80,-22){\small$F_{4013}$}
\node(4013a)(87.5,-22){\small$=\varnothing$}
\node[Nw=8](4014)(80,-16){\small$F_{4014}$}
\node(4014a)(87.5,-16){\small$=\varnothing$}
\node[Nw=8](4011)(80,-10){\small$F_{4011}$}
\node(4011a)(88.5,-10){\small$=F_{11}$}
\node[Nw=8](4012)(80,-4){\small$F_{4012}$}
\node(4012a)(88,-4){\small$=F_2$}
\drawedge[curvedepth=1.5](310,3100){}
\drawedge[curvedepth=0.8](310,3103){}
\drawedge(310,3104){}
\drawedge[curvedepth=-0.8](310,3101){}
\drawedge[curvedepth=-1.5](310,3102){}
\drawedge[curvedepth=2](401,4010){}
\drawedge[curvedepth=1.2](401,4013){}
\drawedge[curvedepth=0.6](401,4014){}
\drawedge(401,4011){}
\drawedge[curvedepth=-0.8](401,4012){}
\end{picture} }
\caption{\small\sl Equations between languages $F_w$.} \label{tree}
\end{figure}
\begin{lemma} \label{main2}
The sets $F_w$ satisfy the equalities listed in Fig.~\ref{tree}. In particular, there are only 15 different nonempty sets $F_w$; they are
$$
F_{\epsilon},F_1,F_{11},F_{13},F_{130},F_2,F_{20},F_{203},F_3,F_{31},F_{310},F_{33},F_4,F_{40},F_{401}.
$$
\end{lemma}
\begin{proof}
Due to symmetry, it is enough to prove only the 30 equalities from the
upper half of Fig.~\ref{tree} and the equality $F_0=F_{\epsilon}$. We
first prove the emptiness of 15 sets from the upper half of
Fig.~\ref{tree}.
\smallskip
Four sets: $F_{21}$, $F_{22}$, $F_{201}$, and $F_{202}$, consist of
words that start $000$.
Eight sets consist of words that contain the factor $0\mu(11)=01010$
($F_{14}$, $F_{24}$, $F_{133}$, $F_{134}$, $F_{1303}$, and $F_{1304}$),
its $\mu$-image ($F_{114}$), or the complement of its $\mu$-image
($F_{132}$).
Two sets: $F_{2031}$ and $F_{2032}$, consist of words that start
$00\mu^2(1)0=0010010$. Finally, the words from the set $F_{200}$ have
the form $00\mu^3({\bf x})$; each of these words starts either $000$ or
$0010010$.
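These forced prefixes can be confirmed mechanically; the following sketch (helper names ours) checks that they appear regardless of the first letter of the innermost word:

```python
P = ["", "0", "00", "1", "11"]

def mu(w):
    return "".join({"0": "01", "1": "10"}[c] for c in w)

def expand(indices, tail):
    w = tail
    for i in reversed(indices):
        w = P[i] + mu(w)
    return w

for x in ("0", "1"):
    # F_21, F_22, F_201, F_202: the prefix 000 is forced
    for w in ([2, 1], [2, 2], [2, 0, 1], [2, 0, 2]):
        assert expand(w, x).startswith("000")
    # F_2031, F_2032: the prefix 0010010, a 7/3-power, is forced
    for w in ([2, 0, 3, 1], [2, 0, 3, 2]):
        assert expand(w, x).startswith("0010010")

# F_200: words 00 mu^3(x) start with 000 or with 0010010
assert expand([2, 0, 0], "0").startswith("000")
assert expand([2, 0, 0], "1").startswith("0010010")
```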
\medskip
Each of the 16 remaining equalities has the form $F_{w_1}=F_{w_2}$. We
prove them by showing that for an arbitrary ${\bf x}\in {\cal F}_{7/3}$, the
words $u_1=C_{w_1}({\bf x})$ and $u_2=C_{w_2}({\bf x})$ are either both
$\frac{7}{3}$-power-free or both not. In most cases, some suffix of $u_1$ coincides with
the image of $u_2$ under some power of $\mu$. Then by
Lemma~\ref{exps}\,(2) the word $u_1$ can be $\frac{7}{3}$-power-free only if $u_2$ is $\frac{7}{3}$-power-free.
In these cases, it suffices to study $u_1$ assuming that $u_2$ is
$\frac{7}{3}$-power-free.
When we refer to a ``forbidden'' power in what follows, we mean
a power of exponent $\geq \frac{7}{3}$.
\smallskip
$F_0=F_{\epsilon}$: By Lemma~\ref{exps}\,(2), $u_1=\mu({\bf x})$ is
$\frac{7}{3}$-power-free iff $u_2={\bf x}$ is $\frac{7}{3}$-power-free.
\smallskip
$F_{10}=F_{\epsilon}$: The word $u_1=0\mu(\mu({\bf x}))$ contains a
$\mu^2$-image of $u_2={\bf x}$. If ${\bf x}$ is $\frac{7}{3}$-power-free, then so is
$\mu^2({\bf x})$. Hence, if $u_1$ has a forbidden power, then this
power must be a prefix of $u_1$.
Now let $\beta<7/3$ be the largest possible exponent of a prefix of
${\bf x}$ and $q$ be the smallest period of a prefix of exponent
$\beta$ in ${\bf x}$. Write $\beta=p/q$. Then the word $\mu^2({\bf x})$
has a prefix of exponent $\beta$ and of period $4q$ by
Lemma~\ref{exps}\,(1), but no prefixes of a bigger exponent or of the
same exponent and a smaller period by Lemma~\ref{cor}. Hence $u_1$ has
no prefixes of exponent greater than $(4p+1)/(4q)$. Since $p$ and $q$
are integers, we obtain the required inequality: $$
\frac{p}{q}<\frac{7}{3}\Longrightarrow3p<7q\Longrightarrow
3p+\frac{3}{4}<7q\Longrightarrow \frac{4p+1}{4q}<\frac{7}{3}. $$
\smallskip
$F_{12}=F_2$: The word $u_1=0\mu(00\mu({\bf x}))$ contains a
$\mu$-image of $u_2=00\mu({\bf x})$. Suppose that $u_2$ is $\frac{7}{3}$-power-free. Then
it starts $0010011$. Since the factor $001001$ cannot occur in a
$\mu$-image, we note that
\begin{itemize}
\item[$(\star)$] the word $00\mu({\bf x})$ has only two prefixes of exponent 2 ($00$ and $001001$) and no prefixes of bigger exponents.
\end{itemize}
By Lemma~\ref{cor}, the word $\mu(u_2)$ has only two prefixes of
exponent 2 ($\mu(00)$ and $\mu(001001)$) and no prefixes of bigger
exponents. Thus, the word $u_1=0\mu(u_2)$ is obviously $\frac{7}{3}$-power-free.
\smallskip
$F_{23}=F_{13}$: We have $u_1=00\mu(1\mu({\bf x}))=0u_2$. Suppose the
word $u_2$ is $\frac{7}{3}$-power-free; then it starts $010011$. A forbidden power in
$u_1$, if any, occurs at the beginning and hence contains $0010011$.
But $00100$ does not occur later in this word, so no such forbidden
power exists.
\smallskip
$F_{110}=F_{3}$: The word $u_1=0\mu(0\mu(\mu({\bf x})))=001\mu^3(x)$ is
a suffix of the $\mu^2$-image of $u_2=1\mu({\bf x})$. Hence, if $u_2$
is $\frac{7}{3}$-power-free, then by Lemma~\ref{exps}\,(2) $u_1$ is $\frac{7}{3}$-power-free as well.
For the other direction, assume $u_1$ is $\frac{7}{3}$-power-free; then $\mu({\bf x})$
is $\frac{7}{3}$-power-free. So, if $u_2$ contains some power $y y y'$ with $|y'| \geq
|y|/3$, then this power must be a prefix of $u_2$. Put $y = 1z$ and
$y'=1z'$. The word $\mu({\bf x})$ starts $z1z1z'$. Hence ${\bf x}$
starts $\mu^{-1}(z1)\mu^{-1}(z1)$ by Lemma~\ref{squares}. So we
conclude that $|z1|=|y|$ is an even number. Now let $|y| = q$ and $p =
|yyy'|$ so that $p/q \geq 7/3$. Thus the word $1u_1=\mu^2(u_2)$ starts
with a $(p/q)$-power of period $4q$. Since $u_1$ is $\frac{7}{3}$-power-free, we have
$(4p-1)/(4q) < 7/3$.
This gives us the inequalities $3p \geq 7q$ and $3p - 3/4 < 7q$.
Since $p$ and $q$ are integers this means $3p = 7q$ and hence $q$ is
divisible by 3. On the other hand from above $q$ is even. So $q$ is
divisible by 6. Now $|y'| = |y|/3$, so $|y'|$ is even. But then $|z'|$ is
odd, and $z'$ begins at an even position in a $\mu$-image, so the character
following $z'$ is fixed and must be the same character as in the
corresponding position of $z$, say $a$. Thus $z1z1z'a$ is a
$(7/3)$-power occurring in $\mu({\bf x})$, a contradiction.
\smallskip
$F_{111}=F_{11}$: The word $u_1=0\mu(0\mu(0\mu({\bf x})))=0010110 \mu^3
({\bf x})$ contains a $\mu$-image of $u_2=0\mu(0\mu({\bf
x}))=001 \mu^2({\bf x})$. Suppose $u_2$ is $\frac{7}{3}$-power-free but to the contrary
$u_1=0\mu(u_2)$ has a forbidden power. By Lemma~\ref{exps}\,(2), this
power must be a prefix of $u_1$. Note that this power can be extended
to the left by 1 (not by 0, because a $\mu$-image cannot contain
$000$). Hence the word $1u_1=\mu^3(1{\bf x})$ starts with a forbidden
power. This induces a forbidden power at the beginning of $1{\bf x}$;
this power has a period $q$ and some exponent $p/q\ge 7/3$. Then $u_1$
has a prefix of exponent $(8p-1)/(8q)\ge 7/3$. On the other hand, the word
$\mu(u_2)$ is $\frac{7}{3}$-power-free, whence $(8p-2)/(8q)< 7/3$. So we get the system of
inequalities $3p - 3/8 \geq 7q$, $3p - 3/4 < 7q$. This system has no
integer solutions, a contradiction.
\smallskip
$F_{112}=F_{2}$: We have $u_1=0\mu(0\mu(00\mu({\bf
x})))=001\mu^2(00\mu({\bf x}))=001\mu^2(u_2)$. In view of $(\star)$,
one can easily check that if $u_2$ is $\frac{7}{3}$-power-free, then so is $u_1$.
\smallskip
$F_{113}=F_{13}$: We have $u_1=0\mu(0\mu(1\mu({\bf
x})))=0\mu(u_2)$. Suppose $u_2$ is $\frac{7}{3}$-power-free. Then $\mu^2({\bf x})$ starts
$01101001$. Assume to the contrary that $u_1$ has a forbidden power. By
Lemma~\ref{exps}\,(2), this power must be a prefix of $u_1$. Again,
this power can be extended to the left by 1, not by 0. Hence the word
$1u_1=\mu^2(11\mu({\bf x}))$ starts with a forbidden power, thus
inducing a forbidden power at the beginning of $u=11\mu({\bf
x})=110110\cdots$. The word $u$ has only two squares as prefixes ($11$
and $110110$, cf. $(\star)$). Hence $u$ has the prefix $11011010$ and
no forbidden factors except for the $\frac{7}{3}$-power prefix $1101101$ of period 3.
By Lemma~\ref{cor}, the corresponding forbidden prefix of $1u_1$ has period 12,
exponent exactly $7/3$, and length 28; hence $u_1$ itself begins with a
$12$-periodic prefix of length 27 and exponent $\frac{9}{4}<\frac{7}{3}$.
Therefore, the word $u_1$ has no prefixes of exponent $\ge 7/3$, a
contradiction.
\smallskip
$F_{131}=F_{31}$: We have $u_1=0\mu(1\mu(0\mu({\bf x})))=0\mu(u_2)$.
Suppose $u_2$ is $\frac{7}{3}$-power-free. Then the word $u=11\mu(0\mu({\bf x}))$ is $\frac{7}{3}$-power-free
by the equality $F_{41}=F_{31}$, which is symmetric to $F_{23}=F_{13}$
proved above. But $u_1$ is a suffix of $\mu(u)$, whence the result.
\smallskip
$F_{204}=F_{4}$: We have $u_1=00\mu(\mu(11\mu({\bf x})))=00\mu^2(u_2)$.
Suppose $u_2$ is $\frac{7}{3}$-power-free. Using the observation symmetric to $(\star)$, we
check by inspection that $u_1$ contains no forbidden power.
\smallskip
$F_{1300}=F_{130}$: Neither one of the words
$u_1=0\mu(1\mu(\mu(\mu({\bf x}))))=010\mu^4({\bf x})$,
$u_2=0\mu(1\mu(\mu({\bf x})))=010\mu^3({\bf x})$ contains an image of
the other. The proofs for both directions are essentially the same, so
we give only one of them. Let $u_1$ be $\frac{7}{3}$-power-free; then the words $\mu^4({\bf
x})$, ${\bf x}$, and $\mu^3({\bf x})$ are $\frac{7}{3}$-power-free as well, and ${\bf x}$ starts
$0$. A simple inspection of short prefixes of $u_2$ shows that if this
word is not $\frac{7}{3}$-power-free, then some $\beta$-power with $\beta\ge2$ is a prefix
of $\mu^3({\bf x})$. By Lemma~\ref{cor}, the word ${\bf x}$ also starts
with a $\beta$-power. The argument below will be repeated, with small
variations, for several identities.
\begin{itemize}
\item[$(*)$] Consider a prefix $yyy'$ of ${\bf x}$ which is the longest
prefix of ${\bf x}$ with period $|y|$. Then $|y'|<|y|/3$. By
Lemma~\ref{cor}, the longest prefix of the word $\mu^3({\bf x})$ having
period $8|y|$ is $\mu^3(yyy')$. If some word $z\mu^3(yyy')$ also has
period $8|y|$, then $z$ must be a suffix of a $\mu^3$-image of some
word. Since the word $010$ is not such a suffix, $10\mu^3(yyy')$
is the longest possible $(8|y|)$-periodic word contained in $u_2$. Let
us estimate its exponent. Since $|y|$ and $|y'|$ are integers, we have
$$
8|y'|<8|y|/3\Longrightarrow 24|y'|<8|y|\Longrightarrow
24|y'|+6<8|y|\Longrightarrow 8|y'|+2<8|y|/3,
$$
whence $\exp(10\mu^3(yyy'))<7/3$. Since we have chosen an arbitrary
prefix $yyy'$ of ${\bf x}$, we conclude that the word $u_2$ is $\frac{7}{3}$-power-free.
\end{itemize}
\smallskip
$F_{1301}=F_{1}$: The word $u_1=0\mu(1\mu(\mu(0\mu({\bf
x}))))=010\mu^3(0\mu({\bf x}))$ contains a $\mu^3$-image of
$u_2=0\mu({\bf x})$. Suppose $u_2$ is $\frac{7}{3}$-power-free. It suffices to check that
the prefix $010$ of $u_1$ does not complete any prefix of $\mu^3(u_2)$
to a forbidden power. For short prefixes, this can be checked directly,
while long prefixes that can be completed in this way should have
exponents $\ge 2$. By Lemma~\ref{cor}, a prefix of period $p$ and
exponent $\beta\ge 2$ of the word $\mu^3(u_2)$ corresponds to the
prefix of the word $u_2$ having the exponent $\beta$ and the period
$p/8$. So, we repeat the argument $(*)$ replacing ${\bf x}$ with $u_2$
to obtain that $u_1$ is $\frac{7}{3}$-power-free.
\smallskip
$F_{1302}=F_{2}$: We have $u_1=0\mu(1\mu(\mu(00\mu({\bf
x}))))=010\mu^3(00\mu({\bf x}))=010\mu^3(u_2)$. Suppose $u_2$ is $\frac{7}{3}$-power-free.
By $(\star)$ and Lemma~\ref{cor}, among the prefixes of $\mu^3(u_2)$
there are only two squares, $\mu^3(00)$ and $\mu^3(001001)$, and no
words of bigger exponent. By direct inspection, $u_1$ is $\frac{7}{3}$-power-free.
\smallskip
$F_{2030}=F_{310}$: Neither one of the words
$u_1=00\mu(\mu(1\mu(\mu({\bf x}))))=001001\mu^4({\bf x})$ and
$u_2=1\mu(0\mu(\mu({\bf x})))=101\mu^3({\bf x})$ contains an image of
the other.
repeats the proof of the identity $F_{1300}=F_{130}$, up to renaming
all $0$'s to $1$'s and vice versa. Let $u_2$ be $\frac{7}{3}$-power-free. The words
$\mu^4({\bf x})$ and ${\bf x}$ are also $\frac{7}{3}$-power-free, and ${\bf x}$ begins with
$1$, ensuring that there are no short forbidden powers at the beginning
of $u_1$. Concerning long forbidden powers, we consider, similar to
$(*)$, a prefix $yyy'$ of ${\bf x}$ which is the longest prefix of
${\bf x}$ with period $|y|$. The longest possible $(16|y|)$-periodic
word contained in $u_1$ is $01001\mu^4(yyy')$, because $001001$ is not
a suffix of a $\mu^4$-image. As in $(*)$, we obtain $16|y'|+5<16|y|/3$,
implying $\exp(01001\mu^4(yyy'))<7/3$. Hence the word $u_1$ is $\frac{7}{3}$-power-free.
\smallskip
$F_{2033}=F_{33}$: The word
$u_1=00\mu(\mu(1\mu(1\mu({\bf x}))))=00\mu^2(110\mu^2({\bf x}))$ contains a
$\mu^2$-image of $u_2=1\mu(1\mu({\bf x}))=110\mu^2({\bf x})$. Again, if the word
$u_2$ is $\frac{7}{3}$-power-free, then so is $\mu^2(u_2)$, and it suffices to check that
the prefix $00$ of $u_1$ does not complete any prefix of $\mu^2(u_2)$
to a forbidden power. Similar to $(*)$, consider a prefix $yyy'$ of
$u_2$ which is the longest prefix of $u_2$ with period $|y|$. The
longest possible $(4|y|)$-periodic word contained in $u_1$ is
$0\mu^2(yyy')$, because $00$ is not a suffix of a $\mu^2$-image. As in
$(*)$, we see that $4|y'|+1<4|y|/3$, implying $\exp(0\mu^2(yyy'))<7/3$,
and conclude that the word $u_1$ is $\frac{7}{3}$-power-free.
\smallskip
$F_{2034}=F_{4}$: The word
$u_1=00\mu(\mu(1\mu(11\mu({\bf x}))))=001001\mu^3(11\mu({\bf x}))$ contains a
$\mu^3$-image of $u_2=11\mu({\bf x})$. Suppose $u_2$ is $\frac{7}{3}$-power-free. Using $(\star)$
and Lemma~\ref{cor}, we conclude that among the prefixes of
$\mu^3(u_2)$ there are only two squares, $\mu^3(11)$ and
$\mu^3(110110)$, and no words of bigger exponent. By direct inspection,
$u_1$ is $\frac{7}{3}$-power-free.
\end{proof}
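The equalities themselves can also be spot-checked numerically: for sample $\frac{7}{3}$-power-free words ${\bf x}$ (below, prefixes of $\bf t$, ${\overline{\bf t}}$, and $1{\bf t}$), the words $C_{w_1}({\bf x})$ and $C_{w_2}({\bf x})$ should be $\frac{7}{3}$-power-free or not simultaneously. A brute-force sketch (names ours); since it works on finite prefixes only, it is a sanity check, not a proof:

```python
from fractions import Fraction

P = ["", "0", "00", "1", "11"]

def mu(w):
    return "".join({"0": "01", "1": "10"}[c] for c in w)

def expand(indices, tail):
    w = tail
    for i in reversed(indices):
        w = P[i] + mu(w)
    return w

def is_73_free(w):
    """No factor of exponent >= 7/3 (brute force over start/period)."""
    n = len(w)
    for i in range(n):
        for q in range(1, n - i):
            j = i + q
            while j < n and w[j] == w[j - q]:
                j += 1
            if Fraction(j - i, q) >= Fraction(7, 3):
                return False
    return True

t = "0"
for _ in range(5):
    t = mu(t)                              # mu^5(0), a prefix of t
tbar = t.translate(str.maketrans("01", "10"))
samples = [t, tbar, "1" + t]

pairs = [([1, 0], []),        # F_10  = F_eps
         ([1, 2], [2]),       # F_12  = F_2
         ([2, 3], [1, 3]),    # F_23  = F_13
         ([1, 1, 0], [3])]    # F_110 = F_3
for w1, w2 in pairs:
    for x in samples:
        assert is_73_free(expand(w1, x)) == is_73_free(expand(w2, x))
```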
\begin{figure}[!htb]
\centerline{
\begin{picture}(110,92)(-5,-1)
\gasset{Nw=7,Nh=7,AHangle=15,AHLength=2.5,loopCW=n,loopdiam=7,ELdist=0.5,linewidth=0.1}
\node(e)(50,45){$\epsilon$}
\node(3)(32,55){\small$3$}
\node(1)(32,35){\small$1$}
\node(33)(16,65){\small$33$}
\node(11)(16,25){\small$11$}
\node(31)(0,80){\small$31$}
\node(13)(0,10){\small$13$}
\node(310)(50,75){\small$310$}
\node(130)(50,15){\small$130$}
\node(4)(68,55){\small$4$}
\node(2)(68,35){\small$2$}
\node(40)(85,53){\small$40$}
\node(20)(85,37){\small$20$}
\node(203)(100,65){\small$203$}
\node(401)(100,25){\small$401$}
\drawloop[loopangle=0](e){\scriptsize$0$}
\drawloop[loopangle=225](33){\scriptsize$3$}
\drawloop[loopangle=135](11){\scriptsize$1$}
\drawloop[loopangle=90](310){\scriptsize$0$}
\drawloop[loopangle=270](130){\scriptsize$0$}
\drawedge[curvedepth=2,ELside=r](e,1){\scriptsize$1$}
\drawedge[curvedepth=-2,ELside=r](e,3){\scriptsize$3$}
\drawedge[curvedepth=2](1,e){\scriptsize$0$}
\drawedge[curvedepth=-2](3,e){\scriptsize$0$}
\drawedge[curvedepth=-2](e,2){\scriptsize$2$}
\drawedge[curvedepth=2](e,4){\scriptsize$4$}
\drawedge[curvedepth=-2](1,2){\scriptsize$2$}
\drawedge[curvedepth=2](3,4){\scriptsize$4$}
\drawedge[ELside=r,ELpos=40](1,11){\scriptsize$1$}
\drawedge[curvedepth=4,ELpos=37](1,13){\scriptsize$3$}
\drawedge(11,3){\scriptsize$0$}
\drawedge[ELside=r](11,13){\scriptsize$3$}
\drawedge[ELpos=40](3,33){\scriptsize$3$}
\drawedge[curvedepth=-4,ELside=r,ELpos=40](3,31){\scriptsize$1$}
\drawedge[ELside=r](33,1){\scriptsize$0$}
\drawedge[ELside=r](33,31){\scriptsize$1$}
\drawedge[curvedepth=4,ELside=r](31,13){\scriptsize$3$}
\drawedge[curvedepth=4,ELside=r](13,31){\scriptsize$1$}
\drawedge[curvedepth=6](31,310){\scriptsize$0$}
\drawedge[ELside=r,ELpos=45](310,3){\scriptsize$3$}
\drawedge[ELpos=45](310,4){\scriptsize$4$}
\drawedge[curvedepth=30](33,4){\scriptsize$4$}
\drawedge[curvedepth=-6](13,130){\scriptsize$0$}
\drawedge[ELside=r,ELpos=40](130,1){\scriptsize$1$}
\drawedge[ELpos=40](130,2){\scriptsize$2$}
\drawedge[curvedepth=-30](11,2){\scriptsize$2$}
\drawedge[curvedepth=-6](4,31){\scriptsize$1$}
\drawedge[curvedepth=6,ELside=r](2,13){\scriptsize$3$}
\drawedge[curvedepth=3,ELside=r,ELpos=30](203,33){\scriptsize$3$}
\drawedge[curvedepth=-3,ELside=r,ELpos=30](401,11){\scriptsize$1$}
\drawedge(4,40){\scriptsize$0$}
\drawedge[ELside=r,ELpos=30](40,2){\scriptsize$2$}
\drawedge(40,401){\scriptsize$1$}
\drawedge[ELside=r,ELpos=40](401,2){\scriptsize$2$}
\drawedge[ELside=r](401,130){\scriptsize$0$}
\drawedge(2,20){\scriptsize$0$}
\drawedge[ELside=r,ELpos=30](20,4){\scriptsize$4$}
\drawedge(20,203){\scriptsize$3$}
\drawedge[ELpos=40](203,4){\scriptsize$4$}
\drawedge[ELside=r](203,310){\scriptsize$0$}
\end{picture} }
\caption{Automaton coding infinite binary $\frac{7}{3}$-power-free words} \label{figure1}
\end{figure}
From Lemma~\ref{main2} and the results above, we get
\begin{theorem}
Every infinite binary $\frac{7}{3}$-power-free word $\bf x$ is encoded by an
infinite path, starting in $F_{\epsilon}$,
through the automaton in Figure~\ref{figure1}.
Every infinite path through the automaton
not ending in $0^\omega$ codes a unique
infinite binary $\frac{7}{3}$-power-free word $\bf x$. If a path $\bf i$ ends in
$0^\omega$ and this suffix corresponds to a cycle on state $F_{\epsilon}$,
then $\bf x$ is
coded by either ${\bf i}; 1$ or
${\bf i}; 3$. If a path $\bf i$ ends in $0^\omega$
and this suffix corresponds to a cycle on $F_{310}$,
then $\bf x$ is coded by ${\bf i}; 3$. If a path $\bf i$ ends
in $0^\omega$ and this suffix corresponds to a cycle
on $F_{130}$, then $\bf x$ is coded by ${\bf i}; 1$.
\label{main}
\end{theorem}
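For experimenting with codes, the transition table of the automaton in Figure~\ref{figure1} can be transcribed directly (below, the state `e` stands for $F_{\epsilon}$; the table is our transcription of the figure):

```python
# state -> {code letter: next state}; missing letters lead to empty F_w
DELTA = {
    "e":   {"0": "e", "1": "1", "2": "2", "3": "3", "4": "4"},
    "1":   {"0": "e", "1": "11", "2": "2", "3": "13"},
    "3":   {"0": "e", "3": "33", "4": "4", "1": "31"},
    "11":  {"1": "11", "0": "3", "3": "13", "2": "2"},
    "33":  {"3": "33", "0": "1", "1": "31", "4": "4"},
    "13":  {"1": "31", "0": "130"},
    "31":  {"3": "13", "0": "310"},
    "130": {"0": "130", "1": "1", "2": "2"},
    "310": {"0": "310", "3": "3", "4": "4"},
    "2":   {"0": "20", "3": "13"},
    "4":   {"0": "40", "1": "31"},
    "20":  {"4": "4", "3": "203"},
    "40":  {"2": "2", "1": "401"},
    "203": {"3": "33", "4": "4", "0": "310"},
    "401": {"1": "11", "2": "2", "0": "130"},
}

def run(code, state="e"):
    """Follow a finite code through the automaton; None if it falls off."""
    for c in code:
        state = DELTA[state].get(c)
        if state is None:
            return None
    return state
```

For instance, `run("400")` returns `None`, matching the earlier observation that every code beginning $400\cdots$ leads to a $\frac{7}{3}$-power, while `run("2030")` ends in state `310`.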
\begin{remark}
Blondel, Cassaigne, and Jungers \cite{Blondel&Cassaigne&Jungers:2009}
obtained a similar result, and even more general ones,
for finite words. The main
advantage of our construction is its simplicity.
\end{remark}
\begin{corollary}
Each of the 15 sets $F_{\epsilon}$,
$F_1$, $F_2$, $F_3$, $F_4$,
$F_{11}$, $F_{33}$, $F_{13}$, $F_{31}$, $F_{20}$, $F_{40}$,
$F_{130}$, $F_{310}$, $F_{203}$, $F_{401}$ is uncountable.
\end{corollary}
\begin{proof}
It suffices to provide uncountably
many distinct paths from each state
to itself. By symmetry,
it suffices to prove this for all the states labeled $\epsilon$ or
below in Figure~\ref{figure1}.
These are as follows:
\begin{itemize}
\item $\epsilon$: $(0+10)^\omega$
\item $1$: $(01+001)^\omega$
\item $2$: $(0402+030402)^\omega$
\item $11$: $(0011+00011)^\omega$
\item $13$: $(013+0013)^\omega$
\item $20$: $(4020+34020)^\omega$
\item $401$: $(10401+203401)^\omega$
\item $130$: $(0+104010)^\omega$.
\end{itemize}
\end{proof}
\begin{corollary}
For all words $w \in \lbrace 0,1,2,3,4 \rbrace^*$, either
$F_w$ is empty or uncountable.
\end{corollary}
\section{The lexicographically least $\frac{7}{3}$-power-free word}
\begin{theorem}
The lexicographically least infinite binary $\frac{7}{3}$-power-free word is
$0 0 1 0 0 1 \overline{\bf t}$.
\end{theorem}
\begin{proof}
By tracing the possible paths through the
automaton we easily find that $2030^\omega; 1$ is the code for the
lexicographically least sequence.
\end{proof}
\begin{remark}
This result does not seem to follow directly from
\cite{Allouche&Currie&Shallit:1998} as one referee suggested.
\end{remark}
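As a hedged computational sanity check (not part of the proof; we assume $\overline{\bf t}$ denotes the bitwise complement of the Thue--Morse word, and the helper names below are our own), one can verify that a long prefix of $001001\,\overline{\bf t}$ contains no factor of exponent at least $\frac{7}{3}$:

```python
from fractions import Fraction

def thue_morse(n):
    # first n terms of the Thue-Morse word t: t_i = (number of 1s in binary i) mod 2
    return [bin(i).count("1") % 2 for i in range(n)]

def has_power(w, alpha):
    # True iff w contains a factor of exponent >= alpha.
    # A run of r consecutive positions i with w[i] == w[i+p] yields a factor
    # of length r + p with period p, i.e. exponent (r + p)/p.
    n = len(w)
    for p in range(1, n):          # candidate period
        run = 0
        for i in range(n - p):
            if w[i] == w[i + p]:
                run += 1
                if Fraction(run + p, p) >= alpha:
                    return True
            else:
                run = 0
    return False

seven_thirds = Fraction(7, 3)
word = [0, 0, 1, 0, 0, 1] + [1 - b for b in thue_morse(200)]
print(has_power(word, seven_thirds))            # expect False
print(has_power([0, 1, 0, 1, 0], seven_thirds)) # 01010 has exponent 5/2, expect True
```

Any factor of a prefix is a factor of the infinite word, so a `False` result on the prefix is consistent with (though of course does not prove) the theorem.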
\section{Automatic infinite binary $\frac{7}{3}$-power-free words}
As a consequence of Theorem~\ref{main}, we can give a complete description
of the infinite binary $\frac{7}{3}$-power-free words that are $2$-automatic
\cite{Allouche&Shallit:2003}. Recall that an infinite word $(a_n)_{n \geq 0}$
is $k$-automatic if there exists a deterministic finite automaton with
output that, on input $n$ expressed in base $k$, produces an output
associated with the state last visited that is equal to $a_n$. Alternatively,
$(a_n)_{n \geq 0}$
is $k$-automatic if its $k$-kernel
$$\lbrace (a_{k^i n + j})_{n \geq 0} \ : \ i \geq 0 \text{ and }
0 \leq j < k^i \rbrace$$
consists of finitely many distinct sequences.
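As an illustration of the $k$-kernel criterion (a sketch with our own helper names, not part of the argument), the $2$-kernel of the Thue--Morse word $\bf t$ consists of just $\bf t$ and its complement $\overline{\bf t}$, which can be checked on finite prefixes:

```python
def thue_morse(n):
    # first n terms of the Thue-Morse word t
    return [bin(i).count("1") % 2 for i in range(n)]

def kernel_prefixes(max_i, prefix_len):
    # Distinct length-`prefix_len` prefixes of the kernel subsequences
    # (a_{2^i n + j})_{n >= 0} for 0 <= i <= max_i and 0 <= j < 2^i.
    t = thue_morse(2 ** max_i * prefix_len)
    seen = set()
    for i in range(max_i + 1):
        step = 2 ** i
        for j in range(step):
            seen.add(tuple(t[step * n + j] for n in range(prefix_len)))
    return seen

print(len(kernel_prefixes(4, 32)))  # expect 2: the prefixes of t and of its complement
```

The finiteness of this set is exactly what makes $\bf t$ $2$-automatic.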
\begin{theorem}
An infinite binary $\frac{7}{3}$-power-free word is $2$-automatic if and only
if its code is
specified by the automaton given above in Figure~\ref{figure1}
and is ultimately periodic.
\label{auto}
\end{theorem}
First, we need a lemma:
\begin{lemma}
An infinite binary word ${\bf x} = a_0 a_1 a_2 \cdots$
is $2$-automatic if and only if
$\mu({\bf x})$ is $2$-automatic.
\label{tm}
\end{lemma}
\begin{proof}
Proved in \cite{Shallit:2011}.
\end{proof}
Now we can prove Theorem~\ref{auto}.
\begin{proof}
Suppose the code of $\bf x$ is ultimately periodic. Then we can write
its code as $y z^\omega$ for some finite words $y$ and $z$.
Since the class of $2$-automatic sequences is closed under appending
a finite prefix \cite[Corollary 6.8.5]{Allouche&Shallit:2003}, by
Lemma~\ref{tm}, it suffices to show that the word coded by $z^\omega$
is $2$-automatic.
The word $z^\omega$ codes a $\frac{7}{3}$-power-free word ${\bf w}$ satisfying ${\bf w} = t
\varphi ({\bf w})$, where $t$ is a finite word and $\varphi = \mu^k$.
Hence, by iteration, we get that ${\bf w} = t \varphi(t)
\varphi^2(t) \cdots$. It is now easy to see that the $2$-kernel of
$\bf w$ is contained in
$$S := \lbrace u \mu^i(v) \mu^{i+k}(v) \mu^{i+2k}(v) \cdots \ : \ |u| \leq |t|
\text{ and } v \in \lbrace t, \overline{t} \rbrace \text{ and } 1 \leq i \leq k \rbrace,$$
which is a finite set.
On the other hand, suppose the code for $\bf x$ is not ultimately
periodic. Then we show that the $2$-kernel is infinite. Now it is
easy to see that if the code for $\bf x$ is $a{\bf y}$ for some letter
$a \in \lbrace 0, 1, 3 \rbrace$ then one of the sequences in the
$2$-kernel (obtained by taking either the odd- or even-indexed terms)
is either coded by $\bf y$ or its complement is coded by $\bf y$. On
the other hand, if the code for $\bf x$ is $a{\bf y}$ with $a \in
\lbrace 2, 4 \rbrace$, then $\bf y$ begins with $0, 1, $ or $3$, say
${\bf y} = b {\bf z}$. It follows that each of the subsequences
obtained by taking the terms congruent to $0, 1, 2,$ or $3$ (mod $4$)
is coded by $\bf z$, or its complement is coded by $\bf z$. Since the
code for $\bf x$ is not ultimately periodic, there are infinitely many
distinct sequences in the orbit of the code for $\bf x$, under the
shift. By the infinite pigeonhole principle, infinitely many
correspond to a sequence in the $2$-kernel, or its complement. Hence
$\bf x$ is not $2$-automatic.
\end{proof}
\section{Acknowledgments}
We are grateful to the referees for their helpful suggestions.
\bibliographystyle{eptcs}
The pattern of the lowest-lying states
in the spectrum of strong interactions uncovers an approximate continuous
symmetry of nature, the so-called {\it chiral symmetry}, which is
spontaneously broken. The octet of pseudoscalar
particles - $\pi$, $K$, and $\eta$ - whose
masses are much smaller than those of the next excited states (the octet
of vector particles $\rho$, $\omega$ and $K^*$, and the baryons), are the
accepted candidates for pseudo-goldstone bosons associated with the
spontaneous breaking of the symmetry.
This approximate symmetry is well incorporated in QCD as three of
the quarks happen to be light. In the (chiral) limit of vanishing
$m_u,\; m_d, \; m_s$ the QCD lagrangian has the symmetry freedom of
arbitrarily
rotating with unitary matrices the quark field components in the space
of flavours (u,d,s), independently in the Left and the Right sectors of
chirality eigenstates. The symmetry group is
$U_L(3) \otimes U_R(3)$ and is explicitly broken
by the light quark masses; if it had not,
the lightest mesons would indeed have been massless particles.
This breaking is small, though,
since the light quark masses are much smaller than the typical hadronic
scale of a few hundred MeV. (This is certainly so for the $u$ and $d$
quarks, $2 < m_u < 8$ MeV and $5 < m_d < 15$ MeV, and still approximately
true for the heavier $s$-quark, $100 < m_s < 300$ MeV \cite{pdb96}.)
Empirically, however, only the $SU_L(3) \otimes SU_R(3)$ symmetry subgroup,
spontaneously broken to $SU_{L+R}(3)$,
is manifest: instead
of a nonet of light pseudoscalars only an octet is observed. Of the remaining
$U_L(1) \otimes U_R(1)$, the vector part provides the conserved baryon
number current,
whereas the axial $U_A(1)$
does not seem reflected at all in the spectrum,
either as a conserved quantum number or as a goldstone boson.
The first possibility would imply that all massive hadrons would
appear in parity doublets and this is not what is observed.
On the other hand,
by its quantum numbers the $\eta'$ would be the ninth candidate
for goldstone boson; but if the $U_A(1)$ were realized in the Goldstone
mode and explicitly broken only by the same quark mass terms
that break the $SU_L(3) \otimes SU_R(3)$ one would expect that the
ninth pseudo-goldstone boson would have a mass similar to the pion
(to the masses in the octet): actually, a singlet pseudoscalar meson ought to
exist with a mass smaller than $\sqrt{3} m_{\pi}$ \cite{weinberg1}.
Such a particle is missing in the spectrum. There is the $\eta'$ instead
but it is indeed so much heavier than the
$\pi$, $K$, $\eta$ that, at any rate, it seems
hard to conceive it on the same footing as the light pseudoscalars,
altogether in a nonet.
This puzzle is part of what is known as
the $U_A(1)$ problem of QCD \cite{sksw}, and originates
in that the $U_A(1)$ symmetry of the QCD lagrangian
is anomalous at the quantum
level. A conserved ninth singlet axial current can still
be defined in spite of the $U_A(1)$ anomaly; it is not gauge invariant,
however, and 't Hooft showed how its conservation could be broken by
non-perturbative effects \cite{hooft}.
Nevertheless, there is a limit of QCD in which the $\eta '$
appears as the ninth, genuine goldstone boson: the limit of large number
of colours $N_c$.
't Hooft proposed a systematic expansion of QCD \cite{hooftNc} \cite{Witten}
with the
inverse number of colours, $1/N_c$, as the expansion parameter. With the
only assumption that in the framework of this $1/N_c$ expansion QCD confines,
a few qualitative features of the strong interactions already emerge
by keeping only the leading terms. Of special interest to us is the result
that the pattern of
chiral symmetry breaking is exactly
$U_L(n_l) \otimes U_R(n_l) \to U_{L+R}(n_l)$, where $n_l$ stands for
a generic number of quark flavours \cite{cw}, which resembles very much the
pattern for chiral symmetry breaking observed in nature. Now
in the $N_c \to \infty$ limit there is no
spoilt $U_A(1)$ anymore, since the anomaly
in the divergence of the singlet axial current
is $1/N_c$ suppressed and the full $U_L \otimes U_R$ is recovered
along with the entire nonet of goldstone bosons \cite{w1} \cite{v}.
The present study is based on the framework that the $1/N_c$ expansion
provides. Following the steps pioneered by Weinberg \cite{weinberg} and
Gasser and Leutwyler \cite{gl}, we write the low-energy effective lagrangian
of the $N_c \to \infty$ limit of QCD, with $m_u=m_d=m_s=0$: it is
a chiral lagrangian that involves the whole nonet of pseudo-goldstone
bosons and is invariant under $U_L(3) \otimes U_R(3)$.
The departures from this scenario, which stem from the explicit breaking
of chiral symmetry by quark masses and from the $U_A(1)$ anomaly,
are treated perturbatively, in powers of the
quark masses and $1/N_c$. It is conceivable that a good
picture of the lightest hadrons and their interactions at
low energies could emerge from this approach.
Many authors \cite{vrw} have discussed how to construct such a lagrangian,
{\it i.e.}, how to extend the symmetry to $U_L(3) \otimes U_R(3)$ and
properly take into account the effects of the $U_A(1)$ anomaly at the
same time,
nicely
organized in powers of $1/N_c$. In these articles the physical consequences
to lowest orders in $1/N_c$ and the derivative expansion have been worked
out as well. We closely follow their work.
The present study is devoted to extend the analysis to a full $O(p^4)$
chiral lagrangian, {\it i.e.}, with terms kept up to four derivatives
and quadratic in the quark masses. The conservative bookkeeping
of quark masses as two chiral powers, $m_q=O(p^2)$, is adopted, as in
\cite{gl} (see also \cite{gorgw}).
The celebrated Gell-Mann Okubo relations amongst the
light pseudoscalar masses squared follow from
this chiral power counting in a rather natural way.
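For illustration (a standard lowest-order result, quoted here rather than derived): writing $\hat m = (m_u+m_d)/2$, the $O(p^2)$ meson masses read
$$ m_\pi^2 = 2 B \hat m\,, \qquad m_K^2 = B\,(\hat m + m_s)\,, \qquad
m_{\eta_8}^2 = \frac{2}{3}\, B\, (\hat m + 2 m_s)\,, $$
from which the Gell-Mann--Okubo relation $4 m_K^2 = 3 m_{\eta_8}^2 + m_\pi^2$ follows immediately.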
However, since the exact mechanism of chiral symmetry breaking is unknown,
many possibilities other than $m_q=O(p^2)$ have
not hitherto been ruled out, either by experiment or from QCD.
None of them has been considered here. They can be adequately treated
in the framework of {\it Generalized Chiral Perturbation Theory} as
proposed in \cite{fss}, if needed. In order to discriminate
amongst the various possibilities, the nature of chiral symmetry breaking
needs to be established on more solid grounds.
Our study is to all orders in the large-$N_c$ expansion - in a sense
that will be qualified later.
We calculate all one-loop divergences to the effective action
using the heat-kernel technique and dimensional regularisation, and
carry out to this approximation the renormalization of the couplings.
The article is organized as follows: in section 2 the method of external
sources and the generating functional are briefly reviewed and the
notation is set. In section 3 the chiral lagrangian including terms up to
$O(p^4)$ and the one-loop effective action are put forward, before the
$1/N_c$ expansion is performed. Next, in section 4, the calculation is
organized in powers of $1/N_c$ and the first non-trivial terms are also
given. The conclusions and the appendices follow.
\section{The method of the external sources.}
In this section we briefly review the symmetries of QCD that are relevant
for the chiral lagrangian in the large-$N_c$ limit.
The QCD lagrangian can always be written with a diagonal quark mass term, as
\begin{equation}
\label{2.1}
{\cal L}_{QCD}= -\frac {1}{2}\; Tr \left(G_{\mu \nu} G^{\mu \nu} \right)
+{\sum}^{n_l}_{f=1}
\bar{q}_f
\left( i \gamma_{\mu} D^{\mu} - m_f \right) q_f \;,
\end{equation}
where $f$ labels the $n_l$ light quark flavours $q_f$.
In nature, $n_l=3$ at most, for the flavours $u$, $d$, $s$.
Although explicitly omitted, the quark fields
also carry
a colour index $q^c$ which labels the fundamental representation of
the gauge group $SU(N_c)$ of colour. Nature has chosen $N_c=3$.
The covariant derivative $D_{\mu}$
acts on colour indices through the gluon matrix $G_{\mu}$, which
forms an adjoint $(N_c^2 -1)$-dimensional representation of $SU(N_c)$.
On the quark fields it acts in the usual way, diagonal in flavour indices,
\begin{equation}
\label{2.2}
D_{\mu} q^c = \partial_{\mu} q^c - i {\frac {g}{\sqrt{N_c}}}
{ \left( G_{\mu} \right) }^c_{c'} \; q^{c'} \;.
\end{equation}
The field strength matrix is
$G_{\mu \nu}=\partial_\mu G_\nu -\partial_\nu G_\mu -
i {\frac {g}{\sqrt{N_c}}} \left[ G_\mu, G_\nu \right]$, whereas
$G_{\mu} \equiv G_{\mu}^a \left( \frac {\lambda^a}{2} \right)$;
the sum over $a$ runs from $1$ to $N_c^2 -1$
and $(\lambda^a)$ stands for the $N_c \times N_c$
Gell-Mann matrices of $SU(N_c)$.
The heavy quarks $c$, $b$, $t$ are also omitted from the lagrangian in
(\ref{2.1}) for they decouple from the strong low energy processes which only
involve light pseudoscalars with $u$, $d$, $s$ quantum numbers.
The $N_c$ dependence that accompanies the coupling
constant, $\frac {g}{\sqrt{N_c}}$, is explicitly displayed.
Apart from the usual combinatorial factors that appear in the
Feynman diagrams, this extra dependence in $N_c$ must be added in
order to allow for a smooth, non-trivial $N_c \to \infty$ limit of QCD.
We shall revert to the issue
of counting the leading powers of $N_c$ of each effective coupling in
section 4.
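As an illustration of this counting (a standard large-$N_c$ argument, not specific to this paper): with the rescaled coupling $g/\sqrt{N_c}$, a planar diagram with one extra gluon loop acquires a colour factor $N_c$ from the loop together with $(g/\sqrt{N_c})^2$ from the two extra vertices, so each additional loop contributes the fixed combination
$$ \lambda \;\equiv\; N_c \left( \frac{g}{\sqrt{N_c}} \right)^{\!2} = g^2 \,, $$
which stays finite as $N_c \to \infty$ with $g$ held fixed.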
If the light quark masses are switched off, the lagrangian (\ref{2.1})
becomes invariant under the symmetry group $U_L(n_l) \otimes U_R(n_l)$,
$n_l=3$. Henceforth we keep the number of light flavours, $n_l$, generic
in the expressions.
This is most easily seen by writing the lagrangian above in terms
of the quark left and right components,
\begin{equation}
\label{2.3}
q_L= \frac {1- \gamma_5}{2} q \;,\;\;\;\;\;\;\;\;\;\;\;\;\;
q_R= \frac {1+ \gamma_5}{2} q.
\end{equation}
The terms in (\ref{2.1}) that involve the quark fields read,
\begin{equation}
\label{2.4}
\bar{q}_{L} ( i \gamma_{\mu} D^{\mu}) q_{L}+
\bar{q}_{R} ( i \gamma_{\mu} D^{\mu}) q_{R} \;.
\end{equation}
The symmetry of rotating independently the components $(q_L)_f$ and
$(q_R)_f$ in flavour space with unitary matrices is manifest.
It is a global symmetry that
is explicitly broken by the quark masses. However, if all the quark
masses were the same there would still be an invariance under the
diagonal vector subgroup $U_{L+R} ({n_l})$.
In that case the subgroup coincides with the unbroken
subgroup after spontaneous breaking of $U_L(n_l) \otimes U_R (n_l)$.
\vspace*{0.7cm}
As it is well known not all the symmetries of the classical action
are maintained at the quantum level;
the quantum theory thus generates anomalous contributions
to the divergences of some currents - they are no longer conserved.
The low-energy effective action is a convenient bookkeeping device
to encode the symmetries of the underlying theory - QCD -,
which automatically incorporates
all the unitarity features of quantum field theory.
In writing the effective lagrangian care must be taken that all the
(chiral) Ward Identities (WI) among the Green's functions are well
implemented, including the anomalous ones.
Actually, the method put forward in ref. \cite{gl} constructs
a solution to these WI's. It is based upon the transformation
properties of the generating functional.
The probability amplitude
of transition from the vacuum in the remote past to the vacuum in the
far future, in the presence of terms in the lagrangian that couple
the external sources linearly to the currents, as in (\ref{2.6}) below,
contains all the Green's functions among these currents.
Its logarithm $Z[f]$ is the generating functional of the connected
Green's functions,
\begin{equation}
\label{2.5}
e^{iZ[f]}= {\sum}_n \frac {i^n}{n!} \int dx_1 dx_2 ... dx_n \;
f_{i_1}^{\mu_1}(x_1) f_{i_2}^{\mu_2}(x_2) ... f_{i_n}^{\mu_n}(x_n)
\langle 0| T \; J^{i_1}_{\mu_1}(x_1) J^{i_2}_{\mu_2}(x_2) ...
J^{i_n}_{\mu_n}(x_n) |0\rangle ,
\end{equation}
$J$'s and $f$'s stand for generic currents and external sources, respectively.
We shall consider both
bilinear quark operators (currents) and the topological charge operator
coupled to external sources, added to the QCD lagrangian,
\begin{eqnarray}
{\cal L}&=&{\cal L}_{QCD}+ \bar{q}_L \gamma_\mu l^\mu (x) q_L
+ \bar{q}_R \gamma_\mu r^\mu (x) q_R
- \bar{q}_R (s(x) + i p(x) ) q_L \nonumber \\
&-& \bar{q}_L (s(x) - i p(x) ) q_R
- \frac {g^2}{ 16 \pi^2} \frac {\theta (x)}{N_c} Tr \left( G_{\mu \nu}
{\tilde{G}}^{\mu \nu} \right),
\label{2.6}
\end{eqnarray}
${\tilde{G}}^{\mu \nu}= \epsilon^{\mu \nu \alpha \beta} G_{\alpha \beta} $.
The first two new terms correspond to sources for the
$U_L \otimes U_R$ Noether currents of QCD, the non-singlets are
the generators of Current Algebra;
$s$ is a source for the quark mass term and $p$ for pseudoscalar bilinears
with the quantum numbers of $\pi$, $K$, $\eta$ and $\eta'$.
The sources
$l_\mu$, $r_\mu$, $s$ and $p$ are hermitian $n_l \times n_l$
matrices; $\theta$ is a real function. The axial $a_\mu$ and the
vector $v_\mu$ sources are defined so that $l_\mu=v_\mu - a_\mu$ and
$r_\mu=v_\mu + a_\mu$. One can, formally, write the generating functional
as a path integral,
\begin{equation}
\label{2.7}
exp \{i Z[l, r, s, p, \theta] \}= \int [d\bar{q} \; dq \; dG_\mu] \;
exp \{i \int dx {\cal L} \}.
\end{equation}
The connected Green's functions are obtained by performing functional
derivatives of $Z$ with respect to the sources.
In order to further constrain the form of the effective lagrangian
it is a convenient trick
to promote the global $U_L \otimes U_R$ transformations
- that leave the QCD lagrangian invariant - to local ones
by allowing the external sources to
transform along with the dynamical fields.
The combined set of local $U_L \otimes U_R$
transformations\footnote{Being unitary,
$g_R$ and $g_L$ can be always parametrized as $g_R=exp(i \beta) exp(i \alpha)$,
$g_L=exp(i \beta) exp(-i \alpha)$, with $\alpha$ and $\beta$ $n_l \times
n_l$ hermitian matrices. A pure vector transformation has $\alpha=0$,
whereas an axial one has $\beta=0$.},
\begin{eqnarray}
g_L(x) \in U_L, &\;& g_R(x) \in U_R \nonumber \\
q_L &\rightarrow& g_L q_L \nonumber \\
q_R &\rightarrow& g_R q_R \nonumber \\
l_\mu &\rightarrow& g_L l_\mu {g}_L^{\dagger} +
i g_L \partial_\mu {g}_L^{\dagger} \nonumber \\
r_\mu &\rightarrow& g_R r_\mu {g}_R^{\dagger} +
i g_R \partial_\mu {g}_R^{\dagger} \nonumber \\
s+ip &\rightarrow& g_R \; (s+ip) \; {g}_L^{\dagger} \; ,
\label{2.8}
\end{eqnarray}
na\" {\i}vely becomes a local gauge symmetry for
the lagrangian density in eq. (1).
This is not quite so, due to the fact that the transformations in (\ref{2.8})
also induce a non-trivial
anomalous $U_A(1)$
transformation on the generating functional $Z[l, r, s, p, \theta]$. Its
origin may be traced to the transformation properties of the fermionic
integration measure \cite{f} (or, if one wishes, of the fermion determinant)
once it is properly regularized. This is reflected in the
anomalous divergence in the ninth axial
singlet current,
\begin{equation}
\label{2.9}
\partial_\mu J_5^{\mu \; (0)}= \frac {g^2}{16 \pi^2} \frac {1}{N_c}
Tr_c \left({G_{\mu \nu} {\tilde{G}}^{\mu \nu}} \right)
\;;\;\;\;\; J_5^{\mu \; (0)}=\bar{q} \gamma^\mu \gamma_5 q.
\end{equation}
This anomaly-related effect spoils any
potential advantage of a would-be gauge symmetry, which
would otherwise severely constrain the form of the effective action.
However, this drawback is obviated as the
$U_A(1)$ anomaly contribution may be
altogether eliminated by
judiciously choosing the transformation law for the external
field $\theta(x)$. Indeed,
for infinitesimal $g_L=I + i (\beta - \alpha)$,
$g_R=I+ i (\beta + \alpha)$, the source $\theta (x)$ ought to change as
$$\theta(x) \rightarrow \theta (x) - 2 \langle \alpha(x)\rangle ,$$
(here we have switched to the standard notation and denote the trace
operation over flavour indices by brackets $tr_F (...) \equiv \langle ... \rangle $)
in order for the $U_A(1)$ anomaly to cancel. The term generated by the
anomaly in the fermion determinant is explicitly compensated by
the shift in the $\theta$ source.
A subtlety still remains to be analyzed. The set of local gauge
transformations we have
constructed in (\ref{2.8}) also induces a non-abelian anomaly. This
new drawback cannot be circumvented and needs explicit consideration.
As discussed in \cite{b}, imposing upon regularization the
requirement of conservation for the nine
vector currents, the change in $Z$ under (\ref{2.8}) reads
$$
\delta Z \equiv - \int dx \; \langle \alpha(x) \Omega(x) \rangle , $$
$$
\Omega(x)=\frac {N_c}{16 \pi^2} \epsilon^{\alpha \beta \mu \nu}
\left[ v_{\alpha \beta} v_{\mu \nu} + \frac {4}{3} (\nabla_{\alpha}
a_{\beta}) (\nabla_{\mu} a_\nu) + \frac {2}{3} i \{ v_{\alpha \beta},
a_\mu a_\nu \} + \frac {8}{3} i a_\mu v_{\alpha \beta} a_\nu
+ \frac {4}{3} a_\alpha a_\beta a_\mu a_\nu \right] \;, $$
\begin{equation}
\label{2.11}
v_{\alpha \beta}= \partial_\alpha v_\beta - \partial_\beta v_\alpha
- i [v_\alpha, v_\beta] \;, \;\;\;\;\;\;\;\;\;
\nabla_\alpha a_\beta=\partial_\alpha a_\beta - \partial_\beta a_\alpha
- i [v_\alpha, a_\beta].
\end {equation}
The terms in $\delta Z$ appear due to triangle AVV Feynman diagrams which
involve insertions of an axial and two vector quark bilinear operators.
Higher polygon-shaped diagrams, quadrangles and pentagons are anomalous
as well and give rise to the cubic and quartic terms in (\ref{2.11}).
Adler and Bardeen showed that the coefficients of the
anomaly are not affected by higher-order radiative corrections, i.e.,
diagrams with more than one loop do not contribute to the anomalous terms
\cite{ab}. $\delta Z$ fulfills the Wess-Zumino consistency conditions
\cite{wz}.
The integrated form of this anomaly must be added by hand to the
low-energy effective field theory that we shall construct. Such a theory
will consist of interacting bosons and therefore will not be anomalous.
Thus, the above effect will be contained in an additional term to
the effective lagrangian.
It is worth pointing out that unlike for $SU_L \otimes SU_R$,
here there is the possibility of combining one Wess-Zumino-Witten (WZW) vertex
(which starts out at $O(p^4)$) with the $O(p^0)$ pieces in the lagrangian
and generate additional one-loop divergences at $O(p^4)$. They will
be given elsewhere \cite{anomalia}. Notice that
for $SU_L \otimes SU_R$ this cannot occur at $O(p^4)$ because
the terms of lowest chiral order are $O(p^2)$, and these generate
divergences at $O(p^6)$.
\section{The chiral lagrangian}
The mere knowledge of the symmetries and of the way they are realized
provides enormous insight, for they are reflected in every
aspect of the theory, e.g., in
the spectrum, the Green's functions and the interactions, to mention a few.
In the case of the strong interactions the symmetries are the bulk
of the information which is available at low energies,
since QCD is non-perturbative there. Although a lot of effort has been put
into numerical calculation projects and enormous progress in solving
the technical difficulties has been achieved, so far
the problem has remained too hard to tackle satisfactorily.
In this section we shall write an effective lagrangian for the soft
interactions of the
lightest particles in the spectrum, the nonet of pseudoscalar mesons,
$\pi$, $K$, $\eta$, $\eta'$.
The effective lagrangian is a combined statement about the degrees
of freedom and the symmetries that are relevant
for the processes under study.
{\it Effective} refers to the choice of field variables,
in this case fields for the $\pi$, $K$, $\eta$, $\eta'$ particles that are
observed in the range of energies below the $\rho$-meson mass $m_{\rho}$.
More generally, since these are the lightest particles in the spectrum,
the effective lagrangian serves to determine
the long distance behaviour of any of
the QCD Green's functions, where they are expected to dominate.
Being a symmetry statement, the strategy consists of writing down
for the effective lagrangian the most general expression
that contains all the independent terms compatible with the symmetries,
multiplied by unknown constants.
The fate of many effective theories
is to be of little practical use if the number of unknown constants
blows up. They can always be fitted from experiment, but too often the
number of experimental results available is of the same order as that of
the constants (if not smaller),
rendering the approach little predictive.
In the present case the spontaneously broken symmetry character
and the consequent goldstone nature of the $\pi$, $K$, $\eta$, $\eta'$
impose severe constraints on the form of the interactions. The
number of unknown constants reduces in a drastic manner
if one restricts to the lower terms. Actually, they would reduce to a handful
of them if there were no $U_A(1)$ anomaly effects to incorporate, as happens
in the $SU_L \otimes SU_R$ lagrangian if one only keeps terms up
to $O(p^4)$. Fortunately, the
swarm of new terms that the $U_A(1)$ anomaly introduces
carry high powers of $1/N_c$, and to lower orders in $1/N_c$
only a few survive.
Following \cite{gl}, \cite{l} we collect the nine pseudoscalar fields
in a hermitian matrix $\Phi (x)$,
$$\Phi (x)= \eta^0(x) \lambda_0+ \pi (x)$$
where $\pi(x)=\vec{\pi}(x) \cdot \vec{\lambda}$,
$\lambda_0= \sqrt{\frac {2}{n_l}} I$ and
$\vec{\lambda}$ are the Gell-Mann matrices of $SU(n_l)$. For $n_l=3$ (see
Appendix B),
\begin{equation}
\label{3.1}
\Phi (x)=\left(\begin{array}{ccc}
\frac{1}{\sqrt{2}}\pi^0 + \frac{1}{\sqrt{6}}\eta^8+ \sqrt{\frac {2}{3}} \eta^0
& \pi^+ & K^+ \\
\pi^- & -\frac{1}{\sqrt{2}}\pi^0+ \frac{1}{\sqrt{6}}\eta^8+
\sqrt{\frac {2}{3}} \eta^0 & K^0 \\
K^- & {\bar{K}}^0 & - \frac{2} {\sqrt{6}}\eta^8+ \sqrt{\frac {2}{3}} \eta^0
\end{array}\right).
\end{equation}
The unitary $n_l \times n_l$ matrix $U(x)$ is the exponential of $\Phi (x)$,
\begin{equation}
U(x) \equiv e^{ i \Phi (x) / f }\ ,
\label{U}
\end{equation}
$f$ is an order parameter of chiral symmetry breaking and gives the strength
of the coupling between
the goldstone bosons and the currents that are spontaneously broken
and do not annihilate the vacuum.
In this case $\det U$ is a phase,
$$\det U= \exp ( i \sqrt{2 n_l} \; \eta^0 /f ).$$
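This phase follows directly from the identity $\det U = \exp \langle \log U \rangle$ together with $\langle \Phi \rangle = \eta^0 \langle \lambda_0 \rangle = \sqrt{2/n_l}\; n_l\, \eta^0 = \sqrt{2 n_l}\, \eta^0$:
$$ \det U = \exp \left( \frac{i}{f}\, \langle \Phi \rangle \right)
= \exp \left( i \sqrt{2 n_l}\; \eta^0 / f \right). $$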
Under $U_L \otimes U_R$, U transforms linearly,
\begin{equation}
\label{3.2}
U \rightarrow g_R U g^{\dagger}_L.
\end{equation}
The transformations induced by (\ref{3.2}) on $\Phi$ are more involved.
Under $U_{R+L}(1)$, $\Phi$ is automatically invariant, as it should,
for mesons must carry baryon number equal to zero.
Under $SU_{L+R}(3)$, $\Phi$ transforms linearly: it contains
two irreducible representations, the octet $\pi$ and the singlet
$\eta^0$.
Under an axial transformation
$$U \longrightarrow e^{i \alpha} U e^{i \alpha},$$
$\Phi$ changes nonlinearly,
$$ \Phi \rightarrow \Phi + 2 f \alpha + O(\alpha^2).$$
This can be understood on geometrical grounds
since the fields in $\Phi$ may be regarded as coordinates that span
the coset space $U_L \otimes U_R \; / \; U_{R+L}$, upon which
the $U_L \otimes U_R$ acts: the fields
themselves are, in a sense, parameters of a group element,
and the nonlinearity
reflects the group transformation law when written in terms of the
continuous parameters.
Since $\langle \log U \rangle = i \sqrt{2 n_l} \; \eta^0/f$,
$$\langle \log U\rangle \
\longrightarrow \ \langle \log
U\rangle + \langle \log (g_R g_L^{\dagger})\rangle = \langle \log U\rangle
+ 2 i \langle \alpha\rangle \; ;$$
$\eta^0$ gets thus shifted only under an axial transformation and is invariant
under any vector one.
Under $U_L \otimes U_R$ it never mixes with any of the $\pi$ components.
With this choice of fields, the origin of the derivative couplings among
goldstone bosons becomes transparent: a lagrangian
invariant under global $U_L \otimes U_R$ ought to
reduce to zero if $U$ were a constant matrix, for then, by
virtue of the symmetry, it could always be transformed away with a global
unitary rotation: the couplings must thus
be derivative couplings \cite{llibre de Georgi}.
The expansion will be in powers of momenta of the soft light mesons divided
by a scale of chiral symmetry breaking, which is of order $ \sim 4 \pi f \;
\sim m_\rho$, the mass of the next excited state, the $\rho$-meson.
In addition, in the present case the $U_A (1)$ anomaly introduces
novel couplings of the $\eta^0$ meson that are not derivative couplings.
The objects at hand to construct the chiral
effective lagrangian are the matrix $U$ and the external sources,
$r_\mu$, $l_\mu$, $(s+ip)$ and $\theta$.
It is useful to introduce covariant derivatives for the fields
and the sources.
The covariant derivative will act on flavour space and, as usual,
its action will depend on the transformation law of the object it derives,
\begin{eqnarray}
D_\mu U &=& \partial_\mu U - i r_\mu U + i U l_\mu, \nonumber \\
D_\mu (s+ip) &=& \partial_\mu (s+ip) - i r_\mu (s+ip) +
i (s+ip) l_\mu, \nonumber \\
D_\mu \langle \log U\rangle &=&\langle U^\dagger D_\mu U\rangle =
\partial_\mu \langle \log U\rangle -
i \langle r_\mu - l_\mu\rangle , \nonumber \\
i D_\mu \theta &=& i \partial_\mu \theta + i \langle r_\mu - l_\mu\rangle .
\label{3.3}
\end{eqnarray}
One is led to introduce field strengths for the vector
and axial sources
\begin{eqnarray}
F^L_{\mu \nu}= \partial_\mu l_\nu - \partial_\nu l_\mu - i [ l_\mu, l_\nu],
\nonumber \\
F^R_{\mu \nu}= \partial_\mu r_\nu - \partial_\nu r_\mu - i [ r_\mu, r_\nu].
\label{3.4}
\end{eqnarray}
Under local $U_L \otimes U_R$ they transform as
\begin{eqnarray}
D_\mu U &\rightarrow& g_R
\left( D_\mu U \right) g_L^{\dagger} \nonumber \\
D_\mu (s+ip) &\rightarrow& g_R \left(
D_\mu (s+ip) \right) g_L^{\dagger} \nonumber \\
F^L_{\mu \nu} &\rightarrow& g_L F^L_{\mu \nu} g_L^{\dagger} \nonumber \\
F^R_{\mu \nu} &\rightarrow& g_R F^R_{\mu \nu} g_R^{\dagger}.
\label{3.5}
\end{eqnarray}
The combination
\begin{equation}
X(x) \equiv \langle \log U(x)\rangle +\hat{\theta}(x) =
i\ \frac{\sqrt{2 n_l}}{f}\ \eta^0 +\hat{\theta}(x) , \;\;\;\;
\hat{\theta} \equiv i \theta,
\label{3.5bis}
\end{equation}
is invariant, and so is any function of $X$ \cite{w1}, \cite{l}.
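Indeed, for an infinitesimal axial transformation with parameter $\alpha$, the transformation laws of $U$ and of the source $\theta$ give
$$ \langle \log U \rangle \;\rightarrow\; \langle \log U \rangle + 2 i \langle \alpha \rangle\,,
\qquad \hat\theta = i\theta \;\rightarrow\; \hat\theta - 2 i \langle \alpha \rangle\,, $$
so the two shifts cancel in $X$.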
This is a novelty of
$U_L \otimes U_R$ and it is possible because
$\langle \log U \rangle$ does not vanish, as it does for $SU_L \otimes
SU_R$. Due to the $U_A(1)$ anomaly, each invariant operator
generates, in reality, an infinite set of invariant operators, since
the symmetry allows to multiply it by any function of $X$ and still remain
invariant. Notice that this method of finding new operators by multiplying
the old ones by functions of $X$ never introduces new derivatives
to the vertices. It is for the same reason that counting the number
of derivatives and retaining operators that contain up to a certain
number does not limit the number of free constants, as it does
in $SU_L \otimes SU_R$. At each order in the derivative expansion
we find an infinite number of constants.
It is customary to introduce the source $\chi (x)$
$$\chi \equiv 2 B (s+ip),$$
which transforms as the $U$ matrix.
$B$ is a constant that is related to the quark condensate
$\langle \bar{q} q\rangle$.
Its relevance comes from the fact that the explicit symmetry breaking driven
by the quark masses provides the former goldstone bosons also with a mass.
If $m_q$ denotes a quark mass,
to lowest chiral order there is a contribution to the meson masses
which involves $B$ and is given by $m_\pi^2= 2 B m_q$. We {\it assume} that
it is the bulk of it: we make the {\it hypothesis} that
further pieces which involve order parameters other
than $B$ are negligible, comparatively.
This leads to counting $\chi$ as $O(p^2)$.
The $\eta'$ gets an additional piece to its mass through the $U_A(1)$ anomaly,
which is $O(1/N_c)$, and which does not add to the masses in the octet
(\ref{mass}).
The singlet-octet mass splitting is large in practice, $\sim 400$ MeV
in the least extreme case.
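As a rough numerical illustration of this counting (a sketch only: the pion mass, average light-quark mass $\hat m$, and decay constant used below are assumed reference values, not quantities fixed in the text), one can invert $m_\pi^2 = 2 B \hat m$ for $B$ and form the lowest-order condensate estimate $\langle \bar q q\rangle \simeq -f^2 B$:

```python
# Hedged numerical sketch of the lowest-order relation m_pi^2 = 2 B m_q.
# All input values are illustrative assumptions (units: MeV).
m_pi = 135.0   # neutral pion mass
m_hat = 3.5    # assumed average light quark mass (m_u + m_d)/2
f = 92.0       # pion decay constant in the F_pi ~ 92 MeV convention

B = m_pi**2 / (2.0 * m_hat)             # order parameter B
condensate = -(f**2) * B                # lowest-order <qbar q> = -f^2 B
scale = abs(condensate) ** (1.0 / 3.0)  # characteristic condensate scale
print(round(B), round(scale))           # roughly 2604 280
```

The resulting $|\langle \bar q q\rangle|^{1/3} \sim 280$ MeV is meant only to show the orders of magnitude compatible with counting $\chi$ as $O(p^2)$.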
All the constants that appear in the effective theory are free, to
be fitted to experiment. Although we have not been able to compute
them from QCD, there are exact inequalities, based on the vector structure
of QCD, that they have to satisfy. These relations are non-perturbative
\cite{nosaltres}.
\vspace*{0.5cm}
From the effective lagrangian, one can make contact with QCD through
the generating functional $Z[l,r,s,p,\theta]$ introduced in section 2.
One demands that the same functional, that is, the same Green's functions,
be obtained from the effective theory as from QCD.
It can be formally
written in terms of the light pseudoscalar fields, collected in $U(x)$, as
\begin{equation}
\label{3.6} \left.
\exp \{ i Z[l,r,s,p,\theta] \}= \int [dU] \; e^{ i \int dx \; L_{eff} }
\;\;\; \right|_{low \; modes}.
\end{equation}
The chiral lagrangian only copes with the
long-distance behaviour of Green's functions, and only the lowest modes have
physical significance in (\ref{3.6}). For distances smaller than
$1/m_\rho$ the approach is inappropriate, and
the integrals over loop momenta carry a natural cutoff, $m_\rho$,
associated with them. The higher modes, corresponding to more energetic
pseudoscalars, and the rest of the massive hadrons are integrated out;
their effect manifests itself through the coupling constants of the effective
theory \cite{e}.
The most general lagrangian invariant under (local) $U_L \otimes U_R$
\footnote{Terms of the sort $\sum_{\alpha=0}^{8}\langle \lambda^\alpha
U^\dagger D_\mu U\rangle \langle \lambda^\alpha U^\dagger D^\mu U\rangle $
and the like, given the
properties of the $\{ \lambda^\alpha \}$ matrices, can be re-expressed
in terms of the operators written in the text. In this particular case, with
the relation $\sum_{\alpha=0}^{8} \lambda^\alpha_{ij} \lambda^\alpha_{kl}=
2 \delta_{il} \delta_{jk}$, it becomes $-2\langle
(D_\mu U^\dagger)( D^\mu U)\rangle $}
that includes terms with two derivatives or less, or one
power of $\chi$, is the following \cite{gl}, \cite{l},
\begin{eqnarray}
{\cal L}_{(0+2)}=&-&W_0(X)+ W_1(X) \langle D_\mu U^\dagger D^\mu U\rangle +
W_2(X) \langle U^\dagger \chi + \chi^\dagger U\rangle + i W_3(X)
\langle U^\dagger \chi - \chi^\dagger U\rangle \nonumber \\
&+& W_4(X) \langle U^\dagger D_\mu U\rangle \langle U^\dagger D^\mu U\rangle +
W_5(X) \langle U^\dagger (D_\mu U)\rangle D^\mu \hat{\theta}
+ W_6 (X) D_\mu \hat{\theta} D^\mu \hat{\theta}.
\label{3.7}
\end{eqnarray}
Parity conservation implies that they
are all even functions of $X$ except for $W_3$, which is odd. Furthermore,
$W_4(0)=0$ and $W_1(0)=W_2(0)=\frac {f^2}{4}$ give the correct normalization
for the quadratic terms.
The first term $W_0$ has neither derivatives nor powers of $\chi$, and
therefore counts as $O(p^0)$. The rest of the terms count as $O(p^2)$.
The $1/N_c$ power counting of the diverse couplings is given in the next
section.
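As an aside, the completeness relation quoted in the earlier footnote, $\sum_{\alpha=0}^{8} \lambda^\alpha_{ij} \lambda^\alpha_{kl}= 2 \delta_{il} \delta_{jk}$, can be checked numerically for $n_l=3$. The sketch below (an illustration, not part of the original derivation) uses the eight Gell-Mann matrices supplemented by $\lambda^0=\sqrt{2/3}\,{\bf 1}$:

```python
import numpy as np

# Numerical check (a sketch) of the U(3) completeness relation
# sum_alpha lambda^alpha_{ij} lambda^alpha_{kl} = 2 delta_il delta_jk,
# for n_l = 3: the eight Gell-Mann matrices plus lambda^0 = sqrt(2/3) * 1.
lam = [np.zeros((3, 3), dtype=complex) for _ in range(9)]
lam[0] = np.sqrt(2.0 / 3.0) * np.eye(3)
lam[1][0, 1] = lam[1][1, 0] = 1
lam[2][0, 1], lam[2][1, 0] = -1j, 1j
lam[3][0, 0], lam[3][1, 1] = 1, -1
lam[4][0, 2] = lam[4][2, 0] = 1
lam[5][0, 2], lam[5][2, 0] = -1j, 1j
lam[6][1, 2] = lam[6][2, 1] = 1
lam[7][1, 2], lam[7][2, 1] = -1j, 1j
lam[8] = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)

T = sum(np.einsum('ij,kl->ijkl', m, m) for m in lam)
target = 2 * np.einsum('il,jk->ijkl', np.eye(3), np.eye(3))
print(np.allclose(T, target))  # True
```

Dropping $\lambda^0$ from the sum leaves the familiar $SU(3)$ result with the extra $-\frac{2}{3}\delta_{ij}\delta_{kl}$ piece, which is precisely what the singlet generator cancels.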
The term $W_0$ brings a mass term for the $\eta^0$
\begin{equation}
\label{mass}
\left. m_{\eta^0}^2 \right|_{U_A(1)} = - \frac {2 n_l}{f^2} W_0''(0),
\end{equation}
whereas the first term in the expansion of $W_3$ gives a contribution
to singlet-octet mixing from $U_A(1)$.
The $N_c \to \infty$ limit of QCD actually imposes more restrictions
on the effective lagrangian than the symmetry $U_L \otimes U_R$ alone does.
Under $U_L \otimes U_R$, the fields
$\eta^0$ and $\pi$ never mix their components, and
there is no reason why the particles they create should bear
any relationship to one another, as though
they belonged to the same irreducible
representation, the way the $\pi$ do. There is no reason, either, why in
the definition of $U(x)$ in (\ref{3.1}) $\eta^0$ and $\pi$
should appear in the exponent
divided by the same constant $f$, i.e., normalised
in the same way. One could have written instead
$$U(x)= e^{ i \left( \sqrt{\frac {2}{n_l}} \frac {\eta^0}{f_0} +
\frac {\pi}{f} \right)}\ ,$$
with $f$ and $f_0$ unrelated, and yet (\ref{3.7}) would still be
$U_L \otimes U_R$ invariant.
The proper way to settle this issue is to fix the normalization
of each field by looking at the kinetic energy as well, not only at $U(x)$.
In (\ref{3.7}), the kinetic energy terms are
$ \frac {f^2}{4} \langle D_\mu U^\dagger D^\mu U\rangle
+W_4(0) \langle U^\dagger D_\mu U\rangle \langle U^\dagger D^\mu U\rangle ,$
which read,
$$ \frac {1}{2} \left(\partial_\mu \vec{\pi} \right)
\cdot \left( \partial^\mu \vec{\pi} \right)+\frac {1}{2} \left(
\frac {f^2}{f_0^2}
-\frac{ 4 n_l W_4(0)}{f_0^2} \right) \partial_\mu \eta_0
\partial^\mu \eta_0.$$
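For completeness, a sketch of how these quadratic terms arise from the modified $U(x)$ above, using $\langle \lambda^a \lambda^b\rangle = 2\delta^{ab}$ and $\langle {\bf 1}\rangle = n_l$ (only the lowest-order, derivative pieces are kept):

```latex
% Sketch: quadratic expansion of the kinetic terms for
% U = exp(i\Phi), \Phi = \sqrt{2/n_l}\,\eta^0/f_0\,{\bf 1} + \pi^a\lambda^a/f.
\begin{eqnarray*}
\langle \partial_\mu U^\dagger \partial^\mu U \rangle &=&
\langle \partial_\mu \Phi \, \partial^\mu \Phi \rangle + \ldots =
\frac{2}{f_0^2}\, \partial_\mu \eta^0 \, \partial^\mu \eta^0
+ \frac{2}{f^2}\, \partial_\mu \pi^a \, \partial^\mu \pi^a + \ldots, \\
\langle U^\dagger \partial_\mu U \rangle &=&
i\, \frac{\sqrt{2 n_l}}{f_0}\, \partial_\mu \eta^0 + \ldots,
\end{eqnarray*}
```

so that $\frac{f^2}{4}\langle \partial_\mu U^\dagger \partial^\mu U\rangle + W_4(0) \langle U^\dagger \partial_\mu U\rangle \langle U^\dagger \partial^\mu U\rangle$ reproduces the coefficient $\frac{1}{2}\left( f^2 - 4 n_l W_4(0) \right)/f_0^2$ of $\partial_\mu \eta_0\, \partial^\mu \eta_0$ quoted above.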
The normalization condition $\left(f^2-4 n_l W_4(0) \right)/ f_0^2 =1$
relates three constants;
it may be viewed as an arbitrariness in their definition, for a change in
$f_0$ can always be compensated for by an appropriate change in $W_4(0)$.
Once this normalization is fixed,
the singlet $\eta^0$ couples to the singlet axial current with strength
$f_0$, whereas the octet fields couple to the octet axial current with
strength $f$.
Unlike $U_L \otimes U_R$, which does not have a dimension nine irreducible
representation, the $N_c \to \infty$ limit
really enforces a nonet symmetry, with the $\pi$, $K$, $\eta$, $\eta '$
all having identical properties.
Indeed, the planar Feynman diagrams that contribute
to $\langle J_{5 \ \mu}^{(0)}(x) J_{5 \ \nu}^{(0)}(0)\rangle $ and to
$\langle J_{5 \ \mu}^{(a)}(x) J_{5 \ \nu}^{(a)}(0)\rangle $ ($a=1,...,8$), with
$J_{5 \ \mu}^{\rho}=\bar{q} \gamma_\mu \gamma_5 \frac {\lambda^\rho}{2} q$,
are the same, since the $\bar{q} q \to$ gluons annihilation diagrams that
would only contribute to the singlet channel turn out to be
$1/N_c$ suppressed (OZI-violating processes).
Barring $1/N_c$ corrections, the two decay constants coincide: $f/f_0=1$.
Moreover, if the same quark mass were switched on for all light quark species,
an identical mass would be generated for the singlet $\eta^0$ and for the
octet particles \cite{w1}, \cite{v}.
The standard normalization of the kinetic energy and
nonet symmetry require $W_4(0)=0$.
Without loss of generality, in that case
the term $W_4(X)$ may be eliminated altogether
by a change of field variables of the type
$U \to U \; \exp [ i F(X) ]$, which maintains the transformation properties
of $U$ (but changes the normalization conditions of $\eta^0$).
\vspace*{0.5cm}
The background-field and steepest-descent methods, when applied
to the functional integral (\ref{3.6}), provide
the loop-wise expansion of the generating functional.
The Green's functions are read off from
the generating functional about its minimum, attained when the external
sources are switched off, $\chi = 2 B \; diag (m_u, \; m_d, \; m_s)$ and
$\hat{\theta}=0$.
To lowest order we assume that the minimum is achieved at $U_0=I$, which is
compatible with the equation that minimizes the lowest-order effective
action \cite{gl}.
In order to include one-loop corrections,
one proceeds to introduce a background
field matrix $\bar{U} (x)$, and expand (\ref{3.7})
about this background configuration. For that we decompose $U(x)$ as
$$U(x)=\bar{U}(x) \Sigma (x), \;\;\;\;\;\;\;\;\; \Sigma(x) =e^{i\Delta(x)}\ ,$$
with $\Delta$ the matrix of quantum fields
\footnote{The procedure is not manifestly {\it left - right} symmetric. This
is not a worrisome issue given that the quantization of scalar fields in four
dimensions does not have any anomaly that would favor one over the other.
Simplicity reasons have led to our choice. A more symmetric treatment
would lead to the same final results.}.
By choosing $\bar{U}$ to transform under $U_L \otimes U_R$ as $U$,
the transformation laws for $\Sigma$ and $\Delta$ become
$$\Delta \rightarrow g_L \Delta g_L^\dagger , \;\;\;\;\;\;
\Sigma \rightarrow g_L \Sigma g_L^\dagger,$$
and, therefore, their covariant derivatives are
$$D_\mu \Sigma = \partial_\mu \Sigma -i[l_\mu,\Sigma ], \;\;\;\;\;\;
D_\mu \Delta = \partial_\mu \Delta -i[l_\mu,\Delta ].$$
Next we expand the lagrangian in (\ref{3.7}) about $\bar{U}$ and
keep terms up to quadratic in $\Delta$. The one-loop effective action
is obtained upon evaluation of the functional determinant of the
differential operator that appears in the piece quadratic in $\Delta$.
Finally, the background
field $\bar{U}(x)$, which is {\it a priori} independent of the sources,
is judiciously chosen so as to satisfy the stationarity condition
$\delta Z[\bar{U},l,r,s,p,\theta] =0$ with the sources held fixed.
The method ensures that, with this $\bar{U}$, $Z$ is the effective action
$\Gamma[\bar{U}]$, the generator of the one-particle-irreducible Green's
functions of the fields gathered in $U$, in the presence of the external
sources.
For the one-loop effective action it suffices to use
the tree level equations of motion.
The corrections to it would modify the two-loop effective action.
Expanding the effective lagrangian in (\ref{3.7}) about $\bar{U}$ and
disregarding the terms linear in $\Delta(x)$, which vanish by the
equations of motion, one obtains:
\begin{eqnarray}
{\cal L}_{0+2}(\bar{U} \Sigma)&=&{\cal L}_{0+2}(\bar{U})+
{\cal A} \; \langle \Delta\rangle ^2 \nonumber \\
&+& 2 W_1'(\bar{X}) \langle \Delta\rangle \langle C^\mu
\ D_\mu \Delta\rangle
+ W_1(\bar{X}) \langle D_\mu \Delta \ D^\mu \Delta\rangle
+ W_1(\bar{X}) \langle C_\mu [\Delta, D^\mu \Delta]\rangle \nonumber \\
&+& W_2'(\bar{X}) \langle \Delta\rangle \langle \Delta N\rangle
- \frac {1}{2} W_2 (\bar{X}) \langle \Delta ^2 M\rangle \nonumber \\
&+& i W_3'(\bar{X}) \langle \Delta\rangle \langle \Delta M\rangle -
\frac {i}{2} W_3 (\bar{X})
\langle \Delta ^2 N\rangle \nonumber \\
&-& W_5' (\bar{X}) \langle \Delta\rangle \langle D_\mu \Delta\rangle
D^\mu \hat{\theta}
+O(\Delta)^3 ,
\label{3.8}
\end{eqnarray}
where
\begin{eqnarray}
{\cal A} &\equiv& \frac {1}{2} W_0''(\bar{X}) + \frac {1}{2} W_1''(\bar{X})
\langle C_\mu C^\mu\rangle - \frac {1}{2} W_2''(\bar{X}) \langle M\rangle
\nonumber \\
&-& \frac {i}{2} W_3''(\bar{X}) \langle N\rangle - \frac {1}{2} W_5''(\bar{X})
\langle C_\mu\rangle D^\mu \hat{\theta} - \frac {1}{2} W_6''(\bar{X})
D_\mu \hat{\theta} D^\mu \hat{\theta},
\label{3.9}
\end{eqnarray}
and
$$C_\mu \equiv \bar{U}^\dagger D_\mu \bar{U},\;\;\;\;\;\
M \equiv \bar{U}^\dagger \chi + \chi^\dagger \bar{U},\;\;\;\;\;
N\equiv \bar{U}^\dagger \chi - \chi^\dagger \bar{U};$$
all three transform alike, $C_\mu \to g_L C_\mu g_L^\dagger$,
$M \to g_L M g_L^\dagger$,
$N \to g_L N g_L^\dagger$; also, $C_\mu^\dagger= - C_\mu$, $M^\dagger = M$
and $N^\dagger=-N$.
The functions $W_i (X)$ have been Taylor expanded about the background value
$\bar{X}=\langle \log \bar{U}\rangle + \hat{\theta}$ as follows,
$$W_k (\langle \log (\bar{U} \Sigma)\rangle + \hat{\theta})=
W_k (\bar{X}) + i W_k'(\bar{X}) \langle \Delta\rangle
-\frac {1}{2} W_k''(\bar{X}) \langle \Delta\rangle ^2 + O(\Delta)^3.$$
Integrations by parts have been performed where necessary and the
total divergences have been discarded.
The equations of motion can be read off from the terms that are
linear in the variation $\Delta$,
\begin{eqnarray}
D_\mu C^\mu &=& \frac {1}{2} \frac {W_0'}{W_1} +
\frac {1}{2} \frac {W_1'}{W_1} \langle C^\mu C_\mu\rangle
- \frac {W_1'}{W_1} (\langle C_\mu\rangle +D_\mu \hat{\theta}) C^\mu
\nonumber \\
&+&\frac {1}{2} \frac {W_2}{W_1} N + \frac {i}{2} \frac{W_3}{W_1} M
- \frac {1}{2} \frac {W_2'}{W_1} \langle M\rangle - \frac{i}{2}
\frac {W_3'}{W_1} \langle N\rangle
\nonumber \\
&+& \left( \frac {1}{2} \frac {W_5'}{W_1}- \frac {1}{2}
\frac {W_6'}{W_1} \right)
(D_\mu \hat{\theta})(D^\mu \hat{\theta})
+ \frac {1}{2} \frac {W_5}{W_1} D_\mu D^\mu \hat{\theta},
\label{3.10}
\end{eqnarray}
(Henceforth the arguments $\bar{X}$ are omitted from the functions
$W_k$'s and their derivatives that appear in the calculation;
also, bars are suppressed from $\bar{X}$ and $\bar{U}$).
In order to be able to use the expressions collected in Appendix A for the
evaluation of the one-loop divergent parts,
it proves useful to perform a change of
integration variables
so as to leave the operator that acts on the quadratic piece in
(\ref{3.8})
in the usual form, with the Laplacian piece $\partial_\mu \partial^\mu$
multiplied by a constant, not by a function.
For this purpose we change variables to
\begin{equation}
\Delta(x)= \frac {f}{2} \frac {1}{\sqrt{W_1}} \varphi (x),
\label{3.11}
\end{equation}
and expand the hermitian matrix $\varphi(x)$ in the basis of matrices
$\lambda^\alpha$ (see Appendix B):
$$\varphi = \varphi^\alpha \lambda^\alpha, \;\;\;\;\;\;\;\;\varphi^\alpha=
\frac {1}{2}\langle \varphi \lambda^\alpha\rangle .$$
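The component formula $\varphi^\alpha=\frac{1}{2}\langle \varphi \lambda^\alpha\rangle$ can be checked numerically as well. The sketch below (an illustration, not part of the derivation) reconstructs a random hermitian matrix from its components in a $U(3)$ basis normalised as $\langle \lambda^\alpha \lambda^\beta\rangle = 2\delta^{\alpha\beta}$:

```python
import numpy as np

# Sketch: check that phi^alpha = (1/2) <phi lambda^alpha> reconstructs a
# hermitian matrix phi = phi^alpha lambda^alpha in a U(3) basis normalised
# as <lambda^alpha lambda^beta> = 2 delta^{alpha beta}
# (eight Gell-Mann-type matrices plus lambda^0 = sqrt(2/3) * identity).
def u3_basis():
    lam = [np.zeros((3, 3), dtype=complex) for _ in range(9)]
    lam[0] = np.sqrt(2.0 / 3.0) * np.eye(3)
    for k, (i, j) in enumerate([(0, 1), (0, 2), (1, 2)]):
        lam[1 + 2 * k][i, j] = lam[1 + 2 * k][j, i] = 1       # symmetric
        lam[2 + 2 * k][i, j], lam[2 + 2 * k][j, i] = -1j, 1j  # antisymmetric
    lam[7] = np.diag([1.0, -1.0, 0.0])
    lam[8] = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3.0)
    return lam

rng = np.random.default_rng(2)
M = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
phi = (M + M.conj().T) / 2                        # random hermitian phi
basis = u3_basis()
comp = [0.5 * np.trace(phi @ lam) for lam in basis]
rebuilt = sum(c * lam for c, lam in zip(comp, basis))
print(np.allclose(rebuilt, phi))  # True
```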
Retaining the piece quadratic in the quantum fluctuating fields
$\varphi^\alpha$, one finds
\begin{equation}
{\cal L}_{(0+2)}^{Quadratic}= -\frac{f^2}{2} \varphi^\alpha
\left( d_\mu d^\mu + \sigma \right)^{\alpha \beta}
\varphi^\beta ,
\label{3.12}
\end{equation}
where
\begin{equation}
[d_\mu \varphi]^\alpha= \partial_\mu \varphi^\alpha + \omega_\mu^{\alpha
\beta} \varphi^\beta,
\label{3.13}
\end{equation}
and
\begin{equation}
\omega_\mu^{\alpha \beta}=
\frac {i}{2} \langle (l_\mu + \frac {i}{2} C_\mu)
[\lambda^\alpha,\lambda^\beta]\rangle
+\frac {1}{4} \frac {W_1'}{W_1} \left(\langle C_\mu
\lambda^\alpha\rangle \langle \lambda^\beta\rangle
-\langle C_\mu \lambda^\beta\rangle \langle \lambda^\alpha\rangle \right).
\label{3.14}
\end{equation}
Notice that $\omega_\mu^{\alpha \beta}$ is antisymmetric in
$\alpha$, $\beta$. It is this property that allows one to integrate $d_\mu$
by parts as a whole, as though it were simply $\partial_\mu$.
For that reason the operator $d_\mu d^\mu + \sigma$ is manifestly
hermitian. The expression in (\ref{3.12}) differs from that of
(\ref{3.8}), after the change of variables, by a total derivative. The
evaluation of the Gaussian integral involves the expression
(\ref{3.12}), though:
it is the determinant of the {\it hermitian} differential operator
that has to be evaluated
\footnote{In practice this means that any term of the sort $\varphi^\alpha
f^{\alpha \beta}_\mu (x) \partial^\mu \varphi^\beta$ (which,
in general, is {\it not} hermitian as can be seen
if one tries to make the operator act on
$\varphi^\alpha$ on the left) can always be written as $\varphi^\alpha
a^{\alpha \beta}_\mu (x) \partial^\mu \varphi^\beta- \frac {1}{2}
\varphi^\alpha
\partial^\mu \left( s^{\alpha \beta}_\mu (x) \right) \varphi^\beta
+\frac {1}{2} \partial^\mu \left(
\varphi^\alpha s^{\alpha \beta}_\mu (x) \varphi^\beta \right)$. Notice
that the last term is a total derivative which we shall discard. The
remaining part
$a^{\alpha \beta}_\mu(x) \partial^\mu$
is hermitian now. The $a^{\alpha \beta}_\mu (x)$
and $s^{\alpha \beta}_\mu (x)$ are the antisymmetric and symmetric parts
of $f^{\alpha \beta}_\mu (x)$, respectively. This decomposition
is unique.}.
The curvature associated with this connection is
\begin{eqnarray}
R^{(\alpha \beta)}_{\mu \nu}&=& \partial_\mu \omega_\nu^{\alpha \beta} -
\partial_\nu \omega_\mu^{\alpha \beta}+
\omega_\mu^{\alpha \gamma} \omega_\nu^{\gamma \beta} -
\omega_\nu^{\alpha \gamma} \omega_\mu^{\gamma \beta} \nonumber \\
&=&\frac {i}{4} \langle Q_{\mu \nu} [\lambda^\alpha, \lambda^\beta] \rangle +
\frac {1}{4} \left( \langle H_{\mu \nu} \lambda^\alpha\rangle \langle
\lambda^\beta\rangle
-\langle H_{\mu \nu} \lambda^\beta\rangle \langle \lambda^\alpha\rangle
\right) \nonumber \\
&-& \frac{n_l}{8} \left( \frac {W_1'}{W_1} \right)^2
\left( \langle C_\mu \lambda^\alpha\rangle \langle C_\nu \lambda^\beta\rangle
-\langle C_\mu \lambda^\beta\rangle \langle C_\nu \lambda^\alpha\rangle
\right),
\label{3.14p}
\end{eqnarray}
$\mu, \nu$ are space-time indices, whereas $\alpha, \beta, \gamma$
label the Gell-Mann matrices of $U(n_l)$,
$$Q^{\mu \nu}= F_L^{\mu \nu} + U^\dagger F_R^{\mu \nu} U - \frac {i}{2}
[C^\mu, C^\nu],$$
and
\begin{eqnarray}
H^{\mu \nu} &=& \left( \frac {W_1''}{W_1} - \frac {3}{2}
\left(\frac {W_1'}{W_1}\right)^2 \right)
\left( \langle C^\mu\rangle C^\nu - \langle C^\nu\rangle C^\mu \right)
+\left( \frac {W_1''}{W_1} - \left(\frac {W_1'}{W_1} \right)^2 \right)
\left( C^\nu D^\mu \hat{\theta} - C^\mu D^\nu \hat{\theta} \right)
\nonumber \\
&+& \frac {W_1'}{W_1} \left( i F_L^{\mu \nu} - i U^\dagger F_R^{\mu \nu} U
\right). \nonumber
\end{eqnarray}
For $\sigma$ we find
\begin{eqnarray}
\sigma^{\alpha \beta} &=& \frac{1}{8} \langle [C_\mu, \lambda^\alpha]
[C^\mu, \lambda^\beta]\rangle +\frac{1}{8} \langle
R \{\lambda^\alpha,\lambda^\beta \}\rangle
+S \delta^{\alpha \beta}
+\frac {n_l}{8} \left( \frac {W_1'}{W_1} \right)^2 \langle C_\mu
\lambda^\alpha\rangle
\langle C^\mu \lambda^\beta\rangle \nonumber \\
&+& \frac {1}{4} ( \langle T \lambda^\alpha\rangle \langle
\lambda^\beta\rangle
+\langle T \lambda^\beta\rangle \langle \lambda^\alpha\rangle ),
\label{3.15}
\end{eqnarray}
where
\begin{eqnarray}
S &=& -\frac {1}{2} \left( \frac {W_1''}{W_1} - \frac {1}{2} \left(
\frac{W_1'}{W_1} \right)^2 \right) (\langle C_\mu\rangle +
D_\mu\hat {\theta} \;)^2
-\frac {1}{2} \frac {W_1'}{W_1} (\langle D_\mu C^\mu\rangle +
D_\mu D^\mu \hat{\theta}\;), \nonumber \\
T &=& -\frac {1}{2} \frac {W_0''}{W_1} +
\left( \frac {W_1''}{W_1} - \frac {1}{2} \left(
\frac{W_1'}{W_1} \right)^2 \right) ( \langle C_\mu\rangle C^\mu - \frac {1}{2}
\langle C_\mu C^\mu\rangle ) + \frac {W_1'}{W_1} (D_\mu C^\mu) \nonumber \\
&+& \frac {1}{2} \langle \frac {W_2''}{W_1}M + i \frac {W_3''}{W_1} N
\rangle -
\frac {W_2'}{W_1} N - i \frac{W_3'}{W_1} M +
\frac {W_1''}{W_1} C^{\mu} (D_\mu \hat{\theta}) -
\frac {1}{2}\frac {W_5'}{W_1} D_\mu D^\mu \hat{\theta}
\nonumber \\ &+&\frac {1}{2} \left( \frac {W_6''}{W_1} -
\frac {W_5''}{W_1} \right)(D_\mu \hat{\theta})^2,
\nonumber \\
R &=& \frac {W_2}{W_1} M+i \frac {W_3}{W_1} N.
\label{3.16}
\end{eqnarray}
The one-loop effective action is obtained by including the quadratic
fluctuations
about the configuration $[\bar{U}]$ that, consistently, minimizes the
effective action itself. One needs to evaluate the integral
of a Gaussian functional, with the known formal result
$$\int [d \varphi] e^{- i \frac{f^2}{2} \int d^4x \; \varphi^\alpha
( d_\mu d^\mu + \sigma)^{\alpha \beta} \varphi^\beta }
\sim \frac {1}{\sqrt{\det \left( d_\mu d^\mu+ \sigma \right)}},$$
which, written as the exponential of a trace-log, contributes to the
effective action as
\begin{equation}
\Gamma^{One-loop}_{eff}[\bar{U}]=\int d^4x \; {\cal L}_{(0+2)}(\bar{U})
+\frac{i}{2} Tr \; \log \left(d_\mu d^\mu + \sigma \right)+
\int d^4x \; {\cal L}_{(4)}(\bar{U}).
\label{3.17}
\end{equation}
A word is needed on the proper definition of the previous expressions.
The heat-kernel technique has been used to define the determinant
(see Appendix A), and the divergences have been handled with dimensional
regularization.
In order to absorb the infinities that result from the functional determinant,
counter-terms of $O(p^2)$ and $O(p^0)$, as well as new terms of $O(p^4)$,
need to be included.
The determinant arising from the change of functional integration variables in
(\ref{3.11}) gives a contribution proportional to the singularity
$\delta^{(4)}(0)$, which dimensional regularization sets equal to zero.
(A similar remark was in order when a change of field variables
allowed us to cross out $W_4(X)$ from the lagrangian in (\ref{3.7}).)
We find,
\begin{eqnarray}
& &\frac{1}{2}\sigma^{\alpha \beta} \sigma^{\beta \alpha} =
\left( \frac {1}{8} + \frac {n_l^2}{32} \left(
\frac {W_1'}{W_1} \right)^4 \right)
\langle C_\mu C_\nu \rangle \langle C^\mu C^\nu \rangle +
\frac {1}{16}\langle C_\mu C^\mu \rangle \langle C_\nu C^\nu \rangle
-\frac {1}{4} \langle C_\mu C^\mu C_\nu \rangle \langle C^\nu \rangle
\nonumber \\
&+& \frac {n_l}{8} \left( \frac {W_1'}{W_1} \right)^2 \left(
\langle C_\mu C_\nu C^\mu C^\nu \rangle -
\langle C_\mu C^\mu C_\nu C^\nu \rangle \right) +
\frac {1}{16} \langle R \rangle^2 + \frac{n_l}{16} \langle R^2 \rangle
+ \frac {n_l}{2} S^2 +
\frac {1}{4} \langle T \rangle^2 \nonumber \\
&+&\frac {n_l}{4} \langle T^2 \rangle
+ S \langle T \rangle
+\frac {n_l}{8} \left(\left( \frac {W_1'}{W_1}\right)^2 -1 \right)
\langle C_\mu C^\mu R \rangle
- \frac {1}{8} \langle C_\mu C^\mu\rangle \langle R \rangle +
\frac {1}{4} \langle C_\mu \rangle \langle C^\mu R \rangle \nonumber\\
&+& \frac {1}{2} S \langle C_\mu \rangle \langle C^\mu \rangle +
\frac {n_l}{2} \left( \frac {1}{2} \left( \frac {W_1'}{W_1} \right)^2
-1 \right)
S \langle C_\mu C^\mu \rangle +
\frac {n_l}{2} S \langle R \rangle + \frac {1}{2} \langle R T \rangle
+ \frac {n_l}{4} \left( \frac {W_1'}{W_1}\right)^2
\langle C_\mu \rangle \langle C^\mu T \rangle,
\nonumber
\end{eqnarray}
and
\begin{eqnarray}
& &\frac {1}{12} R_{\mu \nu}^{(\alpha \beta)}R^{\mu \nu \ (\beta \alpha)}=
- \frac {n_l}{24} \langle Q_{\mu \nu} Q^{\mu \nu} \rangle
+\frac {1}{24} \langle Q_{\mu \nu} \rangle \langle Q^{\mu \nu} \rangle
+\frac {i n_l}{24} \left( \frac {W_1'}{W_1} \right)^2
\langle Q_{\mu \nu}[C^\mu,C^\nu] \rangle \nonumber \\
&+& \frac{1}{24}\langle H_{\mu \nu} H^{\mu \nu} \rangle
- \frac {n_l}{24} \langle H_{\mu \nu} \rangle \langle H^{\mu \nu} \rangle
+\frac {n_l^2}{96} \left( \frac {W_1'}{W_1} \right)^4
\left(\langle C_\mu C_\nu \rangle
\langle C^\mu C^\nu \rangle - \langle C_\mu C^\mu \rangle
\langle C_\nu C^\nu \rangle \right) \nonumber \\
&+& \frac {n_l}{12} \left( \frac {W_1'}{W_1} \right)^2 \langle C_\mu
H^{\mu \nu} \rangle \langle C_\nu \rangle, \nonumber
\end{eqnarray}
which are the only structures that get divergent contributions at one-loop
(see Appendix A).
Since quantum scalar fields in four dimensions do not generate
any anomaly to the $U_L \otimes U_R$ symmetry, the new
terms needed to renormalize the one-loop result are necessarily in the list
of all possible operators of $O(p^4)$ invariant under $U_L \otimes U_R$.
The list of independent operators is given below.
The criteria used to select this particular set
are the following: terms involving the derivatives
$D_\mu M$, $D_\mu N$, $D_\mu F_L^{\mu \nu}$ and the like,
three derivatives of $\hat{\theta}$ such as
$(D^\mu D_\mu D_\nu \hat{\theta})$ or $(D^\mu D_\nu D_\mu \hat{\theta})$,
as well as $D^\mu D_\mu C_\nu$,
$(D_\mu D_\nu \hat{\theta})$ or $D_\mu C_\nu$, can be rewritten as
combinations of those in (\ref{3.18}) plus terms containing the piece
$D_\mu C^\mu$; the latter, finally,
can be eliminated with the equations of motion (\ref{3.10}).
The two derivatives of $\hat{\theta}$ can always
be chosen to appear in the form $D^\mu D_\mu \hat{\theta}$.
The remaining operators that are not independent
have been removed through integration by parts and with the help of
the identities
$$D^\mu C^\nu - D^\nu C^\mu = -[C^\mu, C^\nu]+ i F_L^{\mu \nu} -
i U^\dagger F_R^{\mu \nu} U , $$
$$[D^{\mu}, D^{\nu}] C^\rho = -i[F_L^{\mu \nu}, C^\rho],$$
$$[D^{\mu}, D^{\nu}] \hat{\theta} = i\langle F_R^{\mu \nu}-
F_L^{\mu \nu}\rangle .$$
All the following terms are real, $O_i^\dagger=O_i$.
The first ones correspond to the twelve $O(p^4)$ operators
of $SU_L \otimes SU_R$
(recall that $C_\mu \equiv U^\dagger D_\mu U$,
$M \equiv U^\dagger \chi + \chi^\dagger U$,
$N \equiv U^\dagger \chi - \chi^\dagger U$ and $\hat{\theta}= i \theta$),
\begin{eqnarray}
O_0 &=& \langle D_\mu U \ D_\nu U^\dagger \ D^\mu U \ D^\nu U^\dagger \rangle
=\langle C^\mu C^\nu C_\mu C_\nu\rangle ,\nonumber \\
O_1 &=& \langle D_\mu U^\dagger \ D^\mu U\rangle ^2 =\langle C^\mu C_\mu\rangle \langle C^\nu C_\nu\rangle
\nonumber, \\
O_2 &=& \langle D_\mu U^\dagger \ D_\nu U \rangle \langle D^\mu U^\dagger \
D^\nu U\rangle
=\langle C^\mu C^\nu\rangle \langle C_\mu C_\nu\rangle ,\nonumber \\
O_3 &=& \langle D_\mu U^\dagger \ D^\mu U \ D_\nu U^\dagger \ D^\nu U \rangle
=\langle C^\mu C_\mu C^\nu C_\nu\rangle ,\nonumber \\
O_4 &=& \langle D_\mu U^\dagger \ D^\mu U \rangle \langle U^\dagger \chi +
\chi^\dagger U \rangle
=- \langle C^\mu C_\mu\rangle \langle M\rangle , \nonumber \\
O_5 &=&\langle D_\mu U^\dagger \ D^\mu U \
( U^\dagger \chi + \chi^\dagger U ) \rangle
= - \langle C^\mu C_\mu M\rangle , \nonumber \\
O_6 &=& \langle U^\dagger \chi + \chi^\dagger U \rangle ^2 = \langle M\rangle ^2, \nonumber \\
O_7 &=& \langle U^\dagger \chi - \chi^\dagger U \rangle ^2 = \langle N\rangle ^2, \nonumber \\
O_8 &=& \langle \chi^\dagger U \chi^\dagger U +
U^\dagger \chi U^\dagger \chi \rangle
= \frac {1}{2} \langle M^2 + N^2\rangle
, \nonumber \\
O_9 &=& -i \langle F^{\mu \nu}_R \ D_\mu U \ D_\nu U^\dagger +
F^{\mu \nu}_L \ D_\mu U^\dagger \ D_\nu U \rangle = i \langle C_\mu C_\nu
\left( F_L^{\mu \nu} + U^\dagger F_R^{\mu \nu} U \right)\rangle ,
\nonumber \\
O_{10} &=& \langle U^\dagger F_R^{\mu \nu} U F_{L\; \mu \nu}\rangle , \nonumber \\
O_{11} &=& \langle F_{R \; \mu \nu} F_R^{\mu \nu} + F_{L \; \mu \nu} F_L^{\mu \nu}\rangle ,
\nonumber \\
O_{12} &=& \langle \chi^\dagger \chi \rangle =\frac {1}{4}
\langle M^2 - N^2 \rangle , \nonumber
\end{eqnarray}
The following eight operators are obtained from the previous twelve by
splitting up single traces into products of traces: this is possible
because $\langle C_\mu\rangle$ does not vanish,
$\langle C_\mu\rangle \neq 0$, for $U_L \otimes U_R$.
They read,
\begin{eqnarray}
O_{13} &=& \langle U^\dagger \ D_\mu U\rangle \langle U^\dagger \ D^\mu U \
D_\nu U^\dagger
\ D^\nu U\rangle =-\langle C^\mu\rangle \langle C_\mu C^\nu C_\nu\rangle ,
\nonumber \\
O_{14} &=& \langle U^\dagger \ D_\mu U\rangle \langle U^\dagger \ D^\mu U\rangle
\langle D^\nu U^\dagger \ D_\nu U\rangle =-\langle C^\mu\rangle \langle C_\mu\rangle \langle C^\nu C_\nu\rangle , \nonumber \\
O_{15} &=& \langle U^\dagger \ D_\mu U\rangle \langle U^\dagger \ D_\nu U\rangle
\langle D^\mu U^\dagger \ D^\nu U\rangle =-\langle C^\mu\rangle \langle C^\nu\rangle \langle C_\mu C_\nu\rangle ,\nonumber \\
O_{16} &=& \langle U^\dagger \ D^\mu U\rangle \langle U^\dagger \ D_\mu U\rangle
\langle U^\dagger \ D^\nu U\rangle \langle U^\dagger \ D_\nu U\rangle =
\langle C^\mu\rangle \langle C_\mu\rangle \langle C^\nu\rangle \langle C_\nu\rangle ,\nonumber\\
O_{17} &=& \langle U^\dagger \ D^\mu U\rangle \langle U^\dagger \ D_\mu U\rangle
\langle U^\dagger \chi + \chi^\dagger U \rangle =\langle C^\mu\rangle \langle C_\mu\rangle \langle M\rangle , \nonumber \\
O_{18} &=& \langle U^\dagger \ D_\mu U\rangle \langle D^\mu U^\dagger\ \chi
-D^\mu U\
\chi^\dagger \rangle =- \langle C^\mu\rangle \langle C_\mu M\rangle , \nonumber \\
O_{19} &=& \langle F_{R \; \mu \nu}\rangle \langle F_R^{\mu \nu}
\rangle +
\langle F_{L \; \mu \nu}\rangle \langle F_L^{\mu \nu}\rangle ,
\nonumber \\
O_{20} &=& \langle F_{R \; \mu \nu}\rangle \langle F_L^{\mu \nu}\rangle
.\nonumber
\end{eqnarray}
So far, all are {\it parity even} operators.
The next seven are similar but have {\it odd parity},
\begin{eqnarray}
O_{21} &=& -i\langle D_\mu U^\dagger \ D^\mu U \; ( U^\dagger \chi -
\chi^\dagger U) \rangle = i \langle N C^\mu C_\mu\rangle ,\nonumber\\
O_{22} &=& -i \langle D_\mu U^\dagger \ D^\mu U \rangle \langle U^\dagger \chi -
\chi^\dagger U \rangle = i \langle N \rangle \langle C^\mu C_\mu\rangle ,\nonumber\\
O_{23} &=& i\langle U^\dagger \ D_\mu U\rangle \langle D^\mu U^\dagger\
\chi -D^\mu U\
\chi^\dagger \rangle = i \langle N C^\mu\rangle \langle C_\mu\rangle ,\nonumber\\
O_{24} &=& i \langle U^\dagger \ D^\mu U\rangle \langle U^\dagger \ D_\mu U\rangle
\langle U^\dagger \chi - \chi^\dagger U \rangle = i \langle N \rangle \langle C^\mu\rangle \langle C_\mu\rangle ,\nonumber\\
O_{25} &=&
i \langle U^\dagger \chi U^\dagger \chi - \chi^\dagger U \chi^\dagger U \rangle =
i \langle NM\rangle ,\nonumber\\
O_{26} &=& i \left( \langle U^\dagger \chi\rangle ^2 - \langle \chi^\dagger U\rangle ^2 \right) =
i \langle N\rangle \langle M\rangle ,\nonumber\\
O_{27} &=& \langle U^\dagger \ D_\mu U\rangle
\langle F_L^{\mu \nu} U^\dagger \ D_\nu U-F_R^{\mu \nu}
\ D_\nu U\ U^\dagger\rangle =
\langle C_\mu\rangle \langle C_\nu \left( F_L^{\mu \nu} -
U^\dagger F_R^{\mu \nu}U\right)\rangle ,\nonumber\\
\nonumber
\end{eqnarray}
Three operators involve the $\epsilon_{\mu \nu \rho \sigma }$ tensor.
The first two of them are {\it odd} under parity and $O_{30}$ is {\it even}.
\begin{eqnarray}
O_{28} &=& \epsilon_{\mu \nu \rho \sigma}
\langle F_L^{\mu \nu} U^\dagger F_R^{\rho \sigma} U \rangle ,\nonumber \\
O_{29} &=& i \epsilon_{\mu \nu \rho \sigma}
\langle \left( F_L^{\mu \nu} +
U^\dagger F_R^{\mu \nu} U \right) C^\rho C^\sigma\rangle ,\nonumber \\
O_{30} &=& \epsilon_{\mu \nu \rho \sigma} \langle \left( F_L^{\mu \nu} -
U^\dagger F_R^{\mu \nu} U \right) C^\rho\rangle \langle C^\sigma\rangle
,\nonumber \\
\label{3.18}
\end{eqnarray}
The operators $O_{31}$ to $O_{57}$ involve derivatives
of the source $\hat{\theta}$ and are given in Appendix C.
The operators that appear at $O(p^2)$ in (\ref{3.7}) are
\begin{equation}
\begin{array}{ll}
E_1 = -\langle C^\mu C_\mu\rangle &\hskip 2cm
E_4 = \langle C^\mu\rangle \langle C_\mu\rangle \cr
E_2 = \langle M\rangle &\hskip 2cm
E_5 = \langle C^\mu\rangle D_\mu \hat{\theta}\cr
E_3 = i\langle N\rangle &\hskip 2cm
E_6 = D^\mu\hat{\theta} D_\mu \hat{\theta}
\end{array}
\label{3.22p}
\end{equation}
The effective lagrangian including up to one-loop corrections is, thus,
\begin{equation}
{\cal L}^{One-loop}=-W_0^r(X,\mu)+ \sum_{i=1}^{6} W_i^r(X,\mu) E_i +
\sum_{i=0}^{57} \beta_i^r(X,\mu) O_i + \; finite \;\; non-local .
\label{3.22}
\end{equation}
The following structure of counter-terms
renders it finite (see Appendix A):
\begin{eqnarray}
\delta W_i (X) &=& \hbar W_i^{(1)} (X,\mu) + \hbar
\Omega_i(X,\mu) \lambda_\infty + O(\hbar^2), \;\; i=0,...,6 \;, \nonumber \\
\beta_i (X) &=&\beta_i^r (X,\mu) + \hbar B_i(X,\mu) \lambda_\infty
+ O(\hbar^2), \;\; i=0,...,57.
\label{3.19}
\end{eqnarray}
with
\begin{equation}
\lambda_\infty=\frac {\mu^{D-4}}{(4 \pi)^2} \left(\frac {1}{D-4} -
\frac{1}{2} \left( \log 4\pi - \gamma +1 \right) \right),
\label{3.20}
\end{equation}
so that
\begin{equation}
W_i^r (X,\mu)= W_i (X) + \hbar W_i^{(1)} (X,\mu), \;\; i=0,...,6 \;.
\label{3.21}
\end{equation}
At this point we have reinserted the powers of $\hbar$ to help the counting
of loops. Recall that the one-loop effective action carries one
power of $\hbar$.
The roster of functions $\Omega_i$'s and $B_i$'s is given in Appendix D;
they are the main result of our paper.
The counter-terms in (\ref{3.19}) are written in terms of the functions
$\Omega_i$'s and $B_i$'s for the sake of concision. This notation, however,
may seem a bit contrived. It should be read in the usual way of perturbation
theory, namely with the functions $\Omega_i$'s and $B_i$'s
understood as power series in $X$. The
renormalization of the functions then means the renormalization of the coupling
constants, which are the coefficients of these expansions.
Parity, charge conjugation and time reversal ought to be conserved.
Only the operators that are invariant under charge conjugation themselves
can appear in the lagrangian. This is because $X$ is invariant under $C$
as well (see Appendix E). The list of $C$-violating operators is given
in Appendix C.
The result is valid for any value of $n_l$.
However, depending on the specific $n_l$ considered,
there are $n_l$-dependent factorization relations among the
traces of products of $n_l \times n_l$ matrices that are of relevance to
us, since some operators in the list become redundant, i.e., they can be
written in terms of a smaller subset.
These relations follow from the Cayley-Hamilton
theorem and have been extensively used in \cite{gl}. For $n_l=3$
they boil down to \cite{fearingscherer}
$$ A^3 - \langle A\rangle A^2 + \frac {1}{2} \left(
\langle A\rangle ^2 -\langle A^2\rangle \right) A - \det (A)\, {\bf 1}=0,$$
for any $3 \times 3$ matrix $A$, be it hermitian or not, and to
$$\sum_{6 \; perm} \langle A_1 A_2 A_3 A_4\rangle - \sum_{8 \; perm}
\langle A_1 A_2 A_3\rangle \langle A_4\rangle
-\sum_{3 \; perm} \langle A_1 A_2\rangle \langle A_3 A_4\rangle $$
\begin{equation}
+\sum_{6 \; perm} \langle A_1 A_2\rangle \langle A_3\rangle \langle A_4\rangle
-\langle A_1\rangle \langle A_2\rangle \langle A_3\rangle
\langle A_4\rangle =0.
\label{3.23}
\end{equation}
The first relation ensures that the determinant of a matrix can always
be expressed in terms of traces and justifies why the determinants of products
of $U$ matrices and their derivatives need not be considered independently.
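As a quick consistency check of the four-matrix identity, one may set all the $A_i$ equal; the permutation sums then collapse into their multiplicities and the identity reduces to the well-known trace relation for $3 \times 3$ matrices (a sketch, using the Cayley-Hamilton relation quoted above):

```latex
% Set A_1 = A_2 = A_3 = A_4 = A; the sums collapse into multiplicities:
$$6 \langle A^4\rangle - 8 \langle A^3\rangle \langle A\rangle
 - 3 \langle A^2\rangle^2
 + 6 \langle A^2\rangle \langle A\rangle^2 - \langle A\rangle^4 = 0 ,$$
% i.e.
$$\langle A^4\rangle = \frac{4}{3}\, \langle A\rangle \langle A^3\rangle
 + \frac{1}{2}\, \langle A^2\rangle^2
 - \langle A\rangle^2 \langle A^2\rangle
 + \frac{1}{6}\, \langle A\rangle^4 ,$$
% which is reproduced by multiplying the Cayley-Hamilton relation by A,
% taking the trace, and using
% det(A) = (1/6) ( <A>^3 - 3 <A><A^2> + 2 <A^3> ).
```

The same expression for $\langle A^4 \rangle$ follows from either route, as it should.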
From the second relation, with
\begin{eqnarray}
A_1=A_2=C_\mu, &\;\;\;\;\;\;\;\;\;\;&
A_3=A_4=C_\nu,
\label{3.24}
\end{eqnarray}
and summing over the indices $\mu$ and $\nu$, one gets
\begin{equation}
2 O_0 - O_1 -2 O_2 + 4 O_3 + 8 O_{13} - 2 O_{14} - 4 O_{15} - O_{16}=0.
\label{3.25}
\end{equation}
Thus, for $n_l=3$ we can dispense with $O_0$ and
the $O(p^4)$ lagrangian will be written as
\begin{equation}
{\cal L}_4=\sum_{i=1}^{57} L_i (X) O_i,
\label{3.26}
\end{equation}
where
\begin{equation}
\begin{array}{ll}
L_1 = \beta_1 + \frac {1}{2} \beta_0 &\hskip 1cm
L_{13} = \beta_{13} - 4 \beta_0\cr
L_2 = \beta_2 + \beta_0 &\hskip 1cm
L_{14} = \beta_{14} + \beta_0\cr
L_3 = \beta_3 - 2 \beta_0 &\hskip 1cm
L_{15} = \beta_{15} + 2 \beta_0\cr
&\hskip 1cm L_{16} = \beta_{16} + \frac {1}{2} \beta_0
\end{array}
\label{3.27}
\end{equation}
and $L_i = \beta_i$ for the rest.
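The relations (\ref{3.27}) can be checked in one line. Solving (\ref{3.25}) for $O_0$,

```latex
$$O_0 = \frac{1}{2}\, O_1 + O_2 - 2\, O_3 - 4\, O_{13} + O_{14}
      + 2\, O_{15} + \frac{1}{2}\, O_{16} ,$$
```

and substituting into $\beta_0 O_0 + \sum_{i \geq 1} \beta_i O_i$ shifts precisely the seven coefficients listed in (\ref{3.27}), leaving $L_i = \beta_i$ for the rest.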
The operators $O_1$ to $O_9$ keep the same notation as in
$SU_L \otimes SU_R$ \cite{gl}, as do the coefficient
functions.
The constants $H_1$, $H_2$ in \cite{gl} have turned into the functions
$L_{11}(X)$, $L_{12}(X)$. The new operators that
appear in this $U_L \otimes U_R$ lagrangian are labeled
from $O_{13}$ onwards and the coefficient functions $L_i(X)$'s follow suit.
\vspace*{0.5cm}
If one disregards all the coefficients associated to $U_A(1)$ one finds
for the $B_i$'s in (\ref{3.19})
\begin{equation}
\begin{array}{lllllll}
B_0=\frac {n_l}{48} & B_1= \frac {1}{16} & B_2 = \frac {1}{8} &
B_3=\frac {n_l}{24} & B_4= \frac {1}{8} & B_5 = \frac {n_l}{8} &
B_6= \frac {1}{16} \\
B_7 = 0 & B_8= \frac {n_l}{16} & B_9=\frac {n_l}{12} & B_{10}=-\frac {n_l}{12}&
B_{11}=-\frac {n_l}{24} & B_{12}= \frac {n_l}{8} & B_{13}= \frac {1}{4} \\
B_{14}=0 & B_{15}=0 & B_{16} = 0 & B_{17}=0 & B_{18}= - \frac {1}{4} &
B_{19}= \frac {1}{24} & B_{20}= \frac {1}{12} \\
\end{array}
\label{llista}
\end{equation}
$O_{30}$ involves an
$\epsilon_{\mu \nu \rho \sigma}$ and does not need to be renormalized.
In the case of $n_l=3$, taking into account that the same relations
from (\ref{3.27}) should hold, and renaming the constants as
$\Gamma_i$'s, we obtain
\begin{equation}
\begin{array}{lllllll}
\Gamma_1 = \frac {3 } {32 } &
\Gamma_2 = \frac {3 } {16 } &
\Gamma_3 = 0 &
\Gamma_4 = \frac {1 } {8 } &
\Gamma_5 = \frac {3 } {8 } &
\Gamma_6 = \frac {1 } {16 } &
\Gamma_7 = 0 \\
\Gamma_8 = \frac {3 } {16 } &
\Gamma_9 = \frac {1 } {4 } &
\Gamma_{10} = - \frac {1 } {4 } &
\Gamma_{11} = - \frac {1 } {8 } &
\Gamma_{12} = \frac {3 } {8 } &
\Gamma_{13} = 0 &
\Gamma_{14} = \frac {1 } {16 } \\
\Gamma_{15} = \frac {1 } {8 } &
\Gamma_{16} = \frac {1 } {32 } &
\Gamma_{17} = 0 &
\Gamma_{18} = - \frac {1 } {4 } &
\Gamma_{19} = \frac {1 } {24 } &
\Gamma_{20} = \frac {1 } {12 }. &
\end{array}
\nonumber
\end{equation}
They coincide with those of $SU_L(3) \otimes SU_R(3)$
\cite{gl} except for the terms that involve
$\langle M^2 \rangle$, $\langle M \rangle^2$,
$\langle N^2 \rangle$, $\langle N \rangle^2$: $\Gamma_6$, $\Gamma_8$,
$\Gamma_{12}$. The reason is that among the building blocks that have been
used to write the chiral lagrangian, $C_\mu$ and $F_{L, R}^{\mu \nu}$
have vanishing traces in the case of
$SU_L(3) \otimes SU_R(3)$, whereas neither $M$ nor $N$ does.
Although it is less immediate
to retrieve the $SU_L(3) \otimes SU_R(3)$ coefficients
\begin{equation}
\begin{array}{lll}
\Gamma_6^{[SU]}= \frac {11}{144} & \Gamma_8^{[SU]}= \frac {5}{48} &
\Gamma_{12}^{[SU]}= \frac {5}{24},
\end{array}
\nonumber
\end{equation}
from our result, there are simple relations that have
to be verified. For instance, if one adds all the divergent pieces
that go with the operators $O_6$, $O_8$, $O_{12}$,
sets $\chi=m^2 I$ for simplicity,
and expands the operators, it is easy to check that the
$SU_L(3) \otimes SU_R(3)$ and the $U_L(3) \otimes U_R(3)$ coefficients of
$\langle \chi^\dagger \chi \rangle$
and $\vec{\pi}^2$ verify, respectively,
$$\frac {9}{8} \left(12 \Gamma_6^{[SU]} + 2 \Gamma_8^{[SU]}
+ \Gamma_{12}^{[SU]} \right)=
\left( 12 \Gamma_6 + 2 \Gamma_8 + \Gamma_{12} \right),$$
and similarly
$$\frac {9}{8} \left( 3 \Gamma_6^{[SU]} + \Gamma_8^{[SU]} \right)=
\left( 3 \Gamma_6 + \Gamma_8 \right).$$
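With the values quoted above ($\Gamma_6^{[SU]}=\frac{11}{144}$, $\Gamma_8^{[SU]}=\frac{5}{48}$, $\Gamma_{12}^{[SU]}=\frac{5}{24}$ and $\Gamma_6=\frac{1}{16}$, $\Gamma_8=\frac{3}{16}$, $\Gamma_{12}=\frac{3}{8}$), both relations check out numerically:

```latex
$$\frac{9}{8}\left(12\cdot\frac{11}{144} + 2\cdot\frac{5}{48}
 + \frac{5}{24}\right) = \frac{9}{8}\cdot\frac{4}{3} = \frac{3}{2}
 = 12\cdot\frac{1}{16} + 2\cdot\frac{3}{16} + \frac{3}{8}\,,$$
$$\frac{9}{8}\left(3\cdot\frac{11}{144} + \frac{5}{48}\right)
 = \frac{9}{8}\cdot\frac{1}{3} = \frac{3}{8}
 = 3\cdot\frac{1}{16} + \frac{3}{16}\,.$$
```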
One recognizes the ratio $\frac {9}{8}$ as the ratio of the numbers
of degrees of freedom in the two theories (nine mesons in $U(3)$,
eight in $SU(3)$), for these coefficients
multiply one-loop divergent pieces which in the two cases stem from
tadpole diagrams, which give a constant divergent contribution
for all the virtual mesons that travel around the loop.
Therefore, the total result is
proportional to the number of degrees of freedom that in each case
can circulate.
(Of course the same argument goes through for any number of flavours
$n_l$ and a similar result holds in general.
The $SU_L(n_l) \otimes SU_R(n_l)$ coefficients are \cite{donoghue}
$\Gamma_6^{[SU]} = \frac {2 + n_l^2}{16 n_l^2}$,
$\Gamma_8^{[SU]} = \frac {n_l^2-4}{16 n_l}$. The second relation holds in the
form $\frac {n_l \Gamma_6 + \Gamma_8}{n_l^2}=
\frac {n_l \Gamma_6^{[SU]} + \Gamma_8^{[SU]}}{n_l^2-1}$. The first
relation, that
now involves the combination $4n_l \Gamma_6 + 2 \Gamma_8 + \Gamma_{12}$,
reduces to the previous one if one realizes that $\Gamma_{12}=2 \Gamma_8$
in either case.)
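For generic $n_l$ no Cayley-Hamilton reduction applies, so the $U_L(n_l) \otimes U_R(n_l)$ coefficients can be read directly from (\ref{llista}), $\Gamma_6 = B_6 = \frac{1}{16}$ and $\Gamma_8 = B_8 = \frac{n_l}{16}$ (which reproduce the $n_l=3$ values above), and the relation just quoted follows in two lines:

```latex
$$\frac{n_l \Gamma_6 + \Gamma_8}{n_l^2}
 = \frac{1}{n_l^2}\left(\frac{n_l}{16}+\frac{n_l}{16}\right)
 = \frac{1}{8\, n_l}\,,$$
$$\frac{n_l \Gamma_6^{[SU]} + \Gamma_8^{[SU]}}{n_l^2-1}
 = \frac{1}{n_l^2-1}\;\frac{(2+n_l^2) + (n_l^2-4)}{16\, n_l}
 = \frac{1}{8\, n_l}\,.$$
```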
\vspace*{0.5cm}
One important difference between the $SU_L \otimes SU_R$ case and
ours is that in the former theory the meson masses do not get any
infinite contribution, whereas in ours they do.
This statement needs some qualification, for the language it uses is
that customary in renormalizable field theories, where
the divergences that are generated require a fixed number of counter-terms,
of the same type as the terms already in the lagrangian.
The chiral expansion in an increasing number
of derivatives is not a theory of this kind: it is non-renormalizable,
because at each higher loop order new terms are required to absorb the new infinities.
In $SU_L \otimes SU_R$ one only needs
counter-terms of a chiral order higher
than the terms involved in the loops. In $U_L \otimes U_R$
we find a combination of both previous cases.
It is non-renormalizable and there is an $O(p^0)$ term (\ref{3.7}),
included to reproduce the $U_A(1)$ anomaly,
which at one-loop induces
a mixture of chiral orders in the divergent parts, as can be seen
from the heat-kernel expressions: the
divergences are proportional to $\sigma^2$ (\ref{a.4}) and
$\sigma= \sigma_{(0)} + \sigma_{(2)}$ decomposes in (\ref{3.15}) in two
pieces, $O(p^0)$ and $O(p^2)$, respectively. (There are divergences
proportional to $R^2$ too in (\ref{a.4}), but
the curvature $R$ associated
to the connexion (\ref{3.14p}) does not get any $O(p^0)$ contribution).
There is a lot of freedom in choosing the prescriptions for which
counter-terms remove which divergences, all of them equally acceptable from the
point of view of rendering the final result finite.
They are not completely arbitrary, though, since the nesting of divergences
when higher loops are considered imposes some constraints among the results
at different orders, of the Gell-Mann and Low type in QED \cite{weinberg}.
Furthermore, some of them
appear more natural than others.
Let us analyse the question of the renormalization of
the pseudo-goldstone boson
masses, induced by a quark-mass term. Let us first
disregard the $U_A(1)$ anomaly
by freezing the functions of $X$ to their constant values at $X=0$
and, for simplicity,
consider the symmetric case
where the three quark species are degenerate in mass,
and switch the external sources off: $\chi$ is a constant
that multiplies the unit matrix; at tree-level $\chi$
gives the nonet (mass)$^2$. Now, let us write all the terms
quadratic in the fields with at most two derivatives,
having added the one-loop
divergences to the tree-level result (prior to renormalization). There are
contributions from the operators $E_1$, $E_2$ in (\ref{3.22p})
for the tree-level parts, whereas the
divergent parts come with the operators $O_4$, $O_5$, $O_6$,
$O_8$, $O_{17}$ and $O_{18}$, and can be read off from (\ref{3.19})
and (\ref{llista}).
It yields,
\begin{equation}
\left(1 - 2 n_l \frac {\chi}{f^2} \hbar \lambda_\infty \right)
\left( \frac {1}{2} \partial_\mu \vec{\pi} \cdot \partial^\mu \vec{\pi}
- \frac {1}{2} \chi \vec{\pi} \cdot \vec{\pi} \right) +
\frac {1}{2} \partial_\mu \eta^0 \partial^\mu \eta^0
- \frac {1}{2} \left(1 - 2 n_l \frac {\chi}{f^2} \hbar \lambda_\infty \right)
\chi (\eta^0)^2,
\label{ren}
\end{equation}
where $\lambda_\infty$ is the ultraviolet divergent quantity,
which in dimensional
regularization is essentially $\frac {1}{D-4}$ (see (\ref{3.20})).
The difference between the singlet and the octet is apparent. The
piece $\partial_\mu \eta^0 \partial^\mu \eta^0$
does not get any divergent contribution while the octet
counterpart $\partial_\mu \vec{\pi} \cdot \partial^\mu \vec{\pi}$
does, and by exactly the same amount as the mass term
$\vec{\pi} \cdot \vec{\pi}$. In the octet sector,
one can pull out the common factor from the entire kinetic term and
render a finite result by a field redefinition
$\pi \to (1-\hbar n_l \frac {\chi}{f^2} \lambda_{\infty} ) \pi$.
There is no infinity left over that could require mass renormalization.
In chiral perturbation theory it is often simpler
to speak of the renormalization of an $O(p^4)$ operator
rather than of a wave-function renormalization,
but in our case the latter is what it amounts to,
and this precision is needed to settle the mass renormalization issue.
The same result also holds in $SU_L \otimes SU_R$.
For the $\eta^0$, though, the divergent contributions to $\partial_\mu
\eta^0 \partial^\mu \eta^0$ (there are none) and to $(\eta^0)^2$
differ, and therefore
the mass necessarily gets renormalized by an infinite amount.
This is a remarkable difference between
$SU_L \otimes SU_R$ and $U_L \otimes U_R$.
In both cases, of course,
the divergences disappear if the quark-mass is turned off
$\chi \to 0$, which reflects the fact that the spontaneously
broken symmetry is built-in, loop by loop, in the quantum theory
and prevents the goldstone particles
from acquiring a mass. When the quark-mass is turned on,
it is not true that the symmetry structure prevents
the $\eta^0$ mass from being infinitely renormalized, as happens
with the octet mass.
The difference can be traced to the terms in the
lagrangian that are responsible for the wave-function renormalization.
To one-loop, this only includes the
terms that are quartic in the fields with two derivatives, which are obtained
by expanding the operator $\langle C^\mu C_\mu \rangle$; by contracting
the two fields that carry no derivatives a tadpole diagram is generated
and its divergence multiplies
$\partial_\mu \vec{\pi} \cdot \partial^\mu \vec{\pi}$.
The interesting point is that no such term involving
the $\eta^0$ field, like
$(\eta^0)^2 \partial_\mu \eta^0 \partial^\mu \eta^0$
or $\vec{\pi} \cdot \vec{\pi}\, \partial_\mu \eta^0 \partial^\mu \eta^0$,
exists at all in the chiral lagrangian, as can be
immediately seen by making the invariant decomposition of
the matrix $U$ in $U=e^{i \sqrt{\frac {2}{n_l}} \frac {\eta^0}{f}} U_s$,
where $U_s$ contains only the octet fields and has $\det U_s=1$. $C_\mu$
then reads
$$C_\mu \equiv U^\dagger \partial_\mu U = i \sqrt{ \frac {2}{n_l}}
\frac {1}{f} \partial_\mu \eta^0 + U_s^\dagger \partial_\mu U_s,$$
and $\langle C^\mu C_\mu \rangle$
\begin{equation}
\langle C^\mu C_\mu \rangle = -\frac{2}{f^2} \partial^\mu \eta^0 \partial_\mu \eta^0
+\langle U_s^\dagger \partial_\mu U_s \ U_s^\dagger \partial^\mu U_s \rangle,
\end{equation}
and no crossed singlet-octet term survives, since
$\langle U_s^\dagger \partial_\mu U_s \rangle
= \partial_\mu \log \det U_s = 0$ for $\det U_s =1$.
$\langle C^\mu C_\mu \rangle$ provides the
$\eta^0$ with a kinetic term and nothing else, and no term can generate
a one-loop wave-function divergence for it.
Whereas for the octet, one learns from
$$U_s^\dagger \partial_\mu U_s= i \ \frac {1}{f} \partial_\mu \pi
+ \frac {1}{2f^2} [\pi,\partial_\mu \pi] - i \ \frac {1}{6} \frac {1}{f^3}
\left[ \pi, [\pi, \partial_\mu \pi] \right] + \; ...$$
that
\begin{equation}
- \frac {f^2}{4} \langle U_s^\dagger
\partial^\mu U_s \ U_s^\dagger \partial_\mu U_s \rangle
= \frac {1}{2} \partial_\mu \vec{\pi} \cdot \partial^\mu \vec{\pi}+
\frac {1}{48f^2} \langle[\pi,\partial_\mu \pi]
[\pi,\partial^\mu \pi] \rangle + \; ... \ .
\label{qtermes}
\end{equation}
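The coefficient $\frac{1}{48 f^2}$ in (\ref{qtermes}) can be traced explicitly (a sketch). Squaring the expansion of $U_s^\dagger \partial_\mu U_s$, two structures contribute at $O(\pi^4)$, and cyclicity of the trace, $\langle A [B,C]\rangle = \langle [A,B] C\rangle$, combines them:

```latex
$$\left.\langle U_s^\dagger \partial^\mu U_s\; U_s^\dagger \partial_\mu U_s
 \rangle\right|_{\pi^4}
= \frac{1}{4 f^4}\,\langle [\pi,\partial_\mu \pi][\pi,\partial^\mu \pi]\rangle
+ \frac{1}{3 f^4}\,\langle [\partial_\mu \pi,\pi][\pi,\partial^\mu \pi]\rangle
= -\frac{1}{12 f^4}\,
  \langle [\pi,\partial_\mu \pi][\pi,\partial^\mu \pi]\rangle\,;$$
```

multiplying by $-\frac{f^2}{4}$ reproduces the second term of (\ref{qtermes}).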
It is the second term in (\ref{qtermes})
which is responsible for the one-loop wave-function renormalization.
The structure of commutators makes one realize again that
such terms vanish for the singlet.
This same piece also generates a divergence in the mass term, which gets
additional $O(p^2)$ contributions from
the operator $E_2$ in (\ref{3.22p}); these too come from tadpole diagrams,
arising from four-meson interaction terms
$\sim \frac {\chi}{f^2} \langle \Phi^4 \rangle$.
It is a well-known result that a lagrangian for scalar fields
with no derivative couplings
other than the kinetic energy with a quartic interaction
has an effective action that needs a wave-function renormalization
that starts at two loops \cite{iim}. At one-loop it requires
mass renormalization and this is what we find for the $\eta^0$ field.
(Vertices with more
than four fields from $E_2$ at one-loop do not participate in
the renormalization of the kinetic terms).
We see that the singlet vs. octet difference in mass renormalization
is dictated by the structure of the symmetry group.
If one includes the $U_A(1)$-anomaly effects, both the octet and
the singlet get their masses renormalized by an infinite
amount.
\vspace*{0.5cm}
In this section we have presented the complete one-loop calculation
of the divergent part of the effective action. It is a calculation to all
orders in $1/N_c $, in the sense that the functions of $X$
have been kept generic through the end.
There are many unknown parameters in this approach (potentially,
all the coefficients of the functions of $X$) and without any further
restrictions we would not know how to eliminate any of them.
We invoke the $1/N_c$ expansion of QCD
and the restrictions it imposes on the chiral lagrangian to classify
the coefficients
according to the maximum $1/N_c$ power allowed for each,
so as to estimate their size,
and with this criterion try to select
the fewer terms that allegedly bring the main contributions.
This will be done in the next section.
\section{The $1/N_c$ expansion.}
The systematic expansion of QCD in powers of $1/N_c$ provides
a way of effectively reducing the
number of constants that intervene in a certain process, once
it is decided where to truncate the series in $1/N_c$.
If $N_c$ is large enough, a few terms will suffice to give a
good account of the exact result. How large is
{\it large enough} is a question hard to assess,
for a good reason: despite the
technical simplification the large-$N_c$ limit entails,
it remains too difficult to sum
the subclass of diagrams that survive in the limit,
and matters worsen,
if anything, for the sub-leading contributions.
It is argued that, conceivably,
big numerical factors might be
accompanying the powers of $N_c$ in the denominator; in that
case a few terms would give accurate predictions even for $N_c=3$.
That would explain the remarkable resemblance of many qualitative
features and patterns of the leading terms
to those observed in hadron physics, with $N_c=3$
\cite{Witten}. At any rate, lacking any analytical result,
it is the accuracy of the predictions
in explaining the data that could give
an ultimate justification for the expansion. It is this perspective
that launched
this project: to set out the basis for the systematic
study of the predictions that come out of such a scenario
for soft $\pi$, $K$, $\eta$, $\eta'$,
so as to discern in which processes and to what extent
agreement with experiment holds.
On general grounds, Witten \cite{Witten} showed that in the
$N_c \to \infty$ limit, if QCD confines it has a mesonic spectrum
that consists of an infinite number of noninteracting, stable states,
with masses that have smooth and finite limits. Furthermore,
the strong interaction scattering amplitudes are given, to
lowest order in $1/N_c$, by sums of tree diagrams with mesons
exchanged, which can be derived from an effective lagrangian with
local vertices and local meson fields.
The decay constants $f$'s are of order
$\sqrt{N_c}$ and a coupling constant for a vertex with $k$ mesons
attached to it is of order $ {N_c}^{-\frac {(k-2)}{2}}$, i.e., it decreases
with the multiplicity of mesons in the vertex, each new meson bringing
in one additional power of $1/\sqrt{N_c}$.
The $N_c$ counting rules imply that while large-$N_c$ QCD is a
strongly interacting theory in terms of quarks and gluons, it is equivalent
to a weakly interacting theory of mesons.
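These counting rules can be illustrated directly with the lowest-order chiral lagrangian (a sketch, schematically suppressing the flavour structure): with $U=e^{i \pi / f}$ and $f^2 \sim N_c$, the kinetic operator generates vertices with $k=2n$ mesons whose couplings scale as

```latex
$$-\frac{f^2}{4}\,\langle U^\dagger \partial_\mu U\;
 U^\dagger \partial^\mu U \rangle \;\supset\;
 \frac{f^2}{f^{2n}}\,\left(\partial \pi\right) \pi^{2n-2}
 \left(\partial \pi\right)
 \;\sim\; N_c^{\,1-n} = N_c^{-\frac{k-2}{2}}\,,$$
```

in accordance with the rule above; each extra pair of mesons costs a factor $1/f^2 \sim 1/N_c$.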
The higher order corrections in the $1/N_c$ expansion,
in addition to new couplings in the effective theory, also include
the loop diagrams of mesons, which as in any quantum field theory
reestablish the unitarity constraints
on the amplitudes (cuts, discontinuities, etc.).
Each loop of mesons, in the effective theory, contributes an extra power
of $1/N_c$.
In addition to the mesons there are infinitely many glueball states, which at
$N_c \to \infty$ are stable and noninteracting.
The amplitude for a glueball
to mix with a meson is of order $O(1/\sqrt{N_c})$, and the vertices
in which they are involved are even more suppressed than the meson vertices:
one power of $1/N_c$ for each glueball.
However, only $\pi$, $K$, $\eta$, $\eta'$ remain massless in the
chiral limit and $N_c \to \infty$, their masses being precluded by
Goldstone's theorem. The rest of excited mesons and glueballs
remain massive, with
a typical hadronic mass of about 1 GeV. They decouple from the
soft processes that involve the goldstone bosons by virtue of their masses
and they are integrated out from the effective theory. The baryons
have masses much higher for large $N_c$, for they are known to grow
like $N_c$ \cite{Witten}.
Within the large-$N_c$ expansion, the $U_A(1)$ anomaly effects can be accommodated
in a rather natural way in the framework of the chiral lagrangian.
Actually, the identification of each ingredient that has been taken into
account in writing down the effective theory is clear-cut:
the constraints
imposed on the interactions among goldstone bosons are contained in the
operators that involve the $U$ matrices (but not $\log U$);
the quark masses enter through
the terms in $\chi$; and the $U_A(1)$ enters through the functions of $X$.
One can talk of switching off the $U_A(1)$ anomaly by freezing the functions
to constants, in a similar way as one can take the chiral limit by sending
the quark masses to zero. In this language, one can say that
the $\eta^0$ has two ways of manifesting itself: either as a goldstone boson
or as the argument of the functions of $X$, breaking chiral
symmetry as dictated by the $U_A(1)$ anomaly.
When it manifests itself as a goldstone boson, its couplings
follow the rules of the meson couplings.
This is its mesonic part, associated to its content in quark degrees
of freedom.
Its other presence in the chiral lagrangian, imposed by $U_A(1)$,
involves couplings that are more suppressed,
$1/N_c^{\frac{3}{2}}$ per $\eta^0$ in
a vertex. This is associated to the
special gluonic content of the $\eta^0$, put forward by the anomaly.
Although there would be no such thing as the $\eta^0$ in a world
without quarks (nor chiral symmetry), in the chiral limit
the $\eta^0$ mass is \cite{w1}
$$\left. \left. m_{\eta^0}^2\right|_{U_A(1)}
= \frac {4 n_l}{f^2} \left( \frac {d^2 E}{d \theta^2}
\right) \right|^{no \; quarks}_{\theta=0} + O(\frac {1}{{N_c}^2})$$
where $E$ is the vacuum energy
in a world without quarks and with a coupling
to the topological charge $Tr \ G^{\mu \nu} \tilde{G}_{\mu \nu}$
in the lagrangian, as in (\ref{2.6}).
The fixed proportion of $\eta^0$ and $\hat{\theta}$ that appear in the
combination $X= i \frac {\sqrt{2 n_l}}{f} \eta^0 + \hat{\theta}$ actually
relates glueball and $\eta^0$ anomalous couplings
to operators involving goldstone bosons.
The vertex suppression of
$1/N_c^{\frac{3}{2}}$ per $\eta^0$ is a combination of $1/N_c$-glueball
and $1/\sqrt{N_c}$-meson suppression, the former originated in the anomaly
equation, the second carried by the $f$ factor that divides the $\eta^0$ field.
\vspace*{0.5cm}
In order to obtain the $1/N_c$ power counting of the
sub-leading pieces too, one might proceed
by comparison of Green's functions, as evaluated from the
chiral lagrangian and from the diagrams in QCD.
In the effective theory, given an operator
multiplied by a function $G(X)$,
the $1/N_c$ power counting can be established on the basis of
two distinguishing features:
the number of traces over flavour indices ($\# (tf)$),
and the number of powers of the source $\hat{\theta}(x)$, ($\#(\hat{\theta}))$.
As a rule of thumb, the leading
dependence on $1/N_c$ of the various couplings follows from the
simple prescription \cite{l}:
\begin{equation}
G(X)= N_c^{2-\#(tf)-\#(\hat{\theta})} \ g ( X/N_c ),
\label{4.1}
\end{equation}
where $g$ is a function whose expansion in powers of $X/N_c$ has coefficients
of order 1.
The origin of each factor can be easily traced:
relative to the vacuum energy, which is $O(N_c^2)$,
there is one power of $1/N_c$ suppression for each flavour trace and one for
each source (recall that in QCD the sources of $1/N_c$ suppression are
the loops of quarks and the non-planarity of the
diagrams).
Each trace taken over flavour indices amounts to a sum
over quark flavours, which in turn can arise only in a quark loop
in QCD and adds a factor of $1/N_c$. As for the
source $\hat{\theta}(x)$, it couples to the topological
charge in (\ref{2.6}) with strength $1/N_c$; each derivative
of the generating functional with respect to $\hat{\theta} (x)$ will bring
one power of $1/N_c$ to the Green's function. In the
effective theory this
is achieved by pulling out an explicit power of $1/N_c$ for each
$\hat{\theta}(x)$. Finally, it was already mentioned that
the ubiquitous factors of $f$ count as $\sqrt{N_c}$. In particular,
the $\eta^0$ that appears in $X$ is always suppressed by a factor $1/f$
(\ref{3.5bis}).
There are no additional powers of $1/N_c$ for the leading contributions: once
all the previous factors of $1/N_c$ have been pulled out,
also from $X$ as $X/N_c$, the expansion
of $g$ in powers of $X/N_c$ has coefficients that are order 1.
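As an illustration of (\ref{4.1}), the potential term $W_0(X)$ carries no flavour trace and no explicit $\hat{\theta}$, while $W_1(X)$ multiplies the single-trace operator $\langle C^\mu C_\mu \rangle$; this already fixes the leading sizes of the first expansion coefficients and the $O(1/N_c)$ topological mass of the $\eta^0$:

```latex
$$W_0(X)=N_c^2\; g_0(X/N_c) \;\Longrightarrow\;
 W_{00}=O(N_c^2)\,,\quad W_{02}=O(1)\,,\quad
 v_{02}=\frac{4 W_{02}}{f^2}=O\!\left(\frac{1}{N_c}\right)
 \;\Longrightarrow\;
 \left. m^2_{\eta^0}\right|_{U_A(1)}=-n_l\, v_{02}
 = O\!\left(\frac{1}{N_c}\right),$$
$$W_1(X)=N_c\; g_1(X/N_c) \;\Longrightarrow\;
 W_{10}=\frac{f^2}{4}=O(N_c)\,.$$
```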
Recall at this point that the $1/N_c$ counting should be done in
a chiral lagrangian with a generic number $n_l$ of light flavours.
This is because of the $n_l$-dependent factorization relations already
mentioned in section 3, that give linear relations among the {\it a
priori} independent operators for particular values of $n_l$.
The mismatch of powers of
$1/N_c$ and the departure from the general rule (\ref{4.1}) are avoided
by allowing for a generic $n_l$. The correct counting is thus obtained
for the functions $\beta_i (X)$ in (\ref{3.22}),(\ref{3.19}).
The implications for the $L_i(X)$ can be read off from (\ref{3.27}).
Furthermore, as pointed out in \cite{dRP}, it is
the $U_L \otimes U_R$ lagrangian
that provides the suitable basis to establish the correct
$1/N_c$ power counting of the constants that involve the nonet of mesons.
The $\eta'$ in the large-$N_c$ limit gives a contribution to the
large-distance behaviour of the Green's functions that should not be
overlooked; in the limit its properties are the same as for the rest of
goldstone bosons in the nonet. At next-to-leading order,
a topological mass term appears for the $\eta'$, but it is
$O(1/N_c)$ and is treated perturbatively.
In counting powers of $1/N_c$, the $\eta'$ cannot be integrated out
from the nonet,
for it is precisely when $N_c \to \infty$, where the $1/N_c$ expansion is
most sensible, that the $\eta'$ becomes massless and does not decouple.
Expanding the functions $W_i$ in (\ref{3.7}) in power series in $X$,
\begin{equation}
W_k(X)=W_{k0} + W_{k2} X^2 + W_{k4} X^4 + ...= \frac {f^2}{4}
\left( v_{k0} + v_{k2} X^2 + v_{k4} X^4 + ...\right),
\label{p}
\end{equation}
for $k=0,1,2,4,5,6$, and for $W_3$
\begin{equation}
W_3(X)=W_{31} X + W_{33} X^3 + \; ...= -i \frac {f^2}{4}
\left( v_{31} X + v_{33} X^3 + \; ...\right),
\label{d}
\end{equation}
and following rule (\ref{4.1}) we find
\begin{eqnarray}
W_{00}&=& O(N_c^2) \nonumber\\
W_{10}&=& W_{20}\ =\ \frac {f^2}{4}\ =\ O(N_c) \nonumber\\
W_{31}&=& O(1), \ \ \ \ W_{50}\ =\ O(1), \ \ \ \ W_{60}\ =\ O(1)
. \nonumber\\
\nonumber
\label{t}
\end{eqnarray}
For the coefficients of the $O(p^4)$ operators
that do not involve derivatives of $\hat{\theta}$ \cite{dRP},
we find for $n_l=3$,
\begin{eqnarray}
L_1(0) \; ,L_2(0)\; ,L_3(0)\; ,L_5(0)\; ,L_8(0)\; ,L_9(0)\; ,L_{10}(0)\; ,
L_{11}(0) \; ,L_{12}(0) \; , \nonumber\\
L_{13}(0)\; ,L_{14}(0)\; ,L_{15}(0)\; ,L_{16}(0) &=& O(N_c) \nonumber\\
2L_1(0)-L_2(0)\; , L_{13}(0)+8 L_1(0)\; , L_{14}(0)-2 L_1(0)\; ,
L_4(0)\; , L_6(0)\; , \nonumber\\
L_7(0)\; ,L_{18}(0)\; , L_{19}(0)\; ,
L_{20}(0) &=& O(1) \nonumber\\
L_{15}(0)-2L_{14}(0)\; , 2 L_{16}(0)- L_{14}(0)\; , L_{17}(0) &=& O(1/N_c).
\nonumber\\
\label{c}
\end{eqnarray}
For the parity-odd terms, the first contribution is given by the
linear term of the $X$ expansion, $L_i'(0)$:
\begin{eqnarray}
L'_{21}(0)\; , L'_{25}(0)\; , L'_{28}(0)
&=& O(1) \nonumber\\
L'_{22}(0)\; ,L'_{23}(0)\; ,L'_{26}(0)\; ,L'_{27}(0) &=& O(1/N_c)
\nonumber\\
L'_{24}(0) &=& O(1/N_c^2).
\nonumber\\
\label{cp}
\end{eqnarray}
As mentioned, the $\eta^0$ gets a contribution to its mass
that is $O(1/N_c)$; in the notation of (\ref{p}), $\left. m^2_{\eta^0}
\right|_{U_A(1)}= - {n_l} v_{02}$. Counting $m^2_{\eta^0}$
as two chiral powers $O(p^2)$, in the multiple expansion we shall
count $1/N_c$ also as $O(p^2)$ \cite{l}.
However, to be fully consistent with this counting, if terms $O(p^4) \times
\frac {1}{N_c}$ are kept, then the chiral order $O(p^6)$ should
also be included. This would require a two-loop chiral perturbation theory
calculation which is far beyond the scope of this article.
Following these criteria and using (\ref{p}), (\ref{d}) and (\ref{t}),
we expand the $B_i$ and $\Omega_i$ functions from Appendix D, first
in powers of $X$ and then in
powers of $\frac{1}{N_c}$, keeping corrections up to $\frac{1}{N_c}$ for
the $O(p^4)$ terms, up to $\frac{1}{{N_c}^2}$ for the $O(p^2)$ terms and
up to $\frac{1}{{N_c}^3}$ for the $O(p^0)$ one. Notice that none of the
$\eta^0$ fields that appeared through $X$ survives, which means that
all the terms in the lagrangian starting with more than four fields are
eliminated.
Terms of this kind would only be required for calculating processes that are
very difficult to measure experimentally.
To this order, only two $\Omega_i$'s survive:
\begin{equation}
\begin{array}{ll}
\Omega_0 = \frac {\nz^2}{2} \vzd ^2 + O(\frac{1}{{N_c}^4}) , \;\;\;&
\Omega_2 = - \frac {1}{2}\vzd + \nz\vzd\vtu + O(\frac{1}{{N_c}^3}).\nonumber\\
\nonumber
\end{array}
\nonumber
\end{equation}
The $B_i$'s for $i=0$ to $20$ are the same as in (\ref{llista}) except for
two new contributions:
\begin{equation}
\begin{array}{ll}
B_8 = \frac {\nz}{16}- \frac {1}{2}\vtu + O(\frac{1}{{N_c}^2}), \;\;\;&
B_{12} = \frac {\nz}{8} -\vtu + O(\frac{1}{{N_c}^2}), \nonumber \\
\nonumber
\end{array}
\nonumber
\end{equation}
The rest of the coefficients either vanish exactly or do not contribute
to this order in the expansion. There are also some contributions
proportional to $\hat{\theta}$.
\vspace*{0.5cm}
The use of the equations of motion does not ruin the $1/N_c$ power counting
in (\ref{4.1}), although it introduces additional products
of traces and powers of derivatives of $\hat{\theta}$ in (\ref{3.10}).
The very structure of (\ref{3.10}) complies
with (\ref{4.1}) since the functions of $X$ that multiply the various
operators carry their own
$1/N_c$ suppression required by
(\ref{4.1}): for the identity,
$\frac {W_0'}{W_1}= g(X/N_c)$; $\langle C^\mu C_\mu\rangle$,
$\langle C^\mu \rangle \ C_\mu$ and $D^\mu \hat{\theta} \ C_\mu$ are multiplied
by $\frac {W_1'}{W_1} = \frac {1}{N_c} \ g(X/N_c)$; $N$ and $M$ do
not get any suppression since both $\frac {W_2}{W_1}$ and $\frac {W_3}{W_1}$
are $g(X/N_c)$; however, $\langle M \rangle$ and $\langle N \rangle$
appear multiplied by $\frac {W_2'}{W_1}$ and $\frac {W_3'}{W_1}$ which are
$\frac {1}{N_c} \ g (X/N_c)$; $\frac {W_4}{W_1}=\frac {1}{N_c} \
g (X/N_c)$ multiplies $D^\mu D_\mu \hat{\theta} $ whereas
$\frac {W_5'- W_6'}{W_1}=\frac {1}{{N_c}^2} \ g(X/N_c)$. Here $g$
denotes, in each case, some function that has $N_c$-independent Taylor
expansion coefficients.
Finally, let us comment on the regeneration through quantum corrections
of a term \break
$\langle U^\dagger D_\mu U\rangle
\langle U^\dagger D^\mu U\rangle$, which had been
removed from the tree level effective lagrangian in (\ref{3.7}).
It is a confirmation that the effects of meson loops
bring to the effective action contributions that are $1/N_c$ suppressed
in relation to the leading tree level.
It is only at $N_c =\infty$ that $\pi$, $K$, $\eta$,
$\eta'$ form a nonet. When sub-leading corrections in $1/N_c$ are taken into
account, the enlarged symmetry with respect to $U_L \otimes U_R$ no longer
holds.
The first instance is the $O(1/N_c)$ mass piece
exclusive to the $\eta^0$. The reappearance of that term is another example:
the $\eta^0$ (singlet) and the $\pi$ (octet)
fields are normalized differently by sub-leading contributions.
\section{Conclusion and outlook.}
In this article the effective theory developed in \cite{gl} for the
strong interactions at low energies among the lightest pseudoscalars
is extended
to include the $\eta'$ particle. The approach that has been adopted
exploits the fact that, as the number of colours $N_c$ grows bigger,
the mass difference (topological mass) between the $\eta '$ and the
octet vanishes as $1/N_c$, and the $U_A(1)$ puzzle can be treated as
a series in inverse powers of $N_c$. We exploit the possibility
that a good description could emerge by taking the
nonet of soft pseudoscalars as the goldstone bosons of the spontaneous
breaking of $U_L \otimes U_R \to U_V$,
thus benefitting from the advantages that such theories offer,
in terms of constraints on the form of the couplings and
relations among the coupling constants. The departures from the results
in the real world are dealt with as corrections
in powers of the quark masses and $1/N_c$.
The most general effective action is given that includes
terms up to $O(p^4)$, with quantum corrections included to one-loop.
The $U_A(1)$ anomaly is conveniently incorporated.
This is the first step of a project towards a systematic study to spell out
the predictions that emerge from such a scenario. It will shed light
on which processes can be accommodated within the approach.
\section{Acknowledgments}
We are grateful to D. Espriu,
E. de Rafael and S. Peris for discussions as well as
R. Tarrach for his critical reading of the manuscript.
Financial support from CICYT, contract AEN95-0590,
and from CIRIT, contract GRQ93-1047, is
acknowledged. J.T. acknowledges the Benasque Center for Physics
and the Physics Department at Brookhaven National Laboratory for the
hospitality extended to him during the completion of this
work. P.H.-S. acknowledges a Grant from the {\it Generalitat de Catalunya}.
\newpage
\section{Appendix A: The Heat-Kernel Technique.}
This appendix is a reminder of some results that have
been used in the text, which are obtained with the help
of the heat-kernel technique. It provides a convenient way to evaluate
the one-loop effective action.
Let $\hat{H}$ be an operator acting on a Hilbert space.
The heat-kernel two-point function associated to $\hat{H}$
is defined as a function of the parameter $\tau$,
$$K(x,y;\tau)=\langle x|e^{-i\tau \hat{H} }|y\rangle \theta(\tau).$$
It verifies the equation
$$i\frac {\partial}{\partial \tau} K(x,y;\tau)= \int d^D z \;\langle x|\hat{H}|z\rangle
K(z,y;\tau) +i \delta (\tau) \delta^D (x-y),$$
with the boundary condition,
$$ \lim_{\tau \to 0} K(x,y;\tau) = \delta^{D}(x-y),$$
which follows from its definition. We do not specify the dimensionality
of space-time and allow for a generic $D$.
Often, $\hat{H}$ is a {\it local} operator, i.e.,
$\langle x|\hat{H}|y\rangle =O_x \delta^{D}(x-y)$, where $O_x$ is a
differential
operator with a finite number of terms. We shall limit ourselves to this case.
Then, $K(x,y; \tau)$ is a Green's function that verifies
a partial differential equation of the Schr\"odinger type,
$$i\frac {\partial}{\partial \tau} K(x,y;\tau)= O_x K(x,y;\tau)+i \delta (\tau) \delta^D (x-y).$$
When the operator $\hat{H}$ is {\it elliptic} the definition
of $K$ differs somewhat from the one given above. In that case there
is no need for a factor of $i$ in the exponent, since all the eigenvalues
of $\hat{H}$ are non-negative. There, the equation is of the
heat-transport type, from which the technique takes its
name. In the present case we are dealing with
operators of the sort $\partial_\mu \partial^\mu$, which in Minkowski
space are {\it hyperbolic}.
It is convenient to pick out the mass term from $\hat{H}$, if the theory
is massive, or, else, to introduce an infrared regulator by adding a
constant term $M^2$ to $\hat{H}$, $\hat{H}+ M^2$.
For small values of $\tau$, $K(x,y;\tau)$ admits an asymptotic expansion
of the form,
\begin{equation}
K(x,y;\tau) \sim i \frac {1}{(4 \pi i \tau)^{\frac {D}{2}}} \ \
e^{-i M^2 \tau + \frac {(x-y)^2}{4 i \tau}}\ \sum_{n=0}^{\infty} h_n(x,y)
(i\tau)^n.
\label{SDW}
\end{equation}
The functions $h_n(x,y)$ are known as the {\it Seeley-DeWitt} coefficients
\cite{ball}.
Given that
$$K(x,y;\tau \to 0) \sim i \frac {1}{(4 \pi i \tau)^{\frac {D}{2}}}\
e^{\frac {(x-y)^2}{4 i \tau}}\ \longrightarrow\ \delta^{D}(x-y),$$
the boundary condition translates into
$$h_0(x,x)=1.$$
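As a quick numerical sanity check (ours, not part of the original text), the leading term of the expansion (\ref{SDW}), taken in $D=1$ with $M=0$ and only $h_0=1$ retained, indeed satisfies the Schr\"odinger-type equation for $\tau>0$; the point $(x,y,\tau)$ and the finite-difference step $h$ are arbitrary choices:

```python
import cmath

def K(x, y, tau):
    # Leading term of (SDW): D = 1, M = 0, h_0 = 1. The overall constant
    # i/(4*pi*i)**(1/2) is dropped, since it does not affect the equation.
    return tau**-0.5 * cmath.exp((x - y)**2 / (4j * tau))

x, y, tau, h = 0.7, 0.2, 0.3, 1e-4
lhs = 1j * (K(x, y, tau + h) - K(x, y, tau - h)) / (2 * h)             # i dK/dtau
rhs = (K(x + h, y, tau) - 2 * K(x, y, tau) + K(x - h, y, tau)) / h**2  # d^2K/dx^2
assert abs(lhs - rhs) < 1e-5
```

The agreement confirms that the delta-function source only matters at $\tau=0$.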
The computation of the one-loop effective action using the background field
method entails the evaluation of
$$\log \det (\hat{H})= Tr \; \log \hat{H}.$$
$Tr$ stands for the trace over space-time as well as all the internal
indices. This can be written in terms of the kernel $K(x,y; \tau)$ if one uses
the following integral representation for the logarithm
$$\log \hat{H} -\log \hat{H}_0= - \int_0^{\infty} \frac {d \tau}{\tau}
\left( e^{-i \tau \hat{H} }
-e^{-i \tau \hat{H}_0 } \right).$$
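As an illustration (a sketch of ours), the scalar Euclidean version of this representation, obtained by $\tau \to -i\tau$ with $a$ and $b$ standing for positive eigenvalues of $\hat{H}$ and $\hat{H}_0$, reduces to a Frullani integral that can be checked numerically; the values of $a$, $b$ and the grid parameters are arbitrary choices:

```python
import math

# Scalar Euclidean version of the representation:
#   log(a) - log(b) = -int_0^inf dtau/tau (e^{-a tau} - e^{-b tau})
a, b = 2.0, 5.0
N = 20000
integral = 0.0
for k in range(N):
    t0 = 1e-8 * (1e10)**(k / N)        # log-spaced grid from 1e-8 to 1e2
    t1 = 1e-8 * (1e10)**((k + 1) / N)
    tm = math.sqrt(t0 * t1)            # geometric midpoint of the cell
    integral += (math.exp(-a * tm) - math.exp(-b * tm)) / tm * (t1 - t0)
assert abs(-integral - math.log(a / b)) < 1e-3
```

The subtraction of the $\hat{H}_0$ term is what renders the $\tau \to 0$ region integrable.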
Apart from an unessential $\hat{H}$-independent, divergent constant $C$,
\begin{equation}
Tr\; \log \hat{H}= -\int d^Dx \int_0^{\infty} \frac {d \tau}{\tau}
\ tr \ \langle x|e^{-i \tau \hat{H} }|x\rangle + \ C,
\label{heatk}
\end{equation}
where tr stands for the trace over internal indices only.
This expansion will be used to isolate the divergent short-distance
contributions so as to be able to subtract them away.
Given $\tau$, only those points separated by an interval
$(x-y)^2$ that is less than or of the order of
$\tau$ give non-negligible (non-oscillatory)
values for $K(x,y;\tau)$. The
short distance contribution to the effective action is thus contained
in the integral about $\tau=0$; it will not be
affected by $M$, which only modifies the contribution from large
values of $\tau$ (large wavelength modes).
The particular form of the operator $\hat{H}$ which corresponds to
our computation is
$$\hat{H}= d_\mu d^\mu +\sigma\ ,$$
where $d_\mu= \partial_\mu +\omega_\mu (x)$.
A na\"{\i}ve application of eq. (\ref{heatk}) could lead to infrared
divergences associated with the large $\tau$ integration region.
For that reason we have changed it to $\hat{H}+M^2= d_\mu d^\mu +\sigma+M^2$.
Each term in the Seeley-DeWitt expansion now yields a finite contribution,
and these give
\begin{equation}
i \ tr \langle x|\log \left( \hat{H} + M^2 \right)|x\rangle =
\frac {1}{(4 \pi)^{D/2}} \sum_{n=0}^{\infty}
(M^2)^{\frac {D}{2} -n} \Gamma\left(n - \frac {D}{2} \right)
\langle h_n(x,x) \rangle,
\label{a.1}
\end{equation}
except for an unessential additive constant. The Seeley-DeWitt
coefficients\footnote{In general $\sigma (x)$ and
$\omega (x)$ can be matrices. In that case
the coefficients $h_n$ are matrices as well and a trace over these internal
indices is also understood in (\ref{a.1}).}
are known in this case,
\begin{eqnarray}
h_0(x,x) &=& I, \nonumber \\
h_1(x,x) &=&- \sigma, \nonumber \\
h_2(x,x) &=& \frac {1}{2} \sigma^2 + \frac {1}{12} R_{\mu \nu} R^{\mu \nu}
+\frac {1}{6} d_\mu d^\mu \sigma,
\label{a.3}
\end{eqnarray}
where
$R_{\mu \nu}=\partial_\mu \omega_\nu - \partial_\nu \omega_\mu+[\omega_\mu,
\omega_\nu].$
Higher order $h_n(x,x)$ may be found in \cite{ball}.
The singularities that arise in the calculation have been regulated
by analytic continuation in the complex $D$ plane.
In a four-dimensional
theory the ultraviolet divergences appear as poles about $D=4$,
and get contributions from $h_0$, $h_1$ and $h_2$,
as $\Gamma \left( - \frac {D}{2} \right)$,
$\Gamma \left( 1 - \frac {D}{2} \right)$ and
$\Gamma \left( 2- \frac {D}{2} \right)$, respectively. Retaining in
(\ref{a.1})
these three terms only, one obtains
\begin{eqnarray}
\left.
i \ tr \langle x|\log \left( \hat{H} + M^2 \right)|x\rangle
\right|_{div} &=&
\frac {M^{D-4}}{(4 \pi)^2} \left( \frac {-2}{D-4} +
\log 4 \pi - \gamma +1 \right)
\langle \ \frac {1}{2} (\sigma + M^2)^2 +
\frac {1}{12} R_{\mu \nu} R^{\mu \nu} \ \rangle
\nonumber \\
&+& \frac {1}{(4 \pi)^2} \langle \ \left( \frac {1}{4} M^4
- \frac {1}{2} \sigma^2 \right) -
\frac {1}{12} R_{\mu \nu} R^{\mu \nu} \ \rangle + O(D-4).
\label{a.4}
\end{eqnarray}
Notice that in (\ref{a.4}) it is the combination
$\sigma + M^2 $ that multiplies the pole $\frac {1}{D-4}$, i.e., it
is the whole operator that is added to
$d_\mu d^\mu$ in $\hat{H}+M^2$.
If $\sigma$ has its own mass terms - the light pseudoscalar masses
in our case -
there is no need to introduce any new infrared regulator $M$.
We see that only $h_2$ is involved in the residue of the pole
$\frac {1}{D-4}$.
Therefore, a finite expression is obtained by subtracting
$$\frac {\mu^{D-4}}{(4 \pi)^2} \left( - \frac {2}{D-4} +
\log 4 \pi - \gamma +1 \right)
\langle \ h_2(x,x) \ \rangle$$
from the effective action. This takes care of the ultraviolet divergences
and is the procedure used in the text.
$\mu$ is a parameter with {\it mass} units.
The total derivative
$\langle d_\mu d^\mu \sigma \rangle$ has been discarded.
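As a small numerical illustration (ours), the pole structure of $\Gamma\left(2-\frac{D}{2}\right)$ that underlies (\ref{a.4}), namely $-\frac{2}{D-4}-\gamma+O(D-4)$, can be checked directly; the value of $D$ used below is an arbitrary choice:

```python
import math

gamma_euler = 0.5772156649015329    # Euler-Mascheroni constant
D = 3.998                           # slightly away from the pole at D = 4
pole = -2.0 / (D - 4.0)             # the 1/(D-4) pole term appearing in (a.4)
lhs = math.gamma(2.0 - D / 2.0)
# Gamma(2 - D/2) = -2/(D-4) - gamma + O(D-4)
assert abs(lhs - (pole - gamma_euler)) < 1e-2
```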
\section{Appendix B: U(3).}
This appendix presents some of the properties of the $U(3)$
group which have been used in the text. Whenever the generalization is
straightforward we give the result for $U(n)$.
The explicit form of the $U(3)$ hermitian generators
$\lambda_\mu=\lambda_\mu^\dagger$, $\mu=0,1,2,\ldots,8$, is
\begin{eqnarray}
\label{unmatrices}
\lambda_0 =\sqrt{\frac {2}{3}}
\left(\begin{array}{rrr}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1 \end{array}\right),\
&
\lambda_1 = \left(\begin{array}{rrr}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 0 \end{array}\right),
&
\lambda_2 = \left(\begin{array}{rrr}
0 & -i & 0 \\
i & 0 & 0 \\
0 & 0 & 0 \end{array}\right),
\\
& & \nonumber \\
& & \nonumber \\
\lambda_3 = \left(\begin{array}{rrr}
1 & 0 & 0 \\
0 & -1 & 0 \\
0 & 0 & 0 \end{array}\right),\nonumber
&
\lambda_4 = \left(\begin{array}{rrr}
0 & 0 & 1 \\
0 & 0 & 0 \\
1 & 0 & 0 \end{array}\right),
&
\lambda_5 = \left(\begin{array}{rrr}
0 & 0 & -i \\
0 & 0 & 0 \\
i & 0 & 0 \end{array}\right),
\\
& & \nonumber \\
& & \nonumber \\
\lambda_6 = \left(\begin{array}{rrr}
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0 \end{array}\right), \nonumber
&
\lambda_7 = \left(\begin{array}{rrr}
0 & 0 & 0 \\
0 & 0 & -i \\
0 & i & 0 \end{array}\right),
&
\lambda_8 = \frac {1}{\sqrt{3}}\left(\begin{array}{rrr}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & -2 \end{array}\right).
\end{eqnarray}
For $U(n)$ there are $n^2$ such matrices, and $\mu= 0,1,...,n^2-1$.
Also $\lambda_0=\sqrt{\frac {2}{n}} I$, with $I$ the $n \times n$ identity.
These matrices obey, in general, the following basic trace properties:
\begin{equation}
\label{basic}
Tr\left(\lambda_\mu\right)\equiv \langle \lambda_\mu\rangle =
\sqrt{2n}\ \delta_{\mu 0}
\qquad,\qquad
\langle \lambda_\mu\lambda_\nu\rangle =2 \delta_{\mu\nu}\ .
\end{equation}
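These conventions are easy to check by machine. The sketch below (ours; the helper name \verb|u3_generators| is our own) builds the nine matrices of (\ref{unmatrices}) and verifies hermiticity together with the trace properties (\ref{basic}):

```python
import numpy as np

def u3_generators():
    """The nine hermitian U(3) generators lambda_0, ..., lambda_8."""
    l = [np.zeros((3, 3), complex) for _ in range(9)]
    l[0] = np.sqrt(2/3) * np.eye(3)
    for k, (i, j) in zip((1, 4, 6), ((0, 1), (0, 2), (1, 2))):
        l[k][i, j] = l[k][j, i] = 1              # lambda_1, lambda_4, lambda_6
        l[k+1][i, j], l[k+1][j, i] = -1j, 1j     # lambda_2, lambda_5, lambda_7
    l[3] = np.diag([1, -1, 0]).astype(complex)
    l[8] = np.diag([1, 1, -2]).astype(complex) / np.sqrt(3)
    return l

lam, n = u3_generators(), 3
for mu, m in enumerate(lam):
    assert np.allclose(m, m.conj().T)                        # hermiticity
    assert np.isclose(np.trace(m), np.sqrt(2*n)*(mu == 0))   # <l_mu> = sqrt(2n) delta_{mu 0}
    for nu, mm in enumerate(lam):
        assert np.isclose(np.trace(m @ mm), 2*(mu == nu))    # <l_mu l_nu> = 2 delta_{mu nu}
```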
The product of two matrices verifies:
\begin{equation}
\left[\lambda_\mu,\lambda_\nu\right]= 2 i f_{\mu\nu\rho}\lambda_\rho\qquad
,\qquad
\left\{\lambda_\mu,\lambda_\nu\right\}= 2 d_{\mu\nu\rho}\lambda_\rho\ ,
\end{equation}
\begin{equation}
\lambda_\mu\lambda_\nu=\left( d_{\mu\nu\rho}+ i f_{\mu\nu\rho}\right)
\lambda_\rho\ .
\end{equation}
For $U(3)$, the non-zero entries of the antisymmetric
$f_{\mu\nu\rho}$ and symmetric $d_{\mu\nu\rho}$ constants are
\begin{eqnarray}
&f_{123}=1\qquad, \qquad f_{458}=f_{678}={\sqrt{3}\over 2} \ ,\nonumber\\
&f_{147}=-f_{156}=f_{246}=f_{257}=f_{345}=-f_{367}=\frac {1}{2} \ ,
\end{eqnarray}
\begin{eqnarray}
&d_{0\mu\nu}=\sqrt{2\over 3} \delta_{\mu\nu}\ ,\nonumber\\
&d_{118}=d_{228}=d_{338}=-d_{888}={1\over \sqrt{3}}\ ,\nonumber\\
&d_{146}=d_{157}=-d_{247}=d_{256}=d_{344}=d_{355}=-d_{366}
=-d_{377}={1\over 2}\ ,\nonumber\\
&d_{448}=d_{558}=d_{668}=d_{778}=-{1\over 2\sqrt{3}}\ .
\end{eqnarray}
These tensors are not independent, as they satisfy the following relations
(repeated indices are summed over):
\begin{eqnarray}
& d_{\mu\nu\nu}=n \sqrt{2n}\ \delta_{\mu 0}\ , \nonumber\\
\nonumber\\
& d_{\mu\nu\lambda}d_{\rho\nu\lambda}=
n\left(\delta_{\mu\rho}+\delta_{\mu 0}
\delta_{\rho 0}\right)\ ,\nonumber\\
& f_{\mu\nu\lambda}f_{\rho\nu\lambda}=
n\left(\delta_{\mu\rho}-\delta_{\mu 0}
\delta_{\rho 0}\right)\ , \nonumber\\
\nonumber\\
&f_{\mu\nu\tau} f_{\lambda\rho\tau}+f_{\mu\lambda\tau}f_{\rho\nu\tau}+
f_{\mu \rho\tau}f_{\nu\lambda\tau}=0\ ,\nonumber \\
&f_{\mu\nu\tau} d_{\lambda\rho\tau}+f_{\mu\lambda\tau}d_{\rho\nu\tau}+
f_{\mu \rho\tau}d_{\nu\lambda\tau}=0\ ,\nonumber \\
\nonumber\\
&f_{\mu\nu\sigma}f_{\rho\tau\sigma}=d_{\mu\rho\sigma}d_{\tau\nu\sigma}-
d_{\mu\tau\sigma}d_{\nu\rho\sigma}\ .
\end{eqnarray}
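The tabulated constants and these relations can be cross-checked numerically by extracting $f_{\mu\nu\rho}$ and $d_{\mu\nu\rho}$ from commutators and anticommutators; the sketch below (ours, with the generators of (\ref{unmatrices}) rebuilt for self-containment) spot-checks a few entries and the $ff$ contraction:

```python
import numpy as np

def u3_generators():
    l = [np.zeros((3, 3), complex) for _ in range(9)]
    l[0] = np.sqrt(2/3) * np.eye(3)
    for k, (i, j) in zip((1, 4, 6), ((0, 1), (0, 2), (1, 2))):
        l[k][i, j] = l[k][j, i] = 1
        l[k+1][i, j], l[k+1][j, i] = -1j, 1j
    l[3] = np.diag([1, -1, 0]).astype(complex)
    l[8] = np.diag([1, 1, -2]).astype(complex) / np.sqrt(3)
    return l

lam, n = u3_generators(), 3
# f_{abc} = <[l_a, l_b] l_c>/(4i),  d_{abc} = <{l_a, l_b} l_c>/4
f = np.zeros((9, 9, 9))
d = np.zeros((9, 9, 9))
for a in range(9):
    for b in range(9):
        comm = lam[a] @ lam[b] - lam[b] @ lam[a]
        anti = lam[a] @ lam[b] + lam[b] @ lam[a]
        for c in range(9):
            f[a, b, c] = np.trace(comm @ lam[c]).imag / 4
            d[a, b, c] = np.trace(anti @ lam[c]).real / 4

# spot-check tabulated entries
assert np.isclose(f[1, 2, 3], 1) and np.isclose(f[4, 5, 8], np.sqrt(3)/2)
assert np.isclose(d[1, 1, 8], 1/np.sqrt(3)) and np.isclose(d[0, 1, 1], np.sqrt(2/3))
# f_{abc} f_{dbc} = n (delta_{ad} - delta_{a0} delta_{d0})
e0 = np.eye(9)[0]
assert np.allclose(np.einsum('mab,rab->mr', f, f),
                   n*(np.eye(9) - np.outer(e0, e0)))
```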
The identity
\begin{equation}
\left(\lambda_\alpha\right)_{a b}
\left(\lambda_\alpha\right)_{c d}= 2 \delta_{a d}\delta_{b c}\ ,
\end{equation}
has been extensively used. It yields the properties
\begin{eqnarray}
\lambda^{\alpha} \lambda^\alpha &=& 2 n I , \nonumber \\
\langle \lambda_\alpha A \lambda_\alpha B\rangle & =& 2 \langle A\rangle
\langle B\rangle \ ,\nonumber \\
\langle \lambda_\alpha A\rangle \langle
\lambda_\alpha B\rangle &=& 2 \langle A B\rangle \ .
\end{eqnarray}
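The completeness identity and the derived properties can likewise be verified numerically; in the sketch below (ours, with the generators rebuilt for self-containment), $A$ and $B$ are arbitrary random complex matrices:

```python
import numpy as np

def u3_generators():
    l = [np.zeros((3, 3), complex) for _ in range(9)]
    l[0] = np.sqrt(2/3) * np.eye(3)
    for k, (i, j) in zip((1, 4, 6), ((0, 1), (0, 2), (1, 2))):
        l[k][i, j] = l[k][j, i] = 1
        l[k+1][i, j], l[k+1][j, i] = -1j, 1j
    l[3] = np.diag([1, -1, 0]).astype(complex)
    l[8] = np.diag([1, 1, -2]).astype(complex) / np.sqrt(3)
    return l

lam, n = u3_generators(), 3
# completeness: (l_a)_{ab} (l_a)_{cd} = 2 delta_{ad} delta_{bc}
S = sum(np.einsum('ab,cd->abcd', m, m) for m in lam)
assert np.allclose(S, 2*np.einsum('ad,bc->abcd', np.eye(3), np.eye(3)))

# the derived trace identities, with random A and B
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3)) + 1j*rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j*rng.standard_normal((3, 3))
assert np.allclose(sum(m @ m for m in lam), 2*n*np.eye(3))
assert np.isclose(sum(np.trace(m @ A @ m @ B) for m in lam),
                  2*np.trace(A)*np.trace(B))
assert np.isclose(sum(np.trace(m @ A)*np.trace(m @ B) for m in lam),
                  2*np.trace(A @ B))
```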
\section{Appendix C: Operators with derivatives of $\hat{\theta}$ and
some \\ C-violating operators.}
In this appendix we list the operators that involve derivatives
of $\hat{\theta}$ and that have not been included in the text
(see (\ref{3.18})).
\begin{eqnarray}
O_{31} &=& D_\mu \hat{\theta}\ \langle C^\mu C^\nu C_\nu\rangle , \nonumber\\
O_{32} &=& D_\mu \hat{\theta}\ \langle C^\mu\rangle \langle C^\nu C_\nu\rangle , \nonumber\\
O_{33} &=& D_\mu \hat{\theta}\ \langle C^\mu C^\nu\rangle \langle C_\nu\rangle , \nonumber\\
O_{34} &=& D_\mu \hat{\theta}\ \langle C^\mu\rangle \langle C^\nu\rangle \langle C_\nu\rangle , \nonumber\\
O_{35} &=& D_\mu \hat{\theta}\ D^\mu \hat{\theta}\ \langle C^\nu C_\nu\rangle , \nonumber\\
O_{36} &=& D_\mu \hat{\theta}\ D_\nu \hat{\theta}\ \langle C^\mu C^\nu\rangle , \nonumber\\
O_{37} &=& D_\mu \hat{\theta}\ D^\mu \hat{\theta}\ \langle C^\nu\rangle \langle C_\nu\rangle , \nonumber\\
O_{38} &=& D_\mu \hat{\theta}\ D_\nu \hat{\theta}\ \langle C^\mu\rangle \langle C^\nu\rangle , \nonumber\\
O_{39} &=& D_\mu \hat{\theta}\ D^\mu \hat{\theta}\ D_\nu \hat{\theta}
\langle C^\nu\rangle , \nonumber\\
O_{40} &=& D_\mu \hat{\theta}\ D^\mu \hat{\theta}\ D_\nu \hat{\theta}\
D^\nu \hat{\theta} ,\nonumber\\
O_{41} &=& i D^\mu D_\mu \hat{\theta}\ \langle C^\nu C_\nu\rangle ,\nonumber\\
O_{42} &=& i D^\mu D_\mu \hat{\theta}\ \langle C^\nu\rangle \langle C_\nu\rangle ,\nonumber\\
O_{43} &=& i \langle C^\mu\rangle D_\mu \hat{\theta}\ D^\nu D_\nu \hat{\theta},\nonumber\\
O_{44} &=& i D_\mu \hat{\theta}\ D^\mu \hat{\theta}\ D^\nu D_\nu \hat{\theta}
,\nonumber\\
O_{45} &=& D^\mu D_\mu \hat{\theta}\ D^\nu D_\nu \hat{\theta} ,\nonumber\\
O_{46} &=& D_\mu \hat{\theta}\ \langle C^\mu M\rangle ,\nonumber\\
O_{47} &=& D_\mu \hat{\theta}\ \langle C^\mu\rangle \langle M\rangle ,\nonumber\\
O_{48} &=& i D_\mu \hat{\theta}\ \langle C^\mu N\rangle ,\nonumber\\
O_{49} &=& i D_\mu \hat{\theta}\ \langle C^\mu\rangle \langle N\rangle ,\nonumber\\
O_{50} &=& D_\mu \hat{\theta}\ D^\mu \hat{\theta}\ \langle M\rangle ,\nonumber\\
O_{51} &=& i D_\mu \hat{\theta}\ D^\mu \hat{\theta}\ \langle N\rangle ,\nonumber\\
O_{52} &=& i D^\mu D_\mu \hat{\theta}\ \langle M\rangle ,\nonumber\\
O_{53} &=& D^\mu D_\mu \hat{\theta}\ \langle N\rangle ,\nonumber\\
O_{54} &=& D_\mu \hat{\theta}\ \langle C_\nu \left( F_L^{\mu \nu} -
U^\dagger F_R^{\mu \nu} U \right)\rangle ,\nonumber\\
O_{55} &=& D_\mu \hat{\theta}\ \langle C_\nu\rangle \langle F_L^{\mu \nu} - F_R^{\mu \nu}\rangle ,
\nonumber\\
O_{56} &=& \epsilon_{\mu \nu \rho \sigma}
\langle \left( F_L^{\mu \nu} -
U^\dagger F_R^{\mu \nu} U \right) C^\rho\rangle D^\sigma \hat{\theta}
,\nonumber \\
O_{57} &=& \epsilon_{\mu \nu \rho \sigma}
\langle F_L^{\mu \nu} - F_R^{\mu \nu}\rangle \langle C^\rho\rangle \
D^\sigma \hat{\theta}.
\label{Dtheta}
\end{eqnarray}
The remaining independent operators are $C$-violating and are the following:
\begin{equation}
\begin{array}{ll}
\langle C_\mu\rangle
\langle \left( F_L^{\mu \nu} + U^\dagger F_R^{\mu \nu} U \right)
C_\nu\rangle ,
& i \langle C_\mu C_\nu \left( F_L^{\mu \nu} - U^\dagger F_R^{\mu \nu} U
\right)\rangle ,
\nonumber \\ \langle F_L^{\mu \nu}F_{L \;\mu \nu}-F_R^{\mu \nu}
F_{R \;\mu \nu}\rangle ,
& \langle F_L^{\mu \nu}\rangle \langle F_{L \;\mu \nu}\rangle -
\langle F_R^{\mu \nu}\rangle \langle F_{R \;\mu \nu}\rangle ,
\nonumber \\ \epsilon_{\mu \nu \rho \sigma} \langle C^\mu\rangle \langle
C^\nu C^\rho C^\sigma\rangle ,
& \epsilon_{\mu \nu \rho \sigma} \langle
\left( F_L^{\mu \nu} +
U^\dagger F_R^{\mu \nu} U \right) C^\rho\rangle \langle C^\sigma\rangle ,
\nonumber\\
D_\mu \hat{\theta}\ \langle C_\nu \left( F_L^{\mu \nu} +
U^\dagger F_R^{\mu \nu} U
\right)\rangle ,
& D_\mu \hat{\theta}\ \langle C_\nu\rangle \langle F_L^{\mu \nu} +
F_R^{\mu \nu}\rangle ,
\nonumber\\ i \epsilon_{\mu \nu \rho \sigma}\ D^\mu \hat{\theta}\
\langle C^\nu C^\rho C^\sigma\rangle ,
& \epsilon_{\mu \nu \rho \sigma}
\langle \left( F_L^{\mu \nu} +
U^\dagger F_R^{\mu \nu} U \right) C^\rho\rangle D^\sigma \hat{\theta},
\nonumber\\
\epsilon_{\mu \nu \rho \sigma}
\langle F_L^{\mu \nu} + F_R^{\mu \nu}\rangle \langle C^\rho\rangle \
D^\sigma \hat{\theta} &
\end{array}
\nonumber
\end{equation}
\section{Appendix D: The one-loop renormalization functions}
In this appendix we collect the functions $\Omega_i$ and $B_i$, as
defined in equation (\ref{3.19}), which have been computed with the
heat-kernel method. The results have been obtained with the help of the
program {\sl Mathematica}.
The $O(p^0)$ lagrangian is renormalized with $\Omega_0$,
whereas the rest of the
$\Omega_i$'s renormalize the $O(p^2)$ one. For technical reasons, it was
convenient to keep one more operator at $O(p^2)$ than those given in
(\ref{3.22p}), $E_7 = iD_\mu D^\mu \hat{\theta}$. By integration by parts,
this can be written in terms of the other $E_i$,
$$\Omega_7 E_7 = - i\Omega_7' E_5 - i\Omega_7' E_6,$$
which should be taken into account and added to $\Omega_5$ and $\Omega_6$.
For the sake of brevity, we use the following short notation:
$$\omega_i \equiv \frac {W_i}{W_1}, \qquad
\omega_i^{\prime} \equiv \frac {W_i ^{\prime }}{W_1}, \qquad
\omega_i^{\prime \prime} \equiv \frac {W_i ^{\prime \prime}}{W_1},$$
and, similarly, with as many
primes as necessary. Let us emphasize that $\omega_i^{\prime}$ {\it is
not} the derivative of $\omega_i$, and so forth.
Also, recall that $W_1(0)=W_2(0)=\frac {f^2}{4}$ so that $\omega_1(0)=
\omega_2 (0)=1$.
\begin{eqnarray}
\Omega_0 &=& \frac {\nz^2}{8} \wopp^2 - \frac {\nz^2}{8} \wop \wopp \wunp
+ \frac {\nz^4}{32} \wop^2 \wunp^2 \nonumber \\
\Omega_1 &=& - \frac {\nz^2}{8} \wop \wunp - \frac {\nz^2}{4} \wopp \wunpp
+ \frac {\nz^2}{4} \wopp \wunp^2 + \frac {\nz^2}{8} \wop \wunp \wunpp
- \frac{\nz^4}{16} \wop \wunp^3 , \nonumber \\
\Omega_2 &=& - \frac {1}{4} \wopp \wdos - \frac {\nz^2}{4} \wopp \wdospp
+ i \frac {\nz}{2} \wopp \wtresp + \frac {1}{4} \wop \wunp \wdos
- \frac {\nz^2}{8} \wop \wunp \wdos \nonumber \\ &+&
\frac {\nz^2}{8} \wop \wunp \wdospp +
\frac {\nz^2}{8} \wopp \wunp \wdosp - i\frac {\nz}{4} \wop \wunp \wtresp -
i\frac {\nz}{8} \wopp \wunp \wtres \nonumber \\ &-&
\frac {\nz^4}{16} \wop \wunp^2 \wdosp +
i\frac {\nz^3}{16} \wop \wunp^2 \wtres , \nonumber \\
\Omega_3 &=& - i \frac {\nz}{2} \wopp \wdosp - \frac {1}{4} \wopp \wtres -
\frac {\nz^2}{4}\wopp \wtrespp + i \frac {\nz}{8} \wopp \wunp \wdos +
i \frac {\nz}{4}\wop \wunp \wdosp \nonumber \\ &+&
\left( \frac {1}{4} - \frac {\nz^2}{8} \right) \wop \wunp \wtres +
\frac {\nz^2}{8} \wop \wunp \wtrespp + \frac {\nz^2}{8} \wopp \wunp \wtresp -
\frac {\nz^4}{16} \wop \wunp^2 \wtresp \nonumber \\ &-& i \frac {\nz^3}{16}
\wop \wunp^2 \wdos , \nonumber \\
\Omega_4 &=& - \frac {\nz}{8} \wop \wunp - \frac {\nz}{4} \wopp \wunpp +
\frac {\nz}{4} \wopp \wunp^2 +
\frac {\nz^3}{8} \wop \wunp \wunpp +
\left( \frac {\nz}{8}- 3 \frac {\nz^3}{16} \right)
\wop \wunp^3 , \nonumber \\
\Omega_5 &=& -\frac {1}{4} \left( \nz^3 - \nz \right) \wop \wunp^3 +
\frac {1}{4} \left( \nz^3 - \nz \right) \wop \wunp \wunpp , \nonumber \\
\Omega_6 &=& \frac {\nz}{4} \wopp \wunpp + \frac {\nz^2}{4} \wopp \wcincpp -
\frac {\nz^2}{4} \wopp \wsispp - \frac {\nz}{8} \wopp \wunp^2 +
\left( \frac {\nz^3}{8} - \frac {\nz}{4} \right)
\wop \wunp \wunpp \nonumber \\ &-& \frac {\nz^2}{8} \wopp \wunp \wcincp -
\frac {\nz^2}{8} \wop \wunp \wcincpp +
\frac {\nz^2}{8} \wopp \wunp \wsisp + \frac {\nz^2}{8} \wop \wunp \wsispp
\nonumber \\ &+&
\left( \frac {\nz}{8} - \frac {\nz^3}{16} \right) \wop \wunp^3 +
\frac {\nz^4}{16} \wop \wunp^2 \wcincp -
\frac {\nz^4}{16} \wop \wunp^2 \wsisp , \nonumber \\
\Omega_7 &=& -i\frac {\nz}{4} \wopp \wunp -i \frac {\nz^2}{4} \wopp \wcincp -i
\left( \frac {\nz^3}{8} - \frac {\nz}{4} \right) \wop \wunp^2 +i
\frac {\nz^2}{8} \wopp \wunp \wcinc \nonumber \\ &+&i
\frac {\nz^2}{8} \wop \wunp \wcincp -i
\frac {\nz^4}{16} \wop \wunp^2 \wcinc .
\end{eqnarray}
\vspace*{0.5cm}
\noindent
The $O(p^4)$ lagrangian is renormalized with the following $B_i$'s,
\begin{eqnarray}
B_0 &=& \frac {\nz}{48} + \frac {\nz}{6} \wunp^2 ,
\nonumber \\
B_1 &=& \frac {1}{16} + \frac {\nz^2}{8} \wunp^2 + \frac {\nz^2}{8} \wunpp^2 -
\frac {\nz^2}{4} \wunp^2 \wunpp + \left( \frac {\nz^2}{48} + \frac {\nz^4}{32}
\right) \wunp^4, \nonumber \\
B_2 &=& \frac {1}{8} + \frac {\nz^2}{24} \wunp^4 , \nonumber \\
B_3 &=& \frac {\nz}{24} - \frac {\nz}{6} \wunp^2 , \nonumber \\
B_4 &=& \frac {\wdos}{8} - i \frac {\nz}{8} \wunp \wtres +
\frac {\nz^2}{4} \wunpp \wdospp + \frac {1}{4} \wunpp \wdos +
\frac {\nz^2}{8} \wunp \wdosp - i \frac {\nz^3}{2} \wunpp \wtresp
\nonumber\\
&+& \left( \frac {\nz^2}{8} - \frac {3}{8} \right) \wunp^2 \wdos -
\frac {\nz^2}{8} \wunp \wunpp \wdosp - \frac {\nz^2}{4} \wunp^2 \wdospp +
i \frac {\nz}{8} \wunp \wunpp \wtres \nonumber\\
&+& i \frac {\nz}{2} \wunp^2 \wtresp +
\frac {\nz^4}{16} \wunp^3 \wdosp -
i \frac {\nz}{16} \wunp^3 \wtres , \nonumber \\
B_5 &=& \frac {\nz}{8} \wdos - \frac {\nz}{8} \wunp^2 \wdos , \nonumber \\
B_6 &=& \frac {1}{16} \wdos^2 + \frac {1}{4} \wdos \wdospp +
\frac {\nz^2}{8} \wdospp^2 - \frac {1}{4} \wtresp^2 -
i \frac {\nz}{2} \wdospp \wtresp +
\left( \frac {\nz^2}{8} - \frac {1}{4} \right) \wunp \wdos \wdosp
\nonumber \\ &-&
\frac {\nz^2}{8} \wunp \wdosp \wdospp - i \frac {\nz}{8} \wunp \wdos \wtres +
i \frac {\nz}{8} \wunp \wdospp \wtres + i \frac {\nz}{4} \wunp \wdosp \wtresp
\nonumber \\ &+& \frac {\nz^4}{32} \wunp^2 \wdosp^2 +
\left( \frac {1}{16}- \frac {\nz^2}{32} \right) \wunp^2 \wtres^2 -
i \frac {\nz^3}{16} \wunp^2 \wdosp \wtres , \nonumber \\
B_7 &=& \frac {1}{4} \wdosp^2 - \frac {1}{16} \wtresp^2 -
\frac {\nz^2}{8} \wtrespp^2 - \frac {1}{4} \wtres \wtrespp -
i \frac {\nz}{2} \wdosp \wtrespp - i \frac {\nz}{8} \wunp \wdos \wtres
\nonumber \\ &+& i \frac {\nz}{4} \wunp \wdosp \wtresp +
\left( \frac {1}{4}- \frac {\nz^2}{8} \right) \wunp \wtres \wtresp +
i \frac {\nz}{8} \wunp \wdos \wtrespp + \frac {\nz^2}{8} \wunp \wtresp \wtrespp
\nonumber \\ &+&
\left( \frac {\nz^2}{32} - \frac {1}{16} \right) \wunp^2 \wdos^2 -
i \frac {\nz^3}{16} \wunp^2 \wdos \wtresp - \frac {\nz^4}{32} \wunp^2 \wtresp^2
, \nonumber \\
B_8 &=& \frac {\nz}{16} \wdos^2 - \frac {\nz}{16} \wtres^2 - \frac {\nz}{4}
\wdosp^2 - \frac {\nz}{4} \wtresp^2
- \frac {i}{2} \wdos \wtresp - \frac {i}{2} \wdosp \wtres
+ \frac {i}{2} \wunp \wdos \wtres \nonumber \\&+&
\frac {\nz}{4} \wunp \wtres \wtresp - \frac {\nz}{4} \wunp \wdos \wdosp -
\frac {\nz}{16} \wunp^2 \wtres^2 + \frac {\nz}{16} \wunp^2 \wdos^2,
\nonumber \\
B_9 &=& \frac {\nz}{12} + \frac {\nz}{12} \wunp^2 , \nonumber \\
B_{10} &=& - \frac {\nz}{12} - \frac {\nz}{12} \wunp^2 , \nonumber \\
B_{11} &=& - \frac {\nz}{24} + \frac {\nz}{24} \wunp^2 , \nonumber \\
B_{12} &=& \frac {\nz}{8} \wdos^2 + \frac {\nz}{8} \wtres^2 -
\frac {\nz}{2} \wdosp^2 - \frac {\nz}{2} \wtresp^2 + i \wdosp \wtres -
i \wdos \wtresp +\frac {\nz}{2} \wunp \wdos \wdosp \nonumber \\
&+& \frac {\nz}{2} \wunp \wtres \wtresp -
\frac {\nz}{8} \wunp^2 \wdos^2 - \frac {\nz}{8} \wunp^2 \wtres^2 , \nonumber \\
B_{13} &=& \frac {1}{4} , \nonumber \\
B_{14} &=& -\frac {\nz}{4} \wunpp + \frac {\nz}{2} \wunp^2 +
\frac {\nz}{3} \wunpp^2 +
\left( \frac {5 \nz}{12}- \frac {\nz^3}{8} \right) \wunp^2 \wunpp +
\left(- \frac {\nz}{8} + \frac {3 \nz^3}{16} \right)\wunp^4
, \nonumber \\
B_{15} &=& - \frac {\nz}{3} \wunpp^2 +
\frac {2 \nz}{3} \wunp^2 \wunpp - \frac {\nz}{4} \wunp^4 , \nonumber \\
B_{16} &=& - \frac {1}{4} \wunpp + \frac {3}{8} \wunp^2 +
\left(\frac {\nz^2}{8} - \frac {1}{4}\right) \wunpp^2 +
\left(\frac {3}{4}-\frac {3\nz^2}{8}\right)\wunp^2 \wunpp +
\left( \frac {9\nz^2}{32}-\frac {9}{16} \right) \wunp^4 , \nonumber \\
B_{17} &=& \frac {\nz}{8} \wunp \wdosp - \frac {\nz}{4} \wunpp \wdos +
\frac {\nz}{4} \wunpp \wdospp - \frac {i}{8} \wunp \wtres +
\frac {\nz}{8} \wunp^2 \wdos - \frac {\nz^3}{8} \wunp \wunpp \wdosp
\nonumber \\
&-& \frac {\nz}{4} \wunp^2\wdospp +
i\left(\frac {\nz^2}{8}-\frac {1}{4} \right) \wunp \wunpp \wtres -
\frac {\nz}{8} \wunp^3 \wdos + \frac {3\nz^3}{16}\wunp^3 \wdosp
\nonumber \\&+& 3i \left(\frac {1}{8}- \frac {\nz^2}{16} \right)\wunp^3 \wtres
, \nonumber \\
B_{18} &=& -\frac {1}{4} \wdos - \frac {1}{2} \wunpp \wdos +
i \frac {\nz}{2}\wunpp \wtresp + \frac {3}{4}\wunp^2\wdos-
i \frac {\nz}{4} \wunp \wunpp \wtres - i \frac {\nz}{2} \wunp^2\wtresp
\nonumber \\ &+&i \frac {\nz}{4} \wunp^3 \wtres ,
\nonumber \\
B_{19} &=& \frac {1}{24} - \frac {1}{24}\wunp^2 , \nonumber \\
B_{20} &=& \frac {1}{12} + \frac {1}{12}\wunp^2 , \nonumber \\
B_{21} &=& - \frac {\nz}{8} \wtres + \frac {\nz}{8} \wunp^2 \wtres,
\nonumber \\
B_{22} &=& - \frac {1}{8}\wtres - i \frac {\nz}{8} \wunp \wdos -
i \frac {\nz}{2} \wunpp \wdosp - \frac {1}{4}\wunpp \wtres -
\frac {\nz^2}{8}\wunp \wtresp-
\frac {\nz^2}{4}\wunpp \wtrespp \nonumber \\ &+&
i \frac {\nz}{8}\wunp \wunpp \wdos + i \frac {\nz}{2}\wunp^2 \wdosp +
\frac {1}{8} \left(3- \nz^2 \right)\wunp^2\wtres +
\frac {\nz^2}{8}\wunp \wunpp \wtresp \nonumber \\ &+&
\frac {\nz^2}{4}\wunp^2\wtrespp - i \frac {\nz^3}{16} \wunp^3 \wdos -
\frac {\nz^4}{16} \wunp^3 \wtresp, \nonumber \\
B_{23} &=& \frac {1}{4}\wtres + \frac {1}{2} \wunpp \wtres +
i \frac {\nz}{2} \wunpp \wdosp - i \frac {\nz}{4}\wunp \wunpp \wdos +
i \frac {\nz}{2} \wunp^2 \wdosp
- \frac {3}{4} \wunp^2 \wtres \nonumber \\ &+&i \frac {\nz}{4} \wunp^3 \wdos ,
\nonumber \\
B_{24} &=& \frac {i}{8} \wunp \wdos - \frac {\nz}{4}\wunpp \wtres +
\frac {\nz}{8}\wunp \wtresp + \frac {\nz}{4}\wunpp \wtrespp +
\frac {i}{4}\left(1- \frac {\nz^2}{2}\right)\wunp \wunpp \wdos
\nonumber \\ &+&\frac {3\nz}{8}\wunp^2\wtres - \frac {\nz^3}{8}\wunp \wunpp
\wtresp - \frac {\nz}{4} \wunp^2 \wtrespp +
\frac {3i}{8}\left(\frac {\nz^2}{2}-1\right)\wunp^3 \wdos
\nonumber \\ &+&\left(- \frac {\nz}{8}+
\frac {3 \nz^3}{16}\right)\wunp^3 \wtresp, \nonumber \\
B_{25} &=& \frac {i}{2}\wdos\wdosp-\frac {i}{2}\wtres\wtresp +
\frac {\nz}{8}\wdos\wtres +\frac {\nz}{2}\wdosp\wtresp -
\frac {i}{4}\wunp \wdos^2 + \frac {i}{4}\wunp \wtres^2 \nonumber \\ &-&
\frac {\nz}{4}\wunp\wdosp\wtres -
\frac {\nz}{4}\wunp\wdos\wtresp +\frac {\nz}{8}\wunp^2\wdos\wtres
, \nonumber \\
B_{26} &=& i \frac {\nz}{2}\wdosp\wdospp -i \frac {\nz}{2}\wtresp\wtrespp +
\frac {1}{8} \wdos\wtres +\frac {1}{2} \wdosp\wtresp +
\frac {\nz^2}{4}\wdospp\wtrespp +
\frac {1}{4} \wdospp\wtres \nonumber \\ &+&\frac {1}{4} \wdos\wtrespp +
i \frac {\nz}{8} \wunp \wdos^2
-i \frac {\nz}{4} \wunp \wdosp^2+
\frac{i \nz^3}{16} \wunp^2 \wdos \wdosp+
i \frac {\nz}{8} \wunp \wdos \wdospp \nonumber \\ &+&
\frac {1}{4}\left(\frac {\nz^2}{2}-1\right)\wunp\wdosp\wtres -
i \frac {\nz}{8}\wunp\wtres^2 +
\frac {1}{4}\left(\frac {\nz^2}{2}-1\right)\wunp\wdos\wtresp -
\frac {\nz^2}{8}\wunp\wdospp\wtresp \nonumber \\ &+&
i \frac {\nz}{4}\wunp\wtresp^2 -
\frac {\nz^2}{8}\wunp\wdosp\wtrespp +i \frac {\nz}{8}\wunp\wtres\wtrespp +
\frac {1}{8}\left(\frac {\nz^2}{2}-1\right)\wunp^2\wdos\wtres
\nonumber \\ &+&\frac {\nz^4}{16} \wunp^2\wdosp\wtresp - i \frac {\nz^3}{16}
\wunp^2\wtres\wtresp, \nonumber \\
B_{27} &=& -i\frac {\nz}{6} \wunp \wunpp + i\frac {\nz}{6 } \wunp^3
, \nonumber \\
B_{28} &=& 0, \nonumber \\
B_{29} &=& 0, \nonumber \\
B_{30} &=& 0, \nonumber \\
B_{31} &=& 0, \nonumber \\
B_{32} &=& \frac {\nz}{2}\wunpp - \frac {\nz}{2}\wunp^2 -
\frac {\nz}{6}\wunpp^2 + \left(-\frac {\nz}{6} +
\frac {\nz^3}{4} \right)\wunp^2 \wunpp +
\left(\frac {\nz}{3} - \frac {\nz^3}{4}\right)\wunp^4, \nonumber \\
B_{33} &=& \frac {2\nz}{3}\wunpp^2 -
\frac {4 \nz}{3} \wunp^2\wunpp +
\frac {2 \nz}{3} \wunp^4, \nonumber \\
B_{34} &=& -\frac {1}{2}\wunpp +\frac {1}{2}\wunp^2 +
\left(\frac {\nz^2}{2}-1\right) \wunpp^2 +
\frac {5}{2}\left(1-\frac {\nz^2}{2}\right) \wunp^2\wunpp -
\frac {3}{2}\left(1-\frac {\nz^2}{2}\right) \wunp^4, \nonumber \\
B_{35} &=& \frac {\nz}{4}\wunpp - \frac {\nz}{8}\wunp^2 +
\frac {\nz}{6}\wunpp^2 +
\frac {\nz^2}{8}\wunp\wcincp + \frac {\nz^2}{4}\wunpp\wcincpp -
\frac {\nz^2}{8}\wunp\wsisp \nonumber \\ &-&\frac {\nz^2}{4}\wunpp\wsispp +
\left( \frac {11 \nz}{24}+ \frac {\nz^3}{8}\right) \wunp^2\wunpp -
\frac {\nz^2}{8}\wunp\wunpp\wcincp - \frac {\nz^2}{4}\wunp^2\wcincpp
\nonumber \\ &+&\frac {\nz^2}{8}\wunp\wunpp\wsisp +
\frac {\nz^2}{4}\wunp^2\wsispp +
\left(\frac {\nz}{6} - \frac {\nz^3}{16}\right) \wunp^4 +
\frac {\nz^4}{16} \wunp^3 \wcincp \nonumber \\ &-&
\frac {\nz^4}{16} \wunp^3 \wsisp, \nonumber \\
B_{36} &=& \frac {\nz}{3}\wunpp^2-\frac {2\nz}{3}\wunp^2\wunpp +
\frac {\nz}{3}\wunp^4, \nonumber \\
B_{37} &=& -\frac {1}{4}\wunpp + \frac {1}{8}\wunp^2 +
\left(\frac {\nz^2}{4}-\frac {5}{12}\right)\wunpp^2 -
\frac {\nz}{8}\wunp\wcincp - \frac {\nz}{4}\wunpp\wcincpp +
\frac {\nz}{8}\wunp\wsisp \nonumber \\ &+& \frac {\nz}{4}\wunpp\wsispp +
\left(\frac {5}{6}-\frac {\nz^2}{2}\right)\wunp^2\wunpp +
\frac {\nz^3}{8}\wunp\wunpp\wcincp - \frac {\nz^3}{8}\wunp\wunpp\wsisp
\nonumber \\ &+& \frac {1}{4}\wunp^2\wcincpp -
\frac {\nz}{4}\wunp^2\wsispp +
\left(\frac {3\nz^2}{16}-\frac {7}{24}\right)\wunp^4 +
\left(\frac {\nz}{8}-\frac {3 \nz^3}{16}\right)\wunp^3\wcincp
\nonumber \\ &+&\left(- \frac {\nz}{8}+
\frac {3 \nz^3}{16}\right)\wunp^3\wsisp, \nonumber \\
B_{38} &=& \left(\frac {\nz^2}{2}-\frac {5}{6}\right)\wunpp^2 +
\left(\frac {5}{3}-\nz^2\right)\wunp^2\wunpp +
\left(\frac {\nz^2}{2}-\frac {5}{6}\right)\wunp^4, \nonumber \\
B_{39} &=& \frac {1}{2}\left(\nz^2-1\right)\wunpp^2 -
\frac {3}{4}\left(\nz^2-1\right)\wunp^2\wunpp +
\frac {1}{4}\left(\nz^3-\nz\right)\wunp\wunpp\wcincp \nonumber \\ &-&
\frac {1}{4}\left(\nz^3-\nz\right)\wunp\wunpp\wsisp +
\frac {1}{4}\left(\nz^2-1\right)\wunp^4 -
\frac {1}{4}\left(\nz^3-\nz\right)\wunp^3\wcincp \nonumber \\ &+&
\frac {1}{4}\left(\nz^3-\nz\right)\wunp^3\wsisp , \nonumber \\
B_{40} &=&\frac {\nz^2}{8}\wunpp^2 - \frac {\nz^2}{4}\wcincpp\wsispp +
\frac {\nz}{4}\wunpp\wcincpp - \frac {\nz}{4}\wunpp\wsispp +
\frac {\nz^2}{8}\wcincpp^2 +\frac {\nz^2}{8}\wsispp^2
\nonumber \\ &-&\frac {\nz^2}{8}\wunp^2\wunpp +
\frac {1}{4}\left(\frac {\nz^3}{2}-\nz\right)\wunp\wunpp\wcincp -
\frac {1}{4}\left(\frac {\nz^3}{2}-\nz\right)\wunp\wunpp\wsisp
\nonumber \\ &-& \frac {\nz}{8}\wunp^2\wcincp +\frac {\nz}{8}\wunp^2\wsisp
-
\frac {\nz^2}{8}\wunp\wcincp\wcincpp - \frac {\nz^2}{8}\wunp\wsisp\wsispp +
\frac {\nz^2}{8}\wunp\wcincpp\wsisp \nonumber \\ &+&
\frac {\nz^2}{8}\wunp\wcincp\wsispp +
\frac {\nz^2}{32}\wunp^4 -
\frac {1}{8}\left(\frac {\nz^3}{2}-\nz\right)\wunp^3\wcincp +
\frac {1}{8}\left(\frac {\nz^3}{2}-\nz\right)\wunp^3\wsisp
\nonumber \\ &+&\frac {\nz^4}{32}\wunp^2\wcincp^2 +
\frac {\nz^4}{32}\wunp^2\wsisp^2 -
\frac {\nz^4}{16}\wunp^2\wcincp\wsisp , \nonumber \\
B_{41} &=& -i \frac {\nz}{4}\wunp -i \frac {\nz}{4}\wunp\wunpp -i
\frac {\nz^2}{8}\wunp\wcinc -i\frac {\nz^2}{4}\wunpp\wcincp +
i\left(\frac {\nz}{2} - \frac {\nz^3}{8}\right)\wunp^3
\nonumber \\ &+&i\frac {\nz^2}{8}\wunp\wunpp\wcinc +i
\frac {\nz^2}{4}\wunp^2\wcincp -i
\frac {\nz^4}{16}\wunp^3\wcinc, \nonumber \\
B_{42} &=&\frac {i}{4}\wunp +
\frac {i}{2}\left(1-\frac {\nz^2}{2}\right)\wunp\wunpp +i
\frac {\nz}{8}\wunp\wcinc +i \frac {\nz}{4}\wunpp\wcincp +
i \left(- \frac {3}{4} + \frac {3 \nz^3}{16} \right)\wunp^3
\nonumber \\ &-&i\frac {\nz^3}{8}\wunp\wunpp\wcinc -i
\frac {\nz}{4}\wunp^2\wcincp -i
\left(\frac {\nz}{8} - \frac {3 \nz^3}{16}\right)\wunp^3\wcinc , \nonumber \\
B_{43} &=&\frac {i}{2}\left(1-\nz^2\right)\wunp\wunpp -
\frac {i}{2}\left(1-\nz^2\right)\wunp^3 -
\frac {i}{4}\left(\nz^3-\nz\right)\wunp\wunpp\wcinc \nonumber \\ &+&
\frac {i}{4}\left(\nz^3-\nz\right)\wunp^3\wcinc, \nonumber \\
B_{44} &=& -i \frac {\nz^2}{4}\wunp\wunpp -i \frac {\nz}{4}\wunpp\wcincp
-i
\frac {\nz}{4}\wunp\wcincpp - i \frac {\nz^2}{4}\wcincp\wcincpp +
i \frac {\nz}{4}\wunp\wsispp \nonumber \\ &+&i\frac {\nz^2}{4}\wcincp\wsispp +
i \frac {\nz^2}{8}\wunp^3 -
\frac {i}{4}\left(\frac {\nz^3}{2}-\nz\right)\wunp\wunpp\wcinc -
\frac {i}{8}\left(\nz^3-3\nz\right)\wunp^2\wcincp
\nonumber \\ &+&i \frac {\nz^2}{8}\wunp\wcincp^2 +
i \frac {\nz^2}{8}\wunp\wcinc\wcincpp +
\frac {i}{4}\left(\frac {\nz^3}{2}-\nz\right)\wunp^2\wsisp -i
\frac {\nz^2}{8}\wunp\wcincp\wsisp \nonumber \\ &-&i
\frac {\nz^2}{8}\wunp\wcinc\wsispp +
\frac {i}{8}\left(\frac {\nz^3}{2}-\nz\right)\wunp^3\wcinc -
i \frac {\nz^4}{16}\wunp^2\wcinc\wcincp +i
\frac {\nz^4}{16}\wunp^2\wcinc\wsisp, \nonumber \\
B_{45} &=&\frac {\nz^2}{8}\wunp^2 + \frac {\nz}{4}\wunp\wcincp +
\frac {\nz^2}{8}\wcincp^2 +
\frac {1}{4}\left(\frac {\nz^3}{2}-\nz\right)\wunp^2\wcinc -
\frac {\nz^2}{8}\wunp\wcinc\wcincp \nonumber \\ &+&
\frac {\nz^4}{32}\wunp^2\wcinc^2, \nonumber \\
B_{46} &=& \frac {1}{2}\wunpp\wdos - i \frac {\nz}{2}\wunpp\wtresp -
\frac {1}{2}\wunp^2\wdos+ i \frac {\nz}{4}\wunp\wunpp\wtres +
i \frac {\nz}{2}\wunp^2\wtresp - i \frac {\nz}{4}\wunp^3\wtres , \nonumber \\
B_{47} &=& -\frac {\nz}{2}\wunpp\wdos + \frac {i}{2}\wunpp\wtresp +
\frac {\nz}{2}\wunp^2\wdos +
\frac {1}{4}\left(\nz-\nz^3\right)\wunp\wunpp\wdosp \nonumber\\
&+& i \left( - \frac {1}{2} + \frac {\nz^2}{4}\right)\wunp\wunpp\wtres
-\frac {i}{2}\wunp^2\wtresp -
\frac {1}{4}\left(\nz-\nz^3\right)\wunp^3\wdosp +
\frac {i}{2}\left(1-\nz^2\right)\wunp^3\wtres, \nonumber \\
B_{48} &=& i \frac {\nz}{2}\wunpp\wdosp + \frac {i}{2}\wunpp\wtres -
i \frac {\nz}{4}\wunp\wunpp\wdos - i \frac {\nz}{2}\wunp^2\wdosp -
\frac {1}{2}\wunp^2\wtres + i \frac {\nz}{4}\wunp^3\wdos, \nonumber \\
B_{49} &=&- \frac {i}{2}\wunpp\wdosp - \frac {\nz}{2}\wunpp\wtres +
\frac {i}{2}\left(1-\frac {\nz^2}{2}\right)\wunp\wunpp\wdos +
\frac {i}{2}\wunp^2\wdosp + \frac {\nz}{2}\wunp^2\wtres \nonumber \\ &+&
\frac {1}{4}\left(\nz-\nz^3\right)\wunp\wunpp\wtresp -
\frac {i}{2}\left(1-\frac {\nz^2}{2}\right)\wunp^3\wdos -
\frac {1}{4}\left(\nz-\nz^3\right)\wunp^3\wtresp, \nonumber \\
B_{50} &=&-\frac {\nz}{4}\wunpp\wdos
- \frac {\nz}{4}\wunpp\wdospp
+\frac {i}{2}\wunpp\wtresp
-\frac {1}{4}\wcincpp\wdos
- \frac {\nz^2}{4}\wcincpp\wdospp
+\frac {1}{4}\wsispp\wdos
\nonumber \\ &+&\frac {\nz^2}{4}\wsispp\wdospp
+i\frac {\nz}{2}\wcincpp\wtresp
-i \frac {\nz}{2}\wsispp\wtresp
+\frac {\nz}{8}\wunp^2\wdos
+ \frac {\nz}{8}\wunp^2\wdospp
\nonumber \\ &-&\frac {1}{4}\left(\frac {\nz^3}{2}-\nz\right)\wunp\wunpp\wdosp
+\frac {i}{4}\left(\frac {\nz^2}{2}-1\right)\wunp\wunpp\wtres
-\frac {i}{4}\wunp^2\wtresp
\nonumber \\ &-&\frac {1}{4}\left(\frac {\nz^2}{2}-1\right)
\wunp\wdos\wcincp
+ \frac {\nz^2}{8}\wunp\wdospp\wcincp
-i\frac {\nz}{4}\wunp\wtresp\wcincp
+ \frac {\nz^2}{8}\wunp\wdosp\wcincpp
\nonumber \\ &-&i\frac {\nz}{8}\wunp\wtres\wcincpp
+\frac {1}{4}\left(\frac {\nz^2}{2}-1\right)\wunp\wdos\wsisp
-\frac {\nz^2}{8}\wunp\wdospp\wsisp
+i \frac {\nz}{4}\wunp\wtresp\wsisp
\nonumber \\ &-&\frac {\nz^2}{8}\wunp\wdosp\wsispp
+i \frac {\nz}{8}\wunp\wtres\wsispp
+\frac {1}{8}\left(\frac {\nz^3}{2}-\nz\right)\wunp^3\wdosp
-\frac {i}{8}\left(\frac {\nz^2}{2}-1\right)\wunp^3\wtres
\nonumber \\ &-& \frac {\nz^4}{16}\wunp^2\wdosp\wcincp
+i\frac {\nz^3}{16}\wunp^2\wtres\wcincp
+\frac {\nz^4}{16}\wunp^2\wdosp\wsisp
-i \frac {\nz^3}{16}\wunp^2\wtres\wsisp
, \nonumber \\
B_{51} &=& - \frac {\nz}{4}\wunpp\wtres
- \frac {\nz}{4}\wunpp\wtrespp
-\frac{i}{2}\wunpp\wdosp
- \frac {1}{4}\wcincpp\wtres
- \frac {\nz^2}{4}\wcincpp\wtrespp
+ \frac {1}{4}\wsispp\wtres
\nonumber \\ &+&\frac {\nz^2}{4}\wsispp\wtrespp
-i \frac {\nz}{2}\wcincpp\wdosp
+ i \frac {\nz}{2}\wsispp\wdosp
+ \frac {\nz}{8}\wunp^2\wtres
+\frac {\nz}{8}\wunp^2\wtrespp
\nonumber \\ &-&\frac {1}{4}\left(\frac {\nz^3}{2}-\nz\right)
\wunp\wunpp\wtresp
-\frac {i}{4}\left(\frac {\nz^2}{2}-1\right)\wunp\wunpp\wdos
+\frac {i}{4} \wunp^2\wdosp
\nonumber \\ &-&\frac {1}{4}\left(\frac {\nz^2}{2}-1\right)
\wunp\wtres\wcincp
+ \frac {\nz^2}{8}\wunp\wtrespp\wcincp
+i\frac {\nz}{4}\wunp\wdosp\wcincp
+ \frac {\nz^2}{8}\wunp\wtresp\wcincpp
\nonumber \\ &+&i\frac {\nz}{8}\wunp\wdos\wcincpp
+\frac {1}{4}\left(\frac {\nz^2}{2}-1\right)\wunp\wtres\wsisp
-\frac {\nz^2}{8}\wunp\wtrespp\wsisp
-i \frac {\nz}{4}\wunp\wdosp\wsisp
\nonumber \\ &-&\frac {\nz^2}{8}\wunp\wtresp\wsispp
- i \frac {\nz}{8}\wunp\wdos\wsispp
+\frac {1}{8}\left(\frac {\nz^3}{2}-\nz\right)\wunp^3\wtresp
+\frac {i}{8}\left(\frac {\nz^2}{2}-1\right)\wunp^3\wdos
\nonumber \\ &-& \frac {\nz^4}{16}\wunp^2\wtresp\wcincp
-i \frac {\nz^3}{16}\wunp^2\wdos\wcincp
+\frac {\nz^4}{16}\wunp^2\wtresp\wsisp
+i \frac {\nz^3}{16}\wunp^2\wdos\wsisp
, \nonumber \\
B_{52} &=& i \frac {\nz}{4}\wunp\wdos +i \frac {\nz}{4}\wunp\wdospp +
\frac {1}{2}\wunp\wtresp + \frac {i}{4}\wdos\wcincp +i
\frac {\nz^2}{4}\wdospp\wcincp + \frac {\nz}{2}\wtresp\wcincp
\nonumber \\ &+&
\frac {i}{4}\left(\frac {\nz^3}{2}-\nz\right)\wunp^2\wdosp +
\frac {1}{4}\left(\frac {\nz^2}{2}-1\right)\wunp^2\wtres +
\frac {i}{4}\left(\frac {\nz^2}{2}-1\right)\wunp\wdos\wcinc
\nonumber \\ &-&i\frac {\nz^2}{8}\wunp\wdospp\wcinc -
\frac {\nz}{4}\wunp\wtresp\wcinc -i
\frac {\nz^2}{8}\wunp\wdosp\wcincp - \frac {\nz}{8}\wunp\wtres\wcincp
\nonumber \\ &+&i\frac {\nz^4}{16}\wunp^2\wdosp\wcinc +
\frac {\nz^3}{16}\wunp^2\wtres\wcinc
, \nonumber \\
B_{53} &=& -i \frac {\nz}{4}\wunp\wtres -i \frac {\nz}{4}\wunp\wtrespp
+\frac {1}{2}\wunp\wdosp -\frac {i}{4}\wtres\wcincp -
i\frac {\nz^2}{4}\wtrespp\wcincp + \frac {\nz}{2}\wdosp\wcincp \nonumber \\
&+& \frac {1}{4}\left(\frac {\nz^2}{2}-1\right)\wunp^2\wdos -
\frac {i}{4}\left(\frac {\nz^3}{2}-\nz\right)\wunp^2\wtresp -
\frac {i}{4}\left(\frac {\nz^2}{2}-1\right)\wunp\wtres\wcinc
\nonumber \\ &+&i\frac {\nz^2}{8}\wunp\wtrespp\wcinc -
\frac {\nz}{4}\wunp\wdosp\wcinc + i
\frac {\nz^2}{8}\wunp\wtresp\wcincp - \frac {\nz}{8}\wunp\wdos\wcincp
\nonumber \\ &-&i\frac {\nz^4}{16}\wunp^2\wtresp\wcinc +
\frac {\nz^3}{16}\wunp^2\wdos\wcinc, \nonumber \\
B_{54} &=& i \frac {\nz}{6}\wunp^3 -i \frac {\nz}{6}\wunp\wunpp, \nonumber \\
B_{55} &=& -\frac {i}{6}\wunp^3 +\frac {i}{6}\wunp\wunpp, \nonumber \\
B_{56} &=& 0, \nonumber \\
B_{57} &=& 0. \nonumber
\end{eqnarray}
\section{Appendix E: The Discrete Symmetries: C, P and T.}
For the sake of completeness we give in this appendix the transformation
laws for the fields and the sources that appear in the article under
the discrete symmetries, C, P and T.
In order to specify them one needs to first define how they act on the
space-time coordinates $x^\mu=(t, \vec{x})$.
The Parity (P) operation transforms $\vec{x} \to -\vec{x}$ while leaving
the time component unchanged. In the Minkowski space notation,
$x^\mu \buildrel P \over \rightarrow x_p^\mu= p^\mu_{\; \nu} x^\nu$,
where $p^\mu_{\; \nu}= \mathrm{diag}(1, -1,-1,-1)$.
Time-Reversal (T) reverses the flow of time, $t \to -t$, while leaving
the space components unchanged:
$x^\mu \buildrel T \over \rightarrow x_t^\mu= t^\mu_{\;\nu} x^\nu$,
where $t^\mu_{\;\nu}= \mathrm{diag}(-1,1,1,1)$.
Charge-Conjugation (C) does not act on space-time indices; it
interchanges the r\^ole of particles and anti-particles.
In Quantum Mechanics these symmetries are implemented by operators acting
on a Hilbert space, which are unitary for C and P, and anti-unitary
for T.
Acting on the (Dirac) quark fields $q_a(x)$, where $a$ labels
any colour or flavour index, they read
\begin{eqnarray}
q_a(x) \buildrel C \over \longrightarrow q_a^{(C)}(x) &=& \xi_C
{\cal C} {\bar{q_a}}^T (x), \;\;\;\;\;\;
{\cal C} \gamma_\mu^T {\cal C}^{-1}=
- \gamma_\mu; \nonumber\\
\buildrel P \over \longrightarrow q_a^{(P)}(x) &=& \xi_P
{\cal P} q_a (x_p), \;\;\;\;\;\;
{\cal P} \gamma_\mu^\dagger {\cal P}^{-1}=\gamma_\mu; \nonumber\\
\buildrel T \over \longrightarrow q_a^{(T)}(x) &=& \xi_T
{\cal T} q_a (x_t), \;\;\;\;\;\;
{\cal T} \gamma_\mu^T {\cal T}^{-1}= \gamma_\mu,
\label{quarks}
\end{eqnarray}
The $\xi_C$, $\xi_P$, $\xi_T$ are arbitrary phase factors,
$|\xi_C|^2=|\xi_P|^2=|\xi_T|^2=1$.
The matrices ${\cal C}$, ${\cal P}$, ${\cal T}$ act on Dirac indices
only. In the Dirac representation ${\cal C}= i \gamma^0 \gamma^2$,
${\cal P}= \gamma^0$. Once ${\cal C}$ and ${\cal P}$ are fixed, ${\cal T}=
-i \gamma_5 {\cal C}$. They satisfy
${\cal C}^{-1}={\cal C}^\dagger={\cal C}^T=-{\cal C}$, and
${\cal T}^{-1}={\cal T}^\dagger=-{\cal T}^T=
{\cal T}$.
Acting on $\gamma_5$ they yield
$${\cal C} \gamma_5^T {\cal C}^{-1}= \gamma_5, \;\;\;\;
\gamma^0 \gamma_5 \gamma^0= -\gamma_5, \;\;\;\;
{\cal T} \gamma_5^T {\cal T}= \gamma_5.$$
For $\bar{q}_a (x)$,
\begin{eqnarray}
\bar{q}_a(x) & \buildrel C \over \longrightarrow &
- \xi_C^* q_a^T(x) {\cal C}^{-1},
\nonumber\\
&\buildrel P \over \longrightarrow & \;\;\; \xi_P^*
\bar{q}_a (x_p) \gamma_0,
\nonumber\\
&\buildrel T \over \longrightarrow & \;\;\; \xi_T^*
\bar{q}_a(x_t) {\cal T},
\label{antiquarks}
\end{eqnarray}
The phase factors $\xi_C$, $\xi_P$, $\xi_T$ shall be omitted henceforth.
The quark bilinears $\bar{q}_a (x) \Gamma q_b (x)$ transform as
\begin{eqnarray}
&\buildrel C \over \longrightarrow& \bar{q}_b (x) [\Gamma]_C \; q_a (x),
\;\;\;\;\;\;\;\;\; [\Gamma]_C \;=
\left( {\cal C}^{-1} \Gamma {\cal C} \right)^T,
\nonumber\\
&\buildrel P \over \longrightarrow& \bar{q}_a (x_p) [\Gamma]_P \; q_b (x_p),
\;\;\;\;\;\;[\Gamma]_P \; = \gamma^0 \Gamma \gamma^0,
\nonumber\\
&\buildrel T \over \longrightarrow& \bar{q}_a (x_t) [\Gamma]_T \; q_b (x_t),
\;\;\;\;\;\;\;[\Gamma]_T \; = {\cal T} \Gamma^* {\cal T}.
\label{bilinears}
\end{eqnarray}
The star $(*)$ in the last line of (\ref{bilinears}) denotes complex
conjugation. There is a minus sign in the C-transformed bilinear
which comes from the anti-commutation of the two quark fields.
$${[I]}_C= I, \;\;\;\;\;\;\;\;\;\; {[I]}_P= I ,
\;\;\;\;\;\;\;\;\;\; {[I]}_T= I, $$
$$[i {\gamma}_5]_C=i \gamma_5, \;\;\;\;\;\;\;
[i {\gamma}_5]_P= - i {\gamma}_5,
\;\;\;\;\;\;\;
[i {\gamma}_5]_T= - i {\gamma}_5,$$
$$[\gamma^\mu]_C = - \gamma^\mu, \;\;\;\;\;\; [\gamma^\mu]_P= p^\mu_{\;\mu'}
[\gamma^{\mu'}],
\;\;\;\;\;\;
[\gamma^\mu]_T= -t^\mu_{\;\mu'} [\gamma^{\mu'}], $$
$$[\gamma^\mu \gamma_5]_C= [\gamma^\mu \gamma_5], \;\;
[\gamma^\mu \gamma_5]_P= -p^\mu_{\;\mu'}[\gamma^{\mu'} \gamma_5], \;\;
[\gamma^\mu \gamma_5]_T= -t^\mu_{\;\mu'}[\gamma^{\mu'} \gamma_5].$$
For the gluon field (hermitian) matrix $G^\mu (x)$ in colour-space,
$$\buildrel C \over \rightarrow - G^{\mu \; T} (x),
\;\;\;\; \buildrel P \over \rightarrow p^\mu_{\;\mu'} G^{\mu'} (x_p),
\;\;\;\; \buildrel T \over \rightarrow -t^\mu_{\;\mu'} G^{\mu'} (x_t).$$
It is easy to verify that the QCD action is invariant under C, P and T.
For the topological charge density $Q(x) \sim \epsilon_{\mu \nu \rho \sigma}
\; \mathrm{Tr}_c\, G^{\mu \nu}(x) G^{\rho \sigma}(x)$, which is also real,
$$\buildrel C \over \rightarrow Q(x),
\;\;\;\; \buildrel P \over \rightarrow \det (p^\mu_{\;\mu'}) \ Q(x_p)= -Q(x_p),
\;\;\;\; \buildrel T \over \rightarrow \det (t^\mu_{\;\mu'}) \ Q(x_t)= -Q(x_t).$$
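The signs under P and T can be read off directly from the determinants of the coordinate matrices, since the $\epsilon$-tensor in $Q(x)$ produces a factor of the determinant of the transformation:

```latex
\det\big(p^\mu_{\;\mu'}\big)=(+1)\cdot(-1)^{3}=-1,
\qquad
\det\big(t^\mu_{\;\mu'}\big)=(-1)\cdot(+1)^{3}=-1 .
```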
This completes the transformation laws for operators involving the
dynamical fields.
The (hermitian) operator $i \bar{q}_a (x)\gamma_5 q_b(x)$
has the same quantum numbers
as the light pseudoscalar matrix $\Phi_{ab}(x)$, and,
since the vacuum is invariant under C, P and T, the transformation
laws of the latter are taken from those of the former.
Let us write them down for the simpler case of $U_L(2) \otimes U_R(2)$, where
\begin{equation}
\Phi =
\left(\begin{array}{cc}
\frac {\pi_0 - \eta}{\sqrt{2}} & \pi^+ \\
\pi^- & -\frac {\pi_0 + \eta} {\sqrt{2}}\\
\end{array}\right).
\label{pions}
\end{equation}
Under the discrete symmetries
\begin{equation}
\buildrel C \over \longrightarrow
\left(\begin{array}{cc}
\frac{\pi_0 - \eta}{\sqrt{2}} & \pi^- \\
\pi^+ & - \frac {\pi_0 + \eta}{\sqrt{2}}\\
\end{array}\right) = \Phi^T, \nonumber
\end{equation}
and, similarly,
\begin{equation}
\begin{array}{cc}
\buildrel P \over \longrightarrow - \Phi (x_p), &
\buildrel T \over \longrightarrow - \Phi (x_t),
\end{array}
\end{equation}
which generalize immediately to $U_L (n_l) \otimes U_R (n_l)$. This translates
into
\begin{eqnarray}
U(x) &\buildrel \rm{C} \over \longrightarrow& U^{(C)}(x)=U^T (x),
\nonumber \\
U(x) &\buildrel \rm{P} \over \longrightarrow& U^{(P)}(x)=U^\dagger (x_p),
\nonumber \\
U(x) &\buildrel \rm{T} \over \longrightarrow& U^{(T)}(x)=U(x_t),
\label{cptu}
\end{eqnarray}
for the $U(x)$ matrix, as defined in (\ref{U}).
As for the external sources, we shall choose their transformations so as
to leave the action invariant.
The real source $\theta(x)$, that couples to the topological charge $Q(x)$, transforms as:
$$\buildrel C \over \rightarrow \theta(x),
\;\;\;\; \buildrel P \over \rightarrow -\theta (x_p),
\;\;\;\; \buildrel T \over \rightarrow -\theta (x_t).$$
In the text, the combination $\hat{\theta}= i \theta$ appeared in a natural
way. It transforms accordingly, with an extra minus sign under the
T transformation, which complex-conjugates the imaginary unit $i$
due to its anti-unitary character. The combination $X$, defined in
(\ref{3.5bis}), transforms as $\hat{\theta}$ does.
The (hermitian) source matrices $s\ , p\ , v_\mu \ , a_\mu$, in flavour space
transform as
$$ s(x) \buildrel C \over \rightarrow s^T(x),
\;\;\;\; \buildrel P \over \rightarrow s(x_p),
\;\;\;\; \buildrel T \over \rightarrow s(x_t),$$
$$ p(x) \buildrel C \over \rightarrow p^T(x),
\;\;\;\; \buildrel P \over \rightarrow -p(x_p),
\;\;\;\; \buildrel T \over \rightarrow -p (x_t),$$
$$ v^\mu (x)\buildrel C \over \rightarrow - v^{\mu \; T}(x),
\;\;\;\; \buildrel P \over \rightarrow p^\mu_{\;\mu'}v^{\mu'}(x_p),
\;\;\;\; \buildrel T \over \rightarrow -t^\mu_{\;\mu'}v^{\mu'}(x_t),$$
$$ a^\mu (x)\buildrel C \over \rightarrow a^{\mu \; T}(x),
\;\;\;\; \buildrel P \over \rightarrow -p^\mu_{\;\mu'}a^{\mu'}(x_p),
\;\;\;\; \buildrel T \over \rightarrow -t^\mu_{\;\mu'}a^{\mu'}(x_t).$$
The combination $\chi=2 B (s +i p)$ transforms as the $U$ fields.
The left and right combinations of the vector and axial sources,
$l_\mu= v_\mu - a_\mu$ and $r_\mu= v_\mu + a_\mu$, transform as
$$ l^\mu (x)\buildrel C \over \rightarrow - r^{\mu \; T}(x),
\;\;\;\; \buildrel P \over \rightarrow p^\mu_{\;\mu'} r^{\mu'}(x_p),
\;\;\;\; \buildrel T \over \rightarrow -t^\mu_{\;\mu'} l^{\mu'}(x_t),$$
$$ r^\mu (x)\buildrel C \over \rightarrow -l^{\mu \; T}(x),
\;\;\;\; \buildrel P \over \rightarrow p^\mu_{\;\mu'} l^{\mu'}(x_p),
\;\;\;\; \buildrel T \over \rightarrow -t^\mu_{\;\mu'} r^{\mu'}(x_t).$$
Both the C and the P transformations interchange {\it left} and {\it right}.
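For instance, the C rule for $l^\mu$ follows in one line from the transformations of $v^\mu$ and $a^\mu$ given above:

```latex
l^\mu = v^\mu - a^\mu
\;\buildrel C \over \longrightarrow\;
- v^{\mu\; T} - a^{\mu\; T}
= -\big(v^\mu + a^\mu\big)^{T}
= - r^{\mu\; T},
```

and the remaining entries are obtained in the same way.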
For the field strengths $F_L^{\mu \nu}$, $F_R^{\mu \nu}$ associated with
$l_\mu$, $r_\mu$,
$$F_L^{\mu \nu}(x)\buildrel C \over \rightarrow - F_R^{\mu \nu \; T}(x),
\;\;\;\; \buildrel P \over \rightarrow p^\mu_{\;\mu'} p^\nu_{\;\nu'}
F_R^{\mu' \nu'} (x_p),
\;\;\;\; \buildrel T \over \rightarrow -t^\mu_{\;\mu'} t^\nu_{\;\nu'}
F_L^{\mu' \nu'}(x_t),$$
$$F_R^{\mu \nu}(x)\buildrel C \over \rightarrow - F_L^{\mu \nu \; T}(x),
\;\;\;\; \buildrel P \over \rightarrow p^\mu_{\;\mu'} p^\nu_{\;\nu'}
F_L^{\mu' \nu'} (x_p),
\;\;\;\; \buildrel T \over \rightarrow -t^\mu_{\;\mu'} t^\nu_{\;\nu'}
F_R^{\mu' \nu'}(x_t).$$
Finally, for the combination $C^\mu= U^\dagger D^\mu U$, that is anti-hermitian
$C^{\mu \; \dagger}= - C^\mu$,
$$ C^\mu (x)\buildrel C \over \rightarrow [U(x) C^\mu (x) U^\dagger (x)]^T,
\;\;\;\; \buildrel P \over \rightarrow - p^\mu_{\;\mu'}
U(x_p) C^{\mu'} (x_p) U^\dagger (x_p),
\;\;\;\; \buildrel T \over \rightarrow t^\mu_{\;\mu'} C^{\mu'} (x_t).$$
\newpage
It is a standard topic in Banach space theory to investigate whether the convergence of functionals on a given Banach space in one topology implies the convergence in another finer one. In this paper we study the following instance of this problem. Let $\mathcal{A}$ be a $\sigma$-complete Boolean algebra. It follows from Nikodym's Uniform Boundedness Principle (see Diestel \cite[page 80]{Die84}) that every pointwise convergent sequence of measures on $\mathcal{A}$ is also weak* convergent (i.e. $\mathcal{A}$ has \textit{the Nikodym property}; see Section \ref{section:convergence} for definitions). On the other hand, Grothendieck \cite{Gro53} proved that every weak* convergent sequence of measures on $\mathcal{A}$ is weakly convergent (i.e. $\mathcal{A}$ has \textit{the Grothendieck property}). Thus, it follows that if $K_\mathcal{A}$ is the Stone space of a $\sigma$-complete Boolean algebra $\mathcal{A}$, then the pointwise convergence of a sequence in the dual space $C\big(K_\mathcal{A}\big)^*$ implies its weak convergence (i.e. $\mathcal{A}$ has the \textit{Vitali--Hahn--Saks property}).
Intuitively speaking, the Stone spaces of Boolean algebras with the Nikodym property do not admit any non-trivial pointwise convergent sequences of measures --- here, ``non-trivial'' means ``not weak* convergent''. Similarly, the Stone spaces of Boolean algebras with the Grothendieck property do not have any non-trivial weak* convergent sequences of measures, where ``non-trivial'' this time means ``not weakly convergent''. It is well-known that the Stone spaces of Boolean algebras with any of the Nikodym or Grothendieck properties do not have any non-trivial, i.e. non-eventually constant, convergent sequences, so both properties may be seen as strengthenings of the lack of non-trivial convergent sequences in the Stone spaces. As we shall see further on, this is related to the famous Efimov problem.
Let now $V$ denote the set-theoretic universe, $\mathbb P\in V$ be a notion of forcing and $G$ a $\mathbb P$-generic filter over $V$. Assume that $\mathcal{A}\in V$ is a $\sigma$-complete Boolean algebra. Preservation of the Vitali--Hahn--Saks property in the extension $V[G]$ is not automatic --- e.g. if $\mathbb P$ adds new reals, then the ground model algebra $\big(\wp(\omega)\big)^V$ of all subsets of integers will no longer be $\sigma$-complete in $V[G]$ and it may also fail to have the Vitali--Hahn--Saks property --- such a situation happens e.g. after adding Cohen reals (see Dow and Fremlin \cite[Introduction]{DF07}). The main aim of this paper is thus to find out what properties of $\mathbb P$ are sufficient to ensure that $\mathcal{A}$ will have the Vitali--Hahn--Saks property in the extension $V[G]$.
Our question was motivated by the utility of the properties --- the research of Seever \cite{See68}, Talagrand \cite{Tal80}, Haydon \cite{Hay81}, Molt\'o \cite{Mol81}, Schachermayer \cite{Sch82}, Freniche \cite{Fre84}, Aizpuru \cite{Aiz92}, Valdivia \cite{Val13}, K\k{a}kol and Lop\'ez-Pellicer \cite{KakLopPel16} etc. showed their importance. Also the following cardinal number issue led us to deal with the problem. It can be shown that many separation or interpolation properties of infinite Boolean algebras studied by the aforementioned authors implying the Nikodym or Grothendieck properties imply also that these algebras have cardinality at least equal to the continuum $\mathfrak c$. The natural question whether consistently there are infinite Boolean algebras with at least one of the properties and of cardinality strictly less than $\mathfrak c$ appeared. Brech \cite{Bre06} showed that in the side-by-side Sacks extension all ground model $\sigma$-complete Boolean algebras preserve the Grothendieck property. Recently, Sobota and Zdomskyy \cite{SobZdo17} proved the same result for the Nikodym property, which --- together with Brech's result --- consistently yields a class of examples of Boolean algebras with the Vitali--Hahn--Saks property and of cardinality $\omega_1$ while the inequality $\omega_1<\mathfrak c$ holds true in the model. Also, Sobota \cite{Sob17} proved in ZFC that for every cardinal number $\kappa$ such that $\operatorname{cof}([\kappa]^\omega)=\kappa\ge\operatorname{cof}(\mathcal{N})$ (the cofinality of the Lebesgue null ideal) there exists a Boolean algebra with the Nikodym property and of cardinality $\kappa$.
In this paper we generalize the results of Brech \cite{Bre06} and the authors \cite{SobZdo17}. Namely, we prove (Theorem \ref{theorem:main}) that if $\mathbb P$ is a proper forcing preserving the ground model reals as a non-meager subset of the reals in the extension and having the Laver property (Definitions \ref{def:preservation_nonmeager} and \ref{def:laver_property}), then in any $\mathbb P$-generic extension any ground model $\sigma$-complete Boolean algebra has the Vitali--Hahn--Saks property. There are many examples of notions of forcing satisfying the assumptions of the theorem, e.g. side-by-side products of Sacks forcing, Miller forcing or Silver(-like) forcing.
Our result has some interesting consequences. First, it yields a consistent example of a whole class of infinite Boolean algebras with the Vitali--Hahn--Saks property and of cardinality strictly less than the dominating number $\mathfrak{d}$, as well as it sheds some new light on connections between convergence of measures on Boolean algebras and cardinal characteristics of the continuum --- see Section \ref{section:cardinal}. Second, as shown in Section \ref{section:efimov}, it can be used to obtain a new consistent situation in which there exists \textit{an Efimov space} --- a counterexample to \textit{the Efimov problem}, a long-standing open question asking whether there exists an infinite compact Hausdorff space containing neither any non-trivial convergent sequence nor any copy of $\beta\omega$, the \v{C}ech-Stone compactification of integers. So far, Efimov spaces have been obtained only with an aid of additional set-theoretic assumptions (like $\Diamond$) --- no ZFC Efimov space is known; see Hart \cite{Har07} for a detailed discussion.
The structure of the paper is as follows. In Section \ref{section:convergence} we recall basic definitions, properties and facts concerning (sequences of) measures on Boolean algebras. In Section \ref{section:trees_on_forcings} we construct special auxiliary trees on a poset $\mathbb P$ associated with the Nikodym and Grothendieck properties. Section \ref{section:aux_set_theory} contains auxiliary results concerning almost disjoint families and proper posets. In Section \ref{section:main} we present the proof of the main result --- Theorem \ref{theorem:main}. Section \ref{section:consequences} presents the consequences of the theorem to cardinal characteristics of the continuum and Efimov spaces.
\section{Measures on Boolean algebras\label{section:convergence}}
In this section we provide notation, terminology, basic definitions and facts concerning sequences of measures used in the paper.
Let $\mathcal{A}$ be a Boolean algebra. The Stone space of $\mathcal{A}$ is denoted by $K_\mathcal{A}$. Recall that by the Stone duality $\mathcal{A}$ is isomorphic to the algebra of clopen subsets of $K_\mathcal{A}$; if $A\in\mathcal{A}$, then $[A]$ denotes the corresponding clopen subset of $K_\mathcal{A}$. A subset $X$ of $\mathcal{A}$ is an \textit{antichain} if $x\wedge y=\normalfont{\textbf{0}}_\mathcal{A}$ for every distinct $x,y\in X$, i.e. every two distinct elements of $X$ are \textit{disjoint}.
A \textit{measure} $\mu\colon\mathcal{A}\to\mathbb C$ on $\mathcal{A}$ is always a finitely additive complex-valued function with finite variation $\|\mu\|$. The measure $\mu$ has a unique regular Borel extension (denoted also by $\mu$) onto the space $K_\mathcal{A}$, preserving the variation of $\mu$. By the Riesz representation theorem the dual space $C\big(K_\mathcal{A}\big)^*$ of the Banach space $C\big(K_\mathcal{A}\big)$ of continuous complex-valued functions on $K_\mathcal{A}$, endowed with the supremum norm, is isometrically isomorphic to the space of all regular Borel measures on $K_\mathcal{A}$, hence $C\big(K_\mathcal{A}\big)^*$ is isometrically isomorphic to the space of all measures on $\mathcal{A}$.
Let us now recall basic definitions concerning sequences of measures.
\begin{definition}
Let $\mathcal{A}$ be a Boolean algebra. We say that a sequence $\langle\mu_n\colon\ n\in\omega\rangle$ of measures on $\mathcal{A}$ is:
\begin{itemize}
\item \textit{pointwise bounded} if $\sup_{n\in\omega}\big|\mu_n(A)\big|<\infty$ for every $A\in\mathcal{A}$;
\item \textit{uniformly bounded} if $\sup_{n\in\omega}\big\|\mu_n\big\|<\infty$;
\item \textit{pointwise convergent} if $\lim_{n\to\infty}\mu_n(A)$ exists for every $A\in\mathcal{A}$;
\item \textit{weak* convergent} if $\lim_{n\to\infty}\int_{K_\mathcal{A}}fd\mu_n$ exists for every $f\in C\big(K_\mathcal{A}\big)$;
\item \textit{weakly convergent} if $\lim_{n\to\infty}x^{**}\big(\mu_n\big)$ exists for every $x^{**}\in C\big(K_\mathcal{A}\big)^{**}$.
\end{itemize}
\end{definition}
\begin{remark}\label{remark:weak_conver_borel}
Note that the weak convergence is equivalent to the convergence on every Borel subset of $K_\mathcal{A}$ --- see Diestel \cite[Theorem 11, page 90]{Die84}.
\end{remark}
\begin{definition}
We say that a Boolean algebra $\mathcal{A}$ has:
\begin{itemize}
\item \textit{the Nikodym property} if every pointwise bounded sequence of measures on $\mathcal{A}$ is uniformly bounded;
\item \textit{the Grothendieck property} if every weak* convergent sequence of measures on $\mathcal{A}$ is weakly convergent;
\item \textit{the Vitali--Hahn--Saks property} if every pointwise convergent sequence of measures on $\mathcal{A}$ is weakly convergent.
\end{itemize}
\end{definition}
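To make the definitions tangible, consider a standard example (our illustration, not taken from the text; $\delta_n$ denotes the point measure concentrated at $n$): on the Boolean algebra of finite and cofinite subsets of $\omega$ --- which is not $\sigma$-complete --- the measures $\mu_n=n\big(\delta_n-\delta_{n+1}\big)$ are pointwise convergent to $0$, yet $\big\|\mu_n\big\|=2n$, so this algebra carries a pointwise bounded but not uniformly bounded sequence and hence fails the Nikodym property. The sketch below verifies this numerically:

```python
# mu_n = n*(delta_n - delta_{n+1}) on the algebra of finite/cofinite subsets
# of omega: pointwise convergent to 0, but with variations ||mu_n|| = 2n.

def mu(n, F, cofinite=False):
    """mu_n evaluated on the algebra element coded by the finite set F:
    the element is F itself, or the complement of F if cofinite=True."""
    v = n * ((n in F) - ((n + 1) in F))
    return -v if cofinite else v

def variation(n):
    # |mu_n|({n}) + |mu_n|({n+1}) = n + n
    return 2 * n

F = {0, 3, 7}
# pointwise convergence to 0: mu_n vanishes once both n, n+1 miss F
# (finite case) or both lie in the complement of F (cofinite case)
assert all(mu(n, F) == 0 and mu(n, F, cofinite=True) == 0
           for n in range(8, 50))
# pointwise bounded: |mu_n(F)| <= max(F) for every n
assert max(abs(mu(n, F)) for n in range(100)) <= max(F)
# ... yet not uniformly bounded:
assert variation(10**6) == 2 * 10**6
```

Rescaling as in the proof below, $\nu_n=\mu_n/\sqrt{\|\mu_n\|}$ still converges to $0$ pointwise while $\|\nu_n\|=\sqrt{2n}\to\infty$.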
Proposition \ref{prop:nikodym_convergence}, which is a simple folklore fact, shows that the definition of the Nikodym property can be also stated in the convergence manner: a Boolean algebra $\mathcal{A}$ has the Nikodym property if every pointwise convergent sequence of measures on $\mathcal{A}$ is weak* convergent. However, the first definition is easier to deal with (and also follows from the original statement of Nikodym's theorem \cite{Nik33}). The Vitali--Hahn--Saks property is usually stated in terms of so-called exhaustiveness of families of measures, but Schachermayer \cite[Theorem 2.5]{Sch82} proved that the property is equivalent to the conjunction of the Nikodym and Grothendieck properties, whence follows our definition. Note that by Remark \ref{remark:weak_conver_borel} the definition of the property can be stated also as follows: a Boolean algebra $\mathcal{A}$ has the Vitali--Hahn--Saks property if every pointwise convergent sequence of measures on $\mathcal{A}$ is convergent on every Borel subset of $K_\mathcal{A}$.
\begin{proposition}\label{prop:nikodym_convergence}
Let $\mathcal{A}$ be a Boolean algebra. Then, the following are equivalent:
\begin{enumerate}
\item every pointwise convergent sequence of measures on $\mathcal{A}$ is weak* convergent;
\item every pointwise convergent sequence of measures on $\mathcal{A}$ is uniformly bounded;
\item every pointwise bounded sequence of measures on $\mathcal{A}$ is uniformly bounded.
\end{enumerate}
\end{proposition}
\begin{proof}
\noindent(1)$\Rightarrow$(2): By the Banach--Steinhaus theorem (the Uniform Boundedness Principle) every weak* convergent sequence of measures on $\mathcal{A}$ is uniformly bounded.
\medskip
\noindent(2)$\Rightarrow$(3): Assume that (3) fails, i.e. there is a pointwise bounded sequence $\langle\mu_n\colon\ n\in\omega\rangle$ of measures on $\mathcal{A}$ which is not uniformly bounded; passing to a subsequence, we may assume that $\lim_{n\to\infty}\big\|\mu_n\big\|=\infty$. For every $n\in\omega$ define the measure:
\[\nu_n=\mu_n/\sqrt{\big\|\mu_n\big\|}.\]
Then, $\langle\nu_n\colon\ n\in\omega\rangle$ is pointwise convergent to $0$. Indeed, for every $A\in\mathcal{A}$ and $n\in\omega$ we have:
\[\big|\nu_n(A)\big|=\frac{\big|\mu_n(A)\big|}{\sqrt{\big\|\mu_n\big\|}}\le\frac{\sup_{m\in\omega}\big|\mu_m(A)\big|}{\sqrt{\big\|\mu_n\big\|}},\]
so $\lim_{n\to\infty}\nu_n(A)=0$ for every $A\in\mathcal{A}$. On the other hand, since $\big\|\nu_n\big\|=\sqrt{\big\|\mu_n\big\|}$ for every $n\in\omega$, we have $\lim_{n\to\infty}\big\|\nu_n\big\|=\infty$, a contradiction with (2).
\medskip
\noindent(3)$\Rightarrow$(1): Let $\langle\mu_n\colon\ n\in\omega\rangle$ be a sequence of measures on $\mathcal{A}$ pointwise convergent to $0$. Fix $f\in C\big(K_\mathcal{A}\big)$ and let $\varepsilon>0$. Since $\langle\mu_n\colon\ n\in\omega\rangle$ is pointwise bounded, it is by (3) uniformly bounded. Let then $M>0$ be such that $\big\|\mu_n\big\|<M$ for every $n\in\omega$. Let $\sum_{i=1}^k\alpha_i\chi_{A_i}\in C\big(K_\mathcal{A}\big)$ ($\alpha_i\in\mathbb C,A_i\in\mathcal{A}$) be such a function that:
\[\big\|f-\sum_{i=1}^k\alpha_i\chi_{A_i}\big\|<\varepsilon/(2M).\]
By the pointwise convergence to $0$, there is $N\in\omega$ such that for every $n>N$ we have:
\[\sum_{i=1}^k\big|\alpha_i\big|\cdot\big|\mu_n\big(A_i\big)\big|<\varepsilon/2.\]
Then, for every $n>N$ it holds:
\[\Big|\int_{K_\mathcal{A}}fd\mu_n\Big|\le\int_{K_\mathcal{A}}\big|f-\sum_{i=1}^k\alpha_i\chi_{A_i}\big|d\mu_n+\int_{K_\mathcal{A}}\Big|\sum_{i=1}^k\alpha_i\chi_{A_i}\Big|d\mu_n\le\]
\[\big\|f-\sum_{i=1}^k\alpha_i\chi_{A_i}\big\|\cdot\big\|\mu_n\big\|+\sum_{i=1}^k\big|\alpha_i\big|\cdot\big|\mu_n\big(A_i\big)\big|<\big(\varepsilon/(2M)\big)\cdot M+\varepsilon/2=\varepsilon,\]
which proves (1).
\end{proof}
\subsection{Anti-Nikodym sequences}
\begin{definition}
A sequence $\langle\mu_n\colon\ n\in\omega\rangle$ of measures on a Boolean algebra $\mathcal{A}$ is \textit{anti-Nikodym} if it is pointwise bounded on $\mathcal{A}$ but not uniformly bounded.
\end{definition}
\noindent Obviously, a Boolean algebra has the Nikodym property if and only if it does not admit any anti-Nikodym sequences of measures. The following lemma is an important tool in studying the Nikodym property --- see Sobota \cite[Lemmas 4.4 and 4.7]{Sob17} for a proof.
\begin{lemma}\label{lemma:aN_antichain}
Let $\mathcal{A}$ be a Boolean algebra and $\langle\mu_n\colon\ n\in\omega\rangle$ an anti-Nikodym sequence of measures on $\mathcal{A}$. Then, there exists $x\in K_\mathcal{A}$ such that
for every finite $\mathcal{A}_0\subset \mathcal{A}$ with $x\not\in\bigvee \mathcal{A}_0$ and $M>0$, there exist $X\in[\omega]^\omega$ and an antichain $\big\{A_n\colon\ n\in X\big\}$ in $\normalfont{\textbf{1}}_\mathcal{A}\setminus\bigvee\mathcal{A}_0$ such that for every $n\in X$ we have $x\not\in A_n$ and the following inequality holds:
\[\big|\mu_n(A_n)\big|>\max_{\mathcal{C}\subset\mathcal{A}_0}\big|\mu_n\big(\bigvee\mathcal{C}\big)\big|+M.\]\hfill$\Box$
\end{lemma}
Every point $x\in K_\mathcal{A}$ satisfying the conclusion of Lemma \ref{lemma:aN_antichain} is called \textit{a Nikodym concentration point of the sequence $\langle\mu_n\colon\ n\in\omega\rangle$}. Properties of the set of all Nikodym concentration points of a given sequence were studied in Sobota \cite[Section 4]{Sob17}.
\subsection{Anti-Grothendieck sequences}
Similarly to anti-Nikodym sequences we define anti-Grothendieck sequences.
\begin{definition}
A sequence $\langle\mu_n\colon\ n\in\omega\rangle$ of measures on a Boolean algebra $\mathcal{A}$ is \textit{anti-Grothendieck} if it is weak* convergent to the zero measure but not weakly convergent.
\end{definition}
Note that by the Banach--Steinhaus theorem (the Uniform Boundedness Principle) every anti-Grothendieck sequence is uniformly bounded.
A (far) analogon of Lemma \ref{lemma:aN_antichain} for anti-Grothendieck sequences is the following well-known consequence of the Dieudonn\'e--Grothendieck theorem (see e.g. Diestel \cite[Theorem VII.14, page 98]{Die84}).
\begin{lemma}\label{lemma:aG_antichain}
Let $\mathcal{A}$ be a Boolean algebra and $\langle\mu_n\colon\ n\in\omega\rangle$ an anti-Grothendieck sequence of measures on $\mathcal{A}$. Then, there exist $X\in[\omega]^\omega$, an antichain $\langle A_n\colon\ n\in X\rangle$ in $\mathcal{A}$, and $\varepsilon>0$ such that for every $n\in X$ the following inequality holds:
\[\big|\mu_n\big(A_n\big)\big|>\varepsilon.\]\hfill$\Box$
\end{lemma}
\subsection{Measures and almost disjoint families}
The following fact is folklore --- see e.g. Brech \cite[Lemma 2.1]{Bre06} and Sobota \cite[Lemma 2.6]{Sob17} for different proofs. Recall that a family $\mathcal{H}\subset[\omega]^\omega$ is \textit{almost disjoint} if $A\cap B$ is finite for every distinct $A,B\in\mathcal{H}$.
\begin{lemma}\label{lemma:measures_ad}
Let $\mathcal{H}$ be an uncountable family of infinite almost disjoint subsets of $\omega$ and let $\langle A_n\colon\ n\in\omega\rangle$ be an antichain in a Boolean algebra $\mathcal{A}$. Assume that $\bigvee_{n\in H}A_n\in\mathcal{A}$ for every $H\in\mathcal{H}$. Then, for every sequence $\langle\mu_n\colon\ n\in\omega\rangle$ of measures on $\mathcal{A}$ there exists $H_0\in\mathcal{H}$ such that for every $k\in\omega$ the following equality holds:
\[\mu_k\Big(\bigvee_{n\in H_0}A_n\Big)=\sum_{n\in H_0}\mu_k\big(A_n\big).\]\hfill$\Box$
\end{lemma}
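An uncountable (indeed, size-continuum) almost disjoint family, as required in the lemma, can be obtained from the branches of the binary tree --- a standard construction; the coding below is ours. Identify $\omega$ with the set of finite $0$--$1$ strings and attach to every infinite $0$--$1$ sequence the set of codes of its initial segments; any two distinct branches then share only their finitely many common initial segments:

```python
from itertools import chain, islice, repeat

def code(s):
    """Injectively code a finite 0-1 string (a tuple) as a positive integer:
    read '1' followed by the string as a binary numeral."""
    n = 1
    for b in s:
        n = 2 * n + b
    return n

def branch_set(bits, length):
    """Codes of the first `length` nonempty initial segments of the
    infinite 0-1 sequence `bits`."""
    prefix, out = [], set()
    for b in islice(bits, length):
        prefix.append(b)
        out.add(code(tuple(prefix)))
    return out

# Two branches that split right after the common beginning (1, 0):
X = branch_set(chain([1, 0], repeat(0)), 50)
Y = branch_set(chain([1, 0], repeat(1)), 50)
assert len(X) == len(Y) == 50
# However far we follow them, they share only the two common initial
# segments, so the attached infinite sets are almost disjoint.
assert len(X & Y) == 2
```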
\section{Trees on forcings and sequences of measures\label{section:trees_on_forcings}}
Throughout the \underline{whole} paper $V$ denotes the set-theoretic universe. In this section we assume that $\mathbb P$ is a notion of forcing and $G$ is a $\mathbb P$-generic filter over the ground model $V$.
\begin{proposition}\label{prop:tree_nikodym}
Let $\mathcal{A}$ be a ground model Boolean algebra. Let $\langle\name{\mu}_n\colon\ n\in\omega \rangle$ be a sequence of $\mathbb P$-names for measures on $\mathcal{A}$ and $\name{x}$ a name for a point in $K_\mathcal{A}$. Assume that $1_{\mathbb P}$ forces the following formulas:
\[``\langle\name{\mu}_n\colon\ n\in\omega \rangle\text{ is anti-Nikodym}",\]
\[``\name{x}\text{ is a Nikodym concentration point of }\langle\name{\mu}_n\colon\ n\in\omega \rangle",\text{ and}\]
\[``\big\|\name{\mu}_n\big\|<n\text{ for every }n\in\omega".\]
Then, there exists a tree $T\subset\mathbb P^{<\omega}$ such that to every $t=\langle p_0,\ldots,p_{n-1}\rangle\in T$ ($n\ge1$) there are associated the following objects:
\begin{itemize}
\item a set $a_t\in[\omega]^{<\omega}$,
\item a sequence $\langle A_m^t\in\mathcal{A}\colon\ m\in a_t\rangle$,
\item a sequence $\langle b_{t\upharpoonright k}^t\subset a_{t\upharpoonright k}\colon\ 1\le k<n\rangle$,
\item a sequence $\langle e_{t\upharpoonright k,m}^t\subset b_{t\upharpoonright k}^t\colon\ m\in a_{t\upharpoonright i},1\le i<k<n\rangle$,
\item a sequence $\langle l_{t\upharpoonright k}^t\in b_{t\upharpoonright k}^t\colon\ 1\le k<n\rangle$,
\end{itemize}
satisfying the following conditions:
\begin{enumerate}[(i)]
\item $\max a_{t\upharpoonright k}<\min a_{t\upharpoonright (k+1)}$ for all $1\le k<n$,
\item $\big|a_{t\upharpoonright k}\big|>(k+1)\cdot\big|b_{t\upharpoonright k}^t\big|$ for all $1\le k<n$,
\item $b_{t\upharpoonright k}^t=\big\{l_{t\upharpoonright k}^t\big\}\cup\bigcup_{1\le i<k}\bigcup_{m\in a_{t\upharpoonright i}}e_{t\upharpoonright k,m}^t$ for all $1\le k<n$,
\item $\big\{A^{t\upharpoonright k}_m\colon\ m\in a_{t\upharpoonright k},m\neq l_{t\upharpoonright k}^t,1\le k<n\big\}\cup\big\{A^t_m\colon\ m\in a_t\big\}$ is an antichain in $\mathcal{A}$,
\item $p_{n-1}\Vdash e_{t\upharpoonright k,m}^t=\big\{l\in a_{t\upharpoonright k}\colon\ \big|\name{\mu}_m\big|\big(A_l^{t\upharpoonright k}\big)\ge 1/2^k\big\}$ for all $m\in a_{t\upharpoonright i}$ and $1\le i<k<n$,
\item either $p_{n-1}\Vdash\Big(l_{t\upharpoonright k}^t=\min a_{t\upharpoonright k}\text{ and }\name{x}\not\in\bigvee_{m\in a_{t\upharpoonright k}}A_m^{t\upharpoonright k}\Big)$, or $p_{n-1}\Vdash\name{x}\in A_{l_{t\upharpoonright k}^t}^{t\upharpoonright k}$, for all $1\le k<n$,
\item $p_{n-1}\Vdash\forall \mathcal{C}\subset\big\{A_l^{t\upharpoonright k}\colon\ l\in a_{t\upharpoonright k},l\neq l_{t\upharpoonright k}^t,1\le k<n\big\}\ \forall m\in a_t\colon\ $
\[\big|\name{\mu}_m\big(A_m^t\big)\big|>\big|\name{\mu}_m(\bigvee\mathcal{C})\big|+n.\]
\end{enumerate}
Moreover, for every $t\in T$ the set $D_t^T=\big\{q\colon\ t\hat{\ \ } q\in T\big\}\in V$ is dense in $\mathbb P$.
\end{proposition}
\begin{proof}
The first level of the tree $T$ is constructed as follows. Fix $p\in\mathbb P$. Since $1_\mathbb P$ forces that $\name{x}$ is a Nikodym concentration point of $\langle\name{\mu}_n\colon\ n\in\omega \rangle$, by Lemma \ref{lemma:aN_antichain}, there exist $q_p\le p$, a name $\name{X}$ for an infinite subset of $\omega$ and a name $\name{f}$ for a map from $\name{X}$ to $\mathcal{A}$ whose range is an antichain with elements not containing $\name{x}$, such that $q_p$ forces that for every $n\in\name{X}$ we have
\[\big|\name{\mu}_n\big(\name{f}(n)\big)\big|>1.\]
Now, there exist $r_p\le q_p$, $a_{r_p}\in[\omega]^{<\omega}$ of size $\big|a_{r_p}\big|>2^3$ and $\langle A_m^{r_p}\in\mathcal{A}\colon\ m\in a_{r_p}\rangle$ such that $r_p$ forces that $a_{r_p}\subset\name{X}$ and $\name{f}(m)=A_m^{r_p}$ for every $m\in a_{r_p}$ (hence $\langle A_m^{r_p}\colon\ m\in a_{r_p}\rangle$ is also an antichain).
The range of the map $p\mapsto r_p$ (defined on the whole of $\mathbb P$) is a dense subset of $\mathbb P$ --- call this range $D_\emptyset^T$; it will be the first level of the tree $T$. Note that trivially $D_\emptyset^T\in V$. For every $r_p\in D_\emptyset^T$ and $m\in a_{r_p}$ rename $a_{\langle r_p\rangle}=a_{r_p}$ and $A_m^{\langle r_p\rangle}=A_m^{r_p}$ (if for different $p$ and $p'$ we have $r_p=r_{p'}$, then assume that $a_{r_p}=a_{r_{p'}}$ and $A_m^{r_p}=A_m^{r_{p'}}$ for all $m\in a_{r_p}$). As the conditions $(i)$--$(vii)$ trivially hold, this finishes the first step.
\medskip
Assume now that we have constructed $t=\langle p_0,\ldots,p_{n-1}\rangle$ for some $n\ge 1$ along with $\langle a_{t\upharpoonright k}\colon\ 1\le k\le n\rangle$, $\langle A^{t\upharpoonright k}_m\colon\ m\in a_{t\upharpoonright k}, 1\le k\le n\rangle$, $\langle l_{t\upharpoonright k}^{t\upharpoonright n'}\colon\ 1\le k<n'\le n\rangle$, $\langle e_{t\upharpoonright k,m}^{t\upharpoonright n'}\colon\ m\in a_{t\upharpoonright i},1\le i<k<n'\le n\rangle$, and $\langle b_{t\upharpoonright k}^{t\upharpoonright n'}\colon\ 1\le k<n'\le n\rangle$ satisfying $(i)$-$(vii)$ whenever relevant. Assume also that for every $1\le k\le n$ we have:
\[\tag{$*$}\big|a_{t\upharpoonright k}\big|>(k+1)^2\cdot\big(\max a_{t\upharpoonright(k-1)}+1\big)^2\cdot2^k,\]
where we assign $\max a_\emptyset=0$.
Fix $p\in\mathbb P$. There are $q_p\le p$ and a sequence $\langle l_k^{q_p}\in a_{t\upharpoonright k}\colon\ 1\le k\le n\rangle$ such that for every $1\le k\le n$ if $q_p\Vdash\name{x}\not\in\bigvee_{m\in a_{t\upharpoonright k}}A_m^{t\upharpoonright k}$, then $l_k^{q_p}=\min a_{t\upharpoonright k}$, and
$q_p\Vdash\name{x}\in A_{l_k^{q_p}}^{t\upharpoonright k}$ otherwise. Put:
\[\mathcal{A}_0=\bigcup_{1\le k\le n}\big\{A_m^{t\upharpoonright k}\colon\ m\in a_{t\upharpoonright k},m\neq l_k^{q_p}\big\}.\]
Hence, $q_p$ forces that $\name{x}\not\in\bigvee\mathcal{A}_0$, and so, by Lemma \ref{lemma:aN_antichain}, there are names $\name{X}$ for an infinite subset of $\omega$ and $\name{f}$ for a map from $\name{X}$ to $\mathcal{A}$ whose range is an antichain with elements disjoint with $\bigvee\mathcal{A}_0$ and not containing $\name{x}$, such that $q_p$ forces that for every $m\in\name{X}$ and $\mathcal{C}\subset\mathcal{A}_0$ we have
\[\big|\name{\mu}_m\big(\name{f}(m)\big)\big|>\big|\name{\mu}_m\big(\bigvee\mathcal{C}\big)\big|+n+1.\]
There exist $r_p\le q_p$, $a_{r_p}\in[\omega]^{<\omega}$ of size
\[\big|a_{r_p}\big|>(n+2)^2\cdot\big(\max a_t+1\big)^2\cdot2^{n+1}\]
with $\max a_t<\min a_{r_p}$, and $\langle A_m^{r_p}\in\mathcal{A}\colon\ m\in a_{r_p}\rangle$ such that $r_p$ forces that $a_{r_p}\subset\name{X}$ and $\name{f}(m)=A_m^{r_p}$ for every $m\in a_{r_p}$ (hence $\langle A_m^{r_p}\colon\ m\in a_{r_p}\rangle$ is also an antichain).
Finally, there exist $s_p\le r_p$ and for every $1\le i<k\le n$ and every $m\in a_{t\upharpoonright i}$ a set $e_{k,m}^{s_p}\subset a_{t\upharpoonright k}$ of size $\big|e_{k,m}^{s_p}\big|\le m\cdot2^k$, such that $s_p$ forces that
\[\big\{l\in a_{t\upharpoonright k}\colon\ \big|\name{\mu}_{m}\big|\big(A^{t\upharpoonright k}_l\big)\ge 1/2^k\big\}=e_{k,m}^{s_p}\]
for all $k$ and $m$ as above (recall here that $1_\mathbb P\Vdash\big\|\name{\mu}_m\big\|<m$ for every $m\in\omega$).
As in the first step, the range of the map $p\mapsto s_p$ (defined on the whole of $\mathbb P$) is a dense subset of $\mathbb P$ --- call this range $D_t^T$, i.e. for all $q\in D_t^T$ the sequence $t\hat{\ \ } q$ will be in the tree $T$. Again, $D_t^T\in V$. For every $s_p\in D_t^T$ and $m\in a_{r_p}$ rename $a_{t\hat{\ \ } s_p}=a_{r_p}$, $A_m^{t\hat{\ \ } s_p}=A_m^{r_p}$ and $l_{t\upharpoonright k}^{t\hat{\ \ } s_p}=l_k^{q_p}$. Finally, for every $s_p\in D_t^T$, $1\le i<k\le n$ and $m\in a_{t\upharpoonright i}$ rename $e_{t\upharpoonright k,m}^{t\hat{\ \ } s_p}=e_{k,m}^{s_p}$ and put:
\[b_{t\upharpoonright k}^{t\hat{\ \ } s_p}=\Big\{l_{t\upharpoonright k}^{t\hat{\ \ } s_p}\Big\}\cup\bigcup_{j=1}^{k-1}\bigcup_{m\in a_{t\upharpoonright j}}e_{t\upharpoonright k,m}^{t\hat{\ \ } s_p}.\]
It is easy to see that all the demanded conditions (including the auxiliary condition $(*)$), with the possible exception of $(ii)$, are satisfied. To show $(ii)$, fix $s_p\in D_t^T$ and $1\le k\le n$. Write $t'=t\hat{\ \ } s_p$ for simplicity and note that $t\upharpoonright k=t'\upharpoonright k$. We have:
\[\big|b_{t'\upharpoonright k}^{t'}\big|\le 1+\sum_{j=1}^{k-1}\sum_{m\in a_{t\upharpoonright j}}\big|e_{t\upharpoonright k,m}^{t'}\big|\le 1+\sum_{j=1}^{k-1}\sum_{m\in a_{t\upharpoonright j}}m\cdot 2^k\le1+\sum_{j=1}^{k-1}\big(\max a_{t\upharpoonright j}\big)^2\cdot 2^k\le\]
\[\le 1+(k-1)\cdot\big(\max a_{t\upharpoonright(k-1)}\big)^2\cdot 2^k\le(k+1)\cdot\big(\max a_{t'\upharpoonright(k-1)}+1\big)^2\cdot 2^k.\]
Combining this with $(*)$ we obtain:
\[\frac{\big|a_{t'\upharpoonright k}\big|}{\big|b_{t'\upharpoonright k}^{t'}\big|}>\frac{(k+1)^2\cdot\big(\max a_{t'\upharpoonright(k-1)}+1\big)^2\cdot2^k}{(k+1)\cdot\big(\max a_{t'\upharpoonright(k-1)}+1\big)^2\cdot 2^k}=k+1,\]
which yields $(ii)$.
\end{proof}
The proof of the next proposition closely parallels the proof of Proposition \ref{prop:tree_nikodym}, but the two are not identical.
\begin{proposition}\label{prop:tree_grothendieck}
Let $\mathcal{A}$ be a ground model Boolean algebra. Assume that $\langle\name{B_n}\colon\ n\in\omega\rangle$ is a sequence of $\mathbb P$-names for elements of $\mathcal{A}$ such that $1_\mathbb P$ forces that $\langle\name{B_n}\colon\ n\in\omega\rangle$ is an antichain. Assume also that $\langle\name{\mu}_n\colon\ n\in\omega\rangle$ is a sequence of $\mathbb P$-names for measures on $\mathcal{A}$ and that there exist rational numbers $M,\varepsilon>0$ such that for all $n\in\omega$ we have:
\[1_\mathbb P\Vdash\big\|\name{\mu}_n\big\|<M\text{ and }\big|\name{\mu}_n\big(\name{B}_n\big)\big|>2\varepsilon.\]
Then, there exists a tree $T\subset\mathbb P^{<\omega}$ such that to every $t=\langle p_0,\ldots,p_{n-1}\rangle\in T$ ($n\ge1$) there are associated the following objects:
\begin{itemize}
\item a set $a_t\in[\omega]^{<\omega}$,
\item a sequence $\langle A_m^t\in\mathcal{A}\colon\ m\in a_t\rangle$,
\item a sequence $\langle b_{t\upharpoonright k}^t\subset a_{t\upharpoonright k}\colon\ 1\le k<n\rangle$,
\item a sequence $\langle c_{t\upharpoonright k}^t\subset b_{t\upharpoonright k}^t\colon\ 1\le k<n\rangle$,
\item a sequence $\langle e_{t\upharpoonright k,m}^t\subset b_{t\upharpoonright k}^t\colon\ m\in a_{t\upharpoonright i},1\le i<k<n\rangle$,
\item a name $\name{X}_t$ for a subset of $\omega$,
\end{itemize}
satisfying the following conditions:
\begin{enumerate}[(i)]
\item $\max a_{t\upharpoonright k}<\min a_{t\upharpoonright (k+1)}$ for all $1\le k<n$,
\item $\big|a_{t\upharpoonright k}\big|>(k+1)\cdot\big|b_{t\upharpoonright k}^t\big|$ for all $1\le k<n$,
\item $b_{t\upharpoonright k}^t=c_{t\upharpoonright k}^t\cup\bigcup_{1\le i<k}\bigcup_{m\in a_{t\upharpoonright i}}e_{t\upharpoonright k,m}^t$ for all $1\le k<n$,
\item $1_\mathbb P\Vdash\name{X}_t\in[\omega]^\omega$ and $\name{X}_{t\upharpoonright(k+1)}\subset\name{X}_{t\upharpoonright k}$ for all $1\le k<n$,
\item $1_\mathbb P\Vdash\big\{l\in a_{t\upharpoonright k}\colon\ \big|\name{\mu}_m\big|\big(A^{t\upharpoonright k}_l\big)\ge\varepsilon/2^{k+2}\big\}=\big\{l\in a_{t\upharpoonright k}\colon\ \big|\name{\mu}_{m'}\big|\big(A^{t\upharpoonright k}_l\big)\ge\varepsilon/2^{k+2}\big\}$ for all $m,m'\in\name{X}_t$ and $1\le k<n$,
\item $p_{n-1}\Vdash a_t\subset\name{X}_t$ and $\name{B}_m=A_m^t$ for all $m\in a_t$,
\item $p_{n-1}\Vdash c_{t\upharpoonright k}^t=\big\{l\in a_{t\upharpoonright k}\colon\ \exists m\in\name{X}_t\text{ such that } \big|\name{\mu}_m\big|\big(A_l^{t\upharpoonright k}\big)\ge\varepsilon/2^{k+2}\big\}$ for all $1\le k<n$,
\item $p_{n-1}\Vdash e_{t\upharpoonright k,m}^t=\big\{l\in a_{t\upharpoonright k}\colon\ \big|\name{\mu}_m\big|\big(A_l^{t\upharpoonright k}\big)\ge\varepsilon/2^{k+2}\big\}$ for all $m\in a_{t\upharpoonright i}$ and $1\le i<k<n$.
\end{enumerate}
Moreover, for every $t\in T$ the set $D_t^T=\big\{q\colon\ t\hat{\ \ } q\in T\big\}\in V$ is dense in $\mathbb P$.
\end{proposition}
\begin{proof}
The first level of the tree $T$ is constructed as follows. Fix $p\in\mathbb P$.
There exist $r_p\le p$, $a_{r_p}\in[\omega]^{<\omega}$ of size $\big|a_{r_p}\big|>2^4\cdot M/\varepsilon$ and $\langle A_m^{r_p}\in\mathcal{A}\colon\ m\in a_{r_p}\rangle$ such that $r_p$ forces that $\name{B}_m=A_m^{r_p}$ for every $m\in a_{r_p}$.
The range of the map $p\mapsto r_p$ (defined on the whole of $\mathbb P$) is a dense subset of $\mathbb P$ --- call this range $D_\emptyset^T$; it will be the first level of the tree $T$. Note that trivially $D_\emptyset^T\in V$. For every $r_p\in D_\emptyset^T$ and $m\in a_{r_p}$ rename $a_{\langle r_p\rangle}=a_{r_p}$ and $A_m^{\langle r_p\rangle}=A_m^{r_p}$ (if for different $p$ and $p'$ we have $r_p=r_{p'}$, then assume that $a_{r_p}=a_{r_{p'}}$ and $A_m^{r_p}=A_m^{r_{p'}}$ for all $m\in a_{r_p}$) and, finally, let $\name{X}_{\langle r_p\rangle}$ be a name for $\omega$. As the conditions $(i)$--$(viii)$ trivially hold, this finishes the first step.
\medskip
Assume now that we have constructed $t=\langle p_0,\ldots,p_{n-1}\rangle$ for some $n\ge 1$ along with $\langle a_{t\upharpoonright k}\colon\ 1\le k\le n\rangle$, $\langle\name{X}_{t\upharpoonright k}\colon\ k\le n\rangle$, $\langle A^{t\upharpoonright k}_m\colon\ m\in a_{t\upharpoonright k}, 1\le k\le n\rangle$, $\langle e_{t\upharpoonright k,m}^{t\upharpoonright n'}\colon\ m\in a_{t\upharpoonright i},1\le i<k<n'\le n\rangle$, $\langle c_{t\upharpoonright k}^{t\upharpoonright n'}\colon\ 1\le k<n'\le n\rangle$, and $\langle b_{t\upharpoonright k}^{t\upharpoonright n'}\colon\ 1\le k<n'\le n\rangle$ satisfying $(i)$-$(viii)$ whenever relevant. Assume also that for every $1\le k\le n$ we have:
\[\tag{$*$}\big|a_{t\upharpoonright k}\big|>(k+1)\cdot\big(\max a_{t\upharpoonright(k-1)}+1\big)^2\cdot M\cdot2^{k+2}/\varepsilon,\]
where we assign $\max a_\emptyset=0$.
Fix $p\in\mathbb P$. Since for every $1\le k\le n$ the set $a_{t\upharpoonright k}$ is finite, there is a name $\name{X}$ for an infinite subset of $\omega$ such that $1_\mathbb P$ forces that $\name{X}\subset\name{X}_t$ and
\[\big\{l\in a_{t\upharpoonright k}\colon\ \big|\name{\mu}_m\big|\big(A^{t\upharpoonright k}_l\big)\ge\varepsilon/2^{k+2}\big\}=\big\{l\in a_{t\upharpoonright k}\colon\ \big|\name{\mu}_{m'}\big|\big(A^{t\upharpoonright k}_l\big)\ge\varepsilon/2^{k+2}\big\}\]
for every $m,m'\in\name{X}$ and $1\le k\le n$. Since for every $n\in\omega$ we have $1_\mathbb P\Vdash\big\|\name{\mu}_n\big\|<M$, there exist $q_p\le p$ and $c_k^{q_p}\subset a_{t\upharpoonright k}$ of size $\big|c_k^{q_p}\big|\le M\cdot2^{k+2}/\varepsilon$ such that $q_p$ forces for every $m\in\name{X}$ that
\[\big\{l\in a_{t\upharpoonright k}\colon\ \big|\name{\mu}_m\big|\big(A^{t\upharpoonright k}_l\big)\ge\varepsilon/2^{k+2}\big\}=c_k^{q_p}.\]
There exist $r_p\le q_p$, $a_{r_p}\in[\omega]^{<\omega}$ of size
\[\big|a_{r_p}\big|>(n+2)\cdot\big(\max a_t+1\big)^2\cdot M\cdot2^{n+3}/\varepsilon\]
with $\max a_t<\min a_{r_p}$, and $\langle A_m^{r_p}\in\mathcal{A}\colon\ m\in a_{r_p}\rangle$ such that $r_p$ forces that $a_{r_p}\subset\name{X}$ and $\name{B}_m=A_m^{r_p}$ for every $m\in a_{r_p}$.
Finally, there exist $s_p\le r_p$ and for every $1\le i<k\le n$ and every $m\in a_{t\upharpoonright i}$ a set $e_{k,m}^{s_p}\subset a_{t\upharpoonright k}$ of size $\big|e_{k,m}^{s_p}\big|\le M\cdot2^{k+2}/\varepsilon$, such that $s_p$ forces that
\[\big\{l\in a_{t\upharpoonright k}\colon\ \big|\name{\mu}_{m}\big|\big(A^{t\upharpoonright k}_l\big)\ge\varepsilon/2^{k+2}\big\}=e_{k,m}^{s_p}\]
for all $k$ and $m$ as above (recall here that $1_\mathbb P\Vdash\big\|\name{\mu}_m\big\|<M$ for every $m\in\omega$).
As in the first step, the range of the map $p\mapsto s_p$ (defined on the whole of $\mathbb P$) is a dense subset of $\mathbb P$ --- call this range $D_t^T$, i.e. for all $q\in D_t^T$ the sequence $t\hat{\ \ } q$ will be in the tree $T$. Again, $D_t^T\in V$. For every $s_p\in D_t^T$ and $m\in a_{r_p}$ rename $\name{X}_{t\hat{\ \ } s_p}=\name{X}$, $a_{t\hat{\ \ } s_p}=a_{r_p}$, $A_m^{t\hat{\ \ } s_p}=A_m^{r_p}$ and $c_{t\upharpoonright k}^{t\hat{\ \ } s_p}=c_k^{q_p}$. Finally, for every $s_p\in D_t^T$, $1\le i<k\le n$ and $m\in a_{t\upharpoonright i}$ rename $e_{t\upharpoonright k,m}^{t\hat{\ \ } s_p}=e_{k,m}^{s_p}$ and put:
\[b_{t\upharpoonright k}^{t\hat{\ \ } s_p}=c_{t\upharpoonright k}^{t\hat{\ \ } s_p}\cup\bigcup_{1\le j<k}\bigcup_{m\in a_{t\upharpoonright j}}e_{t\upharpoonright k,m}^{t\hat{\ \ } s_p}.\]
It is easy to see that all the demanded conditions (including the auxiliary condition $(*)$), with the possible exception of $(ii)$, are satisfied. To show $(ii)$, fix $s_p\in D_t^T$ and $1\le k\le n$. Write $t'=t\hat{\ \ } s_p$ for simplicity and note that $t\upharpoonright k=t'\upharpoonright k$. We have:
\[\big|b_{t'\upharpoonright k}^{t'}\big|\le\big|c_{t\upharpoonright k}^{t'}\big|+\sum_{j=1}^{k-1}\sum_{m\in a_{t\upharpoonright j}}\big|e_{t\upharpoonright k,m}^{t'}\big|\le M\cdot2^{k+2}/\varepsilon+\sum_{j=1}^{k-1}\sum_{m\in a_{t\upharpoonright j}}M\cdot2^{k+2}/\varepsilon\le\]
\[\le\Big(1+\sum_{j=1}^{k-1}\big|a_{t\upharpoonright j}\big|\Big)\cdot M\cdot2^{k+2}/\varepsilon\le\Big(1+\sum_{j=1}^{k-1}\max a_{t\upharpoonright j}\Big)\cdot M\cdot2^{k+2}/\varepsilon\le\]
\[\le\Big(1+\big(\max a_{t\upharpoonright(k-1)}\big)^2\Big)\cdot M\cdot2^{k+2}/\varepsilon\le\big(1+\max a_{t'\upharpoonright(k-1)}\big)^2\cdot M\cdot2^{k+2}/\varepsilon.\]
Combining this with $(*)$ we obtain:
\[\frac{\big|a_{t'\upharpoonright k}\big|}{\big|b_{t'\upharpoonright k}^{t'}\big|}>\frac{(k+1)\cdot\big(\max a_{t'\upharpoonright(k-1)}+1\big)^2\cdot M\cdot2^{k+2}/\varepsilon}{\big(\max a_{t'\upharpoonright(k-1)}+1\big)^2\cdot M\cdot2^{k+2}/\varepsilon}=k+1,\]
which yields $(ii)$.
\end{proof}
Let us briefly compare the tree, say $T_N$, from Proposition \ref{prop:tree_nikodym} with the tree, say $T_G$, from Proposition \ref{prop:tree_grothendieck}. The main difference lies in the construction of the associated collections $\langle A_m^t\in\mathcal{A}\colon\ m\in a_t\rangle$. In each step of the construction of $T_N$ we simply found such a collection using Lemma \ref{lemma:aN_antichain} --- the only requirements were that the elements of the collection be pairwise disjoint, disjoint from the element $\bigvee\mathcal{A}_0$ (defined as the union of the elements from the collections constructed in the previous steps), and satisfy a certain measure-theoretic inequality. In the construction of $T_G$, on the other hand, the elements of $\langle A_m^t\colon\ m\in a_t\rangle$ were found as approximations of the antichain $\langle\name{B}_n^G\colon\ n\in\omega\rangle$ already fixed in $V[G]$ (given by Lemma \ref{lemma:aG_antichain}), so we had much less freedom of choice than in the case of $T_N$. This difference has an important consequence: in the case of $T_N$ we could keep Proposition \ref{prop:tree_nikodym}(iv) true and finally obtain an infinite antichain (see Remark \ref{remark:branch_generic_infinite}), while in the case of $T_G$ a similar condition could not be secured --- collections from distinct steps may have non-disjoint elements.
\medskip
\begin{lemma}\label{lemma:branch_compatible}
Let $\vec{p}=\langle p_n\colon\ n\in\omega\rangle\in\mathbb P^\omega$. Let $n,n'\in\omega$, let $k$ be such that $1\le k<n,n'$, and assume that $p_{n-1}$ and $p_{n'-1}$ are compatible. Then:
\begin{enumerate}
\item if $\vec{p}$ is a branch of the tree $T$ from Proposition \ref{prop:tree_nikodym} or \ref{prop:tree_grothendieck}, then $e_{\vec{p}\upharpoonright k,m}^{\vec{p}\upharpoonright n}=e_{\vec{p}\upharpoonright k,m}^{\vec{p}\upharpoonright n'}$ for all $m\in a_{\vec{p}\upharpoonright i}$ and $1\le i<k$;
\item if $\vec{p}$ is a branch of the tree $T$ from Proposition \ref{prop:tree_nikodym}, then $l_{\vec{p}\upharpoonright k}^{\vec{p}\upharpoonright n}=l_{\vec{p}\upharpoonright k}^{\vec{p}\upharpoonright n'}$;
\item if $\vec{p}$ is a branch of the tree $T$ from Proposition \ref{prop:tree_grothendieck}, then $c_{\vec{p}\upharpoonright k}^{\vec{p}\upharpoonright n}=c_{\vec{p}\upharpoonright k}^{\vec{p}\upharpoonright n'}$.
\end{enumerate}
In particular, $b_{\vec{p}\upharpoonright k}^{\vec{p}\upharpoonright n}=b_{\vec{p}\upharpoonright k}^{\vec{p}\upharpoonright n'}$ in both propositions.
\end{lemma}
\begin{proof}
(1) follows from Proposition \ref{prop:tree_nikodym}(v) and Proposition \ref{prop:tree_grothendieck}(viii). (2) holds by Proposition \ref{prop:tree_nikodym}(vi). (3) follows from Proposition \ref{prop:tree_grothendieck}(iv--v) and \ref{prop:tree_grothendieck}(vii). The last sentence follows from (iii) of both propositions.
\end{proof}
\begin{remark}\label{remark:branch_generic_infinite}
We work in $V[G]$. Let $\vec{p}=\langle p_n\colon\ n\in\omega\rangle\in\mathbb P^\omega$ be a branch of the tree $T$ from Proposition \ref{prop:tree_nikodym} or \ref{prop:tree_grothendieck}. Assume that the set $I=\big\{n\colon\ p_{n-1}\in G\big\}$ is infinite. Since $G\subset\mathbb P$ is a filter, it follows from Lemma \ref{lemma:branch_compatible} that we have $b_{\vec{p}\upharpoonright k}^{\vec{p}\upharpoonright n}=b_{\vec{p}\upharpoonright k}^{\vec{p}\upharpoonright n'}$ whenever $n,n'\in I$ and $k\in\omega$ is such that $1\le k<n,n'$. Hence, for every $k\in\omega\setminus\{0\}$ we can put $b_k=b_{\vec{p}\upharpoonright k}^{\vec{p}\upharpoonright n}$, where $n\in I$ is arbitrary and $n>k$. We put also $a_0=\emptyset$ and $a_k=a_{\vec{p}\upharpoonright k}$ for $k>0$, and since $\langle a_k\colon\ k\in\omega\rangle$ is a sequence of pairwise disjoint sets (by (i) in both propositions), we may put $A_m=A_m^{\vec{p}\upharpoonright k}$ for every $m\in a_k$ and $k\in\omega$.
\end{remark}
\begin{lemma}\label{lemma:generic_branch_estimation}
Let $\vec{p}=\langle p_n\colon\ n\in\omega\rangle\in\mathbb P^\omega\cap V[G]$. Assume that $I=\big\{n\colon\ p_{n-1}\in G\big\}$ is infinite and let $\langle m_k\colon\ k\in\omega\setminus\{0\}\rangle\in V$ be a sequence such that $m_k\in a_k\setminus b_k$ for all $k\in\omega\setminus\{0\}$, where $a_k$ and $b_k$ are as in Remark \ref{remark:branch_generic_infinite}. Fix $n\in I$. Then, in $V[G]$ the following hold:
\begin{enumerate}
\item if $\vec{p}$ is a branch of the tree $T$ from Proposition \ref{prop:tree_nikodym}, then $\big|\name{\mu}_{m_n}^G\big|\big(A_{m_k}\big)<1/2^k$ for all $k\in\omega$ such that $k>n$;
\item if $\vec{p}$ is a branch of the tree $T$ from Proposition \ref{prop:tree_grothendieck}, then $\big|\name{\mu}_{m_n}^G\big|\big(A_{m_k}\big)<\varepsilon/2^{k+2}$ for all $k\neq 0,n$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Fix $k\in\omega$ such that $k>n$. Since $I$ is infinite, there is $n'\in I$ such that $n'-1>k$. By Proposition \ref{prop:tree_nikodym}(v) we have:
\[p_{n'-1}\Vdash e_{\vec{p}\upharpoonright k,m_n}^{\vec{p}\upharpoonright n'}=\big\{l\in a_{\vec{p}\upharpoonright k}\colon\ \big|\name{\mu}_{m_n}\big|\big(A_l^{\vec{p}\upharpoonright k}\big)\ge 1/2^k\big\}.\]
Since $m_k\not\in b_k=b_{\vec{p}\upharpoonright k}^{\vec{p}\upharpoonright n'}\supset e_{\vec{p}\upharpoonright k,m_n}^{\vec{p}\upharpoonright n'}$ and $p_{n'-1}\in G$, we have:
\[\big|\name{\mu}_{m_n}^G\big|\big(A_{m_k}\big)=\big|\name{\mu}_{m_n}^G\big|\big(A_{m_k}^{\vec{p}\upharpoonright k}\big)<1/2^k.\]
(2) Fix $k\in\omega\setminus\{0\}$. If $k>n$, then, arguing as in (1) but using Proposition \ref{prop:tree_grothendieck}(viii), we show that $\big|\name{\mu}_{m_n}^G\big|\big(A_{m_k}\big)<\varepsilon/2^{k+2}$.
Suppose now that $k<n$. By Proposition \ref{prop:tree_grothendieck}(vi) we have:
\[p_{n-1}\Vdash c_{\vec{p}\upharpoonright k}^{\vec{p}\upharpoonright n}=\big\{l\in a_{\vec{p}\upharpoonright k}\colon\ \exists m\in\name{X}_{\vec{p}\upharpoonright n}\text{ such that }\big|\name{\mu}_m\big|\big(A_l^{\vec{p}\upharpoonright k}\big)\ge\varepsilon/2^{k+2}\big\}.\]
Since $p_{n-1}\in G$, it follows that $m_n\in a_n=a_{\vec{p}\upharpoonright n}\subset\name{X}_{\vec{p}\upharpoonright n}^G$, and hence by the fact that $m_k\not\in b_k=b_{\vec{p}\upharpoonright k}^{\vec{p}\upharpoonright n}\supset c_{\vec{p}\upharpoonright k}^{\vec{p}\upharpoonright n}$, we obtain:
\[\big|\name{\mu}_{m_n}^G\big|\big(A_{m_k}\big)=\big|\name{\mu}_{m_n}^G\big|\big(A_{m_k}^{\vec{p}\upharpoonright k}\big)<\varepsilon/2^{k+2}.\]
\end{proof}
\section{Auxiliary set-theoretic results\label{section:aux_set_theory}}
In this section we present some combinatorial results which follow from the fact that the ground model set of reals remains a non-meager subset of the reals in the extension.
\begin{definition}\label{def:preservation_nonmeager}
A poset $\mathbb P$ \textit{preserves the ground model reals non-meager} if the set $\mathbb R^V$ is a non-meager subset of $\mathbb R^{V[G]}$ for any $\mathbb P$-generic filter $G$.
\end{definition}
\noindent Typical examples of notions of forcing preserving the ground model reals non-meager include the Sacks, Miller, and Silver forcings, as well as side-by-side products of Sacks forcing (see Raghavan \cite[Section 5]{Rag09}). The property is preserved by countable support iterations (\cite[Theorem 61]{Rag09}).
The following lemma is the only place in this paper where we need properness. Recall that $H(\theta)$ for a regular cardinal number $\theta$ denotes the family of all sets of hereditary cardinality $<\theta$.
\begin{lemma}\label{lemma:generic_branch}
Let $\mathbb P$ be a proper forcing preserving the ground model reals non-meager. Let $M$ be a countable elementary submodel of $H(\theta)$ for some regular cardinal number $\theta$. Then, there exists a $\mathbb P$-generic filter $G$ over $V$ having the following property: for every tree $T\subset\mathbb P^{<\omega}$ in $V\cap M$ such that for every $t\in T\cap M$ the set of successors $D_t=\big\{q\colon\ t\hat{\ \ } q\in T\big\}$ is dense in $\mathbb P$ and $D_t\in V\cap M$, there exists a branch $\vec{p}=\langle p_n\colon\ n\in\omega\rangle\in V$ of $T$ such that $I=\big\{n\colon\ p_{n-1}\in G\big\}$ is infinite.
\end{lemma}
\begin{proof}
Let $q_0$ be an $(M,\mathbb P)$-generic condition. Let $T\subset\mathbb P^{<\omega}$ be a tree in $V\cap M$ such that for every $t\in T\cap M$ the set $D_t=\big\{q\in\mathbb P\colon\ t\hat{\ \ } q\in T\big\}$ is a dense subset of $\mathbb P$ and $D_t\in V\cap M$.
Set $T_0=T\cap M$ and note that $D_t\cap M$ is predense below $q_0$ for all $t\in T_0$. Let $G$ be a $\mathbb P$-generic filter containing $q_0$.
In what follows we work in $V[G]$. Let $[T_0]$ denote the set of all branches of $T_0$. Suppose, contrary to our claim, that
$[T_0]\cap V\subset\bigcup_{k\in\omega}X_k$, where
\[X_k=\big\{\langle p_n\colon\ n\in\omega\rangle\in [T_0]\colon\ p_n\not\in G\text{ for all }n\ge k\big\}.\]
Note that $X_k$ is closed in $[T_0]$
for all $k\in\omega$ and $[T_0]\cap V$ is non-meager in $[T_0]$ (since $\mathbb P$ preserves the ground model reals non-meager), and hence there exist $t\in T_0$ and $k\in\omega$ such that $|t|>k$ and $U_t\cap V\subset X_k$, where the set
\[U_t=\big\{\vec{p}\in [T_0]\colon\ \vec{p}\upharpoonright |t|=t\big\}\]
is the basic open subset of $[T_0]$ generated by $t$.
Since $D_t\cap M$ is predense below $q_0\in G$,
there exists $p\in D_t\cap M\cap G$. Note that $t\hat{\ \ } p\in T_0$. Let $\vec{p}\in U_t\cap V$ be such that $\vec{p}\big(|t|\big)=p$ (such a branch exists in $V$: $T_0=T\cap M\in V$, and every node of $T_0$ has a successor in $T_0$, since $D_s\cap M\neq\emptyset$ for every $s\in T_0$).
Then $\vec{p}\in (U_t\cap V)\setminus X_k$, a contradiction.
\end{proof}
\subsection{Almost disjoint families}
The following two lemmas seem to be folklore.
\begin{lemma}\label{lemma:ad_aux}
Suppose that $\mathbb P$ preserves the ground model reals non-meager,
$G$ is a $\mathbb P$-generic filter and $I\in [\omega]^{\omega}\cap V[G]$.
For any sequence $\langle H_k:k\in\omega\rangle \in V$ of mutually disjoint infinite subsets of
$\omega$ such that $I\cap H_k$ is non-empty for every $k\in\omega$, there exists a function
$f\in\omega^\omega\cap V$ such that the set
\[I\cap\bigcup_{k\in\omega}(H_k\cap f(k))\]
is infinite.
\end{lemma}
\begin{proof}
We work in $V[G]$. Set $g(n)=\min(I\cap H_n)$ for every $n\in\omega$. Since $\mathbb P$ preserves the ground model reals non-meager, it cannot add dominating functions (a function dominating $\omega^\omega\cap V$ would witness that the latter set is meager), and hence there exists $f\in\omega^\omega\cap V$ such that $g(n)<f(n)$ for infinitely many $n$.
If $n$ is such that $g(n)<f(n)$, then $g(n)\in I\cap H_n\cap f(n)$ (where, as usual, $f(n)$ is identified with the set $\{0,\ldots,f(n)-1\}$), and since the sets $H_n$ are mutually disjoint, the set
\[I\cap\bigcup_{k\in\omega}(H_k\cap f(k))\]
is infinite, which finishes the proof.
\end{proof}
\begin{lemma}\label{lemma:ad}
Suppose that $\mathbb P$ preserves the ground model reals non-meager,
$G$ is a $\mathbb P$-generic filter and $I\in [\omega]^{\omega}\cap V[G]$. Then, there exists an uncountable almost
disjoint family $\mathcal H\subset [\omega]^\omega\cap V$, $\mathcal{H}\in V$, such that $H\cap I$ is infinite for all $H\in\mathcal H$.
\end{lemma}
\begin{proof}
Throughout the whole proof we work in $V[G]$.
First let us note that there exists a decomposition $\omega=\sqcup_{n\in\omega}B_n$ of $\omega$
such that $\langle B_n\colon\ n\in\omega\rangle\in V$ and $\big|B_n\cap I\big|=\omega$ for all $n\in\omega$. Indeed,
fix any decomposition $\omega=\sqcup_{n\in\omega}C_n$ of $\omega$ into infinite sets such that
$\langle C_n\colon\ n\in\omega\rangle\in V$, and consider subsets
\[S_n=\{\sigma\in \mathit{Sym}(\omega)\colon\ \big|\sigma[C_n]\cap I\big|<\omega\}\]
of the symmetric group $\mathit{Sym}(\omega)$ of all permutations of $\omega$. It is easy to see that
each $S_n$ is a meager subset of $\mathit{Sym}(\omega)$, and hence, by our assumption on $\mathbb P$, there exists a permutation
\[\sigma\in\big(\mathit{Sym}(\omega)\cap V\big)\setminus\bigcup_{n\in\omega}S_n.\]
Set $B_n=\sigma[C_n]$.
Fix a family $\{D_\tau\colon\ \tau\in 2^{<\omega}\}\in V$ of infinite subsets of $\omega$
such that $D_\emptyset=\omega$ and $D_{\tau}$ is the disjoint union of $D_{\tau\hat{\ } 0}$ and $D_{\tau\hat{\ } 1}$ for all $\tau\in 2^{<\omega}$.
For every $x\in 2^\omega\cap V$ consider the sequence
$\langle H^x_k\colon\ k\in\omega\rangle$, where
\[H^x_k=\bigcup\big\{B_n\colon\ n\in D_{x\upharpoonright k}\setminus D_{x\upharpoonright(k+1)}\big\}.\]
Observe that $\langle H^x_k\colon\ k\in\omega\rangle\in V$ for all $x\in 2^\omega\cap V$ and
$H^x_{k_1}\cap H^x_{k_2}=\emptyset$ for all $k_1\neq k_2$. Moreover,
if $x, y\in 2^\omega\cap V$ and $x(k)\neq y(k)$ for some $k$, then
$H^x_{k_1}\cap H^y_{k_2}=\emptyset$ for any $k_1,k_2\ge k$.
Since for any $x\in 2^\omega\cap V$ and $k\in\omega$ there exists $n\in\omega$
such that $B_n\subset H^x_k$, the sequence $\langle H^x_k\colon\ k\in\omega\rangle $ satisfies the
requirements of Lemma \ref{lemma:ad_aux}, and hence there exists $f^x\in\omega^\omega\cap V$
such that $\big|I\cap H^x\big|=\omega$, where
\[H^x=\bigcup_{k\in\omega}(H^x_k\cap f^x(k))\in V.\]
Since $\big|H^x\cap H^y\big|<\omega$ for any $x\neq y$ (if $x(k)\neq y(k)$, then $H^x\cap H^y\subset\bigcup_{j<k}\big(H^x_j\cap f^x(j)\big)\cup\bigcup_{j<k}\big(H^y_j\cap f^y(j)\big)$, a finite set), the family
$\mathcal H=\{H^x\colon\ x\in 2^\omega\cap V\}$ is as required.
\end{proof}
\section{Main result\label{section:main}}
Recall the following standard definition.
\begin{definition}\label{def:laver_property}
A poset $\mathbb P$ has \textit{the Laver property} if for every $\mathbb P$-generic filter $G$ and all functions $f\in\omega^\omega\cap V$ and $g\in\omega^\omega\cap V[G]$ such that $f$ eventually dominates $g$ (i.e. $g\le^*f$), there exists in $V$ a function $H\colon\omega\to[\omega]^{<\omega}$ such that $|H(n)|\le n+1$ and $g(n)\in H(n)$ for all $n\in\omega$.
\end{definition}
\noindent The following standard proper posets have the Laver property: Sacks and side-by-side products of Sacks (Bartoszy\'nski and Judah \cite[Lemma 6.3.38]{BarJud95}), Laver, Mathias, Miller (\cite[Section 7.3]{BarJud95}), and Silver (more generally, Silver-like posets) (Halbeisen \cite[Chapter 22]{Hal12}). The Laver property is preserved by countable support iterations (\cite[Theorem 6.3.34]{BarJud95}). For more information about the property see e.g. Bartoszy\'nski and Judah \cite[Section 6.3.E]{BarJud95} or Halbeisen \cite[Chapter 20]{Hal12}.
\begin{remark}\label{remark:laver}
Note that if a poset $\mathbb P$ has the Laver property, then it has the following property. Let $f\in\omega^\omega\cap V$ and let $\langle\mathcal{F}_k\colon\ k\in\omega\rangle\in V$ be a sequence of finite sets such that $|\mathcal{F}_k|=f(k)$ for every $k\in\omega$. Let $G$ be a $\mathbb P$-generic filter over $V$ and let $b\in\prod_{k\in\omega}\mathcal{F}_k$ be in $V[G]$. Then, there exists a function
\[B\in\prod_{k\in\omega}\big[\mathcal{F}_k\big]^{\le k+1}\]
in $V$ such that $b(k)\in B(k)$ for every $k\in\omega$.
\end{remark}
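For the reader's convenience, let us sketch how the property stated in the Remark follows from Definition \ref{def:laver_property}; the enumerations $x^k_i$ below are auxiliary and fixed in $V$.

```latex
Fix in $V$ enumerations $\mathcal{F}_k=\big\{x^k_0,\ldots,x^k_{f(k)-1}\big\}$
for all $k\in\omega$. In $V[G]$, define $g\in\omega^\omega$ by letting $g(k)$
be the unique index such that $b(k)=x^k_{g(k)}$; then $g(k)<f(k)$ for every
$k\in\omega$, so $g\le^*f$. The Laver property yields a function
$H\colon\omega\to[\omega]^{<\omega}$ in $V$ with $|H(n)|\le n+1$ and
$g(n)\in H(n)$ for all $n\in\omega$. It remains to put:
\[B(k)=\big\{x^k_i\colon\ i\in H(k),\ i<f(k)\big\}\in\big[\mathcal{F}_k\big]^{\le k+1},\]
so that $B\in V$ and $b(k)=x^k_{g(k)}\in B(k)$ for every $k\in\omega$.
```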
We are now in a position to prove the main theorem of the paper.
\begin{theorem}\label{theorem:main}
Let $\mathbb P\in V$ be a notion of proper forcing having the Laver property and preserving the ground model reals non-meager. Let $\mathcal{A}\in V$ be a $\sigma$-complete Boolean algebra. Then, for every $\mathbb P$-generic filter $G$ over $V$ the algebra $\mathcal{A}$ has the Vitali--Hahn--Saks property in $V[G]$.
\end{theorem}
\begin{proof}
Let $G$ be a $\mathbb P$-generic filter over $V$. To show that $\mathcal{A}$ has the Vitali--Hahn--Saks property in $V[G]$, we prove that in $V[G]$ it has both the Nikodym property and the Grothendieck property. In both cases we argue by contradiction.
\medskip
\noindent\textbf{Case (N). } If $\mathcal{A}$ does not have the Nikodym property in $V[G]$, then there exists an anti-Nikodym sequence of measures $\langle\mu_n\colon\ n\in\omega\rangle\in V[G]$. Without loss of generality we may assume that $\big\|\mu_n\big\|<n$ for every $n\in\omega$ (if $\big\|\mu_n\big\|\ge n$ for some $n\in\omega$, then replace it with $0.5n\cdot\mu_n/\big\|\mu_n\big\|$). Let $x\in K_\mathcal{A}\cap V[G]$ be a Nikodym concentration point of $\langle\mu_n\colon\ n\in\omega\rangle$.
In $V$, we may assume that $1_\mathbb P$ forces that $\langle\name{\mu}_n\colon\ n\in\omega\rangle$ is anti-Nikodym, $\name{x}$ is its Nikodym concentration point, and $\big\|\name{\mu}_n\big\|<n$ for every $n\in\omega$.
\medskip
\noindent\textbf{Case (G). } If $\mathcal{A}$ does not have the Grothendieck property in $V[G]$, then there exist an anti-Grothendieck sequence of measures $\langle\mu_n'\colon\ n\in\omega\rangle\in V[G]$, norm bounded by some rational number $M$, and, by Lemma \ref{lemma:aG_antichain}, an antichain $\langle B_n\in\mathcal{A}\colon\ n\in\omega\rangle\in V[G]$ and rational $\varepsilon>0$ such that $\big|\mu_n'\big(B_n\big)\big|>2\varepsilon$ for every $n\in\omega$.
In $V$, we may assume that $1_\mathbb P$ forces that $\langle\name{\mu}_n'\colon\ n\in\omega\rangle$ is anti-Grothendieck, $\langle\name{B}_n\in\mathcal{A}\colon\ n\in\omega\rangle$ is an antichain, $\big\|\name{\mu}_n'\big\|<M$ and $\big|\name{\mu}_n'\big(\name{B}_n)\big|>2\varepsilon$ for every $n\in\omega$.
\medskip
\noindent\textbf{Cases (N) and (G). } For a moment, the proof goes simultaneously for both Case (N) and Case (G).
\medskip
Let $T\subset\mathbb P^{<\omega}$ be a tree in $V$ from Proposition \ref{prop:tree_nikodym} (in Case (N)) or Proposition \ref{prop:tree_grothendieck} (in Case (G)) with all the associated objects like $\big\{a_t\colon\ t\in T\setminus\{\emptyset\}\big\}$, $\big\{D_t^T\colon\ t\in T\setminus\{\emptyset\}\big\}$ etc. Let $M$ be a countable elementary submodel of $H(\theta)$ for a sufficiently big regular cardinal number $\theta$ containing $\mathbb P$ and all the objects mentioned above. Let $G'$ be a $\mathbb P$-generic filter from Lemma \ref{lemma:generic_branch} and $\vec{p}=\langle p_n\colon\ n\in\omega\rangle$ an infinite branch of $T\cap V$ such that the set $I=\big\{n\colon\ p_{n-1}\in G'\big\}$ is infinite. Let $\mathcal{H}\in V$, $\mathcal{H}\subset\big[\omega\setminus\{0\}\big]^\omega$, be an almost disjoint family from Lemma \ref{lemma:ad}.
From now until the end of the proof we work in $V[G']$. Let $\langle a_k\colon\ k\in\omega\rangle$, $\langle b_k\colon\ k\in\omega\setminus\{0\}\rangle$ and $\langle A_m\colon\ m\in a_k,k\in\omega\rangle$ be sequences from Remark \ref{remark:branch_generic_infinite}. Define a function $b\colon\omega\setminus\{0\}\to[\omega]^{<\omega}$ by putting $b(k)=b_k$. Then, $b\in V[G']$. For every $k\in\omega\setminus\{0\}$ put:
\[\mathcal{F}_k=\big\{c\subset a_k\colon\ \big|a_k\big|>(k+1)|c|\big\}.\]
Obviously, each $\mathcal{F}_k\in\big[[\omega]^{<\omega}\big]^{<\omega}$ and $\langle\mathcal{F}_k\colon\ k\in\omega\setminus\{0\}\rangle\in V$ (since $\langle a_k\colon\ k\in\omega\rangle\in V$). Note that by (ii) in Propositions \ref{prop:tree_nikodym} and \ref{prop:tree_grothendieck} we have $b(k)\in\mathcal{F}_k$ for all $k\in\omega\setminus\{0\}$. Since $\mathbb P$ has the Laver property, by Remark \ref{remark:laver}, there exists a function
\[B\colon\omega\setminus\{0\}\to\big[[\omega]^{<\omega}\big]^{<\omega}\]
in $V$ such that for every $k\in\omega\setminus\{0\}$ the following hold:
\begin{itemize}
\item $B(k)\subset\mathcal{F}_k$,
\item $|B(k)|\le k+1$,
\item $b(k)\in B(k)$.
\end{itemize}
It follows that $a_k\setminus\bigcup B(k)\neq\emptyset$ for all $k\in\omega\setminus\{0\}$ (indeed, by the definition of $\mathcal{F}_k$ we have $\big|\bigcup B(k)\big|\le(k+1)\cdot\max_{c\in B(k)}|c|<\big|a_k\big|$). Since $\langle a_k\setminus\bigcup B(k)\colon\ k\in\omega\rangle\in V$, there exists $\langle m_k\colon\ k\in\omega\setminus\{0\}\rangle\in V$ such that $m_k\in a_k\setminus\bigcup B(k)$ for every $k\in\omega\setminus\{0\}$.
\medskip
We now again deal separately with Cases (N) and (G).
\medskip
\noindent\textbf{Case (N). } Note that $\langle A_{m_k}\colon\ k\in\omega\setminus\{0\}\rangle\in V$ and by Proposition \ref{prop:tree_nikodym}(iv) it is an antichain. Since $\mathcal{A}$ is $\sigma$-complete in $V$, $\bigvee_{k\in H}A_{m_k}\in\mathcal{A}$ for every $H\in\mathcal{H}$. By Lemma \ref{lemma:measures_ad} there exists $H_0\in\mathcal{H}$ such that
\[\mu_m\Big(\bigvee_{k\in H_0}A_{m_k}\Big)=\sum_{k\in H_0}\mu_m\big(A_{m_k}\big)\]
for every $m\in\omega$. Let $n\in I\cap H_0$. Note that $p_{n-1}\in G'$, so we have:
\[\big|\mu_{m_n}\Big(\bigvee_{k\in H_0}A_{m_k}\Big)\big|=\big|\sum_{k\in H_0}\mu_{m_n}\big(A_{m_k}\big)\big|\ge\]
\[\big|\mu_{m_n}\big(A_{m_n}\big)\big|-\big|\sum_{\substack{k\in H_0\\k<n}}\mu_{m_n}\big(A_{m_k}\big)\big|-\sum_{\substack{k\in H_0\\k>n}}\big|\mu_{m_n}\big|\big(A_{m_k}\big)\ge\]
\[n-\sum_{\substack{k\in H_0\\k>n}}1/2^k>n-1,\]
where the last line follows from Proposition \ref{prop:tree_nikodym}(vii) and Lemma \ref{lemma:generic_branch_estimation}(1). Thus, we get that
\[\sup_{n\in\omega}\big|\mu_n\Big(\bigvee_{k\in H_0}A_{m_k}\Big)\big|=\infty,\]
which contradicts the pointwise boundedness of $\langle\mu_n\colon\ n\in\omega\rangle$ and hence proves that $\mathcal{A}$ has the Nikodym property in $V[G]$.
\medskip
\noindent\textbf{Case (G). } For every $k\in\omega\setminus\{0\}$ put:
\[C_k=A_{m_k}\setminus\bigvee_{0<i<k}A_{m_i}.\]
Then, $\langle C_k\colon\ k\in\omega\setminus\{0\}\rangle$ is an antichain and, since $\langle A_{m_k}\colon\ k\in\omega\setminus\{0\}\rangle\in V$, $\langle C_k\colon\ k\in\omega\setminus\{0\}\rangle\in V$ as well. Note that if $n\in I$, then $p_{n-1}\in G'$ and hence $A_{m_n}=B_{m_n}$ by Proposition \ref{prop:tree_grothendieck}(vi). Again, since $\mathcal{A}$ is $\sigma$-complete in $V$, $\bigvee_{k\in H}C_k\in\mathcal{A}$ for every $H\in\mathcal{H}$, so by Lemma \ref{lemma:measures_ad} there exists $H_0\in\mathcal{H}$ such that
\[\mu_m\Big(\bigvee_{k\in H_0}C_k\Big)=\sum_{k\in H_0}\mu_m\big(C_k\big)\]
for every $m\in\omega$. Let $n\in I\cap H_0$. Since for every $k\in\omega\setminus\{0,n\}$ we have $C_k\subset A_{m_k}$, by Lemma \ref{lemma:generic_branch_estimation}(2) we also have:
\[\big|\mu_{m_n}\big|\big(C_k\big)<\varepsilon/2^{k+2}.\]
Finally, we obtain:
\[\big|\mu_{m_n}\Big(\bigvee_{k\in H_0}C_k\Big)\big|=\big|\sum_{k\in H_0}\mu_{m_n}\big(C_k\big)\big|\ge\big|\mu_{m_n}\big(C_n\big)\big|-\big|\sum_{\substack{k\in H_0\\k\neq n}}\mu_{m_n}\big(C_k\big)\big|\ge\]
\[\big|\mu_{m_n}\Big(A_{m_n}\setminus\bigvee_{0<i<n}A_{m_i}\Big)\big|-\big|\sum_{\substack{k\in H_0\\k\neq n}}\mu_{m_n}\big(C_k\big)\big|\ge\]
\[\big|\mu_{m_n}\big(A_{m_n}\big)\big|-\sum_{0<i<n}\big|\mu_{m_n}\big|\big(A_{m_i}\big)-\sum_{\substack{k\in H_0\\k\neq n}}\big|\mu_{m_n}\big|\big(C_k\big)>\]
\[2\varepsilon-\sum_{0<i<n}\varepsilon/2^{i+2}-\sum_{\substack{k\in H_0\\k\neq n}}\varepsilon/2^{k+2}>2\varepsilon-\varepsilon/2-\varepsilon/2=\varepsilon,\]
where the last line again follows from Lemma \ref{lemma:generic_branch_estimation}(2). Thus, we get that
\[\limsup_{n\to\infty}\big|\mu_n\Big(\bigvee_{k\in H_0}C_k\Big)\big|\ge\varepsilon>0,\]
which contradicts the fact that $\langle\mu_n\colon\ n\in\omega\rangle$ is weak* convergent to $0$ and hence proves that $\mathcal{A}$ has the Grothendieck property in $V[G]$.
\end{proof}
As mentioned in the introduction, Theorem \ref{theorem:main} gives a generalization of the results of Brech \cite{Bre06} and the authors \cite{SobZdo17} stating together that side-by-side products of the Sacks forcing preserve the Vitali--Hahn--Saks property of ground model $\sigma$-complete Boolean algebras.
\begin{corollary}\label{cor:main}
Let $\mathbb P\in V$ be one of the following posets: Sacks forcing, side-by-side product of the Sacks forcing, Silver forcing, Miller forcing, or the countable support iteration of length $\omega_2$ of any of them. Let $\mathcal{A}\in V$ be a $\sigma$-complete Boolean algebra. Then, for any $\mathbb P$-generic filter $G$ over $V$, the algebra $\mathcal{A}$ has the Vitali--Hahn--Saks property in $V[G]$.
\end{corollary}
No infinite Boolean algebra of cardinality strictly less than the bounding number $\mathfrak{b}$ has the Nikodym property (Sobota \cite[Proposition 3.2]{Sob17}), so if the Continuum Hypothesis holds in $V$ but $\omega_1<\mathfrak{b}=\mathfrak c$ in the generic extension $V[G]$ of a proper forcing, then no ground model $\sigma$-complete Boolean algebra of cardinality $\omega_1$ has the Nikodym property in $V[G]$. This implies that in the case of, e.g., the Laver model we may only ask about the Grothendieck property.
\begin{question}\label{question:laver}
Let $\mathcal{A}\in V$ be a $\sigma$-complete Boolean algebra. Does $\mathcal{A}$ have the Grothendieck property in the model obtained by the countable support iteration of length $\omega_2$ of the Laver forcing?
\end{question}
Note that if the answer to Question \ref{question:laver} is positive, then we obtain a consistent example of a whole class of Boolean algebras with the Grothendieck property but without the Nikodym property. This would shed new light on such Boolean algebras, since so far only one example has been found (under the Continuum Hypothesis) --- see Talagrand \cite{Tal84}.
\medskip
It seems that, after changing its proof \textit{mutatis mutandis}, Theorem \ref{theorem:main} also holds for Boolean algebras with the Subsequential Completeness Property introduced by Haydon \cite{Hay81}: a Boolean algebra $\mathcal{A}$ has \textit{the Subsequential Completeness Property (SCP)} if for every antichain $\langle A_n\colon\ n\in\omega\rangle$ in $\mathcal{A}$ there exists $M\in[\omega]^\omega$ such that $\bigvee_{n\in M}A_n\in\mathcal{A}$. Haydon \cite{Hay81} proved that algebras with SCP have the Vitali--Hahn--Saks property. Later on, many other completeness and interpolation properties of Boolean algebras were proved also to imply the Vitali--Hahn--Saks property, see e.g. Seever \cite{See68}, Molt\'o \cite{Mol81}, Schachermayer \cite{Sch82}, Freniche \cite{Fre84}, Aizpuru \cite{Aiz88}.
\begin{question}
For which other completeness or interpolation properties of Boolean algebras does the statement of Theorem \ref{theorem:main} hold?
\end{question}
\section{Consequences\label{section:consequences}}
In this section we provide several consequences of Theorem \ref{theorem:main} concerning cardinal characteristics of the continuum and the Efimov problem.
\subsection{Cardinal characteristics of the continuum\label{section:cardinal}}
Let us introduce the following three cardinal characteristics of the continuum.
\begin{definition}\label{def:numbers}
\textit{The Nikodym number} $\mathfrak{nik}$, \textit{the Grothendieck number} $\mathfrak{gr}$ and \textit{the Vitali--Hahn--Saks number} $\mathfrak{vhs}$ are defined respectively as:
\[\mathfrak{nik}=\min\big\{|\mathcal{A}|\colon\ \mathcal{A}\textit{ is an infinite B. algebra with the Nikodym property}\big\},\]
\[\mathfrak{gr}=\min\big\{|\mathcal{A}|\colon\ \mathcal{A}\textit{ is an infinite B. algebra with the Grothendieck property}\big\},\]
\[\mathfrak{vhs}=\min\big\{|\mathcal{A}|\colon\ \mathcal{A}\textit{ is an infinite B. algebra with the Vitali--Hahn--Saks property}\big\}.\]
\end{definition}
Since every countable Boolean algebra has neither the Nikodym property nor the Grothendieck property and $\wp(\omega)$ has both of the properties, we immediately get that $\omega_1\le\mathfrak{nik},\mathfrak{gr}\le\mathfrak{vhs}\le\mathfrak c$. The relations between $\mathfrak{nik}$ and other classical cardinal characteristics of the continuum were studied in Sobota \cite{Sob17}, where the following inequalities were proved:
\begin{enumerate}
\item $\max\big(\mathfrak{b},\mathfrak{s},\operatorname{cov}(\mathcal{M})\big)\le\mathfrak{nik}$ \cite[Corollary 3.3]{Sob17};
\item $\omega_1\le\mathfrak{nik}\le\kappa$ for every cardinal number $\kappa$ such that $\operatorname{cof}(\mathcal{N})\le\kappa=\operatorname{cof}\big([\kappa]^\omega,\subseteq\big)$ \cite[Theorem 7.3]{Sob17};
\item $\omega<\mathrm{cf}(\mathfrak{nik})$ and $\mathfrak{nik}$ may be consistently singular \cite[Corollary 3.7]{Sob17}.
\end{enumerate}
Results similar to (1) and (3) were obtained in Sobota \cite[Chapter 7]{Sob16} for $\mathfrak{gr}$:
\begin{enumerate}\setcounter{enumi}{3}
\item $\max\big(\mathfrak{s},\operatorname{cov}(\mathcal{M})\big)\le\mathfrak{gr}$ \cite[Corollary 7.2.4]{Sob16};
\item $\omega<\mathrm{cf}(\mathfrak{gr})$ and $\mathfrak{gr}$ may be consistently singular \cite[Corollary 7.2.8]{Sob16}.
\end{enumerate}
No ZFC upper bound for $\mathfrak{gr}$ better than $\mathfrak c$ has been known so far. However, it follows from Brech's result that in the Sacks model we have $\omega_1=\mathfrak{gr}<\mathfrak c$ and hence, by Sobota and Zdomskyy \cite{SobZdo17}, $\omega_1=\mathfrak{vhs}<\mathfrak c$. (Besides, note that the Grothendieck property is strongly related to the pseudo-intersection number $\mathfrak{p}$ --- see e.g. Haydon, Levy and Odell \cite[Corollary 3F]{HLO87}, Talagrand \cite{Tal80}, and Krupski and Plebanek \cite[page 2189]{KP11}.)
Theorem \ref{theorem:main} gives new situations where the numbers from Definition \ref{def:numbers} are small.
\begin{corollary}\label{cor:numbers}
Let $\mathbb P\in V$ be a notion of forcing as in Theorem \ref{theorem:main} and let $G$ be a $\mathbb P$-generic filter over $V$. Assume that the Continuum Hypothesis holds in $V$ but not in $V[G]$. Then, in $V[G]$, we have $\omega_1=\mathfrak{nik}=\mathfrak{gr}=\mathfrak{vhs}<\mathfrak c$.
\end{corollary}
Inequalities (1) and (4) may suggest that the dominating number $\mathfrak{d}$ is a good candidate for bounding $\mathfrak{nik}$ and $\mathfrak{gr}$ from below (e.g. Sobota \cite[Question 3.5]{Sob17}). However, using the countable support iteration of length $\omega_2$ of Miller's forcing, we obtain the model where $\omega_1<\mathfrak{d}=\omega_2=\mathfrak c$ (see Blass \cite[Section 11.9]{Bla10}) and so, by Corollary \ref{cor:numbers}, the following holds.
\begin{corollary}\label{cor:d_big}
It is consistent that $\omega_1=\mathfrak{nik}=\mathfrak{gr}=\mathfrak{vhs}<\mathfrak{d}=\omega_2=\mathfrak c$.
\end{corollary}
\noindent However, we do not know whether $\mathfrak{d}$ may be consistently strictly smaller than any of the numbers $\mathfrak{nik}$, $\mathfrak{gr}$ or $\mathfrak{vhs}$.
\begin{question}
Let $\mathfrak{x}\in\big\{\mathfrak{nik},\mathfrak{gr},\mathfrak{vhs}\big\}$. Is it consistent that $\mathfrak{x}>\mathfrak{d}$?
In particular, is there an $\omega^\omega$-bounding poset $\mathbb P$ such that $\wp(\omega)^V$ does not have the Vitali--Hahn--Saks property in some $\mathbb P$-generic extension $V[G]$?
\end{question}
We can obtain a result similar to Corollary \ref{cor:d_big} for the ultrafilter number $\mathfrak{u}$ and the reaping number $\mathfrak{r}$: using the countable support iteration of length $\omega_2$ of Silver's forcing, we obtain the model where $\omega_1=\mathfrak{d}<\mathfrak{r}=\mathfrak{u}=\omega_2=\mathfrak c$ (see Halbeisen \cite[page 379]{Hal12}).
\begin{corollary}\label{cor:r_u_big}
It is consistent that $\omega_1=\mathfrak{nik}=\mathfrak{gr}=\mathfrak{vhs}<\mathfrak{r}=\mathfrak{u}=\omega_2=\mathfrak c$.
\end{corollary}
It can be shown that $\omega_1=\mathfrak{r}=\mathfrak{u}<\mathfrak{s}=\omega_2=\mathfrak c$ consistently holds (see Blass \cite[Section 11.11]{Bla10}), so, by the inequalities (1) and (4) above and Corollary \ref{cor:r_u_big}, there is no ZFC inequality between any of the numbers from Definition \ref{def:numbers} and $\mathfrak{r}$ or $\mathfrak{u}$. A similar situation occurs also for the groupwise density number $\mathfrak{g}$: in the Cohen model we have $\omega_1=\mathfrak{g}<\omega_2=\operatorname{cov}(\mathcal{M})=\mathfrak{nik}=\mathfrak{gr}=\mathfrak{vhs}=\mathfrak c$, while in the Miller model it holds that $\omega_1=\mathfrak{nik}=\mathfrak{gr}=\mathfrak{vhs}<\omega_2=\mathfrak{g}=\mathfrak c$ (see Blass \cite[Chapter 11]{Bla10}).
\begin{corollary}
Let $\mathfrak{x}\in\big\{\mathfrak{nik},\mathfrak{gr},\mathfrak{vhs}\big\}$ and $\mathfrak{y}\in\big\{\mathfrak{r},\mathfrak{u},\mathfrak{g}\big\}$. Then, there is no ZFC inequality between $\mathfrak{x}$ and $\mathfrak{y}$.
\end{corollary}
Note that by (2) it follows that there is also no ZFC inequality between $\mathfrak{nik}$ and the almost disjointness number $\mathfrak{a}$. Indeed, in the Cohen model we have $\omega_1=\mathfrak{a}<\operatorname{cov}(\mathcal{M})=\mathfrak{nik}=\omega_2=\mathfrak c$, while Brendle \cite[Proposition 4.7]{Bre03} showed that it consistently holds $\omega_2=\operatorname{cof}(\mathcal{N})<\mathfrak{a}=\omega_3=\mathfrak c$, hence consistently $\mathfrak{nik}<\mathfrak{a}$.
\begin{question}
Is it consistent that $\mathfrak{gr}<\mathfrak{a}$?
\end{question}
To obtain counterparts of Corollaries \ref{cor:d_big} and \ref{cor:r_u_big} for other cardinal characteristics (e.g. those from the right-hand side half of Cicho\'n's diagram), it would be sufficient to answer the following question.
\begin{question}
Which standard cardinal characteristics of the continuum may be pushed up to $\mathfrak c$ using a proper forcing $\mathbb P$ having the Laver property and preserving the ground model reals non-meager?
\end{question}
We do not know whether $\mathfrak{b}\le\mathfrak{gr}$ in ZFC. If the answer to Question \ref{question:laver} is positive, then $\omega_1=\mathfrak{gr}<\mathfrak{b}=\mathfrak{nik}=\mathfrak{vhs}=\omega_2=\mathfrak c$ would hold in the Laver model obtained from $V$ satisfying the Continuum Hypothesis.
\begin{question}\label{question:gr_b}
Is it consistent that $\mathfrak{gr}<\mathfrak{b}$?
\end{question}
A positive answer to Question \ref{question:gr_b} (or \ref{question:laver}) would imply that it is consistent that $\mathfrak{gr}<\mathfrak{nik}$. So far, we do not know any examples of models where the two numbers are different.
\begin{question}
Is it consistent that $\mathfrak{gr}\neq\mathfrak{nik}$?
\end{question}
\subsection{Efimov spaces\label{section:efimov}}
As we have already mentioned in the Introduction, the Efimov problem is a long-standing open question asking whether there exists \textit{an Efimov space}, i.e. an infinite compact Hausdorff space with neither non-trivial converging sequences nor a copy of $\beta\omega$, the \v{C}ech--Stone compactification of $\omega$. Many consistent examples of Efimov spaces have been obtained, but so far no ZFC example has been found. The first consistent examples were found by Fedorchuk \cite{Fed75,Fed76} under the assumptions of the Continuum Hypothesis or $\Diamond$. Fedorchuk \cite{Fed77} also obtained an Efimov space assuming that $\mathfrak{s}=\omega_1$ and $2^{\mathfrak{s}}<2^{\mathfrak c}$. Dow \cite{Dow05} strengthened Fedorchuk's result and constructed an Efimov space assuming ``only'' that $\operatorname{cof}\big([\mathfrak{s}]^\omega,\subseteq\big)=\mathfrak{s}$ and $2^{\mathfrak{s}}<2^{\mathfrak c}$. Dow and Fremlin \cite{DF07} proved that in the random model we may have $2^{\omega_1}=2^{\mathfrak{s}}=2^{\mathfrak c}$ but there still do exist Efimov spaces --- namely, they proved that if $K$ is a ground model (totally disconnected) compact F-space, then in any random generic extension $K$ has no non-trivial converging sequences. Recently, Dow and Shelah \cite{DS13} constructed an Efimov space under the assumption that $\mathfrak{b}=\mathfrak c$.
Boolean algebras with the Nikodym property or the Grothendieck property yield examples of Efimov spaces --- it is well-known that if a Boolean algebra $\mathcal{A}$ has either the Nikodym property or the Grothendieck property, then its Stone space $K_\mathcal{A}$ does not have any non-trivial convergent sequences, and hence if $\omega<|\mathcal{A}|<\mathfrak c$, then $K_\mathcal{A}$ is an Efimov space (since $w\big(K_\mathcal{A}\big)=|\mathcal{A}|$). In Sobota \cite[Section 8.2]{Sob17}, assuming that $\operatorname{cof}(\mathcal{N})\le\kappa=\operatorname{cof}\big([\kappa]^\omega,\subseteq\big)<\mathfrak c$, a Boolean algebra with the Nikodym property and of cardinality $\kappa$ was constructed, so an Efimov space of weight $\kappa$ was obtained as well. We can apply this argument here --- together with Theorem \ref{theorem:main} --- to prove the following result.
\begin{theorem}\label{theorem:efimov}
Let $\mathbb P\in V$ be a proper forcing having the Laver property and preserving the ground model reals non-meager and $G$ a $\mathbb P$-generic filter over $V$. Assume that the Continuum Hypothesis does not hold in $V[G]$. Let $\mathcal{A}\in V$ be a $\sigma$-complete Boolean algebra of cardinality $\omega_1$. Then, in $V[G]$, the Stone space $K_\mathcal{A}$ of the algebra $\mathcal{A}$ is an Efimov space of weight $\omega_1$.
\end{theorem}
Note that Theorem \ref{theorem:efimov} introduces a new situation for which none of the previous arguments works (e.g. those of Fedorchuk or Dow et al.), but still there does exist an Efimov space. Indeed, assume that the Generalized Continuum Hypothesis holds in the universe $V$. Let $V'$ be an extension of $V$ in which $2^{\omega}=\omega_1$ and $2^{\omega_1}=2^{\omega_2}=\omega_3$ hold (e.g. force with $Fn\big(\omega_3\times\omega_1,2,\omega_1\big)$). Then, by using the countable support iteration of length $\omega_2$ of Miller's forcing, we obtain the extension $V''$ of $V'$ in which $\mathfrak{s}=\mathfrak{b}=\omega_1<\omega_2=\operatorname{cof}(\mathcal{N})=\mathfrak c$ and $2^{\omega_1}=2^{\mathfrak{s}}=2^{\omega_2}=\omega_3$.
\begin{corollary}\label{corollary:new_efimov}
It is consistent that $\mathfrak{s}=\mathfrak{b}=\omega_1<\omega_2=\operatorname{cof}(\mathcal{N})=\mathfrak c$, $2^{\omega_1}=2^{\mathfrak{s}}=2^{\omega_2}=\omega_3$, and an Efimov space exists.
\end{corollary}
There is an interesting question connecting Theorem \ref{theorem:main} and the result of Dow and Fremlin similar to Question \ref{question:laver}. Let $K$ be a totally disconnected compact space. Seever \cite[Theorem A]{See68} proved that $K$ is an F-space if and only if the Boolean algebra $\mathcal{A}=Clopen(K)$ of clopen subsets of $K$ has \textit{the property (I)}, i.e. for every sequences $\langle A_n\colon\ n\in\omega\rangle$ and $\langle B_n\colon\ n\in\omega\rangle$ in $\mathcal{A}$ such that $A_n\le B_m$ for every $n,m\in\omega$, there exists $C\in\mathcal{A}$ such that $A_n\le C\le B_m$ for every $n,m\in\omega$. Trivially, $\sigma$-complete Boolean algebras have the property (I). Seever \cite[Theorems B and C]{See68} proved also that if a Boolean algebra $\mathcal{A}$ has the property (I), then it has the Vitali--Hahn--Saks property. Now, since the Stone space of a Boolean algebra with the Nikodym property or the Grothendieck property has no non-trivial convergent sequences, we may ask about the extension of Dow and Fremlin's result as follows.
\begin{question}
Let $\mathbb P\in V$ be the random forcing. Let $\mathcal{A}\in V$ be a Boolean algebra which is $\sigma$-complete (or weaker: has the property (I)). If $G$ is a $\mathbb P$-generic filter over $V$, then does $\mathcal{A}$ have the Vitali--Hahn--Saks property in $V[G]$?
\end{question}
We finish the paper with the following issue concerning spaces without non-trivial converging sequences and cardinal characteristics of the continuum.
\begin{definition}
Let $\mathcal{NC}$ denote the class of all infinite compact Hausdorff spaces without non-trivial convergent sequences. \textit{The non-convergence number} $\mathfrak{z}$ is defined as:
\[\mathfrak{z}=\min\big\{w(K)\colon\ K\in\mathcal{NC}\big\}.\]
\end{definition}
It is well-known that $\max(\mathfrak{s},\operatorname{cov}(\mathcal{M}))\le\mathfrak{z}$ (cf. Sobota \cite[Proposition 3.1]{Sob17}) and, by the fact that the Stone spaces of Boolean algebras with either the Nikodym property or the Grothendieck property do not contain any non-trivial convergent sequences, we have $\mathfrak{z}\le\min(\mathfrak{nik},\mathfrak{gr})$. However, we do not know whether the latter inequality may be strict.
\begin{question}
Is any of the inequalities $\mathfrak{z}<\mathfrak{gr}$ and $\mathfrak{z}<\mathfrak{nik}$ consistent?
\end{question}
Obviously, a positive answer to Question \ref{question:gr_b} would imply the consistency of the inequalities $\mathfrak{z}\le\mathfrak{gr}<\mathfrak{b}\le\mathfrak{nik}$. On the other hand, if the answer to Question \ref{question:gr_b} is negative, we ask about the relation between $\mathfrak{z}$ and $\mathfrak{b}$.
\begin{question}
Does $\mathfrak{b}\le\mathfrak{z}$ hold in ZFC?
\end{question}
Note that assuming $\mathfrak{b}=\mathfrak{c}$ Dow and Shelah \cite{DS13} obtained a counterexample to the Efimov problem.
\label{se:introduction}
In the past few years a number of papers have discussed
the inclusion of weak boson finite-width effects in the theoretical predictions for
$e^+e^-$ processes. A careful treatment is required since these effects
are intimately related to the gauge invariance of the theory and any violation
of Ward identities can lead to large errors.
Even recently a new proposal for handling unstable particle processes
has appeared \cite{berendschapovsky}.
The most appealing approach used in actual numerical computations
is in our opinion the Fermion-Loop (FL) scheme \cite{bhf1, baurzepp, bhf2},
which consists in the
resummation of the fermionic one-loop corrections to the vector-boson
propagators and the inclusion of all remaining fermionic one-loop corrections,
in particular those to the Yang--Mills vertices.
In Ref.~\cite{bhf1, baurzepp} only the imaginary parts of the loops were
included since these represent the minimal set of one-loop contributions which
is required for preserving gauge invariance. This scheme will be referred to
as the Imaginary Part Fermion-Loop (IFL) scheme in the following.
In \cite{bhf2} all contributions from
fermionic one-loop corrections have been computed.
Some effects of light fermion masses in the fermionic loops have been
investigated in \cite{Hoogland&vanoldenborgh}.
In this paper we study the effects of external particle fermion masses
which imply the non--conservation of the weak currents which couple to the
fermionic loops. These effects have been as yet neglected for $e^+ e^-\to
4f$ processes:
in Ref.~\cite{bhf1} and in the numerical part of Ref.~\cite{bhf2}
all fermions have been assumed to be massless,
while in Ref.~\cite{Hoogland&vanoldenborgh} massive matrix elements
together with the FL corrections of Ref.~\cite{bhf2} were used
under the assumption that the currents were conserved.
Since our main focus is on gauge invariance, we restrict our attention to the
imaginary parts of the fermionic loops, generalizing the approach of
Ref.~\cite{bhf1}.
The extension of the full FL scheme to the case of massive external fermions
is at present being studied \cite{GPnew} and will make it possible to determine
the scale of $\alpha_{QED}$ for single $W$\ processes.
We compare the different gauge-restoring schemes in $e^-e^+\to e^-\bar{\nu}_eu\bar{d}$\/ (CC20)
which, in addition to the usual diagrams of $e^-e^+\to \mu^-\bar{\nu}_\mu u\bar{d}$\/ (CC10), requires
all diagrams obtained by exchanging the incoming $e^+$ with the outgoing
$e^-$. These contributions become dominant for $\theta_e \rightarrow 0$
because of the $t$-channel
$\gamma$ propagator. The CC20 four-fermion events with the
$e$ lost in the beam pipe are often referred to as single $W$ production,
and are relevant for triple-gauge-coupling studies and as a background to searches.
For recent reviews see Refs.~\cite{revs}.
Since the $t$-channel $\gamma$ propagator diverges at $\theta_e=0$ in the
$m_e \rightarrow 0$ limit,
fermion masses have to be exactly accounted for.
Moreover, the apparent $t^{-2}$ behaviour is reduced to $t^{-1}$ by
gauge cancellations. This implies that even a tiny violation of gauge
conservation can have dramatic effects,
as e.g.\ discussed in Ref.~\cite{bhf1,lmunu}, and the use of some
gauge conserving scheme is unavoidable.
Two different strategies have been used:
the Improved Weizs\"acker--Williams approximation \cite{iww}, implemented in
{\tt WTO}~\cite{wto}, and completely massive codes.
In the first case one separates the four $t$-channel photon diagrams, evaluates
them analytically in the equivalent photon approximation taking into account
the complete dependence on all masses, and then adds
the rest of the diagrams and the interference between the two sets
in the massless approximation.
In the fully massive MC numerical approach {\tt COMPHEP}\cite{com},
{\tt GRC4F}\cite{grc}, {\tt KORALW}\cite{kor}, {\tt WPHACT}\cite{wph} and
recently also the two new codes
{\tt NEXTCALIBUR}\cite{nextc}
and {\tt SWAP}~\cite{swap} have compared their results and found good
agreement \cite{revs}\footnote{More details can be found in the
homepage of the LEP2 MC Workshop
{\tt http://www.ph.unito.it/\~{}giampier/lep2.html}}
among themselves and with {\tt WTO}.
In the following we first discuss the issue of U(1) gauge invariance
in single $W$ production with non-conserved weak currents.
We then give the expression
of all required contributions to the vertex corrections in the IFL scheme.
Finally we present comparisons between the IFL and
other gauge-preserving schemes which have been employed in the literature
and study the relevance of neglecting current non conservation in
the energy range of LEP2 and LC.
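To make the role of fermion masses explicit, it is worth recalling the standard Dirac-equation identity (with $P_{L,R}={1\over2}(1\mp\gamma^5)$ and the momenta of Fig.~\ref{diagrams_eeenuud}): from $\bar{u}(p_u)\sla{p}_u=m_u\bar{u}(p_u)$ and $\sla{p}_d\, v(p_d)=-m_d\, v(p_d)$ one obtains
\begin{equation}
(p_u+p_d)_\rho\;\bar{u}(p_u)\gamma^\rho P_L v(p_d)
= m_u\,\bar{u}(p_u)P_L v(p_d)-m_d\,\bar{u}(p_u)P_R v(p_d)\,,
\end{equation}
so the weak current is conserved only in the massless limit. An analogous relation, proportional to $m_e$, holds for the $e\bar{\nu}_e$ current.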
\begin{figure}[tb]
\begin{center}
\begin{picture}(90,100)(0,0)
\ArrowLine(10,80)(35,80)
\ArrowLine(35,80)(80,80)
\ArrowLine(80,20)(35,20)
\ArrowLine(35,20)(10,20)
\ArrowLine(80,35)(65,50)
\ArrowLine(65,50)(80,65)
\Photon(35,80)(35,50){2}{4}
\Photon(35,50)(35,20){2}{4}
\Photon(35,50)(65,50){2}{4}
\Vertex(35,80){1.2}
\Vertex(35,20){1.2}
\Vertex(35,50){1.2}
\Vertex(65,50){1.2}
\put(08,90){\makebox(0,0)[r]{$e^-$}}
\put(08,10){\makebox(0,0)[r]{$e^+$}}
\put(82,90){\makebox(0,0)[l]{$e^-$}}
\put(82,10){\makebox(0,0)[l]{$\bar{\nu}_e$}}
\put(82,65){\makebox(0,0)[l]{$u$}}
\put(82,35){\makebox(0,0)[l]{$\bar{d}$}}
\put(30,65){\makebox(0,0)[r]{$\gamma$}}
\put(30,35){\makebox(0,0)[r]{$W$}}
\put(55,55){\makebox(0,0)[b]{$W^+$}}
\end{picture}
\qquad
\begin{picture}(90,100)(0,0)
\ArrowLine(10,80)(35,80)
\ArrowLine(35,80)(80,80)
\ArrowLine(80,20)(55,20)
\ArrowLine(55,20)(35,20)
\ArrowLine(35,20)(10,20)
\ArrowLine(85,35)(70,45)
\ArrowLine(70,45)(85,60)
\Photon(35,80)(35,20){2}{6}
\Photon(55,20)(70,45){2}{3}
\Vertex(35,80){1.2}
\Vertex(35,20){1.2}
\Vertex(55,20){1.2}
\Vertex(70,45){1.2}
\put(08,90){\makebox(0,0)[r]{$e^-$}}
\put(08,10){\makebox(0,0)[r]{$e^+$}}
\put(82,90){\makebox(0,0)[l]{$e^-$}}
\put(82,10){\makebox(0,0)[l]{$\bar{\nu}_e$}}
\put(87,60){\makebox(0,0)[l]{$u$}}
\put(87,35){\makebox(0,0)[l]{$\bar{d}$}}
\put(30,50){\makebox(0,0)[r]{$\gamma$}}
\put(65,32){\makebox(0,0)[br]{$W^+$}}
\end{picture}
\\
\begin{picture}(90,100)(0,0)
\ArrowLine(10,80)(35,80)
\ArrowLine(35,80)(80,80)
\ArrowLine(80,20)(35,20)
\ArrowLine(35,20)(10,20)
\ArrowLine(80,35)(50,35)
\ArrowLine(50,35)(50,65)
\ArrowLine(50,65)(80,65)
\Photon(35,80)(50,65){2}{3}
\Photon(35, 20)(50,35){2}{3}
\Vertex(35,80){1.2}
\Vertex(35,20){1.2}
\Vertex(50,35){1.2}
\Vertex(50,65){1.2}
\put(08,90){\makebox(0,0)[r]{$e^-$}}
\put(08,10){\makebox(0,0)[r]{$e^+$}}
\put(82,90){\makebox(0,0)[l]{$e^-$}}
\put(82,10){\makebox(0,0)[l]{$\bar{\nu}_e$}}
\put(82,65){\makebox(0,0)[l]{$u$}}
\put(82,35){\makebox(0,0)[l]{$\bar{d}$}}
\put(38,65){\makebox(0,0)[r]{$\gamma$}}
\put(42,35){\makebox(0,0)[r]{$W$}}
\end{picture}
\qquad
\begin{picture}(90,100)(0,0)
\ArrowLine(10,80)(35,80)
\ArrowLine(35,80)(80,80)
\ArrowLine(80,20)(35,20)
\ArrowLine(35,20)(10,20)
\ArrowLine(50,35)(80,35)
\ArrowLine(50,65)(50,35)
\ArrowLine(80,65)(50,65)
\Photon(35,80)(50,65){2}{3}
\Photon(35, 20)(50,35){2}{3}
\Vertex(35,80){1.2}
\Vertex(35,20){1.2}
\Vertex(50,35){1.2}
\Vertex(50,65){1.2}
\put(08,90){\makebox(0,0)[r]{$e^-$}}
\put(08,10){\makebox(0,0)[r]{$e^+$}}
\put(82,90){\makebox(0,0)[l]{$e^-$}}
\put(82,10){\makebox(0,0)[l]{$\bar{\nu}_e$}}
\put(82,35){\makebox(0,0)[l]{$u$}}
\put(82,65){\makebox(0,0)[l]{$\bar{d}$}}
\put(38,65){\makebox(0,0)[r]{$\gamma$}}
\put(42,35){\makebox(0,0)[r]{$W$}}
\end{picture}
\end{center}
\vskip -.5cm
\caption[]{The four diagrams of the process $e^-(p_1)e^+(k_1)
\to e^-(p_2)\bar{\nu}_e(k_2) u(p_u)
\bar{d}(p_d)$ which are considered in this paper.}
\label{diagrams_eeenuud}
\end{figure}
\section{Gauge invariance}
\label{se:gaugeinv}
We choose to work in the unitary gauge. In this case,
the relevant set of Feynman diagrams which become dominant
for $\theta_e \rightarrow 0$
coincides with that discussed in Ref.~\cite{bhf1}.
They are shown in Fig.~\ref{diagrams_eeenuud}.
For ease of comparison we follow closely the notation of \cite{bhf1}.
The corresponding matrix element ${\cal M}$ is given by
\begin{equation}
{\cal M}
=
{\cal M}^{\mu}\;
J_\mu\;\;,\;\;\;
J^{\mu} = {Q_e \over q^2} \bar{u}(p_2)\gamma^{\mu} u(p_1)\;\;,\;\;\;
{\cal M}^{\mu}
=
\sum_{i=1}^4{\cal M}_i^{\mu}
\end{equation}
where
\begin{eqnarray}
{\cal M}_1^{\mu}
& = &
Q_{_W}\;
P_{_W}(p_+^2)\;P_{_W}(p_-^2)\;
V^{\alpha\beta\mu}(p_+,-p_-,-q)
D_\alpha^\rho(p_+) D_\beta^\sigma(p_-) {\cal M}^0_{\sigma\rho}\;\;,\nonumber\\
{\cal M}_2^{\mu}
& = &
\hphantom{-}
4iQ_e g_w^2
\;P_{_W}(p_+^2)\;
\bar{v}(k_1)\gamma^\mu{\sla{k}_1+\sla{q}-m_e\over(k_1+q)^2-m_e^2}
\gamma^\alpha P_L v(k_2)\;\;
\bar{u}(p_u)\gamma_\rho P_L v(p_d)D_\alpha^\rho(p_+)\;\;,\nonumber\\
{\cal M}_3^{\mu}
& = &
-4iQ_u g_w^2
\;P_{_W}(p_-^2)\;
\bar{u}(p_u)\gamma^\mu{\sla{p}_u-\sla{q}+m_u\over(p_u-q)^2-m_u^2}
\gamma^\beta P_L v(p_d)\;\;
\bar{v}(k_1)\gamma_\sigma P_L v(k_2)D_\beta^\sigma(p_-)\;\;,\nonumber\\
{\cal M}_4^{\mu}
& = &
-4iQ_d g_w^2
\;P_{_W}(p_-^2)\;
\bar{u}(p_u)\gamma^\beta P_L {\sla{q}-\sla{p}_d+m_d\over(p_d-q)^2-m_d^2}
\gamma^\mu v(p_d)\;\;
\bar{v}(k_1)\gamma_\sigma P_L v(k_2)D_\beta^\sigma(p_-)\;\;,\nonumber\\
{\cal M}^0_{\sigma\rho}
& \equiv &
4i g_w^2\;
\bar{v}(k_1)\gamma_\sigma P_L v(k_2)\;\;
\bar{u}(p_u)\gamma_\rho P_L v(p_d)\;\;.
\label{fourdiagrams}
\end{eqnarray}
Here
$P_L \equiv {1\over2}(1-\gamma^5)$ and
\begin{equation}
p_+ = p_u+p_d\;\;,\;\;p_- = k_1-k_2\;\;,\;\;q = p_1-p_2\;\;,
\end{equation}
\begin{equation}
\label{invprop}
\left[P_{_W}(s)\right]^{-1} = s-M_{_W}^2+i\gamma_{_W}(s)\;\;,
\end{equation}
\begin{equation}
D_\alpha^\beta(p) = g_\alpha^\beta - p_\alpha
p^\beta/K(p^2)\;\;.
\end{equation}
$M_{_W}$ is the $W$ mass and
$\gamma_{_W}$ denotes the imaginary part of the inverse $W$ propagator.
At tree level,
$K(p^2) = M^2_{_W}$ but the resummation of the imaginary
parts of higher order graphs modifies the lowest
order expression of $K$ in addition to generating a finite width.
The charged weak coupling constant $g_w$ is given by
$g_w^2 = M_{_W}^2G_F/\sqrt{2}$, while
$Q_i$ is the electric charge of particle $i$, and
\begin{equation}
V^{\mu_1\mu_2\mu_3}(p_1,p_2,p_3) = (p_1-p_2)^{\mu_3}g^{\mu_1\mu_2} +
(p_2-p_3)^{\mu_1}g^{\mu_2\mu_3} + (p_3-p_1)^{\mu_2}g^{\mu_3\mu_1}\;\;.
\end{equation}
The conservation of electromagnetic current requires
\begin{equation}
q^\mu {\cal M}_\mu = 0\;\;.
\label{currcons}
\end{equation}
Any small violation of this relation will be amplified
by a huge factor and will lead to totally wrong predictions
for almost collinear electrons \cite{bhf1,lmunu}.
Multiplying $q^\mu$ into the four diagrams of \eqn{fourdiagrams},
we obtain
\begin{eqnarray}
\label{qdot4diag}
W & \equiv & q^\mu{\cal M}_\mu \nonumber\\
& = & {\cal M}_0\left\{
(p_+^2-p_-^2)Q_{_W}\;P_{_W}(p_+^2)\;P_{_W}(p_-^2)\right.\nonumber\\
& & \hphantom{{\cal M}_0}
\left.\mbox{}
+Q_e\;P_{_W}(p_+^2) - \left( Q_d-Q_u\right)\;
P_{_W}(p_-^2) \right\}\;\;\nonumber\\
& & -{\cal M}_{++}\left\{
Q_{_W}\;P_{_W}(p_+^2)\;P_{_W}(p_-^2)\;
\left( 1 - p_-^2/K(p_+^2)\right)
\right.\nonumber\\
& & \hphantom{{\cal M}_{++}\;}
\left.\mbox{}
+Q_e\;P_{_W}(p_+^2)\;/K(p_+^2)
\right\}
\\
& & +{\cal M}_{--}\left\{
Q_{_W}\;P_{_W}(p_+^2)\;P_{_W}(p_-^2)\;
\left( 1 - p_+^2/K(p_-^2)\right)
\right.\nonumber\\
& & \hphantom{{\cal M}_{--}\;}
\left.\mbox{}
+ \left( Q_d-Q_u\right)\;P_{_W}(p_-^2)\;/K(p_-^2)
\right\}
\nonumber\\
& & +{\cal M}_{-+}\left\{
Q_{_W}\;P_{_W}(p_+^2)\;P_{_W}(p_-^2)\; p_-\cdot p_+\;
\left(
K(p_-^2)^{-1} - K(p_+^2)^{-1} \right)
\right\}\;.
\nonumber
\end{eqnarray}
\newpage
where
\begin{equation}
{\cal M}_0
\equiv
{\cal M}^0_{\alpha\beta}\,g^{\alpha\beta}\;,\; \;\;\;\;\;\;\;
{\cal M}_{++}
\equiv
{\cal M}^0_{\alpha\beta}\,p_+^\alpha p_+^\beta\;,
\end{equation}
\begin{equation}
{\cal M}_{--}
\equiv
{\cal M}^0_{\alpha\beta}\,p_-^\alpha p_-^\beta\;,\; \;\;\;
{\cal M}_{-+}
\equiv
{\cal M}^0_{\alpha\beta}\,p_-^\alpha p_+^\beta\;\;.
\end{equation}
Using $Q_{_W} = Q_e = Q_d-Q_u = - 1$ and \eqn{invprop} we have
\begin{eqnarray}
\label{gaugecancellation1}
W & = &\; i\;{\cal M}_0\;P_{_W}(p_+^2)\;P_{_W}(p_-^2)\;
\left(\gamma_{_W}(p_+^2)-\gamma_{_W}(p_-^2)\right) \nonumber\\
& & +\;{\cal M}_{++}\left\{
P_{_W}(p_+^2)\;P_{_W}(p_-^2)\;
\left( 1 - \parent{M_{_W}^2-i\gamma_{_W}(p_-^2)}/K(p_+^2)\right)
\right\} \\
& & -\;{\cal M}_{--}\left\{
P_{_W}(p_-^2)\;P_{_W}(p_+^2)\;
\left( 1 - \parent{M_{_W}^2-i\gamma_{_W}(p_+^2)}/K(p_-^2)\right)
\right\} \nonumber\\
& & -\;{\cal M}_{-+}\left\{
P_{_W}(p_+^2)\;P_{_W}(p_-^2)\; p_-\cdot p_+\;
\left( K(p_-^2)^{-1} - K(p_+^2)^{-1} \right)
\right\}\;.
\nonumber
\end{eqnarray}
Current conservation is therefore violated unless
\begin{eqnarray}
\gamma_{_W}(p_+^2) &=& \gamma_{_W}(p_-^2) \equiv \overline{\gamma}_{_W}\\
K(p_+^2) &=& K(p_-^2) = M_{_W}^2-i\overline{\gamma}_{_W}
\label{fixwidthcondition}
\end{eqnarray}
It should be mentioned that all effects due to the non-conservation
of the currents
which couple to the $W$ and $Z$ bosons are contained in the last three terms of
\eqn{qdot4diag} and \eqn{gaugecancellation1} which would be zero if the
currents were conserved.
The most naive treatment of a
Breit-Wigner resonance uses a {\em fixed width\/} approximation, with
\begin{equation}
\overline{\gamma}_{_W} = M_{_W}\Gamma_{_W}\;\;.
\end{equation}
\eqns{gaugecancellation1}{fixwidthcondition}
show that in this case there is no violation of
electromagnetic current conservation.
In the unitary gauge this corresponds to adding the same imaginary part,
$-iM_{_W}\Gamma_{_W}$, to $M^2_{_W}$ both in the denominator and in the
$p^\mu p^\nu$ term of the $W$ propagator (see {\it e.g.} \cite{baurzepp}
and references therein). We have verified numerically
that neglecting to modify the latter leads to large errors already at
$800 \;\mathrm{GeV}$.
A similar approach, in which
all weak boson masses squared $M^2_{_B}\;, B = W,Z$ are changed to
$M^2_{_B}-i\gamma_{_B}$ everywhere,
including in the definition of the weak mixing angle,
has in fact been suggested \cite{DDRW}
as a means of preserving both U(1) and SU(2) Ward identities in
the Standard Model.
The fixed-width approximation cannot however
be justified from field theory. Indeed,
propagators with space-like momenta are real and cannot
acquire a finite width in contradiction to the fixed-width scheme.
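The difference between the two prescriptions can be made concrete with a minimal numerical sketch (the values of $M_{_W}$ and $\Gamma_{_W}$ below are illustrative, not taken from the paper): a fixed width keeps the propagator complex even at space-like momenta, whereas the running width $\gamma_{_W}(s)=\Gamma_{_W}\,s/M_{_W}$ obtained below from the resummed fermion loops vanishes for $s<0$; the two coincide on resonance.

```python
M_W, Gamma_W = 80.4, 2.05   # GeV, illustrative values

def P_fixed(s):
    # fixed-width scheme: the same complex mass at all momenta, even space-like
    return 1.0 / (s - M_W**2 + 1j * M_W * Gamma_W)

def P_running(s):
    # resummed fermion loops: gamma_W(s) = Gamma_W * s / M_W for s > 0, zero for s < 0
    gamma = Gamma_W * s / M_W if s > 0 else 0.0
    return 1.0 / (s - M_W**2 + 1j * gamma)

s0 = M_W**2
assert abs(P_fixed(s0) - P_running(s0)) < 1e-12 * abs(P_fixed(s0))  # equal on resonance
assert P_running(-100.0).imag == 0.0    # real at space-like momenta, as field theory requires
assert P_fixed(-100.0).imag != 0.0      # the fixed width stays complex there
```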
As discussed in Ref.~\cite{bhf1},
the simplest way to restore gauge-invariance
in a theoretically satisfying fashion is the addition of the imaginary parts
of one-loop fermionic vertex corrections,
shown in Fig.~\ref{extra_diagrams_eeenuud}, which cancel the imaginary
part in the Ward identities.
The cancellation is exact as long as all fermion loops, both in the vertices and
in the propagators, are computed in the same approximation.
In particular we can
consistently neglect fermion masses in the loops,
if we use
for the $W$ width
the tree--level expression for the decay of an on-shell $W$ to massless fermions
\begin{equation}
\label{widthw}
\Gamma_{_W} = \sum_{\mbox{\tiny doublets}}\;N_f\;
{G_FM_{_W}^3\over6\pi\sqrt{2}}\;\;,
\end{equation}
involving a sum over all fermion doublets with $N_f$ (1 or 3)
colours.
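Plugging representative numbers into \eqn{widthw} (assumed values $G_F = 1.16637\times 10^{-5}\,\mathrm{GeV}^{-2}$ and $M_{_W} = 80.4$ GeV, summing the three lepton doublets and the two light-quark doublets as in the loops of Fig.~\ref{extra_diagrams_eeenuud}) reproduces the familiar tree-level width of about $2.05$ GeV:

```python
import math

G_F = 1.16637e-5   # Fermi constant in GeV^{-2} (assumed value)
M_W = 80.4         # W mass in GeV (assumed value)

# Sum over the fermion doublets an on-shell W decays into, all treated as
# massless: three lepton doublets (N_f = 1) and two quark doublets (N_f = 3).
sum_Nf = 3 * 1 + 2 * 3   # = 9

Gamma_W = sum_Nf * G_F * M_W**3 / (6 * math.pi * math.sqrt(2))
print(f"{Gamma_W:.3f} GeV")   # about 2.05 GeV
```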
The vertex corrections are given by
\begin{figure}[tb]
\begin{center}
\begin{picture}(130,120)(0,0)
\ArrowLine(10,100)(40,100)
\ArrowLine(40,100)(110,100)
\ArrowLine(110,20)(40,20)
\ArrowLine(40,20)(10,20)
\ArrowLine(115,40)(95,60)
\ArrowLine(95,60)(115,80)
\Photon(40,100)(50,75){2}{4}
\Photon(40, 20)(50,45){2}{4}
\ArrowLine(50,75)(50,45)
\ArrowLine(50,45)(75,60)
\ArrowLine(75,60)(50,75)
\Photon(75,60)(95,60){2}{3}
\Vertex(40,100){1.2}
\Vertex(40,20){1.2}
\Vertex(50,45){1.2}
\Vertex(50,75){1.2}
\Vertex(95,60){1.2}
\Vertex(75,60){1.2}
\put(08,100){\makebox(0,0)[r]{$e^-$}}
\put(08,20){\makebox(0,0)[r]{$e^+$}}
\put(112,100){\makebox(0,0)[l]{$e^-$}}
\put(112,20){\makebox(0,0)[l]{$\bar{\nu}_e$}}
\put(117,80){\makebox(0,0)[l]{$u$}}
\put(117,40){\makebox(0,0)[l]{$\bar{d}$}}
\put(41,85){\makebox(0,0)[r]{$\gamma$}}
\put(42,35){\makebox(0,0)[r]{$W$}}
\put(61,70){\makebox(0,0)[bl]{\shortstack{$d,s$\\$e,\mu,\tau$}}}
\put(57,51){\makebox(0,0)[tl]{\shortstack{$u,c$\\$\nu_e,\nu_\mu,\nu_\tau$}}}
\end{picture}
\qquad
\begin{picture}(130,120)(0,0)
\ArrowLine(10,100)(40,100)
\ArrowLine(40,100)(110,100)
\ArrowLine(110,20)(40,20)
\ArrowLine(40,20)(10,20)
\ArrowLine(115,40)(95,60)
\ArrowLine(95,60)(115,80)
\Photon(40,100)(50,75){2}{4}
\Photon(40, 20)(50,45){2}{4}
\ArrowLine(50,45)(50,75)
\ArrowLine(75,60)(50,45)
\ArrowLine(50,75)(75,60)
\Photon(75,60)(95,60){2}{3}
\Vertex(40,100){1.2}
\Vertex(40, 20){1.2}
\Vertex(50, 45){1.2}
\Vertex(50, 75){1.2}
\Vertex(95,60){1.2}
\Vertex( 75,60){1.2}
\put(08,100){\makebox(0,0)[r]{$e^-$}}
\put(08,20){\makebox(0,0)[r]{$e^+$}}
\put(112,100){\makebox(0,0)[l]{$e^-$}}
\put(112,20){\makebox(0,0)[l]{$\bar{\nu}_e$}}
\put(127,80){\makebox(0,0)[l]{$u$}}
\put(127,40){\makebox(0,0)[l]{$\bar{d}$}}
\put(41,85){\makebox(0,0)[r]{$\gamma$}}
\put(42,35){\makebox(0,0)[r]{$W$}}
\put(61,70){\makebox(0,0)[bl]{\shortstack{$u,c$\\$\nu_e,\nu_\mu,\nu_\tau$}}}
\put(57,48){\makebox(0,0)[tl]{\shortstack{$d,s$\\$e,\mu,\tau$}}}
\end{picture}
\end{center}
\vskip -1cm
\caption[]{The extra fermionic diagrams needed to cancel the gauge-breaking
terms.}
\label{extra_diagrams_eeenuud}
\end{figure}
\begin{eqnarray}
\label{eq:M5}
{\cal M}_5^{\mu} &=& {i\over16\pi}{\cal M}^0_{\rho\sigma}\;P_{_W}(p_+^2)\;
P_{_W}(p_-^2)\;g_w^2 \times \\
& & \hphantom{--------} \times \sum_{\mbox{\tiny doublets}}N_f\left( Q_d-Q_u \right)\;
D_\alpha^\rho(p_+) D_\beta^\sigma(p_-)\; Z^{\alpha\beta\mu}\;\;, \nonumber
\end{eqnarray}
where
\begin{equation}
\label{eq:Z1}
Z^{\alpha\beta\mu} = {1\over2\pi} \int d\Omega
\;\mbox{Tr}\left[ \sla{r}_1\gamma^\mu{\sla{r}_1-\sla{q}\over(r_1-q)^2}
\gamma^\beta\sla{r}_2\gamma^\alpha
\right]
\end{equation}
is the imaginary part of the triangle insertions. The momenta $r_1$ and
$r_2$ are the momenta of the cut fermion lines with $p_+=r_1+r_2$.
The expression $Z^{\alpha\beta\mu}$ satisfies the three Ward
identities:
\begin{eqnarray}
\label{wiz}
Z^{\alpha\beta\mu} q_\mu
& = & -{8\over3}\left( p_+^\alpha p_+^\beta - p_+^2 g^{\alpha\beta}
\right) \;\;,\nonumber\\
Z^{\alpha\beta\mu} p^+_\alpha
& = & 0 \;\;,\\
Z^{\alpha\beta\mu} p^-_\beta
& = & +{8\over3}\left( p_+^\mu p_+^\alpha - p_+^2 g^{\mu\alpha}
\right) \;\;.\nonumber
\end{eqnarray}
Because of the anomaly cancellation we have no explicit contributions from the
part containing $\gamma^5$.
Attaching the photon momentum $q_\mu$ to the sum of the
diagrams ${\cal M}_5^{\mu}$ gives
\begin{eqnarray}
W_{\mathrm{add}} \equiv q_\mu{\cal M}_5^{\mu}
& = &
- \; i \;{\cal M}_0\;P_{_W}(p_+^2)\;P_{_W}(p_-^2)\;
\Gamma_{_W} {p_+^2\over M_{_W}}\nonumber\\
& & +\; i\;{\cal M}_{++}\;P_{_W}(p_+^2)\;P_{_W}(p_-^2)\;
{\Gamma_{_W} \over M_{_W}} \\
& & +\; i\;{\cal M}_{--}\;P_{_W}(p_+^2)\;P_{_W}(p_-^2)\;
{\Gamma_{_W} \over M_{_W}}{p_+^2\over K(p_-^2)}
\nonumber\\
& & -\; i\;{\cal M}_{-+}\;P_{_W}(p_+^2)\;P_{_W}(p_-^2)\;
{\Gamma_{_W} \over M_{_W}}
{{p_+\cdot p_-}\over K(p_-^2)}
\nonumber
\label{qdotM5}
\end{eqnarray}
where we used the Ward identity of \eqn{wiz} and the definition of the
nominal $W$ width, \eqn{widthw}.
\begin{figure}[tb]
\begin{picture}(200,30)(0,0)
\Photon(110,15)(130,15){2}{4}
\put(122,20){\makebox(0,0)[b]{$W$}}
\BCirc(140,15){10}
\Photon(150,15)(170,15){2}{4}
\put(162,20){\makebox(0,0)[b]{$W$}}
\put(175,15){\makebox(0,0)[l]
{$= - i \,\Pi^{\mu\nu}_{{\scriptscriptstyle W}}$}}
\end{picture}
\vskip -.3cm
\caption[]{First order contribution to the inverse W propagator.
The fermions in the loop are assumed to be massless.}
\label{invWprop}
\end{figure}
Assuming $\gamma_{_W}(p_-^2)=0$
as required by field theory, the extra diagrams restore U(1) gauge invariance
provided
\begin{eqnarray}
\gamma_{_W}(p_+^2) &=& \Gamma_{_W} {p_+^2\over M_{_W}}\; , \\
K(p_+^2) &=& M^2_{_W} \left( 1 + i {\Gamma_{_W} \over M_{_W}}\right)^{-1}\;,\\
K(p_-^2) &=& M^2_{_W}\;\;.
\label{fixedW}
\end{eqnarray}
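As a sanity check, one can verify numerically that $W + W_{\mathrm{add}} = 0$ once these conditions are imposed, for arbitrary values of the contractions ${\cal M}_0$, ${\cal M}_{++}$, ${\cal M}_{--}$ and ${\cal M}_{-+}$ and of the kinematic invariants. A minimal sketch (every numerical input below is illustrative):

```python
# W is evaluated from eq. (gaugecancellation1), W_add from q·M_5, with the
# running-width conditions gamma_W(p_+^2) = G*pp/M, gamma_W(p_-^2) = 0,
# K(p_+^2) = M^2/(1 + iG/M), K(p_-^2) = M^2.
M, G = 80.4, 2.05                       # W mass and width in GeV (illustrative)
pp, pm, pdot = 6800.0, -1500.0, 900.0   # p_+^2, p_-^2, p_+·p_- (arbitrary)
M0, Mpp, Mmm, Mmp = 1.1 + 0.3j, -0.7 + 2.0j, 0.4 - 1.2j, 2.3 + 0.5j  # arbitrary

gp, gm = G * pp / M, 0.0
Kp, Km = M**2 / (1 + 1j * G / M), M**2
Pp = 1.0 / (pp - M**2 + 1j * gp)        # P_W(p_+^2)
Pm = 1.0 / (pm - M**2 + 1j * gm)        # P_W(p_-^2)

W = (1j * M0 * Pp * Pm * (gp - gm)
     + Mpp * Pp * Pm * (1 - (M**2 - 1j * gm) / Kp)
     - Mmm * Pp * Pm * (1 - (M**2 - 1j * gp) / Km)
     - Mmp * Pp * Pm * pdot * (1.0 / Km - 1.0 / Kp))

W_add = (-1j * M0 * Pp * Pm * G * pp / M
         + 1j * Mpp * Pp * Pm * G / M
         + 1j * Mmm * Pp * Pm * (G / M) * (pp / Km)
         - 1j * Mmp * Pp * Pm * (G / M) * (pdot / Km))

assert abs(W) > 0                       # W alone does not vanish ...
assert abs(W + W_add) < 1e-10 * abs(W)  # ... but the vertex diagrams cancel it
```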
This result may be surprising, but it is actually the correct field-theoretical
resummation, in the unitary gauge, of the imaginary part of the fermionic
one--loop contributions \cite{baurzepp},
shown in \fig{invWprop}, which is transverse if we consider only
massless fermions
\begin{equation}
\mathop{\operator@font Im}\nolimits \left( \Pi^{\mu\nu}_{{\scriptscriptstyle W}}\right) =
\left( g^{\mu\nu} - p^\mu p^\nu/p^2 \right)\Pi_{{\scriptscriptstyle W}}
\end{equation}
with
\begin{equation}
\Pi_{{\scriptscriptstyle W}} = p^2 {\Gamma_{_W} \over M_{_W}}\;\;.
\label{treeWwidth}
\end{equation}
If, suppressing indices for simplicity, we define $1\!\!1 \equiv g^{\mu\nu}$,
$D \equiv g^{\mu\nu} - p^\mu p^\nu/M^2$ and
$T \equiv g^{\mu\nu} - p^\mu p^\nu/p^2$, we have $DT = TD = T$ and $T^2 = T$.
The usual Dyson series for the resummation of the imaginary part $\Pi$
of one--loop corrections reads:
\begin{eqnarray}
S & = & {-iD \over p^2- M^2} + {-iD \over p^2- M^2}(T\Pi){-iD \over p^2- M^2} +
\cdots \nonumber \\
& = & {-iD \over p^2- M^2 +i\,\Pi}\left( 1\!\!1 + {i\,\Pi \over p^2-M^2}\left(1\!\!1-T\right)
\right)
\end{eqnarray}
More explicitly
\begin{eqnarray}
S^{\mu\nu}
& = & {-i \over p^2- M^2 +i\Pi}
\left\{
g^{\mu\nu}
- {p^\mu p^\nu \over M^2}
\left(1 + {i\Pi \over p^2}\right)\right\}
\end{eqnarray}
Hence the introduction of a finite width for $s$-channel
virtual $W$'s, which is required even in tree-level calculations,
has to be associated with a corresponding modification of
the $p^\mu p^\nu$ term.
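The resummation can be cross-checked by summing the truncated Dyson series numerically and comparing it with the closed form $S^{\mu\nu}$ above (a sketch with illustrative values for $M$, $\Gamma$ and $p$; adjacent tensors are contracted through the metric):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])    # Minkowski metric, mostly-minus
p = np.array([100.0, 10.0, 5.0, 2.0])   # illustrative time-like momentum (GeV)
p2 = p @ g @ p
M, Gamma = 80.4, 2.05                   # illustrative W mass and width (GeV)
Pi = p2 * Gamma / M                     # Im Pi_W = p^2 Gamma / M

D = g - np.outer(p, p) / M**2           # unitary-gauge numerator
T = g - np.outer(p, p) / p2             # transverse projector

def cdot(A, B):
    # tensor product with the adjacent pair of indices contracted via the metric
    return A @ g @ B

A = -1j * D / (p2 - M**2)               # lowest-order propagator
S_series, term = A.copy(), A.copy()
for _ in range(200):                    # Dyson series with (T Pi) insertions
    term = cdot(cdot(term, Pi * T), A)
    S_series = S_series + term

S_closed = -1j / (p2 - M**2 + 1j * Pi) * (
    g - np.outer(p, p) / M**2 * (1 + 1j * Pi / p2))

assert np.allclose(S_series, S_closed)
```

The check also makes the convergence condition explicit: the series is geometric with ratio $\Pi/(p^2-M^2)$, small for the momentum chosen above.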
\newpage
\par
\section{Form factors for the vertex corrections}
\label{se:formfactors}
We report in this section the analytic expression of $Z^{\alpha\beta\mu}$
which is needed for actual computations in the FL scheme.
Parametrizing $Z^{\alpha\beta\mu}$ as follows
\begin{eqnarray}
\label{ourZ}
Z^{\alpha\beta\mu} &=& \;\; p_+^\alpha p_+^\beta p_+^\mu f_1
+ q^\alpha p_+^\beta p_+^\mu f_2
+ p_+^\alpha q^\beta p_+^\mu f_3
+ p_+^\alpha p_+^\beta q^\mu f_4 \nonumber \\
& & + q^\alpha q^\beta p_+^\mu f_5
+ q^\alpha p_+^\beta q^\mu f_6
+ p_+^\alpha q^\beta q^\mu f_7
+ q^\alpha q^\beta q^\mu f_8 \\
& & + g^{\alpha\beta} p_+^\mu f_9
+ g^{\alpha\beta} q^\mu f_{10}
+ g^{\beta\mu} p_+^\alpha f_{11}
+ g^{\beta\mu} q^\alpha f_{12} \nonumber \\
& & + g^{\alpha\mu} p_+^\beta f_{13}
+ g^{\alpha\mu} q^\beta f_{14} \nonumber
\end{eqnarray}
we find
\begin{eqnarray}
f_1 &=&
160 \frac{p_+^2 q^2 p_-^2}{\lambda^3}
\left\{
f_0 \left[ -6 p_+^2 q^2 p_-^2 -2 ( p_+^2 + p_-^2 )
({p_- \cdot p_+})^2 + 2 ( p_+^4 + p_-^4 ){p_- \cdot p_+} \right]
\rule[-.3 cm]{0cm}{.8cm} \right. \nonumber \\
& & + 10 \frac{p_+^2 ({p_- \cdot p_+})^2}{q^2}
+ 20 p_+^2 p_-^2 + 10 p_+^4
- \frac{2}{q^2} \left( 3 p_+^6
+ ({p_- \cdot p_+})^2 p_-^2 + p_-^6 \right)
+ 2 p_-^4 \nonumber \\
& & + \frac{\lambda}{10} \left[ f_0 \left( - \frac{q^2 p_-^2}{p_+^2}+
\frac{p_-^4}{p_+^2}- \frac{p_+^2 q^2}{p_-^2} - 6 p_+^2
+ \frac{p_+^4}{p_-^2} - 10 q^2 - 6 p_-^2 \right)
\rule[-.3 cm]{0cm}{.8cm} \right. \\
& &
\left. \rule[-.3 cm]{1.cm}{0.cm}
+ \frac{116}{3} - 2 \frac{p_-^4}{q^2 p_+^2}
+ 2 \frac{p_-^2}{p_+^2} + \frac{139}{3} \frac{p_+^2}{q^2}
+ 14 \frac{p_+^2}{p_-^2}
- \frac{20}{3} \frac{p_+^4}{q^2 p_-^2}
+ \frac{67}{3} \frac{p_-^2}{q^2}
\rule[-.3 cm]{0cm}{.8cm} \right] \nonumber \\
& &
\left. + \frac{\lambda^2}{10} \left[
- f_0 \left( \frac{1}{p_+^2} + \frac{1}{p_-^2} \right)
+ \frac{7}{3 p_+^2 q^2} + \frac{2}{3 p_+^2 p_-^2}
+ \frac{19}{3 q^2 p_-^2} \right]
\right\}\;, \nonumber \\
& & \nonumber \\
& & \nonumber \\
f_2 &=&
-160 \frac{p_+^2 q^2 p_-^2}{\lambda^3}
\left\{
f_0 \left[ p_+^2 q^2 p_-^2 + p_+^2 q^4 - q^2 p_-^4 +2 q^4 p_-^2 - q^6
\right]
\rule[-.3 cm]{0cm}{.8cm} \right. \nonumber \\
& & + 2 p_+^2 {q\cdot p_-} +6 q^2 {q\cdot p_-}
+4 q^4 -2 {q\cdot p_-} p_-^2 \nonumber \\
& & + \frac{f_0 \lambda}{20} \left( 2 \frac{p_+^2 q^2}{ p_-^2} + 3 p_+^2 +17 q^2
-2 \frac{q^4}{p_-^2} -3 p_-^2 \right) \\
& & + \left.
\frac{\lambda}{15 p_-^2}( 11 q^2 + 13{q\cdot p_-} - p_-^2 )
+ \frac{\lambda^2}{60 q^2 p_-^2}( 1 + 3 f_0 q^2 )
\right\}\;, \nonumber \\
& & \nonumber \\
& & \nonumber \\
f_3 &=&
160 \frac{p_+^2 q^2 p_-^2}{\lambda^3}
\left\{
f_0 \left[ 3 p_+^2 q^2 p_-^2 + (3 p_+^2 - p_-^2 )({p_-\cdot p_+})^2
- 2 p_+^4 {p_-\cdot p_+} \right]
\rule[-.3 cm]{0cm}{.8cm} \right. \nonumber \\
& & - 8 \frac{p_+^2 ({p_-\cdot p_+})^2}{ q^2}
- 8 p_+^2 p_-^2 - 8 p_+^4
+ 4 \frac{p_+^6}{q^2}
+ 4 \frac{({p_-\cdot p_+})^2 p_-^2}{q^2}
\nonumber \\
& & + \frac{\lambda}{10} \left[
f_0 \left( \frac{p_+^2 q^2}{p_-^2} + 3 p_+^2
- \frac{p_+^4}{p_-^2} + \frac{13}{2} q^2 + 3 p_-^2 \right)
\rule[-.3 cm]{0cm}{.8cm} \right. \nonumber \\
& & \left. \rule[-.3 cm]{1.cm}{0.cm}
- 20 - 32 \frac{p_+^2}{q^2} - \frac{40}{3}\frac{ p_+^2}{ p_-^2}
+ 6 \frac{p_+^4}{q^2 p_-^2} - 4 \frac{p_-^2}{q^2}
\right] \\
& &
\left.
+ \frac{\lambda^2}{20}
\left[ f_0 \left( \frac{1}{ p_+^2} + \frac{2}{ p_-^2} \right)
- \frac{1}{ p_+^2 q^2} - \frac{1}{p_+^2 p_-^2}
- \frac{34}{3 q^2 p_-^2} \right]
\rule[-.3 cm]{0cm}{.8cm} \right\}\;, \nonumber \\
& & \nonumber \\
& & \nonumber \\
f_5 &=&
160 \frac{p_+^2 q^2 p_-^2}{\lambda^3}
\left\{
f_0 \left[
3 p_+^2 q^2 p_-^2 + p_+^2 q^4 - q^2 p_-^4 + 2 q^4 p_-^2 - q^6 \right]
\rule[-.3 cm]{0cm}{.8cm} \right. \nonumber \\
& & + 6 p_+^2 {q\cdot p_-} + 6 q^2 {q\cdot p_-} + 4 q^4 - 2 {q\cdot p_-}p_-^2
\nonumber \\
& & + \frac{f_0 \lambda}{20}( 2 \frac{p_+^2 q^2}{ p_-^2} + 7 p_+^2 + 21 q^2
- 2 \frac{q^4}{ p_-^2} + p_-^2 ) \\
& & \left .
+ \frac{\lambda}{5}\left( - 3 + \frac{2}{3} \frac{q\cdot p_-}{q^2}
+ \frac{11}{3}\frac{q^2}{ p_-^2} + 5 \frac{{q\cdot p_-}}{ p_-^2}
\right)
+ \frac{\lambda^2}{60 q^2 p_-^2}( 1 + 3 f_0 q^2 )
\right\}\;, \nonumber \\
& & \nonumber \\
& & \nonumber \\
f_9 &=&
16 \frac{p_+^2 q^2 p_-^2}{\lambda^2}
\left\{
f_0 \left[ q^2 {p_-\cdot p_+} + \frac{\lambda}{2}\right]
+ 2 {q\cdot p_+}
+ \frac{\lambda}{6 q^2 p_-^2} ( 4{p_+\cdot q} - 3 q^2 )
\right\} \\
& & \nonumber \\
& & \nonumber \\
f_{11} &=&
16 \frac{p_+^2 q^2 p_-^2}{\lambda^2} \left\{
f_0 \left[ q^2 {p_-\cdot p_+} + \lambda \left( - \frac{1}{4}
+\frac{{p_-\cdot p_+}}{2 p_+^2} \right) \right] \right. \\
& & \left. + 2 {p_+\cdot q}
+ \lambda {p_+\cdot q} (\frac{1}{2 p_+^2 q^2} +
\frac{1}{ 2 p_+^2 p_-^2} -\frac{2}{3 q^2p_-^2 } ) \right\}\;,
\nonumber \\
& & \nonumber \\
& & \nonumber \\
f_{12} &=&
16 \frac{p_+^2 q^2 p_-^2}{\lambda^2} \left\{
f_0 \left[- p_+^2 {q\cdot p_-} + \frac{\lambda}{2}\right]
- 2 p_+^2
+ \frac{\lambda}{6 q^2 p_-^2} ( 6{q\cdot p_-} - p_+^2) \right\}\;,
\\
& & \nonumber \\
& & \nonumber \\
f_{13} &=&
16 \frac{p_+^2 q^2 p_-^2}{\lambda^2} \left\{
\phantom{\left( - \frac{3}{4}\right.} \right. \nonumber \\
& & \left.
f_0 \left[ - 7 q^2{p_+\cdot q}
+ 4 ({p_+\cdot q})^2 - 3 q^2 p_-^2 + 3 q^4
+ \lambda \left( - \frac{3}{4} + \frac{{p_+\cdot q}}{2 p_-^2}
-\frac{q^2}{2 p_-^2} \right) \right] \right. \\
& & \left. + \frac{8}{3}( p_-^2 - q^2 ) - \frac{22}{3}{p_-\cdot p_+}
+ \frac{14}{3}\frac{({p_-\cdot p_+})^2}{ p_-^2}
+ \frac{\lambda}{6 q^2 p_-^2} {q\cdot p_-}
\right\}\;, \nonumber \\
& & \nonumber \\
& & \nonumber \\
f_{14} &=&
16 \frac{p_+^2 q^2 p_-^2}{\lambda^2} \left\{
f_0 \left[ 7 {p_+\cdot q} q^2 -{p_+\cdot q} p_-^2 - 4 ({p_+\cdot q})^2
+ 3 q^2 p_-^2 - 3 q^4 \rule[-.3 cm]{0cm}{.8cm} \right.
\rule[-.3 cm]{0cm}{.8cm} \right. \nonumber \\
& & \left. +\lambda \left( \frac{1}{4} - \frac {p_+\cdot q} {2 p_-^2}
+ \frac{q^2}{2 p_-^2} \right) \right]
- \frac{8}{3}( p_-^2 - q^2 ) + \frac{16}{3} {p_-\cdot p_+}
- \frac{14}{3}\frac{({p_- \cdot p_+})^2}{ p_-^2} \\
& & \left.
+ \frac{\lambda}{6 q^2 p_-^2} ( 4 p_-^2 - 5{p_-\cdot p_+} )
\right\}\;\;. \nonumber
\label{formfacts}
\end{eqnarray}
Here
\begin{equation}
f_0 = -{2\over \sqrt{\lambda}}\ln\left({2(p_-\!\!\cdot q)+\sqrt{\lambda}
\over 2(p_-\!\!\cdot q)-\sqrt{\lambda}}\right)\;, \hphantom{---}
\lambda \equiv
4(p_-\!\!\cdot q)^2-4p_-^2q^2\;\;.
\end{equation}
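Note that momentum conservation gives $q = p_+ - p_-$, so $\lambda$ above is just the standard K\"all\'en function $\lambda(p_+^2, q^2, p_-^2)$. A quick numerical sanity check with arbitrary illustrative momenta:

```python
def mdot(a, b):
    # Minkowski product, mostly-minus metric
    return a[0]*b[0] - a[1]*b[1] - a[2]*b[2] - a[3]*b[3]

# arbitrary illustrative 4-momenta; p_+ = q + p_- by momentum conservation
p_minus = [3.0, 1.0, 0.5, -0.2]
q       = [2.0, -0.3, 1.1, 0.7]
p_plus  = [qi + pi for qi, pi in zip(q, p_minus)]

lam = 4 * mdot(p_minus, q)**2 - 4 * mdot(p_minus, p_minus) * mdot(q, q)

def kallen(a, b, c):
    # standard Kallen (triangle) function
    return a*a + b*b + c*c - 2*a*b - 2*b*c - 2*c*a

lam2 = kallen(mdot(p_plus, p_plus), mdot(q, q), mdot(p_minus, p_minus))
assert abs(lam - lam2) < 1e-9
```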
The form factors $f_4$, $f_6$, $f_7$, $f_8$, $f_{10}$ do not contribute
in CC20 processes because the electron current is conserved.
If we assume that all currents which couple to the fermion loops are conserved
we have
\begin{equation}
Z^{\alpha\beta\mu} = q^\alpha q^\beta p_+^\mu c_0
+ g^{\beta\mu} q^\alpha c_1
+ g^{\alpha\mu} q^\beta c_2
+ g^{\alpha\beta} p_+^\mu c_3
\end{equation}
\begin{equation}
c_0 = f_2 + f_5 \hphantom{---}
c_1 = f_{12} \hphantom{---}
c_2 = f_{13} + f_{14} \hphantom{---}
c_3 = f_9
\end{equation}
where the $c_i$ agree with the results of Ref.\cite{bhf1}.
\section{Applications to the process $e^-e^+\to e^-\bar{\nu}_eu\bar{d}$: numerical effects in a physically
relevant case study}
\label{se:numerics}
In the following we present numerical results obtained with the IFL
scheme and make comparisons with those obtained with other gauge-preserving
approaches.
The following schemes are considered in our analysis:
\begin{description}
\item[Imaginary-part FL scheme (IFL):]
The imaginary part of the fermion-loop corrections of
\eqns{ourZ}{formfacts}
is used. The fermion masses are neglected in the loops but not in the
rest of the diagrams.
\item[Fixed width (FW):] All $W$-boson propagators are given by
\begin{equation}
{g^{\mu\nu}-{p^\mu p^\nu \over M^2_{_W} -i\Gamma_{_W}M_{_W}}
\over p^2-M^2_{_W}+i\Gamma_{_W}M_{_W}}\;\;.
\end{equation}
This gives an unphysical width for $p^2<0$, but retains U(1) gauge
invariance.
\item[Complex Mass (CM):]
All weak boson masses squared $M^2_{_B}\;, B = W,Z$ are changed to
$M^2_{_B}-i\gamma_{_B}$, including when they appear
in the definition of the weak mixing angle. This scheme has the advantage of
preserving both U(1) and SU(2) Ward identities \cite{DDRW}.
\item[Overall scheme (OA):]
The diagrams for $e^-e^+\to e^-\bar{\nu}_eu\bar{d}$\ can be split into two sets which are
separately gauge invariant under U(1).
In the present implementation of OA \cite{overall}, $t$-channel diagrams
are computed without any width and are then multiplied
by $(q^2-M^2)/(q^2-M^2+iM\Gamma)$ where $q$, $M$ and $\Gamma$ are the
momentum, the mass and
the width of the possibly-resonant $W$-boson.
This scheme retains U(1) gauge invariance at the
expense of mistreating non-resonant terms.
\end{description}
In order to assess the relevance of current non-conservation in
process $e^-e^+\to e^-\bar{\nu}_eu\bar{d}$\ we have also implemented
the imaginary part of the fermion-loop corrections with the
assumption that all currents which couple to the fermion-loop are conserved.
In this case \eqns{ourZ}{formfacts} reduce to those
computed in~\cite{bhf1}.
Notice that the masses of external fermions are nonetheless taken into account
in the calculation of the matrix elements. This scheme violates
U(1) gauge invariance
by terms which are proportional to the fermion masses squared, as already
noted in Ref.~\cite{Hoogland&vanoldenborgh}. However, these terms are enhanced
at high energy by large factors and can be numerically quite relevant.
This scheme will be referred to as the
imaginary-part FL scheme with conserved currents {\bf (IFLCC)}
in the following.
\begin{table}[tb]\centering
\begin{tabular}{|c|c|c|c|}
\hline
& 190 GeV & 800 GeV & 1500 GeV \\
\hline
IFL & 0.11815 (13) & 1.6978 (15) & 3.0414 (35) \\
\hline
FW & 0.11798 (11) & 1.6948 (12) & 3.0453 (41) \\
\hline
CM & 0.11791 (12) & 1.6953 (16) & 3.0529 (60) \\
\hline
OA & 0.11760 (10) & 1.6953 (13) & 3.0401 (23) \\
\hline
\hline
IFLCC & 0.11813 (12) & 1.7987 (16) & 5.0706 (44) \\
\hline
\end{tabular}
\caption{Cross sections for the process $e^+ e^-~\rightarrow\; e^- \bar
\nu_e u\bar d$ for various gauge-restoring schemes.}
\label{tab1}
\end{table}
In the comparisons among the different codes mentioned in the introduction,
{\tt COMPHEP} and {\tt WPHACT} used the OA scheme, {\tt KORALW} and {\tt GRC4F}
the $L^{\mu\nu}$ transform method of Ref.~\cite{lmunu},
{\tt NEXTCALIBUR} used the CM and {\tt SWAP} the FW scheme.
Here all schemes described above have been implemented in the new version
of {\tt WPHACT}
in which all massive matrix elements have been added to the old massless ones.
In particular the IFL contributions in \eqns{ourZ}{formfacts} have been
introduced.
In this way, the same matrix elements, phase spaces and integration routines
are used in all instances.
\begin{table}[bt]\centering
\begin{tabular}{|c|c|c|c|}
\hline
& $\cos (\theta_e) > 0.997$ &
$\theta_e < 0.1$ degree &
$M(u \bar d) > 40 \;\mathrm{GeV}$ \\
\hline
IFL & 1.6978 (15) & 1.1550 (15) & 1.6502 (15) \\
\hline
FW & 1.6948 (12) & 1.1538 (21) & 1.6480 (13) \\
\hline
CM & 1.6953 (16) & 1.1533 (14) & 1.6520 (10) \\
\hline
OA & 1.6953 (13) & 1.1537 (12) & 1.6523 (12) \\
\hline
\hline
IFLCC & 1.7987 (16) & 1.2600 (22) & 1.7424 (21) \\
\hline
\end{tabular}
\caption{Cross sections for the process $e^+ e^-~\rightarrow\; e^- \bar
\nu_e u\bar d $ at $E = 800 \;\mathrm{GeV}$ for various gauge-restoring schemes
and different cuts.}
\label{tab2}
\end{table}
If not stated otherwise we apply the following cuts:
\begin{equation}
M(u \bar d) > 5 \;\mathrm{GeV}\;,\;\; E_u > 3 \;\mathrm{GeV}\;,\;\; E_{\bar d} > 3 \;\mathrm{GeV}\;,\;\; \cos (\theta_e ) > 0.997\;.
\end{equation}
We have produced numerical results for $e^-e^+\to e^-\bar{\nu}_eu\bar{d}$\ in the
small space-like $q_\gamma^2$ (collinear electron) region
where we expect gauge-invariance issues to be essential.
We have not included in our computations Initial State Radiation (ISR),
in order to avoid any additional uncertainty in these comparisons among
different gauge restoring schemes.
In \tab{tab1} we give the cross sections for CC20 at Lep 2 and LC energies.
In \tab{tab2} we give the cross sections for CC20 at $E=800 \;\mathrm{GeV}$
with slightly modified selections. With all other cuts at their standard values,
in the second column the electron scattering angle is not
allowed to be larger than
0.1 degree while in the third column the invariant mass of the $u\bar{d}$ pair
is required to be greater than $40 \;\mathrm{GeV}$\/.
The IFL, FW, CM and OA schemes agree
within $2\,\sigma$ in almost all cases. The IFLCC scheme agrees with all other
ones at Lep 2 energies but already at $800 \;\mathrm{GeV}$ it overestimates the total
cross section by about 6\%. At $1.5 \;\mathrm{TeV}$ the error is almost a factor of two.
The results in \tab{tab2} show that the discrepancy between the IFLCC scheme
and all the others decreases slightly, to 5.6\%, if larger masses of the
$u\bar{d}$ pair are required.
If instead smaller electron scattering angles are allowed, the discrepancy
increases to about 9\%. This is a consequence of the fact that
in the collinear region the neglected terms,
proportional to the fermion masses, are enhanced
by factors of order $\ord{m_f^2\gamma_{_W}(p^2)/(M_{_W}^2 m_e^2)}$
which can become very large at high energy even for typical light fermion
masses.
We conclude then that, even in the presence of non-conserved currents,
i.e.\ of massive external fermions, the FW, CM and OA calculations give
predictions
which are in agreement, within a few per mil, with the IFL scheme.
This agreement with the
results of a fully self-consistent approach justifies from
a practical point of view the ongoing use of the FW, CM and OA schemes.
It should be remarked that, for massless fermions,
it has been shown that at high energies, for the
total cross section of the process $e^-e^+\to \mu^-\bar{\nu}_\mu u\bar{d}$,
the full FL scheme deviates from the FW
scheme and the IFL scheme by about 2\% at $1 \;\mathrm{TeV}$, increasing
to about 7\% at $10 \;\mathrm{TeV}$ \cite{bhf2}, mainly because of the running
of the couplings. As a consequence, it appears likely that calculations
performed in the
IFL scheme with running couplings would be able to reproduce the complete
FL results with sufficient accuracy for most practical purposes.
Hitherto missing higher order QCD and bosonic contributions
could still conceivably produce significant corrections.
\section{Conclusions}
The Imaginary Part Fermion-Loop scheme, introduced in Ref.~\cite{bhf1}
for the gauge-invariant treatment of the finite-width effects of $W$ and $Z$
bosons, has been generalized so that it could be applied to processes with
massive external fermions. This involves the Dyson
resummation of higher order imaginary contributions to the propagator which
implies, in the unitary gauge, a modification of the $p^\mu p^\nu$ term in the
numerator. From a numerical
point of view we find no significant difference between
the IFL scheme and the FW, CM or OA schemes in the region most sensitive to
U(1) gauge invariance.
\section*{Acknowledgments}
This research has been partly supported by
NSF Grant No. PHY-9722090 and by MURST.\\
We wish to thank G.~Passarino for several discussions on gauge invariance and related
issues. We also gratefully acknowledge the exchange of information and
comparisons with other groups and in particular with
E.~Boos, M.~Dubinin, S.~Jadach and R.~Pittau.
\section{Introduction}
\label{sec:intro}
The problem of radiation reaction has long been
one of the fundamental theoretical issues in general relativity.
Starting from the historical work of Eddington
in his 1922 book\cite{Eddington},
Chandrasekhar and Esposito\cite{ChandraEspo} discussed
the radiation reaction of the self-gravitating fluid
emphasizing the importance of the time asymmetric part of the metric
appearing in the post-Newtonian expansion,
and Burke and Thorne\cite{BurkeThorne} found that
the leading contribution from the time asymmetric part
can be compactly expressed in the form of
a resistive potential.
The previous studies of radiation reaction\cite{ChandraEspo,BurkeThorne}
were done under the assumption
that the post-Newtonian expansion is valid.
Here we consider this problem
in the framework of linear perturbation theory in a general spacetime.
Part of our motivation is to give a rigorous foundation
for the method of solving the Einstein equations perturbatively as an
expansion with respect to the perturbation caused by a point-like
particle. Usually one adopts a point-like particle to
represent a black hole or neutron star
in the linear perturbation studies as was done in Chapter 1.
Then one may pose several questions:
Since the perturbed field diverges at the location
of the point-like particle,
is the approximation scheme of linear perturbation still valid?
Does the point-like particle really represent
a black hole or a neutron star?
If it represents a black hole, its center is
inside the event horizon; in what sense, then, is
`the motion of the particle' well defined?
Here we are going to clarify the meaning of
the particle trajectory and derive the equations of motion including
the effect of radiation reaction to the first non-trivial order.
Before starting the discussion of the gravitational
radiation reaction, it is worthwhile to refer to the electromagnetic
case in a fixed curved background spacetime, which was discussed by
DeWitt and Brehme\cite{DeWitt}.
In the electromagnetic case, the total energy momentum tensor
composed of the particle and field
contributions satisfies the conservation law.
The conservation law is integrated over the interior of a world tube
with an infinitesimal length surrounding the particle orbit.
The part of the integration which does not vanish in the limit of
small tube radius is transformed into
the surface integrations over both ends of the tube
and over the surface of the tube by using the Gauss theorem.
The integrations over the top and bottom of the tube, respectively,
give the definition of the particle momenta at both ends and
the difference between them represents the change of the momentum
during this infinitesimal time interval, which is to be equated
with the momentum flow given by the
integration over the surface of the tube.
In this way the equations of motion are obtained.
In the case of gravitational radiation reaction,
it is possible to construct a conserved rank-two tensor
defined on the background spacetime,
composed of the matter field and the metric perturbation\cite{Mino1}.
However, there is an essential difference between the electromagnetic
and gravitational cases.
In electromagnetism, we can consider an extended
charge distribution which is supported by a certain force other
than the electromagnetic field.
Thus it is possible to assume that the charge and mass
distributions of a point-like particle are not distorted by
the effect of the radiation reaction.
Therefore one may consistently assume that
the momentum and the electric current of the particle are
proportional to the 4-velocity of the particle.
Moreover the electromagnetic charge $e$ is not directly related
to the energy momentum of the particle which is proportional
to the mass $m$.
Hence, even if the limit of zero particle radius
is taken, the divergent self-energy ($\propto e^2$)
can be renormalized into the mass.
In the case of gravitational radiation reaction,
it is not possible to consider such an ideal point-like
particle because every force field universally couples with gravity.
Even worse, the role played by $e$ in electromagnetism is also taken over by
$m$. Thus a simple renormalization scheme does not make sense.
In order to deal with the gravitational case,
we use the matched asymptotic expansion technique
that has been studied by many authors (e.g.,
D'Eath \cite{Death} and Thorne and Hartle \cite{Thorne1})
in the context of the post-Minkowski (or post-Newtonian)
approximation. We assume that the metric sufficiently far from the
particle is approximated by the perturbation on the background spacetime
generated by a point-like particle. We call this the external metric.
We also assume that the internal metric which describes the
geometry around the particle is represented by
a black hole metric of small mass in the lowest order approximation.
As the particle moves in the curved background, the internal metric
suffers from the tidal distortion.
Thus both internal and external metrics are constructed perturbatively.
The expansion parameters for the internal and external metrics are,
however, different. We call this construction of the metric in the
internal region the internal scheme and that in the external region the
external scheme.
Assuming the existence of the matching region where
both schemes are valid, the internal and external metrics are expanded
there as double series with respect to the two expansion parameters.
Then the terms in these series are labeled by two indices
which denote the powers of the two expansion parameters.
Equating them order by order, we obtain the matching condition,
through which one scheme determines the
boundary condition of the other and vice versa.
Using the matched asymptotic expansion to the first non-trivial orders
of the expansion parameters, we present two different derivations
of the equations of motion with the radiation reaction force of O$(Gm)$:
(1) by means of an explicit construction of
the metric, and (2) by using the so-called laws of motion and
precession\cite{Thorne1}.
As mentioned above, in constructing the internal metric,
the tidal distortion of the geometry is taken into account
by the perturbation of the black hole.
In the method (1), we set the gauge condition
in the internal metric so that the $J=0$ and $1$ linear homogeneous
perturbations of the black hole vanish, since
they are purely gauge degrees of freedom as long as both the mass and
angular momentum of the black hole stay constant.
Applying a limited class of coordinate
transformations that keep the meaning
of the center of the particle unambiguous,
the internal metric is matched with
the external one in the matching region.
Then we find that for a given trajectory of the particle
a consistent coordinate transformation does not always exist,
and this consistency condition determines
the equations of motion.
In the method (2), not all the metric components are evaluated
in both schemes independently but we assume
the existence of a coordinate transformation
that gives a relation between the internal and external metrics.
Once we know some metric components in one scheme,
the counterparts in the other scheme are obtained
from the matching condition. At this stage, the gauge condition is not
fixed in a unique manner.
The coordinate transformation between the internal metric and
the external metric is chosen so that some of the metric components
that are evaluated in both schemes are correctly matched in the matching
region. Substituting the metric constructed in this way
into the Einstein equations, we obtain the consistency
condition. There is a convenient method to extract the information
about the equations of motion from the consistency condition,
namely, the laws of motion and precession
introduced by Thorne and Hartle\cite{Thorne1}.
The laws of motion and precession are derived from
the non-covariant but conserved form of the Einstein equations.
The resulting equations obtained from both derivations
are the same, although the strategies are quite different.
In the method (1), the metrics in both schemes are
calculated independently by using the Einstein equations.
The matching condition is used to obtain the consistency conditions,
which in turn give the equations of motion.
On the other hand, in the method (2), the matching condition
is used to construct the metric. The consistency condition is
derived by requiring that the metric thus obtained satisfies
the Einstein equations.
The meaning of the matching condition in deriving the equations of
motion is clearer in the method (1) than in (2), but the method (2) is
much simpler and more straightforward than the method (1), as we shall see
in the following.
The organization of this chapter is as follows.
We use the terminology `a monopole (spinning) particle'
to refer to a particle which represents a Schwarzschild (Kerr) black
hole. In section \ref{sec:MAE}, the matched asymptotic
expansion technique is explained in detail.
In section \ref{sec:minoExt}, we discuss the metric
perturbation in the external scheme.
In section \ref{sec:deri1}, the equations of motion for a monopole
particle are derived by using the method (1).
The method (1) is applied only to the case of
a monopole particle because of the difficulty in constructing
the perturbed metric of a Kerr black hole.
The case for a spinning particle is considered
in section \ref{sec:deri2} by using the method (2).
Throughout this chapter we assume that the background metric satisfies
the vacuum Einstein equations\footnote{The result is not altered
even if we assume that the background spacetime is
vacuum just around the particle.}.
Hence in the following calculations we use the fact that
the background Ricci tensor vanishes:
\begin{eqnarray}
R_{\mu\nu}=0.
\end{eqnarray}
\section{Matched Asymptotic Expansion}
\label{sec:MAE}
The matched asymptotic expansion is a technique
with which the same physical quantities derived
in different zones by two different approximation schemes
are matched in the overlapping region
to obtain an approximate solution valid in the whole region.
We first prepare the metrics in both internal and external zones
by using different approximation schemes.
The internal zone is the region
where the self-gravity of the particle dominates
while the external zone is the region where
the background geometry dominates the full geometry.
In the internal zone, we assume that the metric can be described
by that of a black hole plus perturbation.
Namely, we assume that the particle is represented by
a Schwarzschild/Kerr black hole in the lowest order of
approximation.
In the present case, the perturbation
is caused by the tidal effect of the curvature of the
spacetime in which the particle travels. As mentioned in the Introduction,
we call this construction of the metric the internal scheme.
In order to make this scheme valid, the linear extension of
the internal zone around the particle
must be much smaller than the background curvature scale $L$.
We introduce the coordinate
$\{ X^a \}=\{ T, X^i \}\quad (a=0,1,2,3;~i=1,2,3)$
for the internal scheme and $|X|(:=\sqrt{X^iX^i})$ is assumed to
represent the physical distance scale\footnote{
In this chapter, we adopt the Minkowskian summation rule on
$a,b,\cdots$, and the Kronecker summation rule on $i,j,\cdots$
over the repeated indices.}.
Then the internal scheme is valid when
\begin{eqnarray}
|X| \ll L\,,
\end{eqnarray}
where $L$ is the length scale of the background curvature.
In the external zone,
we expect that the metric is well approximated by the perturbation
induced by a point source on a given background spacetime.
We call this construction of the metric the external scheme.
This approximation scheme is valid
when the self-gravity of the particle is sufficiently weak,
that is,
\begin{eqnarray}
Gm \ll |X| \,,
\end{eqnarray}
where $Gm$ is the scale of the Schwarzschild radius.
As the point source is placed where the external scheme is invalid,
there is no matter source in the external zone.
Thus the external metric is given by a vacuum solution of the Einstein
equations.
We require that the metrics obtained in both schemes be matched
in the overlapping region of both zones,
by considering a coordinate transformation
between the internal and external metrics.
We may safely assume the existence of the matching region
as long as
\begin{eqnarray}
Gm \ll L\,,
\end{eqnarray}
is satisfied. For definiteness, we set the matching radius at
\begin{eqnarray}
|X| \sim(GmL)^{1/2},
\end{eqnarray}
in the spatial coordinates of the internal scheme, $\{X^i\}$.
Then writing down the metric in the internal scheme,
we have two independent small parameters $|X|/L$ and $Gm/|X|$
in the matching region.
The power expansion with respect to these two small parameters
allows us to consider the matching order by order.
First we consider the expansion of the internal scheme.
Recalling that the perturbation in the internal zone is
induced by the external curvature
which has a characteristic length scale $L$,
the metric can be expanded in powers of $|X|/L$ as
\begin{equation}
\tilde g_{ab}(X) =
{}^{(0)}H_{ab}(X)+{1\over L}{}^{(1)}H_{ab}(X)
+{1\over L^2}{}^{(2)}H_{ab}(X)+\cdots,
\label{eq:bh0}
\end{equation}
where ${}^{(0)}H_{ab}(X)$ is the unperturbed black hole metric.
We expect that ${}^{(1)}H_{ab}(X)$ will be given by the standard linear
perturbation of the black hole. Later, we find that ${}^{(1)}H_{ab}(X)$
can be consistently set to zero, which is in accordance with the notion
that the spacetime curvature is of $O(1/L^2)$. Thus the standard black
hole perturbation theory applies up to ${}^{(2)}H_{ab}(X)$.
Further we expand the metric with respect to $Gm/|X|$
which is also small at the matching radius:
\begin{eqnarray}
{}^{(0)}H_{ab}(X)&=&\eta_{ab}+Gm{}_{(1)}^{(0)}H_{ab}(X)
+(Gm)^2 {}_{(2)}^{(0)}H_{ab}(X)
+\cdots\,,
\nonumber \\
{1\over L}{}^{(1)}H_{ab}(X)&=&
{1\over L}{}_{(0)}^{(1)}H_{ab}(X)
+{Gm\over L}{}_{(1)}^{(1)}H_{ab}(X)
+{(Gm)^2\over L}{}_{(2)}^{(1)}H_{ab}(X)+\cdots\,,
\nonumber \\
{1\over L^2}{}^{(2)}H_{ab}(X)&=&
{1\over L^2}{}_{(0)}^{(2)}H_{ab}(X)
+{Gm\over L^2}{}_{(1)}^{(2)}H_{ab}(X)
+{(Gm)^2\over L^2}{}_{(2)}^{(2)}H_{ab}(X)+\cdots\,.
\label{eq:bh}
\end{eqnarray}
Note that, from the definitions of the expansion parameters,
the ${}_{(n)}^{(m)}H_{ab}$ component of the metric behaves as
\begin{eqnarray}
{}_{(n)}^{(m)}H_{ab} \sim |X|^{m-n}.
\label{eq:bhpower}
\end{eqnarray}
The explicit form of the coordinate transformation
from the general coordinates of a background metric $\{ x^\mu \}$
to the coordinates of the internal scheme $\{X^a\}$
will be discussed in section \ref{sec:minoExt} for the method (1)
and in section \ref{sec:deri2} for the method (2).
Assuming the matching can be consistently done, the full metric
in the external scheme $\tilde g_{\mu\nu}(x)$
is written in terms of the internal coordinates as
\begin{equation}
\tilde g_{ab}(X)dX^a dX^b=\tilde g_{\mu\nu}(x)dx^{\mu} dx^{\nu} \,.
\label{eq:465}
\end{equation}
Generally, as the external metric can be expanded in powers of $Gm/|X|$,
we write it as
\begin{eqnarray}
\tilde g_{ab}(X)= g_{ab}(X) +Gm {}_{(1)}h_{ab}(X)
+(Gm)^2 {}_{(2)}h_{ab}(X) +\cdots.
\end{eqnarray}
Then $Gm {}_{(1)}h_{ab}(X)$ can be recognized as
the linear perturbation on the background $g_{ab}(X)$.
Further we expand it with respect to $|X|/L$ as
\begin{eqnarray}
g_{ab}(X) &=&
{}^{(0)}_{(0)}h_{ab}(X)+{1\over L}{}^{(1)}_{(0)}h_{ab}(X)
+{1\over L^2}{}^{(2)}_{(0)}h_{ab}(X)+\cdots\,,
\nonumber \\
Gm {}_{(1)}h_{ab}(X) &=&
Gm{}_{(1)}^{(0)}h_{ab}(X)+{Gm\over L}{}_{(1)}^{(1)}h_{ab}(X)
+{Gm\over L^2}{}_{(1)}^{(2)}h_{ab}(X)+\cdots\,,
\nonumber \\
(Gm)^2 {}_{(2)}h_{ab}(X) &=&
(Gm)^2{}_{(2)}^{(0)}h_{ab}(X)+{(Gm)^2\over L}
{}_{(2)}^{(1)}h_{ab}(X)
\nonumber\\
&&\qquad+{(Gm)^2\over L^2}{}_{(2)}^{(2)}h_{ab}(X) +\cdots\,.
\label{eq:ext}
\end{eqnarray}
As before,
\begin{eqnarray}
{}_{(n)}^{(m)}h_{ab} \sim |X|^{m-n}.
\label{eq:extpower}
\end{eqnarray}
For brevity, we call ${}_{(n)}^{(m)}h_{ab}$ or
${}_{(n)}^{(m)}H_{ab}$ the $({}^{m}_{n})$-component and
the matching condition for them the $({}^{m}_{n})$ matching.
In the matching region ($|X| \sim (GmL)^{1/2}$), the
$({}^{m}_{n})$-component is of $O\left((Gm/L)^{(m+n)/2}\right)$.
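This power counting can be verified directly: the $({}^{m}_{n})$-component has magnitude $(Gm)^n|X|^{m-n}/L^m$, which at $|X|=(GmL)^{1/2}$ reduces to $(Gm/L)^{(m+n)/2}$. A minimal numerical sketch (the values of $Gm$ and $L$ below are arbitrary illustrative choices):

```python
import math

# Illustrative values only; any 0 < Gm << L works.
Gm, L = 1.0e-3, 10.0
X = math.sqrt(Gm * L)          # matching radius |X| ~ (Gm L)^{1/2}

for m in range(4):
    for n in range(4):
        term = Gm**n / L**m * X**(m - n)      # size of the (m,n)-component
        expected = (Gm / L)**((m + n) / 2)    # claimed order (Gm/L)^{(m+n)/2}
        assert abs(term / expected - 1.0) < 1e-12
```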
The matching condition requires that all the corresponding terms
in Eqs.~(\ref{eq:bh}) and (\ref{eq:ext}) should be identical.
Then the matching condition is given by
equating the terms of the same power
in $|X|$ in both schemes to desired accuracy.
Thus the condition for the $({}^{m}_{n})$ matching is
\begin{equation}
\sum_{m'-n'=m-n \atop m'\le m} {(Gm)^{n'} \over L^{m'} }
{}_{(n')}^{(m')}h_{ab} =
\sum_{m'-n'=m-n \atop m'\le m} {(Gm)^{n'} \over L^{m'} }
{}_{(n')}^{(m')}H_{ab}
+O\left({(Gm)^{n+1} \over L^{m+1} }|X|^{(m-n)}\right).
\end{equation}
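As a concrete illustration, consider the $({}^{0}_{1})$ matching for a monopole particle, with the internal metric written in isotropic-type coordinates (an assumption made only for this example). Expanding the isotropic Schwarzschild metric in powers of $Gm/|X|$ gives the leading internal terms, which must agree with the $Gm/|X|$ part of the point-particle perturbation in the external scheme:

```latex
\begin{eqnarray}
{}_{(1)}^{(0)}H_{00}={2\over |X|}\,,\qquad
{}_{(1)}^{(0)}H_{0i}=0\,,\qquad
{}_{(1)}^{(0)}H_{ij}={2\over |X|}\,\delta_{ij}\,,
\nonumber\\
Gm\,{}_{(1)}^{(0)}h_{ab}=Gm\,{}_{(1)}^{(0)}H_{ab}
+O\left({(Gm)^2\over L}\,|X|^{-1}\right)\,.
\nonumber
\end{eqnarray}
```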
\section{External Scheme}
\label{sec:minoExt}
As we assume that the gravitational radius of the particle, $Gm$, is
small compared with the length scale of the background curvature, $L$,
we approximate the deviation of the full metric from the background,
$\delta g_{\mu\nu}$, by the linear perturbation induced by a point-like
particle, $h_{\mu\nu}$, in the whole spacetime region
except for the vicinity of the world line of the particle.
The calculation is performed in an analogous manner
to the case of the scalar and vector perturbations
developed by DeWitt and Brehme\cite{DeWitt}.
We take a Green function approach to
study the linear perturbation of the metric generated by a point
source. In order to calculate the tensor Green function
in a background covariant manner,
we begin by introducing the concept of bi-tensors.
\subsection{Bi-tensor formalism}
Bi-tensors are tensors which depend on
two distinct spacetime points, say, $x$ and $z$,
so that they can have two types of indices.
The simplest example is given by a direct product of
tensors at the points $x$ and $z$ as
\begin{eqnarray}
A^{\mu\alpha}(x,z)=B^\mu(x)C^\alpha(z) \,.
\end{eqnarray}
In what follows, we use $x$ for a field point and $z$ for a point on
the particle trajectory, and assign the letters $\alpha$, $\beta$,
$\gamma$, $\delta$, $\epsilon$, $\zeta$, $\eta$ for the tensor indices
of $z$ and $\mu$, $\nu$, $\xi$, $\rho$, $\sigma$ for $x$.
Basic bi-tensors used in our calculations are
half the squared geodetic interval $\sigma (x,z)$,
\begin{eqnarray}
&&\sigma(x,z)={1\over 2} g^{\mu\nu}(x)\sigma_{;\mu}(x,z)\sigma_{;\nu}(x,z)=
{1\over 2} g^{\alpha\beta}(z)\sigma_{;\alpha}(x,z)\sigma_{;\beta}(x,z) \,,
\nonumber \\
&& \lim_{x\rightarrow z}\sigma (x,z)_{;\mu} =
\lim_{x\rightarrow z}\sigma (x,z)_{;\alpha} = 0 \,,
\label{111db}
\end{eqnarray}
and the geodetic parallel displacement bi-vector,
\begin{eqnarray}
&&\bar g_{\mu\alpha;\nu}(x,z)g^{\nu\sigma}(x) \sigma_{;\sigma}(x,z) =0,
\quad
\bar g_{\mu\alpha;\beta}(x,z)g^{\beta\gamma}(z)
\sigma_{;\gamma}(x,z) =0,
\nonumber\\
&&\lim_{x\rightarrow z}\bar g_{\mu}{}^{\alpha} (x,z)
= \delta_{\mu}{}^{\alpha}.
\label{131db}
\end{eqnarray}
These are used to expand bi-tensors around the orbit of a particle.
For example, we have
\begin{eqnarray}
A^\alpha(x,z) &=& \lim_{x'\rightarrow z}
\biggl(A^\alpha(x',z)-\sigma_{;\mu'}(x,x')A^{\alpha;\mu'}(x',z)
+O(\epsilon^2)\biggr) \,,
\\
B^\mu(x) &=& \bar g^\mu{}_\alpha(x,z)
\biggl(B^\alpha(z)-\sigma_{;\beta}(x,z)B^{\alpha;\beta}(z)
+O(\epsilon^2)\biggr) \,,
\end{eqnarray}
for a small geodetic interval between $x$ and $z$,
where $\epsilon = \sqrt{2|\sigma(x,z)|}$.
These relations can be verified by taking the $x\to z$ limit
of their repeated derivatives.
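As an elementary check, in flat spacetime one has $\sigma(x,z)=\frac{1}{2}\eta_{ab}(x-z)^a(x-z)^b$ in closed form, and the defining identity (\ref{111db}) can be confirmed by finite differences. The signature $(-,+,+,+)$ and the sample points below are illustrative choices:

```python
import math

eta = [-1.0, 1.0, 1.0, 1.0]   # Minkowski metric, signature (-,+,+,+)

def sigma(x, z):
    # half the squared geodetic interval in flat spacetime
    return 0.5 * sum(e * (xi - zi)**2 for e, xi, zi in zip(eta, x, z))

x = [0.3, 1.1, -0.4, 0.7]     # sample field point (illustrative values)
z = [0.0, 0.2, 0.5, -0.1]     # sample orbit point
h = 1e-6

# finite-difference sigma_{;mu}(x,z), the derivative with respect to x^mu
grad = []
for mu in range(4):
    xp, xm = list(x), list(x)
    xp[mu] += h
    xm[mu] -= h
    grad.append((sigma(xp, z) - sigma(xm, z)) / (2 * h))

# identity (111db): sigma = (1/2) g^{mu nu} sigma_{;mu} sigma_{;nu}
lhs = 0.5 * sum(grad[mu]**2 * eta[mu] for mu in range(4))  # eta^{-1} = eta
assert abs(lhs - sigma(x, z)) < 1e-6
```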
By evaluating the repeated derivatives of Eqs.~(\ref{111db}) and
(\ref{131db}) in the coincidence limit $x\rightarrow z$,
we obtain some useful formulas for expansion in $\epsilon$:
\begin{eqnarray}
\sigma_{;\alpha\beta}(x,z) &=& g_{\alpha\beta}(z)- {1\over 3}
R_{\alpha}{}^{\gamma}{}_{\beta}{}^{\delta}(z)
\sigma_{;\gamma}(x,z) \sigma_{;\delta}(x,z) +O(\epsilon^3) \,,
\label{128db} \\
\sigma_{;\mu\beta}(x,z) &=& -\bar g_{\mu}{}^{\alpha}(x,z)
\left(g_{\alpha\beta}(z)+{1\over 6}
R_{\alpha\gamma\beta\delta}(z)
\sigma^{;\gamma}(x,z) \sigma^{;\delta}(x,z)\right)
\cr &&
+O(\epsilon^3) \,,
\label{173db} \\
\bar g^{\mu\alpha}{}_{;\beta}(x,z) & = & -{1\over 2}
\bar g^{\mu\gamma}(x,z)R^{\alpha}{}_{\gamma\beta\delta}(z)
\sigma^{;\delta}(x,z) +O(\epsilon^2) \,, \cr
\bar g^{\mu\alpha}{}_{;\nu}(x,z) & = & -{1\over 2}
\bar g^{\mu\beta}(x,z) \bar g_{\nu}{}^{\gamma}(x,z)
R^{\alpha}{}_{\beta\gamma\delta}(z)
\sigma^{;\delta}(x,z) +O(\epsilon^2) \,.
\label{140db}
\end{eqnarray}
We also introduce the van Vleck-Morette determinant, $\Delta(x,z)$:
\begin{equation}
\Delta(x,z):=|\bar g^{\alpha\mu}(z,x)\sigma_{;\mu\beta}(x,z)|,
\label{Deltadef}
\end{equation}
which appears in the expression of the Green function later.
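In flat spacetime, the definition (\ref{Deltadef}) gives $\Delta(x,z)=1$ identically, since there $\sigma_{;\mu\beta}=-\eta_{\mu\beta}$ and $\bar g^{\alpha\mu}=\eta^{\alpha\mu}$. A finite-difference sketch of this fact (the sample points are illustrative choices):

```python
import math

eta = [-1.0, 1.0, 1.0, 1.0]   # Minkowski metric; eta^{-1} = eta

def sigma(x, z):
    return 0.5 * sum(e * (xi - zi)**2 for e, xi, zi in zip(eta, x, z))

def det(M):
    # determinant by Laplace expansion (small matrices only)
    if len(M) == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

x = [0.4, 1.0, -0.2, 0.6]
z = [0.0, 0.3, 0.5, -0.1]
h = 1e-4

# mixed second derivative sigma_{;mu beta} = d^2 sigma / dx^mu dz^beta
D = [[0.0] * 4 for _ in range(4)]
for mu in range(4):
    for b in range(4):
        vals = []
        for sx, sz in [(h, h), (h, -h), (-h, h), (-h, -h)]:
            xs, zs = list(x), list(z)
            xs[mu] += sx
            zs[b] += sz
            vals.append(sigma(xs, zs))
        D[mu][b] = (vals[0] - vals[1] - vals[2] + vals[3]) / (4 * h * h)

# Delta = |gbar^{alpha mu} sigma_{;mu beta}|; in flat spacetime gbar -> eta
M = [[eta[a] * D[a][b] for b in range(4)] for a in range(4)]
assert abs(det(M) - 1.0) < 1e-6
```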
\subsection{Tensor Green function}
We consider the linearized Einstein equations.
We introduce the trace-reversed metric perturbation,
\begin{eqnarray}
\psi_{\mu\nu}(x) &=& h_{\mu\nu}(x) -{1\over 2}g_{\mu\nu}(x) h(x) \,,
\end{eqnarray}
and set the harmonic gauge condition,
\begin{eqnarray}
\psi^{\mu\nu}{}_{;\nu}(x)&=&0 \,,
\end{eqnarray}
where $h(x)$ and $\psi(x)$ are
the trace of $h^{\mu\nu}(x)$ and that of $\psi^{\mu\nu}(x)$, respectively,
and the semicolon means the covariant derivative with respect to the
background metric. In this gauge, the linearized Einstein equations become
\begin{eqnarray}
-{1\over 2}\psi^{\mu\nu;\xi}{}_\xi(x)
-R^\mu{}_\xi{}^\nu{}_\rho (x) \psi^{\xi\rho}(x)
= 8 \pi G T^{\mu\nu}(x) \,.
\end{eqnarray}
Thus we define the tensor Green function $G^{\mu\nu\alpha\beta}(x,z)$
which satisfies
\begin{eqnarray}
G^{\mu\nu\alpha\beta;\xi}{}_{;\xi}(x,z)
&&+2R^\mu{}_\xi{}^\nu{}_\rho(x) G^{\xi\rho\alpha\beta}(x,z)
\cr
&&=-2\bar g^{\alpha(\mu}(x,z)\bar g^{\nu)\beta}(x,z)
{\delta^{(4)}(z-x) \over \sqrt{-g}} \,,
\label{eq:green}
\end{eqnarray}
where $g$ is the determinant of the metric $g_{\mu\nu}(x)$.
First we consider
the elementary solution $G_{*}^{\mu\nu\alpha\beta}(x,z)$ which satisfies
Eq.~(\ref{eq:green}) except at the $\sigma(x,z)\rightarrow 0$ limit and
takes the Hadamard form,
\begin{eqnarray}
G_*^{\mu\nu\alpha\beta}(x,z)=
{1\over (2\pi)^2}\Biggl({u^{\mu\nu\alpha\beta}(x,z)\over\sigma(x,z)}
&&+v^{\mu\nu\alpha\beta}(x,z)\log|\sigma(x,z)|
\cr &&
+w^{\mu\nu\alpha\beta}(x,z)\Biggr) \,.
\label{eq:Hadamard}
\end{eqnarray}
The bi-tensors $u^{\mu\nu\alpha\beta}(x,z)$, $v^{\mu\nu\alpha\beta}(x,z)$
and $w^{\mu\nu\alpha\beta}(x,z)$ are regular in the
$\sigma(x,z)\rightarrow 0$ limit and $u^{\mu\nu\alpha\beta}(x,z)$
satisfies the normalization condition,
\begin{eqnarray}
\lim_{x\rightarrow z}u^{\mu\nu\alpha\beta}(x,z)=
\lim_{x\rightarrow z}2\bar g^{\alpha(\mu}(x,z)\bar g^{\nu)\beta}(x,z) \,.
\label{eq:unorm}
\end{eqnarray}
If we put the form (\ref{eq:Hadamard}) into the left hand side of
Eq.~(\ref{eq:green}), the terms can be classified into three parts:
the terms which manifestly contain the factor $1/\sigma^2(x,z)$,
the terms which contain $\log|\sigma(x,z)|$, and
the remaining terms, which have no singular behavior in the
$\sigma(x,z)\rightarrow 0$ limit.
Since the form (\ref{eq:Hadamard}) is redundant,
we can require these three parts to vanish separately:
\begin{eqnarray}
&& \left(2u^{\mu\nu\alpha\beta;\xi}(x,z)
-{\Delta^{;\xi}(x,z)\over\Delta(x,z)}u^{\mu\nu\alpha\beta}(x,z)\right)
\sigma_{;\xi}(x,z) = 0 \,,
\label{eq:ueq}
\\
&& v^{\mu\nu\alpha\beta;\xi}{}_{;\xi}(x,z)
+ 2 R^\mu{}_\xi{}^\nu{}_\rho(x) v^{\xi\rho\alpha\beta}(x,z) = 0 \,,
\label{eq:veq}
\\
&& 2v^{\mu\nu\alpha\beta}(x,z)
+\left(2v^{\mu\nu\alpha\beta;\xi}(x,z)
-{\Delta^{;\xi}(x,z)\over\Delta(x,z)}v^{\mu\nu\alpha\beta}(x,z)\right)
\sigma_{;\xi}(x,z)
\nonumber \\
&& \qquad \quad
+u^{\mu\nu\alpha\beta;\xi}{}_{;\xi}(x,z)
+2 R^\mu{}_\xi{}^\nu{}_\rho(x) u^{\xi\rho\alpha\beta}(x,z)
\nonumber \\
&& \qquad \quad
+\left(w^{\mu\nu\alpha\beta;\xi}{}_{;\xi}(x,z)
+2 R^\mu{}_\xi{}^\nu{}_\rho(x) w^{\xi\rho\alpha\beta}(x,z)\right)\sigma(x,z)
= 0 \,.
\label{eq:weq}
\end{eqnarray}
Equation~(\ref{eq:ueq}) is solved
with the normalization (\ref{eq:unorm}) as
\begin{eqnarray}
u^{\mu\nu\alpha\beta}(x,z)=
2\bar g^{\alpha(\mu}(x,z)\bar g^{\nu)\beta}(x,z)\sqrt{\Delta(x,z)} \,.
\label{eq:u}
\end{eqnarray}
The bi-tensors $v^{\mu\nu\alpha\beta}(x,z)$ and
$w^{\mu\nu\alpha\beta}(x,z)$
are to be determined by solving Eqs.~(\ref{eq:veq}) and (\ref{eq:weq}).
The bi-tensor $w^{\mu\nu\alpha\beta}(x,z)$ is not needed in the following,
but the bi-tensor $v^{\mu\nu\alpha\beta}(x,z)$ plays an important role in
the discussion below.
Although it is difficult to find
the solution for $v^{\mu\nu\alpha\beta}(x,z)$ in an arbitrary
background spacetime,
its explicit form is not required for the subsequent discussion.
However, it is important to note that $v^{\mu\nu\alpha\beta}(x,z)$ is uniquely
determined. The reason is as follows.
{}From Eq.~(\ref{eq:veq}), one finds that it satisfies a hyperbolic
equation. Hence the question is whether its Cauchy data are unique.
First we note the coincidence limit of Eq.~(\ref{eq:weq}), which gives
\begin{eqnarray}
\lim_{x\rightarrow z}v^{\mu\nu\alpha\beta}(x,z)
=\lim_{x\rightarrow z}2\bar g^\alpha{}_{(\xi} (z,x)
\bar g^{\beta}{}_{\rho)}(z,x)
R^{\mu\xi\nu\rho}(x).
\label{eq:ulim}
\end{eqnarray}
Then taking the null limit $\sigma(x,z)\rightarrow 0$ of
Eq.~(\ref{eq:weq}),
we obtain the first order differential equation for
$v^{\mu\nu\alpha\beta}(x,z)$ which can be solved along a null geodesic.
Thus this equation with the boundary condition (\ref{eq:ulim})
uniquely determines $v^{\mu\nu\alpha\beta}(x,z)$ on the light cone
emanating from $z$.
Therefore the hyperbolic equation (\ref{eq:veq}) has a unique
solution.
We also mention that $v^{\mu\nu\alpha\beta}(x,z)$ is divergence free,
\begin{equation}
v^{\mu\nu\alpha\beta}{}_{;\nu}(x,z)=0.
\label{divv}
\end{equation}
To see this, we note that the harmonic gauge condition on the Green
function requires
\begin{equation}
\lim_{\sigma\rightarrow 0} v^{\mu\nu\alpha\beta}{}_{;\nu}(x,z)=0.
\end{equation}
We also see that the equation for
$v^{\mu\nu\alpha\beta}{}_{;\nu}(x,z)$ follows from Eq.~(\ref{eq:veq}),
\begin{equation}
\left[v^{\mu\nu\alpha\beta}{}_{;\nu}(x,z)\right]{}^{;\xi}{}_{;\xi}=0,
\end{equation}
where we have used the fact $R^{\mu\xi\nu\rho}{}_{;\rho}=0$,
which is proved by contracting the Bianchi identities
for the vacuum case. Thus we conclude
that Eq.~(\ref{divv}) holds everywhere.
The Feynman propagator $G_F^{\mu\nu\alpha\beta}(x,z)$ can be derived
from the elementary solution $G_*^{\mu\nu\alpha\beta}(x,z)$
by the $i\epsilon$-prescription:
\begin{eqnarray}
G_F^{\mu\nu\alpha\beta}(x,z)
={1\over (2\pi)^2}&&
\Biggl({u^{\mu\nu\alpha\beta}(x,z)\over\sigma(x,z)+i\epsilon}
\cr&&
+v^{\mu\nu\alpha\beta}(x,z)\log(\sigma(x,z)+i\epsilon)
+w^{\mu\nu\alpha\beta}(x,z)\Biggr).
\end{eqnarray}
The imaginary part of the Feynman propagator $G_F^{\mu\nu\alpha\beta}(x,z)$
gives the symmetric Green function $\bar G^{\mu\nu\alpha\beta}(x,z)$,
from which we can obtain
the retarded Green function $G_{Ret}^{\mu\nu\alpha\beta}(x,z)$,
and the advanced Green function $G_{Adv}^{\mu\nu\alpha\beta}(x,z)$ as
\begin{eqnarray}
\bar G^{\mu\nu\alpha\beta}(x,z)
&=&-{1\over 2}{\rm Im}\left[G_F^{\mu\nu\alpha\beta}(x,z)\right] \nonumber \\
&=&{1\over 8\pi}\left[u^{\mu\nu\alpha\beta}(x,z)\delta(\sigma(x,z))
-v^{\mu\nu\alpha\beta}(x,z)\theta(-\sigma(x,z))\right], \\
G_{Ret}^{\mu\nu\alpha\beta}(x,z)
&=&2\theta[\Sigma(x),z]\bar G^{\mu\nu\alpha\beta}(x,z), \\
G_{Adv}^{\mu\nu\alpha\beta}(x,z)
&=&2\theta[z,\Sigma(x)]\bar G^{\mu\nu\alpha\beta}(x,z),
\end{eqnarray}
where
$\Sigma(x)$ is an arbitrary space-like hypersurface
containing $x$, and $\theta[\Sigma(x),z]=1
-\theta[z,\Sigma(x)]$ is equal to $1$ when $z$ lies in the past
of $\Sigma(x)$ and vanishes when $z$ lies in the future.
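As a check of these formulas, consider the flat-spacetime limit, where $\Delta=1$, $v^{\mu\nu\alpha\beta}=0$, $w^{\mu\nu\alpha\beta}$ may be taken to vanish, and hence $u^{\mu\nu\alpha\beta}=2\eta^{\alpha(\mu}\eta^{\nu)\beta}$. Writing $\sigma=\frac{1}{2}\bigl(r^2-(t-t')^2\bigr)$ with $r=|\vec x-\vec z|$, one has $\delta(\sigma)=\bigl[\delta(t-t'-r)+\delta(t-t'+r)\bigr]/r$, and the retarded Green function reduces to the familiar form,

```latex
\begin{eqnarray}
G_{Ret}^{\mu\nu\alpha\beta}(x,z)
={1\over 4\pi}\,u^{\mu\nu\alpha\beta}\,
\theta[\Sigma(x),z]\,\delta\bigl(\sigma(x,z)\bigr)
={\eta^{\alpha(\mu}\eta^{\nu)\beta}\over 2\pi}\,
{\delta(t-t'-r)\over r}\,,
\nonumber
\end{eqnarray}
```

which is the usual $1/r$ retarded solution of the flat-space wave equation obtained from Eq.~(\ref{eq:green}) with the Riemann term dropped.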
\subsection{Metric perturbation}
Using the retarded Green function obtained above,
we compute the trace-reversed metric perturbation $\psi^{\mu\nu}(x)$
induced by a point-like particle. We assume the energy-momentum tensor
of the form,
\begin{eqnarray}
T^{\mu\nu} &=& T^{\mu\nu}_{(mono)} +T^{\mu\nu}_{(spin)} \,,
\label{eq:point}
\\
&& T^{\mu\nu}_{(mono)}(x) = m\int dT v^\mu(x,T) v^\nu(x,T)
{\delta^{(4)}(x-z(T))\over \sqrt{-g}} \,,
\\
&& T^{\mu\nu}_{(spin)} = -m \int dT \nabla_\xi\left(S^{\xi(\mu}(x,T)
v^{\nu)}(x,T){\delta^{(4)}(x-z(T))\over \sqrt{-g}}\right) \,,
\label{eq:spinTmunu}
\\
&& \qquad v^\mu(x,T)=\bar g^\mu{}_\alpha(x,z(T))\dot z^\alpha(T) \,,
\\
&& \qquad S^{\mu\nu}(x,T)=\bar g^\mu{}_\alpha(x,z(T))
\bar g^\nu{}_\beta(x,z(T))S^{\alpha\beta}(T) \,,
\end{eqnarray}
where $\dot z^\alpha(T)=d z^\alpha/dT$, $m$ is the mass of the particle
and $S^{\alpha\beta}(T)$ is an anti-symmetric tensor representing the
specific spin of the particle per unit mass. We call it the spin
tensor of the particle and assume that it satisfies the center of mass
condition,
\begin{eqnarray}
S_{\alpha\beta}(T)\dot z^\beta(T)=0 \,.
\end{eqnarray}
In Chapter 1, section 11, we have given the energy-momentum tensor of a
spinning test particle.\footnote{Note that $S_{\alpha\beta}$ there
corresponds to $m S_{\alpha\beta}$ here.}
There the four-velocity of the orbit $v^\alpha=\dot z^\alpha$ is
distinguished from the specific four-momentum of the particle
$u^\alpha=p^\alpha/m$. The difference is $O(S^2/L^2)$ where $S$ is the
magnitude of the spin tensor
$S:=\sqrt{S_{\alpha\beta}S^{\alpha\beta}/2}$. Here we ignore this
difference for the following reason.
Since the particle is assumed to represent a black hole,
$m$ will be identified with the black hole mass and $S$
with the Kerr spin parameter $a$. Therefore $S$ is assumed to be of
order $Gm$, hence the difference between $v^\alpha$ and $u^\alpha$ is
$O((Gm/L)^2)$. Since we are interested in the radiation reaction of
$O(Gm/L^2)$ to the equations of motion, we may consistently neglect this
difference.
At this point, we must comment on the reason why
we may assume a point-like particle as the source.
Even in the linear perturbation, in order
to generate a general gravitational field
in the external zone, we need to consider a source
with arbitrary higher multipole moments.\footnote{A distributional form
of the energy-momentum tensor with arbitrary higher multipole moments
was discussed by Dixon\cite{Dixon}.}
However, the $\ell$-th moment of the gravitational field will be
$O((Gm/|X|)^{\ell+1})$ in the matching region if the particle represents
a black hole. As we shall see in the following discussions, we find it
is not necessary to consider the matchings at $O\bigl((Gm)^3\bigr)$ or
higher in order to derive the equations of motion with the reaction
force of $O(Gm/L^2)$. Hence the moments higher than the spin can be
consistently neglected.
We should also note that the metric perturbation induced by
$T^{\mu\nu}_{(spin)}$ is $O\bigl((Gm)^2\bigr)$ for $S=O(Gm)$.
At first glance, one might think that this implies the necessity of the
second order perturbation theory if we are to incorporate the spin
effect of the particle in the expansion with respect to $Gm$ in a
consistent way. However, provided that the construction of the metric by
the matched asymptotic expansion is consistent, the second order
perturbation theory turns out to be unnecessary. In fact, we shall find
that the spin-induced metric perturbation of $O\bigl((Gm)^2\bigr)$ gives
rise to the leading order spin-curvature coupling term of $O(Gm/L^2)$ in
the equations of motion, while the spin-independent metric perturbation
of $O\bigl((Gm)^2\bigr)$ does not contribute to the reaction force term
at $O(Gm/L^2)$.
Without any further approximation,
the metric perturbation due to the point-like particle becomes
\begin{eqnarray}
\psi^{\mu\nu}(x)&=&
2Gm\Biggl(\biggl[
{1\over\dot\sigma(x,z(T))}u^{\mu\nu}{}_{\alpha\beta}(x,z(T))
\dot z^\alpha(T) \dot z^\beta(T)
\nonumber \\ && \qquad \quad
+{\ddot\sigma(x,z(T))\over\dot\sigma^3(x,z(T))}
u^{\mu\nu}{}_{\alpha\beta}(x,z(T))\sigma_{;\gamma}(x,z(T))
S^{\gamma\alpha}(T)\dot z^\beta(T)
\nonumber \\ && \qquad \quad
+{1\over\dot\sigma(x,z(T))}u^{\mu\nu}{}_{\alpha\beta;\gamma}(x,z(T))
S^{\gamma\alpha}(T) \dot z^\beta(T)
\nonumber \\ && \qquad \quad
-{1\over\dot\sigma^2(x,z(T))}{d\over dT}\left(
u^{\mu\nu}{}_{\alpha\beta}(x,z(T))\sigma_{;\gamma}(x,z(T))
S^{\gamma\alpha}(T) \dot z^\beta(T) \right)
\nonumber \\ && \qquad \quad
+{1\over\dot\sigma(x,z(T))}v^{\mu\nu}{}_{\alpha\beta}(x,z(T))
\sigma_{;\gamma}(x,z(T)) S^{\gamma\alpha}(T)\dot z^\beta(T)
\biggr]_{T=T_{Ret}(x)}
\nonumber \\ && \qquad \quad
-\int^{T_{Ret}(x)}_{-\infty}
dT \biggl(v^{\mu\nu}{}_{\alpha\beta}(x,z(T))
\dot z^\alpha(T)\dot z^\beta(T)
\nonumber \\ && \qquad \qquad \qquad \quad
+v^{\mu\nu}{}_{\alpha\beta;\gamma}(x,z(T))
S^{\gamma\alpha}(T)\dot z^\beta(T)\biggr)
\Biggr) \,,
\label{eq:metper}
\end{eqnarray}
where $T_{Ret}(x)$ is the retarded time
of the particle and is a scalar function which is determined by
\begin{eqnarray}
\sigma\left(x,z(T_{Ret})\right)=0 \,, \quad
\theta\left(\Sigma(x),z(T_{Ret})\right)=1 \,.
\end{eqnarray}
Since the retarded time $T_{Ret}(x)$ is not convenient for specifying
the field point $x$ around the particle trajectory in the following
computations, we introduce a new specification of $x$ as follows.
We foliate the spacetime with spacelike 3-surfaces
perpendicular to the particle trajectory.
Specifically, the 3-surfaces are defined as a one-parameter
family of $T$ by the relation,
$\sigma_{;\alpha}(x,z(T))\dot z^\alpha(T)=0$.
We denote the value of $T$ of the 3-surface containing
the point $x$ by $T_x$.
That is
\begin{eqnarray}
\sigma_{;\alpha}(x,z(T_x))\dot z^\alpha(T_x)=0 \,,
\label{eq:foli}
\end{eqnarray}
where we have introduced the notation,
\begin{eqnarray}
Q_{;\alpha}(x,z(T_x)) & := & [Q_{;\alpha}(x,z)]_{z=z(T_x)} \,,
\cr
Q_{;\mu}(x,z(T_x)) & := & [Q_{;\mu}(x,z)]_{z=z(T_x)} \,.
\end{eqnarray}
Note that
\begin{equation}
\left[Q(x,z(T_x))\right]_{;\mu} = Q_{;\mu}(x,z(T_x))
+Q_{;\alpha}(x,z(T_x)) \dot z^{\alpha}(T_x) T_{x;\mu} \,.
\end{equation}
We use $\sigma_{;\alpha}(x,z(T_x))$ to distinguish the spatial points on
the same 3-surface, and denote the spatial distance from $z(T_x)$ to $x$
by
\begin{equation}
\epsilon(x):=\sqrt{2\sigma(x,z(T_x))} \,.
\end{equation}
In the matching region, we have
\begin{eqnarray}
Gm \ll \epsilon(x) \ll L.
\end{eqnarray}
To obtain the external metric in the matching region, we first
consider the $\epsilon$-expansion of the time retardation,
$\delta_{Ret}(x)$,
\begin{eqnarray}
\delta_{Ret}(x):=T_{Ret}(x)-T_x \,.
\end{eqnarray}
It is given by expanding the retarded condition
$\sigma(x,z(T_{Ret}(x)))=0$ around $T=T_x$ as
\begin{eqnarray}
0 &=& \left[\sigma(x,z(T))\right]_{T=T_{Ret}(x)}
\nonumber \\
&=&\sigma(x,z(T_x))+\dot\sigma(x,z(T_x))\delta_{Ret}(x)
\nonumber \\ && \qquad
+{1\over 2}\ddot{\sigma}(x,z(T_x))\delta_{Ret}^2(x)
+{1\over 3!}\stackrel{...}{\sigma}(x,z(T_x))\delta_{Ret}^3(x)
\nonumber \\ && \qquad
+{1\over 4!}\stackrel{....}{\sigma}(x,z(T_x))\delta_{Ret}^4(x)
+O(\epsilon^5) \,.
\end{eqnarray}
Using Eqs.~(\ref{128db}), (\ref{eq:foli}),
and the normalization condition, $(dz/dT)^2 = -1+ O(Gm/L)$, which will
be proved to be consistent later, each term in the above is computed as
\begin{eqnarray}
\sigma(x,z(T_x))&=&{1\over 2}\epsilon^2(x) \,,
\\
\dot\sigma(x,z(T_x))
&=&\sigma_{;\alpha}(x,z(T_x))\dot z^{\alpha}(T_x)=0 \,,
\label{eq:Afoli}\\
\ddot\sigma(x,z(T_x))\,&=:&-\kappa^2(x)
\nonumber\\
&=&\sigma_{;\alpha\beta}(x,z(T_x))\dot z^\alpha(T_x)\dot z^\beta(T_x)
+\sigma_{;\alpha}(x,z(T_x))\ddot z^{\alpha}(T_x)
\nonumber \\
& = &
\left(g_{\alpha\beta}(z(T_x))-{1\over 3}
R_{\alpha}{}^{\gamma}{}_{\beta}{}^{\delta}(z(T_x))
\sigma_{;\gamma}(x,z(T_x)) \sigma_{;\delta}(x,z(T_x))\right)
\dot z^{\alpha}(T_x)\dot z^{\beta}(T_x)
\nonumber \\
&& \quad +
\sigma_{;\alpha}(x,z(T_x)) \ddot z^{\alpha}(T_x)
+O(\epsilon^3) \,,
\\
\stackrel{...}{\sigma}(x,z(T_x))
&=&\sigma_{;\alpha}(x,z(T_x))\stackrel{...}{z}{}^\alpha(T_x)
+O(\epsilon^2) \,,
\\
\stackrel{....}{\sigma}(x,z(T_x))
&=& -g_{\alpha\beta}(z(T_x))
\ddot z^\alpha(T_x)\ddot z^\beta(T_x)+O(\epsilon) \,,
\end{eqnarray}
where we have introduced $\kappa(x)$ to denote
$\sqrt{-\ddot\sigma(x,z(T_x))}$.
{}From these, we obtain
\begin{eqnarray}
\delta_{Ret}(x)&=&
-\epsilon(x)\kappa^{-1}(x)\biggl(1
-{1\over 6}\epsilon(x)\kappa^{-3}(x)
\stackrel{...}{z}{}^{\alpha}(T_x)\sigma_{;\alpha}(x,z(T_x))
\nonumber \\ && \qquad \qquad
-{1\over 24}\epsilon^2(x)\kappa^{-4}(x)\ddot z^2(T_x)\biggr)
+O(\epsilon^4) \,.
\label{eq:retard}
\end{eqnarray}
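As a consistency check (not part of the original derivation),
Eq.~(\ref{eq:retard}) can be substituted back into the expansion of
$\sigma(x,z(T_{Ret}))$ and the residual verified to vanish through
$O(\epsilon^4)$. In the symbolic sketch below, the hypothetical symbol
\texttt{t3} encodes $\stackrel{...}{z}{}^\alpha\sigma_{;\alpha}
=\epsilon\,t_3$ to reflect its $O(\epsilon)$ scaling, and \texttt{zdd2}
stands for $\ddot z^2$:

```python
import sympy as sp

# eps = spatial distance, kappa = sqrt(-sigma-ddot);
# t3 encodes zdddot^alpha sigma_;alpha = eps*t3 (it is O(eps)),
# zdd2 stands for zddot^2, so sigma-4dot = -zdd2 at leading order.
eps, kappa, t3, zdd2 = sp.symbols('epsilon kappa t3 zdd2', positive=True)
s3 = eps*t3
s4 = -zdd2

# Claimed retarded-time shift, Eq. (eq:retard):
delta = -eps/kappa*(1 - sp.Rational(1, 6)*eps*kappa**(-3)*s3
                      - sp.Rational(1, 24)*eps**2*kappa**(-4)*zdd2)

# 0 = sigma + sigma-dot*delta + (1/2)*sigma-ddot*delta^2 + ...
# with sigma = eps^2/2, sigma-dot = 0, sigma-ddot = -kappa^2:
F = (sp.Rational(1, 2)*eps**2
     - sp.Rational(1, 2)*kappa**2*delta**2
     + s3/6*delta**3 + s4/24*delta**4)

residual = sp.expand(F)
coeffs = [sp.simplify(residual.coeff(eps, k)) for k in range(2, 6)]
print(coeffs)  # -> [0, 0, 0, 0]
```

The coefficients of $\epsilon^2$ through $\epsilon^5$ cancel
identically, so the residual is of higher order, consistent with the
quoted form of $\delta_{Ret}$.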
With the help of Eq.~(\ref{eq:retard}), we then obtain the
expansion of various terms in Eq.~(\ref{eq:metper}).
We have
\begin{eqnarray}
&& \left[{1\over\dot\sigma(x,z(T))}\right]_{T=T_{Ret}(x)}
\nonumber \\ && \qquad
={1\over\epsilon(x)\kappa(x)}\biggl(1
-{1\over 3}\epsilon(x)\stackrel{...}{z}{}^\alpha(T_x)
\sigma_{;\alpha}(x,z(T_x))
-{1\over 8}\epsilon^2(x)\ddot z^2(T_x)
+O(\epsilon^3)\biggr) \,.
\label{eq:sigmadot}
\end{eqnarray}
In order to obtain the expansion of $u^{\mu\nu\,\alpha\beta}(x,z)$ given
by Eq.~(\ref{eq:u}), we also need the following expansions:
\begin{eqnarray}
&& \biggl[\Delta^{1/2}(x,z(T))\biggr]_{T=T_{Ret}(x)}
=1+O(\epsilon^3) \,,
\\
&& \biggl[\bar g_{\mu\alpha}(x,z(T))\biggr]_{T=T_{Ret}(x)}
\nonumber \\ && \qquad
=\bar g_{\mu\alpha}(x,z(T_x))
-{1\over 2}\bar g_\mu{}^\beta(x,z(T_x))
R_{\alpha\beta\gamma\delta}(z(T_x))
\sigma^{;\gamma}(x,z(T_x))\dot z^\delta(T_x)\epsilon(x)
\nonumber \\ && \qquad \qquad
+O(\epsilon^3) \,,
\\
&& \biggl[\dot z^\alpha(T)\biggr]_{T=T_{Ret}(x)}
=\dot z^\alpha(T_x)-\epsilon(x)\kappa^{-1}(x)\ddot z^\alpha(T_x)
+{1\over 2}\epsilon^2(x)\stackrel{...}{z}{}^\alpha(T_x)+O(\epsilon^3) \,.
\label{eq:expansions}
\end{eqnarray}
In the above expressions there appear higher derivatives of $\dot z$,
such as $\ddot z$ and $\stackrel{...}{z}$, where
a dot means the covariant derivative $D/dT$ along the trajectory of
the particle. Since we are considering the case in which the radiation
reaction force is $O(Gm/L^2)$, it is reasonable to assume that each
additional time derivative is suppressed by a factor of $O(1/T_r)$, i.e.,
\begin{eqnarray}
{D^{n+1}z(T)\over dT^{n+1}}\sim {1\over T_r L^{n-1}}
<O\left({\epsilon(x)\over L^{n+1}}\right)\quad(n\ge1),
\end{eqnarray}
where $T_r=O(L^2/(Gm))$ is the reaction time scale.
We shall find that this is consistent with the equations of motion in
the end.
Keeping this fact in mind, and using
Eqs.~(\ref{eq:sigmadot})--(\ref{eq:expansions}), we obtain the
$\epsilon$-expansion of the
trace-reversed metric perturbation, Eq.~(\ref{eq:metper}), as
\begin{equation}
\psi^{\mu\nu}=\psi^{\mu\nu}_{(mono)}+\psi^{\mu\nu}_{(spin)}
+\psi^{\mu\nu}_{(tail)} \,,
\label{eq:metper0}
\end{equation}
where
\begin{eqnarray}
\psi^{\mu\nu}_{(mono)}(x)
&=& 2Gm \bar g^\mu_\alpha \bar g^\nu_\beta
\Biggl({2\over \epsilon}\kappa^{-1}\dot z^\alpha\dot z^\beta
\nonumber \\ && \quad
-4\dot z^{(\alpha}\ddot z^{\beta)}
+2\dot z^\gamma\sigma^{;\delta}\dot z^\epsilon
R_{\gamma\delta\epsilon}{}^{(\alpha}\dot z{}^{\beta)}
-2\epsilon R^\alpha{}_\gamma{}^\beta{}_\delta\dot z^\gamma\dot z^\delta
+O(\epsilon^2)\Biggr),
\\
\psi^{\mu\nu}_{(spin)}(x)
&=& -4Gm \bar g^\mu_\alpha \bar g^\nu_\beta
\Biggl({1\over \epsilon^3}\dot z^{(\alpha} S^{\beta)\gamma}\sigma_{;\gamma}
+O((Gm)\epsilon^0)\Biggr),
\\
\psi^{\mu\nu}_{(tail)}(x)
&=& 2Gm \bar g^\mu_\alpha \bar g^\nu_\beta
\nonumber \\ && \times
\Biggl(-\int^{T_x}_{-\infty}dT'
\biggl(v^{\alpha\beta}{}_{\alpha'\beta'}(z(T_x),z(T'))
\dot z^{\alpha'}(T')\dot z^{\beta'}(T')
\nonumber \\ && \quad
+v^{\alpha\beta}{}_{\alpha'\beta';\gamma'}(z(T_x),z(T'))
S^{\gamma'\alpha'}(T')\dot z^{\beta'}(T')\biggr)
\nonumber \\ && \qquad
+\sigma_{;\gamma}\int^{T_x}_{-\infty}dT'
\biggl(v^{\alpha\beta}{}_{\alpha'\beta'}{}^{;\gamma}
(z(T_x),z(T'))\dot z^{\alpha'}(T')\dot z^{\beta'}(T')
\nonumber \\ && \qquad \quad
+v^{\alpha\beta}{}_{\alpha'\beta';\gamma'}{}^{;\gamma}
(z(T_x),z(T'))S^{\gamma'\alpha'}(T')\dot z^{\beta'}(T')\biggr)
+O(\epsilon^2)\Biggr),
\end{eqnarray}
where $\bar g^\mu_\alpha=\bar g^\mu_\alpha(x,z(T_x))$.
The part $\psi_{(tail)}^{\mu\nu}$ is called the tail term because it is
not due to the direct light cone propagation of waves but due to
multiple curvature scattering of waves as described by the
$v^{\mu\nu\alpha\beta}(x,z)$ term in the Green function.
\subsection{Transformation to the internal coordinates}
In order to write down the external metric in terms of the
internal coordinates,
we consider a coordinate transformation from $x$ to $\{X^a\}$
given in the form,
\begin{eqnarray}
\sigma_{;\alpha}(x,z(T))=-{\cal F}_\alpha(T,X) \label{eq:trans}.
\end{eqnarray}
We restrict our consideration to a coordinate transformation which
satisfies the following requirements.
We assume $X^i=0$ corresponds to the center of the particle,
$x^\alpha=z^\alpha(T)$, hence ${\cal F}_\alpha=0$ at $X^i=0$.
We also assume that the right hand side of Eq.~(\ref{eq:trans}) can
be expanded in positive powers of $X^i$ as
\begin{eqnarray}
{\cal F}_\alpha(T,X)=f_{\alpha i}(T)X^i+{1\over 2}f_{\alpha ij}(T)X^iX^j
+{1\over 3!}f_{\alpha ijk}(T)X^iX^jX^k+\cdots.
\label{eq:calFexpand}
\end{eqnarray}
Although more complicated terms such as
$X^iX^j/|X|$ could in principle appear, we simply ignore them;
this will turn out to be consistent to the order of approximation
considered here. Here $f_{\alpha i_1 \cdots i_n}(T)$ is
totally symmetric in $i_1\cdots i_n$ and is at most of
$O(L^{-(n-1)})$. Using Eqs.~(\ref{128db}) and (\ref{173db}), the total
derivative of Eq.~(\ref{eq:trans}) gives the important relation,
\begin{eqnarray}
\bar g^\alpha{}_\mu(z(T),x)dx^\mu &=&
\Biggl( {dz^\alpha \over dT}(T) +{Df^\alpha{}_i\over dT}(T)X^i
+{1\over 2}{Df^\alpha{}_{ij}\over dT}(T)X^iX^j
\nonumber \\ && \qquad \quad
-{1\over 2}R^\alpha{}_{\beta\gamma\delta}(z(T))
f^\beta{}_i(T){d z^\gamma\over dT}(T)
f^\delta{}_j(T)X^iX^j\Biggr) dT
\nonumber \\ &&
+\Biggl( f^\alpha{}_i+f^\alpha{}_{ij}(T)X^j
+{1\over 2}f^\alpha{}_{ijk}(T)X^jX^k
\nonumber \\ && \qquad \quad
-{1\over 6}R^\alpha{}_{\beta\gamma\delta}(z(T))
f^\beta{}_j(T)f^\gamma{}_i(T)
f^\delta{}_k(T)X^jX^k\Biggr) dX^i
\nonumber \\ && \qquad
+O(|X|^3) \,.
\label{eq:dtrans}
\end{eqnarray}
In the following sections, we write down the external metric
in terms of the internal coordinates in the matching region to obtain
the equations of motion.
\section{Equations of motion for a monopole particle}
\label{sec:deri1}
In this section, we adopt the method (1)
mentioned in section \ref{sec:intro} to derive the equations of
motion. We restrict our consideration to the case of a monopole
particle; this restriction is necessary because we use a well-established
method to decompose the metric in the internal scheme in terms of the
tensor harmonics.
The tensor harmonics are classified by the total
angular momentum, $J$, reflecting the spherical symmetry
of the Schwarzschild black hole.
In the internal scheme, the monopole mode $(J=0)$ corresponds
to the mass perturbation. Thus we may set this mode to zero
since it is natural to suppose that the change of mass due to the
radiation reaction is negligible. The dipole modes $(J=1)$ are related
to the translation and rotation.
The translation modes are purely gauge and thus
we set them to zero to fix the center of the black hole.
As we are considering a non-rotating black hole,
we also set the rotational modes to zero.
In general, the higher modes contain
gauge degrees of freedom as well as the physical ones.
However, for these higher modes,
we do not give any principle to fix the gauge for the moment.
Before the explicit computation of the $({}^{m}_{n})$ matching condition,
we briefly review the construction of the scalar
and vector harmonics in terms of the
symmetric trace-free (STF) tensor \cite{BlaDam}.
\subsection{Spherical harmonics expansion}
\label{sssec:harmonics}
We introduce the notation,
\begin{equation}
A_{<i_1 i_2\cdots i_\ell>},
\end{equation}
to represent the totally symmetric trace-free
part of $A_{i_1 i_2\cdots i_\ell}$.
More explicitly in the cases of $\ell=2$, $3$,
\begin{eqnarray}
A_{<ij>} & = & A_{(ij)}-{1\over 3}\delta_{ij} A_{kk}, \cr
A_{<ijk>} & = & A_{(ijk)}-{1\over 5}\left(
\delta_{ij} A_{(kmm)}+\delta_{jk} A_{(imm)}+
\delta_{ki} A_{(jmm)}\right).
\end{eqnarray}
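The $\ell=3$ projection above can be checked numerically. The following
sketch (using arbitrary random data, purely as an illustration) verifies
that the resulting $A_{<ijk>}$ is totally symmetric and trace-free on
every index pair:

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3, 3))

# Total symmetrization A_(ijk)
S = sum(np.transpose(A, p) for p in permutations(range(3))) / 6.0

# A_<ijk> = A_(ijk) - (1/5)(delta_ij A_(kmm) + delta_jk A_(imm)
#                           + delta_ki A_(jmm))
tr = np.einsum('kmm->k', S)
d = np.eye(3)
stf = S - (np.einsum('ij,k->ijk', d, tr)
           + np.einsum('jk,i->ijk', d, tr)
           + np.einsum('ki,j->ijk', d, tr)) / 5.0

print(np.max(np.abs(np.einsum('iik->k', stf))))  # ~ 0 (rounding error)
```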
The spherical harmonics expansion of a scalar function
$A$ on the unit-sphere can be written as
\begin{equation}
A=\sum_{\ell=0}^{\infty} A_{<i_1 i_2\cdots i_\ell>}
n^{<i_1}n^{i_2}\cdots n^{i_\ell>},
\end{equation}
where $n^i=X^i/|X|$.
In this case, the order $\ell$, which is associated with the angular
dependence, is equivalent to the total angular momentum, $J$.
Thus the $J$ mode of the $(TT)$-component
of the metric perturbation is totally determined by
its angular dependence.
Namely, the terms in the $(TT)$-component
of the metric perturbation which contain
\begin{equation}
1,\quad n^i,\quad n^{<i}n^{j>},
\end{equation}
correspond to the $J=0$, $1$, $2$ modes, respectively.
Next we consider the expansion of a vector field $A_i$,
\begin{equation}
A_i=\sum_{\ell=0}^{\infty} A_{i <i_1 i_2\cdots i_\ell>}
n^{<i_1}n^{i_2}\cdots n^{i_\ell>}.
\end{equation}
In this case the term of the $\ell$-th order in the angular dependence
is decomposed into $J=\ell+1$, $\ell$ and $\ell-1$.
This is done by using the Clebsch-Gordan reduction formula \cite{BlaDam},
\begin{equation}
U_i T_{i_1 i_2\cdots i_\ell}= R^{(+)}_{i<i_1 i_2\cdots i_\ell>}
+{\ell\over \ell+1} \epsilon_{ji<i_\ell}
R^{(0)}_{i_1 i_2\cdots i_{\ell-1}>j}
+{2\ell-1\over 2\ell+1} \delta_{i<i_\ell}
R^{(-)}_{i_1 i_2\cdots i_{\ell-1}>},
\end{equation}
where $T_{i_1 i_2\cdots i_\ell}$ is a STF tensor of order $\ell$ and
\begin{eqnarray}
R^{(+)}_{i_1 i_2\cdots i_{\ell+1}} & := &
U_{<i_{\ell+1}} T_{i_1 i_2\cdots i_\ell>},
\cr
R^{(0)}_{i_1 i_2\cdots i_{\ell}} & := &
U_{j} T_{k<i_1 i_2\cdots i_{\ell-1}} \epsilon_{i_\ell>jk},
\cr
R^{(-)}_{i_1 i_2\cdots i_{\ell-1}} & := &
U_{j} T_{j i_1 i_2\cdots i_{\ell-1}}.
\end{eqnarray}
We perform the decomposition explicitly for $\ell\le 2$ here.
For $\ell=0$, there exists no $J=0$ mode and
it trivially corresponds to the $J=1$ mode.
For $\ell=1$, the decomposition is performed as
\begin{equation}
A_{ij}n^j= \left[\left(A_{(ij)}
-{1\over 3}\delta_{ij} A_{kk}\right) + A_{[ij]}
+{1\over 3}\delta_{ij} A_{kk}\right] n^j,
\end{equation}
and the first, second and third terms in the square brackets
correspond to the $J=2$, $1$ and $0$ modes, respectively.
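A quick numerical sketch (random data, illustration only) confirms that
the three pieces are respectively symmetric trace-free, antisymmetric,
and pure trace, and that they reassemble $A_{ij}$ exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
n = rng.standard_normal(3)
n /= np.linalg.norm(n)

J2 = 0.5*(A + A.T) - (np.trace(A)/3.0)*np.eye(3)  # STF part, J=2
J1 = 0.5*(A - A.T)                                # antisymmetric part, J=1
J0 = (np.trace(A)/3.0)*np.eye(3)                  # trace part, J=0

# The decomposition is exact: the pieces sum back to A_ij
print(np.max(np.abs((J2 + J1 + J0) @ n - A @ n)))  # ~ 0 (rounding error)
```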
{}For $\ell=2$, we obtain the decomposition formula as
\begin{equation}
A_{i<jk>}n^{<j} n^{k>}= \left[A_{<ijk>}+{2\over 3}
\epsilon_{mi<j} B^{(2)}_{k>m} + {3\over 5}
\delta_{i<j} B^{(1)}_{k>}\right]
n^{<j} n^{k>},
\label{c10}
\end{equation}
where
\begin{eqnarray}
B^{(2)}_{ij} & = & {1\over 2}
(A_{k<mi>}\epsilon_{jkm}+A_{k<mj>}\epsilon_{ikm}),
\cr
B^{(1)}_{k} & = & A_{i<jk>}\delta_{ij}\,,
\label{c9}
\end{eqnarray}
and the first, second and third terms correspond to the $J=3$, $2$
and $1$ modes, respectively.
We omit a general discussion of the expansion of tensor fields
and shall give specific arguments when necessary.
\subsection{Geodesics; $({}^0_0)$ and $({}^{1}_0)$ matching}
\label{sssec:geodesics}
We begin with the $({}^0_0)$ and $({}^{1}_0)$ matchings
which are, respectively, of $O((Gm/L)^0)$ and of $O((Gm/L)^{1/2})$ in
the matching region. First we consider the external scheme.
In these matchings the external metric is the background itself.
Here, the necessary order of expansion in $|X|$ is $O(|X|)$.
We note
\begin{eqnarray}
g_{\mu\nu}(x)dx^\mu dx^\nu =g_{\alpha\beta}(z)\bar g^\alpha{}_\mu(z,x)
\bar g^\beta{}_\nu(z,x)dx^\mu dx^\nu \,.
\end{eqnarray}
Then from Eq.~(\ref{eq:dtrans}), we get
\begin{eqnarray}
g_{\mu\nu}(x)dx^\mu dx^\nu &=&
\left( \left({dz \over dT}\right)^2(T)
+2{dz^\alpha \over dT}(T){D{f}_{\alpha i} \over dT}(T)X^i
\right)dT^2
\nonumber \\ &&
+2\left( {dz^\alpha \over dT}(T) {f}_{\alpha i}(T)
+{dz^\alpha \over dT}(T) {f}_{\alpha ij}(T)X^j
+{f}^{\alpha}{}_{i}(T){D{f}_{\alpha j} \over dT}(T)X^j
\right)dTdX^i
\nonumber \\ &&
+\left({f}^{\alpha}{}_{i}(T){f}_{\alpha j}(T)
+2{f}^{\alpha}{}_{i}(T){f}_{\alpha jk}(T)X^k\right)dX^idX^j
\nonumber \\ && \qquad
+O\left({|X|^2\over L^2}\right) \,.
\label{eq:ext1}
\end{eqnarray}
Comparing the above equation with Eq.~(\ref{eq:ext}) and
looking at the dependence on $X$,
one can readily extract out ${}^{(0)}_{(0)}h_{ab}$
and ${}^{(1)}_{(0)}h_{ab}$ to the lowest order in $Gm/L$.
Next we consider the internal scheme.
The $({}^0_0)$-component is trivially given
by the flat Minkowski metric. Hence the $({}^0_0)$ matching becomes
\begin{eqnarray}
-1 &=& \left({dz \over dT}\right)^2(T)
+O\left({Gm\over L}\right) \,,
\qquad \mbox{($TT$)-component}, \label{eq:m0tt}
\\
0 &=& {dz^\alpha \over dT}(T) f_{\alpha i}(T)
+O\left({Gm\over L}\right) \,,
\qquad \mbox{($Ti$)-component}, \label{eq:m0ti}
\\
\delta_{ij} &=& f^{\alpha}{}_{i}(T)f_{\alpha j}(T)
+O\left({Gm\over L}\right) \,,
\qquad \mbox{($ij$)-component}. \label{eq:m0ij}
\end{eqnarray}
Equations~(\ref{eq:m0ti}) and (\ref{eq:m0ij}) indicate
that the $f^{\alpha}{}_{i}(T)$ form a spatial triad basis
along the orbit, i.e.,
\begin{equation}
f^\alpha{}_k(T) f^\beta{}_k(T)=g^{\alpha\beta}(z(T))
+{dz^\alpha\over dT}(T){dz^\beta\over dT}(T)
+O\left({Gm\over L}\right) \,.
\label{eq:triad}
\end{equation}
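In a flat background this completeness relation is easy to verify
explicitly. The sketch below constructs a hypothetical boosted worldline
(rapidity $\chi$ chosen arbitrarily) with an orthonormal triad
orthogonal to the four-velocity, and checks Eq.~(\ref{eq:triad}) at
lowest order:

```python
import numpy as np

eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # flat background metric
chi = 0.7                               # arbitrary boost rapidity
u = np.array([np.cosh(chi), np.sinh(chi), 0.0, 0.0])  # dz/dT, u.u = -1

# Spatial triad f^alpha_k, orthonormal and orthogonal to u:
f = np.array([[np.sinh(chi), np.cosh(chi), 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

# Orthonormality: f_k . f_l = delta_kl (indices contracted with eta)
print(np.einsum('ka,ab,lb->kl', f, eta, f))   # the 3x3 identity
# Completeness: f^a_k f^b_k = g^{ab} + u^a u^b, Eq. (eq:triad)
print(np.max(np.abs(np.einsum('ka,kb->ab', f, f)
                    - (np.linalg.inv(eta) + np.outer(u, u)))))
```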
To determine the $({}^1_0)$-component of the internal scheme,
it is convenient to consider all the $({}^1_n)$-components at the same time.
Namely we consider the linear perturbation of the black hole
${}^{(1)}H_{ab}$. For this purpose, we consider the harmonic
decomposition of linear perturbation as discussed in subsection
\ref{sssec:harmonics}.
Since the time scale associated with the perturbation should be of the
order of the background curvature scale $L$, it is much larger than the
matching radius $(GmL)^{1/2}$. Therefore the perturbation may be
regarded as static. It is known that all the physical static
perturbations regular on the black hole horizon behave as $\sim |X|^{J}$
asymptotically where $J$ is the angular momentum eigenvalue. However, in
${}^{(1)}_{(n)}H_{ab}$, there exists no term which behaves as $\sim
|X|^{m}$ $(m\ge 2)$. Hence, except for gauge degrees of freedom,
${}^{(1)}_{(n)}H_{ab}$ contain only $J=0$, $1$ modes. As mentioned
before, we set the perturbation of $J=0$, $1$ modes to zero. Thus we
conclude that we may set
\begin{eqnarray}
{}^{(1)}_{(n)}H_{ab}=0 \,,
\end{eqnarray}
for all $n$. This is the gauge condition we adopt for the internal
scheme at $O(1/L)$. In particular this condition gives the $({}^1_0)$
matching as
\begin{eqnarray}
0 &=&
2{dz^\alpha \over dT}(T){D{f}_{\alpha i} \over dT}(T)X^i
+O\left({Gm\over L}{|X|\over L}\right) \,,
\qquad \mbox{($TT$)-component}, \label{eq:m1/2tt}
\\
0 &=&
{dz^\alpha \over dT}(T) {f}_{\alpha ij}(T)X^j
+{f}^{\alpha}{}_{i}(T){D{f}_{\alpha j} \over dT}(T)X^j
\nonumber \\ && \qquad
+O\left({Gm\over L}{|X|\over L}\right) \,,
\qquad\qquad\qquad\qquad\quad \mbox{($Ti$)-component}, \label{eq:m1/2ti}
\\
0 &=&
2{f}_{\alpha (i}(T){f}^{\alpha}{}_{j)k}(T)X^k
+O\left({Gm\over L}{|X|\over L}\right) \,,
\qquad \mbox{($ij$)-component}. \label{eq:m1/2ij}
\end{eqnarray}
Then the covariant $T$-derivative
of Eq.~(\ref{eq:m0tt}) and that of Eq.~(\ref{eq:m0ti})
with Eq.~(\ref{eq:m1/2tt})
result in the background geodetic motion,
\begin{eqnarray}
{D\over dT}\left({dz^\alpha \over dT}\right)(T)=
O\left({Gm\over L}{1\over L}\right) \,.
\label{eq:geo}
\end{eqnarray}
One can see from Eq.~(\ref{eq:m0tt}) that the internal time coordinate
$T$ becomes a proper time of the orbit in the lowest order in $Gm/L$.
In the same manner, Eq.~(\ref{eq:m1/2tt}) and
the covariant $T$-derivative of Eq.~(\ref{eq:m0ij})
with $(ij)$-antisymmetric part of Eq.~(\ref{eq:m1/2ti}) give
the geodetic parallel transport of the triad ${f}^{\alpha}{}_{i}(T)$,
\begin{eqnarray}
{D\over dT}{f}^{\alpha}{}_{i}(T)
=O\left({Gm\over L}{1\over L}\right) \,.
\label{eq:para}
\end{eqnarray}
Further, from Eqs.~(\ref{eq:m1/2ti}) and (\ref{eq:m1/2ij}),
we can see
\begin{eqnarray}
{f}^{\alpha}{}_{ij}(T)
=O\left({Gm\over L}{1\over L}\right) \,.
\label{eq:faij}
\end{eqnarray}
\subsection{Hypersurface condition; $({}^2_0)$ matching}
\label{sssec:hypersurface}
We now proceed to the $({}^{2}_{0})$ matching, in which the external
metric is still given by the background but there appear
non-trivial perturbations in the internal scheme.
Although it is of $O(Gm/L)$ in the matching region and
$O((Gm/L)^{1/2})$ higher than the remaining $({}^0_1)$-component,
we consider it first for the reason which will be clarified below.
In order to obtain $\displaystyle {}^{(2)}_{(0)}h_{ab}$,
we expand the external metric in terms of the internal coordinates up to
$O(|X|^2)$, i.e., we have to go one order higher than
Eq.~(\ref{eq:ext1}). Then the $(^2_0)$ matching becomes
\begin{eqnarray}
{1\over L^2}{}^{(2)}_{(0)}H_{TT}
&=& -R_{\alpha\beta\gamma\delta}(z(T)){dz^\alpha\over dT}(T)
f^\beta{}_i(T){dz^\gamma\over dT}(T)f^\delta{}_j(T)X^i X^j
\nonumber \\
&& +O\left({Gm\over L}{|X|^2\over L^2} \right),
\qquad \mbox{($TT$)-component},
\label{eq:h20TT}\\
{1\over L^2}{}^{(2)}_{(0)}H_{Ti}
&=& {1\over 2}{dz^\alpha \over dT}(T) {f}_{\alpha ijk}(T) X^j X^k
\nonumber\\
&& -{2\over 3}R_{\alpha\beta\gamma\delta}(z(T))
{dz^\alpha\over dT}(T){f}^{\beta}{}_{j}(T)
{f}^{\gamma}{}_{i}(T){f}^{\delta}{}_{k}(T)X^j X^k
\nonumber\\
&&+O\left({Gm\over L}{|X|^2\over L^2} \right),
\qquad \mbox{($Ti$)-component},
\label{eq:h20Ti} \\
{1\over L^2}{}^{(2)}_{(0)}H_{ij}
&=& {f}_{\alpha (i}(T){f}^{\alpha}{}_{j)kl}(T)X^k X^l
\nonumber\\
&&-{1\over 3}R_{\alpha\beta\gamma\delta}(z(T))
{f}^{\alpha}{}_{i}(T){f}^{\beta}{}_{k}(T)
{f}^{\gamma}{}_{j}(T){f}^{\delta}{}_{l}(T)X^k X^l
\nonumber \\
&&+O\left({Gm\over L}{|X|^2\over L^2} \right),
\qquad \mbox{($ij$)-component},
\label{eq:h20ij}
\end{eqnarray}
where Eqs.~(\ref{eq:para}) and (\ref{eq:faij}) have been used to
simplify the expressions. Since we have set ${}^{(1)}_{(n)}H_{ab}=0$,
the first non-trivial perturbations of the internal metric appear in
${}^{(2)}_{(n)}H_{ab}$. Hence they describe the linear perturbation of
the black hole metric in the internal scheme. Then we have to fix the
gauge condition for this perturbation to perform the matching.
{}For ${}^{(2)}_{(0)}H_{ab}$, since the physical perturbation contained
in it is quadrupolar, we fix the gauge so that all the $J$ modes except
$J=2$ are zero. Then the $({}^2_0)$ matching becomes as follows.
First consider the $(TT)$-component of the metric.
The right hand side of Eq.~(\ref{eq:h20TT}) may contain $J=0$, $2$
modes. The $J=0$ mode, however, vanishes because of the background Ricci
flatness. Hence this matching just determines the physical perturbation
in the $(TT)$-component.
As for the $(Ti)$-component, the right hand side of Eq.~(\ref{eq:h20Ti})
may contain $J=1$, $2$, $3$ modes. As before, the $J=2$ mode just
determines the physical perturbation of the $(Ti)$-component.
So we put the $J=1$, $3$ modes to zero. However, they are found to be absent
in the second term of Eq.~(\ref{eq:h20Ti}). To see this we first
decompose its angular dependence,
\begin{equation}
{dz^{\alpha}\over dT} R_{\alpha\beta\gamma\delta}
f^{\gamma}{}_{i}
\left( f^{\beta}{}_{<j} f^{\delta}{}_{k>} X^{<j} X^{k>}
+{1\over 3} f^{\beta}{}_{k} f^{\delta}{}_{k} |X|^{2}\right).
\label{c12}
\end{equation}
Using Eq.~(\ref{eq:triad}) and the fact that the Ricci tensor vanishes,
the second term in the parentheses is rewritten as
\begin{equation}
{1\over 3} {dz^{\alpha}\over dT} R_{\alpha\beta\gamma\delta}
f^{\gamma}{}_{i}
{dz^{\beta}\over dT} {dz^{\delta}\over dT} |X|^{2},
\end{equation}
and is found to be zero due to the symmetry of the Riemann tensor.
The first term in the parentheses of Eq.~(\ref{c12}) is decomposed
further with the aid of the formulas (\ref{c9}) and (\ref{c10}) as
\begin{equation}
{dz^{\alpha}\over dT} R_{\alpha\beta\gamma\delta}
\left(f^{\gamma}{}_{<i}
f^{\beta}{}_{j} f^{\delta}{}_{k>} +
{2\over 3}\epsilon_{mi<j}
F^{(2)\gamma\beta\delta}_{k>m} +
{3\over 5} \delta_{i<j} F^{(1)\gamma\beta\delta}_{k>}
\right)
X^{<j} X^{k>},
\label{eq:mino103}
\end{equation}
where
\begin{eqnarray}
F^{(2)\gamma\beta\delta}_{ij} & := &
{1\over 2}\left( f^{\gamma}{}_{m} f^{\beta}{}_{<n}
f^{\delta}{}_{i>} \epsilon_{jmn}
+ f^{\gamma}{}_{m} f^{\beta}{}_{<n}
f^{\delta}{}_{j>} \epsilon_{imn}\right),
\cr
F^{(1)\gamma\beta\delta}_{i} & := &
{1\over 2}\left( f^{\gamma}{}_{k} f^{\beta}{}_{i}
f^{\delta}{}_{k} + f^{\gamma}{}_{k} f^{\beta}{}_{k}
f^{\delta}{}_{i} \right) -{1\over 3}
f^{\gamma}{}_{i} f^{\beta}{}_{k} f^{\delta}{}_{k}\,.
\end{eqnarray}
It is easy to see that the first and third terms in the parentheses of
Eq.~(\ref{eq:mino103}) vanish due to the symmetry of the Riemann tensor
and the Ricci flatness. Thus only the $J=2$ mode remains in the second
term in the right hand side of Eq.~(\ref{eq:h20Ti}).
Decomposing the first term in the right hand side of
Eq.~(\ref{eq:h20Ti}) in a similar manner, we find it contains
$J=1$, $3$ modes as well as $J=2$ mode. Putting the $J=1$ mode to zero,
we obtain
\begin{equation}
{dz^\alpha\over dT}(T)f_{\alpha ikk}(T)
=O\left({Gm\over L}{1\over L^2} \right) \,.
\label{eq:fikk}
\end{equation}
Putting the $J=3$ mode to zero gives
\begin{equation}
{1\over2}{dz^\alpha\over dT}(T)f_{\alpha <ijk>}(T)X^jX^k
=O\left({Gm\over L}{|X|^2 \over L^2} \right) \,.
\end{equation}
Then combining this with Eq.~(\ref{eq:fikk}), we find
\begin{equation}
{dz^\alpha\over dT}(T)f_{\alpha ijk}(T)
=O\left({Gm\over L}{1\over L^2} \right).
\label{eq:fijk}
\end{equation}
{}From Eqs.~(\ref{eq:m0ti}), (\ref{eq:faij}) and (\ref{eq:fijk}),
we find
\begin{equation}
{dz^\alpha \over dT}(T)\sigma_{;\alpha}\left(x(T,X),z(T)\right)
=-{dz^\alpha \over dT}(T){\cal F}_{\alpha}(T,X)=
O\left({|X|^4\over L^4} L\right),
\end{equation}
to the lowest order in $Gm/L$.
Comparing this with the hypersurface condition of $T_x$,
Eq.~(\ref{eq:foli}), one finds that the $T={\rm constant}$ hypersurface
differs from the $T_x={\rm constant}$ hypersurface only
by $O(\epsilon^4)=O(|X|^4)$. It then follows that all the calculations
done in section \ref{sec:minoExt} remain valid even if we replace
Eq.~(\ref{eq:foli}) with
\begin{equation}
\sigma_{;\alpha}(x,z(T_x))\dot z^\alpha(T_x)=O(\epsilon^4/L^3).
\label{eq:hypcond}
\end{equation}
Thus $T$ can be identified with $T_x$ to the lowest order in $Gm/L$.
The reason why we have done the $({}^2_0)$ matching prior to the
remaining $({}^0_1)$ matching is to establish this equivalence of
$T$ and $T_x$.
Turning to the $(ij)$-component, it may contain $J=0$--$4$ modes.
We first note that the second term of Eq.~(\ref{eq:h20ij})
contains only $J=2$ mode. This can be seen as follows.
First, we define the spatial triad components of the Riemann tensor by
\begin{equation}
R_{ijkm}:=R_{\alpha\beta\gamma\delta}
f^{\alpha}{}_{i} f^{\beta}{}_{j}
f^{\gamma}{}_{k} f^{\delta}{}_{m}\,.
\end{equation}
Introducing a symmetric tensor defined by
\begin{equation}
{\cal R}_{ij}={1\over 4} \epsilon_{ikm} \epsilon_{jns}
R_{kmns},
\end{equation}
we can express $R_{ijkm}$ in terms of ${\cal R}_{ij}$ as
\begin{equation}
R_{ijkm} = \epsilon^{nij} \epsilon^{skm} {\cal R}_{ns}\,.
\end{equation}
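The duality between $R_{ijkm}$ and ${\cal R}_{ij}$ is easily checked
numerically. The sketch below (random data, illustration only) builds a
tensor with the pair symmetries of the spatial Riemann components from
an arbitrary symmetric matrix and verifies that the two maps are
mutually inverse:

```python
import numpy as np

# 3D Levi-Civita symbol
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

rng = np.random.default_rng(2)
M = rng.standard_normal((3, 3))
M = 0.5*(M + M.T)   # any symmetric matrix plays the role of cal R

# R_ijkm = eps_nij eps_skm M_ns carries the pair symmetries of Riemann
R = np.einsum('nij,skm,ns->ijkm', eps, eps, M)

# cal R_ij = (1/4) eps_ikm eps_jns R_kmns recovers M
calR = 0.25*np.einsum('ikm,jns,kmns->ij', eps, eps, R)
print(np.max(np.abs(calR - M)))   # ~ 0 (rounding error)
```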
Then the symmetric tensor ${\cal R}_{ij}$ is decomposed into STF tensors
as
\begin{equation}
{\cal R}_{ij}={\cal R}_{<ij>}+
{1\over 3}\delta_{ij}{\cal R}_{kk}.
\label{calRab}
\end{equation}
Counting the number of indices, we find that
the first and second terms in Eq.~(\ref{calRab})
correspond to $J=2$ and $0$ modes, respectively.
However, again owing to the symmetry of the Riemann tensor and
the Ricci flatness, the $J=0$ mode vanishes
and only the $J=2$ mode remains.
Therefore the gauge condition for the $(ij)$-component implies
\begin{equation}
\left[f_{\alpha(i}(T)f^{\alpha}{}_{j)kl}(T)\right]_{J\ne2}=
O\left({Gm\over L}{1\over L^2} \right) \,,
\label{eq:fifjkl}
\end{equation}
where $[\cdots]_{J\ne2}$ means the $J\ne2$ parts of the quantity.
This will be used in the $({}^2_1)$ matching below.
\subsection{External perturbation; $({}^{0}_{1})$ matching}
Now we proceed to the first non-trivial order in $Gm/|X|$.
For this purpose, we must develop the external scheme.
However, since the time slicing by the
internal time coordinate $T$ is now identical to that by $T_x$
in the lowest order in $Gm/L$,
we can use the previously obtained formula (\ref{eq:metper0})
for the external metric perturbation.
Among the matchings which become of $O((Gm/L)^{1/2})$ in the matching
region, there remains the $({}^{0}_{1})$ matching.
This matching relates the masses of the particle in both schemes.
Since this matching is independent of $L$, we may regard the background
external metric as if it were flat. As is well-known, the
linear perturbation induced by a point-like particle of mass $m$ in the
flat background spacetime is exactly equal to the asymptotic metric of a
Schwarzschild black hole of mass $m$ in the linear order in $m$.
This fact indicates that the matching gives a consistency condition
at this order.
In order to directly check the consistency, we rewrite
Eq.~(\ref{eq:metper0}) in terms of the internal coordinates.
Since ${}^{(0)}_{(1)}h_{ab}\sim|X|^{-1}$, we have only to consider the
$\psi_{(mono)}^{\mu\nu}$ term of Eq.~(\ref{eq:metper0}). Using
Eqs.~(\ref{eq:465}), (\ref{eq:dtrans}) and the fact that
$\epsilon=\sqrt{{\cal F}_\alpha(T,X){\cal F}^\alpha(T,X)}$, we find
\begin{eqnarray}
Gm{}^{(0)}_{(1)}h_{ab}(X)dX^a dX^b
= Gm\left({2\over |X|}dT^2+{2\over |X|}dX^idX^i\right),
\label{eq:10match}
\end{eqnarray}
which corresponds to the asymptotic form of the
Schwarzschild black hole of mass $m$ in the harmonic coordinates.
\subsection{Radiation reaction;
$({}^1_1)$ and $({}^2_1)$ matchings}
There are many components which become of
$O(Gm/L)$ and $O((Gm/L)^{3/2})$ in the matching region.
However, we are interested in the leading order correction
to the equations of motion with respect to $Gm/L$, and
we found in the $({}^0_0)$ and $({}^1_0)$ matchings
that at the lowest order the terms which behave as $\sim |X|^0$
or $|X|^1$ determine the motion of the particle.
Therefore we consider the $({}^1_1)$ and $({}^2_1)$ matchings here.
In order to perform the $({}^1_1)$ and $({}^2_1)$ matchings,
the calculation we have done to obtain Eq.~(\ref{eq:10match}) must be
extended to the linear order in $|X|$.
Then the $({}^1_1)$ matching equations are found as
\begin{eqnarray}
{Gm\over L}{}^{(1)}_{(1)}H_{TT}
&=& \left\{ \left({dz \over dT}\right)^2(T)+1 \right\}
+ Gm{dz^\alpha\over dT}(T){dz^\beta\over dT}(T){\Theta}_{\alpha\beta}(T)
\nonumber \\ &&
+O\left(\left({Gm\over L}\right)^2\right) \,,
\qquad \mbox{($TT$)-component},
\label{eq:m1tt}\\
{Gm\over L}{}^{(1)}_{(1)}H_{Ti}
&=& {dz^\alpha \over dT}(T) {f}_{\alpha i}(T)
+ Gm{dz^\alpha\over dT}(T){f}^{\beta}{}_{i}(T){\Theta}_{\alpha\beta}(T)
\nonumber \\ &&
+O\left(\left({Gm\over L}\right)^2\right) \,,
\qquad \mbox{($Ti$)-component},
\label{eq:m1ti}\\
{Gm\over L}{}^{(1)}_{(1)}H_{ij}
&=& \left\{{f}^{\alpha}{}_{i}(T){f}_{\alpha j}(T) -\delta_{ij}\right\}
+Gm{f}^{\alpha}{}_{i}(T){f}^{\beta}{}_{j}(T){\Theta}_{\alpha\beta}(T)
\nonumber \\ &&
+O\left(\left({Gm\over L}\right)^2\right) \,,
\qquad \mbox{($ij$)-component},
\label{eq:m1ij}
\end{eqnarray}
and the $({}^2_1)$ matching as
\begin{eqnarray}
{Gm\over L^2}{}^{(2)}_{(1)}H_{TT}
&=& 2{dz^\alpha \over dT}(T){D{f}_{\alpha i} \over dT}(T)X^i
\nonumber \\ &&
+Gm\Biggl\{
{dz^\alpha\over dT}(T){dz^\beta\over dT}(T)
f^{\gamma}{}_{i}(T){\Theta}_{\alpha\beta\gamma}(T)X^i
\nonumber \\ && \quad
-{1\over 3|X|^3}f_{\alpha i}(T)f^{\alpha}{}_{jkl}(T)X^i X^j X^k X^l
\nonumber \\ && \quad
+{5\over 3|X|}R_{\alpha\beta\gamma\delta}(z(T))
{dz^\alpha\over dT}(T){f}^{\beta}{}_{i}(T)
{dz^\gamma\over dT}(T){f}^{\delta}{}_{j}(T)X^i X^j\Biggr\}
\nonumber \\ && \quad
+O\left(\left({Gm\over L}\right)^2{|X|\over L}\right) \,,
\qquad \mbox{($TT$)-component},
\label{eq:m3/2tt} \\
{Gm\over L^2}{}^{(2)}_{(1)}H_{Ti}
&=& {dz^\alpha \over dT}(T) {f}_{\alpha ij}(T)X^j
+{f}^{\alpha}{}_{i}(T){D{f}_{\alpha j} \over dT}(T)X^j
\nonumber \\ &&
+Gm\Biggl\{
{dz^\alpha\over dT}(T){f}^{\beta}_{i}(T)
{f}^{\gamma}_{j}(T){\Theta}_{\alpha\beta\gamma}(T)X^j
\nonumber \\ && \quad
+2R_{\alpha\beta\gamma\delta}(z(T)){dz^\alpha\over dT}(T)
{f}^{\beta}{}_{i}(T){dz^\gamma\over dT}(T){f}^{\delta}{}_{j}(T)X^j
\nonumber \\ && \quad
+{2\over 3|X|}R_{\alpha\beta\gamma\delta}(z(T))
{dz^\alpha\over dT}(T){f}^{\beta}{}_{j}(T)
{f}^{\gamma}{}_{i}(T) {f}^{\delta}{}_{k}(T)X^j X^k
\Biggr\}
\nonumber \\ && \quad
+O\left(\left({Gm\over L}\right)^2{|X|\over L}\right) \,,
\qquad \mbox{($Ti$)-component},
\label{eq:m3/2ti}
\end{eqnarray}
where
\begin{eqnarray}
Gm {\Theta}_{\alpha\beta}(T) &
:= & h_{(tail)\alpha\beta}(z(T)),
\cr
Gm {\Theta}_{\alpha\beta\gamma}(T)&
:= & h_{(tail)\alpha\beta;\gamma}(z(T)),
\end{eqnarray}
with
\begin{equation}
h_{(tail)\mu\nu}(x):=\psi_{(tail)\mu\nu}(x)
-{1\over 2} g_{\mu\nu}(x)\psi_{(tail)}(x).
\label{eq:hv}
\end{equation}
Note that $h_{(tail)\mu\nu}(x)$ is the metric
perturbation due to $v_{\mu\nu\alpha\beta}(x,z)$ in the Green function.
The ($ij$)-component of the $({}^2_{1})$ matching is not presented here
since it will not be used in the following discussion.
As we have discussed in subsection \ref{sssec:geodesics},
we require ${}^{(1)}_{(1)}H_{ab}=0$.
Thus the right hand sides of Eqs.~(\ref{eq:m1tt}),
(\ref{eq:m1ti}) and (\ref{eq:m1ij}) must vanish.
As for ${}^{(2)}_{(1)}H_{ab}$, following the discussion in
subsection \ref{sssec:hypersurface}, we set all the modes except
$J=2$ to zero.
Inspection of the right hand sides of Eqs.~(\ref{eq:m3/2tt}) and
(\ref{eq:m3/2ti}) reveals that the terms involving the Riemann tensor
take the same forms as those appearing in Eqs.~(\ref{eq:h20TT}),
(\ref{eq:h20Ti}) and (\ref{eq:h20ij}). Hence they contain only $J=2$
modes and do not give any matching condition.
Furthermore, from Eq.~(\ref{eq:fifjkl}), all the modes except $J=2$
contained in the term involving $f^{\alpha}{}_{jkl}$ in
Eq.~(\ref{eq:m3/2tt}) vanish at the lowest order in $Gm/L$.
Hence we only have to consider the remaining terms in
Eqs.~(\ref{eq:m3/2tt}) and (\ref{eq:m3/2ti}). The $J=1$ modes are
extracted out to give
\begin{eqnarray}
0 &=&
2{dz^\alpha \over dT}(T){D{f}_{\alpha i} \over dT}(T)
+ Gm\,{dz^\alpha\over dT}(T){dz^\beta\over dT}(T)
{f}^{\gamma}{}_{i}(T){\Theta}_{\alpha\beta\gamma}(T)
\nonumber \\ &&
+O\left(\left({Gm\over L}\right)^2{1\over L}\right),
\qquad \mbox{($TT$)-component},
\label{eq:m3/2ttg}\\
0 &=&
{f}_{\alpha [i}(T){D{f}^{\alpha}{}_{j]} \over dT}(T)
+ Gm\,{\Theta}_{\alpha\beta\gamma}(T){dz^\alpha\over dT}(T)
{f}^{\beta}{}_{[i}(T){f}^{\gamma}{}_{j]}(T)
\nonumber \\ &&
+O\left(\left({Gm\over L}\right)^2{1\over L}\right),
\qquad \mbox{($Ti$)-component}.
\label{eq:m3/2tig}
\end{eqnarray}
The $J=0$ mode is absent in the ($TT$)-component; in the
($Ti$)-component it exists, but it merely gives the equation which
determines $(dz^{\alpha}/dT)f_{\alpha ii}$
to the first order in $Gm/L$.
Taking the covariant $T$-derivative of Eqs.~(\ref{eq:m1tt}) and
(\ref{eq:m1ti}) and using Eq.~(\ref{eq:m3/2ttg}),
we obtain the equations of motion with the $O(Gm/L^2)$ correction
due to the radiation reaction,
\begin{eqnarray}
{D\over dT}{dz^\alpha\over dT}(T) &=&
-{Gm\over 2}
\left({\Theta}^\alpha{}_{\beta\gamma}(T)
+{\Theta}^\alpha{}_{\gamma\beta}(T)-{\Theta}_{\beta\gamma}{}^\alpha(T)\right)
{dz^\beta\over dT}(T){dz^\gamma\over dT}(T)
\nonumber \\ && \qquad
+O\left(\left({Gm\over L}\right)^2{1\over L}\right) \,.
\label{eq:damp1}
\end{eqnarray}
Similarly the $O(Gm/L^2)$ correction to the evolution equations of the
`triad' basis, $f^{\alpha}{}_{i}(T)$, are obtained from
the covariant $T$-derivative of Eq.~(\ref{eq:m1ij}), and
Eqs.~(\ref{eq:m3/2ttg}) and (\ref{eq:m3/2tig}). The result is
\begin{eqnarray}
{D\over dT}{f}^{\alpha}{}_{i}(T) &=&
- {Gm\over 2}
\left({\Theta}^\alpha{}_{\beta\gamma}(T)
+{\Theta}^\alpha{}_{\gamma\beta}(T)-{\Theta}_{\beta\gamma}{}^\alpha(T)\right)
{f}^{\beta}{}_{i}(T){dz^\gamma\over dT}(T)
\nonumber \\ && \qquad
+O\left(\left({Gm\over L}\right)^2{1\over L}\right) \,.
\label{eq:damp2}
\end{eqnarray}
Since the internal time coordinate $T$ is not properly normalized
in the external metric, we define the proper time, $\tau=\tau(T)$, such
that $(dz/d\tau)^2 =-1$. It is easy to see that we should choose
\begin{equation}
{d\tau\over dT}=1+{Gm\over 2}\Theta_{\alpha\beta}(T)
{dz^{\alpha}\over d\tau}(T){dz^{\beta}\over d\tau}(T)
+O\left(\left({Gm\over L}\right)^2\right) \,.
\end{equation}
Since the second term on the right hand side of this equation
is proportional to the small perturbation induced by the particle, it is
guaranteed to stay small even over a time interval long compared with
the reaction time scale $T_r=O\left((Gm/L)^{-1}L\right)$.
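As a brief consistency check, write Eq.~(\ref{eq:m1tt}) as
$\dot z^{2}=-1-Gm\,\Theta_{\alpha\beta}\dot z^{\alpha}\dot z^{\beta}
+O\left((Gm/L)^{2}\right)$ with $\dot z^{\alpha}=dz^{\alpha}/dT$
(this explicit form is an assumption made here for illustration). Then
\begin{eqnarray}
\left({dz\over d\tau}\right)^{2}
&=&\left({dT\over d\tau}\right)^{2}\dot z^{2}
=\left(1-Gm\,\Theta_{\alpha\beta}\dot z^{\alpha}\dot z^{\beta}\right)
\left(-1-Gm\,\Theta_{\alpha\beta}\dot z^{\alpha}\dot z^{\beta}\right)
+O\left(\left({Gm\over L}\right)^{2}\right)
\nonumber \\
&=&-1+O\left(\left({Gm\over L}\right)^{2}\right) \,,
\nonumber
\end{eqnarray}
so $\tau$ is indeed the proper time to first order in $Gm/L$.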
Then Eq.~(\ref{eq:damp1}) becomes
\begin{eqnarray}
&&{D\over d\tau}{dz^\alpha\over d\tau}(\tau)
\nonumber \\ && \qquad
= -{Gm\over 2}\left({dz^\alpha\over d\tau}{dz^\beta\over d\tau}
{dz^\gamma\over d\tau}{dz^\delta\over d\tau}
+2g^{\alpha\beta}(z){dz^\gamma\over d\tau}{dz^\delta\over d\tau}
-g^{\alpha\delta}(z){dz^\beta\over d\tau}{dz^\gamma\over d\tau}
\right)(\tau)~{\Theta}_{\beta\gamma\delta }(\tau)
\nonumber \\ && \qquad \qquad
+O\left(\left({Gm\over L}\right)^2{1\over L}\right) \,.
\label{eq:damp1r}
\end{eqnarray}
Also, the triad basis is not properly normalized
in the external metric. Thus we define $e^{\alpha}{}_{i}(\tau)$ as
\begin{eqnarray}
e_{\alpha i}(\tau)e^{\alpha}{}_{j}(\tau) &=& \delta_{ij} \,,
\label{eq:normtriad}\\
e^{\alpha}{}_{i}(\tau) &=& (\delta_{ij}+s_{ij})f^{\alpha}{}_{j}
-Gm({dz^{\alpha} / dT})({dz^{\beta} / dT})f^\gamma{}_i
{\Theta}_{\beta\gamma} \,,
\end{eqnarray}
where $s_{ij}$ is of $O(Gm/L)$ and, recalling Eq.~(\ref{eq:m1ti}), the
last term is added so as to satisfy the orthogonality condition,
\begin{eqnarray}
e_{\alpha i}(\tau)({dz^{\alpha} / d\tau})(\tau)=0 \,.
\end{eqnarray}
{}From Eq.~(\ref{eq:normtriad}) we find
\begin{equation}
s_{ij}=- {Gm\over 2}\Theta_{\alpha\beta}(\tau)
{f^{\alpha}{}_i}(\tau){f^{\beta}{}_j}(\tau)
+O\left(\left({Gm\over L}\right)^2\right) \,.
\end{equation}
Again the correction terms in $e^\alpha_i$ are guaranteed to stay small.
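To see how the expression for $s_{ij}$ arises, note that in the product
$e_{\alpha i}e^{\alpha}{}_{j}$ the cross terms between
$f^{\alpha}{}_{i}$ and the $O(Gm)$ piece of $e^{\alpha}{}_{j}$ involve
$\dot z^{\alpha}f_{\alpha i}$, which is itself $O(Gm/L)$ by
Eq.~(\ref{eq:m1ti}), and hence contribute only at second order. Taking
$s_{ij}$ symmetric, Eq.~(\ref{eq:normtriad}) then gives
\begin{eqnarray}
\delta_{ij}=e_{\alpha i}e^{\alpha}{}_{j}
=f_{\alpha i}f^{\alpha}{}_{j}+2s_{ij}
+O\left(\left({Gm\over L}\right)^{2}\right) \,,
\nonumber
\end{eqnarray}
i.e. $s_{ij}={1\over 2}\left(\delta_{ij}-f_{\alpha i}f^{\alpha}{}_{j}\right)$,
which reduces to the expression for $s_{ij}$ above on substituting the
first-order condition Eq.~(\ref{eq:m1ij}).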
Then the evolution equations of the normalized triad
$e^{\alpha}{}_{i}(\tau)$ become
\begin{eqnarray}
&&{D \over d\tau}e^{\alpha}{}_{i}(\tau)
\nonumber \\ && \qquad
= - {Gm\over 2} \left({dz^\alpha\over d\tau}{dz^\beta\over d\tau}
e^{\gamma}{}_{i}{dz^\delta\over d\tau}
+g^{\alpha\beta}(z){dz^\gamma\over d\tau}e^{\delta}{}_{i}
-g^{\alpha\delta}(z) e^{\beta}{}_{i}{dz^\gamma\over d\tau}
\right) (\tau)~ {\Theta}_{\beta\gamma\delta }(\tau)
\nonumber \\ && \qquad \qquad
+O\left(\left({Gm\over L}\right)^2{1\over L}\right) \,.
\label{eq:damp2r}
\end{eqnarray}
\section{Equations of motion for a spinning particle}
\label{sec:deri2}
In this section, we consider the equations of motion for a spinning
particle. Unlike the Schwarzschild case, we cannot make use of
the mode decomposition by the spherical harmonics, since the background
in the internal scheme does not have spherical symmetry. Therefore,
it is unclear how to fix the gauge in the internal scheme,
and hence we cannot derive the equations of motion from the consistency
condition of matching.
Instead, we here apply the laws of motion and precession
discussed by Thorne and Hartle\cite{Thorne1}.
As noted in section \ref{sec:intro}, assuming the consistency
between the internal and external schemes, we can make use of the
matching condition to obtain the internal metric from the knowledge of
the external metric. The problem of deriving the equations of motion for a
spinning particle was discussed by Thorne and Hartle\cite{Thorne1} and
the spin-induced force was derived. The discussion given below is an
extension of Ref.~\citen{Thorne1} in the sense that we take into account
the effect of radiation reaction on the motion. Both derivations of
the radiation reaction and the spin-induced force are discussed in a
unified manner.
\subsection{Laws of motion and precession}
The laws of motion and precession\cite{Thorne1} are derived from
the integral identities given in terms of the Landau-Lifshitz
pseudo-tensor, $t^{\alpha\beta}_{L-L}$, and the Landau-Lifshitz
super-potential, $H^{\alpha\mu\beta\nu}_{L-L}$.
The Einstein equations can be put into the form,
\begin{eqnarray}
H^{\alpha\gamma\beta\delta}_{L-L}{}_{,\gamma\delta}
=16\pi G(-g)\left(T^{\alpha\beta}+t^{\alpha\beta}_{L-L}\right) \,,
\label{eq:minoEieq}
\end{eqnarray}
where
\begin{eqnarray}
H^{\alpha\gamma\beta\delta}_{L-L}
&=&{\bf g}^{\alpha\beta}{\bf g}^{\gamma\delta}
-{\bf g}^{\alpha\delta}{\bf g}^{\gamma\beta} \,,
\\
(-g)t^{\alpha\beta}_{L-L}
&=&{1\over 16\pi}\biggl\{
{\bf g}^{\alpha\beta}{}_{,\gamma}{\bf g}^{\gamma\delta}{}_{,\delta}
-{\bf g}^{\alpha\gamma}{}_{,\gamma}{\bf g}^{\beta\delta}{}_{,\delta}
+{1\over 2}g^{\alpha\beta}g_{\gamma\delta}
{\bf g}^{\gamma\epsilon}{}_{,\zeta}{\bf g}^{\delta\zeta}{}_{,\epsilon}
\nonumber \\ && \quad
-\left(g^{\alpha\gamma}g_{\delta\epsilon}
{\bf g}^{\beta\delta}{}_{,\zeta}{\bf g}^{\epsilon\zeta}{}_{,\gamma}
+g^{\beta\gamma}g_{\delta\epsilon}
{\bf g}^{\alpha\delta}{}_{,\zeta}{\bf g}^{\epsilon\zeta}{}_{,\gamma}
\right)
+g_{\gamma\delta}g^{\epsilon\zeta}
{\bf g}^{\alpha\gamma}{}_{,\epsilon}{\bf g}^{\beta\delta}{}_{,\zeta}
\nonumber \\ && \quad
+{1\over 8}\left(2g^{\alpha\gamma}g^{\beta\delta}
-g^{\alpha\beta}g^{\gamma\delta}\right)
\left(2g_{\epsilon\zeta}g^{\eta\theta}
-g^{\epsilon\eta}g^{\zeta\theta}\right)
{\bf g}^{\epsilon\eta}{}_{,\gamma}{\bf g}^{\zeta\theta}{}_{,\delta}
\biggr\} \,,
\\
{\bf g}^{\alpha\beta}&:=&(-g)^{1/2}g^{\alpha\beta},
\end{eqnarray}
and a comma denotes the ordinary derivative.
By construction, the following conservation laws are satisfied:
\begin{eqnarray}
\left((-g)\left(T^{\alpha\beta}
+t^{\alpha\beta}_{L-L}\right)\right)_{,\beta}=0.
\end{eqnarray}
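These conservation laws follow directly from the antisymmetry of the
super-potential. Since $H^{\alpha\gamma\beta\delta}_{L-L}$ is
antisymmetric under the interchange $\beta\leftrightarrow\delta$, taking
the divergence of Eq.~(\ref{eq:minoEieq}) gives
\begin{eqnarray}
16\pi G\left((-g)\left(T^{\alpha\beta}
+t^{\alpha\beta}_{L-L}\right)\right)_{,\beta}
=H^{\alpha\gamma\beta\delta}_{L-L}{}_{,\gamma\delta\beta}=0 \,,
\nonumber
\end{eqnarray}
because the symmetric pair of derivative indices $(\delta,\beta)$ is
contracted with an index pair in which
$H^{\alpha\gamma\beta\delta}_{L-L}$ is antisymmetric.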
Suppose that the internal metric around a Kerr black hole is calculated
for a given trajectory of the particle. In terms of the internal metric
we define
\begin{eqnarray}
P^a(T,r) &:=& {1\over 16\pi G}
\int_{|X|=r}d^2 S_j H_{L-L}^{ab0j}{}_{,b}\,,
\label{eq:momentum} \\
J^{ij}(T,r) &:=& {1\over 16\pi G}\int_{|X|=r}d^2 S_k
\left(X^i H^{ja0k}_{L-L}{}_{,a}-X^j H_{L-L}^{ia0k}{}_{,a}\right.
\nonumber\\
&&\hspace{4cm}\left.+H_{L-L}^{ik0j}-H_{L-L}^{jk0i}\right),
\label{eq:spin}
\end{eqnarray}
where $d^2 S_j$ is the surface element of a two-sphere at $|X|=r$.
Then by using the Einstein equations (\ref{eq:minoEieq}),
we have the following integral identities:
\begin{eqnarray}
{d\over dT}P^a(T,r) &=& \int_{|X|=r} d^2 S_j (-g)t_{L-L}^{aj}(X),
\label{eq:force} \\
{d\over dT}J^{ij}(T,r) &=& \int_{|X|=r} d^2 S_k
\left(X^i (-g)t_{L-L}^{jk}(X) -X^j (-g)t_{L-L}^{ik}(X) \right).
\label{eq:torque}
\end{eqnarray}
These are called the laws of motion and precession.
By explicitly evaluating the right hand sides of
Eqs.~(\ref{eq:momentum}), (\ref{eq:spin}), (\ref{eq:force}) and
(\ref{eq:torque}), and eliminating $P^a(T,r)$ and $J^{ij}(T,r)$ from the
resulting equations, one obtains the equations of motion.
\subsection{Use of the matched asymptotic expansion}
In the present method, we construct the external metric and use the
matching conditions to obtain the necessary components of the internal
metric. The $({}^0_n)$-components of the internal metric are assumed to
be given by the metric of a Kerr black hole.
Since we do not construct the internal metric independently,
there is no a priori requirement for the internal metric thus
obtained to satisfy any specific gauge condition. Hence the
transformation from the external coordinates to the internal ones can be
chosen rather arbitrarily.
Here, we make use of the knowledge we have obtained in section
\ref{sec:deri1} and we choose the coordinate conditions as follows.
We assume that the external metric is generated by the point-like
source, Eq.~(\ref{eq:point}), and calculate the external metric in
the matching region as in the previous section. In order to do so,
the hypersurfaces of $T={\rm constant}$ and $T_x={\rm constant}$ should
be identical to each other as given by Eq.~(\ref{eq:hypcond}).
To satisfy this requirement, we adopt the coordinate transformation
from $x$ to $X$ in the form,
\begin{equation}
\sigma_{;\alpha}(x,z(T)) + f_{\alpha i}(T) X^i = O((Gm)^2/L).
\label{eq:minosurface}
\end{equation}
This is satisfied by setting
\begin{equation}
Lf^\alpha{}_{ij} = L^2f^\alpha{}_{ijk} = O(Gm/L)\,,
\label{eq:fijfijk0}
\end{equation}
in Eq.~(\ref{eq:calFexpand}). Note that, in the case of a monopole
particle discussed in section \ref{sec:deri1}, the conditions that
are required to guarantee Eq.~(\ref{eq:hypcond}) are obtained from the
$({}^{n}_0)$-matchings ($n=0,1,2$). In contrast, here we impose the
conditions (\ref{eq:fijfijk0}) by hand to guarantee
Eq.~(\ref{eq:hypcond}).
Furthermore, to determine the internal metric from the matching
conditions, we set the $({}^1_n)$-components of the internal metric to
zero:
\begin{equation}
\label{eq:H1ncond}
{}^{(1)}_{(n)}H_{ab}=0\quad (n=0,1,2,\cdots).
\end{equation}
In the case of a monopole particle, we have found we can impose these
conditions. However, in the present case, since we have imposed the
coordinate condition (\ref{eq:minosurface}) by hand, it is not clear whether
a similar argument can be made to justify these conditions. Nevertheless,
at least for $n=0$, $1$, we should be able to require the conditions
(\ref{eq:H1ncond}). This is because the spin of the black hole appears
at $O\bigl((Gm)^2\bigr)$ or higher in the internal metric, hence the
discussion we gave in the case of a monopole particle should be equally
applicable to the $({}^1_0)$ and $({}^1_1)$-components of the metric.
In fact, we see below that the conditions (\ref{eq:H1ncond}) for $n=0$,
$1$ consistently determine the internal metric in the local rest frame
by matching.
First consider the background metric in the internal scheme.
For convenience we define the trace-reversed $({}^m_n)$-components of
the metric with respect to the flat Minkowski space:
\begin{eqnarray}
{}^{(m)}_{(n)}\bar H_{ab}={}^{(m)}_{(n)}H_{ab}
-{1\over 2}\eta_{ab}\eta^{cd}\,{}^{(m)}_{(n)}H_{cd} \,.
\end{eqnarray}
Expanding the Kerr metric with respect to $Gm$, the
$({}^0_n)$-components of the metric in the harmonic coordinates are
found as
\begin{eqnarray}
{}^{(0)}_{(0)}H_{ab}&=&\eta_{ab}\,,
\label{eq:H00ab}\\
Gm\,{}^{(0)}_{(1)}\bar H_{TT}&=& {4Gm \over |X|} \,,
\label{eq:H01TT}\\
Gm\,{}^{(0)}_{(1)}\bar H_{Ti}&=& 0 \,,
\label{eq:H01Ti}\\
Gm\,{}^{(0)}_{(1)}\bar H_{ij}&=& 0 \,,
\label{eq:H01ij}\\
(Gm)^2{}^{(0)}_{(2)}\bar H_{TT} &=& {(Gm)^2 \over|X|^2} \,,
\label{eq:H02TT}\\
(Gm)^2{}^{(0)}_{(2)}\bar H_{Ti} &=&{2 G m \over |X|^3}S_{ij}X^j\,,
\label{eq:H02Ti}\\
(Gm)^2{}^{(0)}_{(2)}\bar H_{ij} &=& {(Gm)^2 \over |X|^2}
\left(-2 \delta_{ij} +{X^i X^j\over |X|^2}\right) \,,
\label{eq:H02ij}
\end{eqnarray}
where $S_{ij}$ is the specific spin tensor in the local rest frame of
the black hole. Then calculating the $({}^0_0)$ and
$({}^0_1)$-components of the external metric in the matching region, we
find they are consistent with Eqs.~(\ref{eq:H00ab}) $\sim$
(\ref{eq:H01ij}) provided that $\dot z(T)$ and $f^\alpha_i(T)$ satisfy
the lowest order orthonormal conditions, Eqs.~(\ref{eq:m0tt}),
(\ref{eq:m0ti}) and (\ref{eq:m0ij}). Further, the spin contribution to
the metric, Eq.~(\ref{eq:H02Ti}), can be reproduced from the external
metric with the source (\ref{eq:point}) by the identification,
\begin{equation}
S_{\alpha\beta}(T)=S_{ij}f_\alpha^i(T) f_\beta^j+O((Gm)^2/L).
\end{equation}
This fact indicates the consistency of using the point-particle energy
momentum tensor (\ref{eq:point}) in the perturbation analysis.
Keeping in mind the imposed conditions (\ref{eq:fijfijk0}) and
(\ref{eq:H1ncond}), the calculation of the $({}^1_0)$-components of the
external metric in the matching region gives
\begin{equation}
\label{eq:h10ab}
{dz^\alpha\over dT}(T){Df_{\alpha i}\over dT}(T)=O(Gm/L^2),
\quad
f^\alpha_i(T){Df_{\alpha j}\over dT}(T)=O(Gm/L^2).
\end{equation}
As before, these equations imply that $f^\alpha_i(T)$ is
parallel transported along the particle trajectory at the lowest order.
Similarly, the calculation of the $({}^{1}_{1})$-components of the
external metric gives the same conditions as we have found in the
previous section (see Eqs.~(\ref{eq:m1tt}) $\sim$ (\ref{eq:m1ij})):
\begin{eqnarray}
\dot z^2(T) &=& -1 - {Gm \over 2} \bar \Theta_{\alpha\beta}(T)
\left(g^{\alpha\beta}(z(T)) +2 \dot z^\alpha(T)\dot z^\beta(T)\right)
\nonumber \\ && \qquad
+O((Gm)^2/L^2) \,,
\label{eq:Ltt} \\
\dot z^\alpha(T) f_{\alpha i}(T) &=&
-Gm \bar \Theta_{\alpha\beta}(T) \dot z^\alpha(T) f^\beta{}_i(T)
\nonumber \\ && \qquad
+O((Gm)^2/L^2) \,,
\label{eq:Lti} \\
f^\alpha{}_i(T)f_{\alpha j}(T) &=& \delta_{ij}
-{Gm \over 2} \bar \Theta_{\alpha\beta}(T)\left(-\delta_{ij}
g^{\alpha\beta}(z(T))
+2f^\alpha{}_i(T)f^\beta{}_j(T)\right)
\nonumber \\ && \qquad
+O((Gm)^2/L^2) \,,
\label{eq:Lij}
\end{eqnarray}
where we have introduced
\begin{eqnarray}
\bar \Theta_{\alpha\beta}=\Theta_{\alpha\beta}
-{1\over 2} g_{\alpha\beta} \Theta^\delta{}_{\delta}
={1\over Gm}\bar\psi_{(tail)\alpha\beta} \,.
\end{eqnarray}
These equations may be viewed as a coordinate condition on the internal
time $T$. Clearly there is no inconsistency in them.
Computation of the rest of the components of the internal metric which are
needed to derive the equations of motion is straightforward. The results
are
\begin{eqnarray}
{1\over L^2}\,{}^{(2)}_{(0)}\bar H_{TT}
&=& - {2\over 3}R_{\alpha\beta\gamma\delta}(z(T))
\dot z^\alpha(T) X^\beta(T) \dot z^\gamma(T) X^\delta(T)
\nonumber \\ &&
+O(Gm|X|^2/L^3) \,,
\\
{1\over L^2}\,{}^{(2)}_{(0)}\bar H_{Ti}
&=& -{2\over 3}R_{\alpha\beta\gamma\delta}(z(T))
f^\alpha{}_i(T) X^\beta(T) \dot z^\gamma(T) X^\delta(T)
\nonumber \\ &&
+O(Gm|X|^2/L^3) \,,
\\
{1\over L^2}\,{}^{(2)}_{(0)}\bar H_{ij}
&=& -{1\over 3}R_{\alpha\beta\gamma\delta}(z(T))
X^\beta(T) X^\delta(T) \left(f^\alpha{}_i(T) f^\gamma{}_j(T)
+\delta_{ij}\dot z^\alpha(T) \dot z^\gamma(T)\right)
\nonumber \\ &&
+O(Gm|X|^2/L^3) \,,
\\
{Gm\over L^2}\,{}^{(2)}_{(1)}\bar H_{TT}&=&
\dot z^\alpha(T) {D\over dT}f_{\alpha i}(T) X^i(T)
\nonumber \\ &&
-{10 G m \over 3 |X|}R_{\alpha\beta\gamma\delta}(z(T))
\dot z^\alpha(T) X^\beta(T) \dot z^\gamma(T) X^\delta(T)
\nonumber \\ &&
+ Gm \bar \Theta_{\alpha\beta\gamma}(T) \dot z^\alpha(T) \dot z^\beta(T)
X^\gamma(T)
+O((Gm)^2 |X|/L^3) \,,
\\
{Gm\over L^2}\,{}^{(2)}_{(1)}\bar H_{Ti}&=&
f^\alpha{}_i(T) {D\over dT}f_{\alpha j}(T) X^j
\nonumber \\ &&
+ G m R_{\alpha\beta\gamma\delta}(z(T)) X^\beta(T) \dot z^\gamma(T)
\left(-\dot z^\alpha(T) f^\delta{}_i(T)
+{2 \over 3 |X|}f^\alpha{}_i(T) X^\delta(T)\right)
\nonumber \\ &&
+ Gm \bar \Theta_{\alpha\beta\gamma}(T)\dot z^\alpha(T)
f^\beta{}_i(T) X^\gamma(T)
+O((Gm)^2 |X|/L^3) \,,
\\
{Gm\over L^2}\,{}^{(2)}_{(1)}\bar H_{ij}&=&
\delta_{ij}\dot z^\alpha(T) {D\over dT}f_{\alpha k}(T) X^k
\nonumber \\ &&
-2 G m R_{\alpha\beta\gamma\delta}(z(T))
\Bigl({1\over 3|X|}X^\beta(T) X^\delta(T)
\left(f^\alpha{}_i(T) f^\gamma{}_j(T)
-\delta_{ij}\dot z^\alpha(T) \dot z^\gamma(T) \right)
\nonumber \\ && \qquad \qquad \qquad \qquad \qquad
-2 |X| \dot z^\alpha(T) f^\beta{}_i(T) \dot z^\gamma(T) f^\delta{}_j(T)
\Bigr)
\nonumber \\ &&
-2 Gm \Theta_{\alpha\beta\gamma}(T)
f^\alpha{}_i(T) f^\beta{}_j(T) X^\gamma(T)
+O((Gm)^2 |X|/L^3) \,,
\end{eqnarray}
where $X^\alpha(T)=f^\alpha_i(T)X^i$ and we have defined
\begin{eqnarray}
\bar \Theta_{\alpha\beta\gamma}:=\Theta_{\alpha\beta\gamma}
-{1\over 2} g_{\alpha\beta} \Theta^\delta{}_{\delta\gamma} \,.
\end{eqnarray}
\subsection{Equations of motion}
Before evaluating Eqs.~(\ref{eq:momentum}) and (\ref{eq:force}),
let us first consider the equations for the spin, Eqs.~(\ref{eq:spin})
and (\ref{eq:torque}). Equation (\ref{eq:spin}) has a dimension of
$(mass)\times(length)$ and we extract out the terms of $O(Gm^2)$.
Power counting of $X$ shows that there will be contributions
linear in the $({}^0_2)$-components of the metric and those from
bilinear combinations of the $({}^0_1)$- and $({}^0_1)$-components
of the metric. Then we obtain
\begin{eqnarray}
J^{ij}(T,r) = m S_{\alpha\beta}(T)f^{\alpha i}f^{\beta j} +O(G^2m^3/L)
+(r\mbox{-dependent terms}).
\label{eq:spin1}
\end{eqnarray}
Equation (\ref{eq:torque}) has a dimension of $(mass)^1$ and
we extract out the terms of $O(Gm^2/L)$ in the same way.
Power counting of $X$ shows that
there will be contributions from bilinear combinations of
the $({}^0_1)$- and $({}^0_1)$-components of the metric,
and we find that the right hand side of Eq.~(\ref{eq:torque}) vanishes:
\begin{eqnarray}
{d \over dT}J^{ij}(T,r) = O(G^2m^3/L^2)
+(r\mbox{-dependent terms}).
\label{eq:torque1}
\end{eqnarray}
Since the spatial triad is geodetically parallel transported
in the background geometry to the leading order,
Eqs.~(\ref{eq:spin1}) and (\ref{eq:torque1}) result in
\begin{eqnarray}
{D\over dT}S^{\alpha\beta}(T) =O\left({(Gm)^2\over L^2}\right).
\label{eq:parallel1}
\end{eqnarray}
Thus in the test particle limit $m\to0$ the spin tensor is parallel
transported along the particle trajectory in the background geometry.
We next consider Eqs.~(\ref{eq:momentum}) and (\ref{eq:force}).
Equation (\ref{eq:momentum}) has a dimension of $(mass)^1$ and
we extract out the terms of $O(m)$ and $O(Gm^2/L)$.
We find that there will be linear contributions
from $({}^0_1)$-, $({}^0_2)$-components of the metric,
and bilinear contributions
from pairs of $({}^0_1)-$ and $({}^0_1)$-components of the metric.
We obtain
\begin{eqnarray}
P^0(T,r) &=& m + O(G^2 m^3/L^2) +(r\mbox{-dependent terms}), \\
P^i(T,r) &=& O(G^2 m^3/L^2) +(r\mbox{-dependent terms}).
\end{eqnarray}
Equation (\ref{eq:force}) has a dimension of $(mass)/(length)$ and
we consider the terms of $O(m/L)$ and $O(Gm^2/L^2)$.
There will be bilinear contributions from
pairs of the $({}^0_2)-$ and $({}^2_0)$-components
and pairs of the $({}^0_1)-$ and $({}^2_1)$-components of the metric.
We find the former pairs give the spin-induced force
and the latter pairs give the radiation reaction force.
A straightforward computation results in
\begin{eqnarray}
{d\over dT}P^0(T,r) &=& O(G^2 m^3/L^3) +(r\mbox{-dependent terms}),
\\
{d\over dT}P^i(T,r) &=&
-{m\over 2}R_{\alpha\beta\gamma\delta}(z(T))
f^\alpha{}_i(T)\dot z^\beta(T)S^{\gamma\delta}(T)
\nonumber \\ &&
+{Gm^2 \over 4} \bar \Theta_{\alpha\beta\gamma}(T)f^\gamma{}_i(T)
\left(2\dot z^\alpha(T)\dot z^\beta(T)+g^{\alpha\beta}(z(T))\right)
\nonumber \\ &&
+m\dot z^\alpha(T) {D\over dT}f_{\alpha i}(T)
\nonumber\\&&
+O(G^2 m^3/L^3) +(r\mbox{-dependent terms}).
\end{eqnarray}
Taking the $T$-derivative of Eqs.~(\ref{eq:Ltt}) and (\ref{eq:Lti}),
we obtain the equations of motion,
\begin{eqnarray}
{D \over dT}\dot z^\alpha(T)
&=& -{Gm \over 2} \Theta_{\beta\gamma\delta}(T)
\left(2\dot z^\beta(T)g^{\alpha\gamma}(z(T))\dot z^\delta(T)
-\dot z^\beta(T)\dot z^\gamma(T)g^{\alpha\delta}(z(T))\right)
\nonumber \\ &&
-{1\over 2}R^\alpha{}_{\beta\gamma\delta}(z(T))
\dot z^\beta(T)S^{\gamma\delta}(T)
+O(G^2 m^2/L^3) \,. \label{eq:result0}
\end{eqnarray}
Introducing the proper time $\tau$ of the orbit,
\begin{eqnarray}
{d\tau \over dT}&=&
1+ {Gm \over 2}\bar \Theta_{\alpha\beta}(T)\dot z^\alpha(T)\dot z^\beta(T),
\end{eqnarray}
we finally arrive at
\begin{eqnarray}
{D \over d\tau}{d z^\alpha \over d\tau}(\tau) &=&
-{Gm \over 2}\Theta_{\beta\gamma\delta}(\tau)
\left({d z^\alpha \over d\tau}{d z^\beta \over d\tau}
{d z^\gamma \over d\tau}{d z^\delta \over d\tau}
+2{d z^\beta\over d\tau}g^{\alpha\gamma}(\tau){d z^\delta\over d\tau}
-{d z^\beta\over d\tau}{d z^\gamma\over d\tau}g^{\alpha\delta}(\tau)
\right)
\nonumber \\ &&
-{1\over 2}R^\alpha{}_{\beta\gamma\delta}(\tau)
{d z^\beta\over d\tau}S^{\gamma\delta}(\tau)
+O(G^2 m^2/L^3) \,,
\label{eq:result}
\end{eqnarray}
where $Q(\tau)=Q(z(\tau))$. One finds that the result is exactly equal to
Eq.~(\ref{eq:damp1r}) except for the spin-curvature coupling term.
In the case of a monopole particle discussed in the previous
section, the $({}^2_1)$ matching gave two conditions
(\ref{eq:m3/2ttg}) and (\ref{eq:m3/2tig}).
The latter condition was crucial to obtain the $O(Gm/L^2)$ correction
terms in the evolution equations of $f^{\alpha}_i$.
In the present analysis, we do not have the counterpart of this
condition. This indicates that the gauge condition associated with the
rotational mode must be specified to determine $Df_{\alpha i}/dT$.
\section{Discussion}
Let us first discuss the physical meaning of the equations of motion
obtained in the preceding two sections.
For simplicity, we consider the case of a monopole particle.
We divide the perturbed metric in the external scheme
into two parts:
\begin{equation}
h_{\mu\nu}(x)=h_{(mono)\mu\nu}(x)+h_{(tail)\mu\nu}(x),
\end{equation}
where $h_{(tail)\mu\nu}(x)$ is the part due to the
$v^{\mu\nu\alpha\beta}$ term in the Green function, while
$h_{(mono)\mu\nu}$ is due to the $u_{\mu\nu\alpha\beta}$ term
(see Eq.~(\ref{eq:metper})).
The singular behavior of the perturbed metric
in the coincidence limit $x\to z$ is totally due to
$h_{(mono)\mu\nu}(x)$. Thus, we introduce the regularized
perturbed metric as
\begin{eqnarray}
\tilde g_{(reg)\mu\nu}(x):=g_{\mu\nu}(x)+h_{(tail)\mu\nu}(x),
\label{eq:regmet}
\end{eqnarray}
which no longer has any singular behavior.
Then we find the equations of motion (\ref{eq:damp1}) and the
evolution equations of the triad basis (\ref{eq:damp2})
coincide with the geodesic equation and the geodetic parallel transport
equation, respectively, on the regularized spacetime with
the metric $\tilde g_{(reg)\mu\nu}$.
To see this, let us consider the parallel transport of a vector
$A^{\alpha}$ along a geodesic $x^\alpha=z^\alpha(\tilde\tau)$
in this spacetime. It is given by
\begin{equation}
{\tilde D\over d\tilde\tau}A^{\alpha}:=
{D\over d\tilde\tau}A^{\alpha}+\delta
\Gamma_{(reg)}{}^{\alpha}{}_{\beta\gamma} A^{\beta}
{dz^{\gamma}\over d\tilde\tau}=0,
\label{eq:saigo}
\end{equation}
to linear order in $h_{(tail)\mu\nu}$, where
\begin{equation}
\delta \Gamma_{(reg)}{}^{\alpha}{}_{\beta\gamma}:=
{1\over 2}\left(h_{(tail)}{}^{\alpha}{}_{\beta;\gamma}+
h_{(tail)}{}^{\alpha}{}_{\gamma;\beta}-h_{(tail)\beta\gamma}{}^{;\alpha}
\right) \,.
\end{equation}
Then one recovers Eqs.~(\ref{eq:damp1}) and (\ref{eq:damp2}) by
identifying $\tilde\tau$ with $T$ and
replacing $A^\alpha$ with $dz^\alpha/d T$ or $f^{\alpha}{}_{i}$.
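Explicitly, if $Gm\,\Theta_{\alpha\beta\gamma}$ is identified with
$h_{(tail)\alpha\beta;\gamma}$ (an identification assumed here; it is
the way $\Theta_{\alpha\beta\gamma}$ enters Eqs.~(\ref{eq:damp1}) and
(\ref{eq:damp2})), setting $A^{\alpha}=dz^{\alpha}/dT$ in
Eq.~(\ref{eq:saigo}) yields
\begin{eqnarray}
{D\over dT}{dz^{\alpha}\over dT}
=-\delta\Gamma_{(reg)}{}^{\alpha}{}_{\beta\gamma}
{dz^{\beta}\over dT}{dz^{\gamma}\over dT}
=-{Gm\over 2}\left({\Theta}^{\alpha}{}_{\beta\gamma}
+{\Theta}^{\alpha}{}_{\gamma\beta}
-{\Theta}_{\beta\gamma}{}^{\alpha}\right)
{dz^{\beta}\over dT}{dz^{\gamma}\over dT} \,,
\nonumber
\end{eqnarray}
which is Eq.~(\ref{eq:damp1}) up to the $O\left((Gm/L)^{2}\right)$
remainder; replacing $A^{\alpha}$ by $f^{\alpha}{}_{i}$ similarly
reproduces Eq.~(\ref{eq:damp2}).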
In the case of a spinning particle,
there exists an additional force in the equations of motion
(\ref{eq:result0}) due to the coupling of
the spin and the background curvature.
The result for the monopole particle seems analogous to that in
the electromagnetic case\cite{DeWitt},
except that the instantaneous reaction force which is proportional to
higher derivatives of the particle velocity is absent in the present
case. This is because the particle traces a geodesic in the lowest order
approximation. If an external force field exists, the assumption of the
geodetic motion in the lowest order breaks down and furthermore
the contribution of the external force field
to the energy momentum tensor must be taken into account.
Since this fact makes the problem too complicated,
it is beyond the scope of the present discussion.
Now let us consider how to construct $\tilde g_{(reg)\mu\nu}$
in the case of a monopole particle.
Unfortunately, we do not have any satisfactory formalism that can be
applied to such a calculation, even for a specific background spacetime
such as the Kerr geometry, mainly due to the difficulty in evaluating
the bi-tensor $v_{\mu\nu\alpha\beta}(x,z)$.
Here we give only a few preliminary remarks on this matter.
Basically, there seem to be two approaches
for calculating $\tilde g_{(reg)\mu\nu}$ (or equivalently
$h_{(tail)\mu\nu}$).
The first one is to calculate $h_{(tail)\mu\nu}$ directly.
The second one is to calculate
$h_{\mu\nu}=h_{(mono)\mu\nu}+h_{(tail)\mu\nu}$ and subtract
$h_{(mono)\mu\nu}$ from it.
In the following, we discuss only the first approach.
As for the second approach, we have nothing to add here, but
this direction of research may be fruitful\cite{Minothesis}.
By definition, $h_{(mono)\mu\nu}$ evaluated on the particle trajectory
is independent of the past history of the particle.\footnote{
There is a possibility that
the future light cone emanating from $z$ crosses
the particle trajectory again. Since inclusion of this possibility
makes the problem too complicated, we do not consider it here.}
Therefore if we consider the metric defined by
\begin{equation}
h^{(\Delta\tau)}_{\mu\nu}(x)=
G m \left(\delta_{\mu}{}^{\rho}\delta_{\nu}{}^{\sigma}
-{1\over 2}g_{\mu\nu}(x) g^{\rho\sigma}(x)\right)
\int_{-\infty}^{\tau_x-\Delta\tau}~
d\tau' G^{Ret}_{\rho\sigma\alpha\beta}(x,z(\tau'))\dot z^{\alpha}
(\tau')\dot z^{\beta}(\tau'),
\end{equation}
for any finite $\Delta\tau$\,$(>0)$, it will not contain
$h_{(mono)\mu\nu}$ when it is evaluated on the particle trajectory.
The difference between $h^{(\Delta\tau)}_{\mu\nu}$ and
$h_{(tail)\mu\nu}$ comes from the integral
over a small interval,
\begin{equation}
\sim Gm \int_{\tau_x-\Delta\tau}^{\tau_x}~
d\tau' v_{\rho\sigma\alpha\beta}(x,z(\tau'))\dot z^{\alpha}
(\tau')\dot z^{\beta}(\tau').
\end{equation}
Since $v_{\rho\sigma\alpha\beta}(x,z)$ is regular in the coincidence
limit $x\to z$, this integral will be negligible for a sufficiently
small $\Delta\tau$. Thus
$\lim_{\Delta\tau\to0}h^{(\Delta\tau)}_{\mu\nu}$ will give
$h_{(tail)\mu\nu}$.
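Quantitatively, since $v_{\rho\sigma\alpha\beta}(x,z)$ remains bounded
near the coincidence limit, each component of the residual integral
satisfies
\begin{eqnarray}
\left|Gm\int_{\tau_x-\Delta\tau}^{\tau_x}d\tau'\,
v_{\rho\sigma\alpha\beta}(x,z(\tau'))
\dot z^{\alpha}(\tau')\dot z^{\beta}(\tau')\right|
\le Gm\,\Delta\tau\,
\max_{\tau'}\left|v_{\rho\sigma\alpha\beta}
\dot z^{\alpha}\dot z^{\beta}\right|
\longrightarrow 0
\quad(\Delta\tau\to0) \,.
\nonumber
\end{eqnarray}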
In the case of the electromagnetic (vector) Green function,
a calculation along the above strategy was performed by
DeWitt and DeWitt\cite{DeW2} by assuming that
the background gravitational field is weak, so that
its metric is given by a small perturbation
of the Minkowski metric,
\begin{equation}
g_{\mu\nu}=\eta_{\mu\nu}+h^{(b)}_{\mu\nu}\,.
\end{equation}
DeWitt and DeWitt calculated the relevant part of the
Green function perturbatively to the first order in $h^{(b)}_{\mu\nu}$
by using the Minkowski Green function.
Here we should mention one important fact. We have obtained the
equations of motion with the correction term of $O(Gm/L^2)$.
Although we use the terminology `radiation reaction' to describe it, it
is not appropriate in a narrow sense because the correction term may
well contain something more than just the usual effect of radiation
reaction. In fact, in the electromagnetic case, the existence of
an effect termed `the induced polarization force on the
background spacetime' has been reported by several authors\cite{inpol}.
Furthermore, a calculation analogous to that done by DeWitt and
DeWitt\cite{DeW2} for the electromagnetic case was done by
Carmeli\cite{Carm} for the gravitational case and it was shown that the
tail part correctly reproduces the lowest order post-Newtonian
corrections to the equations of motion. However, no such calculation has
been done for the background with strong gravity, such as
a black hole spacetime. It seems difficult to develop DeWitt and
DeWitt's approach to higher orders in $h^{(b)}_{\mu\nu}$.
It is a challenging issue to formulate a systematic method to evaluate
the tail part of the metric when the background gravity is strong and
clarify its physical content.
Turning back to the effect of the gravitational radiation reaction,
we should make one additional comment.
There have been some proposals to obtain the radiation reaction force in
a quite different manner. Among others is the use of the radiative Green
function (half of the retarded minus the advanced Green function) in the
case of a Kerr background proposed by Gal'tsov\cite{Gal}. As easily seen
from the results in section \ref{sec:minoExt}, the use of the
radiative Green function instead of the retarded one results in the
replacement of $\psi_{(tail)\mu\nu}(x)$ by $\psi^{Rad}_{(v)\mu\nu}(x)$,
which is defined by
\begin{equation}
\psi^{Rad}_{(v)}{}_{\beta\gamma}(x):=-Gm\int^{+\infty}_{-\infty}d\tau'
v_{\beta\gamma\alpha'\beta'}(x,z(\tau'))
\dot z^{\alpha'}(\tau')\dot z^{\beta'}(\tau').
\end{equation}
Gal'tsov proved that the back reaction force computed using the
radiative Green function gives the loss rates of the energy and the
$z$-component of the angular momentum of the particle in quasi-periodic
orbits which correctly balance with the emission rates of the
corresponding quantities by gravitational radiation.
However, we do not think that this fact indicates
the correctness of the prescription, even if we restrict it to the case
of a Kerr background, because those constants of motion
are special ones which reflect the existence of the corresponding
Killing vector fields. For such quantities, there may be some symmetry
in the structure of the Green function which makes the use of the
radiative Green function valid. However, it is doubtful that the
radiative Green function correctly describes the radiation reaction
effect on the Carter constant.
Finally we make a couple of comments on the implications of our results.
It is important to note that the particle does not have to be a black
hole since the detailed internal boundary condition was not used to
determine the metric in the internal scheme. The resulting equations of
motion should be equally applicable to any compact body such as a
neutron star. The essential assumption here is that the only length
scale associated with the particle is $Gm$.
In this sense, we have shown the strong equivalence principle to
the first order in $Gm$.
We also note that our results strongly support, if not rigorously
justify, the so-called black hole perturbation approach.
In the black hole perturbation approach, one calculates the
gravitational radiation from a particle orbiting a black hole with the
assumption that the particle is a point-like object with the energy
momentum tensor described by the delta function. Although this approach
has been fruitful, there has been always skepticism about the validity
of the delta functional source. What we have shown in this chapter is
the consistency of using the delta function in the source energy
momentum tensor within the order of matched asymptotic expansion we have
examined.
\section{Introduction}
The challenges associated with big data are commonly referred to as
the 3 V's of Big Data - Volume, Velocity and
Variety~\cite{laney20013d}. The 3 V's provide a guide to the largest outstanding challenges associated with
working with big data systems. Big data \textbf{volume} stresses the storage,
memory and compute capacity of a computing system and requires access
to a computing cloud. The \textbf{velocity} of big data stresses the rate at which data can
be absorbed and meaningful answers produced. Big data \textbf{variety}
makes it difficult to develop algorithms and tools that can address
the large variety of input data.
Collecting and analyzing large amounts of data is a growing
challenge within the scientific community. The growing gap between data
and users calls for innovative tools that address the challenges faced
by big data volume, velocity and variety. Numerous tools exist that
allow users to store, query and index these massive quantities of
data. Each storage or database engine comes with the promise of
dealing with complex data. Scientists and engineers who wish to use
these systems often quickly find that there is no single technology
that offers a panacea to the complexity of
information~\cite{stonebraker2005one, cattell2011scalable}. When using
multiple technologies, however, there is significant difficulty in
designing the movement of information between storage and database
engines to support an end-to-end application. In this article, we
present the Dynamic Distributed Dimensional Data Model - a technology
developed at MIT Lincoln Laboratory. Previous articles on D4M~\cite{kepner2013d4m,kepner2012dynamic} have
showcased the ability of D4M to interact with the popular Apache
Accumulo database. Recent advances in D4M now allow D4M to operate on
a variety of back end storage or database engines while providing a
federated look to the end user through the use of associative
arrays. Associative arrays provide a mathematical interface across
different database technologies and can help solve one of the largest
problems of working with numerous backend storage or database engines
- how do we correlate information that may be spread across different
storage or database engines?
The Intel Science and Technology Center (ISTC) on Big Data~\cite{istcwebpage} is centered
at the MIT Lincoln Laboratory and supports five major research themes:
Big Data Databases and Analytics, Big Data Math and Algorithms, Big
Data Visualization, Big Data Architectures, and Streaming Big Data. One of the core goals of the ISTC is to develop the next generation software
stack required to manage heterogeneous data in order to enable large
scale data analytics on data from the Internet of Things (IoT). This
solution stack is known as the Big Data Working Group (BigDAWG)
stack~\cite{dugganbigdawg}. The BigDAWG solution stack is a vertically integrated stack
that supports numerous hardware platforms, analytics libraries,
database and storage engines, software development through the BigDAWG
Query Language (BQL) and Compiler, and visualization and presentation
of data through a variety of applications. The BQL will provide software and
analytics developers an abstraction of the underlying database and
storage engines, analytics libraries and hardware platforms. A key
feature of BQL is to develop the API required to provide a federated
look to developers.
Federated databases have the ability to abstract away details about
the underlying storage or database engine. Very often, federated
databases are used to provide some mutual benefit. This
feature can be quite appealing to scientists who wish to write complex
analytics and are not necessarily database or storage experts. There
has been much promise of federated
databases~\cite{sheth1990federated}. Federated databases provide the
ability to give users the feel of a data warehouse without physically
moving data into a central repository~\cite{haas2002data}. As an
example of a federated database, consider
Myria~\cite{halperin2014demonstration, howe2013collaborative}, a distributed database that uses
SQL or MyriaL as the language all of which was developed at the University of
Washington. One of the challenges in database federation has
been in developing a programming API that can be used to interact
with the ever-increasing variety of databases and storage
engines~\cite{cheneyprogramming}.
D4M's mathematical foundation, associative arrays, can help alleviate
the challenges associated with open problems in federated
databases. Having a one-to-one relationship with
triple store or with key-value store systems allows a flexible
representation that can be supported by many databases. The ability to
perform linear algebraic operations on associative arrays (and thus
data stored in different database engines) opens up
big-data analytics to non-computer scientists. We believe that an API based on
mathematical operations is easy to learn. The software
implementation in popular languages such as MATLAB,
Octave, and Julia allows the rapid prototyping of new and complex
analytics with minimal effort.
In this article, we present our work on developing associative arrays
as the datatype for big data in Section~\ref{assoc}. In
Section~\ref{d4m}, we present D4M and provide examples of how database
operations such as context and cast can be done with D4M and
associative arrays through the D4M MATLAB toolbox. In Section~\ref{scidb}, in order to motivate the
ease with which new database support can be built into D4M, we detail
the D4M-SciDB connector. In order to demonstrate the use of D4M, associative arrays, and
database engines, we provide a
simple case study of developing an analytic for medical data that
spans across three different storage engines in
Section~\ref{medical}. Finally, we conclude in
Section~\ref{conc}.
\section{Associative Arrays}
\label{assoc}
Associative arrays are
used to describe the relationship between multidimensional entities
using numeric/string keys and numeric/string values. Associative
arrays provide a generalization of sparse matrices. Formally, an
associative array \textbf{A} is a map from $d$ sets of keys $K_1 \times K_2 \times ... \times K_d$ to a value set $V$ with a semi-ring structure
$$
{\bf A}: K_1 \times K_2 \times ... \times K_d \rightarrow V
$$
where $(V,\oplus,\otimes, 0, 1)$ is a semi-ring with addition operator
$\oplus$, multiplication operator $\otimes$,
additive-identity/multiplicative-annihilator 0, and
multiplicative-identity 1. Furthermore, associative arrays have a
finite number of non-zero values, which means their support $\mathrm{supp}({\bf
A})={\bf A}^{-1} (V \backslash \{ 0 \} )$ is finite. While
associative arrays can be any number of dimensions, a common technique
to use associative arrays in databases is to project the d-dimensional
set into two dimensions as in:
$$
{\bf A}: K_1 \times \{ K_2 \cup K_3 \cup ... \cup K_d \} \rightarrow V
$$
where $\cup$ denotes the union operation. In this 2D
representation, $K_1$ is often referred to as the row key and $ \{ K_2
\cup K_3 \cup ... \cup K_d \} $ is referred to as the column key.
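As an illustration of this projection, the following sketch (plain Python, not the D4M toolbox) maps a three-key associative array into the two-dimensional form; joining the trailing keys with a separator character is an assumption made here for readability, not the D4M schema itself.

```python
# Sketch: project a 3-dimensional associative array into the 2D
# (row key, column key) form described above. Keys K2..K3 are
# combined into a single column key; here we join them with '|'
# (a separator chosen for illustration only).

def project_2d(a3, sep='|'):
    """Map {(k1, k2, k3): v} to {(k1, k2 sep k3): v}."""
    return {(k1, k2 + sep + k3): v for (k1, k2, k3), v in a3.items()}

a3 = {('alice', 'color', 'blue'): 1, ('bob', 'color', 'red'): 1}
print(project_2d(a3))
# {('alice', 'color|blue'): 1, ('bob', 'color|red'): 1}
```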
As a data structure, associative arrays return a value given some
number of keys and constitute a function between a set of tuples and a
value space. In practice, every associative array can be created from
an empty associative array by simply adding and subtracting values. With
this definition, it is assumed that only a finite number of tuples
will have values, and all other tuples have a default
value of the additive-identity/multiplicative-annihilator 0. Further, the associative array mapping should support
operations that resemble operations on ordinary vectors and matrices
such as matrix multiplication. In practice, associative arrays support a variety of linear algebraic operations
such as summation, union, intersection, multiplication, and
element-wise operations. For example, summing two associative arrays
that have no common row or column keys performs a union of their
underlying non-zero keys, while element-wise multiplication performs
an operation similar to an intersection. Associative arrays have a one-to-one
relationship with key-value store or triple store databases, sparse
matrices, and adjacency or incidence matrix representations of
graphs. These relations allow complex datasets to be easily converted
to associative array representation. Linear algebraic operations on
associative arrays can be used to perform graph algorithms as described in~\cite{gabb2015}.
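The union/intersection behavior described above can be sketched with a dictionary keyed by (row, column) pairs over the standard arithmetic semiring; this is an illustration of the semantics, not the D4M implementation:

```python
# Sketch of associative-array addition and element-wise multiplication
# over the standard (+, *) semiring. Addition unions the non-zero
# keys; element-wise multiplication keeps only keys present in both
# arrays (0 is the additive identity and annihilates under *, so any
# key missing from either array maps to 0).

def assoc_add(a, b):
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return out

def assoc_elemmult(a, b):
    return {k: a[k] * b[k] for k in a.keys() & b.keys()}

a = {('r1', 'c1'): 2, ('r1', 'c2'): 3}
b = {('r1', 'c2'): 4, ('r2', 'c1'): 5}
print(assoc_add(a, b))       # union of keys: 3 entries
print(assoc_elemmult(a, b))  # {('r1', 'c2'): 12}
```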
NoSQL database tables can be exactly described using the mathematics of
associative arrays~\cite{kepner2015associative}. In the D4M schema, a table in
a NoSQL database, such as Apache Accumulo, is an associative array. In
this context, there are two primary differences between associative
arrays and sparse matrices: associative array entries always carry
their global row and column labels while sparse matrix entries do not,
and sparse matrices can have empty rows or columns while associative
arrays cannot.
Using associative arrays as a datatype for big data has many
benefits such as:
\begin{itemize}
\item Using associative arrays as the base datatype will make database
development easier. DB developers will only need to provide an
optimized interface to associative arrays;
\item Associative arrays can limit the number of programming language-DB connectors
required. Currently, if there are N programming languages and M
database engines, we need $N \times M$ connectors. Having a single
interface can reduce this to $N + M$ connectors; and
\item An API based on mathematical operations is natural for the vast
majority of scientists and engineers.
\end{itemize}
\section{The Dynamic Distributed Dimensional Data Model (D4M)}
\label{d4m}
The Dynamic Distributed Dimensional Data Model (D4M) combines techniques
from diverse fields to support rapid prototyping of big data
problems in a familiar programming environment. Specifically, D4M consists of 3 components:
\begin{itemize}
\item A software API that enables D4M to connect with databases,
\item A software API that supports Associative Arrays and their
mathematics, and
\item A schema to represent unstructured multi-dimensional datasets.
\end{itemize}
D4M has a multi layer architecture that allows users to develop
analytics without knowledge of the underlying engine. In
Figure~\ref{d4marchitecture} we describe the various components of
D4M. The D4M software API is roughly broken into two components - a client
binding and a server binding.
\begin{figure}
\centerline{
\includegraphics[width=3.1in]{d4marchitecture.pdf}
}
\caption{D4M provides a middle layer between storage/database engines
and applications. The D4M client binding provides support for
associative arrays. The D4M server binding connects client code to
different database engines.}
\label{d4marchitecture}
\end{figure}
The D4M client binding is responsible for most of the
sophisticated data processing. Support for the associative array
datatype allows users to quickly convert a representative subset of
their dataset into an
associative array and prototype different algorithms to test for
mathematical correctness. Given the relationship between associative
arrays and sparse matrices, there are a wide variety of potentially
complex algorithms such as machine learning that can be directly
translated to operations on associative arrays. Once algorithms have
been developed and tested for correctness, a user can make use of the
D4M server binding to scale their dataset by connecting to a database engine.
The D4M server binding allows users to map their in-memory associative
arrays to a wide variety of backend storage or database
engines. Creating a database server binding creates an object in local memory
that contains information about the database type, authentication
information, and host. Using this object, one can create a table object
that binds the D4M client to a DB table. With minimal effort, a user can read in raw data, convert to associative
array representation, and insert into a database. Querying from the
database results in associative arrays that can be directly used for
the complex analytics developed using the client binding. Syntax wise,
querying data from an associative array or database binding is the
same. For example, suppose we have an associative array $A$ and
database table binding $T$, finding all data that has a row key
between $a$ and $d$ is denoted as: $A(a:d, :)$ or $T(a:d, :)$
depending on whether information is being requested from the
associative array or database table. In both cases, the data returned
is in the form of an associative array.
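The row-range query semantics can be sketched as follows (a toy stand-in for $A(a{:}d,:)$ over a dictionary, not the actual D4M client code):

```python
# Sketch of the range-query semantics above: selecting all entries
# whose row key falls within a lexicographic range returns a smaller
# associative array of the same form.

def row_range(A, lo, hi):
    return {(r, c): v for (r, c), v in A.items() if lo <= r <= hi}

A = {('a', 'x'): 1, ('b', 'y'): 2, ('e', 'x'): 3}
print(row_range(A, 'a', 'd'))  # {('a', 'x'): 1, ('b', 'y'): 2}
```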
In order to connect to different database
engines, D4M uses various connectors (either existing or custom built) to
connect to popular databases. As an example, the D4M-mySQL connection
is done by calling the Java Database Connector (JDBC) from D4M. While the current
implementation of D4M has a limited set of backend engines that are
supported, this number is increasing.
A typical user workflow to develop an analytic on a large dataset will
be as follows. First, the user makes use of a schema to convert
their raw dataset (often in JSON, TSV, CSV format) into an associative
array. The user can then read one or more associative arrays into
memory, develop the desired analytic, and verify its correctness
against alternate pieces of the larger dataset. The user can then insert the full
dataset, converted to associative array format, into a database engine (or
set of databases). The user can then query for data which results in
an associative array that can be used directly in the analytic
developed.
One of the challenges in working with numerous backend databases is in
developing a uniform syntax and data format to put queries in the
context of a particular database or to cast information from one to
another in order to perform cross-DB analytics.
The context operation provides explicit control of the backend
database or storage engine, while the cast operation moves data between
storage and database engines.
In D4M, the context operation is done by using the DBserver command
which returns a DB object that contains information about the specific
database being connected to. Thus, when performing a query on a
backend database, the DB operator will use the correct context and
connector to perform the required query. The DBserver function in the
server binding returns an object to a DB that contains the host,
instance, and authentication information.
\begin{lstlisting}[frame=single]
DB = DBserver(host,type,instanceName,user,pass)
Inputs:
host = database host name
type = type of database
instanceName = database instance name
username = username in database
password = password associated with username
Outputs:
DB = database object with a binding to specific DB
\end{lstlisting}
Once a DB object is created, one can perform database specific
operations such as \emph{ls} or create a binding to a specific table
in the database. If the requested table does not exist, a table with
that name will be created in the database. Binding to a table provides
functionality such as querying and inserting data.
\begin{lstlisting}[frame=single]
A = T(rows,cols)
Inputs:
T = database table
rows = row keys to select
cols = column keys to select
Outputs:
A = associative array of all non-empty row/columns
\end{lstlisting}
The following example describes how one can connect to a database and
return all the information in a particular table in the database.
\begin{lstlisting}[frame=single]
DB = DBserver('host','type','db_name','user','pass');
table_list=ls(DB);
T=DB('tab_name');
A=T(:,:);
\end{lstlisting}
In D4M, associative arrays can be used as the interface to cast information from one
database to another. Consider the following example of casting
data from mySQL to Apache Accumulo (a noSQL database). Of course, it
is up to the user to ensure that data can be cast from one database
to another (for example, certain databases may not support certain
datatypes or schemas). The following example describes how one could
cast data from mySQL to a noSQL database such as Apache Accumulo via
associative arrays.
\begin{lstlisting}[frame=single]
DBsql=DBserver('host','mysql','sql_dbname','u','p');
DBnosql=DBserver('host','nosql','dbname','u','p');
T=DBsql('tabname');
Asql=T(:,:);
Tnosql=DBnosql('tabname');
put(Tnosql, Asql);
Anosql=Tnosql(:,:)
\end{lstlisting}
One of the important aspects of D4M is the ability to easily add new
database engines via an API exposed by the database developer. In the
next section, we will discuss how a popular NewSQL database SciDB was
added to D4M. A thorough description of the D4M-Accumulo binding can
be found in~\cite{kepner2013d4m}.
\section{The SciDB-D4M Connection}
\label{scidb}
SciDB is a parallel database designed for multidimensional data
management with support for a variety of in-database computation and
analytics. SciDB has the ability to connect to a variety of client side
tools such as R, Python or a Web Browser. The SciDB coordinator is
responsible for moving data across back end data
storage~\cite{brown2010overview}. Connection to a SciDB server is
mediated via the coordinator. Other instances in the SciDB cluster are
referred to as worker nodes. SciDB represents data as multidimensional
arrays which are defined by specifying dimensions and
attributes. Dimensions are 64-bit integers, and attributes can be one
of many supported SciDB datatypes. SciDB supports a variety of
connection mechanisms such as JDBC or a SHIM.
A SHIM is a small library that is capable of intercepting API calls
and translating them into the underlying system API. In SciDB, the
SHIM is a basic SciDB client that exposes SciDB functionality via an
HTTP interface~\cite{scidbshim}. The D4M-SciDB connection is built
using the SciDB SHIM. Specifically, given an operation on SciDB table, D4M
will convert this operation into a query that is supported by the
SciDB SHIM and pass it to the coordinator node. The coordinator node
will then perform the requested operation and return data back to D4M
via the established SHIM connection. As described in
Figure~\ref{scidb-d4m}, when a user calls a SciDB context
function, D4M will automatically translate the query into an operation
supported by the SciDB SHIM. When connecting to a SciDB table, a user will first call the \emph{DBserver}
operation that will authenticate and connect to SciDB via the
SHIM. This will return a token that is held in an object returned by
\emph{DBserver}. To establish a connection with an existing table in
SciDB, one can issue the D4M \emph{DBtable} command, which takes as
arguments the object returned by \emph{DBserver} and the required dimensions and
attributes. Currently, a number of D4M server binding commands are supported to
directly interface with the SciDB table. For example, \emph{nnz}, will
return the number of non-zero entries in a table. In any of these API
examples, the command issues the query to the backend database using
the context of the DB command. Consider the example of inserting an associative
array into SciDB. The user will create an associative array and table
binding as described in Section~\ref{d4m}. The user can use the D4M
\emph{put} command which converts the associative
array into a datatype supported by SciDB and ingests the converted data into the
SciDB coordinator node. The dataflow is described in Figure~\ref{scidb-d4m}.
\begin{figure}
\centerline{
\includegraphics[width=2.8in]{scidbdataflow.pdf}
}
\caption{D4M-SciDB Dataflow. A D4M operation will create a session
with the SciDB coordinator node which will return information
through D4M for requested data.}
\label{scidb-d4m}
\end{figure}
After
optimization, inserts are done in 128 MB batches using the parallel
CSV loader. Once data is in SciDB, the standard D4M API can be used to
pull data back. For example, if the table binding is held in the
object \emph{T}, \emph{T(:,:)} returns all the elements in the table,
and \emph{T(row1:rowN, :)} returns the elements within the row range
\emph{row1:rowN}.
\subsection{D4M-SciDB Performance}
In order to benchmark SciDB, data was generated using a random graph
generator from the Graph500
benchmark~\cite{murphy2010introducing}. The Graph500 scalable data
generator can efficiently generate power-law graphs that represent common graphs such as those generated from
social media datasets. The number of vertices and edges in the graph
are set using a positive integer called the SCALE parameter. Given a SCALE parameter,
the number of vertices, N, and the number of edges, M, are then
computed as $N=2^{SCALE}$, and $M= 8N$.
For example, if SCALE = 14, then N = 16384 and M = 131072. The
Graph500 generator uses a recursive matrix
algorithm~\cite{chakrabarti2004r} to generate a set of starting
vertices and ending vertices corresponding to edges in a graph. This graph can then be represented
as a large $N \times N$ sparse matrix $A$, where $A(i,j) = 1$ indicates an edge
from vertex $i$ to vertex $j$, often called the adjacency matrix. As an example, consider
Figure~\ref{kronecker}, which shows the adjacency
matrix and distribution of degrees for a SCALE 14 graph generated
using the Kronecker graph generator. The degree of a vertex is the
number of edges incident to that vertex. For a power-law graph, we
expect the number of vertices with a given degree to fall off rapidly
as the degree increases (i.e., few vertices have a high degree, and
many vertices have a low degree).
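A minimal sketch of such a recursive matrix generator is below. The quadrant probabilities ($a{=}0.57$, $b{=}0.19$, $c{=}0.19$, $d{=}0.05$) are the published Graph500 defaults; this is an illustration, not the benchmark's reference implementation.

```python
import random

# Sketch of the recursive matrix (R-MAT) generator behind Graph500.
# Each edge is placed by recursively choosing one quadrant of the
# adjacency matrix per bit of the vertex index; the skewed quadrant
# probabilities yield a power-law-like degree distribution.
# N = 2**SCALE vertices and M = 8*N edges, as in the text.

def rmat_edge(scale, a=0.57, b=0.19, c=0.19, rng=random):
    i = j = 0
    for _ in range(scale):      # one quadrant choice per bit
        i, j = 2 * i, 2 * j
        r = rng.random()
        if r < a:               # top-left quadrant
            pass
        elif r < a + b:         # top-right
            j += 1
        elif r < a + b + c:     # bottom-left
            i += 1
        else:                   # bottom-right (prob. d = 0.05)
            i += 1
            j += 1
    return i, j

def rmat_graph(scale):
    n, m = 2 ** scale, 8 * (2 ** scale)
    return n, [rmat_edge(scale) for _ in range(m)]

n, edges = rmat_graph(14)
print(n, len(edges))  # 16384 131072
```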
\begin{figure}
\centerline{
\includegraphics[width=3.4in]{scale14_kronecker_noperm.pdf}
}
\caption{Kronecker Graph Generator. The figure on the left represents
the adjacency matrix. A connection between two vertices is denoted
by a blue dot. The figure on the right shows the degree distribution
of the graph on the left.}
\label{kronecker}
\end{figure}
SciDB is a highly scalable database and is capable of connecting with
multiple clients at once. In order to test the scalability of SciDB, we
use pMATLAB~\cite{bliss2007pmatlab} in addition to D4M to insert data
from multiple clients simultaneously. In order to overcome a SciDB
bottleneck that applies a table lock when data is being written to a
table, we use D4M to create multiple tables based on the total number
of ingestors. For example, if there are four simultaneous ingestors,
we create 4 tables into which each ingestor will simultaneously insert. The resulting tables can be merged after the ingest using D4M if desired.
SciDB was launched using the MIT SuperCloud~\cite{reuther2013llsupercloud} architecture through the
database hosting system. For the purpose of benchmarking SciDB on a
single node, instances were launched on a system with Intel Xeon E5
processors with 16 cores and 64GB of RAM. SciDB coordinator and worker
nodes were located on the same physical node.
Weak scaling is a measure of the time taken to solve a problem whose
total size grows with the number of processors, i.e., a fixed problem
size per processor. In Figure~\ref{weakscaling}, we
describe the performance of inserting into SciDB, using D4M, a
Kronecker graph whose SCALE parameter varies with the number of
processors. The maximum performance (insert rate) was
observed at 10 processors.
\begin{figure}
\centerline{
\includegraphics[width=3.1in]{weakscaling_onlyrate.pdf}
}
\caption{Weak scaling of D4M-SciDB insert performance for problem
size that varies with number of processors.}
\label{weakscaling}
\end{figure}
Strong scaling is a measure of the time taken for solving a fixed
total problem size. Figure~\ref{strongscaling} describes the results varying
the number of inserters for a fixed SCALE 19 Kronecker Graph. The
maximum performance (insert rate) was observed to be at 8 processors.
\begin{figure}
\centerline{
\includegraphics[width=3.1in]{strongscaling_onlyrate.pdf}
}
\caption{Strong scaling of D4M-SciDB insert performance for varying
number of processes for fixed SCALE=19 problem size. The y-axis represents the
insert rate.}
\label{strongscaling}
\end{figure}
\section{Medical Big Data Processing}
\label{medical}
Medical big data is a common example used to justify the adage that ``one size
does not fit all'' for database and storage engines. Consider the popular MIMIC II dataset~\cite{MIMICII}. This dataset
consists of data collected from a variety of Intensive Care Units (ICUs) at the
Beth Israel Deaconess Hospital. The data contained in the MIMIC II
dataset was collected over seven years and contains data from a
variety of clinical and waveform sources. The clinical dataset contains the
data collected from tens of thousands of individuals and consists of
information such as patient demographics, medications, interventions,
and text-based doctor or nurse notes. The waveform dataset contains
thousands of time series physiological signal recordings such as ECG
signals, arterial blood pressure, and other measurements of patient
vital signs. In order to support data extraction from these different
datasets, one option would be to attempt to organize all the information
into a single database engine. However, existing technologies would
prove to be cumbersome or inefficient for such a task. The next solution
is to store and index each of the individual components into a storage
or database engine that is the most efficient for a particular data
modality. While technically this solution may be the most efficient,
it makes application development difficult as researchers need to be
aware of underlying technologies and make development highly dependent
on changing technologies. D4M and associative arrays can be used to
provide developers (such as a medical researcher) with an abstraction
that hides such details in order to develop technology-agnostic
applications. As a part of the ISTC for Big Data, a prototype
application was developed that leverages different backend storage
engines. In this solution, the MIMIC II clinical data was placed in a relational database (MySQL), the text
notes were placed in Apache Accumulo, and the waveform data was placed
in SciDB using D4M.
\begin{figure*}[ht!]
\centerline{
\includegraphics[width=4.4in]{mimicviz.pdf}
}
\caption{Screen shots of MIMIC II Visualization that uses D4M and
associative arrays for back end processing.}
\label{fi:mimicviz}
\end{figure*}
The prototype developed supports cross-database analytics such as:
``tell me what happens to the heart rate variance of patients who
have taken a particular medication.'' Naturally, such a query needs
information from
the clinical data contained in MySQL database, the patient
database contained in Accumulo and the waveform data
contained in SciDB. The sample query
provided is then broken up into three distinct
queries where: 1) tell me which patients have taken a particular medication goes to
MySQL, 2) tell me which of these patients have heart
beat waveforms goes to Accumulo, and 3) show me what
happened to these patients' heart rate variance goes to the waveform
database. At each of these sub-queries, associative arrays are
generated that can be used to move the results of one query to the next
database engine. In Figure~\ref{fi:mimicviz}, we show the web front end that
uses D4M and associative arrays to implement the query described
above. As an example of how D4M and associative arrays are used,
querying the relational table results in an associative array
where the row keys represent the table name, and column keys represent the
patients who have taken Lisinopril. The resulting column keys are directly
passed into Accumulo to find all patients who have a
certain type of waveform where the rows contain the patient ID and
columns contain related waveform IDs. The resultant associative array can then be
passed directly into SciDB to extract the waveforms of interest and
perform the required analytic.
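The key-passing pattern above can be sketched with plain dictionaries standing in for the three engines; the table contents, patient IDs, and waveform values below are hypothetical, chosen only to show how one query's keys become the next query's input:

```python
# Toy stand-ins for the three engines: a relational medications table,
# an Accumulo-style patient/waveform table, and a SciDB-style signal
# store. All data below is made up for illustration.
meds = {('medications', 'p001'): 'Lisinopril',
        ('medications', 'p002'): 'Lisinopril',
        ('medications', 'p003'): 'Aspirin'}
waveforms = {('p001', 'wf_17'): 1, ('p002', 'wf_42'): 1}
signals = {'wf_17': [72, 74, 71], 'wf_42': [88, 85, 90]}

# 1) which patients took Lisinopril (relational step)
patients = {c for (_, c), v in meds.items() if v == 'Lisinopril'}
# 2) which of those patients have heart-beat waveforms (key-value step)
wf_ids = {c for (r, c) in waveforms if r in patients}
# 3) pull those waveforms and compute a toy variance (array step)
def var(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

print(sorted((w, var(signals[w])) for w in wf_ids))
```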
\section{Conclusions}
\label{conc}
D4M is a toolkit that supports a variety of database and storage
engine operations. Associative arrays serve as a natural
interface between heterogeneous database engines. Currently, D4M is
designed to work with a variety of engines such as SQL, Accumulo, and
SciDB. The number of context operations is still limited; however,
D4M exposes engine-specific functionality to the user by
allowing pass-through queries. Further, casting data
from one engine to another requires data to pass through the
client which may be a bottleneck for large scale data movement.
In this paper, we described D4M and the relation between D4M,
associative arrays and databases. D4M can be used as a tool by
application developers to write applications agnostic of underlying
storage and database engines. Current research includes determining
how data can be cast directly between databases and increasing the
number of context-agnostic D4M commands.
\section*{Acknowledgment}
The authors would like to thank the Intel Science and Technology
Center for Big Data and Paradigm4 for their
technical support in developing the D4M-SciDB connector.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\begin{spacing}{0.75}
\footnotesize
\section{Introduction}
The current development of techniques for big data applications has been extremely useful in many fields including data analysis, bioinformatics and scientific computing. These techniques need to handle large amounts of data and often rely on dimensionality reduction; this is often cast as approximating a matrix with a low-rank element.\\
Non-negative matrix factorization (NMF) is a method that aims at finding part-based, linear representations of non-negative data by factorizing it as the product of two low-rank non-negative matrices~\citep{paatero1994,lee1999}. In \citeyear{lee2000}, two multiplicative algorithms for NMF were introduced by \citeauthor{lee2000}, one that minimizes the conventional least-squares error, and the other that minimizes the generalized Kullback-Leibler (KL) divergence~\citep{lee2000}.\\
These algorithms extend to other losses and have been reported in different applications, e.g., face recognition \citep{wang2005}, music analysis~\citep{fevotte2009a}, and text mining \citep{guduru2006}. An important weakness of multiplicative algorithms is their slow convergence rate on high-dimensional data and their susceptibility to becoming trapped in poor local optima~\citep{lin2007}. Gradient descent methods for NMF provide additional flexibility and fast convergence~\citep{lin2007,kim2008,gillis2011}. These methods have been extensively studied for the minimization of the least-squares error~\citep{lin2007,kim2008}.\\
The goal of this paper is to provide similar first-order methods for the KL divergence, with updates as cheap as multiplicative updates. Our method builds on the recent work of \citet{sun2014}, which considers the alternating direction method of multipliers (ADMM) adapted to this problem. We instead rely on the Chambolle-Pock algorithm~\citep{chambolle2011}, which may be seen as a linearized version of ADMM; we may thus reuse some of the tools developed by~\citet{sun2014} while obtaining an empirically faster algorithm.
\subsection{Contributions}
The main contributions of this article are as follows:
\begin{itemize}
\item[--] We propose a new primal-dual formulation for the convex KL decomposition problem in \sref{sec:pd}, and an extension to the non-convex problem of NMF by alternating minimization in \sref{sec:implementation}.
\item[--] We provide a purely data-driven way to select all step-sizes of our algorithm in \sref{sec:stepsizes}.
\item[--] In our simulations in \sref{sec:results} on synthetic examples, face recognition or music source separation datasets, our algorithm is either faster than existing algorithms, or leads to improved local optima, or both.
\item[--] We derive a cheap and efficient implementation (\autoref{algo:2}).~Matlab code is available online at: \texttt{anonymized website}
\end{itemize}
\section{Problem Formulation}
Let $\mathbf{V}\in\mathbb{R}^{n\times m}_+$ denote the $n\times m$ given matrix formed by $m$ non-negative column vectors of dimensionality~$n$. Considering $r\leq\min(n,m)$, let $\mathbf{W}\in\mathbb{R}^{n\times r}_+$ and $\H\in\mathbb{R}^{r\times m}_+$ be the matrix factors such that
\begin{equation*}
\mathbf{V}\approx\mathbf{W}\H.
\end{equation*}
Two widely used cost functions for NMF are the conventional least-squares error (not detailed herein), and the generalized KL divergence
\begin{eqnarray}
D(\mathbf{V}\|\mathbf{W}\H) &=& - \sum_{i=1}^n\sum_{j=1}^m\mathbf{V}_{ij}\left\{\log\left(\frac{(\mathbf{W}\H)_{ij}}{\mathbf{V}_{ij}}\right) + 1\right\} \nonumber \\
&& \qquad \ \ \ \ \ + \sum_{i=1}^n\sum_{j=1}^m(\mathbf{W}\H)_{ij}.
\label{eq:KLdivergence}
\end{eqnarray}
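A direct numerical transcription of this divergence, assuming strictly positive entries so the logarithm is defined (a sketch for intuition, not part of the algorithms discussed below):

```python
import math

# D(V||WH) = sum_ij [ V_ij * log(V_ij / (WH)_ij) - V_ij + (WH)_ij ],
# which is an equivalent rearrangement of the expression above.
# Matrices are lists of rows; entries are assumed strictly positive.

def matmul(W, H):
    return [[sum(W[i][a] * H[a][j] for a in range(len(H)))
             for j in range(len(H[0]))] for i in range(len(W))]

def kl_div(V, WH):
    d = 0.0
    for vrow, wrow in zip(V, WH):
        for v, w in zip(vrow, wrow):
            d += v * math.log(v / w) - v + w
    return d

V = [[1.0, 2.0], [3.0, 4.0]]
print(kl_div(V, V))  # 0.0: the divergence vanishes when WH = V
W, H = [[1.0], [2.0]], [[1.0, 2.0]]
print(kl_div(V, matmul(W, H)))  # small positive value
```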
In this work, only the KL divergence is considered. Therefore, the optimization problem is as follows:
\begin{equation}
\begin{aligned}
& \underset{\mathbf{W},\H\;\geq\;0}{\operatorname{minimize}}
& & D(\mathbf{V}\|\mathbf{W}\H).
\end{aligned}
\label{prob:nmf}
\end{equation}
We recall that the previous problem is non-convex in both factors simultaneously, whereas convex in each factor separately, i.e., the non-negative decomposition (ND) problems,
\begin{eqnarray}
& &\underset{\mathbf{W}\;\geq\;0}{\operatorname{minimize}}\;D(\mathbf{V}\|\mathbf{W}\H) \label{prob:nd1}\\
&\mathrm{and} &\underset{\H\;\geq\;0}{\operatorname{minimize}}\;D(\mathbf{V}\|\mathbf{W}\H),\label{prob:nd2}
\end{eqnarray}
are convex.\\
We now present two algorithms for NMF: multiplicative updates~\citep{lee2000} and the ADMM-based approach of \citet{sun2014}.
\subsection{Multiplicative updates}
\citet{lee2000} introduced two multiplicative updates algorithms for NMF. One minimizes the conventional least-squares error, and the other one minimizes the KL divergence.\\
The NMF problem, for both losses, is non-convex in $\mathbf{W}$ and $\H$ simultaneously, but convex with respect to each variable taken separately; this makes alternating optimization techniques, i.e., solving at each iteration two separate convex problems, well suited: first fixing $\H$ to estimate $\mathbf{W}$, and then fixing $\mathbf{W}$ to estimate $\H$ \citep{lee2000,fevotte2009a}. The multiplicative updates algorithms (like ours) follow this approach.\\
For the KL divergence loss, the multiplicative update rule~\citep{lee2000} for $\mathbf{W}$ and $\H$ is as follows and may be derived from expectation-maximization (EM) for a certain probabilistic model~\citep{lee2000,fevotte2009b}:
\begin{eqnarray*}
\mathbf{W}_{ia} &\gets& \mathbf{W}_{ia}\frac{\sum_{\mu=1}^m\H_{a\mu}\mathbf{V}_{i\mu}/(\mathbf{W}\H)_{i\mu}}{\sum_{\nu=1}^m\H_{a\nu}},\;\mathrm{and} \\
\H_{a\mu} &\gets& \H_{a\mu}\frac{\sum_{i=1}^n\mathbf{W}_{ia}\mathbf{V}_{i\mu}/(\mathbf{W}\H)_{i\mu}}{\sum_{k=1}^n\mathbf{W}_{ka}}.
\label{eq:MU}
\end{eqnarray*}
The complexity per iteration is $O(rmn)$.
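These updates translate directly into matrix operations; the NumPy sketch below is illustrative (the small constant \texttt{eps}, which guards divisions by zero, is our addition):

```python
import numpy as np

def mu_kl(V, W, H, n_iter=100, eps=1e-12):
    """Multiplicative updates for the KL loss (Lee & Seung); illustrative sketch."""
    for _ in range(n_iter):
        # numerator: row/column sums of V/(WH) weighted by the other factor
        W = W * ((V / (W @ H + eps)) @ H.T) / (H.sum(axis=1) + eps)
        H = H * (W.T @ (V / (W @ H + eps))) / (W.sum(axis=0)[:, None] + eps)
    return W, H
```

Each sweep costs $O(rmn)$ and leaves the objective non-increasing.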
\subsection{Alternating direction method of multipliers (ADMM)}
\citet{sun2014} propose an ADMM technique to solve \pref{prob:nmf} by reformulating it as
\begin{equation*}
\begin{aligned}
& \text{minimize}
& & D(\mathbf{V}\|\mathbf{X})\\
& \text{subject to}
& & \mathbf{X} = \mathbf{Y}\mathbf{Z}\\
&
&& \mathbf{Y} = \mathbf{W},\; \mathbf{Z} = \H\\
&
&& \mathbf{W} \geq 0,\; \H \geq 0.\\
\end{aligned}
\end{equation*}
The updates for the primal variables $\mathbf{W}$, $\H$, $\mathbf{X}$, $\mathbf{Y}$ and $\mathbf{Z}$ are as follows and involve certain proximal operators for the KL loss which are the same as ours in \sref{sec:proximals}:
\begin{eqnarray*}
\mathbf{Y}^\top &\gets& \left(\mathbf{Z}\Z^\top+\mathbf{I}\right)^{-1}\left(\mathbf{Z}\mathbf{X}^\top+\mathbf{W}^\top + \tfrac{1}{\rho}\left(\mathbf{Z}\alpha^\top_\mathbf{X}-\alpha^\top_\mathbf{Y}\right)\right)\\
\mathbf{Z} &\gets& \left(\mathbf{Y}^\top\mathbf{Y}+\mathbf{I}\right)^{-1}\left(\mathbf{Y}^\top\mathbf{X}+\H + \tfrac{1}{\rho}\left(\mathbf{Y}^\top\alpha_\mathbf{X}-\alpha_\mathbf{Z}\right)\right)\\
\mathbf{X} &\gets& \frac{\left(\rho\mathbf{Y}\mathbf{Z}-\alpha_\mathbf{X}-\mathbf{1}\right)+\sqrt{\left(\rho\mathbf{Y}\mathbf{Z}-\alpha_\mathbf{X}-\mathbf{1}\right)^2+4\rho\mathbf{V}}}{2\rho}\\
\mathbf{W} &\gets& \left(\mathbf{Y}+\tfrac{1}{\rho}\alpha_\mathbf{Y}\right)_+\\
\H &\gets& \left(\mathbf{Z}+\tfrac{1}{\rho}\alpha_\mathbf{Z}\right)_+.
\end{eqnarray*}
Note that the primal updates require solving linear systems of size $r \times r$, but the overall complexity remains $O(rmn)$ per iteration (the same as for multiplicative updates).\\
The updates for the dual variables $\alpha_\mathbf{X}$, $\alpha_\mathbf{Y}$ and $\alpha_\mathbf{Z}$ are then:
\begin{eqnarray*}
\alpha_\mathbf{X} &\gets& \alpha_\mathbf{X} + \rho\left(\mathbf{X}-\mathbf{Y}\mathbf{Z}\right)\\
\alpha_\mathbf{Y} &\gets& \alpha_\mathbf{Y} + \rho\left(\mathbf{Y}-\mathbf{W}\right)\\
\alpha_\mathbf{Z} &\gets& \alpha_\mathbf{Z} + \rho\left(\mathbf{Z}-\H\right).
\end{eqnarray*}
This formulation introduces a regularization parameter, $\rho\in\mathbb{R}_+$, that needs to be tuned (in our experiments we choose the best performing one from several candidates).\\
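A condensed NumPy sketch of one ADMM pass is given below (variable names are ours; the helper \texttt{x\_update} is the closed-form positive root of the elementwise quadratic $\rho X^2 - BX - \mathbf{V} = 0$ behind the $\mathbf{X}$ update):

```python
import numpy as np

def x_update(B, V, rho):
    """Elementwise positive root of rho*X^2 - B*X - V = 0 (the X update above)."""
    return (B + np.sqrt(B * B + 4.0 * rho * V)) / (2.0 * rho)

def admm_step(V, W, H, X, Y, Z, aX, aY, aZ, rho=1.0):
    """One pass of the ADMM updates of Sun and Fevotte (illustrative sketch)."""
    r = Z.shape[0]
    I = np.eye(r)
    # primal updates: two r x r linear systems, then the closed-form proximals
    Y = np.linalg.solve(Z @ Z.T + I,
                        Z @ X.T + W.T + (Z @ aX.T - aY.T) / rho).T
    Z = np.linalg.solve(Y.T @ Y + I,
                        Y.T @ X + H + (Y.T @ aX - aZ) / rho)
    X = x_update(rho * (Y @ Z) - aX - 1.0, V, rho)
    W = np.maximum(Y + aY / rho, 0.0)
    H = np.maximum(Z + aZ / rho, 0.0)
    # dual ascent steps
    aX = aX + rho * (X - Y @ Z)
    aY = aY + rho * (Y - W)
    aZ = aZ + rho * (Z - H)
    return W, H, X, Y, Z, aX, aY, aZ
```

Since $\mathbf{V}>0$, the $\mathbf{X}$ update always returns a positive matrix, so the KL term remains well defined.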
Our approach has the following differences: (1) we aim at solving alternatively \emph{convex} problems with a few steps of primal-dual algorithms for convex problems, as opposed to aiming at solving directly the non-convex problem with an iterative approach, (2) for the convex decomposition problem, we have certificates of guarantees, which can be of use for online methods for which decomposition problems are repeatedly solved~\citep{lefevre2011}, and (3) we use a different splitting method, namely the one of~\citet{chambolle2011}, which does not require matrix inversions, and which allows us to compute all step-sizes in a data-driven way.
\section{Proposed Method}
In this section we present a formulation of the convex KL decomposition problem as a first-order primal-dual algorithm (FPA), followed by the proposed NMF technique.
\subsection{Primal and dual computation}\label{sec:pd}
We consider a vector $a\in\mathbb{R}^p_+$ and a matrix $K\in\mathbb{R}^{p\times q}_+$ as known parameters, and $x\in\mathbb{R}^q_+$ as an unknown vector to be estimated, where the following expression holds,
\begin{equation*}
a\approx Kx,
\end{equation*}
and we aim at minimizing the KL divergence between $a$ and $Kx$.\\
This is equivalent to a ND problem as defined in Problems (\ref{prob:nd1}) and (\ref{prob:nd2}), considering $a$ as a column of the given data, $K$ as the fixed factor, and $x$ as a column of the estimated factor, i.e., in \pref{prob:nd1} $a$ and $x$ are column vectors of $\mathbf{V}^\top$ and $\mathbf{W}^\top$ with the same index and $K$ is $\H^\top$, and in \pref{prob:nd2} $a$ and $x$ are columns of $\mathbf{V}$ and $\H$ with the same index and $K$ is $\mathbf{W}$.\\
The \emph{convex} ND problem with KL divergence is thus
\begin{equation}
\underset{x\in\mathbb{R}^q_+}{\operatorname{minimize}}\; - \sum_{i=1}^pa_i\left(\log(K_ix/a_i) + 1\right) + \sum_{i=1}^pK_ix,
\label{prob:primal}
\end{equation}
which may be written as
\begin{equation}
\label{eq:AA}
\begin{aligned}
& \underset{x\in\mathcal{X}}{\operatorname{minimize}}
& & F(Kx) + G(x),
\end{aligned}
\end{equation}
with
\begin{eqnarray*}
F(z) &=& - \sum_{i=1}^pa_i\left(\log(z_i/a_i) + 1\right) \\
G(x) &=& \mathbbm{1}_{x\succeq0} + \sum_{i=1}^pK_ix.
\end{eqnarray*}
Following \citet{pock2009,chambolle2011}, we obtain the dual problem
\begin{equation*}
\begin{aligned}
& \underset{y\in\mathcal{Y}}{\operatorname{maximize}}
& & -F^\ast(y) - G^\ast(-K^\ast y),
\end{aligned}
\end{equation*}
with
\begin{eqnarray*}
F^\ast(y) & = & \sup_z\left\{y^\top z - F(z)\right\} = -\sum_{i=1}^pa_i\log\left(-y_i\right) \\
G^\ast(y) & = & \sup_{x}\left\{y^\top x - G(x)\right\} = \mathbbm{1}_{y\preceq K^\top\mathbf{1}}.
\end{eqnarray*}
We then get the dual problem
\begin{equation}
\begin{aligned}
& \underset{K^\top(-y) \;\preceq\; K^\top\mathbf{1}}{\operatorname{maximize}}
& & a^\top\log\left(-y\right).
\end{aligned}
\label{prob:dual}
\end{equation}
In order to provide a certificate of optimality, we have to make sure that the constraint $K^\top(-y) \;\preceq\; K^\top\mathbf{1}$ is satisfied. Therefore, when it is not satisfied, we project as follows:
\begin{equation*}
y \gets y/\max\{K^\top(-y)\oslash K^\top\mathbf{1}\},
\end{equation*}
where $\oslash$ represents the entry-wise division operator.
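In code, the projection and the resulting certificate may look as follows (NumPy sketch, illustrative; by weak duality the gap is non-negative for any primal-feasible $x$ and dual-feasible $y$):

```python
import numpy as np

def project_dual(y, K):
    """Scale y (entry-wise negative) so that K^T(-y) <= K^T 1 holds."""
    s = np.max((K.T @ (-y)) / (K.T @ np.ones(K.shape[0])))
    return y / s if s > 1.0 else y

def duality_gap(a, K, x, y):
    """Primal minus dual objective; non-negative for a feasible pair (x, y)."""
    primal = float(np.sum(-a * (np.log(K @ x / a) + 1.0) + K @ x))
    dual = float(a @ np.log(-y))
    return primal - dual

# illustrative data: any x >= 0 and any projected y < 0 give a valid certificate
rng = np.random.default_rng(2)
a = rng.uniform(0.5, 1.5, 10)
K = rng.uniform(0.1, 1.0, (10, 4))
x = rng.uniform(0.1, 1.0, 4)
y = project_dual(-rng.uniform(0.1, 2.0, 10), K)
```

A small duality gap then certifies near-optimality of the current primal iterate.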
\subsection{Primal-dual algorithm}
\label{sec:proximals}
The general FPA framework of the approach proposed by \citeauthor{chambolle2011} for \pref{eq:AA} is presented in \autoref{algo:1}.
\RestyleAlgo{ruled}
\begin{algorithm}[htbp]
\caption{First-order primal-dual algorithm.}\label{algo:1}
\vspace{3mm}
Select $K\in\mathbb{R}^{p\times q}_+$, $x\in\mathbb{R}^q_+$, $\sigma>0$, and $\tau>0$\;\vspace{1mm}
Set $\bar{x}=x_{old}=x$, and $y=Kx$\;\vspace{3mm}
\For{$N$ iterations}{\vspace{1mm}
$y \gets \mathbf{prox}_{\sigma F^\ast}(y-\sigma K\bar{x})$\;
$x \gets \mathbf{prox}_{\tau G}(x-\tau K^\ast y)$\;
$\bar{x} \gets 2x - x_{old}$\;
$x_{old} \gets x$\;\vspace{1mm}
}\vspace{3mm}
\Return{$x^\star = x.$}\vspace{3mm}
\end{algorithm}
\autoref{algo:1} requires the computation of proximal operators $\mathbf{prox}_{\sigma F^\ast}(y)$ and $\mathbf{prox}_{\tau G}(x)$. These are defined as follows:
\begin{eqnarray*}
\mathbf{prox}_{\sigma F^\ast}(y) &=& \arg\min_v\left\{ \frac{\|v-y\|^2}{2\sigma} + F^\ast(v)\right\},\;\mathrm{and}\\
\mathbf{prox}_{\tau G}(x) &=& \arg\min_u\left\{ \frac{\|u-x\|^2}{2\tau} + G(u)\right\}.
\end{eqnarray*}
For further details, see \citep{boyd2004,rockafellar1997}.\\
Using the convex functions $F^\ast$ and $G$, we can easily solve the problems for the proximal operators and derive the following closed-form solution operators
\begin{eqnarray*}
\mathbf{prox}_{\sigma F^\ast}(y) &=& \frac{1}{2}\left(y - \sqrt{y\circ y + 4\sigma a}\right),\;\mathrm{and}\\
\mathbf{prox}_{\tau G}(x) &=& \left( x - \tau K^\top\mathbf{1} \right)_+.
\end{eqnarray*}
The detailed derivation of these operators may be found in the \app{app:prox}; the first one was already computed by~\citet{sun2014}.
\subsection{Automatic heuristic selection of $\sigma$ and $\tau$}\label{sec:stepsizes}
In this section, we provide a heuristic way to select $\sigma$ and $\tau$ without user intervention, based on the convergence result of~\citet[Theorem 1]{chambolle2011}, which states that (a) the step-sizes have to satisfy $\tau\sigma\|K\|^2 \leq 1$, where $\|K\|=\max\{\|Kx\|:x\in\mathcal{X}\; \textit{with}\; \|x\|\leq1\}$ is the largest singular value of $K$; and (b) the convergence rate is controlled by the quantity
$$C = \frac{\|y_0-y^\star\|^2}{2\sigma} + \frac{\|x_0-x^\star\|^2}{2\tau},$$
where $(x^\star,y^\star)$ is an optimal primal/dual pair. If $(x^\star,y^\star)$ was known, we could thus consider the following minimization problem with the constraint $\tau\sigma\|K\|^2 \leq 1$:
\begin{eqnarray*}
&& \min_{\sigma,\tau} \frac{\|y_0-y^\star\|^2}{2\sigma} + \frac{\|x_0-x^\star\|^2}{2\tau} \\
&\iff& \min_\sigma \frac{\|y_0-y^\star\|^2}{2\sigma} + \frac{\|x_0-x^\star\|^2}{2}\sigma\|K\|^2.
\end{eqnarray*}
Applying first order conditions, we find that
\begin{eqnarray*}
\sigma = \frac{\|y_0-y^\star\|}{\|x_0-x^\star\|}\frac{1}{\|K\|} &\mathrm{and}& \tau = \frac{\|x_0-x^\star\|}{\|y_0-y^\star\|}\frac{1}{\|K\|}.
\end{eqnarray*}
However, we do not know the optimal pair $(x^\star,y^\star)$ and we use heuristic replacements. That is, we consider the unknown constants $\alpha$ and $\beta$, and assume that $x^\star = \alpha\mathbf{1}$ and $y^\star = \beta\mathbf{1}$ solve Problems (\ref{prob:primal}) and (\ref{prob:dual}). Letting $(x_0,y_0) = (\mathbf{0},\mathbf{0})$ we have
\begin{eqnarray*}
\|x_0-x^\star\| = |\alpha|\sqrt{q} &\mathrm{and}& \|y_0-y^\star\| = |\beta|\sqrt{p}.
\end{eqnarray*}
Plugging $x^\star$ into \pref{prob:primal}, we are able to find that $\alpha = \frac{\mathbf{1}^\top a}{\mathbf{1}^\top K\mathbf{1}}>0$. Now, using the optimality condition $y^\star\circ(Kx^\star) = -a$, we obtain $\beta = -1$.\\
The updated version of the parameters is:
\begin{eqnarray*}
\sigma = \sqrt{\frac{p}{q}}\frac{1}{\alpha\|K\|} &\mathrm{and}& \tau = \sqrt{\frac{q}{p}}\frac{\alpha}{\|K\|}.
\end{eqnarray*}
Finally, an automatic heuristic selection of step sizes $\sigma$ and $\tau$ is as follows:
\begin{eqnarray*}
\sigma = \frac{\sqrt{p}\sum_{i=1}^pK_i\mathbf{1}}{\sqrt{q}\|K\|\sum_{i=1}^pa_i} &\mathrm{and}& \tau = \frac{\sqrt{q}\sum_{i=1}^pa_i}{\sqrt{p}\|K\|\sum_{i=1}^pK_i\mathbf{1}}.
\end{eqnarray*}
Note the invariance by rescaling of $a$ and $K$.
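In code, the heuristic reads as follows (NumPy sketch, illustrative; note that $\tau\sigma\|K\|^2=1$ holds by construction):

```python
import numpy as np

def heuristic_steps(a, K):
    """Data-driven sigma and tau for the primal-dual algorithm (sketch)."""
    p, q = K.shape
    nK = np.linalg.norm(K, 2)          # largest singular value ||K||
    ratio = K.sum() / a.sum()          # 1/alpha = 1^T K 1 / 1^T a
    sigma = np.sqrt(p / q) * ratio / nK
    tau = np.sqrt(q / p) / (ratio * nK)
    return sigma, tau
```

Both step sizes are positive and their product equals $1/\|K\|^2$ exactly.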
\subsection{Implementation}
The proposed method is based on \autoref{algo:1}. The required parameters to solve each ND problem are
\begin{multicols}{2}
\begin{itemize}
\item \pref{prob:nd1}:
\begin{itemize}
\item $a\; \gets \left(\mathbf{V}^\top\right)_i$
\item $K \gets \H^\top$
\item $x\; \gets \left(\mathbf{W}^\top\right)_i$
\item $\sigma \gets \sqrt{\frac{m}{r}}\frac{\mathbf{1}^\top\H\mathbf{1}}{\mathbf{1}^\top\left(\mathbf{V}^\top\right)_i\|\H\|}$
\item $\tau \gets \sqrt{\frac{r}{m}}\frac{\mathbf{1}^\top\left(\mathbf{V}^\top\right)_i}{\mathbf{1}^\top\H\mathbf{1}\|\H\|}$
\end{itemize}
\item \pref{prob:nd2}:
\begin{itemize}
\item $a\; \gets \mathbf{V}_i$
\item $K \gets \mathbf{W}$
\item $x\; \gets \H_i$
\item $\sigma \gets \sqrt{\frac{n}{r}}\frac{\mathbf{1}^\top\mathbf{W}\mathbf{1}}{\mathbf{1}^\top\mathbf{V}_i\|\mathbf{W}\|}$
\item $\tau \gets \sqrt{\frac{r}{n}}\frac{\mathbf{1}^\top\mathbf{V}_i}{\mathbf{1}^\top\mathbf{W}\mathbf{1}\|\mathbf{W}\|}$.
\end{itemize}
\end{itemize}
\end{multicols}
The previous summary treats each ND problem column by column. For algorithmic efficiency, we work directly with the matrices, e.g., $a\in\mathbb{R}^{n\times m}_+$ instead of $\mathbb{R}^n_+$.
We also include normalization steps such that the columns of $\mathbf{W}$ have sums equal to $1$.
The stopping criterion combines a maximum number of iterations (accesses to data) and a duality-gap tolerance.
\subsection{Extension to NMF}\label{sec:implementation}
A pseudo-code of the first-order primal-dual algorithm for non-negative matrix factorization can be found in \autoref{algo:2}. It corresponds to alternating between minimizing with respect to $\H$ and minimizing with respect to $\mathbf{W}$. A key algorithmic choice is the number of inner iterations \texttt{iter}$_{ND}$ of the convex method, which we consider in \sref{sec:results}.\\
The running-time complexity is still $O(rnm)$ for each inner iteration. Note moreover that computing the largest singular value of $\H$ or $\mathbf{W}$ (required for the heuristic selection of step-sizes every time we switch from one convex problem to the other) is of order $O(r \max\{m,n\})$ and is thus negligible compared to the iteration cost.
\RestyleAlgo{ruled}
\begin{algorithm}[ht!]
\caption{Proposed technique.}\label{algo:2}
\vspace{3mm}
Select $\mathbf{V}\in\mathbb{R}^{n\times m}_+$, $\mathbf{W}_0\in\mathbb{R}^{n\times r}_+$, and $\H_0\in\mathbb{R}^{r\times m}_+$\;\vspace{3mm}
Set $\mathbf{W} = \bar{\mathbf{W}} = \mathbf{W}_{old} = \mathbf{W}_0$, $\H = \bar{\H} = \H_{old} = \H_0$, and $\chi = \mathbf{W}\H$\;\vspace{3mm}
\While{stopping criteria not reached}{\vspace{3mm}
Normalize $\mathbf{W}$ and set $\sigma=\sqrt{\frac{m}{r}}\frac{\mathbf{1}^\top\H\mathbf{1}}{\mathbf{1}^\top\mathbf{V}^\top\|\H\|}\mathbf{1}$, $\tau=\sqrt{\frac{r}{m}}\frac{\mathbf{1}^\top\mathbf{V}^\top}{\mathbf{1}^\top\H\mathbf{1}\|\H\|}\mathbf{1}$, and project $\chi$ so that $\H(-\chi^\top) \leq \H\mathbf{1}$\;\vspace{3mm}
\For{\texttt{iter}$_{ND}$ iterations}{\vspace{3mm}
$\chi^\top \gets \chi^\top - \sigma\circ\left(\bar{\mathbf{W}}\H\right)^\top$\;
$\chi^\top \gets \frac{1}{2}\left(\chi^\top - \sqrt{\chi^\top\circ\chi^\top + 4\sigma\circ\mathbf{V}^\top}\right)$\;
$\mathbf{W}^\top \gets \left( \mathbf{W}^\top - \tau \circ \left(\H\left(\chi^\top + \mathbf{1} \right) \right) \right)_+$\;
$\bar{\mathbf{W}}^\top \gets 2\mathbf{W}^\top - \mathbf{W}_{old}^\top$\;
$\mathbf{W}_{old}^\top \gets \mathbf{W}^\top$\;\vspace{3mm}
}\vspace{3mm}
Normalize $\H$ and set $\sigma=\sqrt{\frac{n}{r}}\frac{\mathbf{1}^\top\mathbf{W}\mathbf{1}}{\mathbf{1}^\top\mathbf{V}\|\mathbf{W}\|}\mathbf{1}$, $\tau=\sqrt{\frac{r}{n}}\frac{\mathbf{1}^\top\mathbf{V}}{\mathbf{1}^\top\mathbf{W}\mathbf{1}\|\mathbf{W}\|}\mathbf{1}$, and project $\chi$ so that $\mathbf{W}^\top(-\chi) \leq \mathbf{W}^\top\mathbf{1}$\;\vspace{3mm}
\For{\texttt{iter}$_{ND}$ iterations}{\vspace{3mm}
$\chi \gets \chi - \sigma \circ \left(\mathbf{W}\bar{\H}\right)$\;
$\chi \gets \frac{1}{2}\left(\chi - \sqrt{\chi \circ \chi + 4\sigma \circ \mathbf{V}}\right)$\;
$\H \gets \left( \H - \tau \circ \left(\mathbf{W}^\top\left(\chi + \mathbf{1}\right) \right) \right)_+$\;
$\bar{\H} \gets 2\H - \H_{old}$\;
$\H_{old} \gets \H$\;\vspace{3mm}
}\vspace{3mm}
}\vspace{3mm}
\Return{$\mathbf{W}^\star = \mathbf{W}$, and $\H^\star = \H$.}\vspace{3mm}
\end{algorithm}
\subsection{Extension to topic models}
Probabilistic latent semantic analysis~\citep{hofmann1999probabilistic} and latent Dirichlet allocation \citep{blei2003}, generative probabilistic models for collections of discrete data, have been extensively used in text analysis. Their formulations are related to ours in \pref{prob:primal}, where we just need to include an additional constraint: $\mathbf{1}^\top x=1$. In this sense, if we modify $G$, i.e., $G(x) = \mathbbm{1}\{\mathbf{1}^\top x=1;\;x\succeq0\} + \mathbf{1}^\top Kx$, we can use \autoref{algo:1} to find the latent topics. It is important to mention that here $\mathbf{prox}_{\tau G}(x)$ does not have a closed-form solution, but can be efficiently computed with dedicated methods for orthogonal projection onto the simplex~\citep{maculan1989linear}.
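With this modified $G$, the proximal step reduces to a Euclidean projection onto the simplex (up to the shift $\tau K^\top\mathbf{1}$); a standard sort-based projection is sketched below (NumPy, illustrative):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {x >= 0, sum(x) = 1} (sort-based)."""
    u = np.sort(v)[::-1]                  # entries in decreasing order
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, len(v) + 1)
    rho = idx[u - css / idx > 0][-1]      # largest index with positive gap
    theta = css[rho - 1] / rho
    return np.maximum(v - theta, 0.0)
```

The projection costs $O(q\log q)$ per column and leaves points already on the simplex unchanged.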
\section{Experimental Results}\label{sec:results}
The proposed technique was tested on synthetic data, the CBCL face images database and a music excerpt from a recognized jazz song by Louis Armstrong \& His Hot Five. The performance of the proposed first-order primal-dual algorithm (FPA) was compared against the traditional multiplicative updates algorithm (MUA) by \citet{lee2000} and the contemporary alternating direction method of multipliers (ADMM) by \citet{sun2014}. The three techniques were implemented in Matlab.
\subsection{Synthetic data}
A given matrix $\mathbf{V}$ of size $n=200$ and $m=1000$ is randomly generated from the uniform distribution $\mathcal{U}(0,750)$. The low-rank element was set to $r=15$. Initializations $\mathbf{W}_0$ and $\H_0$ are defined by the standard normal distribution's magnitude plus a small offset.
\begin{figure*}[h!]
\begin{center}
\begin{subfigure}[b]{3.2in}
\centering\includegraphics[width=\textwidth]{figures/Fig1_S_ND_iter_fixedH.eps}
\caption{Estimate $\mathbf{W}$ given $\H^\star$.}
\vspace{3mm}
\end{subfigure}
\begin{subfigure}[b]{3.2in}
\centering\includegraphics[width=\textwidth]{figures/Fig1_S_ND_iter_fixedW.eps}
\caption{Estimate $\H$ given $\mathbf{W}^\star$.}
\vspace{3mm}
\end{subfigure}
\begin{subfigure}[b]{3.2in}
\centering\includegraphics[width=\textwidth]{figures/Fig1_S_ND_time_fixedH.eps}
\caption{Estimate $\mathbf{W}$ given $\H^\star$.}
\end{subfigure}
\begin{subfigure}[b]{3.2in}
\centering\includegraphics[width=\textwidth]{figures/Fig1_S_ND_time_fixedW.eps}
\caption{Estimate $\H$ given $\mathbf{W}^\star$.}
\end{subfigure}
\end{center}
\caption{ND on synthetic data. (a-b) Distance to optimum versus iteration number. The distance to optimum is the difference between the objective function values and the optimal value $p^\star$; for the dual function values, it is the difference between $p^\star$ and the dual values. (c-d) Distance to optimum versus time.}
\label{fig:synthetic1}
\end{figure*}
\subsubsection{ND problem}
To examine the accuracy of our method, we first apply \autoref{algo:2} to convex ND problems for fixed values of $n$, $m$ and $r$, solving Problems~(\ref{prob:nd1})~and~(\ref{prob:nd2}) separately. For both problems, we set the number of iterations to 1200 for the traditional MUA, the contemporary ADMM, and the proposed FPA.
Optimal factors $\mathbf{W}^\star$ and $\H^\star$ are obtained by running 5000 iterations of the MUA, starting from $\mathbf{W}_0$ and $\H_0$. For the first ND problem, we fix $\H$ to $\H^\star$ and estimate $\mathbf{W}$ starting from $\mathbf{W}_0$; for the second one, we fix $\mathbf{W}$ to $\mathbf{W}^\star$ and estimate $\H$ from $\H_0$. The optimal regularization parameter of ADMM, the tuning parameter that controls the convergence rate, is $\rho=0.15$ (small values imply larger step sizes, which may result in faster convergence but also instability).
\frefs{fig:synthetic1}{a}{b} present the distance to optimum of MUA and ADMM, as well as of the primal and dual iterates of our technique, which reveals strong duality. The FPA and ADMM converge to the same point, whereas the MUA exhibits slow convergence. Note moreover the significant advantage of our algorithm FPA, together with the fact that we set all step-sizes automatically.
In \frefs{fig:synthetic1}{c}{d}, we illustrate the distance to optimum versus time of the three methods.
\subsubsection{NMF problem}
The setting differs slightly from the ND experiment: we increased the problem dimension to $n=250$, $m=2000$ and $r=50$, and repeated both previously presented experiments. For all methods, we set the number of iterations to 3000. The parameter \texttt{iter}$_{ND}$ indicates the number of iterations used to solve each ND problem; we set it to 5. To ensure a fair comparison between algorithms, for FPA the number of iterations is counted as accesses to data, i.e., we use 5 iterations to solve \pref{prob:nd1} (as well as \pref{prob:nd2}), and repeat this 600 times. The optimal regularization parameter of the ADMM is here $\rho=1$.
\begin{figure*}[htbp]
\begin{center}
\begin{subfigure}[b]{3.2in}
\centering\includegraphics[width=\textwidth]{figures/Fig2_S_NMF_time.eps}
\caption{Objective function versus time.}
\end{subfigure}
\begin{subfigure}[b]{3.2in}
\centering\includegraphics[width=\textwidth]{figures/Fig2_S_NMF_time_zoom.eps}
\caption{Objective function versus time (zoomed).}
\end{subfigure}
\end{center}
\caption{
NMF on synthetic data. Recall that the dual function is not presented due to the non-convexity of the NMF problem.
}
\label{fig:synthetic2}
\end{figure*}
In \autoref{fig:synthetic2} we present the objective function of the three algorithms for the non-convex \pref{prob:nmf}. The MUA initially decreases the objective rapidly, but as time increases it exhibits evident slow tail convergence. The FPA primal objective decreases dramatically in only seconds (a few iterations) and, furthermore, presents fast tail convergence, achieving the lowest objective value. The ADMM has poor initial performance, but then achieves an optimal value similar to the one obtained by FPA.
In order to show that FPA converges faster and with lower computational cost, we store the cost function values and computation times at each iteration. The total time required by the FPA was 190s, versus 205s for the ADMM and 473s for the MUA. We then analyze the ADMM and MUA at the same time budget of 190s (the vertical dotted line in \fref{fig:synthetic2}{b}), i.e., after 2786 and 1211 iterations, respectively: the competing algorithms have a significantly larger cost function for the same running time. The result of this experiment is illustrated in \fref{fig:synthetic2}{b}.
The results considering the objective function versus iteration number may be found in the \app{app:S}.
\subsubsection{NMF with warm restarts}
The problem dimension is $n=150$, $m=2000$ and $r_1=50$. We run 3000 iterations of each method using initializations $\mathbf{W}_0$ and $\H_0$; then we double the low-rank element to $r_2=100$; and finally run 2000 more iterations, producing $\mathbf{W}_2$ and $\H_2$. The idea is to use as initializations the estimates obtained after the first 3000 iterations, $\mathbf{W}_1$ and $\H_1$, accounting for the change in the low-rank element. A trivial solution could be to append random entries so that $\mathbf{W}_1$ and $\H_1$ have the proper dimensions, but this increases the objective value, degrading the estimates. On the other hand, if we append zero entries so that $\mathbf{W}_1$ and $\H_1$ have the proper dimensions, we would be at a saddle-point from which none of the algorithms could make progress. However, if we pad only one factor with zeros, $[\mathbf{W}_1,c\mathbf{1}]\in\mathbb{R}^{n\times r_2}$ with $c=0$, and the other one with non-zero values, $[\H_1;\nu]\in\mathbb{R}^{r_2\times m}$, we maintain the same objective value and can run FPA. In this situation, the MUA cannot make progress either (because zeros are absorbing states of the multiplicative updates), so we try several values of $c$ to run the algorithm. \autoref{fig:synthetic3} illustrates the proposed experiment. Notice that as $c\to0$, the MUA gets stuck in a poor local optimum, i.e., the one obtained with $\mathbf{W}_1$ and $\H_1$. ADMM behaves similarly to FPA and is therefore not displayed.
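The identity behind this restart is that padding one factor with constant columns of value $c$ and the other with arbitrary rows contributes $c$ times a rank-deficient term to the product, so with $c=0$ the product, and hence the objective, is unchanged for any $\nu$. A quick numerical check (NumPy, illustrative dimensions):

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, r1, r2 = 6, 8, 3, 5
W1 = rng.uniform(0.1, 1.0, (n, r1))
H1 = rng.uniform(0.1, 1.0, (r1, m))
c = 0.0
W2 = np.hstack([W1, c * np.ones((n, r2 - r1))])            # [W1, c*1]
H2 = np.vstack([H1, rng.uniform(0.1, 1.0, (r2 - r1, m))])  # [H1; nu]
```

The padded pair reproduces the original product exactly while opening new search directions for the larger rank.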
\begin{figure}[h!]
\centering\includegraphics[width=3.2in]{figures/Fig3_S_NMF_restarts.eps}
\caption{NMF with warm restarts on synthetic data. Value of the objective function at each iteration.}
\label{fig:synthetic3}
\end{figure}
\subsection{MIT-CBCL Face Database \#1}
We use the CBCL face images database \citep{sung1996} composed of $m=2429$ images of size $n=361$ pixels. The low-rank element was set to $r=49$.
\autoref{fig:supp3} shows samples from the database.
\begin{figure}[htbp]
\centering\includegraphics*[scale=1]{figures/Fig4_F_samples.eps}
\caption{MIT-CBCL Face Database \#1 samples.}
\label{fig:supp3}
\end{figure}
Our next experiment determines the optimal number of inner iterations for this database. We run 3000 iterations of FPA, using 3, 5, 10 and 15 iterations for each ND problem. The MUA and ADMM ($\rho=50$) algorithms are run as well. \autoref{fig:cbcl_obj} illustrates the decay of the objective function of the FPA, MUA and ADMM algorithms.
\begin{figure*}[htbp]
\begin{center}
\begin{subfigure}[b]{3.2in}
\centering\includegraphics[width=\textwidth]{figures/Fig5_F_NMF_optim_iter.eps}
\caption{Objective function versus iteration number.}
\end{subfigure}
\begin{subfigure}[b]{3.2in}
\centering\includegraphics[width=\textwidth]{figures/Fig5_F_NMF_optim_iter_zoom.eps}
\caption{Objective versus iteration number (zoomed).}
\end{subfigure}
\end{center}
\caption{NMF on the CBCL database. Value of the objective function at each iteration solving \pref{prob:nmf} varying the number of iterations to solve each ND problem.
}
\label{fig:cbcl_obj}
\end{figure*}
We observe that setting the number of iterations to 3 yields over-alternation, whereas using 15 or more iterations results in an under-alternating method. Using 10 iterations gives good performance, but the best trade-off is obtained with 5 iterations. Therefore, we set \texttt{iter}$_{ND} = 5$, i.e., the number of iterations to solve \pref{prob:nd1} and \pref{prob:nd2}. All following results on the MIT-CBCL Face Database \#1 use this setting.\\
Finally, in \fref{fig:cbcl_nmf}{a} we present the objective function of the three algorithms for the non-convex \pref{prob:nmf}, where all algorithms perform similarly. However, in the zoomed \fref{fig:cbcl_nmf}{b} we can see that the MUA presents the slowest convergence, whereas the proposed method presents the fastest.
The results considering the objective function versus iteration number may be found in the \app{app:F}.
\begin{figure*}[h!]
\begin{center}
\begin{subfigure}[b]{3.2in}
\centering\includegraphics[width=\textwidth]{figures/Fig6_F_NMF_time.eps}
\caption{Objective function versus time.}
\end{subfigure}
\begin{subfigure}[b]{3.2in}
\centering\includegraphics[width=\textwidth]{figures/Fig6_F_NMF_time_zoom.eps}
\caption{Objective function versus time (zoomed).}
\end{subfigure}
\end{center}
\caption{NMF on the CBCL face image database.
}
\label{fig:cbcl_nmf}
\end{figure*}
\subsection{Music excerpt from the song ``My Heart (Will Always Lead Me Back to You)"}
The last experiment is to decompose a 108-second-long music excerpt from ``My Heart (Will Always Lead Me Back to You)" by Louis Armstrong \& His Hot Five in the 1920s \citep{fevotte2009a}. The time-domain recorded signal is illustrated in \autoref{fig:music_signal}.
\begin{figure}[htbp]
\centering\includegraphics[width=3.2in]{figures/Fig7_M_signal.eps}
\caption{Time-domain recorded signal.}
\label{fig:music_signal}
\end{figure}
The recording consists of a trumpet, a double bass, a clarinet, a trombone, and a piano. The recorded signal is original unprocessed mono material contaminated with noise. The signal was downsampled to 11025 Hz, yielding $1.19\times10^6$ samples. The Fourier transform of the recorded signal was computed using a sinebell analysis window of length 23 ms with 50\% overlap between frames, leading to $m=9312$ frames and $n=129$ frequency bins. Additionally, we set $r=10$. \autoref{fig:music_spectrogram} illustrates the resulting spectrogram.
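The construction of $\mathbf{V}$ can be reproduced along the following lines (NumPy sketch; function and parameter names are ours): at 11025\,Hz a 23\,ms window is close to 256 samples, and a 256-point FFT yields $256/2+1=129$ non-negative frequency bins.

```python
import numpy as np

def sinebell_spectrogram(signal, nfft=256):
    """Power spectrogram with a sinebell window and 50% overlap (sketch).
    With nfft = 256 this gives nfft/2 + 1 = 129 frequency bins per frame."""
    hop = nfft // 2                                   # 50% overlap
    win = np.sin(np.pi * (np.arange(nfft) + 0.5) / nfft)
    n_frames = 1 + (len(signal) - nfft) // hop
    frames = np.stack([signal[i * hop:i * hop + nfft] * win
                       for i in range(n_frames)], axis=1)
    return np.abs(np.fft.rfft(frames, axis=0)) ** 2   # shape (129, n_frames)
```

The resulting non-negative matrix is then factorized by the three algorithms.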
\begin{figure}[htbp]
\centering\includegraphics[width=3.2in]{figures/Fig8_M_spectrogram.eps}
\caption{Log-power spectrogram.}
\label{fig:music_spectrogram}
\end{figure}
The decomposition of the song is produced by the three algorithms. We initialize them with the same random values $\mathbf{W}_0$ and $\H_0$. For a fair comparison, the number of iterations is set to 5000 for MUA and ADMM, and for our algorithm FPA we count it as accesses to data, i.e., we use 5 iterations for each ND problem, repeated 1000 times. For comparison, we measure the computation time of the three techniques: FPA runs in 13 min, whereas the ADMM ($\rho=10$) takes 15 min and the MUA 80 min.
\begin{figure}[h!]
\begin{center}
\begin{subfigure}[b]{3.2in}
\centering\includegraphics[width=\textwidth]{figures/Fig9_M_NMF_time.eps}
\caption{Objective function versus time.}
\end{subfigure}
\begin{subfigure}[b]{3.2in}
\centering\includegraphics[width=\textwidth]{figures/Fig9_M_NMF_time_zoom.eps}
\caption{Objective function versus time (zoomed).}
\end{subfigure}
\end{center}
\caption{NMF on an excerpt of Armstrong's song.
}
\label{fig:music_time}
\end{figure}
In this experiment, \autoref{fig:music_time} illustrates the evolution of the objective of the three techniques over \emph{time}. Initially the MUA obtains the lowest objective value but, as previously discussed, as the number of iterations increases it starts exhibiting evident slow tail convergence; at approximately 100s it is caught by the FPA and shows no further substantial decrease, i.e., it gets stuck in a worse local optimum. FPA converges to a slightly lower cost value, surpassing the MUA. Finally, ADMM shows slow initial performance on this dataset, but later converges to a point similar to the previous algorithms.
The results considering the objective function versus iteration number may be found in the \app{app:M}.
\section{Conclusion}
We have presented an alternating optimization technique for NMF that minimizes the KL divergence loss by solving convex ND subproblems with the FPA. Our approach demonstrated faster convergence than the traditional MUA of \citet{lee2000} and the contemporary ADMM of \citet{sun2014}. The FPA introduces a new parameter, the number of iterations for each convex ND problem; experiments reveal that this number is mostly bounded between 3 and 10, which makes its tuning trivial for the user. Our algorithm therefore affords reasonable simplicity, requiring no further user manipulation. Finally, an extension to latent Dirichlet allocation and probabilistic latent semantic indexing can easily be implemented with our method, thus allowing one to go beyond the potential slowness of the expectation-maximization (EM) algorithm.
\subsubsection*{Acknowledgements}
This work was partially supported by a grant from the European Research Council (ERC SIERRA 239993).
\bibliographystyle{plainnat}
|
1,477,468,750,174 | arxiv | \section{Introduction} \label{section:introduction}
Understanding ecosystem states and trajectories requires a survey design that provides precise information from an efficient use of resources for monitoring.
This is true for many environmental domains including marine seascapes \parencite{swartzman1997analysis,beare2002spatio,taylor2003spatial,winter2007variations,murase2009application}, plant biomes \parencite{yee1991generalized} and atmospheric systems \parencite{aldrin2005generalised,pearce2011quantifying}.
The challenge of such data collection is to ensure the design remains efficient for the goal of the subsequent analysis despite potentially incomplete knowledge of the data generating process.
Approaches to find designs that are robust to model uncertainty have been proposed in the literature \parencite{cook1982model,sacks1984some,li2000model,yue2002model,kristoffersen2020model,selvaratnam2021model,xu2021robust} but rely on the true model being specified {\it a priori} in one form or another.
In addition to being impracticable, the resulting design can yield misleading information if the assumed model is misspecified \parencite{chang199628}.
In this paper, we consider Generalised Additive Models (GAMs) \parencite{hastie1987generalized,hastie1990generalized} as a basis for specifying prior information about the model and the range of potential discrepancies between a prediction and the mean response that may be observed.
GAMs are extensions of Generalised Linear Models (GLMs) that include smooth functions of predictor variables to provide flexibility in describing the relationship between the mean response and the covariates.
Here, we propose to exploit this flexibility to provide Bayesian designs that are robust to unknown model uncertainty.
Throughout this paper, we consider finding designs within a Bayesian framework.
There have been a number of approaches proposed to form model robust designs in such settings.
Most commonly, model robust Bayesian designs are formed through averaging a utility function over a finite set of plausible models \parencite[e.g.][]{zhou2003bayesian,bornkamp2011response,drovandi2014sequential}, an idea proposed by \textcite{lauter1974experimental,lauter1976optimal}.
This approach assumes that one of the models within the set will appropriately describe the data i.e.\ the M-closed perspective.
While offering robustness to the form of the model, the approach still relies on being able to appropriately specify the data generating process {\it a priori}.
The challenge we address here is to find designs within the M-open perspective.
Research in this space appears to stem from \textcite{welch1983mean} who included an additive term within a linear model to account for discrepancies between the assumed model and the true underlying mean response.
\textcite{sacks1984some} extended this idea to find designs within a class of possible linear models where the class was not finite dimensional.
This led to the construction of designs for estimating characteristics of a non-parametric model providing robustness to unknown model uncertainty in linear settings.
Design under additive versions of spline models has been considered more recently \parencite{biedermann2009optimal,biedermann2011optimal,dette2008optimal, dette2011optimal}.
If the number of knots is assumed to be known, then the design problem reduces to finding designs for segmented univariate polynomial regression models \parencite{heiligers1998optimal,heiligers1999experimental,woods2003designing}.
This was extended to include the precise estimation of the number of knots where it was found that D-optimal designs were not necessarily robust to prior specification of the number of knots \parencite{dette2008optimal}.
Most recently, \textcite{wang2020optimal} considered D-optimal designs while setting a prior on the curvature as a weighted smoothness penalty, and found that optimal design points appeared where there were large curvature values and on the design boundary of the design space.
Of interest, it was found that design points appear to become more spread out as assumed model discrepancy increased.
Intuitively, the more spread out design points are, the more robust the design would seem to be to model misspecification.
As these recent efforts focus on linear additive models and Fisher information, a generalised approach for forming model-robust Bayesian designs has not been proposed.
Through this paper, we aim to address this by providing an approach to find robust designs within a variety of settings.
\subsection{Motivating example: Monitoring submerged shoals off the north-west coast of Australia} \label{subsection:motivating_example}
With coral reefs in decline worldwide \parencite{hughes2018spatial}, there is an increasing interest in the role of deep coral ecosystems and submerged shoals \parencite{bridge2013call}.
This interest is motivated by an emerging understanding that they represent biodiversity hotspots \parencite{moore2017submerged}, need an updated conservation status \parencite{moore2016improving}, and may play a role in sustaining shallower reefs through, for example, larvae replenishment for fish and coral species.
Deeper reefs in the remote parts of Australia are thought to be more resilient to decline as their isolation and offshore location protect them from some anthropogenic and climatic disturbances.
For example, their relative water depth compared to shallow reefs means they are less likely to experience coral bleaching due to reduced exposure to higher temperatures and ambient light.
This has led to a call for greater ``information on the prevalence and the ecological and economic importance'' of these deeper reefs \parencite{bridge2013call}.
The Australian Institute of Marine Science (AIMS) has been conducting surveys of submerged shoals off the western coast of Australia since 2010.
The sampling design used for much of this initial exploration was based on a whole shoal approach with large transects covering the full depth ranges (from $-18$ to $-60$ m) from one side of a shoal to the other.
These transects were purposefully positioned to cross the depth gradients that delineate habitat boundaries, a design that is generally used for developing spatial habitat maps.
The data collected from these transects includes hard coral cover, and a range of covariates (see Appendix \ref{appendix:covariatedescription}).
An example of the sampling that has been undertaken is shown in Figure \ref{figure:AIMS_Transect_design_BE} which shows the transects used to collect data at the Barracouta East shoal in $2010, 2011$ and $2013$ (for more information see, \textcite{heyward2017barracouta}).
\begin{figure}[H]
\centering
\includegraphics[scale=0.75]{Final_plots/AIMS_Transect_design_BE.PNG}
\caption{Map of the bio-region and the transects used for data collection at Barracouta East shoal across three sampling years 2010, 2011 and 2013.}
\label{figure:AIMS_Transect_design_BE}
\end{figure}
Interest now is moving from mapping these submerged shoals to monitoring them, and in particular monitoring coral cover within a shoal.
However, accessing these shoals for monitoring is very expensive and logistically difficult.
An effective way to characterise the state and dynamics of key habitats in these systems is critical to improve ecological understanding and management.
Optimal sampling strategies are hampered by a lack of understanding of the spatial dynamics of these systems, and the fact that the well-established diver-based sampling techniques used in shallow reef systems cannot be deployed at depth.
We start by providing the first published statistical analysis of these transect data by developing an appropriate Generalised Additive Mixed Model (GAMM) \parencite{lin1999inference}, where GAMMs are further extensions of the GAM framework that allow variability (say) spatially and between subgroups to be captured \parencite{wood2006low}.
We then apply our proposed design methods to find transects for data collection on this shoal, and explore the robustness properties of these designs.
The remainder of the paper is set out as follows.
In Section \ref{section:modelling}, we provide background on Bayesian modelling for Generalised Additive (Mixed) Models (GA(M)Ms).
In Section \ref{section:bayesiandesign}, our methodology for finding Bayesian designs for GA(M)Ms is described.
This methodology is then applied in Section \ref{section:examples} via an illustrative example where a theoretic result is derived and then used to explore the properties of designs for a special case of GAM.
Following this, the developed methodology is applied to find transects for monitoring submerged shoals.
The paper then concludes with a discussion of the main outcomes of this research and suggestions for future studies.
\section{Statistical modelling framework} \label{section:modelling}
\subsection{Generalised Additive (Mixed) Models} \label{subsection:gamm}
To define a GA(M)M, let $\bm{y}=(y_1,\ldots,y_n)^T$ be the observed data collected with covariate information $\bm{X}=[\bm{1},\bm{x}_1,\ldots,\bm{x}_{(p+q)}]$ where $\bm{x}_j = (x_{1j},\ldots,x_{nj})^T$, for $j = 1,\ldots,(p+q)$.
Assume the $p+q$ covariates are comprised of $p$ covariates that potentially have a non-linear relationship with the mean response and $q$ covariates whose influence on the mean response is assumed to be linear.
Also assume that there may be random effects $\bm{s}$ within the model which could account for different sources of variability.
Then a GA(M)M with mean $\bm{\mu}=\mathop{\mathbb{E}} (\bm{y})$ and response variable $\bm{y} \sim \mbox{EF}(\bm{\eta},\psi)$ where $\mbox{EF}(\bm{\eta},\psi)$ denotes a distribution from the exponential family (e.g. Bernoulli, Binomial, Poisson, Normal and Exponential) with additive semi-parametric predictor $\bm{\eta}$ and scale parameter $\psi$, can be expressed as follows:
\begin{equation}\label{Eq:1}
\begin{split}
g(\bm{\mu}) = {}& \bm{\eta}, \\
\bm{\eta} = {\beta_0 + \sum_{j=1}^{p} f_j(\bm{x}_{j};\beta_j,\bm{u}_j)} +
\sum_{j=p+1}^{p+q}\beta_{j}\bm{x}_{j} + {}&
\sum_{a=1}^{p+q-1} \sum_{b=a+1}^{p+q} f_{x_a,x_b}(\bm{x}_{a},\bm{x}_{b};\bm{v}_{a,b}) + \bm{s},
\end{split}
\end{equation}
\noindent where $g(.)$ is a known link function (e.g. logit, log and identity). Here $\beta_j, j=0,\dots,(p+q)$, $\bm{u}_j, j=1,\dots,p$ and $\bm{v}_{a,b}, a=1,\dots,(p+q-1), b=(a+1),\dots,(p+q)$ denote model parameters, the $f_j$'s are non-parametric functions that are assumed to be smooth, and the $f_{x_a,x_b}$'s are functions for evaluating two-way interactions.
There are numerous ways by which the $f_j$'s can be defined and estimated.
In this work, we consider penalised splines introduced by \textcite{o1986statistical} which can be expressed as follows:
\begin{equation}\label{Eq:2}
\begin{split}
\sum_{j=1}^{p} f_j(\bm{x}_{j};\beta_j,\bm{u}_j) = {}& \beta_{1}\bm{x}_{1}+ \sum_{k=1}^{K_1} u_{1k}z_{1k}(\bm{x}_1)
+ \ldots + \beta_{p}\bm{x}_{p}+ \sum_{k=1}^{K_p} u_{pk}z_{pk}(\bm{x}_p), \\
= {}& \sum_{j=1}^{p} \left[ \beta_{j}\bm{x}_{j}+ \sum_{k=1}^{K_j} u_{jk}z_{jk}(\bm{x}_j) \right],
\end{split}
\end{equation}
\noindent where for $j=1,\dots,p$, $K_j$ is the number of knots for the $j^{th}$ covariate, the $u$'s are wiggliness parameters and the $z$'s are orthogonalised O’Sullivan spline bases (see \textcite{wand2008semiparametric}).
O’Sullivan penalised splines are similar to P-splines (see \textcite{eilers1996flexible}), and are a generalisation of smoothing splines.
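To make the basis construction in Equation \eqref{Eq:2} concrete, the sketch below builds the design columns for a single smooth term. For simplicity it uses a plain truncated-power basis in place of the orthogonalised O'Sullivan bases; the function name and knot choices are illustrative only.

```python
import numpy as np

def spline_design(x, knots):
    """Design columns for one smooth term: a linear column (the beta_j * x_j
    part) plus one basis column z_k(x) = max(x - knot_k, 0) per knot (the
    u_jk terms). A simplified stand-in for orthogonalised O'Sullivan bases."""
    x = np.asarray(x, dtype=float)
    linear = x[:, None]
    Z = np.maximum(x[:, None] - np.asarray(knots)[None, :], 0.0)
    return linear, Z

x = np.linspace(0.0, 1.0, 5)
linear, Z = spline_design(x, knots=[0.25, 0.5, 0.75])
print(Z.shape)  # one row per observation, one column per knot: (5, 3)
```

Stacking such blocks over the $p$ smooth terms gives the matrix $\bm{Z}$ used in the matrix form of the model below.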
In many situations, an additive predictor (as shown in Equation \eqref{Eq:2}) may not appropriately capture the variability in the response due to interactions between covariates.
Thus, interactions can be included in the model, and this can be achieved via tensor product smoothers.
Here, for constructing low-rank tensor product smoothers to use as components of the GA(M)M, we use the method introduced by \textcite{wood2006low} (further details in \textcite{wood2017generalized}).
This method can be used to construct interactions between any number of covariates, and here we consider up to two-way interactions between, say, covariates $x_a$ and ${x_b}$, denoted by $f_{x_a,x_b}$.
The process starts by assuming that we have low rank bases available for representing smooth functions of covariates $x_a$ and $x_b$ defined as follows:
\begin{equation}\label{Eq:3}
f_{x_a}(\bm{x}_a;\bm{v}_{a}) = \sum_{k_{a}=1}^{K_{a}} v_{a_{k_{a}}} w_{k_{a}}(x_a) \quad\text{and}\quad f_{x_b}(\bm{x}_b;\bm{v}_{b}) = \sum_{k_{b}=1}^{K_{b}} v_{b_{k_{b}}} w_{k_{b}}(x_b),
\end{equation}
\noindent where the $v$'s are parameters and the $w$'s are known basis functions. Now, $f_{x_a}$ needs to vary smoothly with $x_b$. To accomplish this, the smooth parameters, $v_{a_{k_{a}}}$, need to vary smoothly with $x_b$. This could be written as:
\begin{equation}\label{Eq:4}
v_{a_{k_{a}}}(x_b) = \sum_{k_{b}=1}^{K_{b}} v_{b_{k_{a},k_{b}}} w_{k_{b}}(x_b),
\end{equation}
\noindent then $f_{x_a,x_b}(\bm{x}_a,\bm{x}_b;\bm{v}_{a,b})$ can be written as follows:
\begin{equation}\label{Eq:5}
f_{x_a,x_b}(\bm{x}_a,\bm{x}_b;\bm{v}_{a,b}) = \sum_{k_{a}=1}^{K_{a}}\sum_{k_{b}=1}^{K_{b}} v_{b_{k_{a},k_{b}}} w_{k_{b}}(x_b) w_{k_{a}}(x_a).
\end{equation}
Given appropriate ordering of $v_{b_{k_{a},k_{b}}}$ into a vector $\bm{v}_{a,b}$, if $\otimes_r$ is the row-wise Kronecker product, then for any particular set of observations of the two covariates say $x_a$ and $x_b$, the relationship between the model matrix, $\bm{W}_{a,b}$ which evaluates the tensor product smooth at these observations, and the model matrices $\bm{W}_a$ and $\bm{W}_b$ that would evaluate the marginal smooths at the same observations can be written as follows:
\begin{equation}\label{Eq:6}
\bm{W}_{a,b}=\bm{W}_a\otimes_r\bm{W}_b.
\end{equation}
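The row-wise Kronecker product $\otimes_r$ in Equation \eqref{Eq:6} can be written out directly; the following Python sketch (illustrative names, not from the original analysis) forms the tensor-product model matrix $\bm{W}_{a,b}$ from two marginal model matrices:

```python
import numpy as np

def row_kron(Wa, Wb):
    """Row-wise Kronecker product: row i of the result is kron(Wa[i], Wb[i]),
    giving the tensor-product model matrix W_{a,b} from the marginal ones."""
    n = Wa.shape[0]
    assert Wb.shape[0] == n, "marginal matrices need matching rows"
    return np.einsum('ij,ik->ijk', Wa, Wb).reshape(n, -1)

Wa = np.array([[1.0, 2.0], [3.0, 4.0]])               # marginal basis for x_a
Wb = np.array([[5.0, 6.0, 7.0], [8.0, 9.0, 10.0]])    # marginal basis for x_b
Wab = row_kron(Wa, Wb)
print(Wab.shape)  # (2, 6): K_a * K_b columns per observation
```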
\sloppy Throughout, it will be convenient to describe the model in Equation \eqref{Eq:1} in matrix form. For this, let $\beta_0 + \sum_{j=1}^{p} f_j(\bm{x}_{j};\beta_j,\bm{u}_j) + \sum_{j=p+1}^{q}\beta_{j} x_{j}$ be written as $\bm{X\beta} + \bm{Zu}$. Here, $\bm{Z}=\big[\bm{Z}(\bm{x}_1),\bm{Z}(\bm{x}_2),\dots,\bm{Z}(\bm{x}_p)\big]$ where $\bm{Z}(\bm{x}_j)=\big[\bm{Z}_1(\bm{x}_j),\bm{Z}_2(\bm{x}_j),\dots,\bm{Z}_{K_j}(\bm{x}_j)\big], j=1,\dots,p$ and $\bm{Z}_k(\bm{x}_j)=\big(z_{jk}(x_{j1}),z_{jk}(x_{j2}),\dots,z_{jk}(x_{jn})\big)^T$, $k=1,\dots,K_j$. Then, let $\sum_{a=1}^{p+q-1} \sum_{b=a+1}^{p+q} f_{x_a,x_b}(\bm{x}_a,\bm{x}_b;\bm{v}_{a,b})$ be written as $\bm{Wv}$ where $\bm{W}=\big[\bm{W}_{1,2},\dots,\bm{W}_{(p+q-1),(p+q)}\big]$, and the random effect vector as $\bm{s}$.
Then, Equation \eqref{Eq:1} can be written in matrix form as follows:
\begin{equation}\label{Eq:7}
\bm{\eta} = \bm{X\beta} + \bm{Zu} + \bm{Wv} + \bm{s}.
\end{equation}
Assume that there may be $G$ subgroups within the data that explain potential sources of variability represented by $\bm{s}$ in Equation \eqref{Eq:7}, and index each subgroup by $g$ where $g=1,\dots,G$.
Then, $\bm{s} = (\bm{s}_1,\ldots,\bm{s}_G)^T$ is the vector of random effects where $\bm{s}_{g}$'s are subgroup-specific random effects assumed to follow a particular distribution (typically Normal).
If data are being collected across a spatial process, then Equation \eqref{Eq:7} could be extended to account for the fact that observations may not be independent.
This could be achieved through an alternative specification of $\bm{s}$.
For example, a latent spatial process could be assumed to describe underlying dependence within the data.
Variability in the spatial process could then be described via a covariance function $C_{\kappa}(h)$ which defines the elements of a covariance matrix, where $h$ denotes the Euclidean distance between locations where data were collected and $\kappa$ denotes additional parameters, see \textcite{diggle2007model}.
The Matérn covariance function is commonly used throughout spatial modelling \parencite{matern1966spatial}, and can be defined as follows:
\begin{equation}\label{Eq:8}
C_{\kappa}(h)=\phi_1\{{2^{\kappa-1}\Gamma(\kappa)}\}^{-1}(h/\phi_2)^\kappa K_{\kappa}(h/\phi_2),
\end{equation}
\noindent where $K_{\kappa}(\cdot)$ is the modified Bessel function of order $\kappa$, $\phi_2>0$ is the scale parameter (determines the rate at which the correlation decays to zero with increasing $h$), $\kappa>0$ is a shape parameter (determines the analytic smoothness of the spatial process) and $\phi_1$ is the variance of the spatial process.
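A direct transcription of Equation \eqref{Eq:8} is given below. At $\kappa = 0.5$ the Matérn function reduces to the exponential covariance $\phi_1 \exp(-h/\phi_2)$, which provides a convenient sanity check (Python sketch with illustrative parameter values):

```python
import numpy as np
from scipy.special import gamma, kv  # kv: modified Bessel function K_kappa

def matern_cov(h, phi1, phi2, kappa):
    """Matern covariance C_kappa(h) of Equation (8); returns phi1 at h = 0."""
    h = np.asarray(h, dtype=float)
    c = np.full(h.shape, phi1)  # the limit as h -> 0 is the process variance
    nz = h > 0
    r = h[nz] / phi2
    c[nz] = phi1 * (2.0 ** (1.0 - kappa) / gamma(kappa)) * r ** kappa * kv(kappa, r)
    return c

h = np.array([0.0, 0.5, 1.0, 2.0])
print(matern_cov(h, phi1=1.0, phi2=1.0, kappa=0.5))  # equals exp(-h) here
```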
\subsection{Bayesian inference} \label{subsection:bayesianinference}
In this work, statistical inference will be conducted within a Bayesian framework.
As such, all inferences are based on the posterior distribution of the model parameters $\bm{\theta}$.
These unknown model parameters $\bm{\theta}$ are treated as random variables where {\it a priori} uncertainty about these parameters is represented by the probability distribution $p(\bm{\theta})$, which is known as the prior distribution.
The prior distribution $p(\bm{\theta})$ and the likelihood $p(\bm{y}|\bm{\theta},\bm{x})$ of observing $\bm{y}$ given covariates $\bm{x}$ and parameter $\bm{\theta}$ can be combined using Bayes' theorem as follows:
\begin{equation} \label{Eq:9}
p(\bm{\theta}|\bm{y},\bm{x})=\frac{p(\bm{y}|\bm{\theta},\bm{x})p(\bm{\theta})}{p(\bm{y}|\bm{x})},
\end{equation}
\noindent where $p(\bm{\theta}|\bm{y},\bm{x})$ is the posterior distribution of $\bm{\theta}$ and $p(\bm{y}|\bm{x})$ is the normalising constant or the model evidence which ensures the posterior distribution integrates to one.
Given that the product of the above density/mass functions typically does not yield a known distribution, much of the research in Bayesian statistics has focused on developing efficient algorithms to sample from or approximate the posterior distribution.
The most widely used approach for this purpose is Markov chain Monte Carlo (MCMC) \parencite{metropolis1953equation}, with Hamiltonian Monte Carlo increasing in popularity \parencite{girolami_calderhead_2011}.
Often multiple models are considered for describing the observed data.
Accordingly, model choice is a typical consideration in statistical inference.
For this, assume that there are $J$ candidate models being contemplated for the data, and let $M$ denote the random variable associated with model $m$ such that $m=1,\ldots,J$.
Each model $m$ is associated with model parameters $\bm{\theta}_m$, likelihood function $p(\bm{y}|M=m,\bm{\theta}_m,\bm{x})$, and prior distribution on parameters $\bm{\theta}_m$ denoted as $p(\bm{\theta}_m|M=m)$.
Combining $p(\bm{y}|M=m,\bm{\theta}_m,\bm{x})$ and $p(\bm{\theta}_m|M=m)$ using Bayes' theorem yields:
\begin{equation} \label{Eq:10}
p(\bm{\theta}_m|M=m,\bm{y},\bm{x})=\frac{p(\bm{y}|M=m,\bm{\theta}_m,\bm{x})p(\bm{\theta}_m|M=m)}{p(\bm{y}|M=m,\bm{x})}.
\end{equation}
Formal Bayesian model choice requires the specification of prior model probabilities $p(M = m)$ and updating these based on observed data to form posterior model probabilities $p(M =m|\bm{y},\bm{x})$.
Such probabilities can be evaluated as follows:
\begin{equation}\label{Eq:11}
p(M = m|\bm{y},\bm{x}) = \frac{p(M=m)p(\bm{y}|M =m,\bm{x})}{\sum_{j=1}^{J} p(M = j)p(\bm{y}|M = j,\bm{x})},
\end{equation}
\noindent where $p(\bm{y}|M=m,\bm{x})= \int_{\bm{\theta_m}} p(\bm{y}|M=m,\bm{\theta_m},\bm{x})p(\bm{\theta_m}|M=m) \text{d} \bm{\theta}_m$ is the model evidence conditioned on model $m$, and there is a preference for the model with the largest posterior model probability. Analytical computation of the model evidence is generally only available for conjugate models in the exponential family. Thus, approximate methods are typically needed. For a review of available methods, see \textcite{bos2002comparison}.
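Equation \eqref{Eq:11} is most safely evaluated on the log scale, since model evidences can be vanishingly small. The sketch below (illustrative log-evidence values) normalises per-model log evidences via a log-sum-exp style shift:

```python
import numpy as np

def posterior_model_probs(log_evidence, log_prior=None):
    """Posterior model probabilities of Equation (11) from per-model log
    evidences log p(y | M = m, x), computed stably by shifting by the max."""
    log_evidence = np.asarray(log_evidence, dtype=float)
    if log_prior is None:  # models equally likely a priori
        log_prior = np.full_like(log_evidence, -np.log(len(log_evidence)))
    log_post = log_evidence + log_prior
    log_post -= log_post.max()     # guard against underflow before exp()
    w = np.exp(log_post)
    return w / w.sum()

probs = posterior_model_probs([-1050.2, -1048.7, -1052.9])  # illustrative
print(probs)  # sums to one; the second model is preferred here
```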
\subsection{Bayesian inference for GA(M)Ms} \label{subsection:bayesianinferencegamm}
A Bayesian GA(M)M based on the O’Sullivan penalised splines \parencite{wand2008semiparametric} and low-rank tensor product smoothers \parencite{wood2016just} can be specified as follows:
\begin{equation}\label{Eq:12}
\centering
\begin{split}
\bm{y} | \bm{\beta}, \bm{u}, \bm{v}, \bm{s} & \sim \mbox{EF}(\bm{\eta},\psi), \\
g(\bm{\mu}) = \bm{\eta} &= \bm{X\beta} + \bm{Zu} + \bm{Wv} + \bm{s}, \\
\bm{\beta} &= (\beta_0, \ldots,\beta_{p+q})^T, \\
\bm{u} &= (\bm{u}_1, \ldots,\bm{u}_{p})^T, \\
\bm{u}_j &= (u_{j1},\ldots,u_{jK_j}),\, j=1,\ldots,p \\
\bm{u}_j|\sigma^2_{u_{j}} &\sim \text{N}(0,\sigma^2_{u_{j}}),\, j=1,\ldots,p\\
\bm{v}_r|\lambda_{rf} ,\bm{S}_{rf} &\sim \text{MVN}\Big(\mathbf{0},\big(\sum_{f=0}^{2} \lambda_{rf} \bm{S}_{rf}\big)^{-1}\Big), \,r=1,\ldots,R\\
\bm\lambda &= (\bm\lambda_{1},\ldots,\bm\lambda_{R})^T , \bm\lambda_{r} = (\lambda_{r0},\lambda_{r1},\lambda_{r2}) , \\
\end{split}
\end{equation}
\noindent where the $\lambda$'s are smoothing parameters and each $\bm{S}_{rf}$, $r=1,\dots,R$ and $f=0,1,2$, is a positive semi-definite marginal penalty matrix (see \textcite{wood2006low} and \textcite{wood2016just} for more information), where $R={\comb{(p+q)}{2}}=(p+q)!/2!(p+q-2)!$ is the total number of possible two-way interaction terms.
Further, $\bm{u}_j$ is the wiggliness parameter vector for $j^{th}$ spline term and $\bm{v}_r$ is the wiggliness parameter vector for $r^{th}$ interaction term.
The specification of $\bm{s}$ will depend on the exact model being fit.
Let $\bm\phi$ denote the parameters associated with the variability of $\bm{s}$, then $\bm{s}$ could be specified as $\bm{s}\sim p(\bm{s}|\bm\phi)$, if $\bm{s}$ accounts for variability between subgroups or based on the covariance function given in Equation \eqref{Eq:8}. Finally, a GAM can be specified as a special case of the above model where the random effects $\bm{s}$ have been omitted.
For the above GA(M)M, the posterior distribution of $\bm{\theta}$ and $\bm{\alpha}$ is the distribution of interest where $\bm{\theta}=(\bm{\beta},\psi,\bm{\sigma}^{2}_{u},\bm\lambda,\bm\phi)$, ${\bm\alpha=(\bm{u},\bm{v},\bm{s})}$ and $\bm{\gamma} = (\bm{\sigma}^2_{u},\bm{\lambda},\bm{\phi})$.
Accordingly, the posterior distribution is proportional to the following:
\begin{equation}\label{Eq:13}
p(\bm{\theta},\bm{\alpha}|\bm{y},\bm{x}) \propto p(\bm{y}|\bm{\beta},\psi,\bm{\alpha},\bm{x})p(\bm{\alpha}|\bm{\gamma})p(\bm{\theta}),
\end{equation}
\noindent where $p(\bm{y}|\bm{\beta},\psi,\bm{\alpha},\bm{x})$ is the conditional likelihood of observing data $\bm{y}$ based on covariates $\bm{x}$ given $\bm{\beta},\psi$ and $\bm{\alpha}$.
This simplifies straightforwardly to the posterior distribution for a GAM by dropping the terms associated with $\bm{s}$.
In this paper, for the motivating design problem, we will adopt vague priors calibrated by prior predictive checks for fitting proposed GAMMs to historical data.
The posterior distribution from this model will then form the prior information that will be used to construct Bayesian designs.
\section{Bayesian design} \label{section:bayesiandesign}
A design $\bm{d}$ can be broadly defined as the values of input variables that are specified for data collection.
Most typically this will be the values of covariates $\bm{x}$ specified to collect data $\bm{y}$.
Alternatively, within spatial settings, the design could be the actual locations at which data and covariate information will be collected.
The aim of Bayesian design is to specify $\bm{x}$ to achieve a specified experimental goal.
This could be, for example, to maximise the posterior precision of a given parameter or maximise the accuracy of predictions across a spatial region.
This experimental goal is encapsulated within a utility function which is defined next.
\subsection{Utility functions} \label{subsection:utility}
Once the experimental goal has been specified, a function known as a utility function can be formulated to quantify how well this experimental goal would be addressed if data $\bm{y}$ were observed based on design $\bm{d}$.
We denote such a utility function as $u(\bm{d},\bm{y},\bm{\theta},\bm{\alpha})$, which indicates that it may also depend on other variables such as model parameters.
The goal is then to find the design that would maximise this utility function i.e.\ address the experimental goal as well as possible.
However, as $\bm{y}$ is not known before the experiment, the expectation is taken with respect to this and other unknowns (e.g.\ $\bm{\theta}$ and $\bm{\alpha}$). This forms an expected utility which can be defined as follows:
\begin{equation}\label{Eq:14}
\begin{split}
U(\bm{d}) = {}& E_{\bm{y},\bm{\theta},\bm{\alpha}} [u(\bm{d},\bm{y},\bm{\theta},\bm\alpha)] \\
= {}& \int_{\bm{Y}} \int_{\bm\Theta} \int_{\bm{A}} u(\bm{d},\bm{y},\bm{\theta},\bm\alpha) p(\bm{y},\bm{\theta},\bm\alpha|\bm{d}) \text{d}\bm\alpha \text{d}\bm{\theta} \text{d}\bm{y} \\
= {}& \int_{\bm{Y}} \int_{\bm\Theta} \int_{\bm{A}} u(\bm{d},\bm{y},\bm{\theta},\bm\alpha) p(\bm{y}|\bm{\theta},\bm\alpha,\bm{d}) p(\bm{\theta},\bm\alpha|\bm{d}) \text{d}\bm\alpha \text{d}\bm\theta \text{d}\bm{y}.\\
\end{split}
\end{equation}
The goal of Bayesian design can then be stated as finding $\bm{d}^* = \arg\max_{\bm{d}\in\mathcal{D}}U(\bm{d})$.
Unfortunately, in most cases, there is typically no closed-form solution to the above expectation, so an approximation is required.
The most commonly used approximation is Monte Carlo integration defined as follows:
\begin{equation}\label{Eq:15}
\hat{U}(\bm{d})\approx \frac{1}{L}\sum_{l=1}^L u(\bm{d},\bm{y}_l,\bm{\theta}_l,\bm{\alpha}_l),
\end{equation}
\noindent where $\big\{\bm{y}_l,\bm{\theta}_l,\bm{\alpha}_l\big\}_{l=1}^L$ is a sample generated from the joint distribution of $\bm{y}$, $\bm{\theta}$ and $\bm{\alpha}$.
Thus, throughout this paper, a Bayesian design will be found by maximising the above approximation to the expected utility through the choice of the design $\bm{d}$.
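The Monte Carlo approximation in Equation \eqref{Eq:15} can be sketched generically. In the toy check below (all names illustrative) the model is a single observation $y \sim N(\theta d, 1)$ with $\theta \sim N(0,1)$, for which the posterior precision is $1 + d^2$; a utility rewarding precision therefore grows with $|d|$:

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_utility(d, utility, sample_prior, simulate, L=200):
    """Monte Carlo approximation of Equation (15): draw (theta, alpha) from
    the prior, simulate data y at design d, and average the utility."""
    total = 0.0
    for _ in range(L):
        theta, alpha = sample_prior(rng)
        y = simulate(d, theta, alpha, rng)
        total += utility(d, y, theta, alpha)
    return total / L

# Toy model: y ~ N(theta * d, 1), theta ~ N(0, 1); posterior precision 1 + d^2.
sample_prior = lambda rng: (rng.normal(), None)
simulate = lambda d, theta, alpha, rng: theta * d + rng.normal()
utility = lambda d, y, theta, alpha: 0.5 * np.log(1.0 + d ** 2)
print(expected_utility(2.0, utility, sample_prior, simulate))  # 0.5*log(5)
```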
In the examples that follow in the next section, we focus on parameter estimation as our experimental goal.
Given this, an appropriate utility function is the Kullback-Leibler divergence (KLD) \parencite{kullback195110}, which measures the distance between the prior and posterior distributions of the parameters. Such a utility function can be defined as follows:
\begin{equation}\label{Eq:16}
U(\bm{d},\bm{y}) = \int_{\bm\Theta} \int_{\bm{A}} \log\Big(\frac{p(\bm{\theta},\bm{\alpha}|\bm{y},\bm{d})}{p(\bm{\theta},\bm{\alpha})}\Big)p(\bm{\theta},\bm{\alpha}|\bm{y},\bm{d})\text{d}\bm\alpha\text{d}\bm\theta.
\end{equation}
From Equation \eqref{Eq:15}, it can be seen that approximating the expected utility requires sampling from or approximating a large number of posterior distributions.
This renders the use of algorithms like MCMC computationally infeasible to use in Bayesian design for realistically sized problems.
Thus, fast methods for approximating the posterior distribution are needed. For this, we propose to use the Laplace approximation, and this is described in the next section.
\subsection{Fast approximation to the posterior distribution} \label{subsection:fastapprox}
As a computationally efficient approach to approximate the posterior distribution, we propose to use the Laplace approximation which has the following form:
\begin{equation*}\label{Eq:17}
\bm{\theta}|\bm{y},\bm{d} \sim \text{MVN}\big(\bm{\theta}^{\ast},\bm{B}(\bm{\theta}^{\ast})^{-1}\big),
\end{equation*}
\noindent where $\bm{\theta}^*$ and the Hessian matrix $B(\bm{\theta}^{\ast})$ at $\bm{\theta}^*$ are defined as:
\begin{align}\label{Eq:18}
\begin{split}
\bm{\theta}^* = \argmaxA_{\bm\theta} \space \{\log p(\bm{y}|\bm\theta,\bm{d}) + \log p(\bm\theta)\} \, \text{and}\\
B(\bm{\theta}^{\ast}) = {\frac{-\partial^2 \{ \log p(\bm{y}|\bm\theta,\bm{d}) + \log p(\bm\theta) \}}{ \partial \bm\theta \partial \bm{\theta}'}} \Bigl\lvert_{\bm{\theta}=\bm{\theta}^*}.
\end{split}
\end{align}
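A minimal sketch of the Laplace approximation in Equation \eqref{Eq:18}: numerically locate the mode of the log posterior, then difference the negative log posterior to obtain the Hessian. On a Gaussian target the approximation is exact, which provides a check (illustrative implementation, not the one used for the shoal analysis):

```python
import numpy as np
from scipy.optimize import minimize

def laplace_approx(neg_log_post, theta_init):
    """Laplace approximation: find the mode theta* of log p(y|theta,d) +
    log p(theta), and the Hessian B(theta*) of the negative log posterior
    there, so that theta | y, d ~ MVN(theta*, B(theta*)^{-1})."""
    theta_star = minimize(neg_log_post, theta_init, method='BFGS').x
    k, eps = len(theta_star), 1e-5
    B = np.empty((k, k))
    for i in range(k):          # central-difference Hessian
        for j in range(k):
            e_i, e_j = np.eye(k)[i] * eps, np.eye(k)[j] * eps
            B[i, j] = (neg_log_post(theta_star + e_i + e_j)
                       - neg_log_post(theta_star + e_i - e_j)
                       - neg_log_post(theta_star - e_i + e_j)
                       + neg_log_post(theta_star - e_i - e_j)) / (4 * eps ** 2)
    return theta_star, B

# Gaussian check: -log posterior = 0.5 (theta - m)' P (theta - m) + const,
# so the mode is m and the Hessian is P exactly.
m = np.array([1.0, -2.0])
P = np.array([[2.0, 0.5], [0.5, 1.0]])
nlp = lambda t: 0.5 * (t - m) @ P @ (t - m)
theta_star, B = laplace_approx(nlp, np.zeros(2))
print(np.round(theta_star, 3), np.round(B, 3))
```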
The above approximation requires evaluating the full data likelihood (i.e.\ not the conditional likelihood). Finding this likelihood requires integrating out the wiggliness and random effects as follows:
\begin{equation}\label{Eq:19}
\begin{split}
p(\bm{y}|\bm\theta,\bm{d}) = \int_{A} p(\bm{y}|\bm\beta,\psi,\bm{\alpha},\bm{d})p(\bm{\alpha}|\bm\gamma) \text{d}\bm\alpha.
\end{split}
\end{equation}
To obtain this for a GAM, we need to integrate out the $u$'s and $v$'s as follows:
\begin{equation}\label{Eq:20}
p(\bm{y}|\bm\theta,\bm{d})= \int_{\bm{V}}\int_{\bm{U}} \prod_{i=1}^{n} p(y_i|\bm\beta,\psi,\bm{u},\bm{v},\bm{d}_i) p(\bm{u}|\bm{\sigma}^2_u) p(\bm{v}|\bm\lambda) \text{d}\bm{u} \text{d}\bm{v},
\end{equation}
\noindent where $p(y_i|\bm\beta,\psi,\bm{u},\bm{v},\bm{d}_i)$ is the conditional likelihood of observing $y_i$ at design point $\bm{d}_i$.
Similarly, for a GAMM, if $\bm{s}$ is group specific random effects, we need to integrate out $u$'s, $v$'s and $s$'s as follows:
\begin{equation}\label{Eq:21}
p(\bm{y}|\bm{\theta},\bm{d})=\int_{\bm{S}} \int_{\bm{V}}\int_{\bm{U}} \prod_{i=1}^{n} \prod_{g=1}^G p(y_{ig}|\bm\beta,\psi,\bm{u},\bm{v},s_{g},\bm{d}_i) p(\bm{u}|\bm{\sigma}^2_u) p(\bm{v}|\bm{\lambda}) p(\bm{s}|\bm\phi) \text{d}\bm{u} \text{d}\bm{v} \text{d}\bm{s},
\end{equation}
\noindent where $p(y_{ig}|\bm\beta,\psi,\bm{u},\bm{v},s_{g},\bm{d}_i)$ is the conditional likelihood of observing $y_{ig}$ at design point $\bm{d}_i$.
Once we have $\bm{\theta}^{\ast}$, it can be used to find a computationally efficient approximation to the posterior of the wiggliness and random effect parameters, $\bm{\alpha}$.
This exploits the conditional independence between the model parameters and random effect terms, and thus just requires approximating the marginal posterior distribution of the wiggliness and random effect parameters (as the marginal posterior distribution of the model parameters is given by the above Laplace approximation).
Denote the random variable associated with the marginal posterior of $\bm{\alpha}$ given $\bm{\theta}^\ast$ by $\bm{\alpha}_{\theta^\ast}$. Then the marginal posterior can be found as follows:
\begin{equation*}\label{Eq:23}
\bm{\alpha}_{\theta^\ast} | \bm{y},\bm{d} \sim \text{MVN}\big(\bm{\alpha}_{\bm{\theta}^{\ast}}^{\ast},H(\bm{\alpha}_{\bm{\theta}^{\ast}}^{\ast})^{-1}\big),
\end{equation*}
\noindent where $\bm{\alpha}_{\bm{\theta}^{\ast}}^{\ast}$ and the Hessian matrix $H(\bm{\alpha}_{\bm{\theta}^{\ast}}^{\ast})$ at $\bm{\alpha}_{\bm{\theta}^{\ast}}^{\ast}$ are defined as:
\begin{equation}\label{Eq:24}
\centering
\begin{split}
\bm{\alpha}_{\bm{\theta}^{\ast}}^{\ast} = \argmaxA_{\bm{\alpha}_{\bm{\theta}^{\ast}}} \space \{ \log{ p(\bm{y}|\bm{\beta}^{\ast},\psi^{\ast}, \bm{\alpha}_{\bm{\theta}^{\ast}},\bm{d})} + \log{p( \bm{\alpha}_{\bm{\theta}^{\ast}}|\bm{\gamma}^{\ast})}\} \, \text{and} \\
H(\bm{\alpha}_{\bm{\theta}^{\ast}}^{\ast}) = {\frac{-\partial^2 \{ \log{ p(\bm{y}|\bm{\beta}^{\ast},\psi^{\ast}, \bm{\alpha}_{\bm{\theta}^{\ast}},\bm{d})} + \log{p( \bm{\alpha}_{\bm{\theta}^{\ast}}|\bm{\gamma}^{\ast})}\} }{ \partial \bm{\alpha}_{\theta^\ast} \partial \bm{\alpha}_{\theta^\ast}'}} \Bigl\lvert_{ \bm{\alpha}_{\theta^\ast}=\bm{\alpha}_{\bm{\theta}^{\ast}}^{\ast}}.
\end{split}
\end{equation}
The approximation to the posterior variance-covariance matrix of $(\bm{\theta}^*,\bm{\alpha}_{\bm{\theta}^{\ast}}^{\ast})$ is then block diagonal where there are two blocks; one for the model parameters and the other for the wiggliness and random effects parameters.
As described in Section \ref{subsection:bayesianinference}, the main drawback of using the model evidence $p(\bm{y}|M=m,\bm{d})$ for model selection is that it can be difficult to evaluate analytically.
Fortunately, the Laplace approximation can also be used to provide a computationally efficient approximation.
This is achieved by approximating $p(\bm{y},\bm{\theta}_m|M=m,\bm{d})$ to the second-order around $\bm{\theta}^*_m$ by applying a Taylor series expansion which results in the following approximation (see Appendix \ref{appendix:modelevidence} for the derivation):
\begin{align} \label{Eq:22}
\log p(\bm{y}|M=m,\bm{d}) \approx & \log p(\bm{y}|\bm{\theta}^*_m,M=m,\bm{d}) + \log p(\bm{\theta}^*_m|M=m,\bm{d}) \\ & + {\frac{T}{2}} \log (2\pi)
- {\frac{1}{2}} \log |\bm{B}(\bm{\theta}^{\ast}_m)|, \notag
\end{align}
\noindent where $T$ is the number of parameters in the model, and we note that the values of $\log p(\bm{y}|\bm{\theta}^*_m,M=m,\bm{d}) + \log p(\bm{\theta}^*_m|M=m,\bm{d})$ and $\bm{B}(\bm{\theta}^{\ast}_m)$ are readily available from the Laplace approximation.
Thus, model comparison can be performed efficiently based on Equation \eqref{Eq:22} with an approximation to the model evidence.
Throughout the examples we consider in this paper, our model choice procedure will proceed by evaluating the posterior model probability for all possible model combinations of predictors and their two-way interactions (when both main effects are present) where each model will be considered equally likely {\it a priori}.
The model that yields the largest posterior model probability will be selected as the preferred model, and subsequently used to inform design.
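Equation \eqref{Eq:22} can be assembled directly from the Laplace quantities. The sketch below checks it on the conjugate model $y \sim N(\theta, 1)$ with $\theta \sim N(0, 1)$, where the evidence is exactly $N(y; 0, 2)$ and the Laplace approximation is exact (illustrative code):

```python
import numpy as np

def log_evidence_laplace(log_like_at_mode, log_prior_at_mode, B_at_mode):
    """Laplace approximation to log p(y | M = m, d) of Equation (22), built
    from the log likelihood and log prior at the mode theta*_m and the
    Hessian B(theta*_m) of the negative log posterior."""
    T = B_at_mode.shape[0]  # number of parameters
    sign, logdet = np.linalg.slogdet(B_at_mode)
    assert sign > 0, "Hessian at the mode should be positive definite"
    return (log_like_at_mode + log_prior_at_mode
            + 0.5 * T * np.log(2.0 * np.pi) - 0.5 * logdet)

# Conjugate check: mode theta* = y / 2 and B = [[2]] for this model.
y, theta_star = 1.0, 0.5
ll = -0.5 * np.log(2 * np.pi) - 0.5 * (y - theta_star) ** 2
lp = -0.5 * np.log(2 * np.pi) - 0.5 * theta_star ** 2
print(log_evidence_laplace(ll, lp, np.array([[2.0]])))  # log N(y; 0, 2)
```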
Another benefit of adopting the Laplace approximation is that the posterior distribution is Multivariate Normal.
Thus, if the prior distribution is also a Multivariate Normal, then the KLD utility can be evaluated analytically as follows:
\begin{equation}\label{Eq:25}
U(\bm{d},\bm{y}) = 0.5\times\Bigg(\text{tr}\Big( \bm{\Omega}_0^{-1} \bm{\Omega}_1\Big) + \Big(\bm{\mu}_0-\bm{\mu}_1\Big)^{T} \bm{\Omega}_0^{-1} \Big(\bm{\mu}_0-\bm{\mu}_1\Big) -T + \log{\Bigg(\frac{\text{det}(\bm{\Omega}_0)}{\text{det}(\bm{\Omega}_1)}\Bigg)}\Bigg),
\end{equation}
\noindent where $\bm{\mu}_0=(\bm{\theta}_0,\bm{\alpha}_0)$, $\bm{\mu}_1=(\bm{\theta}^*,\bm{\alpha}_{\bm{\theta}^{\ast}}^{\ast})$, $\bm{\Omega}_0$ and $\bm{\Omega}_1$ are the prior mean vector, posterior mean vector, prior variance-covariance matrix, and posterior variance-covariance matrix, respectively.
In cases where the prior is not Multivariate Normal, methods of \textcite{overstall2018approach} can be adopted such that Equation \eqref{Eq:25} can still be applied.
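As a concrete sketch, the standard closed form for the Kullback--Leibler divergence from the prior $\text{MVN}(\bm{\mu}_0,\bm{\Omega}_0)$ to the posterior $\text{MVN}(\bm{\mu}_1,\bm{\Omega}_1)$ can be computed as follows (a minimal illustrative implementation; the function name \texttt{kld\_mvn} is ours):

```python
import numpy as np

def kld_mvn(mu1, Omega1, mu0, Omega0):
    """KL divergence KL( N(mu1, Omega1) || N(mu0, Omega0) ), i.e. the
    information gained in moving from the prior (mu0, Omega0) to the
    posterior (mu1, Omega1); standard closed form for two Normals."""
    mu1 = np.asarray(mu1, dtype=float)
    mu0 = np.asarray(mu0, dtype=float)
    T = mu0.size
    Om0_inv = np.linalg.inv(Omega0)
    diff = mu0 - mu1
    _, logdet0 = np.linalg.slogdet(Omega0)
    _, logdet1 = np.linalg.slogdet(Omega1)
    return 0.5 * (np.trace(Om0_inv @ Omega1)
                  + diff @ Om0_inv @ diff
                  - T + logdet0 - logdet1)
```

Using \texttt{slogdet} rather than forming determinants directly keeps the computation stable when the covariance matrices are large or ill-conditioned.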
To summarise, pseudo-code for our approach to approximate the expected utility is provided in Algorithm \ref{Alg:1}.
\begin{algorithm}[H] \label{Alg:1}
\SetAlgoLined
Initialise the prior information $p(\bm{\theta},\bm{\alpha})$ using $(\bm{\theta},\bm{\alpha}) \sim \text{MVN}\big((\bm{\theta}_0,\bm{\alpha}_0),\bm{\Omega}_0\big)$ and the design $\bm{d}$. \\
\For{$l$ = $1$ to $L$}{
Draw $\bm{\theta}_l$ and $\bm{\alpha}_l$ from prior $p(\bm{\theta},\bm{\alpha})$. \\
Given $\bm{\theta}_l$ and $\bm{\alpha}_l$, simulate data $\bm{y}_l$ at design $\bm{d}$ from the assumed GA(M)M model outlined in Equation \eqref{Eq:12}. \\
Approximate $\bm{\theta}_l^*$ and $B(\bm{\theta}_l^*)$ for the simulated data using Equation \eqref{Eq:18}. \\
Given $\bm{\theta}_l^*$, approximate $(\bm{\alpha}_{\bm{\theta}^{\ast}})_l^{\ast}$ and Hessian matrix $H\big((\bm{\alpha}_{\bm{\theta}^{\ast}})_l^{\ast}\big)$ using Equation \eqref{Eq:24}. \\
Set the joint posterior $p(\bm{\theta},\bm{\alpha}|\bm{y},\bm{d})$ using $\bm{\theta},\bm{\alpha}|\bm{y},\bm{d} \sim \text{MVN}\big((\bm{\theta}^*,{\bm{\alpha}_{\bm{\theta}^{\ast}}}^{\ast})_l,(\bm{\Omega}_1)_l\big)$, where
$$(\bm{\Omega}_1)_l =\begin{bmatrix}
B(\bm{\theta}_l^*)^{-1} & \bm{0} \\
\bm{0} & H\big((\bm{\alpha}_{\bm{\theta}^{\ast}})_l^{\ast}\big)^{-1}
\end{bmatrix}.$$ \\
Evaluate the approximation to the KLD utility $U(\bm{d},\bm{y}_l)$ using Equation \eqref{Eq:25}.\\}
Approximate the expected utility $\hat{U}(\bm{d}) = \frac{1}{L} \sum_{l=1}^{L} U(\bm{d},\bm{y}_l)$. \\
\caption{Approximating the expected utility function of a design}
\end{algorithm}
First, initialise the prior information for the GA(M)M.
If historical data are available, then one can use the posterior of the model fitted to these historical data as the prior for the design (line 1).
Based on this prior, model parameters are simulated (line 3), and a prior predictive sample is then generated (line 4).
Based on these data, the posterior distribution of $\bm{\theta}_l$ needs to be found.
To do so, one needs to evaluate the likelihood as in Equations \eqref{Eq:20} and \eqref{Eq:21}.
Exact evaluation of these integrals is generally not possible, thus Monte Carlo methods can be used to yield an approximation as follows:
\begin{equation}\label{Eq:26}
p(\bm{y}|\bm{\theta}, \bm{d}) \approx \frac{1}{E} \sum_{e=1}^{E} \prod_{i=1}^{n} p(y_i|\bm\beta,\psi,\bm{u}_e,\bm{v}_e,\bm{d}_i), \text{ and}
\end{equation}
\begin{equation}\label{Eq:27}
p(\bm{y}|\bm{\theta}, \bm{d}) \approx \frac{1}{E} \sum_{e=1}^{E} \prod_{i=1}^{n} \prod_{g=1}^G p(y_i|\bm\beta,\psi,\bm{u}_e,\bm{v}_e,s_{g_e},\bm{d}_i),
\end{equation}
\noindent where $s_{g_e} \sim p(\bm{s}|\bm\phi)$, $\bm{u}_e \sim \text{N}(0,\sigma^2_{u} )$, $\bm{v}_{r_e} \sim \text{MVN}\Big(\bm{0},\big(\sum_{f=0}^{2} \lambda_{r_{f}} \bm{S}_{r_{f}}\big)^{-1}\Big)$ and $E$ is sufficiently large.
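As an illustrative sketch of the Monte Carlo approximation in Equation \eqref{Eq:26}, the following code (a hypothetical Gaussian-response model with a single random-effect block; all names are ours, and the observation model is an assumption made for illustration) averages the conditional likelihood over prior draws of the random effects, working on the log scale for numerical stability:

```python
import numpy as np

rng = np.random.default_rng(1)

def mc_marginal_likelihood(y, X, Z, beta, sigma_eps, sigma_u, E=2000):
    """Monte Carlo approximation to p(y | theta, d): the random-effect
    vector u is integrated out by averaging the conditional Gaussian
    likelihood over E prior draws (cf. Eq. 26)."""
    K = Z.shape[1]
    log_liks = np.empty(E)
    for e in range(E):
        u = rng.normal(0.0, sigma_u, size=K)        # u_e ~ N(0, sigma_u^2)
        mu = X @ beta + Z @ u
        # log of prod_i N(y_i; mu_i, sigma_eps^2)
        log_liks[e] = np.sum(-0.5 * np.log(2.0 * np.pi * sigma_eps**2)
                             - 0.5 * ((y - mu) / sigma_eps) ** 2)
    m = log_liks.max()                              # log-sum-exp trick
    return m + np.log(np.mean(np.exp(log_liks - m)))
```

For this Gaussian case the marginal likelihood is also available in closed form, which allows the Monte Carlo approximation to be checked directly.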
With this approximation to the likelihood, the Laplace approximation can be used to find the posterior distribution of $\bm{\theta}_l$ (line 5).
By applying an additional Laplace approximation, we can find the posterior distribution of $(\bm{\alpha}_{\bm{\theta}^*})_l$ (line 6).
Using the output from lines 5 and 6, we can approximate the joint posterior distribution of $\bm{\theta}$ and $\bm{\alpha}$ (line 7), from which the utility function can be evaluated (line 8).
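The overall loop of Algorithm \ref{Alg:1} can be summarised schematically as follows, written against hypothetical callables for the model-specific steps (the function names are placeholders for the steps described above, not an existing API):

```python
def expected_kld(sample_prior, simulate_data, laplace_posterior, kld,
                 design, L=100):
    """Monte Carlo approximation to the expected KLD utility of `design`
    (cf. Algorithm 1).  The callables are placeholders:
      sample_prior()          -> a parameter draw              (line 3)
      simulate_data(theta, d) -> simulated data at design d    (line 4)
      laplace_posterior(y, d) -> (posterior mean, covariance)  (lines 5-7)
      kld(mu1, Omega1)        -> prior-to-posterior KLD        (line 8)
    """
    total = 0.0
    for _ in range(L):
        theta = sample_prior()
        y = simulate_data(theta, design)
        mu1, Omega1 = laplace_posterior(y, design)
        total += kld(mu1, Omega1)
    return total / L   # average over the L simulated data sets
```

Any implementation of the four callables consistent with Section \ref{subsection:motivating_example}'s model can be plugged in without changing this outer loop.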
\subsection{Optimisation algorithm} \label{subsection:optimisation}
Given we are now able to approximate the expected utility, an approach to find the design that maximises this approximation is needed.
Despite adopting a computationally efficient approximation to the posterior distribution (i.e.\ the Laplace approximation), evaluating the approximation to the expected utility is still computationally demanding.
Thus, we require an optimisation algorithm that does not require a large number of function evaluations and one that can handle noisy expected utility evaluations (from the Monte Carlo approximation).
Further, throughout our examples, we consider a range of design variables that could be either discrete or continuous in nature, so our adopted optimisation algorithm must be able to handle either or a combination of such variables.
Accordingly, we used the coordinate-exchange (CE) algorithm proposed by \textcite{meyer1995coordinate} to search through a discrete design space.
Such an algorithm begins with a random design (a random selection of design points), which is then optimised one design point at a time.
To do so, a given design point is substituted for all possible values and the expected utility is evaluated for each.
If any of the `new' design points yields a larger expected utility than the current design, then the design point with the highest expected utility is retained in the design (potentially accepted only with a certain probability).
If not, then the original design point is retained.
This process is repeated for all design points, and the whole procedure is then repeated until there is no substantial improvement in the expected utility or a fixed number of iterations is reached.
Such an algorithm is straightforward to implement for discrete design spaces.
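A minimal sketch of this coordinate-exchange procedure for a discrete design space is given below (our own illustrative implementation; the greedy acceptance rule shown is one simple choice among those described above):

```python
import numpy as np

def coordinate_exchange(expected_utility, candidates, n_points,
                        max_sweeps=10, seed=0):
    """Sketch of the coordinate-exchange algorithm for a discrete design
    space: start from a random design and improve one point at a time."""
    rng = np.random.default_rng(seed)
    design = rng.choice(candidates, size=n_points)   # random initial design
    best = expected_utility(design)
    for _ in range(max_sweeps):
        improved = False
        for i in range(n_points):                    # one coordinate at a time
            keep = design[i]
            for c in candidates:                     # try every candidate value
                design[i] = c
                u = expected_utility(design)
                if u > best:
                    best, keep, improved = u, c, True
            design[i] = keep                         # retain the best value found
        if not improved:                             # no exchange helped: stop
            break
    return design, best
```

Because each sweep only requires one-dimensional searches, the algorithm scales well in the number of design points, although it can terminate at a local optimum, so multiple random starts are commonly used.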
To extend the CE algorithm to search across continuous design spaces, we consider the approximate coordinate exchange (ACE) algorithm \parencite{overstall2017bayesian}. The extension is the use of a Gaussian process (GP) to emulate and optimise the expected utility surfaces for the one-dimensional optimisations in the CE algorithm.
This is efficient as only a relatively small number of expected utility evaluations are needed to fit the GP. Otherwise, the approach of ACE is similar to CE.
To find optimal designs for the illustrative example in Section \ref{section:examples}, we use the ACE algorithm as the design variables are continuous.
In the motivating example, a mixture of continuous and discrete design variables are considered.
Therefore, the CE and ACE algorithms are used in combination to find optimal designs.
We also fix the random numbers within the approximation to the expected utility such that the function is deterministic. This is purely for convenience as the optimisation is more computationally efficient with a deterministic utility.
\section{Examples} \label{section:examples}
The proposed approach for finding robust Bayesian designs is demonstrated in this section through two examples.
The first is an illustrative example where properties of designs for a linear additive model are derived, while the second finds robust designs for monitoring submerged shoals (see Section \ref{subsection:motivating_example}).
In each example, we explore the robustness of our designs with respect to potential alternative models.
Importantly, when applying our approach to find these robust designs, the alternative models are not explicitly defined, as is required by many existing approaches for forming model-robust designs. Instead, these models form a set of possible models that could be observed under the defined GA(M)M.
\subsection{Illustrative example} \label{subsection:example1}
Consider finding Bayesian designs under the following linear additive model:
\begin{equation}\label{Eq:28}
\centering
\begin{split}
\bm{y} \mid \bm{\beta}, \bm{u} & \sim \text{N}( \bm{X\beta} + \bm{Zu}, \sigma^2_\varepsilon),\\
\bm{\beta} & = (\beta_0,\beta_1)^T,\\
\beta_j & \sim \text{N}(0,10^2 ), j=0,1, \\
\bm{u} & = (\bm{u}_1)^T,\\
\bm{u}_1 & = (u_{11},\ldots,u_{1K}) ,\\
\bm{u}_1 | \sigma^2_{u} & \sim \text{N}(0,\sigma^2_{u} ) . \\
\end{split}
\end{equation}
Let $\mathscr{X}$ be the design space of $\bm{x}$. We consider $\mathscr{X}=[-1,1]$ for this example, and $\bm{x}$ is normalised to $[0,1]$ when fitting the GAMs.
Here, we are interested in how a design found under the KLD utility function might vary depending on the priors for $\sigma_{u}$ and $\sigma_{\varepsilon}$ and the specification of $K$.
To provide insight into the range of potential relationships between $\bm{x}$ and $\bm{y}$ that could be observed under the above linear additive model, realisations are shown in Figure \ref{figure:Ex1_wigg_data} for different values of $\sigma_{u}$ and $K$.
For this, $\bm{y}$ were generated using the model in Equation \eqref{Eq:28} by randomly generating $\bm{x}$ and $\bm{u}$ while $\bm{\beta}$ was fixed at $\bm{\beta}=(2,-5)^T$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.8]{Final_plots/Ex1_wigg_data.jpeg}
\caption{Five potential realisations that could be captured by the GAM for the same $\bm{\beta}$ values, where each colour represents a different realisation.}
\label{figure:Ex1_wigg_data}
\end{figure}
From Figure \ref{figure:Ex1_wigg_data}, when $\sigma_{u}$ is relatively low, only very minor deviations from the linear model are observed for all $K$.
The flexibility of the model appears to come as this standard deviation increases, with moderate curvature apparent when $\sigma_{u} = 10$ and more extreme curvature observed when $\sigma_{u} = 20$ and $\sigma_{u} = 30$.
The flexibility of the model also appears to relate to $K$, with generally less flexibility (low wiggliness) observed when $K=3$ compared to larger values.
To provide insight into the characteristics of Bayesian optimal designs under the KLD utility function under different parameter specifications of the model in Equation \eqref{Eq:28}, the following theorem has been derived for linear additive models (as defined above) with $p$ covariates (see Appendix \ref{appendix:prooftheorem1} for the proof).
\begin{theorem}\label{theorem_1}
Let $\bm{\theta}=\begin{bmatrix} \bm{\beta} & \bm{u}\end{bmatrix}$ be the vector of model parameters, $\bm{Q}=\begin{bmatrix}\bm{X} & \bm{Z}\end{bmatrix}$ be the design matrix, $\bm{\mu}_0$ be the prior mean vector, and $\bm{\Omega}_0=\begin{bmatrix} {\sigma^2_{\beta}} I_{p+1} & \bm{0} \\ \bm{0} & \blockdiagA_{1\le j\le p}\big({\sigma^2_{u_{j}}} I_{K_j}\big) \end{bmatrix}$ be the prior variance-covariance matrix for the linear additive model, then the joint posterior of $\bm{\beta}$ and $\bm{u}$ is
\begin{equation*}
\begin{bmatrix} \bm{\beta} & \bm{u} \end{bmatrix}^T \bigm| \bm{y}, \bm{Q} \sim \text{MVN}\big(\bm{\mu}_1, \bm{\Omega}_1 \big),
\end{equation*}
\noindent where $\bm{\mu}_1 = \big[\sigma^{-2}_{\varepsilon} \bm{Q}^T\bm{Q} + \bm{\Omega}_0^{-1}\big]^{-1} \big[\sigma^{-2}_{\varepsilon} \bm{Q}^T\bm{y}+ \bm{\Omega}_0^{-1}\bm{\mu}_0\big]$ and $\bm{\Omega}_1=\big[\sigma^{-2}_{\varepsilon} \bm{Q}^T\bm{Q} + \bm{\Omega}_0^{-1}\big]^{-1}$.
\end{theorem}
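The posterior in Theorem \ref{theorem_1} is straightforward to compute numerically; a minimal sketch (our own naming, assuming $\sigma_\varepsilon$ is known) is:

```python
import numpy as np

def linear_additive_posterior(y, Q, mu0, Omega0, sigma_eps):
    """Joint Normal posterior of (beta, u) under Theorem 1:
    Omega1 = (Q'Q / sigma_eps^2 + Omega0^{-1})^{-1} and
    mu1    = Omega1 (Q'y / sigma_eps^2 + Omega0^{-1} mu0)."""
    Om0_inv = np.linalg.inv(Omega0)
    Omega1 = np.linalg.inv(Q.T @ Q / sigma_eps**2 + Om0_inv)
    mu1 = Omega1 @ (Q.T @ y / sigma_eps**2 + Om0_inv @ mu0)
    return mu1, Omega1
```

In the scalar case this reduces to the familiar conjugate Normal update, which provides an easy correctness check.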
Based on the result from Theorem \ref{theorem_1}, one can gain insight into the types of designs that would be preferred under different parameter configurations of the linear additive model described in Equation \eqref{Eq:28}.
This was explored for a range of designs that varied in terms of how spread out the points are across the design space, see Appendix \ref{appendix:derivecoro1} for full details.
The result from Theorem \ref{theorem_1} was then used to evaluate the expected KLD utility for these designs under different parameter configurations; the results of which are shown in Figure \ref{figure:theroy} of Appendix \ref{appendix:derivecoro1}.
The general patterns in these results lead to the following corollary.
\begin{coll}\label{Col:1}
Given the result from Theorem \ref{theorem_1}, the following patterns in preferred designs under the above linear additive model for the KLD utility function can be determined:
\begin{enumerate}[label=(\roman*)]
\item As $\sigma_{u}$ increases, there is a preference for more spread out design points.
\item As $K$ increases, there is a preference for more spread out design points.
\item As $\sigma_{\varepsilon}$ increases, there is a preference for replicating design points.
\end{enumerate}
\end{coll}
The results of the above corollary would seem to make intuitive sense, particularly in light of the model realisations given in Figure \ref{figure:Ex1_wigg_data}.
That is, as the wiggliness terms become more variable, design points become more spread out to estimate departures from the linear relationship.
Indeed, when this term is relatively small, a roughly linear relationship is observed, so boundary points would be expected to be preferred.
The same preferences would seem reasonable for increasing and decreasing values of $K$, respectively.
Lastly, increasing replication as $\sigma_{\varepsilon}$ increases would also seem sensible for mean and variance estimation.
To further explore designs under the KLD utility, we found optimal designs under a range of values for $\sigma_{u}$, $K$, $\sigma_{\varepsilon}$, and $n$ by applying the ACE algorithm described in Section \ref{subsection:optimisation}, and results are shown in Figure \ref{figure:Ex1_design}.
Here, the values are set as $\sigma_{u}=(1,5,10,20,30)$, $K=(3,4,6,12)$ for $n=12$ and $K=(3,4,6,12,24)$ for $n=24$, and $\sigma_{\varepsilon}=(0.1,0.5,1)$.
As can be seen, these results align with what is given in the above corollary, which is based on the result from Theorem \ref{theorem_1}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.6]{Final_plots/Ex1_design.jpeg}
\caption{Optimal design points obtained for linear additive model under different values of $\sigma_{u}$, $K$, $\sigma_{\varepsilon}$, and $n$.}
\label{figure:Ex1_design}
\end{figure}
To explore the robustness properties of the optimal designs shown in Figure \ref{figure:Ex1_design}, Bayesian designs under the KLD utility function were also found under a linear, quadratic and cubic polynomial regression model.
Here, an expression for the posterior was obtained following a similar approach to that shown in the proof of Theorem \ref{theorem_1}, and the ACE algorithm was again used to find the optimal designs.
Of note, such designs were found to be similar to those based on D-optimality (see \textcite{atkinson2007optimum}), and the optimal designs were not influenced by the choice of $\sigma_\varepsilon$.
Of interest is how well the designs found under the GAM would perform with respect to designs that would be optimal under these polynomial models.
This is of interest as polynomial models are potential realisations under the GAM specification.
To evaluate this, efficiency of a design $\bm{d}$ can be defined as:
\begin{equation}\label{Eq:29}
R(\bm{d}) = \frac{U_{pol}(\bm{d})}{U_{pol}(\bm{d}_{pol}^*)},
\end{equation}
\noindent where $U_{pol}$ denotes the expected utility under a polynomial model and $\bm{d}_{pol}^*$ is the optimal design under the polynomial model.
If $\bm{d}$ is the optimal design under the GAM, $R(\bm{d})$ can be interpreted as the relative amount of information that is expected to be gained via the GAM design, compared to what would be optimal under the given polynomial model (if it is the preferred model).
These efficiencies are shown in Figure \ref{figure:Ex1_rel_eff} for a range of different parameter configurations.
As can be seen, $R(\bm{d})=1$ when the assumed model matches the underlying model, as occurs by construction for each design found under its own polynomial model.
However, when this guess is incorrect, the loss in efficiency can be large, particularly when the assumed model is less complex than the underlying model.
In terms of the designs found for the GAMs, efficiencies generally remain above $0.9$, suggesting they are highly efficient for parameter estimation under each polynomial model.
The one exception is when $\sigma_{u}=1$.
In this case, the behaviour of the model is close to that of a linear model (see Figure \ref{figure:Ex1_wigg_data}), thus the model is providing little robustness to departures from the linear relationship, resulting in reduced performance.
\begin{figure}[H]
\centering
\includegraphics[scale=0.55]{Final_plots/Ex1_rel_eff.jpeg}
\caption{Relative efficiency ($R(\bm{d})$) of the optimal GAMM designs compared to optimal designs for polynomial models.}
\label{figure:Ex1_rel_eff}
\end{figure}
\subsection{Monitoring of submerged shoals} \label{subsection:example2}
As discussed in Section \ref{subsection:motivating_example}, our objective is to derive robust Bayesian designs for monitoring coral cover on submerged shoals. To form a basis for these designs, the initial surveys conducted by AIMS were considered.
The response variable for this modelling, coral cover, was assessed from a series of images of the seafloor collected along a transect by towing an unmanned imaging rig approximately $1.5$ meters (m) off the seafloor.
To convert an image into an estimate of coral cover, 20 points were randomly placed throughout each image, and each point was classified as being placed on coral or not. This yielded Binomial data for each image, and led to the consideration of a generalised additive logistic regression model.
In terms of covariate information, an abundance of data about the reef topography were available.
Full details are given in Appendix \ref{appendix:covariatedescription}.
To account for potential spatial dependency in the data, a spatial grid was considered, see \textcite{wines2020accounting} for further details.
From this, the following GA(M)M can be proposed:
\begin{equation}\label{Eq:30}
\centering
\begin{split}
\bm{y}\mid \bm{\beta}, \bm{u},\bm{v}, \bm{s} \sim {}& \text{Binomial}\Big(\text{logit}^{-1}\big(\bm{X}\bm{\beta} + \bm{Z}\bm{u} + \bm{W}\bm{v}+ \bm{s}\big),20\Big), \\
\bm{\beta} &= (\beta_0, \ldots,\beta_{p+q})^T, \\
\beta_j\mid \sigma^2_{\beta_j} & \sim \text{N}(0,\sigma^2_{\beta_j}), j=0,\ldots,p+q, \\
\bm{u} &= (\bm{u}_1, \ldots,\bm{u}_{p})^T, \\
\bm{u}_j &= (u_{j1},\ldots,u_{jK_j}),\, j=1,\ldots,p \\
\bm{u}_j|\sigma^2_{u_{j}} &\sim \text{N}(0,\sigma^2_{u_{j}}),\, j=1,\ldots,p\\
\bm{v}_r| \lambda_{rf} ,\bm{S}_{rf} &\sim \text{MVN}\Big(\mathbf{0},\big(\sum_{f=0}^{2} \lambda_{rf} \bm{S}_{rf}\big)^{-1}\Big), \,r=1,\ldots,R\\
\bm\lambda &= (\bm\lambda_{1},\ldots,\bm\lambda_{R})^T , \bm\lambda_{r} = (\lambda_{r0},\lambda_{r1},\lambda_{r2}) , \\
\bm{s} | \phi_1 & \sim {} \text{N}(0,\phi_1). \\
\end{split}
\end{equation}
As we have data from three different sampling years, we first considered data from each year separately. For each year, all possible combinations of covariates were considered, along with all possible two-way interactions (when both main effects were included in the model). Note that, before considering the models with two-way interactions, any model containing a pair of covariates with pairwise correlation greater than 0.5 in absolute value was discarded.
In addition, we also considered models for data collected across all years where a random effect (denoted as $\bm{t}$) was included to account for any inter-annual variation.
This random effect $\bm{t}$ was included in the linear predictor in Equation \eqref{Eq:30} as an additive term, where we denote the variance of $\bm{t}$ by $\phi_2$.
To determine which covariates and two-way interactions were appropriate to include in this model, we followed the same procedure as described for the models for each year.
Model choice was undertaken via the model evidence (Equation \eqref{Eq:22}), where each model was considered equally likely {\it a priori}.
This resulted in the most appropriate model, for all cases, being the one that included depth as the only covariate.
Depth is commonly a strong predictor of ecological patterns in marine environments \parencite{barnes1999introduction}. For corals, this relationship is largely due to changes in ambient light with depth \parencite{laverick2020generalized}, as the corals considered here are all species that have zooxanthellae, and are therefore, at least in part, reliant on light for photosynthesis \parencite{falkowski1984light}.
The relationship between depth and the mean prediction based on these four models is displayed in Figure \ref{figure:Ex2_depth_vs_predictor}. Based on each of these model fits, the posterior distribution from each model was considered as prior information for constructing sampling designs (see Appendix \ref{appendix:B_priors_for_design}).
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{Final_plots/Ex2_depth_vs_predictor.jpeg}
\caption{Relationship between depth and the linear predictor obtained from the GAMM fitted to data from 2010, 2011 and 2013, and to all data with a year random effect (yre). Note that the fishnet random effects are not included in the predictions so that the relationship between depth and the linear predictor/mean prediction can be readily observed.}
\label{figure:Ex2_depth_vs_predictor}
\end{figure}
To find the Bayesian designs for each of these four preferred models, a search must be conducted across the whole shoal.
To achieve this, we use a combination of CE and ACE algorithms where new sampling locations (i.e.\ different to those previously sampled) are possible.
To search across the shoal, design parameters are introduced in order to define the placement of a transect.
This is achieved by defining three design parameters: the starting point of the transect, given by two coordinates based on the Easting and Northing, say $E_0$ and $N_0$; the angle of the transect, say $\omega$; and the length of the transect, say $l_t$, in meters (m) (see Figure \ref{figure:Ex2_tr_ori}).
\begin{figure}[H]
\centering
\includegraphics[scale=0.5]{Final_plots/Ex2_transect_ori.JPG}
\caption{Orientation of a transect based on the design parameters: starting point ($E_0,N_0$), length $l_t$ and angle $\omega$.}
\label{figure:Ex2_tr_ori}
\vspace{-0.5cm}
\end{figure}
Depending on the length of the transect, the end point (i.e.\ $E_1$ and $N_1$) can be evaluated using the following equations:
\begin{equation}\label{Eq:31}
\begin{split}
E_1 = {}& E_0 + l_{t} \cos(\omega),\\
N_1 = {}& N_0 + l_{t} \sin(\omega).\\
\end{split}
\end{equation}
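Equation \eqref{Eq:31} can be evaluated directly; the small sketch below (our own naming) assumes $\omega$ is given in radians:

```python
import math

def transect_endpoint(E0, N0, l_t, omega):
    """End point (E1, N1) of a transect with start (E0, N0), length l_t
    (in meters) and angle omega (in radians), as in Eq. (31)."""
    return E0 + l_t * math.cos(omega), N0 + l_t * math.sin(omega)
```

If the angle is instead parameterised in degrees for the optimisation, it would need to be converted (e.g.\ via \texttt{math.radians}) before this evaluation.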
Based on the transects in the previously collected data and additional practical constraints, the total number of transects, and the length, width and number of data points collected within each transect, were set to 18, $500$m, $50$m and $50$, respectively.
Discrete points were considered for $E_0$ and $N_0$ by introducing a $500\text{m} \times 500\text{m}$ spatial grid that covers the entire shoal and a continuous space was considered for $\omega$. Thus, a CE algorithmic-type approach was used to search the space of $E_0$ and $N_0$ while an ACE algorithmic-type approach was used to search the space of $\omega$.
Designs found under the four GAMMs are shown in Figure \ref{figure:Ex2_optimal_designs}.
As can be seen, these designs appear to be located over relatively shallow areas of the shoal, and this is actually where the probability of having coral is high according to predictions from the fitted models (see Appendix \ref{appendix:C_maps}).
In addition to this, data are also being collected across some gradients of depth, which would seem reasonable for estimating the associated effect.
\begin{figure}[H]
\centering
\includegraphics[scale=0.65]{Final_plots/Ex2_optimal_designs.jpeg}
\caption{Optimal designs found under different priors displayed on the shoal. The $(500\text{m} \times 500\text{m})$ spatial grid which is used to define the design parameters is added to each plot.}
\label{figure:Ex2_optimal_designs}
\end{figure}
As in the previous example, we explored the robustness properties of the designs shown in Figure \ref{figure:Ex2_optimal_designs}.
To do so, we again found Bayesian designs under alternative polynomial models of degrees one, two and three. Based on the relationship between depth and the logit of the mean coral cover found from the previously fitted models, these polynomial models do not seem unreasonable (see Figure \ref{figure:Ex2_depth_vs_predictor}).
Thus, these polynomial models were each fitted to the data sets with depth as a covariate (resulting in a total of $12$ models).
The posteriors from these models were then used to form priors for the respective polynomial designs.
By applying the same optimisation algorithm used for the GAMM designs, we then found the corresponding $12$ optimal designs.
Accordingly, design efficiencies were calculated using Equation \eqref{Eq:29}, and are shown in Figure \ref{figure:Ex2_relative_efficiency}.
\begin{figure}[H]
\centering
\includegraphics[scale=0.65]{Final_plots/Ex2_relative_efficiency.jpeg}
\caption{Relative efficiencies of the optimal GAMM designs compared to optimal designs obtained under polynomial models. Each plot represents a different prior used to find the optimal design based on data collected in different years, while ``yre'' denotes the model with a year random effect based on data collected across all three years.}
\label{figure:Ex2_relative_efficiency}
\end{figure}
From Figure \ref{figure:Ex2_relative_efficiency}, it can be seen that the designs found based on the GAMMs remain highly efficient (i.e.\ relative efficiency $>0.875$) under the three alternative models, and this can be a reasonable improvement when compared to assuming a polynomial model. This highlights the robustness properties of designs found under the approach proposed in this paper.
\section{Discussion} \label{section:discussion}
Optimal sampling strategies are critical for deeper coral reef and shoal systems, due not only to their ecological importance as biodiversity hotspots but also to the significant cost of accessing and field sampling in remote offshore and deeper water environments. To address this problem, we propose a Bayesian design framework for finding designs that are robust to unknown model uncertainty. The key innovation is that, rather than finding designs under a linear or generalised linear model, designs are found under a flexible model. The flexible model considered here is a GAM, which is well-known for providing flexible model fits when the functional relationship between the response and continuous predictors is unknown. Here, we have exploited this flexibility to provide model robust designs. The benefits of doing so have been demonstrated in two examples where designs found under a GAM formulation were shown to be robust across different alternative models. This is a highly desirable property of these designs as, of note, such alternative models were not explicitly specified {\it a priori}. Such robustness would seem appealing in practice, and this point may be supported by the recent surge in the use of GAMs for inference more generally.
Here our focus was on deriving designs for monitoring coral cover and demonstrating how GAM(M)s can be used to form robust designs when the underlying model is unknown.
However, the approach developed could be used to explore a range of questions in the context of optimising a monitoring design for these off-shore submerged shoal habitats.
Key questions of future interest in the study of these benthic communities include: 1) is there an optimal transect length and/or density of sampling points (images) within each transect that should be used; 2) what is the spatial scaling of patchiness of hard coral within shoal habitat, and should alternative modelling approaches be considered; 3) how does spatial precision impact optimal design configurations; and 4) what would be the optimal design for detecting changes through time. As interest in these submerged reef habitats grows \parencite{kahng2010community}, developing optimal designs for detecting changes in their rich coral fauna through time is critical, which requires a coral cover per taxa approach, and extensions to our methods could be considered to address this.
Across the examples considered in this work, we adopted the KLD utility function as our focus was on parameter estimation broadly across all model parameters, including parameterising the relationship with depth with greater certainty. However, there would also be value in focusing on developing methods that optimise alternative utilities. An example of interest would be maximising the precision of the estimated probability of observing coral across the shallow reef area of a shoal within each year. Such a utility may yield designs that are better suited to assessing the impact of potential catastrophic events, such as oil spills, coral bleaching or severe storm events.
Some insight into this can be provided from the designs found here but a utility could be constructed to target this explicitly.
In principle, the approach proposed here could be considered for this purpose, and this is an area of research we plan to consider into the future.
While a large number of covariates were available for developing our model for the case study, model selection resulted in the clear outcome that across all years, and when year was included as a temporal random effect, a model containing only depth was appropriate. Depth represents a strong gradient in a range of environmental factors that can influence marine communities. In particular, the attenuation of light with depth is a key driver determining the distribution of organisms that rely on photosynthesis, such as the zooxanthellate corals \parencite{laverick2020generalized}. As we would expect, the probability of observing coral was always highest on the shallowest parts of the shoal, reaching values as high as 25 percent (see Figure \ref{figure:Ex2_depth_vs_predictor}). Such probabilities of observing hard coral are comparable to the more commonly recognised shallow reef communities. The prevalence of these zooxanthellate corals drops off rapidly with depth, declining to less than one percent predicted mean probability at depths of between 35 and 40 meters. The strong relationship with depth observed here and elsewhere suggests that a similar depth-based model framework may have wide applicability for determining optimal sampling designs for similar shoal-like mesophotic reefs more broadly. Future work could aim to explore the degree of model transferability \parencite{yates2018outstanding} from the one developed for Barracouta shoal to other shoals across the north-west shelf and elsewhere. However, it must be noted that the expected depth range of zooxanthellate corals will differ among regions due to variation in light attenuation because of differences in the optical properties of the water, such as turbidity. For example, in a comparison of the benthic communities at Glomar Shoal and Rankin Bank, also off the north-west coast of Australia, there was a difference of up to 20m in the lower limits of phototrophic taxa \parencite{abdul2018biodiversity}.
Given the strong dependence of phototrophic communities on light, future work may do better to focus on a light-driven process model, perhaps building on the recent work of \textcite{laverick2020generalized}.
The focus of the current work has been on hard coral cover, which is a common metric used to assess the health of coral reef communities \parencite{obura2019coral}. A focus on hard coral cover only is, however, one limitation of this work, because it does not consider changes in biodiversity or community composition. New community assemblages with altered species composition may confer different functional traits. Designs suitable for monitoring community assemblages should be a priority for future modelling, to allow us to monitor a range of taxa of interest and/or conservation concern.
To provide additional robustness to model uncertainty, one could consider placing a prior distribution on $K$. This was not pursued in this work as Example 1 was exploratory, and the choice for Example 2 was relatively clear. However, we note that this would be a straightforward extension to include in future studies, as appropriate. In addition, alternative flexible modelling approaches could be adopted (rather than a GAM implementation). For example, a Gaussian Process model could be included within the linear predictor to capture model discrepancy \parencite{kennedy2001bayesian}.
It would seem that including such a term could lead to similarly model robust designs, and is an area we also hope to explore into the future.
\printbibliography[title=References,heading=bibnumbered]
\section*{Acknowledgments}
We thank the Seascape Health and Resilience team of the Australian Institute of Marine Science for the curation and provision of data.
We acknowledge the Aboriginal and Torres Strait Islander People as the Traditional Owners of the places where AIMS works, both on the land and in the sea country of tropical Australia.
We pay our respects to the Elders; past, present and future; and their continuing culture, beliefs and spiritual relationships and connection to the land and sea.
\section*{Funding}
DDS is supported by the Australian Technology Network of Universities Industry Doctoral Training Centre (ATN IDTC) Scholarship with partner funding from the Australian Institute of Marine Science.
JMM was supported by an Australian Research Council Discovery Project (DP200101263).
\vspace{-0.6cm}
\begin{appendices}
\section{Additional Material for Section \ref{subsection:example1}}
\subsection{Proof of Theorem \ref{theorem_1}}\label{appendix:prooftheorem1}
\begin{proof}
Let $\bm{y}= \bm{X}\bm{\beta} +\bm{Z}\bm{u} + \bm{\varepsilon}$, $\bm{\beta}=\begin{bmatrix} \beta_0 & \beta_1 & \ldots & \beta_p \end{bmatrix}^T$,
$\bm{u}=\begin{bmatrix}u_{11} & \ldots & u_{1{K_1}} & \ldots & u_{p1} & \ldots & u_{p{K_p}} \end{bmatrix}^T$, $\bm{y}=\begin{bmatrix} y_1 & \ldots & y_n \end{bmatrix}^T$,
$\bm{X}=\begin{bmatrix} 1 & x_{11} & \ldots & x_{1p}\\ \vdots & \vdots & & \vdots \\ 1 & x_{n1} & \ldots & x_{np} \end{bmatrix}$,
$\bm\varepsilon = \begin{bmatrix} \varepsilon_1 & \ldots & \varepsilon_n \end{bmatrix}^T$ and
$ \bm{Z} = \begin{bmatrix}
z_{11}(x_{11}) & \ldots & z_{1K_1}(x_{11}) & \ldots & z_{p1}(x_{1p}) & \ldots & z_{pK_p}(x_{1p}) \\
\vdots & \ldots & \vdots & \ldots & \vdots & \ldots & \vdots \\
z_{11}(x_{n1}) & \ldots & z_{1K_1}(x_{n1}) & \ldots & z_{p1}(x_{np}) & \ldots & z_{pK_p}(x_{np})
\end{bmatrix}$. Then the Bayesian model for the linear additive model can be written as follows:
\begin{align*}
\bm{y}\bigm| \bm\beta,\bm{u},\bm{X},\bm{Z},\sigma^2_{\varepsilon} & \sim \text{N} \big( \bm{X}\bm{\beta}+\bm{Z}\bm{u},\sigma^2_{\varepsilon}\bm{I}_n\big), \\
\beta_j|\sigma^2_{\beta_j} & \sim \text{N}(0,\sigma^2_{\beta_j} ),\, j=0,\ldots,p, \\
\bm{u} &= (\bm{u}_1, \ldots,\bm{u}_{p})^T, \\
\bm{u}_j &= (u_{j1},\ldots,u_{jK_j}),\, j=1,\ldots,p, \\
\bm{u}_j|\sigma^2_{u_{j}} &\sim \text{N}(\bm{0},\sigma^2_{u_{j}}\bm{I}_{K_j}),\, j=1,\ldots,p.
\end{align*}
\noindent To obtain the posterior of the model parameters, let $\bm{\theta}=\begin{bmatrix} \bm{\beta} & \bm{u}\end{bmatrix}$ be the vector of model parameters, $\bm{Q}=\begin{bmatrix}\bm{X} & \bm{Z}\end{bmatrix}$ be the design matrix, $\bm{\mu}_0$ be the prior mean vector, and $\bm{\Omega_0}=\begin{bmatrix} {\sigma^2_{\beta}} \bm{I}_{p+1} & \bm{0} \\ \bm{0} & \operatorname{blockdiag}_{1\le j\le p}({\sigma^2_{u_{j}}} \bm{I}_{K_j}) \end{bmatrix}$ be the prior variance-covariance matrix, then
\begin{align*}
p\big(\bm\beta,\bm{u},\sigma^2_{\varepsilon},\sigma^2_{u},\sigma^2_{\beta} \bigm| \bm{y}, \bm{Q} \big) \ \propto \ & p\big(\bm{y}\bigm| \bm\beta,\bm{u},\sigma^2_{\varepsilon},\sigma^2_{u},\sigma^2_{\beta}, \bm{Q} \big)\ . \ p\big( \bm\beta,\bm{u},\sigma^2_{\varepsilon},\sigma^2_{u},\sigma^2_{\beta} \big) \\
\propto \ & p\big(\bm{y}\bigm| \bm\beta,\bm{u},\sigma^2_{\varepsilon}, \bm{Q} \big)\ . \ p\big( \bm\beta,\bm{u} \bigm| \sigma^2_{\varepsilon}, \sigma^2_{u},\sigma^2_{\beta} \big) \ . \ p\big(\sigma^2_{\varepsilon},\sigma^2_{u},\sigma^2_{\beta} \big) \\
\propto \ & p\big(\bm{y}\bigm| \bm\beta,\bm{u},\sigma^2_{\varepsilon}, \bm{Q} \big)\ . \ p\big( \bm\beta,\bm{u} \bigm| \sigma^2_{u}, \sigma^2_{\beta} \big) \ . \ p\big(\sigma^2_{\varepsilon},\sigma^2_{u},\sigma^2_{\beta} \big).
\end{align*}
\noindent If $\sigma^2_{\varepsilon},\sigma^2_{u},\sigma^2_{\beta}$ are known,
\begin{align}\label{Eq:1_dash}
p & \big( \bm\beta,\bm{u} \bigm| \sigma^2_{\varepsilon}, \sigma^2_{u}, \sigma^2_{\beta}, \bm{y}, \bm{Q} \big) \notag \\
\propto \ & p\big(\bm{y}\bigm| \bm\beta,\bm{u}, \sigma^2_{\varepsilon}, \bm{Q} \big) . p\big(\bm{\beta}, \bm{u} \bigm| \sigma^2_{u},\sigma^2_{\beta} \big) \notag \\
\propto \ & \exp{\Bigg( \frac{(\bm{y}-\bm{Q}\bm{\theta})^T (\bm{y}-\bm{Q}\bm{\theta})}{-2\sigma^2_{\varepsilon}} \Bigg)} . \exp{\Bigg( \frac{(\bm{\theta} -\bm{\mu}_0)^T \bm{\Omega}_0^{-1} (\bm{\theta} -\bm{\mu}_0) }{-2} \Bigg)} \
\{\because \ \bm{\theta}=\begin{bmatrix} \bm{\beta} & \bm{u}\end{bmatrix} \} \notag \\
= \ & \exp{\Bigg(\frac{\sigma^{-2}_{\varepsilon}(\bm{y}^T\bm{y} - \bm{y}^T\bm{Q}\bm{\theta} - \bm{\theta}^T\bm{Q}^T\bm{y} + \bm{\theta}^T\bm{Q}^T\bm{Q}\bm{\theta}) + \bm{\theta}^T\bm{\Omega}^{-1}_0\bm{\theta} - 2\bm{\theta}^T\bm{\Omega}^{-1}_0 \bm{\mu}_0 + \bm{\mu}^T_0 \bm{\Omega}^{-1}_0 \bm{\mu}_0 }{-2} \Bigg)} \notag \\
\propto \ & \exp{\Bigg( -\frac{1}{2} \Big( \bm{\theta}^T \big(\sigma^{-2}_{\varepsilon}\bm{Q}^T\bm{Q} + \bm{\Omega}_0^{-1}\big)\bm{\theta} - 2\sigma^{-2}_{\varepsilon} \bm{\theta}^T \bm{Q}^T\bm{y}- 2\bm{\theta}^T\bm{\Omega}^{-1}_0 \bm{\mu}_0 \Big) \Bigg)}.
\end{align}
\noindent Assume that,
\begin{align}
\bm\beta,\bm{u} \bigm| \sigma^2_{\varepsilon},\sigma^2_{u},\sigma^2_{\beta}, \bm{y}, \bm{Q} \ \sim \ \mbox{MVN}(\bm{\mu}_1,\bm{\Omega}_1), \notag
\end{align}
where $\bm{\mu}_1$ is the posterior mean vector, and $\bm{\Omega}_1$ is the posterior variance-covariance matrix for the above linear additive model, then
\begin{align}\label{Eq:2_dash}
p\big( \bm{\beta}, \bm{u} \bigm| \sigma^2_{\varepsilon},\sigma^2_{u},\sigma^2_{\beta}, \bm{y}, \bm{Q} \big) \propto & \exp{\Big(-\frac{1}{2} (\bm{\theta}-\bm{\mu}_1)^T \bm{\Omega}_1^{-1} (\bm{\theta}-\bm{\mu}_1) \Big)} \notag \\
= & \exp{\Big(-\frac{1}{2} \big(\bm{\theta}^T\bm{\Omega}^{-1}_1\bm{\theta} -2\bm{\theta}^T\bm{\Omega}^{-1}_1\bm{\mu}_1 + \bm{\mu}^T_1\bm{\Omega}^{-1}_1\bm{\mu}_1\big)\Big)} \notag \\
\propto & \exp{\Big(-\frac{1}{2} \big(\bm{\theta}^T\bm{\Omega}^{-1}_1\bm{\theta} -2\bm{\theta}^T\bm{\Omega}^{-1}_1\bm{\mu}_1\big)\Big)}.
\end{align}
\noindent By comparing Equations \eqref{Eq:1_dash} and \eqref{Eq:2_dash},
\begin{align*}
\bm{\mu}_1 = & \ \big[\sigma^{-2}_{\varepsilon} \bm{Q}^T\bm{Q} + \bm{\Omega}^{-1}_0\big]^{-1} \big[\sigma^{-2}_{\varepsilon} \bm{Q}^T\bm{y}+ \bm{\Omega}^{-1}_0\bm{\mu}_0\big], \\
\bm{\Omega}_1 = & \ \big[\sigma^{-2}_{\varepsilon} \bm{Q}^T\bm{Q} + \bm{\Omega}^{-1}_0\big]^{-1}.
\end{align*}
\end{proof}
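As a numerical sanity check on these closed-form expressions, the posterior mean and covariance can be evaluated directly. The sketch below (Python, with simulated data; the dimensions, prior settings, and variance values are illustrative assumptions, not values from the case study) computes $\bm{\mu}_1$ and $\bm{\Omega}_1$ exactly as stated in the proof:

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 30, 4                                 # observations, columns of Q = [X Z]
Q = np.column_stack([np.ones(n), rng.normal(size=(n, q - 1))])
sigma_eps2 = 0.5                             # residual variance (assumed known)
Omega0 = np.diag([10.0, 10.0, 2.0, 2.0])     # illustrative prior covariance
mu0 = np.zeros(q)                            # prior mean
theta_true = np.array([1.0, -0.5, 0.3, 0.8])
y = Q @ theta_true + rng.normal(scale=np.sqrt(sigma_eps2), size=n)

# Posterior from the theorem:
#   Omega1 = (Q^T Q / sigma_eps^2 + Omega0^{-1})^{-1}
#   mu1    = Omega1 (Q^T y / sigma_eps^2 + Omega0^{-1} mu0)
Omega0_inv = np.linalg.inv(Omega0)
Omega1 = np.linalg.inv(Q.T @ Q / sigma_eps2 + Omega0_inv)
mu1 = Omega1 @ (Q.T @ y / sigma_eps2 + Omega0_inv @ mu0)
```

On any such draw the posterior covariance is symmetric, and its diagonal never exceeds the prior variances, as expected from adding the data precision $\sigma^{-2}_{\varepsilon}\bm{Q}^T\bm{Q}$ to the prior precision.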
\clearpage
\subsection{Derivation of Corollary \ref{Col:1}} \label{appendix:derivecoro1}
To derive points (i)--(iii) in Corollary 1, consider the numerical setup in the following two tables.
Tables \ref{Tab:1} and \ref{Tab:2} define five different designs with $n=12$ and six different designs with $n=24$, respectively.
For instance, in Table \ref{Tab:1}, the design with index $1$ has $2$ unique design points $(-1,1)$, each repeated six times, so that the total number of design points is $12$.
Further, all the designs are equally spaced, and the distance between two consecutive points is displayed in the final column.
The third column displays the unique design points of each design and the number of repetitions of each unique design point.
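All of the designs in Tables \ref{Tab:1} and \ref{Tab:2} follow one simple recipe, which can be sketched as follows (the function name is ours, not from the paper):

```python
import numpy as np

def replicated_design(n, k):
    """k unique equally spaced points on [-1, 1], each repeated n/k times,
    so the spacing between consecutive points is 2/(k-1)."""
    assert n % k == 0, "k must divide n"
    points = np.linspace(-1.0, 1.0, k)
    return np.repeat(points, n // k)

# e.g. design index 3 in Table 1: n=12, k=4 -> (-1, -1/3, 1/3, 1), each repeated 3 times
d = replicated_design(12, 4)
```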
\begin{table}[H]
\small
\caption{Design setup for Corollary 1 when $n=12$}
\label{Tab:1}
\begin{tabular}{cccc} \hline
\textbf{Design Index} & \textbf{$\mbox{Points} \times \mbox{Repetitions}$} & \textbf{Design (n=12)} & \textbf{Distance} \\ \hline
1 & $2 \times 6$ & $(-1, 1) \times 6$ & $\frac{2}{1} = 2$ \\
2 & $3 \times 4$ & $(-1, 0, 1) \times 4$ & $\frac{2}{2} =1$ \\
3 & $4 \times 3$ & $(-1,-\frac{1}{3},\frac{1}{3},1) \times 3$ & $\frac{2}{3}$ \\
4 & $6 \times 2$ & $(-1,-\frac{3}{5}, -\frac{1}{5}, \frac{1}{5}, \frac{3}{5},1) \times 2$ & $\frac{2}{5}$ \\
5 & $12 \times 1$ & $(-1,-\frac{9}{11}, -\frac{7}{11}, -\frac{5}{11}, -\frac{3}{11},-\frac{1}{11},\frac{1}{11}, \frac{3}{11}, \frac{5}{11}, \frac{7}{11}, \frac{9}{11}, 1) \times 1$ & $\frac{2}{11}$ \\ \hline
\end{tabular}
\end{table}
\begin{table}[H]
\small
\caption{Design setup for Corollary 1 when $n=24$}
\label{Tab:2}
\begin{tabular}{cccc} \hline
\textbf{Design Index} & \textbf{$\mbox{Points} \times \mbox{Repetitions}$} & \textbf{Design (n=24)} & \textbf{Distance} \\ \hline
1 & $2 \times 12$ & $(-1, 1) \times 12$ & $\frac{2}{1} = 2$ \\
2 & $3 \times 8$ & $(-1, 0, 1) \times 8$ & $\frac{2}{2} =1$ \\
3 & $4 \times 6$ & $(-1,-\frac{1}{3},\frac{1}{3},1) \times 6$ & $\frac{2}{3}$ \\
4 & $6 \times 4$ & $(-1,-\frac{3}{5}, -\frac{1}{5}, \frac{1}{5}, \frac{3}{5},1) \times 4$ & $\frac{2}{5}$ \\
5 & $12 \times 2$ & $(-1,-\frac{9}{11}, -\frac{7}{11}, -\frac{5}{11}, -\frac{3}{11},-\frac{1}{11},\frac{1}{11}, \frac{3}{11}, \frac{5}{11}, \frac{7}{11}, \frac{9}{11}, 1) \times 2$ & $\frac{2}{11}$ \\
6 & $24 \times 1$ & \begin{tabular}{@{}c@{}} $(-1,-\frac{21}{23}, -\frac{19}{23}, -\frac{17}{23}, -\frac{15}{23},-\frac{13}{23},-\frac{11}{23},-\frac{9}{23},-\frac{7}{23},-\frac{5}{23},-\frac{3}{23},-\frac{1}{23},$ \\ $\frac{1}{23}, \frac{3}{23}, \frac{5}{23}, \frac{7}{23}, \frac{9}{23},\frac{11}{23},\frac{13}{23},\frac{15}{23},\frac{17}{23},\frac{19}{23},\frac{21}{23}, 1) \times 1$ \end{tabular} & $\frac{2}{23}$ \\\hline
\end{tabular}
\end{table}
Then, we numerically evaluated the KLD utility for the designs in Tables \ref{Tab:1} and \ref{Tab:2} for different combinations of $K$ and $\sigma_u$ values (see Figure \ref{figure:theroy}).
Here, the KLD utility was evaluated based on the posterior obtained in Theorem \ref{theorem_1}.
These numerical results are consistent with points (i)--(iii) of Corollary 1 for different combinations of $\sigma_u$, $K$, $\sigma_\varepsilon$ and $n$.
\begin{figure}[H]
\centering
\includegraphics[scale=0.7]{Final_plots/Ex1_theory.jpeg}
\caption{The left panel displays KLD utility evaluations when $n=12$ (see Table \ref{Tab:1} for the design setup), while the right panel displays KLD utility evaluations when $n=24$ (see Table \ref{Tab:2} for the design setup). }
\label{figure:theroy}
\end{figure}
\section{Additional Materials for Section \ref{subsection:example2}}
\subsection{Priors for GAMM designs} \label{appendix:B_priors_for_design}
\begin{table}[H]
\center
\caption{Posterior mean and the standard deviation (s.d) of the model parameters from each GAMM model that were used as the prior for the design.}\label{Table:1}
\begin{tabular}{cccc} \hline
\textbf{Prior} & \textbf{Parameters} & \textbf{Mean} & \textbf{s.d} \\ \hline
\multirow{4}{*}{2010}
& $\beta_0$ & $-6.66$ & $0.06$\\
& $\beta_1$ & $\phantom{-}5.12$ & $0.08$ \\
& $\log{(1/\sigma^2_{u}})$ & $-4.52$ & $0.01$\\
& $\log{(1/\phi_1)}$ & $\phantom{-}3.40$ & $0.03$ \\ \hline
\multirow{4}{*}{2011}& $\beta_0$ & $-6.22$ & $0.01$ \\
& $\beta_1$ & $\phantom{-}5.08$ & $0.01$\\
& $\log{(1/\sigma^2_{u}})$ & $-3.40$ & $0.01$\\
& $\log{(1/\phi_1)}$ & $\phantom{-}4.50$ & $0.01$ \\ \hline
\multirow{4}{*}{2013} & $\beta_0$ & $-7.77$ & $0.03$ \\
& $\beta_1$ & $\phantom{-}6.04$ & $0.03$ \\
& $\log{(1/\sigma^2_{u}})$ & $-3.77$ & $0.01$ \\
& $\log{(1/\phi_1)}$ & $\phantom{-}4.14$ & $0.03$ \\ \hline
\multirow{4}{*}{yre} & $\beta_0$ & $-6.03$ & $0.005$ \\
& $\beta_1$ & $\phantom{-}4.42$ & $0.01$\\
& $\log{(1/\sigma^2_{u}})$ & $-2.49$ & $0.01$\\
& $\log{(1/\phi_1)}$ & $\phantom{-}5.33$ & $0.02$ \\
& $\log{(1/\phi_2)}$ & $\phantom{-}5.28$ & $0.02$ \\ \hline
\end{tabular}
\end{table}
\subsection{Covariate description} \label{appendix:covariatedescription}
\begin{table}[H]
\caption{Description of covariates for the data used in monitoring shoals example.}
\centering
\begin{tabular}{p{1.2in}p{1.2in}p{3.6in}} \hline
\multicolumn{1}{p{1.2in}}{\Centering { \textbf{Dataset prefixes used in spreadsheet}}} & \multicolumn{1}{p{1.2in}}{\Centering {\textbf{Predictor datasets}}} & \multicolumn{1}{p{3.6in}}{\Centering {\textbf{Definition}}} \\ \hhline{---}
\multicolumn{1}{p{1.2in}}{\Centering depth} & \multicolumn{1}{p{1.2in}}{\Centering Bathymetry} & \multicolumn{1}{p{3.6in}}{\Centering Elevation relative to the Australian Height Datum (AHD)} \\
\multicolumn{1}{p{1.2in}}{\Centering asp} & \multicolumn{1}{p{1.2in}}{\Centering Aspect} & \multicolumn{1}{p{3.6in}}{\Centering Azimuthal direction of the steepest slope, calculated on a $3 \times 3$ pixel area} \\
\multicolumn{1}{p{1.2in}}{\Centering slp} & \multicolumn{1}{p{1.2in}}{\Centering Slope} & \multicolumn{1}{p{3.6in}}{\Centering First\ derivative of elevation: Average change in elevation / distance calculated on a $3 \times 3$ pixel area} \\
\multicolumn{1}{p{1.2in}}{\Centering prof} & \multicolumn{1}{p{1.2in}}{\Centering Profile curvature} & \multicolumn{1}{p{3.6in}}{\Centering Second\ derivative of elevation: concavity/convexity parallel to the slope, calculated on a $3 \times 3$ pixel area} \\
\multicolumn{1}{p{1.2in}}{\Centering plan} & \multicolumn{1}{p{1.2in}}{\Centering Plan curvature} & \multicolumn{1}{p{3.6in}}{\Centering Second\ derivative of elevation: concavity/convexity perpendicular to the slope, calculated on a $3 \times 3$ pixel area} \\
\multicolumn{1}{p{1.2in}}{\Centering curv} & \multicolumn{1}{p{1.2in}}{\Centering Curvature} & \multicolumn{1}{p{3.6in}}{\Centering Combined index of profile and plan curvature} \\
\multicolumn{1}{p{1.2in}}{\Centering hyp} & \multicolumn{1}{p{1.2in}}{\Centering Hypsometric index\textsuperscript{a}} & \multicolumn{1}{p{3.6in}}{\Centering Indicator of whether a cell is a high or low point within the local neighborhood} \\
\multicolumn{1}{p{1.2in}}{\Centering rng} & \multicolumn{1}{p{1.2in}}{\Centering Local relief (Range)\textsuperscript{ a}} & \multicolumn{1}{p{3.6in}}{\Centering Maximum minus the minimum elevation in a local neighborhood} \\
\multicolumn{1}{p{1.2in}}{\Centering std} & \multicolumn{1}{p{1.2in}}{\Centering Std Dev \textsuperscript{a}} & \multicolumn{1}{p{3.6in}}{\Centering Standard deviation of elevation} \\ \hhline{---}
\multicolumn{3}{p{\dimexpr6.06in+4\tabcolsep\relax}}{\flushleft \textsuperscript{a}{\fontsize{9pt}{10.8pt}\selectfont Local neighborhood\ analysis: run on circles of kernel pixel radius $5, 10, 15, 20, 25, 30, 35, 40, 45, 50$; original cell size is $6$\,m (interpolated multibeam)}} \\ [-0.5em]
\\ \hhline{---} \\ [-0.5em]
\end{tabular}
\end{table}
\subsection{Maps of predicted probability} \label{appendix:C_maps}
\vspace{-0.6cm}
\begin{figure}[H]
\centering
\includegraphics[scale=0.7 ]{Final_plots/Ex2_prob_coral.jpeg}
\caption{Maps of predicted probability of having coral based on the four fitted models which were used as the priors for the four designs.}
\end{figure}
\section{Derivation of Equation \ref{Eq:22}} \label{appendix:modelevidence}
\vspace{-0.6cm}
\begin{align*}
p & (\bm{y} | M=m,\bm{d})\\
= & \int_{\bm{\theta}_m} p(\bm{y},\bm{\theta}_m | M=m,\bm{d}) \mbox{d}{\bm{\theta}_m} \notag \\
= & \int_{\bm{\theta}_m} \exp{\Big(\log{\Big(p(\bm{y},\bm{\theta}_m | M=m,\bm{d})\Big)}\Big)} \mbox{d}{\bm{\theta}_m} \notag \\
\approx & \int_{\bm{\theta}_m} \exp{\Big(\nabla \log{\big(p(\bm{y},\bm{\theta}^*_m | M=m,\bm{d})\big)}(\bm{\theta}_m-\bm{\theta}^*_m) + \frac{1}{2} (\bm{\theta}_m-\bm{\theta}^*_m)^T \nabla^2 \log{\big(p(\bm{y},\bm{\theta}^*_m | M=m,\bm{d})\big)} (\bm{\theta}_m-\bm{\theta}^*_m)\Big)} \\
& \hspace{30mm} p(\bm{y},\bm{\theta}^*_m | M=m,\bm{d}) \mbox{d}{\bm{\theta}_m} \hspace{10mm} ( \mbox{second-order Taylor series expansion about the posterior mode } \bm{\theta}^*_m ) \\
= & \quad p(\bm{y},\bm{\theta}^*_m | M=m,\bm{d}) \int_{\bm{\theta}_m} \exp{\Big(0 - \frac{1}{2} (\bm{\theta}_m-\bm{\theta}^*_m)^T \bm{B}(\bm{\theta}^{\ast}_{m}) (\bm{\theta}_m-\bm{\theta}^*_m)\Big)} \mbox{d}{\bm{\theta}_m} \notag\\
& \hspace{45mm} ( \because \bm{B}(\bm{\theta}^{\ast}_{m})=-\nabla^2 \log{\big(p(\bm{y},\bm{\theta}^*_m | M=m,\bm{d})\big)} \mbox{ is the negative Hessian, and the gradient vanishes at the mode}) \\
= & \quad p(\bm{y},\bm{\theta}^*_m | M=m,\bm{d}) (2\pi)^{\frac{T}{2}} |\bm{\Sigma}(\bm{\theta}^{\ast}_{m})|^{\frac{1}{2}} \int_{\bm{\theta}_m} \frac{1}{(2\pi)^{\frac{T}{2}} |\bm{\Sigma}(\bm{\theta}^{\ast}_{m})|^{\frac{1}{2}}} \exp{\Big(-\frac{1}{2} (\bm{\theta}_m-\bm{\theta}^*_m)^T \bm{\Sigma}(\bm{\theta}^{\ast}_{m})^{-1} (\bm{\theta}_m-\bm{\theta}^*_m)\Big)} \mbox{d}{\bm{\theta}_m} \notag\\
& \hspace{45mm} ( \because \bm{\Sigma}(\bm{\theta}^{\ast}_{m})=\bm{B}(\bm{\theta}^{\ast}_{m})^{-1} ) \\
= & \quad p(\bm{y},\bm{\theta}^*_m | M=m,\bm{d}) (2\pi)^{\frac{T}{2}} |\bm{B}(\bm{\theta}^{\ast}_{m})|^{-\frac{1}{2}} \times 1\notag \\
= & \quad p(\bm{y}| \bm{\theta}^*_m , M=m,\bm{d})\, p(\bm{\theta}^*_m| M=m,\bm{d})\, (2\pi)^{\frac{T}{2}} |\bm{B}(\bm{\theta}^{\ast}_{m})|^{-\frac{1}{2}}. \notag
\end{align*}
$\therefore
\log p(\bm{y}|M=m,\bm{d}) = \log p(\bm{y}|\bm{\theta}^*_m,M=m,\bm{d}) + \log p(\bm{\theta}^*_m|M=m,\bm{d}) + {\frac{T}{2}} \log (2\pi) - {\frac{1}{2}} \log |\bm{B}(\bm{\theta}^{\ast}_{m})|.$
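The approximation above is straightforward to implement for a generic log joint density. A minimal sketch follows (Python; `log_evidence_laplace` is our name, the posterior mode is found numerically, and the quadratic test density is an illustrative assumption under which the Laplace approximation is exact):

```python
import numpy as np
from scipy.optimize import minimize

def log_evidence_laplace(log_joint, theta0):
    """log p(y|m,d) ~ log p(y, theta*) + (T/2) log(2*pi) - (1/2) log|B(theta*)|,
    where theta* is the posterior mode and B is the negative Hessian there."""
    res = minimize(lambda t: -log_joint(t), theta0, method="BFGS")
    theta_star = res.x
    T = theta_star.size
    # Negative Hessian by central finite differences
    eps = 1e-5
    B = np.zeros((T, T))
    for i in range(T):
        for j in range(T):
            ei, ej = np.eye(T)[i] * eps, np.eye(T)[j] * eps
            B[i, j] = -(log_joint(theta_star + ei + ej)
                        - log_joint(theta_star + ei - ej)
                        - log_joint(theta_star - ei + ej)
                        + log_joint(theta_star - ei - ej)) / (4 * eps ** 2)
    _, logdet = np.linalg.slogdet(B)
    return log_joint(theta_star) + 0.5 * T * np.log(2 * np.pi) - 0.5 * logdet

# Check on a Gaussian log joint, where the approximation is exact:
# log_joint(t) = -0.5 t^T A t  gives evidence (2*pi)^{T/2} |A|^{-1/2}.
A = np.diag([2.0, 3.0])
lj = lambda t: -0.5 * t @ A @ t
approx = log_evidence_laplace(lj, np.array([1.0, 1.0]))
exact = np.log(2 * np.pi) - 0.5 * np.log(np.linalg.det(A))
```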
\end{appendices}
\end{document}
\section{Keywords:} coronavirus, covid-19, forecasting, search trends, neural networks, machine learning, spatio-temporal analysis}
\end{abstract}
\section{Introduction}
Since the outbreak of coronavirus in December 2019 in Wuhan, China, COVID-19 has been spreading exponentially and has already affected nearly every country in the world, infecting millions of people and causing tens of thousands of deaths around the world (as of March 16, 2020), as shown in Figure \ref{fig:global-coronavirus}. It has caused extremely catastrophic social and economic damage throughout the world. Coronavirus job losses could total $47$ million, and the unemployment rate may hit $32\%$, according to a Federal Reserve estimate.\footnote{https://www.cnbc.com/2020/03/30/coronavirus-job-losses-could-total-47-million-unemployment-rate-of-32percent-fed-says.html} Predicting the number of infected patients is crucially important to both individual and decision-maker preparedness, and to flattening the curve. However, accurately predicting the number of infected patients is never a trivial task. There are numerous factors contributing to this virus's propagation, such as population mobility, temperature, and medical conditions.
Ferguson et al.~\cite{ferguson2020report} applied a previously published microsimulation model to the UK and the US dataset, and concluded that to flatten the curve requires a combination of social distancing of the entire population, home isolation of cases and household quarantine of their family members. The authors also estimated that up to $2.2$ million people could die if no actions were taken to stop transmission in the US. Using another statistical model, Murray et al.~\cite{covid2020forecasting} predict that the US infected patient number would peak around April 15. At this peak date, the US is projected to need $220,643$ total hospital beds ($32,976$ for ICU), and $26,381$ ventilators to support COVID-19 patients. Nationwide COVID-19 deaths are predicted to also peak on April 15, escalating to $2,214$ deaths per day on average. Nationwide, the mean value of the total COVID-19 deaths is projected at about $84,000$.
Unfortunately, most of the existing model-based prediction approaches rely on oversimplified assumptions such as virus travel distance, and timely and effective quarantine measures. However, these assumptions are rarely justified because social structures are widespread and complex. In addition, since this virus is still novel to human beings, there are still many unknowns about its spreading patterns, severity, and much more, which may introduce high irreducible error~\cite{james2013introduction}. For example, Lydia Bourouiba~\cite{ferguson2020report} recently demonstrated that a respiratory emissions model is much more complicated than the traditional established model: peak exhalation speeds can reach up to $33$ to $100$ feet per second, creating a cloud that can span approximately $23$ to $27$ feet, which is far larger than the current recommended social distancing (around $6$ feet). A $2020$ report from China demonstrated that severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) virus particles could be found in the ventilation systems in hospital rooms of patients, suggesting these virus particles can travel long distances from patients.
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{figures/global-coronavirus.pdf}
\caption{Coronavirus Confirmed Cases Worldwide \cite{googlecovid}}
\label{fig:global-coronavirus}
\end{figure}
Nowadays, more and more people have access to the internet and search for information that is closely related to their daily lives, feelings, and thoughts. It is estimated that there are around $63,000$ Google searches per second. The average person makes some three or four searches every day.\footnote{https://serpwatch.io/blog/how-many-google-searches-per-day/}
Google Trends is a website sponsored by Google that analyzes the popularity of top search queries in Google Search across various regions and languages. Since more and more people have access to the internet, they rely on it more than ever to search for information that they care about. Thus, Google trends are revealing and can provide an opportunity to examine people's concerns as well as the hot topics that they are interested in. Researchers have used Google Trends data in a number of research areas, such as: (1) disease outbreak prediction. As far as we know, Carneiro and Mylonakis~\cite{carneiro2009google} were the first authors to introduce the more generic Google Trends tool to health professionals, to show how they can track disease activity. Verma et al.~\cite{verma2018google} illustrated that there is a strong temporal correlation between some diseases (chikungunya and dengue fever in Haryana, India) and Google search trends. Zhang et al.~\cite{zhang2018using} used Google Trends and ambient temperature to predict seasonal influenza outbreaks, and suggested that internet search metrics in conjunction with temperature can be used to predict influenza outbreaks. (2) economy and financial market prediction. Hong et al.~\cite{pai2018using} used internet search trends and historical trading data to predict stock markets, and showed that using hybrid data can provide more accurate forecasting results than using historical trading data alone. Huang et al.~\cite{huang2019forecasting} found that the utilization of Google search data allows us to construct a model to forecast directional movements in the S\&P 500 index. Preis et al.~\cite{preis2013quantifying} suggested that Google Trends data does not only reflect aspects of the current state of the economy, but may also provide some insight into future trends in the behavior of economic actors. (3) political election prediction. 
A group of researchers at Wellesley College examined data from Google Trends and successfully predicted the outcome in $33.3\%$ of cases in 2008 and $39\%$ in 2010. By analyzing data from Google Trends, \cite{mavragani2016yes} calculated a valid approximation of the final result, thus contributing to the discussion of using Google Trends as an election-results prediction tool in the future.
In this paper, we explore Google trends data to derive its relationship with the COVID-19 spreading situation. Instead of focusing on model-based prediction, we propose to use Google Trends data combined with the historical time series for future case prediction. Our approach is purely data driven and skips the complicated mathematical modeling, which greatly reduces the algorithm complexity. We conducted comprehensive experiments, applying multiple popular prediction models, such as a multiple linear regression model, a statistical model, and a deep neural network, on worldwide data to examine the correlation between search trends and infected cases. Our experiments demonstrated that there is a strong relationship between infected patient cases and Google trends data, which can be used with other analysis techniques for a better understanding of this mysterious disease spreading. The contributions of this paper can be summarized as follows:
\begin{itemize}
\item To the best of our knowledge, we are the first to use Google trends data to predict the number of confirmed coronavirus cases utilizing different model types: a linear model, a statistical model, and a deep learning model.
\item We present a performance comparison across the three models, with and without the Google trends features. The results show that Google trends play an important role in the performance of the prediction models.
\end{itemize}
\section{Related Work}
There is a large number of studies about using Google Trends in forecasting algorithms. Here, we discuss two themes: the studies of Google Trends for disease control analysis, and for other application domains.
\vspace{6mm}
\subsection{Google Trends for disease control analysis}
\cite{cook2011assessing} assessed Google Flu Trends\footnote{https://www.google.org/flutrends} performance in the United States during the 2009 influenza virus A (H1N1) pandemic. The assessment showed that internet search behavior changed during pH1N1, and that the updated version of the Google Flu Trends technique performed better than the prior one \cite{yang2015accurate}.
\cite{anggraeni2016using} used Google Trends data to build a forecasting model by applying the Autoregressive Integrated Moving Average with exogenous variable (ARIMAX) method to predict the number of dengue fever cases in Indonesia. \cite{xu2017forecasting} used Google search queries to build a statistical model to anticipate the number of influenza cases in Hong Kong. They compared different forecasting approaches: Generalized Linear Model (GLM), Least Absolute Shrinkage and Selection Operator (LASSO), ARIMA, Feed-Forward Neural Networks (FNN), and Bayesian Model Averaging (BMA). The authors recommended using FNN to predict the cases with better accuracy. Similarly, \cite{ginsberg2009detecting} used search engine queries to estimate weekly influenza activity in each region of the United States with a reporting lag of about one day, compared with one to two weeks in traditional surveillance systems. \cite{silva2019googling} proposed a hybrid neural network approach named ``Denoised NNAR'', combining Neural Network AutoRegression (NNAR) with singular spectrum analysis; the analysis used Google Trends to reduce noise in fashion data. \cite{shaghaghi2020influenza} built a model using Long Short-Term Memory (LSTM) to anticipate the number of influenza cases using flu-season data from the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), together with Google Trends, to help decision makers increase or decrease vaccines and medicines in advance. \cite{kapitany2019can} stated that Google Trends was strongly correlated with Lyme disease incidence reports in Germany.
\vspace{5mm}
\subsection{Google Trends for other application domains}
\cite{bokelmann2019spurious} confirmed that there has been an increase in work using Google Trends in tourism research. In that work, Google Trends was used as a predictor for short-term tourism demand, and several traditional forecasting techniques were utilized, demonstrating that Google Trends played a significant role in short-term forecasting of tourism demand. Similar tourism-related topics were discussed by \cite{bangwayo2015can,park2017short,onder2017forecasting}. \cite{xu2014stock} showed that combining time series analysis algorithms with Google Trends and Yahoo Finance improved stock price forecasting. \cite{askitas2009google} and~\cite{bulut2018google} stated that Google Trends is useful in predicting numerous economic variables (e.g., unemployment, exchange rates). \cite{yu2019online} worked on predicting Ford car sales in Argentina using Google Trends, using the keyword ``Ford'' to improve their forecasting model. Even though there were some pitfalls in the past use of Google Trends in big data analysis, as discussed by \cite{lazer2014parable} and \cite{butler2013google}, many improvements have been made by Google since then. Additionally, studies of Google Trends analysis for disease control are still increasing over time.
\vspace{-3mm}
\section{Method}
\subsection{Building Feature for Regression Models}
We collect 13 different Google trends features based on the 13 search keywords. Some of these features might be revealing, while others might contain strong noise that is not suitable for future prediction. We differentiate those features into two classes based on the Pearson method \cite{benesty2009pearson}. This approach has shown its success in similar work such as that of \cite{nguyen2019self}. The Pearson correlation coefficient measures the correlation between a series and the original one. Given two series $X$ and $Y$, it can be defined as:
\begin{equation}
corr(X, Y) = \frac{ \sum_{i=1}^{T}{(x_i-\overline{x})(y_i-\overline{y})}}{\sqrt{\sum_{i=1}^{T}{(x_i-\overline{x})^2}}\sqrt{\sum_{i=1}^{T}{(y_i - \overline{y})^2}}},
\end{equation}
where $\overline{x}$ and $\overline{y}$ is the mean value of the time series $X$ and $Y$ respectively.
Correlation coefficient values greater than $0.7$ are treated as highly correlated and the corresponding series are used as features for the prediction model, while values below that threshold are considered noise and ignored.
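This screening step can be sketched as follows (the function names and the toy series are ours, for illustration only; the $0.7$ threshold matches the text):

```python
import numpy as np

def pearson_corr(x, y):
    """Pearson correlation coefficient between two series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / (np.sqrt((xc ** 2).sum()) * np.sqrt((yc ** 2).sum()))

def select_features(trends, confirmed, threshold=0.7):
    """Keep only trend series whose correlation with confirmed cases exceeds the threshold."""
    return {kw: s for kw, s in trends.items() if pearson_corr(s, confirmed) > threshold}

# Toy example: one strongly correlated series, one anti-correlated series.
confirmed = [1, 2, 3, 5]
trends = {"covid 19 cases": [2, 4, 6, 10], "unrelated": [4, 3, 2, 1]}
selected = select_features(trends, confirmed)
```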
\begin{table*}[htbp]
\centering
\caption{Correlation between Google trends of search queries and confirmed cases worldwide.}
\begin{tabular}{lc|c|rr|r}
\toprule
& \textbf{Correlation} & \textbf{p-value} & & \multicolumn{1}{c|}{\textbf{Correlation}} & \multicolumn{1}{c}{\textbf{p-value}} \\
\midrule
cases of covid19 & 0.8633 & 7.23E-19 & \multicolumn{1}{l}{coronavirus update} & \multicolumn{1}{c|}{0.7796} & \multicolumn{1}{c}{2.17E-13} \\
corona & 0.7789 & 2.34E-13 & \multicolumn{1}{l}{covid} & \multicolumn{1}{c|}{0.8650} & \multicolumn{1}{c}{5.14E-19} \\
coronavirus & 0.7408 & 1.32E-11 & \multicolumn{1}{l}{covid 19} & \multicolumn{1}{c|}{0.8627} & \multicolumn{1}{c}{8.12E-19} \\
coronavirus cases & 0.8196 & 1.18E-15 & \multicolumn{1}{l}{covid 19 cases} & \multicolumn{1}{c|}{0.8687} & \multicolumn{1}{c}{2.41E-19} \\
coronavirus covid19 & 0.8174 & 1.62E-15 & \multicolumn{1}{l}{covid19} & \multicolumn{1}{c|}{0.8506} & \multicolumn{1}{c}{7.91E-18} \\
coronavirus news & 0.7750 & 3.65E-13 & \multicolumn{1}{l}{covid19 cases} & \multicolumn{1}{c|}{0.8584} & \multicolumn{1}{c}{1.87E-18} \\
\sout{coronavirus symptoms} & \sout{0.6664} & \sout{6.18E-09} & & & \\
\bottomrule
\end{tabular}%
\label{tab:correlation-trends-confirmed-cases}%
\end{table*}%
Table \ref{tab:correlation-trends-confirmed-cases} presents the correlation of Google trends for the selected keywords with respect to the changes in new confirmed coronavirus cases. We can see that most of the selected keywords are highly correlated. Only the keyword \textit{coronavirus symptoms} shows a lower correlation coefficient. Therefore, we decided to drop the Google trends for this keyword from the prediction models.
\vspace{2mm}
\subsection{Regression Model}
We study typical regression models from traditional approaches like the linear model, and statistical model such as negative binomial, to the most recent approach which is the deep neural network model.
\vspace{3mm}
\subsubsection{Multiple Linear Regression Model}
The most straightforward prediction model is the multiple linear regression model. Multiple linear regression attempts to model the relationship between two or more explanatory variables and a response variable by fitting a linear equation to the observed data. Essentially, it can be considered an extension of ordinary least-squares regression that involves more than one explanatory variable. Suppose there are $p$ distinct explanatory variables; then the multiple linear regression model can be expressed as
\begin{equation}
Y=\beta_0+\beta_1X_{1}+\beta_2X_{2}+...+\beta_pX_{p}+\epsilon
\end{equation}, where $X_i$ is the $i$th explanatory variable, and $\beta_i$ measures the association between $X_i$ and the response $Y$. As in the simple linear regression model, the parameters $\beta_0, \beta_1, ..., \beta_p$ are the estimators that minimize the residual sum of squares, RSS. The multiple regression model is based on the following assumptions: 1) there is a linear relationship between the dependent variable and the independent variables; 2) the independent variables are not too highly correlated with each other; 3) the $y_i$ observations are selected independently and randomly from the population; 4) the residuals are normally distributed with a mean of $0$ and variance $\sigma^2$, which is estimated as
\begin{equation}
\sigma^2=\frac{ \sum_{i=1}^{n}{e_i}^2}{n-p-1}
\end{equation}, where $e_i=y_i - \hat{y}_i$ are the residuals.
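As a brief, self-contained sketch (not part of our pipeline; the synthetic data below are invented for illustration), the least-squares estimates and the residual-variance estimator above can be computed as follows:

```python
import numpy as np

# Synthetic illustration only: n observations, p = 2 explanatory variables.
rng = np.random.default_rng(0)
n, p = 100, 2
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, 2.0, -0.5])               # [beta_0, beta_1, beta_2]
y = beta_true[0] + X @ beta_true[1:] + rng.normal(scale=0.1, size=n)

# Design matrix with a leading column of ones for the intercept beta_0.
Xd = np.column_stack([np.ones(n), X])

# Least-squares estimates: minimize the residual sum of squares (RSS).
beta_hat, *_ = np.linalg.lstsq(Xd, y, rcond=None)

# Residual-variance estimate sigma^2 = RSS / (n - p - 1).
resid = y - Xd @ beta_hat
sigma2_hat = float(np.sum(resid ** 2) / (n - p - 1))
```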
\subsubsection{Non-negative Integer Regression Model}
Negative binomial regression is similar to regular multiple regression except that the dependent variable $Y$ is an observed count that follows the negative binomial distribution. Negative binomial regression is a generalization of Poisson regression which loosens the restrictive assumption, made by the Poisson model, that the variance is equal to the mean. The traditional negative binomial regression model is based on the Poisson-gamma mixture distribution. This formulation is popular because it allows the modeling of Poisson heterogeneity using a gamma distribution.
Hilbe~\cite{hilbe2011negative} introduces the negative binomial distribution as:
\begin{equation}
\label{eqaution:bnd_definition}
\begin{aligned} p(y)&=P\left( Y=y|\mu,\alpha\right) \\
&=\frac{\varGamma \left( y+\alpha^{-1} \right) }{\varGamma \left( y+1 \right)\varGamma \left( \alpha^{-1} \right) }\left( \frac{ \alpha^{-1}}{\alpha^{-1}+\mu}\right) ^{\alpha^{-1}}\left( \frac{ \mu}{\alpha^{-1}+\mu}\right) ^y \end{aligned}
\end{equation}, where $\mu$ is the mean incidence rate of $Y$ per unit of exposure, and $\alpha$ is the heterogeneity parameter.
The traditional negative binomial regression model, designated the NB2 model in~\cite{hilbe2011negative}, is
\begin{equation}
\ln\mu=\beta_0+\beta_1x_{1}+\beta_2x_{2}+...+\beta_px_{p}
\end{equation}, where the predictor variables $x_1,x_2,...,x_p$ are given, and the population regression coefficients $\beta_0, \beta_1, \beta_2,..., \beta_p$ are to be estimated. Given a random sample of $n$ subjects, for observed subject $i$, the dependent variable is $y_i$, and the predictor variables are $x_{1i},x_{2i},...,x_{pi}$. We denote $x_i=(x_{1i},x_{2i},...,x_{pi})$ and $\beta=(\beta_0, \beta_1, \beta_2,..., \beta_p)^T$; therefore, Eq.~\ref{eqaution:bnd_definition} for an observation $i$ can be re-written as:
\begin{equation}
\begin{aligned} P\left( Y=y_{i}|\mu_{i},\alpha\right) =\frac{\varGamma \left( y_{i}+\alpha^{-1} \right) }{\varGamma \left( y_{i}+1 \right)\varGamma \left( \alpha^{-1} \right) }\left( \frac{1}{1+\alpha\mu_i}\right) ^{\alpha^{-1}}\left( \frac{ \alpha\mu_{i}}{1+\alpha\mu_i}\right) ^{y_{i}}.\end{aligned}
\end{equation}The parameters $\alpha$ and $\beta$ can be estimated by maximizing the likelihood function:
\begin{equation}
\begin{aligned} L(\alpha,\beta) &= \prod_{i=1}^n p(y_i) \\
&= \prod_{i=1}^n \frac{\varGamma \left( y_{i}+\alpha^{-1} \right) }{\varGamma \left( y_{i}+1 \right)\varGamma \left( \alpha^{-1} \right) }\left( \frac{1}{1+\alpha\mu_i}\right) ^{\alpha^{-1}}\left( \frac{ \alpha\mu_{i}}{1+\alpha\mu_i}\right) ^{y_{i}}.\end{aligned}
\end{equation}
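As a sanity check of the NB2 probability mass function above (a standalone sketch with invented values of $\mu$ and $\alpha$), one can evaluate it via log-gamma functions and verify that it sums to one, has mean $\mu$, and has variance $\mu+\alpha\mu^2$:

```python
import math

def nb2_pmf(y, mu, alpha):
    """NB2 pmf: Gamma(y + 1/a) / (Gamma(y + 1) * Gamma(1/a))
    * (1/a / (1/a + mu))**(1/a) * (mu / (1/a + mu))**y, evaluated in log space."""
    inv = 1.0 / alpha
    logp = (math.lgamma(y + inv) - math.lgamma(y + 1) - math.lgamma(inv)
            + inv * math.log(inv / (inv + mu)) + y * math.log(mu / (inv + mu)))
    return math.exp(logp)

mu, alpha = 5.0, 0.7                                   # invented parameter values
support = range(2000)                                  # effectively the whole support
total = sum(nb2_pmf(y, mu, alpha) for y in support)    # should be ~1
mean = sum(y * nb2_pmf(y, mu, alpha) for y in support)            # ~mu
var = sum((y - mean) ** 2 * nb2_pmf(y, mu, alpha) for y in support)
# var ~ mu + alpha * mu**2: the overdispersion the Poisson model cannot capture
```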
\subsubsection{Deep Neural Network Model}
Predicting the number of confirmed cases is extremely challenging, since numerous known and unknown factors affect the spread of the pandemic, such as traffic, population density, and the level of public concern. However, accessing this information is not easy and may incur additional cost. In this paper, we explore cutting-edge machine learning algorithms to predict the number of confirmed patients from Google trends data. Google trends data indicate how concerned people are about specific topics, and provide an excellent opportunity to study the disease severity locally and globally.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{figures/dnn-network.pdf}
\caption{Deep neural network for confirmed cases prediction. }
\label{fig:network}
\end{figure}
Since containing the pandemic relies on social distancing to prevent the spread, the temporal factor has limited influence. Therefore, instead of using a temporal model, we decided to use a one-dimensional convolutional neural network as the core component of our prediction model. Figure \ref{fig:network} presents the overall architecture of the deep neural network prediction model. In particular, there are three consecutive 1D convolutional layers, a flatten layer, a dropout layer, and a dense layer. The first convolutional layer has a filter size of $16$, a kernel size of $2$, a stride of $3$, and a dilation rate of $1$. The second and third convolutional layers use a stride of $1$ with the same filter size of $16$ and kernel size of $2$, while their dilation rates are $2$ and $4$, respectively. These layers have different dilation rates in order to help the network capture more contextual information in the feature map. Lastly, we append a dropout layer with a rate of $5\%$ before producing the output to avoid overfitting.
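The layer configuration above is specific to our experiments; as a toy illustration of how a dilation rate spreads out the taps of a 1D convolution (and hence enlarges its receptive field), the following minimal sketch uses invented inputs and a kernel of size $2$, as in our layers:

```python
import numpy as np

def conv1d_dilated(x, w, dilation=1, stride=1):
    """'Valid' 1D cross-correlation: out[t] = sum_j w[j] * x[t*stride + j*dilation].
    A kernel of size k with dilation d covers (k - 1) * d + 1 input samples."""
    k = len(w)
    span = (k - 1) * dilation + 1
    out_len = (len(x) - span) // stride + 1
    return np.array([sum(w[j] * x[t * stride + j * dilation] for j in range(k))
                     for t in range(out_len)])

x = np.arange(10, dtype=float)                # toy input signal
w = np.array([1.0, 1.0])                      # kernel size 2
y1 = conv1d_dilated(x, w, dilation=1)         # adjacent samples: x[t] + x[t+1]
y4 = conv1d_dilated(x, w, dilation=4)         # samples 4 apart: x[t] + x[t+4]
```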
\section{Experiments and Results}
We start by explaining the datasets. Next, we perform an analysis of Google search trends, related queries, and the categories of the related topics with respect to new confirmed coronavirus cases. Finally, we show the results of forecasting the number of confirmed cases: there is a strong relationship between Google trends (GT) and the confirmed cases, which helps to improve the performance of traditional forecasting algorithms as well as our proposed deep neural network.
\vspace{3mm}
\subsection{Datasets and Data Collection Procedure}
We crawl the Google Trends API to retrieve daily data from Jan 20, 2020 to March 23, 2020. The collection is done in two manners: a query for the data in a specific time range and a query for each day. Both of these types produce trends of the search terms on a daily scale. However, the query with a time range produces aggregated data for related queries and related topics. On the other hand, the daily query generates daily information about different related queries and related topics. Hence, it helps to see the evolution of such information through time and space.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{figures/eps/queries.pdf}
\caption{Selection of trending search queries and its expansion. The thickness of a connection represents the weight of the two terms.}
\label{fig:trending-search-queries}
\end{figure}
Regarding the selection of search terms to derive trends, the most obvious terms about coronavirus are used: \textit{coronavirus, covid19,} and \textit{covid19 cases}. In order to enrich the feature sets, we collect the queries related to the defined terms and extract the top five highest-weighted related queries (Google Trends uses its own algorithm to rank the related queries). These queries become the new search terms used to pull their trends and other information from Google Trends. If a term has already appeared in previous queries, it is ignored. In the end, our dataset is composed of the trends, related queries (top and rising related queries), related topics (top and rising related topics), and associated regions from $12$ trending search terms. Figure \ref{fig:trending-search-queries} demonstrates the three selected queries and their expansion to other related queries. Each connection represents the comparison weight between the two terms based on search trends. For the forecasting algorithms, we randomly split the data with a ratio of $85\%$ for training and $15\%$ for testing. The deep neural network models use a dropout rate of $5\%$ to avoid overfitting. The same strategy is applied for country-based prediction.
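The random split can be sketched as follows (a minimal illustration; the array size below is invented, and the actual features are those described above):

```python
import numpy as np

def train_test_split_indices(n, train_ratio=0.85, seed=0):
    """Randomly partition indices 0..n-1 into train/test sets by the given ratio."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)
    cut = int(round(train_ratio * n))
    return perm[:cut], perm[cut:]

train_idx, test_idx = train_test_split_indices(64)   # 64 samples is invented here
```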
\vspace{5mm}
\subsection{Overview of Worldwide Coronavirus Cases}
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{figures/eps/confirmed_cases.pdf}
\caption{Cumulative confirmed cases of top 10 countries worldwide.
}
\label{fig:top-10-confirmed-cases}
\end{figure}
Figure \ref{fig:top-10-confirmed-cases} presents the top $10$ countries with the largest numbers of cumulative confirmed cases worldwide. The common trend among these countries is the exponential rise in the days after the first confirmed cases. China had the earliest confirmed cases and remained the country with the most confirmed cases until March 22, 2020. Italy, the United States, and France follow.
When the number of confirmed cases in China flattened around February 24, cases started to increase in Europe and the United States.
Confirmed cases spiked dramatically after March 9; afterwards, Italy ranked second in the world after China.
Starting in the second week of March, cases in the U.S. grew exponentially in just two weeks, making it the country with the third-largest number of confirmed cases.
Spain and Germany followed similar trajectories of confirmed cases to rank fourth and fifth, respectively. Iran ranked sixth even though its outbreak started earlier than in most countries.
South Korea and Switzerland ranked similarly, despite the slow start in Switzerland.
The fewest confirmed cases were in the UK, as it was the last of the top countries to be affected by COVID-19. During the period from the end of the fourth week of February until the third week of March, we can see that China had a different trajectory of confirmed cases than other countries. These countries began to record COVID-19 cases after China's new cases slowed down.
\vspace{2mm}
\subsection{Evolution of Internet Search Queries}
We studied the daily changes in the keywords, as shown in Figure~\ref{fig:daily-change}. We use blue bars if the change is greater than zero; otherwise, red bars. To clarify the figure, darker colors are used for even days and lighter colors for odd days. COVID-19 was not known as the name of the coronavirus to many people at the start of the coronavirus outbreak. There are no trends for COVID-19 before 1/24/2020. Though WHO announced on 2/11/2020 that the official name for the 2019 novel coronavirus would be COVID-19~\footnote{https://www.cdc.gov/coronavirus/2019-ncov/faq.html}, the trends adopted this official name only after 14 days. This means that many people were not aware of this term before it was officially announced by the international health organization.
We can see that the COVID-19 terms fluctuate less than the corona terms.
The keyword "cases of covid19" started in the fourth week of January with high interest in the first couple of hours, then dropped for most of that day. From the next day (1/25) until the end of the month, there was an increasing trend in the usage of this keyword. Similar behaviors were observed for the keywords "coronavirus covid19" and "Covid19 cases".
In general, searches in Google for the keywords "Covid 19", "covid 19 cases", and "covid19" increased exponentially during the last week of January.
Moreover, most of the keywords without "covid19" (i.e., "corona", "coronavirus", "coronavirus cases", "Coronavirus news", "Coronavirus update") started on the twenty-second of January and decreased over the next two days. After that, they increased most of the time until the end of the month. The keyword "covid" fluctuated from January twenty-second until the middle of the next day. In general, it continued rising until the end of this period. Figure~\ref{fig:general_related_queries_wordcloud} illustrates the word cloud of the related queries for all days in this study. It shows words in different sizes that represent their frequencies. We can see that "virus" was the most frequent word during this period. The terms "coronavirus", "china", "news", "update", and "cases" were also frequent. These terms are parts of the related queries shown in Figure~\ref{fig:daily-change}.
\begin{figure*}[h]
\centering
\includegraphics[width=0.95\linewidth, height=20cm]{figures/eps/daily-change.pdf}
\caption{Daily interest change of each keyword.
}
\label{fig:daily-change}
\end{figure*}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth]{figures/general_related_queries_wordcloud.jpg}
\caption{Wordcloud visualization of overall terms in the related queries.}
\label{fig:general_related_queries_wordcloud}
\end{figure}
\vspace{3mm}
\subsection{Evolution of Internet Search Related Queries}
Word co-occurrences for the three keywords are shown in Figure~\ref{fig:cooccurrence-queries}. The keywords in the queries are: coronavirus, covid19, and covid19 cases. Coronavirus was most related to "cases", "uk", "symptoms", "news", and "update", as in Figure~\ref{fig:cooccurrence-queries}/a. People searched Google with these terms to understand and learn more about the coronavirus. The word co-occurrence for covid19 is shown in Figure~\ref{fig:cooccurrence-queries}/b. It is clear that "cases", "virus", and "coronavirus" were the most correlated with this keyword. Also, the term "covid" was related to the number "19". Moreover, the composite keyword (covid19 cases) co-occurred mostly with its component terms (covid19, cases) and the word "of", as shown in Figure~\ref{fig:cooccurrence-queries}/c. The integer "19" was correlated with the term "cases".
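The co-occurrence counts can be sketched as follows (the query strings below are invented examples, not our dataset; each unordered pair of words within a query is counted):

```python
from collections import Counter
from itertools import combinations

# Invented example queries -- NOT the collected dataset.
queries = ["coronavirus cases", "coronavirus update", "coronavirus news uk",
           "covid19 cases", "cases of covid19"]

pairs = Counter()
for q in queries:
    words = sorted(set(q.split()))           # unique words, canonical order
    pairs.update(combinations(words, 2))     # count each unordered word pair

top = pairs.most_common(1)[0]                # most frequent co-occurring pair
```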
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{figures/eps/cooccurrence-queries.pdf}
\caption{Word co-occurrence for the three keywords: (a) coronavirus, (b) covid19, and (c) covid19 cases.}
\label{fig:cooccurrence-queries}
\end{figure}
\vspace{2mm}
\subsection{Evolution of Internet Search Related Topics}
Figure~\ref{fig:top-related-topics} shows the top 20 categories of topics over time, from the second day of January until the twenty-second of March.
These entities (categories) are recognized (classified) by Google. In this paper, we use the categories related to coronavirus or COVID-19.
People searched for the most popular category, "Virus", on most days of this period. The second most important searched category was "Infectious agent", especially from the middle of February until the third week of March. Interest in the category "Country in North America" increased in March as more confirmed cases were discovered on the North American continent.
The general category "Topic", named by Google, was frequent from the beginning of the third week of February until the end of the period.
The category "Disease" occurred more from the end of February until the end of the second week of March.
Even though the Asia categories ("Country in East Asia" and "Country in Asia") trended after February 11, the East Asia category was also frequent from the fourth week of January.
Categories related to cities in China were frequent from the start of the period of this study until the middle of the ninth week of 2020, when the country's confirmed-case curve started to flatten. Categories for places in California, Oceania, and Italy trended less during most of the period of this research, while the category "US State" was the most frequent during the first week of March.
An article in CBS News on the first of March\footnote{https://www.cbsnews.com/news/cornavirus-corona-beer-they-have-nothing-to-do-with-each-other/} reported on a survey which found that 38\% of beer-drinking Americans would not order Corona beer. 16\% of them thought that there might be a relation between drinking Corona beer and COVID-19. This impacted the Google trends about beer. We can see that this trend was more frequent seven days after the article.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figures/eps/related_topics.pdf}
\caption{A heatmap visualization for the categories of related topics. The darker the color, the higher the mentioned frequency.}
\label{fig:top-related-topics}
\end{figure*}
\subsection{Confirmed Cases Prediction Performance}
We use root mean square error (RMSE) as the main metric to compare the performance of the prediction models.
RMSE is a common metric to measure the differences between the actual value and the value predicted by the forecasting model.
Furthermore, other metrics like mean absolute error (MAE), mean absolute percentage error (MAPE), and r-square (R2) are also reported to relatively understand other aspects of the comparison.
MAE is the average of the absolute errors, i.e., the expected value of the absolute-error loss.
MAPE is a regression metric used to measure the quality of a forecasting model, where smaller is better. In other words, it is the average relative error of the model.
R2 is an indication of how well the examples in a test dataset are predicted by a forecasting model.
The formulas of these evaluation metrics are: $RMSE = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y_i})^2} $, $MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y_i}|$, $MAPE = \frac{1}{n} \sum_{i=1}^{n} \frac{|y_i - \hat{y_i}|}{|y_i|}$,
$R2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y_i})^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}$,
where $n$ is the size of the test dataset, $y_i$ is the actual number of confirmed cases, $\hat{y_i}$ is the predicted number of confirmed cases by a forecasting model based on the historical time series of trends and confirmed cases, and $\bar{y}$ represents the mean value $\forall{y_i}$, $i \in \{1, \dots, n\}$.
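The four metrics can be implemented directly from their standard definitions (a minimal sketch with invented values; R2 is computed in its usual squared-error form):

```python
import numpy as np

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mae(y, yhat):
    return float(np.mean(np.abs(y - yhat)))

def mape(y, yhat):
    return float(np.mean(np.abs(y - yhat) / np.abs(y)))

def r2(y, yhat):
    # 1 - SS_res / SS_tot, with squared deviations in both sums
    return float(1.0 - np.sum((y - yhat) ** 2) / np.sum((y - np.mean(y)) ** 2))

# Invented values for illustration only.
y = np.array([100.0, 200.0, 400.0])
yhat = np.array([110.0, 190.0, 380.0])
```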
\begin{table*}[htbp]
\setlength{\tabcolsep}{20pt}
\centering
\caption{Confirmed cases forecast performance by learning algorithms.}
\begin{tabular}{l|r|r|r|r}
\toprule
& \multicolumn{1}{c|}{RMSE} & \multicolumn{1}{c|}{MAE} & \multicolumn{1}{c|}{MAPE} & \multicolumn{1}{c}{R2} \\
\midrule
Linear & 1,685 & 847 & \textbf{0.39} & 0.76 \\
Linear + GT & 1,683 & 1,475 & 1.43 & 0.38 \\
Negative Binomial & 1,645 & 1,194 & 1.74 & 0.77 \\
Negative Binomial + GT & 1,494 & 1,373 & 1.11 & 0.51 \\
Deep Neural Network & 1,595 & 1,317 & 0.48 & 0.76 \\
Deep Neural Network + GT & \textbf{807} & \textbf{569} & 0.63 & \textbf{0.82} \\
\bottomrule
\end{tabular}%
\label{tab:prediction-performance}%
\end{table*}%
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{figures/actual-vs-prediction.pdf}
\caption{Actual versus predicted confirmed cases using the Deep Neural Network model. The X axis represents the number of days since January 20, 2020. The Y axis represents the number of confirmed cases.
}
\label{fig:actual-vs-prediction}
\end{figure}
\textbf{World-wide prediction performance:}
Table \ref{tab:prediction-performance} presents the prediction performance of three types of learning algorithms (namely Linear, Negative Binomial, and Deep Neural Network) with and without the Google trends feature in the input (in the latter case, we use the confirmed cases of the previous day as the feature for the prediction model). The table shows that adding the Google trends feature improves the Linear, Negative Binomial, and Deep Neural Network models over their counterparts without it. The linear model has a small improvement, with RMSE decreasing from $1,685$ to $1,683$. The Negative Binomial model shows a larger improvement of RMSE, from $1,645$ to $1,494$. The Deep Neural Network model shows the best improvement, with RMSE dropping from $1,595$ to $807$, equivalent to a $49\%$ enhancement. Among all the variants, the Deep Neural Network model with Google trends outperforms the other models.
Figure \ref{fig:actual-vs-prediction} visualizes the prediction of new cases compared with the actual values using the Deep Neural Network model. We can see that the Deep Neural Network model is able to capture the pattern and shows a smaller gap between the actual and predicted lines compared with the version that does not use the Google trends feature. Across the three models, the Deep Neural Network using the Google trends feature shows the best performance.
\textbf{Country-based prediction performance:} We selected countries with an increasing number of new confirmed coronavirus cases, such as Italy, France, and the United States, to compare the model performance with and without Google trends data. China is not reported in this case because none of the queries correlate with the number of confirmed cases in China. This could be due to the fact that users in China use their own search systems to search the Internet instead of Google. Additionally, for the model that does not use Google trends as features, we use the confirmed cases of the previous day to predict the next day. Figure \ref{fig:countries} presents the performance of our proposed deep neural network model with and without Google Trends as features.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{figures/countries.pdf}
\caption{Deep neural network performance comparison across several countries, namely France, Italy, the United States, and Worldwide. We use one previous day as the feature when no Google trends data are used for prediction.
}
\label{fig:countries}
\end{figure}
From Figure \ref{fig:countries}, we see that using Google Trends significantly enhances the deep neural network model. The RMSEs of the model using Google Trends for France, Italy, the United States, and Worldwide are $643$, $657$, $520$, and $807$, while its RMSEs when not using Google Trends are $1,346$, $1,432$, $1,310$, and $1,546$, respectively. The model improves approximately two times on the France, Italy, and Worldwide datasets and about two and a half times on the United States dataset. The enhancement of the model performance across countries confirms that Google Trends can be a promising, inexpensive source of information to predict new confirmed coronavirus cases.
\section{Conclusion}
In this paper, we present a spatio-temporal view of the relationship between Google search trends and the confirmed coronavirus cases. The framework supports visualization and analytics on the evolution of search trends, search queries, and related queries globally. Additionally, we explore the capability of Google search trends in predicting the number of confirmed cases for different types of learning models, namely Linear, Negative Binomial, and Deep Neural Network. The results show that Google search trends enhance the performance of the three forecasting models, with the non-linear learning model, the Deep Neural Network, achieving the best performance. Employing the Google search trends features improves the performance of the deep learning approach by more than 49\%. Thus, this data source is not only easy to access, but is also valuable for improving the performance of the employed forecasting models.
\bibliographystyle{frontiersinSCNS_ENG_HUMS}
\section{Introduction}
For nearly three decades, analysis on metric measure spaces has been actively studied, e.g., \cite{BB11,BBS,BBS03,H03,HP,H,HK,HKST}. Given a metric measure space $(Z, d_Z, \nu_Z)$, many function spaces defined on such spaces have been well established, e.g., Sobolev spaces, Besov spaces and Triebel-Lizorkin spaces (see \cite{N00,H96,GKS10,BP03,KYZ,KYZ10,GKZ13} and the references therein).
Given a homeomorphism $f$ between metric spaces $(Z,d_Z)$ and $(W,d_W)$, one natural question is what kind of correspondence between certain function spaces on $(Z,d_Z, \nu_Z)$ and $(W,d_W, \nu_W)$ can be induced by $f$.
The question has been extensively studied for many function spaces when $f$ is a quasiconformal or quasisymmetric mapping, including Sobolev spaces, Besov spaces, Triebel-Lizorkin spaces and other related function spaces (see \cite{BBGS,BoSi, HKM92,Rie,KYZ,HenK,Vo,KXZZ} and the references therein).
In a very recent work by Bj\"{o}rn-Bj\"{o}rn-Gill-Shanmugalingam \cite{BBGS}, the question was studied when the homeomorphism $f$ is a biH\"{o}lder continuous mapping and the underlying spaces are bounded Ahlfors regular spaces $(Z,d_Z, \nu_Z)$ and $(W,d_W, \nu_W)$. It was shown that $f$ induces an embedding between Besov spaces $B^{s}_{p, p}(W)$ and $B^{ s^\prime}_{p, p}(Z)$ for suitable $s$ and $s^\prime$, via composition; see \cite[Proposition 7.2]{BBGS} for the details. Recall that for $\theta_1>0$ and $\theta_2>0$, a homeomorphism $f: (Z, d_Z)\to (W, d_W)$ is called $(\theta_1, \theta_2)$-{\it biH\"{o}lder continuous} if there exists a constant $C\geq1$ such that for all $x, y\in Z$,
\begin{equation}\label{defn-biholder-intro}
C^{-1}d_Z(x, y)^{\theta_1}\leq d_W(f(x), f(y))\leq Cd_Z(x, y)^{\theta_2}.
\end{equation}
In particular, if $\theta_1=\theta_2$, then $f$ is called a {\it snowflake mapping}.
It is interesting to ask what remains of the conclusion of \cite[Proposition 7.2]{BBGS} if the assumption that the underlying metric spaces $(Z, d_Z)$ and $(W, d_W)$ are bounded is removed.
As the first purpose of this paper, we consider this question.
However, the assumption of boundedness on the underlying spaces plays a key role: for a biH\"{o}lder continuous mapping $f: (Z, d_Z)\to (W, d_W)$, if $(Z, d_Z)$ is Ahlfors regular and unbounded, then $f$ must be a snowflake mapping. To avoid such a constraint, we introduce the following class of mappings.
Before the statement of the definition, we make the following conventions: $(1)$ For a subset $A$ of $(Z, d_Z)$, we use ${\operatorname{diam}} A$ to denote the diameter of $A$, that is, ${\operatorname{diam}} A=\sup\{d_Z(z_1, z_2): z_1, z_2\in A\}$. $(2)$ All metric spaces involved in this paper are assumed to contain at least two points. $(3)$ When $(Z, d_Z)$ is unbounded, we take ${\operatorname{diam}} Z=\infty$. Then for any metric space $(Z, d_Z)$, $0<{\operatorname{diam}} Z\leq \infty$.
\begin{defn}\label{LbHC}
Let $\theta_1>0$, $\theta_2>0$ and $0<r<2 {\operatorname{diam}} Z$. A homeomorphism $f: (Z, d_Z)\to (W, d_W)$ is called {\it locally $(\theta_1, \theta_2, r)$-biH\"{o}lder continuous} if every pair of points $x, y\in Z$ satisfies the condition \eqref{defn-biholder-intro} provided that $d_Z(x, y)<r$. Also, the constant $C$ in \eqref{defn-biholder-intro} is called a {\it locally biH\"{o}lder continuity coefficient} of $f$.
\end{defn}
Obviously, every biH\"{o}lder continuous mapping is locally biH\"{o}lder continuous, while the converse is not true (See Example \ref{ex} below). The following are direct consequences of the definitions.
\begin{prop}\label{1-8-8}
$(1)$ If $f$ is $(\theta_1, \theta_2)$-biH\"{o}lder continuous with a biH\"{o}lder continuity coefficient $C_1$, then the inverse $f^{-1}$ of $f$ is $(1/\theta_2, 1/\theta_1)$-biH\"{o}lder continuous with a biH\"{o}lder continuity coefficient $C_2=\max\{C_1^{1/ \theta_1},\; C_1^{1/ \theta_2}\}$.
$(2)$ If $f$ is locally $(\theta_3, \theta_4, r)$-biH\"{o}lder continuous with a locally biH\"{o}lder continuity coefficient $C_3$, then the inverse $f^{-1}$ of $f$ is locally $(1/\theta_4, 1/\theta_3, C_3^{-1}r^{\theta_3})$-biH\"{o}lder continuous with a locally biH\"{o}lder continuity coefficient $C_4=\max\{C_3^{1/ \theta_3},\; C_3^{1/ \theta_4}\}$.
\end{prop}
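For the reader's convenience, we sketch the verification of part $(1)$, which follows by inverting \eqref{defn-biholder-intro}. Writing $u=f(x)$ and $v=f(y)$,

```latex
\begin{align*}
C_1^{-1}\, d_Z(x, y)^{\theta_1} \leq d_W(u, v)
  &\;\Longrightarrow\; d_Z\big(f^{-1}(u), f^{-1}(v)\big)
      \leq C_1^{1/\theta_1}\, d_W(u, v)^{1/\theta_1},\\
d_W(u, v) \leq C_1\, d_Z(x, y)^{\theta_2}
  &\;\Longrightarrow\; d_Z\big(f^{-1}(u), f^{-1}(v)\big)
      \geq C_1^{-1/\theta_2}\, d_W(u, v)^{1/\theta_2}.
\end{align*}
```

Since $C_1\geq 1$, both estimates hold with the single coefficient $C_2=\max\{C_1^{1/\theta_1},\, C_1^{1/\theta_2}\}$, so \eqref{defn-biholder-intro} is satisfied by $f^{-1}$ with exponents $1/\theta_2$ and $1/\theta_1$. Part $(2)$ follows by the same computation restricted to the appropriate scales.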
The following result is our answer to the aforementioned question, which provides us with embeddings between Besov spaces induced by locally biH\"{o}lder continuous mappings.
\begin{thm}\label{thm-1}
Assume that $(Z, d_Z, \nu_Z)$ and $(W, d_W, \nu_W)$ are Ahlfors $Q_Z$-regular and Ahlfors $Q_W$-regular spaces with $Q_Z>0$ and $Q_W>0$, respectively, and let $\theta_1>0$, $\theta_2>0$, $s>0$, $s^\prime>0$ and $p\geq 1$
be constants such that
\begin{equation}\label{s-s-relation}
Q_Z\geq \theta_1 Q_W\;\;\mbox{and}\;\; s^\prime\leq \theta_2 s+\frac{\theta_2 Q_W-Q_Z}{p}.
\end{equation}
Suppose that $f: (Z, d_Z)\rightarrow (W, d_W)$ is a locally $(\theta_1, \theta_2, r)$-biH\"{o}lder continuous mapping with $0<r<2\,{\operatorname{diam}} Z$. Then $f$ induces a canonical bounded embedding $f_{\#}: B^{s}_{p, p}(W)\rightarrow B^{ s^\prime}_{p, p}(Z)$ via composition.
\end{thm}
The terminology appeared in Theorem \ref{thm-1} and in the rest of this section will be introduced in Section \ref{sec-2} unless stated otherwise.
\begin{rem}
Theorem \ref{thm-1} is a generalization of \cite[Proposition 7.2]{BBGS}. This is because when $(Z, d_Z)$ is bounded and $r>{\operatorname{diam}} Z$, Theorem \ref{thm-1} coincides with \cite[Proposition 7.2]{BBGS}. Further, Example \ref{ex} and Remark \ref{sec5}$(ii)$ below show that Theorem \ref{thm-1} is more general than \cite[Proposition 7.2]{BBGS}.
\end{rem}
As we know, a quasisymmetric mapping between bounded uniformly perfect spaces is locally biH\"{o}lder continuous since it follows from \cite[Theorem 3.14]{TV} or \cite[Corollary 11.5]{H} that it is biH\"{o}lder continuous.
Naturally, one will ask if there is any analog for the case when the underlying spaces are unbounded. However, Example \ref{ex-add} below shows that not every quasisymmetric mapping between unbounded uniformly perfect spaces is locally biH\"{o}lder continuous.
As the second purpose of this paper, we seek a characterization of when a quasisymmetric mapping is locally biH\"{o}lder continuous.
Before the statement of our result, let us introduce the following concept.
\begin{defn}
For $0<r< 2 {\operatorname{diam}} Z$, a homeomorphism $f: (Z, d_Z)\to (W, d_W)$ is called $r$-{\it uniformly bounded} if there exist constants $a$ and $b$ with $0<a<b$ such that for all $x\in Z$,
$$a \leq {\operatorname{diam}} f\big(B(x, r)\big)\leq b,$$ where $B(x, r)=\{y\in Z:\; d_Z(y, x)<r\}$, i.e., the open ball in $Z$ with center $x$ and radius $r$.
\end{defn}
The following property shows that in the definition of $r$-uniform boundedness, the exact value of the parameter $r$ is not important for quasisymmetric mappings.
\begin{prop}\label{uniformly-bounded}
Suppose that $(Z, d_Z)$ is a $\kappa$-uniformly perfect space with $\kappa>1$ and $f: (Z, d_Z)\to (W, d_W)$ is $\eta$-quasisymmetric. If $f$ is $r$-uniformly bounded for an $r$ with $0<r< 2 {\operatorname{diam}} Z$, then $f$ is $s$-uniformly bounded for any $s\in (0,2{\operatorname{diam}} Z)$.
\end{prop}
Note that quasisymmetry in a uniformly perfect space implies power quasisymmetry (See Theorem $A$ below).
Based on the uniform boundedness, we obtain the following geometric characterization for a (power) quasisymmetric mapping between unbounded uniformly perfect spaces to be locally biH\"{o}lder continuous.
\begin{thm}\label{thm-2}
Suppose that $(Z, d_Z)$ is $\kappa$-uniformly perfect with $\kappa>1$, and $f: (Z, d_Z)\to (W, d_W)$ is a $(\theta, \lambda)$-power quasisymmetric mapping with $\theta\geq 1$ and $\lambda\geq 1$. Then for any $r\in (0,2 {\operatorname{diam}} Z)$, the following are quantitatively equivalent: \begin{enumerate}
\item[$(1)$]
$f$ is locally $(\theta, 1/\theta, r)$-biH\"{o}lder continuous.
\item[$(2)$]
$f$ is $r$-uniformly bounded.
\end{enumerate}
\end{thm}
Here, for two conditions, we say that Condition $\Phi$ quantitatively implies Condition $\Psi$ if Condition $\Phi$ implies Condition $\Psi$ and the data of Condition $\Psi$ depends only on that of Condition $\Phi$. If Condition $\Psi$ also quantitatively implies Condition $\Phi$, then we say that Condition $\Phi$ is equivalent to Condition $\Psi$, quantitatively.
\begin{rem}
In Theorem \ref{thm-2}, the assumption that $(Z, d_Z)$ is uniformly perfect cannot be removed. For example, the identity mapping of integers ${\operatorname{id}}: \mathbb Z\rightarrow \mathbb Z$ with the standard Euclidean distance is $1$-biLipschitz, and thus, it is power quasisymmetric, and locally $(\theta_1,\theta_2,r)$-biH\"{o}lder continuous for any $\theta_1>0$, $\theta_2>0$ and $0<r<1$.
However, it is not $s$-uniformly bounded for any $s\in (0,1)$.
\end{rem}
Throughout this paper, the letter $C$ (sometimes with a subscript) denotes a positive constant that depends only on the given parameters of the spaces and may change at different occurrences. The notation $A\lesssim B$ (resp. $A \gtrsim B$) means that there is a constant $C_1\geq 1$ (resp. $C_2\geq 1$) such that $A \leq C_1 \cdot B$ (resp. $A \geq C_2 \cdot B).$ We also call $C_1$ and $C_2$ comparison coefficients of $A$ and $B$. In particular, $C_1$ (resp. $C_2$) is called an upper comparison coefficient (resp. a lower comparison coefficient) for $A$ and $B$. If $A\lesssim B$ and $A \gtrsim B$, then we write $A\approx B$.
The paper is organized as follows. In Section \ref{sec-2}, some basic concepts and known results will be introduced. Section \ref{sec-3} will be devoted to the proof of Theorem \ref{thm-1}. In Section \ref{sec-4}, the proofs of Proposition \ref{uniformly-bounded} and Theorem \ref{thm-2}
will be presented, and in Section \ref{sec-5}, two examples will be constructed.
\section{Basic terminologies}\label{sec-2}
In this section, we introduce some necessary notions and notations.
A metric space $(Z, d_Z)$ is called $\kappa$-{\it uniformly perfect} with $\kappa> 1$ if for each $x\in Z$ and for each $r>0$, the set $B(x, r)\setminus B(x, r/\kappa)$ is nonempty whenever the set $Z\setminus B(x, r)$ is nonempty. Sometimes, $(Z, d_Z)$ is called {\it uniformly perfect} if $Z$ is $\kappa$-{\it uniformly perfect} for some $\kappa > 1$.
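As a quick illustration (our addition, not part of the original text), connected spaces with at least two points satisfy this condition for every $\kappa>1$, while the integers, as in Remark above, fail it:

```latex
% Sketch: if (Z,d_Z) is connected with at least two points and
% B(x,r)\setminus B(x,r/\kappa)=\emptyset, then B(x,r/\kappa)=B(x,r);
% since the closed ball \{z: d_Z(x,z)\le r/\kappa\} lies in B(x,r),
% the set B(x,r/\kappa) is then a nonempty clopen set whose complement
% Z\setminus B(x,r) is nonempty, contradicting connectedness.
%
% By contrast, (\mathbb Z, |\cdot|) is not uniformly perfect: take
% x=0 and r=1/2. Then \mathbb Z\setminus B(0,1/2)\neq\emptyset, while
%   B(0,1/2)\setminus B(0,1/(2\kappa)) = \{0\}\setminus\{0\} = \emptyset
% for every \kappa>1.
```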
\begin{lem}\label{1-8-4}
Suppose that $(Z, d_Z)$ is $\kappa$-uniformly perfect with $\kappa>1$, and let $x\in Z$. Then for any $r\in(0,2{\operatorname{diam}} Z)$, there exists $z\in Z$ such that $$\frac{r}{\mu}\leq d_Z(x, z)<r,$$ where $\mu=\max\{8, \kappa\}$.
\end{lem}
\begin{pf} Let $x\in Z$.
Since $0<r<2{\operatorname{diam}} Z$, we see that there exists $y\in Z$ such that $d_Z(x, y)>r/8$. If $d_Z(y, x)< r$, by letting $z=y$, we see that the lemma is true. If $d_Z(y, x)\geq r$, then the uniform perfectness of $(Z,d_Z)$ implies that there is $y^\prime\in Z$ such that $r/\kappa\leq d_Z(x, y^\prime)<r$. By letting $z=y^\prime$, we know that the lemma holds true as well.
\end{pf}
A homeomorphism $f: (Z, d_Z)\to (W, d_W)$ is called {\it $\eta$-quasisymmetric} if there exists a self-homeomorphism $\eta$ of $[0, +\infty)$ such that for all triples of points $x, y, z\in Z$,
\begin{equation}\label{eta}
\frac{d_W(f(x), f(z))}{d_W(f(y), f(z))}\leq \eta\left(\frac{d_Z(x, z)}{d_Z(y, z)}\right).
\end{equation}
In particular, if $f$ is $\eta_{\lambda, \theta}$-quasisymmetric for some constants $\theta\geq 1$ and $\lambda\geq 1$, where
\begin{equation*}\label{eq-1.1}
\eta_{\lambda, \theta}(t)=
\left\{\begin{array}{cl}
\lambda t^{\frac{1}{\theta}}& \text{for} \;\; 0<t<1, \\
\lambda t^{\theta}& \text{for} \;\; t\geq 1,
\end{array}\right.
\end{equation*}
then $f$ is called a $(\theta, \lambda)$-{\it power quasisymmetric mapping}. Here, the notation $\eta_{\lambda, \theta}$ means that the control function $\eta$ depends only on the given parameters $\theta$ and $\lambda$.
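As a simple illustration (our addition, not part of the original text), snowflake deformations of a metric yield power quasisymmetric maps:

```latex
% Example (sketch): for 0<\alpha<1, the identity map
% f\colon (Z,d)\to (Z,d^{\alpha}) satisfies, for all triples x,y,z,
\frac{d^{\alpha}(x, z)}{d^{\alpha}(y, z)}
  =\left(\frac{d(x, z)}{d(y, z)}\right)^{\alpha}
  \leq \eta_{1,\, 1/\alpha}\left(\frac{d(x, z)}{d(y, z)}\right),
% since t^{\alpha}=1\cdot t^{\alpha} for 0<t<1 and
% t^{\alpha}\leq t^{1/\alpha} for t\geq 1.
% Hence f is (1/\alpha, 1)-power quasisymmetric.
```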
\begin{Thm}[{\cite[Theorem 11.3]{H}}]\label{Thm-A}
An $\eta$-quasisymmetric mapping of a uniformly perfect space is $(\theta, \lambda)$-power quasisymmetric, quantitatively.
\end{Thm}
In the following, we always use the notation $(Z, d_Z, \nu_Z)$ to denote a metric space $(Z, d_Z)$ admitting a Borel regular measure $\nu_Z$. A metric measure space $(Z, d_Z, \nu_Z)$ is called
\begin{enumerate}
\item[$(1)$]
{\it doubling} if there exists a constant $C\geq 1$ such that for all $x\in Z$ and $0<r<2{\operatorname{diam}} Z$,
$$0< \nu_Z(B(x, 2r))\leq C \nu_Z\big(B(x, r)\big)<\infty.$$
\item[$(2)$]
{\it $Q_Z$-Ahlfors regular} with $Q_Z>0$ if there exists a constant $C\geq 1$ such that for all $z\in Z$ and $0<r<2 {\operatorname{diam}} Z$,
$$
C^{-1} r^{Q_Z}\leq \nu_Z\big(B(z, r)\big) \leq C r^{Q_Z}.
$$
\end{enumerate}
It is known that every Ahlfors regular space is doubling and uniformly perfect (cf. \cite[Section 11]{H}).
For given $1\leq p<\infty$, $s>0$ and a function $u: Z\to\mathbb R$, the {\it homogeneous Besov norm} on the metric measure space $(Z, d_Z, \nu_Z)$ is defined by
\begin{equation}\label{eq-besov}
\|u\|_{\dot B_{p, p}^{s}(Z)}=\left(\int_Z \int_Z\frac{|u(x)-u(y)|^p}{d_Z(x, y)^{sp}}\frac{d\nu_Z(x)\,d\nu_Z(y)}{\nu_Z(B(x, d_Z(x, y)))}\right)^{1/p}.
\end{equation}
We write the {\it homogeneous Besov space} $\dot B_{p, p}^{s}(Z)$ for the subspace of $L^p_{\text{loc}}(Z)$ consisting of all functions $u$ such that $$\|u\|_{\dot B_{p, p}^{s}(Z)}<\infty.$$
We note that, properly speaking, \eqref{eq-besov} is actually a seminorm on $L^p_{\text{loc}}(Z)$ since any constant function has Besov norm 0.
We define the {\it Besov space} $B_{p, p}^{s}(Z)$ to be the normed space of all measurable functions $u\in L^{p}(Z)$ such that
$$
\|u\|_{B_{p, p}^{s}(Z)}=\|u\|_{L^{p}(Z)}+\|u\|_{\dot B_{p,p}^{s}(Z)}<\infty.
$$
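For orientation (our addition, not part of the original text), on Euclidean space the definition above recovers the classical fractional seminorm: with $Z=\mathbb R^n$, $d_Z$ the Euclidean metric and $\nu_Z$ the Lebesgue measure, one has $\nu_Z(B(x, d_Z(x, y)))\approx |x-y|^n$, so that

```latex
\|u\|_{\dot B_{p, p}^{s}(\mathbb R^n)}
  \approx \left(\int_{\mathbb R^n}\int_{\mathbb R^n}
  \frac{|u(x)-u(y)|^p}{|x-y|^{n+sp}}\,dx\,dy\right)^{1/p},
```

which is the usual Besov (Sobolev--Slobodeckij) seminorm on $\mathbb R^n$.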
\section{Locally biH\"{o}lder continuous mappings and their induced embeddings}\label{sec-3}
The aim of this section is to prove Theorem \ref{thm-1}. Before the proof, we need some preparation which consists of the following two auxiliary lemmas.
\begin{lem}\label{lem-besov-norm}
Suppose that $(Z, d_Z, \nu_Z)$ is Ahlfors $Q_Z$-regular with $Q_Z>0$. Let $n_0\in \mathbb Z$, and for each $n\in \mathbb Z$, let $t_n=C\sigma^n$, where $C>0$ and $0<\sigma<1$.
For any $s>0$ and $p\geq 1$, if $u\in L^{p}(Z)$, then
$$\|u\|^p_{B^s_{p, p}(Z)}\approx \|u\|^p_{L^p(Z)}+\sum_{n=n_0}^{+\infty} t_n^{-sp}\int_Z\vint_{B(x, t_n)}|u(x)-u(y)|^p d\nu_Z(y)d\nu_Z(x),$$
where the comparison coefficients depend on $n_0$.
\end{lem}
\begin{proof}
The following estimate easily follows from similar arguments as in the proof of \cite[Theorem 5.2]{GKS10} or \cite[Lemma 5.4]{BBGS}:
\begin{eqnarray}\label{1-3-1}
\|u\|^p_{\dot B^s_{p, p}(Z)} \approx \sum_{n=-\infty}^{+\infty} t_n^{-sp}\int_Z\vint_{B(x, t_n)}|u(x)-u(y)|^p d\nu_Z(y)d\nu_Z(x).
\end{eqnarray}
Let $n_0\in \mathbb Z$. Then the estimate \eqref{1-3-1} shows that to prove the estimate in the lemma, it suffices to show that
\begin{equation*}\label{norm-equiv}
\sum_{n=-\infty}^{n_0} t_n^{-sp}\int_Z\vint_{B(x, t_n)}|u(x)-u(y)|^p d\nu_Z(y)d\nu_Z(x)\lesssim \|u\|^p_{L^p(Z)}.
\end{equation*}
Note that
\begin{align*}
\int_Z\vint_{B(x, t_n)}|u(x)-u(y)|^p d\nu_Z(y)d\nu_Z(x)&\lesssim \int_Z\vint_{B(x, t_n)} (|u(x)|^p+|u(y)|^p)d\nu_Z(y)d\nu_Z(x)\\
&=\|u\|^p_{L^p(Z)}+\int_Z\vint_{B(x, t_n)} |u(y)|^pd\nu_Z(y)d\nu_Z(x).
\end{align*}
Since $(Z, d_Z, \nu_Z)$ is Ahlfors $Q_Z$-regular, we know that for any $y\in B(x, t_n)$, $$\nu_Z(B(x, t_n))\approx \nu_Z(B(y, t_n)).$$ It follows from Fubini's theorem that
\begin{align*}
\int_Z\vint_{B(x, t_n)} |u(y)|^pd\nu_Z(y)d\nu_Z(x)&\approx \int_Z\int_Z \frac{|u(y)|^p\chi_{B(x, t_n)}(y)}{\nu_Z(B(y, t_n))} d\nu_Z(y)d\nu_Z(x)\\
&=\int_Z |u(y)|^p d\nu_Z(y) \int_{Z}\frac{\chi_{B(x, t_n)}(y)}{\nu_Z(B(y, t_n))} d\nu_Z(x)\\
&=\|u\|^p_{L^p(Z)}.
\end{align*}
Therefore,
$$\sum_{n=-\infty}^{n_0} t_n^{-sp}\int_Z\vint_{B(x, t_n)}|u(x)-u(y)|^p d\nu_Z(y)d\nu_Z(x)\lesssim \sum_{n=-\infty}^{n_0} t_n^{-sp} \|u\|^p_{L^p(Z)}\lesssim \|u\|^p_{L^p(Z)},$$
which is what we need, and hence, the lemma is proved.
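The numerical series used in the final bound converges because $0<\sigma<1$; explicitly (our addition, for completeness):

```latex
\sum_{n=-\infty}^{n_0} t_n^{-sp}
  = C^{-sp}\sum_{n=-\infty}^{n_0}\sigma^{-nsp}
  = C^{-sp}\,\sigma^{-n_0 sp}\sum_{m=0}^{\infty}\sigma^{msp}
  = \frac{C^{-sp}\,\sigma^{-n_0 sp}}{1-\sigma^{sp}}<\infty.
```

This is also why the comparison coefficients in the lemma depend on $n_0$.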
\end{proof}
\begin{lem}\label{embedding-lp}
Assume that $(Z, d_Z, \nu_Z)$ and $(W, d_W, \nu_W)$ are Ahlfors $Q_Z$-regular and Ahlfors $Q_W$-regular spaces with $Q_Z>0$ and $Q_W>0$, respectively. Let $\theta_1>0$, $\theta_2>0$ and $0<r<2{\operatorname{diam}} Z$. Suppose that $f: Z\rightarrow W$ is a locally $(\theta_1, \theta_2, r)$-biH\"{o}lder continuous mapping such that $Q_Z\geq \theta_1 Q_W$. Then for any $p\geq 1$, the mapping $f$ induces a bounded embedding $f_{\#}: L^p(W)\rightarrow L^p(Z)$ via composition.
\end{lem}
When $Z$ and $W$ are bounded and $f$ is biH\"{o}lder continuous, Lemma \ref{embedding-lp} coincides with \cite[Lemma 7.1]{BBGS}. The proof method of \cite[Lemma 7.1]{BBGS} is also applicable to Lemma \ref{embedding-lp}, and so, we omit the details here.
Now, we are ready to prove Theorem \ref{thm-1}.
\subsection*{Proof of Theorem \ref{thm-1}}
Assume that $(Z, d_Z, \nu_Z)$ and $(W, d_W, \nu_W)$ are Ahlfors $Q_Z$-regular and Ahlfors $Q_W$-regular spaces with $Q_Z>0$ and $Q_W>0$, respectively. Suppose that $f: (Z, d_Z)\rightarrow (W, d_W)$ is a locally $(\theta_1, \theta_2, r)$-biH\"{o}lder continuous mapping with $\theta_1\geq \theta_2>0$ and $r\in (0, 2{\operatorname{diam}} Z)$.
Let $u\in B^{s}_{p, p}(W)$ with $s>0$ and $p\geq 1$, and let $v=u\circ f$.
Since $u\in L^p(W)$, it follows from Lemma \ref{embedding-lp} that
\begin{equation}\label{eq-1-4}
\|v\|_{L^p(Z)}\lesssim \|u\|_{L^p(W)},
\end{equation}
which implies $v\in L^p(Z)$.
For each $n\in \mathbb{Z}$, let $t_n=C\sigma^n$, where $C>0$ and $0<\sigma<1$. Clearly, there is $n_0\in \mathbb Z$ such that $t_{n}<r$ for all $n\geq n_0$. Also, it follows from Lemma \ref{lem-besov-norm} that
for any $s'>0$,
\begin{align}
\|v\|^p_{B^{s^\prime}_{p, p}(Z)} \approx \|v\|^p_{L^p(Z)}+\sum_{n=n_0}^{+\infty} I_n, \label{zw-10-16}
\end{align}
where
$$I_n=t_n^{- s^\prime p}\int_Z\vint_{B(x, t_n)}|v(x)-v(y)|^p d\nu_Z(y)d\nu_Z(x)\notag.$$
In the following, we are going to estimate $I_n$. For this, we first estimate the integral
$$i_n=\vint_{B(x, t_n)}|v(x)-v(y)|^p d\nu_Z(y).$$
Since $t_n<r$ for any $n\geq n_0$, we infer from the local biH\"{o}lder continuity of $f$ that there is $C_0\geq 1$ such that $$f(B(x, t_n))\subset B(f(x), C_0t_n^{\theta_2}).$$ Then the Ahlfors regularity of $(Z, d_Z, \nu_Z)$ gives
\begin{align}\label{1-5-2}
i_n &\approx \frac{1}{t_n^{Q_Z}} \int_{Z}|v(x)-u\circ f(y)|^p \chi_{B(x, t_n)}(y) d\nu_Z(y)
\\ \nonumber
&\leq \frac{1}{t_n^{Q_Z}} \int_{Z}|v(x)-u\circ f(y)|^p \chi_{B(f(x), C_0t_n^{\theta_2})}(f(y)) d\nu_Z(y).
\end{align}
As $v\in L^p(Z)$, we know that $v(x)$ is finite for almost every $x\in Z$.
Since
\begin{align*}
\int_W |v(x)-u(y^\prime)|^p\chi_{B(f(x), C_0t_n^{\theta_2})}(y^\prime)d\nu_W(y^\prime)\lesssim \;& |v(x)|^p\nu_W(B(f(x), C_0t_n^{\theta_2}))\\ &+ \int_{B(f(x), C_0t_n^{\theta_2})}|u(y^\prime)|^pd\nu_W(y^\prime),
\end{align*}
we see from the Ahlfors regularity of $(W, d_W, \nu_W)$ that for almost every $x\in Z$, as a function of $y^\prime$, $|v(x)-u(y^\prime)|^p\chi_{B(f(x), C_0t_n^{\theta_2})}(y^\prime)$ belongs to $L^1(W)$.
It follows from Lemma \ref{embedding-lp} that for almost every $x\in Z$,
$$\int_{Z}|v(x)-u\circ f(y)|^p \chi_{B(f(x), C_0t_n^{\theta_2})}(f(y)) d\nu_Z(y)\lesssim \int_W |v(x)-u(y^\prime)|^p\chi_{B(f(x), C_0t_n^{\theta_2})}(y^\prime) d\nu_W(y^\prime),$$
and thus, we deduce from \eqref{1-5-2} that
\begin{align*}
i_n \lesssim t_n^{\theta_2 Q_W-Q_Z} \vint_{B(f(x), C_0t_n^{\theta_2})} |v(x)-u(y^\prime)|^p d\nu_W(y^\prime),
\end{align*}
which is the desired estimate for $i_n$.
Since $u\in B^{s}_{p, p}(W)$, it follows from Lemma \ref{lem-besov-norm} that for any $n\geq n_0$,
\begin{equation*}
\int_W \vint_{B(x^\prime, C_0t_n^{\theta_2})} |u(x^\prime)-u(y^\prime)|^p d\nu_W(y^\prime)d\nu_W(x^\prime)<\infty,
\end{equation*}
which shows that $\vint_{B(x^\prime, C_0t_n^{\theta_2})} |u(x^\prime)-u(y^\prime)|^p d\nu_W(y^\prime)$ belongs to $L^1(W)$ as a function of $x^\prime$.
Again, by Lemma \ref{embedding-lp}, we obtain that
\begin{align*}
I_n&\lesssim t_n^{-s^\prime p+\theta_2 Q_W-Q_Z} \int_Z \vint_{B(f(x), C_0t_n^{\theta_2})} |u\circ f(x)-u(y^\prime)|^p d\nu_W(y^\prime) d\nu_Z(x)\\
&\lesssim t_n^{-s^\prime p+\theta_2 Q_W-Q_Z} \int_W \vint_{B(x^\prime, C_0t_n^{\theta_2})} |u(x^\prime)-u(y^\prime)|^p d\nu_W(y^\prime)d\nu_W(x^\prime).
\end{align*}
Assume that $s$ and $s^\prime$ satisfy the relation \eqref{s-s-relation}. Then we know that for any $n\geq n_0$,
$$t_n^{-s^\prime p+\theta_2 Q_W-Q_Z}\leq r^{\theta_2(Q_W+sp)-s^\prime p-Q_Z}(t_n^{\theta_2})^{-sp}.$$
This implies that
$$I_n\lesssim (t_n^{\theta_2})^{-sp} \int_W \vint_{B(x^\prime, C_0t_n^{\theta_2})} |u(x^\prime)-u(y^\prime)|^p d\nu_W(y^\prime)d\nu_W(x^\prime).$$
By substituting the estimate of $I_n$ into \eqref{zw-10-16}, we conclude from Lemma \ref{lem-besov-norm} that
$$\|v\|^p_{B^{s^\prime}_{p, p}(Z)} \lesssim \|u\|^p_{B^{s}_{p, p}(W)}. $$
Let $$f_{\#}(u)=u\circ f.$$ Then we have proved that $f_{\#}: B^{s}_{p, p}(W)\rightarrow B^{ s^\prime}_{p, p}(Z)$ is a bounded embedding.
\qed
\section{Power quasisymmetry, locally biH\"{o}lder continuity and uniform boundedness}\label{sec-4}
The purpose of this section is to prove Proposition \ref{uniformly-bounded} and Theorem \ref{thm-2}.
\begin{proof}[{\bf Proof of Proposition \ref{uniformly-bounded}}]
It follows from the assumption that $f$ is $r$-uniformly bounded that there exist constants $a>0$ and $b>0$ such that for all $x\in Z$,
\begin{eqnarray}\label{1-7-1}a\leq {\operatorname{diam}} f\big(B(x, r)\big)\leq b.\end{eqnarray}
Let $x\in Z$ and $s\in (0, 2{\operatorname{diam}} Z)$. To prove that $f$ is $s$-uniformly bounded, we only need to consider two cases: $s\geq r$ and $s<r$. For the first case, it follows from the fact $B(x, r)\subset B(x, s)$ and \eqref{1-7-1} that
\begin{eqnarray}\label{1-7-2}{\operatorname{diam}} f(B(x, s))\geq {\operatorname{diam}} f\big(B(x, r)\big)\geq a.\end{eqnarray}
If $B(x, s)\setminus B(x, r)=\emptyset$, obviously, we obtain from \eqref{1-7-1} that
\begin{eqnarray}\label{1-7-3}
{\operatorname{diam}} f(B(x, s))={\operatorname{diam}} f\big(B(x, r)\big)\leq b,
\end{eqnarray}
and if $B(x, s)\setminus B(x, r)\not=\emptyset$, it follows from the uniform perfectness of $(Z, d_Z)$ that there exists $z\in B(x, r)$ such that $$\frac{r}{\kappa}\leq d_Z(x, z)<r.$$ This indicates that for any $y\in B(x, s)$,
$$\frac{d_Z(x, y)}{d_Z(x, z)}\leq \frac{\kappa s}{r}.$$
Then the $\eta$-quasisymmetry of $f$ gives
$$\frac{d_W(f(x), f(y))}{d_W(f(x), f(z))}\leq \eta\left(\frac{\kappa s}{r}\right),$$
and thus, we get
$$d_W(f(x), f(y))\leq \eta\left(\frac{\kappa s}{r}\right) {\operatorname{diam}} f\big(B(x, r)\big)\leq \eta\left(\frac{\kappa s}{r}\right) b.$$
This implies that
\begin{eqnarray}\label{1-7-4} {\operatorname{diam}} f(B(x, s))\leq \eta\left(\frac{\kappa s}{r}\right) b.\end{eqnarray}
For the remaining case, that is, $s<r$, the fact $B(x, s)\subset B(x, r)$ leads to
\begin{eqnarray}\label{1-7-5} {\operatorname{diam}} f(B(x, s))\leq {\operatorname{diam}} f\big(B(x, r)\big)\leq b.\end{eqnarray}
If $B(x, r)\setminus B(x, s)=\emptyset$, apparently,
\begin{eqnarray} \label{1-7-6}
{\operatorname{diam}} f(B(x, s))={\operatorname{diam}} f\big(B(x, r)\big)\geq a,
\end{eqnarray}
and if $B(x, r)\setminus B(x, s)\not=\emptyset$, then reasoning similar to the proof of \eqref{1-7-4} ensures that
\begin{eqnarray} \label{1-7-7}
{\operatorname{diam}} f(B(x, s))\geq \frac{a}{\eta\left(\frac{\kappa r}{s}\right)}.
\end{eqnarray}
Now, we conclude from \eqref{1-7-2}$-$\eqref{1-7-7} that for all $x\in Z$,
$$a_1\leq {\operatorname{diam}} f(B(x, s))\leq b_1,$$
where
$$a_1=\min\left\{a,\;\frac{a}{\eta\left(\frac{\kappa r}{s}\right)}\right\}\;\;\mbox{and}\;\;b_1=\max\left\{b,\;\eta\left(\frac{\kappa s}{r}\right)b\right\}.$$
This shows that $f$ is $s$-uniformly bounded.
\end{proof}
\begin{proof}[{\bf Proof of Theorem \ref{thm-2}}]
$(1)\Rightarrow(2)$. Assume that $f$ is locally $(\theta, 1/\theta, r)$-biH\"{o}lder continuous with $\theta\geq 1$ and $0<r<2{\operatorname{diam}} Z$. Then there is $C\geq 1$ such that for any $x\in Z$ and any $y\in B(x, r)$,
\begin{equation}\label{lem-10-11}
C^{-1}d_Z(x, y)^{\theta} \leq d_W(f(x), f(y)) \leq Cd_Z(x, y)^{1/\theta},
\end{equation}
which leads to
$$
{\operatorname{diam}} f\big(B(x, r)\big)\leq 2Cr^{1/\theta}.
$$
Moreover, it follows from Lemma \ref{1-8-4} that there is $z\in Z$ such that
$$\frac{r}{\mu}\leq d_Z(x, z)<r,$$ where $\mu=\max\{8, \kappa\}$.
Then \eqref{lem-10-11} leads to
$${\operatorname{diam}} f\big(B(x, r)\big)\geq d_W(f(x), f(z))\geq \frac{r^\theta}{\mu^\theta C}.$$
These show that $f$ is $r$-uniformly bounded.
$(2)\Rightarrow(1)$.
Assume that $f$ is $r$-uniformly bounded with $0<r<2{\operatorname{diam}} Z$. This assumption implies that there are two constants $a>0$ and $b>0$ such that for any $\xi\in Z$,
\begin{equation}\label{l1-8-1}
a\leq {\operatorname{diam}} f(B(\xi, r))\leq b.
\end{equation}
Let $x\in Z$. By Lemma \ref{1-8-4}, we see that there is $\zeta\in Z$ such that
\begin{equation}\label{1-8-7}
\frac{r}{\mu}\leq d_Z(x, \zeta)<r,
\end{equation}
where $\mu=\max\{8, \kappa\}$.
We assert that
\begin{equation}\label{lemma-bdd}
\frac{a}{3\lambda {\mu}^\theta} \leq d_W(f(x), f(\zeta))\leq b.
\end{equation}
The right-side inequality of \eqref{lemma-bdd} easily follows from \eqref{l1-8-1}. For the proof of the left-side inequality, let $\zeta_1\in B(x, r)$ be such that
$$d_W(f(x), f(\zeta_1))\geq \frac {1}{3}{\operatorname{diam}} f\big(B(x, r)\big),$$
and then, it follows from \eqref{l1-8-1} that
$$d_W(f(x), f(\zeta_1))\geq \frac a 3.$$
Since
$$\frac{d_Z(x, \zeta_1)}{d_Z(x, \zeta)}\leq \mu,$$
we know from the assumption of $f$ being $(\theta, \lambda)$-power quasisymmetric with $\theta\geq 1$ and $\lambda\geq 1$ that
$$\frac{d_W(f(x), f(\zeta_1))}{d_W(f(x), f(\zeta))} \leq \lambda {\mu}^{\theta}.$$
Hence
$${d_W(f(x), f(\zeta))} \geq \frac{1}{\lambda {\mu}^\theta}d_W(f(x), f(\zeta_1)) \geq \frac{a}{3\lambda {\mu}^\theta},$$
which is what we need. Thus the estimates in \eqref{lemma-bdd} are proved.
Let $y\in Z$ be such that $$d_Z(x, y)<r.$$
If $d_Z(x, y)\geq d_Z(x, \zeta)$, then $r/{\mu}\leq d_Z(x, y)<r$. It follows from \eqref{lemma-bdd} that
\begin{eqnarray}\label{1-8-5}
\frac{a}{3\lambda (r\mu)^\theta} d_Z(x, y)^{\theta} \leq \frac{a}{3\lambda {\mu}^\theta} \leq d_W(f(x), f(y))\leq b\leq \frac{b\mu^{\frac{1}{\theta}}}{r^{\frac{1}{\theta}}} d_Z(x, y)^{\frac{1}{\theta}}.
\end{eqnarray}
If $d_Z(x, y)<d_Z(x, \zeta)$, it follows from the assumption of $f$ being $(\theta, \lambda)$-power quasisymmetric that
\begin{equation*}
\lambda^{-1} \left(\frac{d_Z(x, y)}{d_Z(x, \zeta)}\right)^{\theta}\leq \frac{d_W(f(x), f(y))}{d_W(f(x), f(\zeta))}\leq \lambda\left(\frac{d_Z(x, y)}{d_Z(x, \zeta)}\right)^{1/\theta},
\end{equation*}
and then, we deduce from \eqref{1-8-7} and \eqref{lemma-bdd} that
\begin{eqnarray}\label{1-8-6}
\frac{a}{3\lambda^2 {(r\mu)}^\theta } d_Z(x, y)^{\theta}\leq d_W(f(x), f(y))\leq \frac{\lambda b\mu^{\frac{1}{\theta}}}{r^{\frac{1}{\theta}}}d_Z(x, y)^{\frac{1}{\theta}}.
\end{eqnarray}
Now, we conclude from \eqref{1-8-5} and \eqref{1-8-6} that
$f$ is locally $(\theta, 1/\theta, r)$-biH\"{o}lder continuous, and hence, the theorem is proved.
\end{proof}
\section{Examples}\label{sec-5}
As an application of Theorem \ref{thm-2}, in this section, we construct two examples. The first example gives a quasisymmetric mapping between unbounded uniformly perfect spaces, which is not locally biH\"older continuous.
In the second example, we construct a locally biH\"older continuous mapping between unbounded Ahlfors regular spaces, which is not biH\"{o}lder continuous. This example, together with Remark \ref{sec5}$(ii)$ below, also illustrates that Theorem \ref{thm-1} is more general than \cite[Proposition 7.2]{BBGS}.
\begin{example}\label{ex-add}
Let $f$ be the radial stretching $f(x)=|x|x$ of $(\mathbb R^2, |\cdot|)$,
where $|\cdot|$ denotes the usual Euclidean metric. Then $f$ is a power quasisymmetric mapping but not locally biH\"older continuous.
\end{example}
\begin{proof}
It follows from \cite[p. 49]{V1} or \cite[p. 309]{HKM} that $f$ is a quasiconformal mapping. It is a fundamental fact that quasiconformal self-mappings of Euclidean spaces with dimension at least two are quasisymmetric, see for example Gehring \cite{G1} or Heinonen-Koskela \cite{HK}. This fact implies that
$f$ is a quasisymmetric mapping. Then we know from Theorem A that $f$ is power quasisymmetric. Here, we refer interested readers to \cite{HK,V1} for the definitions of quasiconformal mappings.
Suppose on the contrary that $f$ is locally biH\"older continuous. By Theorem \ref{thm-2} and Proposition \ref{uniformly-bounded}, $f$ must be $1$-uniformly bounded. However, for any $(n, 0)\in \mathbb R^2$ with $n\in \mathbb N$, a direct computation gives that
$${\operatorname{diam}} f\left(B\big((n, 0), 1\big)\right)\geq (n+1)^2-n^2= 2n+1,$$
which contradicts the uniform boundedness condition. We conclude that $f$ is not locally biH\"older continuous.
\end{proof}
\begin{example}\label{ex}
Let $f$ be the following self-homeomorphism of $(\mathbb R^2, |\cdot|)$:
\begin{equation*}
f(x)=\left\{\begin{array}{cl}
0,& x=0,\\
\frac{x}{|x|}\cdot |x|^{\frac 12}, &0<|x|<1, \\
x, &|x|\geq 1.
\end{array}\right.
\end{equation*}
Then the following statements hold.
\begin{enumerate}
\item[$(1)$]
$f$ is power quasisymmetric.
\item[$(2)$]
$f$ is locally biH\"older continuous.
\item[$(3)$]
$f$ is not $(\theta_1, \theta_2)$-biH\"older continuous for any $\theta_1>0$ and $\theta_2>0$.
\end{enumerate}
\end{example}
\begin{proof}
$(1)$ The statement $(1)$ in the example follows from an argument similar to the one in the proof of Example \ref{ex-add}.
$(2)$ To show that $f$ is locally biH\"older continuous, by Theorem \ref{thm-2}, it suffices to show that $f$ is $r$-uniformly bounded for some $r>0$. Choose $r=2$. Then it is obvious from the definition of $f$ that for any $x\in \mathbb R^2$,
$$2\leq {\operatorname{diam}} f(B(x, 2))\leq 6.$$
This implies that $f$ is $2$-uniformly bounded, and hence, it is locally biH\"older continuous.
$(3)$ Suppose on the contrary that $f$ is $(\theta_1, \theta_2)$-biH\"older continuous for some $\theta_1$ and $\theta_2$ with $\theta_1\geq \theta_2>0$. Then $f$ is a snowflake mapping since $(\mathbb R^2, |\cdot|)$ is unbounded. That is, there are constants $C\geq 1$ and $\theta>0$ such that for any pair of $x$ and $y$,
$$
C^{-1}|x-y|^{\theta}\leq |f(y)-f(x)|\leq C|x-y|^{\theta}.
$$
However, for any $x$ with $|x|<1$,
$$
|f(x)-f(0)|=|x|^{\frac12},
$$ which implies that $\theta = \frac12$;
and for any $x$ with $|x|\geq1$,
$$
|f(x)-f(0)|=|x|,
$$ which shows that $\theta = 1$. We conclude from this contradiction that $f$ is not $(\theta_1, \theta_2)$-biH\"older continuous for any $\theta_1>0$ and $\theta_2>0$.
\end{proof}
\begin{rem}\label{sec5}
$(i)$ Following the arguments in the proofs of statements $(2)$ and $(3)$ in Example \ref{ex}, it is not difficult to show that the mapping $f$ in Example \ref{ex} is locally $(1, \frac12, 2)$-biH\"older continuous. We omit the detailed computations here.
$(ii)$ It is known that $\mathbb R^2$ is $2$-Ahlfors regular. Let $s>0$, $s^\prime>0$ and $p\geq 1$ be parameters such that $(s-2s^\prime)p\geq 2$. Then we see that all assumptions in Theorem \ref{thm-1} are satisfied. Therefore, we infer from Theorem \ref{thm-1} that
$f$ induces a canonical bounded embedding $f_{\#}: B^{s}_{p, p}(\mathbb R^2)\rightarrow B^{ s^\prime}_{p, p}(\mathbb R^2)$ via composition.
\end{rem}
\subsection*{Acknowledgments}
The second author (X. Wang) was partly supported by NNSF of China under the number 12071121, and the third author (Z. Wang) was partly supported by NNSF of China under the number 12101226.
\vspace*{5mm}
\section{Introduction}
In this paper, all graphs are finite and simple, that is, they contain no parallel edges and no loops. A proper $k$-coloring of a graph $G$ is an assignment of $k$ colors to the vertices of $G$ so that no two adjacent vertices share the same color. We say a graph $G$ is $c$-colorable if it admits a proper $c$-coloring. The chromatic number of a graph $G$, denoted by $\chi(G)$, is the minimum $c$ such that $G$ is $c$-colorable.
It is well known that $\chi(G)\le4$ if $G$ is planar.
Given a proper coloring, if some color appears an odd number of times in the neighborhood of a vertex $v$, then we say $v$ admits an odd color. We use $c_o(v)$ to denote such a color and $C_o(v)$ to denote the set of the odd colors of $v$. An odd $c$-coloring of a graph is a proper $c$-coloring with the additional constraint that each vertex admits an odd color. A graph $G$ is odd $c$-colorable if it has an odd $c$-coloring. The odd chromatic number of a graph $G$, denoted by $\chi_o(G)$, is the minimum $c$ such that $G$ has an odd $c$-coloring. Odd coloring has potential applications in many areas, for example, battery consumption aspects of sensor networks and RFID protocols~\cite{smorodinsky2013conflict}.
Odd coloring was introduced very recently by Petru\v{s}evski and \v{S}krekovski~\cite{petruvsevski2021colorings}, who proved that planar graphs are odd $9$-colorable. Since a $5$-cycle is a planar graph whose odd chromatic number is exactly $5$, they further conjectured that planar graphs are odd $5$-colorable. Petr and Portier~\cite{petr2022odd} proved that planar graphs are odd $8$-colorable. Fabrici~\cite{fabrici2022proper} proved a strengthened version for planar graphs regarding similar coloring parameters.
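For instance (our addition, not part of the original argument), the value $\chi_o(C_5)=5$ for the $5$-cycle mentioned above can be verified directly:

```latex
% Why \chi_o(C_5)=5 (sketch): let c be an odd coloring of the 5-cycle
% v_1v_2v_3v_4v_5. Each vertex v_i has exactly two neighbors; if they
% shared a color, that color would appear twice (an even number of
% times) in N(v_i), and v_i would admit no odd color. Hence
c(v_{i-1})\neq c(v_{i+1}) \quad\text{(indices modulo } 5\text{)},
% i.e., vertices at distance two receive distinct colors. Since any
% two vertices of C_5 are at distance one or two, and adjacent
% vertices differ by properness, all five colors are pairwise
% distinct, so \chi_o(C_5)\geq 5; coloring the vertices with five
% distinct colors shows \chi_o(C_5)=5.
```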
Cranston~\cite{cranston2022odd} studied girth restrictions and proved that $\chi_{o}(G)\leq5$ if $G$ is a planar graph with girth at least $7$, and $\chi_{o}(G)\leq6$ if $G$ is a planar graph with girth at least $6$.
Eun-Kyung Cho~\cite{cho2022odd} focused on sparse graphs and conjectured that, for $c\ge4$, if $G$ is a graph with $\operatorname{mad}(G)\le \frac{4c-4}{c+1}$, then $\chi_o(G)\le c$. They proved that, for $c\ge 7$, if $G$ is a graph with $\operatorname{mad}(G)\le \operatorname{mad}(K^*_{c+1})= \frac{4c}{c+2}$, then $\chi_o(G)\le c$, unless $G$ contains $K^*_{c+1}$ as a subgraph. They further proved that $\chi_{o}(G)\leq5$ if $G$ is a planar graph with girth at least $6$.
Suppose that $G$ is a toroidal graph. The $7$-color theorem~\cite{kauffman2009seven} shows that $\chi(G)\le7$, and this bound is sharp since $K_7$ is a toroidal graph with $\chi(K_7)=7$. Since $\chi_o(G)\ge\chi(G)$ holds for every graph, the bound $7$ cannot be improved for the odd chromatic number of toroidal graphs either: $\chi_o(K_7)\ge\chi(K_7)=7$.
We prove the following.
\begin{thm}\label{th1}
If $G$ is a toroidal graph, then $\chi_o(G)\le9$.
\end{thm}
We prove Theorem~\ref{th1} by reduction. In the construction of the minimal counterexample, we organize the constraints in a novel way, which greatly simplifies our proof and can be adapted to settle other coloring problems. Moreover, we take care to summarize the complex case analysis in the proof and to present it concisely. More precisely, the discharging method forces the most difficult part into the configuration shown in Figure~1; we simplify the analysis of this main reducible configuration by splitting it into Lemmas~\ref{32123},~\ref{42123} and~\ref{1234}, and then conclude in Lemma~\ref{6-v} based on these auxiliary lemmas and the claim.
\section{Proof}
Let $G$ be a counterexample to Theorem~\ref{th1} with the minimum number of $4^+$-vertices; subject to that, with the number of $5^+$-neighbors over all $5^+$-vertices of $G$ minimized; and subject to these conditions, with $|E(G)|$ minimized.
\begin{lem}\label{minimum degree}
The minimum degree of $G$ satisfies $\delta(G)\ge5$.
\end{lem}
\begin{proof}
Suppose otherwise that there is a $4$-vertex $v$ in $G$. Let $v_1,v_2,v_3,v_4$ be the neighbors of $v$. Let $G'$ be the graph obtained from $G-v$ by adding the paths $v_1x_1v_2$, $v_2x_2v_3$ and $v_3x_3v_1$, where each $x_i$, $i\in [3]$, is a new $2$-vertex.
By the minimality of $G$, $G'$ has an odd $9$-coloring $c'$.
Then we can get an odd $9$-coloring $c$ of $G$ by coloring each vertex of $G$ other than $v$ with its color in $G'$ and coloring $v$ with a color in $[9]\setminus \{c(v_1),c(v_2),c(v_3),c(v_4),c_o(v_1),c_o(v_2),c_o(v_3),c_o(v_4)\}$. Since each $x_i$ is a $2$-vertex, the colors $c(v_1)$, $c(v_2)$ and $c(v_3)$ are pairwise distinct. Then $v$ admits an odd color, a contradiction.
\end{proof}
\begin{lem}\label{oddvet-non-adja}
No two odd vertices are adjacent in $G$.
\end{lem}
\begin{proof}
Suppose otherwise that there exist two adjacent odd vertices $u$ and $v$. By Lemma~\ref{minimum degree}, $u$ and $v$ are $5^+$-vertices. Let $G'$ be the graph obtained from $G$ by splitting the edge $uv$ with a $2$-vertex $w$. Since the $5^+$-vertices $u$ and $v$ have fewer $5^+$-neighbors in $G'$, there is an odd $9$-coloring $c'$ of $G'$ by the minimality of $G$. Note that $c'(u)\neq c'(v)$ since $w$ is a $2$-vertex. Let $c(z)=c'(z)$ for $z\in V(G)$. Since $u$ and $v$ are odd vertices, $u$ and $v$ always admit an odd color. Then $c$ is an odd $9$-coloring of $G$, a contradiction.
\end{proof}
\begin{lem}\label{5-vetx}
Let $u$ be a $5$-vertex, let $u_1,u_2,\ldots,u_5$ be the neighbors of $u$ in clockwise order, let each of $u_2$ and $u_3$ be a $6$-vertex, and let $[uu_1u_2],[u_2uu_3],[u_3uu_4]$ be $3$-faces. Then $G$ contains no such $5$-vertex $u$.
\end{lem}
\begin{proof}
Suppose otherwise that $G$ has such a $5$-vertex $u$ satisfying the constraints in Lemma~\ref{5-vetx}. Let $u_2',u_2'',u_2'''\notin \{u,u_3,u_1\}$ be the neighbors of $u_2$, and let $u_3',u_3'',u_3'''\notin \{u,u_2,u_4\}$ be the neighbors of $u_3$. Let $G'$ be the graph obtained from $G-\{u_2,u_3\}$ by adding edges between any two of $u_i',u_i'',u_i'''$ that are not adjacent in $G$ for $i\in\{2,3\}$. Since $G'$ has fewer $4^+$-vertices than $G$, $G'$ has an odd $9$-coloring $c'$ by the minimality of $G$. Let $c(w)=c'(w)$ for $w\in V(G)-\{u_2,u_3\}$.
Since $c'$ is proper, the colors $c(u_i')$, $c(u_i'')$ and $c(u_i''')$ are pairwise distinct for $i=2,3$.
If $c(u_1)\notin \{c(u_2'),c(u_2''),c(u_2''')\}$, then $u_2$ must have an odd color regardless of the colors of $u$ and $u_3$ in $G$. Then color $u_2$ with a color in $\lbrack 9 \rbrack\setminus\{c(u_2'),c(u_2''),c(u_2'''), c_o(u_2'), c_o(u_2''),c_o(u_2'''),c(u_1)\}$. If either
$\{c(u_2), c(u_4)\}\nsubseteq\{c(u_3'),c(u_3''),c(u_3''')\}$
or $c(u_2)=c(u_4)$, then $u_3$ must have an odd color regardless of the color of $u$ in $G$. Then color $u_3$ with a color in $\lbrack 9 \rbrack\setminus\{c(u_3'),c(u_3''),c(u_3'''), c_o(u_3'),c_o(u_3''),c_o(u_3'''),c(u_2),c(u_4)\}$. Recolor $u$ with a color in $\lbrack 9 \rbrack\setminus\{c(u_1),c_o(u_1),c(u_2),c(u_3),c(u_4),c_o(u_4),c(u_5),c_o(u_5)\}$. Since $u$ is a $5$-vertex, $u$ must have an odd color. Then $G$ has an odd $9$-coloring $c$, a contradiction. Thus,
$\{c(u_2), c(u_4)\}\subseteq\{c(u_3'),c(u_3''),c(u_3''')\}$
and $c(u_2)\neq c(u_4)$. We assume that $c(u_2)=c(u_3')$ and $c(u_4)=c(u_3'')$. In this case, we first recolor $u$ with a color in $\lbrack 9 \rbrack\setminus\{c(u_1),c_o(u_1),c(u_2),c(u_4),c(u_5),c_o(u_5), c(u_3''')\}$. Then $u_3$ has an odd color $c(u_3''')$. Then color $u_3$ with a color in $\lbrack 9 \rbrack\setminus\{c(u_3'),c(u_3''),c(u_3'''), c_o(u_3'),c_o(u_3''),c_o(u_3'''),c(u),c_o(u_4)\}$, a contradiction.
Thus, $c(u_1)\in \{c(u_2'),c(u_2''),c(u_2''')\}$. By symmetry, $c(u_4)\in \{c(u_3'),c(u_3''),c(u_3''')\}$. We assume that $c(u_1)=c(u_2'), c(u_4)=c(u_3')$.
If $|\{c(u_2'),c(u_2''),c(u_2'''), c_o(u_2'), c_o(u_2''),c_o(u_2'''),c(u_3''),c(u_3''')\}|\leq 7 $, then color $u_3$ with a color in $\lbrack 9 \rbrack\setminus\{c(u_3'),c(u_3''),c(u_3'''), c_o(u_3'),c_o(u_3''),c_o(u_3'''),c(u_2''),c(u_2''')\}$. Then $u_2$ must have an odd color $c(u_2'')$ or $c(u_2''')$ regardless of the color of $u$ in $G$. Then color $u_2$ with a color in $\lbrack 9 \rbrack\setminus\{c(u_2'),c(u_2''),c(u_2'''), c_o(u_2'), c_o(u_2''),c_o(u_2'''),c(u_3''),c(u_3'''), c(u_3)\}$. Since $|\{c(u_2'),c(u_2''),c(u_2'''), c_o(u_2'),c_o(u_2''),c_o(u_2'''),c(u_3''),c(u_3''')\}|\leq 7 $, $u_2$ has at least one available color. Then $u_3$ must have an odd color $c(u_3'')$ or $c(u_3''')$ regardless of the color of $u$ in $G$. Finally, recolor $u$ with a color in $\lbrack 9 \rbrack\setminus\{c(u_1),c_o(u_1),c(u_2),c(u_3),c(u_4),c_o(u_4),c(u_5),c_o(u_5)\}$, a contradiction. Thus, $|\{c(u_2'),c(u_2''),c(u_2'''), c_o(u_2'), c_o(u_2''),c_o(u_2'''),c(u_3''),c(u_3''')\}|= 8$.
First color $u_2$ with the color in $\lbrack 9 \rbrack\setminus\{c(u_2'),c(u_2''),c(u_2'''), c_o(u_2'), c_o(u_2''),c_o(u_2'''),c(u_3''),c(u_3''')\}$. Then $u_3$ must have an odd color $c(u_3'')$ or $c(u_3''')$ regardless of the color of $u$ in $G$. Let $c_1$ and $c_2$ be two distinct colors in $\lbrack 9 \rbrack \setminus \{c(u_3'),c(u_3''),c(u_3'''), c_o(u_3'),c_o(u_3''),c_o(u_3'''),c(u_2)\}$. If one of $c_1$ and $c_2$ is not in $\{c(u_2''),c(u_2''')\}$, then color $u_3$ with this color. Then $u_2$ must have an odd color $c(u_2'')$ or $c(u_2''')$ regardless of the color of $u$. Recolor $u$ with a color in $\lbrack 9 \rbrack\setminus\{c(u_1),c_o(u_1),c(u_2),c(u_3),c(u_4),c_o(u_4),c(u_5),c_o(u_5)\}$, a contradiction. Thus, $\{c_1,c_2\}=\{c(u_2''),c(u_2''')\}$. Then color $u_3$ with $c_1$. If $c_2$ is not the only color in $[9]\setminus\{c(u_1),c_o(u_1),c(u_2),c(u_3),c(u_4),c_o(u_4),c(u_5),c_o(u_5)\}$, then recolor $u$ with a color in $\lbrack 9 \rbrack\setminus\{c(u_1),c_o(u_1),c(u_2),c(u_3),c(u_4),c_o(u_4),c(u_5),c_o(u_5)\}$. Then $u_2$ has an odd color $c_2$, a contradiction. Thus, $c_2$ is the only color in $\lbrack9\rbrack\setminus\{c(u_1),c_o(u_1),c(u_2),c(u_3),c(u_4),c_o(u_4),c(u_5),c_o(u_5)\}$. In this case, recolor $u$ with $c(u_2)$. Since $c_2$ is the only color in $\lbrack9\rbrack\setminus\{c(u_1),c_o(u_1),c(u_2),c(u_3),c(u_4),c_o(u_4),c(u_5),c_o(u_5)\}$, the color of $u$ does not destroy
the proper coloring of $u_1,u_3,u_4,u_5$ or the odd coloring of $u_3,u_4,u_5$.
Then we choose one of $c(u_3'')$ and $c(u_3''')$ to color $u_2$ such that $u_1$ has an odd color. Then $u_2$ has an odd color $c_2$ and $u_3$ has an odd color $c(u_3''')$ or $c(u_3'')$, a contradiction.
\end{proof}
A $6$-vertex $v$ is {\em special} if $v$ is incident with six $3$-faces. A vertex $u$ is {\em free} to the vertex $v$ if $u$ is adjacent to $v$ and $|C_o(u)|\ge 3$. In particular, we use $\overline{c_o}(u)$ to denote the only odd color of $u$ if $|C_o(u)|=1$. Note that if $u$ is a $5$-vertex, then $|C_o(u)|=1$, $3$ or $5$.
In the following Lemmas \ref{32123}-\ref{6-v}, let $u$ be a special $6$-vertex, $u_1,u_2,\ldots,u_6$ be the neighbors of $u$ in the clockwise order, $v_{12},v_1,v_2 \notin \{u_6,u,u_2\}$ be the neighbors of $u_1$; $v_2,v_3,v_4 \notin \{u_1,u,u_3\}$ be the neighbors of $u_2$; $v_4,v_5,v_6 \notin \{u_2,u,u_4\}$ be the neighbors of $u_3$; $v_6,v_7,v_8 \notin \{u_3,u,u_5\}$ be the neighbors of $u_4$; $v_8,v_9,v_{10} \notin \{u_4,u,u_6\}$ be the neighbors of $u_5$; $v_{10},v_{11},v_{12} \notin \{u_5,u,u_1\}$ be the neighbors of $u_6$; each of $u,u_1,\ldots,u_6,v_1,v_2,\ldots,v_{12}$ is a special $6$-vertex, as is depicted in Figure 1.
\vskip 0.5cm
\unitlength=0.25mm
\begin{picture}(10,20)(0,0)
\put(230, -10){\makebox(0,0){$\bullet$}} \put(225, -20) {\scriptsize {\em $u_6$}} \put(230, -10){\line(1,0){40}} \put(230, -10){\line(2,3){20}}
\put(230, -10){\line(-2,3){20}} \put(230, -10){\line(-1,0){40}}
\put(270, -10){\makebox(0,0){$\bullet$}} \put(265, -20){\scriptsize {\em $u_1$}}
\put(270, -10){\line(2,-3){20}} \put(270, -10){\line(-2,3){20}}
\put(270, -10){\line(2,3){20}} \put(270, -10){\line(1,0){40}}
\put(290, -40){\makebox(0,0){$\bullet$}} \put(278, -38){\scriptsize {\em $u_2$}}
\put(290, -40){\line(-2,-3){20}} \put(290, -40){\line(2,3){20}}
\put(290, -40){\line(1,0){40}} \put(290, -40){\line(2,-3){20}}
\put(270, -70){\makebox(0,0){$\bullet$}} \put(265, -64){\scriptsize {\em $u_3$}}
\put(270, -70){\line(-1,0){40}} \put(270, -70){\line(1,0){40}}
\put(270, -70){\line(-2,-3){20}} \put(270, -70){\line(2,-3){20}}
\put(230, -70){\makebox(0,0){$\bullet$}} \put(225, -64){\scriptsize {\em $u_4$}}
\put(230, -70){\line(-2,3){20}} \put(230, -70){\line(2,-3){20}}
\put(230, -70){\line(-2,-3){20}} \put(230, -70){\line(-1,0){40}}
\put(210, -40){\makebox(0,0){$\bullet$}} \put(213, -38){\scriptsize {\em $u_5$}}
\put(210, -40){\line(2,3){20}} \put(210, -40){\line(-2,3){20}}
\put(210, -40){\line(-2,-3){20}} \put(210, -40){\line(-1,0){40}}
\put(210, 20){\makebox(0,0){$\bullet$}} \put(210, 23){\scriptsize {\em $v_{11}$}}
\put(210, 20){\line(1,0){40}} \put(210, 20){\line(-2,-3){20}}
\put(250, 20){\makebox(0,0){$\bullet$}} \put(250, 23){\scriptsize {\em $v_{12}$}}
\put(250, 20){\line(1,0){40}}
\put(290, 20){\makebox(0,0){$\bullet$}} \put(290, 23){\scriptsize {\em $v_1$}}
\put(290, 20){\line(2,-3){20}}
\put(310, -10){\makebox(0,0){$\bullet$}} \put(313, -11){\scriptsize {\em $v_2$}}
\put(310, -10){\line(2,-3){20}}
\put(330, -40){\makebox(0,0){$\bullet$}} \put(333, -41){\scriptsize {\em $v_3$}}
\put(330, -40){\line(-2,-3){20}}
\put(310, -70){\makebox(0,0){$\bullet$}} \put(313, -71){\scriptsize {\em $v_4$}}
\put(310, -70){\line(-2,-3){20}}
\put(290, -100){\makebox(0,0){$\bullet$}} \put(293, -100){\scriptsize {\em $v_5$}}
\put(290, -100){\line(-1,0){40}}
\put(250, -100){\makebox(0,0){$\bullet$}} \put(253, -98){\scriptsize {\em $v_6$}}
\put(250, -100){\line(-1,0){40}}
\put(210, -100){\makebox(0,0){$\bullet$}} \put(213, -98){\scriptsize {\em $v_7$}}
\put(210, -100){\line(-2,3){20}}
\put(190, -70){\makebox(0,0){$\bullet$}} \put(178, -71){\scriptsize {\em $v_8$}}
\put(190, -70){\line(-2,3){20}}
\put(170, -40){\makebox(0,0){$\bullet$}} \put(158, -41){\scriptsize {\em $v_9$}}
\put(170, -40){\line(2,3){20}}
\put(190, -10){\makebox(0,0){$\bullet$}} \put(174, -11){\scriptsize {\em $v_{10}$}}
\put(250, -40){\makebox(0,0){$\bullet$}} \put(252, -40){\scriptsize {\em $u$}}
\put(250, -40){\line(2,3){20}} \put(250, -40){\line(1,0){40}}
\put(250, -40){\line(2,-3){20}} \put(250, -40){\line(-2,-3){20}}
\put(250, -40){\line(-1,0){40}} \put(250, -40){\line(-2,3){20}}
\put(145, -120){\scriptsize {\em Figure 1. A cluster of special 6-vertices }}
\end{picture}
\vskip 4cm
In the following Lemmas \ref{32123}-\ref{1234}, let $c$ be an odd $9$-coloring of $G-u$.
\begin{lem}\label{32123}
Let $c(u_1)\neq c(u_2)\neq c(u_3)$, $c(u_2)=c(u_6)$, $c(u_3)=c(u_5)$, each of $u_1,u_2$ and $u_6$ has exactly one odd color and $\overline{c_o}(u_1)\neq \overline{c_o}(u_2) \neq \overline{c_o}(u_6)$, $\overline{c_o}(u_i)\notin \{c(u_1),c(u_2),c(u_3),c(u_4)\}$ for each $i\in \{1,2,6\}$. Then $u_1$ can be recolored
with one color in $\lbrack 9 \rbrack \setminus \{c(u_1),c(u_2), c(u_3), c(u_5)\}$
such that $u_2$ and $u_6$ are free to $u$.
\end{lem}
\begin{proof}
Since $c(u_2)=c(u_6)$ and $u_1$ has exactly one odd color in $G-u$, $c(v_2)=c(v_{12})$, $c(v_1)=\overline{c_o}(u_1)$. Since $c$ is proper, $c(v_{12})=c(v_2)\neq c(u_1)\neq c(u_2)$. If $c(v_2)=c(v_{12})\neq c(u_3)$, then $\overline{c_o}(u_2)=c(v_2)=c(v_{12})=\overline{c_o}(u_6)$, a contradiction. Thus, $c(v_2)=c(v_{12})=c(u_3)$. Since each of $u_2$ and $u_6$ has exactly one odd color and $c(u_5)=c(u_3)\neq c(u_1)$, $\{c(v_3),c(v_4)\}=\{\overline{c_o}(u_2),c(u_1)\}$ and $\{c(v_{10}),c(v_{11})\}=\{\overline{c_o}(u_6),c(u_1)\}$. Let $c_1$ be the color in $[9]\setminus\{c(u_1),c(u_2),c(u_3),c(u_4),c(v_1),c_o(v_1),\overline{c_o}(u_2),\overline{c_o}(u_6)\}$.
Recolor $u_1$ with $c_1$. Since $c(u_3)=c(v_2)=c(v_{12})$, $c_1\neq c(v_2)$ and $c_1\neq c(v_{12})$. Thus, each of $v_1,v_2,v_{12}$ is properly colored. Observing the neighbors of $v_2$, we have $c(v_1)\neq c(u_2)\neq c(v_3)$, since $c(v_3)\in\{\overline{c_o}(u_2),c(u_1)\}$ and $c(v_1)=\overline{c_o}(u_1)$. Recall that $c_1\notin\{c(v_1),c(u_2),c(u_1),\overline{c_o}(u_2)\}$. Thus, $v_2$ still has an odd color after $u_1$ is recolored with $c_1$. By symmetry, $v_{12}$ still has an odd color after $u_1$ is recolored with $c_1$. Observing the neighbors of $u_2$, we have $c(u_3)\neq c(v_3) \neq c(v_4)\neq c_1$. Then $|C_o(u_2)|\ge3$
regardless of the color of $u$ in $G$. Thus, $u_2$ is free to $u$. By symmetry, $u_6$ is free to $u$.
\end{proof}
\begin{lem}\label{42123}
If $c(u_6)=c(u_2)\neq c(u_1)\neq c(u_3) \neq c(u_5)$, $|C_o(u_1)|=1$, $|C_o(u_2)|=1$ and $|C_o(u_6)|=1$, then $\{\overline{c_o}(u_1), \overline{c_o}(u_2), \overline{c_o}(u_6)\}$ occupies at most two different colors in $\lbrack 9 \rbrack \setminus \{c(u_1),c(u_2), c(u_3), c(u_5)\}$.
\end{lem}
\begin{proof}
Suppose otherwise that $\{\overline{c_o}(u_1), \overline{c_o}(u_2), \overline{c_o}(u_6)\}$ occupies three colors in $\lbrack 9 \rbrack \setminus \{c(u_1),c(u_2), c(u_3), c(u_5)\}$. Thus, $\overline{c_o}(u_1)\neq \overline{c_o}(u_2)\neq \overline{c_o}(u_6)$.
Since $u_1$ has an odd color and $c(u_2)=c(u_6)$, $c(v_2)=c(v_{12})$ and $c(v_1)=\overline{c_o}(u_1)$.
If $c(v_2)=c(u_3)$, then $c(v_{12})=c(u_3)$. Since $c(u_3)\neq c(u_5)$, $u_6$ has an odd color $c(v_{12})=c(u_3)$, a contradiction.
If $c(v_2)=c(u_5)$, then $u_2$ has an odd color $c(v_2)=c(u_5)$ since $c(u_3)\neq c(u_5)$. Then $\overline{c_o}(u_2)=c(u_5)$, a contradiction. Thus, $c(v_2)\notin\{c(u_3),c(u_5)\}$. Then $\{c(v_3),c(v_4)\}=\{c(u_1),c(u_3)\}$ since $u_2$ has exactly one odd color and $c(u_1)\neq c(u_3)$. Since $u_6$ has exactly one odd color and $c(u_1)\neq c(u_5)$, $\{c(v_{10}),c(v_{11})\}=\{c(u_1),c(u_5)\}$. Then $\overline{c_o}(u_2)=c(v_2)=c(v_{12})=\overline{c_o}(u_6)$, a contradiction.
\end{proof}
\begin{lem}\label{1234}
If $c(u_1)\neq c(u_2)\neq c(u_3)\neq c(u_6)$, $|C_o(u_1)|=1$ and $|C_o(u_2)|=1$, then $\{\overline{c_o}(u_1), \overline{c_o}(u_2)\}$ occupies at most one color in $\lbrack 9 \rbrack \setminus \{c(u_1),c(u_2), c(u_3), c(u_6)\}$.
\end{lem}
\begin{proof}
Suppose otherwise that $\{\overline{c_o}(u_1), \overline{c_o}(u_2)\}$ occupies two colors in $\lbrack 9 \rbrack \setminus \{c(u_1),c(u_2), c(u_3), c(u_6)\}$. Thus, $\overline{c_o}(u_1)\neq \overline{c_o}(u_2)$.
Since $u_2$ has an odd color and $c(u_3)\neq c(u_1)$, $\{\overline{c_o}(u_2), c(u_3), c(u_1)\}=\{c(v_2),c(v_3),c(v_4)\}$. Since $c$ is proper and $v_2u_1\in E(G-u)$, $c(v_2)=\overline{c_o}(u_2)$ or $c(v_2)=c(u_3)$. In the former case,
since $u_1$ has exactly one odd color and $\overline{c_o}(u_2)\neq c(u_2)\neq c(u_6) $, $\{c(v_1),c(v_{12})\}=\{c(u_2),c(u_6)\}$. Then $\overline{c_o}(u_1)=c(v_2)=\overline{c_o}(u_2)$, a contradiction.
In the latter case, $\{c(v_1),c(v_{12})\}=\{c(u_2),c(u_6)\}$ since $u_1$ has exactly one odd color and $c(u_3)\neq c(u_2)\neq c(u_6)$. Then $\overline{c_o}(u_1)=c(v_2)=c(u_3)$, a contradiction.
\end{proof}
\begin{lem}\label{6-v}
$G$ contains no configuration depicted in Figure 1.
\end{lem}
\begin{proof}
Suppose otherwise that $G$ contains the configuration in Figure 1. Let $G'$ be the graph obtained from $G-u$ by adding paths $u_1x_1u_3,u_1x_2u_4,u_1x_3u_5$, where $x_1,x_2,x_3$ are $2$-vertices. Since $G'$ has fewer $6^+$-vertices than $G$, $G'$ has an odd $9$-coloring $c'$. Let $c(z)=c'(z)$ for each vertex $z\in V(G)-u$. Since $c'$ is an odd coloring, each of $x_1,x_2$ and $x_3$ has an odd color. Then $c(u_1)\neq c(u_3)$, $c(u_1)\neq c(u_4)$ and $c(u_1)\neq c(u_5)$. Up to symmetry, all possible colorings of $u_1,u_2,\ldots,u_6$ fall into the following cases.
We first establish the following claim:
{\bf Claim} Let $|\{c(u_1),c(u_2),\ldots,c(u_6)\}|=k$. If one of the following statements holds, then $u$ admits an odd coloring in $G$.
\begin{enumerate}
\item there exist at least $k-2$ neighbors $u_i$ such that $u_i$ is free to $u$ or $\overline{c_o}(u_i)\in \{c(u_1),c(u_2),\ldots,c(u_6)\}$;
\item $|\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_3),
\overline{c_o}(u_4),\overline{c_o}(u_5),\overline{c_o}(u_6)\}|< 9-k$.
\end{enumerate}
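The counting behind statement (2) of the Claim can be made explicit (a sketch, not part of the original argument): a color for $u$ is forbidden only if it appears on a neighbor of $u$, or if it is the unique odd color $\overline{c_o}(u_i)$ of some neighbor $u_i$ that is not free to $u$. Hence the number of forbidden colors satisfies

```latex
% Sketch: counting forbidden colors for u under statement (2) of the Claim.
\[
  \underbrace{\bigl|\{c(u_1),\ldots,c(u_6)\}\bigr|}_{=\,k}
  \;+\;
  \underbrace{\bigl|\{\overline{c_o}(u_1),\ldots,\overline{c_o}(u_6)\}\bigr|}_{<\,9-k}
  \;<\; k+(9-k) \;=\; 9,
\]
```

so at least one color of $\lbrack 9 \rbrack$ remains for $u$; choosing it keeps the coloring proper and destroys no neighbor's unique odd color.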
{\bf Case 1} $c(u_1)=1,c(u_2)=c(u_6)=2,c(u_3)=c(u_5)=3,c(u_4)=2$.
By Claim, $\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_3),\overline{c_o}(u_4),
\overline{c_o}(u_5),\overline{c_o}(u_6)\}=\{4,5,6,7,8,9\}$. By Lemma \ref{32123}, we can recolor $u_1$ with color not in $\{c(u_1),c(u_2),c(u_3),c(u_4)\}$ such that $u_2$ and $u_6$ are free to $u$. Then color $u$ with a color in $[9]\setminus\{\overline{c_o}(u_1), \overline{c_o}(u_3),\overline{c_o}(u_4), \overline{c_o}(u_5),c(u_1),c(u_2),c(u_3),c(u_4)\}$.
Since $c(u_1)\neq c(u_4)$, $u$ always admits an odd color in $G$. Thus, $G$ admits an odd $9$-coloring, a contradiction.
{\bf Case 2} $c(u_1)=1,c(u_2)=c(u_6)=2,c(u_3)=c(u_5)=3,c(u_4)=4$, or $c(u_1)=1,c(u_2)=c(u_5)=3,c(u_6)=c(u_3)=2,c(u_4)=4$.
By Claim, $\{5,6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5),\overline{c_o}(u_6)\}=\varnothing$. Since $c(u_1)\neq c(u_2)\neq c(u_3)\neq c(u_4)$,
$\{\overline{c_o}(u_2), \overline{c_o}(u_3)\}$ occupies at most one color in $\{5,\ldots,9\}$ by Lemma \ref{1234}. If $\overline{c_o}(u_3)\in \{5,\ldots,9\}$, then $\{\overline{c_o}(u_1),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5),\overline{c_o}(u_6)\}=\{5,6,7,8,9\}$. If $\overline{c_o}(u_2)\in \{5,\ldots,9\}$, then $\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_4),\overline{c_o}(u_5),\overline{c_o}(u_6)\}=\{5,6,7,8,9\}$. If $\overline{c_o}(u_2), \overline{c_o}(u_3)\notin \{5,\ldots,9\}$, then $u$ admits an odd coloring in $G$ by~Claim.
In each case, $\overline{c_o}(u_5)\neq\overline{c_o}(u_6)$ and each of $\overline{c_o}(u_5)$ and $\overline{c_o}(u_6)$ lies in $\{5,6,7,8,9\}$, which contradicts Lemma \ref{1234} since $c(u_1)\neq c(u_6) \neq c(u_5) \neq c(u_4)$.
{\bf Case 3} $c(u_1)=1,c(u_2)=c(u_6)=c(u_4)=2,c(u_3)=3,c(u_5)=4$.
By Claim, $\{5,6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5),\overline{c_o}(u_6)\}=\varnothing$. Since $c(u_6)=c(u_2)\neq c(u_1)\neq c(u_3)\neq c(u_5)$,
$\overline{c_o}(u_i)$ for $i\in\{1,2,6\}$ occupies at most two different colors in $\{5,6,7,8,9\}$ by Lemma \ref{42123}. If $\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_6)\}$ occupies at most one color in $\{5,6,7,8,9\}$,
then $\{5,6,7,8,9\}\setminus\{\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5)\}\neq \emptyset$, which contradicts Claim. If $\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_6)\}$ occupies two colors in $\{5, 6, 7,8,9\}$, say $5$ and $6$, then $\{\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5)\}= \{7,8,9\}$. If $\{\overline{c_o}(u_2),\overline{c_o}(u_6)\}= \{5,6\}$, then $\{\overline{c_o}(u_6),\overline{c_o}(u_2),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5)\}=\{5,6,7,8,9\}$. If $\{\overline{c_o}(u_1),\overline{c_o}(u_6)\}= \{5,6\}$, then $\{\overline{c_o}(u_1),\overline{c_o}(u_6),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5)\}=\{5,6,7,8,9\}$. If $\{\overline{c_o}(u_1),\overline{c_o}(u_2)\}= \{5,6\}$, then $\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5)\}=\{5,6,7,8,9\}$.
In the first two cases, $\overline{c_o}(u_4)\neq \overline{c_o}(u_5)\neq \overline{c_o}(u_6)$ and each of $\overline{c_o}(u_4),\overline{c_o}(u_5)$ and $\overline{c_o}(u_6)$ is in $\{5,6,7,8,9\}$, which contradicts Lemma \ref{42123} since $c(u_6)=c(u_4)\neq c(u_5) \neq c(u_1) \neq c(u_3)$. In the last case, $\overline{c_o}(u_2)\neq \overline{c_o}(u_3)\neq \overline{c_o}(u_4)$ and each of $\overline{c_o}(u_2),\overline{c_o}(u_3)$ and $\overline{c_o}(u_4)$ is in $\{5,6,7,8,9\}$, which contradicts Lemma \ref{42123} since $c(u_2)=c(u_4)\neq c(u_1) \neq c(u_3) \neq c(u_5)$.
{\bf Case 4} $c(u_1)=1,c(u_2)=c(u_4)=3,c(u_6)=c(u_3)=2,c(u_5)=4$.
By Claim, $\{5,6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5),\overline{c_o}(u_6)\}=\varnothing$. Since $c(u_1)\neq c(u_6)\neq c(u_5)\neq c(u_4)$, $\{\overline{c_o}(u_5), \overline{c_o}(u_6)\}$ occupies at most one color in $\{5,\ldots,9\}$ by Lemma~\ref{1234}.
If $\overline{c_o}(u_6)\in \{5,\ldots,9\}$, then $\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_6)\}=\{5,6,7,8,9\}$. If $\overline{c_o}(u_5)\in \{5,\ldots,9\}$, then $\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5)\}=\{5,6,7,8,9\}$. If $\overline{c_o}(u_5), \overline{c_o}(u_6)\notin \{5,\ldots,9\}$, then $u$ admits an odd coloring in $G$ by~Claim.
In each case, $\overline{c_o}(u_2)\neq \overline{c_o}(u_3)\neq \overline{c_o}(u_4)$ and each of $\overline{c_o}(u_2),\overline{c_o}(u_3)$ and $\overline{c_o}(u_4)$ lies in $\{5,6,7,8,9\}$, which contradicts Lemma \ref{42123} since $c(u_2)=c(u_4)\neq c(u_1) \neq c(u_3) \neq c(u_5)$.
{\bf Case 5} $c(u_1)=1,c(u_2)=c(u_6)=2,c(u_3)=3,c(u_5)=4,c(u_4)=5$.
By Claim, $\{6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5),\overline{c_o}(u_6)\}=\varnothing$. By Lemma \ref{1234}, $\{\overline{c_o}(u_2), \overline{c_o}(u_3)\}$ occupies at most one color in $\{6,7,8,9\}$ due to $c(u_1)\neq c(u_2)\neq c(u_3)\neq c(u_4)$; $\{\overline{c_o}(u_5), \overline{c_o}(u_6)\}$ occupies at most one color in $\{6,7,8,9\}$ due to $c(u_1)\neq c(u_6) \neq c(u_5) \neq c(u_4)$. If $\overline{c_o}(u_2), \overline{c_o}(u_5)\in\{6,7,8,9\}$, then $\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_4),\overline{c_o}(u_5)\}=\{6,7,8,9\}$; if $\overline{c_o}(u_2), \overline{c_o}(u_6)\in\{6,7,8,9\}$, then $\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_4),\overline{c_o}(u_6)\}=\{6,7,8,9\}$. They contradict Lemma \ref{1234} since $c(u_3)\neq c(u_4)\neq c(u_5)\neq c(u_6)$ and Lemma \ref{42123} since $c(u_6)=c(u_2)\neq c(u_1)\neq c(u_3)\neq c(u_5)$. If $\overline{c_o}(u_3),\overline{c_o}(u_5)\in\{6,7,8,9\}$, then $\{\overline{c_o}(u_1),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5)\}=\{6,7,8,9\}$; if $\overline{c_o}(u_3),\overline{c_o}(u_6)\in\{6,7,8,9\}$, then $\{\overline{c_o}(u_1),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_6)\}=\{6,7,8,9\}$. They contradict Lemma~\ref{1234} since $c(u_2)\neq c(u_3)\neq c(u_4)\neq c(u_5)$. If $\{\overline{c_o}(u_2),\overline{c_o}(u_3), \overline{c_o}(u_5),\overline{c_o}(u_6)\}$ occupies at most one color in $\{6,7,8,9\}$, then $u$ admits an odd coloring by Claim.
{\bf Case 6} $c(u_1)=1,c(u_2)=3,c(u_6)=c(u_3)=2,c(u_5)=4,c(u_4)=5$.
In this case, $c(u_1)\neq c(u_2)\neq c(u_3)\neq c(u_4)$ and $c(u_1)\neq c(u_6) \neq c(u_5) \neq c(u_4)$. By the same argument as in Case 5, $\{6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_4),\overline{c_o}(u_5)\}=\varnothing$ or $\{6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_4),\overline{c_o}(u_6)\}=\varnothing$ or $\{6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5)\}=\varnothing$ or $\{6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_6)\}=\varnothing$. In the first case, we assume that $\overline{c_o}(u_1)=6,\overline{c_o}(u_2)=7,\overline{c_o}(u_4)=8,\overline{c_o}(u_5)=9$. Since $\overline{c_o}(u_1)=6$ and $c(u_2)\neq c(u_6)$, $\{6,2,3\}=\{c(v_{12}),c(v_1),c(v_2)\}$. Then $c(v_2)=6$ or $2$. If $c(v_2)=6$, then $\overline{c_o}(u_2)=6$, a contradiction. Thus, $c(v_2)=2$. By symmetry, $c(v_8)=2$. We color $u$ with $3$ and recolor $u_2$ with a color in $\lbrack9\rbrack\setminus\{1,2,3,6,7,\overline{c_o}(v_2),\overline{c_o}(v_3),\overline{c_o}(v_4)\}$. Since $\overline{c_o}(u_2)=7$, $c(v_4)=1$ or $7$. Since $\overline{c_o}(u_4)=8$, $c(v_6)=4$ or $8$. Then $u_3$ has an odd coloring. Since $\overline{c_o}(u_1)=6$, $c(v_{12})=3$ or $6$. Since $\overline{c_o}(u_5)=9$, $c(v_{10})=5$ or $9$. Then $u_6$ has an odd coloring. Then $\overline{c_o}(u_1)=6,\overline{c_o}(u_2)=7,\overline{c_o}(u_4)=8,\overline{c_o}(u_5)=9$. Thus, $G$ has an odd $9$-coloring, a contradiction. In the second case, $\overline{c_o}(u_1)\neq \overline{c_o}(u_6)$ and each of $\overline{c_o}(u_1)$ and $\overline{c_o}(u_6)$ lies in $\{6,7,8,9\}$, which contradicts Lemma \ref{1234} since $c(u_2)\neq c(u_1) \neq c(u_6) \neq c(u_5)$. In the last two cases, $\overline{c_o}(u_3)\neq \overline{c_o}(u_4)$ and each of $\overline{c_o}(u_3)$ and $\overline{c_o}(u_4)$ lies in $\{6,7,8,9\}$, which contradicts Lemma \ref{1234} since $c(u_2)\neq c(u_3) \neq c(u_4) \neq c(u_5)$.
{\bf Case 7} $c(u_1)=1,c(u_2)=3,c(u_6)=2,c(u_3)=4,c(u_5)=4,c(u_4)=5$.
In this case, $c(u_1)\neq c(u_2)\neq c(u_3)\neq c(u_4)$ and $c(u_1)\neq c(u_6) \neq c(u_5) \neq c(u_4)$. By the same argument as in Case 5, $\{6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_4),\overline{c_o}(u_5)\}=\varnothing$ or $\{6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_4),\overline{c_o}(u_6)\}=\varnothing$ or $\{6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5)\}=\varnothing$ or $\{6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_6)\}=\varnothing$. In the first two cases, $\overline{c_o}(u_1)\neq \overline{c_o}(u_2)$ and each of $\overline{c_o}(u_1)$ and $\overline{c_o}(u_2)$ lies in $\{6,7,8,9\}$, which contradicts Lemma \ref{1234} since $c(u_3)\neq c(u_2) \neq c(u_1) \neq c(u_6)$. In the third case, $\overline{c_o}(u_3)\neq \overline{c_o}(u_4) \neq \overline{c_o}(u_5)$ and each of $\overline{c_o}(u_3), \overline{c_o}(u_4)$ and $\overline{c_o}(u_5)$ lies in $\{6,7,8,9\}$, which contradicts Lemma \ref{42123} since $c(u_5)=c(u_3)\neq c(u_2) \neq c(u_4) \neq c(u_6)$. In the last case, $\overline{c_o}(u_1)\neq \overline{c_o}(u_6)$ and each of $\overline{c_o}(u_1)$ and $\overline{c_o}(u_6)$ lies in $\{6,7,8,9\}$, which contradicts Lemma \ref{1234} since $c(u_2)\neq c(u_1) \neq c(u_6) \neq c(u_5)$.
{\bf Case 8} $c(u_1)=1,c(u_2)=3,c(u_6)=c(u_4)=2,c(u_3)=4,c(u_5)=5$.
By Claim, $\{6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5),\overline{c_o}(u_6)\}=\varnothing$. By Lemma \ref{1234}, $\{\overline{c_o}(u_3),\overline{c_o}(u_4)\}$ occupies at most one color in $\{6,7,8,9\}$ due to $c(u_2)\neq c(u_3)\neq c(u_4)\neq c(u_5)$; $\{\overline{c_o}(u_1), \overline{c_o}(u_6)\}$ occupies at most one color in $\{6,7,8,9\}$ due to $c(u_2)\neq c(u_1) \neq c(u_6) \neq c(u_5)$. Thus, $\{6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_3),\overline{c_o}(u_5)\}=\varnothing$ or $\{6,7,8,9\} \setminus\{\overline{c_o}(u_6),\overline{c_o}(u_2),\overline{c_o}(u_3),\overline{c_o}(u_5)\}=\varnothing$ or $\{6,7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_4),\overline{c_o}(u_5)\}=\varnothing$ or $\{6,7,8,9\} \setminus\{\overline{c_o}(u_6),\overline{c_o}(u_2),\overline{c_o}(u_4),\overline{c_o}(u_5)\}=\varnothing$. The former two cases contradict Lemma \ref{1234} since $c(u_1)\neq c(u_2)\neq c(u_3)\neq c(u_4)$. The latter two cases contradict Lemma \ref{1234} due to $c(u_6)\neq c(u_1)\neq c(u_2)\neq c(u_3)$ and Lemma~\ref{42123} due to $c(u_6) = c(u_4)\neq c(u_3)\neq c(u_5)\neq c(u_1)$.
{\bf Case 9} $c(u_1)=1,c(u_2)=2,c(u_6)=6,c(u_3)=3,c(u_5)=5,c(u_4)=4$.
By Claim, $\{7,8,9\} \setminus\{\overline{c_o}(u_1),\overline{c_o}(u_2),\overline{c_o}(u_3),\overline{c_o}(u_4),\overline{c_o}(u_5),\overline{c_o}(u_6)\}=\varnothing$.
By Lemma \ref{1234}, $\{\overline{c_o}(u_i), \overline{c_o}(u_{i+1})\}$ occupies at most one color in $\{7,8,9\}$, where $1\leq i \leq 6$ and $u_{i+1}=u_1$ if $i=6$.
Thus, $\{\overline{c_o}(u_1), \overline{c_o}(u_3), \overline{c_o}(u_5)\}=\{7,8,9\}$ by symmetry.
We may assume that $\overline{c_o}(u_1)=7$, $\overline{c_o}(u_3)=8$, $\overline{c_o}(u_5)=9$ and $c(u_1)\neq c(u_2)\neq \ldots \neq c(u_6)$. Then $c(v_{12})=2$ or $7$, $c(v_2)=6$ or $7$, $c(v_4)=4$ or $8$, $c(v_6)=2$ or $8$, $c(v_8)=9$ or $6$, $c(v_{10})=9$ or $4$. Then color $u$ with $1$ and recolor $u_1$ with a color in $\lbrack9\rbrack\setminus\{1,2,6,7,c_o(v_1),c_o(v_2),c_o(v_{12})\}$. In each case, $u_2,u_4$ and $u_6$ have at least one odd color, $\overline{c_o}(u_1)=1,\overline{c_o}(u_3)=8,\overline{c_o}(u_5)=9$, and $u$ has an odd color, a contradiction.
\end{proof}
Now we are ready to complete the proof of Theorem~\ref{th1}. Let each $v\in V(G)$ have an initial charge of $\mu(v)=d(v)-6$ and each $f\in F(G)$ have an initial charge of $\mu(f)=2d(f)-6$. By Euler's Formula, $|V(G)|+|F(G)|-|E(G)|\geq 0$. Then $\sum_{v\in V }\mu(v)+\sum_{f\in F }\mu(f)\leq 0$.
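For completeness, the sign of the total initial charge follows from the handshaking identities $\sum_{v\in V(G)}d(v)=2|E(G)|$ and $\sum_{f\in F(G)}d(f)=2|E(G)|$:

```latex
% Total initial charge, computed from the handshaking identities.
\[
  \sum_{v\in V(G)}\mu(v)+\sum_{f\in F(G)}\mu(f)
  =\bigl(2|E(G)|-6|V(G)|\bigr)+\bigl(4|E(G)|-6|F(G)|\bigr)
  =-6\bigl(|V(G)|+|F(G)|-|E(G)|\bigr)\le 0 .
\]
```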
Let $\mu^*(x)$ denote the final charge of $x\in V(G)\cup F(G)$ after the discharging procedure. To obtain a contradiction, we shall prove that $\sum_{x\in V(G)\cup F(G)} \mu^*(x) > 0$. Since the total sum of charges is unchanged by the discharging procedure, this contradiction proves Theorem~\ref{th1}.
We use the following discharging rules:
\begin{enumerate}[(R1)]
\item Every $4^{+}$-face sends $1$ to each incident $5$-vertex.
\item Every $8^{+}$-vertex $u$ sends $\frac{3}{8}$ to each adjacent $5$-vertex $v$ if both faces incident with the edge $uv$ are $3$-faces.
\end{enumerate}
By Lemma \ref{minimum degree}, $\delta (G)\geq 5$. Then we check that the final charge of each $5^+$-vertex and each face is nonnegative.
\begin{enumerate}[1.]
\item Let $f$ be a $3$-face. By the rules, $f$ is not involved in any discharging. Thus, $\mu^*(f)=2d(f)-6=0$.
\item Let $f$ be a $4^+$-face. By Lemma \ref{oddvet-non-adja}, no two $5$-vertices are adjacent. Then $f$ is incident with at most $\frac{d(f)}{2}$ $5$-vertices. By (R1), $f$ sends $1$ to each incident $5$-vertex. Thus, $\mu^*(f)=2d(f)-6-\frac{d(f)}{2}\geq0$. Note that $\mu^*(f)=0$ if and only if $d(f)=4$ and $f$ is incident with two $5$-vertices.
\item Let $v$ be a $5$-vertex. If $v$ is incident with at least two $4^+$-faces, then each of these faces sends $1$ to $v$ by (R1). Thus, $\mu^*(v)=d(v)-6+2>0$. If $v$ is incident with exactly one $4^+$-face, then $v$ is incident with four $3$-faces. Let $v_1,v_2,\ldots,v_5$ be the neighbors of $v$, and let the face incident with $v_1vv_2$ be the $4^+$-face. By Lemma \ref{oddvet-non-adja}, each of $v_1,\ldots,v_5$ is a $6^+$-vertex. By Lemma \ref{5-vetx}, at least one of $v_3$ and $v_4$ is an $8^+$-vertex. By (R2), this $8^+$-vertex sends $\frac{3}{8}$ to $v$. Thus, $\mu^*(v)=d(v)-6+1+\frac{3}{8}>0$. Otherwise, each face incident with $v$ is a $3$-face. By Lemma \ref{5-vetx}, at least one of $v_{i}$ and $v_{i+1}$ is an $8^+$-vertex for each $i\in\{1,2,3,4,5\}$, where $v_6=v_1$. Thus, at least three of $v_1,v_2,\ldots,v_5$ are $8^+$-vertices. By (R2), $\mu^*(v)=d(v)-6+\frac{3}{8}\times3>0$.
\item Let $v$ be a $6$- or $7$-vertex. By the rules, $v$ is not involved in the discharging procedure. Thus, $\mu^*(v)=d(v)-6=0$ if $d(v)=6$, and $\mu^*(v)=d(v)-6>0$ if $d(v)=7$.
\item Let $v$ be an $8^+$-vertex. By (R2), $v$ sends $\frac{3}{8}$ to each adjacent $5$-vertex $u$ such that the edge $uv$ is incident with two $3$-faces. Let $v_2$ be such a $5$-neighbor of $v$, and let $vv_2$ be incident with the two $3$-faces $[vv_2v_1]$ and $[vv_2v_3]$. By Lemma \ref{oddvet-non-adja}, no two $5$-vertices are adjacent, so each of $v_1$ and $v_3$ is a $6^+$-vertex. Thus, $v$ sends charge to at most $\frac{d(v)}{2}$ $5$-neighbors, and $\mu^*(v)\geq d(v)-6-\frac{3}{8}\times\frac{d(v)}{2}=\frac{13d(v)-96}{16}>0$ since $d(v)\geq 8$.
\end{enumerate}
From our hypothesis and the above discharging procedure, we know that the following configurations yield positive total final charge and thus cannot be contained in $G$. That is, if $G$ has a $7^+$-vertex or a $5$-vertex, then $\sum_{x\in V(G)\cup F(G)} \mu^*(x) > 0$; if $G$ has a $5^+$-face or a $4$-face incident with at most one $5$-vertex, then $\sum_{x\in V(G)\cup F(G)} \mu^*(x) > 0$; if $G$ has a $4$-face incident with two $5$-vertices, then these two $5$-vertices have final charge more than $0$, so again $\sum_{x\in V(G)\cup F(G)} \mu^*(x) > 0$. Thus, $G$ has only $6$-vertices and $3$-faces. By Lemma \ref{6-v}, $G$ has no such configuration, a contradiction.
\section{Cookie Monster wants to play games}
Cookie Monster likes cookies. His mommy used his love for cookies to teach him to think and to play some mathematical games. She set up the following system with cookies. Cookie Monster's Mommy has a set of $k$ jars filled with cookies. In one move she allows Cookie Monster to choose any subset of jars and take the same number of cookies from each of those jars. Cookie Monster always wants to empty all of the jars in as few moves as possible.
For example, if there are three jars with 1, 2, and 4 cookies he needs three moves. He can empty them one jar at a time. Or he can take one cookie from all of the jars in the first move, after that he will still need two more moves. But if the three jars have 1, 2, and 3 cookies, he can empty them in two moves. In the first move he can take one cookie from the first jar and the third jar. After that the two non-empty jars have 2 cookies in each. So he can empty the whole set of jars in one more move.
If there are $k$ jars with distinct numbers of cookies, it is always possible to empty them in $k$ moves. Cookie Monster's Mommy tries to make it interesting and sets up the jars so that it is possible to empty $k$ jars with distinct numbers of cookies in fewer than $k$ moves. For example, once she arranged the Fibonacci sequence of cookies in jars: $\{1,2,3,5,8\}$. Cookie Monster figured out how to empty the jars in 3 moves.
Cookie Monster's Mommy tries to invent interesting sequences of numbers to use as the number of cookies in the jars, and Cookie Monster tries to find the smallest number of moves. This is like a game.
But still, something was missing in this game. His mommy was in charge of the cookies, and he tried to solve her puzzles. Cookie Monster realized that he wants a game with his mommy, where he feels equal: a game in which two people have the same options at each move.
\subsection{Authors' comments}
Mathematicians now call the smallest number of moves for a given set $S$ of cookies in jars the Cookie Monster number of the set $S$. It is denoted as $\CM (S)$. The problem of finding the Cookie Monster number of a set of jars is called the Cookie Monster Problem. The problem first appeared in \textit{The Inquisitive Problem Solver} by Vaderlind, Guy, and Larson \cite{VGL}. Mathematicians got interested and wrote papers about the cookie monster problem and the cookie monster number \cite{B, BK, BrK}. Now eating cookies is not enough for the monster. The mathematical name for his new interest is an \textit{impartial combinatorial game}, a game in which two players each have the same moves available on any turn.
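For small sets, $\CM(S)$ can be computed by exhaustive search over all moves. Here is a brute-force sketch; the function name \texttt{cookie\_monster\_number} is ours, and memoization keeps the search feasible only for small jar counts:

```python
from functools import lru_cache
from itertools import combinations

def cookie_monster_number(jars):
    """Minimum number of moves to empty all jars, where one move removes
    the same number of cookies from each jar in a chosen subset."""
    @lru_cache(maxsize=None)
    def cm(state):  # state: sorted tuple of non-empty jar sizes
        if not state:
            return 0
        best = len(state)  # emptying one jar per move always works
        n = len(state)
        for r in range(1, n + 1):
            for subset in combinations(range(n), r):
                for amount in range(1, min(state[i] for i in subset) + 1):
                    nxt = list(state)
                    for i in subset:
                        nxt[i] -= amount
                    nxt = tuple(sorted(x for x in nxt if x > 0))
                    best = min(best, 1 + cm(nxt))
        return best

    return cm(tuple(sorted(jars)))
```

This reproduces the examples above: jars $\{1,2,4\}$ need three moves, $\{1,2,3\}$ need two, and the Fibonacci jars $\{1,2,3,5,8\}$ need three.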
As we will see, Cookie Monster discovers the games of Marienbad and Nim in Sections~\ref{marienbad}~and~\ref{nim}. In Section~\ref{twojars}, Cookie Monster invents how to convert the Cookie Monster problem into a game. Cookie Monster tries the simplest case with two jars first, and then he finds out that the game is already known as Wythoff's game. In Section~\ref{cmgame}, Cookie Monster examines the Cookie Monster game with three jars, which is a previously unknown game. In Sections~\ref{sumofgames}, \ref{oddgame}, and \ref{othergames}, Cookie Monster invents many variations of the Cookie Monster game and calculates their P-positions. In Section~\ref{bigpicture} Cookie Monster discovers properties of P-positions of all the games and finds out that the maximum element in a P-position is bounded in terms of other elements.
\section{The Game of Marienbad}\label{marienbad}
Cookie Monster started bugging his mommy for a game. But mommy wanted to watch a movie. Then Cookie Monster's Mommy said, ``I heard there is a game in the movie. Let me watch the movie; I will remember the game and then we can play it.'' Cookie Monster agreed. He even tried to watch the movie himself. The movie was called ``Last Year at Marienbad.'' But unfortunately, the movie wasn't animated, and even worse, it was black and white. He became bored in two minutes. But his mother promised to call him each time the game was played.
The mysterious man in the movie introduced the game by saying, ``I know a game I always win.'' The second man replied, ``If you can't lose, it's no game.'' And the mysterious man said, ``I can lose, but I always win.'' It intrigued Cookie Monster.
You can use any objects to play the game. In the movie they sometimes played it with matches. That was so cool. Cookie Monster's Mommy didn't allow him to touch matches. In the movie the men didn't light the matches, but still it was cool. There were four rows of matches, with 1, 3, 5, and 7 matches in the rows, as in Figure~\ref{fig:marienbad}. In one move, a player can choose a row and take any number of matches from that row. The player who is forced to take the last match loses. The mysterious man in the movie always won.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.4]{Marienbad.png}
\caption{Initial setup in the game of Marienbad}\label{fig:marienbad}
\end{figure}
After the movie Cookie Monster disappeared, and his Mommy didn't go looking for him; she had so many other more important things to do than play games. Meanwhile, Cookie Monster tried to figure out the game. He realized that at the end of the game, when only piles of one match remained, he would win if he left an odd number of such piles. This means that if all but one of the piles have one match, he has a winning strategy: he can take either all of the matches from the largest pile, or all but one match from it to make sure that an odd number of piles of one match are left. This is his end-game.
Cookie Monster started analyzing the game from the end. He denotes a position using a list of numbers in parentheses to represent the number of matches in each pile. The starting position is (1,3,5,7). He figured out that if after his move the numbers in piles are doubled, as in (3,3) or (1,1,5,5), he can win. Whatever his opponent does, Cookie Monster can keep this double property, and eventually, Cookie Monster will be able to use his end-game strategy. Cookie Monster decided to call the positions that he needs to move to in order to win the game P-positions, from the word \textbf{p}ositive. He feels good and positive moving to a P-position. In addition, Cookie Monster called the other positions N-positions from the word \textbf{n}egative, since he does not want to move into these positions.
Cookie Monster tried to play with himself and found another P-position: (1,2,3). If he finishes in this position, he is guaranteed to win. He continued his search and found two more P-positions: (1,4,5) and (2,4,6). Curiously, the latter is a double of the P-position (1,2,3).
He found more P-positions, but he kept forgetting the exact numbers. The only thing that he remembers is if the first player starts removing one match from (1,3,5,7), he needs to remove one match from any other pile.
Now he is ready to play with his mommy.
\subsection{Authors' comments}
The game in the movie is known as the game of Marienbad. It is a variation of another game called Nim. In the game of Nim you can have any initial position. In addition, the winning condition for Nim is different: the person who makes the last move wins. The variation in the movie, where the person who takes the last match loses, is called mis\`{e}re.
The notion of P-positions and N-positions exists in game theory. Cookie Monster reinvented these names with the same meaning. But in game theory, the name ``P-position'' actually comes from the fact that the \textbf{p}revious player wins after leaving such a position, assuming the use of an optimal strategy. Similarly, the name ``N-position'' is due to the fact that the \textbf{n}ext player wins with the correct strategy.
Not counting the very end of the game---when only piles of size one remain---the P-positions are the same for the regular Nim and the mis\`{e}re variation. Thus, the strategy is nearly the same for both versions; the only difference is in the end-game: In the standard game of Nim, a player needs to leave an even number of piles of size one (and no other piles) in order to win. In the mis\`{e}re variation, a player needs to leave an odd number of piles of size one in order to win.
To denote a position in a game, Cookie Monster uses an ordered tuple of numbers in parentheses. The starting position in Marienbad is (1,3,5,7), which is a P-position. The P-positions that Cookie Monster has trouble remembering are: (2,5,7), (3,4,7), (3,5,6), (1,2,4,7), (1,2,5,6), and (1,3,4,6).
There is actually a formula for P-positions of Nim; we will show it in the next section.
\section{Cookie Monster Plays Nim}\label{nim}
Now Cookie Monster is ready to play with his mommy. And mommy agreed to play with matches. Hooray! They started playing the game as it was set up in the movie and mommy was first to move. After Cookie Monster's Mommy lost several games she asked to start second. Cookie Monster couldn't delay this moment forever, so he agreed. That was a challenge. But Cookie Monster decided to start with a small move: the more matches there are on the table, the more difficult it is for his mom to figure out the winning strategy. Also, the longer the game goes on, the more moves both of them make, and the more chances there are for mommy to make a mistake.
Cookie Monster won again, and wanted to be the second player. His mommy realized that something fishy was going on: that the second player has an advantage. She refused to be the first player. Cookie Monster didn't want to risk losing, so he offered to play the standard game of Nim instead: the person who does not have a move loses. It seems the winning condition is the opposite, so one might think that the advantage moves from the second player to the first player.
Mommy agreed to be the first player, and she lost three times in a row. She wanted to be the second player again. To divert her, Cookie Monster suggested a ``fair'' game: they will both choose something. His mommy chooses a starting position, then Cookie Monster chooses who goes first. Of course, Cookie Monster knew that this game was not fair. This is because Cookie Monster will choose to be second if his mommy chooses a P-position, and first otherwise. This way he would always win.
Cookie Monster's Mommy is very smart. She lost every game because she was busy baking cookies and did not have enough time to think about the game. This time, however, she saw right through him and didn't agree to his ``fair'' suggestion. Oh well. At least he can try this game on his friends.
\subsection{Authors' comments}
There is a formula for P-positions in the game of Nim. From now on we will talk about the standard game, where the player who cannot move loses. This means that the position with all piles empty is a P-position.
In addition, we want to emphasize that any move from every P-position must end up in an N-position. And there should exist at least one move from every N-position to a P-position.
To state the formula, first we need to define the \textit{nim-sum}, or \textit{bitwise XOR} operation: $\oplus$. Nim-sum can be described as a binary addition without carry. In other words, the nim-sum of two numbers is produced by representing each number as a sum of distinct powers of two, canceling each power of two that appears twice, and adding the remaining powers of two. The nim-sum is a commutative and associative operation.
\begin{theorem}[Bouton]
The P-positions in the game of Nim are formed by a set of numbers with nim-sum zero.
\end{theorem}
The proof can be found in many places \cite{BCG, Bouton}. We just want to mention that Cookie Monster is very perceptive. He noticed that P-position (2,4,6) is the doubling of each pile of the P-position (1,2,3). By the nim-sum property this is always true: doubling each element in a P-position will result in a P-position.
Another observation of Cookie Monster was that if someone removes one match from one of the piles of the starting position (1,3,5,7), then to get to the next P-position he needs to remove one match from any other pile. This strategy works for any starting P-position where all piles are odd.
Note the following corollary:
\begin{cor}
Given the number of matches in all but one of the piles, there is a unique value of the number of matches in that pile that makes the position a P-position.
\end{cor}
\begin{proof}
The last pile is the nim-sum of the other piles.
\end{proof}
\section{Cookie Monster Game with 2 Jars}\label{twojars}
Cookie Monster played the game of Nim with his friends and always won. But then he said to himself, ``There is a Cookie Monster problem, there should be a Cookie Monster game.''
Here is how the Cookie Monster problem is converted to a two-player game. There are several jars filled with cookies. In one move a player chooses any subset of jars and takes the same number of cookies from each of those jars. The player who cannot move loses.
Cookie Monster decided to study his own game. If there is one jar and he starts, he can win in one move by taking all the cookies. What happens if there are two jars?
If it were Nim with two jars, then the same number of cookies in both jars $(n,n)$ would constitute a P-position. But in the Cookie Monster game you can empty those two jars in one move, so this must be an N-position.
Cookie Monster calculated some small P-positions: $(1,2)$, $(3,5)$, $(4,7)$, $(6,10)$ and $(8,13)$. Some of them are pairs of consecutive Fibonacci numbers. Then he remembered where he had seen such pairs before: this is how he converts miles to kilometers.
Suppose you want to convert miles to kilometers. Take the number of miles, represent it as a sum of different Fibonacci numbers, then replace each Fibonacci number with the next one and sum the numbers to get your conversion. For example, 6---as a sum of different Fibonacci numbers---is $5+1$. Replacing each number by the next Fibonacci number, he finds that the new sum is $8+2=10$. The reverse conversion is similar: you just need to take the previous Fibonacci number instead of the next one.
\subsection{Authors' comments}
The Cookie Monster game with two jars was invented in 1907 and is called Wythoff's Game. The Cookie Monster game for more than two jars first appeared in literature more than 100 years later in 2013 \cite{B}.
Cookie Monster's method of converting miles to kilometers involves the Zeckendorf representation of a positive integer. To find the \textit{Zeckendorf representation}, take an integer $n$, subtract the largest Fibonacci number not greater than $n$, and repeat. As a result, $n$ is represented as a sum of $j$ distinct Fibonacci numbers: $n= F_{i_1} + F_{i_2} + \ldots + F_{i_j}$, where $i_1 < i_2 < \ldots < i_j$. If one of the Fibonacci numbers used is $1$, its index is defined to be $2$. In fact, there are no two neighboring Fibonacci numbers in this representation \cite{Z}. By construction, this representation is unique. Now we define the \textit{Fibonacci successor} of $n$, $\sigma(n)$, by shifting the index of Fibonacci numbers in the Zeckendorf representation \cite{CF}:
$$
\sigma(n) = F_{i_1+1} + F_{i_2+1} + \ldots + F_{i_j+1}.
$$
The trick of converting miles to kilometers is based on the fact that the ratio of kilometers to miles is 1.609, which is very close to the golden ratio, 1.618. The ratio $F_{n+1}/F_n$ is very close to the golden ratio for $n \geq 5$. Thus for $n > 10$, the Fibonacci successor $\sigma(n)$ is very close to $1.6n$. This justifies the miles to kilometers conversion.
In order to reverse the conversion, converting kilometers to miles, we can use the same principle, but in reverse. However, this does not quite work because not every number is a successor of another number. Only numbers that do not have $1$ in their Zeckendorf representations are successors. But we still can convert kilometers to miles. If the number of kilometers has 1 in its Zeckendorf representation we can either keep it or replace by 0. In an approximate calculation, it does not matter.
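The Zeckendorf representation and the Fibonacci successor can be sketched in a few lines; this is a greedy implementation, and the function names are ours:

```python
def zeckendorf(n):
    """Greedy Zeckendorf representation: distinct, non-neighboring
    Fibonacci numbers summing to n, listed largest first."""
    fibs = [1, 2]
    while fibs[-1] + fibs[-2] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    parts = []
    for f in reversed(fibs):
        if f <= n:
            parts.append(f)
            n -= f
    return parts

def fibonacci_successor(n):
    """sigma(n): shift every Fibonacci number in the Zeckendorf
    representation of n to the next Fibonacci number."""
    fibs = [1, 2]
    while fibs[-1] <= n:  # build the list one term past n
        fibs.append(fibs[-1] + fibs[-2])
    return sum(fibs[fibs.index(f) + 1] for f in zeckendorf(n))
```

For example, $6 = 5 + 1$, so $\sigma(6) = 8 + 2 = 10$: six miles is about ten kilometers.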
The following well-known theorem describes P-positions in the Wythoff's Game (see \cite{Co, Wy}).
\begin{theorem}
All P-positions can be described in the form $(n,\sigma(n))$. Every positive integer appears in exactly one P-position. If the index $i_1$ of the smallest Fibonacci number in the Zeckendorf representation of $n$ is even, then $n$ is the first number, otherwise it is the second number.
\end{theorem}
Cookie Monster was correct in observing that some P-positions involved consecutive Fibonacci numbers. The corollary describes exactly which Fibonacci pairs are P-positions.
\begin{cor}
$(F_{2n}, F_{2n+1})$ is a P-position, and no other P-positions involve Fibonacci numbers.
\end{cor}
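These statements can be verified for small positions by a direct game-tree search, assuming nothing beyond the rules of Wythoff's game; a sketch (the function name is ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wythoff_is_p(a, b):
    """P-position test for Wythoff's game by direct search: a position
    is a P-position iff every available move leads to an N-position."""
    a, b = min(a, b), max(a, b)  # the two jars are interchangeable
    if (a, b) == (0, 0):
        return True  # the player to move has no move and loses
    for t in range(1, b + 1):
        if wythoff_is_p(a, b - t):  # take t from the larger jar
            return False
        if t <= a and (wythoff_is_p(a - t, b)          # smaller jar
                       or wythoff_is_p(a - t, b - t)):  # both jars
            return False
    return True
```

The search confirms Cookie Monster's list $(1,2)$, $(3,5)$, $(4,7)$, $(6,10)$, $(8,13)$, and also that the odd-indexed Fibonacci pair $(5,8)$ is not a P-position, in agreement with the corollary.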
\section{Cookie Monster Game}\label{cmgame}
Cookie Monster started studying his own game with 3 jars. He wrote a program and found some P-positions. The P-positions with one empty jar are the same as in the Wythoff game. Here are some more P-positions in which all the jars are non-empty and each jar has fewer than 10 cookies: $(1, 1, 4)$, $(1, 3, 3)$, $(1, 5, 6)$, $(2, 2, 6)$, $(2, 3, 8)$, $(2, 7, 7)$, $(3, 4, 4)$, $(3, 6, 9)$, $(5, 5, 7)$, $(5, 8, 8)$. As the jars are permutable, we only need to write one of the permutations.
It was difficult to calculate these positions, and Cookie Monster went online to try to find some literature on the subject, but found only one paper. M.~Belzner~\cite{B} had already studied the Cookie Monster game with 3 jars. She tried to calculate P-positions for the case when one of the three jars contains 1 cookie and the other two are not empty. But the problem is so difficult that she made a mistake. She listed $(1,7,9)$ as a P-position. Cookie Monster can prove that $(1,7,9)$ is not a P-position: he can take 7 cookies from each of the last two jars (and eat all 14 of them) to get to $(1,0,2)$, which is the P-position $(0,1,2)$ up to permuting the jars. Consequently, all the following P-positions in~\cite{B} are wrong. The author probably solved this by hand. It is good that Cookie Monster can program.
\subsection{Authors' comments}
To our knowledge, \cite{B} is the only paper studying P-positions of the Cookie Monster game with more than 2 jars. That makes Cookie Monster the first person to correctly calculate the P-positions in this game.
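Cookie Monster's check of $(1,7,9)$ is easy to reproduce with a memoized search over all permissible moves; a sketch (names are ours):

```python
from functools import lru_cache
from itertools import combinations

# In the Cookie Monster game every non-empty subset of jars is permissible.
SUBSETS = [s for r in (1, 2, 3) for s in combinations(range(3), r)]

@lru_cache(maxsize=None)
def cm_is_p(pos):
    """A position is a P-position iff every move leads to an N-position."""
    for s in SUBSETS:
        for t in range(1, min(pos[i] for i in s) + 1):
            nxt = list(pos)
            for i in s:
                nxt[i] -= t
            if cm_is_p(tuple(nxt)):
                return False
    return True
```

The search confirms that $(1,7,9)$ is an N-position, while $(0,1,2)$, $(1,1,4)$, and $(2,7,7)$ from the list above are P-positions.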
\section{Nim and the First Two Jars}\label{sumofgames}
Cookie Monster realized how difficult the problem is and decided to invent other games. In Nim, each move involved one pile. In the Cookie Monster game, each move involves any subset of the piles. So Cookie Monster decided to do something in between. Each game will have \emph{permissible sets of jars} that the players can take from. In Nim, only sets of size 1 are permissible. In the Cookie Monster game, every subset of jars is permissible. His new games are between Nim and Cookie Monster. So all sets of size 1 are permissible, but not all subsets are permissible. The games can be generated by adding permissible sets to Nim or subtracting permissible sets from the Cookie Monster game.
Cookie Monster realized that there were many different games that he could make, so he invented a common name for them. He decided to call these new games \emph{Cookie-Monster-Nim games}, or \emph{CM-Nim games} for short.
Cookie Monster first decided to add exactly one set of jars from which he is allowed to take the same number of cookies, since he thought adding fewer moves would make it easier to find P-positions.
First he tries to play the following game: You are allowed to take any number of cookies from individual jars. You are also allowed to take the same number of cookies from both the first and the second jar. The P-positions with one empty jar will either be the same as Nim or as Wythoff P-positions, depending on which jar is empty. So he wrote a program to calculate some more P-positions where none of the jars is empty. He calculated the P-positions where none of the jars has more than six cookies: as the first and the second jars are interchangeable, we only need to write the positions such that the number of cookies in the first jar is not greater than the number in the second jar. These are the P-positions: $(1, 1, 2)$, $(1, 3, 4)$, $(1, 4, 5)$, $(1, 5, 3)$, $(2, 2, 1)$, $(2, 3, 5)$, $(2, 4, 3)$, $(2, 5, 4)$, $(3, 3, 6)$, $(3, 4, 2)$, $(3, 6, 1)$, $(4, 5, 6)$, $(6, 6, 3)$.
Once again Cookie Monster went online and discovered that this game is actually known: it is the sum of two games: Wythoff's game plus Nim with one jar. So Cookie Monster lost interest in it.
\subsection{Authors' Comments}
The \textit{sum} of two given games is defined as follows: Two players are playing two games against one another. On each move a player decides which game to play and makes one move.
To find the P-positions of this game, we use the following theorem, which holds for all impartial combinatorial games.
\begin{theorem}[Sprague-Grundy Theorem]
Every position of any impartial game is equivalent to a Nim heap.
\end{theorem}
The proof of this theorem \cite{Grundy, Sprague} is fairly technical, so we will omit it here. Accordingly, there is a Sprague-Grundy function, which takes a position of an impartial game and gives the equivalent Nim heap, which is called a nimber.
The nimber equivalent to the sum of several games is the nim-sum of the nimbers of the positions in each game. In order for this sum of games to be a P-position, the nim-sum of the nimbers of each individual game must be equal to zero.
In his new game, Cookie Monster cannot take from the last jar together with either of the first two jars. So this game is equivalent to the sum of Wythoff's game---the first two jars---and Nim with one jar---the third jar.
The Sprague-Grundy function of Nim with one jar is, of course, the nimber of the same size. The Sprague-Grundy function of the Wythoff game is more complicated, and no explicit formula is known, although many properties have been deduced \cite{BF}. In P-positions in Cookie Monster's new game, the number of cookies in the third jar is exactly the Sprague-Grundy function of Wythoff's game with the first two jars.
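The Sprague-Grundy function of Wythoff's game can be computed by the standard mex recursion; a small sketch (the name \texttt{wythoff\_grundy} is ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wythoff_grundy(a, b):
    """Sprague-Grundy value (nimber) of a Wythoff position: the mex of
    the Grundy values of all positions reachable in one move."""
    a, b = min(a, b), max(a, b)
    reachable = set()
    for t in range(1, b + 1):
        reachable.add(wythoff_grundy(a, b - t))
        if t <= a:
            reachable.add(wythoff_grundy(a - t, b))
            reachable.add(wythoff_grundy(a - t, b - t))
    g = 0
    while g in reachable:  # mex: minimum excluded value
        g += 1
    return g
```

The listed P-positions of the sum game, such as $(1,1,2)$, $(2,3,5)$, and $(3,3,6)$, match this computation: the third coordinate equals the Grundy value of the first two jars.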
\section{Nim and All Three Jars}\label{oddgame}
Cookie Monster tried a different game: the one where, as an additional move, you are allowed to take the same number of cookies from all three jars. He calls this the \emph{odd CM-Nim game}, as you always take from an odd number of jars. He discovered that the P-positions are exactly the same as in the game of Nim. How could this be? If we add more moves, shouldn't the number of P-positions decrease? Actually, there are infinitely many P-positions, but still he feels that the number of P-positions should decrease in some sense.
\subsection{Authors' Comments}
The notion of an odd CM-Nim game can be extended to any number of jars. In one move, a player is allowed to take the same number of cookies from an odd number of jars. Cookie Monster was correct in thinking that P-positions of an odd game are the same as P-positions of Nim.
\begin{theorem}
The odd CM-Nim game with $k$ jars has the same P-positions as Nim with $k$ piles.
\end{theorem}
\begin{proof}
If the nim-sum of all the piles is zero, then one move cannot preserve this property. Indeed, suppose $c > 0$ cookies are taken from each pile in an odd-size set of piles, and let $k$ be the position of the lowest one-bit of $c$. Since all bits of $c$ below position $k$ are zero, subtracting $c$ from a pile flips bit $k$ of that pile. Flipping bit $k$ of an odd number of piles flips bit $k$ of the nim-sum, so the nim-sum cannot remain zero. That is, any move from a P-position of Nim goes to an N-position of Nim. At the same time, in Nim there exists a move from any N-position to a P-position, and the same move exists in the odd Cookie Monster game. That means the P-positions of the odd Cookie Monster game are the same as the P-positions of Nim.
\end{proof}
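The theorem can be confirmed for small positions by comparing a direct search of the odd CM-Nim game against the nim-sum criterion; a sketch for 3 jars:

```python
from functools import lru_cache

# Odd CM-Nim with 3 jars: permissible sets contain an odd number of jars.
ODD_SETS = [(0,), (1,), (2,), (0, 1, 2)]

@lru_cache(maxsize=None)
def odd_is_p(pos):
    """P-position test by direct search over the odd-size permissible sets."""
    for s in ODD_SETS:
        for t in range(1, min(pos[i] for i in s) + 1):
            nxt = list(pos)
            for i in s:
                nxt[i] -= t
            if odd_is_p(tuple(nxt)):
                return False
    return True
```

For every position with at most four cookies per jar, the search agrees with the rule "P-position iff nim-sum is zero."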
Using similar reasoning, we can prove the following theorem:
\begin{theorem}
If you add moves to a game, and every added move available from a P-position of the old game leads to an N-position of the old game, then the new game has the same set of P-positions as the old game.
\end{theorem}
\section{Other Games with 3 Jars}\label{othergames}
What other moves could Cookie Monster add to the game of Nim or subtract from the Cookie Monster game to invent other new games? Not to miss anything, Cookie Monster first looks at the games when you can take from at most two jars. In addition to taking from one jar at a time, you are allowed to take from two jars. It produces two more games. For each game there is a list of permissible sets. To differentiate permissible sets of jars from P-positions, Cookie Monster uses curly brackets for the sets of jars.
\begin{itemize}
\item \textbf{At-Most-2-Jars}: From any two jars: $\{1,2\}$, $\{1,3\}$, and $\{2,3\}$.
\item \textbf{Consecutive-At-Most-2-Jars}: $\{1,2\}$ and $\{2,3\}$.
\end{itemize}
What if the jars are not consecutive? Cookie Monster can relabel the jars any way he wants, so the game with permissible sets $\{1,2\}$ and $\{1,3\}$ is the same as Consecutive-At-Most-2-Jars.
Now Cookie Monster looks at the games where you are allowed to take from all three jars:
\begin{itemize}
\item \textbf{Consecutive-Include-First-Jar}: $\{1,2\}$, and $\{1,2,3\}$.
\item \textbf{Consecutive}: $\{1,2\}$, $\{2,3\}$ and $\{1,2,3\}$.
\end{itemize}
It is easy to calculate P-positions when one of the jars is empty because there are two jars left, and the game reduces to either Nim or Wythoff. Cookie Monster decides to compare the games by finding P-positions with 2 ones. He arranged them in Table~\ref{tbl:twoones}.
\begin{table}[htbp]
\begin{center}
\begin{tabular}[htbp]{| l | r |}
\hline
Nim & (0,1,1) (1,0,1) (1,1,0) \\ \hline
Wythoff plus Nim & (0,1,1) (1,0,1) (1,1,2)\\ \hline
Odd Cookie Monster & (0,1,1) (1,0,1) (1,1,0) \\ \hline
Cookie Monster & (4,1,1) (1,4,1) (1,1,4) \\ \hline
At-Most-2-Jars & (1,1,1) \\ \hline
Consecutive-At-Most-2-Jars & (3,1,1) (1,0,1) (1,1,3) \\ \hline
Consecutive-Include-First-Jar & (0,1,1) (1,0,1) (1,1,2) \\ \hline
Consecutive & (3,1,1) (1,0,1) (1,1,3) \\
\hline
\end{tabular}
\end{center}
\caption{P-positions in CM-Nim games with 2 ones}\label{tbl:twoones}
\end{table}
Cookie Monster is a proud programmer. He calculated more P-positions in hopes of finding patterns. These are the P-positions that he found where each jar is non-empty and has at most 6 cookies:
\begin{itemize}
\item At-Most-2-Jars. The P-positions are sorted, as the order does not matter: $(1, 1, 1)$, $(1, 3, 4)$, $(2, 2, 2)$, $(2, 3, 6)$, $(2, 5, 7)$, $(3, 3, 3)$, $(4, 4, 4)$, $(5, 5, 5)$, $(6, 6, 6)$.
\item Consecutive-At-Most-2-Jars. We can switch the first and the third jars in the P-positions, so the list has only P-positions, where the first jar has no more cookies than the last jar: $(1, 1, 3)$, $(1, 3, 2)$, $(1, 4, 4)$, $(2, 2, 5)$, $(2, 6, 3)$, $(2, 5, 6)$, $(3, 2, 4)$, $(3, 4, 5)$, $(4, 1, 6)$, $(4, 6, 5)$.
\item Consecutive-Include-First-Jar: Order matters in this case. $(1, 1, 2)$, $(1, 3, 4)$, $(1, 4, 5)$, $(1, 5, 3)$, $(2, 2, 1)$, $(2, 3, 5)$, $(2, 4, 3)$, $(2, 5, 4)$, $(3, 1, 4)$, $(3, 2, 5)$, $(3, 3, 6)$, $(3, 6, 1)$, $(4, 1, 5)$, $(4, 2, 3)$, $(4, 5, 2)$, $(5, 1, 3)$, $(5, 2, 4)$, $(5, 4, 2)$, $(6, 3, 1)$, $(6, 6, 3)$.
\item Consecutive. In this case, the first and the third jars are interchangeable, so the list has only P-positions where the first jar does not have more cookies than the last jar: $(1, 1, 3)$, $(1, 3, 6)$, $(1, 4, 4)$, $(1, 5, 2)$, $(1, 6, 5)$, $(2, 2, 5)$, $(2, 3, 3)$, $(2, 6, 4)$, $(3, 2, 4)$, $(4, 1, 6)$, $(5, 5, 6)$.
\end{itemize}
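Each of these lists can be reproduced by the same kind of memoized search, with the permissible sets as a parameter; a sketch for the Consecutive game (jar indices start at 0 here, and the function name is ours):

```python
from functools import lru_cache

# Consecutive with 3 jars: singles plus {1,2}, {2,3}, and {1,2,3}
# (written here with jars indexed 0, 1, 2).
CONSECUTIVE_SETS = [(0,), (1,), (2,), (0, 1), (1, 2), (0, 1, 2)]

@lru_cache(maxsize=None)
def consecutive_is_p(pos):
    """A position is a P-position iff every move leads to an N-position."""
    for s in CONSECUTIVE_SETS:
        for t in range(1, min(pos[i] for i in s) + 1):
            nxt = list(pos)
            for i in s:
                nxt[i] -= t
            if consecutive_is_p(tuple(nxt)):
                return False
    return True
```

The search confirms, for instance, that $(1,1,3)$ and its mirror $(3,1,1)$ are P-positions of the Consecutive game, while $(1,1,1)$ is not: taking one cookie from all three jars empties them.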
After staring at the P-positions, Cookie Monster notices only one pattern: In the game of At-Most-2-Jars with 3 jars, $(n,n,n)$ is always a P-position.
\subsection{Authors' comments}
Cookie Monster's observations about the At-Most-2-Jars game are correct. In fact, this game is a specific case of another game: the game with $k$ jars called \textit{All-but-$k$}, where a player can take the same number of cookies from each jar in any subset of jars except the set of all $k$ jars.
\begin{theorem}
In the All-but-$k$ game, $(n,n,\ldots,n)$ is a P-position for any $n$. Further, if a position has $|\{a_1, \ldots, a_k\}| = 2$, it is an N-position.
\end{theorem}
\begin{proof}
By induction. Let $\overline{\mathcal{P}}$ be the set of our candidate P-positions, and let $\overline{\mathcal{N}}$ be the set of our candidate N-positions. Now we show that three properties hold:
\begin{enumerate}
\item from each P-position in $\overline{\mathcal{P}}$, we can only move to N-positions in $\overline{\mathcal{N}}$,
\item from every N-position in $\overline{\mathcal{N}}$, there exists a move to a P-position in $\overline{\mathcal{P}}$,
\item the terminal position is in $\overline{\mathcal{P}}$.
\end{enumerate}
If all the jars have the same number $n$ of cookies and we take $c$ cookies from a permissible set of jars, then after the move each jar has either $n$ or $n-c$ cookies, and both values occur, since we cannot take from all $k$ jars at once. This proves the first statement.
Suppose some of the jars have $n$ cookies and the other jars have $n-c$ cookies. By taking $c$ cookies from all of the jars with $n$ cookies---a permissible set, as it is not all of the jars---we make every jar have $n-c$ cookies, proving the second statement.
The third statement is trivial.
So if we start with any position in $\overline{\mathcal{P}}$, our opponent can only move to a position in $\overline{\mathcal{N}}$. We can then move back to a position in $\overline{\mathcal{P}}$ with a smaller total number of cookies. Eventually we will move to the terminal P-position and win.
\end{proof}
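For the 3-jar case, where All-but-$3$ is the At-Most-2-Jars game, the theorem can be checked by direct search; a sketch (names are ours):

```python
from functools import lru_cache

# At-Most-2-Jars with 3 jars = All-but-3: any single jar or pair of jars.
PAIR_SETS = [(0,), (1,), (2,), (0, 1), (0, 2), (1, 2)]

@lru_cache(maxsize=None)
def abk_is_p(pos):
    """A position is a P-position iff every move leads to an N-position."""
    for s in PAIR_SETS:
        for t in range(1, min(pos[i] for i in s) + 1):
            nxt = list(pos)
            for i in s:
                nxt[i] -= t
            if abk_is_p(tuple(nxt)):
                return False
    return True
```

The search confirms that $(n,n,n)$ is always a P-position, that positions with exactly two distinct values such as $(1,1,2)$ are N-positions, and that $(1,3,4)$ from Cookie Monster's list is a P-position.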
We can generalize it even further:
\begin{theorem}
In a game where a complement of every permissible set is permissible, $(n,n,\ldots,n)$ is a P-position for any $n$.
\end{theorem}
\section{The Big Picture}\label{bigpicture}
Cookie Monster is very pleased that he invented many new games. He is also proud that he discovered some properties of some of CM-Nim games. But mathematicians also like to look at the big picture: are there any properties that hold for all of the CM-Nim games, and not just for specific games?
Cookie Monster noticed that in all the games the last value of a P-position is uniquely determined by the previous values. He also noticed that the largest of the numbers is never more than twice as big as the other numbers combined. This is his contribution to the big picture.
\subsection{Authors' comments}
Cookie Monster's observation about the dependency of the last jar on the other jars is correct. The following theorem is true for all CM-Nim games with $k$ jars.
\begin{theorem}
In any CM-Nim game, consider a position in which the number of cookies in $k-1$ jars is known and one is unknown: $P = (a_1, \ldots, a_{k-1}, x)$. Then there is a unique value of $x$ such that $P$ is a P-position.
\end{theorem}
\begin{proof}
The proof consists of two parts.
\emph{Part 1: Two P-positions cannot differ in the last number only}.
If two P-positions differed only in the last number, we could move from the one with the greater value of $x$ to the one with the smaller value by taking cookies from the last jar only, contradicting the fact that every move from a P-position leads to an N-position.
\emph{Part 2: There exists $x$ such that $P$ is a P-position}.
Suppose, for contradiction, that all of the positions $(a_1, \ldots, a_{k-1}, x)$ are N-positions; then from each of them there exists a move to a P-position. Such a move cannot take cookies from the last jar only, since by our assumption that would lead to another N-position, so it must take from at least one of the first $k-1$ jars.
Now we bound the number of moves from these positions that could lead to P-positions. There are at most $2^k - 1$ permissible sets. Since every such move takes from at least one of the first $k-1$ jars, it removes at most $m = \max\{a_1, \ldots, a_{k-1}\}$ cookies from each jar, so the number of such moves is at most $U = m(2^k - 1)$.
Consider the positions $(a_1, \ldots, a_{k-1}, x)$ for $0 \leq x \leq U$, all assumed to be N-positions. There are $U+1$ of them but at most $U$ available moves, so by the Pigeonhole principle two of these positions must reach a P-position via the same move. Suppose that this move removes $b_i$ cookies from jar $i$, and let the two positions be $(a_1, \ldots, a_{k-1}, A)$ and $(a_1, \ldots, a_{k-1}, B)$. The P-positions reached are $(a_1 - b_1, \ldots, a_{k-1} - b_{k-1}, A - b_k)$ and $(a_1 - b_1, \ldots, a_{k-1} - b_{k-1}, B - b_k)$, which differ only in the last number. By part 1, this is a contradiction, which means that for some value of $x$, $(a_1, \ldots, a_{k-1}, x)$ is a P-position.
\end{proof}
We can easily generalize the proof to any combination of $k-1$ jars, not necessarily the first $k-1$. In other words, the $i$th number in a P-position is a function of the other numbers.
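For a small game this uniqueness can also be checked by brute force. The sketch below (illustrative only) again uses $k=3$ jars, moves that remove the same number of cookies from every jar of a chosen permissible set, and all proper nonempty subsets as the permissible sets; it verifies that for each choice of the first two jars there is exactly one value of $x$ making $(a, b, x)$ a P-position, searching a window that covers the bound $U = m(2^k - 1)$ from the proof.

```python
from functools import lru_cache
from itertools import combinations

K = 3
SETS = [s for r in range(1, K) for s in combinations(range(K), r)]

@lru_cache(maxsize=None)
def is_p(pos):
    # P-position iff no move reaches a P-position (terminal included).
    return not any(
        is_p(tuple(p - c if i in s else p for i, p in enumerate(pos)))
        for s in SETS
        for c in range(1, min(pos[i] for i in s) + 1)
    )

for a in range(4):
    for b in range(4):
        # Search window exceeds the proof's bound U = m * (2**K - 1).
        U = max(a, b) * (2 ** K - 1)
        assert sum(is_p((a, b, x)) for x in range(U + 2)) == 1
```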
Cookie Monster is also correct about the bound on the largest number in P-positions of a CM-Nim game. To prove this bound, we need the following lemma:
\begin{lemma}\label{samejar}
Suppose that we start with a P-position, and a player takes from a set $S$ of jars. Then the other player cannot take from the exact same set $S$ of jars to get to a P-position.
\end{lemma}
\begin{proof}
Suppose that the other player could. Then combining the two moves (taking from each jar in $S$ the total number of cookies removed by both moves) would be a single legal move from the original P-position directly to a P-position, contradicting the fact that every move from a P-position leads to an N-position.
\end{proof}
Now we are able to prove the bound.
\begin{theorem}
In a P-position of a CM-Nim game with $k$ jars, one of the numbers is never larger than twice the sum of the other numbers.
\end{theorem}
\begin{proof}
Suppose that a P-position is of the form $(a_1, a_2, \ldots, a_k)$. It suffices to prove the statement for the largest of the $a_i$, where $1 \leq i \leq k$, which we will call $a_j$. We can write this inequality as $2(\sum_{i=1}^k a_i - a_j) \geq a_j$.
We prove this statement by strong induction on the sum $T = \sum_{i=1}^{k} a_i - a_j$. Our base case is $T=0$, which is obvious. Now suppose that the statement is true for $T \leq n$. We want to prove the statement for $T = n+1$.
We proceed by contradiction. Suppose the position $P=(a_1, a_2,\ldots, a_k)$ is a P-position, and $2(n+1) < a_j$. All possible moves from $P$ must be to N-positions. Thus, the move that takes exactly one cookie from jar $j$ will result in an N-position.
From this N-position, we can then move to a P-position. By Lemma~\ref{samejar}, this move cannot take cookies from jar $j$ only. Suppose that the P-position we move to is $P' = (a'_1, a'_2, \ldots, a'_k)$, and let $T' = \sum_{i=1}^{k} a'_i - a'_j$. If we do not take from jar $j$ in our second move, then $2T' \le 2n < 2n + 1 < a_j - 1$, which contradicts the inductive hypothesis. Now suppose that we take $c$ cookies from jar $j$ as well as from $d$ of the other jars. Then $2cd \geq 1+c$, with equality when $c=d=1$, so $2T' = 2(n+1 - cd) < a_j - 2cd \leq a_j - 1 - c$, which again contradicts the inductive hypothesis, and we are done.
\end{proof}
\section{Acknowledgements}\label{acknowledgements}
Cookie Monster and the authors are grateful to the MIT-PRIMES program for supplying cookies and supporting this research.
\section{Representation of the DP-SGD algorithm and its implementation in Opacus}\label{sec:scheme}
\section{Micro-Batching}
\label{sec:microbatching}
The following code snippet is a na\"ive way to yield the per-sample gradients through micro-batching.
{\small
\begin{verbatim}
for batch in DataLoader(train_dataset, batch_size):
    all_per_sample_gradients = []
    for x, y in batch:
        y_hat = model(x)
        loss = criterion(y_hat, y)
        loss.backward()
        per_sample_grads = [p.grad.detach().clone()
                            for p in model.parameters()]
        all_per_sample_gradients.append(per_sample_grads)
        model.zero_grad()  # reset p.grad for the next sample
\end{verbatim}}
\section{Vectorized Computation}
\label{sec:vectorized}
In accordance with its speed objective, Opacus supports computing per-sample gradients efficiently, in a vectorized manner.
This is achieved by deriving a per-sample gradient formula for every layer and transforming it into a form that can be implemented using a single application of the \texttt{einsum} operator.
Due to space constraints, we discuss this approach only for the \texttt{nn.Linear} layer.
The implementation details for other layers and other related tutorials can be found in \href{https://opacus.ai/tutorials}{\texttt{opacus.ai/tutorials}}.
Consider one linear layer with weight matrix $W$.
We omit the bias from the forward pass equation and denote the forward pass by $Y=WX$, where $X$ is the input and $Y$ is the output of the linear layer.
$X$ is a matrix of size $d\times B$, with $B$ columns ($B$ is the batch size), where each column is an input vector of dimension $d$. Similarly, the output matrix $Y$ would be of size $r\times B$ where each column is the output vector corresponding to an element in the batch and $r$ is the output dimension.
The forward pass can be written as follows:
\[
Y_i^{(b)}=\sum_{j=1}^d W_{i,j} X_j^{(b)},
\]
where $Y_i^{(b)}$ denotes the $i$'th coordinate of the $b$'th sample in the batch.
In an ML problem, we typically need the derivative of the loss with respect to weights. Correspondingly, in Opacus we need the ``per-sample'' version of that, which is the per-sample derivative of the loss with respect to the weights $W$:
\[
\frac{\partial L}{\partial z}=\sum_{b=1}^{B}\sum_{i'=1}^{r} \frac{\partial L}{\partial Y_{i'}^{(b)}} \frac{\partial Y_{i'}^{(b)}}{\partial z}.
\]
Applying the chain rule above, we can now replace variable $z$ with $W_{i,j}$ and get
\[
\frac{\partial L}{\partial W_{i,j}}=\sum_{b=1}^{B}\sum_{i'=1}^{r} \frac{\partial L}{\partial Y_{i'}^{(b)}} \frac{\partial Y_{i'}^{(b)}}{\partial W_{i,j}} .
\]
We know from $Y=WX$ that $\frac{\partial Y_{i'}^{(b)}}{\partial W_{i,j}}$ is $X_j^{(b)}$ when $i=i'$, and is 0 otherwise. Continuing the above we have
\[
\frac{\partial L}{\partial W_{i,j}}=\sum_{b=1}^{B} \frac{\partial L}{\partial Y_{i}^{(b)}} X_j^{(b)}.
\]
This equation corresponds to a matrix multiplication in PyTorch. In regular backpropagation, the gradients of the loss with respect to the weights are computed for each layer and averaged over the batch. Since Opacus requires computing per-sample gradients, what we need is the following:
\begin{equation}
\label{per-sample-eq}
\frac{\partial L_\mathit{batch}}{\partial W_{i,j}}=\frac{\partial L}{\partial Y_{i}^{(b)}} X_j^{(b)}.
\end{equation}
More generally, in a neural network with more layers, equation (\ref{per-sample-eq}) can be written as
\begin{equation}
\label{per-sample-eq-multi}
\frac{\partial L_\mathit{batch}}{\partial W^{(l)}_{i,j}}=\frac{\partial L}{\partial Z_{i}^{(l)(b)}} Z_j^{(l-1)(b)}
\end{equation}
for every layer $l$, where $Z_{i}^{(l)(b)}$ is the activation of the hidden layer $l$ for the $b$'th element of the batch of the neuron $i$. We refer to $\frac{\partial L}{\partial Z_{i}^{(l)(b)}}$ as the {\em highway gradient}.
We now explain how we compute the per-sample gradient equation (\ref{per-sample-eq-multi}) in Opacus efficiently. In order to remove the sum reduction to get to the equations (\ref{per-sample-eq}) and (\ref{per-sample-eq-multi}), we need to replace the matrix multiplication with a batched outer product. In PyTorch, \texttt{einsum} allows us to do that in vectorized form. The function \texttt{einsum} computes multi-dimensional linear algebraic array operations by representing them in a short-hand format based on the Einstein summation convention.
For instance, for computing the per-sample gradients for a linear layer, the \texttt{einsum} function can be written as \texttt{torch.einsum("n...i,n...j->nij", B, A)}, where variables \texttt{A} and \texttt{B} refer to activations and backpropagations, respectively. In Opacus activations and backpropagations essentially contain what we need for equation (\ref{per-sample-eq-multi}): using module and tensor hooks in PyTorch, Opacus stores the activations $Z_j^{(l-1)(b)}$ in forward hooks and access the highway gradients $\frac{\partial L}{\partial Z_{i}^{(l)(b)}}$ through backward hooks. That is how the method \texttt{torch.einsum("n...i,n...j->nij", B, A)} implements equation (\ref{per-sample-eq-multi}) for computing per-sample gradients for a \texttt{nn.Linear} layer. To understand the \texttt{einsum} expression itself, it is useful to think of it as a generalized version of \texttt{torch.bmm} (batch matrix multiplication) for multi-dimensional inputs. For 2D matrices \texttt{A} and \texttt{B}, \texttt{einsum} is equivalent to \texttt{torch.bmm(B.unsqueeze(2), A.unsqueeze(1))}. For higher dimensional inputs the idea is the same, while we also sum over extra dimensions.
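As a concrete check of this recipe, the small self-contained sketch below (written with NumPy rather than PyTorch, so no autograd machinery is involved; the shapes and variable names are ours) verifies that the einsum expression reproduces the per-sample outer products for a linear layer, and that summing them over the batch recovers the aggregated gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
B, d, r = 4, 3, 2                  # batch size, input dim, output dim
A = rng.standard_normal((B, d))    # activations X, batch-first
G = rng.standard_normal((B, r))    # highway gradients dL/dY, batch-first

# Vectorized per-sample gradients: one outer product per sample.
per_sample = np.einsum("ni,nj->nij", G, A)   # shape (B, r, d)

# Reference: explicit loop over the batch.
ref = np.stack([np.outer(G[b], A[b]) for b in range(B)])
assert np.allclose(per_sample, ref)

# Summing over the batch recovers the ordinary aggregated gradient.
assert np.allclose(per_sample.sum(axis=0), G.T @ A)
```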
\section{Detection of DP Violations}
\label{sec:violation}
In this section we explain how Opacus can detect whether an operation violates some DP guarantees by applying the following criteria. First, Opacus checks if all layers in the model are supported. Second, Opacus checks for violations that make a model incompatible with differentially private training, which is usually due to one of the two issues: 1) the model tracks some extra information not covered by DP guarantees, or 2) a module is known to do batch-level computations, thus rendering the computation of per-sample gradients impossible.
Some examples:
1) Opacus does not allow models to have batch normalization layers, as they share information across samples;
2) Opacus does not allow \texttt{track\_running\_stats} in instance-normalization layers, as they track statistics that are not covered by DP guarantees.
The above checks in Opacus for DP compatibility are not exhaustive. In particular, Opacus has no way of checking whether the model maintains the independence of the individual samples or tracks extraneous statistics. We plan to investigate ways to address this in the future releases of Opacus.
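As an illustrative sketch of this kind of validation (the deny-list and function below are ours, not Opacus's actual API; Opacus ships its own validator), one can walk the model's modules and flag layer types known to mix information across samples in a batch:

```python
# Hypothetical, framework-free sketch: a model is represented as a
# list of (name, layer_type) pairs, and layer types that share state
# across samples in a batch are flagged as DP-incompatible.
INCOMPATIBLE = {"BatchNorm1d", "BatchNorm2d", "BatchNorm3d"}

def validate(modules):
    """Return the names of layers that would violate per-sample DP."""
    return [name for name, kind in modules if kind in INCOMPATIBLE]

model = [("conv1", "Conv2d"), ("bn1", "BatchNorm2d"), ("fc", "Linear")]
assert validate(model) == ["bn1"]
```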
\section{Tracking Gradients}
\label{sec:track-gradient}
In this section we explain how Opacus makes it easy to keep track of the gradient at different stages of DP training. In the following code snippet, we show how in Opacus we can access intermediate stages of gradient computation throughout training:
{\small
\begin{verbatim}
# model, optimizer and data_loader are initialized with make_private();
# p denotes a trainable parameter, e.g. p = next(model.parameters())
for data, labels in data_loader:
    output = model(data)
    loss = criterion(output, labels)
    loss.backward()
    print(p.grad)         # normal gradients computed by PyTorch autograd
    print(p.grad_sample)  # per-sample gradients computed by Opacus
                          # (no clipping, no noise)
    optimizer.step()
    print(p.grad_sample)  # same as before optimizer.step() - unchanged
    print(p.summed_grad)  # clipped and aggregated over a batch, no noise
    print(p.grad)         # final gradients (clipped, noised, aggregated)
    optimizer.zero_grad() # all gradients are None now
\end{verbatim}}
\section{Additional Experimental Results}
\subsection{End-to-end benchmarks}
\begin{figure}[h]
\centering
\begin{minipage}[t]{0.49\linewidth}
\begin{minipage}[t]{0.99\linewidth}
\includegraphics[width=\linewidth]{figures/cumulative/cumulative_mnist.pdf}
\caption*{\small (a) MNIST with CNN}
\end{minipage}
\begin{minipage}[t]{0.99\linewidth}
\includegraphics[width=\linewidth]{figures/cumulative/cumulative_cifar10.pdf}
\caption*{\small (b) CIFAR-10 with CNN}
\end{minipage}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}
\begin{minipage}[t]{0.99\linewidth}
\includegraphics[width=\linewidth]{figures/cumulative/cumulative_embed.pdf}
\caption*{\small (c) IMDb with Embedding}
\end{minipage}
\begin{minipage}[t]{0.99\linewidth}
\includegraphics[width=\linewidth]{figures/cumulative/cumulative_lstm.pdf}
\caption*{\small (d) IMDb with LSTM}
\end{minipage}
\end{minipage}
\centering
\caption{Cumulative runtime over 20 epochs with batch size 512 for each framework. Using JIT compilation results in a slower first epoch.}
\label{fig:cumul}
\end{figure}
\cref{fig:cumul} shows each framework's cumulative runtime over 20 epochs with batch size 512 on each end-to-end model training task. Both JAX (DP) and Custom TFP (XLA) incur a large runtime overhead during the first epoch of up to 101$\times$ and 625$\times$ the runtime of subsequent epochs respectively due to JIT compilation. If training for relatively few epochs, disabling JIT compilation or using a framework that is optimized to run without JIT may reduce total runtime.
\subsection{Microbenchmarks}
Opacus provides custom implementations for the multi-head attention, RNN, GRU, and LSTM layers, which can be wrapped in \texttt{GradSampleModule} to enable training with DP. \cref{fig:microbe_appendix} compares the runtime and peak memory usage of the \texttt{torch.nn} module, the corresponding Opacus module without DP, and the corresponding Opacus module wrapped in \texttt{GradSampleModule} with DP enabled for these layers.
For RNN-based layers, Opacus's custom modules are responsible for most of the runtime overhead of enabling DP, as they are up to 11$\times$ slower than the corresponding \texttt{torch.nn} module. Wrapping the custom modules in \texttt{GradSampleModule} results in a \textasciitilde2$\times$ slowdown.
The small peak allocated memory overhead (where applicable) is purely due to wrapping the custom modules in \texttt{GradSampleModule} and collecting per-sample gradients, as Opacus's custom modules tend to use slightly less memory than the corresponding \texttt{torch.nn} modules.
\begin{figure}[t]
\centering
\begin{minipage}[t]{0.49\linewidth}
\begin{minipage}[t]{0.99\linewidth}
\includegraphics[width=\linewidth]{figures/details/dpmha.pdf}
\caption*{\small (a) Multi-head Attention}
\end{minipage}
\begin{minipage}[t]{0.99\linewidth}
\includegraphics[width=\linewidth]{figures/details/dprnn.pdf}
\caption*{\small (b) RNN}
\end{minipage}
\end{minipage}
\begin{minipage}[t]{0.49\linewidth}
\begin{minipage}[t]{0.99\linewidth}
\includegraphics[width=\linewidth]{figures/details/dpgru.pdf}
\caption*{\small (c) GRU}
\end{minipage}
\begin{minipage}[t]{0.99\linewidth}
\includegraphics[width=\linewidth]{figures/details/dplstm.pdf}
\caption*{\small (d) LSTM}
\end{minipage}
\end{minipage}
\caption{Comparing the \texttt{torch.nn} module, the corresponding Opacus module without DP, and the Opacus module wrapped in \texttt{GradSampleModule} with DP enabled for the multi-head attention, RNN, GRU, and LSTM layers. Top: Mean runtime (ms). Bottom: Peak allocated memory (MB).
}
\label{fig:microbe_appendix}
\end{figure}
\cref{tab:runtime} (runtime) and \cref{tab:memory} (memory) include the raw data used to generate \cref{fig:microbe} and \cref{fig:microbe_appendix}. \cref{tab:memory_breakdown} includes a breakdown of CUDA memory usage, as well as $L/C$ and $(L/C) / b$ for each layer and batch size.
\input{tables/runtimes}
\input{tables/memory}
\input{tables/memory_breakdown}
\subsection{Experiment Setup}
\label{sec:exp-details}
The exact hardware configurations are listed below. Software versions are listed in~\cref{tab:benchmark-versions}.
\textbf{End-to-end benchmarks.}
The end-to-end benchmarks were executed within a virtual environment on a public cloud with an Intel(R) Xeon(R) CPU @ 2.20GHz, NVIDIA A100 SXM4 (40GB VRAM), and 83GB RAM. We created and ran a separate Docker container for each framework. The Docker image source is
nvidia/cuda:11.4.2-cudnn8-devel-ubuntu20.04.
\textbf{Microbenchmarks.} The microbenchmarks were executed on a cloud instance running Ubuntu 18.04.5 LTS with an Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz, NVIDIA A100 SXM4 (40GB VRAM), and 1.1TB of RAM. CUDA memory was allocated in block sizes of 512.
For details on the settings for each layer, refer to \url{https://github.com/pytorch/opacus/blob/main/benchmarks/config.json}.
\begin{table}
\begin{center}
\begin{tabular}{l| l}
\toprule
\textbf{Software} & \textbf{Version}\\
\midrule
\multirow{2}{*}{Python} & 3.8.10 (end-to-end)\\
& 3.9.7 (microbenchmarks)\\
\midrule
dm-haiku & 0.0.5\\
JAX & 0.2.25\\
jaxlib & 0.1.73 \\
\midrule
PyTorch & 1.10.0 \\
\midrule
Opacus & 1.0.0 \\
\midrule
BackPACK & 0.1\\
backpack-for-pytorch & 1.4.0\\
\midrule
TensorFlow & 2.7.0 \\
TensorFlow Privacy & 0.7.3\\
\midrule
PyVacy & 0.0.1 + commit \texttt{2e0a9f}\\
\bottomrule
\end{tabular}
\end{center}
\caption{Software versions used in the end-to-end and microbenchmarks. The latter only use Python, PyTorch, and Opacus.}
\label{tab:benchmark-versions}
\end{table}
\subsection{End-to-end benchmarks}
Our end-to-end benchmarks are based on the Fast-DPSGD benchmarks~\cite{benchmark}. We evaluate Opacus on four end-to-end model training tasks against a JAX implementation of DP-SGD, a custom TensorFlow Privacy implementation, BackPACK, and PyVacy, as well as standard PyTorch without DP.
\subsubsection{Frameworks}
JAX is a general-purpose framework for high-performance numerical computing that uses just-in-time (JIT) compilation and static graph optimization. We use the custom implementation of DP-SGD found in \cite{benchmark} and denote it as \textit{JAX (DP)}.
TensorFlow Privacy is a TensorFlow library for differentially private model training. We use the custom implementation with vectorization and XLA-driven JIT compilation, which outperforms both the custom TensorFlow Privacy implementation without XLA and the existing TensorFlow Privacy library in~\cite{benchmark}. We denote it as {\em Custom TFP (XLA)} for consistency with \cite{benchmark}.
To enable per-sample gradient extraction, BackPACK extends several PyTorch layers with support for efficient Jacobian computation. In contrast, PyVacy processes each sample individually in a for-loop, forgoing parallelization.
\subsubsection{Experimental setup}
Following \cite{benchmark}, we train a CNN with 26,010 parameters on MNIST~\cite{mnist}, a handwritten digit recognition dataset, and a CNN with 605,226 parameters on CIFAR-10~\cite{cifar10}, a dataset of small color images. We train an embedding network and an LSTM network with 160,098 and 1,081,002 parameters respectively on the IMDb dataset~\cite{imdb}, which consists of movie reviews for sentiment classification.
For each model and framework, we train the model using the framework's implementation of DP-SGD with a given privacy budget.
Since Opacus is built on top of PyTorch, we also train each model using PyTorch \textit{without} DP to better understand the runtime overhead of enabling DP with Opacus compared to training without DP using PyTorch.
We benchmark the latest version of each framework as of December 8th, 2021 in a separate Docker container,
see~\cref{sec:exp-details} for details. Compared to the setup in~\cite{benchmark}, our GPU has more VRAM (40GB rather than 12GB), which allows us to benchmark larger batch sizes (512, 1024, 2048) for a more extensive comparison. The code is available at \url{https://github.com/TheSalon/fast-dpsgd/tree/latest}.
We report each framework's median per-epoch runtime on each end-to-end training task at various batch sizes in \cref{fig:results}.
\input{tables/fastdpsgd}
\subsubsection{Results}
JAX (DP) consistently achieves the lowest runtime among all DP-SGD frameworks on both MNIST and IMDb with the embedding network, even outperforming PyTorch without DP at smaller batch sizes. On CIFAR-10, JAX (DP) outperforms all other frameworks at smaller batch sizes, while Opacus surpasses JAX (DP) at larger batch sizes. On IMDb with the LSTM network, Custom TFP (XLA) consistently achieves the lowest runtime among all DP-SGD frameworks. Training with PyVacy consistently results in the highest runtime due to its use of micro-batching.
At a batch size of 2048, Opacus achieves the lowest runtime on CIFAR-10 and the second lowest runtime after JAX (DP) on MNIST and IMDb with the embedding network, its runtime being 1.4$\times$ and 3$\times$ that of JAX (DP) respectively. On IMDb with the LSTM network, Opacus achieves the third lowest runtime after Custom TFP (XLA) and JAX (DP), with 7$\times$ the runtime of Custom TFP (XLA) and 2.4$\times$ the runtime of JAX (DP).
While the median per-epoch runtime decreases as the batch size increases for most frameworks, the effect is strongest for Opacus and PyTorch: By increasing the batch size from 16 to 2048, Opacus's per-epoch runtime decreases by a factor ranging from 17$\times$ (on CIFAR-10) to 75$\times$ (on MNIST).
We report the mean per-epoch runtime reduction from increasing the batch size from 16 to 2048
for each framework: 40$\times$ (Opacus), 37.8$\times$ (PyTorch without DP), 12.8$\times$ (JAX (DP)), 10.5$\times$ (BackPACK), 6.3$\times$ (Custom TFP (XLA)), and 1$\times$ (PyVacy).
Since Opacus and PyTorch benefit the most from a larger batch size, increasing the batch size further may close the gap (where applicable) between Opacus and JAX (DP) or Custom TFP (XLA) and even result in Opacus outperforming them.
Both JAX (DP) and Custom TFP (XLA) rely on JIT compilation, which incurs a large runtime overhead in the first epoch (up to 101$\times$ and 625$\times$ the median per-epoch runtime, respectively) in exchange for a lower runtime in subsequent epochs.
See~\cref{fig:cumul} for each framework's cumulative runtime over 20 epochs.
On MNIST, CIFAR-10, and IMDb with the embedding network, enabling DP with Opacus incurs a 2$\times$ to 2.9$\times$ runtime overhead compared to training without DP using PyTorch. On IMDb with the LSTM network, the runtime overhead ranges from 25$\times$ to 30$\times$, see~\cref{sec:microresults} for further analysis.
Since Opacus is built on PyTorch, we expect
any future improvements to PyTorch's efficiency (e.g. \texttt{torch.vmap} graduating from the prototype stage) to benefit Opacus as well.
\subsection{Microbenchmarks}
We measure the runtime and peak allocated memory for one forward and one backward pass for each layer currently supported by Opacus, both with and without DP. We report the runtime and memory overhead of enabling DP with Opacus for each layer.
\subsubsection{Experimental setup}
For each layer that Opacus currently supports, we benchmark both the layer with DP enabled and the corresponding \texttt{torch.nn} module without DP at various batch sizes.
For the convolutional, normalization, linear, and embedding layers, which Opacus supports directly, wrapping the corresponding \texttt{torch.nn} module in Opacus's \texttt{GradSampleModule} enables DP. For the multi-head attention and RNN-based layers, for which Opacus provides custom implementations, wrapping the corresponding custom module in \texttt{GradSampleModule} enables DP.
Each benchmark estimates the mean runtime and peak allocated CUDA memory for one forward and one backward pass by measuring the cumulative runtime and peak allocated CUDA memory for a total of 2{,}000 forward and backward passes on 100 different input batches.
See~\cref{sec:exp-details} for details. The microbenchmarking code is available at \url{https://github.com/pytorch/opacus/tree/main/benchmarks}.
We report the runtime and peak memory overhead of the DP-enabled layer relative to the corresponding \texttt{torch.nn} module for all currently supported layers in \cref{fig:microbe}.
\subsubsection{Memory requirements of DP}
Since DP-SGD requires per-sample gradients, training with DP-SGD uses more memory than training without DP-SGD. Let $L$ denote the number of trainable parameters in a given module
and let each parameter be of size 1. Let $C$ be the size of the features, label, and model output for a single data point. Let $M$ denote the total memory usage for one forward and one backward pass on a batch of size $b$. Then, ignoring intermediate computations and constant additive overhead such as from non-trainable parameters:
\begin{align}
M_\text{non-DP} &= bC + 2L,\\
M_\text{DP} &= bC + (1 + b)L.
\label{eq:mem1}
\end{align}
In both non-DP and DP training, the features, labels, and the module's output for $b$ data points occupy memory of size $bC$, and the module itself occupies memory of size $L$ by definition. Without DP, we expect the gradient to occupy memory of size $L$ as well, whereas with DP, we expect the gradient to occupy memory of size $bL$ due to $b$ per-sample gradients.
For $b \gg 1$, we can approximate the memory overhead as follows:\footnote{Since $L/C \sim b \leftrightarrow L \sim bC$.}
\begin{align}
\frac{M_\text{DP}}{M_\text{non-DP}} & \approx
\begin{cases}
\frac{bC + (1+b)L}{bC} \approx 1+\frac{L}{C} & \text{if } L/C \ll b\\
\frac{2+b}{3} \approx \frac{b}{3} & \text{if } L/C \approx b\\
\frac{1+b}{2} \approx \frac{b}2 & \text{if } L/C \gg b
\end{cases}
\label{eq:memory}
\end{align}
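The model above can be packaged as a small helper for quick estimates (the function is our sketch; like the derivation, it ignores intermediate computations and constant overheads):

```python
def mem_overhead(b, L, C):
    """Predicted peak-memory ratio M_DP / M_non-DP from the model above.

    b: batch size, L: number of trainable parameters (size 1 each),
    C: per-sample size of features, label, and model output.
    """
    return (b * C + (1 + b) * L) / (b * C + 2 * L)

# L/C << b: the overhead approaches 1 + L/C.
assert abs(mem_overhead(1024, L=10, C=1000) - 1.01) < 0.001
# L/C >> b: the overhead approaches (1 + b) / 2, here 8.5.
assert abs(mem_overhead(16, L=10**6, C=1) - 8.5) < 0.01
```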
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/microbe.pdf}
\caption{Runtime and peak allocated memory overhead of enabling DP for each layer currently supported by Opacus at various batch sizes. Top: Runtime overhead (factor). Bottom: Peak allocated memory overhead (factor). The runtime overhead is the mean runtime for one forward and one backward pass of the DP-enabled layer divided by the mean runtime for one forward and one backward pass of the corresponding \texttt{torch.nn} module without DP. The overhead in terms of peak allocated memory is calculated in the same manner.}
\label{fig:microbe}
\end{figure}
\subsubsection{Results}
\label{sec:microresults}
\textbf{Runtime.}
For the convolutional, normalization, and multi-head attention layers, enabling DP with Opacus's \texttt{GradSampleModule} results in a 1.2$\times$ to 2.9$\times$ runtime increase, which we attribute to the calculation of per-sample gradients. For the linear and embedding layers, the runtime overhead increases with the batch size, reaching a factor of up to 5.5$\times$ and 49$\times$ respectively.
In contrast, enabling DP for RNN-based layers consistently incurs a large (up to 18$\times$) runtime overhead, which decreases as the batch size increases.
Opacus's custom RNN-based modules are responsible for most of this overhead,
their runtime being up to 11$\times$ the runtime of the corresponding \texttt{torch.nn} module. As with directly supported \texttt{torch.nn} modules, wrapping the custom modules in \texttt{GradSampleModule} results in a \textasciitilde2$\times$ slowdown. \cref{fig:microbe_appendix} compares the runtime of the \texttt{torch.nn} module, the corresponding custom module without DP, and the latter wrapped in \texttt{GradSampleModule} with DP enabled for the multi-head attention and RNN-based layers.
In practice, Opacus's custom LSTM with DP enabled performs competitively with other DP-SGD frameworks when training with large batch sizes. Recall that on the IMDb dataset, Opacus's median per-epoch runtime is only 2.4$\times$ the median per-epoch runtime of the custom JAX DP-SGD implementation while avoiding the latter's 101$\times$ JIT compilation overhead in the first epoch (see \cref{fig:results}, \cref{fig:cumul}).
\textbf{Peak allocated CUDA memory.}
For the normalization, multi-head attention, and RNN-based layers, enabling DP with Opacus's \texttt{GradSampleModule} results in an up to 1.5$\times$ increase in peak allocated memory. In our experiments, $L/C$ is much smaller than the batch size for these layers, hence the relatively constant memory overhead.
For the linear and embedding layers, $L/C$ is substantial compared to the batch size. Hence, as predicted by \cref{eq:memory} and depicted in \cref{fig:microbe}, the peak allocated memory overhead increases with the batch size, reaching factors of up to 129$\times$ and 334$\times$, respectively.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/embedding.pdf}
\caption{Runtime and peak allocated memory overhead of enabling DP for the embedding layer.
In addition to the batch size, we also vary \texttt{num\_embeddings} and thus, the size of the module $L$.
Left: Runtime overhead (factor). Right: Peak allocated memory overhead (factor). For each value of \texttt{num\textunderscore embeddings} from left to right, $L/C = 0.63, 5, 50, 496, 4{,}951, 9{,}901, 25{,}955$ respectively. Overheads are calculated as in \cref{fig:microbe}.
\label{fig:embedding}
\end{figure}
\cref{fig:embedding} shows how $L/C$ and the batch size affect the runtime and peak allocated memory overhead of enabling DP by example of the embedding layer. Comparing the predicted memory overhead of enabling DP to the measured peak allocated memory overhead for various \texttt{num\_embeddings} and batch sizes shows that on average, \cref{eq:memory} overestimates the memory overhead of enabling DP by $41\% \pm 17.6\%$ when $L/C \ll b$ and underestimates it by $23.3\% \pm 6.3\%$ when $L/C \gg b$.
For the convolutional layer, the peak allocated memory overhead increases slightly with the batch size. As opposed to the linear and embedding layers, $L/C \ll b$ and the phenomenon weakens as the batch size increases. We attribute this to the comparatively many intermediate computations and additive overheads of the convolutional layer that are not captured in~\cref{eq:mem1}: For the other supported \texttt{torch.nn} layers, $M_\text{non-DP}$ and $M_\text{DP}$, on average, explain 56.5\% $\pm$ 8.7\% and 57.5\% $\pm$ 14.9\% of the measured peak allocated CUDA memory, whereas for the convolutional layer, $M_\text{non-DP}$ and $M_\text{DP}$ explain only 17.7\% $\pm$ 7.1\% and 6.15\% $\pm$ 0.03\% of the measured peak allocated memory.
\section{Background and Introduction}
\label{sec:intro}
\input{introduction}
\section{Design Principles and Features}
\label{sec:design}
\input{design}
\section{Benchmarks}
\label{sec:experiments}
\input{experiments}
\section{Related Work}
\label{sec:related}
\input{related}
\section{Conclusions}
\label{sec:conclusions}
\input{conclusions}
\section*{Acknowledgements}
We are extremely grateful to Peter Romov for his substantial and valuable contributions to the Opacus codebase. Opacus also owes a debt of thanks to all of its open-source contributors who have continuously helped develop and maintain the library.
\printbibliography
\clearpage
\section{Introduction}
\label{sec:introduction}
\vspace*{-1ex}
A fundamental characteristic of a good \gls{HCI} system is its ability to effectively acquire \emph{and} disseminate knowledge about the tasks and environments in which it is involved.
A particular subclass of such systems, natural-language-driven conversational agents such as \emph{Alexa} and \emph{Siri}, have seen great success in a number of well-defined language-driven tasks.
Even such widely adopted systems suffer, however, when exposed to less circumscribed, more free-form situations.
Ultimately, an implicit requirement for the wide-scale success of such systems is the effective understanding of the user's environments and goals -- an exceedingly difficult problem in the general case, as it involves a variety of sub-problems (semantics, grounding, long-range dependencies), each of which is extremely difficult in itself.
One avenue to ameliorate such issues is the incorporation of \emph{visual} context to help explicitly ground the language used -- providing a domain in which knowledge can be anchored and extracted from.
Conversely, this also provides a way in which language can be used to characterise visual information in richer terms, for example with sentences describing salient features in the image (referred to as ``captioning'')~\cite{Johnson_2016_CVPR,karpathy2015deep}.
In recent years, there has been considerable interest in visually-guided language generation in the form of \gls{VQA}~\cite{Antol_VQA} and subsequently visual dialogue~\cite{Das_VisualDialog}, both involving the task of \emph{answering} questions in the context of an image.
In the particular case of visual dialogue, the model accepts, along with the image, the previously seen questions and answers (i.e.\ the dialogue history), and must produce a relevant answer at the current time. We refer to this one-sided or answer-only form of visual dialogue as \gls{1VD}.
Inspired by these models and aiming to extend their capabilities, we establish the task of \gls{2VD} whereby an agent must be capable of acting as both the questioner and the answerer.
Our motivation for this is simple -- AI agents need to be able to both ask questions \emph{and} answer them, often interchangeably, rather than doing either one exclusively.
For example, a vision-based home-assistant (e.g.\ Amazon's \textit{Alexa}) may need to ask questions based on her visual input (``There is no toilet paper left. Would you like me to order more?'') but may also need to answer questions asked by humans (``Did you order the two-ply toilet paper?'').
The same question-answer capability is true for other applications.
For example, with aids for the visually-impaired, a user may need the answer to ``Where is the tea and kettle?'', but the system may equally need to query ``Are you looking for an Earl Grey or Rooibos teabag?'' to resolve potential ambiguities.
We take one step toward this broad research goal with \textsc{FlipDial}{}, a generative model capable of both \gls{1VD} and \gls{2VD}. The generative aspect of our model is served by using the \gls{CVAE}, a framework for learning deep conditional generative models while simultaneously amortising the cost of inference in such models over the dataset~\cite{Kingma_2014auto, Sohn_2015learning}.
Furthermore, inspired by the recent success of \glspl{CNN} in language generation and prediction tasks~\cite{hu2014convolutional,kalchbrennerconvolutional,pham2016convolutional}, we explore the use of \glspl{CNN} on sequences of sequences (i.e.\ a dialogue) to \emph{implicitly} capture all sequential dependencies through the model.
Demonstrating the surprising effectiveness of this approach, we show sets of sensible and diverse answer generations for the \gls{1VD} task in \cref{fig:gen-vd-hook}.
We here provide a brief treatment of works related to visual dialogue. We reserve a
thorough comparison to Das~\etal~\cite{Das_VisualDialog} for \cref{sec:eval}, noting here that our fully-generative convolutional extension of their
model outperforms their state-of-the-art results on the answering of sequential visual-based questions (\gls{1VD}).
In another work, Das~\etal~\cite{das2017learning} present a Reinforcement-Learning-based
model to do \gls{1VD}, where they instantiate two separate agents, one
each for questioning and answering.
Crucially, the two agents are given \emph{different} information -- with one
(QBot) given the caption, and the other (ABot) given the image.
While this sets up the interesting task of performing image retrieval from
natural-language descriptions, it is also fundamentally different from having a
single agent perform both roles.
Jain~\etal~\cite{jain2017creativity} explore a complementary task to
\gls{VQA}~\cite{Antol_VQA} where the goal is instead to generate a (diverse) set of
relevant \emph{questions} given an image.
In their case, however, there is no dependence on a history of questions and
answers.
Finally, we note that Zhao~\etal~\cite{zhao2017learning} employ a similar
model structure to ours, using a \gls{CVAE} to model dialogue, but condition their model
on discourse-based constraints for a purely linguistic (rather than visuo-linguistic) dataset.
The tasks we target, our architectural differences (\glspl{CNN}), and the dataset and metrics we employ are distinct.
\noindent
Our primary contributions in this work are therefore:
\begin{compactitem}
\item A fully-generative, convolutional framework for visual dialogue that outperforms state-of-the-art models on sequential question answering (\gls{1VD}) using the generated answers, and establishes a baseline in the challenging two-way visual dialogue task (\acrshort{2VD}).
\item Evaluation using the \emph{predicted} (not ground-truth) dialogue -- essential for real-world conversational agents.
\item Novel evaluation metrics for generative models of two-way visual dialogue to quantify answer-generation quality, question relevance, and the model's generative capacity.
\end{compactitem}
\section{Preliminaries}
\label{sec:prelim}
Here we present a brief treatment of the preliminaries for deep generative models -- a conglomerate of deep neural networks and generative models.
In particular, we discuss the \gls{VAE}~\cite{Kingma_2014auto} which given a dataset
\(\mathcal{X}\) with elements \(\bm{x} \in \mathcal{X}\), simultaneously learns
\begin{inparaenum}[i)]
\item a variational approximation~\(\q{\bm{z} \mid \bm{x}}\)\footnote{Following the literature, the terms recognition model or inference network may also be used to refer to the posterior variational approximation.} to the unknown posterior distribution~\(\p{\bm{z} \mid \bm{x}}\) for latent variable~\(\bm{z}\), and
\item a generative model~\(\p{\bm{x}, \bm{z}}\) over data and latent variables.
\end{inparaenum}
These are both highly attractive prospects as the ability to approximate the posterior distribution helps \emph{amortise} inference for any given data point~\(\bm{x}\) over the entire dataset~\(\mathcal{X}\), and learning a generative model helps effectively capture the underlying abstractions in the data.
Learning in this model is achieved through a unified objective, involving the marginal likelihood (or \emph{evidence}) of the data, namely:
\begin{align*}
\log \p{\bm{x}}
&= \KL{\q{\bm{z} \mid \bm{x}}}{\p{\bm{z} \mid \bm{x}}}\\
&\quad+ \Ex[\q{\bm{z} \mid \bm{x}}]{\log \p{\bm{x}, \bm{z}} - \log \q{\bm{z} \mid \bm{x}}}\\
&\ge \Ex[\q{\bm{z} \mid \bm{x}}]{\log \p{\bm{x} \mid \bm{z}}} - \KL{\q{\bm{z} \mid \bm{x}}\!}{\!\p{\bm{z}}}
\refstepcounter{equation}\tag{\theequation}
\label{eq:elbo}
\end{align*}
The unknown true posterior~\(\p{\bm{z} \mid \bm{x}}\) in the first \gls{KL} divergence is intractable to compute, making the objective difficult to optimise directly. Instead, a lower bound on the marginal log-likelihood \(\log \p{\bm{x}}\), referred to as the \gls{ELBO}, is maximised.
By introducing a condition variable~\(\bm{y}\), we capture a {\em conditional} posterior approximation \(\q{\bm{z} \mid \bm{x}, \bm{y}}\) and a {\em conditional} generative model \(\p{\bm{x}, \bm{z} \mid \bm{y}}\), thus deriving the \gls{CVAE}~\cite{Sohn_2015learning}. Similar to \cref{eq:elbo}, the conditional \gls{ELBO} is:
\begin{align*}
\log \p{\bm{x} \mid \bm{y}}
&\ge \Ex[\q{\bm{z} \mid \bm{x}, \bm{y}}]{\log \p{\bm{x} \mid \bm{z}, \bm{y}}}\\
&\quad- \KL{\q{\bm{z} \mid \bm{x}, \bm{y}}}{\p{\bm{z} \mid \bm{y}}}
\refstepcounter{equation}\tag{\theequation}
\label{eq:c-elbo}
\end{align*}
where the first term is referred to as the reconstruction or negative \gls{CE} term, and the second, the regularisation or \gls{KL} divergence term. Here too, similar to the \gls{VAE}, \(\q{\bm{z} \mid \bm{x}, \bm{y}}\) and \(\p{\bm{z} \mid \bm{y}}\) are typically taken to be isotropic multivariate Gaussian distributions, whose parameters~\((\boldsymbol{\mu}_q, \boldsymbol{\sigma}^2_q)\) and~\((\boldsymbol{\mu}_p, \boldsymbol{\sigma}^2_p)\) are provided by \glspl{DNN} with parameters~\(\phi\) and~\(\theta\), respectively.
The generative model likelihood~\(\p{\bm{x} \mid \bm{z}, \bm{y}}\), whose form varies depending on the data type -- Gaussian or Laplace for images and Categorical for language models -- is also parametrised similarly.
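To make the Gaussian parametrisation above concrete: the \gls{KL} regularisation term of \cref{eq:c-elbo} has a closed form for two diagonal Gaussians, and samples from the posterior approximation are drawn with the reparameterisation trick. A minimal NumPy sketch (illustrative only; in the model, the parameters are produced by the \(\phi\)- and \(\theta\)-parametrised networks):

```python
import numpy as np

def gaussian_kl(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL(q || p) between two diagonal Gaussians, summed
    over latent dimensions -- the regularisation term of the ELBO."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

def sample_posterior(mu_q, logvar_q, rng):
    """Reparameterised draw z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu_q.shape)
    return mu_q + np.exp(0.5 * logvar_q) * eps
```

The reconstruction term is then estimated by decoding such a sample and evaluating the likelihood \(\p{\bm{x} \mid \bm{z}, \bm{y}}\).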
In this work, we employ the \gls{CVAE} model for the task of eliciting dialogue \emph{given} contextual information from vision (images) and language (captions).
\glsreset{1VD}
\glsreset{2VD}
\section{Generative Models for Visual Dialogue}
\label{sec:vd-gen}
In applying deep generative models to visual dialogue, we begin by characterising a preliminary step toward it, \gls{VQA}.
In \gls{VQA}, the goal is to answer a single question in the context of a visual cue, typically an image.
The primary goal for such a model is to ensure that the elicited answer conforms to a stronger notion of relevance than simply answering the given question -- it must also relate to the visual cue provided.
This notion can be extended to \gls{1VD} which we define as the task of answering a \emph{sequence} of questions contextualised by an image (and a short caption describing its contents), similar to~\cite{Das_VisualDialog}.
Being able to exclusively answer questions, however, is not fully encompassing of true conversational agents. We therefore extend \gls{1VD} to the more general and realistic task of \gls{2VD}. Here the model must elicit not just answers given questions, but questions given answers as well -- generating \emph{both} components of a dialogue, contextualised by the given image and caption.
Generative \gls{1VD} and \gls{2VD} models introduce stochasticity in the latent representations.
As such, we begin by characterising our generative approach to \gls{2VD} using a \gls{CVAE}.
For a given image~\(\bm{i}\) and associated caption~\(\bm{c}\), we define a dialogue as a sequence of question-answer pairs~\(\dial[1:T] = \grpA{(\bm{q}_t, \bm{a}_t)}^T_{t=1}\), simply denoted~\(\dial\) when sequence indexing is unnecessary.
Additionally, we denote a dialogue context~\(\bm{h}\). When indexed by step as~\(\bm{h}_t\), it captures the dialogue subsequence~\(\dial[1:t]\).
With this formalisation, we characterise a generative model for \gls{2VD} under latent variable~\(\bm{z}\) as \( \p{\dial, \bm{z} \mid \bm{i}, \bm{c}, \bm{h}} = \p{\dial \mid \bm{z}, \bm{i}, \bm{c}, \bm{h}}\; \p{\bm{z} \mid \bm{i}, \bm{c}, \bm{h}} \), with the corresponding recognition model defined as \(\q{\bm{z} \mid \dial, \bm{i}, \bm{c}, \bm{h}}\).
Note that with relation to \cref{eq:c-elbo}, data~\(\bm{x}\) is dialogue~\(\dial\)
and the condition variable is~\(\bm{y} = \grpB{\bm{i}, \bm{c}, \bm{h}}\), giving:
\begin{align*}
&\log \p{\dial \mid \bm{i}, \bm{c}, \bm{h}}\\
&\ge \Ex[\q{\bm{z} \mid \dial, \bm{i}, \bm{c}, \bm{h}}]{\log \p{\dial \mid \bm{z}, \bm{i}, \bm{c}, \bm{h}}}\\
&\quad- \KL{\q{\bm{z} \mid \dial, \bm{i}, \bm{c}, \bm{h}}}{\p{\bm{z} \mid \bm{i}, \bm{c}, \bm{h}}},
\refstepcounter{equation}\tag{\theequation}
\label{eq:vd-elbo}
\end{align*}
with the graphical model structures shown in \cref{fig:vd-gm}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[node distance=0.7em, auto, thick]
\node[const] (img) {\(\bm{i}\)};
\node[const, right=of img] (capt) {\(\bm{c}\)};
\node[const, right=of capt] (ctx) {\(\bm{h}\)};
\node[obs, below right=of ctx] (dial) {\(\dial\)};
\node[latent, left=of dial] (z) {\(\bm{z}\)};
\edge{dial,img,capt,ctx} {z};
\end{tikzpicture}
%
\hspace*{3ex}
%
\begin{tikzpicture}[node distance=0.7em, auto, thick]
\node[const] (img) {\(\bm{i}\)};
\node[const, right=of img] (capt) {\(\bm{c}\)};
\node[const, right=of capt] (ctx) {\(\bm{h}\)};
\node[latent, below left=of img] (z) {\(\bm{z}\)};
\node[obs, right=of z] (dial) {\(\dial\)};
\edge{z,img,capt,ctx} {dial};
\end{tikzpicture}
\caption{%
\posref{Left:} Conditional recognition model and \posref{Right:} conditional generative model for \gls{2VD}.
}
\label{fig:vd-gm}
\end{figure}
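At generation time, the model of \cref{fig:vd-gm} (right) is sampled by drawing a latent from the conditional Gaussian prior and decoding it. A sketch of this pipeline, with `prior_net` and `decoder` standing in for the learned networks and the dialogue context \(\bm{h}\) omitted for brevity:

```python
import numpy as np

def generate_dialogues(prior_net, decoder, image, caption, n_samples, rng):
    """Sample dialogues from the conditional generative model:
    z ~ p(z | i, c) via the Gaussian prior, then D ~ p(D | z, i, c).
    `prior_net` returns the prior mean and log-variance; `decoder`
    maps a latent sample and the conditions to a dialogue."""
    mu_p, logvar_p = prior_net(image, caption)
    samples = []
    for _ in range(n_samples):
        z = mu_p + np.exp(0.5 * logvar_p) * rng.standard_normal(mu_p.shape)
        samples.append(decoder(z, image, caption))
    return samples
```

Drawing several latents per condition is what yields the diverse generations discussed in the experiments.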
The formulation in \cref{eq:vd-elbo} is general enough to be applied to single question-answering (\gls{VQA}) all the way to full two-way dialogue generation (\gls{2VD}).
Taking a step back from generative \gls{2VD}, we can re-frame the formulation for generative \gls{1VD} (i.e.\ sequential answer generation) by considering the generated component to be the answer to a particular question at step~\(t\), given context from the image, caption and the sequence of previous question-answers.
Simply put, this corresponds to the data~\(\bm{x}\) being the answer~\(\bm{a}_t\), conditioned on the image, its caption, the dialogue history up to \(t-1\), and the current question, or \(\bm{y} = \grpB{\bm{i}, \bm{c}, \bm{h}_{t-1}, \bm{q}_t}\).
For simplicity, we denote a compound context as~\(\bm{h}^{\texttt{+}}_t = \grpA{\bm{h}_{t-1}, \bm{q}_t}\) and reformulate \cref{eq:vd-elbo} for \gls{1VD} as:
\begin{align*}
&\log \p{\dial \mid \bm{i}, \bm{c}, \bm{h}} = \sum_{t=1}^T \log \p{\bm{a}_t \mid \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t},\\[-2ex]
&\log \p{\bm{a}_t \mid \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t}\\
&\ge \Ex[\q{\bm{z} \mid \bm{a}_t, \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t}]
{\log \p{\bm{a}_t \mid \bm{z}, \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t}}\\
&\quad- \KL{\q{\bm{z} \mid \bm{a}_t, \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t}}
{\p{\bm{z} \mid \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t}},
\refstepcounter{equation}\tag{\theequation}
\label{eq:svqa-g-elbo}
\end{align*}
with the graphical model structures shown in \cref{fig:svqa-g-gm}.
\begin{figure}[h!]
\centering
\begin{tikzpicture}[node distance=0.7em, auto, thick]
\node[const] (img) {\(\bm{i}\)};
\node[const, below=of img] (capt) {\(\bm{c}\)};
\node[latent, right=of capt] (z) {\(\bm{z}\)};
\node[const, above=of z] (ctx) {\(\bm{h}^{\texttt{+}}_t\)};
\node[obs, right=1.5em of z] (dial) {\(\bm{a}_t\)};
\edge{dial,img,capt,ctx} {z};
\plate [inner xsep=8pt]{r} {(ctx)(dial)} {\rule[1.2em]{0pt}{0pt}$T$};
\end{tikzpicture}
%
\hspace*{3ex}
%
\begin{tikzpicture}[node distance=0.7em, auto, thick]
\node[const] (img) {\(\bm{i}\)};
\node[const, below=of img] (capt) {\(\bm{c}\)};
\node[obs, left=of capt] (dial) {\(\bm{a}_t\)};
\node[latent, left=1.5em of dial] (z) {\(\bm{z}\)};
\node[const, above=of dial] (ctx) {\(\bm{h}^{\texttt{+}}_t\)};
\edge{z,img,capt,ctx} {dial};
\plate [inner xsep=8pt]{r} {(z)(ctx)} {\rule[1.2em]{0pt}{0pt}$T$};
\end{tikzpicture}
\caption{%
\posref{Left:} Conditional recognition model and \posref{Right:} conditional generative model for \gls{1VD}.
}
\label{fig:svqa-g-gm}
\end{figure}
Our baseline \gls{1VD} model~\cite{Das_VisualDialog} can also be represented in our formulation by taking the variational posterior and generative prior to be conditional Dirac-Delta distributions. That is, \(\q{\bm{z} \mid \bm{a}_t, \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t} = \p{\bm{z} \mid \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t} = \delta(\bm{z} \mid \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t)\).
This transforms the objective from \cref{eq:svqa-g-elbo} by
\begin{inparaenum}[a)]
\item replacing the expectation of the log-likelihood over the recognition model by an evaluation of the log-likelihood for a \emph{single} encoding (one that satisfies the Dirac-Delta), and
\item ignoring the \(\DKL\) regulariser, which is trivially 0.
\end{inparaenum}
This computes the marginal likelihood directly as just the model likelihood~\(\log \p{\bm{a}_t \mid \bm{z}, \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t}\), where~\(\bm{z} \sim \delta(\bm{z} \mid \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t)\).
Note that while such models can ``generate'' answers to questions by sampling from the likelihood function, they are not typically called generative, since they effectively make the encoding of the data and conditions fully deterministic.
We explore and demonstrate the benefit of a fully generative treatment of \gls{1VD} in \cref{sec:eval}.
It also follows trivially that the basic \gls{VQA} model (for single question-answering) itself can be obtained from this \gls{1VD} model by simply assuming there is no dialogue history (i.e.\ step length \(T = 1\)).
\subsection{``Colouring'' Visual Dialogue with Convolutions}
\label{sec:vd-conv}
\textsc{FlipDial}'s convolutional formulation allows us to {\em implicitly} capture the sequential nature of sentences and sequences of sentences. Here we introduce how we encode questions, answers, and whole dialogues with \glspl{CNN}.
We begin by noting the prevalence of recurrent approaches (e.g.\ LSTM~\cite{Hochreiter1997}, GRU~\cite{chung2014empirical}) in modelling both visual dialogue and general dialogue to date~\cite{Das_VisualDialog,das2017learning,visdial_rl,jain2017creativity,zhao2017learning}.
Typically, recurrence is employed at two levels -- at the lower level to sequentially generate the words of a sentence (a question or answer in the case of dialogue), and at a higher level to sequence these sentences together into a dialogue.
Recently, however, there has been considerable interest in convolutional models of language~\cite{Bai_2018conv,hu2014convolutional,kalchbrennerconvolutional,pham2016convolutional}, which have been shown to perform at least as well as recurrent models, if not better, on a number of different tasks.
They are also computationally more efficient, and typically suffer less from issues relating to exploding or vanishing gradients for which recurrent networks are known~\cite{pascanu2013difficulty}.
In modelling sentences with convolutions, the tokens (words) of the sentence are transformed into a stack of fixed-dimensional embeddings (e.g.\ using word2vec~\cite{mikolov2013distributed} or Glove~\cite{pennington2014glove}, or those learned for a specific task).
For a given sentence, say question~\(\bm{q}_t\), this results in an embedding~\(\e{\bm{q}_t} \in \mathbb{R}^{E \times L}\) for embedding size~\(E\) and sentence length~\(L\), where \(L\) can be bounded by the maximum sentence length in the corpus, with padding tokens employed where required.
This two-dimensional stack is essentially a single-channel `image' on which convolutions can be applied in the standard manner in order to encode the entire sentence.
Note this similarly applies to the answer~\(\bm{a}_t\) and caption~\(\bm{c}\), producing embedded~\(\e{\bm{a}}_t\) and~\(\e{\bm{c}}\), respectively.
We then extend this idea of viewing sentences as `images' to whole dialogues, producing a \emph{multi-channel} language embedding.
Here, the sequence of sentences itself can be seen as a stack of (a stack of) word embeddings~\(\e{\dial} \in \mathbb{R}^{E \times L \times 2T}\), where now the number of channels accounts for the number of questions and answers in the dialogue.
We refer to this process as ``colouring'' dialogue, by analogy to the most common meaning given to image channels -- colour.
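A sketch of this ``colouring'' construction, using a toy lookup-table embedding (the paper's embeddings are learned or pre-trained, and the function names here are illustrative):

```python
import numpy as np

def embed_sentence(token_ids, embedding, L):
    """Stack word embeddings of one sentence into an E x L 'image',
    zero-padding up to the maximum sentence length L."""
    E = embedding.shape[1]
    img = np.zeros((E, L))
    for j, tok in enumerate(token_ids[:L]):
        img[:, j] = embedding[tok]
    return img

def colour_dialogue(qa_pairs, embedding, L):
    """Stack T (question, answer) pairs into an E x L x 2T volume
    with alternating question/answer channels."""
    channels = [embed_sentence(s, embedding, L)
                for qa in qa_pairs for s in qa]
    return np.stack(channels, axis=-1)  # shape (E, L, 2T)
```

Standard 2D convolutions can then be applied directly to either the single-channel sentence `images' or the multi-channel dialogue volume.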
Our primary motivation for adopting a convolutional approach here is to explore its efficacy in extending from simpler language tasks~\cite{hu2014convolutional, kalchbrennerconvolutional} to full visual dialogue. We hence instantiate the following models for \gls{1VD} and \gls{2VD}:
\begin{compactdesc}
\item [{\bf Answer} {[\gls{1VD}]}:] We employ the \gls{CVAE} formulation from \cref{eq:svqa-g-elbo,fig:svqa-g-gm} to iteratively generate answers, conditioned on the image, caption and current dialogue history.
\item[{\bf Block} {[\gls{1VD}, \gls{2VD}]}:] Using the \gls{CVAE} formulation from \cref{eq:vd-elbo,fig:vd-gm} we generate entire \emph{blocks} of dialogue directly (i.e.~\(\bm{h}=\emptyset\) since dialogue context is implicit rather than explicit). We allow the convolutional model to \emph{implicitly} supply the context instead. We consider this \gls{2VD}, although this block architecture can also generate iteratively, and can be evaluated on \gls{1VD} (see~\cref{sec:blockevalmethods}).
%
\item[{\bf Block Auto-Regressive} {[\gls{1VD}, \gls{2VD}]}:] We introduce an auto-regressive component to our generative model in the same sense as recent auto-regressive generative models for images~\cite{gulrajani2016pixelvae,van2016conditional}.
%
We augment the {\bf Block} model by feeding its output through an auto-regressive ({\sc AR}) module which explicitly enforces sequentiality in the generation of the dialogue blocks.
%
This effectively factorises the likelihood in \cref{eq:vd-elbo} as \(\p{\dial \mid \bm{z}, \bm{i}, \bm{c}, \bm{h}} = \p{\dial^1 \mid \bm{z}, \bm{i}, \bm{c}, \bm{h}} \prod^{N}_{n=2} \p{\dial^n \mid \dial^{1:n-1}}\) where \(N\) is the number of {\sc AR} layers, and \(\dial^1\) is the (intermediate) output from the standard {\bf Block} model. Note, again \(\bm{h}=\emptyset\), and \(\dial^{n}\) refers to an entire dialogue at the \(n\)-th {\sc AR} layer (rather than the \(t\)-th dialogue exchange as is denoted by \(\dial[t]\)).
%
\end{compactdesc}
\vspace*{-2ex}
\section{Experiments}
\label{sec:experiments}
We present an extensive quantitative and qualitative analysis of our models' performance in both \gls{1VD}, which requires answering a sequence of image-contextualised questions, and full \gls{2VD}, where both questions \textit{and} answers must be generated given a specific visual context. Our proposed generative models are denoted as follows:
\noindent%
\begin{tabular}{>{\,\,\bfseries}r@{\,\,--\,\,}l@{}}
{\bf A} & {\bf a}nswer architecture for \gls{1VD}\\
{\bf B} & {\bf b}lock dialogue architecture for \gls{1VD} \& \gls{2VD} \\
{\(\textbf{B}_{\textbf{AR}}\)} & auto-regressive extension of {\bf B}{} for \gls{1VD} \& \gls{2VD}
\end{tabular}
\noindent%
{\bf A}{} is a generative convolutional extension of our baseline~\cite{Das_VisualDialog} and is used to validate our methods against a standard benchmark in the \gls{1VD} task. {\bf B}{} and {\(\textbf{B}_{\textbf{AR}}\)}, like {\bf A}, are generative, but are extensions capable of doing full dialogue generation, a much more difficult task. Importantly, {\bf B}{} and {\(\textbf{B}_{\textbf{AR}}\)}{} are flexible in that despite being trained to generate a block of questions \emph{and} answers (\(\bm{h}=\emptyset\)), they can be \emph{evaluated} iteratively for both \gls{1VD} and \gls{2VD} (see~\cref{sec:blockevalmethods}). We summarise the data and condition variables for all models in \cref{tab:methodsumm}. To evaluate performance on both tasks, we propose novel evaluation metrics which augment those of our baseline~\cite{Das_VisualDialog}. To the best of our knowledge, we are the first to report models that can generate both questions and answers given an image and caption, a necessary step toward a truly conversational agent. Our key results are:
\begin{compactitem}
\item We set state-of-the-art results in the \gls{1VD} task on the \textit{VisDial} dataset, improving the mean rank of the generated answers by \(5.66\) (\cref{tab:results_ansgen}, \(\mathcal{S}_{\textit{w2v}}\)) compared to Das~\etal~\cite{Das_VisualDialog}.
\item Our block models are able to generate both questions and answers, a more difficult but more realistic task (\gls{2VD}).
\item Since our models are generative, we are able to show highly diverse and plausible question and answer generations based on the provided visual context.
\end{compactitem}
\begin{table}
\vspace*{3ex}
\caption{Data (\(\bm{x}\)) and condition (\(\bm{y}\)) variables for models {\bf A}{} and {\bf B}{}/{\(\textbf{B}_{\textbf{AR}}\)}{} for \gls{1VD} and \gls{2VD}. Models {\bf B}/{\(\textbf{B}_{\textbf{AR}}\)}{} can be evaluated as a block or iteratively (see \cref{sec:blockevalmethods}), accepting ground-truth (\(\bm{q} / \bm{a}\)) or predicted (\(\hat{\bm{q}} / \hat{\bm{a}}\)) dialogue history (see \cref{tab:blockevalmethods}). }
\label{tab:methodsumm}
\centering
\scalebox{0.72}{%
\begin{tabular}{@{}ccccccc@{}}
\toprule
Task & Model & \multicolumn{2}{c}{Train} & \multicolumn{2}{c}{Evaluate} & Eval method \\
\cmidrule(lr){3-6}
& & {\bf \(\bm{x}\)} & \bf \(\bm{y}\) & {\bf \(\bm{x}\)} & \bf \(\bm{y}\) & \\
\midrule
\multirow{2}*{\gls{1VD}} & {\bf A} & \(\bm{a}_t\) & \(\bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t\) & \(\emptyset\) & \(\bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t\) & \(-\) \\
& {\bf B}{}, {\(\textbf{B}_{\textbf{AR}}\)}{} & \(\dial\) & \(\bm{i}, \bm{c}\) & \{\(\dial\)--\(\bm{q}\bm{a}\), \(\dial\)--\(\bm{q}\hat{\bm{a}}\)\} & \(\bm{i}, \bm{c}\) & iterative \\
\midrule
\multirow{2}*{\gls{2VD}} & \multirow{2}*{{\bf B}{}, {\(\textbf{B}_{\textbf{AR}}\)}{}} & \multirow{2}*{\(\dial\)} & \multirow{2}*{\(\bm{i}, \bm{c}\)} & \(\emptyset\) & \multirow{2}*{\(\bm{i}, \bm{c}\)} & block \\
& & & & \(\dial\)--\(\hat{\bm{q}}\hat{\bm{a}}\) & & iterative\\
\bottomrule
\end{tabular}}
\vspace*{-3ex}
\end{table}
\paragraph{Datasets:}%
We use the \textit{VisDial}~\cite{Das_VisualDialog} dataset (v0.9) which contains Microsoft COCO images each paired with a caption and a dialogue of 10 question-answer pairs. The train/test split is \(82,783/40,504\) images, respectively.
\paragraph{Baseline:}%
Das \etal~\cite{Das_VisualDialog}'s best model, \texttt{MN-QIH-G}, is a recurrent encoder-decoder architecture which encodes the image \( \bm{i} \), the current question \( \bm{q}_t \) and the attention-weighted \textit{ground truth} dialogue history \( \dial[1:t-1]\). The output conditional likelihood distribution is then used to (token-wise) predict an answer. Our {\bf A}{} model is a generative and convolutional extension, evaluated using existing ranking-based metrics~\cite{Das_VisualDialog} on the generated and candidate answers. We also (iteratively) evaluate our {\bf B}/{\(\textbf{B}_{\textbf{AR}}\)}{} for \gls{1VD} as detailed in \cref{sec:blockevalmethods} (see \cref{tab:results_ansgen}).
\vspace*{-0.4ex}
\subsection{Network architectures and training}
\vspace*{-0.3ex}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{images/encoder_v3_2.pdf}\\%[0.6ex]
\includegraphics[width=0.45\textwidth]{images/decoder_nonar.pdf}\\%\\[2ex]
\includegraphics[width=0.45\textwidth]{images/decoder_ar.pdf}
\vspace*{-1ex}
\caption{Convolutional \posref{(top)} conditional encoder and prior architecture, \posref{(middle)} conditional decoder, and \posref{(bottom)} auto-regressive conditional decoder architectures, applying to both one- and two-way visual dialogue (\gls{1VD} and \gls{2VD}).}
\label{fig:nw-arch}
\vspace*{-1ex}
\end{figure}
Following the \gls{CVAE} formulation (\cref{sec:vd-gen}) and its convolutional interpretation (\cref{sec:vd-conv}), all our models ({\bf A}, {\bf B}{} and {\(\textbf{B}_{\textbf{AR}}\)}) have three core components: an encoder network, a prior network and a decoder network. \Cref{fig:nw-arch}~(top) shows the encoder and prior networks, and \cref{fig:nw-arch}~(middle, bottom) show the standard and auto-regressive decoder networks.
\paragraph{Prior network}
The prior neural network, parametrised by \(\theta\), takes as input the image \(\bm{i}\), the caption \(\bm{c}\) and the dialogue context. Referring to Table~\ref{tab:methodsumm}, for model {\bf A}, recall \(\bm{y}=\{\bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t\}\) where the context \(\bm{h}^{\texttt{+}}_t\) is the dialogue history up to \(t\text{-}1\) and the current question \(\bm{q}_t\). For models {\bf B}/{\(\textbf{B}_{\textbf{AR}}\)}, \(\bm{y}=\{\bm{i}, \bm{c}\}\) (note \(\bm{h}=\emptyset\)).
To obtain the image representation, we pass \(\bm{i}\) through \textit{VGG-16}~\cite{Simonyan_2014c} and extract the penultimate (\(4096\)-d) feature vector.
We pass caption \(\bm{c}\) through a pre-trained \textit{word2vec}~\cite{mikolov2013distributed} module (we do not learn these word embeddings).
If \(\bm{h} \neq \emptyset\), we pass the one-hot encoding of each word through a {\em learnable} word embedding module and stack these embeddings as described in \cref{sec:vd-conv}. We encode these condition variables convolutionally to obtain \(\bm{y}\), and pass this through a convolutional block to obtain \(\boldsymbol{\mu}_p\) and \(\log\boldsymbol{\sigma}^2_p\), the parameters of the conditional prior \(\p{\bm{z} \mid \bm{y}}\).
\paragraph{Encoder network} The encoder network, parametrised by \(\phi\), takes \(\bm{x}\) and the encoded condition \(\bm{y}\) (obtained from the prior network) as input.
For model {\bf A}, \(\bm{x}=\bm{a}_t\) while for {\bf B}{}/{\(\textbf{B}_{\textbf{AR}}\)}, \(\bm{x}\!=\!\dial\!=\!\grpA{(\bm{q}_t, \bm{a}_t)}^T_{t=1}\).
In all models, \(\bm{x}\) is transformed through a word-embedding module into a single-channel answer `image' for {\bf A}{}, or a multi-channel image of alternating questions and answers for {\bf B}/{\(\textbf{B}_{\textbf{AR}}\)}. The embedded output is then combined with \(\bm{y}\) to obtain \(\boldsymbol{\mu}_q\) and \(\log\boldsymbol{\sigma}^2_q\), the parameters of the conditional latent posterior \(\q{\bm{z} \mid \bm{x}, \bm{y}}\).
\paragraph{Decoder network} The decoder network
takes as input a latent \(\bm{z}\) and the encoded condition \(\bm{y}\).
The sample is transpose-convolved, combined with \(\bm{y}\) and further transformed to obtain an intermediate output volume of dimension \(E \times L \times M\), where \(E\) is the word embedding dimension, \(L\) is the maximum sentence length and \(M\) is the number of dialogue entries in \(\bm{x}\) (\(M=1\) for {\bf A}{}, \(M=2T\) for {\bf B}{} variants).
Following this,
{\bf A}{} and {\bf B}{} employ a standard linear layer, projecting the \(E\) dimension to the vocabulary size \(V\) (\cref{fig:nw-arch}~(middle)), whereas {\(\textbf{B}_{\textbf{AR}}\)}{} employs an autoregressive module followed by this standard linear layer (\cref{fig:nw-arch}~(bottom)).
At train time, the \(V\)-dimensional output is \textit{softmax}ed and the \gls{CE} term of the \gls{ELBO} computed. At test time, the \(\mathop{argmax}\) of the output provides the predicted word index.
The weights of the encoder and prior's learnable word embedding module and the decoder's final linear layer are shared.
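The decoder's output handling can be sketched as follows (a minimal NumPy illustration of the train-time cross-entropy and test-time argmax described above; `decode_tokens` is a hypothetical name):

```python
import numpy as np

def softmax(logits):
    x = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def decode_tokens(logits, targets=None):
    """Train time (targets given): mean token-wise cross-entropy of the
    softmaxed V-dimensional outputs against the ground-truth indices.
    Test time (no targets): the argmax word indices."""
    if targets is None:
        return logits.argmax(axis=-1)
    probs = softmax(logits)
    rows = np.arange(len(targets))
    return -np.mean(np.log(probs[rows, targets]))
```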
\paragraph{Autoregressive module}
Inspired by \textit{PixelCNN}~\cite{oord2016pixelrnn}, which sequentially predicts image pixels, and similar to \cite{gulrajani2016pixelvae}, we apply \(N \in \{8, 10\}\) size-preserving autoregressive layers to the intermediate output of model {\bf B}{} (size \(E \times L \times 2T\)), and then project \(E\) to vocabulary size \(V\). Each layer employs masked convolutions, considering only `past' embeddings, sequentially predicting \(2T \times L\) embeddings of size \(E\), enforcing sequentiality at both the sentence and dialogue level.
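The masked convolutions can be illustrated by the kernel masks they apply: in raster order, each position may attend only to positions `before' it. A sketch of the PixelCNN-style mask construction (mask A excludes the current position; mask B includes it):

```python
import numpy as np

def causal_mask(k, include_centre=False):
    """k x k binary kernel mask that zeroes out all 'future' positions
    in raster order (PixelCNN-style mask A; mask B if include_centre)."""
    mask = np.zeros((k, k))
    centre = k // 2
    mask[:centre, :] = 1.0          # all rows strictly above the centre
    mask[centre, :centre] = 1.0     # earlier columns in the centre row
    if include_centre:
        mask[centre, centre] = 1.0
    return mask
```

Multiplying a convolution kernel element-wise by such a mask before applying it ensures each predicted embedding depends only on its predecessors.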
\paragraph{\gls{KL} annealing}
Motivated by \cite{bowman2015generating} in learning continuous latent embedding spaces for language, we employ \gls{KL} annealing in the loss objectives of \cref{eq:vd-elbo} and \cref{eq:svqa-g-elbo}.
We weight the \gls{KL} term by \(\alpha \in [0,1]\) linearly interpolated over 100 epochs, and then train for a further 50 epochs (\(\alpha=1\)).
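The annealing schedule is a simple linear ramp; a sketch (function name hypothetical):

```python
def kl_weight(epoch, anneal_epochs=100):
    """Linear KL annealing: ramp the weight alpha from 0 to 1 over the
    first `anneal_epochs` epochs, then hold it at 1."""
    return min(1.0, epoch / anneal_epochs)

# The annealed objective then weights the regulariser:
# loss = reconstruction_term + kl_weight(epoch) * kl_term
```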
\paragraph{Network and training hyper-parameters}
In embedding sentences, we pad to a maximum sequence length of \(L=64\) and use a word-embedding dimension of \(E=256\) (for \textit{word2vec}, \(E=300\)).
After pre-processing and filtering the vocabulary size is \(V=9710\) (see supplement for further details).
We use the Adam optimiser~\cite{Kingma_2014adam} with default parameters, a latent dimensionality of \(512\) and employ batch normalisation with momentum\(=0.001\) and learnable parameters.
For model {\bf A}{} we use a batch size of \(200\), and \(40\) for {\bf B}{}/{\(\textbf{B}_{\textbf{AR}}\)}.
We implement our pipeline using \textsc{PyTorch}{}~\cite{pytorch}.
\begin{table}
\vspace*{3ex}
\caption{Iterative evaluation of {\bf B}{}/{\(\textbf{B}_{\textbf{AR}}\)}{} for \gls{1VD} and \gls{2VD}. Under each condition, the input dialogue block is filled with ground-truth or predicted history (\(\bm{q}/\bm{a}\) or \(\hat{\bm{q}}/\hat{\bm{a}}\), respectively), while future entries are filled with the {\small\texttt{PAD}}{} token. }
\label{tab:blockevalmethods}
\vspace*{-1ex}
\centering
\scalebox{0.85}{%
\begin{tabular}{@{}c@{\hspace*{5ex}}cc@{\hspace*{5ex}}c@{}}
\toprule
& \multicolumn{2}{c}{\gls{1VD}} & \gls{2VD} \\
\cmidrule{2-4}
& \(\dial\)--\(\bm{q}\bm{a}\) & \(\dial\)--\(\bm{q}\hat{\bm{a}}\) & \(\dial\)--\(\hat{\bm{q}}\hat{\bm{a}}\) \\
\midrule
\({}<t\) & (\(\bm{q}, \bm{a}\)) & (\(\bm{q}, \hat{\bm{a}}\)) & (\(\hat{\bm{q}}, \hat{\bm{a}}\))\\
\({}=t\) & (\(\bm{q}\), {\small\texttt{PAD}}{}) & (\(\bm{q}\), {\small\texttt{PAD}}{}) & ({\small\texttt{PAD}}{}, {\small\texttt{PAD}}{}) \big/ (\(\hat{\bm{q}}\), {\small\texttt{PAD}}{})\\
\({}>t\) & ({\small\texttt{PAD}}{}, {\small\texttt{PAD}}{}) & ({\small\texttt{PAD}}{}, {\small\texttt{PAD}}{}) & ({\small\texttt{PAD}}{}, {\small\texttt{PAD}}{})\\
\bottomrule
\end{tabular}}
\vspace*{-3ex}
\end{table}
\subsection{Evaluation methods for block models}
\label{sec:blockevalmethods}
Although {\bf B}{}/{\(\textbf{B}_{\textbf{AR}}\)}{} generate whole blocks of dialogue directly (\(\bm{h}=\emptyset\)), they can also be evaluated iteratively, making them applicable to both \gls{1VD} and \gls{2VD} (see supplement for descriptions of the generation/reconstruction pipelines).
\begin{compactitem}
\item {\bf Block evaluation [\gls{2VD}]}. The generation pipeline generates whole blocks of dialogue directly, conditioned on the image and caption (i.e.\ \(\bm{x}=\emptyset\) and \(\bm{y}=\{\bm{i}, \bm{c}\}\) for {\bf B}/{\(\textbf{B}_{\textbf{AR}}\)}{} evaluation in \cref{tab:methodsumm}). This is \gls{2VD} since the model must generate a coherent block of both questions {\em and} answers.
\item {\bf Iterative evaluation}. The reconstruction pipeline can generate dialogue items iteratively. At time \(t\), the input dialogue block is filled with zeros ({\small\texttt{PAD}}{} token) and the ground-truth/predicted dialogue history to \(<t\) is slotted in (see below and \cref{tab:blockevalmethods}). This future-padded block is then encoded together with the condition inputs and reconstructed. The \(t\)-th dialogue item is extracted (an answer for \gls{1VD}, or a question/answer pair for \gls{2VD}), and this is repeated \(T\) (for \gls{1VD}) or \(2T\) (for \gls{2VD}) times. Variations are:
%
\begin{compactitem}
\item \(\dial\)--\(\bm{q}\bm{a}\){} {\bf [\gls{1VD}]}. At time \(t\), the input dialogue block is filled with the history of {\em ground-truth} questions and answers up to~\(t\text{-}1\), along with the current ground-truth question. All future entries are padded -- equivalent to~\cite{Das_VisualDialog} using the ground-truth dialogue history.
\item \(\dial\)--\(\bm{q}\hat{\bm{a}}\){} {\bf [\gls{1VD}]}. Similar to \(\dial\)--\(\bm{q}\bm{a}\), except that the input block is filled with the history of ground-truth questions and {\em previously predicted} answers along with the current ground-truth question. This is a more realistic \gls{1VD}.
\item \(\dial\)--\(\hat{\bm{q}}\hat{\bm{a}}\){} {\bf [\gls{2VD}]}. The most challenging and realistic condition in which the input block is filled with the history of previously predicted questions {\em and} answers.
\end{compactitem}
\end{compactitem}
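The block construction used at each step of iterative evaluation can be sketched as follows (illustrative Python, not the released implementation; `history` holds ground-truth or predicted pairs depending on the condition in \cref{tab:blockevalmethods}):

```python
PAD = "<PAD>"

def build_input_block(history, T, current_question=None):
    """Fill a 2T-slot dialogue block with the (q, a) history up to
    time t, optionally the current question (1VD conditions), and
    PAD tokens for the answer slot at t and all future entries."""
    block = []
    for q, a in history:              # entries < t
        block += [q, a]
    if current_question is not None:  # entry = t: (q, PAD)
        block += [current_question, PAD]
    while len(block) < 2 * T:         # entries > t: (PAD, PAD)
        block.append(PAD)
    return block
```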
\subsection{Evaluation and Analysis}
\label{sec:eval}
We evaluate our {\bf A}, {\bf B}, and {\(\textbf{B}_{\textbf{AR}}\)}{} models on the \gls{1VD} and \gls{2VD} tasks. Under \gls{1VD}, we predict an answer at each time step, given an image, caption and the current dialogue history (\cref{sec:vd} and \cref{tab:results_ansgen}), while under \gls{2VD}, we predict both questions \emph{and} answers (\cref{sec:twowayvd} and \Cref{tab:results_modz}). All three models are able to perform the first task, while only {\bf B}{} and {\(\textbf{B}_{\textbf{AR}}\)}{} are capable of the second.
\noindent
\begin{table}[t]
\vspace*{3ex}
\caption{\acrshort{1VD} evaluation of {\bf A}{} and {\bf B}/{\(\textbf{B}_{\textbf{AR}}\)}{} on \textit{VisDial} (v0.9) test set. Results show ranking of answer candidates based on the score functions \(\mathcal{S}_{M}\) and \(\mathcal{S}_{\textit{w2v}}\).}
\vspace*{-1ex}
\label{tab:results_ansgen}
\centering
\scalebox{0.72}{%
\begin{tabular}{@{}ccrccccc@{}}
\toprule
\makecell{Score\\[-2pt]function} & & Method & {\bf MR} & {\bf MRR} & {\bf R@1} & {\bf R@5} & {\bf R@10} \\
\midrule
\multirow{3}*{\(\mathcal{S}_{M}\)} & & RL-QAbot~\cite{das2017learning} & 21.13 & 0.4370 & - & 53.67 & 60.48 \\
& & \texttt{MN-QIH-G}~\cite{Das_VisualDialog} & 17.06 & 0.5259 & 42.29 & 62.85 & 68.88 \\
& & {\bf A}{} ({\sc lw}) & 23.87 & 0.4220 & 30.48 & 53.78 & 57.52 \\
& & {\bf A}{} ({\sc elbo}) & 20.38 & 0.4549 & 34.08 & 56.18 & 61.11 \\
\midrule
\multirow{9}*{\(\mathcal{S}_{\textit{w2v}}\)}
& & \texttt{MN-QIH-G}~\cite{Das_VisualDialog} & 31.31 & 0.2215 & 16.01 & 22.42 & 34.76 \\
& & {\bf A}~({\sc recon}) & 15.36 & 0.4952 & 41.77 & 54.67 & 66.90 \\
& & {\bf A}~({\sc gen}) & {\bf 25.65} & 0.3227 & 25.88 & 33.43 & 47.75 \\
\cmidrule(l){2-8}
& & {\bf B}{} & 28.45 & 0.2927 & 23.50 & 29.11 & 42.29 \\
& \(\dial\)--\(\bm{q}\bm{a}\) & \modzar8{} & 25.87 & {\bf 0.3553} & {\bf 29.40} & {\bf 36.79} & {\bf 51.19} \\
& & \modzar10{} & 26.30 & 0.3422 & 28.00 & 35.34 & 50.54 \\
\cmidrule(l){2-8}
& & {\bf B}{} & 30.57 & 0.2188 & 16.06 & 20.88 & 35.37 \\
& \(\dial\)--\(\bm{q}\hat{\bm{a}}\) & \modzar8{} & 29.10 & 0.2864 & 22.52 & 29.01 & 48.43 \\
& & \modzar10{} & 29.15 & 0.2869 & 22.68 & 28.97 & 46.98 \\
\bottomrule
\end{tabular}}
\vspace*{-3ex}
\end{table}
\vspace*{-5ex}
\subsubsection{One-Way Visual Dialogue (\gls{1VD}) task}
\label{sec:vd}
We evaluate the performance of {\bf A}{} and {\bf B}/{\(\textbf{B}_{\textbf{AR}}\)}{} on \gls{1VD} using the candidate ranking metric of~\cite{Das_VisualDialog} as well as an extension of this which assesses the \textit{generated} answer quality (\cref{tab:results_ansgen}). \cref{fig:gen-vd-hook} and \cref{fig:qual_gen2} show our qualitative results for \gls{1VD}.
\paragraph{Candidate ranking by model log-likelihood [\(\mathbf{\mathcal{S}_{M}}\)]\hfill}
The \textit{VisDial} dataset~\cite{Das_VisualDialog} provides a set of 100 candidate answers \(\{\bm{a}^c_t\}^{100}_{c=1}\) for each question-answer pair at time \(t\) per image.
The set includes the ground-truth answer \(\bm{a}_t\) as well as similar, popular, and random answers.
Das~\etal~\cite{Das_VisualDialog} rank these candidates using the log-likelihood value of each under their model (conditioned on the image, caption and dialogue history, including the current question), and then observe the position of the ground-truth answer (closer to 1 is better). This position is averaged over the dataset to obtain the Mean Rank (MR). In addition, the Mean Reciprocal Rank (MRR; the mean of the reciprocal ranks) and recall rates at \(k=\{1, 5, 10\}\) are computed.
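Concretely, these ranking metrics can be computed as follows (a pure-Python sketch; MRR here is taken as the mean of the per-example reciprocal ranks):

```python
def ranking_metrics(gt_ranks, ks=(1, 5, 10)):
    """Mean Rank, Mean Reciprocal Rank, and recall@k, given the
    1-indexed rank of the ground-truth answer for each example."""
    n = len(gt_ranks)
    mr = sum(gt_ranks) / n
    mrr = sum(1.0 / r for r in gt_ranks) / n
    recall = {k: sum(r <= k for r in gt_ranks) / n for k in ks}
    return mr, mrr, recall
```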
To compare against their baseline, we rank the 100 candidate answers by estimates of their \emph{marginal} likelihood from {\bf A}{}. This can be done with
\begin{inparaenum}[i)]
\item the conditional \gls{ELBO} (\cref{eq:svqa-g-elbo}), and by
\item likelihood weighting (\textsc{lw}) in the conditional generative model
\(\p{\bm{a}_t \mid \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t}
= \int \p{\bm{a}_t, \bm{z} \mid \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t}\, d\bm{z}
= \int \p{\bm{z} \mid \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t} \p{\bm{a}_t \mid \bm{z}, \bm{i}, \bm{c}, \bm{h}^{\texttt{+}}_t}\, d\bm{z}\).
\end{inparaenum}
Ranking by both these approaches is shown in the \(\mathbf{\mathcal{S}_{M}}\) section of \cref{tab:results_ansgen}, indicating that we are comparable to the state of the art in discriminative models of sequential \gls{VQA} \cite{Das_VisualDialog,das2017learning}.
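The likelihood-weighting estimate averages the decoder likelihood over prior samples; a sketch in log space (the two callables are placeholders for the model's prior and decoder networks):

```python
import math

def marginal_likelihood_lw(log_lik, sample_prior, num_samples=100):
    """Likelihood weighting: estimate log p(a | i, c, h) by averaging
    p(a | z, i, c, h) over z drawn from the conditional prior.
    `sample_prior()` draws z; `log_lik(z)` returns log p(a | z, ...)."""
    samples = [log_lik(sample_prior()) for _ in range(num_samples)]
    m = max(samples)  # log-sum-exp for numerical stability
    return m + math.log(sum(math.exp(s - m) for s in samples)
                        / num_samples)
```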
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{images/teddy_gvd.pdf}
\includegraphics[width=0.95\linewidth]{images/umbrella_gvd.pdf}
\vspace*{-1ex}
\caption{Example generated answers from {\bf A}'s conditional prior -- conditioned on an image, caption, question and dialogue history. See supplement for further examples.}
\label{fig:qual_gen2}
\vspace*{-1ex}
\end{figure}
\paragraph{Candidate ranking by \textit{word2vec} cosine distance [\(\mathbf{\mathcal{S}_{\textit{w2v}}}\)]\hfill}
The evaluation protocol of \cite{Das_VisualDialog} scores and ranks a given set of candidate answers, without being a function of the actual answer \emph{predicted} by the model, \(\hat{\bm{a}}_t\).
This results in the rank of the ground-truth answer candidate reflecting its score under the model {\em relative} to the rest of the candidates' scores, rather than capturing the quality of the answer output by the model, which is left unobserved.
To remedy this, we instead score each candidate by the cosine distance between the \textit{word2vec} embedding of the predicted answer \(\hat{\bm{a}}_t\) and that candidate's \textit{word2vec} embedding.
We take the embedding of a sentence to be the average embedding over word tokens following Arora~\etal~\cite{aurora2017simple}.
In addition to accounting for the predicted answer, this method also allows semantic similarities to be captured such that if the predicted answer is similar (in meaning and/or words generated) to the ground-truth candidate answer, then the cosine distance will be small, and hence the ground-truth candidate's rank closer to 1\@.
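This scoring scheme can be sketched as follows (pure Python with toy embeddings; in practice the 300-dimensional \textit{word2vec} vectors are used):

```python
import math

def sentence_embedding(tokens, w2v):
    """Mean word embedding over in-vocabulary tokens."""
    vecs = [w2v[t] for t in tokens if t in w2v]
    return [sum(c) / len(vecs) for c in zip(*vecs)]

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_candidates(predicted, candidates, w2v):
    """Rank candidate answers (token lists) by the cosine similarity
    of their mean embedding to that of the predicted answer,
    most similar first; the ground truth's position gives its rank."""
    p = sentence_embedding(predicted, w2v)
    return sorted(candidates,
                  key=lambda c: -cosine_similarity(
                      sentence_embedding(c, w2v), p))
```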
We report these numbers for {\bf A}{}, iteratively-evaluated {\bf B}/{\(\textbf{B}_{\textbf{AR}}\)}, and also our baseline model \texttt{MN-QIH-G}~\cite{Das_VisualDialog}, which we re-evaluate using the \textit{word2vec} cosine distance ranking (see \(\mathcal{S}_{\textit{w2v}}\) in \cref{tab:results_ansgen}).
In the case of {\bf A}~({\sc gen}), we evaluate answer \emph{generations} from {\bf A}{} whereby we condition on \(\bm{i}, \bm{c}\) and \(\bm{h}^+_t\) via the prior network, sample \(\bm{z} \sim \mathcal{N}(\latent; \mup, \varp)\) and generate an answer via the decoder network. Here we show an improvement of 5.66 points in MR over the baseline.
On the other hand, {\bf A}~({\sc recon}) evaluates answer \emph{reconstructions} in which \(\bm{z}\) is sampled from \(\mathcal{N}(\latent; \muq, \varq)\) (where ground-truth answer \(\bm{a}_t\) is provided). We include {\bf A}~({\sc recon}) merely as an ``oracle'' autoencoder, observing its good ranking performance, but do not explicitly compare against it.
We also note that the ranking scores of the block models are worse (by 3-4 MR points) than those of {\bf A}. This is expected since {\bf A}{} is explicitly trained for \gls{1VD} which is not the case for {\bf B}/{\(\textbf{B}_{\textbf{AR}}\)}.
Despite this, the performance gap between {\bf A}{}~({\sc gen}) and {\bf B}/{\(\textbf{B}_{\textbf{AR}}\)}{} (with \(\dial\)--\(\bm{q}\bm{a}\)) is not large, bolstering our iterative evaluation method for the block architectures.
Note finally that the {\bf B}/{\(\textbf{B}_{\textbf{AR}}\)}{} models perform better under \(\dial\)--\(\bm{q}\bm{a}\){} than under \(\dial\)--\(\bm{q}\hat{\bm{a}}\){} (by 2-3 MR points). This is also expected as answering is easier with access to the ground-truth dialogue history rather than when only the previously \emph{predicted} answers (and ground-truth questions) are provided.
\vspace*{-2.5ex}
\subsubsection{Two-way Visual Dialogue (\gls{2VD}) task}
\label{sec:twowayvd}
\vspace*{-1ex}
Our flexible \gls{CVAE} formulation for visual dialogue allows us to move from \gls{1VD} to the generation of both questions \emph{and} answers (\gls{2VD}).
Despite this being inherently more challenging, {\bf B}/{\(\textbf{B}_{\textbf{AR}}\)}{} are able to generate diverse sets of questions and answers contextualised by the given image and caption.
\cref{fig:qual_gen1} shows snippets of our two-way dialogue generations.
In evaluating our models for \gls{2VD}, the candidate ranking protocol of~\cite{Das_VisualDialog}, which relies on a \emph{given} question to rank the answer candidates, is no longer usable when the questions themselves are being generated.
This is the case both for {\bf B}/{\(\textbf{B}_{\textbf{AR}}\)}{} block evaluation, which has no access to the ground-truth dialogue history, and for \(\dial\)--\(\hat{\bm{q}}\hat{\bm{a}}\){} iterative evaluation, in which only the predicted history of questions and answers is provided (\cref{tab:blockevalmethods}).
We therefore look directly to the \gls{CE} and \gls{KL} terms of the \gls{ELBO}, and propose two new metrics, \(\text{sim}_{\bm{c},\bm{q}}\) and \(\text{sim}_{\circlearrowleft}\), to compare our methods on the \gls{2VD} task:
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\linewidth]{images/sheep.pdf}
\includegraphics[width=0.95\linewidth]{images/baseball1.pdf}
\includegraphics[width=0.95\linewidth]{images/loft.pdf}
\includegraphics[width=0.95\linewidth]{images/baseball2.pdf}
\end{center}
\vspace*{-3.5ex}
\caption{Examples of two-way dialogue generation from the {\bf B}{}/{\(\textbf{B}_{\textbf{AR}}\)}{} models. Different colours indicate different generations -- coherent sets with a single colour, and failures in white. See supplement for further examples.}
\label{fig:qual_gen1}
\vspace*{-2ex}
\end{figure}
\begin{compactitem}
\item {\bf Question relevance (\(\text{sim}_{\bm{c},\bm{q}}\))}.
We expect a generated question to query an aspect of the image, and we use the presence of semantically similar words in both the question and image caption as a proxy of this. We compute the cosine distance between the (average) \textit{word2vec} embedding of each predicted question \(\bm{q}_t\) and that of the caption \(\bm{c}\), and average over all \(T\) questions in the dialogue (closer to 1 indicates higher semantic similarity).
\item {\bf Latent dialogue dispersion (\(\text{sim}_{\circlearrowleft}\))}.
For a generated dialogue block~\(\dial^g\), \(sim_{\circlearrowleft}\) computes the \gls{KL} divergence \(\KL{q_\phi(\bm{z} | \dial^g, \bm{i}, \bm{c})}{q_\phi(\bm{z} | \dial, \bm{i}, \bm{c})}\), measuring how close the generated dialogue is to the true dialogue \(\dial\) in the latent space, given the same image~\(\bm{i}\) and caption~\(\bm{c}\).
\end{compactitem}
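Since the recognition network outputs diagonal Gaussians, \(\text{sim}_{\circlearrowleft}\) reduces to a closed-form \gls{KL} between two such posteriors; a sketch under that assumption:

```python
import math

def kl_diag_gauss(mu1, var1, mu2, var2):
    """KL( N(mu1, diag var1) || N(mu2, diag var2) ), summed over
    latent dimensions -- the divergence between the posterior of a
    generated dialogue and that of the true dialogue, given the
    same image and caption."""
    return sum(0.5 * (math.log(v2 / v1)
                      + (v1 + (m1 - m2) ** 2) / v2
                      - 1.0)
               for m1, v1, m2, v2 in zip(mu1, var1, mu2, var2))
```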
\noindent%
From \cref{tab:results_modz}, we observe a decrease in the loss terms as the auto-regressive capacity of the model increases (none \to{} 8 \to{} 10), suggesting that explicitly enforcing sequentiality in the dialogue generations is useful.
For \(\text{sim}_{\circlearrowleft}\) within a particular model, the dispersion values are typically larger for the harder task (without dialogue context).
We also observe that dispersion increases with the number of \textsc{AR} layers, suggesting {\sc AR} improves the diversity of the model outputs, and avoids simply recovering data observed at train time.
\begin{table}[t]
\vspace*{3ex}
\caption{\acrshort{2VD} evaluation on \textit{VisDial} (v0.9) test set for {\bf B}{}/{\(\textbf{B}_{\textbf{AR}}\)}{} models. For \(\dial\), `\(\emptyset\)' indicates block evaluation, and `\(\dial\)--\(\hat{\bm{q}}\hat{\bm{a}}\)' indicates iterative evaluation (see \Cref{sec:blockevalmethods}).}
\label{tab:results_modz}
\centering
\vspace*{-1ex}
\scalebox{0.9}{%
\begin{tabular}{@{}rccccr@{}}
\toprule
Method & \(\dial\) & {\bf CE} & {\bf KLD} & $\text{sim}_{\bm{c},\bm{q}}$ & $\text{sim}_{\circlearrowleft}$ \\
\midrule
\multirow{2}*{{\bf B}} & \(\emptyset\) & 31.18 & 4.34 & 0.4931 & 14.20 \\
& \(\dial\)--\(\hat{\bm{q}}\hat{\bm{a}}\) & 25.40 & 4.01 & 0.4091 & 1.86 \\
\midrule
\multirow{2}*{\modzar8} & \(\emptyset\) & 28.81 & 2.54 & 0.4878 & 31.50 \\
& \(\dial\)--\(\hat{\bm{q}}\hat{\bm{a}}\) & 26.60 & 2.29 & 0.3884 & 2.39 \\
\midrule
\multirow{2}*{\modzar10} & \(\emptyset\) & 28.49 & 1.89 & 0.4927 & 44.34 \\
& \(\dial\)--\(\hat{\bm{q}}\hat{\bm{a}}\) & 24.93 & 1.80 & 0.4101 & 2.35 \\
\bottomrule
\end{tabular}}
\vspace*{-3ex}
\end{table}
While the proposed metrics provide a novel means to evaluate dialogue in a generative framework, like all language-based metrics, they are not complete.
The question-relevance metric, \(\text{sim}_{\bm{c},\bm{q}}\), can stagnate, and neither metric precludes redundant or nonsensical questions.
We intend for these metrics to \emph{augment} the bank of metrics available to evaluate dialogue and language models.
Further evaluation, including
\begin{inparaenum}[i)]
\item using auxiliary tasks, as in the image-retrieval task of~\cite{das2017learning}, to drive and evaluate the dialogues, and
\item turning to human evaluators to rate the generated dialogues,
\end{inparaenum}
can be instructive in painting a more complete picture of our models.
\vspace*{-1.5ex}
\section{Conclusion}
\vspace*{-1.5ex}
In this work we propose \textsc{FlipDial}{}, a generative convolutional model for visual dialogue which is able to generate answers (\gls{1VD}) as well as generate both questions \emph{and} answers (\gls{2VD}) based on a visual context. In the \gls{1VD} task, we set new state-of-the-art results with the answers generated by our model, and in the \gls{2VD} task, we are the first to establish a baseline, proposing two novel metrics to assess the quality of the generated dialogues. In addition, we propose and evaluate our models under a much more realistic setting for both visual dialogue tasks in which the \emph{predicted} rather than ground-truth dialogue history is provided at test time. This challenging setting is more akin to real-world situations in which dialogue agents must be able to evolve with their predicted exchanges. We emphasise that research focus must be directed here in the future. Finally, under all cases, the sets of questions and answers generated by our models are qualitatively good: diverse and plausible given the visual context.
Looking forward, we are interested in exploring additional methods for enforcing diversity in the generated questions and answers, as well as extending this work to explore \emph{recursive} models of reasoning for visual dialogue.
\paragraph{Acknowledgements}
This work was supported by the EPSRC, ERC grant ERC-2012-AdG 321162-HELIOS, EPSRC grant Seebibyte EP/M013774/1, EPSRC/MURI grant EP/N019474/1 and the Skye Foundation.
{\small
\bibliographystyle{ieee}
}